WorldWideScience

Sample records for location induces visual

  1. Location selection in the visual domain

    NARCIS (Netherlands)

    van der Lubbe, Robert Henricus Johannes; Woestenburg, Jaap C.

    2000-01-01

    According to A.H.C. Van der Heijden (1992), attentional selection of visual stimuli can be considered as location selection. Depending on the type of task, location selection can be considered to be automatic (e.g., in case of abrupt onsets), directly controlled (e.g., in case of symbolic precues),

  2. Exposure to arousal-inducing sounds facilitates visual search.

    Science.gov (United States)

    Asutay, Erkin; Västfjäll, Daniel

    2017-09-04

    Exposure to affective stimuli could enhance perception and facilitate attention by increasing alertness and vigilance and by decreasing attentional thresholds. However, evidence on the impact of affective sounds on perception and attention is scant. Here, a novel aspect of affective facilitation of attention is studied: whether arousal induced by task-irrelevant auditory stimuli could modulate attention in a visual search. In two experiments, participants performed a visual search task with and without auditory cues that preceded the search. Participants were faster in locating high-salient targets compared to low-salient targets. Critically, search times and search slopes decreased with increasing auditory-induced arousal while searching for low-salient targets. Taken together, these findings suggest that arousal induced by sounds can facilitate attention in a subsequent visual search. This novel finding provides support for the alerting function of the auditory system by showing an auditory-phasic alerting effect in visual attention. The results also indicate that stimulus arousal modulates the alerting effect. Attention and perception are our everyday tools for navigating the surrounding world, and the current finding that affective sounds can influence visual attention provides evidence that we make use of affective information during perceptual processing.

  3. Higher Level Visual Cortex Represents Retinotopic, Not Spatiotopic, Object Location

    Science.gov (United States)

    Kanwisher, Nancy

    2012-01-01

    The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex—important for stable object recognition and action—contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a “searchlight” analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates. PMID:22190434
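
    The cross-decoding logic behind this result can be illustrated with a minimal sketch: a classifier is trained on response patterns recorded at one fixation position and tested at another, where retinotopic and spatiotopic labelings of the same trials dissociate. The synthetic data, the simulated retinotopic code, and the use of scikit-learn below are illustrative assumptions, not the authors' analysis pipeline.

    # Minimal sketch of cross-fixation MVPA decoding (illustrative assumptions; not the authors' code).
    # Voxel patterns are simulated to carry a purely retinotopic position code.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_voxels, n_trials = 50, 200

    def simulate(retinal_side):
        # Pattern depends only on where the stimulus falls on the retina.
        signal = np.where(np.arange(n_voxels) < n_voxels // 2, 1.0, -1.0)
        signal = signal if retinal_side == 'left' else -signal
        return signal + rng.normal(0, 2.0, size=(n_trials, n_voxels))

    # Fixation A: screen-left stimuli fall on the retinal left, screen-right on the retinal right.
    X_train = np.vstack([simulate('left'), simulate('right')])
    y_train = np.array(['left'] * n_trials + ['right'] * n_trials)   # retinal = screen labels here

    # Fixation B: gaze has shifted, so the same screen positions now map to opposite retinal sides.
    X_test = np.vstack([simulate('right'), simulate('left')])        # screen-left then screen-right trials
    y_retinal = np.array(['right'] * n_trials + ['left'] * n_trials)
    y_screen = np.array(['left'] * n_trials + ['right'] * n_trials)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print('retinotopic generalization:', clf.score(X_test, y_retinal))   # high for a retinotopic code
    print('spatiotopic generalization:', clf.score(X_test, y_screen))    # low for a retinotopic code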

  4. Visual field tunneling in aviators induced by memory demands.

    Science.gov (United States)

    Williams, L J

    1995-04-01

    Aviators are required to rapidly and accurately process enormous amounts of visual information located foveally and peripherally. The present study, expanding upon an earlier study (Williams, 1988), required young aviators to process, within the framework of a single eye fixation, a briefly displayed, foveally presented memory load while simultaneously trying to identify common peripheral targets presented on the same display at locations up to 4.5 degrees of visual angle from the fixation point. This task, as well as a character classification task (Williams, 1985, 1988), has been shown to be very difficult for nonaviators: It results in a tendency toward tunnel vision. Limited preliminary measurements of peripheral accuracy suggested that aviators might be less susceptible than nonaviators to this visual tunneling. The present study demonstrated moderate susceptibility to cognitively induced tunneling in aviators when the foveal task was sufficiently difficult and reaction time was the principal dependent measure.

  5. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.

    Science.gov (United States)

    Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi

    2017-07-01

    Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion in static visual stimuli (sound-induced visual motion: SIVM): A visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    We have recently demonstrated that alternating left-right sound sources induce motion perception in static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception; Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in a situation where auditory positional information would have little influence on the perceived position of visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.

  7. Visualizing conserved gene location across microbe genomes

    Science.gov (United States)

    Shaw, Chris D.

    2009-01-01

    This paper introduces an analysis-based zoomable visualization technique for displaying the location of genes across many related species of microbes. The purpose of this visualization is to enable a biologist to examine the layout of genes in the organism of interest with respect to the gene organization of related organisms. During the genomic annotation process, the ability to observe gene organization in common with previously annotated genomes can help a biologist better confirm the structure and function of newly analyzed microbe DNA sequences. We have developed a visualization and analysis tool that enables the biologist to observe and examine gene organization among genomes, in the context of the primary sequence of interest. This paper describes the visualization and analysis steps, and presents a case study using a number of Rickettsia genomes.

  8. Dynamic visual noise affects visual short-term memory for surface color, but not spatial location.

    Science.gov (United States)

    Dent, Kevin

    2010-01-01

    In two experiments participants retained a single color or a set of four spatial locations in memory. During a 5 s retention interval participants viewed either flickering dynamic visual noise or a static matrix pattern. In Experiment 1 memory was assessed using a recognition procedure, in which participants indicated if a particular test stimulus matched the memorized stimulus or not. In Experiment 2 participants attempted to either reproduce the locations or they picked the color from a whole range of possibilities. Both experiments revealed effects of dynamic visual noise (DVN) on memory for colors but not for locations. The implications of the results for theories of working memory and the methodological prospects for DVN as an experimental tool are discussed.

  9. Could visual neglect induce amblyopia?

    Science.gov (United States)

    Bier, J C; Vokaer, M; Fery, P; Garbusinski, J; Van Campenhoudt, G; Blecic, S A; Bartholomé, E J

    2004-12-01

    Oculomotor nerve disease is a common cause of diplopia. When strabismus is present, the absence of diplopia should prompt investigation of either uncovering of the visual fields or of monocular suppression, amblyopia or blindness. We describe the case of a 41-year-old woman presenting with right oculomotor paresis and left object-centred visual neglect due to a right fronto-parietal haemorrhage expanding to the right peri-mesencephalic cisterna caused by the rupture of a right middle cerebral artery aneurysm. She never complained of diplopia despite binocular vision and progressive recovery of strabismus, excluding uncovering of visual fields. Since all other causes were excluded in this case, we hypothesise that the absence of diplopia was due to the object-centred visual neglect. Partial internal right oculomotor paresis causes an ocular deviation in abduction; the perceived image is deviated contralaterally, to the left. Thus, in our case, the neglect of the left image is equivalent to a right monocular functional blindness. However, the bell cancellation test clearly worsened when assessed in left monocular vision, confirming that eye patching can worsen attentional visual neglect. In conclusion, our case argues for the possibility of a functional monocular blindness induced by visual neglect. We think that in the presence of strabismus, the absence of diplopia should prompt a search for hemispatial visual neglect when supratentorial lesions are suspected.

  10. Memory for location and visual cues in white-eared hummingbirds Hylocharis leucotis

    Directory of Open Access Journals (Sweden)

    Guillermo PÉREZ, Carlos LARA, José VICCON-PALE, Martha SIGNORET-POILLON

    2011-08-01

    In nature, hummingbirds face floral resources whose availability, quality and quantity can vary spatially and temporally. Thus, they must constantly make foraging decisions about which patches, plants and flowers to visit, partly as a function of the nectar reward. The uncertainty of these decisions would possibly be reduced if an individual could remember locations or use visual cues to avoid revisiting recently depleted flowers. In the present study, we carried out field experiments with white-eared hummingbirds Hylocharis leucotis, to evaluate their use of locations or visual cues when foraging on natural flowers Penstemon roseus. We evaluated the use of spatial memory by observing birds while they were foraging between two plants and within a single plant. Our results showed that hummingbirds prefer to use location when foraging between two plants, but they also use visual cues to efficiently locate unvisited rewarded flowers when they feed on a single plant. However, in the absence of visual cues, in both experiments birds mainly used the location of previously visited flowers to make subsequent visits. Our data suggest that hummingbirds are capable of learning and employing this flexibility depending on the environmental conditions faced and the information acquired in previous visits [Current Zoology 57 (4): 468–476, 2011].

  11. Preferred retinal location induced by macular occlusion in a target recognition task

    Science.gov (United States)

    Ness, James W.; Zwick, Harry; Molchany, Jerome W.

    1996-04-01

    Laser-induced central retinal damage not only may diminish visual function, but also may diminish afferent input that provides the ocular motor system with the feedback necessary to move the target to the fovea. Local visual field stabilizations have been used to demonstrate that central artificial occlusions in the normal retina suppress visual function. The purpose of this paper is to evaluate the effect of local field stabilizations on the ocular motor system in a contrast sensitivity task. Five subjects who tested normal in a standard clinical eye exam viewed Landolt rings at varying visual angles under three artificial scotoma conditions and a no-scotoma condition. The scotoma conditions were a 2-degree and a 5-degree stabilized central scotoma and a 2-degree stabilized scotoma positioned 1 degree nasal to the fovea. A Dual Purkinje Eye-Tracker (SRI, version 5) was used to provide eye-position data and to stabilize the artificial scotoma on the retina. The data showed a consistent preference for placing the target in the superior retina under the 2-degree and 5-degree conditions, with a strong positive correlation between visual angle and deflection of the eye position into the superior retina. These data suggest that loss of visual function from laser-induced foveal damage may be due in part to a disruption in the ocular motor system. Thus, even if some function remains in the damage site ophthalmoscopically, the ocular motor system may organize around a nonfoveal retinal location, behaviorally suppressing foveal input.

  12. Irrelevant Auditory and Visual Events Induce a Visual Attentional Blink

    NARCIS (Netherlands)

    Van der Burg, Erik; Nieuwenstein, Mark R.; Theeuwes, Jan; Olivers, Christian N. L.

    2013-01-01

    In the present study we investigated whether a task-irrelevant distractor can induce a visual attentional blink pattern. Participants were asked to detect only a visual target letter (A, B, or C) and to ignore the preceding auditory, visual, or audiovisual distractor. An attentional blink was

  13. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf

    Science.gov (United States)

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao

    2016-01-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461

  14. What does visual suffix interference tell us about spatial location in working memory?

    Science.gov (United States)

    Allen, Richard J; Castellà, Judit; Ueno, Taiji; Hitch, Graham J; Baddeley, Alan D

    2015-01-01

    A visual object can be conceived of as comprising a number of features bound together by their joint spatial location. We investigate the question of whether the spatial location is automatically bound to the features or whether the two are separable, using a previously developed paradigm whereby memory is disrupted by a visual suffix. Participants were shown a sample array of four colored shapes, followed by a postcue indicating the target for recall. On randomly intermixed trials, a to-be-ignored suffix array consisting of two different colored shapes was presented between the sample and the postcue. In a random half of suffix trials, one of the suffix items overlaid the location of the target. If location was automatically encoded, one might expect the colocation of target and suffix to differentially impair performance. We carried out three experiments, cuing for recall by spatial location (Experiment 1), color or shape (Experiment 2), or both randomly intermixed (Experiment 3). All three studies showed clear suffix effects, but the colocation of target and suffix was differentially disruptive only when a spatial cue was used. The results suggest that purely visual shape-color binding can be retained and accessed without requiring information about spatial location, even when task demands encourage the encoding of location, consistent with the idea of an abstract and flexible visual working memory system.

  15. Fragile visual short-term memory is an object-based and location-specific store.

    Science.gov (United States)

    Pinto, Yaïr; Sligte, Ilja G; Shapiro, Kimron L; Lamme, Victor A F

    2013-08-01

    Fragile visual short-term memory (FM) is a recently discovered form of visual short-term memory. Evidence suggests that it provides rich and high-capacity storage, like iconic memory, yet it exists, without interference, almost as long as visual working memory. In the present study, we sought to unveil the functional underpinnings of this memory storage. We found that FM is only completely erased when the new visual scene appears at the same location and consists of the same objects as the to-be-recalled information. This result has two important implications: First, it shows that FM is an object- and location-specific store, and second, it suggests that FM might be used in everyday life when the presentation of visual information is appropriately designed.

  16. Tracking Location and Features of Objects within Visual Working Memory

    Directory of Open Access Journals (Sweden)

    Michael Patterson

    2012-10-01

    Four studies examined how color or shape features can be accessed to retrieve the memory of an object's location. In each trial, 6 colored dots (Experiments 1 and 2) or 6 black shapes (Experiments 3 and 4) were displayed in randomly selected locations for 1.5 s. An auditory cue for either the shape or the color to be remembered was presented either simultaneously, immediately, or 2 s later. Non-informative cues appeared in some trials to serve as a control condition. After a 4 s delay, 5 of the 6 objects were re-presented, and participants indicated the location of the missing object either by moving the mouse (Experiments 1 and 3) or by typing coordinates using a grid (Experiments 2 and 4). Compared to the control condition, cues presented simultaneously or immediately after stimuli improved location accuracy in all experiments. However, cues presented after 2 s only improved accuracy in Experiment 1. These results suggest that location information may not be addressable within visual working memory using shape features. In Experiment 1, but not Experiments 2–4, cues significantly improved accuracy when they indicated the missing object could be any of the three identical objects. In Experiments 2–4, location accuracy was highly impaired when the missing object came from a group of identical rather than uniquely identifiable objects. This indicates that when items with similar features are presented, location accuracy may be reduced. In summary, both feature type and response mode can influence the accuracy and accessibility of visual working memory for object location.

  17. The location but not the attributes of visual cues are automatically encoded into working memory.

    Science.gov (United States)

    Chen, Hui; Wyble, Brad

    2015-02-01

    Although it has been well known that visual cues affect the perception of subsequent visual stimuli, relatively little is known about how the cues themselves are processed. The present study attempted to characterize the processing of a visual cue by investigating what information about the cue is stored in terms of both location ("where" is the cue) and attributes ("what" are the attributes of the cue). In 11 experiments subjects performed several trials of reporting a target letter and then answered an unexpected question about the cue (e.g., the location, color, or identity of the cue). This surprise question revealed that participants could report the location of the cue even when the cue never indicated the target location and they were explicitly told to ignore it. Furthermore, the memory trace of this location information endured during encoding of the subsequent target. In contrast to location, attributes of the cue (e.g., color) were poorly reported, even for attributes that were used by subjects to perform the task. These results shed new light on the mechanisms underlying cueing effects and suggest also that the visual system may create empty object files in response to visual cues. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Octopus vulgaris uses visual information to determine the location of its arm.

    Science.gov (United States)

    Gutnick, Tamar; Byrne, Ruth A; Hochner, Binyamin; Kuba, Michael

    2011-03-22

    Octopuses are intelligent, soft-bodied animals with keen senses that perform reliably in a variety of visual and tactile learning tasks. However, researchers have found them disappointing in that they consistently fail in operant tasks that require them to combine central nervous system reward information with visual and peripheral knowledge of the location of their arms. Wells claimed that in order to filter and integrate an abundance of multisensory inputs that might inform the animal of the position of a single arm, octopuses would need an exceptional computing mechanism, and "There is no evidence that such a system exists in Octopus, or in any other soft bodied animal." Recent electrophysiological experiments, which found no clear somatotopic organization in the higher motor centers, support this claim. We developed a three-choice maze that required an octopus to use a single arm to reach a visually marked goal compartment. Using this operant task, we show for the first time that Octopus vulgaris is capable of guiding a single arm in a complex movement to a location. Thus, we claim that octopuses can combine peripheral arm location information with visual input to control goal-directed complex movements. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Transfer of an induced preferred retinal locus of fixation to everyday life visual tasks.

    Science.gov (United States)

    Barraza-Bernal, Maria J; Rifai, Katharina; Wahl, Siegfried

    2017-12-01

    Subjects develop a preferred retinal locus of fixation (PRL) under simulation of central scotoma. If systematic relocations are applied to the stimulus position, PRLs manifest at a location in favor of the stimulus relocation. The present study investigates whether the induced PRL is transferred to important visual tasks in daily life, namely pursuit eye movements, signage reading, and text reading. Fifteen subjects with normal sight participated in the study. To develop a PRL, all subjects underwent a scotoma simulation in a prior study, where five subjects were trained to develop the PRL in the left hemifield, five in the right hemifield, and the remaining five subjects could freely choose the PRL location. The position of this PRL was used as baseline. Under central scotoma simulation, subjects performed a pursuit task, a signage-reading task, and a text-reading task. In addition, retention of the behavior was also studied. Results showed that the PRL position was transferred to the pursuit task and that the vertical location of the PRL was maintained on the text-reading task. However, when reading signage, a function-driven change in PRL location was observed. In addition, retention of the PRL position was observed over weeks and months. These results indicate that PRL positions can be induced and may further be transferred to everyday life visual tasks, without hindering function-driven changes in PRL position.

  20. Joint image restoration and location in visual navigation system

    Science.gov (United States)

    Wu, Yuefeng; Sang, Nong; Lin, Wei; Shao, Yuanjie

    2018-02-01

    Image location methods are key technologies of visual navigation, but most previous image location methods simply assume ideal inputs without taking into account real-world degradations (e.g., low resolution and blur). In view of such degradations, conventional image location methods first perform image restoration and then match the restored image against the reference image. However, when restoration and localization are handled separately, a defective restoration output can degrade the localization result. In this paper, we present a joint image restoration and location (JRL) method, which utilizes the sparse representation prior to handle the challenging problem of low-quality image location. The sparse representation prior states that the degraded input image, if correctly restored, will have a good sparse representation in terms of the dictionary constructed from the reference image. By iteratively solving the image restoration in pursuit of the sparsest representation, our method can achieve simultaneous restoration and location. Based on such a sparse representation prior, we demonstrate that the image restoration task and the location task can benefit greatly from each other. Extensive experiments on real scene images with Gaussian blur are carried out and our joint model outperforms the conventional methods of treating the two tasks independently.
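
    The core idea of coding a degraded query against a dictionary built from the reference image can be sketched as follows. This toy example only shows the sparse-coding/matching half of the approach on made-up data; the paper's joint model additionally alternates this step with image restoration, and none of the parameter choices below come from the paper.

    # Toy sketch of sparse-representation-based localization (made-up data; not the paper's JRL model).
    # A degraded (blurred, noisy) query patch is sparsely coded over a dictionary of reference patches;
    # the atom attracting the most coefficient energy gives the estimated location.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    patch_len, n_locations = 64, 40

    # Dictionary D: one normalized reference-image patch per candidate location (one column each).
    D = rng.normal(size=(patch_len, n_locations))
    D /= np.linalg.norm(D, axis=0)

    # Query: the reference patch at the true location, blurred by a moving average and noised.
    true_loc = 17
    blur = lambda x: np.convolve(x, np.ones(3) / 3, mode='same')
    query = blur(D[:, true_loc]) + rng.normal(0, 0.05, patch_len)

    # Sparse coding: even the degraded query is still best explained by a few dictionary atoms.
    coder = Lasso(alpha=0.01, max_iter=10000).fit(D, query)
    estimated_loc = int(np.argmax(np.abs(coder.coef_)))
    print('true location:', true_loc, '  estimated location:', estimated_loc)

    In the full method described above, the restored image and the sparse code are updated in alternation, so that better restoration sharpens localization and better localization constrains restoration.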

  1. Task-irrelevant distractors in the delay period interfere selectively with visual short-term memory for spatial locations.

    Science.gov (United States)

    Marini, Francesco; Scott, Jerry; Aron, Adam R; Ester, Edward F

    2017-07-01

    Visual short-term memory (VSTM) enables the representation of information in a readily accessible state. VSTM is typically conceptualized as a form of "active" storage that is resistant to interference or disruption, yet several recent studies have shown that under some circumstances task-irrelevant distractors may indeed disrupt performance. Here, we investigated how task-irrelevant visual distractors affected VSTM by asking whether distractors induce a general loss of remembered information or selectively interfere with memory representations. In a VSTM task, participants recalled the spatial location of a target visual stimulus after a delay in which distractors were presented on 75% of trials. Notably, the distractor's eccentricity always matched the eccentricity of the target, while in the critical conditions the distractor's angular position was shifted either clockwise or counterclockwise relative to the target. We then computed estimates of recall error for both eccentricity and polar angle. A general interference model would predict an effect of distractors on both polar angle and eccentricity errors, while a selective interference model would predict effects of distractors on angle but not on eccentricity errors. Results showed that distractors increased the magnitude and variability of recall errors for stimulus angle. However, distractors had no effect on estimates of stimulus eccentricity. Our results suggest that distractors selectively interfere with VSTM for spatial locations.
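
    Separating recall error into an angular and an eccentricity component, as described above, amounts to expressing the target and the response in polar coordinates about fixation. The helper below is a minimal sketch of that decomposition; the coordinate conventions and the function name are assumptions made for illustration, not the authors' code.

    # Minimal sketch: split a spatial recall error into polar-angle and eccentricity components
    # (coordinate conventions assumed; not the authors' analysis code).
    import math

    def recall_errors(target_xy, response_xy, fixation_xy=(0.0, 0.0)):
        # Vectors from fixation to the target and to the reported location.
        tx, ty = target_xy[0] - fixation_xy[0], target_xy[1] - fixation_xy[1]
        rx, ry = response_xy[0] - fixation_xy[0], response_xy[1] - fixation_xy[1]
        angle_error = math.degrees(math.atan2(ry, rx) - math.atan2(ty, tx))
        angle_error = (angle_error + 180.0) % 360.0 - 180.0          # wrap into (-180, 180]
        eccentricity_error = math.hypot(rx, ry) - math.hypot(tx, ty)
        return angle_error, eccentricity_error

    # Example: a response rotated about 10 degrees clockwise at the correct eccentricity.
    print(recall_errors(target_xy=(5.0, 0.0), response_xy=(4.924, -0.868)))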

  2. Visualization of femtosecond laser pulse-induced microincisions inside crystalline lens tissue.

    Science.gov (United States)

    Stachs, Oliver; Schumacher, Silvia; Hovakimyan, Marine; Fromm, Michael; Heisterkamp, Alexander; Lubatschowski, Holger; Guthoff, Rudolf

    2009-11-01

    To evaluate a new method for visualizing femtosecond laser pulse-induced microincisions inside crystalline lens tissue. Laser Zentrum Hannover e.V., Hannover, Germany. Lenses removed from porcine eyes were modified ex vivo by femtosecond laser pulses (wavelength 1040 nm, pulse duration 306 femtoseconds, pulse energy 1.0 to 2.5 microJ, repetition rate 100 kHz) to create defined planes at which lens fibers separate. The femtosecond laser pulses were delivered by a three-dimensional (3-D) scanning unit and transmitted by focusing optics (numerical aperture 0.18) into the lens tissue. Lens fiber orientation and femtosecond laser-induced microincisions were examined using a confocal laser scanning microscope (CLSM) based on a Rostock Cornea Module attached to a Heidelberg Retina Tomograph II. Optical sections were analyzed in 3-D using Amira software (version 4.1.1). Normal lens fibers showed a parallel pattern with diameters between 3 microm and 9 microm, depending on scanning location. Microincision visualization showed different cutting effects depending on pulse energy of the femtosecond laser. The effects ranged from altered tissue-scattering properties with all fibers intact to definite fiber separation by a wide gap. Pulse energies that were too high or overlapped too tightly produced an incomplete cutting plane due to extensive microbubble generation. The 3-D CLSM method permitted visualization and analysis of femtosecond laser pulse-induced microincisions inside crystalline lens tissue. Thus, 3-D CLSM may help optimize femtosecond laser-based procedures in the treatment of presbyopia.

  3. Visual sensations induced by Cherenkov radiation

    International Nuclear Information System (INIS)

    McNulty, P.J.; Pease, V.P.; Bond, V.P.

    1975-01-01

    Pulses of relativistic singly charged particles entering the eyeball induce a variety of visual phenomena by means of Cerenkov radiation generated during their passage through the vitreous. These phenomena are similar in appearance to many of the visual sensations experienced by Apollo astronauts exposed to the cosmic rays in deep space

  4. Cortical activation patterns during long-term memory retrieval of visually or haptically encoded objects and locations.

    Science.gov (United States)

    Stock, Oliver; Röder, Brigitte; Burke, Michael; Bien, Siegfried; Rösler, Frank

    2009-01-01

    The present study used functional magnetic resonance imaging to delineate cortical networks that are activated when objects or spatial locations encoded either visually (visual encoding group, n=10) or haptically (haptic encoding group, n=10) had to be retrieved from long-term memory. Participants learned associations between auditorily presented words and either meaningless objects or locations in a 3-D space. During the retrieval phase one day later, participants had to decide whether two auditorily presented words shared an association with a common object or location. Thus, perceptual stimulation during retrieval was always equivalent, whereas either visually or haptically encoded object or location associations had to be reactivated. Moreover, the number of associations fanning out from each word varied systematically, enabling a parametric increase of the number of reactivated representations. Recall of visual objects predominantly activated the left superior frontal gyrus and the intraparietal cortex, whereas visually learned locations activated the superior parietal cortex of both hemispheres. Retrieval of haptically encoded material activated the left medial frontal gyrus and the intraparietal cortex in the object condition, and the bilateral superior parietal cortex in the location condition. A direct test for modality-specific effects showed that visually encoded material activated more vision-related areas (BA 18/19) and haptically encoded material more motor and somatosensory-related areas. A conjunction analysis identified supramodal and material-unspecific activations within the medial and superior frontal gyrus and the superior parietal lobe including the intraparietal sulcus. These activation patterns strongly support the idea that code-specific representations are consolidated and reactivated within anatomically distributed cell assemblies that comprise sensory and motor processing systems.

  5. Location-Unbound Color-Shape Binding Representations in Visual Working Memory.

    Science.gov (United States)

    Saiki, Jun

    2016-02-01

    The mechanism by which nonspatial features, such as color and shape, are bound in visual working memory, and the role of those features' location in their binding, remain unknown. In the current study, I modified a redundancy-gain paradigm to investigate these issues. A set of features was presented in a two-object memory display, followed by a single-object probe. Participants judged whether the probe contained any features of the memory display, regardless of its location. Response time distributions revealed feature coactivation only when both features of a single object in the memory display appeared together in the probe, regardless of the response time benefit from the probe and memory objects sharing the same location. This finding suggests that a shared location is necessary in the formation of bound representations but unnecessary in their maintenance. Electroencephalography data showed that amplitude modulations reflecting location-unbound feature coactivation were different from those reflecting the location-sharing benefit, consistent with the behavioral finding that feature-location binding is unnecessary in the maintenance of color-shape binding. © The Author(s) 2015.
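
    Coactivation in redundancy-gain designs is usually inferred from a violation of Miller's race-model inequality, which bounds how fast redundant-condition responses can be if the two features are processed in separate races. The abstract does not state the exact test used in this study, so the sketch below (with fabricated response times) should be read only as the standard form of such an analysis.

    # Sketch of Miller's (1982) race-model inequality test for coactivation
    # (fabricated RTs; the exact test used in the study above is not specified in the abstract).
    import numpy as np

    def race_model_violation(rt_redundant, rt_single_a, rt_single_b,
                             quantiles=np.arange(0.05, 1.0, 0.05)):
        # Positive values: the redundant-condition CDF exceeds the race-model bound (coactivation).
        t = np.quantile(rt_redundant, quantiles)
        cdf = lambda rts, times: np.searchsorted(np.sort(rts), times, side='right') / len(rts)
        bound = np.minimum(cdf(rt_single_a, t) + cdf(rt_single_b, t), 1.0)
        return cdf(rt_redundant, t) - bound

    rng = np.random.default_rng(2)
    single_a = rng.normal(550, 60, 300)     # RTs in ms when only feature A matches
    single_b = rng.normal(560, 60, 300)     # RTs when only feature B matches
    redundant = rng.normal(480, 50, 300)    # RTs when both features of one object match (made fast)
    print(np.round(race_model_violation(redundant, single_a, single_b), 2))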

  6. Characterizing the nature of visual conscious access: the distinction between features and locations.

    Science.gov (United States)

    Huang, Liqiang

    2010-08-24

    The difference between the roles of features and locations has been a central topic in the theoretical debates on visual attention. A recent theory proposed that momentary visual awareness is limited to one Boolean map, that is the linkage of one feature per dimension with a set of locations (L. Huang & H. Pashler, 2007). This theory predicts that: (a) access to the features of a set of objects is inefficient whereas access to their locations is efficient; (b) shuffling the locations of objects disrupts access to their features whereas shuffling the features of objects has little impact on access to their locations. Both of these predictions were confirmed in Experiments 1 and 2. Experiments 3 and 4 showed that this feature/location distinction remains when the task involves the detection of changes to old objects rather than the coding of new objects. Experiments 5 and 6 showed that, in a pre-specified set, one missing location can be readily detected, but detecting one missing color is difficult. Taken together, multiple locations seem to be accessed and represented together as a holistic pattern, but features have to be handled as separate labels, one at a time, and do not constitute a pattern in featural space.

  7. Location memory biases reveal the challenges of coordinating visual and kinesthetic reference frames

    Science.gov (United States)

    Simmering, Vanessa R.; Peterson, Clayton; Darling, Warren; Spencer, John P.

    2008-01-01

    Five experiments explored the influence of visual and kinesthetic/proprioceptive reference frames on location memory. Experiments 1 and 2 compared visual and kinesthetic reference frames in a memory task using visually-specified locations and a visually-guided response. When the environment was visible, results replicated previous findings of biases away from the midline symmetry axis of the task space, with stability for targets aligned with this axis. When the environment was not visible, results showed some evidence of bias away from a kinesthetically-specified midline (trunk anterior–posterior [a–p] axis), but there was little evidence of stability when targets were aligned with body midline. This lack of stability may reflect the challenges of coordinating visual and kinesthetic information in the absence of an environmental reference frame. Thus, Experiments 3–5 examined kinesthetic guidance of hand movement to kinesthetically-defined targets. Performance in these experiments was generally accurate with no evidence of consistent biases away from the trunk a–p axis. We discuss these results in the context of the challenges of coordinating reference frames within versus between multiple sensori-motor systems. PMID:17703284

  8. Asymmetrical access to color and location in visual working memory.

    Science.gov (United States)

    Rajsic, Jason; Wilson, Daryl E

    2014-10-01

    Models of visual working memory (VWM) have benefitted greatly from the use of the delayed-matching paradigm. However, in this task, the ability to recall a probed feature is confounded with the ability to maintain the proper binding between the feature that is to be reported and the feature (typically location) that is used to cue a particular item for report. Given that location is typically used as a cue-feature, we used the delayed-estimation paradigm to compare memory for location to memory for color, rotating which feature was used as a cue and which was reported. Our results revealed several novel findings: 1) the likelihood of reporting a probed object's feature was superior when reporting location with a color cue than when reporting color with a location cue; 2) location report errors were composed entirely of swap errors, with little to no random location reports; and 3) both color and location reports greatly benefitted from the presence of nonprobed items at test. This last finding suggests that it is uncertainty over the bindings between locations and colors at memory retrieval that drives swap errors, rather than uncertainty at encoding. We interpret our findings as consistent with a representational architecture that nests remembered object features within remembered locations.

  9. Effect of drivers' age and push button locations on visual time off road, steering wheel deviation and safety perception.

    Science.gov (United States)

    Dukic, T; Hanson, L; Falkmer, T

    2006-01-15

    The study examined the effects of manual control locations on two groups of randomly selected young and old drivers in relation to visual time off road, steering wheel deviation and safety perception. Measures of visual time off road, steering wheel deviations and safety perception were performed with young and old drivers during real traffic. The results showed an effect of both driver's age and button location on the dependent variables. Older drivers spent longer visual time off road when pushing the buttons and had larger steering wheel deviations. Moreover, the greater the eccentricity between the normal line of sight and the button locations, the longer the visual time off road and the larger the steering wheel deviations. No interaction effect between button location and age was found with regard to visual time off road. Button location had an effect on perceived safety: the further away from the normal line of sight the lower the rating.

  10. Contextual remapping in visual search after predictable target-location changes.

    Science.gov (United States)

    Conci, Markus; Sun, Luning; Müller, Hermann J

    2011-07-01

    Invariant spatial context can facilitate visual search. For instance, detection of a target is faster if it is presented within a repeatedly encountered, as compared to a novel, layout of nontargets, demonstrating a role of contextual learning for attentional guidance ('contextual cueing'). Here, we investigated how context-based learning adapts to target location (and identity) changes. Three experiments were performed in which, in an initial learning phase, observers learned to associate a given context with a given target location. A subsequent test phase then introduced identity and/or location changes to the target. The results showed that contextual cueing could not compensate for target changes that were not 'predictable' (i.e. learnable). However, for predictable changes, contextual cueing remained effective even immediately after the change. These findings demonstrate that contextual cueing is adaptive to predictable target location changes. Under these conditions, learned contextual associations can be effectively 'remapped' to accommodate new task requirements.

  11. Does visual working memory represent the predicted locations of future target objects? An event-related brain potential study.

    Science.gov (United States)

    Grubert, Anna; Eimer, Martin

    2015-11-11

    During the maintenance of task-relevant objects in visual working memory, the contralateral delay activity (CDA) is elicited over the hemisphere opposite to the visual field where these objects are presented. The presence of this lateralised CDA component demonstrates the existence of position-dependent object representations in working memory. We employed a change detection task to investigate whether the represented object locations in visual working memory are shifted in preparation for the known location of upcoming comparison stimuli. On each trial, bilateral memory displays were followed after a delay period by bilateral test displays. Participants had to encode and maintain three visual objects on one side of the memory display, and to judge whether they were identical or different to three objects in the test display. Task-relevant memory and test stimuli were located in the same visual hemifield in the no-shift task, and on opposite sides in the horizontal shift task. CDA components of similar size were triggered contralateral to the memorized objects in both tasks. The absence of a polarity reversal of the CDA in the horizontal shift task demonstrated that there was no preparatory shift of memorized object location towards the side of the upcoming comparison stimuli. These results suggest that visual working memory represents the locations of visual objects during encoding, and that the matching of memorized and test objects at different locations is based on a comparison process that can bridge spatial translations between these objects. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Experimentally-induced dissociation impairs visual memory.

    Science.gov (United States)

    Brewin, Chris R; Mersaditabari, Niloufar

    2013-12-01

    Dissociation is a phenomenon common in a number of psychological disorders and has been frequently suggested to impair memory for traumatic events. In this study we explored the effects of dissociation on visual memory. A dissociative state was induced experimentally using a mirror-gazing task and its short-term effects on memory performance were investigated. Sixty healthy individuals took part in the experiment. Induced dissociation impaired visual memory performance relative to a control condition; however, the degree of dissociation was not associated with lower memory scores in the experimental group. The results have theoretical and practical implications for individuals who experience frequent dissociative states such as patients with posttraumatic stress disorder (PTSD). Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Different effects of color-based and location-based selection on visual working memory.

    Science.gov (United States)

    Li, Qi; Saiki, Jun

    2015-02-01

    In the present study, we investigated how feature- and location-based selection influences visual working memory (VWM) encoding and maintenance. In Experiment 1, cue type (color, location) and cue timing (precue, retro-cue) were manipulated in a change detection task. The stimuli were color-location conjunction objects, and binding memory was tested. We found a significantly greater effect for color precues than for either color retro-cues or location precues, but no difference between location pre- and retro-cues, consistent with previous studies (e.g., Griffin & Nobre in Journal of Cognitive Neuroscience, 15, 1176-1194, 2003). We also found no difference between location and color retro-cues. Experiment 2 replicated the color precue advantage with more complex color-shape-location conjunction objects. Only one retro-cue effect was different from that in Experiment 1: Color retro-cues were significantly less effective than location retro-cues in Experiment 2, which may relate to a structural property of multidimensional VWM representations. In Experiment 3, a visual search task was used, and the result of a greater location than color precue effect suggests that the color precue advantage in a memory task is related to the modulation of VWM encoding rather than of sensation and perception. Experiment 4, using a task that required only memory for individual features but not for feature bindings, further confirmed that the color precue advantage is specific to binding memory. Together, these findings reveal new aspects of the interaction between attention and VWM and provide potentially important implications for the structural properties of VWM representations.

  14. Time course influences transfer of visual perceptual learning across spatial location.

    Science.gov (United States)

    Larcombe, S J; Kennard, C; Bridge, H

    2017-06-01

    Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Diabetic macular oedema and visual loss: relationship to location, severity and duration

    DEFF Research Database (Denmark)

    Gardner, Thomas W; Larsen, Michael; Girach, Aniz

    2009-01-01

    Purpose: To assess the relationship between visual acuity (VA) and diabetic macular oedema (DMO) in relation to the location of retinal thickening and the severity and duration of central macular thickening. Methods: Data from 584 eyes in 340 placebo-treated patients in the 3-years ... (Snellen equivalent = 20/125). Diabetic retinopathy and DMO status were assessed using stereo photographs. Results: Nearly one third of study eyes had foveal centre-involving DMO at the start of the trial. Sustained moderate visual loss was found in 36 eyes, most commonly associated with DMO at the centre...

  16. Vection and visually induced motion sickness: How are they related?

    Directory of Open Access Journals (Sweden)

    Behrang eKeshavarz

    2015-04-01

    The occurrence of visually induced motion sickness has been frequently linked to the sensation of illusory self-motion (so-called vection); however, the precise nature of this relationship is still not fully understood. To date, it is still a matter of debate whether or not vection is a necessary prerequisite for visually induced motion sickness (VIMS). That is, can there be visually induced motion sickness without any sensation of self-motion? In this paper, we will describe the possible nature of this relationship, review the literature that may speak to this relationship (including theoretical accounts of vection and VIMS), and offer suggestions with respect to operationally defining and reporting these phenomena in the future.

  17. Deep generative learning of location-invariant visual word recognition

    Science.gov (United States)

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words—which was the model's learning objective
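
    The pipeline described here (unsupervised training of a deep generative model on letter strings shown at several retinal locations, followed by linear decoding of word identity from the deepest hidden layer) can be caricatured with the toy stand-in below. The tiny lexicon, the two stacked RBMs, and every parameter choice are illustrative assumptions only; the original model had three hidden layers and was trained on far more data, so the toy's decoding score should not be taken as a reproduction of the reported near-perfect accuracy.

    # Toy stand-in for unsupervised deep feature learning on letter strings plus a linear readout
    # tested at a held-out retinal location (all details assumed; not the authors' model).
    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    letters = 'abcdefghij'
    lexicon = ['abc', 'bad', 'caf', 'dig', 'egg', 'fad', 'gab', 'hid']   # made-up 3-letter lexicon
    n_slots, n_locations = 7, 5            # 7 retinal slots; a word can start at slots 0..4

    def encode(word, location):
        # One-hot encode a word placed at a given retinal start slot; empty slots get a blank symbol.
        x = np.zeros((n_slots, len(letters) + 1))
        x[:, -1] = 1.0
        for i, ch in enumerate(word):
            x[location + i, -1] = 0.0
            x[location + i, letters.index(ch)] = 1.0
        return x.ravel()

    X, word_id, loc = [], [], []
    for rep in range(40):                  # repeated, slightly noisy presentations
        for w, word in enumerate(lexicon):
            for l in range(n_locations):
                x = encode(word, l)
                X.append(np.clip(x + rng.normal(0, 0.05, x.shape), 0, 1))
                word_id.append(w)
                loc.append(l)
    X, word_id, loc = np.array(X), np.array(word_id), np.array(loc)

    # Unsupervised feature learning: two stacked RBMs (word labels are never used here).
    h1 = BernoulliRBM(n_components=60, learning_rate=0.05, n_iter=30, random_state=0).fit(X)
    h2 = BernoulliRBM(n_components=40, learning_rate=0.05, n_iter=30, random_state=0).fit(h1.transform(X))
    deep = h2.transform(h1.transform(X))

    # Linear decoding of word identity, with the readout tested at a retinal location it never saw.
    train, test = loc != 4, loc == 4
    readout = LogisticRegression(max_iter=2000).fit(deep[train], word_id[train])
    print('word decoding at held-out location:', readout.score(deep[test], word_id[test]))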

  18. Deep generative learning of location-invariant visual word recognition.

    Science.gov (United States)

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words-which was the model's learning objective

  19. Effect of Target Location on Dynamic Visual Acuity During Passive Horizontal Rotation

    Science.gov (United States)

    Appelbaum, Meghan; DeDios, Yiri; Kulecz, Walter; Peters, Brian; Wood, Scott

    2010-01-01

    The vestibulo-ocular reflex (VOR) generates eye rotation to compensate for potential retinal slip in the specific plane of head movement. Dynamic visual acuity (DVA) has been utilized as a functional measure of the VOR. The purpose of this study was to examine changes in accuracy and reaction time when performing a DVA task with targets offset from the plane of rotation, e.g. offset vertically during horizontal rotation. Visual acuity was measured in 12 healthy subjects as they moved a hand-held joystick to indicate the orientation of a computer-generated Landolt C "as quickly and accurately as possible." Acuity thresholds were established with optotypes presented centrally on a wall-mounted LCD screen at 1.3 m distance, first without motion (static condition) and then while oscillating at 0.8 Hz (DVA, peak velocity 60 deg/s). The effect of target location was then measured during horizontal rotation with the optotypes randomly presented in one of nine different locations on the screen (offset up to 10 deg). The optotype size (logMAR 0, 0.2, or 0.4, corresponding to a Snellen range of 20/20 to 20/50) and presentation duration (150, 300 and 450 ms) were counter-balanced across five trials, each utilizing horizontal rotation at 0.8 Hz. Dynamic acuity was reduced relative to static acuity in 7 of 12 subjects by one step size. During the random target trials, both accuracy and reaction time improved in proportion to optotype size. Accuracy and reaction time also improved between 150 ms and 300 ms presentation durations. The main finding was that both accuracy and reaction time varied as a function of target location, with greater performance decrements when acquiring vertically offset targets. We conclude that dynamic visual acuity varies with target location, with acuity optimized for targets in the plane of motion. Both reaction time and accuracy are functionally relevant DVA parameters of VOR function.
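
    The optotype sizes quoted above mix logMAR and Snellen notation; for 20-ft notation the two are related by Snellen denominator = 20 × 10^logMAR, which the short snippet below checks (rounded to the nearest standard line).

    # Quick check of the logMAR-to-Snellen correspondence quoted in the abstract:
    # Snellen denominator = 20 * 10**logMAR.
    for logmar in (0.0, 0.2, 0.4):
        print(f'logMAR {logmar:.1f}  ~  20/{20 * 10 ** logmar:.0f}')
    # prints 20/20, 20/32, 20/50, matching the quoted Snellen range of 20/20 to 20/50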

  20. Visualization study of flow in axial flow inducer.

    Science.gov (United States)

    Lakshminarayana, B.

    1972-01-01

    A visualization study of the flow through a three-foot-diameter model of a four-bladed inducer, operated in air at a flow coefficient of 0.065, is reported in this paper. The flow near the blade surfaces, inside the rotating passages, and downstream and upstream of the inducer is visualized by means of smoke, tufts, ammonia-filament, and lampblack techniques. The flow is found to be highly three-dimensional, with appreciable radial velocity throughout the entire passage. The secondary flows observed near the hub and annulus walls agree with qualitative predictions obtained from inviscid secondary-flow theory.

  1. A Location Aware Middleware Framework for Collaborative Visual Information Discovery and Retrieval

    Science.gov (United States)

    2017-09-14

    scalable location-aware distributed indexing to enable the leveraging of collaborative effort for the construction and maintenance of world-scale visual ... To build an image-based database for road navigation, Google hires cars to drive and take pictures along roads. For this effort to have complete global ...

  2. Deep generative learning of location-invariant visual word recognition

    Directory of Open Access Journals (Sweden)

    Maria Grazia eDi Bono

    2013-09-01

    Full Text Available It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centred (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Conversely, there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words – which was the model's learning objective – is largely based on letter-level information.
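
    The pipeline this abstract describes (unsupervised learning of a deep hierarchy on position-coded letter strings, followed by linear decoding of word identity from the deepest hidden layer) can be sketched in a few lines. The sketch below is only an illustration of the idea, not the authors' model: it substitutes a small multilayer perceptron trained to reconstruct its input for their deep generative network, and the toy lexicon, layer sizes, and held-out-location test are assumptions.

      import numpy as np
      from sklearn.neural_network import MLPRegressor      # autoencoder stand-in
      from sklearn.linear_model import LogisticRegression

      ALPHABET = "abcdefghijklmnopqrstuvwxyz"
      WORDS = ["cart", "trac", "rock", "corn", "barn", "bran"]   # toy lexicon (assumption)
      N_SLOTS, WORD_LEN = 8, 4                                   # gives five possible retinal positions

      def encode(word, start):
          """One-hot encode a word placed at a given retinal start position."""
          x = np.zeros((N_SLOTS, len(ALPHABET)))
          for i, ch in enumerate(word):
              x[start + i, ALPHABET.index(ch)] = 1.0
          return x.ravel()

      X, y, starts = [], [], []
      for w_idx, w in enumerate(WORDS):
          for start in range(N_SLOTS - WORD_LEN + 1):
              X.append(encode(w, start))
              y.append(w_idx)
              starts.append(start)
      X, y, starts = np.array(X), np.array(y), np.array(starts)

      # Unsupervised stage: reconstruct the input through a deep bottleneck;
      # word identity is never shown to this network.
      ae = MLPRegressor(hidden_layer_sizes=(64, 32, 16), max_iter=5000, random_state=0)
      ae.fit(X, X)

      def deepest_hidden(model, data, n_hidden=3):
          """Forward-propagate to the deepest hidden layer (ReLU is the default activation)."""
          a = data
          for i in range(n_hidden):
              a = np.maximum(0.0, a @ model.coefs_[i] + model.intercepts_[i])
          return a

      H = deepest_hidden(ae, X)

      # Linear read-out of word identity, tested on a held-out retinal location.
      train = starts != 2
      clf = LogisticRegression(max_iter=2000).fit(H[train], y[train])
      print("decoding accuracy at the held-out location:", clf.score(H[~train], y[~train]))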

  3. Photosensitivity and visually induced seizures: review

    NARCIS (Netherlands)

    Parra, J.; Kalitzin, S.; Lopes da Silva, F.H.

    2005-01-01

    PURPOSE OF REVIEW: Interest in visually induced seizures has increased in recent years as a result of the increasing number of precipitants in our modern environment. This review addresses new developments in this field with special attention given to the emergence of new diagnostic, therapeutic and

  4. Visualization of cavitation bubbles induced by a laser pulse

    International Nuclear Information System (INIS)

    Testud-Giovanneschi, P.; Dufresne, D.; Inglesakis, G.

    1987-01-01

    The I.M.F.M. researchers working on Laser-Matter Interaction are studying the effects induced on matter by a pulsed radiation energy deposit. In this research, the emphasis is on the laser-liquid interaction field and, more particularly, on the cavitation induced by a laser pulse, or "optical cavitation" as termed by W. Lauterborn (1). For investigating bubbles, visualization is a basic diagnostic. This paper presents the experimental apparatus for the formation of bubbles, the visualization apparatus, and typical examples of photographic recordings.

  5. Visually Induced Dizziness in Children and Validation of the Pediatric Visually Induced Dizziness Questionnaire

    Directory of Open Access Journals (Sweden)

    Marousa Pavlou

    2017-12-01

    Full Text Available Aims: To develop and validate the Pediatric Visually Induced Dizziness Questionnaire (PVID) and quantify the presence and severity of visually induced dizziness (ViD), i.e., symptoms induced by visual motion stimuli (including crowds and scrolling computer screens) in children. Methods: 169 healthy children (female n = 89; recruited from mainstream schools, London, UK) and 114 children with a primary migraine, concussion, or vestibular disorder diagnosis (female n = 62), aged 6–17 years, were included. Children with primary migraine were recruited from mainstream schools while children with concussion or vestibular disorder were recruited from tertiary balance centers in London, UK, and Pittsburgh, PA, USA. Children completed the PVID, which assesses the frequency of dizziness and unsteadiness experienced in specific environmental situations, and the Strengths and Difficulties Questionnaire (SDQ), a brief behavioral screening instrument. Results: The PVID showed high internal consistency (11 items; α = 0.90). A significant between-group difference was noted, with higher (i.e., worse) PVID scores for patients vs. healthy participants (U = 2,436.5, z = −10.719, p < 0.001); a significant difference was noted between individual patient groups [χ2(2) = 11.014, p = 0.004], but post hoc analysis showed no significant pairwise comparisons. The optimal cut-off score for discriminating between individuals with and without abnormal ViD levels was 0.45 out of 3 (sensitivity 83%, specificity 75%). Self-rated emotional (U = 2,730.0, z = −6.169) and hyperactivity (U = 3,445.0, z = −4.506) SDQ subscale scores, as well as informant-rated (U = 188.5, z = −3.916) and self-rated (U = 3,178.5, z = −5.083) total scores, were significantly worse for patients compared to healthy participants (p < 0.001). Conclusion: ViD is common in children with a primary concussion, migraine, or vestibular diagnosis. The PVID is a valid measure for
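
    As an aside on the two psychometric quantities reported above, the following is a minimal sketch (on simulated item scores, not the study's data) of how Cronbach's alpha for the 11 items and the sensitivity/specificity of a candidate cut-off score are computed; group sizes mirror the abstract, everything else is an assumption.

      import numpy as np

      rng = np.random.default_rng(0)
      healthy = rng.integers(0, 2, size=(169, 11)).astype(float)    # toy item scores (0-1), healthy group
      patients = rng.integers(0, 4, size=(114, 11)).astype(float)   # toy item scores (0-3), patient group

      def cronbach_alpha(items):
          """Internal consistency of a set of questionnaire items (rows = respondents)."""
          k = items.shape[1]
          return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                                / items.sum(axis=1).var(ddof=1))

      def sens_spec(patient_scores, healthy_scores, cutoff):
          sens = np.mean(patient_scores >= cutoff)   # patients correctly flagged
          spec = np.mean(healthy_scores < cutoff)    # healthy correctly passed
          return sens, spec

      alpha = cronbach_alpha(np.vstack([healthy, patients]))
      print(alpha, sens_spec(patients.mean(axis=1), healthy.mean(axis=1), cutoff=0.45))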

  6. Humans use visual and remembered information about object location to plan pointing movements

    NARCIS (Netherlands)

    Brouwer, A.-M.; Knill, D.C.

    2009-01-01

    We investigated whether humans use a target's remembered location to plan reaching movements to targets according to the relative reliabilities of visual and remembered information. Using their index finger, subjects moved a virtual object from one side of a table to the other, and then went back to

  7. A Visual Analysis Approach for Inferring Personal Job and Housing Locations Based on Public Bicycle Data

    Directory of Open Access Journals (Sweden)

    Xiaoying Shi

    2017-07-01

    Full Text Available Information concerning the home and workplace of residents is the basis for analyzing the urban job-housing spatial relationship. Traditional methods conduct time-consuming user surveys to obtain personal job and housing location information. Some new methods define rules to detect personal places based on human mobility data. However, because the travel patterns of residents are variable, simple rule-based methods are unable to generalize to highly variable and complex travel modes. In this paper, we propose a visual analysis approach to assist the analyst in inferring personal job and housing locations interactively based on public bicycle data. All users are first clustered to find potential commuting users. Then, several visual views are designed to find the key candidate stations for a specific user, and the temporal pattern of station visits and the user's hire behavior are analyzed, which helps infer the semantic meaning of each station. Finally, a number of users' job and housing locations are detected by the analyst and visualized. Our approach can manage the complex and diverse cycling habits of users. The effectiveness of the approach is shown through case studies based on a real-world public bicycle dataset.
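
    A minimal sketch of the kind of inference the abstract describes, using toy hire records rather than the authors' system or data: stations a commuting user departs from in the morning and returns to in the evening look like home, while stations visited in the opposite pattern look like work. The column names and hour thresholds are assumptions.

      import pandas as pd

      # Toy hire records for one user: station, hour of day, and whether the user
      # departed from or arrived at that station (columns are assumptions).
      trips = pd.DataFrame([
          ("u1", "S12", 8,  "depart"), ("u1", "S47", 9,  "arrive"),
          ("u1", "S47", 18, "depart"), ("u1", "S12", 19, "arrive"),
          ("u1", "S12", 8,  "depart"), ("u1", "S47", 9,  "arrive"),
      ], columns=["user", "station", "hour", "kind"])

      def infer_places(df):
          """Home: morning departures / evening arrivals. Work: the opposite pattern."""
          morning = df[df.hour.between(6, 10)]
          evening = df[df.hour.between(17, 21)]
          home = pd.concat([morning[morning.kind == "depart"],
                            evening[evening.kind == "arrive"]]).station.mode()
          work = pd.concat([morning[morning.kind == "arrive"],
                            evening[evening.kind == "depart"]]).station.mode()
          return home.iat[0], work.iat[0]

      print(infer_places(trips[trips.user == "u1"]))   # -> ('S12', 'S47')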

  8. A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.

    Science.gov (United States)

    Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei

    2014-09-19

    Visually induced near-infrared spectroscopy (NIRS) responses were utilized to design a brain computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering segments and resting segments. The flickering segment had a fixed duration of 3 s, whereas the resting segment was chosen randomly within 15–20 s to create mutual independence among the different flickering sequences. Six subjects were recruited, and they were requested to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli, and the flicker sequences of the distinct stimuli were designed to be mutually independent, the NIRS responses induced by the gazed target can be discerned from those of non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after averaging 10 or more epochs. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
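
    The averaging idea can be illustrated with simulated data: because each stimulus' flicker onsets are independent of the others, epochs time-locked to the gazed target's onsets average to a clear response, while epochs locked to the other targets average toward baseline. The sketch below is an assumption-laden toy (sampling rate, response kernel, and epoch length are made up), not the authors' processing pipeline.

      import numpy as np

      rng = np.random.default_rng(1)
      fs = 10                                  # sampling rate, Hz (assumption)
      hrf = np.exp(-np.arange(0, 10, 1 / fs))  # toy hemodynamic-like response kernel
      signal_len = 6000                        # 10 minutes at 10 Hz

      def random_onsets(n, gap_range=(150, 200)):
          """Onsets separated by random 15-20 s gaps (in samples), one sequence per stimulus."""
          return np.cumsum(rng.integers(*gap_range, size=n))

      onsets = {k: random_onsets(20) for k in range(4)}    # four flickering stimuli
      gazed = 2

      # Simulated NIRS channel: it responds only to the stimulus the subject gazes at.
      nirs = rng.normal(0.0, 1.0, signal_len)
      for t in onsets[gazed]:
          nirs[t:t + len(hrf)] += hrf

      def epoch_average(sig, trig, length=50):
          """Average the signal over epochs time-locked to one stimulus' onsets."""
          return np.mean([sig[t:t + length] for t in trig if t + length <= len(sig)], axis=0)

      scores = {k: epoch_average(nirs, onsets[k]).mean() for k in range(4)}
      print("classified target:", max(scores, key=scores.get), scores)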

  9. Illusory conjunctions in simultanagnosia: coarse coding of visual feature location?

    Science.gov (United States)

    McCrea, Simon M; Buxbaum, Laurel J; Coslett, H Branch

    2006-01-01

    Simultanagnosia is a disorder characterized by an inability to see more than one object at a time. We report a simultanagnosic patient (ED) with bilateral posterior infarctions who produced frequent illusory conjunctions on tasks involving form and surface features (e.g., a red T) and form alone. ED also produced "blend" errors in which features of one familiar perceptual unit appeared to migrate to another familiar perceptual unit (e.g., "RO" read as "PQ"). ED often misread scrambled letter strings as a familiar word (e.g., "hmoe" read as "home"). Finally, ED's success in reporting two letters in an array was inversely related to the distance between the letters. These findings are consistent with the hypothesis that ED's illusory conjunctions reflect coarse coding of visual feature location that is ameliorated in part by top-down information from object and word recognition systems; the findings are also consistent, however, with Treisman's Feature Integration Theory. Finally, the data provide additional support for the claim that the dorsal parieto-occipital cortex is implicated in the binding of visual feature information.

  10. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    Science.gov (United States)

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.

  11. Visualizing Vpr-induced G2 arrest and apoptosis.

    Directory of Open Access Journals (Sweden)

    Tomoyuki Murakami

    Full Text Available Vpr is an accessory protein of human immunodeficiency virus type 1 (HIV-1) with multiple functions. The induction of G2 arrest by Vpr plays a particularly important role in efficient viral replication because the transcriptional activity of the HIV-1 long terminal repeat is most active in G2 phase. The regulation of apoptosis by Vpr is also important for immune suppression and pathogenesis during HIV infection. However, it is not known whether Vpr-induced apoptosis depends on the ability of Vpr to induce G2 arrest, and the dynamics of Vpr-induced G2 arrest and apoptosis have not been visualized. We performed time-lapse imaging to examine the temporal relationship between Vpr-induced G2 arrest and apoptosis using HeLa cells containing the fluorescent ubiquitination-based cell cycle indicator 2 (Fucci2). The dynamics of G2 arrest and subsequent long-term mitotic cell rounding in cells transfected with the Vpr-expression vector were visualized. These cells underwent nuclear mis-segregation after prolonged mitotic processes and then entered G1 phase. Some cells subsequently displayed evidence of apoptosis after prolonged mitotic processes and nuclear mis-segregation. Interestingly, Vpr-induced apoptosis was seldom observed in S or G2 phase. Likewise, visualization of synchronized HeLa/Fucci2 cells infected with an adenoviral vector expressing Vpr clearly showed that Vpr arrests the cell cycle at G2 phase, but does not induce apoptosis at S or G2 phase. Furthermore, time-lapse imaging of HeLa/Fucci2 cells expressing SCAT3.1, a caspase-3-sensitive fusion protein, clearly demonstrated that Vpr induces caspase-3-dependent apoptosis. Finally, to examine whether the effects of Vpr on G2 arrest and apoptosis were reversible, we performed live-cell imaging of a destabilizing domain fusion Vpr, which enabled rapid stabilization and destabilization by Shield1. The effects of Vpr on G2 arrest and subsequent apoptosis were reversible. This study is the first to

  12. Visually induced eye movements in Wallenberg's syndrome

    International Nuclear Information System (INIS)

    Kanayama, R.; Nakamura, T.; Ohki, M.; Kimura, Y.; Koike, Y.; Kato, I.

    1991-01-01

    Eighteen patients with Wallenberg's syndrome were investigated concerning visually induced eye movements. All results were analysed quantitatively using a computer. In 16 out of 18 patients, OKN slow-phase velocities were impaired; in the remaining 2 patients they were normal. All patients showed reduced visual suppression of caloric nystagmus during the slow-phase of nystagmus toward the lesion side, except 3 patients who showed normal visual suppression in both directions. CT scan failed to detect either the brainstem or the cerebellar lesions in any of the cases, but MRI performed on the most recent cases demonstrated the infarctions clearly. These findings suggest that infarctions are localized in the medulla in the patients of group A, but extend to the cerebellum as well as to the medulla in patients of group B. (au)

  13. Contingency blindness: location-identity binding mismatches obscure awareness of spatial contingencies and produce profound interference in visual working memory.

    Science.gov (United States)

    Fiacconi, Chris M; Milliken, Bruce

    2012-08-01

    The purpose of the present study was to highlight the role of location-identity binding mismatches in obscuring explicit awareness of a strong contingency. In a spatial-priming procedure, we introduced a high likelihood of location-repeat trials. Experiments 1, 2a, and 2b demonstrated that participants' explicit awareness of this contingency was heavily influenced by the local match in location-identity bindings. In Experiment 3, we sought to determine why location-identity binding mismatches produce such low levels of contingency awareness. Our results suggest that binding mismatches can interfere substantially with visual-memory performance. We attribute the low levels of contingency awareness to participants' inability to remember the critical location-identity binding in the prime on a trial-to-trial basis. These results imply a close interplay between object files and visual working memory.

  14. Visual assessments for Swisher County and Deaf Smith County locations, Palo Duro Basin, Texas

    International Nuclear Information System (INIS)

    1984-12-01

    The area of the Swisher and Deaf Smith County locations is characterized by vast open spaces with limited vertical relief and vegetative cover. The stream valleys and areas around the playa lakes provide the only significant topographical relief in either location, and the areas in range vegetation provide the only major contrast to the dominant land cover of agricultural crops. Tree stands occur almost exclusively in association with orchards, country clubs, farmsteads, and urban areas. Because of climatic conditions in the region, there are few permanent water bodies in either location. Grain elevators, farmsteads, and other cultural modifications (roads, utility lines, fence rows, etc.) are scattered throughout both locations, but they constitute a very small portion of the visible landscape. These features help provide scale in the landscape and also serve as visual landmarks

  15. VISTILES: Coordinating and Combining Co-located Mobile Devices for Visual Data Exploration.

    Science.gov (United States)

    Langner, Ricardo; Horak, Tom; Dachselt, Raimund

    2017-08-29

    We present VISTILES, a conceptual framework that uses a set of mobile devices to distribute and coordinate visualization views for the exploration of multivariate data. In contrast to desktop-based interfaces for information visualization, mobile devices offer the potential to provide a dynamic and user-defined interface supporting co-located collaborative data exploration with different individual workflows. As part of our framework, we contribute concepts that enable users to interact with coordinated & multiple views (CMV) that are distributed across several mobile devices. The major components of the framework are: (i) dynamic and flexible layouts for CMV focusing on the distribution of views and (ii) an interaction concept for smart adaptations and combinations of visualizations utilizing explicit side-by-side arrangements of devices. As a result, users can benefit from the possibility to combine devices and organize them in meaningful spatial layouts. Furthermore, we present a web-based prototype implementation as a specific instance of our concepts. This implementation provides a practical application case enabling users to explore a multivariate data collection. We also illustrate the design process including feedback from a preliminary user study, which informed the design of both the concepts and the final prototype.

  16. Neural correlates of visually induced self-motion illusion in depth.

    Science.gov (United States)

    Kovács, Gyula; Raabe, Markus; Greenlee, Mark W

    2008-08-01

    Optic-flow fields can induce the conscious illusion of self-motion in a stationary observer. Here we used functional magnetic resonance imaging to reveal the differential processing of self- and object-motion in the human brain. Subjects were presented a constantly expanding optic-flow stimulus, composed of disparate red-blue dots, viewed through red-blue glasses to generate a vivid percept of three-dimensional motion. We compared the activity obtained during periods of illusory self-motion with periods of object-motion percept. We found that the right MT+, precuneus, as well as areas located bilaterally along the dorsal part of the intraparietal sulcus and along the left posterior intraparietal sulcus were more active during self-motion perception than during object-motion. Additional signal increases were located in the depth of the left superior frontal sulcus, over the ventral part of the left anterior cingulate, in the depth of the right central sulcus and in the caudate nucleus/putamen. We found no significant deactivations associated with self-motion perception. Our results suggest that the illusory percept of self-motion is correlated with the activation of a network of areas, ranging from motion-specific areas to regions involved in visuo-vestibular integration, visual imagery, decision making, and introspection.

  17. The effects of link format and screen location on visual search of web pages.

    Science.gov (United States)

    Ling, Jonathan; Van Schaik, Paul

    2004-06-22

    Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.

  18. Spatiotopic updating of visual feature information.

    Science.gov (United States)

    Zimmermann, Eckart; Weidner, Ralph; Fink, Gereon R

    2017-10-01

    Saccades shift the retina with high-speed motion. In order to compensate for the sudden displacement, the visuomotor system needs to combine saccade-related information and visual metrics. Many neurons in oculomotor but also in visual areas shift their receptive field shortly before the execution of a saccade (Duhamel, Colby, & Goldberg, 1992; Nakamura & Colby, 2002). These shifts supposedly enable the binding of information from before and after the saccade. It is a matter of current debate whether these shifts are merely location based (i.e., involve remapping of abstract spatial coordinates) or also comprise information about visual features. We have recently presented fMRI evidence for a feature-based remapping mechanism in visual areas V3, V4, and VO (Zimmermann, Weidner, Abdollahi, & Fink, 2016). In particular, we found fMRI adaptation in cortical regions representing a stimulus' retinotopic as well as its spatiotopic position. Here, we asked whether spatiotopic adaptation exists independently from retinotopic adaptation and which type of information is behaviorally more relevant after saccade execution. We first adapted at the saccade target location only and found a spatiotopic tilt aftereffect. Then, we simultaneously adapted both the fixation and the saccade target location but with opposite tilt orientations. As a result, adaptation from the fixation location was carried retinotopically to the saccade target position. The opposite tilt orientation at the retinotopic location altered the effects induced by spatiotopic adaptation. More precisely, it cancelled out spatiotopic adaptation at the saccade target location. We conclude that retinotopic and spatiotopic visual adaptation are independent effects.

  19. Effects of Multimodal Displays About Threat Location on Target Acquisition and Attention to Visual and Auditory Communications

    National Research Council Canada - National Science Library

    Glumm, Monica M; Kehring, Kathy L; White, Timothy L

    2007-01-01

    This laboratory experiment examined the effects of paired sensory cues that indicate the location of targets on target acquisition performance, the recall of information presented in concurrent visual...

  20. Ensemble clustering in visual working memory biases location memories and reduces the Weber noise of relative positions.

    Science.gov (United States)

    Lew, Timothy F; Vul, Edward

    2015-01-01

    People seem to compute the ensemble statistics of objects and use this information to support the recall of individual objects in visual working memory. However, there are many different ways that hierarchical structure might be encoded. We examined the format of structured memories by asking subjects to recall the locations of objects arranged in different spatial clustering structures. Consistent with previous investigations of structured visual memory, subjects recalled objects biased toward the center of their clusters. Subjects also recalled locations more accurately when they were arranged in fewer clusters containing more objects, suggesting that subjects used the clustering structure of objects to aid recall. Furthermore, subjects had more difficulty recalling larger relative distances, consistent with subjects encoding the positions of objects relative to clusters and recalling them with magnitude-proportional (Weber) noise. Our results suggest that clustering improved the fidelity of recall by biasing the recall of locations toward cluster centers to compensate for uncertainty and by reducing the magnitude of encoded relative distances.
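
    One simple way such a hierarchical code could produce both effects is to store each item relative to its cluster's center, add magnitude-proportional (Weber) noise to that relative position, and shrink the noisy relative position back toward the center in proportion to its unreliability. The sketch below illustrates that idea with made-up weights and locations; it is not the authors' model.

      import numpy as np

      def expected_recall(items, weber=0.3, prior_sd=1.0):
          """items: (n, 2) true 2D locations belonging to one cluster.
          Encoding: position relative to the cluster center, with Weber
          (magnitude-proportional) noise. Decoding: shrink the noisy relative
          position toward 0 (the center) in proportion to its unreliability."""
          center = items.mean(axis=0)
          rel = items - center
          noise_sd = weber * np.linalg.norm(rel, axis=1, keepdims=True)
          w = prior_sd**2 / (prior_sd**2 + noise_sd**2)   # reliability weight in (0, 1]
          return center + w * rel                          # expected recalled locations

      items = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, 3.0]])
      print(expected_recall(items))   # items far from the center are pulled inward the most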

  1. Activation of serotonin 2A receptors underlies the psilocybin-induced effects on α oscillations, N170 visual-evoked potentials, and visual hallucinations.

    Science.gov (United States)

    Kometer, Michael; Schmidt, André; Jäncke, Lutz; Vollenweider, Franz X

    2013-06-19

    Visual illusions and hallucinations are hallmarks of serotonergic hallucinogen-induced altered states of consciousness. Although the serotonergic hallucinogen psilocybin activates multiple serotonin (5-HT) receptors, recent evidence suggests that activation of 5-HT2A receptors may lead to the formation of visual hallucinations by increasing cortical excitability and altering visual-evoked cortical responses. To address this hypothesis, we assessed the effects of psilocybin (215 μg/kg vs placebo) on both α oscillations that regulate cortical excitability and early visual-evoked P1 and N170 potentials in healthy human subjects. To further disentangle the specific contributions of 5-HT2A receptors, subjects were additionally pretreated with the preferential 5-HT2A receptor antagonist ketanserin (50 mg vs placebo). We found that psilocybin strongly decreased prestimulus parieto-occipital α power values, thus precluding a subsequent stimulus-induced α power decrease. Furthermore, psilocybin strongly decreased N170 potentials associated with the appearance of visual perceptual alterations, including visual hallucinations. All of these effects were blocked by pretreatment with the 5-HT2A antagonist ketanserin, indicating that activation of 5-HT2A receptors by psilocybin profoundly modulates the neurophysiological and phenomenological indices of visual processing. Specifically, activation of 5-HT2A receptors may induce a processing mode in which stimulus-driven cortical excitation is overwhelmed by spontaneous neuronal excitation through the modulation of α oscillations. Furthermore, the observed reduction of N170 visual-evoked potentials may be a key mechanism underlying 5-HT2A receptor-mediated visual hallucinations. This change in N170 potentials may be important not only for psilocybin-induced states but also for understanding acute hallucinatory states seen in psychiatric disorders, such as schizophrenia and Parkinson's disease.

  2. Study of Coal Burst Source Locations in the Velenje Colliery

    Directory of Open Access Journals (Sweden)

    Goran Vižintin

    2016-06-01

    Full Text Available The Velenje coal mine (VCM) is situated on the largest Slovenian coal deposit and in one of the thickest layers of coal known in the world. The thickness of the coal layer causes problems for the efficiency of extraction, since the majority of mining operations is within the coal layer. The selected longwall coal mining method with specific geometry, increasing depth of excavations, changes in stress state and naturally given geomechanical properties of rocks induce seismic events. Induced seismic events can be caused by caving processes, blasting or bursts of coal or the surrounding rock. For 2.5D visualization, data of excavations, ash content and calorific value of coal samples, hanging wall and footwall occurrence, subsidence of the surface and coal burst source locations were collected. Data and interpolation methods available in the software package Surfer®12 were statistically analyzed and a Kriging (KRG) interpolation method was chosen. As a result, 2.5D visualizations of coal burst source locations with geomechanical properties of coal samples taken at different depths in the coal seam in the VCM were made with the data-visualization packages Surfer®12 and Voxler®3.
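
    The interpolation step named above (Kriging of sparse coal-sample measurements onto a regular grid) was carried out in Surfer®12; the same computation can be sketched with the open-source pykrige package, used here purely as a stand-in with toy coordinates and values.

      import numpy as np
      from pykrige.ok import OrdinaryKriging

      # Toy sample points: x, y coordinates and, e.g., ash content of coal samples.
      x = np.array([0.0, 1.0, 2.0, 3.0, 4.5])
      y = np.array([0.0, 2.0, 1.0, 3.0, 4.0])
      ash = np.array([12.1, 14.3, 13.0, 15.8, 16.2])

      ok = OrdinaryKriging(x, y, ash, variogram_model="linear")
      gridx = np.linspace(0.0, 5.0, 50)
      gridy = np.linspace(0.0, 5.0, 50)
      z, variance = ok.execute("grid", gridx, gridy)   # interpolated surface + kriging variance
      print(z.shape)                                   # (50, 50), ready for 2.5D plotting

    The "linear" variogram is simply pykrige's basic default; in practice the variogram model would be chosen from the spatial statistics of the samples.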

  3. Systematic data ingratiation of clinical trial recruitment locations for geographic-based query and visualization.

    Science.gov (United States)

    Luo, Jake; Chen, Weiheng; Wu, Min; Weng, Chunhua

    2017-12-01

    Prior studies of clinical trial planning indicate that it is crucial to search and screen recruitment sites before starting to enroll participants. However, currently there is no systematic method developed to support clinical investigators to search candidate recruitment sites according to their interested clinical trial factors. In this study, we aim at developing a new approach to integrating the location data of over one million heterogeneous recruitment sites that are stored in clinical trial documents. The integrated recruitment location data can be searched and visualized using a map-based information retrieval method. The method enables systematic search and analysis of recruitment sites across a large amount of clinical trials. The location data of more than 1.4 million recruitment sites of over 183,000 clinical trials was normalized and integrated using a geocoding method. The integrated data can be used to support geographic information retrieval of recruitment sites. Additionally, the information of over 6000 clinical trial target disease conditions and close to 4000 interventions was also integrated into the system and linked to the recruitment locations. Such data integration enabled the construction of a novel map-based query system. The system will allow clinical investigators to search and visualize candidate recruitment sites for clinical trials based on target conditions and interventions. The evaluation results showed that the coverage of the geographic location mapping for the 1.4 million recruitment sites was 99.8%. The evaluation of 200 randomly retrieved recruitment sites showed that the correctness of geographic information mapping was 96.5%. The recruitment intensities of the top 30 countries were also retrieved and analyzed. The data analysis results indicated that the recruitment intensity varied significantly across different countries and geographic areas. This study contributed a new data processing framework to extract and integrate
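
    The core geocoding step described here (normalizing a free-text recruitment-site string and mapping it to latitude/longitude so sites can be indexed and queried on a map) can be sketched as follows. This is not the paper's pipeline; it uses the geopy/Nominatim geocoder as a stand-in, and the normalization and example address are assumptions.

      from geopy.geocoders import Nominatim

      geocoder = Nominatim(user_agent="trial-site-demo")   # identify the app, per Nominatim usage policy

      def geocode_site(raw_site: str):
          """Return (lat, lon) for a free-text recruitment location, or None if unresolved."""
          site = " ".join(raw_site.split())                # trivial normalization (assumption)
          hit = geocoder.geocode(site, timeout=10)
          return (hit.latitude, hit.longitude) if hit else None

      print(geocode_site("Columbia University Medical Center, New York, NY, USA"))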

  4. Systematic data ingratiation of clinical trial recruitment locations for geographic-based query and visualization

    Science.gov (United States)

    Luo, Jake; Chen, Weiheng; Wu, Min; Weng, Chunhua

    2018-01-01

    Background: Prior studies of clinical trial planning indicate that it is crucial to search and screen recruitment sites before starting to enroll participants. However, currently there is no systematic method developed to support clinical investigators to search candidate recruitment sites according to their interested clinical trial factors. Objective: In this study, we aim at developing a new approach to integrating the location data of over one million heterogeneous recruitment sites that are stored in clinical trial documents. The integrated recruitment location data can be searched and visualized using a map-based information retrieval method. The method enables systematic search and analysis of recruitment sites across a large amount of clinical trials. Methods: The location data of more than 1.4 million recruitment sites of over 183,000 clinical trials was normalized and integrated using a geocoding method. The integrated data can be used to support geographic information retrieval of recruitment sites. Additionally, the information of over 6000 clinical trial target disease conditions and close to 4000 interventions was also integrated into the system and linked to the recruitment locations. Such data integration enabled the construction of a novel map-based query system. The system will allow clinical investigators to search and visualize candidate recruitment sites for clinical trials based on target conditions and interventions. Results: The evaluation results showed that the coverage of the geographic location mapping for the 1.4 million recruitment sites was 99.8%. The evaluation of 200 randomly retrieved recruitment sites showed that the correctness of geographic information mapping was 96.5%. The recruitment intensities of the top 30 countries were also retrieved and analyzed. The data analysis results indicated that the recruitment intensity varied significantly across different countries and geographic areas. Conclusion: This study contributed a new

  5. Visual discrimination training improves Humphrey perimetry in chronic cortically induced blindness.

    Science.gov (United States)

    Cavanaugh, Matthew R; Huxlin, Krystel R

    2017-05-09

    To assess if visual discrimination training improves performance on visual perimetry tests in chronic stroke patients with visual cortex involvement. 24-2 and 10-2 Humphrey visual fields were analyzed for 17 chronic cortically blind stroke patients prior to and following visual discrimination training, as well as in 5 untrained, cortically blind controls. Trained patients practiced direction discrimination, orientation discrimination, or both, at nonoverlapping, blind field locations. All pretraining and posttraining discrimination performance and Humphrey fields were collected with online eye tracking, ensuring gaze-contingent stimulus presentation. Trained patients recovered ∼108 degrees² of vision on average, while untrained patients spontaneously improved over an area of ∼16 degrees². Improvement was not affected by patient age, time since lesion, size of initial deficit, or training type, but was proportional to the amount of training performed. Untrained patients counterbalanced their improvements with worsening of sensitivity over ∼9 degrees² of their visual field. Worsening was minimal in trained patients. Finally, although discrimination performance improved at all trained locations, changes in Humphrey sensitivity occurred both within trained regions and beyond, extending over a larger area along the blind field border. In adults with chronic cortical visual impairment, the blind field border appears to have enhanced plastic potential, which can be recruited by gaze-controlled visual discrimination training to expand the visible field. Our findings underscore a critical need for future studies to measure the effects of vision restoration approaches on perimetry in larger cohorts of patients. Copyright © 2017 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the American Academy of Neurology.

  6. Peripapillary Retinal Nerve Fiber Layer Thickness Corresponds to Drusen Location and Extent of Visual Field Defects in Superficial and Buried Optic Disc Drusen.

    Science.gov (United States)

    Malmqvist, Lasse; Wegener, Marianne; Sander, Birgit A; Hamann, Steffen

    2016-03-01

    Optic disc drusen (ODD) are hyaline deposits located within the optic nerve head. Peripapillary retinal nerve fiber layer (RNFL) thinning is associated with the high prevalence of visual field defects seen in ODD patients. The goal of this study was to investigate the characteristics of patients with ODD and to compare the peripapillary RNFL thickness to the extent of visual field defects and anatomic location (superficial or buried) of ODD. Retrospective, cross-sectional study. A total of 149 eyes of 84 ODD patients were evaluated. Sixty-five percent were female and 76% had bilateral ODD. Of 149 eyes, 109 had superficial ODD and 40 had buried ODD. Peripapillary RNFL thinning was seen in 83.6% of the eyes in which optical coherence tomography was performed (n = 61). Eyes with superficial ODD had greater mean peripapillary RNFL thinning (P ≤ 0.0001) and visual field defects (P = 0.002) than eyes with buried ODD. There was a correlation between mean peripapillary RNFL thinning and visual field defects as measured by perimetric mean deviation (R = −0.66; P = 0.0001). The most frequent visual field abnormalities were arcuate and partial arcuate defects. Peripapillary RNFL thickness correlates with anatomic location (superficial or buried) of ODD. Frequency and extent of visual field defects corresponded with anatomic location of ODD and peripapillary RNFL thickness, suggesting increased axonal damage in patients with superficial ODD.

  7. Functional connectivity supporting the selective maintenance of feature-location binding in visual working memory

    Directory of Open Access Journals (Sweden)

    Sachiko eTakahama

    2014-06-01

    Full Text Available Information on an object's features bound to its location is very important for maintaining object representations in visual working memory. Interactions with dynamic multi-dimensional objects in an external environment require complex cognitive control, including the selective maintenance of feature-location binding. Here, we used event-related functional magnetic resonance imaging to investigate brain activity and functional connectivity related to the maintenance of complex feature-location binding. Participants were required to detect task-relevant changes in feature-location binding between objects defined by color, orientation, and location. We compared a complex binding task requiring complex feature-location binding (color-orientation-location) with a simple binding task in which simple feature-location binding, such as color-location, was task-relevant and the other feature was task-irrelevant. Univariate analyses showed that the dorsolateral prefrontal cortex (DLPFC), hippocampus, and frontoparietal network were activated during the maintenance of complex feature-location binding. Functional connectivity analyses indicated cooperation between the inferior precentral sulcus (infPreCS), DLPFC, and hippocampus during the maintenance of complex feature-location binding. In contrast, the connectivity for the spatial updating of simple feature-location binding, determined by reanalyzing the data from Takahama et al. (2010), demonstrated that the superior parietal lobule (SPL) cooperated with the DLPFC and hippocampus. These results suggest that the connectivity for complex feature-location binding does not simply reflect general memory load and that the DLPFC and hippocampus flexibly modulate the dorsal frontoparietal network, depending on the task requirements, with the infPreCS involved in the maintenance of complex feature-location binding and the SPL involved in the spatial updating of simple feature-location binding.

  8. UV-blocking spectacle lens protects against UV-induced decline of visual performance.

    Science.gov (United States)

    Liou, Jyh-Cheng; Teng, Mei-Ching; Tsai, Yun-Shan; Lin, En-Chieh; Chen, Bo-Yie

    2015-01-01

    Excessive exposure to sunlight may be a risk factor for ocular diseases and reduced visual performance. This study was designed to examine the ability of an ultraviolet (UV)-blocking spectacle lens to prevent visual acuity decline and ocular surface disorders in a mouse model of UVB-induced photokeratitis. Mice were divided into 4 groups (10 mice per group): (1) a blank control group (no exposure to UV radiation), (2) a UVB/no lens group (mice exposed to UVB rays, but without lens protection), (3) a UVB/UV400 group (mice exposed to UVB rays and protected using the CR-39™ spectacle lens [UV400 coating]), and (4) a UVB/photochromic group (mice exposed to UVB rays and protected using the CR-39™ spectacle lens [photochromic coating]). We investigated UVB-induced changes in visual acuity and in corneal smoothness, opacity, and lissamine green staining. We also evaluated the correlation between visual acuity decline and changes to the corneal surface parameters. Tissue sections were prepared and stained immunohistochemically to evaluate the structural integrity of the cornea and conjunctiva. In blank controls, the cornea remained undamaged, whereas in UVB-exposed mice, the corneal surface was disrupted; this disruption significantly correlated with a concomitant decline in visual acuity. Both the UVB/UV400 and UVB/photochromic groups had sharper visual acuity and a healthier corneal surface than the UVB/no lens group. Eyes in both protected groups also showed better corneal and conjunctival structural integrity than unprotected eyes. Furthermore, there were fewer apoptotic cells and less polymorphonuclear leukocyte infiltration in corneas protected by the spectacle lenses. The model established herein reliably determines the protective effect of UV-blocking ophthalmic biomaterials, because the in vivo protection against UV-induced ocular damage and visual acuity decline was easily defined.

  9. Enhanced associative memory for colour (but not shape or location) in synaesthesia.

    OpenAIRE

    Pritchard Jamie; Rothen Nicolas; Coolbear Daniel; Ward Jamie

    2013-01-01

    People with grapheme-colour synaesthesia have been shown to have enhanced memory on a range of tasks using both stimuli that induce synaesthesia (e.g., words) and, more surprisingly, stimuli that do not (e.g., certain abstract visual stimuli). This study examines the latter by using multi-featured stimuli consisting of shape, colour, and location conjunctions (e.g., shape A + colour A + location A; shape B + colour B + location B) presented in a recognition memory paradigm. This enables distractor items to ...

  10. Visual quality analysis of femtosecond LASIK and iris location guided mechanical SBK for high myopia

    Directory of Open Access Journals (Sweden)

    Hong-Su Jiang

    2015-07-01

    Full Text Available AIM: To analyze the visual quality obtained with iris location-guided femtosecond laser-assisted in situ keratomileusis (LASIK) and iris location-guided mechanical sub-Bowman keratomileusis (SBK) for the treatment of high myopia. METHODS: Femtosecond LASIK (study group) was performed in 102 eyes of 51 patients with high myopia, and 70 eyes of 35 patients received mechanical SBK (control group), from January to October 2013. The spherical refraction of all patients ranged from -6.00 to -9.50 D. Best corrected visual acuity (BCVA) of the patients was ≥1.0. Uncorrected visual acuity (UCVA), BCVA, corneal flap thickness, contrast sensitivity function (CSF), and higher-order ocular aberrations were examined, with a follow-up of 1 year. RESULTS: At 1 year after surgery, UCVA reached ≥1.0 in 94.1% of eyes in the study group and 94.3% in the control group, with no significant difference between the two groups (P>0.05). Residual refraction was -0.08±0.10 D in the study group and -0.10±0.07 D in the control group, again with no significant difference (P>0.05). The higher-order aberration terms C12 and C8 and the RMSH were lower in the study group than in the control group (C12: 0.1642±0.0519 vs. 0.2229±0.0382, t=8.077; t=0.556, P>0.05; C8: 0.0950±0.069 vs. 0.1858±0.095, t=7.261; t=12.801, P>0.05). CONCLUSION: Both femtosecond LASIK and mechanical SBK are effective for high myopia. Compared with mechanical SBK, femtosecond LASIK shows advantages in higher-order ocular aberrations and visual quality, and the corneal flap is more regular from the central to the peripheral area with the femtosecond laser.

  11. Task set induces dynamic reallocation of resources in visual short-term memory.

    Science.gov (United States)

    Sheremata, Summer L; Shomstein, Sarah

    2017-08-01

    Successful interaction with the environment requires the ability to flexibly allocate resources to different locations in the visual field. Recent evidence suggests that visual short-term memory (VSTM) resources are distributed asymmetrically across the visual field based upon task demands. Here, we propose that context, rather than the stimulus itself, determines asymmetrical distribution of VSTM resources. To test whether context modulates the reallocation of resources to the right visual field, task set, defined by memory load, was manipulated to influence visual short-term memory performance. Performance was measured for single-feature objects embedded within predominantly single- or two-feature memory blocks. Therefore, context was varied to determine whether task set directly predicts changes in visual field biases. In accord with the dynamic reallocation of resources hypothesis, task set, rather than aspects of the physical stimulus, drove improvements in performance in the right visual field. Our results show, for the first time, that preparation for upcoming memory demands directly determines how resources are allocated across the visual field.

  12. Visually induced eye movements in Wallenberg's syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Kanayama, R.; Nakamura, T.; Ohki, M.; Kimura, Y.; Koike, Y. (Dept. of Otolaryngology, Yamagata Univ. School of Medicine (Japan)); Kato, I. (Dept. of Otolaryngology, St. Marianna Univ. School of Medicine, Kawasaki (Japan))

    1991-01-01

    Eighteen patients with Wallenberg's syndrome were investigated concerning visually induced eye movements. All results were analysed quantitatively using a computer. In 16 out of 18 patients, OKN slow-phase velocities were impaired; in the remaining 2 patients they were normal. All patients showed reduced visual suppression of caloric nystagmus during the slow-phase of nystagmus toward the lesion side, except 3 patients who showed normal visual suppression in both directions. CT scan failed to detect either the brainstem or the cerebellar lesions in any of the cases, but MRI performed on the most recent cases demonstrated the infarctions clearly. These findings suggest that infarctions are localized in the medulla in the patients of group A, but extend to the cerebellum as well as to the medulla in patients of group B. (au).

  13. Location-specific effects of attention during visual short-term memory maintenance.

    Science.gov (United States)

    Matsukura, Michi; Cosman, Joshua D; Roper, Zachary J J; Vatterott, Daniel B; Vecera, Shaun P

    2014-06-01

    Recent neuroimaging studies suggest that early sensory areas such as area V1 are recruited to actively maintain a selected feature of the item held in visual short-term memory (VSTM). These findings raise the possibility that visual attention operates in a similar manner across perceptual and memory representations, to a certain extent, even though memory-level and perception-level selections are functionally dissociable. If VSTM operates by retaining "reasonable copies" of scenes constructed during sensory processing (Serences et al., 2009, p. 207; the sensory recruitment hypothesis), then it is possible that selective attention can be guided by both exogenous (peripheral) and endogenous (central) cues during VSTM maintenance. Yet, the results from previous studies that examined this issue are inconsistent. In the present study, we investigated whether attention can be directed to a specific item's location represented in VSTM with an exogenous cue in a well-controlled setting. The results from the four experiments suggest that, as observed with the endogenous cue, the exogenous cue can efficiently guide selective attention during VSTM maintenance. The finding is not only consistent with the sensory recruitment hypothesis but also validates the legitimacy of exogenous-cue use in past and future studies. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  14. Cortical deactivation induced by visual stimulation in human slow-wave sleep

    DEFF Research Database (Denmark)

    Born, Alfred Peter; Law, Ian; Lund, Torben E

    2002-01-01

    It has previously been demonstrated that sleeping and sedated young children respond with a paradoxical decrease in the blood oxygenation level-dependent (BOLD) functional magnetic resonance imaging (fMRI) signal in the rostro-medial occipital visual cortex during visual stimulation. It is unresolved whether this negative BOLD response pattern is of developmental neurobiological origin particular to a given age or is a general effect of sleep or sedative drugs. To further elucidate this issue, we used fMRI and positron emission tomography (PET) to study the brain activation pattern during visual stimulation in spontaneously sleeping adult volunteers. In five sleeping volunteers, fMRI studies confirmed a robust signal decrease during stimulation in the rostro-medial occipital cortex. A similar relative decrease at the same location was found during visual stimulation

  15. Metoprolol-induced visual hallucinations: a case series

    Directory of Open Access Journals (Sweden)

    Goldner Jonathan A

    2012-02-01

    Full Text Available Introduction: Metoprolol is a widely used beta-adrenergic blocker that is commonly prescribed for a variety of cardiovascular syndromes and conditions. While central nervous system adverse effects have been well-described with most beta-blockers (especially lipophilic agents such as propranolol), visual hallucinations have been only rarely described with metoprolol. Case presentations: Case 1 was an 84-year-old Caucasian woman with a history of hypertension and osteoarthritis, who suffered from visual hallucinations which she described as people in her bedroom at night. They would be standing in front of the bed or sitting on chairs watching her when she slept. Numerous medications were stopped before her physician realized the metoprolol was the causative agent. The hallucinations resolved only after discontinuation of this medication. Case 2 was a 62-year-old Caucasian man with an inferior wall myocardial infarction complicated by cardiac arrest, who was successfully resuscitated and discharged from the hospital on metoprolol. About 18 months after discharge, he related to his physician that he had been seeing dead people at night. He related his belief that since he 'had died and was brought back to life', he was now seeing people from the after-life. Upon discontinuation of the metoprolol the visual disturbances resolved within several days. Case 3 was a 68-year-old Caucasian woman with a history of severe hypertension and depression, who reported visual hallucinations at night for years while taking metoprolol. These included awakening during the night with people in her bedroom and seeing objects in her room turn into animals. After a new physician switched her from metoprolol to atenolol, the visual hallucinations ceased within four days. Conclusion: We suspect that metoprolol-induced visual hallucinations may be under-recognized and under-reported. Patients may frequently fail to acknowledge this adverse effect believing that they

  16. Working memory capacity accounts for the ability to switch between object-based and location-based allocation of visual attention.

    Science.gov (United States)

    Bleckley, M Kathryn; Foster, Jeffrey L; Engle, Randall W

    2015-04-01

    Bleckley, Durso, Crutchfield, Engle, and Khanna (Psychonomic Bulletin & Review, 10, 884-889, 2003) found that visual attention allocation differed between groups high or low in working memory capacity (WMC). High-span, but not low-span, subjects showed an invalid-cue cost during a letter localization task in which the letter appeared closer to fixation than the cue, but not when the letter appeared farther from fixation than the cue. This suggests that low-spans allocated attention as a spotlight, whereas high-spans allocated their attention to objects. In this study, we tested whether utilizing object-based visual attention is a resource-limited process that is difficult for low-span individuals. In the first experiment, we tested the uses of object versus location-based attention with high and low-span subjects, with half of the subjects completing a demanding secondary load task. Under load, high-spans were no longer able to use object-based visual attention. A second experiment supported the hypothesis that these differences in allocation were due to high-spans using object-based allocation, whereas low-spans used location-based allocation.

  17. The antisaccade task: visual distractors elicit a location-independent planning 'cost'.

    Science.gov (United States)

    DeSimone, Jesse C; Everling, Stefan; Heath, Matthew

    2015-01-01

    The presentation of a remote - but not proximal - distractor concurrent with target onset increases prosaccade reaction times (RT) (i.e., the remote distractor effect: RDE). The competitive integration model asserts that the RDE represents the time required to resolve the conflict for a common saccade threshold between target- and distractor-related saccade generating commands in the superior colliculus. To our knowledge however, no previous research has examined whether remote and proximal distractors differentially influence antisaccade RTs. This represents a notable question because antisaccades require decoupling of the spatial relations between stimulus and response (SR) and therefore provide a basis for determining whether the sensory- and/or motor-related features of a distractor influence response planning. Participants completed pro- and antisaccades in a target-only condition and conditions wherein the target was concurrently presented with a proximal or remote distractor. As expected, prosaccade RTs elicited a reliable RDE. In contrast, antisaccade RTs were increased independent of the distractor's spatial location and the magnitude of the effect was comparable across each distractor location. Thus, distractor-related antisaccade RT costs are not accounted for by a competitive integration between conflicting saccade generating commands. Instead, we propose that a visual distractor increases uncertainty related to the evocation of the response-selection rule necessary for decoupling SR relations.

  18. The visual system prioritizes locations near corners of surfaces (not just locations near a corner).

    Science.gov (United States)

    Bertamini, Marco; Helmy, Mai; Bates, Daniel

    2013-11-01

    When a new visual object appears, attention is directed toward it. However, some locations along the outline of the new object may receive more resources, perhaps as a consequence of their relative importance in describing its shape. Evidence suggests that corners receive enhanced processing, relative to the straight edges of an outline (corner enhancement effect). Using a technique similar to that in an original study in which observers had to respond to a probe presented near a contour (Cole et al. in Journal of Experimental Psychology: Human Perception and Performance 27:1356-1368, 2001), we confirmed this effect. When figure-ground relations were manipulated using shaded surfaces (Exps. 1 and 2) and stereograms (Exps. 3 and 4), two novel aspects of the phenomenon emerged: We found no difference between corners perceived as being convex or concave, and we found that the enhancement was stronger when the probe was perceived as being a feature of the surface that the corner belonged to. Therefore, the enhancement is not based on spatial aspects of the regions in the image, but critically depends on figure-ground stratification, supporting the link between the prioritization of corners and the representation of surface layout.

  19. Recognition-induced forgetting of faces in visual long-term memory.

    Science.gov (United States)

    Rugo, Kelsi F; Tamler, Kendall N; Woodman, Geoffrey F; Maxcey, Ashleigh M

    2017-10-01

    Despite more than a century of evidence that long-term memory for pictures and words differs, much of what we know about memory comes from studies using words. Recent research examining visual long-term memory has demonstrated that recognizing an object induces the forgetting of objects from the same category. This recognition-induced forgetting has been shown with a variety of everyday objects. Unlike everyday objects, however, faces are objects of expertise and might therefore be immune to recognition-induced forgetting. Yet despite excellent memory for such stimuli, we found that faces were susceptible to recognition-induced forgetting. Our findings have implications for how models of human memory account for recognition-induced forgetting and represent objects of expertise, as well as consequences for eyewitness testimony and the justice system.

  20. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen eAlink

    2012-01-01

    Full Text Available In this study, it is demonstrated that moving sounds affect the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations, inducing left- or rightward auditory apparent motion. On the path of the auditory apparent motion, visual apparent-motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that participants perceived ambiguous visual apparent-motion stimuli (equally likely to be perceived as moving left- or rightward) more often as moving in the same direction as the auditory apparent motion than in the opposite direction. During the control experiment, we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when the visual motion direction is insufficiently determinate, without affecting eye movements.

  1. How visual short-term memory maintenance modulates subsequent visual aftereffects.

    Science.gov (United States)

    Saad, Elyana; Silvanto, Juha

    2013-05-01

    Prolonged viewing of a visual stimulus can result in sensory adaptation, giving rise to perceptual phenomena such as the tilt aftereffect (TAE). However, it is not known if short-term memory maintenance induces such effects. We examined how visual short-term memory (VSTM) maintenance modulates the strength of the TAE induced by subsequent visual adaptation. We reasoned that if VSTM maintenance induces aftereffects on subsequent encoding of visual information, then it should either enhance or reduce the TAE induced by a subsequent visual adapter, depending on the congruency of the memory cue and the adapter. Our results were consistent with this hypothesis and thus indicate that the effects of VSTM maintenance can outlast the maintenance period.

  2. The primary visual cortex in the neural circuit for visual orienting

    Science.gov (United States)

    Zhaoping, Li

    The primary visual cortex (V1) is traditionally viewed as remote from influencing the brain's motor outputs. However, V1 provides the most abundant cortical inputs directly to the sensory layers of the superior colliculus (SC), a midbrain structure that commands visual orienting such as gaze shifts and head turns. I will show physiological, anatomical, and behavioral data suggesting that V1 transforms visual input into a saliency map to guide a class of visual orienting that is reflexive or involuntary. In particular, V1 receives a retinotopic map of visual features, such as the orientation, color, and motion direction of local visual inputs; local interactions between V1 neurons perform a local-to-global computation to arrive at a saliency map that highlights conspicuous visual locations by higher V1 responses. The conspicuous locations are usually, but not always, where the visual input statistics change. The population of V1 outputs to the SC, which is also retinotopic, enables the SC to locate, by lateral inhibition between SC neurons, the most salient location as the saccadic target. Experimental tests of this hypothesis will be shown. Variations of the neural circuit for visual orienting across animal species, with more or less V1 involvement, will be discussed. Supported by the Gatsby Charitable Foundation.
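
    A minimal numerical sketch of the selection step described in this abstract, under our own simplifying assumptions: a toy retinotopic saliency map in which the most conspicuous location wins a simple winner-take-all competition (standing in for lateral inhibition in the SC). The array values and map size are illustrative, not from the study.

        import numpy as np

        # Toy retinotopic "V1 saliency map": higher values mark more conspicuous locations,
        # e.g., where local feature statistics (orientation, color, motion) change.
        saliency = np.array([
            [0.10, 0.20, 0.10, 0.10],
            [0.10, 0.90, 0.20, 0.10],   # a conspicuous item at row 1, column 1
            [0.10, 0.20, 0.10, 0.10],
        ])

        # Winner-take-all stand-in for SC lateral inhibition: the maximum becomes the
        # target of the reflexive orienting (saccade) response.
        target = np.unravel_index(np.argmax(saliency), saliency.shape)
        print("Saccade target (row, col):", target)   # -> (1, 1)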

  3. Auditory and visual capture during focused visual attention

    OpenAIRE

    Koelewijn, T.; Bronkhorst, A.W.; Theeuwes, J.

    2009-01-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, th...

  4. Toward unraveling reading-related modulations of tDCS-induced neuroplasticity in the human visual cortex.

    OpenAIRE

    Antal, Andrea; Ambrus, Géza Gergely; Chaieb, Leila

    2014-01-01

    Stimulation using weak electrical direct currents has been shown to be capable of inducing polarity-dependent diminutions or elevations in motor and visual cortical excitability. The aim of the present study was to test if reading during transcranial direct current stimulation (tDCS) is able to modify stimulation-induced plasticity in the visual cortex. Phosphene thresholds (PTs) in 12 healthy subjects were recorded before and after 10 min of anodal, cathodal, and sham tDCS in combination with rea...

  5. Towards unravelling reading-related modulations of tDCS-induced neuroplasticity in the human visual cortex

    Directory of Open Access Journals (Sweden)

    Andrea eAntal

    2014-06-01

    Full Text Available Stimulation using weak electrical direct currents has been shown to be capable of inducing polarity-dependent diminutions or elevations in motor and visual cortical excitability. The aim of the present study was to test whether reading during transcranial direct current stimulation (tDCS) is able to modify stimulation-induced plasticity in the visual cortex. Phosphene thresholds (PTs) in 12 healthy subjects were recorded before and after 10 minutes of anodal, cathodal, and sham tDCS in combination with reading. Reading alone decreased PTs significantly, compared to the sham tDCS condition without reading. Interestingly, after both anodal and cathodal stimulation there was a tendency toward smaller PTs. Our results support the observation that tDCS-induced plasticity is highly dependent on the cognitive state of the subject during stimulation, not only in the case of motor cortex but also in the case of visual cortex stimulation.

  6. Inducing Sadness and Anxiousness through Visual Media: Measurement Techniques and Persistence

    OpenAIRE

    Kuijsters, Andre; Redi, Judith; de Ruyter, Boris; Heynderickx, Ingrid

    2016-01-01

    The persistence of negative moods (sadness and anxiousness) induced by three visual Mood Induction Procedures (MIP) was investigated. The evolution of the mood after the MIP was monitored for a period of 8 min with the Self-Assessment Manikin (SAM; every 2 min) and with recordings of skin conductance level (SCL) and electrocardiography (ECG). The SAM pleasure ratings showed that short and longer film fragments were effective in inducing a longer lasting negative mood, whereas the ...

  7. Can walking motions improve visually induced rotational self-motion illusions in virtual reality?

    Science.gov (United States)

    Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y

    2015-02-04

    Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems.

  8. Target locations in visual field and character recognition by students of Chinese.

    Science.gov (United States)

    Chen, Yuan-Ho; Hsu, Sheng-Hsiung

    2003-02-01

    The potential influence of target location in a visual field on search should be considered in layouts of control panels and advertisements. This investigation was done to verify the assumption that the upper-left portion of a page or its equivalent naturally attracts the attention of the viewer. Exp. 1 used a tachistoscope to test which of eight Chinese characters first attracted the attention of viewers. The eight Chinese characters were arranged in square and circular configurations. In the square layout, a large square (18 cm x 18 cm) was first conceptually subdivided into nine equal parts (6 cm x 6 cm). Then, the eight Chinese characters were placed at the center of each part, leaving the central part blank. In the circular layout, the same Chinese characters were symmetrically placed on the conceptual circumference (r = 6 cm) of a circle within a large square. Exp. 2 was a paper-and-pencil test. An embedded faulty-character search was used to examine the location of the first faulty character discovered by the subjects. 60 college students and 36 schoolchildren were selected as subjects for the tachistoscopic experiment and the paper-and-pencil test. Finally, five graduate students participated in Exp. 3, in which an eye camera registered subjects' eye movements to measure the distribution of looking durations over the eight locations. The measurements indicated a slight predominance of the upper-left portion for college students and graduate students, and a slight predominance of the upper-right portion for schoolchildren.
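
    For concreteness, a short sketch of the two stimulus layouts as we read them from the abstract (only the 18 cm square, 6 cm cells, and 6 cm radius are stated; the 45-degree angular spacing and starting angle of the circular layout are our assumptions):

        import numpy as np

        # Square layout: an 18 cm x 18 cm area conceptually divided into a 3 x 3 grid of
        # 6 cm cells; characters occupy the eight outer cell centers, the middle stays blank.
        centers = [3.0, 9.0, 15.0]  # cell-center coordinates in cm
        square_positions = [(x, y) for x in centers for y in centers if (x, y) != (9.0, 9.0)]

        # Circular layout: eight characters placed symmetrically on a circle of radius 6 cm
        # centered in the same area (assumed 45-degree spacing).
        cx, cy, r = 9.0, 9.0, 6.0
        angles = np.deg2rad(np.arange(0, 360, 45))
        circle_positions = [(cx + r * np.cos(a), cy + r * np.sin(a)) for a in angles]

        print(square_positions)
        print(np.round(circle_positions, 2))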

  9. Monocular Visual Deprivation Suppresses Excitability in Adult Human Visual Cortex

    DEFF Research Database (Denmark)

    Lou, Astrid Rosenstand; Madsen, Kristoffer Hougaard; Paulson, Olaf Bjarne

    2011-01-01

    The adult visual cortex maintains a substantial potential for plasticity in response to a change in visual input. For instance, transcranial magnetic stimulation (TMS) studies have shown that binocular deprivation (BD) increases the cortical excitability for inducing phosphenes with TMS. Here, we employed TMS to trace plastic changes in adult visual cortex before, during, and after 48 h of monocular deprivation (MD) of the right dominant eye. In healthy adult volunteers, MD-induced changes in visual cortex excitability were probed with paired-pulse TMS applied to the left and right occipital cortex. The results indicate that even this relatively short period of visual deprivation has a substantial impact on experience-dependent plasticity of the human visual cortex.

  10. A lightning strike to the head causing a visual cortex defect with simple and complex visual hallucinations

    Science.gov (United States)

    Kleiter, Ingo; Luerding, Ralf; Diendorfer, Gerhard; Rek, Helga; Bogdahn, Ulrich; Schalke, Berthold

    2009-01-01

    The case of a 23-year-old mountaineer who was hit by a lightning strike to the occiput causing a large central visual field defect and bilateral tympanic membrane ruptures is described. Owing to extreme agitation, the patient was sent into a drug-induced coma for 3 days. After extubation, she experienced simple and complex visual hallucinations for several days, but otherwise largely recovered. Neuropsychological tests revealed deficits in fast visual detection tasks and non-verbal learning and indicated a right temporal lobe dysfunction, consistent with a right temporal focus on electroencephalography. At 4 months after the accident, she developed a psychological reaction consisting of nightmares, with reappearance of the complex visual hallucinations and a depressive syndrome. Using the European Cooperation for Lightning Detection network, a meteorological system for lightning surveillance, the exact geographical location and nature of the lightning strike were retrospectively retraced.

  11. Repetitive Transcranial Direct Current Stimulation Induced Excitability Changes of Primary Visual Cortex and Visual Learning Effects-A Pilot Study.

    Science.gov (United States)

    Sczesny-Kaiser, Matthias; Beckhaus, Katharina; Dinse, Hubert R; Schwenkreis, Peter; Tegenthoff, Martin; Höffken, Oliver

    2016-01-01

    Studies on noninvasive motor cortex stimulation and motor learning have demonstrated cortical excitability as a marker of a learning effect. Transcranial direct current stimulation (tDCS) is a non-invasive tool for modulating cortical excitability. It is as yet unknown how tDCS-induced excitability changes and perceptual learning in visual cortex correlate. Our study aimed to examine the influence of tDCS on visual perceptual learning in healthy humans. Additionally, we measured excitability in primary visual cortex (V1). We hypothesized that anodal tDCS would improve visual learning, whereas cathodal tDCS would have minor or no effects. Anodal, cathodal, or sham tDCS was applied over V1 in a randomized, double-blinded design over four consecutive days (n = 30). During 20 min of tDCS, subjects had to learn a visual orientation-discrimination task (ODT). Excitability parameters were measured by analyzing paired-stimulation behavior of visual-evoked potentials (ps-VEP) and by measuring phosphene thresholds (PTs) before and after the 4-day stimulation period. Compared with sham tDCS, anodal tDCS led to a significant improvement of visual discrimination learning. For cathodal tDCS, no significant effects on learning or on excitability were seen. Our results showed that anodal tDCS over V1 resulted in improved visual perceptual learning and increased cortical excitability. tDCS is a promising tool for altering V1 excitability and, hence, visual perceptual learning.

  12. Inducing sadness and anxiousness through visual media: Measurement techniques and persistence

    NARCIS (Netherlands)

    Kuijsters, A.; Redi, J.; Ruyter, B.E.R. de; Heynderickx, I.

    2016-01-01

    The persistence of negative moods (sadness and anxiousness) induced by three visual Mood Induction Procedures (MIP) was investigated. The evolution of the mood after the MIP was monitored for a period of 8 minutes with the Self-Assessment Manikin (every 2 minutes) and with recordings of skin conductance level (SCL) and electrocardiography (ECG).

  13. Video-game play induces plasticity in the visual system of adults with amblyopia.

    Directory of Open Access Journals (Sweden)

    Roger W Li

    2011-08-01

    Full Text Available Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically, 20 adults with amblyopia (age 15-61 y; visual acuity: 20/25-20/480, with no manifest ocular disease or nystagmus) were recruited and allocated into three intervention groups: action videogame group (n = 10), non-action videogame group (n = 3), and crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40-80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia.

  14. Video-game play induces plasticity in the visual system of adults with amblyopia.

    Science.gov (United States)

    Li, Roger W; Ngo, Charlie; Nguyen, Jennie; Levi, Dennis M

    2011-08-01

    Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically 20 adults with amblyopia (age 15-61 y; visual acuity: 20/25-20/480, with no manifest ocular disease or nystagmus) were recruited and allocated into three intervention groups: action videogame group (n = 10), non-action videogame group (n = 3), and crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40-80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia, and perhaps other

  15. Video-Game Play Induces Plasticity in the Visual System of Adults with Amblyopia

    Science.gov (United States)

    Li, Roger W.; Ngo, Charlie; Nguyen, Jennie; Levi, Dennis M.

    2011-01-01

    Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically 20 adults with amblyopia (age 15–61 y; visual acuity: 20/25–20/480, with no manifest ocular disease or nystagmus) were recruited and allocated into three intervention groups: action videogame group (n = 10), non-action videogame group (n = 3), and crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40–80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia, and perhaps

  16. Brain atrophy in the visual cortex and thalamus induced by severe stress in animal model.

    Science.gov (United States)

    Yoshii, Takanobu; Oishi, Naoya; Ikoma, Kazuya; Nishimura, Isao; Sakai, Yuki; Matsuda, Kenichi; Yamada, Shunji; Tanaka, Masaki; Kawata, Mitsuhiro; Narumoto, Jin; Fukui, Kenji

    2017-10-06

    Psychological stress induces many diseases, including post-traumatic stress disorder (PTSD); however, the causal relationship between stress and brain atrophy has not been clarified. Applying single-prolonged stress (SPS) to explore the global effect of severe stress, we performed brain magnetic resonance imaging (MRI) acquisition and voxel-based morphometry (VBM). Significant atrophy was detected in the bilateral thalamus and right visual cortex. Fluorescent immunohistochemistry for Iba-1, a marker of activated microglia, indicated regional microglial activation as a stress reaction in these atrophic areas. These data confirm the impact of severe psychological stress on atrophy of the visual cortex and the thalamus. Unexpectedly, these results resemble findings from clinical research on chronic neuropathic pain rather than on PTSD. We believe that the severe stress-induced atrophy in the visual cortex and thalamus involves some sensitisation mechanism, and that the resulting functional defect of the visual system may be a potential therapeutic target for stress-related diseases.

  17. Inducing sadness and anxiousness through visual media: measurement techniques and persistence

    NARCIS (Netherlands)

    A. Kuijsters (Andre); J.A. Redi (Judith); B. de Ruyter (Boris); I. Heynderickx (Ingrid)

    2016-01-01

    The persistence of negative moods (sadness and anxiousness) induced by three visual Mood Induction Procedures (MIP) was investigated. The evolution of the mood after the MIP was monitored for a period of 8 min with the Self-Assessment Manikin (SAM; every 2 min) and with recordings of skin conductance level (SCL) and electrocardiography (ECG).

  18. Testing visual short-term memory of pigeons (Columba livia) and a rhesus monkey (Macaca mulatta) with a location change detection task.

    Science.gov (United States)

    Leising, Kenneth J; Elmore, L Caitlin; Rivera, Jacquelyne J; Magnotti, John F; Katz, Jeffrey S; Wright, Anthony A

    2013-09-01

    Change detection is commonly used to assess capacity (number of objects) of human visual short-term memory (VSTM). Comparisons with the performance of non-human animals completing similar tasks have shown similarities and differences in object-based VSTM, which is only one aspect ("what") of memory. Another important aspect of memory, which has received less attention, is spatial short-term memory for "where" an object is in space. In this article, we show for the first time that a monkey and pigeons can be accurately trained to identify location changes, much as humans do, in change detection tasks similar to those used to test object capacity of VSTM. The subject's task was to identify (touch/peck) an item that changed location across a brief delay. Both the monkey and pigeons showed transfer to delays longer than the training delay, to greater and smaller distance changes than in training, and to novel colors. These results are the first to demonstrate location-change detection in any non-human species and encourage comparative investigations into the nature of spatial and visual short-term memory.

  19. Modulation of induced gamma band activity in the human EEG by attention and visual information processing.

    Science.gov (United States)

    Müller, M M; Gruber, T; Keil, A

    2000-12-01

    Here we present a series of four studies aimed at investigating the link between induced gamma band activity in the human EEG and visual information processing. We demonstrated and validated the modulation of spectral gamma band power by spatially selective visual attention. When subjects attended to a certain stimulus, spectral power was increased compared to when the same stimulus was ignored. In addition, we showed a shift in the spectral gamma band power increase to the contralateral hemisphere when subjects shifted their attention to one visual hemifield. The following study investigated induced gamma band activity and the perception of a Gestalt. Ambiguous rotating figures were used to operationalize the law of good figure (gute Gestalt). We found increased gamma band power at posterior electrode sites when subjects perceived an object. In the last experiment we demonstrated a differential hemispheric gamma band activation when subjects were confronted with emotional pictures. Results of the present experiments, in combination with other studies presented in this volume, support the notion that induced gamma band activity in the human EEG is closely related to visual information processing and attentional perceptual mechanisms.
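
    As a hedged illustration of the dependent measure these studies rely on, the following sketch estimates gamma-band spectral power from a single EEG channel with Welch's method. The sampling rate, the 30-80 Hz band limits, and the synthetic signal are assumptions for illustration, not parameters of the experiments summarized above.

        import numpy as np
        from scipy.signal import welch

        fs = 500.0                       # assumed sampling rate (Hz)
        t = np.arange(0, 2.0, 1 / fs)    # 2 s of synthetic data
        # Synthetic "EEG": a 40 Hz component embedded in noise.
        eeg = np.sin(2 * np.pi * 40 * t) + 0.5 * np.random.randn(t.size)

        # Power spectral density via Welch's method.
        freqs, psd = welch(eeg, fs=fs, nperseg=256)

        # Integrate the PSD over an assumed gamma band (30-80 Hz).
        band = (freqs >= 30) & (freqs <= 80)
        gamma_power = np.trapz(psd[band], freqs[band])
        print(f"Gamma-band power: {gamma_power:.4f}")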

  20. Inducing sadness and anxiousness through visual media: measurement techniques and persistence

    Directory of Open Access Journals (Sweden)

    Andre Kuijsters

    2016-08-01

    Full Text Available The persistence of negative moods (sadness and anxiousness) induced by three visual Mood Induction Procedures (MIP) was investigated. The evolution of the mood after the MIP was monitored for a period of 8 minutes with the Self-Assessment Manikin (every 2 minutes) and with recordings of skin conductance level (SCL) and electrocardiography (ECG). The SAM pleasure ratings showed that short and longer film fragments were effective in inducing a longer lasting negative mood, whereas the negative mood induced by the IAPS slideshow was short lived. The induced arousal during the anxious MIPs diminished quickly after the mood induction; nevertheless, the SCL data suggest longer lasting arousal effects for both movies. The decay of the induced mood follows a logarithmic function; diminishing quickly in the first minutes, thereafter returning slowly back to baseline. These results reveal that caution is needed when investigating the effects of the induced mood on a task or the effect of interventions on induced moods, because the induced mood diminishes quickly after the mood induction.

  1. Inducing Sadness and Anxiousness through Visual Media: Measurement Techniques and Persistence.

    Science.gov (United States)

    Kuijsters, Andre; Redi, Judith; de Ruyter, Boris; Heynderickx, Ingrid

    2016-01-01

    The persistence of negative moods (sadness and anxiousness) induced by three visual Mood Induction Procedures (MIP) was investigated. The evolution of the mood after the MIP was monitored for a period of 8 min with the Self-Assessment Manikin (SAM; every 2 min) and with recordings of skin conductance level (SCL) and electrocardiography (ECG). The SAM pleasure ratings showed that short and longer film fragments were effective in inducing a longer lasting negative mood, whereas the negative mood induced by the IAPS slideshow was short lived. The induced arousal during the anxious MIPs diminished quickly after the mood induction; nevertheless, the SCL data suggest longer lasting arousal effects for both movies. The decay of the induced mood follows a logarithmic function; diminishing quickly in the first minutes, thereafter returning slowly back to baseline. These results reveal that caution is needed when investigating the effects of the induced mood on a task or the effect of interventions on induced moods, because the induced mood diminishes quickly after the mood induction.
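
    The "logarithmic decay" mentioned in this record can be made concrete with a small curve-fitting sketch. The functional form, the rating values, and the 2-minute sampling grid below are assumptions for illustration only; they are not the study's model or data.

        import numpy as np
        from scipy.optimize import curve_fit

        def log_recovery(t, a, b):
            # Assumed form: rapid change in the first minutes, slow return toward baseline.
            return a + b * np.log(t + 1.0)

        # Hypothetical SAM-style pleasure ratings collected every 2 minutes after induction.
        t_min = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
        rating = np.array([2.0, 3.1, 3.7, 4.1, 4.3])   # drifting back toward a neutral baseline

        params, _ = curve_fit(log_recovery, t_min, rating)
        print("fitted a, b:", np.round(params, 2))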

  2. Transformations of visual memory induced by implied motions of pattern elements.

    Science.gov (United States)

    Finke, R A; Freyd, J J

    1985-10-01

    Four experiments measured distortions in short-term visual memory induced by displays depicting independent translations of the elements of a pattern. In each experiment, observers saw a sequence of 4 dot patterns and were instructed to remember the third pattern and to compare it with the fourth. The first three patterns depicted translations of the dots in consistent, but separate directions. Error rates and reaction times for rejecting the fourth pattern as different from the third were substantially higher when the dots in that pattern were displaced slightly forward, in the same directions as the implied motions, compared with when the dots were displaced in the opposite, backward directions. These effects showed little variation across interstimulus intervals ranging from 250 to 2,000 ms, and did not depend on whether the displays gave rise to visual apparent motion. However, they were eliminated when the dots in the fourth pattern were displaced by larger amounts in each direction, corresponding to the dot positions in the next and previous patterns in the same inducing sequence. These findings extend our initial report of the phenomenon of "representational momentum" (Freyd & Finke, 1984a), and help to rule out alternatives to the proposal that visual memories tend to undergo, at least to some extent, the transformations implied by a prior sequence of observed events.

  3. Perception of linear horizontal self-motion induced by peripheral vision /linearvection/ - Basic characteristics and visual-vestibular interactions

    Science.gov (United States)

    Berthoz, A.; Pavard, B.; Young, L. R.

    1975-01-01

    The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 s, and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential, although not independent, role of vision in self-motion perception.

  4. Visual short-term memory load suppresses temporo-parietal junction activity and induces inattentional blindness.

    Science.gov (United States)

    Todd, J Jay; Fougnie, Daryl; Marois, René

    2005-12-01

    The right temporo-parietal junction (TPJ) is critical for stimulus-driven attention and visual awareness. Here we show that as the visual short-term memory (VSTM) load of a task increases, activity in this region is increasingly suppressed. Correspondingly, increasing VSTM load impairs the ability of subjects to consciously detect the presence of a novel, unexpected object in the visual field. These results not only demonstrate that VSTM load suppresses TPJ activity and induces inattentional blindness, but also offer a plausible neural mechanism for this perceptual deficit: suppression of the stimulus-driven attentional network.

  5. Pathophysiology of visual disorders induced by phosphodiesterase inhibitors in the treatment of erectile dysfunction

    Directory of Open Access Journals (Sweden)

    Moschos MM

    2016-10-01

    Full Text Available Aim: The aim of this review was to summarize the ocular action of the most common phosphodiesterase (PDE) inhibitors used for the treatment of erectile dysfunction and the subsequent visual disorders. Method: This is a literature review of several important articles focusing on the pathophysiology of visual disorders induced by PDE inhibitors. Results: PDE inhibitors have been associated with ocular side effects, including changes in color vision and light perception, blurred vision, transient alterations in the electroretinogram (ERG), conjunctival hyperemia, ocular pain, and photophobia. Sildenafil and tadalafil may induce a reversible increase in intraocular pressure and be involved in the development of nonarteritic ischemic optic neuropathy. Reversible idiopathic serous macular detachment, central serous chorioretinopathy, and ERG disturbances have been related to the significant impact of sildenafil and tadalafil on retinal perfusion. Discussion: So far, PDE inhibitors do not seem to cause permanent toxic effects on chorioretinal tissue and photoreceptors. However, physicians should record any visual symptom observed during PDE treatment and refer the patients to ophthalmologists. Keywords: erectile dysfunction, pathophysiological mechanisms, phosphodiesterase inhibitors, PDE5, visual disorders

  6. Visual Learning Induces Changes in Resting-State fMRI Multivariate Pattern of Information.

    Science.gov (United States)

    Guidotti, Roberto; Del Gratta, Cosimo; Baldassarre, Antonello; Romani, Gian Luca; Corbetta, Maurizio

    2015-07-08

    When measured with functional magnetic resonance imaging (fMRI) in the resting state (R-fMRI), spontaneous activity is correlated between brain regions that are anatomically and functionally related. Learning and/or task performance can induce modulation of the resting synchronization between brain regions. Moreover, at the neuronal level spontaneous brain activity can replay patterns evoked by a previously presented stimulus. Here we test whether visual learning/task performance can induce a change in the patterns of coded information in R-fMRI signals consistent with a role of spontaneous activity in representing task-relevant information. Human subjects underwent R-fMRI before and after perceptual learning on a novel visual shape orientation discrimination task. Task-evoked fMRI patterns to trained versus novel stimuli were recorded after learning was completed, and before the second R-fMRI session. Using multivariate pattern analysis on task-evoked signals, we found patterns in several cortical regions, as follows: visual cortex, V3/V3A/V7; within the default mode network, precuneus, and inferior parietal lobule; and, within the dorsal attention network, intraparietal sulcus, which discriminated between trained and novel visual stimuli. The accuracy of classification was strongly correlated with behavioral performance. Next, we measured multivariate patterns in R-fMRI signals before and after learning. The frequency and similarity of resting states representing the task/visual stimuli states increased post-learning in the same cortical regions recruited by the task. These findings support a representational role of spontaneous brain activity.
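
    A schematic sketch of the kind of multivariate pattern classification used above (a cross-validated linear classifier over voxel patterns), run here on synthetic data; it is not the authors' pipeline and the numbers are invented.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 40, 100

        # Synthetic task-evoked patterns: 20 "trained-stimulus" and 20 "novel-stimulus" trials,
        # with a small mean shift added to the trained condition.
        X = rng.normal(size=(n_trials, n_voxels))
        X[:20] += 0.3
        y = np.array([1] * 20 + [0] * 20)   # 1 = trained, 0 = novel

        # Cross-validated decoding accuracy (chance = 0.5).
        scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
        print(f"Mean decoding accuracy: {scores.mean():.2f}")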

  7. Visual Perception by Drivers of the Advertisements Located at Selected Major Routes

    Science.gov (United States)

    Bichajło, Lesław

    2017-10-01

    This article describes research based on the analysis of drivers' eye-fixation points on advertisements. The research was carried out in real road and traffic conditions. A group of 12 drivers was equipped with a glasses-mounted oculometric measurement system worn on the driver's head. The participants drove their private cars. The analysis concentrated on fixations on advertisement boards located along selected national roads in the Rzeszów area (Poland). To better assess whether the advertisements distracted the drivers, the number of fixations on advertisements was compared with the number of fixations on road signs. Active drivers observed many visual attractors, such as advertisements, road signs, and cars ahead and in the other lane. Passive drivers had a low number of fixations on road signs and advertisements; their fixations were typically devoted to surveying the scene, and they probably used peripheral vision to recognize road-sign shapes. The results show that the percentage of fixations on advertisements and road signs differed between participants; that the highest percentage of fixated advertisements occurred on the section with a small number of advertisements, whereas in the city area, where a group of advertisements lined the road, participants fixated only some of them and no participant fixated all advertisements located a small distance apart; that a single advertisement visible from a long distance strongly attracts visual attention; and that the percentage of fixated advertisements was higher than that of road signs.

  8. A lysosome-locating and acidic pH-activatable fluorescent probe for visualizing endogenous H2O2 in lysosomes.

    Science.gov (United States)

    Liu, Jun; Zhou, Shunqing; Ren, Jing; Wu, Chuanliu; Zhao, Yibing

    2017-11-20

    There is increasing evidence indicating that lysosomal H2O2 is closely related to autophagy and apoptotic pathways under both physiological and pathological conditions. Therefore, fluorescent probes that can be exploited to visualize H2O2 in lysosomes are potential tools for exploring diverse roles of H2O2 in cells. However, functional exploration of lysosomal H2O2 is limited by the lack of fluorescent probes capable of compatibly sensing H2O2 under the weak acidic conditions (pH = 4.5) of lysosomes. Lower spatial resolution of the fluorescent visualization of lysosomal H2O2 might be caused by the interference of signals from cytosolic and mitochondrial H2O2, as well as the non-specific distribution of the probes in cells. In this work, we developed a lysosome-locating and acidic-pH-activatable fluorescent probe for the detection and visualization of H2O2 in lysosomes, which consists of an H2O2-responsive boronate unit, a lysosome-locating morpholine group, and a pH-activatable benzorhodol fluorophore. The response of the fluorescent probe to H2O2 is significantly more pronounced under acidic pH conditions than under neutral pH conditions. Notably, the present probe enables the fluorescence sensing of endogenous lysosomal H2O2 in living cells without external stimulations, with signal interference from the cytoplasm and other intracellular organelles being negligible.

  9. Visualization system on ITBL

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2004-01-01

    Visualization systems PATRAS/ITBL and AVS/ITBL, which are based on the visualization software PATRAS and AVS/Express, respectively, have been developed on a global, heterogeneous computing environment, the Information Technology Based Laboratory (ITBL). PATRAS/ITBL allows for real-time visualization of the numerical results acquired from coupled multi-physics numerical simulations executed on different hosts situated in remote locations. AVS/ITBL allows for post-processing visualization. The scientific data located at remote sites may be selected and visualized in a web browser installed on a user terminal. The global structure and main functions of these systems are presented. (author)

  10. Fragile X Mental Retardation Protein Is Required to Maintain Visual Conditioning-Induced Behavioral Plasticity by Limiting Local Protein Synthesis.

    Science.gov (United States)

    Liu, Han-Hsuan; Cline, Hollis T

    2016-07-06

    Fragile X mental retardation protein (FMRP) is thought to regulate neuronal plasticity by limiting dendritic protein synthesis, but direct demonstration of a requirement for FMRP control of local protein synthesis during behavioral plasticity is lacking. Here we tested whether FMRP knockdown in Xenopus optic tectum affects local protein synthesis in vivo and whether FMRP knockdown affects protein synthesis-dependent visual avoidance behavioral plasticity. We tagged newly synthesized proteins by incorporation of the noncanonical amino acid azidohomoalanine and visualized them with fluorescent noncanonical amino acid tagging (FUNCAT). Visual conditioning and FMRP knockdown produce similar increases in FUNCAT in tectal neuropil. Induction of visual conditioning-dependent behavioral plasticity occurs normally in FMRP knockdown animals, but plasticity degrades over 24 h. These results indicate that FMRP affects visual conditioning-induced local protein synthesis and is required to maintain the visual conditioning-induced behavioral plasticity. Fragile X syndrome (FXS) is the most common form of inherited intellectual disability. Exaggerated dendritic protein synthesis resulting from loss of fragile X mental retardation protein (FMRP) is thought to underlie cognitive deficits in FXS, but no direct evidence has demonstrated that FMRP-regulated dendritic protein synthesis affects behavioral plasticity in intact animals. Xenopus tadpoles exhibit a visual avoidance behavior that improves with visual conditioning in a protein synthesis-dependent manner. We showed that FMRP knockdown and visual conditioning dramatically increase protein synthesis in neuronal processes. Furthermore, induction of visual conditioning-dependent behavioral plasticity occurs normally after FMRP knockdown, but performance rapidly deteriorates in the absence of FMRP. These studies show that FMRP negatively regulates local protein synthesis and is required to maintain visual conditioning-induced behavioral plasticity.

  11. Covert oculo-manual coupling induced by visually guided saccades.

    Directory of Open Access Journals (Sweden)

    Luca eFalciati

    2013-10-01

    Full Text Available Hand pointing to objects under visual guidance is one of the most common motor behaviors in everyday life. In natural conditions, gaze and arm movements are commonly aimed at the same target, and the accuracy of both systems is considerably enhanced if eye and hand move together. Evidence supports the viewpoint that gaze and limb control systems are not independent but at least partially share a common neural controller. The aim of the present study was to verify whether saccade execution induces excitability changes in the upper-limb corticospinal system (CSS), even in the absence of a manual response. This effect would provide evidence for the existence of a common drive for ocular and arm motor systems during fast aiming movements. Single-pulse TMS was applied to the left motor cortex of 19 subjects during a task involving visually guided saccades, and motor evoked potentials (MEPs) induced in hand and wrist muscles of the contralateral relaxed arm were recorded. Subjects had to make visually guided saccades to one of 6 positions along the horizontal meridian (±5°, ±10°, or ±15°). During each trial, TMS was randomly delivered at one of 3 different time delays: shortly after the end of the saccade, or 300 ms or 540 ms after saccade onset. Fast eye movements towards a peripheral target were accompanied by changes in upper-limb CSS excitability. MEP amplitude was highest immediately after the end of the saccade and gradually decreased at longer TMS delays. In addition to the change in overall CSS excitability, MEPs were specifically modulated in different muscles, depending on the target position and the TMS delay. By applying a simple model of a manual pointing movement, we demonstrated that the observed changes in CSS excitability are compatible with the facilitation of an arm motor program for a movement aimed at the same target as the gaze. These results provide evidence in favor of the existence of a common drive for both eye and arm motor systems.

  12. The efficacy of airflow and seat vibration on reducing visually induced motion sickness

    NARCIS (Netherlands)

    D’Amour, Sarah; Bos, Jelte E.; Keshavarz, Behrang

    2017-01-01

    Visually induced motion sickness (VIMS) is a well-known sensation in virtual environments and simulators, typically characterized by a variety of symptoms such as pallor, sweating, dizziness, fatigue, and/or nausea. Numerous methods to reduce VIMS have been previously introduced; however, a reliable

  13. Scientific visualization uncertainty, multifield, biomedical, and scalable visualization

    CERN Document Server

    Chen, Min; Johnson, Christopher; Kaufman, Arie; Hagen, Hans

    2014-01-01

    Based on the seminar that took place in Dagstuhl, Germany in June 2011, this contributed volume studies the four important topics within the scientific visualization field: uncertainty visualization, multifield visualization, biomedical visualization and scalable visualization. • Uncertainty visualization deals with uncertain data from simulations or sampled data, uncertainty due to the mathematical processes operating on the data, and uncertainty in the visual representation, • Multifield visualization addresses the need to depict multiple data at individual locations and the combination of multiple datasets, • Biomedical is a vast field with select subtopics addressed from scanning methodologies to structural applications to biological applications, • Scalability in scientific visualization is critical as data grows and computational devices range from hand-held mobile devices to exascale computational platforms. Scientific Visualization will be useful to practitioners of scientific visualization, ...

  14. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    Science.gov (United States)

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
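
    A minimal sketch, under our own assumptions, of the entropy computation this abstract builds on: the Shannon entropy of an image's grey-level histogram, with a hypothetical threshold separating "likely single object (candidate landmark)" from "cluttered view (treat as obstacle)". The threshold value and test images are illustrative, not the authors' choices.

        import numpy as np

        def image_entropy(gray):
            # Shannon entropy (bits) of an 8-bit grayscale image's intensity histogram.
            hist, _ = np.histogram(gray, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        ENTROPY_THRESHOLD = 4.0   # assumed value, not taken from the paper

        def classify_view(gray):
            # Low entropy: probably one dominant object -> candidate landmark.
            # High entropy: probably several objects/clutter -> treat as obstacle.
            return "candidate landmark" if image_entropy(gray) < ENTROPY_THRESHOLD else "obstacle/clutter"

        flat = np.full((64, 64), 128, dtype=np.uint8)                  # uniform patch, low entropy
        noisy = np.random.randint(0, 256, (64, 64), dtype=np.uint8)    # cluttered patch, high entropy
        print(classify_view(flat), classify_view(noisy))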

  15. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    Directory of Open Access Journals (Sweden)

    Darío Maravall

    2017-08-01

    Full Text Available We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.

  16. Auditory and visual capture during focused visual attention

    NARCIS (Netherlands)

    Koelewijn, T.; Bronkhorst, A.W.; Theeuwes, J.

    2009-01-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued.

  17. Visual characterization and quantitative measurement of artemisinin-induced DNA breakage

    Energy Technology Data Exchange (ETDEWEB)

    Cai Huaihong [Bionanotechnology Lab, and Department of Chemistry, Jinan University, Guangzhou 510632 (China); Yang Peihui [Bionanotechnology Lab, and Department of Chemistry, Jinan University, Guangzhou 510632 (China)], E-mail: typh@jnu.edu.cn; Chen Jianan [Bionanotechnology Lab, and Department of Chemistry, Jinan University, Guangzhou 510632 (China); Liang Zhihong [Experiment and Technology Center, Jinan University, Guangzhou 510632 (China); Chen Qiongyu [Institute of Genetic Engineering, Jinan University, Guangzhou 510632 (China); Cai Jiye [Bionanotechnology Lab, and Department of Chemistry, Jinan University, Guangzhou 510632 (China)], E-mail: tjycai@jnu.edu.cn

    2009-05-01

    DNA conformational change and breakage induced by artemisinin, a traditional Chinese herbal medicine, have been visually characterized and quantitatively measured with the multiple tools of electrochemistry, UV-vis absorption spectroscopy, atomic force microscopy (AFM), and DNA electrophoresis. Electrochemical and spectroscopic results confirm that artemisinin can intercalate into the DNA double helix, which causes DNA conformational changes. AFM imaging vividly demonstrates uneven DNA strand breaking induced by artemisinin (QHS) interaction. To assess these breakages, quantitative analysis of the extent of DNA breakage was performed by analyzing AFM images. Based on the statistical analysis, the occurrence of DNA breaks is found to depend on the concentration of artemisinin. DNA electrophoresis further validates that the intact DNA molecules are unwound because breakages occur in the single strands. A reliable scheme is proposed to explain the process of artemisinin-induced DNA cleavage. These results can provide further information for better understanding the anticancer activity of artemisinin.

  18. Glaucoma Diagnostic Capabilities of Foveal Avascular Zone Parameters Using Optical Coherence Tomography Angiography According to Visual Field Defect Location.

    Science.gov (United States)

    Kwon, Junki; Choi, Jaewan; Shin, Joong Won; Lee, Jiyun; Kook, Michael S

    2017-12-01

    To assess the diagnostic ability of foveal avascular zone (FAZ) parameters to discriminate glaucomatous eyes with visual field defects (VFDs) in different locations (central vs. peripheral) from normal eyes. In total, 125 participants were divided into three groups: normal (n=45), glaucoma with peripheral VFD (PVFD, n=45), and glaucoma with central VFD (CVFD, n=35). The FAZ area, perimeter, and circularity, and the parafoveal vessel density, were calculated from optical coherence tomography angiography images. The diagnostic ability of the FAZ parameters and other structural parameters was determined according to glaucomatous VFD location. Associations between the FAZ parameters and central visual function were evaluated. A larger FAZ area and longer FAZ perimeter were observed in the CVFD group than in the PVFD and normal groups. The FAZ area, perimeter, and circularity were better in differentiating glaucomatous eyes with CVFDs from normal eyes [areas under the receiver operating characteristic curves (AUC), 0.78 to 0.88] than in differentiating PVFDs from normal eyes (AUC, 0.51 to 0.64). The FAZ perimeter had an AUC similar to the circumpapillary retinal nerve fiber layer and macular ganglion cell-inner plexiform layer thicknesses for differentiating eyes with CVFDs from normal eyes (all P>0.05, DeLong test). The FAZ area was significantly correlated with central visual function (β=-112.7, P=0.035, multivariate linear regression). The FAZ perimeter had good diagnostic capability in differentiating glaucomatous eyes with CVFDs from normal eyes, and may be a potential diagnostic biomarker for detecting glaucomatous patients with CVFDs.
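
    An illustrative sketch of how an area under the receiver operating characteristic curve (AUC) of the kind reported above can be computed for a single parameter such as the FAZ perimeter. The measurements and labels below are invented for illustration; larger values are assumed to indicate disease.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        # Hypothetical FAZ perimeter values (mm): 0 = normal eye, 1 = glaucoma with central VFD.
        labels    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
        perimeter = np.array([1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.5, 2.6, 2.8, 3.0])

        # The raw measurement serves directly as the classification score.
        auc = roc_auc_score(labels, perimeter)
        print(f"AUC for FAZ perimeter (hypothetical data): {auc:.2f}")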

  19. Visual perceptual load induces inattentional deafness.

    Science.gov (United States)

    Macdonald, James S P; Lavie, Nilli

    2011-08-01

    In this article, we establish a new phenomenon of "inattentional deafness" and highlight the level of load on visual attention as a critical determinant of this phenomenon. In three experiments, we modified an inattentional blindness paradigm to assess inattentional deafness. Participants made either a low- or high-load visual discrimination concerning a cross shape (respectively, a discrimination of line color or of line length with a subtle length difference). A brief pure tone was presented simultaneously with the visual task display on a final trial. Failures to notice the presence of this tone (i.e., inattentional deafness) reached a rate of 79% in the high-visual-load condition, significantly more than in the low-load condition. These findings establish the phenomenon of inattentional deafness under visual load, thereby extending the load theory of attention (e.g., Lavie, Journal of Experimental Psychology: Human Perception and Performance, 25, 596-616, 1995) to address the cross-modal effects of visual perceptual load.

  20. Extraretinal induced visual sensations during IMRT of the brain.

    Science.gov (United States)

    Wilhelm-Buchstab, Timo; Buchstab, Barbara Myrthe; Leitzen, Christina; Garbe, Stephan; Müdder, Thomas; Oberste-Beulmann, Susanne; Sprinkart, Alois Martin; Simon, Birgit; Nelles, Michael; Block, Wolfgang; Schoroth, Felix; Schild, Hans Heinz; Schüller, Heinrich

    2015-01-01

    We observed visual sensations (VSs) in patients undergoing intensity modulated radiotherapy (IMRT) of the brain without the beam passing through ocular structures. We analyzed this phenomenon especially with regard to reproducibility and origin. Analyzed were ten consecutive patients (aged 41-71 years) with glioblastoma multiforme who received pulsed IMRT (total dose 60 Gy) with helical tomotherapy (TT). A megavolt-CT (MVCT) was performed daily before treatment. VSs were reported and recorded using a triggered event recorder. The frequency of VSs was calculated and VSs were correlated with beam direction and couch position. Subjective patient perception was plotted on an 8x8 visual field (VF) matrix. The distance to the orbital roof (OR) from the first beam causing a VS was calculated from the DICOM radiation therapy data and MVCT data. During 175 treatment sessions (average 17.5 per patient) 5959 VSs were recorded and analyzed. VSs occurred only during the treatment sessions, not during the MVCTs. Plotting events over time revealed patient-specific patterns. The average cranio-caudad extension of the VS-inducing area was 63.4 mm (range 43.24-92.1 mm). The maximum distance between the first VS and the OR was 56.1 mm, so that direct interaction with the retina is unlikely. Data on subjective visual perception showed that VSs occurred mainly in the upper right and left quadrants of the VF. Within the visual pathways, the highest probability for the origin of VSs was seen in the optic chiasm and the optic tract (22%). There is clear evidence that interaction of photon irradiation with neuronal structures distant from the eye can lead to VSs.

  1. Extraretinal induced visual sensations during IMRT of the brain.

    Directory of Open Access Journals (Sweden)

    Timo Wilhelm-Buchstab

    Full Text Available We observed visual sensations (VSs) in patients undergoing intensity modulated radiotherapy (IMRT) of the brain without the beam passing through ocular structures. We analyzed this phenomenon especially with regard to reproducibility and origin. Analyzed were ten consecutive patients (aged 41-71 years) with glioblastoma multiforme who received pulsed IMRT (total dose 60 Gy) with helical tomotherapy (TT). A megavolt-CT (MVCT) was performed daily before treatment. VSs were reported and recorded using a triggered event recorder. The frequency of VSs was calculated and VSs were correlated with beam direction and couch position. Subjective patient perception was plotted on an 8x8 visual field (VF) matrix. The distance to the orbital roof (OR) from the first beam causing a VS was calculated from the DICOM radiation therapy data and MVCT data. During 175 treatment sessions (average 17.5 per patient) 5959 VSs were recorded and analyzed. VSs occurred only during the treatment sessions, not during the MVCTs. Plotting events over time revealed patient-specific patterns. The average cranio-caudad extension of the VS-inducing area was 63.4 mm (range 43.24-92.1 mm). The maximum distance between the first VS and the OR was 56.1 mm, so that direct interaction with the retina is unlikely. Data on subjective visual perception showed that VSs occurred mainly in the upper right and left quadrants of the VF. Within the visual pathways, the highest probability for the origin of VSs was seen in the optic chiasm and the optic tract (22%). There is clear evidence that interaction of photon irradiation with neuronal structures distant from the eye can lead to VSs.

  2. Gene Locater

    DEFF Research Database (Denmark)

    Anwar, Muhammad Zohaib; Sehar, Anoosha; Rehman, Inayat-Ur

    2012-01-01

    Software for calculating recombination frequency is mostly limited in the range and flexibility of this type of analysis. GENE LOCATER is a fully customizable program for calculating recombination frequency, written in JAVA. Through an easy-to-use interface, GENE LOCATER allows users a high degree of flexibility in calculating genetic linkage and displaying linkage groups. Among other features, this software enables users to identify linkage groups with output visualized graphically. The program calculates interference and coefficient of coincidence with elevated accuracy in sample datasets. AVAILABILITY...
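
    The quantities named in this record have simple textbook definitions. The sketch below is not the GENE LOCATER source; it only illustrates, with hypothetical test-cross counts, the recombination frequency and the coefficient of coincidence/interference calculations the record refers to.

```python
# Hedged sketch of the basic linkage calculations (hypothetical counts).
def recombination_frequency(parental_counts, recombinant_counts):
    """Recombination frequency = recombinant offspring / total offspring."""
    total = sum(parental_counts) + sum(recombinant_counts)
    return sum(recombinant_counts) / total

def coefficient_of_coincidence(observed_dco, rf1, rf2, n_offspring):
    """Observed double crossovers divided by expected (rf1 * rf2 * N)."""
    expected_dco = rf1 * rf2 * n_offspring
    return observed_dco / expected_dco

rf = recombination_frequency(parental_counts=[412, 388], recombinant_counts=[105, 95])
coc = coefficient_of_coincidence(observed_dco=8, rf1=0.20, rf2=0.10, n_offspring=1000)
print(f"Recombination frequency: {rf:.3f} (~{rf * 100:.1f} cM)")
print(f"Coefficient of coincidence: {coc:.2f}, interference: {1 - coc:.2f}")
```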

  3. Visual Fatigue Induced by Viewing a Tablet Computer with a High-resolution Display.

    Science.gov (United States)

    Kim, Dong Ju; Lim, Chi Yeon; Gu, Namyi; Park, Choul Yong

    2017-10-01

    In the present study, the visual discomfort induced by smart mobile devices was assessed in normal and healthy adults. Fifty-nine volunteers (age, 38.16 ± 10.23 years; male : female = 19 : 40) were exposed to tablet computer screen stimuli (iPad Air, Apple Inc.) for 1 hour. Participants watched a movie or played a computer game on the tablet computer. Visual fatigue and discomfort were assessed using an asthenopia questionnaire, tear film break-up time, and total ocular wavefront aberration before and after viewing smart mobile devices. Based on the questionnaire, viewing smart mobile devices for 1 hour significantly increased mean total asthenopia score from 19.59 ± 8.58 to 22.68 ± 9.39 (p < 0.001). Specifically, the scores for five items (tired eyes, sore/aching eyes, irritated eyes, watery eyes, and hot/burning eye) were significantly increased by viewing smart mobile devices. Tear film break-up time significantly decreased from 5.09 ± 1.52 seconds to 4.63 ± 1.34 seconds (p = 0.003). However, total ocular wavefront aberration was unchanged. Visual fatigue and discomfort were significantly induced by viewing smart mobile devices, even though the devices were equipped with state-of-the-art display technology. © 2017 The Korean Ophthalmological Society
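
    The before/after comparisons reported above follow a standard paired-samples pattern. A minimal sketch of that analysis is shown below; the data are simulated to resemble the reported means and are hypothetical, and the abstract does not state which paired test was used, so the t-test here is an assumption.

```python
# Sketch of a paired before/after comparison (hypothetical, simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 59
asthenopia_pre  = rng.normal(19.6, 8.6, n)
asthenopia_post = asthenopia_pre + rng.normal(3.1, 4.0, n)   # ~3-point mean increase
tbut_pre  = rng.normal(5.1, 1.5, n)                           # tear break-up time (s)
tbut_post = tbut_pre - rng.normal(0.46, 1.0, n)               # ~0.5 s mean decrease

t1, p1 = stats.ttest_rel(asthenopia_post, asthenopia_pre)
t2, p2 = stats.ttest_rel(tbut_post, tbut_pre)
print(f"Asthenopia score: t = {t1:.2f}, p = {p1:.4f}")
print(f"Tear break-up time: t = {t2:.2f}, p = {p2:.4f}")
```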

  4. Posttraining transcranial magnetic stimulation of striate cortex disrupts consolidation early in visual skill learning.

    Science.gov (United States)

    De Weerd, Peter; Reithler, Joel; van de Ven, Vincent; Been, Marin; Jacobs, Christianne; Sack, Alexander T

    2012-02-08

    Practice-induced improvements in skilled performance reflect "offline" consolidation processes extending beyond daily training sessions. According to visual learning theories, an early, fast learning phase driven by high-level areas is followed by a late, asymptotic learning phase driven by low-level, retinotopic areas when higher resolution is required. Thus, low-level areas would not contribute to learning and offline consolidation until late learning. Recent studies have challenged this notion, demonstrating modified responses to trained stimuli in primary visual cortex (V1) and offline activity after very limited training. However, the behavioral relevance of modified V1 activity for offline consolidation of visual skill memory in V1 after early training sessions remains unclear. Here, we used neuronavigated transcranial magnetic stimulation (TMS) directed to a trained retinotopic V1 location to test for behaviorally relevant consolidation in human low-level visual cortex. Applying TMS to the trained V1 location within 45 min of the first or second training session strongly interfered with learning, as measured by impaired performance the next day. The interference was conditional on task context and occurred only when training in the location targeted by TMS was followed by training in a second location before TMS. In this condition, high-level areas may become coupled to the second location and uncoupled from the previously trained low-level representation, thereby rendering consolidation vulnerable to interference. Our data show that, during the earliest phases of skill learning in the lowest-level visual areas, a behaviorally relevant form of consolidation exists of which the robustness is controlled by high-level, contextual factors.

  5. The processing of visual and auditory information for reaching movements.

    Science.gov (United States)

    Glazebrook, Cheryl M; Welsh, Timothy N; Tremblay, Luc

    2016-09-01

    Presenting target and non-target information in different modalities influences target localization if the non-target is within the spatiotemporal limits of perceptual integration. When using auditory and visual stimuli, the influence of a visual non-target on auditory target localization is greater than the reverse. It is not known, however, whether or how such perceptual effects extend to goal-directed behaviours. To gain insight into how audio-visual stimuli are integrated for motor tasks, the kinematics of reaching movements towards visual or auditory targets with or without a non-target in the other modality were examined. When present, the simultaneously presented non-target could be spatially coincident, to the left, or to the right of the target. Results revealed that auditory non-targets did not influence reaching trajectories towards a visual target, whereas visual non-targets influenced trajectories towards an auditory target. Interestingly, the biases induced by visual non-targets were present early in the trajectory and persisted until movement end. Subsequent experimentation indicated that the magnitude of the biases was equivalent whether participants performed a perceptual or motor task, whereas variability was greater for the motor versus the perceptual tasks. We propose that visually induced trajectory biases were driven by the perceived mislocation of the auditory target, which in turn affected both the movement plan and subsequent control of the movement. Such findings provide further evidence of the dominant role visual information processing plays in encoding spatial locations as well as planning and executing reaching action, even when reaching towards auditory targets.

  6. Non-destructive visualization of linear explosive-induced Pyroshock using phase arrayed laser-induced shock in a space launcher composite

    International Nuclear Information System (INIS)

    Jang, Jae Kyeong; Lee, Jung Ryul

    2015-01-01

    Space launch vehicles use various separation systems and pyrotechnic devices as separation mechanisms. The operation of these pyrotechnic devices generates pyroshock that can cause failures in electronic components, so prediction of the high-frequency structural response, especially the shock response spectrum (SRS), is important. This paper presents a non-destructive visualization and simulation of linear explosive-induced pyroshock using phase-arrayed laser-induced shock. The proposed method combines a laser shock test based on laser-beam excitation with filtering-zone conditioning to predict the SRS of pyroshock. A ballistic test based on a linear explosive and non-contact laser Doppler vibrometers, and a non-destructive laser shock measurement using laser excitation and several PZT sensors, are performed on a carbon composite sandwich panel. The similarity of the SRS of the conditioned laser shock to that of the real explosive pyroshock is evaluated with the Mean Acceleration Difference (MAD). The average of the MADs over the two training points was 33.64%, and the MAD at the verification point was improved to 31.99%. The experimentally found optimal conditions are then applied to arbitrary points in the laser scanning area. Finally, it is shown that linear explosive-induced real pyroshock wave propagation can be visualized with high similarity based on the proposed laser technology. (paper)
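
    The record does not define the Mean Acceleration Difference, so the sketch below assumes it is the mean absolute relative difference between the laser-shock SRS and the reference explosive SRS across frequency bands; both curves and the percentage are hypothetical and purely illustrative.

```python
# Hedged sketch: comparing two shock response spectra with an assumed
# "mean acceleration difference" metric (hypothetical SRS curves).
import numpy as np

freqs = np.logspace(2, 4, 30)                          # 100 Hz - 10 kHz bands
srs_pyro  = 50 * (freqs / 100.0) ** 0.8                # reference explosive SRS [g]
srs_laser = srs_pyro * (1 + 0.3 * np.sin(freqs / 800)) # conditioned laser-shock SRS [g]

mad = np.mean(np.abs(srs_laser - srs_pyro) / srs_pyro) * 100.0
print(f"Mean acceleration difference: {mad:.1f}%")
```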

  7. Auditory and Visual Capture during Focused Visual Attention

    Science.gov (United States)

    Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan

    2009-01-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets…

  8. Visualization of alcohol-induced rhabdomyolysis: a correlative radiotracer, histochemical, and electron-microscopic study

    International Nuclear Information System (INIS)

    Silberstein, E.B.; Bove, K.E.

    1979-01-01

    Technetium-99m diphosphonate was used to visualize the extent of alcohol-induced rhabdomyolysis and its resolution. Transient secondary hyperparathyroidism was documented. Histological and biochemical analyses of skeletal muscle obtained at biopsy 6 days postscan and 9 days after the onset of the illness did not show abnormal calcium content

  9. Elevating Endogenous GABA Levels with GAT-1 Blockade Modulates Evoked but Not Induced Responses in Human Visual Cortex

    Science.gov (United States)

    Muthukumaraswamy, Suresh D; Myers, Jim F M; Wilson, Sue J; Nutt, David J; Hamandi, Khalid; Lingford-Hughes, Anne; Singh, Krish D

    2013-01-01

    The electroencephalographic/magnetoencephalographic (EEG/MEG) signal is generated primarily by the summation of the postsynaptic currents of cortical principal cells. At a microcircuit level, these glutamatergic principal cells are reciprocally connected to GABAergic interneurons. Here we investigated the relative sensitivity of visual evoked and induced responses to altered levels of endogenous GABAergic inhibition. To do this, we pharmacologically manipulated the GABA system using tiagabine, which blocks the synaptic GABA transporter 1, and so increases endogenous GABA levels. In a single-blinded and placebo-controlled crossover study of 15 healthy participants, we administered either 15 mg of tiagabine or a placebo. We recorded whole-head MEG, while participants viewed a visual grating stimulus, before, 1, 3 and 5 h post tiagabine ingestion. Using beamformer source localization, we reconstructed responses from early visual cortices. Our results showed no change in either stimulus-induced gamma-band amplitude increases or stimulus-induced alpha amplitude decreases. However, the same data showed a 45% reduction in the evoked response component at ∼80 ms. These data demonstrate that, in early visual cortex the evoked response shows a greater sensitivity compared with induced oscillations to pharmacologically increased endogenous GABA levels. We suggest that previous studies correlating GABA concentrations as measured by magnetic resonance spectroscopy to gamma oscillation frequency may reflect underlying variations such as interneuron/inhibitory synapse density rather than functional synaptic GABA concentrations. PMID:23361120

  10. Peripapillary Retinal Nerve Fiber Layer Thickness Corresponds to Drusen Location and Extent of Visual Field Defects in Superficial and Buried Optic Disc Drusen

    DEFF Research Database (Denmark)

    Malmqvist, Lasse; Wegener, Marianne; Sander, Birgit A

    2016-01-01

    ... of patients with ODD and to compare the peripapillary RNFL thickness to the extent of visual field defects and anatomic location (superficial or buried) of ODD. METHODS: Retrospective, cross-sectional study. RESULTS: A total of 149 eyes of 84 ODD patients were evaluated. Sixty-five percent were female and 76% had bilateral ODD. Of 149 eyes, 109 had superficial ODD and 40 had buried ODD. Peripapillary RNFL thinning was seen in 83.6% of eyes where optical coherence tomography was performed (n = 61). Eyes with superficial ODD had greater mean peripapillary RNFL thinning (P ≤ 0.0001) and visual field defects (P = 0.002) than eyes with buried ODD. There was a correlation between mean peripapillary RNFL thinning and visual field defects as measured by perimetric mean deviation (R = -0.66; P = 0.0001). The most frequent visual field abnormalities were arcuate and partial arcuate defects. CONCLUSIONS...

  11. Route Network Construction with Location-Direction-Enabled Photographs

    Science.gov (United States)

    Fujita, Hideyuki; Sagara, Shota; Ohmori, Tadashi; Shintani, Takahiko

    2018-05-01

    We propose a method for constructing a geometric graph for generating routes that summarize a geographical area and also have visual continuity, using a set of location-direction-enabled photographs. A location-direction-enabled photograph is a photograph that has information about the location (position of the camera at the time of shooting) and the direction (direction of the camera at the time of shooting). Each node of the graph corresponds to a location-direction-enabled photograph. The location of each node is the location of the corresponding photograph, and a route on the graph corresponds to a route in the geographic area and a sequence of photographs. The proposed graph is constructed to represent characteristic spots and the paths linking those spots, and it can be regarded as a kind of spatial summarization of the area with the photographs. Therefore, we call the routes on the graph spatial summary routes. Each route on the proposed graph also has visual continuity, which means that we can understand the spatial relationship between consecutive photographs on the route, such as moving forward, moving backward, or turning right. In this study, a route was defined to have visual continuity when the changes in shooting position and shooting direction between consecutive photographs satisfied a given threshold. By presenting the photographs in order along the generated route, information can be presented sequentially while maintaining visual continuity to a great extent.
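
    The core construction step can be illustrated with a short sketch: two photographs are linked by an edge when both the change in shooting position and the change in shooting direction stay within thresholds, i.e. the pair has "visual continuity". The thresholds, coordinates, and directions below are hypothetical; this is not the authors' implementation.

```python
# Hedged sketch: linking location-direction-enabled photographs into a graph
# using distance and direction-change thresholds (hypothetical data).
import math

photos = [  # (x, y, shooting direction in degrees)
    (0.0, 0.0, 10.0), (8.0, 2.0, 25.0), (15.0, 5.0, 60.0), (60.0, 40.0, 200.0),
]
MAX_DIST_M, MAX_TURN_DEG = 20.0, 45.0

def angle_diff(a, b):
    """Smallest absolute difference between two bearings, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

edges = []
for i, (xi, yi, di) in enumerate(photos):
    for j, (xj, yj, dj) in enumerate(photos):
        if i < j:
            dist = math.hypot(xj - xi, yj - yi)
            if dist <= MAX_DIST_M and angle_diff(di, dj) <= MAX_TURN_DEG:
                edges.append((i, j))

print("Visually continuous edges:", edges)   # -> [(0, 1), (1, 2)]
```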

  12. Segregation of Spontaneous and Training Induced Recovery from Visual Field Defects in Subacute Stroke Patients

    Directory of Open Access Journals (Sweden)

    Douwe P. Bergsma

    2017-12-01

    Full Text Available Whether rehabilitation after stroke profits from an early start is difficult to establish, as the contributions of spontaneous recovery and treatment are difficult to tease apart. Here, we use a novel training design to dissociate these components for visual rehabilitation of subacute stroke patients with visual field defects such as hemianopia. Visual discrimination training was started within 6 weeks after stroke in 17 patients. Spontaneous and training-induced recoveries were distinguished by training one-half of the defect for 8 weeks, while monitoring spontaneous recovery in the other (control) half of the defect. Next, trained and control regions were swapped, and training continued for another 8 weeks. The same paradigm was also applied to seven chronic patients, for whom spontaneous recovery can be excluded and changes in the control half of the defect point to a spillover effect of training. In both groups, field stability was assessed during a no-intervention period. Defect reduction was significantly greater in the trained part of the defect than in the simultaneously untrained part of the defect, irrespective of training onset (p = 0.001). In subacute patients, training contributed about twice as much to their defect reduction as spontaneous recovery. Goal Attainment Scores were significantly and positively correlated with the total defect reduction (p = 0.01), and the percentage increase in reading speed was significantly and positively correlated with the defect reduction induced by training (epoch 1: p = 0.0044; epoch 2: p = 0.023). Visual training adds significantly to the spontaneous recovery of visual field defects, both during training in the early and the chronic stroke phase. However, field recovery as a result of training in the subacute phase was as large as in the chronic phase. This suggests that patients benefited primarily from early-onset training by gaining access to a larger visual field sooner.

  13. Primary visual cortex activity along the apparent-motion trace reflects illusory perception.

    Directory of Open Access Journals (Sweden)

    Lars Muckli

    2005-08-01

    Full Text Available The illusion of apparent motion can be induced when visual stimuli are successively presented at different locations. It has been shown in previous studies that motion-sensitive regions in extrastriate cortex are relevant for the processing of apparent motion, but it is unclear whether primary visual cortex (V1) is also involved in the representation of the illusory motion path. We investigated, in human subjects, apparent-motion-related activity in patches of V1 representing locations along the path of illusory stimulus motion using functional magnetic resonance imaging. Here we show that apparent motion caused a blood-oxygenation-level-dependent response along the V1 representations of the apparent-motion path, including regions that were not directly activated by the apparent-motion-inducing stimuli. This response was unaltered when participants had to perform an attention-demanding task that diverted their attention away from the stimulus. With a bistable motion quartet, we confirmed that the activity was related to the conscious perception of movement. Our data suggest that V1 is part of the network that represents the illusory path of apparent motion. The activation in V1 can be explained either by lateral interactions within V1 or by feedback mechanisms from higher visual areas, especially the motion-sensitive human MT/V5 complex.

  14. Visually induced reorientation illusions

    Science.gov (United States)

    Howard, I. P.; Hu, G.; Oman, C. M. (Principal Investigator)

    2001-01-01

    It is known that rotation of a furnished room around the roll axis of erect subjects produces an illusion of 360 degrees self-rotation in many subjects. Exposure of erect subjects to stationary tilted visual frames or rooms produces only up to 20 degrees of illusory tilt. But, in studies using static tilted rooms, subjects remained erect and the body axis was not aligned with the room. We have revealed a new class of disorientation illusions that occur in many subjects when placed in a 90 degrees or 180 degrees tilted room containing polarised objects (familiar objects with tops and bottoms). For example, supine subjects looking up at a wall of the room feel upright in an upright room and their arms feel weightless when held out from the body. We call this the levitation illusion. We measured the incidence of 90 degrees or 180 degrees reorientation illusions in erect, supine, recumbent, and inverted subjects in a room tilted 90 degrees or 180 degrees. We report that reorientation illusions depend on the displacement of the visual scene rather than of the body. However, illusions are most likely to occur when the visual and body axes are congruent. When the axes are congruent, illusions are least likely to occur when subjects are prone rather than supine, recumbent, or inverted.

  15. Sustained visual-spatial attention produces costs and benefits in response time and evoked neural activity.

    Science.gov (United States)

    Mangun, G R; Buck, L A

    1998-03-01

    This study investigated the simple reaction time (RT) and event-related potential (ERP) correlates of biasing attention towards a location in the visual field. RTs and ERPs were recorded to stimuli flashed randomly and with equal probability to the left and right visual hemifields in the three blocked, covert attention conditions: (i) attention divided equally to left and right hemifield locations; (ii) attention biased towards the left location; or (iii) attention biased towards the right location. Attention was biased towards left or right by instructions to the subjects, and responses were required to all stimuli. Relative to the divided attention condition, RTs were significantly faster for targets occurring where more attention was allocated (benefits), and slower to targets where less attention was allocated (costs). The early P1 (100-140 msec) component over the lateral occipital scalp regions showed attentional benefits. There were no amplitude modulations of the occipital N1 (125-180 msec) component with attention. Between 200 and 500 msec latency, a late positive deflection (LPD) showed both attentional costs and benefits. The behavioral findings show that when sufficiently induced to bias attention, human observers demonstrate RT benefits as well as costs. The corresponding P1 benefits suggest that the RT benefits of spatial attention may arise as the result of modulations of visual information processing in the extrastriate visual cortex.
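
    The cost/benefit logic used above is a simple computation against the divided-attention baseline. The sketch below uses hypothetical mean reaction times, not the study's data, purely to make the definitions concrete.

```python
# Sketch of attentional cost and benefit relative to a divided-attention
# baseline (hypothetical mean RTs in milliseconds).
rt_divided    = 320.0   # attention divided equally between hemifields
rt_attended   = 295.0   # target at the location receiving more attention
rt_unattended = 348.0   # target at the location receiving less attention

benefit = rt_divided - rt_attended      # faster than baseline -> benefit
cost    = rt_unattended - rt_divided    # slower than baseline -> cost
print(f"Benefit: {benefit:.0f} ms, cost: {cost:.0f} ms")
```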

  16. The effect of internal and external fields of view on visually induced motion sickness

    NARCIS (Netherlands)

    Bos, J.E.; Vries, S.C. de; Emmerik, M.L. van; Groen, E.L.

    2010-01-01

    Field of view (FOV) is said to affect visually induced motion sickness. FOV, however, is characterized by an internal setting used by the graphics generator (iFOV) and an external factor determined by screen size and viewing distance (eFOV). We hypothesized that especially the incongruence between

  17. Possible role of biochemiluminescent photons for lysergic acid diethylamide (LSD)-induced phosphenes and visual hallucinations.

    Science.gov (United States)

    Kapócs, Gábor; Scholkmann, Felix; Salari, Vahid; Császár, Noémi; Szőke, Henrik; Bókkon, István

    2017-01-01

    Today, there is an increased interest in research on lysergic acid diethylamide (LSD) because it may offer new opportunities in psychotherapy under controlled settings. The more we know about how a drug works in the brain, the more opportunities there will be to exploit it in medicine. Here, based on our previously published papers and investigations, we suggest that LSD-induced visual hallucinations/phosphenes may be due to the transient enhancement of bioluminescent photons in the early retinotopic visual system in blind as well as healthy people.

  18. Visualization of Two-Phase Fluid Distribution Using Laser Induced Exciplex Fluorescence

    Science.gov (United States)

    Kim, J. U.; Darrow, J.; Schock, H.; Golding, B.; Nocera, D.; Keller, P.

    1998-03-01

    Laser-induced exciplex (excited state complex) fluorescence has been used to generate two-dimensional images of dispersed liquid and vapor phases with spectrally resolved two-color emissions. In this method, the vapor phase is tagged by the monomer fluorescence while the liquid phase is tracked by the exciplex fluorescence. A new exciplex visualization system consisting of DMA and 1,4,6-TMN in an isooctane solvent was developed.(J.U. Kim et al., Chem. Phys. Lett. 267, 323-328 (1997)) The direct ca

  19. CLINICAL PRESENTATION OF LENS INDUCED GLAUCOMA: STUDY OF EPIDEMIOLOGY, DURATION OF SYMPTOMS, INTRAOCULAR PRESSURE AND VISUAL ACUITY

    Directory of Open Access Journals (Sweden)

    Venkataratnam

    2015-10-01

    Full Text Available BACKGROUND: Lens induced glaucoma is a common cause of ocular morbidity. OBJECTIVES: Our study aimed to determine the epidemiological factors, duration of symptoms, visual acuity and intraocular pressure in the clinical presentation of lens induced glaucoma. MATERIALS AND METHODS: This was a tertiary hospital based prospective study in the department of Glaucoma, Sarojini Devi Eye Hospital and Regional Institute of Ophthalmology (RIO), Osmania Medical College, Hyderabad, over the period from March 2015 to August 2015. 50 patients clinically diagnosed as lens induced glaucoma (LIG) were studied with data on age, sex, literacy, laterality and rural/urban status, together with the duration of symptoms, intraocular pressure and visual acuity. The data were analyzed by simple statistical methods. RESULTS: 50 patients clinically diagnosed as lens induced glaucoma (LIG) were studied. The age distribution was 1 (2.0%) in 40-50 yrs, 13 (26.0%) in >50-60 yrs, 26 (52.0%) in >60-70 yrs and 10 (20.0%) in >70 yrs. The sex distribution was 23 (46.0%) males and 27 (54.0%) females. Urban/rural status was 15 (30.0%) urban and 35 (70.0%) rural. Literacy status was 7 (14.0%) literate and 43 (86.0%) illiterate. Laterality was RE in 24 (48.0%) and LE in 26 (52.0%). The duration of the presenting symptoms before reporting to the hospital was 12.0% in 2 wks. Intraocular pressure (IOP) in mm Hg showed no case (0.0%) in 20-40, 27 (54.0%) in >40-60 and 5 (10.0%) >60, with a mean IOP of 42.12 mm Hg. Visual acuity (VA) was PL +ve in 24 (48.0%) and HM - 3/60. CONCLUSIONS: Increasing age, female gender, rural residence, illiteracy, and delayed reporting to the hospital after the presenting symptoms were the common risk factors, with increased intraocular pressure and poor visual acuity in the clinical presentation of lens induced glaucoma.

  20. Visualizing the effect of tumor microenvironments on radiation-induced cell kinetics in multicellular spheroids consisting of HeLa cells

    International Nuclear Information System (INIS)

    Kaida, Atsushi; Miura, Masahiko

    2013-01-01

    Highlights: •We visualized radiation-induced cell kinetics in spheroids. •HeLa-Fucci cells were used for detection of cell-cycle changes. •Radiation-induced G2 arrest was prolonged in the spheroid. •The inner and outer cell fractions behaved differently. -- Abstract: In this study, we visualized the effect of tumor microenvironments on radiation-induced tumor cell kinetics. For this purpose, we utilized a multicellular spheroid model, with a diameter of ∼500 μm, consisting of HeLa cells expressing the fluorescent ubiquitination-based cell-cycle indicator (Fucci). In live spheroids, a confocal laser scanning microscope allowed us to clearly monitor cell kinetics at depths of up to 60 μm. Surprisingly, a remarkable prolongation of G2 arrest was observed in the outer region of the spheroid relative to monolayer-cultured cells. Scale, an aqueous reagent that renders tissues optically transparent, allowed visualization deeper inside spheroids. About 16 h after irradiation, a red fluorescent cell fraction, presumably a quiescent G0 cell fraction, became distinct from the outer fraction consisting of proliferating cells, most of which exhibited green fluorescence indicative of G2 arrest. Thereafter, the red cell fraction began to emit green fluorescence and remained in prolonged G2 arrest. Thus, for the first time, we visualized the prolongation of radiation-induced G2 arrest in spheroids and the differences in cell kinetics between the outer and inner fractions

  1. Glucose improves object-location binding in visual-spatial working memory.

    Science.gov (United States)

    Stollery, Brian; Christian, Leonie

    2016-02-01

    There is evidence that glucose temporarily enhances cognition and that processes dependent on the hippocampus may be particularly sensitive. As the hippocampus plays a key role in binding processes, we examined the influence of glucose on memory for object-location bindings. This study examines how glucose modifies performance on an object-location memory task, a task that draws heavily on hippocampal function. Thirty-one participants received 30 g glucose or placebo in a single 1-h session. After seeing between 3 and 10 objects (words or shapes) at different locations in a 9 × 9 matrix, participants attempted to immediately reproduce the display on a blank 9 × 9 matrix. Blood glucose was measured before drink ingestion, mid-way through the session, and at the end of the session. Glucose significantly improves object-location binding (d = 1.08) and location memory (d = 0.83), but not object memory (d = 0.51). Increasing working memory load impairs object memory and object-location binding, and word-location binding is more successful than shape-location binding, but the glucose improvement is robust across all difficulty manipulations. Within the glucose group, higher levels of circulating glucose are correlated with better binding memory and with remembering the locations of successfully recalled objects. The glucose improvements identified are consistent with a facilitative impact on hippocampal function. The findings are discussed in the context of the relationship between cognitive processes and hippocampal function, and the implications for glucose's mode of action.
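
    The abstract does not give the exact scoring rules, so the sketch below assumes the natural definitions for a single reproduction trial: object memory ignores position, location memory ignores identity, and binding requires the right object in the right cell of the 9 × 9 matrix. Objects and cells are hypothetical.

```python
# Illustrative trial scoring under assumed definitions (hypothetical data).
studied  = {(2, 3): "cat", (5, 7): "key", (8, 1): "cup"}   # cell -> studied object
response = {(2, 3): "cat", (5, 6): "key", (8, 1): "pen"}   # cell -> reproduced object

object_hits   = len(set(studied.values()) & set(response.values()))  # identity only
location_hits = len(set(studied.keys()) & set(response.keys()))      # position only
binding_hits  = sum(1 for cell, obj in studied.items() if response.get(cell) == obj)

print(f"objects {object_hits}/3, locations {location_hits}/3, bindings {binding_hits}/3")
```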

  2. Fragile visual short-term memory is an object-based and location-specific store

    NARCIS (Netherlands)

    Pinto, Y.; Sligte, I.G.; Shapiro, K.L.; Lamme, V.A.F.

    2013-01-01

    Fragile visual short-term memory (FM) is a recently discovered form of visual short-term memory. Evidence suggests that it provides rich and high-capacity storage, like iconic memory, yet it exists, without interference, almost as long as visual working memory. In the present study, we sought to

  3. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    Science.gov (United States)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of things (IOT) is a kind of intelligent network which can be used to locate, track, identify and supervise people and objects. One of the important core technologies of the intelligent visual internet of things (IVIOT) is the intelligent visual tag system. In this paper, research is carried out into visual feature extraction and the establishment of visual tags of the human face based on the ORL face database. Firstly, we use the principal component analysis (PCA) algorithm for face feature extraction, then adopt the support vector machine (SVM) for classification and face recognition, and finally establish a visual tag for each face that has been classified. We conducted an experiment on a group of face images, and the results show that the proposed algorithm has good performance and can present the visual tags of objects conveniently.
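
    A PCA-plus-SVM face recognition pipeline of the kind described here can be sketched in a few lines with scikit-learn. The ORL database is not bundled with scikit-learn, so the LFW people dataset is used below as a stand-in, and the parameters are illustrative rather than those of the paper.

```python
# Hedged sketch of a PCA + SVM face recognition pipeline (stand-in dataset).
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_lfw_people(min_faces_per_person=60)     # downloads on first use
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=0)

model = make_pipeline(PCA(n_components=100, whiten=True, random_state=0),
                      SVC(kernel="rbf", C=10, gamma="scale"))
model.fit(X_train, y_train)
print(f"Recognition accuracy: {model.score(X_test, y_test):.2f}")
```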

  4. Multimodal location estimation of videos and images

    CERN Document Server

    Friedland, Gerald

    2015-01-01

    This book presents an overview of the field of multimodal location estimation, i.e. using acoustic, visual, and/or textual cues to estimate the depicted location of a video recording. The authors present sample research results in this field in a unified way, integrating research work on this topic that focuses on different modalities, viewpoints, and applications. The book describes fundamental methods of acoustic, visual, textual, social graph, and metadata processing, as well as multimodal integration methods used for location estimation. In addition, the text covers benchmark metrics and explores the limits of the technology based on a human baseline. It discusses localization of multimedia data; examines fundamental methods of establishing location metadata for images and videos (other than GPS tagging); and covers data-driven as well as semantic location estimation.

  5. Psychophysical study of the visual sun location in pictures of cloudy and twilight skies inspired by Viking navigation.

    Science.gov (United States)

    Barta, András; Horváth, Gábor; Meyer-Rochow, Victor Benno

    2005-06-01

    In the late 1960s it was hypothesized that Vikings had been able to navigate the open seas, even when the sun was occluded by clouds or below the sea horizon, by using the angle of polarization of skylight. To detect the direction of skylight polarization, they were thought to have made use of birefringent crystals, called "sun-stones," and a large part of the scientific community still firmly believe that Vikings were capable of polarimetric navigation. However, there are some critics who treat the usefulness of skylight polarization for orientation under partly cloudy or twilight conditions with extreme skepticism. One of their counterarguments has been the assumption that solar positions or solar azimuth directions could be estimated quite accurately by the naked eye, even if the sun was behind clouds or below the sea horizon. Thus under partly cloudy or twilight conditions there might have been no serious need for a polarimetric method to determine the position of the sun. The aim of our study was to test quantitatively the validity of this qualitative counterargument. In our psychophysical laboratory experiments, test subjects were confronted with numerous 180 degrees field-of-view color photographs of partly cloudy skies with the sun occluded by clouds or of twilight skies with the sun below the horizon. The task of the subjects was to guess the position or the azimuth direction of the invisible sun with the naked eye. We calculated means and standard deviations of the estimated solar positions and azimuth angles to characterize the accuracy of the visual sun location. Our data do not support the common belief that the invisible sun can be located quite accurately from the celestial brightness and/or color patterns under cloudy or twilight conditions. Although our results underestimate the accuracy of visual sun location by experienced Viking navigators, the mentioned counterargument cannot be taken seriously as a valid criticism of the theory of the alleged
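
    The record states that means and standard deviations of the estimated solar positions and azimuth angles were computed, without specifying the method. Because azimuths are angles, a circular mean and circular standard deviation are a natural choice; that particular treatment, and the data below, are assumptions made only for illustration.

```python
# Hedged sketch: circular summary statistics for estimated solar azimuths
# (hypothetical estimates; the true azimuth is also hypothetical).
import numpy as np

true_azimuth = 230.0                                     # degrees
estimates = np.array([210.0, 245.0, 260.0, 190.0, 300.0, 225.0])

rad = np.deg2rad(estimates)
mean_vec = np.mean(np.exp(1j * rad))                     # resultant vector
circ_mean = np.rad2deg(np.angle(mean_vec)) % 360.0
circ_sd = np.rad2deg(np.sqrt(-2.0 * np.log(np.abs(mean_vec))))

print(f"Circular mean: {circ_mean:.1f} deg (true {true_azimuth} deg), "
      f"circular SD: {circ_sd:.1f} deg")
```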

  6. Visualization of Traffic Accidents

    Science.gov (United States)

    Wang, Jie; Shen, Yuzhong; Khattak, Asad

    2010-01-01

    Traffic accidents have tremendous impact on society. Annually approximately 6.4 million vehicle accidents are reported by police in the US and nearly half of them result in catastrophic injuries. Visualizations of traffic accidents using geographic information systems (GIS) greatly facilitate handling and analysis of traffic accidents in many aspects. Environmental Systems Research Institute (ESRI), Inc. is the world leader in GIS research and development. ArcGIS, a software package developed by ESRI, has the capabilities to display events associated with a road network, such as accident locations, and pavement quality. But when event locations related to a road network are processed, the existing algorithm used by ArcGIS does not utilize all the information related to the routes of the road network and produces erroneous visualization results of event locations. This software bug causes serious problems for applications in which accurate location information is critical for emergency responses, such as traffic accidents. This paper aims to address this problem and proposes an improved method that utilizes all relevant information of traffic accidents, namely, route number, direction, and mile post, and extracts correct event locations for accurate traffic accident visualization and analysis. The proposed method generates a new shape file for traffic accidents and displays them on top of the existing road network in ArcGIS. Visualization of traffic accidents along Hampton Roads Bridge Tunnel is included to demonstrate the effectiveness of the proposed method.
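
    The key step the abstract argues ArcGIS mishandles is linear referencing: turning a route identifier, direction, and milepost into a point on the route geometry. The sketch below shows the milepost-to-coordinate interpolation in isolation; the route polyline and milepost values are hypothetical and this is not the authors' ArcGIS workflow.

```python
# Hedged sketch: interpolate a point at a given milepost along a route polyline.
import math

def point_at_milepost(polyline, milepost):
    """Return the (x, y) point lying `milepost` units along the polyline."""
    remaining = milepost
    for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        if remaining <= seg:
            t = remaining / seg
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        remaining -= seg
    return polyline[-1]                       # milepost beyond the route end

route = [(0.0, 0.0), (3.0, 4.0), (3.0, 9.0)]  # segment lengths 5 and 5
print(point_at_milepost(route, 7.0))          # -> (3.0, 6.0)
```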

  7. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect-an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  8. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  9. Implied Spatial Meaning and Visuospatial Bias: Conceptual Processing Influences Processing of Visual Targets and Distractors.

    Directory of Open Access Journals (Sweden)

    Davood G Gozli

    Full Text Available Concepts with implicit spatial meaning (e.g., "hat", "boots") can bias visual attention in space. This result is typically found in experiments with a single visual target per trial, which can appear at one of two locations (e.g., above vs. below). Furthermore, the interaction is typically found in the form of speeded responses to targets appearing at the compatible location (e.g., faster responses to a target above fixation, after reading "hat"). It has been argued that these concept-space interactions could also result from experimentally-induced associations between the binary set of locations and the conceptual categories with upward and downward meaning. Thus, rather than reflecting a conceptually driven spatial bias, the effect could reflect a benefit for compatible cue-target sequences that occurs only after target onset. We addressed these concerns by going beyond a binary set of locations and employing a search display consisting of four items (above, below, left, and right). Within each search trial, before performing a visual search task, participants performed a conceptual task involving concepts with implicit upward or downward meaning. The search display, in addition to including a target, could also include a salient distractor. Assuming a conceptually driven visual bias, we expected to observe, first, a benefit for target processing at the compatible location and, second, an increase in the cost of the salient distractor. The findings confirmed both predictions, suggesting that concepts do indeed generate a spatial bias. Finally, results from a control experiment, without the conceptual task, suggest the presence of an axis-specific effect, in addition to the location-specific effect, suggesting that concepts might cause both location-specific and axis-specific spatial bias. Taken together, our findings provide additional support for the involvement of spatial processing in conceptual understanding.

  10. Visual motion-sensitive neurons in the bumblebee brain convey information about landmarks during a navigational task

    Directory of Open Access Journals (Sweden)

    Marcel eMertes

    2014-09-01

    Full Text Available Bees use visual memories to find the spatial location of previously learnt food sites. Characteristic learning flights help acquiring these memories at newly discovered foraging locations, where landmarks - salient objects in the vicinity of the goal location - can play an important role in guiding the animal's homing behavior. Although behavioral experiments have shown that bees can use a variety of visual cues to distinguish objects as landmarks, the question of how landmark features are encoded by the visual system is still open. Recently, it could be shown that motion cues are sufficient to allow bees to localize their goal using landmarks that can hardly be discriminated from the background texture. Here, we tested the hypothesis that motion-sensitive neurons in the bee's visual pathway provide information about such landmarks during a learning flight and might, thus, play a role in goal localization. We tracked learning flights of free-flying bumblebees (Bombus terrestris) in an arena with distinct visual landmarks, reconstructed the visual input during these flights, and replayed ego-perspective movies to tethered bumblebees while recording the activity of direction-selective wide-field neurons in their optic lobe. By comparing neuronal responses during a typical learning flight with responses to targeted modifications of landmark properties in this movie, we demonstrate that these objects are indeed represented in the bee's visual motion pathway. We find that object-induced responses vary little with object texture, which is in agreement with behavioral evidence. These neurons thus convey information about landmark properties that are useful for view-based homing.

  11. The visual extent of an object: suppose we know the object locations

    NARCIS (Netherlands)

    Uijlings, J.R.R.; Smeulders, A.W.M.; Scha, R.J.H.

    2012-01-01

    The visual extent of an object reaches beyond the object itself. This is a long standing fact in psychology and is reflected in image retrieval techniques which aggregate statistics from the whole image in order to identify the object within. However, it is unclear to what degree and how the visual

  12. A preliminary census of engineering activities located in Sicily (Southern Italy) which may "potentially" induce seismicity

    Science.gov (United States)

    Aloisi, Marco; Briffa, Emanuela; Cannata, Andrea; Cannavò, Flavio; Gambino, Salvatore; Maiolino, Vincenza; Maugeri, Roberto; Palano, Mimmo; Privitera, Eugenio; Scaltrito, Antonio; Spampinato, Salvatore; Ursino, Andrea; Velardita, Rosanna

    2015-04-01

    The seismic events caused by human engineering activities are commonly termed "triggered" and "induced". This class of earthquakes, though characterized by low-to-moderate magnitude, has significant social and economic implications, since such events occur close to the engineering activity responsible for triggering/inducing them, can be felt by the inhabitants living nearby, and may even produce damage. One of the first well-documented examples of induced seismicity was observed in 1932 in Algeria, when a shallow magnitude 3.0 earthquake occurred close to the Oued Fodda Dam. Thanks to the continuous global improvement of seismic monitoring networks, numerous other examples of human-induced earthquakes have since been identified. Induced earthquakes occur at shallow depths and are related to a number of human activities, such as fluid injection under high pressure (e.g. waste-water disposal in deep wells, hydrofracturing activities in enhanced geothermal systems and oil recovery, shale-gas fracking, natural gas and CO2 storage), hydrocarbon exploitation, groundwater extraction, deep underground mining, large water impoundments and underground nuclear tests. In Italy, induced/triggered seismicity is suspected to have contributed to the disaster of the Vajont dam in 1963. Despite this suspected case and the presence in the Italian territory of a large number of engineering activities "capable" of inducing seismicity, no extensive research on this topic has been conducted to date. Hence, in order to improve knowledge and correctly assess the potential hazard at specific locations in the future, we have started a preliminary study of the entire range of engineering activities currently located in Sicily (Southern Italy) which may "potentially" induce seismicity. To this end, we performed: • a preliminary census of all engineering activities located in the study area, by collecting all the useful information coming from available on-line catalogues; • a detailed compilation

  13. Cannabis cue-induced brain activation correlates with drug craving in limbic and visual salience regions: Preliminary results

    Science.gov (United States)

    Charboneau, Evonne J.; Dietrich, Mary S.; Park, Sohee; Cao, Aize; Watkins, Tristan J; Blackford, Jennifer U; Benningfield, Margaret M.; Martin, Peter R.; Buchowski, Maciej S.; Cowan, Ronald L.

    2013-01-01

    Craving is a major motivator underlying drug use and relapse but the neural correlates of cannabis craving are not well understood. This study sought to determine whether visual cannabis cues increase cannabis craving and whether cue-induced craving is associated with regional brain activation in cannabis-dependent individuals. Cannabis craving was assessed in 16 cannabis-dependent adult volunteers while they viewed cannabis cues during a functional MRI (fMRI) scan. The Marijuana Craving Questionnaire was administered immediately before and after each of three cannabis cue-exposure fMRI runs. FMRI blood-oxygenation-level-dependent (BOLD) signal intensity was determined in regions activated by cannabis cues to examine the relationship of regional brain activation to cannabis craving. Craving scores increased significantly following exposure to visual cannabis cues. Visual cues activated multiple brain regions, including inferior orbital frontal cortex, posterior cingulate gyrus, parahippocampal gyrus, hippocampus, amygdala, superior temporal pole, and occipital cortex. Craving scores at baseline and at the end of all three runs were significantly correlated with brain activation during the first fMRI run only, in the limbic system (including amygdala and hippocampus) and paralimbic system (superior temporal pole), and visual regions (occipital cortex). Cannabis cues increased craving in cannabis-dependent individuals and this increase was associated with activation in the limbic, paralimbic, and visual systems during the first fMRI run, but not subsequent fMRI runs. These results suggest that these regions may mediate visually cued aspects of drug craving. This study provides preliminary evidence for the neural basis of cue-induced cannabis craving and suggests possible neural targets for interventions targeted at treating cannabis dependence. PMID:24035535

  14. Cue-induced craving among inhalant users: Development and preliminary validation of a visual cue paradigm.

    Science.gov (United States)

    Jain, Shobhit; Dhawan, Anju; Kumaran, S Senthil; Pattanayak, Raman Deep; Jain, Raka

    2017-12-01

    Cue-induced craving is known to be associated with a higher risk of relapse, wherein drug-specific cues become conditioned stimuli eliciting conditioned responses. Cue-reactivity paradigms are important tools to study psychological responses and functional neuroimaging changes. However, to date, there has been no specific study or validated paradigm for inhalant cue-induced craving research. The study aimed to develop and validate a visual cue stimulus for inhalant cue-associated craving. The first step (picture selection) involved screening and careful selection of 30 cue- and 30 neutral-pictures based on their relevance for naturalistic settings. In the second step (time optimization), a random selection of ten cue-pictures each was presented for 4 s, 6 s, and 8 s to seven adolescent male inhalant users, and the pre-post craving response was compared using a Visual Analogue Scale (VAS) for each picture and presentation time. In the third step (validation), craving responses for each of the 30 cue- and 30 neutral-pictures were analysed among 20 adolescent inhalant users. Findings revealed a significant difference between before and after craving responses for the cue-pictures, but not for the neutral-pictures. Using ROC curves, pictures were arranged in order of craving intensity. Finally, the 20 best cue- and 20 neutral-pictures were used for the development of a 480-s visual cue paradigm. This is the first study to systematically develop an inhalant cue picture paradigm which can be used as a tool to examine cue-induced craving in neurobiological studies. Further research, including further validation in larger and more diverse samples, is required. Copyright © 2017 Elsevier B.V. All rights reserved.
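
    The validation step follows a common pattern: compare pre- and post-exposure craving ratings per picture type, then rank the cue pictures by how strongly they raise craving. The abstract does not specify the exact statistics, so the paired test and ranking criterion below are assumptions, and the VAS data are simulated.

```python
# Hedged sketch of the validation analysis pattern (simulated VAS ratings).
import numpy as np
from scipy import stats

n_subjects, n_cues = 20, 30
rng = np.random.default_rng(1)
pre  = rng.uniform(0, 3, size=(n_subjects, n_cues))            # VAS before each cue picture
post = pre + rng.normal(2.0, 1.0, size=(n_subjects, n_cues))   # craving rises after cues

t, p = stats.ttest_rel(post.mean(axis=1), pre.mean(axis=1))
print(f"Cue pictures, pre vs post: t = {t:.2f}, p = {p:.2g}")

# Rank pictures by mean craving increase to select the most evocative ones
delta = (post - pre).mean(axis=0)
best_pictures = np.argsort(delta)[::-1][:20]
print("Indices of the 20 strongest cue pictures:", best_pictures)
```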

  15. Effects of wing locations on wing rock induced by forebody vortices

    Directory of Open Access Journals (Sweden)

    Ma Baofeng

    2016-10-01

    Full Text Available Previous studies have shown that asymmetric vortex wakes over slender bodies exhibit a multi-vortex structure with an alternate arrangement along the body axis at high angle of attack. In this investigation, the effects of wing location along the body axis on wing rock induced by forebody vortices were studied experimentally at a subcritical Reynolds number based on the body diameter. An artificial perturbation was added onto the nose tip to fix the orientations of the forebody vortices. Particle image velocimetry was used to identify the flow patterns of the forebody vortices in static situations, and time histories of wing rock were obtained using a free-to-roll rig. The results show that the wing location can significantly affect the motion patterns of wing rock owing to the variation of the multi-vortex patterns of the forebody vortices. When the wing location makes the forebody vortices form a two-vortex pattern, the wing body regularly exhibits divergence and fixed-point motion as the azimuthal position of the tip perturbation varies. If a three-vortex pattern exists over the wing, however, the wing-rock patterns depend on the influence of the highest vortex and the newborn vortex. When the three vortices together influence the wing flow, the wing-rock patterns regularly exhibit fixed points and limit-cycle oscillations. As the wing moves backwards, the newborn vortex becomes stronger, and the wing-rock patterns become fixed points, chaotic oscillations, and limit-cycle oscillations. With further backward movement of the wing, the vortices are far away from the upper surface of the wing, and the motions exhibit divergence, limit-cycle oscillations and fixed points. For the rearmost wing location, the wing body exhibits stochastic oscillations and fixed points.

  16. The impact of early visual cortex transcranial magnetic stimulation on visual working memory precision and guess rate.

    Directory of Open Access Journals (Sweden)

    Rosanne L Rademaker

    Full Text Available Neuroimaging studies have demonstrated that activity patterns in early visual areas predict stimulus properties actively maintained in visual working memory. Yet, the mechanisms by which such information is represented remain largely unknown. In this study, observers remembered the orientations of 4 briefly presented gratings, one in each quadrant of the visual field. A 10Hz Transcranial Magnetic Stimulation (TMS) triplet was applied directly at stimulus offset, or midway through a 2-second delay, targeting early visual cortex corresponding retinotopically to a sample item in the lower hemifield. Memory for one of the four gratings was probed at random, and participants reported this orientation via method of adjustment. Recall errors were smaller when the visual field location targeted by TMS overlapped with that of the cued memory item, compared to errors for stimuli probed diagonally to TMS. This implied topographic storage of orientation information, and a memory-enhancing effect at the targeted location. Furthermore, early pulses impaired performance at all four locations, compared to late pulses. Next, response errors were fit empirically using a mixture model to characterize memory precision and guess rates. Memory was more precise for items proximal to the pulse location, irrespective of pulse timing. Guesses were more probable with early TMS pulses, regardless of stimulus location. Thus, while TMS administered at the offset of the stimulus array might disrupt early-phase consolidation in a non-topographic manner, TMS also boosts the precise representation of an item at its targeted retinotopic location, possibly by increasing attentional resources or by injecting a beneficial amount of noise.

  17. The impact of early visual cortex transcranial magnetic stimulation on visual working memory precision and guess rate.

    Science.gov (United States)

    Rademaker, Rosanne L; van de Ven, Vincent G; Tong, Frank; Sack, Alexander T

    2017-01-01

    Neuroimaging studies have demonstrated that activity patterns in early visual areas predict stimulus properties actively maintained in visual working memory. Yet, the mechanisms by which such information is represented remain largely unknown. In this study, observers remembered the orientations of 4 briefly presented gratings, one in each quadrant of the visual field. A 10Hz Transcranial Magnetic Stimulation (TMS) triplet was applied directly at stimulus offset, or midway through a 2-second delay, targeting early visual cortex corresponding retinotopically to a sample item in the lower hemifield. Memory for one of the four gratings was probed at random, and participants reported this orientation via method of adjustment. Recall errors were smaller when the visual field location targeted by TMS overlapped with that of the cued memory item, compared to errors for stimuli probed diagonally to TMS. This implied topographic storage of orientation information, and a memory-enhancing effect at the targeted location. Furthermore, early pulses impaired performance at all four locations, compared to late pulses. Next, response errors were fit empirically using a mixture model to characterize memory precision and guess rates. Memory was more precise for items proximal to the pulse location, irrespective of pulse timing. Guesses were more probable with early TMS pulses, regardless of stimulus location. Thus, while TMS administered at the offset of the stimulus array might disrupt early-phase consolidation in a non-topographic manner, TMS also boosts the precise representation of an item at its targeted retinotopic location, possibly by increasing attentional resources or by injecting a beneficial amount of noise.
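
    The mixture model mentioned here treats each response error as coming either from a noisy memory of the target (a von Mises distribution whose concentration reflects precision) or from a random guess (a uniform distribution whose weight is the guess rate). The sketch below fits such a model by maximum likelihood to simulated errors; it is in the spirit of standard working-memory mixture models, not the authors' fitting code, and all numbers are hypothetical.

```python
# Hedged sketch: von Mises + uniform mixture fit for precision and guess rate.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

rng = np.random.default_rng(2)
n_trials, true_kappa, true_guess = 300, 8.0, 0.2
is_guess = rng.random(n_trials) < true_guess
errors = np.where(is_guess,
                  rng.uniform(-np.pi, np.pi, n_trials),      # random guesses
                  vonmises.rvs(true_kappa, size=n_trials))   # noisy memory responses

def neg_log_likelihood(params, err):
    kappa, g = params
    if kappa <= 0 or not (0.0 <= g <= 1.0):
        return np.inf
    like = (1 - g) * vonmises.pdf(err, kappa) + g / (2 * np.pi)
    return -np.sum(np.log(like))

fit = minimize(neg_log_likelihood, x0=[5.0, 0.1], args=(errors,), method="Nelder-Mead")
kappa_hat, g_hat = fit.x
print(f"Estimated precision kappa = {kappa_hat:.2f}, guess rate = {g_hat:.2f}")
```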

  18. Multiple spatial frequency channels in human visual perceptual memory.

    Science.gov (United States)

    Nemes, V A; Whitaker, D; Heron, J; McKeefry, D J

    2011-12-08

    Current models of short-term visual perceptual memory invoke mechanisms that are closely allied to low-level perceptual discrimination mechanisms. The purpose of this study was to investigate the extent to which human visual perceptual memory for spatial frequency is based upon multiple, spatially tuned channels similar to those found in the earliest stages of visual processing. To this end we measured how performance on a delayed spatial frequency discrimination paradigm was affected by the introduction of interfering or 'memory masking' stimuli of variable spatial frequency during the delay period. Masking stimuli were shown to induce shifts in the points of subjective equality (PSE) when their spatial frequencies were within a bandwidth of 1.2 octaves of the reference spatial frequency. When mask spatial frequencies differed by more than this value, there was no change in the PSE from baseline levels. This selective pattern of masking was observed for different spatial frequencies and demonstrates the existence of multiple, spatially tuned mechanisms in visual perceptual memory. Memory masking effects were also found to occur for horizontal separations of up to 6 deg between the masking and test stimuli and lacked any orientation selectivity. These findings add further support to the view that low-level sensory processing mechanisms form the basis for the retention of spatial frequency information in perceptual memory. However, the broad range of transfer of memory masking effects across spatial location and other dimensions indicates more long range, long duration interactions between spatial frequency channels that are likely to rely on contributions from neural processes located in higher visual areas. Copyright © 2011 Elsevier Ltd. All rights reserved.
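
    The 1.2-octave bandwidth criterion used in this record is simply a log2 ratio between mask and reference spatial frequencies. A small illustrative check follows; the frequency values are hypothetical, not the study's stimuli.

        import numpy as np

        def octave_separation(f_mask, f_ref):
            """Separation between two spatial frequencies in octaves."""
            return abs(np.log2(f_mask / f_ref))

        reference = 2.0  # cycles per degree (hypothetical reference)
        for mask in [1.0, 1.5, 2.5, 4.0, 6.0]:
            sep = octave_separation(mask, reference)
            effect = "masking expected" if sep <= 1.2 else "no PSE shift expected"
            print(f"mask {mask:.1f} c/deg: {sep:.2f} octaves -> {effect}")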

  19. P2-13: Location word Cues' Effect on Location Discrimination Task: Cross-Modal Study

    Directory of Open Access Journals (Sweden)

    Satoko Ohtsuka

    2012-10-01

    Full Text Available As is well known, participants are slower and make more errors in responding to the display color of an incongruent color word than a congruent one. This traditional Stroop effect is often accounted for by relatively automatic and dominant word processing. Although the word dominance account has been widely supported, it is not clear to what extent it holds across perceptual tasks. Here we aimed to examine whether the word dominance effect is observed in location Stroop tasks and in audio-visual situations. The participants were required to press a key according to the location of visual (Experiment 1) and auditory (Experiment 2) targets, left or right, as soon as possible. A cue of written (Experiments 1a and 2a) or spoken (Experiments 1b and 2b) location words, "left" or "right", was presented on the left or right side of the fixation with cue lead times (CLT) of 200 ms and 1200 ms. Reaction time from target presentation to key press was recorded as a dependent variable. The results were that the location validity effect was marked in within-modality trials but less so in cross-modality trials. The word validity effect was strong in within- but not in cross-modality trials. The CLT gave some effect of inhibition of return. So word dominance could be less effective in location tasks and in cross-modal situations. The spatial correspondence seems to overcome the word effect.

  20. Modulation of Neuronal Responses by Exogenous Attention in Macaque Primary Visual Cortex.

    Science.gov (United States)

    Wang, Feng; Chen, Minggui; Yan, Yin; Zhaoping, Li; Li, Wu

    2015-09-30

    Visual perception is influenced by attention deployed voluntarily or triggered involuntarily by salient stimuli. Modulation of visual cortical processing by voluntary or endogenous attention has been extensively studied, but much less is known about how involuntary or exogenous attention affects responses of visual cortical neurons. Using implanted microelectrode arrays, we examined the effects of exogenous attention on neuronal responses in the primary visual cortex (V1) of awake monkeys. A bright annular cue was flashed either around the receptive fields of recorded neurons or in the opposite visual field to capture attention. A subsequent grating stimulus probed the cue-induced effects. In a fixation task, when the cue-to-probe stimulus onset asynchrony (SOA) was short, the cue enhanced neuronal responses to the probe at the cued location and shortened the animal's reaction time; this enhancement diminished with repeated exposures when the animal was not required to react to the probe, and a second cue flashed simultaneously in the opposite visual field weakened or diminished both the physiological and behavioral cueing effects. Our findings indicate that exogenous attention significantly modulates V1 responses and that the modulation strength depends on both novelty and task relevance of the stimulus. Significance statement: Visual attention can be involuntarily captured by a sudden appearance of a conspicuous object, allowing rapid reactions to unexpected events of significance. The current study discovered a correlate of this effect in monkey primary visual cortex. An abrupt, salient, flash enhanced neuronal responses, and shortened the animal's reaction time, to a subsequent visual probe stimulus at the same location. However, the enhancement of the neural responses diminished after repeated exposures to this flash if the animal was not required to react to the probe. Moreover, a second, simultaneous, flash at another location weakened the neuronal and behavioral effects of the first one. These findings revealed, beyond the observations reported so far, the effects of exogenous attention in the brain. Copyright © 2015 the authors.

  1. Visually induced analgesia during massage treatment in chronic back pain patients.

    Science.gov (United States)

    Löffler, A; Trojan, J; Zieglgänsberger, W; Diers, M

    2017-11-01

    Previous findings suggest that watching sites of experimental and chronic pain can exert an analgesic effect. Our present study investigates whether watching one's back during massage increases the analgesic effect of this treatment in chronic back pain patients. Twenty patients with chronic back pain were treated with a conventional massage therapy. During this treatment, patients received real-time video feedback of their own back. Watching a neutral object, a video of another person of the same sex being massaged, a picture of their own back, and keeping one's eyes closed were used as controls. These conditions were presented in randomized order on five separate days. All conditions yielded significant decreases in habitual pain intensity. The effect of real-time video feedback of their own back on massage treatment was the strongest and differed significantly from the effect of watching a neutral object, but not from the other control conditions, which may have induced slight effects of their own. Repeated real-time video feedback may be useful during massage treatment of chronic pain. This study shows that visually induced analgesia during massage treatment can be helpful in alleviating chronic pain. © 2017 European Pain Federation - EFIC®.

  2. Seeing the sound after visual loss: functional MRI in acquired auditory-visual synesthesia.

    Science.gov (United States)

    Yong, Zixin; Hsieh, Po-Jang; Milea, Dan

    2017-02-01

    Acquired auditory-visual synesthesia (AVS) is a rare neurological sign, in which specific auditory stimulation triggers visual experience. In this study, we used event-related fMRI to explore the brain regions correlated with acquired monocular sound-induced phosphenes, which occurred 2 months after unilateral visual loss due to an ischemic optic neuropathy. During the fMRI session, 1-s pure tones at various pitches were presented to the patient, who was asked to report occurrence of sound-induced phosphenes by pressing one of the two buttons (yes/no). The brain activation during phosphene-experienced trials was contrasted with non-phosphene trials and compared to results obtained in one healthy control subject who underwent the same fMRI protocol. Our results suggest, for the first time, that acquired AVS occurring after visual impairment is associated with bilateral activation of primary and secondary visual cortex, possibly due to cross-wiring between auditory and visual sensory modalities.

  3. If it's not there, where is it? Locating illusory conjunctions.

    Science.gov (United States)

    Hazeltine, R E; Prinzmetal, W; Elliott, W

    1997-02-01

    There is evidence that complex objects are decomposed by the visual system into features, such as shape and color. Consistent with this theory is the phenomenon of illusory conjunctions, which occur when features are incorrectly combined to form an illusory object. We analyzed the perceived location of illusory conjunctions to study the roles of color and shape in the location of visual objects. In Experiments 1 and 2, participants located illusory conjunctions about halfway between the veridical locations of the component features. Experiment 3 showed that the distribution of perceived locations was not the mixture of two distributions centered at the 2 feature locations. Experiment 4 replicated these results with an identification task rather than a detection task. We concluded that the locations of illusory conjunctions were not arbitrary but were determined by both constituent shape and color.
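
    The question posed in Experiment 3 of this record, whether perceived locations form a single distribution midway between the two features or a mixture of two distributions centred at the feature locations, can be framed as a simple model comparison. The sketch below illustrates that comparison under assumed Gaussian components and synthetic data; it is not the study's analysis.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        # feature positions (arbitrary units) and synthetic perceived locations near the midpoint
        loc_a, loc_b, sigma = 0.0, 4.0, 0.8
        perceived = rng.normal((loc_a + loc_b) / 2.0, sigma, size=200)

        # model 1: single Gaussian at the midpoint; model 2: 50/50 mixture at the two feature locations
        ll_mid = np.sum(norm.logpdf(perceived, (loc_a + loc_b) / 2.0, sigma))
        ll_mix = np.sum(np.log(0.5 * norm.pdf(perceived, loc_a, sigma) +
                               0.5 * norm.pdf(perceived, loc_b, sigma)))
        print(f"log-likelihood midpoint model: {ll_mid:.1f}, mixture model: {ll_mix:.1f}")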

  4. Training of ultra-fast speech comprehension induces functional reorganization of the central-visual system in late-blind humans

    Directory of Open Access Journals (Sweden)

    Susanne eDietrich

    2013-10-01

    Full Text Available Individuals suffering from vision loss of a peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables (syl) per second (s) – exceeding by far the maximum performance level of untrained listeners (ca. 8 syl/s). Previous findings indicate the central-visual system to contribute to the processing of accelerated speech in blind subjects. As an extension, the present training study addresses the issue whether acquisition of ultra-fast (18 syl/s) speech perception skills induces de novo central-visual hemodynamic activation in late-blind participants. Furthermore, we asked to what extent subjects with normal or residual vision can improve understanding of accelerated verbal utterances by means of specific training measures. To these ends, functional magnetic resonance imaging (fMRI) was performed while subjects were listening to forward and reversed sentence utterances of moderately fast and ultra-fast syllable rates (8 or 18 syl/s) prior to and after a training period of ca. six months. Four of six participants showed – independently of residual visual functions – considerable enhancement of ultra-fast speech perception (about 70 percentage points of correctly repeated words), whereas behavioral performance did not change in the two remaining participants. Only subjects with very low visual acuity displayed training-induced hemodynamic activation of the central-visual system. By contrast, participants with moderately impaired or even normal visual acuity showed, instead, increased right-hemispheric frontal or bilateral anterior temporal lobe responses after training. All subjects with significant training effects displayed a concomitant increase of hemodynamic activation of left-hemispheric SMA. In spite of similar behavioral performance, trained experts appear to use distinct strategies of ultra-fast speech processing depending on whether the occipital cortex is still deployed for visual processing.

  5. Luminance gradient at object borders communicates object location to the human oculomotor system.

    Science.gov (United States)

    Kilpeläinen, Markku; Georgeson, Mark A

    2018-01-25

    The locations of objects in our environment constitute arguably the most important piece of information our visual system must convey to facilitate successful visually guided behaviour. However, the relevant objects are usually not point-like and do not have one unique location attribute. Relatively little is known about how the visual system represents the location of such large objects, as visual processing is, at both the neural and perceptual levels, highly edge dominated. In this study, human observers made saccades to the centres of luminance-defined squares (width 4 deg), which appeared at random locations (8 deg eccentricity). The phase structure of the square was manipulated such that the points of maximum luminance gradient at the square's edges shifted from trial to trial. The average saccade endpoints of all subjects followed those shifts in remarkable quantitative agreement. Further experiments showed that the shifts were caused by the edge manipulations, not by changes in luminance structure near the centre of the square or outside the square. We conclude that the human visual system programs saccades to large luminance-defined square objects based on edge locations derived from the points of maximum luminance gradients at the square's edges.
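
    The "point of maximum luminance gradient" that the saccades followed can be located numerically from a one-dimensional luminance profile. The sketch below uses a synthetic blurred edge; the blur width and sampling are illustrative assumptions, not the study's stimuli.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        # synthetic horizontal luminance profile: a blurred edge of a bright square on a dark background
        x = np.linspace(-2.0, 2.0, 2001)               # position in degrees
        edge = (x > 0).astype(float)                   # ideal step edge at 0 deg
        luminance = gaussian_filter1d(edge, sigma=50)  # blur expressed in samples (~0.1 deg)

        gradient = np.gradient(luminance, x)
        max_grad_location = x[np.argmax(np.abs(gradient))]
        print(f"point of maximum luminance gradient: {max_grad_location:.3f} deg")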

  6. Visual sensations induced by relativistic pions

    International Nuclear Information System (INIS)

    McNulty, P.J.; Pease, V.P.; Bond, V.P.

    1976-01-01

    Visual sensations were experienced when bursts of high-energy pions passed through the dark-adapted right eyes of three human subjects. The threshold for a visual sensation was typically 1 to 3 μrad at the retina. Data are presented to show that the mechanism is Cerenkov radiation generated within the vitreous humor. Threshold measurements agree with published optical data. A comparison is made between our observations and the light flashes observed in deep space by Apollo astronauts

  7. Evaluation of adaptation to visually induced motion sickness based on the maximum cross-correlation between pulse transmission time and heart rate

    Directory of Open Access Journals (Sweden)

    Chiba Shigeru

    2007-09-01

    Full Text Available Background: Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that include unstable visual images presented on a wide-field screen or a head-mounted display tend to induce motion sickness. Motion sickness induced while using a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repetitive exposure to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. Methods: An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of motion sickness with a subjective score and with the physiological index ρmax, defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time, which is considered to reflect autonomic nervous activity. Results: The results showed adaptation to visually induced motion sickness with repetitive presentation of the same image in both the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. It was also possible to identify the part of the video image related to motion sickness by analyzing changes in ρmax over time. Conclusion: The physiological index ρmax is a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems with new image technologies.
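
    The index ρmax described above is the peak of the normalized cross-correlation between heart rate and pulse wave transmission time taken over a range of lags. A minimal sketch of that computation follows; the signals, sampling, and lag range are synthetic assumptions, not the study's recordings.

        import numpy as np

        def rho_max(heart_rate, ptt, max_lag):
            """Maximum cross-correlation coefficient between two signals over +/- max_lag samples."""
            hr = (heart_rate - heart_rate.mean()) / heart_rate.std()
            pt = (ptt - ptt.mean()) / ptt.std()
            n = len(hr)
            rhos = []
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    r = np.corrcoef(hr[lag:], pt[:n - lag])[0, 1]
                else:
                    r = np.corrcoef(hr[:n + lag], pt[-lag:])[0, 1]
                rhos.append(r)
            return np.max(np.abs(rhos))

        # synthetic, 1-sample-per-second heart rate and pulse transmission time with a lagged relation
        rng = np.random.default_rng(2)
        hr = np.sin(np.linspace(0, 20 * np.pi, 600)) + 0.3 * rng.standard_normal(600)
        ptt = -np.roll(hr, 5) + 0.3 * rng.standard_normal(600)
        print(f"rho_max = {rho_max(hr, ptt, max_lag=30):.2f}")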

  8. Evolutionary relevance facilitates visual information processing.

    Science.gov (United States)

    Jackson, Russell E; Calvillo, Dusti P

    2013-11-03

    Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  9. Wild rufous hummingbirds use local landmarks to return to rewarded locations.

    Science.gov (United States)

    Pritchard, David J; Scott, Renee D; Healy, Susan D; Hurly, Andrew T

    2016-01-01

    Animals may remember an important location with reference to one or more visual landmarks. In the laboratory, birds and mammals often preferentially use landmarks near a goal ("local landmarks") to return to that location at a later date. Although we know very little about how animals in the wild use landmarks to remember locations, mammals in the wild appear to prefer to use distant landmarks to return to rewarded locations. To examine what cues wild birds use when returning to a goal, we trained free-living hummingbirds to search for a reward at a location that was specified by three nearby visual landmarks. Following training we expanded the landmark array to test the extent that the birds relied on the local landmarks to return to the reward. During the test the hummingbirds' search was best explained by the birds having used the experimental landmarks to remember the reward location. How the birds used the landmarks was not clear and seemed to change over the course of each test. These wild hummingbirds, then, can learn locations in reference to nearby visual landmarks. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Pharmacological Mechanisms of Cortical Enhancement Induced by the Repetitive Pairing of Visual/Cholinergic Stimulation.

    Directory of Open Access Journals (Sweden)

    Jun-Il Kang

    Full Text Available Repetitive visual training paired with electrical activation of cholinergic projections to the primary visual cortex (V1) induces long-term enhancement of cortical processing in response to the visual training stimulus. To better determine the receptor subtypes mediating this effect, selective pharmacological blockade of V1 nicotinic (nAChR), M1 and M2 muscarinic (mAChR), or GABAergic A (GABAAR) receptors was performed during the training session, and visual evoked potentials (VEPs) were recorded before and after training. The training session consisted of the exposure of awake, adult rats to an orientation-specific 0.12 CPD grating paired with electrical stimulation of the basal forebrain for 10 minutes per day over 1 week. Pharmacological agents were infused intracortically during this period. The post-training VEP amplitude was significantly increased compared to the pre-training values for the trained spatial frequency and for adjacent spatial frequencies up to 0.3 CPD, suggesting a long-term increase of V1 sensitivity. This increase was totally blocked by the nAChR antagonist as well as by the M2 mAChR and GABAAR antagonists. Moreover, administration of the M2 mAChR antagonist also significantly decreased the amplitude of the control VEPs, suggesting a suppressive effect on cortical responsiveness. However, the M1 mAChR antagonist blocked the increase of the VEP amplitude only for the high spatial frequency (0.3 CPD), suggesting that the role of M1 was limited to the spread of the enhancement effect to a higher spatial frequency. More generally, all the drugs used blocked the VEP increase at 0.3 CPD. Further, each of the aforementioned receptor antagonists blocked training-induced changes in gamma and beta band oscillations. These findings demonstrate that visual training coupled with cholinergic stimulation improved perceptual sensitivity by enhancing cortical responsiveness in V1. This enhancement is mainly mediated by n

  11. Sensitivity to the visual field origin of natural image patches in human low-level visual cortex

    Directory of Open Access Journals (Sweden)

    Damien J. Mannion

    2015-06-01

    Full Text Available Asymmetries in the response to visual patterns in the upper and lower visual fields (above and below the centre of gaze) have been associated with ecological factors relating to the structure of typical visual environments. Here, we investigated whether the content of the upper and lower visual field representations in low-level regions of human visual cortex is specialised for visual patterns that arise from the upper and lower visual fields in natural images. We presented image patches, drawn from above or below the centre of gaze of an observer navigating a natural environment, to either the upper or lower visual fields of human participants (n = 7) while we used functional magnetic resonance imaging (fMRI) to measure the magnitude of evoked activity in the visual areas V1, V2, and V3. We found a significant interaction between the presentation location (upper or lower visual field) and the image patch source location (above or below fixation): the responses to lower visual field presentation were significantly greater for image patches sourced from below than above fixation, while the responses in the upper visual field were not significantly different for image patches sourced from above and below fixation. This finding demonstrates an association between the representation of the lower visual field in human visual cortex and the structure of the visual input that is likely to be encountered below the centre of gaze.

  12. Unconscious analyses of visual scenes based on feature conjunctions.

    Science.gov (United States)

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved.

  13. Enhancement and suppression in the visual field under perceptual load.

    Science.gov (United States)

    Parks, Nathan A; Beck, Diane M; Kramer, Arthur F

    2013-01-01

    The perceptual load theory of attention proposes that the degree to which visual distractors are processed is a function of the attentional demands of a task: greater demands increase filtering of irrelevant distractors. The spatial configuration of such filtering is unknown. Here, we used steady-state visual evoked potentials (SSVEPs) in conjunction with time-domain event-related potentials (ERPs) to investigate the distribution of load-induced distractor suppression and task-relevant enhancement in the visual field. Electroencephalogram (EEG) was recorded while subjects performed a foveal go/no-go task that varied in perceptual load. Load-dependent distractor suppression was assessed by presenting a contrast reversing ring at one of three eccentricities (2, 6, or 11°) during performance of the go/no-go task. Rings contrast reversed at 8.3 Hz, allowing load-dependent changes in distractor processing to be tracked in the frequency-domain. ERPs were calculated to the onset of stimuli in the load task to examine load-dependent modulation of task-relevant processing. Results showed that the amplitude of the distractor SSVEP (8.3 Hz) was attenuated under high perceptual load (relative to low load) at the most proximal (2°) eccentricity but not at more eccentric locations (6 or 11°). Task-relevant ERPs revealed a significant increase in N1 amplitude under high load. These results are consistent with a center-surround configuration of load-induced enhancement and suppression in the visual field.
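
    Tracking the distractor SSVEP "in the frequency domain", as described above, amounts to reading out the spectral amplitude at the 8.3 Hz tag frequency from the EEG. A minimal sketch of that readout follows; the sampling rate, duration, and synthetic signal are assumptions, not the study's recording parameters.

        import numpy as np

        fs = 500.0                      # assumed sampling rate in Hz
        t = np.arange(0, 10.0, 1.0 / fs)
        tag = 8.3                       # contrast-reversal tag frequency of the ring

        # synthetic EEG: an SSVEP at the tag frequency buried in noise
        rng = np.random.default_rng(3)
        eeg = 0.5 * np.sin(2 * np.pi * tag * t) + rng.standard_normal(t.size)

        spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2.0   # single-sided amplitude spectrum
        freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
        ssvep_amplitude = spectrum[np.argmin(np.abs(freqs - tag))]
        print(f"amplitude at {tag} Hz: {ssvep_amplitude:.2f} (true value 0.5)")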

  14. Enhancement and Suppression in the Visual Field under Perceptual Load

    Directory of Open Access Journals (Sweden)

    Nathan A Parks

    2013-05-01

    Full Text Available The perceptual load theory of attention proposes that the degree to which visual distractors are processed is a function of the attentional demands of a task – greater demands increase filtering of irrelevant distractors. The spatial configuration of such filtering is unknown. Here, we used steady-state visual evoked potentials (SSVEPs) in conjunction with time-domain event-related potentials (ERPs) to investigate the distribution of load-induced distractor suppression and task-relevant enhancement in the visual field. Electroencephalogram (EEG) was recorded while subjects performed a foveal go/no-go task that varied in perceptual load. Load-dependent distractor suppression was assessed by presenting a contrast reversing ring at one of three eccentricities (2°, 6°, or 11°) during performance of the go/no-go task. Rings contrast reversed at 8.3 Hz, allowing load-dependent changes in distractor processing to be tracked in the frequency domain. ERPs were calculated to the onset of stimuli in the load task to examine load-dependent modulation of task-relevant processing. Results showed that the amplitude of the distractor SSVEP (8.3 Hz) was attenuated under high perceptual load (relative to low load) at the most proximal (2°) eccentricity but not at more eccentric locations (6° or 11°). Task-relevant ERPs revealed a significant increase in N1 amplitude under high load. These results are consistent with a center-surround configuration of load-induced enhancement and suppression in the visual field.

  15. Crossmodal influences on visual perception

    Science.gov (United States)

    Shams, Ladan; Kim, Robyn

    2010-09-01

    Vision is generally considered the dominant sensory modality; self-contained and independent of other senses. In this article, we will present recent results that contradict this view, and show that visual perception can be strongly altered by sound and touch, and such alterations can occur even at early stages of processing, as early as primary visual cortex. We will first review the behavioral evidence demonstrating modulation of visual perception by other modalities. As extreme examples of such modulations, we will describe two visual illusions induced by sound, and a visual illusion induced by touch. Next, we will discuss studies demonstrating modulation of activity in visual areas by stimulation of other modalities, and discuss possible pathways that could underpin such interactions. This will be followed by a discussion of how crossmodal interactions can affect visual learning and adaptation. We will review several studies showing crossmodal effects on visual learning. We will conclude with a discussion of computational principles governing these crossmodal interactions, and review several recent studies that demonstrate that these interactions are statistically optimal.

  16. Conscious visual memory with minimal attention

    NARCIS (Netherlands)

    Pinto, Y.; Vandenbroucke, A.R.; Otten, M.; Sligte, I.G.; Seth, A.K.; Lamme, V.A.F.

    2017-01-01

    Is conscious visual perception limited to the locations that a person attends? The remarkable phenomenon of change blindness, which shows that people miss nearly all unattended changes in a visual scene, suggests the answer is yes. However, change blindness is found after visual interference (a mask

  17. Evolutionary Relevance Facilitates Visual Information Processing

    Directory of Open Access Journals (Sweden)

    Russell E. Jackson

    2013-07-01

    Full Text Available Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  18. Notice of retraction: Role of Cerebrospinal Fluid in Spaceflight-induced Ocular Changes and Visual Impairment in Astronauts.

    Science.gov (United States)

    Alperin, Noam; Bagci, Ahmet M; Oliu, Carlos J; Lee, Sang H; Lam, Byron L

    2017-10-16

    Notice of retraction: the article "Role of Cerebral Spinal Fluid in Space Flight Induced Ocular Changes and Visual Impairment in Astronauts" by Alperin et al. This article has been retracted due to security concerns raised by NASA, the sponsoring agency. © RSNA, 2017.

  19. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of the Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, São Paulo, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth, color camera was mobile (installed in a car) but operated from a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time-stamped, allowing comparison of events between cameras and the LLS. Each RAMMER sensor is basically composed of a computer, a Phantom version 9.1 high-speed camera, and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the GPS-surveyed position of the reference object and the result from the visual triangulation method. Lightning return stroke positions estimated with the visual triangulation method were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
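
    At its core, visual triangulation from two time-synchronized cameras reduces to intersecting two bearing lines on the ground plane. The 2D sketch below illustrates that geometry; the camera positions and azimuths are hypothetical, not the RAMMER calibration values.

        import numpy as np

        def triangulate(cam_a, az_a, cam_b, az_b):
            """Intersect two bearing rays (azimuths in degrees, east of north) from two camera positions."""
            d_a = np.array([np.sin(np.radians(az_a)), np.cos(np.radians(az_a))])
            d_b = np.array([np.sin(np.radians(az_b)), np.cos(np.radians(az_b))])
            # solve cam_a + s*d_a = cam_b + t*d_b for s and t
            A = np.column_stack([d_a, -d_b])
            s, t = np.linalg.solve(A, np.asarray(cam_b) - np.asarray(cam_a))
            return np.asarray(cam_a) + s * d_a

        # two hypothetical cameras 13 km apart observing the same return stroke
        camera_1 = (0.0, 0.0)          # x (east), y (north) in km
        camera_2 = (13.0, 0.0)
        print(triangulate(camera_1, 35.0, camera_2, -40.0))  # estimated stroke position in km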

  20. Visual motion transforms visual space representations similarly throughout the human visual hierarchy.

    Science.gov (United States)

    Harvey, Ben M; Dumoulin, Serge O

    2016-02-15

    Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
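
    pRF estimation of the kind described above typically models each voxel's receptive field as a 2D Gaussian whose predicted response at each time point is its overlap with the stimulus aperture. The sketch below shows that forward model only; the visual-field grid, bar stimulus, and pRF parameters are made-up illustrations, not the 7T protocol.

        import numpy as np

        def prf_prediction(x0, y0, sigma, stimulus_frames, xs, ys):
            """Predicted response time course of a Gaussian pRF to a binary stimulus movie."""
            gauss = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
            return np.array([np.sum(frame * gauss) for frame in stimulus_frames])

        # toy visual-field grid (degrees) and a bar stimulus sweeping from left to right
        xs, ys = np.meshgrid(np.linspace(-8, 8, 81), np.linspace(-8, 8, 81))
        frames = [(np.abs(xs - pos) < 1.0).astype(float) for pos in np.linspace(-7, 7, 15)]

        prediction = prf_prediction(x0=2.0, y0=-1.0, sigma=1.5, stimulus_frames=frames, xs=xs, ys=ys)
        print(prediction.round(1))  # peaks when the bar crosses the pRF centre at x = 2 deg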

  1. Is theta burst stimulation applied to visual cortex able to modulate peripheral visual acuity?

    Directory of Open Access Journals (Sweden)

    Sabrina Brückner

    Full Text Available Repetitive transcranial magnetic stimulation is usually applied to visual cortex to explore the effects on cortical excitability. Most researchers therefore concentrate on changes of phosphene threshold, rarely on consequences for visual performance. Thus, we investigated peripheral visual acuity in the four quadrants of the visual field using Landolt C optotypes before and after repetitive stimulation of the visual cortex. We applied continuous and intermittent theta burst stimulation with various stimulation intensities (60%, 80%, 100%, or 120% of the individual phosphene threshold) as well as monophasic and biphasic 1 Hz stimulation, respectively. As an important result, no serious adverse effects were observed. In particular, no seizure was induced, even with theta burst stimulation applied at 120% of the individual phosphene threshold. In only one case was stimulation stopped because the subject reported intolerable pain. Baseline visual acuity decreased over sessions, indicating a continuous training effect. Unexpectedly, none of the applied transcranial magnetic stimulation protocols had an effect on performance: no change in visual acuity was found in any of the four quadrants of the visual field. Binocular viewing as well as the use of peripheral instead of foveal presentation of the stimuli might have contributed to this result. Furthermore, intraindividual variability could have masked the TMS-induced effects on visual acuity.

  2. Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task.

    Science.gov (United States)

    Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald

    2017-12-15

    The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30, 0, and 30 azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some

  3. Private Sharing of User Location over Online Social Networks

    OpenAIRE

    Freudiger, Julien; Neu, Raoul; Hubaux, Jean-Pierre

    2010-01-01

    Online social networks increasingly allow mobile users to share their location with their friends. Much to the detriment of users' privacy, this also means that social network operators collect users' location. Similarly, third parties can learn users' location from localization and location visualization services. Ideally, third parties should not be given complete access to users' location. To protect location privacy, we design and implement a platform-independent solution for users to s...

  4. Visual memory errors in Parkinson's disease patient with visual hallucinations.

    Science.gov (United States)

    Barnes, J; Boubert, L

    2011-03-01

    Visual hallucinations seem to be more prevalent in low light, and hallucinators tend to be more prone to false-positive errors in memory tasks. Here we investigated whether the richness of stimuli does indeed affect recognition differently in hallucinating and nonhallucinating participants, and if so whether this difference extends to identifying spatial context. We compared 36 Parkinson's disease (PD) patients with visual hallucinations, 32 Parkinson's patients without hallucinations, and 36 age-matched controls on a visual memory task in which color and black-and-white pictures were presented at different locations. Participants had to recognize the pictures among distracters along with the location of the stimulus. Findings revealed clear differences in performance between the groups. Both PD groups had impaired recognition compared to the controls, but those with hallucinations were significantly more impaired on black-and-white than on color stimuli. In addition, the group with hallucinations was significantly impaired compared to the other two groups on spatial memory. We suggest not only that PD patients have poorer recognition of pictorial stimuli than controls, but also that those who present with visual hallucinations appear to be more heavily reliant on bottom-up sensory input and impaired on spatial ability.

  5. In situ visualizing the evolution of the light-induced refractive index change of Mn:KLTN crystal with digital holographic interferometry

    Directory of Open Access Journals (Sweden)

    Jinxin Han

    2015-04-01

    Full Text Available The light-induced refractive index change in a Mn:KLTN crystal illuminated by a focused light sheet is visualized in situ and quantified by digital holographic interferometry. By numerically retrieving a series of sequential phase maps from the recorded digital holograms, the spatial distribution of the induced refractive index change can be visualized and estimated readily. This technique enables observation of the temporal evolution of the refractive index change under different recording conditions, such as writing laser power, applied voltage, and temperature, and the photoconductivity of the Mn:KLTN crystal can be calculated as well; the experimental results are in good agreement with theory. These results suggest that the presented method is effective and feasible.
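
    In digital holographic interferometry, a retrieved phase change Δφ maps onto a refractive index change through the standard relation Δn = Δφ·λ/(2π·d) for a sample of thickness d along the probe beam. The one-liner below illustrates that conversion with hypothetical numbers, not the paper's measurements.

        import numpy as np

        wavelength = 532e-9      # probe wavelength in metres (hypothetical)
        thickness = 2e-3         # crystal thickness along the probe beam in metres (hypothetical)
        delta_phi = 1.8          # retrieved phase change in radians at one pixel (hypothetical)

        delta_n = delta_phi * wavelength / (2 * np.pi * thickness)
        print(f"light-induced refractive index change: {delta_n:.2e}")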

  6. Cholinergic pairing with visual activation results in long-term enhancement of visual evoked potentials.

    Directory of Open Access Journals (Sweden)

    Jun Il Kang

    Full Text Available Acetylcholine (ACh) contributes to learning processes by modulating cortical plasticity in terms of the intensity of neuronal activity and the selectivity properties of cortical neurons. However, it is not known whether ACh induces long-term effects within the primary visual cortex (V1) that could sustain visual learning mechanisms. In the present study we analyzed visual evoked potentials (VEPs) in V1 of rats during a 4-8 h period after coupling visual stimulation to an intracortical injection of the ACh analog carbachol or to stimulation of the basal forebrain. To clarify the action of ACh on VEP activity in V1, we individually pre-injected muscarinic (scopolamine), nicotinic (mecamylamine), alpha7 (methyllycaconitine), and NMDA (CPP) receptor antagonists before carbachol infusion. Stimulation of the cholinergic system paired with visual stimulation significantly increased VEP amplitude (56%) during a 6 h period. Pre-treatment with scopolamine, mecamylamine and CPP completely abolished this long-term enhancement, while alpha7 inhibition induced an instant increase of VEP amplitude. This suggests a role of ACh in facilitating visual stimuli responsiveness through mechanisms comparable to LTP which involve nicotinic and muscarinic receptors with an interaction of NMDA transmission in the visual cortex.

  7. Mimicking cataract-induced visual dysfunction by means of protein denaturation in egg albumen

    Science.gov (United States)

    Mandracchia, B.; Finizio, A.; Ferraro, P.

    2016-03-01

    As the world's population ages, cataract-induced visual dysfunction and blindness are on the increase. This is a significant global problem. The most common symptoms of cataracts are glare and blurred vision. Usually, people with cataract have trouble seeing and reading at distance or in low light, and their color perception is also altered. Furthermore, cataract is a sneaky disease: it usually progresses very slowly, which creates adaptation, so that patients find it difficult to recognize. All this can be very difficult to explain, so we built and tested an optical device to help doctors give comprehensive answers to patients' symptoms. This device allows visualizing how cataract impairs vision by mimicking the optical degradation of the crystalline lens associated with cataracts. This can be a valuable optical tool for medical education as well as a method to illustrate to patients how the progression of cataract will affect their vision.

  8. Brain activation and deactivation during location and color working memory tasks in 11-13-year-old children.

    Science.gov (United States)

    Vuontela, Virve; Steenari, Maija-Riikka; Aronen, Eeva T; Korvenoja, Antti; Aronen, Hannu J; Carlson, Synnöve

    2009-02-01

    Using functional magnetic resonance imaging (fMRI) and n-back tasks we investigated whether, in 11-13-year-old children, spatial (location) and nonspatial (color) information is differentially processed during visual attention (0-back) and working memory (WM) (2-back) tasks and whether such cognitive task performance, compared to a resting state, results in regional deactivation. The location 0-back task, compared to the color 0-back task, activated segregated areas in the frontal, parietal and occipital cortices whereas no differentially activated voxels were obtained when location and color 2-back tasks were directly contrasted. Several midline cortical areas were less active during 0- and 2-back task performance than resting state. The task-induced deactivation increased with task difficulty as demonstrated by larger deactivation during 2-back than 0-back tasks. The results suggest that, in 11-13-year-old children, the visual attentional network is differently recruited by spatial and nonspatial information processing, but the functional organization of cortical activation in WM in this age group is not based on the type of information processed. Furthermore, 11-13-year-old children exhibited a similar pattern of cortical deactivation that has been reported in adults during cognitive task performance compared to a resting state.

  9. Attention biases visual activity in visual short-term memory.

    Science.gov (United States)

    Kuo, Bo-Cheng; Stokes, Mark G; Murray, Alexandra M; Nobre, Anna Christina

    2014-07-01

    In the current study, we tested whether representations in visual STM (VSTM) can be biased via top-down attentional modulation of visual activity in retinotopically specific locations. We manipulated attention using retrospective cues presented during the retention interval of a VSTM task. Retrospective cues triggered activity in a large-scale network implicated in attentional control and led to retinotopically specific modulation of activity in early visual areas V1-V4. Importantly, shifts of attention during VSTM maintenance were associated with changes in functional connectivity between pFC and retinotopic regions within V4. Our findings provide new insights into top-down control mechanisms that modulate VSTM representations for flexible and goal-directed maintenance of the most relevant memoranda.

  10. Radio Transmitters and Tower Locations, Layer includes all towers identified visually and include cellular and other communication towers., Published in 2008, 1:1200 (1in=100ft) scale, Noble County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Radio Transmitters and Tower Locations dataset current as of 2008. Layer includes all towers identified visually, including cellular and other communication towers.

  11. Decoding Illusory Self-location from Activity in the Human Hippocampus

    Directory of Open Access Journals (Sweden)

    Arvid eGuterstam

    2015-07-01

    Full Text Available Decades of research have demonstrated a role for the hippocampus in spatial navigation and episodic and spatial memory. However, empirical evidence linking hippocampal activity to the perceptual experience of being physically located at a particular place in the environment is lacking. In this study, we used a multisensory out-of-body illusion to perceptually ‘teleport’ six healthy participants between two different locations in the scanner room during high-resolution functional magnetic resonance imaging (fMRI). The participants were fitted with MRI-compatible head-mounted displays that changed their first-person visual perspective to that of a pair of cameras placed in one of two corners of the scanner room. To elicit the illusion of being physically located in this position, we delivered synchronous visuo-tactile stimulation in the form of an object moving towards the cameras coupled with touches applied to the participant’s chest. Asynchronous visuo-tactile stimulation did not induce the illusion and served as a control condition. We found that illusory self-location could be successfully decoded from patterns of activity in the hippocampus in all of the participants in the synchronous condition (P < 0.05). At the group level, the decoding accuracy was significantly higher in the synchronous than in the asynchronous condition (P=0.012). These findings associate hippocampal activity with the perceived location of the bodily self in space, which suggests that the human hippocampus is involved not only in spatial navigation and memory but also in the construction of our sense of bodily self-location.
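
    Decoding self-location from hippocampal activity patterns is, computationally, a cross-validated pattern classification problem. The sketch below shows that general approach with a linear classifier on synthetic voxel patterns; the array shapes, labels, classifier choice, and cross-validation scheme are assumptions, not the study's pipeline.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        n_trials, n_voxels = 60, 120
        # synthetic hippocampal patterns for two perceived self-locations (A vs B)
        labels = np.repeat([0, 1], n_trials // 2)
        patterns = rng.standard_normal((n_trials, n_voxels)) + 0.4 * labels[:, None]

        scores = cross_val_score(LinearSVC(max_iter=10000), patterns, labels, cv=6)
        print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")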

  12. Semantic elaboration in auditory and visual spatial memory.

    Science.gov (United States)

    Taevs, Meghan; Dahmani, Louisa; Zatorre, Robert J; Bohbot, Véronique D

    2010-01-01

    The aim of this study was to investigate the hypothesis that semantic information facilitates auditory and visual spatial learning and memory. An auditory spatial task was administered, whereby healthy participants were placed in the center of a semi-circle that contained an array of speakers where the locations of nameable and non-nameable sounds were learned. In the visual spatial task, locations of pictures of abstract art intermixed with nameable objects were learned by presenting these items in specific locations on a computer screen. Participants took part in both the auditory and visual spatial tasks, which were counterbalanced for order and were learned at the same rate. Results showed that learning and memory for the spatial locations of nameable sounds and pictures was significantly better than for non-nameable stimuli. Interestingly, there was a cross-modal learning effect such that the auditory task facilitated learning of the visual task and vice versa. In conclusion, our results support the hypotheses that the semantic representation of items, as well as the presentation of items in different modalities, facilitate spatial learning and memory.

  13. The influence of visual motion on interceptive actions and perception.

    Science.gov (United States)

    Marinovic, Welber; Plooy, Annaliese M; Arnold, Derek H

    2012-05-01

    Visual information is an essential guide when interacting with moving objects, yet it can also be deceiving. For instance, motion can induce illusory position shifts, such that a moving ball can seem to have bounced past its true point of contact with the ground. Some evidence suggests illusory motion-induced position shifts bias pointing tasks to a greater extent than they do perceptual judgments. This, however, appears at odds with other findings and with our success when intercepting moving objects. Here we examined the accuracy of interceptive movements and of perceptual judgments in relation to simulated bounces. Participants were asked to intercept a moving disc at its bounce location by positioning a virtual paddle, and then to report where the disc had landed. Results showed that interceptive actions were accurate whereas perceptual judgments were inaccurate, biased in the direction of motion. Successful interceptions necessitated accurate information concerning both the location and timing of the bounce, so motor planning evidently had privileged access to an accurate forward model of bounce timing and location. This would explain why people can be accurate when intercepting a moving object, but lack insight into the accurate information that had guided their actions when asked to make a perceptual judgment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Attentional Capture to a Singleton Distractor Degrades Visual Marking in Visual Search

    Directory of Open Access Journals (Sweden)

    Kenji Yamauchi

    2017-05-01

    Full Text Available Visual search is easier after observing some distractors in advance; it is as if the previewed distractors were excluded from the search. This effect is referred to as the preview benefit, and a memory template that visually marks the old locations of the distractors is thought to help in prioritizing the locations of newly presented items. One remaining question is whether the presence of a conspicuous item during the sequential shift of attention within the new items reduces this preview benefit. To address this issue, we combined the above preview search and a conventional visual search paradigm using a singleton distractor and examined whether the search performance was affected by the presence of the singleton. The results showed that the slope of reaction time as a function of set size became steeper in the presence of a singleton, indicating that the singleton distractor reduced the preview benefit. Furthermore, this degradation effect was positively correlated with the degree of conventional attentional capture to a singleton measured in a separate experiment with simultaneous search. These findings suggest that the mechanism of visual marking shares common attentional resources with the search process.

  15. Visual search, visual streams, and visual architectures.

    Science.gov (United States)

    Green, M

    1991-10-01

    Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

  16. Working memory contributes to the encoding of object location associations: Support for a 3-part model of object location memory.

    Science.gov (United States)

    Gillis, M Meredith; Garcia, Sarah; Hampstead, Benjamin M

    2016-09-15

    A recent model by Postma and colleagues posits that the encoding of object location associations (OLAs) requires the coordination of several cognitive processes mediated by ventral (object perception) and dorsal (spatial perception) visual pathways as well as the hippocampus (feature binding) [1]. Within this model, frontoparietal network recruitment is believed to contribute to both the spatial processing and working memory task demands. The current study used functional magnetic resonance imaging (fMRI) to test each step of this model in 15 participants who encoded OLAs and performed standard n-back tasks. As expected, object processing resulted in activation of the ventral visual stream. Object-in-location processing resulted in activation of both the ventral and dorsal visual streams as well as a lateral frontoparietal network. This condition was also the only one to result in medial temporal lobe activation, supporting its role in associative learning. A conjunction analysis revealed areas of shared activation between the working memory and object-in-location phases within the lateral frontoparietal network, anterior insula, and basal ganglia, consistent with prior working memory literature. Overall, findings support Postma and colleagues' model and provide clear evidence for the role of working memory during OLA encoding. Published by Elsevier B.V.

  17. Visual straight-ahead preference in saccadic eye movements.

    Science.gov (United States)

    Camors, Damien; Trotter, Yves; Pouget, Pierre; Gilardeau, Sophie; Durand, Jean-Baptiste

    2016-03-15

    Ocular saccades bringing the gaze toward the straight-ahead direction (centripetal) exhibit higher dynamics than those steering the gaze away (centrifugal). This is generally explained by oculomotor determinants: centripetal saccades are more efficient because they pull the eyes back toward their primary orbital position. However, visual determinants might also be invoked: elements located straight-ahead trigger saccades more efficiently because they receive a privileged visual processing. Here, we addressed this issue by using both pro- and anti-saccade tasks in order to dissociate the centripetal/centrifugal directions of the saccades, from the straight-ahead/eccentric locations of the visual elements triggering those saccades. Twenty participants underwent alternating blocks of pro- and anti-saccades during which eye movements were recorded binocularly at 1 kHz. The results confirm that centripetal saccades are always executed faster than centrifugal ones, irrespective of whether the visual elements have straight-ahead or eccentric locations. However, by contrast, saccades triggered by elements located straight-ahead are consistently initiated more rapidly than those evoked by eccentric elements, irrespective of their centripetal or centrifugal direction. Importantly, this double dissociation reveals that the higher dynamics of centripetal pro-saccades stem from both oculomotor and visual determinants, which act respectively on the execution and initiation of ocular saccades.

  18. Effect of laser induced plasma ignition timing and location on Diesel spray combustion

    International Nuclear Information System (INIS)

    Pastor, José V.; García-Oliver, José M.; García, Antonio; Pinotti, Mattia

    2017-01-01

    Highlights: • Laser plasma ignition is applied to a direct injection Diesel spray and compared with auto-ignition. • A critical local fuel/air ratio for LIP-provoked ignition is obtained. • The LIP system is able to stabilize Diesel combustion compared to auto-ignition cases. • Varying the LIP position along the spray axis directly affects ignition delay. • Premixed combustion is reduced both by varying the position and the delay of the LIP ignition system. - Abstract: An experimental study of the influence of local conditions at the ignition location on the combustion development of a direct injection spray is carried out in an optical engine. A laser induced plasma ignition system has been used to force spray ignition, allowing the evolution and stability of combustion to be compared with conventional autoignition of the Diesel fuel in terms of ignition delay, rate of heat release, spray penetration and soot location evolution. The local equivalence ratio variation along the spray axis during the injection process was determined with a 1D spray model, previously calibrated and validated. Upper equivalence-ratio limits for the ignition of a direct injected Diesel spray, both in terms of ignition success and stability of the phenomenon, could be determined thanks to the application of the laser plasma ignition system. In all laser plasma induced ignition cases, heat release was found to be higher than for the autoignition reference cases, and this was linked to a decrease of ignition delay, with the premixed peak in the rate of heat release curve progressively disappearing as the ignition delay time gets shorter. Ignition delay was also analyzed as a function of the laser position. It was found that ignition delay increases for plasma positions closer to the nozzle, indicating that the amount of energy introduced by the laser induced plasma is not the only parameter affecting combustion initiation, but that the local equivalence ratio at the ignition location also plays a role.

  19. Updating visual memory across eye movements for ocular and arm motor control.

    Science.gov (United States)

    Thompson, Aidan A; Henriques, Denise Y P

    2008-11-01

    Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.

  20. Early visual evoked potentials are modulated by eye position in humans induced by whole body rotations

    Directory of Open Access Journals (Sweden)

    Petit Laurent

    2004-09-01

    Background: To reach and grasp an object in space on the basis of its image cast on the retina requires coordinate transformations that take gaze and limb positioning into account. Eye position in the orbit influences the conversion of the image from retinotopic (eye-centered) coordinates to an egocentric frame necessary for guiding action. Neuroimaging studies have revealed eye-position-dependent activity in extrastriate visual, parietal and frontal areas along the visuo-motor pathway. At the earliest cortical stage of vision, the role of the primary visual area (V1) in this process remains unclear. We used an experimental design based on pattern-onset visual evoked potential (VEP) recordings to study the effect of eye position on V1 activity in humans. Results: We showed that the amplitude of the initial C1 component of the VEP, acknowledged to originate in V1, was modulated by eye position. We also established that putative spontaneous small saccades related to eccentric fixation, as well as retinal disparity, cannot explain the changes in C1 amplitude observed in the present study. Conclusions: This modulation of the early VEP component suggests an eye-position-dependent activity of the human primary visual area. Our findings also provide evidence that cortical processes combine information about the position of the stimulus on the retinae with information about the location of the eyes in their orbit as early as the primary visual area.

  1. Semantic Wavelet-Induced Frequency-Tagging (SWIFT Periodically Activates Category Selective Areas While Steadily Activating Early Visual Areas.

    Directory of Open Access Journals (Sweden)

    Roger Koenig-Robert

    Primate visual systems process natural images in a hierarchical manner: at the early stage, neurons are tuned to local image features, while neurons in high-level areas are tuned to abstract object categories. Standard models of visual processing assume that the transition of tuning from image features to object categories emerges gradually along the visual hierarchy. Direct tests of such models remain difficult due to confounding alterations in low-level image properties when contrasting distinct object categories. When such a contrast is performed in a classic functional localizer method, the desired activation in high-level visual areas is typically accompanied by activation in early visual areas. Here we used a novel image-modulation method called SWIFT (semantic wavelet-induced frequency-tagging), a variant of frequency-tagging techniques. Natural images modulated by SWIFT reveal object semantics periodically while keeping low-level properties constant. Using functional magnetic resonance imaging (fMRI), we indeed found that faces and scenes modulated with SWIFT periodically activated the prototypical category-selective areas while they elicited sustained and constant responses in early visual areas. SWIFT and the localizer were selective and specific to a similar extent in activating category-selective areas. Only SWIFT progressively activated the visual pathway from low- to high-level areas, consistent with predictions from standard hierarchical models. We confirmed these results with criterion-free methods, generalizing the validity of our approach, and show that it is possible to dissociate neural activation in early and category-selective areas. Our results provide direct evidence for the hierarchical nature of the representation of visual objects along the visual stream and open up future applications of frequency-tagging methods in fMRI.

  2. Images of illusory motion in primary visual cortex

    DEFF Research Database (Denmark)

    Larsen, A.; Madsen, Kristoffer Hougaard; Lund, T.E.

    2006-01-01

    Illusory motion can be generated by successively flashing a stationary visual stimulus in two spatial locations separated by several degrees of visual angle. In appropriate conditions, the apparent motion is indistinguishable from real motion: the observer experiences a luminous object traversing a continuous path from one stimulus location to the other through intervening positions where no physical stimuli exist. The phenomenon has been extensively investigated for nearly a century but little is known about its neurophysiological foundation. Here we present images of activations in the primary visual...

  3. In Vivo Evaluation of the Visual Pathway in Streptozotocin-Induced Diabetes by Diffusion Tensor MRI and Contrast Enhanced MRI.

    Directory of Open Access Journals (Sweden)

    Swarupa Kancherla

    Visual function has been shown to deteriorate prior to the onset of retinopathy in some diabetic patients and experimental animal models. This suggests the involvement of the brain's visual system in the early stages of diabetes. In this study, we tested this hypothesis by examining the integrity of the visual pathway in a diabetic rat model using in vivo multi-modal magnetic resonance imaging (MRI). Ten-week-old Sprague-Dawley rats were divided into an experimental diabetic group by intraperitoneal injection of 65 mg/kg streptozotocin in 0.01 M citric acid, and a sham control group by intraperitoneal injection of citric acid only. One month later, diffusion tensor MRI (DTI) was performed to examine the white matter integrity in the brain, followed by chromium-enhanced MRI of retinal integrity and manganese-enhanced MRI of anterograde manganese transport along the visual pathway. Prior to the MRI experiments, the streptozotocin-induced diabetic rats showed significantly smaller weight gain and higher blood glucose levels than the control rats. DTI revealed significantly lower fractional anisotropy and higher radial diffusivity in the prechiasmatic optic nerve of the diabetic rats compared to the control rats. No apparent difference was observed between groups in the axial diffusivity of the optic nerve, the chromium enhancement in the retina, or the manganese enhancement in the lateral geniculate nucleus and superior colliculus. Our results suggest that streptozotocin-induced diabetes leads to early injury in the optic nerve when no substantial change in retinal integrity or anterograde transport along the visual pathways is observed in MRI using contrast agent enhancement. DTI may be a useful tool for detecting and monitoring early pathophysiological changes in the visual system of experimental diabetes non-invasively.
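
    For orientation, the diffusion metrics reported above are simple functions of the three eigenvalues of the fitted diffusion tensor. The sketch below (Python with numpy; the eigenvalues are hypothetical, not values from this study) computes fractional anisotropy, mean, axial and radial diffusivity.

        # Illustrative sketch: fractional anisotropy (FA), mean diffusivity (MD),
        # axial diffusivity (AD) and radial diffusivity (RD) from the eigenvalues of
        # a diffusion tensor. Eigenvalues below are hypothetical, in 10^-3 mm^2/s.
        import numpy as np

        def dti_metrics(l1, l2, l3):
            lam = np.array([l1, l2, l3], dtype=float)
            md = lam.mean()                                        # mean diffusivity
            fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
            ad = l1                                                # largest eigenvalue
            rd = 0.5 * (l2 + l3)                                   # mean of the two smaller
            return fa, md, ad, rd

        fa, md, ad, rd = dti_metrics(1.6, 0.35, 0.30)              # hypothetical values
        print(f"FA={fa:.2f}, MD={md:.2f}, AD={ad:.2f}, RD={rd:.2f}")
        # Lower FA with higher RD (and unchanged AD) is the pattern reported above for
        # the prechiasmatic optic nerve of the diabetic rats.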

  4. Changes in brain activation induced by visual stimulus during and after propofol conscious sedation: a functional MRI study.

    Science.gov (United States)

    Shinohe, Yutaka; Higuchi, Satomi; Sasaki, Makoto; Sato, Masahito; Noda, Mamoru; Joh, Shigeharu; Satoh, Kenichi

    2016-12-07

    Conscious sedation with propofol sometimes causes amnesia while keeping the patient awake. However, it remains unknown how propofol compromises memory function. Therefore, we investigated the changes in brain activation induced by visual stimulation during and after conscious sedation with propofol using serial functional MRI. Healthy volunteers received a target-controlled infusion of propofol and underwent functional MRI scans with a block-design paradigm of visual stimulation before, during, and after conscious sedation. Random-effects model analyses were performed using Statistical Parametric Mapping software. Among the areas showing significant activation in response to the visual stimulus, the visual cortex and fusiform gyrus were significantly suppressed in the sedation session and tended to recover in the early-recovery session of ∼20 min, whereas activation of memory-related structures remained suppressed in both the sedation and early-recovery sessions. These results suggest that conscious sedation with propofol may cause prolonged suppression of the activation of memory-related structures, such as the hippocampus, during the early-recovery period, which may lead to transient amnesia.

  5. Assisted Living Facilities, Locations of Assisted Living Facilities identified visually and placed on the Medical Multi-Hazard Mitigation layer, Published in 2006, 1:1200 (1in=100ft) scale, Noble County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Assisted Living Facilities dataset current as of 2006. Locations of Assisted Living Facilities identified visually and placed on the Medical Multi-Hazard Mitigation...

  6. SEM method for direct visual tracking of nanoscale morphological changes of platinum based electrocatalysts on fixed locations upon electrochemical or thermal treatments

    Energy Technology Data Exchange (ETDEWEB)

    Zorko, Milena [National Institute of Chemistry, Hajdrihova 19, Ljubljana (Slovenia); Centre of Excellence for Low-Carbon Technologies, Hajdrihova 19, Ljubljana (Slovenia); Jozinović, Barbara [Centre of Excellence for Low-Carbon Technologies, Hajdrihova 19, Ljubljana (Slovenia); Bele, Marjan [National Institute of Chemistry, Hajdrihova 19, Ljubljana (Slovenia); Centre of Excellence for Low-Carbon Technologies, Hajdrihova 19, Ljubljana (Slovenia); Hodnik, Nejc, E-mail: nejc.hodnik@ki.si [National Institute of Chemistry, Hajdrihova 19, Ljubljana (Slovenia); Gaberšček, Miran [National Institute of Chemistry, Hajdrihova 19, Ljubljana (Slovenia); Centre of Excellence for Low-Carbon Technologies, Hajdrihova 19, Ljubljana (Slovenia)

    2014-05-01

    A general method for tracking morphological surface changes on a nanometer scale with scanning electron microscopy (SEM) is introduced. We exemplify the usefulness of the method by showing consecutive SEM images of an identical location before and after electrochemical and thermal treatments of platinum-based nanoparticles deposited on a high-surface-area carbon. The observations give insight into platinum-based catalyst degradation occurring during potential-cycling treatment. The presence of chloride clearly increases the rate of degradation. Under these conditions the dominant degradation mechanism seems to be platinum dissolution with some subsequent redeposition on top of the catalyst film. By contrast, at a temperature of 60 °C under potentiostatic conditions, some carbon corrosion and particle aggregation was observed. Temperature treatment simulating the annealing step of the synthesis reveals sintering of small platinum-based composite aggregates into uniform spherical particles. The method provides direct proof of induced surface phenomena occurring at a chosen location without the statistical uncertainty of usual, random SEM observations across relatively large surface areas. - Highlights: • A new SEM method for observations of identical locations. • Nanoscale morphological changes followed consecutively at identical locations. • Electrochemical and thermal treatments of platinum-based nanoparticles. • Potential cycling induces platinum dissolution with redeposition on top of the film. • At 1.4 V vs. RHE and 60 °C, carbon corrosion and particle aggregation are observed.

  7. Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory

    Science.gov (United States)

    Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.

    2013-01-01

    Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773

  8. Visualization Design Environment

    Energy Technology Data Exchange (ETDEWEB)

    Pomplun, A.R.; Templet, G.J.; Jortner, J.N.; Friesen, J.A.; Schwegel, J.; Hughes, K.R.

    1999-02-01

    Improvements in the performance and capabilities of computer software and hardware systems, combined with advances in Internet technologies, have spurred innovative developments in the area of modeling, simulation and visualization. These developments combine to make it possible to create an environment where engineers can design, prototype, analyze, and visualize components in virtual space, saving the time and expenses incurred during numerous design and prototyping iterations. The Visualization Design Centers located at Sandia National Laboratories are facilities built specifically to promote the "design by team" concept. This report focuses on designing, developing and deploying this environment by detailing the design of the facility, the software infrastructure and the hardware systems that comprise this new visualization design environment, and describes case studies that document successful application of the environment.

  9. Reduction in spontaneous firing of mouse excitatory layer 4 cortical neurons following visual classical conditioning

    Science.gov (United States)

    Bekisz, Marek; Shendye, Ninad; Raciborska, Ida; Wróbel, Andrzej; Waleszczyk, Wioletta J.

    2017-08-01

    The process of learning induces plastic changes in neuronal network of the brain. Our earlier studies on mice showed that classical conditioning in which monocular visual stimulation was paired with an electric shock to the tail enhanced GABA immunoreactivity within layer 4 of the monocular part of the primary visual cortex (V1), contralaterally to the stimulated eye. In the present experiment we investigated whether the same classical conditioning paradigm induces changes of neuronal excitability in this cortical area. Two experimental groups were used: mice that underwent 7-day visual classical conditioning and controls. Patch-clamp whole-cell recordings were performed from ex vivo slices of mouse V1. The slices were perfused with the modified artificial cerebrospinal fluid, the composition of which better mimics the brain interstitial fluid in situ and induces spontaneous activity. The neuronal excitability was characterized by measuring the frequency of spontaneous action potentials. We found that layer 4 star pyramidal cells located in the monocular representation of the "trained" eye in V1 had lower frequency of spontaneous activity in comparison with neurons from the same cortical region of control animals. Weaker spontaneous firing indicates decreased general excitability of star pyramidal neurons within layer 4 of the monocular representation of the "trained" eye in V1. Such effect could result from enhanced inhibitory processes accompanying learning in this cortical area.

  10. Designing and Evaluation of Reliability and Validity of Visual Cue-Induced Craving Assessment Task for Methamphetamine Smokers

    Directory of Open Access Journals (Sweden)

    Hamed Ekhtiari

    2010-08-01

    Introduction: Craving for methamphetamine is a significant health concern, and exposure to methamphetamine cues in the laboratory can induce craving. In this study, a task-designing procedure for evaluating methamphetamine cue-induced craving under laboratory conditions is examined. Methods: First, a series of visual cues that could induce craving was identified in 5 discussion sessions between expert clinicians and 10 methamphetamine smokers. Cues were categorized into 4 main clusters and photos were taken for each cue in a studio; the 60 most evocative photos were then selected and 10 neutral photos were added. In this phase, 50 subjects with methamphetamine dependence were exposed to the cues and rated the craving intensity induced by the 72 cues (60 active evocative photos + 10 neutral photos) on a self-report Visual Analogue Scale (ranging from 0-100). In this way, 50 photos with high evocative potency (CICT 50) and 10 photos with the most evocative potency (CICT 10) were obtained, and the task was designed accordingly. Results: The task reliability (internal consistency) was measured by Cronbach's alpha, which was 91% for CICT 50 and 71% for CICT 10. The highest craving was reported for the category "drug use procedure" (66.27±30.32) and the lowest for the category "cues associated with drug use" (31.38±32.96). Differences in cue-induced craving on CICT 50 and CICT 10 were not associated with age, education, income, marital status, employment or sexual activity in the 30 days prior to study entry. Family living condition was marginally correlated with higher scores on CICT 50. Age of onset for opioid, cocaine and methamphetamine use was negatively correlated with CICT 50 and CICT 10, and age of first opiate use was negatively correlated with CICT 50. Discussion: Cue-induced craving for methamphetamine may be reliably measured by tasks designed in the laboratory, and the designed assessment tasks can be used in cue reactivity paradigms.
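
    For readers unfamiliar with the internal-consistency statistic quoted above, the sketch below (Python with numpy) computes Cronbach's alpha for a participants-by-items matrix of craving ratings; the data are randomly generated for illustration and are not the study's ratings.

        # Illustrative sketch: Cronbach's alpha for a matrix of craving ratings
        # (rows = participants, columns = cue photos). Data below are simulated.
        import numpy as np

        def cronbach_alpha(ratings):
            """ratings: 2-D array, shape (n_subjects, n_items)."""
            ratings = np.asarray(ratings, dtype=float)
            n_items = ratings.shape[1]
            item_variances = ratings.var(axis=0, ddof=1).sum()
            total_variance = ratings.sum(axis=1).var(ddof=1)
            return (n_items / (n_items - 1)) * (1.0 - item_variances / total_variance)

        rng = np.random.default_rng(0)
        true_craving = rng.uniform(20, 90, size=(50, 1))                      # 50 participants
        ratings = np.clip(true_craving + rng.normal(0, 15, (50, 10)), 0, 100) # 10 photos
        print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")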

  11. UV reactor flow visualization and mixing quantification using three-dimensional laser-induced fluorescence.

    Science.gov (United States)

    Gandhi, Varun; Roberts, Philip J W; Stoesser, Thorsten; Wright, Harold; Kim, Jae-Hong

    2011-07-01

    Three-dimensional laser-induced fluorescence (3DLIF) was applied to visualize and quantitatively analyze mixing in a lab-scale UV reactor consisting of one lamp sleeve placed perpendicular to flow. The recirculation zone and the von Karman vortex shedding that commonly occur in flows around bluff bodies were successfully visualized. Multiple flow paths were analyzed by injecting the dye at various heights with respect to the lamp sleeve. A major difference in these pathways was the amount of dye that traveled close to the sleeve, i.e., a zone of higher residence time and higher UV exposure. Paths away from the center height had higher velocities and hence minimal influence by the presence of sleeve. Approach length was also characterized in order to increase the probability of microbes entering the region around the UV lamp. The 3DLIF technique developed in this study is expected to provide new insight on UV dose delivery useful for the design and optimization of UV reactors. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. A survey of visually induced symptoms and associated factors in spectators of three dimensional stereoscopic movies

    Directory of Open Access Journals (Sweden)

    Solimini Angelo G

    2012-09-01

    Background: The increasing popularity of commercial movies showing three-dimensional (3D) computer-generated images has raised concern about image safety and possible side effects on population health. This study aims to (1) quantify the occurrence of visually induced symptoms suffered by spectators during and after viewing a commercial 3D movie and (2) assess individual and environmental factors associated with those symptoms. Methods: A cross-sectional survey was carried out using a paper-based, self-administered questionnaire. The questionnaire includes individual and movie characteristics and selected visually induced symptoms (tired eyes, double vision, headache, dizziness, nausea and palpitations). Symptoms were queried at 3 different times: during, right after, and 2 hours after the movie. Results: We collected 953 questionnaires. In our sample, 539 (60.4%) individuals reported 1 or more symptoms during the movie, 392 (43.2%) right after, and 139 (15.3%) at 2 hours after the movie. The most frequently reported symptoms were tired eyes (reported during the movie by 34.8%, right after by 24.0%, and after 2 hours by 5.7% of individuals) and headache (during the movie by 13.7%, right after by 16.8%, and after 2 hours by 8.3% of individuals). Individual history of frequent headache was associated with tired eyes (OR = 1.34, 95% CI = 1.01-1.79), double vision (OR = 1.96; 95% CI = 1.13-3.41) and headache (OR = 2.09; 95% CI = 1.41-3.10) during the movie, and with headache after the movie (OR = 1.64; 95% CI = 1.16-2.32). Individual susceptibility to car sickness, dizziness, anxiety level, movie show time and animated 3D movies were also associated with several other symptoms. Conclusions: The high occurrence of visually induced symptoms resulting from this survey suggests the need to raise public awareness of the possible discomfort that susceptible individuals may suffer during and after viewing 3D movies.
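
    For orientation, the odds ratios and 95% confidence intervals reported above are obtained from 2x2 tables. The sketch below (Python, standard library only) shows the usual log-odds (Woolf) approximation; the cell counts are hypothetical and are not the survey data.

        # Illustrative sketch: odds ratio and 95% CI from a 2x2 table
        # (exposure = frequent-headache history, outcome = headache during the movie).
        # Counts are hypothetical, not the survey data.
        import math

        def odds_ratio_ci(a, b, c, d, z=1.96):
            """a,b = exposed with/without outcome; c,d = unexposed with/without outcome."""
            or_ = (a * d) / (b * c)
            se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
            lo = math.exp(math.log(or_) - z * se_log_or)
            hi = math.exp(math.log(or_) + z * se_log_or)
            return or_, lo, hi

        or_, lo, hi = odds_ratio_ci(a=60, b=140, c=70, d=330)
        print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")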

  13. Laser/fluorescent dye flow visualization technique developed for system component thermal hydraulic studies

    International Nuclear Information System (INIS)

    Oras, J.J.; Kasza, K.E.

    1988-01-01

    A novel laser flow visualization technique is presented together with examples of its use in visualizing complex flow patterns and plans for its further development. This technique has been successfully used to study (1) the flow in a horizontal pipe subject to temperature transients, to view the formation and breakup of thermally stratified flow and to determine instantaneous velocity distributions in the same flow at various axial locations; (2) the discharge of a stratified pipe flow into a plenum exhibiting a periodic vortex pattern; and (3) the thermal-buoyancy-induced flow channeling on the shell side of a heat exchanger with glass tubes and shell. This application of the technique to heat exchangers is unique. The flow patterns deep within a large tube bundle can be studied under steady or transient conditions. This laser flow visualization technique constitutes a very powerful tool for studying single or multiphase flows in complex thermal system components

  14. Self-reflection Orients Visual Attention Downward.

    Science.gov (United States)

    Liu, Yi; Tong, Yu; Li, Hong

    2017-01-01

    Previous research has demonstrated that abstract concepts associated with spatial location (e.g., God in the Heavens) can direct visual attention upward or downward, because thinking about the abstract concepts activates the corresponding vertical perceptual symbols. For the self-concept, there are similar metaphors (e.g., "I am above others"). However, whether thinking about the self can induce an orientation of visual attention is still unknown. Therefore, the current study tested whether self-reflection can direct visual attention. Individuals often display a tendency toward self-enhancement in social comparison, which reminds the individual of the higher position one possesses relative to others within the social environment. As the individual is the agent of the attention orientation, and high status tends to make an individual look down upon others to obtain a sense of pride, it was hypothesized that thinking about the self would lead to a downward attention orientation. Using reflection on personality traits and a target discrimination task, Study 1 found that, after self-reflection, visual attention was directed downward. Similar effects were also found after friend-reflection, with the level of downward attention being correlated with the likability rating scores of the friend. Thus, in Study 2, a disliked other was used as a control and the positive self-view was measured with an above-average judgment task. We found a downward attention orientation after self-reflection, but not after reflection upon the disliked other. Moreover, the attentional bias after self-reflection was correlated with the above-average self-view. The current findings provide the first evidence that thinking about the self can direct visual-spatial attention downward, and suggest that this effect probably derives from a positive self-view within the social context.

  15. Design and application of location error teaching aids in measuring and visualization

    Directory of Open Access Journals (Sweden)

    Yu Fengning

    2015-01-01

    As an abstract concept, 'location error' is considered an important element that is difficult to understand and apply. This paper designs and develops an instrument to measure location error. Location error is affected by the positioning method and the choice of reference, so the positioning element is selected by rotating a disk. The small displacement is transferred by a grating ruler and, through PLC programming, the error is shown on a text display, which also helps students understand the positioning principle and related concepts of location error. After comparing measurement results with theoretical calculations and analysing the measurement accuracy, the paper concludes that the teaching aid is reliable and of high value for promotion.

  16. Communication: Visualization and spectroscopy of defects induced by dehydrogenation in individual silicon nanocrystals

    Science.gov (United States)

    Kislitsyn, Dmitry A.; Mills, Jon M.; Kocevski, Vancho; Chiu, Sheng-Kuei; DeBenedetti, William J. I.; Gervasi, Christian F.; Taber, Benjamen N.; Rosenfield, Ariel E.; Eriksson, Olle; Rusz, Ján; Goforth, Andrea M.; Nazin, George V.

    2016-06-01

    We present results of a scanning tunneling spectroscopy (STS) study of the impact of dehydrogenation on the electronic structures of hydrogen-passivated silicon nanocrystals (SiNCs) supported on the Au(111) surface. Gradual dehydrogenation is achieved by injecting high-energy electrons into individual SiNCs, which results, initially, in reduction of the electronic bandgap, and eventually produces midgap electronic states. We use theoretical calculations to show that the STS spectra of midgap states are consistent with the presence of silicon dangling bonds, which are found in different charge states. Our calculations also suggest that the observed initial reduction of the electronic bandgap is attributable to the SiNC surface reconstruction induced by conversion of surface dihydrides to monohydrides due to hydrogen desorption. Our results thus provide the first visualization of the SiNC electronic structure evolution induced by dehydrogenation and provide direct evidence for the existence of diverse dangling bond states on the SiNC surfaces.

  17. Visualizing and quantifying dose distribution in a UV reactor using three-dimensional laser-induced fluorescence.

    Science.gov (United States)

    Gandhi, Varun N; Roberts, Philip J W; Kim, Jae-Hong

    2012-12-18

    Evaluating the performance of typical water treatment UV reactors is challenging due to the complexity in assessing spatial and temporal variation of UV fluence, resulting from highly unsteady, turbulent nature of flow and variation in UV intensity. In this study, three-dimensional laser-induced fluorescence (3DLIF) was applied to visualize and quantitatively analyze a lab-scale UV reactor consisting of one lamp sleeve placed perpendicular to flow. Mapping the spatial and temporal fluence delivery and MS2 inactivation revealed the highest local fluence in the wake zone due to longer residence time and higher UV exposure, while the lowest local fluence occurred in a region near the walls due to short-circuiting flow and lower UV fluence rate. Comparing the tracer based decomposition between hydrodynamics and IT revealed similar coherent structures showing the dependency of fluence delivery on the reactor flow. The location of tracer injection, varying the height and upstream distance from the lamp center, was found to significantly affect the UV fluence received by the tracer. A Lagrangian-based analysis was also employed to predict the fluence along specific paths of travel, which agreed with the experiments. The 3DLIF technique developed in this study provides new insight on dose delivery that fluctuates both spatially and temporally and is expected to aid design and optimization of UV reactors as well as validate computational fluid dynamics models that are widely used to simulate UV reactor performances.
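
    The Lagrangian analysis mentioned above treats the UV fluence received by a fluid parcel as the time integral of the local fluence rate along its path. The sketch below (Python with numpy) illustrates that idea only; the fluence-rate model and the particle trajectory are hypothetical simplifications, not the reactor model used in the study.

        # Illustrative sketch: UV fluence accumulated along a Lagrangian path as the
        # time integral of the local fluence rate. The 1/r fluence-rate model and the
        # meandering path past the lamp sleeve are hypothetical simplifications.
        import numpy as np

        def fluence_rate(y, lamp_y=0.0, i0=50.0, r0=0.01):
            """Fluence rate (mW/cm^2), decaying as 1/r with distance from the lamp axis."""
            r = max(abs(y - lamp_y), r0)
            return i0 * r0 / r

        dt = 0.001                                    # time step (s)
        t = np.arange(0.0, 2.0, dt)
        y = 0.02 * np.cos(2 * np.pi * 0.5 * t)        # parcel's distance from the lamp axis (m)

        dose = sum(fluence_rate(yi) * dt for yi in y)
        print(f"Accumulated UV fluence along this path: {dose:.1f} mJ/cm^2")
        # Parcels that linger in the near-sleeve wake accumulate more fluence than parcels
        # that short-circuit along the walls, as observed in the study.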

  18. Multisensory memory for object identity and location

    NARCIS (Netherlands)

    Erp, J.B.F. van; Philippi, T.G.; Werkhoven, P.J.

    2014-01-01

    Researchers have reported that audiovisual object presentation improves memory encoding of object identity in comparison to either auditory or visual object presentation. However, multisensory memory effects on retrieval, on object location, and of other multisensory combinations are yet unknown. We

  19. Direct visualization of solute locations in laboratory ice samples

    Directory of Open Access Journals (Sweden)

    T. Hullar

    2016-09-01

    Many important chemical reactions occur in polar snow, where solutes may be present in several reservoirs, including at the air–ice interface and in liquid-like regions within the ice matrix. Some recent laboratory studies suggest chemical reaction rates may differ in these two reservoirs. While investigations have examined where solutes are found in natural snow and ice, few studies have examined either solute locations in laboratory samples or the possible factors controlling solute segregation. To address this, we used micro-computed tomography (microCT) to examine solute locations in ice samples prepared from either aqueous cesium chloride (CsCl) or rose bengal solutions that were frozen using several different methods. Samples frozen in a laboratory freezer had the largest liquid-like inclusions and air bubbles, while samples frozen in a custom freeze chamber had somewhat smaller air bubbles and inclusions; in contrast, samples frozen in liquid nitrogen showed much smaller concentrated inclusions and air bubbles, only slightly larger than the resolution limit of our images (∼2 µm). Freezing solutions in plastic vs. glass vials had significant impacts on the sample structure, perhaps because the poor heat conductivity of plastic vials changes how heat is removed from the sample as it cools. Similarly, the choice of solute had a significant impact on sample structure, with rose bengal solutions yielding smaller inclusions and air bubbles compared to CsCl solutions frozen using the same method. Additional experiments using higher-resolution imaging of an ice sample show that CsCl moves in a thermal gradient, supporting the idea that the solutes in ice are present in mobile liquid-like regions. Our work shows that the structure of laboratory ice samples, including the location of solutes, is sensitive to the freezing method, sample container, and solute characteristics, requiring careful experimental design and interpretation of results.

  20. Visual-induced expectations modulate auditory cortical responses

    Directory of Open Access Journals (Sweden)

    Virginie van Wassenhove

    2015-02-01

    Active sensing has important consequences for multisensory processing (Schroeder et al. 2010). Here, we asked whether, in the absence of saccades, the position of the eyes and the timing of transient colour changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the where and the when of a sound, respectively. Human participants were recorded with magnetoencephalography (MEG) while maintaining the position of their eyes on the left, right, or centre of the screen. Participants counted colour changes of the fixation cross while neglecting sounds which could be presented to the left, right or both ears. First, clear alpha power increases were observed in auditory cortices, consistent with participants' attention being directed to visual inputs. Second, colour changes elicited robust modulations of auditory cortex responses ("when" prediction) seen as ramping activity, early alpha phase-locked responses, and enhanced high-gamma band responses in the side contralateral to sound presentation. Third, no modulations of auditory evoked or oscillatory activity were found to be specific to eye position. Altogether, our results suggest that visual transience can automatically elicit a prediction of when a sound will occur by changing the excitability of auditory cortices irrespective of the attended modality, eye position or spatial congruency of auditory and visual events. By contrast, auditory cortical responses were not significantly affected by eye position, suggesting that "where" predictions may require active sensing or saccadic reset to modulate auditory cortex responses, notably in the absence of spatial orientation to sounds.

  1. Do rufous hummingbirds (Selasphorus rufus) use visual beacons?

    Science.gov (United States)

    Hurly, T Andrew; Franz, Simone; Healy, Susan D

    2010-03-01

    Animals are often assumed to use highly conspicuous features of a goal to head directly to that goal ('beaconing'). In the field it is generally assumed that flowers serve as beacons to guide pollinators. Artificial hummingbird feeders are coloured red to serve a similar function. However, anecdotal reports suggest that hummingbirds return to feeder locations in the absence of the feeder (and thus the beacon). Here we test these reports for the first time in the field, using the natural territories of hummingbirds and manipulating flowers on a scale that is ecologically relevant to the birds. We compared the predictions from two distinct hypotheses as to how hummingbirds might use the visual features of rewards: the distant beacon hypothesis and the local cue hypothesis. In two field experiments, we found no evidence that rufous hummingbirds used a distant visual beacon to guide them to a rewarded location. In no case did birds abandon their approach to the goal location from a distance; rather they demonstrated remarkable accuracy of navigation by approaching to within about 70 cm of a rewarded flower's original location. Proximity varied depending on the size of the training flower: birds flew closer to a previously rewarded location if it had been previously signalled with a small beacon. Additionally, when provided with a beacon at a new location, birds did not fly directly to the new beacon. Taken together, we believe these data demonstrate that these hummingbirds depend little on visual characteristics to beacon to rewarded locations, but rather that they encode surrounding landmarks in order to reach the goal and then use the visual features of the goal as confirmation that they have arrived at the correct location.

  2. The use of ambient audio to increase safety and immersion in location-based games

    Science.gov (United States)

    Kurczak, John Jason

    The purpose of this thesis is to propose an alternative type of interface for mobile software being used while walking or running. Our work addresses the problem of visual user interfaces for mobile software being potentially unsafe for pedestrians, and not being very immersive when used for location-based games. In addition, location-based games and applications can be difficult to develop when directly interfacing with the sensors used to track the user's location. These problems need to be addressed because portable computing devices are becoming a popular tool for navigation, playing games, and accessing the internet while walking. This poses a safety problem for mobile users, who may be paying too much attention to their device to notice and react to hazards in their environment. The difficulty of developing location-based games and other location-aware applications may significantly hinder the prevalence of applications that explore new interaction techniques for ubiquitous computing. We created the TREC toolkit to address the issues with tracking sensors while developing location-based games and applications. We have developed functional location-based applications with TREC to demonstrate the amount of work that can be saved by using this toolkit. In order to have a safer and more immersive alternative to visual interfaces, we have developed ambient audio interfaces for use with mobile applications. Ambient audio uses continuous streams of sound over headphones to present information to mobile users without distracting them from walking safely. In order to test the effectiveness of ambient audio, we ran a study to compare ambient audio with handheld visual interfaces in a location-based game. We compared players' ability to safely navigate the environment, their sense of immersion in the game, and their performance at the in-game tasks. We found that ambient audio was able to significantly increase players' safety and sense of immersion compared to a

  3. Academic and Workplace-related Visual Stresses Induce Detectable Deterioration Of Performance, Measured By Basketball Trajectories and Astigmatism Impacting Athletes Or Students In Military Pilot Training.

    Science.gov (United States)

    Mc Leod, Roger D.

    2004-03-01

    Separate military establishments across the globe can confirm that a high percentage of their prospective pilots-in-training are no longer visually fit to continue the flight training portion of their programs once their academic coursework is completed. I maintain that the visual stress induced by those intensive protocols can damage the visual feedback mechanism of any healthy and dynamic system beyond its usual and ordinary ability to self-correct minor visual loss of acuity. This deficiency seems to be detectable among collegiate and university athletes by direct observation of the height of the trajectory arc of a basketball's flight. As a particular athlete becomes increasingly stressed by academic constraints requiring long periods of concentrated reading under highly static angular convergence of the eyes, along with unfavorable illumination and viewing conditions, eyesight does deteriorate. I maintain that induced astigmatism is a primary culprit because of the evidence of that basketball's trajectory! See the next papers!

  4. Visual electrophysiology in children

    Directory of Open Access Journals (Sweden)

    Jelka Brecelj

    2005-10-01

    Background: Electrophysiological assessment of vision in children helps to recognise abnormal development of the visual system when it is still susceptible to medication and eventual correction. Visual electrophysiology provides information about the function of the retina (retinal pigment epithelium, cone and rod receptors, bipolar, amacrine, and ganglion cells), optic nerve, chiasmal and postchiasmal visual pathway, and visual cortex. Methods: Electroretinograms (ERG) and visual evoked potentials (VEP) are recorded non-invasively; in infants, ERG is recorded simultaneously with skin electrodes, while in older children ERG is recorded separately with an HK loop electrode, in accordance with ISCEV (International Society for Clinical Electrophysiology of Vision) recommendations. Results: Clinical and electrophysiological changes in children with nystagmus, Leber's congenital amaurosis, achromatopsia, congenital stationary night blindness, progressive retinal dystrophies, optic nerve hypoplasia, albinism, achiasmia, optic neuritis and visual pathway tumours are presented. Conclusions: Electrophysiological tests can help to indicate the nature and location of dysfunction in unclear ophthalmological and/or neurological cases.

  5. Visualization of ultrasound induced cavitation bubbles using the synchrotron x-ray Analyzer Based Imaging technique

    International Nuclear Information System (INIS)

    Izadifar, Zahra; Izadifar, Mohammad; Izadifar, Zohreh; Chapman, Dean; Belev, George

    2014-01-01

    Observing cavitation bubbles deep within tissue is very difficult. The development of a method for probing cavitation, irrespective of its location in tissues, would improve the efficiency and application of ultrasound in the clinic. A synchrotron x-ray imaging technique, which is capable of detecting cavitation bubbles induced in water by a sonochemistry system, is reported here; this could possibly be extended to the study of therapeutic ultrasound in tissues. The two different x-ray imaging techniques of Analyzer Based Imaging (ABI) and phase contrast imaging (PCI) were examined in order to detect ultrasound induced cavitation bubbles. Cavitation was not observed by PCI, however it was detectable with ABI. Acoustic cavitation was imaged at six different acoustic power levels and six different locations through the acoustic beam in water at a fixed power level. The results indicate the potential utility of this technique for cavitation studies in tissues, but it is time consuming. This may be improved by optimizing the imaging method. (paper)

  6. Visualization of ultrasound induced cavitation bubbles using the synchrotron x-ray Analyzer Based Imaging technique.

    Science.gov (United States)

    Izadifar, Zahra; Belev, George; Izadifar, Mohammad; Izadifar, Zohreh; Chapman, Dean

    2014-12-07

    Observing cavitation bubbles deep within tissue is very difficult. The development of a method for probing cavitation, irrespective of its location in tissues, would improve the efficiency and application of ultrasound in the clinic. A synchrotron x-ray imaging technique, which is capable of detecting cavitation bubbles induced in water by a sonochemistry system, is reported here; this could possibly be extended to the study of therapeutic ultrasound in tissues. The two different x-ray imaging techniques of Analyzer Based Imaging (ABI) and phase contrast imaging (PCI) were examined in order to detect ultrasound induced cavitation bubbles. Cavitation was not observed by PCI, however it was detectable with ABI. Acoustic cavitation was imaged at six different acoustic power levels and six different locations through the acoustic beam in water at a fixed power level. The results indicate the potential utility of this technique for cavitation studies in tissues, but it is time consuming. This may be improved by optimizing the imaging method.

  7. Enhanced stimulus-induced gamma activity in humans during propofol-induced sedation.

    Directory of Open Access Journals (Sweden)

    Neeraj Saxena

    Stimulus-induced gamma oscillations in the 30-80 Hz range have been implicated in a wide number of functions including visual processing, memory and attention. While occipital gamma-band oscillations can be pharmacologically modified in animal preparations, pharmacological modulation of stimulus-induced visual gamma oscillations has yet to be demonstrated in non-invasive human recordings. Here, in fifteen healthy human volunteers, we probed the effects of the GABAA agonist and sedative propofol on stimulus-related gamma activity recorded with magnetoencephalography, using a simple visual grating stimulus designed to elicit gamma oscillations in the primary visual cortex. During propofol sedation as compared to the normal awake state, a significant 60% increase in stimulus-induced gamma amplitude was seen together with a 94% enhancement of stimulus-induced alpha suppression and a simultaneous reduction in the amplitude of the pattern-onset evoked response. These data demonstrate that propofol-induced sedation is accompanied by increased stimulus-induced gamma activity, providing a potential window into mechanisms of gamma-oscillation generation in humans.
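
    As a rough illustration of how stimulus-induced amplitude in the 30-80 Hz gamma band can be quantified, the sketch below (Python with numpy and scipy) compares band-limited power between a baseline and a stimulus epoch using Welch's method; the synthetic signal and sampling rate are hypothetical and are not the MEG data from this study.

        # Illustrative sketch: comparing 30-80 Hz (gamma) band power between a baseline
        # and a stimulus epoch using Welch's method. The synthetic signal is hypothetical.
        import numpy as np
        from scipy.signal import welch

        fs = 600.0                                    # sampling rate (Hz), hypothetical
        t = np.arange(0, 2.0, 1 / fs)
        rng = np.random.default_rng(1)
        baseline = rng.normal(0, 1.0, t.size)
        stimulus = baseline + 0.8 * np.sin(2 * np.pi * 55 * t)   # induced 55 Hz component

        def band_power(signal, fs, fmin=30.0, fmax=80.0):
            freqs, psd = welch(signal, fs=fs, nperseg=int(fs))
            mask = (freqs >= fmin) & (freqs <= fmax)
            return np.trapz(psd[mask], freqs[mask])

        p_base = band_power(baseline, fs)
        p_stim = band_power(stimulus, fs)
        print(f"Gamma-band power increase: {100 * (p_stim - p_base) / p_base:.0f}%")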

  8. Radiographic Estimation of the Location and Size of kidneys in ...

    African Journals Online (AJOL)

    Keywords: Radiography, Location, Kidney size, Local dogs. The kidneys of dogs and cats are located retroperitoneally (Bjorling, 1993). Visualization of the kidneys on radiographs is possible due to the contrast provided by the perirenal fat (Grandage, 1975). However, this perirenal fat rarely covers the ventral surface of the ...

  9. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers.

    Science.gov (United States)

    Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina

    2017-11-22

    Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind (n = 10, 9 female, 1 male) and sighted control (n = 15, 9 female, 6 male) participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We

  10. GABAA receptors in visual and auditory cortex and neural activity changes during basic visual stimulation

    Directory of Open Access Journals (Sweden)

    Pengmin eQin

    2012-12-01

    Recent imaging studies have demonstrated that levels of resting GABA in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimuli; the change from closed eyes to open eyes, however, also represents a simple visual stimulus and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABAA receptors, in the changes in brain activity between the eyes-closed (EC) and eyes-open (EO) states, in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modelling of the haemodynamic response, followed by longer periods of EC and EO to allow the measurement of functional connectivity. The same subjects also underwent [18F]Flumazenil PET to measure GABAA receptor binding potentials. It was demonstrated that the local-to-global ratio of GABAA receptor binding potential in the visual cortex predicted the degree of change in neural activity from EC to EO. The same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABAA receptor binding potential in the visual cortex also predicted the change in functional connectivity between visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABAA receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.

  11. Changing viewer perspectives reveals constraints to implicit visual statistical learning.

    Science.gov (United States)

    Jiang, Yuhong V; Swallow, Khena M

    2014-10-07

    Statistical learning, the learning of environmental regularities to guide behavior, likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.

  12. Visually induced gains in pitch discrimination: Linking audio-visual processing with auditory abilities.

    Science.gov (United States)

    Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter

    2018-05-01

    Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak; it is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli and is well established at behavioral as well as neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues for pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination ability). Using an oddball pitch discrimination task, we measured the effect of varying the visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than the incongruent condition. The magnitude of the gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
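
    The sensitivity index d' mentioned above is derived from hit and false-alarm rates. The sketch below (Python with scipy; all counts are hypothetical, not data from this study) shows the standard computation with a simple correction against extreme rates.

        # Illustrative sketch: sensitivity index d' from hit and false-alarm rates in an
        # oddball (yes/no) discrimination task. Counts are hypothetical.
        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            # Log-linear correction keeps rates away from 0 and 1.
            hit_rate = (hits + 0.5) / (hits + misses + 1)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        print(f"congruent visual cue:   d' = {d_prime(45, 15, 8, 52):.2f}")
        print(f"incongruent visual cue: d' = {d_prime(38, 22, 12, 48):.2f}")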

  13. Brightness and transparency in the early visual cortex.

    Science.gov (United States)

    Salmela, Viljami R; Vanni, Simo

    2013-06-24

    Several psychophysical studies have shown that transparency can have drastic effects on brightness and lightness. However, the neural processes generating these effects have remained unresolved. Several lines of evidence suggest that the early visual cortex is important for brightness perception. While single cell recordings suggest that surface brightness is represented in the primary visual cortex, the results of functional magnetic resonance imaging (fMRI) studies have been discrepant. In addition, the location of the neural representation of transparency is not yet known. We investigated whether the fMRI responses in areas V1, V2, and V3 correlate with brightness and transparency. To dissociate the blood oxygen level-dependent (BOLD) response to brightness from the response to local border contrast and mean luminance, we used variants of White's brightness illusion, both opaque and transparent, in which luminance increments and decrements cancel each other out. The stimuli consisted of a target surface and a surround. The surround luminance was always sinusoidally modulated at 0.5 Hz to induce brightness modulation to the target. The target luminance was constant or modulated in counterphase to null brightness modulation. The mean signal changes were calculated from the voxels in V1, V2, and V3 corresponding to the retinotopic location of the target surface. The BOLD responses were significantly stronger for modulating brightness than for stimuli with constant brightness. In addition, the responses were stronger for transparent than for opaque stimuli, but there was more individual variation. No interaction between brightness and transparency was found. The results show that the early visual areas V1-V3 are sensitive to surface brightness and transparency and suggest that brightness and transparency are represented separately.

  14. Computational Modeling of Cephalad Fluid Shift for Application to Microgravity-Induced Visual Impairment

    Science.gov (United States)

    Nelson, Emily S.; Best, Lauren M.; Myers, Jerry G.; Mulugeta, Lealem

    2013-01-01

    An improved understanding of spaceflight-induced ocular pathology, including the loss of visual acuity, globe flattening, optic disk edema and distension of the optic nerve and optic nerve sheath, is of keen interest to space medicine. Cephalad fluid shift causes a profoundly altered distribution of fluid within the compartments of the head and body, and may indirectly generate phenomena that are biomechanically relevant to visual function, such as choroidal engorgement, compromised drainage of blood and cerebrospinal fluid (CSF), and an altered translaminar pressure gradient posterior to the eye. The experimental body of evidence with respect to the consequences of fluid shift has not yet been able to provide a definitive picture of the sequence of events. On earth, elevated intracranial pressure (ICP) is associated with idiopathic intracranial hypertension (IIH), which can produce ocular pathologies that look similar to those seen in some astronauts returning from long-duration flight. However, the clinically observable features of the Visual Impairment and Intracranial Pressure (VIIP) syndrome in space and IIH on earth are not entirely consistent. Moreover, there are at present no experimental measurements of ICP in microgravity. By their very nature, physiological measurements in spaceflight are sparse, and the space environment does not lend itself to well-controlled experiments. In the absence of such data, numerical modeling can play a role in the investigation of biomechanical causal pathways that are suspected of involvement in VIIP. In this work, we describe the conceptual framework for modeling the altered compartmental fluid distribution that represents an equilibrium fluid distribution resulting from the loss of the hydrostatic pressure gradient.

  15. Location priority for non-formal early childhood education school based on promethee method and map visualization

    Science.gov (United States)

    Ayu Nurul Handayani, Hemas; Waspada, Indra

    2018-05-01

    Non-formal Early Childhood Education (non-formal ECE) is education provided for children under 4 years old. In the District of Banyumas, non-formal ECE is monitored by the district government and supported by Sanggar Kegiatan Belajar (SKB) Purwokerto, one of the organizers of non-formal education. The national government has a program for extending ECE to all villages in Indonesia, but the locations at which ECE schools should be constructed in the coming years have not yet been determined. To support that program, a decision support system was built to recommend villages for constructing ECE buildings. The data are projected with Brown's double exponential smoothing method, and the Preference Ranking Organization Method for Enrichment Evaluation (Promethee) is used to generate the priority order. The system presents its recommendations as a map visualization colored according to the priority level of each sub-district and village. The system was tested with black-box testing, Promethee testing, and usability testing; the results showed that the system functionality and the Promethee algorithm worked properly and that users were satisfied.
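
    As an illustration of the projection step named above, the following minimal Python sketch implements Brown's double exponential smoothing. The smoothing constant, the example series of child counts, and the two-year horizon are illustrative assumptions, not values reported in the study.

      # Brown's double exponential smoothing: project a series m steps ahead.
      def brown_des(series, alpha, m):
          s1 = s2 = series[0]                          # initialize both smoothing levels
          for x in series[1:]:
              s1 = alpha * x + (1 - alpha) * s1        # first (single) smoothing
              s2 = alpha * s1 + (1 - alpha) * s2       # second (double) smoothing
          level = 2 * s1 - s2                          # estimated level
          trend = alpha / (1 - alpha) * (s1 - s2)      # estimated trend
          return level + m * trend                     # m-step-ahead forecast

      # Hypothetical yearly counts of ECE-age children in one village.
      children = [120, 126, 131, 140, 152]
      print(brown_des(children, alpha=0.4, m=2))       # projection two years ahead

    A projection of this kind for each village could then serve as one of the criteria that Promethee ranks to produce the priority order.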

  16. First comparative approach to touchscreen-based visual object-location paired-associates learning in humans (Homo sapiens) and a nonhuman primate (Microcebus murinus).

    Science.gov (United States)

    Schmidtke, Daniel; Ammersdörfer, Sandra; Joly, Marine; Zimmermann, Elke

    2018-05-10

    A recent study suggests that a specific, touchscreen-based task on visual object-location paired-associates learning (PAL), the so-called Different PAL (dPAL) task, allows effective translation from animal models to humans. Here, we adapted the task to a nonhuman primate (NHP), the gray mouse lemur, and provide first evidence for the successful comparative application of the task to humans and NHPs. Young human adults reach the learning criterion after considerably fewer sessions (by one order of magnitude) than young adult NHPs, which is likely due to faster and voluntary rejection of ineffective learning strategies in humans and almost immediate rule generalization. At criterion, however, all human subjects solved the task by either applying a visuospatial rule or, more rarely, by memorizing all possible stimulus combinations and responding correctly based on global visual information. An error-profile analysis in humans and NHPs suggests that successful learning in NHPs is likewise based either on the formation of visuospatial associative links or on more reflexive, visually guided stimulus-response learning. The classification in the NHPs is further supported by an analysis of the individual response latencies, which are considerably higher in NHPs classified as spatial learners. Our results, therefore, support the high translational potential of the standardized, touchscreen-based dPAL task by providing first empirical and comparable evidence for two different cognitive processes underlying dPAL performance in primates. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Self-Taught Visually-Guided Pointing for a Humanoid Robot

    National Research Council Canada - National Science Library

    Marjanovic, Matthew; Scassellati, Brian; Williamson, Matthew

    2006-01-01

    .... This task requires systems for learning saccade to visual targets, generating smooth arm trajectories, locating the arm in the visual field, and learning the map between gaze direction and correct...

  18. The role of landmarks in the development of object location memory

    NARCIS (Netherlands)

    Bullens, Jessie

    2008-01-01

    In order to locate objects in an enclosed environment animals and humans use visual and non-visual distance and direction cues. In the present study, we were interested in children’s ability to relocate an object on the basis of self-motion cues and local and distal color cues for orientation. Five

  19. Visual and surface plasmon resonance sensor for zirconium based on zirconium-induced aggregation of adenosine triphosphate-stabilized gold nanoparticles

    International Nuclear Information System (INIS)

    Qi, Wenjing; Zhao, Jianming; Zhang, Wei; Liu, Zhongyuan; Xu, Min; Anjum, Saima; Majeed, Saadat; Xu, Guobao

    2013-01-01

    Graphical abstract: Visual and surface plasmon resonance (SPR) sensors for Zr(IV) have been developed for the first time based on the Zr(IV)-induced change of the SPR absorption spectra of ATP-stabilized AuNP solutions. -- Highlights: •Visual and SPR absorption Zr⁴⁺ sensors have been developed for the first time. •The high affinity between Zr⁴⁺ and ATP makes the sensor highly sensitive and selective. •A fast response to Zr⁴⁺ within 4 min. -- Abstract: Owing to its high affinity for phosphate, Zr(IV) can induce the aggregation of adenosine 5′-triphosphate (ATP)-stabilized AuNPs, leading to a change in the surface plasmon resonance (SPR) absorption spectra and color of ATP-stabilized AuNP solutions. Based on these phenomena, visual and SPR sensors for Zr(IV) have been developed for the first time. The A660nm/A518nm values of ATP-stabilized AuNPs in the SPR absorption spectra increase linearly with the concentration of Zr(IV) from 0.5 μM to 100 μM (r = 0.9971), with a detection limit of 95 nM. Visual Zr(IV) detection is achieved with a detection limit of 30 μM. The sensor shows excellent selectivity against other metal ions, such as Cu²⁺, Fe³⁺, Cd²⁺, and Pb²⁺. The recoveries for the detection of 5 μM, 10 μM, 25 μM and 75 μM Zr(IV) in lake water samples are 96.0%, 97.0%, 95.6% and 102.4%, respectively. The recoveries of the proposed SPR method are comparable with those of the ICP-OES method.
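
    The linear relationship between the absorbance ratio and the Zr(IV) concentration reported above lends itself to a simple calibration fit. The sketch below uses numpy to fit the line and estimates a detection limit with the common 3*sigma/slope convention; the data points, the blank standard deviation, and the choice of that convention are assumptions for illustration and are not taken from the paper.

      import numpy as np

      # Hypothetical calibration data: Zr(IV) concentration (uM) vs. A660/A518 ratio.
      conc  = np.array([0.5, 5, 10, 25, 50, 75, 100])
      ratio = np.array([0.21, 0.25, 0.29, 0.41, 0.62, 0.83, 1.04])

      slope, intercept = np.polyfit(conc, ratio, 1)     # least-squares calibration line

      # Detection limit estimated as 3 * (blank standard deviation) / slope.
      blank_sd = 0.003                                   # assumed replicate-blank scatter
      lod = 3 * blank_sd / slope
      print(f"slope = {slope:.4f} per uM, LOD ~ {lod:.2f} uM")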

  20. EEG based time and frequency dynamics analysis of visually induced motion sickness (VIMS).

    Science.gov (United States)

    Arsalan Naqvi, Syed Ali; Badruddin, Nasreen; Jatoi, Munsif Ali; Malik, Aamir Saeed; Hazabbah, Wan; Abdullah, Baharudin

    2015-12-01

    3D movies attract viewers because objects appear to fly out of the screen. However, many viewers have reported problems after watching 3D movies, including visual fatigue, eye strain, headaches, dizziness, and blurred vision, which may collectively be termed visually induced motion sickness (VIMS). This research compares passive 3D technology with conventional 2D technology to determine whether 3D viewing causes such problems. For this purpose, an experiment was designed in which participants were randomly assigned to watch a 2D or a 3D movie. The movie was specially designed to induce VIMS and was shown to every participant for 10 min. Electroencephalogram (EEG) data were recorded throughout the session. At the end of the session, participants rated their symptoms using the simulator sickness questionnaire (SSQ). The SSQ ratings of the 2D and 3D participants were compared statistically with a two-tailed t test, and participants watching the 3D movie reported significantly higher VIMS symptoms. The EEG data were analyzed in MATLAB and topographic plots were created from the data. A significant difference was observed in frontal theta power, which increased over time in the 2D condition but decreased over time in the 3D condition. A decrease in beta power was also found over the temporal lobe in the 3D group. It is therefore concluded that 3D movies have negative effects, causing significant changes in brain activity in terms of band powers, a condition that leads to symptoms of VIMS in viewers.
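
    The band-power comparison summarized above can be sketched in Python (the original analysis was carried out in MATLAB). The sampling rate, the synthetic data, and the use of an independent-samples t test on band powers rather than on SSQ scores are illustrative assumptions.

      import numpy as np
      from scipy.signal import welch
      from scipy.stats import ttest_ind

      fs = 256                                        # assumed sampling rate (Hz)

      def band_power(channel, lo, hi):
          """Average power of one EEG channel within a frequency band (Welch PSD)."""
          freqs, psd = welch(channel, fs=fs, nperseg=fs * 2)
          mask = (freqs >= lo) & (freqs <= hi)
          return psd[mask].mean()

      # Hypothetical data: one frontal channel per participant in each viewing group.
      rng = np.random.default_rng(0)
      eeg_2d = rng.standard_normal((10, fs * 60))     # 10 participants, 60 s each
      eeg_3d = rng.standard_normal((10, fs * 60))
      theta_2d = [band_power(x, 4, 8) for x in eeg_2d]
      theta_3d = [band_power(x, 4, 8) for x in eeg_3d]

      # Two-tailed independent-samples t test, as used for the group comparison.
      t, p = ttest_ind(theta_2d, theta_3d)
      print(f"t = {t:.2f}, p = {p:.3f}")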

  1. Early Limits on the Verbal Updating of an Object's Location

    Science.gov (United States)

    Ganea, Patricia A.; Harris, Paul L.

    2013-01-01

    Recent research has shown that by 30 months of age, children can successfully update their representation of an absent object's location on the basis of new verbal information, whereas 23-month-olds often return to the object's prior location. The current results show that this updating failure persisted even when (a) toddlers received visual and…

  2. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    Directory of Open Access Journals (Sweden)

    Magnus eAlm

    2015-07-01

    Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged adults (50-60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. Contrastingly, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females' AV perceptual strategy towards more visually dominated responses.

  3. GABA(A) receptors in visual and auditory cortex and neural activity changes during basic visual stimulation.

    Science.gov (United States)

    Qin, Pengmin; Duncan, Niall W; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J; Northoff, Georg

    2012-01-01

    Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimulus; the change from closed eyes to open also represents a simple visual stimulus, however, and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABA(A) receptors, in the changes in brain activity between the eyes closed (EC) and eyes open (EO) state in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modeling of the haemodynamic response, followed by longer periods of EC and EO to allow the measuring of functional connectivity. The same subjects also underwent [(18)F]Flumazenil PET to measure GABA(A) receptor binding potentials. It was demonstrated that the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex predicted the degree of changes in neural activity from EC to EO. This same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABA(A) receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.

  4. Priming and the guidance by visual and categorical templates in visual search

    Directory of Open Access Journals (Sweden)

    Anna eWilschut

    2014-02-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity towards the target feature, i.e. the extent to which observers searched selectively among items of the cued versus uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  5. Priming and the guidance by visual and categorical templates in visual search.

    Science.gov (United States)

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  6. Analysis of relationship among visual evoked potential, oscillatory potential and visual acuity under stimulated weightlessness

    Directory of Open Access Journals (Sweden)

    Jun Zhao

    2013-05-01

    AIM: To observe the influence of head-down tilt simulated weightlessness on the visual evoked potential (VEP), oscillatory potentials (OPs) and visual acuity, and to analyse the relationships among them. METHODS: Head-down tilt at -6° was adopted in 14 healthy volunteers. Distant visual acuity, near visual acuity, VEP and OPs were recorded before the trial and two days and five days after it. The OP recording procedure followed the ISCEV standard for full-field clinical electroretinography (2008 update). RESULTS: Significant differences were detected in the amplitude of the P100 waves and ∑OPs among the various time points (P<0.05), but no relationship was observed among VEP, OPs and visual acuity. CONCLUSION: Head-down tilt simulated weightlessness induces a redistribution of blood throughout the body, including the eyes, which can change visual electrophysiology but not visual acuity.

  7. Geometric Optimization for Non-Thrombogenicity of a Centrifugal Blood Pump through Flow Visualization

    Science.gov (United States)

    Toyoda, Masahiro; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi; Tsutsui, Tatsuo; Sankai, Yoshiyuki

    A monopivot centrifugal blood pump, whose impeller is supported with a pivot bearing and a passive magnetic bearing, is under development for an implantable artificial heart. The hemolysis level is less than that of commercial centrifugal pumps and the pump size is as small as 160 mL in volume. To solve a problem of thrombus caused by fluid dynamics, flow visualization experiments and animal experiments have been undertaken. For flow visualization, a three-fold scale-up model, a high-speed video system, and particle tracking velocimetry software were used. To verify non-thrombogenicity, one-week animal experiments were conducted with sheep. The initially observed thrombus around the pivot was removed by unifying the separate washout holes into a small centered hole to induce high shear around the pivot. It was found that the thrombus contours corresponded to shear rates of 300 s⁻¹ for red thrombus and 1300-1700 s⁻¹ for white thrombus, respectively. Thus, the flow visualization technique was found to be a useful tool for predicting thrombus location.
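
    Once particle tracking velocimetry yields a velocity field, the shear-rate thresholds quoted above can be turned into a simple map of candidate deposition regions. The grid spacing, the synthetic velocity field, and the way the thresholds are applied below are assumptions for illustration, not the authors' processing chain.

      import numpy as np

      # Hypothetical 2D velocity field u(y, x) on a regular grid (m/s), e.g. from PTV.
      dy = 1e-4                                         # grid spacing (m)
      u = np.random.default_rng(1).random((50, 50)) * 0.5

      # Simple shear-rate estimate: |du/dy| in 1/s.
      shear = np.abs(np.gradient(u, dy, axis=0))

      # Mark regions relative to the iso-shear contours reported for the thrombus boundaries.
      low_shear_region = shear < 300                    # side of the ~300 1/s contour where red thrombus sat
      white_band = (shear >= 1300) & (shear <= 1700)    # band associated with white thrombus contours
      print(low_shear_region.sum(), white_band.sum())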

  8. Computer-assisted intraoperative visualization of dental implants. Augmented reality in medicine

    International Nuclear Information System (INIS)

    Ploder, O.; Wagner, A.; Enislidis, G.; Ewers, R.

    1995-01-01

    In this paper, a recently developed computer-based dental implant positioning system with an image-to-tissue interface is presented. On a computer monitor or in a head-up display, planned implant positions and the implant drill are graphically superimposed on the patient's anatomy. Electromagnetic 3D sensors track all skull and jaw movements; their signal feedback to the workstation induces permanent real-time updating of the virtual graphics' position. An experimental study and a clinical case demonstrate the concept of the augmented reality environment: the physician can see the operating field and superimposed virtual structures, such as dental implants and surgical instruments, without losing visual control of the operating field. The system therefore allows visualization of the CT-planned implant position and of important anatomical structures. The presented method for the first time links preoperatively acquired radiologic data, planned implant location and intraoperative navigation assistance for orthotopic positioning of dental implants. (orig.) [de]

  9. Endogenous visuospatial attention increases visual awareness independent of visual discrimination sensitivity.

    Science.gov (United States)

    Vernet, Marine; Japee, Shruti; Lokey, Savannah; Ahmed, Sara; Zachariou, Valentinos; Ungerleider, Leslie G

    2017-08-12

    Visuospatial attention often improves task performance by increasing signal gain at attended locations and decreasing noise at unattended locations. Attention is also believed to be the mechanism that allows information to enter awareness. In this experiment, we assessed whether orienting endogenous visuospatial attention with cues differentially affects visual discrimination sensitivity (an objective task performance) and visual awareness (the subjective feeling of perceiving) during the same discrimination task. Gabor patch targets were presented laterally, either at low contrast (contrast stimuli) or at high contrast embedded in noise (noise stimuli). Participants reported their orientation either in a 3-alternative choice task (clockwise, counterclockwise, unknown) that allowed for both objective and subjective reports, or in a 2-alternative choice task (clockwise, counterclockwise) that provided a control for objective reports. Signal detection theory models were fit to the experimental data: estimated perceptual sensitivity reflected objective performance; decision criteria, or subjective biases, were a proxy for visual awareness. Attention increased sensitivity (i.e., improved objective performance) for the contrast, but not for the noise stimuli. Indeed, with the latter, attention did not further enhance the already high target signal or reduce the already low uncertainty on its position. Interestingly, for both contrast and noise stimuli, attention resulted in more liberal criteria, i.e., awareness increased. The noise condition is thus an experimental configuration where people think they see the targets they attend to better, even if they do not. This could be explained by an internal representation of their attentional state, which influences awareness independent of objective visual signals. Copyright © 2017. Published by Elsevier Ltd.
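
    The sensitivity and criterion estimates referred to above are commonly computed from hit and false-alarm rates. The sketch below uses the standard equal-variance yes/no signal detection formulas as an illustration; this is a simplification of the models actually fit to the 3- and 2-alternative tasks in the study, and the counts are invented.

      from scipy.stats import norm

      def sdt_indices(hits, misses, false_alarms, correct_rejections):
          """Equal-variance SDT: d-prime (sensitivity) and criterion c (response bias)."""
          # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
          hit_rate = (hits + 0.5) / (hits + misses + 1)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
          z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
          d_prime = z_hit - z_fa
          criterion = -0.5 * (z_hit + z_fa)             # more negative = more liberal reporting
          return d_prime, criterion

      # Hypothetical counts for attended vs. unattended locations.
      print(sdt_indices(hits=78, misses=22, false_alarms=14, correct_rejections=86))
      print(sdt_indices(hits=60, misses=40, false_alarms=10, correct_rejections=90))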

  10. [Associative Learning between Orientation and Color in Early Visual Areas].

    Science.gov (United States)

    Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo

    2017-08-01

    Associative learning is an essential neural phenomenon in which the contingency between different items increases after training. Although associative learning has been found to occur in many brain regions, there is no clear evidence that associative learning of visual features occurs in early visual areas. Here, we developed associative decoded functional magnetic resonance imaging (fMRI) neurofeedback (A-DecNef) to determine whether associative learning of color and orientation can be induced in early visual areas. During the three days of training, A-DecNef induced fMRI signal patterns that corresponded to a specific target color (red), mostly in early visual areas, while a vertical achromatic grating was simultaneously, physically presented to participants. Consequently, participants reported perceiving "red" significantly more frequently than "green" in the achromatic vertical grating. This effect was also observed 3 to 5 months after training. These results suggest that long-term associative learning of two different visual features, such as color and orientation, was induced, most likely in early visual areas. This newly extended technique that induces associative learning may be used as an important tool for understanding and modifying brain function, since associations are fundamental and ubiquitous with respect to brain function.

  11. Sparse coding can predict primary visual cortex receptive field changes induced by abnormal visual input.

    Science.gov (United States)

    Hunt, Jonathan J; Dayan, Peter; Goodhill, Geoffrey J

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.
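
    As an illustration of the sparsity principle invoked above, the sketch below learns a dictionary for small image patches under an L1 sparsity penalty using scikit-learn's generic dictionary learner. The random patches, patch size, and hyperparameters are placeholders; this is not the authors' binocular model.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning

      # Hypothetical whitened 8x8 image patches, flattened to 64-dimensional vectors.
      rng = np.random.default_rng(0)
      patches = rng.standard_normal((2000, 64))

      # Sparse coding: each patch is represented as a sparse combination of basis functions.
      learner = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                            transform_algorithm="lasso_lars",
                                            random_state=0)
      codes = learner.fit_transform(patches)            # sparse coefficients per patch
      basis = learner.components_                       # learned basis ("receptive fields")
      print(basis.shape, float((codes != 0).mean()))    # dictionary shape and code density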

  12. Comparison of imaging modalities and source-localization algorithms in locating the induced activity during deep brain stimulation of the STN.

    Science.gov (United States)

    Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M

    2016-08-01

    One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for placement of the DBS electrodes has become an area of intensive research. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level, validating them with the a priori information about the location of the source, that is, the STN. Secondly, we aim to investigate whether EEG or MEG is better suited to mapping the DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) was used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole approaches (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) approaches (minimum norm and sLORETA). The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions that are affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to the minimum norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.
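
    A compact way to see what the current-density-reconstruction approaches do is the regularized minimum-norm estimate, which maps sensor data back onto candidate sources through the lead field (forward) matrix. The random lead field, data, and regularization parameter below are placeholders that only illustrate the formula, not the authors' pipeline.

      import numpy as np

      rng = np.random.default_rng(0)
      n_sensors, n_sources = 64, 500
      G = rng.standard_normal((n_sensors, n_sources))   # lead field, e.g. from a BEM forward model
      y = rng.standard_normal(n_sensors)                 # one time sample of sensor data

      # Minimum-norm estimate: x_hat = G.T @ inv(G @ G.T + lam * I) @ y
      lam = 0.1
      x_hat = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), y)

      # The strongest estimated source is taken as the localized activity.
      print("peak source index:", int(np.argmax(np.abs(x_hat))))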

  13. Pleasant music as a countermeasure against visually induced motion sickness.

    Science.gov (United States)

    Keshavarz, Behrang; Hecht, Heiko

    2014-05-01

    Visually induced motion sickness (VIMS) is a well-known side-effect in virtual environments or simulators. However, effective behavioral countermeasures against VIMS are still sparse. In this study, we tested whether music can reduce the severity of VIMS. Ninety-three volunteers were immersed in an approximately 14-minute-long video taken during a bicycle ride. Participants were randomly assigned to one of four experimental groups, either including relaxing music, neutral music, stressful music, or no music. Sickness scores were collected using the Fast Motion Sickness Scale and the Simulator Sickness Questionnaire. Results showed an overall trend for relaxing music to reduce the severity of VIMS. When factoring in the subjective pleasantness of the music, a significant reduction of VIMS occurred only when the presented music was perceived as pleasant, regardless of the music type. In addition, we found a gender effect with women reporting more sickness than men. We assume that the presentation of pleasant music can be an effective, low-cost, and easy-to-administer method to reduce VIMS. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  14. A Visualization Method for Corrosion Damage on Aluminum Plates Using an Nd:YAG Pulsed Laser Scanning System.

    Science.gov (United States)

    Lee, Inbok; Zhang, Aoqi; Lee, Changgil; Park, Seunghee

    2016-12-16

    This paper proposes a non-contact nondestructive evaluation (NDE) technique that uses laser-induced ultrasonic waves to visualize corrosion damage in aluminum alloy plate structures. The non-contact, pulsed-laser ultrasonic measurement system generates ultrasonic waves using a galvanometer-based Q-switched Nd:YAG laser and measures the ultrasonic waves using a piezoelectric (PZT) sensor. During scanning, a wavefield can be acquired by changing the excitation location of the laser point and measuring waves using the PZT sensor. The corrosion damage can be detected in the wavefield snapshots using the scattering characteristics of the waves that encounter corrosion. The structural damage is visualized by calculating the logarithmic values of the root mean square (RMS), with a weighting parameter to compensate for the attenuation caused by geometrical spreading and dispersion of the waves. An intact specimen is used to conduct a comparison with corrosion at different depths and sizes in other specimens. Both sides of the plate are scanned with the same scanning area to observe the effect of the location where corrosion has formed. The results show that the damage can be successfully visualized for almost all cases using the RMS-based functions, whether it formed on the front or back side. Also, the system is confirmed to have distinguished corroded areas at different depths.
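
    The visualization step described above amounts to computing a weighted, log-scaled root mean square over time at every scan point. The sketch below shows the idea on a synthetic wavefield; the power-law weighting and the data are assumptions, not the exact weighting function used in the paper.

      import numpy as np

      # Hypothetical full wavefield: time samples x scan grid (ny, nx).
      rng = np.random.default_rng(0)
      wavefield = rng.standard_normal((1024, 60, 60))
      t = np.arange(wavefield.shape[0])

      # Time-dependent weighting to compensate for geometric spreading and dispersion
      # (a simple square-root-of-time weight is assumed here for illustration).
      weight = (t + 1) ** 0.5
      weighted = wavefield * weight[:, None, None]

      # Weighted RMS per scan point, then a logarithm to compress the dynamic range.
      rms = np.sqrt(np.mean(weighted ** 2, axis=0))
      damage_map = np.log10(rms + 1e-12)
      print(damage_map.shape, float(damage_map.min()), float(damage_map.max()))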

  15. Graded Neuronal Modulations Related to Visual Spatial Attention

    Science.gov (United States)

    Maunsell, John H. R.

    2016-01-01

    Studies of visual attention in monkeys typically measure neuronal activity when the stimulus event to be detected occurs at a cued location versus when it occurs at an uncued location. But this approach does not address how neuronal activity changes relative to conditions where attention is unconstrained by cueing. Human psychophysical studies have used neutral cueing conditions and found that neutrally cued behavioral performance is generally intermediate to that of cued and uncued conditions (Posner et al., 1978; Mangun and Hillyard, 1990; Montagna et al., 2009). To determine whether the neuronal correlates of visual attention during neutral cueing are similarly intermediate, we trained macaque monkeys to detect changes in stimulus orientation that were more likely to occur at one location (cued) than another (uncued), or were equally likely to occur at either stimulus location (neutral). Consistent with human studies, performance was best when the location was cued, intermediate when both locations were neutrally cued, and worst when the location was uncued. Neuronal modulations in visual area V4 were also graded as a function of cue validity and behavioral performance. By recording from both hemispheres simultaneously, we investigated the possibility of switching attention between stimulus locations during neutral cueing. The results failed to support a unitary “spotlight” of attention. Overall, our findings indicate that attention-related changes in V4 are graded to accommodate task demands. SIGNIFICANCE STATEMENT Studies of the neuronal correlates of attention in monkeys typically use visual cues to manipulate where attention is focused (“cued” vs “uncued”). Human psychophysical studies often also include neutrally cued trials to study how attention naturally varies between points of interest. But the neuronal correlates of this neutral condition are unclear. We measured behavioral performance and neuronal activity in cued, uncued, and neutrally cued blocks of trials.

  16. Graded Neuronal Modulations Related to Visual Spatial Attention.

    Science.gov (United States)

    Mayo, J Patrick; Maunsell, John H R

    2016-05-11

    Studies of visual attention in monkeys typically measure neuronal activity when the stimulus event to be detected occurs at a cued location versus when it occurs at an uncued location. But this approach does not address how neuronal activity changes relative to conditions where attention is unconstrained by cueing. Human psychophysical studies have used neutral cueing conditions and found that neutrally cued behavioral performance is generally intermediate to that of cued and uncued conditions (Posner et al., 1978; Mangun and Hillyard, 1990; Montagna et al., 2009). To determine whether the neuronal correlates of visual attention during neutral cueing are similarly intermediate, we trained macaque monkeys to detect changes in stimulus orientation that were more likely to occur at one location (cued) than another (uncued), or were equally likely to occur at either stimulus location (neutral). Consistent with human studies, performance was best when the location was cued, intermediate when both locations were neutrally cued, and worst when the location was uncued. Neuronal modulations in visual area V4 were also graded as a function of cue validity and behavioral performance. By recording from both hemispheres simultaneously, we investigated the possibility of switching attention between stimulus locations during neutral cueing. The results failed to support a unitary "spotlight" of attention. Overall, our findings indicate that attention-related changes in V4 are graded to accommodate task demands. Studies of the neuronal correlates of attention in monkeys typically use visual cues to manipulate where attention is focused ("cued" vs "uncued"). Human psychophysical studies often also include neutrally cued trials to study how attention naturally varies between points of interest. But the neuronal correlates of this neutral condition are unclear. We measured behavioral performance and neuronal activity in cued, uncued, and neutrally cued blocks of trials. Behavioral

  17. Transient cardio-respiratory responses to visually induced tilt illusions

    Science.gov (United States)

    Wood, S. J.; Ramsdell, C. D.; Mullen, T. J.; Oman, C. M.; Harm, D. L.; Paloski, W. H.

    2000-01-01

    Although the orthostatic cardio-respiratory response is primarily mediated by the baroreflex, studies have shown that vestibular cues also contribute in both humans and animals. We have demonstrated a visually mediated response to illusory tilt in some human subjects. Blood pressure, heart and respiration rate, and lung volume were monitored in 16 supine human subjects during two types of visual stimulation, and compared with responses to real passive whole body tilt from supine to head 80 degrees upright. Visual tilt stimuli consisted of either a static scene from an overhead mirror or constant velocity scene motion along different body axes generated by an ultra-wide dome projection system. Visual vertical cues were initially aligned with the longitudinal body axis. Subjective tilt and self-motion were reported verbally. Although significant changes in cardio-respiratory parameters to illusory tilts could not be demonstrated for the entire group, several subjects showed significant transient decreases in mean blood pressure resembling their initial response to passive head-up tilt. Changes in pulse pressure and a slight elevation in heart rate were noted. These transient responses are consistent with the hypothesis that visual-vestibular input contributes to the initial cardiovascular adjustment to a change in posture in humans. On average the static scene elicited perceived tilt without rotation. Dome scene pitch and yaw elicited perceived tilt and rotation, and dome roll motion elicited perceived rotation without tilt. A significant correlation between the magnitude of physiological and subjective reports could not be demonstrated.

  18. Implied motion language can influence visual spatial memory

    NARCIS (Netherlands)

    Vinson, David; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

    How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it

  19. Small Aperture Telescope Observations of Co-located Geostationary Satellites

    Science.gov (United States)

    Scott, R.; Wallace, B.

    As geostationary orbit (GEO) continues to be populated, satellite operators are increasingly using co-location techniques to make the most of a limited number of GEO longitude slots. Co-location is an orbital formation strategy in which two or more geostationary satellites reside within one GEO stationkeeping box. The separation strategy used to prevent collision between the co-located satellites generally uses eccentricity (radial separation) and inclination (latitude separation) vector offsets. This causes the satellites to move in relative motion ellipses about each other, as the relative longitude drift between the satellites is near zero. Typical separations between the satellites vary from 1 to 100 kilometers. When co-located satellites are observed by optical ground-based space surveillance sensors, they appear to be separated by a few minutes of arc or less in angular extent. Under certain viewing geometries, these satellites appear to conjunct visually even though they are, in fact, well separated spatially. In situations where one of the co-located satellites is more optically reflective than the other, the reflected sunglint from the more reflective satellite can overwhelm the other. This less frequently encountered issue causes the less reflective satellite to be glint masked in the glare of the other. This paper focuses on space surveillance observations of co-located Canadian satellites using a small optical telescope operated by Defence R&D Canada - Ottawa. The two above-mentioned problems (cross tagging and glint masking) are investigated and the results are quantified for Canadian-operated geostationary satellites. The performance of two-line element sets when making in-frame CCD image correlation between the co-located satellites is also examined. Relative visual magnitudes between the co-located members are also inspected and quantified to determine the susceptibility of automated telescopes to glint masking of co-located satellite members.
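
    The eccentricity/inclination separation strategy mentioned above can be illustrated with the standard first-order relations for relative motion at GEO: an eccentricity-vector offset of magnitude |de| produces a radial oscillation of roughly a*|de| (with a the GEO radius), and an inclination-vector offset of |di| produces a cross-track oscillation of roughly a*|di|. The offsets in the sketch below are assumed values chosen only to land in the 1-100 km range quoted in the abstract; none of this is taken from the paper.

      import math

      A_GEO = 42_164.0                                  # geostationary orbit radius (km)

      def separation_amplitudes(delta_e, delta_i_deg):
          """First-order radial / cross-track oscillation amplitudes for e/i co-location."""
          radial = A_GEO * delta_e                              # km, from the eccentricity offset
          cross_track = A_GEO * math.radians(delta_i_deg)       # km, from the inclination offset
          return radial, cross_track

      # Assumed (illustrative) vector offsets between two co-located satellites.
      r, c = separation_amplitudes(delta_e=5e-4, delta_i_deg=0.03)
      print(f"radial ~ {r:.0f} km, cross-track ~ {c:.0f} km")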

  20. A different outlook on time: visual and auditory month names elicit different mental vantage points for a time-space synaesthete.

    Science.gov (United States)

    Jarick, Michelle; Dixon, Mike J; Stewart, Mark T; Maxwell, Emily C; Smilek, Daniel

    2009-01-01

    Synaesthesia is a fascinating condition whereby individuals report extraordinary experiences when presented with ordinary stimuli. Here we examined an individual (L) who experiences time units (i.e., months of the year and hours of the day) as occupying specific spatial locations (January is 30 degrees to the left of midline). This form of time-space synaesthesia has been recently investigated by Smilek et al. (2007) who demonstrated that synaesthetic time-space associations are highly consistent, occur regardless of intention, and can direct spatial attention. We extended this work by showing that for the synaesthete L, her time-space vantage point changes depending on whether the time units are seen or heard. For example, when L sees the word JANUARY, she reports experiencing January on her left side, however when she hears the word "January" she experiences the month on her right side. L's subjective reports were validated using a spatial cueing paradigm. The names of months were centrally presented followed by targets on the left or right. L was faster at detecting targets in validly cued locations relative to invalidly cued locations both for visually presented cues (January orients attention to the left) and for aurally presented cues (January orients attention to the right). We replicated this difference in visual and aural cueing effects using hour of the day. Our findings support previous research showing that time-space synaesthesia can bias visual spatial attention, and further suggest that for this synaesthete, time-space associations differ depending on whether they are visually or aurally induced.

  1. Visualizations of Travel Time Performance Based on Vehicle Reidentification Data

    Energy Technology Data Exchange (ETDEWEB)

    Young, Stanley Ernest [National Renewable Energy Lab, 15013 Denver West Parkway, Golden, CO 80401]; Sharifi, Elham [Center for Advanced Transportation Technology, University of Maryland, College Park, Technology Ventures Building, Suite 2200, 5000 College Avenue, College Park, MD 20742]; Day, Christopher M. [Joint Transportation Research Program, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47906]; Bullock, Darcy M. [Lyles School of Civil Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47906]

    2017-01-01

    This paper provides a visual reference of the breadth of arterial performance phenomena based on travel time measures obtained from reidentification technology that has proliferated in the past 5 years. These graphical performance measures are revealed through overlay charts and statistical distribution as revealed through cumulative frequency diagrams (CFDs). With overlays of vehicle travel times from multiple days, dominant traffic patterns over a 24-h period are reinforced and reveal the traffic behavior induced primarily by the operation of traffic control at signalized intersections. A cumulative distribution function in the statistical literature provides a method for comparing traffic patterns from various time frames or locations in a compact visual format that provides intuitive feedback on arterial performance. The CFD may be accumulated hourly, by peak periods, or by time periods specific to signal timing plans that are in effect. Combined, overlay charts and CFDs provide visual tools with which to assess the quality and consistency of traffic movement for various periods throughout the day efficiently, without sacrificing detail, which is a typical byproduct of numeric-based performance measures. These methods are particularly effective for comparing before-and-after median travel times, as well as changes in interquartile range, to assess travel time reliability.
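
    A cumulative frequency diagram of the kind described above is an empirical cumulative distribution of travel times for a chosen time-of-day window. The sketch below builds one per period from synthetic travel-time records and reads off the median and interquartile range; the period boundaries and data are assumptions.

      import numpy as np

      # Hypothetical re-identification records: hour of day and travel time (seconds).
      rng = np.random.default_rng(0)
      hours = rng.integers(0, 24, 5000)
      times = rng.gamma(shape=4, scale=30, size=5000) + 60

      def empirical_cdf(values):
          """Return sorted values and their cumulative frequencies (one CFD curve)."""
          x = np.sort(values)
          y = np.arange(1, len(x) + 1) / len(x)
          return x, y

      # Compare an assumed AM peak (07-09 h) with a midday period (11-13 h).
      for label, lo, hi in [("AM peak", 7, 9), ("midday", 11, 13)]:
          sel = (hours >= lo) & (hours < hi)
          x, y = empirical_cdf(times[sel])
          median = x[np.searchsorted(y, 0.5)]
          iqr = x[np.searchsorted(y, 0.75)] - x[np.searchsorted(y, 0.25)]
          print(f"{label}: median ~ {median:.0f} s, IQR ~ {iqr:.0f} s")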

  2. Acquired auditory-visual synesthesia: A window to early cross-modal sensory interactions

    Directory of Open Access Journals (Sweden)

    Pegah Afra

    2009-01-01

    Pegah Afra, Michael Funke, Fumisuke Matsuo, Department of Neurology, University of Utah, Salt Lake City, UT, USA. Abstract: Synesthesia is experienced when sensory stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations. It has developmental, induced and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system as well as temporal lobe pathology with intact visual pathways. The induced variety has been reported with experimental and post-surgical blindfolding, as well as with intake of hallucinogens or psychedelics. Although in humans there is no known anatomical pathway connecting auditory areas to primary and/or early visual association areas, there is imaging and neurophysiologic evidence for the presence of early cross-modal interactions between the auditory and visual sensory pathways. Synesthesia may be a window of opportunity to study these cross-modal interactions. Here we review the existing literature on acquired and induced auditory-visual synesthesias and discuss possible neural mechanisms. Keywords: synesthesia, auditory-visual, cross modal

  3. Visual updating across saccades by working memory integration

    NARCIS (Netherlands)

    Oostwoud Wijdenes, L.; Marshall, L.; Bays, P.M.

    2015-01-01

    We explore the visual world through saccadic eye movements, but saccades also present a challenge to visual processing, by shifting externally-stable objects from one retinal location to another. The brain could solve this problem in two ways: by overwriting preceding input and starting afresh with

  4. Vection is the main contributor to motion sickness induced by visual yaw rotation: Implications for conflict and eye movement theories.

    Directory of Open Access Journals (Sweden)

    Suzanne A E Nooij

    This study investigated the role of vection (i.e., a visually induced sense of self-motion), optokinetic nystagmus (OKN), and inadvertent head movements in visually induced motion sickness (VIMS) evoked by yaw rotation of the visual surround. These three elements have all been proposed as contributing factors in VIMS, as they can be linked to different motion sickness theories. However, a full understanding of the role of each factor is still lacking because independent manipulation has proven difficult in the past. We adopted an integrative approach to the problem by obtaining measures of potentially relevant parameters in four experimental conditions and subsequently combining them in a linear mixed regression model. To that end, participants were exposed to visual yaw rotation in four separate sessions. Using a full factorial design, the OKN was manipulated by a fixation target (present/absent), and vection strength by introducing a conflict in the motion direction of the central and peripheral field of view (present/absent). In all conditions, head movements were minimized as much as possible. Measured parameters included vection strength, vection variability, OKN slow phase velocity, OKN frequency, the number of inadvertent head movements, and inadvertent head tilt. Results show that VIMS increases with vection strength, but that this relation varies among participants (R2 = 0.48). Regression parameters for vection variability and for the head and eye movement parameters were not significant. These results may seem to be in line with the Sensory Conflict theory of motion sickness, but we argue that a more detailed definition of the exact nature of the conflict is required to fully appreciate the relationship between vection and VIMS.
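
    A linear mixed regression of the kind described above, with the sickness score as the outcome, vection strength as a fixed effect, and a per-participant random term, could be set up as in the sketch below. The data frame columns, the simulated values, the random-intercept-only structure, and the use of statsmodels are illustrative assumptions rather than the authors' exact model, which also allowed the vection slope to vary across participants.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical data: repeated measurements (four conditions) per participant.
      rng = np.random.default_rng(0)
      n_subj, n_cond = 20, 4
      df = pd.DataFrame({
          "participant": np.repeat(np.arange(n_subj), n_cond),
          "vection": rng.uniform(0, 10, n_subj * n_cond),
      })
      df["vims"] = 2 + 0.6 * df["vection"] + rng.normal(0, 1.5, len(df))

      # Mixed model: fixed effect of vection strength, random intercept per participant.
      model = smf.mixedlm("vims ~ vection", df, groups=df["participant"])
      result = model.fit()
      print(result.params["vection"])                   # estimated fixed-effect slope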

  5. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory evoked potentials. An early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  6. Object formation in visual working memory: Evidence from object-based attention.

    Science.gov (United States)

    Zhou, Jifan; Zhang, Haihang; Ding, Xiaowei; Shui, Rende; Shen, Mowei

    2016-09-01

    We report on how visual working memory (VWM) forms intact perceptual representations of visual objects using sub-object elements. Specifically, when objects were divided into fragments and sequentially encoded into VWM, the fragments were involuntarily integrated into objects in VWM, as evidenced by the occurrence of both positive and negative object-based attention effects: In Experiment 1, when subjects' attention was cued to a location occupied by the VWM object, the target presented at the location of that object was perceived as occurring earlier than that presented at the location of a different object. In Experiment 2, responses to a target were significantly slower when a distractor was presented at the same location as the cued object (Experiment 2). These results suggest that object fragments can be integrated into objects within VWM in a manner similar to that of visual perception. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Improving visual perception through neurofeedback

    Science.gov (United States)

    Scharnowski, Frank; Hutton, Chloe; Josephs, Oliver; Weiskopf, Nikolaus; Rees, Geraint

    2012-01-01

    Perception depends on the interplay of ongoing spontaneous activity and stimulus-evoked activity in sensory cortices. This raises the possibility that training ongoing spontaneous activity alone might be sufficient for enhancing perceptual sensitivity. To test this, we trained human participants to control ongoing spontaneous activity in circumscribed regions of retinotopic visual cortex using real-time functional MRI based neurofeedback. After training, we tested participants using a new and previously untrained visual detection task that was presented at the visual field location corresponding to the trained region of visual cortex. Perceptual sensitivity was significantly enhanced only when participants who had previously learned control over ongoing activity were now exercising control, and only for that region of visual cortex. Our new approach allows us to non-invasively and non-pharmacologically manipulate regionally specific brain activity, and thus provide ‘brain training’ to deliver particular perceptual enhancements. PMID:23223302

  8. Memory for Complex Visual Objects but Not for Allocentric Locations during the First Year of Life

    Science.gov (United States)

    Dupierrix, Eve; Hillairet de Boisferon, Anne; Barbeau, Emmanuel; Pascalis, Olivier

    2015-01-01

    Although human infants demonstrate early competence to retain visual information, memory capacities during infancy remain largely undocumented. In three experiments, we used a Visual Paired Comparison (VPC) task to examine abilities to encode identity (Experiment 1) and spatial properties (Experiments 2a and 2b) of unfamiliar complex visual…

  9. Evidence for unlimited capacity processing of simple features in visual cortex.

    Science.gov (United States)

    White, Alex L; Runeson, Erik; Palmer, John; Ernst, Zachary R; Boynton, Geoffrey M

    2017-06-01

    Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level-dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity.

  10. Humans tend to walk in circles as directed by memorized visual locations at large distances

    OpenAIRE

    Consolo, Patricia; Holanda, Humberto C.; Fukusima, Sérgio S.

    2014-01-01

    Human veering while walking blindfolded or walking straight without any visual cues has been widely studied over the last 100 years, but the results are still controversial. The present study attempted to describe and understand the human ability to maintain the direction of a trajectory while walking without visual or audio cues with reference to a proposed mathematical model and using data collected by a global positioning system (GPS). Fifteen right-handed people of both genders, aged 18-3...
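
    For readers who want to experiment with this kind of analysis, a minimal sketch of quantifying net veering from a GPS track is given below; the local-projection approach, example coordinates, and the simple net-turn measure are illustrative assumptions, not the authors' published model.

```python
import numpy as np

def veering_from_gps(lat, lon):
    """Estimate net turning (degrees) of a walked path from GPS samples.

    lat, lon: 1-D arrays of degrees, ordered in time. Uses a local
    equirectangular projection, adequate for paths of a few hundred metres
    (an assumption, not the authors' exact model).
    """
    lat, lon = np.radians(lat), np.radians(lon)
    R = 6371000.0                                  # mean Earth radius, m
    x = R * (lon - lon[0]) * np.cos(lat.mean())    # east, m
    y = R * (lat - lat[0])                         # north, m
    headings = np.arctan2(np.diff(x), np.diff(y))  # compass bearing of each step
    turns = np.diff(np.unwrap(headings))           # signed turn per step
    return np.degrees(turns.sum())                 # net veer: + right, - left

# Example: a gently right-curving path (hypothetical coordinates)
t = np.linspace(0, 1, 50)
net_turn = veering_from_gps(-21.17 + 1e-4 * t, -47.81 + 1e-4 * t**2)
print(f"net veering ~ {net_turn:.1f} degrees")
```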

  11. Visual Stimuli Induce Waves of Electrical Activity in Turtle Cortex

    Science.gov (United States)

    Prechtl, J. C.; Cohen, L. B.; Pesaran, B.; Mitra, P. P.; Kleinfeld, D.

    1997-07-01

    The computations involved in the processing of a visual scene invariably involve the interactions among neurons throughout all of visual cortex. One hypothesis is that the timing of neuronal activity, as well as the amplitude of activity, provides a means to encode features of objects. The experimental data from studies on cat [Gray, C. M., Konig, P., Engel, A. K. & Singer, W. (1989) Nature (London) 338, 334-337] support a view in which only synchronous (no phase lags) activity carries information about the visual scene. In contrast, theoretical studies suggest, on the one hand, the utility of multiple phases within a population of neurons as a means to encode independent visual features and, on the other hand, the likely existence of timing differences solely on the basis of network dynamics. Here we use widefield imaging in conjunction with voltage-sensitive dyes to record electrical activity from the virtually intact, unanesthetized turtle brain. Our data consist of single-trial measurements. We analyze our data in the frequency domain to isolate coherent events that lie in different frequency bands. Low-frequency oscillations are prominent among these events, and scale differences in neuronal timing are present and persistent during visual processing.
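
    As a rough illustration of the frequency-domain isolation described above, the sketch below applies a zero-phase band-pass filter and a Hilbert transform to one synthetic single-trial trace; the sampling rate, band edges, and filter order are placeholders rather than the authors' multitaper parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_component(signal, fs, lo, hi, order=4):
    """Isolate one frequency band and return its analytic amplitude and phase."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    narrow = filtfilt(b, a, signal)        # zero-phase band-pass
    analytic = hilbert(narrow)
    return np.abs(analytic), np.angle(analytic)

# Example: one pixel's optical trace (synthetic), sampled at 500 Hz
fs = 500
t = np.arange(0, 2, 1 / fs)
trace = np.sin(2 * np.pi * 18 * t) + 0.5 * np.random.randn(t.size)
amp, phase = band_component(trace, fs, 15, 25)   # isolate activity near 20 Hz
print(f"mean band amplitude: {amp.mean():.2f}")
```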

  12. Impact induced damage assessment by means of Lamb wave image processing

    Science.gov (United States)

    Kudela, Pawel; Radzienski, Maciej; Ostachowicz, Wieslaw

    2018-03-01

    The aim of this research is to analyze full wavefield Lamb wave interaction with impact-induced damage at various impact energies in order to determine the limitations of the wavenumber adaptive image filtering method; in other words, the relation between impact energy and damage detectability is established. A numerical model based on the time-domain spectral element method is used to model Lamb wave propagation and its interaction with barely visible impact damage in a carbon-epoxy laminate. The numerical studies are followed by experimental research on the same material, with impact damage induced at various energies and a Teflon insert simulating delamination. Wavenumber adaptive image filtering and signal processing are used for damage visualization and assessment on both the numerical and experimental full wavefield data. It is shown that the proposed technique can visualize and assess the location, size and, to some extent, severity of the impact damage.
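
    The core step of wavenumber-based filtering of a full-wavefield snapshot can be sketched as a 2-D FFT, a radial wavenumber mask, and an inverse transform; the grid spacing and band limits below are illustrative choices and not the adaptive scheme used in the paper.

```python
import numpy as np

def wavenumber_filter(frame, dx, k_lo, k_hi):
    """Band-pass one full-wavefield snapshot in the wavenumber domain.

    frame : 2-D array of out-of-plane velocity at one time instant
    dx    : spatial sampling interval [m]
    k_lo, k_hi : retained radial wavenumber band [rad/m]
    """
    ny, nx = frame.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    mask = (np.hypot(KX, KY) >= k_lo) & (np.hypot(KX, KY) <= k_hi)
    spectrum = np.fft.fft2(frame)
    return np.real(np.fft.ifft2(spectrum * mask))   # keep damage-related wavenumbers

# Example: 256 x 256 grid with 1 mm pitch, keep 200-800 rad/m (placeholder values)
snapshot = np.random.randn(256, 256)
filtered = wavenumber_filter(snapshot, dx=1e-3, k_lo=200, k_hi=800)
print(filtered.shape)
```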

  13. Visualization of odor-induced neuronal activity by immediate early gene expression

    Directory of Open Access Journals (Sweden)

    Bepari Asim K

    2012-11-01

    Full Text Available Abstract Background Sensitive detection of sensory-evoked neuronal activation is a key to mechanistic understanding of brain functions. Since immediate early genes (IEGs) are readily induced in the brain by environmental changes, tracing IEG expression provides a convenient tool to identify brain activity. In this study we used in situ hybridization to detect odor-evoked induction of ten IEGs in the mouse olfactory system. We then analyzed IEG induction in the cyclic nucleotide-gated channel subunit A2 (Cnga2)-null mice to visualize residual neuronal activity following odorant exposure, since CNGA2 is a key component of the olfactory signal transduction pathway in the main olfactory system. Results We observed rapid induction of as many as ten IEGs in the mouse olfactory bulb (OB) after olfactory stimulation by the non-biological odorant amyl acetate. A robust increase in expression of several IEGs like c-fos and Egr1 was evident in the glomerular layer, the mitral/tufted cell layer and the granule cell layer. Additionally, the neuronal IEG Npas4 showed steep induction from a very low basal expression level, predominantly in the granule cell layer. In Cnga2-null mice, which are usually anosmic and sexually unresponsive, glomerular activation was insignificant in response to either ambient odorants or female stimuli. However, a subtle induction of c-fos took place in the OB of a few Cnga2 mutants which exhibited sexual arousal. Interestingly, very strong glomerular activation was observed in the OB of Cnga2-null male mice after stimulation with either the neutral odor amyl acetate or the predator odor 2,3,5-trimethyl-3-thiazoline (TMT). Conclusions This study shows for the first time that in vivo olfactory stimulation can robustly induce the neuronal IEG Npas4 in the mouse OB and confirms the odor-evoked induction of a number of IEGs. As shown in previous studies, our results indicate that a CNGA2-independent signaling pathway(s) may activate the

  14. Mental Imagery and Visual Working Memory

    Science.gov (United States)

    Keogh, Rebecca; Pearson, Joel

    2011-01-01

    Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory - but not iconic visual memory - can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage. PMID:22195024

  15. Mental imagery and visual working memory.

    Directory of Open Access Journals (Sweden)

    Rebecca Keogh

    Full Text Available Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory--but not iconic visual memory--can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage.

  16. Mental imagery and visual working memory.

    Science.gov (United States)

    Keogh, Rebecca; Pearson, Joel

    2011-01-01

    Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory--but not iconic visual memory--can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage.

  17. Reconfigurable Auditory-Visual Display

    Science.gov (United States)

    Begault, Durand R. (Inventor); Anderson, Mark R. (Inventor); McClain, Bryan (Inventor); Miller, Joel D. (Inventor)

    2008-01-01

    System and method for visual and audible communication between a central operator and N mobile communicators (N greater than or equal to 2), including an operator transceiver and interface, configured to receive and display, for the operator, visually perceptible and audibly perceptible signals from each of the mobile communicators. The interface (1) presents an audible signal from each communicator as if the audible signal is received from a different location relative to the operator and (2) allows the operator to select, to assign priority to, and to display, the visual signals and the audible signals received from a specified communicator. Each communicator has an associated signal transmitter that is configured to transmit at least one of the visual signals and the audio signal associated with the communicator, where at least one of the signal transmitters includes at least one sensor that senses and transmits a sensor value representing a selected environmental or physiological parameter associated with the communicator.

  18. Top-down contextual knowledge guides visual attention in infancy.

    Science.gov (United States)

    Tummeltshammer, Kristen; Amso, Dima

    2017-10-26

    The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search. © 2017 John Wiley & Sons Ltd.

  19. Crossmodal plasticity in auditory, visual and multisensory cortical areas following noise-induced hearing loss in adulthood.

    Science.gov (United States)

    Schormans, Ashley L; Typlt, Marei; Allman, Brian L

    2017-01-01

    Complete or partial hearing loss results in an increased responsiveness of neurons in the core auditory cortex of numerous species to visual and/or tactile stimuli (i.e., crossmodal plasticity). At present, however, it remains uncertain how adult-onset partial hearing loss affects higher-order cortical areas that normally integrate audiovisual information. To that end, extracellular electrophysiological recordings were performed under anesthesia in noise-exposed rats two weeks post-exposure (0.8-20 kHz at 120 dB SPL for 2 h) and age-matched controls to characterize the nature and extent of crossmodal plasticity in the dorsal auditory cortex (AuD), an area outside of the auditory core, as well as in the neighboring lateral extrastriate visual cortex (V2L), an area known to contribute to audiovisual processing. Computer-generated auditory (noise burst), visual (light flash) and combined audiovisual stimuli were delivered, and the associated spiking activity was used to determine the response profile of each neuron sampled (i.e., unisensory, subthreshold multisensory or bimodal). In both the AuD cortex and the multisensory zone of the V2L cortex, the maximum firing rates were unchanged following noise exposure, and there was a relative increase in the proportion of neurons responsive to visual stimuli, with a concomitant decrease in the number of neurons that were solely responsive to auditory stimuli despite adjusting the sound intensity to account for each rat's hearing threshold. These neighboring cortical areas differed, however, in how noise-induced hearing loss affected audiovisual processing; the total proportion of multisensory neurons significantly decreased in the V2L cortex (control 38.8 ± 3.3% vs. noise-exposed 27.1 ± 3.4%), and dramatically increased in the AuD cortex (control 23.9 ± 3.3% vs. noise-exposed 49.8 ± 6.1%). Thus, following noise exposure, the cortical area showing the greatest relative degree of multisensory convergence

  20. The effect of internal and external fields of view on visually induced motion sickness.

    Science.gov (United States)

    Bos, Jelte E; de Vries, Sjoerd C; van Emmerik, Martijn L; Groen, Eric L

    2010-07-01

    Field of view (FOV) is said to affect visually induced motion sickness. FOV, however, is characterized by an internal setting used by the graphics generator (iFOV) and an external factor determined by screen size and viewing distance (eFOV). We hypothesized that it is especially the incongruence between iFOV and eFOV that would lead to sickness. To that end we used a computer game environment with different iFOV and eFOV settings, and found the opposite effect. We speculate that the relatively large differences between iFOV and eFOV used in this experiment caused the discrepancy, as may be explained by assuming an observer model controlling body motion. Copyright 2009 Elsevier Ltd. All rights reserved.

  1. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    Full Text Available A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.

  2. Connecting Music and Place: Exploring Library Collection Data Using Geo-visualizations

    Directory of Open Access Journals (Sweden)

    Carolyn Doi

    2017-06-01

    Full Text Available Abstract Objectives – This project had two stated objectives: (1) to compare the location and concentration of Saskatchewan-based large ensembles (bands, orchestras, choirs) within the province, with the intention of drawing conclusions about the history of community-based musical activity within the province; and (2) to enable location-based browsing of Saskatchewan music materials through an interactive search interface. Methods – Data were harvested from MARC metadata found in the library catalogue for a special collection of Saskatchewan music at the University of Saskatchewan. Microsoft Excel and OpenRefine were used to screen, clean, and enhance the dataset. Data were imported into ArcGIS software, where they were plotted using a geo-visualization showing the location and concentration of musical activity by large ensembles within the province. The geo-visualization also allows users to filter results based on the ensemble type (band, orchestra, or choir). Results – The geo-visualization shows that albums from large community ensembles appear across the province, in cities and towns of all sizes. The ensembles are concentrated in the southern portion of the province and there is a correlation between population density and ensemble location. Choral ensembles are more prevalent than bands and orchestras, and appear more widely across the province, whereas bands and orchestras are concentrated around larger centres. Conclusions – Library catalogue data contain unique information for research based on special collections, though additional cleaning is needed. Using geospatial visualizations to navigate collections allows for more intuitive searching by location, and allows users to compare facets. While not appropriate for all kinds of searching, maps are useful for browsing and for location-based searches. Information is displayed in a visual way that allows users to explore and connect with other platforms for more information.
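
    As a rough, non-authoritative illustration of this kind of workflow (the actual project used MARC exports, OpenRefine, and ArcGIS), the sketch below aggregates a small cleaned table of ensemble records and plots them by location; all column names, places, and coordinates are hypothetical.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical cleaned export of catalogue records (columns and values assumed)
records = pd.DataFrame({
    "place":    ["Saskatoon", "Regina", "Moose Jaw", "Saskatoon"],
    "lat":      [52.13, 50.45, 50.39, 52.13],
    "lon":      [-106.67, -104.62, -105.55, -106.67],
    "ensemble": ["choir", "band", "orchestra", "choir"],
})

# One marker per place, sized by number of albums, coloured by ensemble type
counts = (records.groupby(["place", "lat", "lon", "ensemble"])
                 .size().reset_index(name="n"))
for kind, grp in counts.groupby("ensemble"):
    plt.scatter(grp["lon"], grp["lat"], s=80 * grp["n"], label=kind, alpha=0.6)
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.legend()
plt.title("Large ensembles by location")
plt.show()
```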

  3. Effects of visual feedback-induced variability on motor learning of handrim wheelchair propulsion.

    Science.gov (United States)

    Leving, Marika T; Vegter, Riemer J K; Hartog, Johanneke; Lamoth, Claudine J C; de Groot, Sonja; van der Woude, Lucas H V

    2015-01-01

    It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. 17 Participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare both groups the pre- and post-test were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables with instruction to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables with their respective coefficient of variation were calculated to evaluate the amount of intra-individual variability. The feedback group, which practiced with higher intra-individual variability, improved the propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that the improvement in mechanical efficiency and propulsion technique do not always appear

  4. Effects of visual feedback-induced variability on motor learning of handrim wheelchair propulsion.

    Directory of Open Access Journals (Sweden)

    Marika T Leving

    Full Text Available It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. Seventeen participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received an equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare the two groups, the pre- and post-test were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables with instruction to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration, the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables with their respective coefficients of variation were calculated to evaluate the amount of intra-individual variability. The feedback group, which practiced with higher intra-individual variability, improved the propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that the improvement in mechanical efficiency and propulsion technique do not
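
    The intra-individual variability measure reported in these two records, the coefficient of variation of a propulsion-technique variable across pushes, can be computed in a few lines; the variable name and example values below are hypothetical.

```python
import numpy as np

def coefficient_of_variation(values):
    """CV (%) of a propulsion-technique variable across consecutive pushes."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Example: push angle (degrees) for 10 consecutive pushes of one participant
push_angle = [72, 75, 69, 80, 77, 71, 74, 83, 70, 76]
print(f"intra-individual variability: CV = {coefficient_of_variation(push_angle):.1f}%")
```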

  5. Do you remember where sounds, pictures and words came from? The role of the stimulus format in object location memory.

    Science.gov (United States)

    Delogu, Franco; Lilla, Christopher C

    2017-11-01

    Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separated blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli were presented in encoding. In the first block, participants were not aware of the spatial requirement while, in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.

  6. Self-Grounded Vision: Hand Ownership Modulates Visual Location through Cortical β and γ Oscillations.

    Science.gov (United States)

    Faivre, Nathan; Dönz, Jonathan; Scandola, Michele; Dhanis, Herberto; Bello Ruiz, Javier; Bernasconi, Fosco; Salomon, Roy; Blanke, Olaf

    2017-01-04

    Vision is known to be shaped by context, defined by environmental and bodily signals. In the Taylor illusion, the size of an afterimage projected on one's hand changes according to proprioceptive signals conveying hand position. Here, we assessed whether the Taylor illusion does not just depend on the physical hand position, but also on bodily self-consciousness as quantified through illusory hand ownership. Relying on the somatic rubber hand illusion, we manipulated hand ownership, such that participants embodied a rubber hand placed next to their own hand. We found that an afterimage projected on the participant's hand drifted depending on illusory ownership between the participants' two hands, showing an implication of self-representation during the Taylor illusion. Oscillatory power analysis of electroencephalographic signals showed that illusory hand ownership was stronger in participants with stronger α suppression over left sensorimotor cortex, whereas the Taylor illusion correlated with higher β/γ power over frontotemporal regions. Higher γ connectivity between left sensorimotor and inferior parietal cortex was also found during illusory hand ownership. These data show that afterimage drifts in the Taylor illusion do not only depend on the physical hand position but also on subjective ownership, which itself is based on the synchrony of somatosensory signals from the two hands. The effect of ownership on afterimage drifts is associated with β/γ power and γ connectivity between frontoparietal regions and the visual cortex. Together, our results suggest that visual percepts are not only influenced by bodily context but are self-grounded, mapped on a self-referential frame. Vision is influenced by the body: in the Taylor illusion, the size of an afterimage projected on one's hand changes according to tactile and proprioceptive signals conveying hand position. Here, we report a new phenomenon revealing that the perception of afterimages depends not only

  7. Visual memory during pauses between successive saccades.

    Science.gov (United States)

    Gersch, Timothy M; Kowler, Eileen; Schnitzer, Brian S; Dosher, Barbara A

    2008-12-22

    Selective attention is closely linked to eye movements. Prior to a saccade, attention shifts to the saccadic goal at the expense of surrounding locations. Such a constricted attentional field, while useful to ensure accurate saccades, constrains the spatial range of high-quality perceptual analysis. The present study showed that attention could be allocated to locations other than the saccadic goal without disrupting the ongoing pattern of saccades. Saccades were made sequentially along a color-cued path. Attention was assessed by a visual memory task presented during a random pause between successive saccades. Saccadic planning had several effects on memory: (1) fewer letters were remembered during intersaccadic pauses than during maintained fixation; (2) letters appearing on the saccadic path, including locations previously examined, could be remembered; off-path performance was near chance; (3) memory was better at the saccadic target than at all other locations, including the currently fixated location. These results show that the distribution of attention during intersaccadic pauses results from a combination of top-down enhancement at the saccadic target coupled with a more automatic allocation of attention to selected display locations. This suggests that the visual system has mechanisms to control the distribution of attention without interfering with ongoing saccadic programming.

  8. Visual cognition influences early vision: the role of visual short-term memory in amodal completion.

    Science.gov (United States)

    Lee, Hyunkyu; Vecera, Shaun P

    2005-10-01

    A partly occluded visual object is perceptually filled in behind the occluding surface, a process known as amodal completion or visual interpolation. Previous research focused on the image-based properties that lead to amodal completion. In the present experiments, we examined the role of a higher-level visual process-visual short-term memory (VSTM)-in amodal completion. We measured the degree of amodal completion by asking participants to perform an object-based attention task on occluded objects while maintaining either zero or four items in visual working memory. When no items were stored in VSTM, participants completed the occluded objects; when four items were stored in VSTM, amodal completion was halted (Experiment 1). These results were not caused by the influence of VSTM on object-based attention per se (Experiment 2) or by the specific location of to-be-remembered items (Experiment 3). Items held in VSTM interfere with amodal completion, which suggests that amodal completion may not be an informationally encapsulated process, but rather can be affected by high-level visual processes.

  9. Internal attention to features in visual short-term memory guides object learning.

    Science.gov (United States)

    Fan, Judith E; Turk-Browne, Nicholas B

    2013-11-01

    Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Data visualization of temporal ozone pollution between urban and ...

    African Journals Online (AJOL)

    ... this study was conducted with the aim to assess and visualize the occurrence of potential Ozone pollution severity of two chosen locations in Selangor, Malaysia: Shah Alam (urban) and Banting (sub-urban). Data visualization analytics were employed using Ozone exceedances and Principal Component Analysis (PCA).
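
    A minimal sketch of the Principal Component Analysis step mentioned above, applied to a standardized matrix of hourly air-quality readings; the variables and synthetic data are assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical hourly matrix: rows = hours, columns = [O3, NO2, temperature, wind speed]
rng = np.random.default_rng(0)
X = rng.normal(size=(24 * 30, 4))

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("explained variance ratio:", pca.explained_variance_ratio_)
print("first three component scores:\n", scores[:3])
```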

  11. Infrastructure and industrial location : a dual technology approach

    OpenAIRE

    Bjorvatn, Kjetil

    2001-01-01

    The paper investigates how differences in infrastructure quality may affect industrial location between countries. Employing a dual-technology model, the paper reaches the somewhat surprising conclusion that an improvement in a country’s infrastructure may weaken its locational advantage and induce a firm to locate production in a country with a less efficient infrastructure.

  12. Learning Building Layouts with Non-geometric Visual Information: The Effects of Visual Impairment and Age

    Science.gov (United States)

    Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.

    2009-01-01

    Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants: younger (sighted), older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment ranging in age from 18–67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732

  13. Quantitative immuno-electron microscopic analysis of depolarization-induced expression of PGC-1alpha in cultured rat visual cortical neurons.

    Science.gov (United States)

    Meng, Hui; Liang, Huan Ling; Wong-Riley, Margaret

    2007-10-17

    Peroxisome proliferator-activated receptor-gamma coactivator 1alpha (PGC-1alpha) is a coactivator of nuclear receptors and other transcription factors that regulate several metabolic processes, including mitochondrial biogenesis, energy homeostasis, respiration, and gluconeogenesis. PGC-1alpha plays a vital role in stimulating genes that are important to oxidative metabolism and other mitochondrial functions in brown adipose tissue and skeletal muscle, but the significance of PGC-1alpha in the brain remains elusive. The goal of our present study was to determine by means of quantitative immuno-electron microscopy the expression of PGC-1alpha in cultured rat visual cortical neurons under normal conditions as well as after depolarizing stimulation for varying periods of time. Our results showed that: (a) PGC-1alpha was normally located in both the nucleus and the cytoplasm. In the nucleus, PGC-1alpha was associated mainly with euchromatin rather than heterochromatin, consistent with active involvement in transcription. In the cytoplasm, it was associated mainly with free ribosomes. (b) Neuronal depolarization by KCl for 0.5 h induced a significant increase in PGC-1alpha labeling density in both the nucleus and the cytoplasm (P < 0.05). These results indicate that neurons respond to increased neuronal activity by synthesizing more proteins in the cytoplasm and translocating them to the nucleus for gene activation. PGC-1alpha level in neurons is, therefore, tightly regulated by neuronal activity.

  14. Visual working memory contaminates perception.

    Science.gov (United States)

    Kang, Min-Suk; Hong, Sang Wook; Blake, Randolph; Woodman, Geoffrey F

    2011-10-01

    Indirect evidence suggests that the contents of visual working memory may be maintained within sensory areas early in the visual hierarchy. We tested this possibility using a well-studied motion repulsion phenomenon in which perception of one direction of motion is distorted when another direction of motion is viewed simultaneously. We found that observers misperceived the actual direction of motion of a single motion stimulus if, while viewing that stimulus, they were holding a different motion direction in visual working memory. Control experiments showed that none of a variety of alternative explanations could account for this repulsion effect induced by working memory. Our findings provide compelling evidence that visual working memory representations directly interact with the same neural mechanisms as those involved in processing basic sensory events.

  15. The Computational Anatomy of Visual Neglect.

    Science.gov (United States)

    Parr, Thomas; Friston, Karl J

    2018-02-01

    Visual neglect is a debilitating neuropsychological phenomenon that has many clinical implications and-in cognitive neuroscience-offers an important lesion deficit model. In this article, we describe a computational model of visual neglect based upon active inference. Our objective is to establish a computational and neurophysiological process theory that can be used to disambiguate among the various causes of this important syndrome; namely, a computational neuropsychology of visual neglect. We introduce a Bayes optimal model based upon Markov decision processes that reproduces the visual searches induced by the line cancellation task (used to characterize visual neglect at the bedside). We then consider 3 distinct ways in which the model could be lesioned to reproduce neuropsychological (visual search) deficits. Crucially, these 3 levels of pathology map nicely onto the neuroanatomy of saccadic eye movements and the systems implicated in visual neglect. © The Author 2017. Published by Oxford University Press.

  16. A Biophysical Neural Model To Describe Spatial Visual Attention

    International Nuclear Information System (INIS)

    Hugues, Etienne; Jose, Jorge V.

    2008-01-01

    Visual scenes carry enormous amounts of spatial and temporal information that are transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of neural activity in the visual area known as V4 when the animal pays attention directly to a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations without attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory inputs, at levels as high as those found in vivo. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.
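
    The balanced-input regime described above can be illustrated with a single leaky integrate-and-fire neuron driven by equal-rate excitatory and inhibitory Poisson inputs, so that spiking is driven by fluctuations rather than by the mean drive; all parameters below are illustrative and are not those of the V4 network model.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron with balanced Poisson input (illustrative)
rng = np.random.default_rng(1)
dt, T = 1e-4, 2.0                        # time step and duration [s]
tau_m, v_rest, v_th, v_reset = 20e-3, -70e-3, -50e-3, -60e-3
w_e, w_i = 1.2e-3, -1.2e-3               # equal-and-opposite synaptic weights [V]
rate = 3000.0                            # summed presynaptic rate per pathway [Hz]

v, spike_times, trace = v_rest, [], []
for step in range(int(T / dt)):
    drive = rng.poisson(rate * dt) * w_e + rng.poisson(rate * dt) * w_i
    v += (-(v - v_rest) / tau_m) * dt + drive   # leak plus fluctuating balanced input
    if v >= v_th:                               # fluctuation-driven threshold crossing
        spike_times.append(step * dt)
        v = v_reset
    trace.append(v)

print(f"{len(spike_times)} spikes in {T:.0f} s; "
      f"membrane potential s.d. = {1e3 * np.std(trace):.1f} mV")
```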

  17. Excessive sensitivity to uncertain visual input in L-dopa-induced dyskinesias in Parkinson’s disease: further implications for cerebellar involvement

    Directory of Open Access Journals (Sweden)

    James Stevenson

    2014-02-01

    Full Text Available When faced with visual uncertainty during motor performance, humans rely more on predictive forward models and proprioception and attribute lesser importance to the ambiguous visual feedback. Though disrupted predictive control is typical of patients with cerebellar disease, sensorimotor deficits associated with the involuntary and often unconscious nature of L-dopa-induced dyskinesias in Parkinson’s disease (PD) suggest that dyskinetic subjects may also demonstrate impaired predictive motor control. Methods: We investigated the motor performance of 9 dyskinetic and 10 non-dyskinetic PD subjects on and off L-dopa, and of 10 age-matched control subjects, during a large-amplitude, overlearned, visually-guided tracking task. Ambiguous visual feedback was introduced by adding ‘jitter’ to a moving target that followed a Lissajous pattern. Root mean square (RMS) tracking error was calculated, and ANOVA, robust multivariate linear regression and linear dynamical system analyses were used to determine the contribution of speed and ambiguity to tracking performance. Results: Increasing target ambiguity and speed contributed significantly more to the RMS error of dyskinetic subjects off medication. L-dopa improved the RMS tracking performance of both PD groups. At higher speeds, controls and PDs without dyskinesia were able to effectively de-weight ambiguous visual information. Conclusions: PDs’ visually-guided motor performance degrades with visual jitter and speed of movement to a greater degree compared to age-matched controls. However, there are fundamental differences in PDs with and without dyskinesia: subjects without dyskinesia are generally slow and less responsive to dynamic changes in motor task requirements, whereas in PDs with dyskinesia there was a trade-off between overall performance and inappropriate reliance on ambiguous visual feedback. This is likely associated with functional changes in posterior parietal-ponto-cerebellar pathways.
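
    A minimal sketch of the RMS tracking-error measure used in this study, computed for a simulated cursor following a jittered Lissajous target; the amplitudes, frequencies, and noise levels below are placeholders, not the experimental settings.

```python
import numpy as np

def rms_tracking_error(target_xy, cursor_xy):
    """Root-mean-square Euclidean distance between target and cursor samples."""
    err = np.linalg.norm(np.asarray(target_xy) - np.asarray(cursor_xy), axis=1)
    return np.sqrt(np.mean(err ** 2))

# Lissajous target with additive positional jitter (illustrative parameters)
t = np.linspace(0, 20, 2000)
target = np.column_stack([np.sin(2 * np.pi * 0.20 * t),
                          np.sin(2 * np.pi * 0.25 * t + np.pi / 2)])
displayed = target + 0.05 * np.random.randn(*target.shape)   # jittered (ambiguous) target
cursor = target + 0.10 * np.random.randn(*target.shape)      # simulated tracking response
print(f"RMS tracking error = {rms_tracking_error(displayed, cursor):.3f}")
```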

  18. The development of organized visual search

    Science.gov (United States)

    Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.

    2013-01-01

    Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve they become more accurate at locating targets with conjunction of features amongst distractors, but not for targets with distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560

  19. Coding the presence of visual objects in a recurrent neural network of visual cortex.

    Science.gov (United States)

    Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard

    2007-01-01

    Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.

  20. Monitoring Location and Angular Orientation of a Pill

    Science.gov (United States)

    Schipper, John F.

    2012-01-01

    A mobile pill transmitter system moves through, or adjacent to, one or more organs in an animal or human body, while transmitting signals from its present location and/or present angular orientation. The system also provides signals from which the present roll angle of the pill, about a selected axis, can be determined. When the location coordinates, angular orientation, and roll angle of the pill are within selected ranges, an aperture on the pill container releases a selected chemical into, or onto, the body. Optionally, the pill, as it moves, provides a sequence of visually perceptible images. The times for image formation may correspond to times at which the pill transmitter system location or image satisfies one of at least four criteria. This invention provides an algorithm for the exact determination of location coordinates and angular orientation coordinates for a mobile pill transmitter (PT), or other similar device, that is introduced into, and moves within, the GI tract of a human or animal body. A set of as many as eight nonlinear equations has been developed and applied, relating propagation of a wireless signal between two, three, or more transmitting antennas located on the PT and four or more non-coplanar receiving antennas located on a signal receiver appliance worn by the user. The equations are solved exactly, without approximations or iterations, and are applied in several environments: (1) association of a visual image, transmitted by the PT at each of a second sequence of times, with a PT location and PT angular orientation at that time; (2) determination of a position within the body at which a drug or chemical substance or other treatment is to be delivered to a selected portion of the body; (3) monitoring, after delivery, of the effect(s) of administration of the treatment; and (4) determination of one or more positions within the body where provision and examination of a finer-scale image is warranted.
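
    The record describes an exact solution of the propagation equations; as a rough stand-in, the sketch below estimates a transmitter position from distances to four non-coplanar receiving antennas with an iterative nonlinear least-squares fit. The antenna layout, distances, and starting guess are hypothetical, and the iterative approach is named plainly as a substitute for the exact algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

# Receiver antenna positions on the worn appliance (metres, assumed layout)
antennas = np.array([[0.00, 0.00, 0.00],
                     [0.30, 0.00, 0.05],
                     [0.00, 0.30, 0.05],
                     [0.15, 0.15, 0.25]])

true_pill = np.array([0.12, 0.18, 0.10])                 # ground truth for the demo
ranges = np.linalg.norm(antennas - true_pill, axis=1)    # "measured" distances

def residuals(p):
    # Difference between predicted and measured ranges for candidate position p
    return np.linalg.norm(antennas - p, axis=1) - ranges

fit = least_squares(residuals, x0=np.array([0.1, 0.1, 0.1]))
print("estimated pill position:", np.round(fit.x, 3))
```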

  1. Attraction of position preference by spatial attention throughout human visual cortex.

    Science.gov (United States)

    Klein, Barrie P; Harvey, Ben M; Dumoulin, Serge O

    2014-10-01

    Voluntary spatial attention concentrates neural resources at the attended location. Here, we examined the effects of spatial attention on spatial position selectivity in humans. We measured population receptive fields (pRFs) using high-field functional MRI (fMRI) (7T) while subjects performed an attention-demanding task at different locations. We show that spatial attention attracts pRF preferred positions across the entire visual field, not just at the attended location. This global change in pRF preferred positions systematically increases up the visual hierarchy. We model these pRF preferred position changes as an interaction between two components: an attention field and a pRF without the influence of attention. This computational model suggests that increasing effects of attention up the hierarchy result primarily from differences in pRF size and that the attention field is similar across the visual hierarchy. A similar attention field suggests that spatial attention transforms different neural response selectivities throughout the visual hierarchy in a similar manner. Copyright © 2014 Elsevier Inc. All rights reserved.
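
    The interaction described above, a measured pRF modeled as the product of a stimulus-driven pRF and a Gaussian attention field, has a closed form for one-dimensional Gaussians; the sketch below uses illustrative sizes to show why larger pRFs are attracted more strongly toward the attended location.

```python
import numpy as np

def attracted_prf(prf_mu, prf_sigma, att_mu, att_sigma):
    """Centre and size of the product of a pRF and a Gaussian attention field.

    For 1-D Gaussians the product is itself Gaussian; its centre is pulled
    toward the attended location by an amount that grows with pRF size.
    """
    w1, w2 = 1.0 / prf_sigma**2, 1.0 / att_sigma**2
    mu = (w1 * prf_mu + w2 * att_mu) / (w1 + w2)
    sigma = np.sqrt(1.0 / (w1 + w2))
    return mu, sigma

# Same attention field (centre 5 deg, sigma 3 deg) acting on a small and a large pRF
for label, size in [("small early-visual pRF", 1.0), ("large higher-order pRF", 4.0)]:
    mu, _ = attracted_prf(prf_mu=0.0, prf_sigma=size, att_mu=5.0, att_sigma=3.0)
    print(f"{label}: preferred position shifts from 0.0 to {mu:.2f} deg")
```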

  2. A Ventral Visual Stream Reading Center Independent of Sensory Modality and Visual Experience

    Directory of Open Access Journals (Sweden)

    Lior Reich

    2011-10-01

    Full Text Available The Visual Word Form Area (VWFA) is a ventral-temporal visual area that develops expertise for visual reading. It encodes letter-strings irrespective of case, font, or location in the visual field, with striking anatomical reproducibility across individuals. In the blind, reading can be achieved using Braille, with a level of expertise comparable to that of sighted readers. We investigated which area plays the role of the VWFA in the blind. One would expect it to be at either parietal or bilateral occipital cortex, reflecting the tactile nature of the task and crossmodal plasticity, respectively. However, according to the notion that brain areas are task specific rather than sensory-modality specific, we predicted recruitment of the left-hemispheric VWFA, identically to the sighted and independent of visual experience. Using fMRI we showed that activation during Braille reading in congenitally blind individuals peaked in the VWFA, with striking anatomical consistency within and between blind and sighted. The VWFA was reading-selective when contrasted to high-level language and low-level sensory controls. Further preliminary results show that the VWFA is selectively activated also when people learn to read in a new language or using a different modality. Thus, the VWFA is a multisensory area specialized for reading regardless of visual experience.

  3. Does 3D produce more symptoms of visually induced motion sickness?

    Science.gov (United States)

    Naqvi, Syed Ali Arsalan; Badruddin, Nasreen; Malik, Aamir Saeed; Hazabbah, Wan; Abdullah, Baharudin

    2013-01-01

    3D stereoscopy technology with high quality images and depth perception provides entertainment to its viewers. However, the technology is not yet mature and may sometimes have adverse effects on viewers; some viewers have reported discomfort when watching videos with 3D technology. In this research we performed an experiment showing a movie to participants in 2D and 3D conditions. Subjective and objective data were recorded and compared across both conditions. Results from subjective reporting show that Visually Induced Motion Sickness (VIMS) is significantly higher in the 3D condition. For the objective measurement, ECG data were recorded to derive heart rate variability (HRV), and the LF/HF ratio, an index of sympathetic nerve activity, was analyzed to track changes in the participants' state over time. The average scores for nausea and disorientation and the total SSQ score show a significant difference between the 3D and 2D conditions. However, the LF/HF ratio did not show a significant difference throughout the experiment.
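
    A minimal sketch of deriving the LF/HF ratio from RR intervals via an evenly resampled tachogram and a Welch power spectrum; the resampling rate and the conventional band edges (0.04-0.15 Hz and 0.15-0.40 Hz) are standard choices, not necessarily those used in this study, and the example RR series is synthetic.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def lf_hf_ratio(rr_ms, fs_resample=4.0):
    """LF/HF ratio from a sequence of RR intervals (milliseconds)."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                        # beat times [s]
    grid = np.arange(t[0], t[-1], 1.0 / fs_resample)  # evenly resampled tachogram
    tach = interp1d(t, rr, kind="cubic")(grid)
    f, pxx = welch(tach - tach.mean(), fs=fs_resample, nperseg=256)
    lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
    hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
    return lf / hf

# Example: roughly five minutes of synthetic RR intervals around 800 ms
rr = 800 + 30 * np.random.randn(400)
print(f"LF/HF = {lf_hf_ratio(rr):.2f}")
```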

  4. The sound-induced phosphene illusion.

    Science.gov (United States)

    Bolognini, Nadia; Convento, Silvia; Fusaro, Martina; Vallar, Giuseppe

    2013-12-01

    Crossmodal illusions clearly show how perception, rather than being a modular and self-contained function, can be dramatically altered by interactions between senses. Here, we provide evidence for a novel crossmodal "physiological" illusion, showing that sounds can boost visual cortical responses in such a way to give rise to a striking illusory visual percept. In healthy participants, a single-pulse transcranial magnetic stimulation (sTMS) delivered to the occipital cortex evoked a visual percept, i.e., a phosphene. When sTMS is accompanied by two auditory beeps, the second beep induces in neurologically unimpaired participants the perception of an illusory second phosphene, namely the sound-induced phosphene illusion. This perceptual "fission" of a single phosphene, due to multiple beeps, is not matched by a "fusion" of double phosphenes due to a single beep, and it is characterized by an early auditory modulation of the TMS-induced visual responses (~80 ms). Multiple beeps also induce an illusory feeling of multiple TMS pulses on the participants' scalp, consistent with an audio-tactile fission illusion. In conclusion, an auditory stimulation may bring about a phenomenological change in the conscious visual experience produced by the transcranial stimulation of the occipital cortex, which reveals crossmodal binding mechanisms within early stages of visual processing.

  5. Frontal and parietal theta burst TMS impairs working memory for visual-spatial conjunctions.

    Science.gov (United States)

    Morgan, Helen M; Jackson, Margaret C; van Koningsbruggen, Martijn G; Shapiro, Kimron L; Linden, David E J

    2013-03-01

    In tasks that selectively probe visual or spatial working memory (WM) frontal and posterior cortical areas show a segregation, with dorsal areas preferentially involved in spatial (e.g. location) WM and ventral areas in visual (e.g. object identity) WM. In a previous fMRI study [1], we showed that right parietal cortex (PC) was more active during WM for orientation, whereas left inferior frontal gyrus (IFG) was more active during colour WM. During WM for colour-orientation conjunctions, activity in these areas was intermediate to the level of activity for the single task preferred and non-preferred information. To examine whether these specialised areas play a critical role in coordinating visual and spatial WM to perform a conjunction task, we used theta burst transcranial magnetic stimulation (TMS) to induce a functional deficit. Compared to sham stimulation, TMS to right PC or left IFG selectively impaired WM for conjunctions but not single features. This is consistent with findings from visual search paradigms, in which frontal and parietal TMS selectively affects search for conjunctions compared to single features, and with combined TMS and functional imaging work suggesting that parietal and frontal regions are functionally coupled in tasks requiring integration of visual and spatial information. Our results thus elucidate mechanisms by which the brain coordinates spatially segregated processing streams and have implications beyond the field of working memory. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Task-dependent engagements of the primary visual cortex during kinesthetic and visual motor imagery.

    Science.gov (United States)

    Mizuguchi, Nobuaki; Nakamura, Maiko; Kanosue, Kazuyuki

    2017-01-01

    Motor imagery can be divided into kinesthetic and visual aspects. In the present study, we investigated excitability in the corticospinal tract and primary visual cortex (V1) during kinesthetic and visual motor imagery. To accomplish this, we measured motor evoked potentials (MEPs) and probability of phosphene occurrence during the two types of motor imageries of finger tapping. The MEPs and phosphenes were induced by transcranial magnetic stimulation to the primary motor cortex and V1, respectively. The amplitudes of MEPs and probability of phosphene occurrence during motor imagery were normalized based on the values obtained at rest. Corticospinal excitability increased during both kinesthetic and visual motor imagery, while excitability in V1 was increased only during visual motor imagery. These results imply that modulation of cortical excitability during kinesthetic and visual motor imagery is task dependent. The present finding aids in the understanding of the neural mechanisms underlying motor imagery and provides useful information for the use of motor imagery in rehabilitation or motor imagery training. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Methods and materials for locating and studying spotted owls.

    Science.gov (United States)

    Eric D. Forsman

    1983-01-01

    Nocturnal calling surveys are the most effective and most frequently used technique for locating spotted owls. Roosts and general nest locations may be located during the day by calling in suspected roost or nest areas. Specific nest trees are located by: (1) baiting with a live mouse to induce owls to visit the nest, (2) calling in suspected nest areas to stimulate...

  8. Clinical study of the visual field defects caused by occipital lobe lesions.

    Science.gov (United States)

    Ogawa, Katsuhiko; Ishikawa, Hiroshi; Suzuki, Yutaka; Oishi, Minoru; Kamei, Satoshi

    2014-01-01

    The central visual field is projected to the region from the occipital tip to the posterior portion of the medial area in the striate cortex. However, central visual field disturbances have not been compared with the location of the lesions in the striate cortex. Thirteen patients with visual field defects caused by partial involvement of the striate cortex were enrolled. The lesions were classified according to their location into the anterior portion, the posterior portion of the medial area, and the occipital tip. Visual field defects were examined by the Goldmann perimetry, the Humphrey perimetry and the auto-plot tangent screen. We defined a defect within the central 10° of vision as a central visual field disturbance. The visual field defects in 13 patients were compared with the location of their lesions in the striate cortex. The medial area was involved in 7 patients with no involvement of the occipital tip. In 2 of them, peripheral homonymous hemianopia without central visual field disturbance was shown, and their lesions were located only in the anterior portion. One patient with a lesion in the posterior portion alone showed incomplete central homonymous hemianopia. Three of 4 patients with lesions located in both the anterior and posterior portions of the medial area showed incomplete central homonymous hemianopia and peripheral homonymous hemianopia. The occipital tip was involved in 6 patients. Five of them had small lesions in the occipital tip alone and showed complete central homonymous hemianopia or quadrantanopia. The other patient with a lesion in the lateral posterior portion and bilateral occipital tip lesions showed bilateral slight peripheral visual field disturbance in addition to complete central homonymous hemianopia on both sides. Lesions in the posterior portion of the medial area as well as the occipital tip caused central visual field disturbance in our study, as indicated in previous reports. Central homonymous hemianopia tended to

  9. Influence of front light configuration on the visual conspicuity of motorcycles

    OpenAIRE

    PINTO, Maria; CAVALLO, Viola; SAINT PIERRE, Guillaume

    2014-01-01

    A recent study (Cavallo and Pinto, 2012) showed that daytime running lights (DRLs) on cars create “visual noise” that interferes with the lighting of motorcycles and affects their visual conspicuity. In the present experiment, we tested three conspicuity enhancements designed to improve motorcycle detectability in a car-DRL environment: a triangle configuration (a central headlight plus two lights located on the rear view mirrors), a helmet configuration (a light located on the mo...

  10. A novel approach for automatic visualization and activation detection of evoked potentials induced by epidural spinal cord stimulation in individuals with spinal cord injury.

    Science.gov (United States)

    Mesbah, Samineh; Angeli, Claudia A; Keynton, Robert S; El-Baz, Ayman; Harkema, Susan J

    2017-01-01

    Voluntary movements and the standing of spinal cord injured patients have been facilitated using lumbosacral spinal cord epidural stimulation (scES). Identifying the appropriate stimulation parameters (intensity, frequency and anode/cathode assignment) is an arduous task and requires extensive mapping of the spinal cord using evoked potentials. Effective visualization and detection of muscle evoked potentials induced by scES from the recorded electromyography (EMG) signals is critical to identify the optimal configurations and the effects of specific scES parameters on muscle activation. The purpose of this work was to develop a novel approach to automatically detect the occurrence of evoked potentials, quantify the attributes of the signal and visualize the effects across a high number of scES parameters. This new method is designed to automate the current process for performing this task, which has been accomplished manually by data analysts through observation of raw EMG signals, a process that is laborious and time-consuming as well as prone to human errors. The proposed method provides a fast and accurate five-step algorithmic framework for activation detection and visualization of the results, including: conversion of the EMG signal into its 2-D representation by overlaying the located signal building blocks; de-noising the 2-D image by applying the Generalized Gaussian Markov Random Field technique; detection of the occurrence of evoked potentials using a statistically optimal decision method through the comparison of the probability density functions of each segment to the background noise utilizing the log-likelihood ratio; feature extraction of detected motor units such as peak-to-peak amplitude, latency, integrated EMG and Min-max time intervals; and finally visualization of the outputs as Colormap images. In comparing the automatic method vs. manual detection on 700 EMG signals from five individuals, the new approach decreased the processing time from several
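
    To make the detection step concrete, the sketch below implements a stripped-down version of the idea in Python: stimulus-locked EMG windows are smoothed, scored against baseline noise with a log-likelihood ratio, and summarized by simple features. The Gaussian smoother stands in for the Generalized Gaussian Markov Random Field step, and the window length, threshold, and variable names are illustrative assumptions rather than the authors' implementation.

    # Minimal sketch: segment stimulus-locked EMG windows, denoise, score each
    # window against baseline noise with a log-likelihood ratio, extract features.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def detect_evoked_potentials(emg, stim_idx, fs, win_ms=50, noise_sigma=None):
        """Return per-stimulus detection flags and features for one EMG channel."""
        win = int(fs * win_ms / 1000)
        # Baseline noise statistics from a pre-stimulus segment (fallback if too short)
        baseline = emg[:stim_idx[0] - win] if stim_idx[0] > win else emg[:win]
        noise_sigma = noise_sigma or baseline.std()
        results = []
        for s in stim_idx:
            seg = gaussian_filter1d(emg[s:s + win].astype(float), sigma=2)
            # Log-likelihood ratio of a "response present" variance model vs. baseline noise
            llr = 0.5 * win * (np.log(noise_sigma**2 / seg.var()) +
                               seg.var() / noise_sigma**2 - 1)
            results.append({
                "detected": llr > 3.0,                       # illustrative threshold
                "p2p": seg.max() - seg.min(),                # peak-to-peak amplitude
                "latency_ms": 1000 * np.argmax(np.abs(seg)) / fs,
                "iemg": np.abs(seg).sum(),                   # integrated EMG
            })
        return results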

  11. Visual similarity in short-term recall for where and when.

    Science.gov (United States)

    Jalbert, Annie; Saint-Aubin, Jean; Tremblay, Sébastien

    2008-03-01

    Two experiments examined the effects of visual similarity on short-term recall for where and when in the visual spatial domain. A series of squares of similar or dissimilar colours were serially presented at various locations on the screen. At recall, all coloured squares were simultaneously presented in a random order at the bottom of the screen, and the locations used for presentation were indicated by white squares. Participants were asked to place the colours at their appropriate location in their presentation order. Performance for location (where) and order (when) was assessed separately. Results revealed that similarity severely hinders both memory for what was where and memory for what was when, under quiet and articulatory suppression conditions. These results provide further evidence that similarity has a major impact on processing relational information in memory.

  12. Auditory attention activates peripheral visual cortex.

    Directory of Open Access Journals (Sweden)

    Anthony D Cate

    Full Text Available BACKGROUND: Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear. METHODOLOGY/PRINCIPAL FINDINGS: We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency. CONCLUSIONS/SIGNIFICANCE: Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

  13. Binding Objects to Locations: The Relationship between Object Files and Visual Working Memory

    Science.gov (United States)

    Hollingworth, Andrew; Rasmussen, Ian P.

    2010-01-01

    The relationship between object files and visual working memory (VWM) was investigated in a new paradigm combining features of traditional VWM experiments (color change detection) and object-file experiments (memory for the properties of moving objects). Object-file theory was found to account for a key component of object-position binding in VWM:…

  14. Glycopyrrolate does not influence the visual or motor-induced increase in regional cerebral perfusion

    DEFF Research Database (Denmark)

    Rokamp, Kim Z; Olesen, Niels D; Larsson, Henrik B W

    2014-01-01

    Acetylcholine may contribute to the increase in regional cerebral blood flow (rCBF) during cerebral activation since glycopyrrolate, a potent inhibitor of acetylcholine, abolishes the exercise-induced increase in middle cerebral artery mean flow velocity. We tested the hypothesis that cholinergic...... vasodilatation is important for the increase in rCBF during cerebral activation. The subjects were 11 young healthy males at an age of 24 ± 3 years (mean ± SD). We used arterial spin labeling and blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) to evaluate rCBF with and without...... intravenous glycopyrrolate during a handgrip motor task and visual stimulation. Glycopyrrolate increased heart rate from 56 ± 9 to 114 ± 14 beats/min (mean ± SD; p

  15. Dissociation of object and spatial visual processing pathways in human extrastriate cortex

    Energy Technology Data Exchange (ETDEWEB)

    Haxby, J.V.; Grady, C.L.; Horwitz, B.; Ungerleider, L.G.; Mishkin, M.; Carson, R.E.; Herscovitch, P.; Schapiro, M.B.; Rapoport, S.I. (National Institutes of Health, Bethesda, MD (USA))

    1991-03-01

    The existence and neuroanatomical locations of separate extrastriate visual pathways for object recognition and spatial localization were investigated in healthy young men. Regional cerebral blood flow was measured by positron emission tomography and bolus injections of H2(15)O, while subjects performed face matching, dot-location matching, or sensorimotor control tasks. Both visual matching tasks activated lateral occipital cortex. Face discrimination alone activated a region of occipitotemporal cortex that was anterior and inferior to the occipital area activated by both tasks. The spatial location task alone activated a region of lateral superior parietal cortex. Perisylvian and anterior temporal cortices were not activated by either task. These results demonstrate the existence of three functionally dissociable regions of human visual extrastriate cortex. The ventral and dorsal locations of the regions specialized for object recognition and spatial localization, respectively, suggest some homology between human and nonhuman primate extrastriate cortex, with displacement in human brain, possibly related to the evolution of phylogenetically newer cortical areas.

  16. Multidimensional Analysis and Location Intelligence Application for Spatial Data Warehouse Hotspot in Indonesia using SpagoBI

    Science.gov (United States)

    Uswatun Hasanah, Gamma; Trisminingsih, Rina

    2016-01-01

    A spatial data warehouse is a data warehouse with a spatial component that represents the geographic location of a position or object on the Earth's surface. A spatial data warehouse can be visualized in the form of crosstab tables, graphs, and maps. A spatial data warehouse of hotspots in Indonesia was constructed by researchers from NASA FIRMS data for 2006-2015. This research develops a multidimensional analysis module and a location intelligence module using SpagoBI. The multidimensional analysis module visualizes online analytical processing (OLAP) results. The location intelligence module creates dynamic map visualizations as map zones and map points. A map zone can display different colors based on the number of hotspots in each region, and a map point can display points of different sizes to represent the number of hotspots in each region. This research is expected to help users present hotspot data as needed.
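
    The map-zone/map-point idea can be illustrated independently of SpagoBI with a toy aggregation: count hotspots per region, then derive a colour class for the zone and a marker size for the point. The data frame, column names, and thresholds below are hypothetical stand-ins, not the study's data model.

    # Toy illustration of the map-zone / map-point idea (not SpagoBI itself).
    import pandas as pd

    hotspots = pd.DataFrame({
        "province": ["Riau", "Riau", "Jambi", "Papua"],
        "lat": [0.5, 0.7, -1.6, -4.3],
        "lon": [101.4, 101.9, 103.6, 138.1],
    })

    per_region = hotspots.groupby("province").size().rename("n_hotspots").reset_index()
    # Map zone: colour class by count; map point: marker size proportional to count
    per_region["colour_class"] = pd.cut(per_region["n_hotspots"],
                                        bins=[0, 1, 5, float("inf")],
                                        labels=["green", "orange", "red"])
    per_region["point_size"] = 10 + 5 * per_region["n_hotspots"]
    print(per_region)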

  17. Cortical deactivation induced by visual stimulation in human slow-wave sleep

    DEFF Research Database (Denmark)

    Born, Alfred Peter; Law, Ian; Lund, Torben E

    2002-01-01

    It has previously been demonstrated that sleeping and sedated young children respond with a paradoxical decrease in the blood oxygenation level-dependent (BOLD) functional magnetic resonance imaging (fMRI) signal in the rostro-medial occipital visual cortex during visual stimulation. It is unreso...... that this decrease was secondary to a relative rCBF decrease. Possible mechanisms for the paradoxical response pattern during sleep include an active inhibition of the visual cortex or a disruption of an energy-consuming process...

  18. Metro-Wordle: An Interactive Visualization for Urban Text Distributions Based on Wordle

    Directory of Open Access Journals (Sweden)

    Chenlu Li

    2018-03-01

    Full Text Available With the development of cities and the explosion of information, vast amounts of geo-tagged textual data about Points of Interest (POIs) have been generated. Extracting useful information and discovering text spatial distributions from the data are challenging and meaningful. Also, the huge numbers of POIs in modern cities make it important to have efficient approaches to retrieve and choose a destination. This paper provides a visual design combining a metro map and wordles to meet these needs. In this visualization, metro lines serve as the divider lines splitting the city into several subareas and as the boundaries that constrain wordles within each subarea. The wordles are generated from keywords extracted from the text about POIs (including reviews, descriptions, etc.) and embedded into the subareas based on their geographical locations. By generating intuitive results and providing an interactive visualization to support exploring text distribution patterns, our strategy can guide the users to explore urban spatial characteristics and retrieve a location efficiently. Finally, we implement a visual analysis of restaurant data in Shanghai, China, as a case study to evaluate our strategy. Keywords: Text visualization, Location retrieval, Urban data, Metro map, Word cloud
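
    A minimal sketch of the per-subarea keyword step is given below, assuming POIs have already been assigned to subareas: texts are grouped by subarea and the most frequent terms become wordle candidates whose counts drive font size. The POI records and field names are invented for illustration.

    # Sketch of the per-subarea keyword step behind the wordle placement.
    from collections import Counter, defaultdict

    pois = [
        {"name": "Noodle House", "subarea": "A", "text": "great noodles cheap noodles quick"},
        {"name": "Dumpling Bar", "subarea": "A", "text": "dumplings tasty dumplings"},
        {"name": "Art Museum",   "subarea": "B", "text": "modern art quiet galleries art"},
    ]

    texts_by_area = defaultdict(list)
    for poi in pois:
        texts_by_area[poi["subarea"]].append(poi["text"])

    wordle_terms = {
        area: Counter(" ".join(texts).split()).most_common(3)
        for area, texts in texts_by_area.items()
    }
    print(wordle_terms)   # term weights would drive font size inside each subarea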

  19. Cell-cycle-dependent drug-resistant quiescent cancer cells induce tumor angiogenesis after chemotherapy as visualized by real-time FUCCI imaging

    Science.gov (United States)

    Yano, Shuya; Takehara, Kiyoto; Tazawa, Hiroshi; Kishimoto, Hiroyuki; Urata, Yasuo; Kagawa, Shunsuke; Fujiwara, Toshiyoshi; Hoffman, Robert M.

    2017-01-01

    ABSTRACT We previously demonstrated that quiescent cancer cells in a tumor are resistant to conventional chemotherapy as visualized with a fluorescence ubiquitination cell cycle indicator (FUCCI). We also showed that proliferating cancer cells exist in a tumor only near nascent vessels or on the tumor surface as visualized with FUCCI and green fluorescent protein (GFP)-expressing tumor vessels. In the present study, we show the relationship between cell-cycle phase and chemotherapy-induced tumor angiogenesis using in vivo FUCCI real-time imaging of the cell cycle and nestin-driven GFP to detect nascent blood vessels. We observed that chemotherapy-treated tumors, consisting mostly of quiescent cancer cells after treatment, had many more and deeper tumor vessels than untreated tumors. These newly-vascularized cancer cells regrew rapidly after chemotherapy. In contrast, formerly quiescent cancer cells decoyed to S/G2 phase by a telomerase-dependent adenovirus did not induce tumor angiogenesis. The present results further demonstrate the importance of the cancer-cell position in the cell cycle in order that chemotherapy be effective and not have the opposite effect of stimulating tumor angiogenesis and progression. PMID:27715464

  20. Separate visual representations for perception and for visually guided behavior

    Science.gov (United States)

    Bridgeman, Bruce

    1989-01-01

    Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background, or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. These experiments also reveal more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.

  1. Visual attention to spatial and non-spatial visual stimuli is affected differentially by age: effects on event-related brain potentials and performance data.

    NARCIS (Netherlands)

    Talsma, D.; Kok, A.; Ridderinkhof, K.R.

    2006-01-01

    To assess selective attention processes in young and old adults, behavioral and event-related potential (ERP) measures were recorded. Streams of visual stimuli were presented from left or right locations (Experiment 1) or from a central location and comprising two different spatial frequencies

  2. Common coding of auditory and visual spatial information in working memory.

    Science.gov (United States)

    Lehnert, Günther; Zimmer, Hubert D

    2008-09-16

    We compared spatial short-term memory for visual and auditory stimuli in an event-related slow potentials study. Subjects encoded object locations of either four or six sequentially presented auditory or visual stimuli and maintained them during a retention period of 6 s. Slow potentials recorded during encoding were modulated by the modality of the stimuli. Stimulus related activity was stronger for auditory items at frontal and for visual items at posterior sites. At frontal electrodes, negative potentials incrementally increased with the sequential presentation of visual items, whereas a strong transient component occurred during encoding of each auditory item without the cumulative increment. During maintenance, frontal slow potentials were affected by modality and memory load according to task difficulty. In contrast, at posterior recording sites, slow potential activity was only modulated by memory load independent of modality. We interpret the frontal effects as correlates of different encoding strategies and the posterior effects as a correlate of common coding of visual and auditory object locations.

  3. The effects of visual fluorescence marking induced by 5-aminolevulinic acid for endoscopic diagnosis of urinary bladder cancer

    Science.gov (United States)

    Daniltchenko, Dmitri I.; Koenig, Frank; Schnorr, Dietmar; Valdman, Alexander; Al-Shukri, Salman; Loening, Stefan A.

    2003-10-01

    During the cystoscopy procedure, fluorescence diagnostics induced by 5-ALA improves visual detection of bladder cancer. Macroscopic ALA fluorescence allows visualization of small flat tumors, carcinoma in situ, true neoplasm margins and dysplasias of the bladder. Following ALA instillation, cystoscopy was performed under both standard and blue-light illumination. In total, 153 biopsies were carried out in 53 patients with suspected bladder cancer. The results were compared to the ALA-fluorescence data. In 13% of the patients, additional bladder cancer and dysplasia were detected owing to red fluorescence. The sensitivity and specificity of the ALA-fluorescence technique were 96% and 52%, respectively. The sensitivity and specificity of 5-ALA fluorescent detection exceeded standard endoscopy under white light by 20%. The new method does not exclude false-positive or false-negative fluorescence. The ALA-based fluorescence detection system significantly enhances the diagnosis of malignant/dysplastic bladder lesions.

  4. 47 CFR 73.685 - Transmitter location and antenna system.

    Science.gov (United States)

    2010-10-01

    ... Section 73.685 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... located at the most central point at the highest elevation available. To provide the best degree of... operating on Channels 14-69 with transmitters delivering a peak visual power output of more than 1 kW may...

  5. Evaluation of the use of visual and location cues by the Broad-tailed hummingbird (Selasphorus platycercus) foraging in flowers of Penstemon roseus

    Directory of Open Access Journals (Sweden)

    Guillermo Pérez

    2012-03-01

    Full Text Available In hummingbirds, spatial memory plays an important role during foraging, which relies either on specific (visual) cues or on spatial cues (the location of flowers and plants with nectar). However, the use of these cues by hummingbirds may vary with the spatial scale they face when visiting flowers of one or more plants during foraging; this was tested with individuals of the Broad-tailed hummingbird Selasphorus platycercus. To evaluate possible variation in cue use, experiments were carried out under semi-natural conditions using flowers of Penstemon roseus, a plant native to the study site. By manipulating the presence/absence of a reward (nectar) and of visual cues, we evaluated the use of spatial memory during foraging between two plants (experiment 1) and within a single plant (experiment 2). The results showed that hummingbirds used memory for the location of the plant whose flowers had provided a reward, regardless of the presence of visual cues. In contrast, among individual flowers of a single plant, after a short learning period hummingbirds can use visual cues to guide their foraging and discriminate unrewarded flowers. Likewise, in the absence of visual cues, individuals based their foraging on memory for the location of the previously visited rewarded flower. These results suggest plasticity in hummingbird foraging behaviour influenced by spatial scale and by the information acquired in previous visits.

  6. The footprints of visual attention during search with 100% valid and 100% invalid cues.

    Science.gov (United States)

    Eckstein, Miguel P; Pham, Binh T; Shimozaki, Steven S

    2004-06-01

    Human performance during visual search typically improves when spatial cues indicate the possible target locations. In many instances, the performance improvement is quantitatively predicted by a Bayesian or quasi-Bayesian observer in which visual attention simply selects the information at the cued locations without changing the quality of processing or sensitivity and ignores the information at the uncued locations. Aside from the generally good agreement between the effect of the cue on model and human performance, there has been little independent confirmation that humans are effectively selecting the relevant information. In this study, we used the classification image technique to assess the effectiveness of spatial cues in the attentional selection of relevant locations and suppression of irrelevant locations indicated by spatial cues. Observers searched for a bright target among dimmer distractors that might appear (with 50% probability) in one of eight locations in visual white noise. The possible target location was indicated using a 100% valid box cue or seven 100% invalid box cues in which the only potential target location was uncued. For both conditions, we found statistically significant perceptual templates shaped as differences of Gaussians at the relevant locations with no perceptual templates at the irrelevant locations. We did not find statistically significant differences between the shapes of the inferred perceptual templates for the 100% valid and 100% invalid cue conditions. The results confirm the idea that during search visual attention allows the observer to effectively select relevant information and ignore irrelevant information. The results for the 100% invalid cues condition suggest that the selection process is not drawn automatically to the cue but can be under the observers' voluntary control.
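
    The classification-image computation itself reduces to averaging the external noise fields by response and taking a difference, as in the hedged sketch below; real analyses typically also split trials by signal presence, so this collapsed version is only illustrative and the data are simulated.

    # Minimal sketch of a classification-image estimate at one cued location.
    import numpy as np

    def classification_image(noise_fields, said_present):
        """noise_fields: (n_trials, h, w) noise shown at the location;
        said_present: boolean array of the observer's responses."""
        noise_fields = np.asarray(noise_fields, dtype=float)
        said_present = np.asarray(said_present, dtype=bool)
        # Difference of mean noise on "present" vs. "absent" responses approximates
        # the perceptual template the observer applied at that location.
        return (noise_fields[said_present].mean(axis=0)
                - noise_fields[~said_present].mean(axis=0))

    # With purely random responses the estimated template should be near-flat.
    rng = np.random.default_rng(0)
    ci = classification_image(rng.normal(size=(500, 16, 16)), rng.random(500) > 0.5)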

  7. Visual aesthetics study: Gibson Dome area, Paradox Basin, Utah

    International Nuclear Information System (INIS)

    1984-03-01

    The Visual Aesthetics study was performed as an initial assessment of concerns regarding impacts to visual resources that might be associated with the construction of a geologic nuclear waste repository and associated rail routes in the Gibson Dome location of southeastern Utah. Potential impacts to visual resources were evaluated by predicting visibility of the facility and railway routes using the US Forest Service (USFS) computer program, VIEWIT, and by applying the Bureau of Land Management (BLM) Visual Resource Management (VRM) methodology. Five proposed facility sites in the Gibson Dome area and three proposed railway routes were evaluated for visual impact. 10 references, 19 figures, 5 tables

  8. Finding the best visualization of an ontology

    DEFF Research Database (Denmark)

    Fabritius, Christina; Madsen, Nadia; Clausen, Jens

    2006-01-01

    An ontology is a classification model for a given domain.In information retrieval ontologies are used to perform broad searches.An ontology can be visualized as nodes and edges. Each node represents an element and each edge a relation between a parent and a child element. Working with an ontology....... One method uses a discrete location model to create an initial solution and we propose heuristic methods to further improve the visual result. We evaluate the visual results according to our success criteria and the feedback from users. Running times of the heuristic indicate that an improved version...

  9. Finding the best visualization of an ontology

    DEFF Research Database (Denmark)

    Fabritius, Christina Valentin; Madsen, Nadia Lyngaa; Clausen, Jens

    2004-01-01

    An ontology is a classification model for a given domain. In information retrieval ontologies are used to perform broad searches. An ontology can be visualized as nodes and edges. Each node represents an element and each edge a relation between a parent and a child element. Working with an ontology....... One method uses a discrete location model to create an initial solution and we propose heuristic methods to further improve the visual result. We evaluate the visual results according to our success criteria and the feedback from users. Running times of the heuristic indicate that an improved version...

  10. Large Field Visualization with Demand-Driven Calculation

    Science.gov (United States)

    Moran, Patrick J.; Henze, Chris

    1999-01-01

    We present a system designed for the interactive definition and visualization of fields derived from large data sets: the Demand-Driven Visualizer (DDV). The system allows the user to write arbitrary expressions to define new fields, and then apply a variety of visualization techniques to the result. Expressions can include differential operators and numerous other built-in functions, all of which are evaluated at specific field locations completely on demand. The payoff of following a demand-driven design philosophy throughout becomes particularly evident when working with large time-series data, where the costs of eager evaluation alternatives can be prohibitive.
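
    The demand-driven idea can be sketched as fields that are evaluated, and cached, only at the sample points a visualization actually requests; the field definitions and class below are illustrative and are not the DDV expression language.

    # Sketch of demand-driven evaluation of derived fields with per-point caching.
    import math

    class DemandDrivenField:
        def __init__(self, fn):
            self.fn = fn
            self._cache = {}

        def __call__(self, point):
            if point not in self._cache:          # evaluate only when requested
                self._cache[point] = self.fn(point)
            return self._cache[point]

    # Base fields defined analytically; a derived field combines them lazily
    u = DemandDrivenField(lambda p: math.sin(p[0]) * p[1])
    v = DemandDrivenField(lambda p: math.cos(p[1]))
    speed = DemandDrivenField(lambda p: math.hypot(u(p), v(p)))

    print(speed((0.5, 2.0)))   # only this sample point is ever computed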

  11. Visual and Auditory Sensitivities and Discriminations

    National Research Council Canada - National Science Library

    Regan, David

    2003-01-01

    .... A new equation gives TTC from binocular information without involving distance. The human visual system contains a mechanism that rapidly compares contours at two distant sites so as to encode the location, size, and shape of an object...

  12. Distinct Neural Substrates for Maintaining Locations and Spatial Relations in Working Memory

    Directory of Open Access Journals (Sweden)

    Kara J Blacker

    2016-11-01

    Full Text Available Previous work has demonstrated a distinction between maintenance of two types of spatial information in working memory (WM): spatial locations and spatial relations. While a body of work has investigated the neural mechanisms of sensory-based information like spatial locations, little is known about how spatial relations are maintained in WM. In two experiments, we used fMRI to investigate the involvement of early visual cortex in the maintenance of spatial relations in WM. In both experiments, we found less quadrant-specific BOLD activity in visual cortex when a single spatial relation, compared to a single spatial location, was held in WM. Also across both experiments, we found a consistent set of brain regions that were differentially activated during maintenance of locations versus relations. Maintaining a location, compared to a relation, was associated with greater activity in typical spatial WM regions like posterior parietal cortex and prefrontal regions, whereas maintaining a relation, compared to a location, was associated with greater activity in the parahippocampal gyrus and precuneus/retrosplenial cortex. Further, in Experiment 2 we manipulated WM load and included trials where participants had to maintain three spatial locations or relations. Under this high load condition, the regions sensitive to locations versus relations were somewhat different than under low load. We also identified regions that were sensitive to load specifically for location or relation maintenance, as well as overlapping regions sensitive to load more generally. These results suggest that the neural substrates underlying WM maintenance of spatial locations and relations are distinct from one another and that the neural representations of these distinct types of spatial information change with load.

  13. Analyzing Spatiotemporal Anomalies through Interactive Visualization

    Directory of Open Access Journals (Sweden)

    Tao Zhang

    2014-06-01

    Full Text Available As we move into the big data era, data grows not just in size, but also in complexity, containing a rich set of attributes, including location and time information, such as data from mobile devices (e.g., smart phones), natural disasters (e.g., earthquakes and hurricanes), epidemic spread, etc. We are motivated by the rising challenge and build a visualization tool for exploring generic spatiotemporal data, i.e., records containing time and location information and numeric attribute values. Since the values often evolve over time and across geographic regions, we are particularly interested in detecting and analyzing the anomalous changes over time/space. Our analytic tool is based on a geographic information system and is combined with spatiotemporal data mining algorithms, as well as various data visualization techniques, such as anomaly grids and anomaly bars superimposed on the map. We study how effectively the tool may guide users to find potential anomalies through demonstrating and evaluating over publicly available spatiotemporal datasets. The tool for spatiotemporal anomaly analysis and visualization is useful in many domains, such as security investigation and monitoring, situation awareness, etc.
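
    One simple way to realize an anomaly grid of the kind described is to bin events by spatial cell and time step and flag cells whose counts deviate strongly from their own history; the z-score rule and synthetic data below are a generic illustration, not the tool's specific mining algorithm.

    # Toy "anomaly grid": flag (time, cell) bins whose counts deviate from that
    # cell's own history by more than an illustrative z-score threshold.
    import numpy as np

    def anomaly_grid(counts, z_thresh=3.0):
        """counts: array of shape (n_timesteps, n_rows, n_cols) of event counts."""
        mean = counts.mean(axis=0, keepdims=True)
        std = counts.std(axis=0, keepdims=True) + 1e-9
        z = (counts - mean) / std
        return z > z_thresh          # boolean mask of anomalous (t, row, col) cells

    rng = np.random.default_rng(1)
    counts = rng.poisson(5, size=(52, 20, 20)).astype(float)
    counts[30, 4, 7] += 40           # inject a burst in week 30, cell (4, 7)
    print(np.argwhere(anomaly_grid(counts)))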

  14. A feast of visualization

    Science.gov (United States)

    2008-12-01

    Strength through structure The visualization and assessment of inner human bone structures can provide better predictions of fracture risk due to osteoporosis. Using micro-computed tomography (µCT), Christoph Räth from the Max Planck Institute for Extraterrestrial Physics and colleagues based in Munich, Vienna and Salzburg have shown how complex lattice-shaped bone structures can be visualized. The structures were quantified by calculating certain "texture measures" that yield new information about the stability of the bone. A 3D visualization showing the variation with orientation of one of the texture measures for four different bone specimens (from left to right) is shown above. Such analyses may help us to improve our understanding of disease and drug-induced changes in bone structure (C Räth et al. 2008 New J. Phys. 10 125010).

  15. Evidence for optimal integration of visual feature representations across saccades

    NARCIS (Netherlands)

    Oostwoud Wijdenes, L.; Marshall, L.; Bays, P.M.

    2015-01-01

    We explore the visual world through saccadic eye movements, but saccades also present a challenge to visual processing by shifting externally stable objects from one retinal location to another. The brain could solve this problem in two ways: by overwriting preceding input and starting afresh with

  16. Invertebrate neurobiology: visual direction of arm movements in an octopus.

    Science.gov (United States)

    Niven, Jeremy E

    2011-03-22

    An operant task in which octopuses learn to locate food by a visual cue in a three-choice maze shows that they are capable of integrating visual and mechanosensory information to direct their arm movements to a goal. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Visual reinforcement shapes eye movements in visual search.

    Science.gov (United States)

    Paeye, Céline; Schütz, Alexander C; Gegenfurtner, Karl R

    2016-08-01

    We use eye movements to gain information about our visual environment; this information can indirectly be used to affect the environment. Whereas eye movements are affected by explicit rewards such as points or money, it is not clear whether the information gained by finding a hidden target has a similar reward value. Here we tested whether finding a visual target can reinforce eye movements in visual search performed in a noise background, which conforms to natural scene statistics and contains a large number of possible target locations. First we tested whether presenting the target more often in one specific quadrant would modify eye movement search behavior. Surprisingly, participants did not learn to search for the target more often in high probability areas. Presumably, participants could not learn the reward structure of the environment. In two subsequent experiments we used a gaze-contingent display to gain full control over the reinforcement schedule. The target was presented more often after saccades into a specific quadrant or a specific direction. The proportions of saccades meeting the reinforcement criteria increased considerably, and participants matched their search behavior to the relative reinforcement rates of targets. Reinforcement learning seems to serve as the mechanism to optimize search behavior with respect to the statistics of the task.

  18. Visualization of induced electric fields

    NARCIS (Netherlands)

    Deursen, van A.P.J.

    2005-01-01

    A cylindrical electrolytic tank between a set of Helmholtz coils provides a classroom demonstration of induced, nonconservative electric fields. The field strength is measured by a sensor consisting of a pair of tiny spheres immersed in the liquid. The sensor signal depends on position, frequency,

  19. Cholinergic enhancement of visual attention and neural oscillations in the human brain.

    Science.gov (United States)

    Bauer, Markus; Kluge, Christian; Bach, Dominik; Bradbury, David; Heinze, Hans Jochen; Dolan, Raymond J; Driver, Jon

    2012-03-06

    Cognitive processes such as visual perception and selective attention induce specific patterns of brain oscillations. The neurochemical bases of these spectral changes in neural activity are largely unknown, but neuromodulators are thought to regulate processing. The cholinergic system is linked to attentional function in vivo, whereas separate in vitro studies show that cholinergic agonists induce high-frequency oscillations in slice preparations. This has led to theoretical proposals that cholinergic enhancement of visual attention might operate via gamma oscillations in visual cortex, although low-frequency alpha/beta modulation may also play a key role. Here we used MEG to record cortical oscillations in the context of administration of a cholinergic agonist (physostigmine) during a spatial visual attention task in humans. This cholinergic agonist enhanced spatial attention effects on low-frequency alpha/beta oscillations in visual cortex, an effect correlating with a drug-induced speeding of performance. By contrast, the cholinergic agonist did not alter high-frequency gamma oscillations in visual cortex. Thus, our findings show that cholinergic neuromodulation enhances attentional selection via an impact on oscillatory synchrony in visual cortex, for low rather than high frequencies. We discuss this dissociation between high- and low-frequency oscillations in relation to proposals that lower-frequency oscillations are generated by feedback pathways within visual cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Alzheimer disease: functional abnormalities in the dorsal visual pathway.

    LENUS (Irish Health Repository)

    Bokde, Arun L W

    2012-02-01

    PURPOSE: To evaluate whether patients with Alzheimer disease (AD) have altered activation compared with age-matched healthy control (HC) subjects during a task that typically recruits the dorsal visual pathway. MATERIALS AND METHODS: The study was performed in accordance with the Declaration of Helsinki, with institutional ethics committee approval, and all subjects provided written informed consent. Two tasks were performed to investigate neural function: face matching and location matching. Twelve patients with mild AD and 14 age-matched HC subjects were included. Brain activation was measured by using functional magnetic resonance imaging. Group statistical analyses were based on a mixed-effects model corrected for multiple comparisons. RESULTS: Task performance was not statistically different between the two groups, and within groups there were no differences in task performance. In the HC group, the visual perception tasks selectively activated the visual pathways. Conversely in the AD group, there was no selective activation during performance of these same tasks. Along the dorsal visual pathway, the AD group recruited additional regions, primarily in the parietal and frontal lobes, for the location-matching task. There were no differences in activation between groups during the face-matching task. CONCLUSION: The increased activation in the AD group may represent a compensatory mechanism for decreased processing effectiveness in early visual areas of patients with AD. The findings support the idea that the dorsal visual pathway is more susceptible to putative AD-related neuropathologic changes than is the ventral visual pathway.

  1. Manipulation of pre-target activity on the right frontal eye field enhances conscious visual perception in humans.

    Directory of Open Access Journals (Sweden)

    Lorena Chanes

    Full Text Available The right Frontal Eye Field (FEF) is a region of the human brain that has been consistently involved in visuo-spatial attention and access to consciousness. Nonetheless, the extent of this cortical site's ability to influence specific aspects of visual performance remains debated. We hereby manipulated pre-target activity on the right FEF and explored its influence on the detection and categorization of low-contrast near-threshold visual stimuli. Our data show that pre-target frontal neurostimulation has the potential, when used alone, to induce enhancements of conscious visual detection. More interestingly, when FEF stimulation was combined with visuo-spatial cues, improvements remained present only for trials in which the cue correctly predicted the location of the subsequent target. Our data provide evidence for the causal role of the right FEF pre-target activity in the modulation of human conscious vision and reveal the dependence of such neurostimulatory effects on the state of activity set up by cue validity in the dorsal attentional orienting network.

  2. A case of epilepsy induced by eating or by visual stimuli of food made of minced meat.

    Science.gov (United States)

    Mimura, Naoya; Inoue, Takeshi; Shimotake, Akihiro; Matsumoto, Riki; Ikeda, Akio; Takahashi, Ryosuke

    2017-08-31

    We report a 34-year-old woman with eating epilepsy induced not only by eating but also by seeing foods made of minced meat. In her early 20s, she started having simple partial seizures (SPS), manifesting as flashbacks and epigastric discomfort, induced by particular foods. When she was 33 years old, she developed SPS followed by a secondarily generalized tonic-clonic seizure (sGTCS), provoked first by eating a hot dog and, 6 months later, by merely seeing a video of dumplings. We performed video electroencephalogram (EEG) monitoring while she was watching the video of soup dumplings that had most likely caused the sGTCS. Ictal EEG showed rhythmic theta activity in the left frontal to mid-temporal area, followed by a generalized seizure pattern. In this patient, seizures were provoked not only by eating particular foods but also by seeing them. This suggests a form of epilepsy involving visual stimuli.

  3. Estimating the timing and location of shallow rainfall-induced landslides using a model for transient, unsaturated infiltration

    Science.gov (United States)

    Baum, Rex L.; Godt, Jonathan W.; Savage, William Z.

    2010-01-01

    Shallow rainfall-induced landslides commonly occur under conditions of transient infiltration into initially unsaturated soils. In an effort to predict the timing and location of such landslides, we developed a model of the infiltration process using a two-layer system that consists of an unsaturated zone above a saturated zone and implemented this model in a geographic information system (GIS) framework. The model links analytical solutions for transient, unsaturated, vertical infiltration above the water table to pressure-diffusion solutions for pressure changes below the water table. The solutions are coupled through a transient water table that rises as water accumulates at the base of the unsaturated zone. This scheme, though limited to simplified soil-water characteristics and moist initial conditions, greatly improves computational efficiency over numerical models in spatially distributed modeling applications. Pore pressures computed by these coupled models are subsequently used in one-dimensional slope-stability computations to estimate the timing and locations of slope failures. Applied over a digital landscape near Seattle, Washington, for an hourly rainfall history known to trigger shallow landslides, the model computes a factor of safety for each grid cell at any time during a rainstorm. The unsaturated layer attenuates and delays the rainfall-induced pore-pressure response of the model at depth, consistent with observations at an instrumented hillside near Edmonds, Washington. This attenuation results in realistic estimates of timing for the onset of slope instability (7 h earlier than observed landslides, on average). By considering the spatial distribution of physical properties, the model predicts the primary source areas of landslides.
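
    The slope-stability step in such models commonly uses an infinite-slope factor of safety driven by the transient pressure head; the sketch below shows one standard form of that expression, FS(Z, t) = tan(phi)/tan(alpha) + [c - psi(Z, t) * gamma_w * tan(phi)] / [gamma_s * Z * sin(alpha) * cos(alpha)], with illustrative parameter values (the soil properties and thresholds are assumptions, not values from the study).

    # Infinite-slope factor of safety with a transient pressure head psi(Z, t).
    import math

    def factor_of_safety(psi, depth, slope_deg, phi_deg=34.0, cohesion=4000.0,
                         gamma_s=20000.0, gamma_w=9810.0):
        """psi: pressure head at depth [m]; depth: slip depth Z [m]; SI units (N, m)."""
        a = math.radians(slope_deg)
        phi = math.radians(phi_deg)
        return (math.tan(phi) / math.tan(a)
                + (cohesion - psi * gamma_w * math.tan(phi))
                  / (gamma_s * depth * math.sin(a) * math.cos(a)))

    # A rising pressure head during a storm drives FS toward failure (FS < 1)
    for psi in (0.0, 0.5, 1.0):
        print(psi, round(factor_of_safety(psi, depth=2.0, slope_deg=35.0), 2))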

  4. Comparison of Congruence Judgment and Auditory Localization Tasks for Assessing the Spatial Limits of Visual Capture

    Science.gov (United States)

    Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.

    2016-01-01

    Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur were narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
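
    The Bayesian-inference account can be sketched with a generic causal-inference model: the posterior probability of a common cause is computed from the audio-visual disparity and then used to mix the integrated and auditory-only location estimates. The formulation, parameter values, and function name below are a textbook-style illustration, not necessarily the exact model fitted in the study.

    # Generic causal-inference sketch of "visual capture" of auditory location.
    import numpy as np

    def capture_estimate(x_a, x_v, sigma_a=8.0, sigma_v=2.0, sigma_p=20.0, p_common=0.5):
        var_a, var_v, var_p = sigma_a**2, sigma_v**2, sigma_p**2
        # Likelihood of the cue pair under a common vs. independent causes (zero-mean prior)
        var_c1 = var_a * var_v + var_a * var_p + var_v * var_p
        like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * var_p + x_a**2 * var_v + x_v**2 * var_a)
                         / var_c1) / (2 * np.pi * np.sqrt(var_c1))
        like_c2 = (np.exp(-0.5 * x_a**2 / (var_a + var_p)) / np.sqrt(2 * np.pi * (var_a + var_p))
                   * np.exp(-0.5 * x_v**2 / (var_v + var_p)) / np.sqrt(2 * np.pi * (var_v + var_p)))
        post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
        # Auditory location estimate under each hypothesis, then model averaging
        s_c1 = (x_a / var_a + x_v / var_v) / (1/var_a + 1/var_v + 1/var_p)
        s_c2 = (x_a / var_a) / (1/var_a + 1/var_p)
        return post_c1 * s_c1 + (1 - post_c1) * s_c2, post_c1

    print(capture_estimate(x_a=10.0, x_v=0.0))   # small disparity: auditory estimate pulled toward vision
    print(capture_estimate(x_a=40.0, x_v=0.0))   # large disparity: little capture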

  5. Visualization of migration of human cortical neurons generated from induced pluripotent stem cells.

    Science.gov (United States)

    Bamba, Yohei; Kanemura, Yonehiro; Okano, Hideyuki; Yamasaki, Mami

    2017-09-01

    Neuronal migration is considered a key process in human brain development. However, direct observation of migrating human cortical neurons in the fetal brain is accompanied by ethical concerns and is a major obstacle in investigating human cortical neuronal migration. We established a novel system that enables direct visualization of migrating cortical neurons generated from human induced pluripotent stem cells (hiPSCs). We observed the migration of cortical neurons generated from hiPSCs derived from a control and from a patient with lissencephaly. Our system needs no viable brain tissue, which is usually used in slice culture. Migratory behavior of human cortical neuron can be observed more easily and more vividly by its fluorescence and glial scaffold than that by earlier methods. Our in vitro experimental system provides a new platform for investigating development of the human central nervous system and brain malformation. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Do Tonic Itch and Pain Stimuli Draw Attention towards Their Location?

    Directory of Open Access Journals (Sweden)

    Antoinette I. M. van Laarhoven

    2017-01-01

    Full Text Available Background. Although itch and pain are distinct experiences, both are unpleasant, may demand attention, and interfere with daily activities. Research investigating the role of attention in tonic itch and pain stimuli, particularly whether attention is drawn to the stimulus location, is scarce. Methods. In the somatosensory attention task, fifty-three healthy participants were exposed to 35-second electrical itch or pain stimuli on either the left or right wrist. Participants responded as quickly as possible to visual targets appearing at the stimulated location (ipsilateral trials) or the arm without stimulation (contralateral trials). During control blocks, participants performed the visual task without stimulation. Attention allocation at the itch and pain location is inferred when responses are faster ipsilaterally than contralaterally. Results. Results did not indicate that attention was directed towards or away from the itch and pain location. Notwithstanding, participants were slower during itch and pain than during control blocks. Conclusions. In contrast with our hypotheses, no indications were found for spatial attention allocation towards the somatosensory stimuli. This may relate to dynamic shifts in attention over the time course of the tonic sensations. Our secondary finding that itch and pain interfere with task performance is in line with attention theories of bodily perception.

  7. Long-Term Visuo-Gustatory Appetitive and Aversive Conditioning Potentiate Human Visual Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Gert R. J. Christoffersen

    2017-09-01

    Full Text Available Human recognition of foods and beverages is often based on visual cues associated with flavors. The dynamics of neurophysiological plasticity related to acquisition of such long-term associations has only recently become the target of investigation. In the present work, the effects of appetitive and aversive visuo-gustatory conditioning were studied with high-density EEG recordings focusing on late components in the visual evoked potentials (VEPs), specifically the N2-P3 waves. Unfamiliar images were paired with either a pleasant or an unpleasant juice and VEPs evoked by the images were compared before and 1 day after the pairings. In electrodes located over posterior visual cortex areas, the following changes were observed after conditioning: the amplitude from the N2-peak to the P3-peak increased and the N2 peak delay was reduced. The percentage increase of N2-to-P3 amplitudes was asymmetrically distributed over the posterior hemispheres despite the fact that the images were bilaterally symmetrical across the two visual hemifields. The percentage increases of N2-to-P3 amplitudes in each experimental subject correlated with the subject's evaluation of positive or negative hedonic valences of the two juices. The results from 118 scalp electrodes gave surface maps of theta power distributions showing increased power over posterior visual areas after the pairings. Source current distributions calculated from swLORETA revealed that visual evoked currents rose as a result of conditioning in five cortical regions—from primary visual areas and into the inferior temporal gyrus (ITG). These learning-induced changes were seen after both appetitive and aversive training while a sham-trained control group showed no changes. It is concluded that long-term visuo-gustatory conditioning potentiated the N2-P3 complex, and it is suggested that the changes are regulated by the perceived hedonic valence of the US.
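
    Measuring the N2-to-P3 change described above amounts to locating the most negative and most positive points of the averaged VEP in their respective latency windows and taking the difference; the search windows and function name in the sketch below are typical values assumed for illustration, not those reported in the study.

    # Illustrative N2-P3 peak-to-peak extraction from one averaged VEP waveform.
    import numpy as np

    def n2_p3_peak_to_peak(vep, times, n2_window=(0.150, 0.280), p3_window=(0.250, 0.450)):
        """vep: averaged waveform (volts); times: sample times (seconds)."""
        vep, times = np.asarray(vep), np.asarray(times)
        n2_mask = (times >= n2_window[0]) & (times <= n2_window[1])
        p3_mask = (times >= p3_window[0]) & (times <= p3_window[1])
        n2_amp = vep[n2_mask].min()                      # most negative point in the N2 window
        n2_latency = times[n2_mask][vep[n2_mask].argmin()]
        p3_amp = vep[p3_mask].max()                      # most positive point in the P3 window
        return p3_amp - n2_amp, n2_latency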

  8. Attentional Modulation in Visual Cortex Is Modified during Perceptual Learning

    Science.gov (United States)

    Bartolucci, Marco; Smith, Andrew T.

    2011-01-01

    Practicing a visual task commonly results in improved performance. Often the improvement does not transfer well to a new retinal location, suggesting that it is mediated by changes occurring in early visual cortex, and indeed neuroimaging and neurophysiological studies both demonstrate that perceptual learning is associated with altered activity…

  9. Optimal Facility Location Tool for Logistics Battle Command (LBC)

    Science.gov (United States)

    2015-08-01

    …should city planners have located emergency service facilities so that all households (the demand) had equal access to coverage?” The critical … programming language called Visual Basic for Applications (VBA). CPLEX is a commercial solver for linear, integer, and mixed integer linear programming problems…
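
    A model of the kind such a tool solves can be sketched as a maximal-covering facility location program; the formulation below uses the open-source PuLP modeller rather than CPLEX/VBA, and the demand points, candidate sites, coverage sets, and facility budget are made up for illustration.

    # Hedged sketch of a maximal-covering facility location model (illustrative data).
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

    demands = {"d1": 40, "d2": 25, "d3": 35}              # demand weights
    sites = ["s1", "s2"]                                   # candidate facility sites
    covers = {"s1": {"d1", "d2"}, "s2": {"d2", "d3"}}      # demands each site can cover
    p = 1                                                  # facilities we may open

    x = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in sites}
    y = {d: LpVariable(f"covered_{d}", cat=LpBinary) for d in demands}

    model = LpProblem("max_coverage", LpMaximize)
    model += lpSum(demands[d] * y[d] for d in demands)                 # maximize covered demand
    model += lpSum(x[s] for s in sites) <= p                           # facility budget
    for d in demands:
        model += y[d] <= lpSum(x[s] for s in sites if d in covers[s])  # need an open covering site

    model.solve()
    print({s: x[s].value() for s in sites}, {d: y[d].value() for d in demands})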

  10. Face processing is gated by visual spatial attention

    Directory of Open Access Journals (Sweden)

    Roy E Crist

    2008-03-01

    Full Text Available Human perception of faces is widely believed to rely on automatic processing by a domain-specific, modular component of the visual system. Scalp-recorded event-related potential (ERP) recordings indicate that faces receive special stimulus processing at around 170 ms poststimulus onset, in that faces evoke an enhanced occipital negative wave, known as the N170, relative to the activity elicited by other visual objects. As predicted by modular accounts of face processing, this early face-specific N170 enhancement has been reported to be largely immune to the influence of endogenous processes such as task strategy or attention. However, most studies examining the influence of attention on face processing have focused on non-spatial attention, such as object-based attention, which tend to have longer-latency effects. In contrast, numerous studies have demonstrated that visual spatial attention can modulate the processing of visual stimuli as early as 80 ms poststimulus – substantially earlier than the N170. These temporal characteristics raise the question of whether this initial face-specific processing is immune to the influence of spatial attention. This question was addressed in a dual visual-stream ERP study in which the influence of spatial attention on the face-specific N170 could be directly examined. As expected, early visual sensory responses to all stimuli presented in an attended location were larger than responses evoked by those same stimuli when presented in an unattended location. More importantly, a significant face-specific N170 effect was elicited by faces that appeared in an attended location, but not in an unattended one. In summary, early face-specific processing is not automatic, but rather, like other objects, strongly depends on endogenous factors such as the allocation of spatial attention. Moreover, these findings underscore the extensive influence that top-down attention exercises over the processing of

  11. Modulation of early cortical processing during divided attention to non-contiguous locations.

    Science.gov (United States)

    Frey, Hans-Peter; Schmid, Anita M; Murphy, Jeremy W; Molholm, Sophie; Lalor, Edmund C; Foxe, John J

    2014-05-01

    We often face the challenge of simultaneously attending to multiple non-contiguous regions of space. There is ongoing debate as to how spatial attention is divided under these situations. Whereas, for several years, the predominant view was that humans could divide the attentional spotlight, several recent studies argue in favor of a unitary spotlight that rhythmically samples relevant locations. Here, this issue was addressed by the use of high-density electrophysiology in concert with the multifocal m-sequence technique to examine visual evoked responses to multiple simultaneous streams of stimulation. Concurrently, we assayed the topographic distribution of alpha-band oscillatory mechanisms, a measure of attentional suppression. Participants performed a difficult detection task that required simultaneous attention to two stimuli in contiguous (undivided) or non-contiguous parts of space. In the undivided condition, the classic pattern of attentional modulation was observed, with increased amplitude of the early visual evoked response and increased alpha amplitude ipsilateral to the attended hemifield. For the divided condition, early visual responses to attended stimuli were also enhanced, and the observed multifocal topographic distribution of alpha suppression was in line with the divided attention hypothesis. These results support the existence of divided attentional spotlights, providing evidence that the corresponding modulation occurs during initial sensory processing time-frames in hierarchically early visual regions, and that suppressive mechanisms of visual attention selectively target distracter locations during divided spatial attention. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  12. Retinal Pigmented Epithelial Cells Obtained from Human Induced Pluripotent Stem Cells Possess Functional Visual Cycle Enzymes in Vitro and in Vivo*

    Science.gov (United States)

    Maeda, Tadao; Lee, Mee Jee; Palczewska, Grazyna; Marsili, Stefania; Tesar, Paul J.; Palczewski, Krzysztof; Takahashi, Masayo; Maeda, Akiko

    2013-01-01

    Differentiated retinal pigmented epithelial (RPE) cells have been obtained from human induced pluripotent stem (hiPS) cells. However, the visual (retinoid) cycle in hiPS-RPE cells has not been adequately examined. Here we determined the expression of functional visual cycle enzymes in hiPS-RPE cells compared with that of isolated wild-type mouse primary RPE (mpRPE) cells in vitro and in vivo. hiPS-RPE cells appeared morphologically similar to mpRPE cells. Notably, expression of certain visual cycle proteins was maintained during cell culture of hiPS-RPE cells, whereas expression of these same molecules rapidly decreased in mpRPE cells. Production of the visual chromophore, 11-cis-retinal, and retinosome formation also were documented in hiPS-RPE cells in vitro. When mpRPE cells with luciferase activity were transplanted into the subretinal space of mice, bioluminescence intensity was preserved for >3 months. Additionally, transplantation of mpRPE into blind Lrat−/− and Rpe65−/− mice resulted in the recovery of visual function, including increased electrographic signaling and endogenous 11-cis-retinal production. Finally, when hiPS-RPE cells were transplanted into the subretinal space of Lrat−/− and Rpe65−/− mice, their vision improved as well. Moreover, histological analyses of these eyes displayed replacement of dysfunctional RPE cells by hiPS-RPE cells. Together, our results show that hiPS-RPE cells can exhibit a functional visual cycle in vitro and in vivo. These cells could provide potential treatment options for certain blinding retinal degenerative diseases. PMID:24129572

  13. Standard-Fractionated Radiotherapy for Optic Nerve Sheath Meningioma: Visual Outcome Is Predicted by Mean Eye Dose

    Energy Technology Data Exchange (ETDEWEB)

    Abouaf, Lucie [Neuro-Ophthalmology Unit, Pierre-Wertheimer Hospital, Hospices Civils de Lyon, Lyon (France); Girard, Nicolas [Radiotherapy-Oncology Department, Lyon Sud Hospital, Hospices Civils de Lyon, Lyon (France); Claude Bernard University, Lyon (France); Lefort, Thibaud [Neuro-Radiology Department, Pierre-Wertheimer Hospital, Hospices Civils de Lyon, Lyon (France); D'hombres, Anne [Claude Bernard University, Lyon (France); Tilikete, Caroline; Vighetto, Alain [Neuro-Ophthalmology Unit, Pierre-Wertheimer Hospital, Hospices Civils de Lyon, Lyon (France); Claude Bernard University, Lyon (France); Mornex, Francoise, E-mail: francoise.mornex@chu-lyon.fr [Claude Bernard University, Lyon (France)

    2012-03-01

    Purpose: Radiotherapy has shown its efficacy in controlling optic nerve sheath meningiomas (ONSM) tumor growth while allowing visual acuity to improve or stabilize. However, radiation-induced toxicity may ultimately jeopardize the functional benefit. The purpose of this study was to identify predictive factors of poor visual outcome in patients receiving radiotherapy for ONSM. Methods and Materials: We conducted an extensive analysis of 10 patients with ONSM with regard to clinical, radiologic, and dosimetric aspects. All patients were treated with conformal radiotherapy and subsequently underwent biannual neuroophthalmologic and imaging assessments. Pretreatment and posttreatment values of visual acuity and visual field were compared with Wilcoxon's signed rank test. Results: Visual acuity values significantly improved after radiotherapy. After a median follow-up time of 51 months, 6 patients had improved visual acuity, 4 patients had improved visual field, 1 patient was in stable condition, and 1 patient had deteriorated visual acuity and visual field. Tumor control rate was 100% at magnetic resonance imaging assessment. Visual acuity deterioration after radiotherapy was related to radiation-induced retinopathy in 2 patients and radiation-induced mature cataract in 1 patient. Study of radiotherapy parameters showed that the mean eye dose was significantly higher in those 3 patients who had deteriorated vision. Conclusions: Our study confirms that radiotherapy is efficient in treating ONSM. Long-term visual outcome may be compromised by radiation-induced side effects. Mean eye dose has to be considered as a limiting constraint in treatment planning.
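
    The pretreatment/posttreatment comparison described above is a standard paired, nonparametric test. Purely as an illustration (the values below are hypothetical, not the study's data), such a comparison can be run as follows:

        # Hypothetical sketch: paired pre/post comparison with Wilcoxon's signed rank test.
        from scipy.stats import wilcoxon

        # Visual acuity before and after radiotherapy (made-up decimal-acuity values).
        pre  = [0.3, 0.5, 0.4, 0.2, 0.6, 0.5, 0.3, 0.4, 0.7, 0.5]
        post = [0.5, 0.6, 0.4, 0.4, 0.8, 0.6, 0.4, 0.5, 0.7, 0.6]

        stat, p_value = wilcoxon(pre, post)  # two-sided test on the paired differences
        print(stat, p_value)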

  14. Deployment of spatial attention towards locations in memory representations. An EEG study.

    Science.gov (United States)

    Leszczyński, Marcin; Wykowska, Agnieszka; Perez-Osorio, Jairo; Müller, Hermann J

    2013-01-01

    Recalling information from visual short-term memory (VSTM) involves the same neural mechanisms as attending to an actually perceived scene. In particular, retrieval from VSTM has been associated with orienting of visual attention towards a location within a spatially-organized memory representation. However, an open question concerns whether spatial attention is also recruited during VSTM retrieval even when performing the task does not require access to spatial coordinates of items in the memorized scene. The present study combined a visual search task with a modified, delayed central probe protocol, together with EEG analysis, to answer this question. We found a temporal contralateral negativity (TCN) elicited by a centrally presented go-signal which was spatially uninformative and featurally unrelated to the search target and informed participants only about a response key that they had to press to indicate a prepared target-present vs. -absent decision. This lateralization during VSTM retrieval (TCN) provides strong evidence of a shift of attention towards the target location in the memory representation, which occurred despite the fact that the present task required no spatial (or featural) information from the search to be encoded, maintained, and retrieved to produce the correct response and that the go-signal did not itself specify any information relating to the location and defining feature of the target.

  15. Deployment of spatial attention towards locations in memory representations. An EEG study.

    Directory of Open Access Journals (Sweden)

    Marcin Leszczyński

    Full Text Available Recalling information from visual short-term memory (VSTM) involves the same neural mechanisms as attending to an actually perceived scene. In particular, retrieval from VSTM has been associated with orienting of visual attention towards a location within a spatially-organized memory representation. However, an open question concerns whether spatial attention is also recruited during VSTM retrieval even when performing the task does not require access to spatial coordinates of items in the memorized scene. The present study combined a visual search task with a modified, delayed central probe protocol, together with EEG analysis, to answer this question. We found a temporal contralateral negativity (TCN) elicited by a centrally presented go-signal which was spatially uninformative and featurally unrelated to the search target and informed participants only about a response key that they had to press to indicate a prepared target-present vs. -absent decision. This lateralization during VSTM retrieval (TCN) provides strong evidence of a shift of attention towards the target location in the memory representation, which occurred despite the fact that the present task required no spatial (or featural) information from the search to be encoded, maintained, and retrieved to produce the correct response and that the go-signal did not itself specify any information relating to the location and defining feature of the target.

  16. Using location tracking data to assess efficiency in established clinical workflows.

    Science.gov (United States)

    Meyer, Mark; Fairbrother, Pamela; Egan, Marie; Chueh, Henry; Sandberg, Warren S

    2006-01-01

    Location tracking systems are becoming more prevalent in clinical settings yet applications still are not common. We have designed a system to aid in the assessment of clinical workflow efficiency. Location data is captured from active RFID tags and processed into usable data. These data are stored and presented visually with trending capability over time. The system allows quick assessments of the impact of process changes on workflow, and isolates areas for improvement.
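
    How raw tag reads become "usable data" is not detailed in the abstract; a minimal sketch of one common reduction, computing per-location dwell times from timestamped (tag, location) events, is given below. All names and values are hypothetical, not the authors' implementation:

        # Hypothetical sketch: per-location dwell times from active-RFID location events.
        from collections import defaultdict
        from datetime import datetime

        events = [  # (tag_id, location, timestamp) as emitted by the tracking system
            ("tag01", "prep_room", datetime(2006, 1, 5, 8, 0)),
            ("tag01", "operating_room", datetime(2006, 1, 5, 8, 25)),
            ("tag01", "recovery", datetime(2006, 1, 5, 10, 40)),
        ]

        def dwell_times(events):
            """Minutes spent at each location, per tag, from time-ordered events."""
            by_tag = defaultdict(list)
            for tag, loc, ts in events:
                by_tag[tag].append((ts, loc))
            totals = defaultdict(float)
            for tag, visits in by_tag.items():
                visits.sort()
                for (t0, loc), (t1, _) in zip(visits, visits[1:]):
                    totals[(tag, loc)] += (t1 - t0).total_seconds() / 60.0
            return dict(totals)

        print(dwell_times(events))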

  17. [Are Visual Field Defects Reversible? - Visual Rehabilitation with Brains].

    Science.gov (United States)

    Sabel, B A

    2017-02-01

    Visual field defects are considered irreversible because the retina and optic nerve do not regenerate. Nevertheless, there is some potential for recovery of the visual fields. This can be accomplished by the brain, which analyses and interprets visual information and is able to amplify residual signals through neuroplasticity. Neuroplasticity refers to the ability of the brain to change its own functional architecture by modulating synaptic efficacy. This is actually the neurobiological basis of normal learning. Plasticity is maintained throughout life and can be induced by repetitively stimulating (training) brain circuits. The question now arises as to how plasticity can be utilised to activate residual vision for the treatment of visual field loss. Just as in neurorehabilitation, visual field defects can be modulated by post-lesion plasticity to improve vision in glaucoma, diabetic retinopathy or optic neuropathy. Because almost all patients have some residual vision, the goal is to strengthen residual capacities by enhancing synaptic efficacy. New treatment paradigms have been tested in clinical studies, including vision restoration training and non-invasive alternating current stimulation. While vision training is a behavioural task to selectively stimulate "relative defects" with daily vision exercises for the duration of 6 months, treatment with alternating current stimulation (30 min. daily for 10 days) activates and synchronises the entire retina and brain. Though full restoration of vision is not possible, such treatments improve vision, both subjectively and objectively. This includes visual field enlargements, improved acuity and reaction time, improved orientation and vision related quality of life. About 70 % of the patients respond to the therapies and there are no serious adverse events. Physiological studies of the effect of alternating current stimulation using EEG and fMRI reveal massive local and global changes in the brain. These include

  18. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands, the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when versus phonetic/semantic (which content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual part of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  19. Picture book exposure elicits positive visual preferences in toddlers.

    Science.gov (United States)

    Houston-Price, Carmel; Burton, Eliza; Hickinson, Rachel; Inett, Jade; Moore, Emma; Salmon, Katherine; Shiba, Paula

    2009-09-01

    Although the relationship between "mere exposure" and attitude enhancement is well established in the adult domain, there has been little similar work with children. This article examines whether toddlers' visual attention toward pictures of foods can be enhanced by repeated visual exposure to pictures of foods in a parent-administered picture book. We describe three studies that explored the number and nature of exposures required to elicit positive visual preferences for stimuli and the extent to which induced preferences generalize to other similar items. Results show that positive preferences for stimuli are easily and reliably induced in children and, importantly, that this effect of exposure is not restricted to the exposed stimulus per se but also applies to new representations of the exposed item.

  20. Dissociable influences of auditory object vs. spatial attention on visual system oscillatory activity.

    Directory of Open Access Journals (Sweden)

    Jyrki Ahveninen

    Full Text Available Given that both auditory and visual systems have anatomically separate object identification ("what" and spatial ("where" pathways, it is of interest whether attention-driven cross-sensory modulations occur separately within these feature domains. Here, we investigated how auditory "what" vs. "where" attention tasks modulate activity in visual pathways using cortically constrained source estimates of magnetoencephalograpic (MEG oscillatory activity. In the absence of visual stimuli or tasks, subjects were presented with a sequence of auditory-stimulus pairs and instructed to selectively attend to phonetic ("what" vs. spatial ("where" aspects of these sounds, or to listen passively. To investigate sustained modulatory effects, oscillatory power was estimated from time periods between sound-pair presentations. In comparison to attention to sound locations, phonetic auditory attention was associated with stronger alpha (7-13 Hz power in several visual areas (primary visual cortex; lingual, fusiform, and inferior temporal gyri, lateral occipital cortex, as well as in higher-order visual/multisensory areas including lateral/medial parietal and retrosplenial cortices. Region-of-interest (ROI analyses of dynamic changes, from which the sustained effects had been removed, suggested further power increases during Attend Phoneme vs. Location centered at the alpha range 400-600 ms after the onset of second sound of each stimulus pair. These results suggest distinct modulations of visual system oscillatory activity during auditory attention to sound object identity ("what" vs. sound location ("where". The alpha modulations could be interpreted to reflect enhanced crossmodal inhibition of feature-specific visual pathways and adjacent audiovisual association areas during "what" vs. "where" auditory attention.

  1. Comparison of congruence judgment and auditory localization tasks for assessing the spatial limits of visual capture.

    Science.gov (United States)

    Bosen, Adam K; Fleming, Justin T; Brown, Sarah E; Allen, Paul D; O'Neill, William E; Paige, Gary D

    2016-12-01

    Vision typically has better spatial accuracy and precision than audition and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small, visual capture is likely to occur, and when disparity is large, visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audiovisual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner.
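
    The abstract does not give the model's exact parameterization; a generic causal-inference form that such visual-capture models typically take (notation assumed here, not taken from the paper) is

        P(C = 1 \mid x_V, x_A) = \frac{P(x_V, x_A \mid C = 1)\, P(C = 1)}{P(x_V, x_A \mid C = 1)\, P(C = 1) + P(x_V, x_A \mid C = 2)\, \bigl(1 - P(C = 1)\bigr)}

    where x_V and x_A are the noisy visual and auditory location estimates and C = 1 denotes a common source. The prior P(C = 1) corresponds to the "prior expectation that targets originated from the same location" that the authors report subjects adjusted between tasks; visual capture is expected when this posterior is high.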

  2. Altered functional brain connectivity in patients with visually induced dizziness

    Directory of Open Access Journals (Sweden)

    Angelique Van Ombergen

    2017-01-01

    Conclusions: We found alterations in the visual and vestibular cortical network in VID patients that could underlie the typical VID symptoms such as a worsening of their vestibular symptoms when being exposed to challenging visual stimuli. These preliminary findings provide the first insights into the underlying functional brain connectivity in VID patients. Future studies should extend these findings by employing larger sample sizes, by investigating specific task-based paradigms in these patients and by exploring the implications for treatment.

  3. Visual disamenities from off-shore wind farms in Denmark

    DEFF Research Database (Denmark)

    Ladenburg, Jacob; Dubgaard, Alex; Tranberg, Jesper

    2006-01-01

    Expansion of off-shore wind power plays a significant role in the energy policies of many EU countries. However, off-shore wind farms create visual disamenities. These disamenities can be reduced by locating wind farms at larger distances from the coast – and accepting higher costs per kWh produced. Based on the choices among alternative wind farm outlays, the preferences for reducing visual disamenities of off-shore wind farms were elicited using the Choice Experiment Method. The results show a clear picture; the respondents in three independent samples are willing to pay for moving future off-shore wind farms away from the shore to reduce the wind farms' visibility. However, the results also denote that the preferences vary with regard to the experiences with visual disamenities of off-shore wind farms. The respondents in the Horns Rev sample, where the off-shore wind farm is located...

  4. Premotor activations in response to visually presented single letters depend on the hand used to write: a study on left-handers.

    Science.gov (United States)

    Longcamp, Marieke; Anton, Jean-Luc; Roth, Muriel; Velay, Jean-Luc

    2005-01-01

    In a previous fMRI study on right-handers (Rhrs), we reported that part of the left ventral premotor cortex (BA6) was activated when alphabetical characters were passively observed and that the same region was also involved in handwriting [Longcamp, M., Anton, J. L., Roth, M., & Velay, J. L. (2003). Visual presentation of single letters activates a premotor area involved in writing. NeuroImage, 19, 1492-1500]. We therefore suggested that letter-viewing may induce automatic involvement of handwriting movements. In the present study, in order to confirm this hypothesis, we carried out a similar fMRI experiment on a group of left-handed subjects (Lhrs). We reasoned that if the above assumption was correct, visual perception of letters by Lhrs might automatically activate cortical motor areas coding for left-handed writing movements, i.e., areas located in the right hemisphere. The visual stimuli used here were either single letters, single pseudoletters, or a control stimulus. The subjects were asked to watch these stimuli attentively, and no response was required. The results showed that a ventral premotor cortical area (BA6) in the right hemisphere was specifically activated when Lhrs looked at letters and not at pseudoletters. This right area was symmetrically located with respect to the left one activated under the same circumstances in Rhrs. This finding supports the hypothesis that visual perception of written language evokes covert motor processes. In addition, a bilateral area, also located in the premotor cortex (BA6), but more ventrally and medially, was found to be activated in response to both letters and pseudoletters. This premotor region, which was not activated correspondingly in Rhrs, might be involved in the processing of graphic stimuli, whatever their degree of familiarity.

  5. Survival Processing Enhances Visual Search Efficiency.

    Science.gov (United States)

    Cho, Kit W

    2018-05-01

    Words rated for their survival relevance are remembered better than when rated using other well-known memory mnemonics. This finding, which is known as the survival advantage effect and has been replicated in many studies, suggests that our memory systems are molded by natural selection pressures. In two experiments, the present study used a visual search task to examine whether there is likewise a survival advantage for our visual systems. Participants rated words for their survival relevance or for their pleasantness before locating that object's picture in a search array with 8 or 16 objects. Although there was no difference in search times between the two rating scenarios when set size was 8, survival processing reduced visual search times when set size was 16. These findings reflect a search efficiency effect and suggest that, similar to our memory systems, our visual systems are also tuned toward self-preservation.

  6. Visual Attention in Posterior Stroke and Relations to Alexia

    DEFF Research Database (Denmark)

    Petersen, Anders; Vangkilde, Signe; Fabricius, Charlotte

    2016-01-01

    Impaired visual attention is common following strokes in the territory of the middle cerebral artery, particularly in the right hemisphere, while attentional effects of more posterior lesions are less clear. Commonly, such deficits are investigated in relation to specific syndromes like visual agnosia or pure alexia. The aim of this study was to characterize visual processing speed and apprehension span following posterior cerebral artery (PCA) stroke. In addition, the relationship between these attentional parameters and single word reading is investigated, as previous studies have suggested that reduced visual speed and span may explain pure alexia. Eight patients with unilateral PCA strokes (four left hemisphere, four right hemisphere) were selected on the basis of lesion location, rather than the presence of any visual symptoms. Visual attention was characterized by a whole report paradigm...

  7. A tissue phantom for visualization and measurement of ultrasound-induced cavitation damage.

    Science.gov (United States)

    Maxwell, Adam D; Wang, Tzu-Yin; Yuan, Lingqian; Duryea, Alexander P; Xu, Zhen; Cain, Charles A

    2010-12-01

    Many ultrasound studies involve the use of tissue-mimicking materials to research phenomena in vitro and predict in vivo bioeffects. We have developed a tissue phantom to study cavitation-induced damage to tissue. The phantom consists of red blood cells suspended in an agarose hydrogel. The acoustic and mechanical properties of the gel phantom were found to be similar to soft tissue properties. The phantom's response to cavitation was evaluated using histotripsy. Histotripsy causes breakdown of tissue structures by the generation of controlled cavitation using short, focused, high-intensity ultrasound pulses. Histotripsy lesions were generated in the phantom and kidney tissue using a spherically focused 1-MHz transducer generating 15 cycle pulses, at a pulse repetition frequency of 100 Hz with a peak negative pressure of 14 MPa. Damage appeared clearly as increased optical transparency of the phantom due to rupture of individual red blood cells. The morphology of lesions generated in the phantom was very similar to that generated in kidney tissue at both macroscopic and cellular levels. Additionally, lesions in the phantom could be visualized as hypoechoic regions on a B-mode ultrasound image, similar to histotripsy lesions in tissue. High-speed imaging of the optically transparent phantom was used to show that damage coincides with the presence of cavitation. These results indicate that the phantom can accurately mimic the response of soft tissue to cavitation and provide a useful tool for studying damage induced by acoustic cavitation. Copyright © 2010 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  8. Efficacy and complications of radiotherapy of anterior visual pathway tumors

    International Nuclear Information System (INIS)

    Capo, H.; Kupersmith, M.J.

    1991-01-01

    A progressive disturbance in visual acuity or visual field, along with an unexplained optic nerve atrophy, suggests the possibility of a tumor. Tumors that frequently affect the anterior visual pathway include primary optic nerve sheath meningiomas, intracranial meningiomas, optic gliomas, pituitary tumors, and craniopharyngiomas. The location of these tumors sometimes prohibits a complete surgical excision that might jeopardize the visual system. Radiation therapy, however, can be beneficial in these cases. This article reviews the indications for radiotherapy of tumors that involve the anterior visual pathway, along with the possible complications. Cases illustrating the effects of radiation therapy and radiation damage are presented. 131 references.

  9. Conscious visual memory with minimal attention.

    Science.gov (United States)

    Pinto, Yair; Vandenbroucke, Annelinde R; Otten, Marte; Sligte, Ilja G; Seth, Anil K; Lamme, Victor A F

    2017-02-01

    Is conscious visual perception limited to the locations that a person attends? The remarkable phenomenon of change blindness, which shows that people miss nearly all unattended changes in a visual scene, suggests the answer is yes. However, change blindness is found after visual interference (a mask or a new scene), so that subjects have to rely on working memory (WM), which has limited capacity, to detect the change. Before such interference, however, a much larger capacity store, called fragile memory (FM), which is easily overwritten by newly presented visual information, is present. Whether these different stores depend equally on spatial attention is central to the debate on the role of attention in conscious vision. In 2 experiments, we found that minimizing spatial attention almost entirely erases visual WM, as expected. Critically, FM remains largely intact. Moreover, minimally attended FM responses yield accurate metacognition, suggesting that conscious memory persists with limited spatial attention. Together, our findings help resolve the fundamental issue of how attention affects perception: Both visual consciousness and memory can be supported by only minimal attention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. The influence of experimentally induced pain on shoulder muscle activity

    DEFF Research Database (Denmark)

    Diederichsen, L.P.; Winther, A.; Dyhre-Poulsen, P.

    2009-01-01

    …–105°) at a speed of approximately 120°/s, controlled by a metronome. During abduction, electromyographic (EMG) activity was recorded by intramuscular wire electrodes inserted in two deeply located shoulder muscles and by surface-electrodes over six superficially located shoulder muscles. EMG was recorded before pain, during pain and after pain had subsided, and pain intensity was continuously scored on a visual analog scale (VAS). During abduction, experimentally induced pain in the supraspinatus muscle caused a significant decrease in activity of the anterior deltoid, upper trapezius and the infraspinatus, and an increase in activity of the lower trapezius and latissimus dorsi muscles. Following subacromial injection, a significantly increased muscle activity was seen in the lower trapezius, the serratus anterior and the latissimus dorsi muscles. In conclusion, this study shows...

  11. [Effect of acupuncture on pattern-visual evoked potential in rats with monocular visual deprivation].

    Science.gov (United States)

    Yan, Xing-Ke; Dong, Li-Li; Liu, An-Guo; Wang, Jun-Yan; Ma, Chong-Bing; Zhu, Tian-Tian

    2013-08-01

    To explore the electrophysiological mechanism of acupuncture for the treatment and prevention of visual deprivation effects. Eighteen healthy 15-day-old Evans rats were randomly divided into a normal group, a model group and an acupuncture group, 6 rats in each one. A deprivation amblyopia model was established by monocular eyelid suture in the model group and the acupuncture group. Acupuncture was applied at "Jingming" (BL 1), "Chengqi" (ST 1), "Qiuhou" (EX-HN 7) and "Cuanzhu" (BL 2) in the acupuncture group. The bilateral acupoints were selected alternately, one side per day, for a total of 14 days. The effect of acupuncture on the visual evoked potential at different spatial frequencies was observed. Under the three spatial frequencies of 2 × 2, 4 × 4 and 8 × 8, compared with the normal group, there was an obvious visual deprivation effect in the model group, where the P1 peak latency was delayed (P < 0.05). Under the spatial frequency of 4 × 4, the N1-P1 amplitude value was maximal in the normal group and the acupuncture group. At this spatial frequency the rat's eye had the best resolving ability, indicating it could be the best spatial frequency for the rat visual system. The visual system has obvious electrophysiological plasticity during the sensitive period. Acupuncture treatment could counteract the visual deprivation-induced suppression and slowing of the visual response, thereby antagonizing the deprivation effect.

  12. Location of microseismic swarms induced by salt solution mining

    Science.gov (United States)

    Kinscher, J.; Bernard, P.; Contrucci, I.; Mangeney, A.; Piguet, J. P.; Bigarre, P.

    2015-01-01

    Ground failures, caving processes and collapses of large natural or man-made underground cavities can produce significant socio-economic damages and represent a serious risk envisaged by the mine managements and municipalities. In order to improve our understanding of the mechanisms governing such a geohazard and to test the potential of geophysical methods to prevent them, the development and collapse of a salt solution mining cavity was monitored in the Lorraine basin in northeastern France. During the experiment, a huge microseismic data set (˜50 000 event files) was recorded by a local microseismic network. 80 per cent of the data comprised unusual swarming sequences with complex clusters of superimposed microseismic events which could not be processed through standard automatic detection and location routines. Here, we present two probabilistic methods which provide a powerful tool to assess the spatio-temporal characteristics of these swarming sequences in an automatic manner. Both methods take advantage of strong attenuation effects and significantly polarized P-wave energies at higher frequencies (>100 Hz). The first location approach uses simple signal amplitude estimates for different frequency bands, and an attenuation model to constrain the hypocentre locations. The second approach was designed to identify significantly polarized P-wave energies and the associated polarization angles which provide very valuable information on the hypocentre location. Both methods are applied to a microseismic data set recorded during an important step of the development of the cavity, that is, before its collapse. From our results, systematic spatio-temporal epicentre migration trends are observed in the order of seconds to minutes and several tens of meters which are partially associated with cyclic behaviours. In addition, from spatio-temporal distribution of epicentre clusters we observed similar epicentre migration in the order of hours and days. All together, we

  13. Fermeuse wind power project Newfoundland : noise and visual analysis studies

    Energy Technology Data Exchange (ETDEWEB)

    Henn, P.; Turgeon, J.; Heraud, P.; Belanger, S.; Dakousian, S.; Lamontagne, C.; Soares, D. [Helimax Energy Inc., Montreal, PQ (Canada); Basil, C.; Boulianne, S.; Salacup, S.; Thompson, C. [Skypower, Toronto, ON (Canada)

    2008-03-15

    This paper discussed the noise and visual analyses used to assess the potential impacts of a wind energy project on the east coast of the Avalon Peninsula near St. John's, Newfoundland. The proposed farm will be located approximately 1 km away from the town of Fermeuse, and will have an installed capacity of 27 MW from 9 turbines. The paper provided details of the consultation process conducted to determine acceptable distance and site locations for the wind turbines from the community. Stakeholders were identified during meetings, events, and discussions with local authorities. Consultations were also held with government agencies and municipal councils. A baseline acoustic environment study was conducted, and details of anticipated environmental impacts during the project's construction, operation, and decommissioning phases were presented. The visual analysis study was divided into the following landscape units: town, shoreline, forest, open land and lacustrine landscapes. The effect of the turbines on the landscapes were assessed from different viewpoints using visual simulation programs. The study showed that the visual effects of the project are not considered as significant because of the low number of turbines. It was concluded that the effect of construction on ambient noise levels is of low concern as all permanent dwellings are located at least 1 km away from the turbines. 2 refs., 4 tabs., 4 figs.

  14. Learning the association between a context and a target location in infancy.

    Science.gov (United States)

    Bertels, Julie; San Anton, Estibaliz; Gebuis, Titia; Destrebecqz, Arnaud

    2017-07-01

    Extracting the statistical regularities present in the environment is a central learning mechanism in infancy. For instance, infants are able to learn the associations between simultaneously or successively presented visual objects (Fiser & Aslin; Kirkham, Slemmer & Johnson). The present study extends these results by investigating whether infants can learn the association between a target location and the context in which it is presented. With this aim, we used a visual associative learning procedure inspired by the contextual cuing paradigm, with infants from 8 to 12 months of age. In two experiments, in which we varied the complexity of the stimuli, we first habituated infants to several scenes where the location of a target (a cartoon character) was consistently associated with a context, namely a specific configuration of geometrical shapes. Second, we examined whether infants learned the covariation between the target location and the context by measuring looking times at scenes that either respected or violated the association. In both experiments, results showed that infants learned the target-context associations, as they looked longer at the familiar scenes than at the novel ones. In particular, infants selected clusters of co-occurring contextual shapes and learned the covariation between the target location and this subset. These results support the existence of a powerful and versatile statistical learning mechanism that may influence the orientation of infants' visual attention toward areas of interest in their environment during early developmental stages. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=9Hm1unyLBn0. © 2016 John Wiley & Sons Ltd.

  15. Visualization system for grid environment in the nuclear field

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Matsumoto, Nobuko; Idomura, Yasuhiro; Tani, Masayuki

    2006-01-01

    An innovative scientific visualization system is needed to visualize, in an integrated manner, the large amounts of data generated at distributed remote locations as a result of large-scale numerical simulations in a grid environment. One of the important functions of such a visualization system is parallel visualization, which allows data to be visualized using multiple CPUs of a supercomputer. The other is distributed visualization, which allows visualization processes to be executed on a local client computer and remote computers. We have developed a toolkit including these functions in cooperation with the commercial visualization software AVS/Express, called the Parallel Support Toolkit (PST). PST can execute visualization processes with three kinds of parallelism (data parallelism, task parallelism and pipeline parallelism) using local and remote computers. We have evaluated PST on a large amount of data generated by a nuclear fusion simulation. Here, two supercomputers, Altix3700Bx2 and Prism, installed at JAEA are used. From the evaluation, it can be seen that PST has the potential to efficiently visualize large amounts of data in a grid environment. (author)

  16. A Capacitated Location-Allocation Model for Flood Disaster Service Operations with Border Crossing Passages and Probabilistic Demand Locations

    DEFF Research Database (Denmark)

    Mirzapour, S. A.; Wong, K. Y.; Govindan, K.

    2013-01-01

    Potential consequences of flood disasters, including severe loss of life and property, induce emergency managers to find the appropriate locations of relief rooms to evacuate people from the origin points to a safe place in order to lessen the possible impact of flood disasters. In this research, a p-center location problem is considered in order to determine the locations of some relief rooms in a city and their corresponding allocation clusters. This study presents a mixed integer nonlinear programming model of a capacitated facility location-allocation problem which simultaneously considers the probabilistic distribution of demand locations and a fixed line barrier in a region. The proposed model aims at minimizing the maximum expected weighted distance from the relief rooms to all the demand regions in order to decrease the evacuation time of people from the affected areas before flood occurrence...
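
    For orientation, the deterministic core of a p-center model of this kind can be sketched as follows; this is a generic formulation only, omitting the paper's capacities, probabilistic demand locations and line-barrier distance corrections:

        \min\; D
        \text{s.t.}\quad \sum_{j} x_{ij} = 1 \;\; \forall i, \qquad x_{ij} \le y_j \;\; \forall i,j, \qquad \sum_{j} y_j = p,
        \qquad D \ge w_i \sum_{j} d_{ij} x_{ij} \;\; \forall i, \qquad x_{ij},\, y_j \in \{0,1\},

    where y_j selects relief-room sites, x_{ij} assigns demand region i to a site, w_i weights the demand, d_{ij} is the travel distance, and the objective minimizes the maximum weighted distance D.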

  17. The Effect of Visual, Spatial and Temporal Manipulations on Embodiment and Action

    Science.gov (United States)

    Ratcliffe, Natasha; Newport, Roger

    2017-01-01

    The feeling of owning and controlling the body relies on the integration and interpretation of sensory input from multiple sources with respect to existing representations of the bodily self. Illusion paradigms involving multisensory manipulations have demonstrated that while the senses of ownership and agency are strongly related, these two components of bodily experience may be dissociable and differentially affected by alterations to sensory input. Importantly, however, much of the current literature has focused on the application of sensory manipulations to external objects or virtual representations of the self that are visually incongruent with the viewer’s own body and which are not part of the existing body representation. The current experiment used MIRAGE-mediated reality to investigate how manipulating the visual, spatial and temporal properties of the participant’s own hand (as opposed to a fake/virtual limb) affected embodiment and action. Participants viewed two representations of their right hand inside a MIRAGE multisensory illusions box with opposing visual (normal or grossly distorted), temporal (synchronous or asynchronous) and spatial (precise real location or false location) manipulations applied to each hand. Subjective experiences of ownership and agency towards each hand were measured alongside an objective measure of perceived hand location using a pointing task. The subjective sense of agency was always anchored to the synchronous hand, regardless of physical appearance and location. Subjective ownership also moved with the synchronous hand, except when both the location and appearance of the synchronous limb were incongruent with that of the real limb. Objective pointing measures displayed a similar pattern, however movement synchrony was not sufficient to drive a complete shift in perceived hand location, indicating a greater reliance on the spatial location of the real hand. The results suggest that while the congruence of self

  18. The Effect of Visual, Spatial and Temporal Manipulations on Embodiment and Action

    Directory of Open Access Journals (Sweden)

    Natasha Ratcliffe

    2017-05-01

    Full Text Available The feeling of owning and controlling the body relies on the integration and interpretation of sensory input from multiple sources with respect to existing representations of the bodily self. Illusion paradigms involving multisensory manipulations have demonstrated that while the senses of ownership and agency are strongly related, these two components of bodily experience may be dissociable and differentially affected by alterations to sensory input. Importantly, however, much of the current literature has focused on the application of sensory manipulations to external objects or virtual representations of the self that are visually incongruent with the viewer's own body and which are not part of the existing body representation. The current experiment used MIRAGE-mediated reality to investigate how manipulating the visual, spatial and temporal properties of the participant's own hand (as opposed to a fake/virtual limb) affected embodiment and action. Participants viewed two representations of their right hand inside a MIRAGE multisensory illusions box with opposing visual (normal or grossly distorted), temporal (synchronous or asynchronous) and spatial (precise real location or false location) manipulations applied to each hand. Subjective experiences of ownership and agency towards each hand were measured alongside an objective measure of perceived hand location using a pointing task. The subjective sense of agency was always anchored to the synchronous hand, regardless of physical appearance and location. Subjective ownership also moved with the synchronous hand, except when both the location and appearance of the synchronous limb were incongruent with that of the real limb. Objective pointing measures displayed a similar pattern, however movement synchrony was not sufficient to drive a complete shift in perceived hand location, indicating a greater reliance on the spatial location of the real hand. The results suggest that while the

  19. Visual Perceptual Learning and its Specificity and Transfer: A New Perspective

    Directory of Open Access Journals (Sweden)

    Cong Yu

    2011-05-01

    Full Text Available Visual perceptual learning is known to be location and orientation specific, and is thus assumed to reflect the neuronal plasticity in the early visual cortex. However, in recent studies we created “Double training” and “TPE” procedures to demonstrate that these “fundamental” specificities of perceptual learning are in some sense artifacts and that learning can completely transfer to a new location or orientation. We proposed a rule-based learning theory to reinterpret perceptual learning and its specificity and transfer: A high-level decision unit learns the rules of performing a visual task through training. However, the learned rules cannot be applied to a new location or orientation automatically because the decision unit cannot functionally connect to new visual inputs with sufficient strength because these inputs are unattended or even suppressed during training. It is double training and TPE training that reactivate these new inputs, so that the functional connections can be strengthened to enable rule application and learning transfer. Currently we are investigating the properties of perceptual learning free from the bogus specificities, and the results provide some preliminary but very interesting insights into how training reshapes the functional connections between the high-level decision units and sensory inputs in the brain.

  20. Modulation of visually evoked postural responses by contextual visual, haptic and auditory information: a 'virtual reality check'.

    Science.gov (United States)

    Meyer, Georg F; Shao, Fei; White, Mark D; Hopkins, Carl; Robotham, Antony J

    2013-01-01

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR.

  1. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    Full Text Available This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds, irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

  2. Location, location, location: Extracting location value from house prices

    OpenAIRE

    Kolbe, Jens; Schulz, Rainer; Wersing, Martin; Werwatz, Axel

    2012-01-01

    The price for a single-family house depends both on the characteristics of the building and on its location. We propose a novel semiparametric method to extract location values from house prices. After splitting house prices into building and land components, location values are estimated with adaptive weight smoothing. The adaptive estimator requires neither strong smoothness assumptions nor local symmetry. We apply the method to house transactions from Berlin, Germany. The estimated surface...
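
    The abstract leaves the functional form implicit; a minimal additive sketch of the kind of split it describes (notation assumed here, not taken from the paper) is

        \log P_i = X_i^{\top}\beta + m(s_i) + \varepsilon_i,

    where X_i collects building characteristics, s_i is the lot's coordinates, and m(\cdot) is the location-value surface, estimated nonparametrically, here by adaptive weight smoothing, rather than under global smoothness or local symmetry assumptions.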

  3. Reward processing in the value-driven attention network: reward signals tracking cue identity and location.

    Science.gov (United States)

    Anderson, Brian A

    2017-03-01

    Through associative reward learning, arbitrary cues acquire the ability to automatically capture visual attention. Previous studies have examined the neural correlates of value-driven attentional orienting, revealing elevated activity within a network of brain regions encompassing the visual corticostriatal loop [caudate tail, lateral occipital complex (LOC) and early visual cortex] and intraparietal sulcus (IPS). Such attentional priority signals raise a broader question concerning how visual signals are combined with reward signals during learning to create a representation that is sensitive to the confluence of the two. This study examines reward signals during the cued reward training phase commonly used to generate value-driven attentional biases. High, compared with low, reward feedback preferentially activated the value-driven attention network, in addition to regions typically implicated in reward processing. Further examination of these reward signals within the visual system revealed information about the identity of the preceding cue in the caudate tail and LOC, and information about the location of the preceding cue in IPS, while early visual cortex represented both location and identity. The results reveal teaching signals within the value-driven attention network during associative reward learning, and further suggest functional specialization within different regions of this network during the acquisition of an integrated representation of stimulus value. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  4. 3D Visualization of Global Ocean Circulation

    Science.gov (United States)

    Nelson, V. G.; Sharma, R.; Zhang, E.; Schmittner, A.; Jenny, B.

    2015-12-01

    Advanced 3D visualization techniques are seldom used to explore the dynamic behavior of ocean circulation. Streamlines are an effective method for visualization of flow, and they can be designed to clearly show the dynamic behavior of a fluidic system. We employ vector field editing and extraction software to examine the topology of velocity vector fields generated by a 3D global circulation model coupled to a one-layer atmosphere model simulating preindustrial and last glacial maximum (LGM) conditions. This results in a streamline-based visualization along multiple density isosurfaces on which we visualize points of vertical exchange and the distribution of properties such as temperature and biogeochemical tracers. Previous work involving this model examined the change in the energetics driving overturning circulation and mixing between simulations of LGM and preindustrial conditions. This visualization elucidates the relationship between locations of vertical exchange and mixing, as well as demonstrates the effects of circulation and mixing on the distribution of tracers such as carbon isotopes.

  5. Hand movement deviations in a visual search task with cross modal cuing

    Directory of Open Access Journals (Sweden)

    Hürol Aslan

    2007-01-01

    Full Text Available The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants’ reaction times, we paid special attention to tracking the hand movements toward the target. According to the results, the auditory stimuli unassociated with the target locations slightly –but significantly- increased the deviation of the hand movement from the path leading to the target location. The increase in the deviation depended on the degree of association between auditory stimuli and target locations, albeit not on the level of detail in the instructions about the task.

  6. A laser-induced-fluorescence visualization study of transverse, sonic fuel injection in a nonreacting supersonic combustor

    Science.gov (United States)

    Mcdaniel, J. C.; Graves, J., Jr.

    1986-01-01

    The present paper reports work which has been conducted in the first phase of a research program which is to provide a data base of spatially-resolved measurements in nonreacting supersonic combustors. In the measurements, a nonintrusive diagnostic technique based on the utilization of laser-induced fluorescence (LIF) is employed. The reported work had the objective to conduct LIF visualization studies of the injection of a simulated fuel into a Mach 2.07 airstream for comparison with corresponding numerical calculations. Attention is given to injection from a single orifice into a constant-area duct, injection from a single orifice behind a rearward-facing step, and injection from staged orifices behind a rearward-facing step.

  7. Positive mood broadens visual attention to positive stimuli.

    Science.gov (United States)

    Wadlinger, Heather A; Isaacowitz, Derek M

    2006-03-01

    In an attempt to investigate the impact of positive emotions on visual attention within the context of Fredrickson's (1998) broaden-and-build model, eye tracking was used in two studies to measure visual attentional preferences of college students (n=58, n=26) to emotional pictures. Half of each sample experienced induced positive mood immediately before viewing slides of three similarly-valenced images, in varying central-peripheral arrays. Attentional breadth was determined by measuring the percentage viewing time to peripheral images as well as by the number of visual saccades participants made per slide. Consistent with Fredrickson's theory, the first study showed that individuals induced into positive mood fixated more on peripheral stimuli than did control participants; however, this only held true for highly-valenced positive stimuli. Participants under induced positive mood also made more frequent saccades for slides of neutral and positive valence. A second study showed that these effects were not simply due to differences in emotional arousal between stimuli. Selective attentional broadening to positive stimuli may act both to facilitate later building of resources as well as to maintain current positive affective states.

  8. Accuracy of working length determination with root ZX apex locator ...

    African Journals Online (AJOL)

    The purpose of this study was to clinically compare working length (WL) determination with the root ZX apex locator and radiography, and then compare them with the direct visualization method ex vivo. A total of 75 maxillary central and lateral incisors were selected. Working length determination was carried out using radiographic ...

  9. Anosognosia for obvious visual field defects in stroke patients.

    Science.gov (United States)

    Baier, Bernhard; Geber, Christian; Müller-Forell, Wiebke; Müller, Notger; Dieterich, Marianne; Karnath, Hans-Otto

    2015-01-01

    Patients with anosognosia for visual field defect (AVFD) fail to consciously recognize their visual field defect. It is still unclear whether specific neural correlates are associated with AVFD. We studied AVFD in 54 patients with acute stroke and a visual field defect. Nineteen percent of this unselected sample showed AVFD. Using modern voxelwise lesion-behaviour mapping techniques, we found an association between AVFD and parts of the lingual gyrus, the cuneus, as well as the posterior cingulate and corpus callosum. Damage to these regions appears to induce unawareness of visual field defects and thus may play a significant role in conscious visual perception.

  10. Visual Working Memory Capacity for Emotional Facial Expressions

    Directory of Open Access Journals (Sweden)

    Domagoj Švegar

    2011-12-01

    Full Text Available The capacity of visual working memory is limited to no more than four items. At the same time, it is limited not only by the number of objects, but also by the total amount of information that needs to be memorized, and the relation between the information load per object and the number of objects that can be stored in visual working memory is inverse. The objective of the present experiment was to compute visual working memory capacity for emotional facial expressions, and in order to do so, change detection tasks were applied. Pictures of human emotional facial expressions were presented to 24 participants in 1008 experimental trials, each of which began with the presentation of a fixation mark, followed by a short simultaneous presentation of six emotional facial expressions. After that, a blank screen was presented, and after this inter-stimulus interval, one facial expression was presented at one of the previously occupied locations. Participants had to answer whether the facial expression presented at test was different from or identical to the expression presented at that same location before the retention interval. Memory capacity was estimated from the accuracy of responding, using the formula constructed by Pashler (1988), adopted from signal detection theory. It was found that visual working memory capacity for emotional facial expressions equals 3.07, which is high compared to the capacity for facial identities and other visual stimuli. The obtained results were explained within the framework of evolutionary psychology.
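
    For reference, assuming the standard form of Pashler's (1988) correction (the abstract itself does not reproduce the formula), capacity is estimated as

        K = N \cdot \frac{H - F}{1 - F},

    where N is the number of items in the memory display (six here), H the hit rate on change trials and F the false-alarm rate on no-change trials; the reported capacity of 3.07 corresponds to this estimate computed from the observed hit and false-alarm rates.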

  11. Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras.

    Science.gov (United States)

    Wu, Dewen; Chen, Ruizhi; Chen, Liang

    2017-11-16

    Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications, because people spend 80% of their time indoors. Humans can locate themselves relative to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points to an object? In this paper, a visual positioning solution was developed based on a single image captured from a smartphone camera pointing at a well-defined object. The smartphone camera thus simulates the process by which human eyes locate themselves relative to a well-defined object. Extensive experiments were conducted with five types of smartphones in three different indoor settings: a meeting room, a library, and a reading room. Experimental results show that the average positioning accuracy of the solution based on the five smartphone cameras is 30.6 cm, while that of the human-observed solution, with 300 samples from 10 different people, is 73.1 cm.

  12. Physical and Visual Accessibilities in Intensive Care Units: A Comparative Study of Open-Plan and Racetrack Units.

    Science.gov (United States)

    Rashid, Mahbub; Khan, Nayma; Jones, Belinda

    2016-01-01

    This study compared physical and visual accessibilities and their associations with staff perception and interaction behaviors in 2 intensive care units (ICUs) with open-plan and racetrack layouts. For the study, physical and visual accessibilities were measured using the spatial analysis techniques of Space Syntax. Data on staff perception were collected from 81 clinicians using a questionnaire survey. The locations of 2233 interactions, and the location and length of another 339 interactions in these units, were collected using systematic field observation techniques. According to the study, physical and visual accessibilities were different in the 2 ICUs, and clinicians' primary workspaces were physically and visually more accessible in the open-plan ICU. Physical and visual accessibilities affected how well clinicians knew their peers and where their peers were located in these units. Physical and visual accessibilities also affected clinicians' perception of interaction and communication and of teamwork and collaboration in these units. Additionally, physical and visual accessibilities showed significant positive associations with interaction behaviors in these units, with the open-plan ICU showing stronger associations. However, physical accessibilities were less important than visual accessibilities in relation to interaction behaviors in these ICUs. The implications of these findings for ICU design are discussed.

  13. Making the invisible visible: verbal but not visual cues enhance visual detection.

    Science.gov (United States)

    Lupyan, Gary; Spivey, Michael J

    2010-07-07

    Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
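
    The sensitivity measure d' reported here is the standard signal-detection index: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch with hypothetical hit and false-alarm rates (the abstract reports only that d' was higher after auditory-verbal cues than after visual previews):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Perceptual sensitivity d' = z(H) - z(F), from signal detection theory."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates illustrating a cuing benefit.
print(round(d_prime(0.75, 0.25), 2))  # ~1.35 (verbally cued)
print(round(d_prime(0.65, 0.30), 2))  # ~0.91 (uncued)
```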

  14. Making the invisible visible: verbal but not visual cues enhance visual detection.

    Directory of Open Access Journals (Sweden)

    Gary Lupyan

    Full Text Available BACKGROUND: Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. METHODOLOGY/PRINCIPAL FINDINGS: Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. CONCLUSIONS/SIGNIFICANCE: Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.

  15. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    Science.gov (United States)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education, learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software, Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  16. X-ray visualization of a mosquito's head

    International Nuclear Information System (INIS)

    Kikuchi, Kenji; Mochizuki, Osamu

    2007-01-01

    A technology to visualize the internal anatomy of living animals has been developed for medical diagnostics and biology using synchrotron X-rays produced at the Photon Factory. The dynamic motion of organs, muscles, and respiratory structures of small insects is difficult to observe with conventional X-ray imaging because of the lack of spatial and temporal resolution. We visualized the motions of pumps located in a mosquito's head with a phase-contrast X-ray imaging technique using synchrotron X-rays. Isovue370 was fed with a 10% dilute glucose solution to visualize the flow. We found that the phase difference between the motions of the oral cavity pump and the pharynx pump was 180 degrees. (author)

  17. The Role of Local and Distal Landmarks in the Development of Object Location Memory

    Science.gov (United States)

    Bullens, Jessie; Klugkist, Irene; Postma, Albert

    2011-01-01

    To locate objects in the environment, animals and humans use visual and nonvisual information. We were interested in children's ability to relocate an object on the basis of self-motion and local and distal color cues for orientation. Five- to 9-year-old children were tested on an object location memory task in which, between presentation and…

  18. Saturation in Phosphene Size with Increasing Current Levels Delivered to Human Visual Cortex.

    Science.gov (United States)

    Bosking, William H; Sun, Ping; Ozker, Muge; Pei, Xiaomei; Foster, Brett L; Beauchamp, Michael S; Yoshor, Daniel

    2017-07-26

    Electrically stimulating early visual cortex results in a visual percept known as a phosphene. Although phosphenes can be evoked by a wide range of electrode sizes and current amplitudes, they are invariably described as small. To better understand this observation, we electrically stimulated 93 electrodes implanted in the visual cortex of 13 human subjects who reported phosphene size while stimulation current was varied. Phosphene size increased as the stimulation current was initially raised above threshold, but then rapidly reached saturation. Phosphene size also depended on the location of the stimulated site, with size increasing with distance from the foveal representation. We developed a model relating phosphene size to the amount of activated cortex and its location within the retinotopic map. First, a sigmoidal curve was used to predict the amount of activated cortex at a given current. Second, the amount of active cortex was converted to degrees of visual angle by multiplying by the inverse cortical magnification factor for that retinotopic location. This simple model accurately predicted phosphene size for a broad range of stimulation currents and cortical locations. The unexpected saturation in phosphene sizes suggests that the functional architecture of cerebral cortex may impose fundamental restrictions on the spread of artificially evoked activity and this may be an important consideration in the design of cortical prosthetic devices. SIGNIFICANCE STATEMENT Understanding the neural basis for phosphenes, the visual percepts created by electrical stimulation of visual cortex, is fundamental to the development of a visual cortical prosthetic. Our experiments in human subjects implanted with electrodes over visual cortex show that it is the activity of a large population of cells spread out across several millimeters of tissue that supports the perception of a phosphene. In addition, we describe an important feature of the production of phosphenes by
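
    The two-stage model described above (a sigmoidal current-to-cortical-spread function, followed by division by the cortical magnification at the stimulated site) can be sketched as follows. The sigmoid parameters and the use of the Horton and Hoyt (1991) magnification approximation are illustrative assumptions, not the values fitted in the study.

```python
import math

def activated_cortex_mm(current_ma, max_spread_mm=3.0, half_max_ma=1.5, slope=3.0):
    """Sigmoidal growth of activated cortex with stimulation current.
    All parameters are illustrative, not the fitted values from the study."""
    return max_spread_mm / (1.0 + math.exp(-slope * (current_ma - half_max_ma)))

def cortical_magnification_mm_per_deg(eccentricity_deg):
    """Horton & Hoyt (1991) approximation for human V1: M = 17.3 / (E + 0.75)."""
    return 17.3 / (eccentricity_deg + 0.75)

def phosphene_size_deg(current_ma, eccentricity_deg):
    """Predicted phosphene size: extent of activated cortex divided by magnification."""
    return activated_cortex_mm(current_ma) / cortical_magnification_mm_per_deg(eccentricity_deg)

# Size saturates with current and grows with eccentricity.
for ecc in (2.0, 10.0):
    print([round(phosphene_size_deg(i, ecc), 2) for i in (0.5, 1.0, 2.0, 4.0)])
```

    With these toy parameters the predicted size saturates at higher currents and is larger at greater eccentricity, mirroring the qualitative pattern described in the abstract.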

  19. Cerebral versus Ocular Visual Impairment: The Impact on Developmental Neuroplasticity.

    Science.gov (United States)

    Martín, Maria B C; Santos-Lozano, Alejandro; Martín-Hernández, Juan; López-Miguel, Alberto; Maldonado, Miguel; Baladrón, Carlos; Bauer, Corinna M; Merabet, Lotfi B

    2016-01-01

    Cortical/cerebral visual impairment (CVI) is clinically defined as significant visual dysfunction caused by injury to visual pathways and structures occurring during early perinatal development. Depending on the location and extent of damage, children with CVI often present with a myriad of visual deficits including decreased visual acuity and impaired visual field function. Most striking, however, are impairments in visual processing and attention which have a significant impact on learning, development, and independence. Within the educational arena, current evidence suggests that strategies designed for individuals with ocular visual impairment are not effective in the case of CVI. We propose that this variance may be related to differences in compensatory neuroplasticity related to the type of visual impairment, as well as underlying alterations in brain structural connectivity. We discuss the etiology and nature of visual impairments related to CVI, and how advanced neuroimaging techniques (i.e., diffusion-based imaging) may help uncover differences between ocular and cerebral causes of visual dysfunction. Revealing these differences may help in developing future strategies for the education and rehabilitation of individuals living with visual impairment.

  20. Long-Term Visual Training Increases Visual Acuity and Long-Term Monocular Deprivation Promotes Ocular Dominance Plasticity in Adult Standard Cage-Raised Mice.

    Science.gov (United States)

    Hosang, Leon; Yusifov, Rashad; Löwel, Siegrid

    2018-01-01

    For routine behavioral tasks, mice predominantly rely on olfactory cues and tactile information. In contrast, their visual capabilities appear rather restricted, raising the question whether they can improve if vision gets more behaviorally relevant. We therefore performed long-term training using the visual water task (VWT): adult standard cage (SC)-raised mice were trained to swim toward a rewarded grating stimulus so that using visual information avoided excessive swimming toward nonrewarded stimuli. Indeed, and in contrast to old mice raised in a generally enriched environment (Greifzu et al., 2016), long-term VWT training increased visual acuity (VA) on average by more than 30% to 0.82 cycles per degree (cyc/deg). In an individual animal, VA even increased to 1.49 cyc/deg, i.e., beyond the rat range of VAs. Since visual experience enhances the spatial frequency threshold of the optomotor (OPT) reflex of the open eye after monocular deprivation (MD), we also quantified monocular vision after VWT training. Monocular VA did not increase reliably, and eye reopening did not initiate a decline to pre-MD values as observed by optomotry; VA values rather increased by continued VWT training. Thus, optomotry and VWT measure different parameters of mouse spatial vision. Finally, we tested whether long-term MD induced ocular dominance (OD) plasticity in the visual cortex of adult [postnatal day (P)162-P182] SC-raised mice. This was indeed the case: 40-50 days of MD induced OD shifts toward the open eye in both VWT-trained and, surprisingly, also in age-matched mice without VWT training. These data indicate that (1) long-term VWT training increases adult mouse VA, and (2) long-term MD induces OD shifts also in adult SC-raised mice.

  1. Orienting Attention within Visual Short-Term Memory: Development and Mechanisms

    Science.gov (United States)

    Shimi, Andria; Nobre, Anna C.; Astle, Duncan; Scerif, Gaia

    2014-01-01

    How does developing attentional control operate within visual short-term memory (VSTM)? Seven-year-olds, 11-year-olds, and adults (total n = 205) were asked to report whether probe items were part of preceding visual arrays. In Experiment 1, central or peripheral cues oriented attention to the location of to-be-probed items either prior to…

  2. Sweep visually evoked potentials and visual findings in children with West syndrome.

    Science.gov (United States)

    de Freitas Dotto, Patrícia; Cavascan, Nívea Nunes; Berezovsky, Adriana; Sacai, Paula Yuri; Rocha, Daniel Martins; Pereira, Josenilson Martins; Salomão, Solange Rios

    2014-03-01

    West syndrome (WS) is a type of early childhood epilepsy characterized by progressive deterioration of neurological development, including vision. The aim was to demonstrate the clinical importance of grating visual acuity (GVA) threshold measurement by the sweep visually evoked potential technique (sweep-VEP) as a reliable tool for evaluating visual cortex status in WS children. This is a retrospective study of the best-corrected binocular GVA and ophthalmological features of WS children referred to the Laboratory of Clinical Electrophysiology of Vision of UNIFESP from 1998 to 2012 (Committee on Ethics in Research of UNIFESP n° 0349/08). The GVA deficit was calculated as the difference between each patient's binocular GVA score (logMAR units) and the median of the age norms from our own laboratory, and was classified as mild (0.1-0.39 logMAR), moderate (0.40-0.80 logMAR) or severe (>0.81 logMAR). Associated ophthalmological features were also described. Data from 30 WS children (age from 6 to 108 months, median = 14.5 months, mean ± SD = 22.0 ± 22.1 months; 19 male) were analyzed. The majority presented a severe GVA deficit (0.15-1.44 logMAR; mean ± SD = 0.82 ± 0.32 logMAR; median = 0.82 logMAR), poor visual behavior, a high prevalence of strabismus and great variability in ocular positioning. The GVA deficit did not vary according to gender (P = .8022), WS type (P = .908), birth age (P = .2881), perinatal oxygenation (P = .7692), visual behavior (P = .8789), ocular motility (P = .1821), nystagmus (P = .2868), risk of drug-induced retinopathy (P = .4632) or participation in early visual stimulation therapy (P = .9010). The sweep-VEP technique is a reliable tool to classify visual system impairment in WS children, in agreement with the poor visual behavior they exhibit. Copyright © 2013 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.
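
    The deficit computation and severity bands described above can be sketched as below. The sign convention (patient score minus age-norm median, so that a positive value indicates worse-than-norm acuity), the handling of the 0.80/0.81 boundary, and the "within norms" label are assumptions, not details given in the abstract.

```python
def gva_deficit(patient_logmar, age_norm_median_logmar):
    """GVA deficit: patient's binocular grating acuity minus the age-norm median (logMAR).
    Positive values mean worse-than-norm acuity (assumed sign convention)."""
    return patient_logmar - age_norm_median_logmar

def classify_deficit(deficit_logmar):
    """Severity bands from the study (logMAR); boundary handling is an assumption."""
    if deficit_logmar < 0.10:
        return "within norms"
    if deficit_logmar < 0.40:
        return "mild"
    if deficit_logmar <= 0.80:
        return "moderate"
    return "severe"

# Hypothetical patient and norm values.
print(classify_deficit(gva_deficit(patient_logmar=1.10, age_norm_median_logmar=0.20)))  # severe
```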

  3. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  4. Neural decoding of visual imagery during sleep.

    Science.gov (United States)

    Horikawa, T; Tamaki, M; Miyawaki, Y; Kamitani, Y

    2013-05-03

    Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases. Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.
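
    The core idea, a decoder trained on perception and then applied to sleep-onset activity, can be illustrated with a generic linear classifier. Everything below (synthetic data, logistic regression in place of the authors' actual decoding models and their verbal-report labelling pipeline) is an illustrative assumption.

```python
# Train a linear classifier on stimulus-induced voxel patterns, then apply it
# to patterns recorded at sleep onset. Array names and shapes are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500
X_awake = rng.normal(size=(n_trials, n_voxels))   # stimulus-induced activity
y_awake = rng.integers(0, 2, size=n_trials)       # perceived content labels
X_sleep = rng.normal(size=(20, n_voxels))         # sleep-onset activity

decoder = LogisticRegression(max_iter=1000).fit(X_awake, y_awake)
predicted_dream_content = decoder.predict(X_sleep)
print(predicted_dream_content)
```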

  5. Large-Scale Brain Networks Supporting Divided Attention across Spatial Locations and Sensory Modalities.

    Science.gov (United States)

    Santangelo, Valerio

    2018-01-01

    Higher-order cognitive processes were shown to rely on the interplay between large-scale neural networks. However, brain networks involved with the capability to split attentional resource over multiple spatial locations and multiple stimuli or sensory modalities have been largely unexplored to date. Here I re-analyzed data from Santangelo et al. (2010) to explore the causal interactions between large-scale brain networks during divided attention. During fMRI scanning, participants monitored streams of visual and/or auditory stimuli in one or two spatial locations for detection of occasional targets. This design allowed comparing a condition in which participants monitored one stimulus/modality (either visual or auditory) in two spatial locations vs. a condition in which participants monitored two stimuli/modalities (both visual and auditory) in one spatial location. The analysis of the independent components (ICs) revealed that dividing attentional resources across two spatial locations necessitated a brain network involving the left ventro- and dorso-lateral prefrontal cortex plus the posterior parietal cortex, including the intraparietal sulcus (IPS) and the angular gyrus, bilaterally. The analysis of Granger causality highlighted that the activity of lateral prefrontal regions was predictive of the activity of all of the posterior parietal nodes. By contrast, dividing attention across two sensory modalities necessitated a brain network including nodes belonging to the dorsal frontoparietal network, i.e., the bilateral frontal eye-fields (FEF) and IPS, plus nodes belonging to the salience network, i.e., the anterior cingulate cortex and the left and right anterior insular cortex (aIC). The analysis of Granger causality highlighted a tight interdependence between the dorsal frontoparietal and salience nodes in trials requiring divided attention between different sensory modalities. The current findings therefore highlighted a dissociation among brain networks
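
    The Granger-causality step, testing whether the time course of one network node predicts that of another beyond the latter's own history, can be sketched as below on synthetic time series; the study applied it to the ICA-derived component time courses, and the lag order and coefficients here are arbitrary.

```python
# Minimal sketch of a pairwise Granger-causality test between two node time
# courses (e.g., a prefrontal IC and a parietal IC). Synthetic data only.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 300
prefrontal = rng.normal(size=n)
parietal = np.empty(n)
parietal[0] = rng.normal()
for t in range(1, n):  # parietal activity lags prefrontal activity
    parietal[t] = 0.6 * prefrontal[t - 1] + 0.3 * parietal[t - 1] + rng.normal(scale=0.5)

# Tests whether the second column (prefrontal) Granger-causes the first (parietal).
results = grangercausalitytests(np.column_stack([parietal, prefrontal]), maxlag=2)
```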

  6. Large-Scale Brain Networks Supporting Divided Attention across Spatial Locations and Sensory Modalities

    Directory of Open Access Journals (Sweden)

    Valerio Santangelo

    2018-02-01

    Full Text Available Higher-order cognitive processes were shown to rely on the interplay between large-scale neural networks. However, brain networks involved with the capability to split attentional resource over multiple spatial locations and multiple stimuli or sensory modalities have been largely unexplored to date. Here I re-analyzed data from Santangelo et al. (2010) to explore the causal interactions between large-scale brain networks during divided attention. During fMRI scanning, participants monitored streams of visual and/or auditory stimuli in one or two spatial locations for detection of occasional targets. This design allowed comparing a condition in which participants monitored one stimulus/modality (either visual or auditory) in two spatial locations vs. a condition in which participants monitored two stimuli/modalities (both visual and auditory) in one spatial location. The analysis of the independent components (ICs) revealed that dividing attentional resources across two spatial locations necessitated a brain network involving the left ventro- and dorso-lateral prefrontal cortex plus the posterior parietal cortex, including the intraparietal sulcus (IPS) and the angular gyrus, bilaterally. The analysis of Granger causality highlighted that the activity of lateral prefrontal regions was predictive of the activity of all of the posterior parietal nodes. By contrast, dividing attention across two sensory modalities necessitated a brain network including nodes belonging to the dorsal frontoparietal network, i.e., the bilateral frontal eye-fields (FEF) and IPS, plus nodes belonging to the salience network, i.e., the anterior cingulate cortex and the left and right anterior insular cortex (aIC). The analysis of Granger causality highlighted a tight interdependence between the dorsal frontoparietal and salience nodes in trials requiring divided attention between different sensory modalities. The current findings therefore highlighted a dissociation among

  7. Social media mining and visualization for point-of-interest recommendation

    Institute of Scientific and Technical Information of China (English)

    Ren Xingyi; Song Meina; E Haihong; Song Junde

    2017-01-01

    With the rapid growth of location-based social networks (LBSNs), point-of-interest (POI) recommendation has become an important research problem. As one of the most representative social media platforms, Twitter provides various real-life information for POI recommendation in real time. Although POI recommendation has been actively studied, tweet images have not been well utilized for this research problem. State-of-the-art visual features like convolutional neural network (CNN) features have shown significant performance gains over the traditional bag-of-visual-words in unveiling an image's semantics. Unfortunately, they have not been employed for POI recommendation from social websites. Hence, how to make the most of tweet images to improve the performance of POI recommendation and visualization remains open. In this paper, we thoroughly study the impact of tweet images on POI recommendation for different POI categories using various visual features. A novel topic model called social media Twitter-latent Dirichlet allocation (SM-TwitterLDA), which jointly models five Twitter features (i.e., text, image, location, timestamp and hashtag), is designed to discover POIs from the sheer amount of tweets. Moreover, each POI is visualized by representative images selected on three predefined criteria. Extensive experiments have been conducted on a real-life tweet dataset to verify the effectiveness of our method.
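
    As a point of reference only, the basic topic-modelling building block can be sketched with standard LDA over tweet tokens; this is not the paper's SM-TwitterLDA, which additionally models image, location, timestamp and hashtag features, and the tweets below are hypothetical.

```python
# Plain LDA over tokenized tweets, illustrating only the topic-modelling step.
from gensim import corpora, models

tweets = [
    ["coffee", "latte", "downtown", "cafe"],
    ["museum", "exhibition", "art", "downtown"],
    ["cafe", "espresso", "wifi"],
    ["art", "gallery", "museum", "opening"],
]
dictionary = corpora.Dictionary(tweets)
corpus = [dictionary.doc2bow(tokens) for tokens in tweets]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0, passes=10)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```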

  8. Drawing the Line Between Constituent Structure and Coherence Relations in Visual Narratives

    NARCIS (Netherlands)

    Cohn, Neil; Bender, Patrick

    2017-01-01

    Theories of visual narrative understanding have often focused on the changes in meaning across a sequence, like shifts in characters, spatial location, and causation, as cues for breaks in the structure of a discourse. In contrast, the theory of visual narrative grammar posits that hierarchic

  9. Self-motivated visual scanning predicts flexible navigation in a virtual environment

    Directory of Open Access Journals (Sweden)

    Elisabeth Jeannette Ploran

    2014-01-01

    Full Text Available The ability to navigate flexibly (e.g., reorienting oneself based on distal landmarks) to reach a learned target from a new position may rely on visual scanning during both initial experiences with the environment and subsequent test trials. Reliance on visual scanning during navigation harkens back to the concept of vicarious trial and error, a description of the side-to-side head movements made by rats as they explore previously traversed sections of a maze in an attempt to find a reward. In the current study, we examined if visual scanning predicted the extent to which participants would navigate to a learned location in a virtual environment defined by its position relative to distal landmarks. Our results demonstrated a significant positive relationship between the amount of visual scanning and participant accuracy in identifying the trained target location from a new starting position as long as the landmarks within the environment remain consistent with the period of original learning. Our findings indicate that active visual scanning of the environment is a deliberative attentional strategy that supports the formation of spatial representations for flexible navigation.

  10. Spatial probability aids visual stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Michael Druker

    2010-08-01

    Full Text Available We investigated whether the statistical predictability of a target's location would influence how quickly and accurately it was classified. Recent results have suggested that spatial probability can be a cue for the allocation of attention in visual search. One explanation for probability cuing is spatial repetition priming. In our two experiments we used probability distributions that were continuous across the display rather than relying on a few arbitrary screen locations. This produced fewer spatial repeats and allowed us to dissociate the effect of a high probability location from that of short-term spatial repetition. The task required participants to quickly judge the color of a single dot presented on a computer screen. In Experiment 1, targets were more probable in an off-center hotspot of high probability that gradually declined to a background rate. Targets garnered faster responses if they were near earlier target locations (priming) and if they were near the high probability hotspot (probability cuing). In Experiment 2, target locations were chosen on three concentric circles around fixation. One circle contained 80% of targets. The value of this ring distribution is that it allowed for a spatially restricted high probability zone in which sequentially repeated trials were not likely to be physically close. Participant performance was sensitive to the high-probability circle in addition to the expected effects of eccentricity and the distance to recent targets. These two experiments suggest that inhomogeneities in spatial probability can be learned and used by participants on-line and without prompting as an aid for visual stimulus discrimination and that spatial repetition priming is not a sufficient explanation for this effect. Future models of attention should consider explicitly incorporating the probabilities of target locations and features.
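
    The continuous spatial-probability manipulation in Experiment 1 can be sketched as a mixture of an off-centre Gaussian hotspot and a uniform background; the screen size, hotspot position, spread, and mixture weight below are illustrative, not the values used in the experiment.

```python
# Sample target locations from a hotspot-plus-background mixture distribution.
import numpy as np

rng = np.random.default_rng(2)
screen_w, screen_h = 1024, 768
hotspot = np.array([700.0, 300.0])

def sample_target_location(p_hotspot=0.7, sigma=60.0):
    if rng.random() < p_hotspot:
        x, y = rng.normal(hotspot, sigma)          # near the high-probability hotspot
    else:
        x, y = rng.uniform([0, 0], [screen_w, screen_h])  # uniform background rate
    return float(np.clip(x, 0, screen_w)), float(np.clip(y, 0, screen_h))

print([sample_target_location() for _ in range(3)])
```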

  11. Performance on selected visual and auditory subtests of the Wechsler Memory Scale-Fourth Edition during laboratory-induced pain.

    Science.gov (United States)

    Etherton, Joseph L; Tapscott, Brian E

    2015-01-01

    Although chronic pain patients commonly report problems with concentration and memory, recent research indicates that induced pain alone causes little or no impairment on several Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) subtests, suggesting that cognitive complaints in chronic pain may be attributable to factors other than pain. The current studies examined potential effects of induced pain on Wechsler Memory Scale-Fourth Edition (WMS-IV) visual working memory index (VWM) subtests (Experiment 1, n = 32) and on the immediate portions of WMS-IV auditory memory (IAM) subtests (Experiment 2, n = 55). In both studies, participants were administered one of two subtests (Symbol Span or Spatial Addition for Experiment 1; Logical Memory or Verbal Paired Associates for Experiment 2) normally and were then administered the alternate subtest while experiencing either cold pressor pain induction or a nonpainful control condition. Results indicate that induced pain in nonclinical volunteers did not impair performance on either VWM or IAM performance, suggesting that pain alone does not account for complaints or deficits in these domains in chronic pain patients. Nonpainful variables such as sleep deprivation or emotional disturbance may be responsible for reported cognitive complaints in chronic pain patients.

  12. Spatial working memory for locations specified by vision and audition: testing the amodality hypothesis.

    Science.gov (United States)

    Loomis, Jack M; Klatzky, Roberta L; McHugh, Brendan; Giudice, Nicholas A

    2012-08-01

    Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.

  13. Multisensory stimuli improve relative localisation judgments compared to unisensory auditory or visual stimuli

    OpenAIRE

    Bizley, Jennifer; Wood, Katherine; Freeman, Laura

    2018-01-01

    Observers performed a relative localisation task in which they reported whether the second of two sequentially presented signals occurred to the left or right of the first. Stimuli were detectability-matched auditory, visual, or auditory-visual signals and the goal was to compare changes in performance with eccentricity across modalities. Visual performance was superior to auditory at the midline, but inferior in the periphery, while auditory-visual performance exceeded both at all locations....

  14. Modulation of visually evoked postural responses by contextual visual, haptic and auditory information: a 'virtual reality check'.

    Directory of Open Access Journals (Sweden)

    Georg F Meyer

    Full Text Available Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR.

  15. Modulation of Visually Evoked Postural Responses by Contextual Visual, Haptic and Auditory Information: A ‘Virtual Reality Check’

    Science.gov (United States)

    Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.

    2013-01-01

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760

  16. Discrete capacity limits and neuroanatomical correlates of visual short-term memory for objects and spatial locations.

    Science.gov (United States)

    Konstantinou, Nikos; Constantinidou, Fofi; Kanai, Ryota

    2017-02-01

    Working memory is responsible for keeping information in mind when it is no longer in view, linking perception with higher cognitive functions. Despite such a crucial role, short-term maintenance of visual information is severely limited. Research suggests that capacity limits in visual short-term memory (VSTM) are correlated with sustained activity in distinct brain areas. Here, we investigated whether variability in the structure of the brain is reflected in individual differences of behavioral capacity estimates for spatial and object VSTM. Behavioral capacity estimates were calculated separately for spatial and object information using a novel adaptive staircase procedure and were found to be unrelated, supporting domain-specific VSTM capacity limits. Voxel-based morphometry (VBM) analyses revealed dissociable neuroanatomical correlates of spatial versus object VSTM. Interindividual variability in spatial VSTM was reflected in the gray matter density of the inferior parietal lobule. In contrast, object VSTM was reflected in the gray matter density of the left insula. These dissociable findings highlight the importance of considering domain-specific estimates of VSTM capacity and point to the crucial brain regions that limit VSTM capacity for different types of visual information. Hum Brain Mapp 38:767-778, 2017. © 2016 Wiley Periodicals, Inc.
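
    The abstract does not describe the "novel adaptive staircase procedure", so the sketch below shows only the general idea of an adaptive staircase, here a conventional 2-down/1-up rule on memory set size; the rule, step size, and limits are assumptions, not the authors' procedure.

```python
# Generic 2-down/1-up adaptive staircase on memory set size (illustrative only).
def staircase(responses, start_size=3, min_size=1, max_size=12):
    """responses: iterable of booleans (True = correct). Returns visited set sizes."""
    size, correct_in_a_row, track = start_size, 0, [start_size]
    for correct in responses:
        if correct:
            correct_in_a_row += 1
            if correct_in_a_row == 2:                 # two correct -> harder (larger array)
                size, correct_in_a_row = min(size + 1, max_size), 0
        else:
            size, correct_in_a_row = max(size - 1, min_size), 0  # one error -> easier
        track.append(size)
    return track

print(staircase([True, True, True, False, True, True, False, True]))
```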

  17. Finding Hidden Location Patterns of Two Competitive Supermarkets in Thailand

    Science.gov (United States)

    Khumsri, Jinattaporn; Fujihara, Akihiro

    There are two famous supermarket chains in Thailand: Big C and Lotus. They are highly competitive rivals that hold the largest market share, relying on extensive promotions and on gathering convenience services such as banking and restaurants under one roof. In recent years they have gradually expanded their stores, and they take a similar strategy in deciding where to locate a store. It is important for them to consider store allocation in order to obtain new customers efficiently. To study this, we gather the geographical locations of these supermarkets from Twitter using the Twitter API, collecting tweets containing these supermarket names together with geotags over seven months. To extract hidden location patterns from the gathered data, we introduce the location motif, a directed subgraph in which each node is linked to its shortest-distance opponent node. We examine every possible configuration of location motifs with a small number of nodes and find that the number of configurations grows exponentially. We also visualize the location motifs generated from the gathered data on the map of Thailand and count the frequency of the observed motifs. As a result, we find that even though the possible location motifs increase exponentially as the number of nodes grows, only a limited set of motifs is actually observed. Using location motifs, we find evidence of biased store allocation in practice.
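
    The location-motif construction, a directed graph in which every store points to the nearest store of the competing chain, can be sketched as follows; the store names and coordinates are hypothetical, whereas the paper derives them from geotagged tweets.

```python
# Build directed edges from each store to its nearest competitor store.
import math

big_c = {"B1": (100.61, 13.73), "B2": (100.55, 13.80)}
lotus = {"L1": (100.60, 13.74), "L2": (100.50, 13.70)}

def nearest_opponent(stores, opponents):
    """For each store, return a directed edge to its closest opponent store."""
    edges = {}
    for name, loc in stores.items():
        edges[name] = min(opponents, key=lambda o: math.dist(loc, opponents[o]))
    return edges

motif_edges = {**nearest_opponent(big_c, lotus), **nearest_opponent(lotus, big_c)}
print(motif_edges)  # each store mapped to its nearest competitor
```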

  18. Linking crowding, visual span, and reading.

    Science.gov (United States)

    He, Yingchen; Legge, Gordon E

    2017-09-01

    The visual span is hypothesized to be a sensory bottleneck on reading speed with crowding thought to be the major sensory factor limiting the size of the visual span. This proposed linkage between crowding, visual span, and reading speed is challenged by the finding that training to read crowded letters reduced crowding but did not improve reading speed (Chung, 2007). Here, we examined two properties of letter-recognition training that may influence the transfer to improved reading: the spatial arrangement of training stimuli and the presence of flankers. Three groups of nine young adults were trained with different configurations of letter stimuli at 10° in the lower visual field: a flanked-local group (flanked letters localized at one position), a flanked-distributed group (flanked letters distributed across different horizontal locations), and an isolated-distributed group (isolated and distributed letters). We found that distributed training, but not the presence of flankers, appears to be necessary for the training benefit to transfer to increased reading speed. Localized training may have biased attention to one specific, small area in the visual field, thereby failing to improve reading. We conclude that the visual span represents a sensory bottleneck on reading, but there may also be an attentional bottleneck. Reducing the impact of crowding can enlarge the visual span and can potentially facilitate reading, but not when adverse attentional bias is present. Our results clarify the association between crowding, visual span, and reading.

  19. SEM method for direct visual tracking of nanoscale morphological changes of platinum based electrocatalysts on fixed locations upon electrochemical or thermal treatments.

    Science.gov (United States)

    Zorko, Milena; Jozinović, Barbara; Bele, Marjan; Hodnik, Nejc; Gaberšček, Miran

    2014-05-01

    A general method for tracking morphological surface changes on a nanometer scale with scanning electron microscopy (SEM) is introduced. We exemplify the usefulness of the method by showing consecutive SEM images of an identical location before and after the electrochemical and thermal treatments of platinum-based nanoparticles deposited on a high surface area carbon. Observations reveal an insight into platinum-based catalyst degradation occurring during potential cycling treatment. The presence of chloride clearly increases the rate of degradation. Under these conditions the dominant degradation mechanism seems to be platinum dissolution with some subsequent redeposition on top of the catalyst film. By contrast, at the temperature of 60°C, under potentiostatic conditions some carbon corrosion and particle aggregation was observed. Temperature treatment simulating the annealing step of the synthesis reveals sintering of small platinum-based composite aggregates into uniform spherical particles. The method provides direct proof of induced surface phenomena occurring at a chosen location without the statistical uncertainty inherent in conventional random SEM observations across relatively large surface areas. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Near-field visual acuity of pigeons: effects of head location and stimulus luminance.

    Science.gov (United States)

    Hodos, W; Leibowitz, R W; Bonbright, J C

    1976-03-01

    Two pigeons were trained to discriminate a grating stimulus from a blank stimulus of equivalent luminance in a three-key chamber. The stimuli and blanks were presented behind a transparent center key. The procedure was a conditional discrimination in which pecks on the left key were reinforced if the blank had been present behind the center key and pecks on the right key were reinforced if the grating had been present behind the center key. The spatial frequency of the stimuli was varied in each session from four to 29.5 lines per millimeter in accordance with a variation of the method of constant stimuli. The number of lines per millimeter that the subjects could discriminate at threshold was determined from psychometric functions. Data were collected at five values of stimulus luminance ranging from -0.07 to 3.29 log cd/m2. The distance from the stimulus to the anterior nodal point of the eye, which was determined from measurements taken from high-speed motion-picture photographs of three additional pigeons and published intraocular measurements, was 62.0 mm. This distance and the grating detection thresholds were used to calculate the visual acuity of the birds at each level of luminance. Acuity improved with increasing luminance to a peak value of 0.52, which corresponds to a visual angle of 1.92 min, at a luminance of 2.33 log cd/m2. Further increase in luminance produced a small decline in acuity.
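
    The acuity calculation described above can be reproduced as a short worked example: a grating of f lines per millimetre viewed from the 62.0 mm anterior nodal distance subtends a minimum angle of resolution of one line width, and decimal acuity is the reciprocal of that angle in minutes of arc. The threshold of 29 lines/mm below is a round illustrative value close to the reported peak.

```python
import math

def acuity(lines_per_mm, nodal_distance_mm=62.0):
    """Decimal acuity and minimum angle of resolution (arcmin) for a grating threshold."""
    line_width_mm = 1.0 / lines_per_mm
    mar_arcmin = math.degrees(math.atan(line_width_mm / nodal_distance_mm)) * 60.0
    return 1.0 / mar_arcmin, mar_arcmin

decimal_acuity, mar = acuity(29.0)
print(round(decimal_acuity, 2), round(mar, 2))  # ~0.52, ~1.91 arcmin
```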

  1. Adaptive behavior of neighboring neurons during adaptation-induced plasticity of orientation tuning in V1

    Directory of Open Access Journals (Sweden)

    Shumikhina Svetlana

    2009-12-01

    Full Text Available Abstract Background Sensory neurons display transient changes of their response properties following prolonged exposure to an appropriate stimulus (adaptation). In adult cat primary visual cortex, orientation-selective neurons shift their preferred orientation after being adapted to a non-preferred orientation. The direction of those shifts, towards (attractive) or away (repulsive) from the adapter, depends mostly on adaptation duration. How the adaptive behavior of a neuron is related to that of its neighbors remains unclear. Results Here we show that in most cases (75%), cells shift their preferred orientation in the same direction as their neighbors. We also found that cells shifting preferred orientation differently from their neighbors (25%) display three interesting properties: (i) larger variance of absolute shift amplitude, (ii) wider tuning bandwidth and (iii) larger range of preferred orientations among the cluster of cells. Several response properties of V1 neurons depend on their location within the cortical orientation map. Our results suggest that recording sites with both attractive and repulsive shifts following adaptation may be located in close proximity to iso-orientation domain boundaries or pinwheel centers. Indeed, those regions have a more diverse orientation distribution of local inputs that could account for the three properties above. On the other hand, sites with all cells shifting their preferred orientation in the same direction could be located within iso-orientation domains. Conclusions Our results suggest that the direction and amplitude of orientation preference shifts in V1 depend on location within the orientation map. This anisotropy of adaptation-induced plasticity, comparable to that of the visual cortex itself, could have important implications for our understanding of visual adaptation at the psychophysical level.

  2. Sequential pattern data mining and visualization

    Science.gov (United States)

    Wong, Pak Chung [Richland, WA; Jurrus, Elizabeth R [Kennewick, WA; Cowley, Wendy E [Benton City, WA; Foote, Harlan P [Richland, WA; Thomas, James J [Richland, WA

    2009-05-26

    One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).
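
    A minimal sketch of the layout step described in this record: each extracted event is given a display location whose horizontal position encodes time and whose vertical position encodes the event identifier, and locations are grouped by event sequence. The event data and coordinate scheme are hypothetical.

```python
def layout(events):
    """events: list of (sequence_id, time, identifier). Returns
    {sequence_id: [(x, y), ...]} where x = time and y = identifier index."""
    identifiers = sorted({ident for _, _, ident in events})
    y_of = {ident: i for i, ident in enumerate(identifiers)}
    groups = {}
    for seq, time, ident in events:
        groups.setdefault(seq, []).append((time, y_of[ident]))
    return groups

events = [("330a", 0, "login"), ("330a", 5, "download"),
          ("330b", 1, "login"), ("330b", 7, "error")]
print(layout(events))
```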

  3. Visual and tactile length matching in spatial neglect.

    Science.gov (United States)

    Bisiach, Edoardo; McIntosh, Robert D; Dijkerman, H Chris; McClements, Kevin I; Colombo, Mariarosa; Milner, A David

    2004-01-01

    Previous studies have shown that many patients with spatial neglect underestimate the horizontal extent of leftwardly located shapes (presented on screen or on paper) relative to rightwardly located shapes. This has been used to help explain their leftward biases in line bisection. In the present study we have tested patients with right hemisphere damage, either with or without neglect, on a comparable length matching task, but using 3-dimensional objects. The task was executed first visually without tactile contact, and second through touch without vision. In both sense modalities, we found that patients with neglect, but not those without, tended to underestimate leftward located objects relative to rightward located objects, differing significantly in this regard from healthy subjects. However, these lateral biases were not as frequent or as pronounced as in previous studies using 2-D visual shapes. Despite the similar asymmetries in the two sense modalities, we found only a small correlation between them, and clear double dissociations were observed among our patients. We conclude that leftward length underestimation cannot be attributed to any one single cause. First, it cannot be entirely due to impairments in the visual pathways, such as hemianopia and/or processing biases, since the disorder is also seen in the tactile modality. At the same time, however, length underestimation phenomena cannot be fully explained as a disruption of a supramodal central size processor, since they can occur in either vision or touch alone. Our data would fit best with a multiple-factor model in which some patients show leftward length underestimation for modality-specific reasons, while others do so due to a more high-level disruption of size judgements.

  4. Age-related changes in visual exploratory behavior in a natural scene setting.

    Science.gov (United States)

    Hamel, Johanna; De Beukelaer, Sophie; Kraft, Antje; Ohl, Sven; Audebert, Heinrich J; Brandt, Stephan A

    2013-01-01

    Diverse cognitive functions decline with increasing age, including the ability to process central and peripheral visual information in a laboratory testing situation (useful visual field of view). To investigate whether and how this influences activities of daily life, we studied age-related changes in visual exploratory behavior in a natural scene setting: a driving simulator paradigm of variable complexity was tested in subjects of varying ages with simultaneous eye- and head-movement recordings via a head-mounted camera. Detection and reaction times were also measured by visual fixation and manual reaction. We considered video computer game experience as a possible influence on performance. Data of 73 participants of varying ages were analyzed, driving two different courses. We analyzed the influence of route difficulty level, age, and eccentricity of test stimuli on oculomotor and driving behavior parameters. No significant age effects were found regarding saccadic parameters. In the older subjects head-movements increasingly contributed to gaze amplitude. More demanding courses and more peripheral stimuli locations induced longer reaction times in all age groups. Deterioration of the functionally useful visual field of view with increasing age was not suggested in our study group. However, video game-experienced subjects revealed larger saccade amplitudes and a broader distribution of fixations on the screen. They reacted faster to peripheral objects suggesting the notion of a general detection task rather than perceiving driving as a central task. As the video game-experienced population consisted of younger subjects, our study indicates that effects due to video game experience can easily be misinterpreted as age effects if not accounted for. We therefore view it as essential to consider video game experience in all testing methods using virtual media.

  5. Age-related changes in visual exploratory behavior in a natural scene setting

    Directory of Open Access Journals (Sweden)

    Johanna eHamel

    2013-06-01

    Full Text Available Diverse cognitive functions decline with increasing age, including the ability to process central and peripheral visual information in a laboratory testing situation (useful visual field of view). To investigate whether and how this influences activities of daily life, we studied age-related changes in visual exploratory behavior in a natural scene setting: a driving simulator paradigm of variable complexity was tested in subjects of varying ages with simultaneous eye- and head-movement recordings via a head-mounted camera. Detection and reaction times were also measured by visual fixation and manual reaction. We considered video computer game experience as a possible influence on performance. Data of 73 participants of varying ages were analyzed, driving two different courses. We analyzed the influence of route difficulty level, age and eccentricity of test stimuli on oculomotor and driving behavior parameters. No significant age effects were found regarding saccadic parameters. In the older subjects head-movements increasingly contributed to gaze amplitude. More demanding courses and more peripheral stimuli locations induced longer reaction times in all age groups. Deterioration of the functionally useful visual field of view with increasing age was not suggested in our study group. However, video game-experienced subjects revealed larger saccade amplitudes and a broader distribution of fixations on the screen. They reacted faster to peripheral objects suggesting the notion of a general detection task rather than perceiving driving as a central task. As the video game-experienced population consisted of younger subjects, our study indicates that effects due to video game experience can easily be misinterpreted as age effects if not accounted for. We therefore view it as essential to consider video game experience in all testing methods using virtual media.

  6. Model for 3D-visualization of streams and techno-economic estimate of locations for construction of small hydropower plants

    International Nuclear Information System (INIS)

    Izeiroski, Subija

    2012-01-01

    The main research of this dissertation focuses on the development of a model for preliminary assessment of hydropower potential for small hydropower plant construction using a Geographic Information System (GIS). For this purpose, the first part of the dissertation develops a methodological approach for 3D visualization of the land surface and river streams on a GIS platform. The input graphic data are digitized maps at a scale of 1:25000, where each map covers an area of 10x14 km and consists of many layers with graphic data in shape (vector) format. Using GIS tools, a digital elevation model (DEM) is obtained from the input point and isohyetal contour data layers with different interpolation techniques; the DEM is then used to derive additional maps of useful land-surface parameters such as slope raster maps, hillshade models of the surface, various maps with hydrologic parameters and many others. The main focus of the research is directed toward developing GIS-based methodological approaches for assessment of hydropower potential and selection of suitable locations for the construction of small hydropower plants (SHPs), especially in mountainous, hilly areas rich in water resources. For this purpose, a practical analysis was carried out in a study area encompassing the watershed of the Brajchanska River in the eastern part of Prespa Lake. In the analysis of suitable locations for SHP construction, the main emphasis is placed on techno-engineering criteria; in this context, a topographic analysis was made of the slope (gradient) of all river streams as well as of particular streams, together with a hydrological analysis of flow rates (discharges). The slope analysis was executed at the pixel (cell) level as well as at the segment (line) level along a given stream. The slope value at segment level gives in GIS
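
    One of the derived layers mentioned above, a slope raster computed from the DEM, can be sketched with a finite-difference gradient; the tiny synthetic elevation grid and 25 m cell size below are illustrative, whereas the dissertation interpolates the DEM from digitized 1:25000 contour and point layers.

```python
import numpy as np

def slope_degrees(dem, cell_size_m):
    """Slope (degrees) of each DEM cell from the local elevation gradient."""
    dz_dy, dz_dx = np.gradient(dem, cell_size_m)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

dem = np.array([[500.0, 505.0, 512.0],
                [498.0, 503.0, 511.0],
                [495.0, 500.0, 509.0]])
print(np.round(slope_degrees(dem, cell_size_m=25.0), 1))
```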

  7. Fusion and rivalry are dependent on the perceptual meaning of visual stimuli.

    Science.gov (United States)

    Andrews, Timothy J; Lotto, R Beau

    2004-03-09

    We view the world with two eyes and yet are typically only aware of a single, coherent image. Arguably the simplest explanation for this is that the visual system unites the two monocular stimuli into a common stream that eventually leads to a single coherent sensation. However, this notion is inconsistent with the well-known phenomenon of rivalry; when physically different stimuli project to the same retinal location, the ensuing perception alternates between the two monocular views in space and time. Although fundamental for understanding the principles of binocular vision and visual awareness, the mechanisms underlying binocular rivalry remain controversial. Specifically, there is uncertainty about what determines whether monocular images undergo fusion or rivalry. By taking advantage of the perceptual phenomenon of color contrast, we show that physically identical monocular stimuli tend to rival, not fuse, when they signify different objects at the same location in visual space. Conversely, when physically different monocular stimuli are likely to represent the same object at the same location in space, fusion is more likely to result. The data suggest that what competes for visual awareness in the two eyes is not the physical similarity between images but the similarity in their perceptual/empirical meaning.

  8. Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.

    Science.gov (United States)

    de Jong, Ritske; Toffanin, Paolo; Harbers, Marten

    2010-01-01

    Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations, where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: an attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
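
    The frequency-tagging logic described above can be sketched briefly: attention effects are read out as changes in steady-state response amplitude at the tagging frequencies. The sketch below is not the study's analysis pipeline; the sampling rate and the 40 Hz tag are assumed values.

        # Minimal sketch: amplitude at a tagging frequency from a single-channel epoch.
        import numpy as np

        def ssr_amplitude(epoch, fs, tag_freq):
            """Amplitude-spectrum value at `tag_freq` (Hz) for a 1-D epoch sampled at `fs`."""
            spectrum = 2.0 * np.abs(np.fft.rfft(epoch)) / len(epoch)
            freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
            return spectrum[np.argmin(np.abs(freqs - tag_freq))]

        if __name__ == "__main__":
            fs, dur, tag = 500.0, 4.0, 40.0                      # assumed sampling rate and tag
            t = np.arange(0.0, dur, 1.0 / fs)
            epoch = 0.5 * np.sin(2 * np.pi * tag * t) + np.random.randn(t.size)
            print("amplitude at tag (a.u.):", round(ssr_amplitude(epoch, fs, tag), 3))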

  9. Displacement of location in illusory line motion.

    Science.gov (United States)

    Hubbard, Timothy L; Ruppel, Susan E

    2013-05-01

    Six experiments examined displacement in memory for the location of the line in illusory line motion (ILM; appearance or disappearance of a stationary cue is followed by appearance of a stationary line that is presented all at once, but the stationary line is perceived to "unfold" or "be drawn" from the end closest to the cue to the end most distant from the cue). If ILM was induced by having a single cue appear, then memory for the location of the line was displaced toward the cue, and displacement was larger if the line was closer to the cue. If ILM was induced by having one of two previously visible cues vanish, then memory for the location of the line was displaced away from the cue that vanished. In general, the magnitude of displacement increased and then decreased as retention interval increased from 50 to 250 ms and from 250 to 450 ms, respectively. Displacement of the line (a) is consistent with a combination of a spatial averaging of the locations of the cue and the line with a relatively weaker dynamic in the direction of illusory motion, (b) might be implemented in a spreading activation network similar to networks previously suggested to implement displacement resulting from implied or apparent motion, and (c) provides constraints and challenges for theories of ILM.

  10. The modulation of simple reaction time by the spatial probability of a visual stimulus

    Directory of Open Access Journals (Sweden)

    Carreiro L.R.R.

    2003-01-01

    Full Text Available Simple reaction time (SRT) in response to visual stimuli can be influenced by many stimulus features. The speed and accuracy with which observers respond to a visual stimulus may be improved by prior knowledge about the stimulus location, which can be obtained by manipulating the spatial probability of the stimulus. However, when higher spatial probability is achieved by holding constant the stimulus location throughout successive trials, the resulting improvement in performance can also be due to local sensory facilitation caused by the recurrent spatial location of a visual target (position priming). The main objective of the present investigation was to quantitatively evaluate the modulation of SRT by the spatial probability structure of a visual stimulus. In two experiments the volunteers had to respond as quickly as possible to the visual target presented on a computer screen by pressing an optic key with the index finger of the dominant hand. Experiment 1 (N = 14) investigated how SRT changed as a function of both the different levels of spatial probability and the subject's explicit knowledge about the precise probability structure of visual stimulation. We found a gradual decrease in SRT with increasing spatial probability of a visual target regardless of the observer's previous knowledge concerning the spatial probability of the stimulus. Error rates, below 2%, were independent of the spatial probability structure of the visual stimulus, suggesting the absence of a speed-accuracy trade-off. Experiment 2 (N = 12) examined whether changes in SRT in response to a spatially recurrent visual target might be accounted for simply by sensory and temporally local facilitation. The findings indicated that the decrease in SRT brought about by a spatially recurrent target was associated with its spatial predictability, and could not be accounted for solely in terms of sensory priming.

  11. A Capacitated Location-Allocation Model for Flood Disaster Service Operations with Border Crossing Passages and Probabilistic Demand Locations

    Directory of Open Access Journals (Sweden)

    Seyed Ali Mirzapour

    2013-01-01

    Full Text Available Potential consequences of flood disasters, including severe loss of life and property, induce emergency managers to find the appropriate locations of relief rooms to evacuate people from the origin points to a safe place in order to lessen the possible impact of flood disasters. In this research, a p-center location problem is considered in order to determine the locations of some relief rooms in a city and their corresponding allocation clusters. This study presents a mixed integer nonlinear programming model of a capacitated facility location-allocation problem which simultaneously considers the probabilistic distribution of demand locations and a fixed line barrier in a region. The proposed model aims at minimizing the maximum expected weighted distance from the relief rooms to all the demand regions in order to decrease the evacuation time of people from the affected areas before flood occurrence. A real-world case study has been carried out to examine the effectiveness and applicability of the proposed model.
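
    The optimization core of such a model, choosing p facility sites so that the maximum weighted demand-to-facility distance is minimized, can be sketched as a small mixed-integer program. The sketch omits the paper's probabilistic demand locations, capacities and line barrier, assumes the PuLP package is available, and uses made-up distances and weights.

        # Minimal p-center sketch: open p sites, assign each demand point to one open
        # site, and minimize the largest weighted assignment distance.
        import pulp

        def p_center(dist, weights, p):
            I, J = range(len(dist)), range(len(dist[0]))
            prob = pulp.LpProblem("p_center", pulp.LpMinimize)
            y = pulp.LpVariable.dicts("open", J, cat=pulp.LpBinary)
            x = pulp.LpVariable.dicts("assign", (I, J), cat=pulp.LpBinary)
            z = pulp.LpVariable("max_weighted_dist", lowBound=0)
            prob += z                                             # objective
            prob += pulp.lpSum(y[j] for j in J) == p              # open exactly p sites
            for i in I:
                prob += pulp.lpSum(x[i][j] for j in J) == 1       # assign each demand once
                prob += pulp.lpSum(weights[i] * dist[i][j] * x[i][j] for j in J) <= z
                for j in J:
                    prob += x[i][j] <= y[j]                       # only to open sites
            prob.solve(pulp.PULP_CBC_CMD(msg=False))
            return [j for j in J if y[j].value() > 0.5], z.value()

        if __name__ == "__main__":
            dist = [[2, 9, 7], [8, 3, 6], [5, 4, 1], [7, 2, 8]]   # toy demand-to-site distances
            print(p_center(dist, weights=[1, 2, 1, 1], p=2))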

  12. Early visual deprivation prompts the use of body-centered frames of reference for auditory localization.

    Science.gov (United States)

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2018-01-01

    The effects of early visual deprivation on auditory spatial processing are controversial. Results from recent psychophysical studies show that people who were born blind have a spatial impairment in localizing sound sources within specific auditory settings, while previous psychophysical studies revealed enhanced auditory spatial abilities in early blind compared to sighted individuals. An explanation of why an auditory spatial deficit is sometimes observed within blind populations, and of its task-dependency, remains to be clarified. We investigated auditory spatial perception in early blind adults and demonstrated that the deficit derives from blind individuals' reduced ability to remap sound locations using an external frame of reference. We found that performance in the blind population was severely impaired when they were required to localize brief auditory stimuli with respect to external acoustic landmarks (external reference frame) or when they had to reproduce the spatial distance between two sounds. However, they performed similarly to sighted controls when they had to localize sounds with respect to their own hand (body-centered reference frame), or to judge the distances of sounds from their finger. These results suggest that early visual deprivation and the lack of visual contextual cues during the critical period induce a preference for body-centered over external spatial auditory representations. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Visual Geolocations. Repurposing online data to design alternative views

    Directory of Open Access Journals (Sweden)

    Gabriele Colombo

    2017-04-01

    Full Text Available Data produced by humans and machines is more and more heterogeneous, visual, and location based. In recent years this availability has inspired a number of reactions from researchers, designers, and artists who, using different visual manipulation techniques, have attempted to repurpose this material to add meaning and design new perspectives with specific intentions. Three different approaches are described here: the design of interfaces for exploring satellite footage in novel ways, the analysis of urban esthetics through the visual manipulation of collections of user-generated contents, and the enrichment of geo-based datasets with the selection and rearrangement of web imagery.

  14. Enhanced alpha-oscillations in visual cortex during anticipation of self-generated visual stimulation.

    Science.gov (United States)

    Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray

    2014-11-01

    The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.

  15. WebVis: a hierarchical web homepage visualizer

    Science.gov (United States)

    Renteria, Jose C.; Lodha, Suresh K.

    2000-02-01

    WebVis, the Hierarchical Web Home Page Visualizer, is a tool for managing home web pages. The user can access this tool via the WWW and obtain a hierarchical visualization of their home web pages. WebVis is a real-time interactive tool that supports many different queries on the statistics of internal files such as size, age, and type. In addition, statistics on embedded information such as VRML files, Java applets, images and sound files can be extracted and queried. Results of these queries are visualized using color, shape and size of different nodes of the hierarchy. The visualization assists the user in a variety of tasks, such as quickly finding outdated information or locating large files. WebVis is one solution to the growing web space maintenance problem. Implementation of WebVis is realized with Perl and Java. Perl pattern matching and file handling routines are used to collect and process web space linkage information and web document information. Java utilizes the collected information to produce visualization of the web space. Java also provides WebVis with real-time interactivity while running off the WWW. Some WebVis examples of home web page visualization are presented.
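
    The statistics WebVis queries (file size, age and type) are straightforward to gather; the sketch below, written in Python rather than the tool's Perl/Java, walks a hypothetical local web root and collects them.

        # Minimal sketch: collect per-file size, age and type over a web directory tree.
        import os, time
        from collections import Counter

        def collect_stats(root):
            stats, types = [], Counter()
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    info = os.stat(path)
                    age_days = (time.time() - info.st_mtime) / 86400.0
                    types[os.path.splitext(name)[1].lower() or "(none)"] += 1
                    stats.append((path, info.st_size, age_days))
            return stats, types

        if __name__ == "__main__":
            stats, types = collect_stats("public_html")           # hypothetical web root
            outdated = [p for p, _size, age in stats if age > 365]
            largest = sorted(stats, key=lambda s: s[1], reverse=True)[:10]
            print(types.most_common(5), len(outdated), [p for p, *_ in largest])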

  16. Organization and visualization of medical images in radiotherapy

    International Nuclear Information System (INIS)

    Lorang, T.

    2001-05-01

    In modern radiotherapy, various imaging equipment is used to acquire views from inside human bodies. Tomographic imaging equipment acquires stacks of cross-sectional images, software implementations derive three-dimensional volumes from planar images to allow visualization of reconstructed cross-sections at any orientation and location, and higher-level visualization systems allow for transparent views and surface rendering. Of upcoming interest in radiotherapy is mutual information: the integration of information from multiple imaging equipment, or from the same imaging equipment at different time stamps and with varying acquisition parameters. Huge amounts of images are acquired nowadays at radiotherapy centers, requiring organization of images with respect to patient, acquisition and equipment to allow for visualization of images in a comparative and integrative manner. Especially for integration of image information from different equipment, geometrical information is required to allow for registration of images or volumes. DICOM 3.0 has been introduced as a standard for information interchange with respect to medical imaging. Geometric information of cross-sections, demographic information of patients and medical information of acquisitions and equipment are covered by this standard, allowing for a high level of automation with respect to organization and visualization of medical images. Reconstructing cross-sectional images from volumes at any orientation and location is required for the purposes of registration and multi-planar views. Resampling and addressing of discrete volume data need to be implemented efficiently to allow for simultaneous visualization of multiple cross-sectional images, especially with respect to multiple, non-isotropic volume data sets. (author)
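
    Reconstructing a cross-section at an arbitrary orientation and location reduces to resampling the volume along a plane. The sketch below illustrates this with trilinear interpolation via scipy.ndimage.map_coordinates; it is not the system described above, and the volume, plane geometry and spacing are assumptions.

        # Minimal sketch: sample an oblique plane through a volume by interpolation.
        import numpy as np
        from scipy.ndimage import map_coordinates

        def oblique_slice(volume, center, u, v, size=128, spacing=1.0):
            """Sample a size x size plane through `center`, spanned by direction vectors u and v."""
            c = np.asarray(center, float)
            u = np.asarray(u, float) / np.linalg.norm(u)
            v = np.asarray(v, float) / np.linalg.norm(v)
            s = (np.arange(size) - size / 2.0) * spacing
            su, sv = np.meshgrid(s, s, indexing="ij")
            pts = c[:, None, None] + u[:, None, None] * su + v[:, None, None] * sv
            return map_coordinates(volume, pts, order=1, mode="nearest")

        if __name__ == "__main__":
            vol = np.random.rand(64, 64, 64)                      # stand-in for a CT volume
            plane = oblique_slice(vol, center=(32, 32, 32), u=(1, 0, 0), v=(0, 1, 1))
            print(plane.shape)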

  17. Experience-dependent plasticity from eye opening enables lasting, visual cortex-dependent enhancement of motion vision.

    Science.gov (United States)

    Prusky, Glen T; Silver, Byron D; Tschetter, Wayne W; Alam, Nazia M; Douglas, Robert M

    2008-09-24

    Developmentally regulated plasticity of vision has generally been associated with "sensitive" or "critical" periods in juvenile life, wherein visual deprivation leads to loss of visual function. Here we report an enabling form of visual plasticity that commences in infant rats from eye opening, in which daily threshold testing of optokinetic tracking, amid otherwise normal visual experience, stimulates enduring, visual cortex-dependent enhancement (>60%) of the spatial frequency threshold for tracking. The perceptual ability to use spatial frequency in discriminating between moving visual stimuli is also improved by the testing experience. The capacity for inducing enhancement is transitory and effectively limited to infancy; however, enhanced responses are not consolidated and maintained unless in-kind testing experience continues uninterrupted into juvenile life. The data show that selective visual experience from infancy can alone enable visual function. They also indicate that plasticity associated with visual deprivation may not be the only cause of developmental visual dysfunction, because we found that experientially inducing enhancement in late infancy, without subsequent reinforcement of the experience in early juvenile life, can lead to enduring loss of function.

  18. Sustained Splits of Attention within versus across Visual Hemifields Produce Distinct Spatial Gain Profiles.

    Science.gov (United States)

    Walter, Sabrina; Keitel, Christian; Müller, Matthias M

    2016-01-01

    Visual attention can be focused concurrently on two stimuli at noncontiguous locations while intermediate stimuli remain ignored. Nevertheless, behavioral performance in multifocal attention tasks falters when attended stimuli fall within one visual hemifield as opposed to when they are distributed across left and right hemifields. This "different-hemifield advantage" has been ascribed to largely independent processing capacities of each cerebral hemisphere in early visual cortices. Here, we investigated how this advantage influences the sustained division of spatial attention. We presented six isoeccentric light-emitting diodes (LEDs) in the lower visual field, each flickering at a different frequency. Participants attended to two LEDs that were spatially separated by an intermediate LED and responded to synchronous events at to-be-attended LEDs. Task-relevant pairs of LEDs were either located in the same hemifield ("within-hemifield" conditions) or separated by the vertical meridian ("across-hemifield" conditions). Flicker-driven brain oscillations, steady-state visual evoked potentials (SSVEPs), indexed the allocation of attention to individual LEDs. Both behavioral performance and SSVEPs indicated enhanced processing of attended LED pairs during "across-hemifield" relative to "within-hemifield" conditions. Moreover, SSVEPs demonstrated effective filtering of intermediate stimuli in "across-hemifield" condition only. Thus, despite identical physical distances between LEDs of attended pairs, the spatial profiles of gain effects differed profoundly between "across-hemifield" and "within-hemifield" conditions. These findings corroborate that early cortical visual processing stages rely on hemisphere-specific processing capacities and highlight their limiting role in the concurrent allocation of visual attention to multiple locations.

  19. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss

    Science.gov (United States)

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity and compare early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788

  20. A novel visual-inertial monocular SLAM

    Science.gov (United States)

    Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo

    2018-02-01

    With developments in sensors and in the computer vision research community, cameras, which are accurate, compact, well understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features is a system that obtains motion information from image acquisition equipment and rebuilds the structure of an unknown environment. We provide an analysis of bioinspired flight in insects, employing a novel technique based on SLAM, and combine visual and inertial measurements to obtain high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach gives a more accurate quantitative simulation of insect navigation and can reach positioning accuracy at the centimeter level.
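
    A tightly-coupled visual-inertial system of the kind described is substantial; the toy sketch below only illustrates why fusing the two sensors helps, using a 1-D Kalman filter that predicts with noisy accelerations and corrects with occasional visual position fixes. All noise values and rates are assumptions, and this is not the paper's method.

        # Toy loosely-coupled fusion: IMU-driven prediction, visual position updates.
        import numpy as np

        def fuse(accels, vis_pos, dt=0.01, accel_var=0.1, vis_var=0.05):
            x, P = np.zeros(2), np.eye(2)                 # state: [position, velocity]
            F = np.array([[1.0, dt], [0.0, 1.0]])
            B = np.array([0.5 * dt * dt, dt])
            H = np.array([[1.0, 0.0]])
            Q = np.outer(B, B) * accel_var
            estimates = []
            for a, z in zip(accels, vis_pos):
                x = F @ x + B * a                         # predict from acceleration
                P = F @ P @ F.T + Q
                if z is not None:                         # a visual fix is available
                    S = H @ P @ H.T + vis_var
                    K = P @ H.T / S
                    x = x + (K * (z - H @ x)).ravel()
                    P = (np.eye(2) - K @ H) @ P
                estimates.append(x[0])
            return np.array(estimates)

        if __name__ == "__main__":
            n, dt = 500, 0.01
            true_acc = np.sin(np.linspace(0, 4 * np.pi, n))
            true_pos = np.cumsum(np.cumsum(true_acc) * dt) * dt
            accels = true_acc + 0.3 * np.random.randn(n)
            vis = [p + 0.05 * np.random.randn() if i % 10 == 0 else None
                   for i, p in enumerate(true_pos)]
            print("final position estimate:", round(fuse(accels, vis, dt)[-1], 3))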

  1. Visual Resource Analysis for Solar Energy Zones in the San Luis Valley

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, Robert [Argonne National Laboratory (ANL), Argonne, IL (United States). Environmental Science Division; Abplanalp, Jennifer M. [Argonne National Laboratory (ANL), Argonne, IL (United States). Environmental Science Division; Zvolanek, Emily [Argonne National Laboratory (ANL), Argonne, IL (United States). Environmental Science Division; Brown, Jeffery [Bureau of Land Management, Washington, DC (United States). Dept. of the Interior

    2016-01-01

    This report summarizes the results of a study conducted by Argonne National Laboratory’s (Argonne’s) Environmental Science Division for the U.S. Department of the Interior Bureau of Land Management (BLM). The study analyzed the regional effects of potential visual impacts of solar energy development on three BLM-designated solar energy zones (SEZs) in the San Luis Valley (SLV) in Colorado, and, based on the analysis, made recommendations for or against regional compensatory mitigation to compensate residents and other stakeholders for the potential visual impacts to the SEZs. The analysis was conducted as part of the solar regional mitigation strategy (SRMS) task conducted by BLM Colorado with assistance from Argonne. Two separate analyses were performed. The first analysis, referred to as the VSA Analysis, analyzed the potential visual impacts of solar energy development in the SEZs on nearby visually sensitive areas (VSAs), and, based on the impact analyses, made recommendations for or against regional compensatory mitigation. VSAs are locations for which some type of visual sensitivity has been identified, either because the location is an area of high scenic value or because it is a location from which people view the surrounding landscape and attach some level of importance or sensitivity to what is seen from the location. The VSA analysis included both BLM-administered lands in Colorado and in the Taos FO in New Mexico. The second analysis, referred to as the SEZ Analysis, used BLM visual resource inventory (VRI) and other data on visual resources in the former Saguache and La Jara Field Offices (FOs), now contained within the San Luis Valley FO (SLFO), to determine whether the changes in scenic values that would result from the development of utility-scale solar energy facilities in the SEZs would affect the quality and quantity of valued scenic resources in the SLV region as a whole. If the regional effects were judged to be significant, regional

  2. The Role of the Oculomotor System in Updating Visual-Spatial Working Memory across Saccades

    OpenAIRE

    Boon, Paul J.; Belopolsky, Artem V.; Theeuwes, Jan

    2016-01-01

    Visual-spatial working memory (VSWM) helps us to maintain and manipulate visual information in the absence of sensory input. It has been proposed that VSWM is an emergent property of the oculomotor system. In the present study we investigated the role of the oculomotor system in updating of spatial working memory representations across saccades. Participants had to maintain a location in memory while making a saccade to a different location. During the saccade the target was displaced, which ...

  3. Comparison of Diagnostic Accuracy between Octopus 900 and Goldmann Kinetic Visual Fields

    Directory of Open Access Journals (Sweden)

    Fiona J. Rowe

    2014-01-01

    Full Text Available Purpose. To determine the diagnostic accuracy of kinetic visual field assessment by Octopus 900 perimetry compared with Goldmann perimetry. Methods. Prospective cross-sectional evaluation of 40 control subjects with full visual fields and 50 patients with known visual field loss. Comparison of test duration and area measurement of isopters for Octopus 3, 5, and 10°/sec stimulus speeds. Comparison of test duration and type of visual field classification for Octopus versus Goldmann perimetry. Results were independently graded for presence/absence of field defect and for type and location of defect. Statistical evaluation comprised ANOVA and paired t tests for evaluation of parametric data with Bonferroni adjustment. Bland-Altman and Kappa tests were used for measurement of agreement between data. Results. Octopus 5°/sec perimetry had comparable test duration to Goldmann perimetry. Octopus perimetry reliably detected the type and location of visual field loss, with visual fields matched to Goldmann results in 88.8% of results (K=0.775). Conclusions. Kinetic perimetry requires individual tailoring to ensure accuracy. Octopus perimetry was reproducible for presence/absence of visual field defect. Our screening protocol when using Octopus perimetry is 5°/sec for determining boundaries of peripheral isopters and 3°/sec for blind spot mapping, with further evaluation of the area of field loss for defect depth and size.
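
    The agreement statistics named in the abstract are simple to compute; the sketch below shows Bland-Altman bias with 95% limits of agreement for paired area measurements and Cohen's kappa for paired categorical gradings. All numbers are made up for illustration.

        # Minimal sketch: Bland-Altman limits of agreement and Cohen's kappa.
        import numpy as np

        def bland_altman(a, b):
            diff = np.asarray(a, float) - np.asarray(b, float)
            bias, sd = diff.mean(), diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        def cohens_kappa(x, y):
            labels, n = sorted(set(x) | set(y)), len(x)
            po = sum(xi == yi for xi, yi in zip(x, y)) / n
            pe = sum((x.count(l) / n) * (y.count(l) / n) for l in labels)
            return (po - pe) / (1 - pe)

        if __name__ == "__main__":
            octopus  = [102, 98, 110, 95, 120]            # toy isopter areas (deg^2)
            goldmann = [100, 97, 112, 93, 118]
            print(bland_altman(octopus, goldmann))
            print(cohens_kappa(["defect", "full", "defect"], ["defect", "full", "full"]))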

  4. Functionally Independent Components of the Late Positive Event-Related Potential During Visual Spatial Attention

    National Research Council Canada - National Science Library

    Makeig, Scott; Westeifleld, Marissa; Jung, Tzyy-Ping; Covington, James; Townsend, Jeanne; Sejnowski, Terrence J; Courchesne, Eric

    1999-01-01

    Human event-related potentials (ERPs) were recorded from 10 subjects presented with visual target and nontarget stimuli at five screen locations and responding to targets presented at one of the locations...

  5. Hierarchical acquisition of visual specificity in spatial contextual cueing.

    Science.gov (United States)

    Lie, Kin-Pou

    2015-01-01

    Spatial contextual cueing refers to the improvement of visual search performance when invariant associations between target locations and distractor spatial configurations are learned incidentally. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of visual specificity in spatial contextual cueing. Two experiments in which detailed visual features were irrelevant for distinguishing between spatial contexts found that spatial contextual cueing was visually generic in difficult trials when the trials were not preceded by easy trials (Experiment 1) but that spatial contextual cueing progressed to visual specificity when difficult trials were preceded by easy trials (Experiment 2). These findings support reverse hierarchy theory, which predicts that even when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing can progress to visual specificity if the stimuli remain constant, the task is difficult, and difficult trials are preceded by easy trials. However, these findings are inconsistent with instance theory, which predicts that when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing will not progress to visual specificity. This study concludes that the acquisition of visual specificity in spatial contextual cueing is more plausibly hierarchical, rather than instance-based.

  6. An event-related potential study on the interaction between lighting level and stimulus spatial location

    Directory of Open Access Journals (Sweden)

    Luis Carretié

    2015-11-01

    Full Text Available Due to heterogeneous photoreceptor distribution, the spatial location of stimulation is crucial for studying visual brain activity in different light environments. This unexplored issue was studied through occipital event-related potentials (ERPs) recorded from 40 participants in response to discrete visual stimuli presented at different locations and in two environmental light conditions, low mesopic (L, 0.03 lux) and high mesopic (H, 6.5 lux), characterized by a differential photoreceptor activity balance (rod>cone and rod<cone, respectively). Differences between the L and H conditions depended on the location of stimulation: differences were greater in response to peripheral stimuli than to stimuli presented at fixation. Moreover, in the former case, significance of L vs. H differences was even stronger in response to stimuli presented at the horizontal than at the vertical periphery. These low vs. high mesopic differences may be explained by photoreceptor activation and their retinal distribution, and confirm that ERPs discriminate between rod- and cone-originated visual processing.

  7. Visual awareness suppression by pre-stimulus brain stimulation; a neural effect.

    Science.gov (United States)

    Jacobs, Christianne; Goebel, Rainer; Sack, Alexander T

    2012-01-02

    Transcranial magnetic stimulation (TMS) has established the functional relevance of early visual cortex (EVC) for visual awareness with great temporal specificity non-invasively in conscious human volunteers. Many studies have found a suppressive effect when TMS was applied over EVC 80-100 ms after the onset of the visual stimulus (post-stimulus TMS time window). Yet, few studies found task performance to also suffer when TMS was applied even before visual stimulus presentation (pre-stimulus TMS time window). This pre-stimulus TMS effect, however, remains controversially debated and its origin had mainly been ascribed to TMS-induced eye-blinking artifacts. Here, we applied chronometric TMS over EVC during the execution of a visual discrimination task, covering an exhaustive range of visual stimulus-locked TMS time windows ranging from -80 pre-stimulus to 300 ms post-stimulus onset. Electrooculographical (EoG) recordings, sham TMS stimulation, and vertex TMS stimulation controlled for different types of non-neural TMS effects. Our findings clearly reveal TMS-induced masking effects for both pre- and post-stimulus time windows, and for both objective visual discrimination performance and subjective visibility. Importantly, all effects proved to be still present after post hoc removal of eye blink trials, suggesting a neural origin for the pre-stimulus TMS suppression effect on visual awareness. We speculate based on our data that TMS exerts its pre-stimulus effect via generation of a neural state which interacts with subsequent visual input. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. A new method for mapping perceptual biases across visual space.

    Science.gov (United States)

    Finlayson, Nonie J; Papageorgiou, Andriani; Schwarzkopf, D Samuel

    2017-08-01

    How we perceive the environment is not stable and seamless. Recent studies found that how a person qualitatively experiences even simple visual stimuli varies dramatically across different locations in the visual field. Here we use a method we developed recently that we call multiple alternatives perceptual search (MAPS) for efficiently mapping such perceptual biases across several locations. This procedure reliably quantifies the spatial pattern of perceptual biases and also of uncertainty and choice. We show that these measurements are strongly correlated with those from traditional psychophysical methods and that exogenous attention can skew biases without affecting overall task performance. Taken together, MAPS is an efficient method to measure how an individual's perceptual experience varies across space.

  9. CCS, locations and asynchronous transition systems

    DEFF Research Database (Denmark)

    Mukund, Madhavan; Nielsen, Mogens

    1992-01-01

    We provide a simple non-interleaved operational semantics for CCS in terms of asynchronous transition systems. We identify the concurrency present in the system in a natural way, in terms of events occurring at independent locations in the system. We extend the standard interleaving transition system for CCS by introducing labels on the transitions with information about the locations of events. We then show that the resulting transition system is an asynchronous transition system which has the additional property of being elementary, which means that it can also be represented by a 1-safe net. We also introduce a notion of bisimulation on asynchronous transition systems which preserves independence. We conjecture that the induced equivalence on CCS processes coincides with the notion of location equivalence proposed by Boudol et al.

  10. "The only way is up" : Location and movement in product packaging as predictors of sensorial impressions and brand identity

    NARCIS (Netherlands)

    van Rompay, Thomas J.L.; Fransen, M.L.; Borgelink, B.; Brassett, J.; McDonnell, J.; Malpass, M.; Hekkert, P.P.M.; Ludden, G.D.S.

    2012-01-01

    Based on embodiment research linking visual-spatial design parameters to symbolic meaning portrayal, this study investigates to what extent location of imagery on product packaging and visual devices portraying movement (i.e., an arrow indicating movement along an upward-headed or downward-headed

  11. Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.

    Science.gov (United States)

    Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin

    2016-01-01

    Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon (e.g.) its visual properties and location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβposterior Kenyon cells.

  12. Predictions of the spontaneous symmetry-breaking theory for visual code completeness and spatial scaling in single-cell learning rules.

    Science.gov (United States)

    Webber, C J

    2001-05-01

    This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.
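
    The rules analysed in the article are more elaborate than can be reproduced here, but the general setup, a single unit trained on image patches by a Hebbian rule, can be sketched with Oja's rule, which converges to the leading principal component of the patch statistics. The patch size and data below are assumptions for illustration.

        # Generic single-cell Hebbian learning sketch (Oja's rule) on image patches.
        import numpy as np

        def oja_train(patches, lr=0.01, epochs=20, seed=0):
            rng = np.random.default_rng(seed)
            w = rng.normal(size=patches.shape[1])
            w /= np.linalg.norm(w)
            for _ in range(epochs):
                for x in patches:
                    y = w @ x
                    w += lr * y * (x - y * w)             # Hebbian growth with decay term
            return w

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            raw = rng.normal(size=(2000, 64))                     # stand-in for 8x8 patches
            patches = raw - raw.mean(axis=1, keepdims=True)       # remove each patch's mean
            w = oja_train(patches)
            print(w.reshape(8, 8).round(2))                       # learned 8x8 weight map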

  13. A Brief Period of Postnatal Visual Deprivation Alters the Balance between Auditory and Visual Attention.

    Science.gov (United States)

    de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier

    2016-11-21

    Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.
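
    The race-model (Miller) bound mentioned above can be computed directly from the empirical RT distributions: the redundant-target CDF is compared with the sum of the two unimodal CDFs, and any excess marks a violation. The RTs below are simulated for illustration.

        # Minimal sketch: test of the race-model inequality on reaction times.
        import numpy as np

        def ecdf(samples, t):
            samples = np.sort(samples)
            return np.searchsorted(samples, t, side="right") / len(samples)

        def race_violation(rt_av, rt_a, rt_v, t_grid):
            bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
            return ecdf(rt_av, t_grid) - bound            # positive values violate the bound

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            rt_a = rng.normal(320, 40, 200)               # auditory-only RTs (ms)
            rt_v = rng.normal(340, 40, 200)               # visual-only RTs (ms)
            rt_av = rng.normal(280, 35, 200)              # redundant audiovisual RTs (ms)
            t = np.linspace(150, 500, 50)
            print("max violation:", round(float(race_violation(rt_av, rt_a, rt_v, t).max()), 3))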

  14. The visual system supports online translation invariance for object identification.

    Science.gov (United States)

    Bowers, Jeffrey S; Vankov, Ivan I; Ludwig, Casimir J H

    2016-04-01

    The ability to recognize the same image projected to different retinal locations is critical for visual object recognition in natural contexts. According to many theories, the translation invariance for objects extends only to trained retinal locations, so that a familiar object projected to a nontrained location should not be identified. In another approach, invariance is achieved "online," such that learning to identify an object in one location immediately affords generalization to other locations. We trained participants to name novel objects at one retinal location using eyetracking technology and then tested their ability to name the same images presented at novel retinal locations. Across three experiments, we found robust generalization. These findings provide a strong constraint for theories of vision.

  15. Enhanced visual performance in obsessive compulsive personality disorder.

    Science.gov (United States)

    Ansari, Zohreh; Fadardi, Javad Salehi

    2016-12-01

    Visual performance is considered a commanding modality in human perception. We tested whether people with obsessive-compulsive personality disorder (OCPD) perform differently in visual performance tasks than people without OCPD. One hundred ten students of Ferdowsi University of Mashhad and non-student participants were tested with the Structured Clinical Interview for DSM-IV Axis II Personality Disorders (SCID-II), among whom 18 (mean age = 29.55; SD = 5.26; 84% female) met the criteria for OCPD classification; controls were 20 persons (mean age = 27.85; SD = 5.26; female = 84%) who did not meet the OCPD criteria. Both groups were tested on a modified Flicker task for two dimensions of visual performance (i.e., visual acuity: detecting the location of change, complexity, and size; and visual contrast sensitivity). The OCPD group responded more accurately on pairs related to size, complexity, and contrast, but spent more time detecting a change on pairs related to complexity and contrast. OCPD individuals seem to have more accurate visual performance than non-OCPD controls. The findings support the relationship between personality characteristics and visual performance within the framework of a top-down processing model. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  16. Visualizing Debugging Activity in Source Code Repositories

    NARCIS (Netherlands)

    Voinea, Lucian; Telea, Alexandru

    2007-01-01

    We present the use of the CVSgrab visualization tool for understanding the debugging activity in the Mozilla project. We show how to display the distribution of different bug types over the project structure, locate project components which undergo heavy debugging activity, and get insight in the

  17. Directing spatial attention to locations within remembered and imagined mental representations

    Directory of Open Access Journals (Sweden)

    Simon G Gosling

    2013-04-01

    Full Text Available Spatial attention enables us to enhance the processing of items at target locations, at the expense of items presented at irrelevant locations. Many studies have explored the neural correlates of these spatial biases using event-related potentials. More recently some studies have shown that these ERP correlates are also present when subjects search visual short-term memory. This suggests firstly that this type of mental representation retains a spatial organisation that is based upon that of the original percept, and secondly that these attentional biases are flexible and can act to modulate remembered as well as perceptual representations. We aimed to test whether it was necessary for subjects to have actually seen the memoranda at those spatial locations, or whether simply imagining the spatial layout was sufficient to elicit the spatial attention effects. On some trials subjects performed a ‘visual’ search of an array held in visual short-term memory, and on other trials subjects imagined the items at those spatial locations. We found ERP markers of spatial attention in both the memory-search and the imagery-search conditions. However, there were differences between the conditions: the effect in the memory-search condition began earlier and included posterior electrode sites. By contrast, the ERP effect in the imagery-search condition was apparent only over fronto-central electrode sites and emerged slightly later. Nonetheless, our data demonstrate that it is not necessary for subjects to have ever seen the items at spatial locations for neural markers of spatial attention to be elicited; searching an imaginary spatial layout also triggers spatially-specific attention effects in the ERP data.

  18. Novel Scientific Visualization Interfaces for Interactive Information Visualization and Sharing

    Science.gov (United States)

    Demir, I.; Krajewski, W. F.

    2012-12-01

    As geoscientists are confronted with increasingly massive datasets from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate the understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize and share large data sets with general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data, and modify the parameters to create custom views of the data to gain insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component to build comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates information visualization and communication tools in the Iowa Flood Information System (IFIS), developed within the light of these challenges. The IFIS is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to and visualization of flood inundation maps, real-time flood conditions, flood forecasts both short-term and seasonal, and other flood-related data for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and

  19. Indoor Spatial Updating with Reduced Visual Information.

    Science.gov (United States)

    Legge, Gordon E; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M

    2016-01-01

    Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment.

  20. Indoor Spatial Updating with Reduced Visual Information.

    Directory of Open Access Journals (Sweden)

    Gordon E Legge

    Full Text Available Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment.

  1. Brain correlates of automatic visual change detection.

    Science.gov (United States)

    Cléry, H; Andersson, F; Fonlupt, P; Gomot, M

    2013-07-15

    A number of studies support the presence of visual automatic detection of change, but little is known about the brain generators involved in such processing and about the modulation of brain activity according to the salience of the stimulus. The study presented here was designed to locate the brain activity elicited by unattended visual deviant and novel stimuli using fMRI. Seventeen adult participants were presented with a passive visual oddball sequence while performing a concurrent visual task. Variations in BOLD signal were observed in the modality-specific sensory cortex, but also in non-specific areas involved in preattentional processing of changing events. A degree-of-deviance effect was observed, since novel stimuli elicited more activity in the sensory occipital regions and at the medial frontal site than small changes. These findings could be compared to those obtained in the auditory modality and might suggest a "general" change detection process operating in several sensory modalities. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. The magnetic source imaging of pattern reversal stimuli of various visual fields

    International Nuclear Information System (INIS)

    Zhang Shuqian; Ye Yufang; Sun Jilin; Wu Jie; Jia Xiuchuan; Li Sumin; Wu Jing; Zhao Huadong; Liu Lianxiang; Wu Yujin

    2006-01-01

    Objective: To characterize the visual evoked fields of normal volunteers for full-field, vertical half-field and quadrant-field stimulation, and their dipole locations, using magnetoencephalography. Methods: The visual evoked fields for full-field, vertical half-field and quadrant-field stimulation were recorded in 13 subjects. The latency, dipole strength and dipole locations on the x, y and z axes were analyzed. The exact locations of the dipoles were determined by overlaying them on MR images. Results: The isocontour map of M100 for full-field stimulation demonstrated two separate sources. The two M100 dipoles had the same peak latency but different strengths. For vertical half-field and quadrant-field stimulation, the evoked magnetic fields of M100 were distributed contralateral to the stimulated side. The M100 dipoles on the z-axis for lower quadrant-field stimulation were located significantly higher than those for upper quadrant-field stimulation. The Z value median of the left upper quadrant was 49.6 (35.1-72.8) mm and that of the left lower quadrant was 53.5 (44.8-76.3) mm; the difference between the two left-quadrant medians, 3.9 mm, was significant (P<0.05). The Z value median of the right upper quadrant was 40.0 (34.8-44.6) mm and that of the right lower quadrant was 53.8 (40.6-61.3) mm; the difference between the two right-quadrant medians, 13.8 mm, was also significant (P<0.05). Although the visual evoked field waveforms and dipole locations demonstrated large intra- and inter-individual variations, the dipole of M100 was mainly located in Brodmann area 17, which includes the superior lingual gyrus, posterior cuneus-lingual gyrus and inferior cuneus gyrus. Conclusion: The M100 of visual evoked fields for pattern reversal stimulation is mainly generated by neurons of the striate cortex contralateral to the stimulated side, at the lateral bottom of the calcarine fissure. (authors)

  3. Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.

    Science.gov (United States)

    Donné, Simon; Goossens, Bart; Philips, Wilfried

    2017-08-23

    Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations for each of the snapshots to be known: the disparity of an object between images is related to both the distance of the camera to the object and the distance between the camera positions for both images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually for the rectification of epipolar plane images and quantitatively with its effect on the resulting depth estimation. Our proposed approach yields a valid alternative for sparse techniques, while still being executed in a reasonable time on a graphics card due to its highly parallelizable nature.
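
    The coupling the abstract points to, between object depth and camera spacing, follows from the rectified stereo relation d = f * B / Z. The tiny sketch below just tabulates that relation; the focal length and the baseline/depth values are illustrative only.

        # Disparity for a rectified linear camera trajectory: d = f * B / Z.
        f = 800.0                      # focal length in pixels (assumed)
        for B in (0.05, 0.10, 0.20):   # spacing between camera positions (m)
            for Z in (2.0, 4.0, 8.0):  # object depth (m)
                d = f * B / Z          # disparity in pixels
                print(f"B={B:.2f} m  Z={Z:.1f} m  disparity={d:6.1f} px")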

  4. Visual Iconicity Across Sign Languages: Large-Scale Automated Video Analysis of Iconic Articulators and Locations

    Science.gov (United States)

    Östling, Robert; Börstell, Carl; Courtaux, Servane

    2018-01-01

    We use automatic processing of 120,000 sign videos in 31 different sign languages to show a cross-linguistic pattern for two types of iconic form–meaning relationships in the visual modality. First, we demonstrate that the degree of inherent plurality of concepts, based on individual ratings by non-signers, strongly correlates with the number of hands used in the sign forms encoding the same concepts across sign languages. Second, we show that certain concepts are iconically articulated around specific parts of the body, as predicted by the associational intuitions by non-signers. The implications of our results are both theoretical and methodological. With regard to theoretical implications, we corroborate previous research by demonstrating and quantifying, using a much larger material than previously available, the iconic nature of languages in the visual modality. As for the methodological implications, we show how automatic methods are, in fact, useful for performing large-scale analysis of sign language data, to a high level of accuracy, as indicated by our manual error analysis.

  5. Attended but unseen: visual attention is not sufficient for visual awareness.

    Science.gov (United States)

    Kentridge, R W; Nijboer, T C W; Heywood, C A

    2008-02-12

    Does any one psychological process give rise to visual awareness? One candidate is selective attention: when we attend to something, it seems we always see it. But if attention can selectively enhance our response to an unseen stimulus, then attention cannot be a sufficient precondition for awareness. Kentridge, Heywood & Weiskrantz [Kentridge, R. W., Heywood, C. A., & Weiskrantz, L. (1999). Attention without awareness in blindsight. Proceedings of the Royal Society of London, Series B, 266, 1805-1811; Kentridge, R. W., Heywood, C. A., & Weiskrantz, L. (2004). Spatial attention speeds discrimination without awareness in blindsight. Neuropsychologia, 42, 831-835.] demonstrated just such a dissociation in the blindsight subject GY. Here, we test whether the dissociation generalizes to the normal population. We presented observers with pairs of coloured discs, each masked by the subsequent presentation of a coloured annulus. The discs acted as primes, speeding discrimination of the colour of the annulus when they matched in colour and slowing it when they differed. We show that the location of attention modulated the size of this priming effect. However, the primes were rendered invisible by metacontrast masking and remained unseen despite being attended. Visual attention can therefore facilitate the processing of an invisible target and cannot, as a result, be a sufficient precondition for visual awareness.

  6. Designing Android Based Augmented Reality Location-Based Service Application

    Directory of Open Access Journals (Sweden)

    Alim Hardiansyah

    2018-01-01

    Full Text Available Android is a Linux-based operating system for smartphones. Android provides an open platform on which developers can create their own applications. Among the most widely developed and used applications today are location-based applications, which provide personalized services to mobile device users, customized to their location. Location-based services also give developers an opportunity to develop and increase the value of their services. One of the technologies that can be combined with location-based applications is augmented reality, which merges the virtual world with the real one: with its assistance, information about objects and the surrounding environment can be added to the augmented reality system and presented in digital form. Against this background, the authors implemented these technologies in an Android application as a final project toward a bachelor degree in the Department of Informatics Engineering, Faculty of Information Technology and Visual Communication, Al Kamal Science and Technology Institute. The application can be used to locate schools by means of location-based service technology, with the assistance of navigation applications such as Waze and Google Maps, providing live directions through the smartphone.

  7. Visual attention capacity after right hemisphere lesions

    DEFF Research Database (Denmark)

    Habekost, Thomas; Rostrup, Egill

    2007-01-01

    Recently there has been a growing interest in visual short-term memory (VSTM) including the neural basis of the function. Processing speed, another main aspect of visual attention capacity, has received less investigation. For both cognitive functions human lesion studies are sparse. We used...... statistically to lesion location and size measured by MRI. Visual processing speed was impaired in the contralesional hemifield for most patients, but typically preserved ipsilesionally, even after large cortico-subcortical lesions. When bilateral deficits in processing speed occurred, they were related...... to damage in the right middle frontal gyrus or leukoaraiosis. The storage capacity of VSTM was also normal for most patients, but deficits were found after severe leukoaraiosis or large strokes extending deep into white matter. Thus, the study demonstrated the importance of white-matter connectivity...

  8. Evidence for a basal temporal visual language center: cortical stimulation producing pure alexia.

    Science.gov (United States)

    Mani, J; Diehl, B; Piao, Z; Schuele, S S; Lapresto, E; Liu, P; Nair, D R; Dinner, D S; Lüders, H O

    2008-11-11

    Dejerine and Benson and Geschwind postulated disconnection of the dominant angular gyrus from both visual association cortices as the basis for pure alexia, emphasizing disruption of white matter tracts in the dominant temporooccipital region. Recently, functional imaging studies have provided evidence for direct participation of the basal temporal and occipital cortices in the cognitive process of reading. The exact location and function of these areas remain a matter of debate. Our aim was to confirm the participation of the basal temporal region in reading. Extraoperative electrical stimulation of the dominant hemisphere was performed in three subjects using subdural electrodes, as part of presurgical evaluation for refractory epilepsy. Pure alexia was reproduced during cortical stimulation of the dominant posterior fusiform and inferior temporal gyri in all three patients. Stimulation resulted in selective reading difficulty with intact auditory comprehension and writing. Reading difficulty involved sentences and words, with intact letter-by-letter reading. Picture naming difficulties were also noted at some electrodes. This region is located posterior to and contiguous with the basal temporal language area (BTLA), where stimulation resulted in global language dysfunction in the visual and auditory realms. The location corresponded with the visual word form area described on functional MRI. These observations support the existence of a visual language area in the dominant fusiform and occipitotemporal gyri, contiguous with the basal temporal language area. A portion of the visual language area was exclusively involved in lexical processing, while the other part of this region processed both lexical and nonlexical symbols.

  9. CellMap visualizes protein-protein interactions and subcellular localization

    Science.gov (United States)

    Dallago, Christian; Goldberg, Tatyana; Andrade-Navarro, Miguel Angel; Alanis-Lobato, Gregorio; Rost, Burkhard

    2018-01-01

    Many tools visualize protein-protein interaction (PPI) networks. The tool introduced here, CellMap, adds one crucial novelty by visualizing PPI networks in the context of subcellular localization, i.e. the location in the cell or cellular component in which a PPI happens. Users can upload images of cells and define areas of interest against which PPIs for selected proteins are displayed (by default on a cartoon of a cell). Annotations of localization are provided by the user or through our in-house database. The visualizer and server are written in JavaScript, making CellMap easy to customize and to extend by researchers and developers. PMID:29497493

  10. Seeing your error alters my pointing: observing systematic pointing errors induces sensori-motor after-effects.

    Directory of Open Access Journals (Sweden)

    Roberta Ronchi

    Full Text Available During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot visual target locations to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person-perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, exposure to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors.

  11. Visually induced nausea causes characteristic changes in cerebral, autonomic and endocrine function in humans.

    Science.gov (United States)

    Farmer, Adam D; Ban, Vin F; Coen, Steven J; Sanger, Gareth J; Barker, Gareth J; Gresty, Michael A; Giampietro, Vincent P; Williams, Steven C; Webb, Dominic L; Hellström, Per M; Andrews, Paul L R; Aziz, Qasim

    2015-03-01

    An integrated understanding of the physiological mechanisms involved in the genesis of nausea remains lacking. We aimed to describe the psychophysiological changes accompanying visually induced motion sickness, using a motion video, hypothesizing that differences would be evident between subjects who developed nausea in comparison to those who did not. A motion, or a control, stimulus was presented to 98 healthy subjects in a randomized crossover design. Validated questionnaires and a visual analogue scale (VAS) were used for the assessment of anxiety and nausea. Autonomic and electrogastrographic activity were measured at baseline and continuously thereafter. Plasma vasopressin and ghrelin were measured in response to the motion video. Subjects were stratified into quartiles based on VAS nausea scores, with the upper and lower quartiles considered to be nausea sensitive and resistant, respectively. Twenty-eight subjects were exposed to the motion video during functional neuroimaging. During the motion video, nausea-sensitive subjects had lower normogastria/tachygastria ratio and cardiac vagal tone but higher cardiac sympathetic index in comparison to the control video. Furthermore, nausea-sensitive subjects had decreased plasma ghrelin and demonstrated increased activity of the left anterior cingulate cortex. Nausea VAS scores correlated positively with plasma vasopressin and left inferior frontal and middle occipital gyri activity and correlated negatively with plasma ghrelin and brain activity in the right cerebellar tonsil, declive, culmen, lingual gyrus and cuneus. This study demonstrates that the subjective sensation of nausea is associated with objective changes in autonomic, endocrine and brain networks, and thus identifies potential objective biomarkers and targets for therapeutic interventions. © 2015 The Authors. The Journal of Physiology © 2015 The Physiological Society.

  12. TE-Locate: A Tool to Locate and Group Transposable Element Occurrences Using Paired-End Next-Generation Sequencing Data.

    Science.gov (United States)

    Platzer, Alexander; Nizhynska, Viktoria; Long, Quan

    2012-09-12

    Transposable elements (TEs) are common mobile DNA elements present in nearly all genomes. Since the movement of TEs within a genome can sometimes have phenotypic consequences, an accurate report of TE actions is desirable. To this end, we developed TE-Locate, a computational tool that uses paired-end reads to identify the novel locations of known TEs. TE-Locate can utilize either a database of TE sequences, or annotated TEs within the reference sequence of interest. This makes TE-Locate useful in the search for any mobile sequence, including retrotransposed gene copies. One major concern is to act on the correct hierarchy level, thereby avoiding an incorrect calling of a single insertion as multiple events of TEs with high sequence similarity. We used the (super)family level, but TE-Locate can also use any other level, right down to the individual transposable element. As an example of analysis with TE-Locate, we used the Swedish population in the 1,001 Arabidopsis genomes project, and presented the biological insights gained from the novel TEs, including the associations between different TE superfamilies. The program is freely available, and the URL is provided at the end of the paper.

  13. TE-Locate: A Tool to Locate and Group Transposable Element Occurrences Using Paired-End Next-Generation Sequencing Data

    Directory of Open Access Journals (Sweden)

    Quan Long

    2012-09-01

    Full Text Available Transposable elements (TEs) are common mobile DNA elements present in nearly all genomes. Since the movement of TEs within a genome can sometimes have phenotypic consequences, an accurate report of TE actions is desirable. To this end, we developed TE-Locate, a computational tool that uses paired-end reads to identify the novel locations of known TEs. TE-Locate can utilize either a database of TE sequences, or annotated TEs within the reference sequence of interest. This makes TE-Locate useful in the search for any mobile sequence, including retrotransposed gene copies. One major concern is to act on the correct hierarchy level, thereby avoiding an incorrect calling of a single insertion as multiple events of TEs with high sequence similarity. We used the (super)family level, but TE-Locate can also use any other level, right down to the individual transposable element. As an example of analysis with TE-Locate, we used the Swedish population in the 1,001 Arabidopsis genomes project, and presented the biological insights gained from the novel TEs, including the associations between different TE superfamilies. The program is freely available, and the URL is provided at the end of the paper.
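
    A minimal sketch of the core idea behind paired-end TE detection (hypothetical read records and thresholds; this is not TE-Locate's actual algorithm or file format): read pairs in which one mate maps uniquely to the reference and the other maps to a known TE (super)family are collected, and the reference-anchored mates are clustered into candidate insertion sites.

```python
from collections import defaultdict

# Discordant pairs summarized as (chromosome, anchor position, TE family);
# in practice these would be extracted from an aligned BAM file.
discordant = [
    ("chr1", 100_200, "Copia"), ("chr1", 100_420, "Copia"),
    ("chr1", 100_510, "Copia"), ("chr2", 55_000, "Gypsy"),
    ("chr2", 55_130, "Gypsy"),
]

def cluster_insertions(pairs, max_gap=1000, min_support=2):
    """Group anchored mates by chromosome and TE family, then merge positions
    closer than max_gap into a single candidate insertion call."""
    grouped = defaultdict(list)
    for chrom, pos, fam in pairs:
        grouped[(chrom, fam)].append(pos)
    calls = []
    for (chrom, fam), positions in grouped.items():
        positions.sort()
        cluster = [positions[0]]
        for pos in positions[1:] + [None]:          # None flushes the last cluster
            if pos is not None and pos - cluster[-1] <= max_gap:
                cluster.append(pos)
            else:
                if len(cluster) >= min_support:
                    calls.append((chrom, fam, sum(cluster) // len(cluster), len(cluster)))
                if pos is not None:
                    cluster = [pos]
    return calls

for call in cluster_insertions(discordant):
    print(call)   # (chromosome, family, estimated insertion site, supporting pairs)
```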

  14. Sex differences in visual-spatial working memory: A meta-analysis.

    Science.gov (United States)

    Voyer, Daniel; Voyer, Susan D; Saint-Aubin, Jean

    2017-04-01

    Visual-spatial working memory measures are widely used in clinical and experimental settings. Furthermore, it has been argued that the male advantage in spatial abilities can be explained by a sex difference in visual-spatial working memory. Therefore, sex differences in visual-spatial working memory have important implications for research, theory, and practice, but they have yet to be quantified. The present meta-analysis quantified the magnitude of sex differences in visual-spatial working memory and examined variables that might moderate them. The analysis used a set of 180 effect sizes from healthy males and females drawn from 98 samples ranging in mean age from 3 to 86 years. Multilevel meta-analysis was used on the overall data set to account for non-independent effect sizes. The data also were analyzed in separate task subgroups by means of multilevel and mixed-effects models. Results showed a small but significant male advantage (mean d = 0.155, 95% confidence interval = 0.087-0.223). All the tasks produced a male advantage, except for memory for location, where a female advantage emerged. Age of the participants was a significant moderator, indicating that sex differences in visual-spatial working memory first appeared in the 13-17 years age group. Removing memory-for-location tasks from the sample affected the pattern of significant moderators. The present results indicate a male advantage in visual-spatial working memory, although age and the specific task modulate the magnitude and direction of the effects. Implications for clinical applications, cognitive model building, and experimental research are discussed.
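
    For readers unfamiliar with how per-study effects are pooled, here is a toy fixed-effect, inverse-variance weighted pooling of Cohen's d values (all numbers are invented; the analysis in the abstract actually used multilevel models, which additionally handle non-independent effect sizes):

```python
import numpy as np

d = np.array([0.30, 0.05, 0.20, -0.10, 0.25])            # per-study Cohen's d (invented)
n1 = n2 = np.array([40, 60, 35, 50, 45])                  # per-group sample sizes (invented)

var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))    # approximate sampling variance of d
w = 1.0 / var_d                                           # inverse-variance weights
d_bar = np.sum(w * d) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled d = {d_bar:.3f}, 95% CI = [{d_bar - 1.96 * se:.3f}, {d_bar + 1.96 * se:.3f}]")
```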

  15. Attention to multiple locations is limited by spatial working memory capacity.

    Science.gov (United States)

    Close, Alex; Sapir, Ayelet; Burnett, Katherine; d'Avossa, Giovanni

    2014-08-21

    What limits the ability to attend several locations simultaneously? There are two possibilities: Either attention cannot be divided without incurring a cost, or spatial memory is limited and observers forget which locations to monitor. We compared motion discrimination when attention was directed to one or multiple locations by briefly presented central cues. The cues were matched for the amount of spatial information they provided. Several random dot kinematograms (RDKs) followed the spatial cues; one of them contained task-relevant, coherent motion. When four RDKs were presented, discrimination accuracy was identical when one and two locations were indicated by equally informative cues. However, when six RDKs were presented, discrimination accuracy was higher following one rather than multiple location cues. We examined whether memory of the cued locations was diminished under these conditions. Recall of the cued locations was tested when participants attended the cued locations and when they did not attend the cued locations. Recall was inaccurate only when the cued locations were attended. Finally, visually marking the cued locations, following one and multiple location cues, equalized discrimination performance, suggesting that participants could attend multiple locations when they did not have to remember which ones to attend. We conclude that endogenously dividing attention between multiple locations is limited by inaccurate recall of the attended locations and that attention poses separate demands on the same central processes used to remember spatial information, even when the locations attended and those held in memory are the same. © 2014 ARVO.

  16. Quantitatively Measured Anatomic Location and Volume of Optic Disc Drusen: An Enhanced Depth Imaging Optical Coherence Tomography Study

    DEFF Research Database (Denmark)

    Malmqvist, Lasse; Lindberg, Anne-Sofie Wessel; Dahl, Vedrana Andersen

    2017-01-01

    function using automated perimetric mean deviation (MD) and multifocal visual evoked potentials. Increased age (P = 0.015); larger ODD volume (P = 0.002); and more superficial anatomic ODD location (P = 0.007) were found in patients with ODD visible by ophthalmoscopy compared to patients with buried ODD.......025) and had a higher effect on MD when compared to retinal nerve fiber layer thickness. Large ODD volume is associated with optic nerve dysfunction. The worse visual field defects associated with visible ODD should only be ascribed to larger ODD volume and not to a more superficial anatomic ODD location....

  17. A Neural Circuit for Auditory Dominance over Visual Perception.

    Science.gov (United States)

    Song, You-Hyang; Kim, Jae-Hyun; Jeong, Hye-Won; Choi, Ilsong; Jeong, Daun; Kim, Kwansoo; Lee, Seung-Hee

    2017-02-22

    When conflicts occur during integration of visual and auditory information, one modality often dominates the other, but the underlying neural circuit mechanism remains unclear. Using auditory-visual discrimination tasks for head-fixed mice, we found that audition dominates vision in a process mediated by interaction between inputs from the primary visual (VC) and auditory (AC) cortices in the posterior parietal cortex (PTLp). Co-activation of the VC and AC suppresses VC-induced PTLp responses, leaving AC-induced responses. Furthermore, parvalbumin-positive (PV+) interneurons in the PTLp mainly receive AC inputs, and muscimol inactivation of the PTLp or optogenetic inhibition of its PV+ neurons abolishes auditory dominance in the resolution of cross-modal sensory conflicts without affecting either sensory perception. Conversely, optogenetic activation of PV+ neurons in the PTLp enhances the auditory dominance. Thus, our results demonstrate that AC input-specific feedforward inhibition of VC inputs in the PTLp is responsible for the auditory dominance during cross-modal integration. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Short-term visual memory for location in depth: A U-shaped function of time.

    Science.gov (United States)

    Reeves, Adam; Lei, Quan

    2017-10-01

    Short-term visual memory was studied by displaying arrays of four or five numerals, each numeral in its own depth plane, followed after various delays by an arrow cue shown in one of the depth planes. Subjects reported the numeral at the depth cued by the arrow. Accuracy fell with increasing cue delay for the first 500 ms or so, and then recovered almost fully. This dipping pattern contrasts with the usual iconic decay observed for memory traces. The dip occurred with or without a verbal or color-shape retention load on working memory. In contrast, accuracy did not change with delay when a tonal cue replaced the arrow cue. We hypothesized that information concerning the depths of the numerals decays over time in sensory memory, but that cued recall is aided later on by transfer to a visual memory specialized for depth. This transfer is sufficiently rapid with a tonal cue to compensate for the sensory decay, but it is slowed by the need to tag the arrow cue's depth relative to the depths of the numerals, exposing a dip when sensation has decayed and transfer is not yet complete. A model with a fixed rate of sensory decay and varied transfer rates across individuals captures the dip as well as the cue modality effect.
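
    A toy two-process account consistent with the hypothesis in this abstract (all parameter values are assumed for illustration, not the authors' fits): a sensory trace that decays quickly plus a transfer into a durable depth memory that builds up more slowly produces a dip in accuracy at intermediate cue delays followed by recovery.

```python
import numpy as np

def recall_accuracy(t_ms, a_sensory=0.35, tau_decay=250.0,
                    a_transfer=0.30, tau_transfer=900.0, baseline=0.55):
    """Accuracy = baseline + fast-decaying sensory trace + slowly accumulating
    transferred memory. When transfer lags decay, the sum dips and then recovers."""
    sensory = a_sensory * np.exp(-t_ms / tau_decay)
    transferred = a_transfer * (1.0 - np.exp(-t_ms / tau_transfer))
    return baseline + sensory + transferred

delays = np.array([0, 100, 250, 500, 1000, 2000, 4000], dtype=float)
for d, acc in zip(delays, recall_accuracy(delays)):
    print(f"cue delay {d:6.0f} ms -> predicted accuracy {acc:.3f}")
```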

  19. Probability cueing of distractor locations: both intertrial facilitation and statistical learning mediate interference reduction.

    Science.gov (United States)

    Goschy, Harriet; Bakos, Sarolta; Müller, Hermann J; Zehetleitner, Michael

    2014-01-01

    Targets in a visual search task are detected faster if they appear in a probable target region as compared to a less probable target region, an effect which has been termed "probability cueing." The present study investigated whether probability cueing cannot only speed up target detection, but also minimize distraction by distractors in probable distractor regions as compared to distractors in less probable distractor regions. To this end, three visual search experiments with a salient, but task-irrelevant, distractor ("additional singleton") were conducted. Experiment 1 demonstrated that observers can utilize uneven spatial distractor distributions to selectively reduce interference by distractors in frequent distractor regions as compared to distractors in rare distractor regions. Experiments 2 and 3 showed that intertrial facilitation, i.e., distractor position repetitions, and statistical learning (independent of distractor position repetitions) both contribute to the probability cueing effect for distractor locations. Taken together, the present results demonstrate that probability cueing of distractor locations has the potential to serve as a strong attentional cue for the shielding of likely distractor locations.

  20. Visual management of large scale data mining projects.

    Science.gov (United States)

    Shah, I; Hunter, L

    2000-01-01

    This paper describes a unified framework for visualizing the preparations for, and results of, hundreds of machine learning experiments. These experiments were designed to improve the accuracy of enzyme functional predictions from sequence, and in many cases were successful. Our system provides graphical user interfaces for defining and exploring training datasets and various representational alternatives, for inspecting the hypotheses induced by various types of learning algorithms, for visualizing the global results, and for inspecting in detail results for specific training sets (functions) and examples (proteins). The visualization tools serve as a navigational aid through a large amount of sequence data and induced knowledge. They provided significant help in understanding both the significance and the underlying biological explanations of our successes and failures. Using these visualizations it was possible to efficiently identify weaknesses of the modular sequence representations and induction algorithms which suggest better learning strategies. The context in which our data mining visualization toolkit was developed was the problem of accurately predicting enzyme function from protein sequence data. Previous work demonstrated that approximately 6% of enzyme protein sequences are likely to be assigned incorrect functions on the basis of sequence similarity alone. In order to test the hypothesis that more detailed sequence analysis using machine learning techniques and modular domain representations could address many of these failures, we designed a series of more than 250 experiments using information-theoretic decision tree induction and naive Bayesian learning on local sequence domain representations of problematic enzyme function classes. In more than half of these cases, our methods were able to perfectly discriminate among various possible functions of similar sequences. We developed and tested our visualization techniques on this application.
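
    A minimal sketch of the two learner types named above applied to a modular sequence representation (synthetic presence/absence features and a toy label rule; this is not the authors' enzyme dataset or experimental pipeline):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 40))        # 200 "proteins" x 40 domain presence/absence features
y = (((X[:, 0] == 1) & (X[:, 3] == 0)) | (X[:, 7] == 1)).astype(int)   # toy stand-in for a function label

learners = [
    ("decision tree (entropy)", DecisionTreeClassifier(criterion="entropy", random_state=0)),
    ("naive Bayes", BernoulliNB()),
]
for name, clf in learners:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:24s} mean cross-validated accuracy = {scores.mean():.2f}")
```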

  1. Visualizing Debugging Activity in Source Code Repositories

    OpenAIRE

    Voinea, Lucian; Telea, Alexandru

    2007-01-01

    We present the use of the CVSgrab visualization tool for understanding the debugging activity in the Mozilla project. We show how to display the distribution of different bug types over the project structure, locate project components which undergo heavy debugging activity, and get insight in the bug evolution in time.

  2. Accumulation and Decay of Visual Capture and the Ventriloquism Aftereffect Caused by Brief Audio-Visual Disparities

    Science.gov (United States)

    Bosen, Adam K.; Fleming, Justin T.; Allen, Paul D.; O’Neill, William E.; Paige, Gary D.

    2016-01-01

    Visual capture and the ventriloquism aftereffect resolve spatial disparities of incongruent auditory-visual (AV) objects by shifting auditory spatial perception to align with vision. Here, we demonstrated the distinct temporal characteristics of visual capture and the ventriloquism aftereffect in response to brief AV disparities. In a set of experiments, subjects localized either the auditory component of AV targets (A within AV) or a second sound presented at varying delays (1-20s) after AV exposure (A2 after AV). AV targets were trains of brief presentations (1 or 20), covering a ±30° azimuthal range, and with ±8° (R or L) disparity. We found that the magnitude of visual capture generally reached its peak within a single AV pair and did not dissipate with time, while the ventriloquism aftereffect accumulated with repetitions of AV pairs and dissipated with time. Additionally, the magnitude of the auditory shift induced by each phenomenon was uncorrelated across listeners and visual capture was unaffected by subsequent auditory targets, indicating that visual capture and the ventriloquism aftereffect are separate mechanisms with distinct effects on auditory spatial perception. Our results indicate that visual capture is a ‘sample-and-hold’ process that binds related objects and stores the combined percept in memory, whereas the ventriloquism aftereffect is a ‘leaky integrator’ process that accumulates with experience and decays with time to compensate for cross-modal disparities. PMID:27837258
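
    An illustrative contrast between the two processes proposed in the final sentence of this abstract (parameter values are assumed, not the authors' fitted model): visual capture behaves like a sample-and-hold shift that is set by a single AV pair and persists, whereas the ventriloquism aftereffect behaves like a leaky integrator that grows with repeated exposures and decays between them.

```python
import numpy as np

def sample_and_hold(disparity_deg, n_pairs, gain=0.6):
    """Visual capture: reaches its full shift on the first AV pair and holds it."""
    return np.full(n_pairs, gain * disparity_deg)

def leaky_integrator(disparity_deg, n_pairs, gain=0.05, leak=0.02, dt_s=1.0):
    """Ventriloquism aftereffect: accumulates toward the disparity with each
    exposure and leaks (decays) during the interval between exposures."""
    shift, out = 0.0, []
    for _ in range(n_pairs):
        shift += gain * (disparity_deg - shift)   # accumulate with experience
        shift *= np.exp(-leak * dt_s)             # decay with time
        out.append(shift)
    return np.array(out)

print(sample_and_hold(8.0, 5))                    # immediate, constant shift
print(np.round(leaky_integrator(8.0, 20), 2))     # gradual build-up across AV pairs
```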

  3. Visual areas become less engaged in associative recall following memory stabilization.

    Science.gov (United States)

    Nieuwenhuis, Ingrid L C; Takashima, Atsuko; Oostenveld, Robert; Fernández, Guillén; Jensen, Ole

    2008-04-15

    Numerous studies have focused on changes in the activity in the hippocampus and higher association areas with consolidation and memory stabilization. Even though perceptual areas are engaged in memory recall, little is known about how memory stabilization is reflected in those areas. Using magnetoencephalography (MEG) we investigated changes in visual areas with memory stabilization. Subjects were trained on associating a face to one of eight locations. The first set of associations ('stabilized') was learned in three sessions distributed over a week. The second set ('labile') was learned in one session just prior to the MEG measurement. In the recall session only the face was presented and subjects had to indicate the correct location using a joystick. The MEG data revealed robust gamma activity during recall, which started in early visual cortex and propagated to higher visual and parietal brain areas. The occipital gamma power was higher for the labile than the stabilized condition (time=0.65-0.9 s). Also the event-related field strength was higher during recall of labile than stabilized associations (time=0.59-1.5 s). We propose that recall of the spatial associations prior to memory stabilization involves a top-down process relying on reconstructing learned representations in visual areas. This process is reflected in gamma band activity consistent with the notion that neuronal synchronization in the gamma band is required for visual representations. More direct synaptic connections are formed with memory stabilization, thus decreasing the dependence on visual areas.

  4. Structural reorganization of the early visual cortex following Braille training in sighted adults.

    Science.gov (United States)

    Bola, Łukasz; Siuda-Krzywicka, Katarzyna; Paplińska, Małgorzata; Sumera, Ewa; Zimmermann, Maria; Jednoróg, Katarzyna; Marchewka, Artur; Szwed, Marcin

    2017-12-12

    Training can induce cross-modal plasticity in the human cortex. A well-known example of this phenomenon is the recruitment of visual areas for tactile and auditory processing. It remains unclear to what extent such plasticity is associated with changes in anatomy. Here we enrolled 29 sighted adults into a nine-month tactile Braille-reading training, and used voxel-based morphometry and diffusion tensor imaging to describe the resulting anatomical changes. In addition, we collected resting-state fMRI data to relate these changes to functional connectivity between visual and somatosensory-motor cortices. Following Braille-training, we observed substantial grey and white matter reorganization in the anterior part of early visual cortex (peripheral visual field). Moreover, relative to its posterior, foveal part, the peripheral representation of early visual cortex had stronger functional connections to somatosensory and motor cortices even before the onset of training. Previous studies show that the early visual cortex can be functionally recruited for tactile discrimination, including recognition of Braille characters. Our results demonstrate that reorganization in this region induced by tactile training can also be anatomical. This change most likely reflects a strengthening of existing connectivity between the peripheral visual cortex and somatosensory cortices, which suggests a putative mechanism for cross-modal recruitment of visual areas.

  5. A visualization study of flow-induced acoustic resonance in a branched pipe

    International Nuclear Information System (INIS)

    Li, Yanrong; Someya, Satoshi; Okamoto, Koji

    2008-01-01

    Systems with closed side-branches are liable to an excitation of sound, known as cavity tones. This may occur in pipe branches leading to safety valves or to boiler relief valves. The outbreak mechanism of the cavity tone has been established by phase-averaged measurements in previous research, while the relation between sound propagation and the flow field remains unclear because of the difficulty of detecting the instantaneous pressure field. High-time-resolved PIV offers the possibility of analyzing the pressure field and the relation mentioned above. In this report, flow-induced acoustic resonances of a piping system containing closed side-branches were investigated experimentally. A high-time-resolved PIV technique was applied to measure the gas flow in a cavity tone. Air flow containing an oil mist as tracer particles was measured using a high-frequency pulse laser and a high-speed camera. The present investigation of the coaxial closed side-branches is the first rudimentary study to measure the flow field two-dimensionally, simultaneously with pressure measurements at multiple points, and to visualize the fluid flow in the cross-section by using PIV. The fluid flows at different points in the cavity interact with some phase differences, and this relation should be clarified. (author)

  6. Comparison of Brain Activation Images Associated with Sexual Arousal Induced by Visual Stimulation and SP6 Acupuncture: fMRI at 3 Tesla

    International Nuclear Information System (INIS)

    Choi, Nam Gil; Han, Jae Bok; Jang, Seong Joo

    2009-01-01

    This study was performed not only to compare the brain activation regions associated with sexual arousal induced by visual stimulation and SP6 acupuncture, but also to evaluate their differential neuro-anatomical mechanisms in healthy women, using functional magnetic resonance imaging (fMRI) at 3 Tesla (T). A total of 21 healthy right-handed female volunteers (mean age 22 years, range 19 to 32) underwent fMRI on a 3T MR scanner. The stimulation paradigm for sexual arousal consisted of two alternating periods of rest and activation. It began with a 1-minute rest period, followed by 3 minutes of stimulation with either an erotic video film or SP6 acupuncture, and ended with a 1-minute rest. In addition, a comparative study of the brain activation patterns between an acupoint and a sham point near GB37 was performed. The fMRI data were obtained from 20 slices parallel to the AC-PC line on an axial plane, giving a total of 2,000 images. The mean activation maps were constructed and analyzed by using the statistical parametric mapping (SPM99) software. Compared with the sham point, the acupoint showed 5 times and 2 times higher activity in the neocortex and limbic system, respectively. Note that brain activation in response to stimulation of the sham point was not observed in regions including the HTHL in the diencephalon, the GLO and AMYG in the basal ganglia, and the SMG in the parietal lobe. In the comparative study of visual stimulation vs. SP6 acupuncture, the mean activation ratios of the two stimuli were not significantly different in either the neocortex or the limbic system (p < 0.05). The mean activities induced by both stimuli were not significantly different in the neocortex, whereas the acupunctural stimulation showed higher activity in the limbic system (p < 0.05). This study compared the differential brain activation patterns and the neural mechanisms for sexual arousal induced by visual stimulation and SP6 acupuncture by using 3T fMRI. These findings

  7. Comparison of Brain Activation Images Associated with Sexual Arousal Induced by Visual Stimulation and SP6 Acupuncture: fMRI at 3 Tesla

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Nam Gil [Dept. of Radiology, Chonnam National University Hospital, Gwangju (Korea, Republic of); Han, Jae Bok; Jang, Seong Joo [Dept. of Radiology, Dongshin University, Naju (Korea, Republic of)

    2009-06-15

    This study was performed not only to compare the brain activation regions associated with sexual arousal induced by visual stimulation and SP6 acupuncture, but also to evaluate their differential neuro-anatomical mechanisms in healthy women, using functional magnetic resonance imaging (fMRI) at 3 Tesla (T). A total of 21 healthy right-handed female volunteers (mean age 22 years, range 19 to 32) underwent fMRI on a 3T MR scanner. The stimulation paradigm for sexual arousal consisted of two alternating periods of rest and activation. It began with a 1-minute rest period, followed by 3 minutes of stimulation with either an erotic video film or SP6 acupuncture, and ended with a 1-minute rest. In addition, a comparative study of the brain activation patterns between an acupoint and a sham point near GB37 was performed. The fMRI data were obtained from 20 slices parallel to the AC-PC line on an axial plane, giving a total of 2,000 images. The mean activation maps were constructed and analyzed by using the statistical parametric mapping (SPM99) software. Compared with the sham point, the acupoint showed 5 times and 2 times higher activity in the neocortex and limbic system, respectively. Note that brain activation in response to stimulation of the sham point was not observed in regions including the HTHL in the diencephalon, the GLO and AMYG in the basal ganglia, and the SMG in the parietal lobe. In the comparative study of visual stimulation vs. SP6 acupuncture, the mean activation ratios of the two stimuli were not significantly different in either the neocortex or the limbic system (p < 0.05). The mean activities induced by both stimuli were not significantly different in the neocortex, whereas the acupunctural stimulation showed higher activity in the limbic system (p < 0.05). This study compared the differential brain activation patterns and the neural mechanisms for sexual arousal induced by visual stimulation and SP6 acupuncture by using 3T fMRI. These findings

  8. Visualizing Tensor Normal Distributions at Multiple Levels of Detail.

    Science.gov (United States)

    Abbasloo, Amin; Wiens, Vitalis; Hermann, Max; Schultz, Thomas

    2016-01-01

    Despite the widely recognized importance of symmetric second order tensor fields in medicine and engineering, the visualization of data uncertainty in tensor fields is still in its infancy. A recently proposed tensorial normal distribution, involving a fourth order covariance tensor, provides a mathematical description of how different aspects of the tensor field, such as trace, anisotropy, or orientation, vary and covary at each point. However, this wealth of information is far too rich for a human analyst to take in at a single glance, and no suitable visualization tools are available. We propose a novel approach that facilitates visual analysis of tensor covariance at multiple levels of detail. We start with a visual abstraction that uses slice views and direct volume rendering to indicate large-scale changes in the covariance structure, and locations with high overall variance. We then provide tools for interactive exploration, making it possible to drill down into different types of variability, such as in shape or orientation. Finally, we allow the analyst to focus on specific locations of the field, and provide tensor glyph animations and overlays that intuitively depict confidence intervals at those points. Our system is demonstrated by investigating the effects of measurement noise on diffusion tensor MRI, and by analyzing two ensembles of stress tensor fields from solid mechanics.
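
    A minimal sketch of the underlying data model (an assumed 6-component parameterization; this is not the paper's code): a symmetric second-order 3x3 tensor has six independent components, so a tensor normal distribution at one field location can be handled as a 6-D multivariate normal whose 6x6 covariance matrix flattens the fourth-order covariance tensor, and samples drawn from it could feed glyph animations or confidence-interval overlays.

```python
import numpy as np

IDX = [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]   # ordering of the 6 unique components

def to_vector(t):
    """Flatten a symmetric 3x3 tensor into its 6 unique components."""
    return np.array([t[i, j] for i, j in IDX])

def to_tensor(v):
    """Rebuild the symmetric 3x3 tensor from the 6-vector."""
    t = np.zeros((3, 3))
    for k, (i, j) in enumerate(IDX):
        t[i, j] = t[j, i] = v[k]
    return t

mean_tensor = np.diag([1.5, 1.0, 0.5])                   # assumed mean tensor (e.g., diffusion)
A = 0.05 * np.random.default_rng(1).normal(size=(6, 6))
cov6 = A @ A.T + 1e-4 * np.eye(6)                        # assumed positive-definite 6x6 covariance

samples = np.random.default_rng(2).multivariate_normal(to_vector(mean_tensor), cov6, size=3)
for v in samples:
    print(np.round(to_tensor(v), 3))                     # one plausible tensor realization each
```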

  9. Hemispheric differences in electrical and hemodynamic responses during hemifield visual stimulation with graded contrasts.

    Science.gov (United States)

    Si, Juanning; Zhang, Xin; Zhang, Yujin; Jiang, Tianzi

    2017-04-01

    A multimodal neuroimaging technique based on electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) was used with horizontal hemifield visual stimuli with graded contrasts to investigate the retinotopic mapping more fully as well as to explore hemispheric differences in neuronal activity, the hemodynamic response, and the neurovascular coupling relationship in the visual cortex. The fNIRS results showed the expected activation over the contralateral hemisphere for both the left and right hemifield visual stimulations. However, the EEG results presented a paradoxical lateralization, with the maximal response located over the ipsilateral hemisphere but with the polarity inversed components located over the contralateral hemisphere. Our results suggest that the polarity inversion as well as the latency advantage over the contralateral hemisphere cause the amplitude of the VEP over the contralateral hemisphere to be smaller than that over the ipsilateral hemisphere. Both the neuronal and hemodynamic responses changed logarithmically with the level of contrast in the hemifield visual stimulations. Moreover, the amplitudes and latencies of the visual evoked potentials (VEPs) were linearly correlated with the hemodynamic responses despite differences in the slopes.

  10. Influence of front light configuration on the visual conspicuity of motorcycles.

    Science.gov (United States)

    Pinto, Maria; Cavallo, Viola; Saint-Pierre, Guillaume

    2014-01-01

    A recent study (Cavallo and Pinto, 2012) showed that daytime running lights (DRLs) on cars create "visual noise" that interferes with the lighting of motorcycles and affects their visual conspicuity. In the present experiment, we tested three conspicuity enhancements designed to improve motorcycle detectability in a car-DRL environment: a triangle configuration (a central headlight plus two lights located on the rearview mirrors), a helmet configuration (a light located on the motorcyclist's helmet in addition to the central headlight), and a single central yellow headlight. These three front-light configurations were evaluated in comparison to the standard configuration (a single central white headlight). Photographs representing complex urban traffic scenes were presented briefly (for 250ms). The results revealed better motorcycle-detection performance for both the yellow headlight and the helmet configuration than for the standard configuration. The findings suggest some avenues for defining a new visual signature for motorcycles in car-DRL environments. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. The tremorolytic action of beta-adrenoceptor blockers in essential, physiological and isoprenaline-induced tremor is mediated by beta-adrenoceptors located in a deep peripheral compartment.

    Science.gov (United States)

    Abila, B; Wilson, J F; Marshall, R W; Richens, A

    1985-10-01

    The effects of intravenous propranolol 100 micrograms kg-1, sotalol 500 micrograms kg-1, timolol 7.8 micrograms kg-1, atenolol 125 micrograms kg-1 and placebo on essential, physiological and isoprenaline-induced tremor were studied. These beta-adrenoceptor blocker doses produced equal reduction of standing-induced tachycardia in essential tremor patients. Atenolol produced significantly less reduction of essential and isoprenaline-induced tremor than the non-selective drugs, confirming the importance of beta 2-adrenoceptor blockade in these effects. Propranolol and sotalol produced equal maximal inhibition of isoprenaline-induced tremor but propranolol was significantly more effective in reducing essential tremor. The rate of development of the tremorolytic effect was similar in essential, physiological and isoprenaline-induced tremors but all tremor responses developed significantly more slowly than the heart rate responses. It is proposed that these results indicate that the tremorolytic activity of beta-adrenoceptor blockers in essential, physiological and isoprenaline-induced tremor is exerted via the same beta 2-adrenoceptors located in a deep peripheral compartment which is thought to be in the muscle spindles.

  12. Implied motion language can influence visual spatial memory.

    Science.gov (United States)

    Vinson, David W; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

    2017-07-01

    How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

  13. Spatial constancy of attention across eye movements is mediated by the presence of visual objects.

    Science.gov (United States)

    Lisi, Matteo; Cavanagh, Patrick; Zorzi, Marco

    2015-05-01

    Recent studies have shown that attentional facilitation lingers at the retinotopic coordinates of a previously attended position after an eye movement. These results are intriguing, because the retinotopic location becomes behaviorally irrelevant once the eyes have moved. Critically, in these studies participants were asked to maintain attention on a blank location of the screen. In the present study, we examined whether the continuing presence of a visual object at the cued location could affect the allocation of attention across eye movements. We used a trans-saccadic cueing paradigm in which the relevant positions could be defined or not by visual objects (simple square outlines). We find an attentional benefit at the spatiotopic location of the cue only when the object (the placeholder) has been continuously present at that location. We conclude that the presence of an object at the attended location is a critical factor for the maintenance of spatial constancy of attention across eye movements, a finding that helps to reconcile previous conflicting results.

  14. Perceptual learning increases the strength of the earliest signals in visual cortex.

    Science.gov (United States)

    Bao, Min; Yang, Lin; Rios, Cristina; He, Bin; Engel, Stephen A

    2010-11-10

    Training improves performance on most visual tasks. Such perceptual learning can modify how information is read out from, and represented in, later visual areas, but effects on early visual cortex are controversial. In particular, it remains unknown whether learning can reshape neural response properties in early visual areas independent from feedback arising in later cortical areas. Here, we tested whether learning can modify feedforward signals in early visual cortex as measured by the human electroencephalogram. Fourteen subjects were trained for >24 d to detect a diagonal grating pattern in one quadrant of the visual field. Training improved performance, reducing the contrast needed for reliable detection, and also reliably increased the amplitude of the earliest component of the visual evoked potential, the C1. Control orientations and locations showed smaller effects of training. Because the C1 arises rapidly and has a source in early visual cortex, our results suggest that learning can increase early visual area response through local receptive field changes without feedback from later areas.

  15. Visual memory for objects following foveal vision loss.

    Science.gov (United States)

    Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B; Pollmann, Stefan

    2015-09-01

    Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual long-term memory for objects in patients suffering from foveal vision loss due to age-related macular degeneration. We measured patients' change detection sensitivity after a period of free scene exploration monocularly with their worse eye when possible, and under binocular vision, comparing sensitivity and eye movements to matched normal-sighted controls. A highly salient cue was used to capture attention to a nontarget location before a target change occurred in half of the trials, ensuring that change detection relied on memory. Patients' monocular and binocular sensitivity to object change was comparable to controls, even after more than 4 intervening fixations, and not significantly correlated with visual impairment. We conclude that extrafoveal vision suffices for efficient encoding into visual long-term memory. (c) 2015 APA, all rights reserved.

  16. A Visual Analytics Approach for Station-Based Air Quality Data

    Directory of Open Access Journals (Sweden)

    Yi Du

    2016-12-01

    Full Text Available With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.

  17. A Visual Analytics Approach for Station-Based Air Quality Data.

    Science.gov (United States)

    Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui

    2016-12-24

    With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.
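
    A small sketch of the kind of re-aggregation that the calendar and trends views imply (synthetic hourly readings for one hypothetical station; not the paper's system or data): switching the time granularity of the same series is essentially what a self-adaptive calendar-based controller would do as the analyst zooms in and out.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2016-01-01", periods=24 * 90, freq="h")       # 90 days of hourly samples
pm25 = 60 + 25 * np.sin(np.arange(len(idx)) / 24) + rng.normal(0, 10, len(idx))
readings = pd.Series(pm25, index=idx, name="pm25_station_S001")    # hypothetical station ID

daily = readings.resample("D").mean()     # granularity for a calendar view
weekly = readings.resample("W").mean()    # coarser granularity for a trends view
print(daily.head())
print(weekly.head())
```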

  18. Allocentrically implied target locations are updated in an eye-centred reference frame.

    Science.gov (United States)

    Thompson, Aidan A; Glover, Christopher V; Henriques, Denise Y P

    2012-04-18

    When reaching to remembered target locations following an intervening eye movement a systematic pattern of error is found indicating eye-centred updating of visuospatial memory. Here we investigated if implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a "target" location at the midpoint of the stimulus. After determining the implied "target" location from only the allocentric stimuli provided, participants saccaded to an eccentric location, and reached to the remembered "target" location. Irrespective of the type of stimulus reaching errors to these implicit targets are gaze-dependent, and do not differ from those found when reaching to remembered explicit targets. Implicit target locations are coded and updated as a function of relative gaze direction with respect to those implied locations just as explicit targets are, even though no target is specifically represented. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  19. Biases in Visual, Auditory, and Audiovisual Perception of Space

    Science.gov (United States)

    Odegaard, Brian; Wozny, David R.; Shams, Ladan

    2015-01-01

    Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only improves the

  20. Biases in Visual, Auditory, and Audiovisual Perception of Space.

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2015-12-01

    Full Text Available Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only
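
    A compact sketch of a Bayesian causal-inference observer of the kind named in the two preceding records (the prior mean, noise widths and prior probability of a common cause below are assumed for illustration, not values fitted to the study's data): the model weighs the evidence that the auditory and visual cues came from one source against the evidence for two sources, and shifts the location estimate accordingly.

```python
import numpy as np

def bci_visual_estimate(x_v, x_a, sigma_v=2.0, sigma_a=8.0,
                        mu_p=0.0, sigma_p=20.0, p_common=0.5):
    """Return (posterior probability of a common cause, visual location estimate)."""
    vv, va, vp = sigma_v**2, sigma_a**2, sigma_p**2

    # Likelihood of both measurements given one common source (source integrated out)
    denom1 = vv * va + vv * vp + va * vp
    like_c1 = np.exp(-0.5 * ((x_v - x_a)**2 * vp + (x_v - mu_p)**2 * va
                             + (x_a - mu_p)**2 * vv) / denom1) / (2 * np.pi * np.sqrt(denom1))

    # Likelihood given two independent sources
    like_c2 = (np.exp(-0.5 * ((x_v - mu_p)**2 / (vv + vp) + (x_a - mu_p)**2 / (va + vp)))
               / (2 * np.pi * np.sqrt((vv + vp) * (va + vp))))

    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Optimal estimates under each causal structure, combined by model averaging
    s_common = (x_v / vv + x_a / va + mu_p / vp) / (1 / vv + 1 / va + 1 / vp)
    s_vis_only = (x_v / vv + mu_p / vp) / (1 / vv + 1 / vp)
    return post_c1, post_c1 * s_common + (1 - post_c1) * s_vis_only

for disparity in (0.0, 5.0, 20.0):
    p, s_hat = bci_visual_estimate(x_v=10.0, x_a=10.0 + disparity)
    print(f"AV disparity {disparity:5.1f} deg: P(common cause) = {p:.2f}, visual estimate = {s_hat:.1f} deg")
```

    As the audiovisual disparity grows, the posterior probability of a common cause falls, so the proportional pull of the sound on the visual estimate shrinks; with these assumed noise widths the more reliable visual cue dominates the fused estimate, in the spirit of the visually dominated integration described above.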