WorldWideScience

Sample records for integration explains visual

  1. Visual Representation Determines Search Difficulty: Explaining Visual Search Asymmetries

    Directory of Open Access Journals (Sweden)

Neil Bruce

    2011-07-01

In visual search experiments there exist a variety of experimental paradigms in which a symmetric set of experimental conditions yields asymmetric corresponding task performance. There are a variety of examples of this that currently lack a satisfactory explanation. In this paper, we demonstrate that distinct classes of asymmetries may be explained by virtue of a few simple conditions that are consistent with current thinking surrounding computational modeling of visual search and coding in the primate brain. This includes a detailed look at the role that stimulus familiarity plays in the determination of search performance. Overall, we demonstrate that all of these asymmetries have a common origin, namely, they are a consequence of the encoding that appears in the visual cortex. The analysis associated with these cases yields insight into the problem of visual search in general and predictions of novel search asymmetries.

  2. Integration of visual and inertial cues in perceived heading of self-motion

    NARCIS (Netherlands)

    Winkel, K.N. de; Weesie, H.M.; Werkhoven, P.J.; Groen, E.L.

    2010-01-01

    In the present study, we investigated whether the perception of heading of linear self-motion can be explained by Maximum Likelihood Integration (MLI) of visual and non-visual sensory cues. MLI predicts smaller variance for multisensory judgments compared to unisensory judgments. Nine participants
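The MLI prediction mentioned in this abstract has a simple closed form, which the sketch below illustrates; the cue means and variances are invented values for illustration, not the study's data.

```python
# Maximum-likelihood integration (MLI) of two noisy heading estimates.
# Cue means and variances below are invented values, not the study's data.

def mli_fuse(mu_vis, var_vis, mu_inertial, var_inertial):
    """Fuse two cues by inverse-variance weighting; return (mean, variance)."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_inertial)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_inertial
    var = (var_vis * var_inertial) / (var_vis + var_inertial)
    return mu, var

mu, var = mli_fuse(mu_vis=10.0, var_vis=4.0, mu_inertial=14.0, var_inertial=2.0)
print(mu, var)  # fused variance (about 1.33) is below either unimodal variance
```

The fused variance is always smaller than the smaller unimodal variance, which is exactly the multisensory-versus-unisensory prediction the study tested.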

  3. Executive functions as predictors of visual-motor integration in children with intellectual disability.

    Science.gov (United States)

    Memisevic, Haris; Sinanovic, Osman

    2013-12-01

The goal of this study was to assess the relationship between visual-motor integration and executive functions, and in particular, the extent to which executive functions can predict visual-motor integration skills in children with intellectual disability. The sample consisted of 90 children (54 boys, 36 girls; M age = 11.3 yr., SD = 2.7, range 7-15) with intellectual disabilities of various etiologies. The measures of executive functions were the 8 subscales of the Behavioral Rating Inventory of Executive Function (BRIEF): Inhibition, Shifting, Emotional Control, Initiating, Working memory, Planning, Organization of material, and Monitoring. Visual-motor integration was measured with the Acadia test of visual-motor integration (VMI). Regression analysis revealed that the BRIEF subscales explained 38% of the variance in VMI scores. Of all the BRIEF subscales, only two were statistically significant predictors of visual-motor integration: Working memory and Monitoring. Possible implications of this finding are further elaborated.

  4. When apperceptive agnosia is explained by a deficit of primary visual processing.

    Science.gov (United States)

    Serino, Andrea; Cecere, Roberto; Dundon, Neil; Bertini, Caterina; Sanchez-Castaneda, Cristina; Làdavas, Elisabetta

    2014-03-01

Visual agnosia is a deficit in shape perception, affecting figure, object, face and letter recognition. Agnosia is usually attributed to lesions to high-order modules of the visual system, which combine visual cues to represent the shape of objects. However, most previously reported agnosia cases presented visual field (VF) defects and poor primary visual processing. The present case study aims to verify whether form agnosia could be explained by a deficit in basic visual functions, rather than by a deficit in high-order shape recognition. Patient SDV suffered a bilateral lesion of the occipital cortex due to anoxia. When tested, he could navigate, interact with others, and was autonomous in daily life activities. However, he could not recognize objects from drawings and figures, read, or recognize familiar faces. He was able to recognize objects by touch and people from their voice. Assessments of visual functions showed blindness at the centre of the VF, up to almost 5°, bilaterally, with better stimulus detection in the periphery. Colour and motion perception was preserved. Psychophysical experiments showed that SDV's visual recognition deficits were not explained by poor spatial acuity or by the crowding effect. Rather, a severe deficit in line orientation processing might be a key mechanism explaining SDV's agnosia. Line orientation processing is a basic function of primary visual cortex neurons, necessary for detecting "edges" of visual stimuli to build up a "primal sketch" for object recognition. We propose, therefore, that some forms of visual agnosia may be explained by deficits in basic visual functions due to widespread lesions of the primary visual areas, affecting primary levels of visual processing. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Visual feature integration theory: past, present, and future.

    Science.gov (United States)

    Quinlan, Philip T

    2003-09-01

    Visual feature integration theory was one of the most influential theories of visual information processing in the last quarter of the 20th century. This article provides an exposition of the theory and a review of the associated data. In the past much emphasis has been placed on how the theory explains performance in various visual search tasks. The relevant literature is discussed and alternative accounts are described. Amendments to the theory are also set out. Many other issues concerning internal processes and representations implicated by the theory are reviewed. The article closes with a synopsis of what has been learned from consideration of the theory, and it is concluded that some of the issues may remain intractable unless appropriate neuroscientific investigations are carried out.

  6. Explaining seeing? Disentangling qualia from perceptual organization.

    Science.gov (United States)

    Ibáñez, Agustin; Bekinschtein, Tristan

    2010-09-01

Visual perception and integration seem to play an essential role in our conscious phenomenology. Relatively local neural processing of a reentrant nature may explain several visual integration processes (feature binding or figure-ground segregation, object recognition, inference, competition), even without attention or cognitive control. Given this, should the neural signatures of visual integration (via reentrant processes) be considered non-reportable phenomenological qualia? We argue that qualia are not required to understand this perceptual organization.

  7. Integration of Visual and Vestibular Information Used to Discriminate Rotational Self-Motion

    Directory of Open Access Journals (Sweden)

    Florian Soyka

    2011-10-01

Do humans integrate visual and vestibular information in a statistically optimal fashion when discriminating rotational self-motion stimuli? Recent studies are inconclusive as to whether such integration occurs when discriminating heading direction. In the present study eight participants were consecutively rotated twice (2 s sinusoidal acceleration) on a chair about an earth-vertical axis in vestibular-only, visual-only and visual-vestibular trials. The visual stimulus was a video of a moving stripe pattern, synchronized with the inertial motion. Peak acceleration of the reference stimulus was varied and participants reported which rotation was perceived as faster. Just-noticeable differences (JNDs) were estimated by fitting psychometric functions. The visual-vestibular JND measurements are too high compared to the predictions based on the unimodal JND estimates, and there is no JND reduction between visual-vestibular and visual-alone estimates. These findings may be explained by visual capture. Alternatively, the visual precision may not be equal between visual-vestibular and visual-alone conditions, since it has been shown that visual motion sensitivity is reduced during inertial self-motion. Therefore, measuring visual-alone JNDs with an underlying uncorrelated inertial motion might yield higher visual-alone JNDs compared to the stationary measurement. Theoretical calculations show that higher visual-alone JNDs would result in predictions consistent with the JND measurements for the visual-vestibular condition.
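The optimal-integration prediction for JNDs follows the same variance-summation rule used for cue combination; the JND values below are hypothetical, chosen only to show how a higher visual-alone JND pushes the bimodal prediction upward, as the abstract's final argument suggests.

```python
import math

def predicted_bimodal_jnd(jnd_visual, jnd_vestibular):
    """Optimal-integration prediction: 1/JND_vv^2 = 1/JND_v^2 + 1/JND_ves^2."""
    return math.sqrt((jnd_visual**2 * jnd_vestibular**2) /
                     (jnd_visual**2 + jnd_vestibular**2))

# Stationary visual-alone measurement (hypothetical values, arbitrary units):
print(predicted_bimodal_jnd(0.20, 0.30))  # ~0.166, i.e. a predicted JND reduction

# If visual precision is worse during concurrent inertial motion, the
# predicted bimodal JND rises toward the measured visual-vestibular values:
print(predicted_bimodal_jnd(0.40, 0.30))  # 0.24
```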

  8. A Dynamic Bayesian Observer Model Reveals Origins of Bias in Visual Path Integration.

    Science.gov (United States)

    Lakshminarasimhan, Kaushik J; Petsalis, Marina; Park, Hyeshin; DeAngelis, Gregory C; Pitkow, Xaq; Angelaki, Dora E

    2018-06-20

Path integration is a strategy by which animals track their position by integrating their self-motion velocity. To identify the computational origins of bias in visual path integration, we asked human subjects to navigate in a virtual environment using optic flow and found that they generally traveled beyond the goal location. Such behavior could stem from leaky integration of unbiased self-motion velocity estimates or from a prior expectation favoring slower speeds that causes velocity underestimation. Testing both alternatives using a probabilistic framework that maximizes expected reward, we found that subjects' biases were better explained by a slow-speed prior than by imperfect integration. When subjects integrate paths over long periods, this framework intriguingly predicts a distance-dependent bias reversal due to the buildup of uncertainty, which we also confirmed experimentally. These results suggest that visual path integration in noisy environments is limited largely by biases in processing optic flow rather than by leaky integration. Copyright © 2018 Elsevier Inc. All rights reserved.
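The two candidate accounts contrasted in this abstract can be caricatured in a few lines. This is a deliberately simplified sketch with invented numbers, not the paper's reward-maximizing observer model; it only shows why both accounts predict overshoot while differing in how the bias scales with distance.

```python
import math

# Two candidate explanations for overshooting a goal at distance D while
# travelling at constant speed v (all numbers here are invented for illustration).

D, v = 10.0, 2.0

# (a) Slow-speed prior: perceived speed is g * v with g < 1, integration is perfect.
g = 0.8
t_stop = D / (g * v)      # subject stops when the *estimated* distance reaches D
actual_a = v * t_stop     # = D / g, so the bias grows linearly with D
print(actual_a)           # 12.5

# (b) Leaky integration of an unbiased speed estimate with time constant tau:
# the estimated distance after time t is v * tau * (1 - exp(-t / tau)).
tau = 20.0
t_stop = -tau * math.log(1 - D / (v * tau))  # solve estimate = D for the stop time
actual_b = v * t_stop
print(round(actual_b, 2))  # ~11.51; this bias grows faster than linearly in D
```

Both toy observers travel beyond the goal; distinguishing them requires looking at how the bias changes with goal distance, which is what the probabilistic framework in the study exploits.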

  9. Implicit integration in a case of integrative visual agnosia.

    Science.gov (United States)

    Aviezer, Hillel; Landau, Ayelet N; Robertson, Lynn C; Peterson, Mary A; Soroker, Nachum; Sacher, Yaron; Bonneh, Yoram; Bentin, Shlomo

    2007-05-15

    We present a case (SE) with integrative visual agnosia following ischemic stroke affecting the right dorsal and the left ventral pathways of the visual system. Despite his inability to identify global hierarchical letters [Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-383], and his dense object agnosia, SE showed normal global-to-local interference when responding to local letters in Navon hierarchical stimuli and significant picture-word identity priming in a semantic decision task for words. Since priming was absent if these features were scrambled, it stands to reason that these effects were not due to priming by distinctive features. The contrast between priming effects induced by coherent and scrambled stimuli is consistent with implicit but not explicit integration of features into a unified whole. We went on to show that possible/impossible object decisions were facilitated by words in a word-picture priming task, suggesting that prompts could activate perceptually integrated images in a backward fashion. We conclude that the absence of SE's ability to identify visual objects except through tedious serial construction reflects a deficit in accessing an integrated visual representation through bottom-up visual processing alone. However, top-down generated images can help activate these visual representations through semantic links.

  10. Spatial integration in mouse primary visual cortex.

    Science.gov (United States)

    Vaiceliunaite, Agne; Erisken, Sinem; Franzen, Florian; Katzner, Steffen; Busse, Laura

    2013-08-01

    Responses of many neurons in primary visual cortex (V1) are suppressed by stimuli exceeding the classical receptive field (RF), an important property that might underlie the computation of visual saliency. Traditionally, it has proven difficult to disentangle the underlying neural circuits, including feedforward, horizontal intracortical, and feedback connectivity. Since circuit-level analysis is particularly feasible in the mouse, we asked whether neural signatures of spatial integration in mouse V1 are similar to those of higher-order mammals and investigated the role of parvalbumin-expressing (PV+) inhibitory interneurons. Analogous to what is known from primates and carnivores, we demonstrate that, in awake mice, surround suppression is present in the majority of V1 neurons and is strongest in superficial cortical layers. Anesthesia with isoflurane-urethane, however, profoundly affects spatial integration: it reduces the laminar dependency, decreases overall suppression strength, and alters the temporal dynamics of responses. We show that these effects of brain state can be parsimoniously explained by assuming that anesthesia affects contrast normalization. Hence, the full impact of suppressive influences in mouse V1 cannot be studied under anesthesia with isoflurane-urethane. To assess the neural circuits of spatial integration, we targeted PV+ interneurons using optogenetics. Optogenetic depolarization of PV+ interneurons was associated with increased RF size and decreased suppression in the recorded population, similar to effects of lowering stimulus contrast, suggesting that PV+ interneurons contribute to spatial integration by affecting overall stimulus drive. We conclude that the mouse is a promising model for circuit-level mechanisms of spatial integration, which relies on the combined activity of different types of inhibitory interneurons.
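The claim that anesthesia effects can be explained by a change in contrast normalization can be illustrated with a toy ratio-of-Gaussians size-tuning model. All parameters below are invented for illustration, not fits to the reported data, and raising the normalization constant is only one crude way to mimic an anesthesia-induced change in normalization.

```python
import math

# Toy ratio-of-Gaussians size-tuning curve: center drive divided by a broader
# suppressive (normalization) signal. All parameters are invented.

def response(size, sigma=1.0, w_c=1.0, w_s=3.0, k_c=10.0, k_s=2.0):
    drive = k_c * math.erf(size / w_c)          # summation-field drive
    suppression = k_s * math.erf(size / w_s)    # broader normalization pool
    return drive / (sigma + suppression)

def suppression_index(curve):
    """(peak response - largest-size response) / peak; 0 = no surround suppression."""
    return (max(curve) - curve[-1]) / max(curve)

sizes = [0.5, 1, 2, 4, 8, 16]
awake = [response(s, sigma=1.0) for s in sizes]
anesthetized = [response(s, sigma=4.0) for s in sizes]  # weaker effective normalization

print(suppression_index(awake) > suppression_index(anesthetized))  # True
```

In this caricature, weakening the relative contribution of the normalization pool both lowers the suppression index and flattens the size-tuning curve, qualitatively matching the reported reduction in suppression strength under anesthesia.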

  11. Aspects of ontology visualization and integration

    NARCIS (Netherlands)

    Dmitrieva, Joelia Borisovna

    2011-01-01

    In this thesis we will describe and discuss methodologies for ontology visualization and integration. Two visualization methods will be elaborated. In one method the ontology is visualized with the node-link technique, and with the other method the ontology is visualized with the containment

  12. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  13. Explaining neural signals in human visual cortex with an associative learning model.

    Science.gov (United States)

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.

  14. Visual-motor integration functioning in a South African middle ...

    African Journals Online (AJOL)

    Visual-motor integration functioning has been identified as playing an integral role in different aspects of a child's development. Sensory-motor development is not only foundational to the physical maturation process, but is also imperative for progress with formal learning activities. Deficits in visual-motor integration have ...

  15. Visual search, visual streams, and visual architectures.

    Science.gov (United States)

    Green, M

    1991-10-01

    Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

  16. Learning STEM Through Integrative Visual Representations

    Science.gov (United States)

    Virk, Satyugjit Singh

    Previous cognitive models of memory have not comprehensively taken into account the internal cognitive load of chunking isolated information and have emphasized the external cognitive load of visual presentation only. Under the Virk Long Term Working Memory Multimedia Model of cognitive load, drawing from the Cowan model, students presented with integrated animations of the key neural signal transmission subcomponents where the interrelationships between subcomponents are visually and verbally explicit, were hypothesized to perform significantly better on free response and diagram labeling questions, than students presented with isolated animations of these subcomponents. This is because the internal attentional cognitive load of chunking these concepts is greatly reduced and hence the overall cognitive load is less for the integrated visuals group than the isolated group, despite the higher external load for the integrated group of having the interrelationships between subcomponents presented explicitly. Experiment 1 demonstrated that integrating the subcomponents of the neuron significantly enhanced comprehension of the interconnections between cellular subcomponents and approached significance for enhancing comprehension of the layered molecular correlates of the cellular structures and their interconnections. Experiment 2 corrected time on task confounds from Experiment 1 and focused on the cellular subcomponents of the neuron only. Results from the free response essay subcomponent subscores did demonstrate significant differences in favor of the integrated group as well as some evidence from the diagram labeling section. Results from free response, short answer and What-If (problem solving), and diagram labeling detailed interrelationship subscores demonstrated the integrated group did indeed learn the extra material they were presented with. 
This data demonstrating the integrated group learned the extra material they were presented with provides some initial

  17. Temporal integration windows for naturalistic visual sequences.

    Directory of Open Access Journals (Sweden)

    Scott L Fairhall

There is increasing evidence that the brain possesses mechanisms to integrate incoming sensory information as it unfolds over time-periods of 2-3 seconds. The ubiquity of this mechanism across modalities, tasks, perception and production has led to the proposal that it may underlie our experience of the subjective present. A critical test of this claim is that this phenomenon should be apparent in naturalistic visual experiences. We tested this using movie clips as a surrogate for our day-to-day experience, temporally scrambling them to require (re-)integration within and beyond the hypothesized 2-3 second interval. Two independent experiments demonstrate a step-wise increase in the difficulty of following stimuli at the hypothesized 2-3 second scrambling condition. Moreover, this difference alone could not be accounted for by low-level visual properties. This provides the first evidence that this 2-3 second integration window extends to complex, naturalistic visual sequences more consistent with our experience of the subjective present.
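The scrambling manipulation can be sketched as segment-wise shuffling of a frame sequence; the segment lengths and 24 fps frame rate below are assumptions for illustration, not the study's actual parameters.

```python
import random

def temporally_scramble(frames, segment_frames, seed=0):
    """Chop a frame sequence into fixed-length segments and shuffle segment
    order, preserving temporal order within each segment."""
    segments = [frames[i:i + segment_frames]
                for i in range(0, len(frames), segment_frames)]
    rng = random.Random(seed)
    rng.shuffle(segments)
    return [frame for segment in segments for frame in segment]

# A 12 s clip at an assumed 24 fps = 288 frames.
clip = list(range(288))
coarse = temporally_scramble(clip, 72)  # 3 s segments: near the proposed window
fine = temporally_scramble(clip, 12)    # 0.5 s segments: well below it
assert sorted(coarse) == clip           # same frames, reordered segment-wise
```

Coarse scrambling leaves continuity intact within each hypothesized integration window, while fine scrambling forces (re-)integration across segment boundaries far more often.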

  18. Does cultural integration explain a mental health advantage for adolescents?

    Science.gov (United States)

    Bhui, Kamaldeep S; Lenguerrand, Erik; Maynard, Maria J; Stansfeld, Stephen A; Harding, Seeromanie

    2012-06-01

A mental health advantage has been observed among adolescents in urban areas. This prospective study tests whether cultural integration measured by cross-cultural friendships explains a mental health advantage for adolescents. A prospective cohort of adolescents was recruited from 51 secondary schools in 10 London boroughs. Cultural identity was assessed by friendship choices within and across ethnic groups. Cultural integration is one of four categories of cultural identity. Using gender-specific linear mixed models we tested whether cultural integration explained a mental health advantage, and whether gender and age were influential. Demographic and other relevant factors, such as ethnic group, socio-economic status, family structure, parenting styles and perceived racism were also measured and entered into the models. Mental health was measured by the Strengths and Difficulties Questionnaire as a 'total difficulties score' and by classification as a 'probable clinical case'. A total of 6643 pupils in the first and second years of secondary school (ages 11-13 years) took part in the baseline survey (2003/04) and 4785 took part in the follow-up survey in 2005-06. Overall mental health improved with age, more so in male than in female students. Cultural integration (friendships with own and other ethnic groups) was associated with the lowest levels of mental health problems, especially among male students. This effect was sustained irrespective of age, ethnicity and other potential explanatory variables. There was a mental health advantage among specific ethnic groups: Black Caribbean and Black African male students (Nigerian/Ghanaian origin) and female Indian students. This was not fully explained by cultural integration, although cultural integration was independently associated with better mental health. 
Cultural integration was associated with better mental health, independent of the mental health advantage found among specific ethnic groups: Black Caribbean and

  19. Does cultural integration explain a mental health advantage for adolescents?

    Science.gov (United States)

    Bhui, Kamaldeep S; Lenguerrand, Erik; Maynard, Maria J; Stansfeld, Stephen A; Harding, Seeromanie

    2012-01-01

Background A mental health advantage has been observed among adolescents in urban areas. This prospective study tests whether cultural integration measured by cross-cultural friendships explains a mental health advantage for adolescents. Methods A prospective cohort of adolescents was recruited from 51 secondary schools in 10 London boroughs. Cultural identity was assessed by friendship choices within and across ethnic groups. Cultural integration is one of four categories of cultural identity. Using gender-specific linear mixed models we tested whether cultural integration explained a mental health advantage, and whether gender and age were influential. Demographic and other relevant factors, such as ethnic group, socio-economic status, family structure, parenting styles and perceived racism were also measured and entered into the models. Mental health was measured by the Strengths and Difficulties Questionnaire as a ‘total difficulties score’ and by classification as a ‘probable clinical case’. Results A total of 6643 pupils in the first and second years of secondary school (ages 11–13 years) took part in the baseline survey (2003/04) and 4785 took part in the follow-up survey in 2005–06. Overall mental health improved with age, more so in male than in female students. Cultural integration (friendships with own and other ethnic groups) was associated with the lowest levels of mental health problems, especially among male students. This effect was sustained irrespective of age, ethnicity and other potential explanatory variables. There was a mental health advantage among specific ethnic groups: Black Caribbean and Black African male students (Nigerian/Ghanaian origin) and female Indian students. This was not fully explained by cultural integration, although cultural integration was independently associated with better mental health. Conclusions Cultural integration was associated with better mental health, independent of the mental health advantage

  20. [To explain is to narrate. How to visualize scientific data].

    Science.gov (United States)

    Hawtin, Nigel

    2014-01-01

When you try to appeal to a wide-ranging audience, as at New Scientist, which addresses scientists as well as the general public, your scientific visual explainer must be succinct, clear, accurate and easily understandable. To reach this goal, your message should present only the main data, those that let you balance information and clarity: information should be put into context and extra details should be cut. It is therefore essential to know both your audience and the subject you are describing well, as graphic masters of the past, such as William Playfair and Charles Minard, have taught us. Moreover, you should try to engage your reader by connecting the storytelling power of words with the driving force of graphics: colours, visual elements, typography. To be effective, in fact, an infographic should not only be truthful and functional, but also elegant, with style and legibility.

  1. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream.

    Science.gov (United States)

    Martin, Chris B; Douglas, Danielle; Newsome, Rachel N; Man, Louisa Ly; Barense, Morgan D

    2018-02-02

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. © 2018, Martin et al.

  2. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream

    Science.gov (United States)

    Douglas, Danielle; Newsome, Rachel N; Man, Louisa LY

    2018-01-01

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. PMID:29393853

  3. Visual-auditory integration for visual search: a behavioral study in barn owls

    Directory of Open Access Journals (Sweden)

Yael Hazan

    2015-02-01

Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine whether the integration of visual and auditory signals contributes to the process of guiding attention towards salient stimuli. We attached miniature wireless video cameras on barn owls' heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely towards the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. 
This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search

  4. Exploring the Link between Visual Perception, Visual-Motor Integration, and Reading in Normal Developing and Impaired Children using DTVP-2.

    Science.gov (United States)

    Bellocchi, Stéphanie; Muneaux, Mathilde; Huau, Andréa; Lévêque, Yohana; Jover, Marianne; Ducrot, Stéphanie

    2017-08-01

Reading is known to be primarily a linguistic task. However, to successfully decode written words, children also need to develop good visual-perception skills. Furthermore, motor skills are implicated in letter recognition and reading acquisition. Three studies were designed to determine the link between reading, visual perception, and visual-motor integration using the Developmental Test of Visual Perception version 2 (DTVP-2). Study 1 tests how visual perception and visual-motor integration in kindergarten predict reading outcomes in Grade 1 in typically developing children. Study 2 aims to find out whether these skills can be seen as clinical markers in dyslexic children (DD). Study 3 determines whether visual-motor integration and motor-reduced visual perception can distinguish DD children according to whether or not they exhibit developmental coordination disorder (DCD). Results showed that phonological awareness and visual-motor integration predicted reading outcomes one year later. The DTVP-2 demonstrated similarities and differences in visual-motor integration and motor-reduced visual perception between children with DD, DCD, and both of these deficits. The DTVP-2 is a suitable tool to investigate links between visual perception, visual-motor integration and reading, and to differentiate the cognitive profiles of children with developmental disabilities (i.e., DD, DCD, and comorbid children). Copyright © 2017 John Wiley & Sons, Ltd.

  5. Keeping in Touch With the Visual System: Spatial Alignment and Multisensory Integration of Visual-Somatosensory Inputs

    Directory of Open Access Journals (Sweden)

    Jeannette Rose Mahoney

    2015-08-01

    Full Text Available Correlated sensory inputs coursing along the individual sensory processing hierarchies arrive at multisensory convergence zones in cortex, where inputs are processed in an integrative manner. The exact hierarchical level of multisensory convergence zones and the timing of their inputs are still under debate, although increasingly, evidence points to multisensory integration at very early sensory processing levels. The objective of the current study was to determine, both psychophysically and electrophysiologically, whether differential visual-somatosensory integration patterns exist for stimuli presented to the same versus opposite hemifields. Using high-density electrical mapping and complementary psychophysical data, we examined multisensory integrative processing for combinations of visual and somatosensory inputs presented to both left and right spatial locations. We assessed how early during sensory processing visual-somatosensory (VS) interactions were seen in the event-related potential and whether spatial alignment of the visual and somatosensory elements resulted in differential integration effects. Reaction times to all VS pairings were significantly faster than those to the unisensory conditions, regardless of spatial alignment, pointing to engagement of integrative multisensory processing in all conditions. In support, electrophysiological results revealed significant differences between multisensory simultaneous VS and summed V+S responses, regardless of the spatial alignment of the constituent inputs. Nonetheless, multisensory effects were earlier in the aligned conditions, and were found to be particularly robust in the case of right-sided inputs (beginning at just 55 ms). In contrast to previous work on audio-visual and audio-somatosensory inputs, the current work suggests a degree of spatial specificity to the earliest detectable multisensory integrative effects in response to visual-somatosensory pairings.
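
    The additive-model comparison described in this record (simultaneous VS responses versus summed V+S responses) amounts to computing a difference wave between trial-averaged ERP arrays; a reliable nonzero difference is taken as evidence of integration. The array shapes and random data below are purely illustrative, not from the study:

```python
import numpy as np

# Illustrative trial-averaged ERPs, shape (channels x timepoints), e.g. 64 x 300
rng = np.random.default_rng(0)
erp_vs = rng.normal(size=(64, 300))  # multisensory: visual + somatosensory together
erp_v = rng.normal(size=(64, 300))   # visual alone
erp_s = rng.normal(size=(64, 300))   # somatosensory alone

# Under a purely additive (non-interacting) account, VS should equal V + S;
# deviations from zero in this difference wave index multisensory integration.
diff_wave = erp_vs - (erp_v + erp_s)
```

    In practice the difference wave would be tested statistically per channel and time point; with the random data above it simply illustrates the arithmetic of the comparison.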

  6. Integrated Data Visualization and Virtual Reality Tool

    Science.gov (United States)

    Dryer, David A.

    1998-01-01

    The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design and changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.

  7. Visual Learning in Application of Integration

    Science.gov (United States)

    Bt Shafie, Afza; Barnachea Janier, Josefina; Bt Wan Ahmad, Wan Fatimah

    Innovative use of technology can improve how mathematics is taught. It can enhance students' learning of concepts through visualization. Visualization in mathematics refers to the use of text, pictures, graphs and animations to hold the attention of learners so that they learn the concepts. This paper describes the use of a developed multimedia courseware as an effective tool for visual learning of mathematics. The focus is on the application of integration, a topic in Engineering Mathematics 2. The course is offered to foundation students at Universiti Teknologi PETRONAS. A questionnaire was distributed to get feedback on the visual representation and on students' attitudes towards using visual representation as a learning tool. The questionnaire consists of three sections: Courseware Design (Part A), Courseware Usability (Part B) and Attitudes towards Using the Courseware (Part C). The results showed that students found the use of visual representation beneficial in learning the topic.

  8. Collinear facilitation and contour integration in autism: evidence for atypical visual integration.

    Science.gov (United States)

    Jachim, Stephen; Warren, Paul A; McLoughlin, Niall; Gowen, Emma

    2015-01-01

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impaired social interaction, atypical communication and a restricted repertoire of interests and activities. Altered sensory and perceptual experiences are also common, and a notable perceptual difference between individuals with ASD and controls is their superior performance in visual tasks where it may be beneficial to ignore global context. This superiority may be the result of atypical integrative processing. To explore this claim we investigated visual integration in adults with ASD (diagnosed with Asperger's Syndrome) using two psychophysical tasks thought to rely on integrative processing: collinear facilitation and contour integration. We measured collinear facilitation at different flanker orientation offsets and contour integration for both open and closed contours. Our results indicate that compared to matched controls, ASD participants show (i) reduced collinear facilitation, despite equivalent performance without flankers; and (ii) less benefit from closed contours in contour integration. These results indicate weaker visuospatial integration in adults with ASD and suggest that further studies using these types of paradigms would provide knowledge on how contextual processing is altered in ASD.

  9. Collinear facilitation and contour integration in autism: evidence for atypical visual integration

    Directory of Open Access Journals (Sweden)

    Stephen eJachim

    2015-03-01

    Full Text Available Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impaired social interaction, atypical communication and a restricted repertoire of interests and activities. Altered sensory and perceptual experiences are also common, and a notable perceptual difference between individuals with ASD and controls is their superior performance in visual tasks where it may be beneficial to ignore global context. This superiority may be the result of atypical integrative processing. To explore this claim we investigated visual integration in adults with ASD (diagnosed with Asperger’s Syndrome) using two psychophysical tasks thought to rely on integrative processing: collinear facilitation and contour integration. We measured collinear facilitation at different flanker orientation offsets and contour integration for both open and closed contours. Our results indicate that compared to matched controls, ASD participants show (i) reduced collinear facilitation, despite equivalent performance without flankers, and (ii) less benefit from closed contours in contour integration. These results indicate weaker visuospatial integration in adults with ASD and suggest that further studies using these types of paradigms would provide knowledge on how contextual processing is altered in ASD.

  10. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    Science.gov (United States)

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
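
    The optimal cue integration models reviewed here have a compact form for two independent Gaussian cues: each cue is weighted by its relative reliability (inverse variance), and the combined estimate has lower variance than either cue alone. A minimal sketch of this reliability-weighted fusion, with illustrative numbers that are not from the reviewed experiments:

```python
def fuse(mu_vis, var_vis, mu_ves, var_ves):
    """Reliability-weighted (maximum-likelihood) fusion of two Gaussian cues."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_ves)  # visual cue weight
    mu = w_vis * mu_vis + (1 - w_vis) * mu_ves           # combined heading estimate
    var = (var_vis * var_ves) / (var_vis + var_ves)      # combined variance
    return mu, var

# Example: visual heading 10 deg (variance 4), vestibular heading 14 deg (variance 12)
mu, var = fuse(10.0, 4.0, 14.0, 12.0)
# The combined variance (3.0) is smaller than either single-cue variance,
# which is the signature prediction tested against behavior in these studies.
```

    The prediction that the combined estimate is pulled toward the more reliable cue, with reduced variance, is what "optimal" means in this literature.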

  11. Efficient data exchange: Integrating a vector GIS with an object-oriented, 3-D visualization system

    International Nuclear Information System (INIS)

    Kuiper, J.; Ayers, A.; Johnson, R.; Tolbert-Smith, M.

    1996-01-01

    A common problem encountered in Geographic Information System (GIS) modeling is the exchange of data between different software packages to best utilize the unique features of each package. This paper describes a project to integrate two systems through efficient data exchange. The first is a widely used GIS based on a relational data model. This system has a broad set of data input, processing, and output capabilities, but lacks three-dimensional (3-D) visualization and certain modeling functions. The second system is a specialized object-oriented package designed for 3-D visualization and modeling. Although this second system is useful for subsurface modeling and hazardous waste site characterization, it does not provide many of the capabilities of a complete GIS. The system-integration project resulted in an easy-to-use program to transfer information between the systems, making many of the more complex conversion issues transparent to the user. The strengths of both systems are accessible, allowing the scientist more time to focus on analysis. This paper details the capabilities of the two systems, explains the technical issues associated with data exchange and how they were solved, and outlines an example analysis project that used the integrated systems.

  12. Predictors of Visual-Motor Integration in Children with Intellectual Disability

    Science.gov (United States)

    Memisevic, Haris; Sinanovic, Osman

    2012-01-01

    The aim of this study was to assess the influence of sex, age, level and etiology of intellectual disability on visual-motor integration in children with intellectual disability. The sample consisted of 90 children with intellectual disability between 7 and 15 years of age. Visual-motor integration was measured using the Acadia test of…

  13. Spatial integration in mouse primary visual cortex

    OpenAIRE

    Vaiceliunaite, Agne; Erisken, Sinem; Franzen, Florian; Katzner, Steffen; Busse, Laura

    2013-01-01

    Responses of many neurons in primary visual cortex (V1) are suppressed by stimuli exceeding the classical receptive field (RF), an important property that might underlie the computation of visual saliency. Traditionally, it has proven difficult to disentangle the underlying neural circuits, including feedforward, horizontal intracortical, and feedback connectivity. Since circuit-level analysis is particularly feasible in the mouse, we asked whether neural signatures of spatial integration in ...

  14. Cognitive and Developmental Influences in Visual-Motor Integration Skills in Young Children

    Science.gov (United States)

    Decker, Scott L.; Englund, Julia A.; Carboni, Jessica A.; Brooks, Janell H.

    2011-01-01

    Measures of visual-motor integration skills continue to be widely used in psychological assessments with children. However, the construct validity of many visual-motor integration measures remains unclear. In this study, we investigated the relative contributions of maturation and cognitive skills to the development of visual-motor integration…

  15. Does linear separability really matter? Complex visual search is explained by simple search

    Science.gov (United States)

    Vighneshvel, T.; Arun, S. P.

    2013-01-01

    Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search. PMID:24029822

  16. Explaining Academic Progress via Combining Concepts of Integration Theory and Rational Choice Theory.

    Science.gov (United States)

    Beekhoven, S.; De Jong, U.; Van Hout, H.

    2002-01-01

    Compared elements of rational choice theory and integration theory on the basis of their power to explain variance in academic progress. Asserts that the concepts should be combined, and the distinction between social and academic integration abandoned. Empirical analysis showed that an extended model, comprising both integration and rational…

  17. Visual integration dysfunction in schizophrenia arises by the first psychotic episode and worsens with illness duration.

    Science.gov (United States)

    Keane, Brian P; Paterno, Danielle; Kastner, Sabine; Silverstein, Steven M

    2016-05-01

    Visual integration dysfunction characterizes schizophrenia, but prior studies have not yet established whether the problem arises by the first psychotic episode or worsens with illness duration. To investigate the issue, we compared chronic schizophrenia patients (SZs), first-episode psychosis patients (FEs), and well-matched healthy controls on a brief but sensitive psychophysical task in which subjects attempted to locate an integrated shape embedded in noise. Task difficulty depended on the number of noise elements co-presented with the shape. For half of the experiment, the entire display was scaled down in size to produce a high spatial frequency (HSF) condition, which has been shown to worsen patient integration deficits. Catch trials, in which the circular target appeared without noise, were also added to confirm that subjects were paying adequate attention. We found that controls integrated contours under noisier conditions than FEs, who, in turn, integrated better than SZs. These differences, which were at times large in magnitude (d = 1.7), clearly emerged only for HSF displays. Catch trial accuracy was above 95% for each group and could not explain the foregoing differences. Prolonged illness duration predicted poorer HSF integration across patients, but age had little effect on controls, indicating that the former factor was driving the effect in patients. Taken together, a brief psychophysical task efficiently demonstrates large visual integration impairments in schizophrenia. The deficit arises by the first psychotic episode, worsens with illness duration, and may serve as a biomarker of illness progression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Integrated Visualization of Multi-sensor Ocean Data across the Web

    Science.gov (United States)

    Platt, F.; Thompson, C. K.; Roberts, J. T.; Tsontos, V. M.; Hin Lam, C.; Arms, S. C.; Quach, N.

    2017-12-01

    Whether for research or operational decision support, oceanographic applications rely on the visualization of multivariate in situ and remote sensing data as an integral part of analysis workflows. However, given their inherently 3D-spatial and temporally dynamic nature, the visual representation of marine in situ data in particular poses a challenge. The Oceanographic In situ data Interoperability Project (OIIP) is a collaborative project funded under the NASA/ACCESS program that seeks to leverage and enhance higher TRL (technology readiness level) informatics technologies to address key data interoperability and integration issues associated with in situ ocean data, including the dearth of effective web-based visualization solutions. Existing web tools for the visualization of key in situ data types (point, profile, trajectory series) are limited in their support for integrated, dynamic and coordinated views of the spatiotemporal characteristics of the data. Via extension of the JPL Common Mapping Client (CMC) software framework, OIIP seeks to provide improved visualization support for oceanographic in situ data sets. More specifically, this entails improved representation of both the horizontal and vertical aspects of these data, which are inherently depth-resolved and time-referenced, as well as visual synchronization with relevant remotely sensed gridded data products, such as sea surface temperature and salinity. Electronic tagging datasets, which are a focal use case for OIIP, provide a representative, if somewhat complex, visualization challenge in this regard. Critical to the achievement of these development objectives has been the compilation of a well-rounded set of visualization use cases and requirements, based on a series of end-user consultations aimed at understanding their satellite-in situ visualization needs. Here we summarize progress on aspects of the technical work and our approach.

  19. Explaining academic progress via combining concepts of integration theory and rational choice theory

    NARCIS (Netherlands)

    Beekhoven, S.; Jong, U. de; Hout, J.F.M.J. van

    2002-01-01

    In this article, elements of rational choice theory and integration theory are compared on the basis of their explanatory power to explain variance in academic progress. It is argued that both theoretical concepts could be combined. Furthermore the distinction between social and academic integration

  20. The relationship between better-eye and integrated visual field mean deviation and visual disability.

    Science.gov (United States)

    Arora, Karun S; Boland, Michael V; Friedman, David S; Jefferys, Joan L; West, Sheila K; Ramulu, Pradeep Y

    2013-12-01

    To determine the extent of difference between better-eye visual field (VF) mean deviation (MD) and integrated VF (IVF) MD among Salisbury Eye Evaluation (SEE) subjects and a larger group of glaucoma clinic subjects and to assess how those measures relate to objective and subjective measures of ability/performance in SEE subjects. Retrospective analysis of population- and clinic-based samples of adults. A total of 490 SEE and 7053 glaucoma clinic subjects with VF loss (MD ≤-3 decibels [dB] in at least 1 eye). Visual field testing was performed in each eye, and IVF MD was calculated. Differences between better-eye and IVF MD were calculated for SEE and clinic-based subjects. In SEE subjects with VF loss, models were constructed to compare the relative impact of better-eye and IVF MD on driving habits, mobility, self-reported vision-related function, and reading speed. Difference between better-eye and IVF MD and relationship of better-eye and IVF MD with performance measures. The median difference between better-eye and IVF MD was 0.41 dB (interquartile range [IQR], -0.21 to 1.04 dB) and 0.72 dB (IQR, 0.04-1.45 dB) for SEE subjects and clinic-based patients with glaucoma, respectively, with differences of ≥ 2 dB between the 2 MDs observed in 9% and 18% of the groups, respectively. Among SEE subjects with VF loss, both MDs demonstrated similar associations with multiple ability and performance metrics as judged by the presence/absence of a statistically significant association between the MD and the metric, the magnitude of observed associations (odds ratios, rate ratios, or regression coefficients associated with 5-dB decrements in MD), and the extent of variability in the metric explained by the model (R²). Similar associations of similar magnitude also were noted for the subgroup of subjects with glaucoma and subjects in whom better-eye and IVF MD differed by ≥ 2 dB. The IVF MD rarely differs from better-eye MD, and similar associations between VF loss and
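
    The integrated visual field used in this literature is conventionally constructed by taking, at each tested location, the better (higher) sensitivity of the two eyes, approximating the binocular field; the better-eye versus IVF comparison then reduces to a difference between two summary indices. A minimal sketch of that pointwise-best construction, with made-up sensitivity values (the study's exact IVF procedure may differ in detail):

```python
def integrated_field(left_sens, right_sens):
    """Pointwise-best integrated visual field from two monocular sensitivity lists (dB)."""
    return [max(l, r) for l, r in zip(left_sens, right_sens)]

# Toy 4-location monocular fields (dB sensitivities at matched test locations)
left = [30.0, 12.0, 28.0, 5.0]
right = [25.0, 20.0, 10.0, 26.0]
ivf = integrated_field(left, right)  # [30.0, 20.0, 28.0, 26.0]
```

    Because each location keeps the better eye's value, the IVF can only match or exceed either monocular field at every point, which is why IVF MD tracks better-eye MD so closely.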

  1. Exploring the Integration of Data Mining and Data Visualization

    Science.gov (United States)

    Zhang, Yi

    2011-01-01

    Due to the rapid advances in computing and sensing technologies, enormous amounts of data are being generated everyday in various applications. The integration of data mining and data visualization has been widely used to analyze these massive and complex data sets to discover hidden patterns. For both data mining and visualization to be…

  2. Integration of today's digital state with tomorrow's visual environment

    Science.gov (United States)

    Fritsche, Dennis R.; Liu, Victor; Markandey, Vishal; Heimbuch, Scott

    1996-03-01

    New developments in visual communication technologies, and the increasingly digital nature of the industry infrastructure as a whole, are converging to enable new visual environments with an enhanced visual component in interaction, entertainment, and education. New applications and markets can be created, but this depends on the ability of the visual communications industry to provide market solutions that are cost effective and user friendly. Industry-wide cooperation in the development of integrated, open architecture applications enables the realization of such market solutions. This paper describes the work being done by Texas Instruments, in the development of its Digital Light Processing™ technology, to support the development of new visual communications technologies and applications.

  3. Pathview Web: user friendly pathway visualization and data integration.

    Science.gov (United States)

    Luo, Weijun; Pant, Gaurav; Bhavnasi, Yeshvant K; Blanchard, Steven G; Brouwer, Cory

    2017-07-03

    Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server, to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    Science.gov (United States)

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of a stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. We circumvented these limitations here and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions, we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Turkish Preschool Teachers' Beliefs on Integrated Curriculum: Integration of Visual Arts with Other Activities

    Science.gov (United States)

    Ozturk, Elif; Erden, Feyza Tantekin

    2011-01-01

    This study investigates preschool teachers' beliefs about integrated curriculum and, more specifically, their beliefs about integration of visual arts with other activities. The participants of this study consisted of 255 female preschool teachers who are employed in preschools in Ankara, Turkey. For the study, teachers were asked to complete…

  6. A biologically inspired neural model for visual and proprioceptive integration including sensory training.

    Science.gov (United States)

    Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi

    2013-12-01

    Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian and causal Bayesian inference, for a single cause (source) and for two causes (e.g., the visual and auditory systems), respectively. In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding, which is able to mimic multisensory integration in neural centers of the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process for visual and proprioceptive information in humans. The training process in multisensory integration has previously received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioception training and eight subjects with visual training. Results of the experiments show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) the means of visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrement is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors in the main test stage, because in the main test the subject has to focus on two senses. The results of the experiments in this paper are in agreement with the results of the neural model.
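
    The causal Bayesian inference that the neural model is compared against can be sketched for two Gaussian-corrupted cues: the observer computes the posterior probability that the visual and proprioceptive measurements share one common cause, and this probability falls as the two measurements diverge. The prior and noise parameters below are illustrative assumptions, not values from the paper:

```python
import math

def gauss(x, mu, var):
    """Gaussian probability density."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def p_common(x_vis, x_prop, var_vis, var_prop, var_prior=100.0, prior_common=0.5):
    """Posterior probability that the visual and proprioceptive measurements
    arose from one common cause (Gaussian likelihoods, zero-mean Gaussian
    prior on the source position, source integrated out analytically)."""
    # Likelihood of the measurement pair under one common source
    var_c = var_vis * var_prop + var_vis * var_prior + var_prop * var_prior
    like_c = math.exp(-((x_vis - x_prop) ** 2 * var_prior
                        + x_vis ** 2 * var_prop + x_prop ** 2 * var_vis)
                      / (2 * var_c)) / (2 * math.pi * math.sqrt(var_c))
    # Likelihood under two independent sources
    like_i = (gauss(x_vis, 0.0, var_vis + var_prior)
              * gauss(x_prop, 0.0, var_prop + var_prior))
    return like_c * prior_common / (like_c * prior_common
                                    + like_i * (1 - prior_common))

# Nearby measurements favor a common cause; distant ones favor separate causes
p_near = p_common(1.0, 1.2, 1.0, 1.0)
p_far = p_common(1.0, 8.0, 1.0, 1.0)
```

    When the common-cause posterior is high the cues are fused (reliability-weighted); when it is low they are kept separate, which is what distinguishes causal inference from unconditional fusion.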

  7. Treelink: data integration, clustering and visualization of phylogenetic trees.

    Science.gov (United States)

    Allende, Christian; Sohn, Erik; Little, Cedric

    2015-12-29

    Phylogenetic trees are central to a wide range of biological studies. In many of these studies, tree nodes need to be associated with a variety of attributes. For example, in studies concerned with viral relationships, tree nodes are associated with epidemiological information, such as location, age and subtype. Gene trees used in comparative genomics are usually linked with taxonomic information, such as functional annotations and events. A wide variety of tree visualization and annotation tools have been developed in the past; however, none of them are intended for integrative and comparative analysis. Treelink is a platform-independent software for linking datasets and sequence files to phylogenetic trees. The application allows automated integration of datasets with trees for operations such as classifying a tree based on a field or showing the distribution of selected data attributes in branches and leaves. Genomic and proteomic sequences can also be linked to the tree and extracted from internal and external nodes. A novel clustering algorithm to simplify trees and display the most divergent clades was also developed, where validation can be achieved using the data integration and classification function. Integrated geographical information allows ancestral character reconstruction for phylogeographic plotting based on parsimony and likelihood algorithms. Our software can successfully integrate phylogenetic trees with different data sources, and perform operations to differentiate and visualize those differences within a tree. File support includes the most popular formats, such as Newick and CSV. Exporting visualizations as images, cluster outputs and genomic sequences is supported. Treelink is available as a web and desktop application at http://www.treelinkapp.com .

  8. Integrative Genomics Viewer (IGV): high-performance genomics data visualization and exploration.

    Science.gov (United States)

    Thorvaldsdóttir, Helga; Robinson, James T; Mesirov, Jill P

    2013-03-01

    Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today's sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license.

  9. Visual object recognition and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian

    This thesis is based on seven published papers. The majority of the papers address two topics in visual object recognition: (i) category-effects at pre-semantic stages, and (ii) the integration of visual elements into elaborate shape descriptions corresponding to whole objects or large object parts...... (shape configuration). In the early writings these two topics were examined more or less independently. In later works, findings concerning category-effects and shape configuration merge into an integrated model, termed RACE, advanced to explain category-effects arising at pre-semantic stages in visual...... in visual long-term memory. In the thesis it is described how this simple model can account for a wide range of findings on category-specificity in both patients with brain damage and normal subjects. Finally, two hypotheses regarding the neural substrates of the model's components - and how activation...

  10. Visual Data Analysis as an Integral Part of Environmental Management

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Joerg; Bethel, E. Wes; Horsman, Jennifer L.; Hubbard, Susan S.; Krishnan, Harinarayan; Romosan,, Alexandru; Keating, Elizabeth H.; Monroe, Laura; Strelitz, Richard; Moore, Phil; Taylor, Glenn; Torkian, Ben; Johnson, Timothy C.; Gorton, Ian

    2012-10-01

    The U.S. Department of Energy's (DOE) Office of Environmental Management (DOE/EM) currently supports an effort to understand and predict the fate of nuclear contaminants and their transport in natural and engineered systems. Geologists, hydrologists, physicists and computer scientists are working together to create models of existing nuclear waste sites, to simulate their behavior and to extrapolate it into the future. We use visualization as an integral part in each step of this process. In the first step, visualization is used to verify model setup and to estimate critical parameters. High-performance computing simulations of contaminant transport produce massive amounts of data, which are then analyzed using visualization software specifically designed for parallel processing of large amounts of structured and unstructured data. Finally, simulation results are validated by comparison to measured current and historical field data. We describe in this article how visual analysis is used as an integral part of the decision-making process in the planning of ongoing and future treatment options for the contaminated nuclear waste sites. Lessons learned from visually analyzing our large-scale simulation runs will also have an impact on deciding on treatment measures for other contaminated sites.

  11. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex.

    Science.gov (United States)

    Sugihara, Tadashi; Diltz, Mark D; Averbeck, Bruno B; Romanski, Lizabeth M

    2006-10-25

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O'Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication.

  12. Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?

    Science.gov (United States)

    Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling

    2016-01-01

    Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on this previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. PMID:26890366

  13. Integrated Visualization Environment for Science Mission Modeling, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed work will provide NASA with an integrated visualization environment providing greater insight and a more intuitive representation of large technical...

  14. Supporting Knowledge Integration in Chemistry with a Visualization-Enhanced Inquiry Unit

    Science.gov (United States)

    Chiu, Jennifer L.; Linn, Marcia C.

    2014-01-01

    This paper describes the design and impact of an inquiry-oriented online curriculum that takes advantage of dynamic molecular visualizations to improve students' understanding of chemical reactions. The visualization-enhanced unit uses research-based guidelines following the knowledge integration framework to help students develop coherent…

  15. Behind Mathematical Learning Disabilities: What about Visual Perception and Motor Skills?

    Science.gov (United States)

    Pieters, Stefanie; Desoete, Annemie; Roeyers, Herbert; Vanderswalmen, Ruth; Van Waelvelde, Hilde

    2012-01-01

    In a sample of 39 children with mathematical learning disabilities (MLD) and 106 typically developing controls belonging to three control groups of three different ages, we found that visual perception, motor skills and visual-motor integration explained a substantial proportion of the variance in either number fact retrieval or procedural…

  16. Value Chain Envy: Explaining New Entry and Vertical Integration in Popular Music

    NARCIS (Netherlands)

    Mol, J.M.; Wijnberg, N.M.; Carroll, C.

    2005-01-01

    The concepts of value creation, value capture, and value protection are employed to explain new entry and vertical integration. It is posited that if, at one stage of the value system, the share of value captured is disproportionally higher than the share of value created, value chain envy will

  17. Value chain envy : Explaining new entry and vertical integration in popular music

    NARCIS (Netherlands)

    Mol, J.M.; Wijnberg, N.M.; Carroll, C.

    The concepts of value creation, value capture, and value protection are employed to explain new entry and vertical integration. It is posited that if, at one stage of the value system, the share of value captured is disproportionally higher than the share of value created, value chain envy will

  18. SVIP-N 1.0: An integrated visualization platform for neutronics analysis

    International Nuclear Information System (INIS)

    Luo Yuetong; Long Pengcheng; Wu Guoyong; Zeng Qin; Hu Liqin; Zou Jun

    2010-01-01

    Post-processing is an important part of neutronics analysis, and SVIP-N 1.0 (scientific visualization integrated platform for neutronics analysis) is designed to ease post-processing of neutronics analysis through visualization technologies. The main capabilities of SVIP-N 1.0 include: (1) the ability to manage neutronics analysis results; (2) the ability to preprocess neutronics analysis results; (3) the ability to visualize neutronics analysis result data in different ways. The paper describes the system architecture and main features of SVIP-N, some advanced visualization techniques used in SVIP-N 1.0, and some preliminary applications, such as ITER.

  19. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF.

    Directory of Open Access Journals (Sweden)

    Nouman Ali

    Full Text Available With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local feature representations are selected for image retrieval because SIFT is more robust to changes in scale and rotation, while SURF is more robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on the Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba, and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration.
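
    The paper's exact pipeline is not given in the abstract; below is a hedged NumPy sketch of one plausible reading of "visual words integration": pool descriptors from two detectors, cluster them into a joint vocabulary with naive k-means, and histogram each image's descriptors over that vocabulary. The function names, the pooling strategy, and the synthetic data are assumptions, not the authors' code:

```python
import numpy as np

def build_vocabulary(descriptor_sets, k, iters=20, seed=0):
    """Naive k-means over pooled descriptors from several detectors
    (e.g., SIFT and SURF), yielding one joint visual-word vocabulary."""
    rng = np.random.default_rng(seed)
    X = np.vstack(descriptor_sets).astype(float)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):          # keep old center if cluster is empty
                centers[j] = members.mean(axis=0)
    return centers

def bag_of_words(descriptors, centers):
    """Normalized histogram of nearest visual words for one image."""
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(centers))
    return hist / hist.sum()

# Tiny synthetic demo: two well-separated descriptor clouds
sift_like = np.zeros((10, 2))
surf_like = np.full((10, 2), 5.0)
vocab = build_vocabulary([sift_like, surf_like], k=2)
hist = bag_of_words(np.array([[0.0, 0.0], [5.0, 5.0]]), vocab)
```

In a real CBIR system these histograms, not the raw descriptors, are what get indexed and compared between the query and the archive.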

  20. Visual-Auditory Integration during Speech Imitation in Autism

    Science.gov (United States)

    Williams, Justin H. G.; Massaro, Dominic W.; Peel, Natalie J.; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional "mirror neuron" systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a "virtual" head (Baldi), delivered speech stimuli for…

  1. Deconstruction of spatial integrity in visual stimulus detected by modulation of synchronized activity in cat visual cortex.

    Science.gov (United States)

    Zhou, Zhiyi; Bernard, Melanie R; Bonds, A B

    2008-04-02

    Spatiotemporal relationships among contour segments can influence synchronization of neural responses in the primary visual cortex. We performed a systematic study to dissociate the impact of spatial and temporal factors in the signaling of contour integration via synchrony. In addition, we characterized the temporal evolution of this process to clarify potential underlying mechanisms. With a 10 x 10 microelectrode array, we recorded the simultaneous activity of multiple cells in the cat primary visual cortex while stimulating with drifting sine-wave gratings. We preserved temporal integrity and systematically degraded spatial integrity of the sine-wave gratings by adding spatial noise. Neural synchronization was analyzed in the time and frequency domains by conducting cross-correlation and coherence analyses. The general association between neural spike trains depends strongly on spatial integrity, with coherence in the gamma band (35-70 Hz) showing greater sensitivity to the change of spatial structure than other frequency bands. Analysis of the temporal dynamics of synchronization in both time and frequency domains suggests that spike timing synchronization is triggered nearly instantaneously by coherent structure in the stimuli, whereas frequency-specific oscillatory components develop more slowly, presumably through network interactions. Our results suggest that, whereas temporal integrity is required for the generation of synchrony, spatial integrity is critical in triggering subsequent gamma band synchronization.
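
    The cross-correlation analysis mentioned above can be illustrated with a minimal spike-train cross-correlogram; this is a simplified stand-in for the authors' actual pipeline, with arbitrary lag and bin parameters:

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, max_lag=0.05, bin_width=0.005):
    """Histogram of pairwise spike-time differences (t_b - t_a), in seconds.
    A peak in the central bin indicates synchronized firing."""
    edges = np.arange(-max_lag, max_lag + bin_width, bin_width)
    diffs = [tb - ta
             for ta in spikes_a
             for tb in spikes_b
             if -max_lag <= tb - ta <= max_lag]
    counts, _ = np.histogram(diffs, bins=edges)
    return edges[:-1] + bin_width / 2, counts

# Two perfectly synchronized trains: all mass lands in one central bin
lags, counts = cross_correlogram([0.0, 0.1, 0.2], [0.0, 0.1, 0.2])
```

Degrading the spatial integrity of the stimulus would, per the abstract, flatten such a central peak, while frequency-specific (gamma band) effects require a spectral measure such as coherence instead.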

  2. DEVELOPMENT OF FINE MOTOR COORDINATION AND VISUAL-MOTOR INTEGRATION IN PRESCHOOL CHILDREN

    OpenAIRE

    MEMISEVIC Haris; HADZIC Selmir

    2015-01-01

    Fine motor skills are prerequisite for many everyday activities and they are a good predictor of a child's later academic outcome. The goal of the present study was to assess the effects of age on the development of fine motor coordination and visual-motor integration in preschool children. The sample for this study consisted of 276 preschool children from Canton Sara­jevo, Bosnia and Herzegovina. We assessed children's motor skills with Beery Visual Motor Integration Test and Lafayette Pegbo...

  3. A Motor-Skills Programme to Enhance Visual Motor Integration of Selected Pre-School Learners

    Science.gov (United States)

    Africa, Eileen K.; van Deventer, Karel J.

    2017-01-01

    Pre-schoolers are in a window period for motor skill development. Visual-motor integration (VMI) is the foundation for academic and sport skills. Therefore, it must develop before formal schooling. This study attempted to improve VMI skills. VMI skills were measured with the "Beery-Buktenica developmental test of visual-motor integration 6th…

  4. An integrative view of storage of low- and high-level visual dimensions in visual short-term memory.

    Science.gov (United States)

    Magen, Hagit

    2017-03-01

    Efficient performance in an environment filled with complex objects is often achieved through the temporal maintenance of conjunctions of features from multiple dimensions. The most striking finding in the study of binding in visual short-term memory (VSTM) is equal memory performance for single features and for integrated multi-feature objects, a finding that has been central to several theories of VSTM. Nevertheless, research on binding in VSTM focused almost exclusively on low-level features, and little is known about how items from low- and high-level visual dimensions (e.g., colored manmade objects) are maintained simultaneously in VSTM. The present study tested memory for combinations of low-level features and high-level representations. In agreement with previous findings, Experiments 1 and 2 showed decrements in memory performance when non-integrated low- and high-level stimuli were maintained simultaneously compared to maintaining each dimension in isolation. However, contrary to previous findings the results of Experiments 3 and 4 showed decrements in memory performance even when integrated objects of low- and high-level stimuli were maintained in memory, compared to maintaining single-dimension objects. Overall, the results demonstrate that low- and high-level visual dimensions compete for the same limited memory capacity, and offer a more comprehensive view of VSTM.

  5. MONGKIE: an integrated tool for network analysis and visualization for multi-omics data.

    Science.gov (United States)

    Jang, Yeongjun; Yu, Namhee; Seo, Jihae; Kim, Sun; Lee, Sanghyuk

    2016-03-18

    Network-based integrative analysis is a powerful technique for extracting biological insights from multilayered omics data such as somatic mutations, copy number variations, and gene expression data. However, integrated analysis of multi-omics data is quite complicated and can hardly be done in an automated way. Thus, a powerful interactive visual mining tool supporting diverse analysis algorithms for identification of driver genes and regulatory modules is much needed. Here, we present a software platform that integrates network visualization with omics data analysis tools seamlessly. The visualization unit supports various options for displaying multi-omics data as well as unique network models for describing sophisticated biological networks such as complex biomolecular reactions. In addition, we implemented diverse in-house algorithms for network analysis including network clustering and over-representation analysis. Novel functions include facile definition and optimized visualization of subgroups, comparison of a series of data sets in an identical network by data-to-visual mapping and subsequent overlaying function, and management of custom interaction networks. Utility of MONGKIE for network-based visual data mining of multi-omics data was demonstrated by analysis of the TCGA glioblastoma data. MONGKIE was developed in Java based on the NetBeans plugin architecture, thus being OS-independent with intrinsic support of module extension by third-party developers. We believe that MONGKIE would be a valuable addition to network analysis software by supporting many unique features and visualization options, especially for analysing multi-omics data sets in cancer and other diseases.
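
    MONGKIE's over-representation analysis is not specified in the abstract; as a sketch of the standard approach, the hypergeometric upper-tail test below scores how unlikely the observed overlap between a gene module and a pathway is under random sampling (stdlib-only; all names are illustrative, not MONGKIE's API):

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k) when drawing n genes from a universe of N genes
    of which K belong to the pathway (hypergeometric upper tail)."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

def over_representation(module_genes, pathway_genes, universe):
    """One-sided enrichment p-value for a gene module against a pathway."""
    module, pathway = set(module_genes), set(pathway_genes)
    k = len(module & pathway)
    return hypergeom_sf(k, len(universe), len(pathway), len(module))

# Toy example: a 5-gene module that exactly matches a 5-gene pathway
universe = [f"g{i}" for i in range(10)]
p = over_representation(universe[:5], universe[:5], universe)
```

In practice the universe is the set of all assayed genes, and p-values are corrected for testing many pathways at once.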

  6. Enhancing creative problem solving in an integrated visual art and geometry program: A pilot study

    NARCIS (Netherlands)

    Schoevers, E.M.; Kroesbergen, E.H.; Pitta-Pantazi, D.

    2017-01-01

    This article describes a new pedagogical method, an integrated visual art and geometry program, which has the aim to increase primary school students' creative problem solving and geometrical ability. This paper presents the rationale for integrating visual art and geometry education. Furthermore

  7. Conditioning Influences Audio-Visual Integration by Increasing Sound Saliency

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    2011-10-01

    Full Text Available We investigated the effect of prior conditioning of an auditory stimulus on audiovisual integration in a series of four psychophysical experiments. The experiments factorially manipulated the conditioning procedure (picture vs. monetary conditioning) and the multisensory paradigm (2AFC visual detection vs. redundant target paradigm). In the conditioning sessions, subjects were presented with three pure tones (the conditioned stimuli, CS) that were paired with neutral, positive, or negative unconditioned stimuli (US; monetary: +50 euro cents, –50 cents, 0 cents; pictures: highly pleasant, unpleasant, and neutral IAPS images). In a 2AFC visual selective attention paradigm, detection of near-threshold Gabors was improved by concurrent sounds that had previously been paired with a positive (monetary) or negative (picture) outcome relative to neutral sounds. In the redundant target paradigm, sounds previously paired with positive (monetary) or negative (picture) outcomes increased response speed to both auditory and audiovisual targets similarly. Importantly, prior conditioning did not increase the multisensory response facilitation (i.e., (A + V)/2 – AV) or the race model violation. Collectively, our results suggest that prior conditioning primarily increases the saliency of the auditory stimulus per se rather than influencing audiovisual integration directly. In turn, conditioned sounds are rendered more potent for increasing response accuracy or speed in detection of visual targets.
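
    The race model violation referred to above is conventionally tested with Miller's inequality, which bounds the audiovisual reaction-time CDF by the sum of the unisensory CDFs: F_AV(t) <= F_A(t) + F_V(t). A minimal stdlib sketch, with made-up reaction times (not the study's data):

```python
def ecdf(rts, t):
    """Empirical probability that a response has occurred by time t (ms)."""
    return sum(r <= t for r in rts) / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, times):
    """Miller-bound test: positive values mean the audiovisual CDF exceeds
    F_A(t) + F_V(t), which a parallel race of unisensory channels cannot do."""
    return [ecdf(rt_av, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
            for t in times]

# Illustrative RTs (ms): audiovisual responses faster than the bound allows
violation = race_model_violation([300, 400], [320, 420], [250, 260], [270])
```

A null result on this measure, as in the abstract, means faster audiovisual responses can be explained by the two channels racing independently rather than by genuine integration.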

  8. Ray-based approach to integrated 3D visual communication

    Science.gov (United States)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The following discussions then concentrate on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for the compression of tremendous amounts of data, and a light ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  9. Learning Visualizations by Analogy: Promoting Visual Literacy through Visualization Morphing.

    Science.gov (United States)

    Ruchikachorn, Puripant; Mueller, Klaus

    2015-09-01

    We propose the concept of teaching (and learning) unfamiliar visualizations by analogy, that is, demonstrating an unfamiliar visualization method by linking it to another, more familiar one, where the in-betweens are designed to bridge the gap between the two visualizations and explain the difference in a gradual manner. As opposed to a textual description, our morphing explains an unfamiliar visualization through purely visual means. We demonstrate our idea by way of four visualization pair examples: data table and parallel coordinates, scatterplot matrix and hyperbox, linear chart and spiral chart, and hierarchical pie chart and treemap. The analogy is commutative, i.e., either member of the pair can be the unfamiliar visualization. A series of studies showed that this new paradigm can be an effective teaching tool. The participants could understand the unfamiliar visualization methods in all four pairs either fully or at least significantly better after they observed or interacted with the transitions from the familiar counterpart. The four examples suggest how helpful visualization pairings can be identified, and they will hopefully inspire other visualization morphings and associated transition strategies.

  10. An attempt to explain the uranium 238 effective capture integral discrepancy

    International Nuclear Information System (INIS)

    Tellier, Henry; Grandotto-Biettoli, Marc; Vanuxeem, Jacqueline

    1979-02-01

    Until now, there has been a discrepancy between the computed and measured values of the uranium-238 effective capture integral, the former always being greater than the latter. For this reason, reactor physicists have applied an adjustment to the computed value. Now that cross sections are known more accurately and reactor computation codes have become nearly exact, such an adjustment is no longer justified. Recently, several new measurements of the resonance parameters were carried out, and the use of a multilevel formalism was suggested for computing the uranium-238 cross sections. This work shows that the simultaneous use of the recent parameters and the Reich-Moore formalism explains the discrepancy. For thermal neutron reactors, two thirds of the discrepancy are explained by the neutron data and the remaining third by the multilevel formalism.
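
    For orientation only, the effective capture integral at issue can be sketched with a textbook single-level Breit-Wigner capture cross section integrated over a 1/E flux; this is a deliberate simplification with made-up parameter values loosely inspired by the 6.67 eV level of U-238, not the paper's multilevel Reich-Moore calculation:

```python
import math

def bw_capture(E, E0, gamma_n, gamma_g, sigma0):
    """Single-level Breit-Wigner capture cross section (barns), including
    the 1/v factor sqrt(E0/E); energies and widths in eV."""
    gamma = gamma_n + gamma_g
    return (sigma0 * math.sqrt(E0 / E) * gamma_g * gamma
            / (4.0 * (E - E0) ** 2 + gamma ** 2))

def resonance_integral(sigma, e_min, e_max, n=20000):
    """Trapezoidal estimate of RI = integral of sigma(E) dE/E."""
    h = (e_max - e_min) / n
    es = [e_min + i * h for i in range(n + 1)]
    ys = [sigma(e) / e for e in es]
    return h * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])

# Illustrative parameters only (eV, barns)
ri = resonance_integral(
    lambda E: bw_capture(E, 6.67, 0.0015, 0.023, 21000.0), 0.5, 100.0)
```

A multilevel formalism differs precisely in that interference between levels modifies this simple sum of isolated resonance shapes, which is the paper's explanation for part of the discrepancy.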

  11. Visual-Motor Integration in Children with Prader-Willi Syndrome

    Science.gov (United States)

    Lo, S. T.; Collin, P. J. L.; Hokken-Koelega, A. C. S.

    2015-01-01

    Background: Prader-Willi syndrome (PWS) is characterised by hypotonia, hypogonadism, short stature, obesity, behavioural problems, intellectual disability, and delay in language, social and motor development. There is very limited knowledge about visual-motor integration in children with PWS. Method: Seventy-three children with PWS aged 7-17 years…

  12. Object integration requires attention: Visual search for Kanizsa figures in parietal extinction.

    Science.gov (United States)

    Gögler, Nadine; Finke, Kathrin; Keller, Ingo; Müller, Hermann J; Conci, Markus

    2016-11-01

    The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective attention. Here, we combined these two approaches to investigate object integration in visual search in a group of five patients with left-sided parietal extinction. Our search paradigm was designed to assess the effect of left- and right-grouped nontargets on detecting a Kanizsa target square. The results revealed comparable reaction time (RT) performance in patients and controls when they were presented with displays consisting of a single to-be-grouped item that had to be classified as target vs. nontarget. However, when display size increased to two items, patients showed an extinction-specific pattern of enhanced RT costs for nontargets that induced a partial shape grouping on the right, i.e., in the attended hemifield (relative to the ungrouped baseline). Together, these findings demonstrate a competitive advantage for right-grouped objects, which in turn indicates that in parietal extinction, attentional competition between objects particularly limits integration processes in the contralesional, i.e., left hemifield. These findings imply a crucial contribution of selective attentional resources to visual object integration. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Visualization and Integrated Data Mining of Disparate Information

    Energy Technology Data Exchange (ETDEWEB)

    Saffer, Jeffrey D.(OMNIVIZ, INC); Albright, Cory L.(BATTELLE (PACIFIC NW LAB)); Calapristi, Augustin J.(BATTELLE (PACIFIC NW LAB)); Chen, Guang (OMNIVIZ, INC); Crow, Vernon L.(BATTELLE (PACIFIC NW LAB)); Decker, Scott D.(BATTELLE (PACIFIC NW LAB)); Groch, Kevin M.(BATTELLE (PACIFIC NW LAB)); Havre, Susan L.(BATTELLE (PACIFIC NW LAB)); Malard, Joel (BATTELLE (PACIFIC NW LAB)); Martin, Tonya J.(BATTELLE (PACIFIC NW LAB)); Miller, Nancy E.(BATTELLE (PACIFIC NW LAB)); Monroe, Philip J.(OMNIVIZ, INC); Nowell, Lucy T.(BATTELLE (PACIFIC NW LAB)); Payne, Deborah A.(BATTELLE (PACIFIC NW LAB)); Reyes Spindola, Jorge F.(BATTELLE (PACIFIC NW LAB)); Scarberry, Randall E.(OMNIVIZ, INC); Sofia, Heidi J.(BATTELLE (PACIFIC NW LAB)); Stillwell, Lisa C.(OMNIVIZ, INC); Thomas, Gregory S.(BATTELLE (PACIFIC NW LAB)); Thurston, Sarah J.(OMNIVIZ, INC); Williams, Leigh K.(BATTELLE (PACIFIC NW LAB)); Zabriskie, Sean J.(OMNIVIZ, INC); MG Hicks

    2001-05-11

    The volumes and diversity of information in the discovery, development, and business processes within the chemical and life sciences industries require new approaches for analysis. Traditional list- or spreadsheet-based methods are easily overwhelmed by large amounts of data. Furthermore, generating strong hypotheses and, just as importantly, ruling out weak ones, requires integration across different experimental and informational sources. We have developed a framework for this integration, including common conceptual data models for multiple data types and linked visualizations that provide an overview of the entire data set, a measure of how each data record is related to every other record, and an assessment of the associations within the data set.

  14. Integration of visual and inertial cues in the perception of angular self-motion

    NARCIS (Netherlands)

    Winkel, K.N. de; Soyka, F.; Barnett-Cowan, M.; Bülthoff, H.H.; Groen, E.L.; Werkhoven, P.J.

    2013-01-01

    The brain is able to determine angular self-motion from visual, vestibular, and kinesthetic information. There is compelling evidence that both humans and non-human primates integrate visual and inertial (i.e., vestibular and kinesthetic) information in a statistically optimal fashion when

  15. Parallel development of contour integration and visual contrast sensitivity at low spatial frequencies

    DEFF Research Database (Denmark)

    Benedek, Krisztina; Janáky, Márta; Braunitzer, Gábor

    2010-01-01

    It has been suggested that visual contrast sensitivity and contour integration functions exhibit a late maturation during adolescence. However, the relationship between these functions has not been investigated. The aim of this study was to assess the development of visual contrast sensitivity...

  16. The effect of integration masking on visual processing in perceptual categorization.

    Science.gov (United States)

    Hélie, Sébastien

    2017-08-01

    Learning to recognize and categorize objects is an essential cognitive skill allowing animals to function in the world. However, animals rarely have access to a canonical view of an object in an uncluttered environment. Hence, it is essential to study categorization under noisy, degraded conditions. In this article, we explore how the brain processes categorization stimuli in low signal-to-noise conditions using multivariate pattern analysis. We used an integration masking paradigm with mask opacity of 50%, 60%, and 70% inside a magnetic resonance imaging scanner. The results show that mask opacity affects blood-oxygen-level dependent (BOLD) signal in visual processing areas (V1, V2, V3, and V4) but does not affect the BOLD signal in brain areas traditionally associated with categorization (prefrontal cortex, striatum, hippocampus). This suggests that when a stimulus is difficult to extract from its background (e.g., low signal-to-noise ratio), the visual system extracts the stimulus and that activity in areas typically associated with categorization are not affected by the difficulty level of the visual conditions. We conclude with implications of this result for research on visual attention, categorization, and the integration of these fields. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Evidence for negative feature guidance in visual search is explained by spatial recoding.

    Science.gov (United States)

    Beck, Valerie M; Hollingworth, Andrew

    2015-10-01

    Theories of attention and visual search explain how attention is guided toward objects with known target features. But can attention be directed away from objects with a feature known to be associated only with distractors? Most studies have found that the demand to maintain the to-be-avoided feature in visual working memory biases attention toward matching objects rather than away from them. In contrast, Arita, Carlisle, and Woodman (2012) claimed that attention can be configured to selectively avoid objects that match a cued distractor color, and they reported evidence that this type of negative cue generates search benefits. However, the colors of the search array items in Arita et al. (2012) were segregated by hemifield (e.g., blue items on the left, red on the right), which allowed for a strategy of translating the feature-cue information into a simple spatial template (e.g., avoid right, or attend left). In the present study, we replicated the negative cue benefit using the Arita et al. (2012) method (albeit within a subset of participants who reliably used the color cues to guide attention). Then, we eliminated the benefit by using search arrays that could not be grouped by hemifield. Our results suggest that feature-guided avoidance is implemented only indirectly, in this case by translating feature-cue information into a spatial template. (c) 2015 APA, all rights reserved.

  18. DEVELOPMENT OF FINE MOTOR COORDINATION AND VISUAL-MOTOR INTEGRATION IN PRESCHOOL CHILDREN

    Directory of Open Access Journals (Sweden)

    Haris MEMISEVIC

    2013-03-01

    Full Text Available Fine motor skills are a prerequisite for many everyday activities and they are a good predictor of a child's later academic outcome. The goal of the present study was to assess the effects of age on the development of fine motor coordination and visual-motor integration in preschool children. The sample for this study consisted of 276 preschool children from Canton Sarajevo, Bosnia and Herzegovina. We assessed children's motor skills with the Beery Visual Motor Integration Test and the Lafayette Pegboard Test. Data were analyzed with one-way ANOVA, followed by planned comparisons between the age groups. We also performed a regression analysis to assess the influence of age and motor coordination on visual-motor integration. The results showed that age has a great effect on the development of fine motor skills. Furthermore, the results indicated that there are possible sensitive periods at preschool age in which the development of fine motor skills is accelerated. Early intervention specialists should make thorough evaluations of fine motor skills in preschool children and design motor (re)habilitation programs for children at risk of fine motor delays.

  19. Integrated visualization of simulation results and experimental devices in virtual-reality space

    International Nuclear Information System (INIS)

    Ohtani, Hiroaki; Ishiguro, Seiji; Shohji, Mamoru; Kageyama, Akira; Tamura, Yuichi

    2011-01-01

    We succeeded in integrating the visualization of both simulation results and experimental device data in virtual-reality (VR) space using a CAVE system. Simulation results are shown using Virtual LHD software, which can show magnetic field lines, particle trajectories, and isosurfaces of plasma pressure of the Large Helical Device (LHD) based on data from the magnetohydrodynamics equilibrium simulation. A three-dimensional mouse, or wand, determines the initial position and pitch angle of a drift particle or the starting point of a magnetic field line, interactively in the VR space. The trajectory of a particle and the streamline of the magnetic field are calculated using the Runge-Kutta-Huta integration method once the initial condition has been specified. The LHD vessel is objectively visualized based on CAD data. By using these results and data, the simulated LHD plasma can be drawn interactively within the objective description of the LHD experimental vessel. Through this integrated visualization, it is possible to grasp the three-dimensional relationship between the positions of the device and the plasma in VR space, opening a new path for future research. (author)
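    The field-line tracing step described in this abstract can be sketched with a classical fourth-order Runge-Kutta integrator. This is a minimal illustration only, assuming a hypothetical dipole-like field function; the actual system uses the Runge-Kutta-Huta variant on LHD magnetohydrodynamic equilibrium data:

    ```python
    import numpy as np

    def unit_b(r):
        """Unit vector of a sample magnetic field at position r.

        A hypothetical dipole field stands in for the LHD equilibrium data;
        only the field-line tracing scheme below is the point of this sketch.
        """
        m = np.array([0.0, 0.0, 1.0])                    # dipole moment along z
        rn = np.linalg.norm(r)
        b = 3.0 * np.dot(m, r) * r / rn**5 - m / rn**3   # dipole field formula
        return b / np.linalg.norm(b)

    def trace_field_line(r0, ds=0.01, n_steps=100):
        """Trace a field line from r0 by integrating dr/ds = B/|B| with RK4."""
        pts = [np.asarray(r0, dtype=float)]
        for _ in range(n_steps):
            r = pts[-1]
            k1 = unit_b(r)
            k2 = unit_b(r + 0.5 * ds * k1)
            k3 = unit_b(r + 0.5 * ds * k2)
            k4 = unit_b(r + ds * k3)
            pts.append(r + (ds / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
        return np.array(pts)

    # The starting point plays the role of the position picked with the wand.
    line = trace_field_line([1.0, 0.0, 0.5])
    ```

    Each row of `line` is a point along the traced field line; in the VR application the resulting polyline would be handed to the renderer.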

  20. An Investigation of Visual Contour Integration Ability in Relation to Writing Performance in Primary School Students

    Science.gov (United States)

    Li-Tsang, Cecilia W. P.; Wong, Agnes S. K.; Chan, Jackson Y.; Lee, Amos Y. T.; Lam, Miko C. Y.; Wong, C. W.; Lu, Zhonglin

    2012-01-01

    A previous study found a visual deficit in contour integration in English readers with dyslexia (Simmers & Bex, 2001). Visual contour integration may play an even more significant role in Chinese handwriting particularly due to its logographic presentation (Lam, Au, Leung, & Li-Tsang, 2011). The current study examined the relationship…

  1. Suppressed visual looming stimuli are not integrated with auditory looming signals: Evidence from continuous flash suppression.

    Science.gov (United States)

    Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond

    2015-01-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  2. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    Directory of Open Access Journals (Sweden)

    Magnus eAlm

    2015-07-01

    Full Text Available Gender and age have been found to affect adults’ audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood for cognitive and sensory decline, which may confound positive effects of age-related AV-experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged (50-60 years) adults, with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. Contrastingly, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females’ general AV perceptual strategy. Although young females’ speech-reading proficiency may not readily contribute to greater visual influence, between young and middle-adulthood recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females’ AV perceptual strategy towards more visually dominated responses.

  3. Visual integration enhances associative memory equally for young and older adults without reducing hippocampal encoding activation.

    Science.gov (United States)

    Memel, Molly; Ryan, Lee

    2017-06-01

    The ability to remember associations between previously unrelated pieces of information is often impaired in older adults (Naveh-Benjamin, 2000). Unitization, the process of creating a perceptually or semantically integrated representation that includes both items in an associative pair, attenuates age-related associative deficits (Bastin et al., 2013; Ahmad et al., 2015; Zheng et al., 2015). Compared to non-unitized pairs, unitized pairs may rely less on hippocampally-mediated binding associated with recollection, and more on familiarity-based processes mediated by perirhinal cortex (PRC) and parahippocampal cortex (PHC). While unitization of verbal materials improves associative memory in older adults, less is known about the impact of visual integration. The present study determined whether visual integration improves associative memory in older adults by minimizing the need for hippocampal (HC) recruitment and shifting encoding to non-hippocampal medial temporal structures, such as the PRC and PHC. Young and older adults were presented with a series of objects paired with naturalistic scenes while undergoing fMRI scanning, and were later given an associative memory test. Visual integration was varied by presenting the object either next to the scene (Separated condition) or visually integrated within the scene (Combined condition). Visual integration improved associative memory among young and older adults to a similar degree by increasing the hit rate for intact pairs, but without increasing false alarms for recombined pairs, suggesting enhanced recollection rather than increased reliance on familiarity. Also contrary to expectations, visual integration resulted in increased hippocampal activation in both age groups, along with increases in PRC and PHC activation. 
Activation in all three MTL regions predicted discrimination performance during the Separated condition in young adults, while only a marginal relationship between PRC activation and performance was

  4. NMDA receptor antagonist ketamine impairs feature integration in visual perception

    NARCIS (Netherlands)

    Meuwese, Julia D I; van Loon, Anouk M; Scholte, H Steven; Lirk, Philipp B; Vulink, Nienke C C; Hollmann, Markus W; Lamme, Victor A F

    2013-01-01

    Recurrent interactions between neurons in the visual cortex are crucial for the integration of image elements into coherent objects, such as in figure-ground segregation of textured images. Blocking N-methyl-D-aspartate (NMDA) receptors in monkeys can abolish neural signals related to figure-ground

  5. Auditory-visual integration of emotional signals in a virtual environment for cynophobia.

    Science.gov (United States)

    Taffou, Marine; Chapoulie, Emmanuelle; David, Adrien; Guerchouche, Rachid; Drettakis, George; Viaud-Delmon, Isabelle

    2012-01-01

    Cynophobia (dog phobia) has both visual and auditory relevant components. In order to investigate the efficacy of virtual reality (VR) exposure-based treatment for cynophobia, we studied the efficiency of auditory-visual environments in generating presence and emotion. We conducted an evaluation test with healthy participants sensitive to cynophobia in order to assess the capacity of auditory-visual virtual environments (VE) to generate fear reactions. Our application involves both high fidelity visual stimulation displayed in an immersive space and 3D sound. This specificity enables us to present and spatially manipulate fearful stimuli in the auditory modality, the visual modality and both. Our specific presentation of animated dog stimuli creates an environment that is highly arousing, suggesting that VR is a promising tool for cynophobia treatment and that manipulating auditory-visual integration might provide a way to modulate affect.

  6. Visual Cycle Modulation as an Approach toward Preservation of Retinal Integrity

    OpenAIRE

    Bavik, Claes; Henry, Susan Hayes; Zhang, Yan; Mitts, Kyoko; McGinn, Tim; Budzynski, Ewa; Pashko, Andriy; Lieu, Kuo Lee; Zhong, Sheng; Blumberg, Bruce; Kuksa, Vladimir; Orme, Mark; Scott, Ian; Fawzi, Ahmad; Kubota, Ryo

    2015-01-01

    © 2015 Bavik et al. Increased exposure to blue or visible light, fluctuations in oxygen tension, and the excessive accumulation of toxic retinoid byproducts places a tremendous amount of stress on the retina. Reduction of visual chromophore biosynthesis may be an effective method to reduce the impact of these stressors and preserve retinal integrity. A class of non-retinoid, small molecule compounds that target key proteins of the visual cycle have been developed. The first candidate in this ...

  7. SCSODC: Integrating Ocean Data for Visualization Sharing and Application

    Science.gov (United States)

    Xu, C.; Li, S.; Wang, D.; Xie, Q.

    2014-02-01

    The South China Sea Ocean Data Center (SCSODC) was founded in 2010 in order to improve collecting and managing of ocean data of the South China Sea Institute of Oceanology (SCSIO). The mission of SCSODC is to ensure the long term scientific stewardship of ocean data, information and products - collected through research groups, monitoring stations and observation cruises - and to facilitate the efficient use and distribution to possible users. However, data sharing and applications were limited due to the characteristics of distribution and heterogeneity that made it difficult to integrate the data. To surmount those difficulties, the Data Sharing System has been developed by the SCSODC using the most appropriate information management and information technology. The Data Sharing System uses open standards and tools to promote the capability to integrate ocean data and to interact with other data portals or users and includes a full range of processes such as data discovery, evaluation and access combining C/S and B/S mode. It provides a visualized management interface for the data managers and a transparent and seamless data access and application environment for users. Users are allowed to access data using the client software and to access interactive visualization application interface via a web browser. The architecture, key technologies and functionality of the system are discussed briefly in this paper. It is shown that the system of SCSODC is able to implement web visualization sharing and seamless access to ocean data in a distributed and heterogeneous environment.

  8. Brain activity related to integrative processes in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Aaside, C T; Humphreys, G W

    2002-01-01

    We report evidence from a PET activation study that the inferior occipital gyri (likely to include area V2) and the posterior parts of the fusiform and inferior temporal gyri are involved in the integration of visual elements into perceptual wholes (single objects). Of these areas, the fusiform a......) that perceptual and memorial processes can be dissociated on both functional and anatomical grounds. No evidence was obtained for the involvement of the parietal lobes in the integration of single objects....

  9. Digital representations of the real world how to capture, model, and render visual reality

    CERN Document Server

    Magnor, Marcus A; Sorkine-Hornung, Olga; Theobalt, Christian

    2015-01-01

    Create Genuine Visual Realism in Computer Graphics Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality explains how to portray visual worlds with a high degree of realism using the latest video acquisition technology, computer graphics methods, and computer vision algorithms. It explores the integration of new capture modalities, reconstruction approaches, and visual perception into the computer graphics pipeline.Understand the Entire Pipeline from Acquisition, Reconstruction, and Modeling to Realistic Rendering and ApplicationsThe book covers sensors fo

  10. The Effect of a Computerized Visual Perception and Visual-Motor Integration Training Program on Improving Chinese Handwriting of Children with Handwriting Difficulties

    Science.gov (United States)

    Poon, K. W.; Li-Tsang, C. W .P.; Weiss, T. P. L.; Rosenblum, S.

    2010-01-01

    This study aimed to investigate the effect of a computerized visual perception and visual-motor integration training program to enhance Chinese handwriting performance among children with learning difficulties, particularly those with handwriting problems. Participants were 26 primary-one children who were assessed by educational psychologists and…

  11. Effects of a Memory and Visual-Motor Integration Program for Older Adults Based on Self-Efficacy Theory.

    Science.gov (United States)

    Kim, Eun Hwi; Suh, Soon Rim

    2017-06-01

    This study was conducted to verify the effects of a memory and visual-motor integration program for older adults based on self-efficacy theory. A non-equivalent control group pretest-posttest design was implemented in this quasi-experimental study. The participants were 62 older adults from senior centers and older adult welfare facilities in D and G city (Experimental group=30, Control group=32). The experimental group took part in a 12-session memory and visual-motor integration program over 6 weeks. Data regarding memory self-efficacy, memory, visual-motor integration, and depression were collected from July to October of 2014 and analyzed with independent t-test and Mann-Whitney U test using PASW Statistics (SPSS) 18.0 to determine the effects of the interventions. Memory self-efficacy (t=2.20, p=.031), memory (Z=-2.92, p=.004), and visual-motor integration (Z=-2.49, p=.013) increased significantly in the experimental group as compared to the control group. However, depression (Z=-0.90, p=.367) did not decrease significantly. This program is effective for increasing memory, visual-motor integration, and memory self-efficacy in older adults. Therefore, it can be used to improve cognition and prevent dementia in older adults. © 2017 Korean Society of Nursing Science

  12. Asymmetric temporal integration of layer 4 and layer 2/3 inputs in visual cortex.

    Science.gov (United States)

    Hang, Giao B; Dan, Yang

    2011-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices. We found that the integration is sublinear and temporally asymmetric, with larger responses if layer 2/3 input preceded layer 4 input. The sublinearity depended on inhibition, and the asymmetry was largely attributable to the difference between the two inhibitory inputs. Interestingly, the asymmetric integration was specific to pyramidal neurons, and it strongly affected their spiking output. Thus via cortical inhibition, the temporal order of activation of layer 2/3 and layer 4 pathways can exert powerful control of cortical output during visual processing.

  13. Object integration requires attention: visual search for Kanizsa figures in parietal extinction

    OpenAIRE

    Gögler, N.; Finke, K.; Keller, I.; Muller, Hermann J.; Conci, M.

    2016-01-01

    The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective att...

  14. Integrating mechanisms of visual guidance in naturalistic language production.

    Science.gov (United States)

    Coco, Moreno I; Keller, Frank

    2015-05-01

    Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, with latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.

  15. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    Energy Technology Data Exchange (ETDEWEB)

    Institute for Data Analysis and Visualization (IDAV) and the Department of Computer Science, University of California, Davis, One Shields Avenue, Davis, CA 95616, USA; International Research Training Group "Visualization of Large and Unstructured Data Sets", University of Kaiserslautern, Germany; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Genomics Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Life Sciences Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Computer Science Division, University of California, Berkeley, CA, USA; Computer Science Department, University of California, Irvine, CA, USA; All authors are with the Berkeley Drosophila Transcription Network Project, Lawrence Berkeley National Laboratory; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Biggin, Mark D.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; Keranen, Soile V. E.; Eisen, Michael B.; Knowles, David W.; Malik, Jitendra; Hagen, Hans; Hamann, Bernd

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
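    The clustering component described above can be illustrated with a bare-bones Lloyd's k-means in Python. This is a sketch only: the paper's framework supports user-guided clustering of real 3D expression data, whereas here synthetic points stand in and candidate values of k are compared via within-cluster scatter (the usual elbow heuristic):

    ```python
    import numpy as np

    def kmeans(X, k, n_iter=50, seed=0):
        """Minimal Lloyd's k-means: returns labels, centers, within-cluster scatter."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]  # fancy indexing copies
        for _ in range(n_iter):
            # Assign each point to its nearest center, then recompute the means.
            labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        inertia = float(((X - centers[labels]) ** 2).sum())
        return labels, centers, inertia

    # Two synthetic "expression pattern" blobs standing in for 3D gene data.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 0.1, (50, 3)), rng.normal(1.0, 0.1, (50, 3))])

    # Evaluating candidate k: the scatter drops sharply once k matches the data.
    inertias = {k: kmeans(X, k)[2] for k in (1, 2, 3)}
    ```

    In the framework's spirit, the resulting cluster labels would then be fed back into the visualization for dedicated post-processing, rather than treated as a final answer.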

  16. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase one, volume 4 : use of knowledge integrated visual analytics system in supporting bridge management.

    Science.gov (United States)

    2009-12-01

    The goals of integration should be: supporting domain-oriented data analysis through the use of a knowledge-augmented visual analytics system. In this project, we focus on providing interactive data exploration for bridge management. ...

  17. Integrative real-time geographic visualization of energy resources

    International Nuclear Information System (INIS)

    Sorokine, A.; Shankar, M.; Stovall, J.; Bhaduri, B.; King, T.; Fernandez, S.; Datar, N.; Omitaomu, O.

    2009-01-01

    Full text: Several models forecast that climatic changes will increase the frequency of disastrous events like droughts, hurricanes, and snow storms. Responding to these events, and also to power outages caused by system errors such as the 2003 North American blackout, requires an interconnect-wide real-time monitoring system for various energy resources. Such a system should be capable of providing situational awareness to its users in the government and energy utilities by dynamically visualizing the status of the elements of the energy grid infrastructure and supply chain in geographic contexts. We demonstrate an approach that relies on Google Earth and similar standards-based platforms as client-side geographic viewers with a data-dependent server component. The users of the system can view status information in spatial and temporal contexts. These data can be integrated with a wide range of geographic sources including all standard Google Earth layers and a large number of energy and environmental data feeds. In addition, we show a real-time spatio-temporal data sharing capability across the users of the system, novel methods for visualizing dynamic network data, and fine-grained access to very large multi-resolution geographic datasets for faster delivery of the data. The system can be extended to integrate contingency analysis results and other grid models to assess recovery and repair scenarios in the case of major disruption. (author)

  18. SCSODC: Integrating Ocean Data for Visualization Sharing and Application

    International Nuclear Information System (INIS)

    Xu, C; Xie, Q; Li, S; Wang, D

    2014-01-01

    The South China Sea Ocean Data Center (SCSODC) was founded in 2010 in order to improve collecting and managing of ocean data of the South China Sea Institute of Oceanology (SCSIO). The mission of SCSODC is to ensure the long term scientific stewardship of ocean data, information and products – collected through research groups, monitoring stations and observation cruises – and to facilitate the efficient use and distribution to possible users. However, data sharing and applications were limited due to the characteristics of distribution and heterogeneity that made it difficult to integrate the data. To surmount those difficulties, the Data Sharing System has been developed by the SCSODC using the most appropriate information management and information technology. The Data Sharing System uses open standards and tools to promote the capability to integrate ocean data and to interact with other data portals or users and includes a full range of processes such as data discovery, evaluation and access combining C/S and B/S mode. It provides a visualized management interface for the data managers and a transparent and seamless data access and application environment for users. Users are allowed to access data using the client software and to access interactive visualization application interface via a web browser. The architecture, key technologies and functionality of the system are discussed briefly in this paper. It is shown that the system of SCSODC is able to implement web visualization sharing and seamless access to ocean data in a distributed and heterogeneous environment

  19. Higher integrity of the motor and visual pathways in long-term video game players.

    Science.gov (United States)

    Zhang, Yang; Du, Guijin; Yang, Yongxin; Qin, Wen; Li, Xiaodong; Zhang, Quan

    2015-01-01

    Long-term video game players (VGPs) exhibit superior visual and motor skills compared with non-video game control subjects (NVGCs). However, the neural basis underlying the enhanced behavioral performance remains largely unknown. To clarify this issue, the present study compared the white matter integrity within the corticospinal tracts (CST), the superior longitudinal fasciculus (SLF), the inferior longitudinal fasciculus (ILF), and the inferior fronto-occipital fasciculus (IFOF) between the VGPs and the NVGCs using diffusion tensor imaging. Compared with the NVGCs, voxel-wise comparisons revealed significantly higher fractional anisotropy (FA) values in some regions within the left CST, left SLF, bilateral ILF, and IFOF in VGPs. Furthermore, higher FA values in the left CST at the level of the cerebral peduncle predicted faster responses in visual attention tasks. These results suggest that higher white matter integrity in the motor and higher-tier visual pathways is associated with long-term video game playing, which may contribute to the understanding of how video game play influences motor and visual performance.

  20. Visual Cycle Modulation as an Approach toward Preservation of Retinal Integrity.

    Directory of Open Access Journals (Sweden)

    Claes Bavik

    Full Text Available Increased exposure to blue or visible light, fluctuations in oxygen tension, and the excessive accumulation of toxic retinoid byproducts place a tremendous amount of stress on the retina. Reduction of visual chromophore biosynthesis may be an effective method to reduce the impact of these stressors and preserve retinal integrity. A class of non-retinoid, small-molecule compounds that target key proteins of the visual cycle has been developed. The first candidate in this class of compounds, referred to as visual cycle modulators, is emixustat hydrochloride (emixustat). Here, we describe the effects of emixustat, an inhibitor of the visual cycle isomerase (RPE65), on visual cycle function and preservation of retinal integrity in animal models. Emixustat potently inhibited isomerase activity in vitro (IC50 = 4.4 nM) and was found to reduce the production of visual chromophore (11-cis retinal) in wild-type mice following a single oral dose (ED50 = 0.18 mg/kg). Measurement of drug effect on the retina by electroretinography revealed a dose-dependent slowing of rod photoreceptor recovery (ED50 = 0.21 mg/kg) that was consistent with the pattern of visual chromophore reduction. In albino mice, emixustat was shown to be effective in preventing photoreceptor cell death caused by intense light exposure. Pre-treatment with a single dose of emixustat (0.3 mg/kg) provided a ~50% protective effect against light-induced photoreceptor cell loss, while higher doses (1-3 mg/kg) were nearly 100% effective. In Abca4-/- mice, an animal model of excessive lipofuscin and retinoid toxin (A2E) accumulation, chronic (3-month) emixustat treatment markedly reduced lipofuscin autofluorescence and reduced A2E levels by ~60% (ED50 = 0.47 mg/kg). Finally, in the retinopathy of prematurity rodent model, treatment with emixustat during the period of ischemia and reperfusion injury produced a ~30% reduction in retinal neovascularization (ED50 = 0.46 mg/kg).
These data demonstrate the ability of

  1. Comparison of Syllabi and Inclusion of Recommendations for Interdisciplinary Integration of Visual Arts Contents

    Directory of Open Access Journals (Sweden)

    Eda Birsa

    2017-09-01

    Full Text Available We applied qualitative analysis to the syllabi of all subjects from the 1st up to the 5th grade of basic school in Slovenia in order to find out in what ways they contain recommendations for interdisciplinary integration. We classified them into three categories: references to subjects, implicit references, and explicit references. The classification into these categories has shown that certain concepts foreseen for integration with visual arts education in individual subjects for a certain grade or for a particular educational cycle cannot be found in the visual arts syllabus.

  2. Real-Time Lane Detection on Suburban Streets Using Visual Cue Integration

    Directory of Open Access Journals (Sweden)

    Shehan Fernando

    2014-04-01

    Full Text Available The detection of lane boundaries on suburban streets using images obtained from video constitutes a challenging task. This is mainly due to the difficulties associated with estimating the complex geometric structure of lane boundaries, the quality of lane markings as a result of wear, occlusions by traffic, and shadows caused by road-side trees and structures. Most of the existing techniques for lane boundary detection employ a single visual cue and will only work under certain conditions and where there are clear lane markings. Also, better results are achieved when there are no other on-road objects present. This paper extends our previous work and discusses a novel lane boundary detection algorithm specifically addressing the abovementioned issues through the integration of two visual cues. The first visual cue is based on stripe-like features found on lane lines extracted using a two-dimensional symmetric Gabor filter. The second visual cue is based on a texture characteristic determined using the entropy measure of the predefined neighbourhood around a lane boundary line. The visual cues are then integrated using a rule-based classifier which incorporates a modified sequential covering algorithm to improve robustness. To separate lane boundary lines from other similar features, a road mask is generated using road chromaticity values estimated from CIE L*a*b* colour transformation. Extraneous points around lane boundary lines are then removed by an outlier removal procedure based on studentized residuals. The lane boundary lines are then modelled with Bezier spline curves. To validate the algorithm, extensive experimental evaluation was carried out on suburban streets and the results are presented.
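    The two visual cues described above can be sketched in a few lines of Python: a symmetric (cosine-phase) 2-D Gabor kernel for stripe-like lane features and a histogram-entropy measure for the texture cue. The parameter values below (kernel size, wavelength, bin count) are illustrative assumptions, not the tuned values from the paper:

    ```python
    import numpy as np

    def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=4.0):
        """Symmetric (cosine-phase) 2-D Gabor kernel for stripe-like features."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates by theta
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
        return envelope * np.cos(2.0 * np.pi * xr / wavelength)

    def patch_entropy(patch, bins=16):
        """Shannon entropy of the grey-level histogram of a neighbourhood.

        Smooth road surface yields low entropy; textured regions (foliage,
        shadows) yield high entropy. Pixel values are assumed to lie in [0, 1].
        """
        hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())
    ```

    Convolving the image with `gabor_kernel()` highlights stripe-like lane markings, while `patch_entropy` on the predefined neighbourhood around a candidate line helps separate painted lines on smooth asphalt from cluttered texture, before the rule-based classifier combines the two cues.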

  3. The Effects of Visual Attention Span and Phonological Decoding in Reading Comprehension in Dyslexia: A Path Analysis.

    Science.gov (United States)

    Chen, Chen; Schneps, Matthew H; Masyn, Katherine E; Thomson, Jennifer M

    2016-11-01

    Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a path analysis to examine the direct and indirect paths between visual attention span and reading comprehension while controlling for other factors such as phonological awareness, letter identification, short-term memory, IQ and age. Integrating phonemic decoding efficiency skills in the analytic model, this study aimed to disentangle how visual attention span and phonological skills work together in reading comprehension for readers with dyslexia. We found visual attention span to have a significant direct effect on more difficult reading comprehension but not on an easier level. It also had a significant direct effect on pseudo-word identification but not on word identification. In addition, we found that visual attention span indirectly explains reading comprehension through pseudo-word reading and word reading skills. This study supports the hypothesis that at least part of the dyslexic profile can be explained by visual attention abilities. Copyright © 2016 John Wiley & Sons, Ltd.
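    The mediation logic of such a path analysis (a direct path plus an indirect path through decoding skill) can be sketched on simulated data. Variable names and coefficients below are illustrative assumptions, not the study's estimates; the point is that in ordinary least squares the total effect decomposes exactly into the direct effect plus the product of the two indirect path coefficients.

```python
import numpy as np

def ols_slopes(X, y):
    """Ordinary least-squares slopes (intercept fitted, then dropped)."""
    X = np.atleast_2d(X)
    if X.shape[0] != len(y):
        X = X.T
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

# Simulated data: visual attention span (VAS) -> pseudo-word reading -> comprehension,
# plus a direct VAS -> comprehension path. Coefficients are illustrative only.
rng = np.random.default_rng(42)
n = 500
vas = rng.normal(size=n)
pseudo = 0.5 * vas + rng.normal(scale=0.8, size=n)
comp = 0.3 * vas + 0.4 * pseudo + rng.normal(scale=0.7, size=n)

a = ols_slopes(vas, pseudo)[0]                                 # VAS -> mediator path
c_prime, b = ols_slopes(np.column_stack([vas, pseudo]), comp)  # direct path, mediator path
indirect = a * b                                               # VAS -> reading -> comprehension
total = c_prime + indirect                                     # equals the simple VAS -> comprehension slope
```

    The identity `total = c' + a*b` holds algebraically for linear OLS on a common sample, which is what makes the direct/indirect decomposition reported in the abstract well defined.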

  4. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Science.gov (United States)

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  5. Property Integration: Componentless Design Techniques and Visualization Tools

    DEFF Research Database (Denmark)

    El-Halwagi, Mahmoud M; Glasgow, I.M.; Eden, Mario Richard

    2004-01-01

    Property integration is defined as a functionality-based, holistic approach to the allocation and manipulation of streams and processing units, which is based on tracking, adjusting, assigning, and matching functionalities throughout the process. Revised lever arm rules are devised to allow optimal allocation while maintaining intra- and interstream conservation of the property-based clusters. The property integration problem is mapped into the cluster domain. This dual problem is solved in terms of clusters and then mapped to the primal problem in the property domain. Several new rules are derived for graphical techniques, particularly systematic rules and visualization techniques for the identification of optimal mixing of streams and their allocation to units. Furthermore, a derivation of the correspondence between clustering arms and fractional contribution of streams is presented. This correspondence…

  6. Visual Literacy: Implications for the Production of Children's Television Programs.

    Science.gov (United States)

    Amey, L. J.

    Visual literacy, the integration of seeing with other cognitive processes, is an essential tool of learning. To explain the relationship between the perceiver and the perceived, three types of theories can be brought to bear: introverted; extroverted; and transactional. Franklin Fearing, George Herbert Mead, Martin Buber, and other theorists have…

  7. The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.

    Science.gov (United States)

    Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard

    2016-10-01

    The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and this integration improves facial emotion recognition. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Visual perception is dependent on visuospatial working memory and thus on the posterior parietal cortex.

    Science.gov (United States)

    Pisella, Laure

    2017-06-01

    Visual perception involves complex and active processes. We will start by explaining why visual perception is dependent on visuospatial working memory, especially the spatiotemporal integration of the perceived elements through the ocular exploration of visual scenes. Then we will present neuropsychology, transcranial magnetic stimulation and neuroimaging data yielding information on the specific role of the posterior parietal cortex of the right hemisphere in visuospatial working memory. Within the posterior parietal cortex, neuropsychology data also suggest that there might be dissociated neural substrates for deployment of attention (superior parietal lobules) and spatiotemporal integration (right inferior parietal lobule). Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  9. Robot vision language RVL/V: An integration scheme of visual processing and manipulator control

    International Nuclear Information System (INIS)

    Matsushita, T.; Sato, T.; Hirai, S.

    1984-01-01

    RVL/V is a robot vision language designed to write a program for visual processing and manipulator control of a hand-eye system. This paper describes the design of RVL/V and the current implementation of the system. Visual processing is performed on one-dimensional range data of the object surface. Model-based instructions execute object detection, measurement and view control. The hierarchy of visual data and processing is introduced to give RVL/V generality. A new scheme to integrate visual information and manipulator control is proposed. The effectiveness of the model-based visual processing scheme based on profile data is demonstrated by a hand-eye experiment

  10. Impairments in part-whole representations of objects in two cases of integrative visual agnosia.

    Science.gov (United States)

    Behrmann, Marlene; Williams, Pepper

    2007-10-01

    How complex multipart visual objects are represented perceptually remains a subject of ongoing investigation. One source of evidence that has been used to shed light on this issue comes from the study of individuals who fail to integrate disparate parts of visual objects. This study reports a series of experiments that examine the ability of two such patients with this form of agnosia (integrative agnosia; IA), S.M. and C.R., to discriminate and categorize exemplars of a rich set of novel objects, "Fribbles", whose visual similarity (number of shared parts) and category membership (shared overall shape) can be manipulated. Both patients performed increasingly poorly as the number of parts required for differentiating one Fribble from another increased. Both patients were also impaired at determining when two Fribbles belonged in the same category, a process that relies on abstracting spatial relations between parts. C.R., the less impaired of the two, but not S.M., eventually learned to categorize the Fribbles but required substantially more training than normal perceivers. S.M.'s failure is not attributable to a problem in learning to use a label for identification nor is it obviously attributable to a visual memory deficit. Rather, the findings indicate that, although the patients may be able to represent a small number of parts independently, in order to represent multipart images, the parts need to be integrated or chunked into a coherent whole. It is this integrative process that is impaired in IA and appears to play a critical role in the normal object recognition of complex images.

  11. The working memory Ponzo illusion: Involuntary integration of visuospatial information stored in visual working memory.

    Science.gov (United States)

    Shen, Mowei; Xu, Haokui; Zhang, Haihang; Shui, Rende; Zhang, Meng; Zhou, Jifan

    2015-08-01

    Visual working memory (VWM) has been traditionally viewed as a mental structure subsequent to visual perception that stores the final output of perceptual processing. However, VWM has recently been emphasized as a critical component of online perception, providing storage for the intermediate perceptual representations produced during visual processing. This interactive view holds the core assumption that VWM is not the terminus of perceptual processing; the stored visual information rather continues to undergo perceptual processing if necessary. The current study tests this assumption, demonstrating an example of involuntary integration of the VWM content, by creating the Ponzo illusion in VWM: when the Ponzo illusion figure was divided into its individual components and sequentially encoded into VWM, the temporally separated components were involuntarily integrated, leading to the distorted length perception of the two horizontal lines. This VWM Ponzo illusion was replicated when the figure components were presented in different combinations and presentation order. The magnitude of the illusion was significantly correlated between VWM and perceptual versions of the Ponzo illusion. These results suggest that the information integration underlying the VWM Ponzo illusion is constrained by the laws of visual perception and similarly affected by the common individual factors that govern its perception. Thus, our findings provide compelling evidence that VWM functions as a buffer serving perceptual processes at early stages. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. NMDA receptor antagonist ketamine impairs feature integration in visual perception.

    Science.gov (United States)

    Meuwese, Julia D I; van Loon, Anouk M; Scholte, H Steven; Lirk, Philipp B; Vulink, Nienke C C; Hollmann, Markus W; Lamme, Victor A F

    2013-01-01

    Recurrent interactions between neurons in the visual cortex are crucial for the integration of image elements into coherent objects, such as in figure-ground segregation of textured images. Blocking N-methyl-D-aspartate (NMDA) receptors in monkeys can abolish neural signals related to figure-ground segregation and feature integration. However, it is unknown whether this also affects perceptual integration itself. Therefore, we tested whether ketamine, a non-competitive NMDA receptor antagonist, reduces feature integration in humans. We administered a subanesthetic dose of ketamine to healthy subjects who performed a texture discrimination task in a placebo-controlled double blind within-subject design. We found that ketamine significantly impaired performance on the texture discrimination task compared to the placebo condition, while performance on a control fixation task was much less impaired. This effect is not merely due to task difficulty or a difference in sedation levels. We are the first to show a behavioral effect on feature integration by manipulating the NMDA receptor in humans.

  13. NMDA receptor antagonist ketamine impairs feature integration in visual perception.

    Directory of Open Access Journals (Sweden)

    Julia D I Meuwese

    Full Text Available Recurrent interactions between neurons in the visual cortex are crucial for the integration of image elements into coherent objects, such as in figure-ground segregation of textured images. Blocking N-methyl-D-aspartate (NMDA) receptors in monkeys can abolish neural signals related to figure-ground segregation and feature integration. However, it is unknown whether this also affects perceptual integration itself. Therefore, we tested whether ketamine, a non-competitive NMDA receptor antagonist, reduces feature integration in humans. We administered a subanesthetic dose of ketamine to healthy subjects who performed a texture discrimination task in a placebo-controlled double blind within-subject design. We found that ketamine significantly impaired performance on the texture discrimination task compared to the placebo condition, while performance on a control fixation task was much less impaired. This effect is not merely due to task difficulty or a difference in sedation levels. We are the first to show a behavioral effect on feature integration by manipulating the NMDA receptor in humans.

  14. Visual memory and learning in extremely low-birth-weight/extremely preterm adolescents compared with controls: a geographic study.

    Science.gov (United States)

    Molloy, Carly S; Wilson-Ching, Michelle; Doyle, Lex W; Anderson, Vicki A; Anderson, Peter J

    2014-04-01

    Contemporary data on visual memory and learning in survivors born extremely preterm (EP; Visual learning and memory data were available for 221 (74.2%) EP/ELBW subjects and 159 (60.7%) controls. EP/ELBW adolescents exhibited significantly poorer performance across visual memory and learning variables compared with controls. Visual learning and delayed visual memory were particularly problematic and remained so after controlling for visual-motor integration and visual perception and excluding adolescents with neurosensory disability, and/or IQ visual memory and learning outcomes compared with controls, which cannot be entirely explained by poor visual perceptual or visual constructional skills or intellectual impairment.

  15. Superior haptic-to-visual shape matching in autism spectrum disorders.

    Science.gov (United States)

    Nakano, Tamami; Kato, Nobumasa; Kitazawa, Shigeru

    2012-04-01

    A weak central coherence theory in autism spectrum disorder (ASD) proposes that a cognitive bias toward local processing in ASD derives from a weakness in integrating local elements into a coherent whole. Using this theory, we hypothesized that shape perception through active touch, which requires sequential integration of sensorimotor traces of exploratory finger movements into a shape representation, would be impaired in ASD. Contrary to our expectation, adults with ASD showed superior performance in a haptic-to-visual delayed shape-matching task compared to adults without ASD. Accuracy in discriminating haptic lengths or haptic orientations, which lies within the somatosensory modality, did not differ between adults with ASD and adults without ASD. Moreover, this superior ability in inter-modal haptic-to-visual shape matching was not explained by the score in a unimodal visuospatial rotation task. These results suggest that individuals with ASD are not impaired in integrating sensorimotor traces into a global visual shape and that their multimodal shape representations and haptic-to-visual information transfer are more accurate than those of individuals without ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Directory of Open Access Journals (Sweden)

    Jonathan M P Wilbiks

    Full Text Available Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  17. The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data

    Science.gov (United States)

    Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris

    2010-05-01

    Data integration is increasingly important as we strive to combine data from disparate sources and assemble better models of the complex processes operating at the Earth's surface and within its interior. These data are often large, multi-dimensional, and subject to differing conventions for data structures, file formats, coordinate spaces, and units of measure. When visualized, these data require differing, and sometimes conflicting, conventions for visual representations, dimensionality, symbology, and interaction. All of this makes the visualization of integrated Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data integration and visualization suite of applications and libraries being developed by the GEON project at the University of California, San Diego, USA. Funded by the NSF, the project is leveraging virtual globe technology from NASA's WorldWind to create interactive 3D visualization tools that combine and layer data from a wide variety of sources to create a holistic view of features at, above, and beneath the Earth's surface. The OEF architecture is open, cross-platform, modular, and based upon Java. The OEF's modular approach to software architecture yields an array of mix-and-match software components for assembling custom applications. Available modules support file format handling, web service communications, data management, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats used in the field. Each one imports data into a general-purpose common data model supporting multidimensional regular and irregular grids, topography, feature geometry, and more. Data within these data models may be manipulated, combined, reprojected, and visualized. The OEF's visualization features support a variety of conventional and new visualization techniques for looking at topography, tomography, point clouds, imagery, maps, and feature geometry. 3D data such as

  18. An Integrated Tone Mapping for High Dynamic Range Image Visualization

    Science.gov (United States)

    Liang, Lei; Pan, Jeng-Shyang; Zhuang, Yongjun

    2018-01-01

    There are two types of tone mapping operators for high dynamic range (HDR) image visualization. HDR images mapped by perceptual operators have a strong sense of realism but lose local details. Empirical operators can maximize the local detail information of an HDR image, but their realism is not strong. A common tone mapping operator suitable for all applications is not available. This paper proposes a novel integrated tone mapping framework which can achieve conversion between empirical operators and perceptual operators. In this framework, the empirical operator is rendered based on an improved saliency map, which simulates the visual attention mechanism of the human eye in natural scenes. The results of objective evaluation prove the effectiveness of the proposed solution.

  19. Integration and Visualization of Epigenome and Mobilome Data in Crops

    OpenAIRE

    Robakowska Hyzorek, Dagmara; Mirouze, Marie; Larmande, Pierre

    2016-01-01

    In the coming years, the study of the interaction between the epigenome and the mobilome is likely to give insights into the role of TEs in genome stability and evolution. In the present project we have created tools to collect epigenetic datasets from different laboratories and databases and translate them to a standard format to be integrated, analyzed and finally visualized.

  20. From Visual Exploration to Storytelling and Back Again.

    Science.gov (United States)

    Gratzl, S; Lex, A; Gehlenborg, N; Cosgrove, N; Streit, M

    2016-06-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author "Vistories", visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract).
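    The capture-and-branch behaviour that CLUE builds on can be sketched as a minimal provenance graph: every exploration step is recorded as a node, any node can be revisited and branched from, and a "Vistory" is the authored path from the start to a chosen state. The class and method names below are hypothetical illustrations, not CLUE's API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProvNode:
    state: dict
    label: str
    parent: Optional["ProvNode"] = None
    children: list = field(default_factory=list)

class ProvenanceGraph:
    """Capture exploration states so any point can be revisited or branched (CLUE-style)."""

    def __init__(self, initial_state):
        self.root = ProvNode(state=dict(initial_state), label="start")
        self.current = self.root

    def capture(self, state, label):
        """Record a new exploration step as a child of the current state."""
        node = ProvNode(state=dict(state), label=label, parent=self.current)
        self.current.children.append(node)
        self.current = node
        return node

    def jump(self, node):
        """Return to any earlier state; the next capture starts a new branch."""
        self.current = node
        return node.state

    def story(self):
        """Extract the root-to-current path as an authored step sequence ('Vistory')."""
        path, n = [], self.current
        while n is not None:
            path.append(n.label)
            n = n.parent
        return list(reversed(path))
```

    Because the full graph is retained, a reader of a shared story can retrace any step and extend the analysis from it, which is exactly what static images and videos cannot offer.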

  1. Integration of bio-inspired, control-based visual and olfactory data for the detection of an elusive target

    Science.gov (United States)

    Duong, Tuan A.; Duong, Nghi; Le, Duong

    2017-01-01

    In this paper, we present an integration technique using a bio-inspired, control-based visual and olfactory receptor system to search for elusive targets in practical environments where the targets cannot be seen clearly in either sensory data stream. The bio-inspired visual system is based on a model of the extended visual pathway, which consists of saccadic eye movements and the visual pathway (vertebrate retina, lateral geniculate nucleus, and visual cortex), to enable powerful target detection from noisy, partial, or incomplete visual data. The olfactory receptor algorithm, namely spatial invariant independent component analysis, which was developed on olfactory receptor-electronic nose (enose) data from Caltech, is adopted to enable odorant target detection in an unknown environment. The integration of the two systems is a vital approach and sets a cornerstone for effective, low-cost miniaturized UAVs or fly robots for future DOD and NASA missions, as well as for security systems in Internet of Things environments.

  2. Neural substrates of reliability-weighted visual-tactile multisensory integration

    Directory of Open Access Journals (Sweden)

    Michael S Beauchamp

    2010-06-01

    Full Text Available As sensory systems deteriorate in aging or disease, the brain must relearn the appropriate weights to assign each modality during multisensory integration. Using blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) of human subjects, we tested a model for the neural mechanisms of sensory weighting, termed "weighted connections". This model holds that the connection weights between early and late areas vary depending on the reliability of the modality, independent of the level of early sensory cortex activity. When subjects detected viewed and felt touches to the hand, a network of brain areas was active, including visual areas in lateral occipital cortex, somatosensory areas in inferior parietal lobe, and multisensory areas in the intraparietal sulcus (IPS). In agreement with the weighted connection model, the connection weight measured with structural equation modeling between somatosensory cortex and IPS increased for somatosensory-reliable stimuli, and the connection weight between visual cortex and IPS increased for visual-reliable stimuli. This double dissociation of connection strengths was similar to the pattern of behavioral responses during incongruent multisensory stimulation, suggesting that weighted connections may be a neural mechanism for behavioral reliability weighting.
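    The reliability weighting tested here follows the standard maximum-likelihood cue-combination rule: each modality is weighted by its inverse variance, and the fused estimate has lower variance than either cue alone. A minimal sketch of that rule:

```python
import numpy as np

def reliability_weighted_estimate(estimates, variances):
    """Fuse unisensory estimates with weights proportional to reliability (1/variance)."""
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused = float(np.dot(weights, estimates))
    fused_variance = 1.0 / np.sum(1.0 / variances)  # never larger than the best single cue
    return fused, fused_variance
```

    With equal variances the cues are simply averaged; as one modality becomes noisier its weight shrinks, mirroring the reliability-dependent connection-weight changes reported in the abstract.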

  3. A framework for interactive visual analysis of heterogeneous marine data in an integrated problem solving environment

    Science.gov (United States)

    Liu, Shuai; Chen, Ge; Yao, Shifeng; Tian, Fenglin; Liu, Wei

    2017-07-01

    This paper presents a novel integrated marine visualization framework which focuses on processing, analyzing the multi-dimension spatiotemporal marine data in one workflow. Effective marine data visualization is needed in terms of extracting useful patterns, recognizing changes, and understanding physical processes in oceanography researches. However, the multi-source, multi-format, multi-dimension characteristics of marine data pose a challenge for interactive and feasible (timely) marine data analysis and visualization in one workflow. And, global multi-resolution virtual terrain environment is also needed to give oceanographers and the public a real geographic background reference and to help them to identify the geographical variation of ocean phenomena. This paper introduces a data integration and processing method to efficiently visualize and analyze the heterogeneous marine data. Based on the data we processed, several GPU-based visualization methods are explored to interactively demonstrate marine data. GPU-tessellated global terrain rendering using ETOPO1 data is realized and the video memory usage is controlled to ensure high efficiency. A modified ray-casting algorithm for the uneven multi-section Argo volume data is also presented and the transfer function is designed to analyze the 3D structure of ocean phenomena. Based on the framework we designed, an integrated visualization system is realized. The effectiveness and efficiency of the framework is demonstrated. This system is expected to make a significant contribution to the demonstration and understanding of marine physical process in a virtual global environment.
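    The modified ray-casting the authors mention builds on standard front-to-back alpha compositing through a transfer function. A one-ray sketch is shown below with a toy transfer function; the paper's actual transfer function design for multi-section Argo data is not reproduced here.

```python
import numpy as np

def transfer_function(v):
    """Map a scalar sample in [0, 1] to (rgb, alpha); a toy ramp, for illustration only."""
    v = float(np.clip(v, 0.0, 1.0))
    return np.array([v, 0.5 * v, 1.0 - v]), 0.2 * v

def raycast(samples):
    """Front-to-back compositing of scalar samples along a single ray."""
    color = np.zeros(3)
    alpha = 0.0
    for v in samples:
        c, a = transfer_function(v)
        color += (1.0 - alpha) * a * c   # accumulate premultiplied colour
        alpha += (1.0 - alpha) * a       # accumulate opacity
        if alpha > 0.99:                 # early ray termination saves work on the GPU
            break
    return color, alpha
```

    A volume renderer runs this loop once per screen pixel, sampling the 3D field along the viewing ray; designing the transfer function is what exposes the 3D structure of an ocean phenomenon.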

  4. Deficit in visual temporal integration in autism spectrum disorders.

    Science.gov (United States)

    Nakano, Tamami; Ota, Haruhisa; Kato, Nobumasa; Kitazawa, Shigeru

    2010-04-07

    Individuals with autism spectrum disorders (ASD) are superior in processing local features. Frith and Happé conceptualize this cognitive bias as 'weak central coherence', implying that a local enhancement derives from a weakness in integrating local elements into a coherent whole. The suggested deficit has been challenged, however, because individuals with ASD were not found to be inferior to normal controls in holistic perception. In these opposing studies, however, subjects were encouraged to ignore local features and attend to the whole. Therefore, no one has directly tested whether individuals with ASD are able to integrate local elements over time into a whole image. Here, we report a weakness of individuals with ASD in naming familiar objects moved behind a narrow slit, which was worsened by the absence of local salient features. The results indicate that individuals with ASD have a clear deficit in integrating local visual information over time into a global whole, providing direct evidence for the weak central coherence hypothesis.

  5. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration.

    Science.gov (United States)

    Stropahl, Maren; Debener, Stefan

    2017-01-01

    There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensori-neural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises if cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18) and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex by means of EEG source localization in response to human faces and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the auditory system.

  6. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration

    Directory of Open Access Journals (Sweden)

    Maren Stropahl

    2017-01-01

    Full Text Available There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensori-neural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises if cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18) and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex by means of EEG source localization in response to human faces and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the auditory system.

  7. Can Cultural Behavior Have a Negative Impact on the Development of Visual Integration Pathways?

    Science.gov (United States)

    Pretorius, E.; Naude, H.; van Vuuren, C. J.

    2002-01-01

    Contends that cultural practices such as carrying the baby on the mother's back for prolonged periods can impact negatively on the development of visual integration pathways during the sensorimotor stage by preventing adequate crawling. Maintains that crawling is essential for cross-modality integration and that higher mental functions may…

  8. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 4 : web-based bridge information database--visualization analytics and distributed sensing.

    Science.gov (United States)

    2012-03-01

    This report introduces the design and implementation of a Web-based bridge information visual analytics system. This project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result combines a GIS ...

  9. An Integrated Biomechanical Model for Microgravity-Induced Visual Impairment

    Science.gov (United States)

    Nelson, Emily S.; Best, Lauren M.; Myers, Jerry G.; Mulugeta, Lealem

    2012-01-01

    When gravitational unloading occurs upon entry to space, astronauts experience a major shift in the distribution of their bodily fluids, with a net headward movement. Measurements have shown that intraocular pressure spikes, and there is a strong suspicion that intracranial pressure also rises. Some astronauts in both short- and long-duration spaceflight develop visual acuity changes, which may or may not reverse upon return to earth gravity. To date, of the 36 U.S. astronauts who have participated in long-duration space missions on the International Space Station, 15 crew members have developed minor to severe visual decrements and anatomical changes. These ophthalmic changes include hyperopic shift, optic nerve distension, optic disc edema, globe flattening, choroidal folds, and elevated cerebrospinal fluid pressure. In order to understand the physical mechanisms behind these phenomena, NASA is developing an integrated model that appropriately captures whole-body fluids transport through lumped-parameter models for the cerebrospinal and cardiovascular systems. This data feeds into a finite element model for the ocular globe and retrobulbar subarachnoid space through time-dependent boundary conditions. Although tissue models and finite element representations of the corneo-scleral shell, retina, choroid and optic nerve head have been integrated to study pathological conditions such as glaucoma, the retrobulbar subarachnoid space behind the eye has received much less attention. This presentation will describe the development and scientific foundation of our holistic model.

  10. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    Science.gov (United States)

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  11. A Visual Interface Diagram For Mapping Functions In Integrated Products

    DEFF Research Database (Denmark)

    Ingerslev, Mattias; Oliver Jespersen, Mikkel; Göhler, Simon Moritz

    2015-01-01

    In product development there is a recognized tendency towards increased functionality for each new product generation. This leads to more integrated and complex products, with the risk of development delays and quality issues as a consequence of lacking overview and transparency. The work described...... of visualizing relations between parts and functions in highly integrated mechanical products. The result is an interface diagram that supports design teams in communication, decision making and design management. The diagram gives the designer an overview of the couplings and dependencies within a product...... in this article has been conducted in collaboration with Novo Nordisk on the insulin injection device FlexTouch® as case product. The FlexTouch® reflects the characteristics of an integrated product with several functions shared between a relatively low number of parts. In this article we present a novel way...

  12. 3D visualization of integrated ground penetrating radar data and EM-61 data to determine buried objects and their characteristics

    International Nuclear Information System (INIS)

    Kadioğlu, Selma; Daniels, Jeffrey J

    2008-01-01

    This paper is based on an interactive three-dimensional (3D) visualization of two-dimensional (2D) ground penetrating radar (GPR) data and their integration with electromagnetic induction (EMI) using EM-61 data in a 3D volume. This method was used to locate and identify near-surface buried old industrial remains with shape, depth and type (metallic/non-metallic) in a brownfield site. The aim of the study is to illustrate a new approach to integrating two data sets in a 3D image for monitoring and interpretation of buried remains, and this paper methodically indicates the appropriate amplitude–colour and opacity function constructions to activate buried remains in a transparent 3D view. The results showed that the interactive interpretation of the integrated 3D visualization was done using generated transparent 3D sub-blocks of the GPR data set that highlighted individual anomalies in true locations. Colour assignments and formulating of opacity of the data sets were the keys to the integrated 3D visualization and interpretation. This new visualization provided an optimum visual comparison and an interpretation of the complex data sets to identify and differentiate the metallic and non-metallic remains and to control the true interpretation on exact locations with depth. Therefore, the integrated 3D visualization of two data sets allowed more successful identification of the buried remains

  13. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models.

    Science.gov (United States)

    Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus

    2017-02-01

    Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies
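The mixed-RSA procedure described in this record, fitting one weight per model feature and voxel on training stimuli and then comparing representational dissimilarities on held-out stimuli, can be sketched with synthetic data. Everything below, from the dimensions to the ridge penalty `lam`, is illustrative rather than taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 80, 20, 50, 30

# Synthetic model features and voxel responses (illustrative only)
F_train = rng.standard_normal((n_train, n_feat))
F_test = rng.standard_normal((n_test, n_feat))
W_true = rng.standard_normal((n_feat, n_vox))
Y_train = F_train @ W_true + 0.5 * rng.standard_normal((n_train, n_vox))
Y_test = F_test @ W_true + 0.5 * rng.standard_normal((n_test, n_vox))

# Mixed RSA: fit one weight per (feature, voxel) pair by ridge regression
lam = 1.0
W_hat = np.linalg.solve(F_train.T @ F_train + lam * np.eye(n_feat),
                        F_train.T @ Y_train)
Y_pred = F_test @ W_hat

def rdm(X):
    """Representational dissimilarity: 1 - correlation between stimulus patterns."""
    C = np.corrcoef(X)
    return 1.0 - C[np.triu_indices_from(C, k=1)]

# Fixed RSA would compare rdm(F_test) to rdm(Y_test) directly;
# mixed RSA compares the RDM of the mixed (predicted) patterns instead.
fit = np.corrcoef(rdm(Y_pred), rdm(Y_test))[0, 1]
print(round(fit, 2))
```

In this toy setting the mixing weights recover the voxel space well, so the predicted and measured RDMs correlate strongly; with a mismatched feature set the correlation would drop.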

  14. Applications in Foreign Currency Money Changer Cv.xyz Using Microsoft Visual Basic 6.0

    OpenAIRE

    Fanny Ramadhan; Hariyanto, S.Kom., MMSI

    2010-01-01

    This scientific writing explains the design of an application program for foreign currency transactions using the Visual Basic 6.0 programming language, together with flow diagrams (flowcharts). The database is built with Microsoft Access 2003, which is integrated with the Visual Basic 6.0 program itself, and consists of four tables: Currency, Customer, Transaction, Employee. In the end, the application program for foreign currency transactions will be applied to...

  15. Attention and Visual Motor Integration in Young Children with Uncorrected Hyperopia.

    Science.gov (United States)

    Kulp, Marjean Taylor; Ciner, Elise; Maguire, Maureen; Pistilli, Maxwell; Candy, T Rowan; Ying, Gui-Shuang; Quinn, Graham; Cyert, Lynn; Moore, Bruce

    2017-10-01

    Among 4- and 5-year-old children, deficits in measures of attention, visual-motor integration (VMI), and visual perception (VP) are associated with moderate, uncorrected hyperopia (3 to 6 diopters [D]) accompanied by reduced near visual function (near visual acuity worse than 20/40 or stereoacuity worse than 240 seconds of arc). To compare attention, visual motor, and visual perceptual skills in uncorrected hyperopes and emmetropes attending preschool or kindergarten and evaluate their associations with visual function. Participants were 4 and 5 years of age with either hyperopia (≥3 to ≤6 D, astigmatism ≤1.5 D, anisometropia ≤1 D) or emmetropia (hyperopia ≤1 D; astigmatism, anisometropia, and myopia each < …); testing assessed attention (sustained, receptive, and expressive), VMI, and VP. Binocular visual acuity, stereoacuity, and accommodative accuracy were also assessed at near. Analyses were adjusted for age, sex, race/ethnicity, and parent's/caregiver's education. Two hundred forty-four hyperopes (mean, +3.8 ± [SD] 0.8 D) and 248 emmetropes (+0.5 ± 0.5 D) completed testing. Mean sustained attention score was worse in hyperopes compared with emmetropes (mean difference, -4.1; P < …). Attention score was worse in 4 to 6 D hyperopes compared with emmetropes (by -2.6, P = .01). Hyperopes with reduced near visual acuity (20/40 or worse) had worse scores than emmetropes (-6.4, P < … for attention; -3.0, P = .004 for Receptive Attention; -0.7, P = .006 for VMI; -1.3, P = .008 for VP). Hyperopes with stereoacuity of 240 seconds of arc or worse scored significantly worse than emmetropes (-6.7, P < … for attention; -3.4, P = .03 for Expressive Attention; -2.2, P = .03 for Receptive Attention; -0.7, P = .01 for VMI; -1.7, P < … for VP); hyperopes without reduced near visual function generally performed similarly to emmetropes. Moderately hyperopic children were found to have deficits in measures of attention. Hyperopic children with reduced near visual function also had lower scores on VMI and VP than emmetropic children.

  16. Cultivating Common Ground: Integrating Standards-Based Visual Arts, Math and Literacy in High-Poverty Urban Classrooms

    Science.gov (United States)

    Cunnington, Marisol; Kantrowitz, Andrea; Harnett, Susanne; Hill-Ries, Aline

    2014-01-01

    The "Framing Student Success: Connecting Rigorous Visual Arts, Math and Literacy Learning" experimental demonstration project was designed to develop and test an instructional program integrating high-quality, standards-based instruction in the visual arts, math, and literacy. Developed and implemented by arts-in-education organization…

  17. ToxPi Graphical User Interface 2.0: Dynamic exploration, visualization, and sharing of integrated data models.

    Science.gov (United States)

    Marvel, Skylar W; To, Kimberly; Grimm, Fabian A; Wright, Fred A; Rusyn, Ivan; Reif, David M

    2018-03-05

    Drawing integrated conclusions from diverse source data requires synthesis across multiple types of information. The ToxPi (Toxicological Prioritization Index) is an analytical framework that was developed to enable integration of multiple sources of evidence by transforming data into integrated, visual profiles. Methodological improvements have advanced ToxPi and expanded its applicability, necessitating a new, consolidated software platform to provide functionality, while preserving flexibility for future updates. We detail the implementation of a new graphical user interface for ToxPi that provides interactive visualization, analysis, reporting, and portability. The interface is deployed as a stand-alone, platform-independent Java application, with a modular design to accommodate inclusion of future analytics. The new ToxPi interface introduces several features, from flexible data import formats (including legacy formats that permit backward compatibility) to similarity-based clustering to options for high-resolution graphical output. We present the new ToxPi interface for dynamic exploration, visualization, and sharing of integrated data models. The ToxPi interface is freely available as a single compressed download that includes the main Java executable, all libraries, example data files, and a complete user manual from http://toxpi.org.

  18. Vision and visual information processing in cubozoans

    DEFF Research Database (Denmark)

    Bielecki, Jan

    relationship between acuity and light sensitivity. Animals have evolved a wide variety of solutions to this problem such as folded membranes, to have a larger receptive surfaces, and lenses, to focus light onto the receptive membranes. On the neural capacity side, complex eyes demand huge processing network...... animals in a wide range of behaviours. It is intuitive that a complex eye is energetically very costly, not only in components but also in neural involvement. The increasing behavioural demand added pressure on design specifications and eye evolution is considered an optimization of the inverse...... fit their need. Visual neuroethology integrates optics, sensory equipment, neural network and motor output to explain how animals can perform behaviour in response to a specific visual stimulus. In this doctoral thesis, I will elucidate the individual steps in a visual neuroethological pathway...

  19. Four-dimensional microscope- integrated optical coherence tomography to enhance visualization in glaucoma surgeries.

    Science.gov (United States)

    Pasricha, Neel Dave; Bhullar, Paramjit Kaur; Shieh, Christine; Viehland, Christian; Carrasco-Zevallos, Oscar Mijail; Keller, Brenton; Izatt, Joseph Adam; Toth, Cynthia Ann; Challa, Pratap; Kuo, Anthony Nanlin

    2017-01-01

    We report the first use of swept-source microscope-integrated optical coherence tomography (SS-MIOCT) capable of live four-dimensional (4D) (three-dimensional across time) imaging intraoperatively to directly visualize tube shunt placement and trabeculectomy surgeries in two patients with severe open-angle glaucoma and elevated intraocular pressure (IOP) that was not adequately managed by medical intervention or prior surgery. We performed tube shunt placement and trabeculectomy surgery and used SS-MIOCT to visualize and record surgical steps that benefitted from the enhanced visualization. In the case of tube shunt placement, SS-MIOCT successfully visualized the scleral tunneling, tube shunt positioning in the anterior chamber, and tube shunt suturing. For the trabeculectomy, SS-MIOCT successfully visualized the scleral flap creation, sclerotomy, and iridectomy. Postoperatively, both patients did well, with IOPs decreasing to the target goal. We found the benefit of SS-MIOCT was greatest in surgical steps requiring depth-based assessments. This technology has the potential to improve clinical outcomes.

  20. A double-integration hypothesis to explain ocean ecosystem response to climate forcing

    Science.gov (United States)

    Di Lorenzo, Emanuele; Ohman, Mark D.

    2013-01-01

    Long-term time series of marine ecological indicators often are characterized by large-amplitude state transitions that can persist for decades. Understanding the significance of these variations depends critically on the underlying hypotheses characterizing expected natural variability. Using a linear autoregressive model in combination with long-term zooplankton observations off the California coast, we show that cumulative integrations of white-noise atmospheric forcing can generate marine population responses that are characterized by strong transitions and prolonged apparent state changes. This model provides a baseline hypothesis for explaining ecosystem variability and for interpreting the significance of abrupt responses and climate change signatures in marine ecosystems. PMID:23341628
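The double-integration hypothesis lends itself to a very small simulation: white-noise forcing passed through two cascaded AR(1) integrations yields a strongly reddened series with prolonged apparent state changes. A minimal sketch of that idea; the AR coefficient and series length below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, a = 5000, 0.9  # series length and AR(1) memory coefficient (illustrative)

def ar1(forcing, a):
    """One integration step: x[t] = a * x[t-1] + forcing[t]."""
    x = np.zeros_like(forcing)
    for t in range(1, len(forcing)):
        x[t] = a * x[t - 1] + forcing[t]
    return x

noise = rng.standard_normal(n)   # white-noise "atmospheric" forcing
ocean = ar1(noise, a)            # first integration (e.g. an ocean state index)
plankton = ar1(ocean, a)         # second integration (population response)

def lag1(x):
    """Lag-1 autocorrelation, a simple redness measure."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(round(lag1(noise), 2), round(lag1(ocean), 2), round(lag1(plankton), 2))
```

Each integration raises the lag-1 autocorrelation toward 1, so the doubly integrated series shows the long excursions that can be mistaken for regime shifts even though the forcing is pure noise.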

  1. An Integrated Model to Explain How Corporate Social Responsibility Affects Corporate Financial Performance

    Directory of Open Access Journals (Sweden)

    Chin-Shien Lin

    2015-06-01

    Full Text Available The effect of corporate social responsibility (CSR on financial performance has important implications for enterprises, communities, and countries, and the significance of this issue cannot be ignored. Therefore, this paper proposes an integrated model to explain the influence of CSR on financial performance with intellectual capital as a mediator and industry type as a moderator. Empirical results indicate that intellectual capital mediates the relationship between CSR and financial performance, and industry type moderates the direct influence of CSR on financial performance. Such results have critical implications for both academia and practice.

  2. Four-dimensional Microscope-Integrated Optical Coherence Tomography to Visualize Suture Depth in Strabismus Surgery.

    Science.gov (United States)

    Pasricha, Neel D; Bhullar, Paramjit K; Shieh, Christine; Carrasco-Zevallos, Oscar M; Keller, Brenton; Izatt, Joseph A; Toth, Cynthia A; Freedman, Sharon F; Kuo, Anthony N

    2017-02-14

    The authors report the use of swept-source microscope-integrated optical coherence tomography (SS-MIOCT), capable of live four-dimensional (three-dimensional across time) intraoperative imaging, to directly visualize suture depth during lateral rectus resection. Key surgical steps visualized in this report included needle depth during partial and full-thickness muscle passes along with scleral passes. [J Pediatr Ophthalmol Strabismus. 2017;54:e1-e5.]. Copyright 2017, SLACK Incorporated.

  3. Can responses to basic non-numerical visual features explain neural numerosity responses?

    Science.gov (United States)

    Harvey, Ben M; Dumoulin, Serge O

    2017-04-01

    Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs. Copyright © 2017 Elsevier Inc. All rights reserved.
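The model-comparison logic of the record above, asking whether a numerosity regressor or a co-varying low-level regressor better predicts measured responses, can be illustrated with cross-validated prediction accuracy on synthetic data. The regressors, noise levels, and response model below are invented for illustration and are not the study's stimuli or pRF fits:

```python
import numpy as np

rng = np.random.default_rng(2)
numerosity = np.tile(np.arange(1.0, 8.0), 40)   # 1..7 items, 40 repeats
# A co-varying non-numerical feature (e.g. density), noisily tracking numerosity
density = numerosity + 1.5 * rng.standard_normal(numerosity.size)
# Hypothetical response: compressive function of numerosity plus noise
response = np.log(numerosity) + 0.3 * rng.standard_normal(numerosity.size)

def cv_corr(regressor, y, k=5):
    """Mean held-out correlation of a one-regressor linear model."""
    idx = np.arange(y.size)
    scores = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        slope, intercept = np.polyfit(regressor[train], y[train], 1)
        pred = slope * regressor[fold] + intercept
        scores.append(np.corrcoef(pred, y[fold])[0, 1])
    return float(np.mean(scores))

score_numerosity = cv_corr(numerosity, response)
score_lowlevel = cv_corr(density, response)
print(round(score_numerosity, 2), round(score_lowlevel, 2))
```

When the response is actually driven by numerosity, the numerosity regressor wins on held-out data even though the low-level feature co-varies with it, which is the comparison the study performs with full pRF models rather than single regressors.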

  4. Perceptual stimulus-A Bayesian-based integration of multi-visual-cue approach and its application

    Institute of Scientific and Technical Information of China (English)

    XUE JianRu; ZHENG NanNing; ZHONG XiaoPin; PING LinJiang

    2008-01-01

    With the view that a visual cue can be taken as a kind of stimulus, studying the mechanisms of the visual perception process through probabilistic representations of visual cues leads to a class of statistical integration of multiple visual cues (IMVC) methods, which have been applied widely in perceptual grouping, video analysis, and other basic problems in computer vision. In this paper, a survey of the basic ideas and recent advances of IMVC methods is presented, with a focus on the models and algorithms of IMVC for video analysis within the framework of Bayesian estimation. Furthermore, two typical problems in video analysis, robust visual tracking and the "switching problem" in multi-target tracking (MTT), are taken as test beds to verify a series of Bayesian-based IMVC methods proposed by the authors. Finally, the relations between statistical IMVC and the visual perception process, as well as potential future research on IMVC, are discussed.
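For the simplest case, independent Gaussian cue likelihoods, the probabilistic cue combination at the heart of such IMVC methods reduces to precision-weighted averaging. A minimal sketch; the cue means and uncertainties are made-up numbers, not values from the paper:

```python
import numpy as np

def fuse_gaussian_cues(means, sigmas):
    """Precision-weighted fusion of independent Gaussian cue likelihoods.

    Returns the mean and standard deviation of the combined estimate; the
    fused variance is always smaller than any individual cue's variance.
    """
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2          # precisions
    mu = np.sum(w * np.asarray(means, dtype=float)) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    return mu, sigma

# Illustrative numbers: a precise edge cue and a noisier motion cue
mu, sigma = fuse_gaussian_cues(means=[10.0, 14.0], sigmas=[1.0, 2.0])
print(round(mu, 2), round(sigma, 2))
```

The fused estimate is pulled toward the more reliable cue and is more certain than either cue alone, which is the basic behavior that Bayesian IMVC trackers exploit when weighting color, edge, and motion cues.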

  5. Object representation in the bottlenose dolphin (Tursiops truncatus): integration of visual and echoic information.

    Science.gov (United States)

    Harley, H E; Roitblat, H L; Nachtigall, P E

    1996-04-01

    A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic: both vision and echolocation; visual: vision only; echoic: echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of sample and alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.

  6. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the alternative pattern was similar to that for younger adults with the expansion of SOA; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that the response for older adults was slowed and provided empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  7. Integrating and Visualizing Tropical Cyclone Data Using the Real Time Mission Monitor

    Science.gov (United States)

    Goodman, H. Michael; Blakeslee, Richard; Conover, Helen; Hall, John; He, Yubin; Regner, Kathryn

    2009-01-01

    The Real Time Mission Monitor (RTMM) is a visualization and information system that fuses multiple Earth science data sources, to enable real time decision-making for airborne and ground validation experiments. Developed at the NASA Marshall Space Flight Center, RTMM is a situational awareness, decision-support system that integrates satellite imagery, radar, surface and airborne instrument data sets, model output parameters, lightning location observations, aircraft navigation data, soundings, and other applicable Earth science data sets. The integration and delivery of this information is made possible using data acquisition systems, network communication links, network server resources, and visualizations through the Google Earth virtual globe application. RTMM is extremely valuable for optimizing individual Earth science airborne field experiments. Flight planners, scientists, and managers appreciate the contributions that RTMM makes to their flight projects. A broad spectrum of interdisciplinary scientists used RTMM during field campaigns including the hurricane-focused 2006 NASA African Monsoon Multidisciplinary Analyses (NAMMA), 2007 NOAA-NASA Aerosonde Hurricane Noel flight, 2007 Tropical Composition, Cloud, and Climate Coupling (TC4), plus a soil moisture (SMAP-VEX) and two arctic research experiments (ARCTAS) in 2008. Improving and evolving RTMM is a continuous process. RTMM recently integrated the Waypoint Planning Tool, a Java-based application that enables aircraft mission scientists to easily develop a pre-mission flight plan through an interactive point-and-click interface. Individual flight legs are automatically calculated "on the fly". The resultant flight plan is then immediately posted to the Google Earth-based RTMM for interested scientists to view the planned flight track and subsequently compare it to the actual real time flight progress. We are planning additional capabilities to RTMM including collaborations with the Jet Propulsion

  8. Beta, but not gamma, band oscillations index visual form-motion integration.

    Directory of Open Access Journals (Sweden)

    Charles Aissani

    Full Text Available Electrophysiological oscillations in different frequency bands co-occur with perceptual, motor, and cognitive processes, but their function and respective contributions to these processes need further investigation. Here, we recorded MEG signals and sought percept-related modulations of alpha, beta, and gamma band activity during a perceptual form/motion integration task. Participants reported their bound or unbound perception of ambiguously moving displays that could either be seen as a whole square-like shape moving along a Lissajous figure (bound percept) or as pairs of bars oscillating independently along cardinal axes (unbound percept). We found that beta (15-25 Hz), but not gamma (55-85 Hz), oscillations index perceptual states at the individual and group level. The gamma band activity found in the occipital lobe, although significantly higher during visual stimulation than during baseline, is similar in all perceptual states. Similarly, decreased alpha activity during visual stimulation does not differ across percepts. Trial-by-trial classification of perceptual reports based on beta band oscillations was significant in most observers, further supporting the view that modulation of beta power reliably indexes perceptual integration of form/motion stimuli, even at the individual level.

  9. An integrated theory of attention and decision making in visual signal detection.

    Science.gov (United States)

    Smith, Philip L; Ratcliff, Roger

    2009-04-01

    The simplest attentional task, detecting a cued stimulus in an otherwise empty visual field, produces complex patterns of performance. Attentional cues interact with backward masks and with spatial uncertainty, and there is a dissociation in the effects of these variables on accuracy and on response time. A computational theory of performance in this task is described. The theory links visual encoding, masking, spatial attention, visual short-term memory (VSTM), and perceptual decision making in an integrated dynamic framework. The theory assumes that decisions are made by a diffusion process driven by a neurally plausible, shunting VSTM. The VSTM trace encodes the transient outputs of early visual filters in a durable form that is preserved for the time needed to make a decision. Attention increases the efficiency of VSTM encoding, either by increasing the rate of trace formation or by reducing the delay before trace formation begins. The theory provides a detailed, quantitative account of attentional effects in spatial cuing tasks at the level of response accuracy and the response time distributions. (c) 2009 APA, all rights reserved
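The linked diffusion-plus-VSTM architecture can be caricatured in a few lines: a shunting trace grows toward the stimulus value and supplies the drift of a diffusion process, with attention modeled as a faster trace-formation rate. All rates, thresholds, and noise levels below are illustrative, not the theory's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, threshold, noise_sd = 0.002, 1.0, 1.0

def trial(trace_rate, stim=0.5, max_t=3.0):
    """One trial: a diffusion process whose drift is a shunting VSTM trace."""
    v, x, t = 0.0, 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        v += dt * trace_rate * (stim - v)                 # trace grows toward stimulus
        x += v * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x >= threshold, t                              # (correct?, decision time)

def summarize(trace_rate, n=200):
    results = [trial(trace_rate) for _ in range(n)]
    acc = float(np.mean([c for c, _ in results]))
    rt = float(np.mean([t for _, t in results]))
    return acc, rt

acc_cued, rt_cued = summarize(trace_rate=8.0)       # attended: fast trace formation
acc_uncued, rt_uncued = summarize(trace_rate=2.0)   # unattended: slow trace formation
print(round(acc_cued, 2), round(rt_cued, 2), round(acc_uncued, 2), round(rt_uncued, 2))
```

Because the slow trace delivers less drift early in the trial, the unattended condition produces slower and less accurate decisions, mirroring the qualitative cuing effect the theory accounts for.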

  10. Towards the quantitative evaluation of visual attention models.

    Science.gov (United States)

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt to organize and classify models, but they are not sufficient to quantify which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Causes of blindness and visual impairment among students in integrated schools for the blind in Nepal.

    Science.gov (United States)

    Shrestha, Jyoti Baba; Gnyawali, Subodh; Upadhyay, Madan Prasad

    2012-12-01

    To identify the causes of blindness and visual impairment among students in integrated schools for the blind in Nepal. A total of 778 students from all 67 integrated schools for the blind in Nepal were examined using the World Health Organization/Prevention of Blindness Eye Examination Record for Children with Blindness and Low Vision during the study period of 3 years. Among 831 students enrolled in the schools, 778 (93.6%) participated in the study. Mean age of students examined was 13.7 years, and the male to female ratio was 1.4:1. Among the students examined, 85.9% were blind, 10% had severe visual impairment and 4.1% were visually impaired. The cornea (22.8%) was the most common anatomical site of visual impairment, its most frequent cause being vitamin A deficiency, followed by the retina (18.4%) and lens (17.6%). Hereditary and childhood factors were responsible for visual loss in 27.9% and 22.0% of students, respectively. Etiology could not be determined in 46% of cases. Overall, 40.9% of students had avoidable causes of visual loss. Vision could be improved to a level better than 6/60 in 3.6% of students refracted. More than one third of students were visually impaired for potentially avoidable reasons, indicating lack of eye health awareness and eye care services in the community. The cause of visual impairment remained unknown in a large number of students, which indicates the need for introduction of modern diagnostic tools.

  12. Asymmetric Temporal Integration of Layer 4 and Layer 2/3 Inputs in Visual Cortex

    OpenAIRE

    Hang, Giao B.; Dan, Yang

    2010-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices...

  13. Visualizing Volume to Help Students Understand the Disk Method on Calculus Integral Course

    Science.gov (United States)

    Tasman, F.; Ahmad, D.

    2018-04-01

    Much research has shown that students have difficulty understanding the concepts of integral calculus. This research therefore designed a classroom activity, following the design research method, to assist students in understanding the integral concept, especially in calculating the volume of solids of revolution using the disc method. To support students' development in understanding integral concepts, this research uses a realistic mathematics approach integrating the GeoGebra software. First-year university students taking a calculus course (approximately 30 people) were chosen to implement the designed classroom activity. The results of the retrospective analysis show that visualizing the volume of solids of revolution using GeoGebra can assist students in understanding the disc method as one way of calculating the volume of a solid of revolution.
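
    For reference, the disc method named above computes the volume of a solid of revolution by integrating the areas of circular cross-sections; rotating $f(x)=\sqrt{x}$ on $[0,4]$ about the x-axis gives a quick worked example:

    ```latex
    V = \pi \int_a^b \left[f(x)\right]^2 \, dx,
    \qquad
    V = \pi \int_0^4 \left(\sqrt{x}\right)^2 dx
      = \pi \int_0^4 x \, dx
      = \pi \left[\tfrac{x^2}{2}\right]_0^4
      = 8\pi .
    ```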

  14. Integration of Visual and Proprioceptive Limb Position Information in Human Posterior Parietal, Premotor, and Extrastriate Cortex.

    Science.gov (United States)

    Limanowski, Jakub; Blankenburg, Felix

    2016-03-02

    The brain constructs a flexible representation of the body from multisensory information. Previous work on monkeys suggests that the posterior parietal cortex (PPC) and ventral premotor cortex (PMv) represent the position of the upper limbs based on visual and proprioceptive information. Human experiments on the rubber hand illusion implicate similar regions, but since such experiments rely on additional visuo-tactile interactions, they cannot isolate visuo-proprioceptive integration. Here, we independently manipulated the position (palm or back facing) of passive human participants' unseen arm and of a photorealistic virtual 3D arm. Functional magnetic resonance imaging (fMRI) revealed that matching visual and proprioceptive information about arm position engaged the PPC, PMv, and the body-selective extrastriate body area (EBA); activity in the PMv moreover reflected interindividual differences in congruent arm ownership. Further, the PPC, PMv, and EBA increased their coupling with the primary visual cortex during congruent visuo-proprioceptive position information. These results suggest that human PPC, PMv, and EBA evaluate visual and proprioceptive position information and, under sufficient cross-modal congruence, integrate it into a multisensory representation of the upper limb in space. The position of our limbs in space constantly changes, yet the brain manages to represent limb position accurately by combining information from vision and proprioception. Electrophysiological recordings in monkeys have revealed neurons in the posterior parietal and premotor cortices that seem to implement and update such a multisensory limb representation, but this has been difficult to demonstrate in humans. 
Our fMRI experiment shows that human posterior parietal, premotor, and body-selective visual brain areas respond preferentially to a virtual arm seen in a position corresponding to one's unseen hidden arm, while increasing their communication with regions conveying visual

  15. Questionnaire-based person trip visualization and its integration to quantitative measurements in Myanmar

    Science.gov (United States)

    Kimijiama, S.; Nagai, M.

    2016-06-01

    With telecommunication development in Myanmar, person trip surveys are expected to shift from conversational questionnaires to GPS surveys. Integrating historical questionnaire data with GPS surveys, and visualizing both, is essential for evaluating chronological trip changes against socio-economic and environmental events. The objectives of this paper are to: (a) visualize questionnaire-based person trip data, (b) compare the errors between questionnaire and GPS data sets with respect to sex and age, and (c) assess trip behaviour in time series. In total, 345 individual respondents were selected through stratified random sampling, and each was assessed with both a questionnaire and a GPS survey. Trip information from the questionnaires, such as destinations, was converted using GIS. The results show that errors between the two data sets in the number of trips, total trip distance and total trip duration are 25.5%, 33.2% and 37.2%, respectively. The smallest errors were found among working-age females, mainly those employed in project-related activities generated by foreign investment. Trip distance increased year by year. The study concluded that visualizing questionnaire-based person trip data and integrating it with current quantitative measurements is very useful for exploring historical trip changes and understanding the impacts of socio-economic events.

  16. Is the Theory of Mind deficit observed in visual paradigms in schizophrenia explained by an impaired attention toward gaze orientation?

    Science.gov (United States)

    Roux, Paul; Forgeot d'Arc, Baudoin; Passerieux, Christine; Ramus, Franck

    2014-08-01

    Schizophrenia is associated with poor Theory of Mind (ToM), particularly in goal and belief attribution to others. It is also associated with abnormal gaze behaviors toward others: individuals with schizophrenia usually look less at others' faces and gaze, which are crucial epistemic cues that contribute to correct mental-state inferences. This study tests the hypothesis that impaired ToM in schizophrenia might be related to a deficit in visual attention toward gaze orientation. We adapted a previous non-verbal ToM paradigm consisting of animated cartoons allowing the assessment of goal and belief attribution. In the true and false belief conditions, an object was displaced while an agent was either looking at it or away, respectively. Eye movements were recorded to quantify visual attention to gaze orientation (proportion of time participants spent looking at the head of the agent while the target object changed locations). 29 patients with schizophrenia and 29 matched controls were tested. Compared to controls, patients looked significantly less at the agent's head and had lower performance in belief and goal attribution. Performance in belief and goal attribution significantly increased with the head looking percentage. When the head looking percentage was entered as a covariate, the group effect on belief and goal attribution performance was no longer significant. Patients' deficit on this visual ToM paradigm is thus entirely explained by decreased visual attention toward gaze. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Constituents of Music and Visual-Art Related Pleasure - A Critical Integrative Literature Review.

    Science.gov (United States)

    Tiihonen, Marianne; Brattico, Elvira; Maksimainen, Johanna; Wikgren, Jan; Saarikallio, Suvi

    2017-01-01

    The present literature review investigated how pleasure induced by music and visual-art has been conceptually understood in empirical research over the past 20 years. After an initial selection of abstracts from seven databases (keywords: pleasure, reward, enjoyment, and hedonic), twenty music and eleven visual-art papers were systematically compared. The following questions were addressed: (1) What is the role of the keyword in the research question? (2) Is pleasure considered a result of variation in the perceiver's internal or external attributes? (3) What are the most commonly employed methods and main variables in empirical settings? Based on these questions, our critical integrative analysis aimed to identify which themes and processes emerged as key features for conceptualizing art-induced pleasure. The results demonstrated great variance in how pleasure has been approached: In the music studies pleasure was often a clear object of investigation, whereas in the visual-art studies the term was often embedded into the context of an aesthetic experience, or used otherwise in a descriptive, indirect sense. Music studies often targeted different emotions, their intensity or anhedonia. Biographical and background variables and personality traits of the perceiver were often measured. Next to behavioral methods, a common method was brain imaging which often targeted the reward circuitry of the brain in response to music. Visual-art pleasure was also frequently addressed using brain imaging methods, but the research focused on sensory cortices rather than the reward circuit alone. Compared with music research, visual-art research investigated more frequently pleasure in relation to conscious, cognitive processing, where the variations of stimulus features and the changing of viewing modes were regarded as explanatory factors of the derived experience. Despite valence being frequently applied in both domains, we conclude, that in empirical music research pleasure

  18. Numerical integration methods and layout improvements in the context of dynamic RNA visualization.

    Science.gov (United States)

    Shabash, Boris; Wiese, Kay C

    2017-05-30

    RNA visualization software tools have traditionally presented a static visualization of RNA molecules with limited ability for users to interact with the resulting image once it is complete. Only a few tools allowed for dynamic structures. One such tool is jViz.RNA. Currently, jViz.RNA employs a unique method for the creation of the RNA molecule layout by mapping the RNA nucleotides into vertexes in a graph, which we call the detailed graph, and then utilizes a Newtonian mechanics inspired system of forces to calculate a layout for the RNA molecule. The work presented here focuses on improvements to jViz.RNA that allow the drawing of RNA secondary structures according to common drawing conventions, as well as dramatic run-time performance improvements. This is done first by presenting an alternative method for mapping the RNA molecule into a graph, which we call the compressed graph, and then employing advanced numerical integration methods for the compressed graph representation. Comparing the compressed graph and detailed graph implementations, we find that the compressed graph produces results more consistent with RNA drawing conventions. However, we also find that employing the compressed graph method requires a more sophisticated initial layout to produce visualizations that would require minimal user interference. Comparing the two numerical integration methods demonstrates the higher stability of the Backward Euler method, and its resulting ability to handle much larger time steps, a high priority feature for any software which entails user interaction. The work in this manuscript presents the preferred use of compressed graphs to detailed ones, as well as the advantages of employing the Backward Euler method over the Forward Euler method. These improvements produce more stable as well as visually aesthetic representations of the RNA secondary structures. 
The results presented demonstrate that both the compressed graph representation, as well as the Backward
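
    The stability advantage of the Backward Euler method highlighted in this abstract can be seen on the standard stiff test equation $\dot{x} = -kx$ — a toy analogue of the layout's spring forces, not jViz.RNA's actual code: the explicit update blows up once the step exceeds $2/k$, while the implicit update decays for any step size.

    ```python
    def forward_euler(x, k, h, steps):
        # explicit update for dx/dt = -k*x:  x_{n+1} = x_n + h*(-k*x_n)
        for _ in range(steps):
            x = x + h * (-k * x)
        return x

    def backward_euler(x, k, h, steps):
        # implicit update x_{n+1} = x_n + h*(-k*x_{n+1}), solved in closed
        # form for this linear equation:  x_{n+1} = x_n / (1 + k*h)
        for _ in range(steps):
            x = x / (1.0 + k * h)
        return x

    # step size h far beyond Forward Euler's stability limit of 2/k
    k, h, steps = 50.0, 0.1, 40
    fwd = forward_euler(1.0, k, h, steps)   # oscillates and blows up
    bwd = backward_euler(1.0, k, h, steps)  # decays smoothly toward 0
    ```

    This is why an implicit integrator lets an interactive layout take large time steps without the simulation exploding under user manipulation.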

  19. Impaired Integration of Emotional Faces and Affective Body Context in a Rare Case of Developmental Visual Agnosia

    Science.gov (United States)

    Aviezer, Hillel; Hassin, Ran. R.; Bentin, Shlomo

    2011-01-01

    In the current study we examined the recognition of facial expressions embedded in emotionally expressive bodies in case LG, an individual with a rare form of developmental visual agnosia who suffers from severe prosopagnosia. Neuropsychological testing demonstrated that LG's agnosia is characterized by profoundly impaired visual integration. Unlike individuals with typical developmental prosopagnosia who display specific difficulties with face identity (but typically not expression) recognition, LG was also impaired at recognizing isolated facial expressions. By contrast, he successfully recognized the expressions portrayed by faceless emotional bodies handling affective paraphernalia. When presented with contextualized faces in emotional bodies his ability to detect the emotion expressed by a face did not improve even if it was embedded in an emotionally-congruent body context. Furthermore, in contrast to controls, LG displayed an abnormal pattern of contextual influence from emotionally-incongruent bodies. The results are interpreted in the context of a general integration deficit in developmental visual agnosia, suggesting that impaired integration may extend from the level of the face to the level of the full person. PMID:21482423

  20. Visualization of RNA structure models within the Integrative Genomics Viewer.

    Science.gov (United States)

    Busan, Steven; Weeks, Kevin M

    2017-07-01

    Analyses of the interrelationships between RNA structure and function are increasingly important components of genomic studies. The SHAPE-MaP strategy enables accurate RNA structure probing and realistic structure modeling of kilobase-length noncoding RNAs and mRNAs. Existing tools for visualizing RNA structure models are not suitable for efficient analysis of long, structurally heterogeneous RNAs. In addition, structure models are often advantageously interpreted in the context of other experimental data and gene annotation information, for which few tools currently exist. We have developed a module within the widely used and well supported open-source Integrative Genomics Viewer (IGV) that allows visualization of SHAPE and other chemical probing data, including raw reactivities, data-driven structural entropies, and data-constrained base-pair secondary structure models, in context with linear genomic data tracks. We illustrate the usefulness of visualizing RNA structure in the IGV by exploring structure models for a large viral RNA genome, comparing bacterial mRNA structure in cells with its structure under cell- and protein-free conditions, and comparing a noncoding RNA structure modeled using SHAPE data with a base-pairing model inferred through sequence covariation analysis. © 2017 Busan and Weeks; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  1. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    Science.gov (United States)

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (strongly tightened with the other teaching material), within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  2. Integrated visualization of remote sensing data using Google Earth

    Science.gov (United States)

    Castella, M.; Rigo, T.; Argemi, O.; Bech, J.; Pineda, N.; Vilaclara, E.

    2009-09-01

    The need for advanced visualization tools for meteorological data has led in recent years to the development of sophisticated software packages, either by observing-system manufacturers or by third-party solution providers. For example, manufacturers of remote sensing systems such as weather radars or lightning detection systems include zoom, product selection and archive access capabilities, as well as quantitative tools for data analysis, as standard features which are highly appreciated in weather surveillance or post-event case study analysis. However, the fact that each manufacturer has its own visualization system and data formats hampers the usability and integration of different data sources. In this context, Google Earth (GE) offers the possibility of combining several types of graphical information in a single visualization system which can be easily accessed by users. The Meteorological Service of Catalonia (SMC) has been evaluating the use of GE as a visualization platform for surveillance tasks in adverse weather events. First experiences relate to the real-time integration of remote sensing data: radar, lightning, and satellite. The tool shows an animation of the combined products over the last hour, giving a good picture of the meteorological situation. One of the main advantages of this product is that it is easy to install on many computers and does not have high computational requirements. Besides this, the capability of GE provides information about the areas most affected by heavy rain or other weather phenomena. On the other hand, the main disadvantage is that the product offers only qualitative information: quantitative data are available only through the graphical display (i.e. through color scales not associated with physical values that users can access easily). The procedure developed to run in real time is divided into three parts. First of all, a crontab file launches different applications, depending on the data type
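
    As an illustration of the kind of output such a procedure can feed to GE, the sketch below builds a minimal KML GroundOverlay that drapes a radar image over a bounding box at a given time. The product URL and coordinates are invented for the example, not the SMC's actual data feed:

    ```python
    import xml.etree.ElementTree as ET

    def radar_overlay_kml(image_url, north, south, east, west, when):
        """Build a minimal KML document with a single GroundOverlay that
        drapes an image over a lat/lon bounding box at a given time."""
        return f"""<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <GroundOverlay>
        <name>radar composite</name>
        <TimeStamp><when>{when}</when></TimeStamp>
        <Icon><href>{image_url}</href></Icon>
        <LatLonBox>
          <north>{north}</north><south>{south}</south>
          <east>{east}</east><west>{west}</west>
        </LatLonBox>
      </GroundOverlay>
    </kml>"""

    # hypothetical product URL and a bounding box roughly around Catalonia
    kml = radar_overlay_kml("http://example.org/radar_latest.png",
                            43.0, 40.0, 3.5, 0.0, "2009-09-01T12:00:00Z")
    root = ET.fromstring(kml)  # sanity check: the document is well-formed XML
    ```

    Writing one such file per product and timestamp, and refreshing it from a scheduled job, is enough for GE to animate the sequence.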

  3. An integrated audio-visual impact tool for wind turbine installations

    International Nuclear Information System (INIS)

    Lymberopoulos, N.; Belessis, M.; Wood, M.; Voutsinas, S.

    1996-01-01

    An integrated software tool was developed for the design of wind parks that takes into account their visual and audio impact. The application is built on a powerful hardware platform and is fully operated through a graphical user interface. The topography, the wind turbines and the daylight conditions are realised digitally. The wind park can be animated in real time and the user can take virtual walks in it while the set-up of the park can be altered interactively. In parallel, the wind speed levels on the terrain, the emitted noise intensity, the annual energy output and the cash flow can be estimated at any stage of the session and prompt the user for rearrangements. The tool has been used to visually simulate existing wind parks in St. Breok, UK and Andros Island, Greece. The results lead to the conclusion that such a tool can assist in the public acceptance and licensing procedures of wind parks. (author)

  4. Neural Circuit to Integrate Opposing Motions in the Visual Field.

    Science.gov (United States)

    Mauss, Alex S; Pankova, Katarina; Arenz, Alexander; Nern, Aljoscha; Rubin, Gerald M; Borst, Alexander

    2015-07-16

    When navigating in their environment, animals use visual motion cues as feedback signals that are elicited by their own motion. Such signals are provided by wide-field neurons sampling motion directions at multiple image points as the animal maneuvers. Each one of these neurons responds selectively to a specific optic flow-field representing the spatial distribution of motion vectors on the retina. Here, we describe the discovery of a group of local, inhibitory interneurons in the fruit fly Drosophila key for filtering these cues. Using anatomy, molecular characterization, activity manipulation, and physiological recordings, we demonstrate that these interneurons convey direction-selective inhibition to wide-field neurons with opposite preferred direction and provide evidence for how their connectivity enables the computation required for integrating opposing motions. Our results indicate that, rather than sharpening directional selectivity per se, these circuit elements reduce noise by eliminating non-specific responses to complex visual information. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Visual Criterion for Understanding the Notion of Convergence if Integrals in One Parameter

    Science.gov (United States)

    Alves, Francisco Regis Vieira

    2014-01-01

    Admittedly, the notion of generalized integrals in one parameter plays a fundamental role. In virtue of that, in this paper we discuss and characterize an approach to promote the visualization of this mathematical concept. We also indicate the possibilities of graphical interpretation of formal properties related to the notion of…
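
    A standard example of the notion in question — convergence of a generalized (improper) integral depending on a parameter $p$ — which lends itself well to the kind of graphical exploration the abstract describes:

    ```latex
    \int_1^{\infty} \frac{dx}{x^{p}}
      = \lim_{b \to \infty} \frac{b^{\,1-p} - 1}{1-p}
      = \frac{1}{p-1}
      \quad \text{converges iff } p > 1,
    \qquad
    \int_0^{\infty} e^{-px}\, dx = \frac{1}{p}
      \quad \text{converges iff } p > 0 .
    ```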

  6. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    Science.gov (United States)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
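
    A minimal, illustrative LIC sketch in pure Python (not the authors' implementation; a production LIC uses sub-pixel streamline integration and a filter kernel rather than nearest-pixel sampling and a box average):

    ```python
    import math, random

    def lic(vx, vy, noise, L=8):
        """Minimal Line Integral Convolution: for every pixel, step along
        the normalized vector field in both directions and average the
        noise texture sampled along that streamline."""
        h, w = len(noise), len(noise[0])
        out = [[0.0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                total, count = 0.0, 0
                for sign in (1.0, -1.0):
                    x, y = float(j), float(i)  # x = column, y = row
                    for _ in range(L):
                        r = min(max(int(round(y)), 0), h - 1)
                        c = min(max(int(round(x)), 0), w - 1)
                        total += noise[r][c]
                        count += 1
                        u, v = vx[r][c], vy[r][c]
                        norm = math.hypot(u, v) or 1.0
                        x += sign * u / norm
                        y += sign * v / norm
                out[i][j] = total / count
        return out

    def mean_hdiff(img):
        """Mean absolute difference between horizontally adjacent pixels."""
        return sum(abs(row[j + 1] - row[j]) for row in img
                   for j in range(len(row) - 1)) / (len(img) * (len(img[0]) - 1))

    # A uniform horizontal field should smear the noise into streaks along
    # the rows, so horizontal variation drops sharply after convolution.
    rng = random.Random(0)
    n = 16
    noise = [[rng.random() for _ in range(n)] for _ in range(n)]
    vx = [[1.0] * n for _ in range(n)]
    vy = [[0.0] * n for _ in range(n)]
    img = lic(vx, vy, noise)
    ```

    The dye advection described in the abstract amounts to carrying an additional color value along the same streamlines instead of averaging noise.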

  7. FACILITATING INTEGRATED SPATIO-TEMPORAL VISUALIZATION AND ANALYSIS OF HETEROGENEOUS ARCHAEOLOGICAL AND PALAEOENVIRONMENTAL RESEARCH DATA

    Directory of Open Access Journals (Sweden)

    C. Willmes

    2012-07-01

    Full Text Available In the context of the Collaborative Research Centre 806 "Our way to Europe" (CRC806), a research database is developed for integrating data from the disciplines of archaeology, the geosciences and the cultural sciences to facilitate integrated access to heterogeneous data sources. A practice-oriented data integration concept and its implementation is presented in this contribution. The data integration approach is based on the application of Semantic Web Technology and is applied to the domains of archaeological and palaeoenvironmental data. The aim is to provide integrated spatio-temporal access to an existing wealth of data to facilitate research on the integrated data basis. For the web portal of the CRC806 research database (CRC806-Database), a number of interfaces and applications have been evaluated, developed and implemented for exposing the data to interactive analysis and visualizations.

  8. Visual Education

    DEFF Research Database (Denmark)

    Buhl, Mie; Flensborg, Ingelise

    2010-01-01

    The intrinsic breadth of various types of images creates new possibilities and challenges for visual education. The digital media have moved the boundaries between images and other kinds of modalities (e.g. writing, speech and sound) and have augmented the possibilities for integrating… to emerge in the interlocutory space of a global visual repertoire and diverse local interpretations. The two perspectives represent challenges for future visual education which require visual competences, not only within the arts but also within the subjects of natural sciences, social sciences, languages…

  9. Integration of multidisciplinary technologies for real time target visualization and verification for radiotherapy.

    Science.gov (United States)

    Chang, Wen-Chung; Chen, Chin-Sheng; Tai, Hung-Chi; Liu, Chia-Yuan; Chen, Yu-Jen

    2014-01-01

    The current practice of radiotherapy examines target coverage solely from the digitally reconstructed beam's eye view (BEV), in a way that is only indirectly accessible and not available in real time. We aimed to visualize treatment targets in real time from each BEV. Image data of phantoms or patients from ultrasound (US) and computed tomography (CT) scans were captured to perform image registration. We integrated US, CT, US/CT image registration, robotic manipulation of the US probe, a radiation treatment planning system, and a linear accelerator to constitute an innovative target visualization system. The algorithm segmented the target organ in CT images, transformed and reconstructed US images to match each orientation, and generated image registrations in real time with acceptable accuracy. This image transformation allowed physicians to visualize the CT image-reconstructed target via a US probe outside the BEV, non-coplanar to the beam's plane. It also allowed physicians to remotely control the US probe, mounted on a robotic arm, to dynamically trace and monitor in real time the coverage of the target within the BEV during a simulated beam-on situation. This target visualization system may provide a direct, remotely accessible, real-time way to visualize, verify, and ensure tumor targeting during radiotherapy.

  10. ICT integration in mathematics initial teacher training and its impact on visualization: the case of GeoGebra

    Science.gov (United States)

    Dockendorff, Monika; Solar, Horacio

    2018-01-01

    This case study investigates the impact of the integration of information and communications technology (ICT) in mathematics visualization skills and initial teacher education programmes. It reports on the influence GeoGebra dynamic software use has on promoting mathematical learning at secondary school and on its impact on teachers' conceptions about teaching and learning mathematics. This paper describes how GeoGebra-based dynamic applets - designed and used in an exploratory manner - promote mathematical processes such as conjectures. It also refers to the changes prospective teachers experience regarding the relevance visual dynamic representations acquire in teaching mathematics. This study observes a shift in school routines when incorporating technology into the mathematics classroom. Visualization appears as a basic competence associated to key mathematical processes. Implications of an early integration of ICT in mathematics initial teacher training and its impact on developing technological pedagogical content knowledge (TPCK) are drawn.

  11. A Visualization Tool for Integrating Research Results at an Underground Mine

    Science.gov (United States)

    Boltz, S.; Macdonald, B. D.; Orr, T.; Johnson, W.; Benton, D. J.

    2016-12-01

    Researchers with the National Institute for Occupational Safety and Health are conducting research at a deep, underground metal mine in Idaho to develop improvements in ground control technologies that reduce the effects of dynamic loading on mine workings, thereby decreasing the risk to miners. This research is multifaceted and includes: photogrammetry, microseismic monitoring, geotechnical instrumentation, and numerical modeling. When managing research involving such a wide range of data, understanding how the data relate to each other and to the mining activity quickly becomes a daunting task. In an effort to combine this diverse research data into a single, easy-to-use system, a three-dimensional visualization tool was developed. The tool was created using the Unity3d video gaming engine and includes the mine development entries, production stopes, important geologic structures, and user-input research data. The tool provides the user with a first-person, interactive experience where they are able to walk through the mine as well as navigate the rock mass surrounding the mine to view and interpret the imported data in the context of the mine and as a function of time. The tool was developed using data from a single mine; however, it is intended to be a generic tool that can be easily extended to other mines. For example, a similar visualization tool is being developed for an underground coal mine in Colorado. The ultimate goal is for NIOSH researchers and mine personnel to be able to use the visualization tool to identify trends that may not otherwise be apparent when viewing the data separately. This presentation highlights the features and capabilities of the mine visualization tool and explains how it may be used to more effectively interpret data and reduce the risk of ground fall hazards to underground miners.

  12. Constituents of Music and Visual-Art Related Pleasure – A Critical Integrative Literature Review

    Directory of Open Access Journals (Sweden)

    Marianne Tiihonen

    2017-07-01

    Full Text Available The present literature review investigated how pleasure induced by music and visual-art has been conceptually understood in empirical research over the past 20 years. After an initial selection of abstracts from seven databases (keywords: pleasure, reward, enjoyment, and hedonic), twenty music and eleven visual-art papers were systematically compared. The following questions were addressed: (1) What is the role of the keyword in the research question? (2) Is pleasure considered a result of variation in the perceiver’s internal or external attributes? (3) What are the most commonly employed methods and main variables in empirical settings? Based on these questions, our critical integrative analysis aimed to identify which themes and processes emerged as key features for conceptualizing art-induced pleasure. The results demonstrated great variance in how pleasure has been approached: In the music studies pleasure was often a clear object of investigation, whereas in the visual-art studies the term was often embedded into the context of an aesthetic experience, or used otherwise in a descriptive, indirect sense. Music studies often targeted different emotions, their intensity or anhedonia. Biographical and background variables and personality traits of the perceiver were often measured. Next to behavioral methods, a common method was brain imaging which often targeted the reward circuitry of the brain in response to music. Visual-art pleasure was also frequently addressed using brain imaging methods, but the research focused on sensory cortices rather than the reward circuit alone. Compared with music research, visual-art research investigated more frequently pleasure in relation to conscious, cognitive processing, where the variations of stimulus features and the changing of viewing modes were regarded as explanatory factors of the derived experience. Despite valence being frequently applied in both domains, we conclude, that in empirical music

  13. Constituents of Music and Visual-Art Related Pleasure – A Critical Integrative Literature Review

    Science.gov (United States)

    Tiihonen, Marianne; Brattico, Elvira; Maksimainen, Johanna; Wikgren, Jan; Saarikallio, Suvi

    2017-01-01

The present literature review investigated how pleasure induced by music and visual-art has been conceptually understood in empirical research over the past 20 years. After an initial selection of abstracts from seven databases (keywords: pleasure, reward, enjoyment, and hedonic), twenty music and eleven visual-art papers were systematically compared. The following questions were addressed: (1) What is the role of the keyword in the research question? (2) Is pleasure considered a result of variation in the perceiver’s internal or external attributes? (3) What are the most commonly employed methods and main variables in empirical settings? Based on these questions, our critical integrative analysis aimed to identify which themes and processes emerged as key features for conceptualizing art-induced pleasure. The results demonstrated great variance in how pleasure has been approached: In the music studies pleasure was often a clear object of investigation, whereas in the visual-art studies the term was often embedded into the context of an aesthetic experience, or used otherwise in a descriptive, indirect sense. Music studies often targeted different emotions, their intensity or anhedonia. Biographical and background variables and personality traits of the perceiver were often measured. Next to behavioral methods, a common method was brain imaging which often targeted the reward circuitry of the brain in response to music. Visual-art pleasure was also frequently addressed using brain imaging methods, but the research focused on sensory cortices rather than the reward circuit alone. Compared with music research, visual-art research investigated more frequently pleasure in relation to conscious, cognitive processing, where the variations of stimulus features and the changing of viewing modes were regarded as explanatory factors of the derived experience. Despite valence being frequently applied in both domains, we conclude that in empirical music research pleasure

  14. Understanding visual consciousness in autism spectrum disorders.

    Science.gov (United States)

    Yatziv, Tal; Jacobson, Hilla

    2015-01-01

    The paper focuses on the question of what the (visual) perceptual differences are between individuals with autism spectrum disorders (ASD) and typically developing (TD) individuals. We argue against the view that autistic subjects have a deficiency in the most basic form of perceptual consciousness-namely, phenomenal consciousness. Instead, we maintain, the perceptual atypicality of individuals with autism is of a more conceptual and cognitive sort-their perceptual experiences share crucial aspects with TD individuals. Our starting point is Ben Shalom's (2005, 2009) three-level processing framework for explaining atypicality in several domains of processing among autistics, which we compare with two other tripartite models of perception-Jackendoff's (1987) and Prinz's (2000, 2005a, 2007) Intermediate Level Hypothesis and Lamme's (2004, 2006, 2010) neural account of consciousness. According to these models, whereas the second level of processing is concerned with viewer-centered visual representations of basic visual properties and incorporates some early forms of integration, the third level is more cognitive and conceptual. We argue that the data suggest that the atypicality in autism is restricted mainly to the third level. More specifically, second-level integration, which is the mark of phenomenal consciousness, is typical, yet third-level integration of perceptual objects and concepts is atypical. Thus, the basic experiences of individuals with autism are likely to be similar to typical subjects' experiences; the main difference lies in the sort of cognitive access the subjects have to their experiences. We conclude by discussing implications of the suggested analysis of experience in autism for conceptions of phenomenal consciousness.

  15. Integrating the Visual Arts Back into the Classroom with Mobile Applications: Teaching beyond the "Click and View" Approach

    Science.gov (United States)

    Katz-Buonincontro, Jen; Foster, Aroutis

    2013-01-01

Teachers can use mobile applications to integrate the visual arts back into the classroom, but how? This article generates recommendations for selecting and using well-designed mobile applications in the visual arts beyond a "click and view" approach. Using quantitative content analysis, the results show the extent to which a sample of…

  16. VarB Plus: An Integrated Tool for Visualization of Genome Variation Datasets

    KAUST Repository

    Hidayah, Lailatul

    2012-07-01

Research on genomic sequences has been improving significantly as more advanced technology for sequencing has been developed. This opens enormous opportunities for sequence analysis. Various analytical tools have been built for purposes such as sequence assembly, read alignments, genome browsing, comparative genomics, and visualization. From the visualization perspective, there is an increasing trend towards use of large-scale computation. However, more than power is required to produce an informative image. This is a challenge that we address by providing several ways of representing biological data in order to advance the inference endeavors of biologists. This thesis focuses on visualization of variations found in genomic sequences. We develop several visualization functions and embed them in an existing variation visualization tool as extensions. The tool we improved is named VarB, hence the nomenclature for our enhancement is VarB Plus. To the best of our knowledge, besides VarB, there is no tool that provides the capability of dynamic visualization of genome variation datasets as well as statistical analysis. Dynamic visualization allows users to toggle different parameters on and off and see the results on the fly. The statistical analysis includes Fixation Index, Relative Variant Density, and Tajima’s D. Hence we focused our efforts on this tool. The scope of our work includes plots of per-base genome coverage, Principal Coordinate Analysis (PCoA), integration with a read alignment viewer named LookSeq, and visualization of geo-biological data. In addition to description of embedded functionalities, significance, and limitations, future improvements are discussed. The result is four extensions embedded successfully in the original tool, which is built on the Qt framework in C++. Hence it is portable to numerous platforms. Our extensions have shown acceptable execution time in beta testing with various high-volume published datasets, as well as positive
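Of the statistics named above, the Fixation Index (F_ST) is the most compact to state: at a biallelic site it compares the expected heterozygosity of the total population against the mean heterozygosity within subpopulations. The sketch below is an illustrative transcription of that textbook formula, not code from VarB Plus (which is written in C++ on Qt); the two-subpopulation, equal-size case is an assumption made here for brevity.

```python
def fixation_index(p1, p2):
    """Wright's F_ST at a biallelic site, from the allele frequencies of
    two equally sized subpopulations: F_ST = (H_T - H_S) / H_T."""
    p_bar = (p1 + p2) / 2.0
    h_t = 2.0 * p_bar * (1.0 - p_bar)                    # expected total heterozygosity
    h_s = (2.0*p1*(1.0-p1) + 2.0*p2*(1.0-p2)) / 2.0      # mean subpopulation heterozygosity
    return 0.0 if h_t == 0.0 else (h_t - h_s) / h_t

print(fixation_index(0.5, 0.5))  # 0.0 -- identical frequencies, no differentiation
print(fixation_index(1.0, 0.0))  # 1.0 -- fixed alternate alleles, complete differentiation
```

Values near 0 indicate panmixia; values near 1 indicate strong population structure, which is the signal such a tool would render per locus.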

  17. Principles of visual attention

    DEFF Research Database (Denmark)

    Bundesen, Claus; Habekost, Thomas

The nature of attention is one of the oldest and most central problems in psychology. A huge amount of research has been produced on this subject in the last half century, especially on attention in the visual modality, but a general explanation has remained elusive. Many still view attention research as a field that is fundamentally fragmented. This book takes a different perspective and presents a unified theory of visual attention: the TVA model. The TVA model explains the many aspects of visual attention by just two mechanisms for selection of information: filtering and pigeonholing. These mechanisms are described in a set of simple equations, which allow TVA to mathematically model a large number of classical results in the attention literature. The theory explains psychological and neuroscientific findings by the same equations; TVA is a complete theory of visual attention, linking mind and brain.
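For orientation, the "set of simple equations" at the heart of TVA is usually stated as a rate equation: the rate of processing the categorization "object x belongs to category i" is v(x,i) = η(x,i) · β_i · w_x / Σ_z w_z, where the bias β_i implements pigeonholing and the attentional weights w_x = Σ_j η(x,j) · π_j implement filtering via pertinence values π_j. The sketch below is an illustrative transcription of these equations as commonly stated in the literature, not code from the book; the example values (η, β, π) are invented for the demonstration.

```python
def attentional_weight(eta_x, pi):
    """Filtering: w_x = sum_j eta(x, j) * pi_j (pertinence-weighted sensory evidence)."""
    return sum(eta_x[j] * pi_j for j, pi_j in pi.items())

def processing_rate(eta, beta, pi, x, i):
    """TVA rate equation: v(x, i) = eta(x, i) * beta_i * w_x / sum_z w_z."""
    weights = {z: attentional_weight(eta[z], pi) for z in eta}
    total = sum(weights.values())
    share = weights[x] / total if total else 0.0  # relative attentional weight of x
    return eta[x][i] * beta[i] * share

# Two objects: a red target T and a green distractor D.
eta = {'T': {'red': 1.0, 'green': 0.0},
       'D': {'red': 0.0, 'green': 1.0}}
pi = {'red': 1.0, 'green': 0.0}    # filtering: only 'red' objects are pertinent
beta = {'red': 0.9, 'green': 0.9}  # pigeonholing: per-category response bias

print(processing_rate(eta, beta, pi, 'T', 'red'))    # 0.9 -- target monopolizes the race
print(processing_rate(eta, beta, pi, 'D', 'green'))  # 0.0 -- distractor gets no weight
```

Raising π for a category boosts the weights, and hence the processing rates, of all objects carrying that category, which is how the model captures classical filtering results.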

  18. Visual Constructive and Visual-Motor Skills in Deaf Native Signers

    Science.gov (United States)

    Hauser, Peter C.; Cohen, Julie; Dye, Matthew W. G.; Bavelier, Daphne

    2007-01-01

    Visual constructive and visual-motor skills in the deaf population were investigated by comparing performance of deaf native signers (n = 20) to that of hearing nonsigners (n = 20) on the Beery-Buktenica Developmental Test of Visual-Motor Integration, Rey-Osterrieth Complex Figure Test, Wechsler Memory Scale Visual Reproduction subtest, and…

  19. VisComposer: A Visual Programmable Composition Environment for Information Visualization

    Directory of Open Access Journals (Sweden)

    Honghui Mei

    2018-03-01

Full Text Available As the amount of data being collected has increased, the need for tools that can enable the visual exploration of data has also grown. This has led to the development of a variety of widely used programming frameworks for information visualization. Unfortunately, such frameworks demand comprehensive visualization and coding skills and require users to develop visualization from scratch. An alternative is to create interactive visualization design environments that require little to no programming. However, these tools only support a small portion of visual forms. We present a programmable integrated development environment (IDE), VisComposer, that supports the development of expressive visualization using a drag-and-drop visual interface. VisComposer exposes the programmability by customizing desired components within a modularized visualization composition pipeline, effectively balancing the capability gap between expert coders and visualization artists. The implemented system empowers users to compose comprehensive visualizations with real-time preview and optimization features, and supports prototyping, sharing and reuse of the effects by means of an intuitive visual composer. Visual programming and textual programming integrated in our system allow users to compose more complex visual effects while retaining the simplicity of use. We demonstrate the performance of VisComposer with a variety of examples and an informal user evaluation. Keywords: Information Visualization, Visualization authoring, Interactive development environment

  20. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking.

    Science.gov (United States)

    Lin, Zhicheng; He, Sheng

    2012-10-25

Object identities ("what") and their spatial locations ("where") are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) effect and space-based effect, and (b) manipulated the target's relative location within its frame to probe frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously being updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects ("files") within the reference frame ("cabinet") are orderly coded relative to the frame.

  1. Differential neural network configuration during human path integration

    Science.gov (United States)

    Arnold, Aiden E. G. F; Burles, Ford; Bray, Signe; Levy, Richard M.; Iaria, Giuseppe

    2014-01-01

    Path integration is a fundamental skill for navigation in both humans and animals. Despite recent advances in unraveling the neural basis of path integration in animal models, relatively little is known about how path integration operates at a neural level in humans. Previous attempts to characterize the neural mechanisms used by humans to visually path integrate have suggested a central role of the hippocampus in allowing accurate performance, broadly resembling results from animal data. However, in recent years both the central role of the hippocampus and the perspective that animals and humans share similar neural mechanisms for path integration has come into question. The present study uses a data driven analysis to investigate the neural systems engaged during visual path integration in humans, allowing for an unbiased estimate of neural activity across the entire brain. Our results suggest that humans employ common task control, attention and spatial working memory systems across a frontoparietal network during path integration. However, individuals differed in how these systems are configured into functional networks. High performing individuals were found to more broadly express spatial working memory systems in prefrontal cortex, while low performing individuals engaged an allocentric memory system based primarily in the medial occipito-temporal region. These findings suggest that visual path integration in humans over short distances can operate through a spatial working memory system engaging primarily the prefrontal cortex and that the differential configuration of memory systems recruited by task control networks may help explain individual biases in spatial learning strategies. PMID:24808849

  2. Visual communication in the psychoanalytic situation.

    Science.gov (United States)

    Kanzer, M

    1980-01-01

The relationship between verbal and visual aspects of the analytic proceedings shows them blended integrally in the experiences of both patient and analyst and in contributing to the insights derived during the treatment. Areas in which the admixture of the verbal and visual occur are delineated. Awareness of the visual aspects gives substance to the operations of empathy, intuition, acting out, working through, etc. Some typical features of visual "language" are noted and related to the analytic situation. As such they can be translated with the use of logic and consciousness on the analyst's part, not mere random eruptions of intuition. The original significance of dreams as a royal road to the unconscious is confirmed - but we also find in them insights to be derived with higher mental processes. Finally, dyadic aspects of the formation and aims of dreams during analysis are pointed out, with important implications for the analyst's own self-supervision of his techniques and "real personality" and their effects upon the patient. How remarkable that Dora's dreams, all too belatedly teaching Freud about their transference implications, still have so much more to communicate that derives from his capacity to record faithfully observations he was not yet ready to explain.

  3. Toward an Integrative Theoretical Framework for Explaining Beliefs about Wife Beating: A Study among Students of Nursing from Turkey

    Science.gov (United States)

    Haj-Yahia, Muhammad M.; Uysal, Aynur

    2011-01-01

    An integrative theoretical framework was tested as the basis for explaining beliefs about wife beating among Turkish nursing students. Based on a survey design, 406 nursing students (404 females) in all 4 years of undergraduate studies completed a self-administered questionnaire. Questionnaires were distributed and collected from the participants…

  4. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    Science.gov (United States)

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  5. Lack of multisensory integration in hemianopia: no influence of visual stimuli on aurally guided saccades to the blind hemifield.

    Directory of Open Access Journals (Sweden)

    Antonia F Ten Brink

Full Text Available In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia.

  6. The visual-landscape analysis during the integration of high-rise buildings within the historic urban environment

    OpenAIRE

    Akristiniy Vera A.; Dikova Elena A.

    2018-01-01

The article is devoted to one of the types of urban planning studies - the visual-landscape analysis during the integration of high-rise buildings within the historic urban environment, for the purposes of providing pre-design and design studies in terms of preserving the historical urban environment and the implementation of the reconstructional resource of the area. In the article, the stages and methods of conducting the visual-landscape analysis are formed and systematized, taking into account t...

  7. Food recognition and recipe analysis: integrating visual content, context and external knowledge

    OpenAIRE

    Herranz, Luis; Min, Weiqing; Jiang, Shuqiang

    2018-01-01

    The central role of food in our individual and social life, combined with recent technological advances, has motivated a growing interest in applications that help to better monitor dietary habits as well as the exploration and retrieval of food-related information. We review how visual content, context and external knowledge can be integrated effectively into food-oriented applications, with special focus on recipe analysis and retrieval, food recommendation, and the restaurant context as em...

  8. Collinear integration affects visual search at V1.

    Science.gov (United States)

    Chow, Hiu Mei; Jingling, Li; Tseng, Chia-huei

    2013-08-29

    Perceptual grouping plays an indispensable role in figure-ground segregation and attention distribution. For example, a column pops out if it contains element bars orthogonal to uniformly oriented element bars. Jingling and Tseng (2013) have reported that contextual grouping in a column matters to visual search behavior: When a column is grouped into a collinear (snakelike) structure, a target positioned on it became harder to detect than on other noncollinear (ladderlike) columns. How and where perceptual grouping interferes with selective attention is still largely unknown. This article contributes to this little-studied area by asking whether collinear contour integration interacts with visual search before or after binocular fusion. We first identified that the previously mentioned search impairment occurs with a distractor of five or nine elements but not one element in a 9 × 9 search display. To pinpoint the site of this effect, we presented the search display with a short collinear bar (one element) to one eye and the extending collinear bars to the other eye, such that when properly fused, the combined binocular collinear length (nine elements) exceeded the critical length. No collinear search impairment was observed, implying that collinear information before binocular fusion shaped participants' search behavior, although contour extension from the other eye after binocular fusion enhanced the effect of collinearity on attention. Our results suggest that attention interacts with perceptual grouping as early as V1.

  9. Semantics and the multisensory brain: how meaning modulates processes of audio-visual integration.

    Science.gov (United States)

    Doehrmann, Oliver; Naumer, Marcus J

    2008-11-25

    By using meaningful stimuli, multisensory research has recently started to investigate the impact of stimulus content on crossmodal integration. Variations in this respect have often been termed as "semantic". In this paper we will review work related to the question for which tasks the influence of semantic factors has been found and which cortical networks are most likely to mediate these effects. More specifically, the focus of this paper will be on processing of object stimuli presented in the auditory and visual sensory modalities. Furthermore, we will investigate which cortical regions are particularly responsive to experimental variations of content by comparing semantically matching ("congruent") and mismatching ("incongruent") experimental conditions. In this context, recent neuroimaging studies point toward a possible functional differentiation of temporal and frontal cortical regions, with the former being more responsive to semantically congruent and the latter to semantically incongruent audio-visual (AV) stimulation. To account for these differential effects, we will suggest in the final section of this paper a possible synthesis of these data on semantic modulation of AV integration with findings from neuroimaging studies and theoretical accounts of semantic memory.

  10. Immediate integration of prosodic information from speech and visual information from pictures in the absence of focused attention: a mismatch negativity study.

    Science.gov (United States)

    Li, X; Yang, Y; Ren, G

    2009-06-16

Language is often perceived together with visual information. Recent experimental evidence indicated that, during spoken language comprehension, the brain can immediately integrate visual information with semantic or syntactic information from speech. Here we used the mismatch negativity to further investigate whether prosodic information from speech could be immediately integrated into a visual scene context or not, and especially the time course and automaticity of this integration process. Sixteen Chinese native speakers participated in the study. The materials included Chinese spoken sentences and picture pairs. In the audiovisual situation, relative to the concomitant pictures, the spoken sentence was appropriately accented in the standard stimuli, but inappropriately accented in the two kinds of deviant stimuli. In the purely auditory situation, the speech sentences were presented without pictures. It was found that the deviants evoked mismatch responses in both audiovisual and purely auditory situations; the mismatch negativity in the purely auditory situation peaked at the same time as, but was weaker than that evoked by the same deviant speech sounds in the audiovisual situation. This pattern of results suggested immediate integration of prosodic information from speech and visual information from pictures in the absence of focused attention.

  11. Kinesthetic information disambiguates visual motion signals.

    Science.gov (United States)

    Hu, Bo; Knill, David C

    2010-05-25

Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction.

  12. Visual integration dysfunction in schizophrenia arises by the first psychotic episode and worsens with illness duration

    OpenAIRE

    Keane, Brian P.; Paterno, Danielle; Kastner, Sabine; Silverstein, Steven M.

    2016-01-01

    Visual integration dysfunction characterizes schizophrenia, but prior studies have not yet established whether the problem arises by the first psychotic episode or worsens with illness duration. To investigate the issue, we compared chronic schizophrenia patients (SZs), first episode psychosis patients (FEs), and well-matched healthy controls on a brief but sensitive psychophysical task in which subjects attempted to locate an integrated shape embedded in noise. Task difficulty depended on th...

  13. Constructing visual representations

    DEFF Research Database (Denmark)

    Huron, Samuel; Jansen, Yvonne; Carpendale, Sheelagh

    2014-01-01

The accessibility of infovis authoring tools to a wide audience has been identified as a major research challenge. A key task in the authoring process is the development of visual mappings. While the infovis community has long been deeply interested in finding effective visual mappings, comparatively little attention has been placed on how people construct visual mappings. In this paper, we present the results of a study designed to shed light on how people transform data into visual representations. We asked people to create, update and explain their own information visualizations using only tangible building blocks. We learned that all participants, most of whom had little experience in visualization authoring, were readily able to create and talk about their own visualizations. Based on our observations, we discuss participants’ actions during the development of their visual representations...

  14. An integrated domain specific language for post-processing and visualizing electrophysiological signals in Java.

    Science.gov (United States)

    Strasser, T; Peters, T; Jagle, H; Zrenner, E; Wilke, R

    2010-01-01

Electrophysiology of vision - especially the electroretinogram (ERG) - is used as a non-invasive way for functional testing of the visual system. The ERG is a combined electrical response generated by neural and non-neuronal cells in the retina in response to light stimulation. This response can be recorded and used for diagnosis of numerous disorders. For both clinical practice and clinical trials it is important to process those signals in an accurate and fast way and to provide the results as structured, consistent reports. Therefore, we developed a freely available and open-source framework in Java (http://www.eye.uni-tuebingen.de/project/idsI4sigproc). The framework is focused on easy integration with existing applications. By leveraging well-established software patterns like pipes-and-filters and fluent interfaces as well as by designing the application programming interfaces (APIs) as an integrated domain specific language (DSL) the overall framework provides a smooth learning curve. Additionally, it already contains several processing methods and visualization features and can be extended easily by implementing the provided interfaces. In this way, not only can new processing methods be added but the framework can also be adopted for other areas of signal processing. This article describes in detail the structure and implementation of the framework and demonstrates its application through the software package used in clinical practice and clinical trials at the University Eye Hospital Tuebingen, one of the largest departments in the field of visual electrophysiology in Europe.
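The pipes-and-filters plus fluent-interface combination described above has a simple shape: each processing stage consumes the output of the previous one, and each builder method returns the pipeline object itself so that stages chain into a readable recipe. The sketch below illustrates the pattern only; it is not the actual API of the Tuebingen framework (which is written in Java), and the method names are invented for the example.

```python
class SignalPipeline:
    """Toy pipes-and-filters chain with a fluent interface.
    Illustrative of the pattern; not the real ERG-framework API."""

    def __init__(self, samples):
        self._samples = list(samples)

    def keep(self, predicate):
        """Filter stage: drop samples that fail the predicate."""
        self._samples = [s for s in self._samples if predicate(s)]
        return self  # returning self is what makes the interface fluent

    def transform(self, fn):
        """Map stage: apply fn to every remaining sample."""
        self._samples = [fn(s) for s in self._samples]
        return self

    def result(self):
        return self._samples

# Chained calls read like a signal-processing recipe:
out = (SignalPipeline([1, -2, 3])
       .keep(lambda s: s > 0)        # discard negative artifacts
       .transform(lambda s: s * 10)  # rescale
       .result())
print(out)  # [10, 30]
```

Because every stage returns the pipeline, new filters slot in without touching callers, which is the extensibility property the abstract attributes to the framework's provided interfaces.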

  15. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  16. Research on fine management and visualization of ancient architectures based on integration of 2D and 3D GIS technology

    International Nuclear Information System (INIS)

    Jun, Yan; Shaohua, Wang; Jiayuan, Li; Qingwu, Hu

    2014-01-01

    Aimed at ancient architectures, which are characterized by huge data volumes, fine granularity, and high precision, a 3D fine management and visualization method based on the integration of 2D and 3D GIS is proposed. Firstly, after analysing the various data types and characteristics of digital ancient architectures, the main problems and key technologies in 2D and 3D data management are discussed. Secondly, a data storage and indexing model for digital ancient architecture based on 2D and 3D GIS integration was designed, and the integrative storage and management of 2D and 3D data were achieved. Then, through a data retrieval method based on spatio-temporal indexing and a hierarchical object model of ancient architecture, 2D and 3D interaction with fine-grained 3D models of ancient architectures was achieved. Finally, taking the fine-grained database of Liangyi Temple at Wudang Mountain as an example, a 2D and 3D integrative fine management and visualization prototype of the digital ancient buildings of Liangyi Temple was built. The integrated management and visual analysis of the 10 GB fine-grained model of the ancient architecture was realized, providing a new implementation method for the storage, browsing, reconstruction, and architectural art research of ancient architecture models.

  17. The visual-landscape analysis during the integration of high-rise buildings within the historic urban environment

    Science.gov (United States)

    Akristiniy, Vera A.; Dikova, Elena A.

    2018-03-01

    The article is devoted to one of the types of urban planning studies - the visual-landscape analysis during the integration of high-rise buildings within the historic urban environment - for the purposes of providing pre-design and design studies in terms of preserving the historical urban environment and realizing the reconstructional resource of the area. The article forms and systematizes the stages and methods of conducting the visual-landscape analysis, taking into account the influence of high-rise buildings on objects of cultural heritage and valuable historical buildings of the city. Practical application of the visual-landscape analysis provides an opportunity to assess the influence of the hypothetical location of high-rise buildings on the perception of the historically developed environment and to determine optimal building parameters. The contents of the main stages of the visual-landscape analysis and their key aspects are revealed, concerning the construction of predicted zones of visibility of the significant, historically valuable urban development objects and of the hypothetically planned high-rise buildings. The obtained data are oriented to the successive development of the planning and typological structure of the city territory and the preservation of the compositional influence of valuable fragments of the historical environment in the structure of the urban landscape. On their basis, an information database is formed to determine the permissible urban development parameters of high-rise buildings for the preservation of the compositional integrity of the urban area.

  18. The visual-landscape analysis during the integration of high-rise buildings within the historic urban environment

    Directory of Open Access Journals (Sweden)

    Akristiniy Vera A.

    2018-01-01

    Full Text Available The article is devoted to one of the types of urban planning studies - the visual-landscape analysis during the integration of high-rise buildings within the historic urban environment - for the purposes of providing pre-design and design studies in terms of preserving the historical urban environment and realizing the reconstructional resource of the area. The article forms and systematizes the stages and methods of conducting the visual-landscape analysis, taking into account the influence of high-rise buildings on objects of cultural heritage and valuable historical buildings of the city. Practical application of the visual-landscape analysis provides an opportunity to assess the influence of the hypothetical location of high-rise buildings on the perception of the historically developed environment and to determine optimal building parameters. The contents of the main stages of the visual-landscape analysis and their key aspects are revealed, concerning the construction of predicted zones of visibility of the significant, historically valuable urban development objects and of the hypothetically planned high-rise buildings. The obtained data are oriented to the successive development of the planning and typological structure of the city territory and the preservation of the compositional influence of valuable fragments of the historical environment in the structure of the urban landscape. On their basis, an information database is formed to determine the permissible urban development parameters of high-rise buildings for the preservation of the compositional integrity of the urban area.

  19. Effects of Audio-Visual Integration on the Detection of Masked Speech and Non-Speech Sounds

    Science.gov (United States)

    Eramudugolla, Ranmalee; Henderson, Rachel; Mattingley, Jason B.

    2011-01-01

    Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that…

  20. Visual and Haptic Mental Rotation

    Directory of Open Access Journals (Sweden)

    Satoshi Shioiri

    2011-10-01

    Full Text Available It is well known that visual information can be retained in several types of memory systems. Haptic information can also be retained in memory, because we can repeat a hand movement. There may be a common memory system for vision and action. On the one hand, it may be convenient to have a common system for acting on visual information. On the other hand, different modalities may have their own memories and use the retained information without transforming it out of its modality-specific form. We compared the memory properties of visual and haptic information. There is a phenomenon known as mental rotation, which is possibly unique to visual representation. Mental rotation is a phenomenon where reaction time increases with the angle of the visual target (e.g., a letter to identify). The phenomenon is explained by the time needed to rotate the representation of the target in the visual system. In this study, we compared the effect of stimulus angle on visual and haptic shape identification (two-line shapes were used). We found a typical mental rotation effect for the visual stimulus. However, no such effect was found for the haptic stimulus. This difference cannot be explained by modality differences in response, because a similar difference was found even when a haptic response was used for the visual stimulus and a visual response for the haptic stimulus. These results indicate that there are independent systems for visual and haptic representations.
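
    The classic mental rotation signature is a reaction time that grows roughly linearly with stimulus angle; its absence for haptic stimuli would show up as a near-zero slope. A minimal sketch of that analysis, with made-up illustrative numbers rather than the study's data:

```python
def rt_slope(angles_deg, rts_ms):
    """Ordinary least-squares slope of reaction time against stimulus angle.

    Mental rotation appears as a positive slope (RT grows with angular
    disparity); a near-zero slope means the effect is absent.
    """
    n = len(angles_deg)
    mean_a = sum(angles_deg) / n
    mean_r = sum(rts_ms) / n
    num = sum((a - mean_a) * (r - mean_r) for a, r in zip(angles_deg, rts_ms))
    den = sum((a - mean_a) ** 2 for a in angles_deg)
    return num / den  # milliseconds of RT per degree of rotation


angles = [0, 60, 120, 180]
visual_slope = rt_slope(angles, [500, 620, 740, 860])   # RT rises with angle
haptic_slope = rt_slope(angles, [700, 705, 698, 702])   # RT roughly flat
```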

  1. Possible role for fundus autofluorescence as a predictive factor for visual acuity recovery after epiretinal membrane surgery.

    Science.gov (United States)

    Brito, Pedro N; Gomes, Nuno L; Vieira, Marco P; Faria, Pedro A; Fernandes, Augusto V; Rocha-Sousa, Amândio; Falcão-Reis, Fernando

    2014-02-01

    To study the potential association between fundus autofluorescence, spectral-domain optical coherence tomography, and visual acuity in patients undergoing surgery because of epiretinal membranes. Prospective, interventional case series including 26 patients who underwent vitrectomy because of symptomatic epiretinal membranes. Preoperative evaluation consisted of a complete ophthalmologic examination, autofluorescence, and spectral-domain optical coherence tomography. Studied variables included foveal autofluorescence (fov.AF), photoreceptor inner segment/outer segment (IS/OS) junction line integrity, external limiting membrane integrity, central foveal thickness, and foveal morphology. All examinations were repeated at the first, third, and sixth postoperative months. The main outcome measures were logarithm of the minimum angle of resolution (logMAR) visual acuity, fov.AF integrity, and IS/OS integrity. All cases showing a continuous IS/OS line had an intact fov.AF, whereas patients with IS/OS disruption could have either an increased area of foveal hypoautofluorescence or an intact fov.AF, with the latter being associated with IS/OS integrity recovery on follow-up spectral-domain optical coherence tomography imaging. The only preoperative variables presenting a significant correlation with final visual acuity were baseline visual acuity (P = 0.047) and fov.AF grade (P = 0.023). Recovery of IS/OS line integrity after surgery, in patients with preoperative IS/OS disruption and normal fov.AF, can be explained by the presence of a functional retinal pigment epithelium-photoreceptor complex, supporting normal photoreceptor activity. Autofluorescence imaging provides a functional component to the study of epiretinal membranes, complementing the structural information obtained with optical coherence tomography.

  2. Biases in Visual, Auditory, and Audiovisual Perception of Space

    Science.gov (United States)

    Odegaard, Brian; Wozny, David R.; Shams, Ladan

    2015-01-01

    Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only improves the
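
    The Bayesian Causal Inference framework invoked here infers, from the discrepancy between the cues, whether one event or two generated them, and averages the fused and independent estimates accordingly. A simplified sketch under a zero-mean Gaussian spatial prior (in the style of Kording et al., 2007; the parameter values below are illustrative assumptions, not the study's fits):

```python
import math

def causal_inference_estimate(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common):
    """Model-averaged estimate of the visual event location under
    Bayesian causal inference.

    x_v, x_a: noisy visual and auditory measurements; sigma_v, sigma_a:
    sensory noise; sigma_p: width of a zero-mean spatial prior;
    p_common: prior probability that the cues share a cause.
    """
    var_v, var_a, var_p = sigma_v ** 2, sigma_a ** 2, sigma_p ** 2

    # Likelihood of the measurement pair given one common cause ...
    var_common = var_v * var_a + var_v * var_p + var_a * var_p
    like_c1 = math.exp(-0.5 * ((x_v - x_a) ** 2 * var_p
                               + x_v ** 2 * var_a
                               + x_a ** 2 * var_v) / var_common) \
        / (2 * math.pi * math.sqrt(var_common))

    # ... and given two independent causes.
    like_c2 = (math.exp(-0.5 * x_v ** 2 / (var_v + var_p))
               / math.sqrt(2 * math.pi * (var_v + var_p))) \
        * (math.exp(-0.5 * x_a ** 2 / (var_a + var_p))
           / math.sqrt(2 * math.pi * (var_a + var_p)))

    # Posterior probability that the cues share a cause.
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))

    # Optimal estimates under each causal structure (prior mean is 0).
    s_fused = (x_v / var_v + x_a / var_a) / (1 / var_v + 1 / var_a + 1 / var_p)
    s_indep = (x_v / var_v) / (1 / var_v + 1 / var_p)

    # Model averaging combines the two estimates.
    return post_c1 * s_fused + (1 - post_c1) * s_indep
```

    When the cues nearly coincide, the posterior favors a common cause and the estimate stays close to the fused value; a large audio-visual discrepancy shifts weight toward the independent, vision-only estimate.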

  3. Biases in Visual, Auditory, and Audiovisual Perception of Space.

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2015-12-01

    Full Text Available Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only

  4. FISH Oracle 2: a web server for integrative visualization of genomic data in cancer research.

    Science.gov (United States)

    Mader, Malte; Simon, Ronald; Kurtz, Stefan

    2014-03-31

    A comprehensive view on all relevant genomic data is instrumental for understanding the complex patterns of molecular alterations typically found in cancer cells. One of the most effective ways to rapidly obtain an overview of genomic alterations in large amounts of genomic data is the integrative visualization of genomic events. We developed FISH Oracle 2, a web server for the interactive visualization of different kinds of downstream-processed genomics data typically available in cancer research. A powerful search interface and a fast visualization engine provide a highly interactive visualization for such data. High-quality image export enables life scientists to easily communicate their results. A comprehensive data administration component allows users to keep track of the available data sets. We applied FISH Oracle 2 to published data and found evidence that, in colorectal cancer cells, the gene TTC28 may be inactivated in two different ways, a fact that has not been published before. The interactive nature of FISH Oracle 2 and the possibility to store, select, and visualize large amounts of downstream-processed data support life scientists in generating hypotheses. The export of high-quality images supports explanatory data visualization, simplifying the communication of new biological findings. A FISH Oracle 2 demo server and the software are available at http://www.zbh.uni-hamburg.de/fishoracle.

  5. Integration of intraoperative stereovision imaging for brain shift visualization during image-guided cranial procedures

    Science.gov (United States)

    Schaewe, Timothy J.; Fan, Xiaoyao; Ji, Songbai; Roberts, David W.; Paulsen, Keith D.; Simon, David A.

    2014-03-01

    Dartmouth and Medtronic Navigation have established an academic-industrial partnership to develop, validate, and evaluate a multi-modality neurosurgical image-guidance platform for brain tumor resection surgery that is capable of updating the spatial relationships between preoperative images and the current surgical field. A stereovision system has been developed and optimized for intraoperative use through integration with a surgical microscope and an image-guided surgery system. The microscope optics and stereovision CCD sensors are localized relative to the surgical field using optical tracking and can efficiently acquire stereo image pairs from which a localized 3D profile of the exposed surface is reconstructed. This paper reports the first demonstration of intraoperative acquisition, reconstruction and visualization of 3D stereovision surface data in the context of an industry-standard image-guided surgery system. The integrated system is capable of computing and presenting a stereovision-based update of the exposed cortical surface in less than one minute. Alternative methods for visualization of high-resolution, texture-mapped stereovision surface data are also investigated with the objective of determining the technical feasibility of direct incorporation of intraoperative stereo imaging into future iterations of Medtronic's navigation platform.

  6. Integration of multidisciplinary technologies for real time target visualization and verification for radiotherapy

    Directory of Open Access Journals (Sweden)

    Chang WC

    2014-06-01

    Full Text Available Wen-Chung Chang,1,* Chin-Sheng Chen,2,* Hung-Chi Tai,3 Chia-Yuan Liu,4,5 Yu-Jen Chen3 1Department of Electrical Engineering, National Taipei University of Technology, Taipei, Taiwan; 2Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei, Taiwan; 3Department of Radiation Oncology, Mackay Memorial Hospital, Taipei, Taiwan; 4Department of Internal Medicine, Mackay Memorial Hospital, Taipei, Taiwan; 5Department of Medicine, Mackay Medical College, New Taipei City, Taiwan  *These authors contributed equally to this work Abstract: The current practice of radiotherapy examines target coverage solely from the digitally reconstructed beam's eye view (BEV), in a way that is indirectly accessible and not in real time. We aimed to visualize treatment targets in real time from each BEV. The image data of phantoms or patients from ultrasound (US) and computed tomography (CT) scans were captured to perform image registration. We integrated US, CT, US/CT image registration, robotic manipulation of US, a radiation treatment planning system, and a linear accelerator to constitute an innovative target visualization system. The algorithm segmented the target organ in CT images, transformed and reconstructed US images to match each orientation, and generated image registration in real-time mode with acceptable accuracy. This image transformation allowed physicians to visualize the CT image-reconstructed target via a US probe outside the BEV that was non-coplanar to the beam's plane. It allowed the physicians to remotely control the US probe, equipped on a robotic arm, to dynamically trace and monitor in real time the coverage of the target within the BEV during a simulated beam-on situation. This target visualization system may provide a direct, remotely accessible, and real-time way to visualize, verify, and ensure tumor targeting during radiotherapy. Keywords: ultrasound, computerized tomography

  7. Effects of temporal integration on the shape of visual backward masking functions.

    Science.gov (United States)

    Francis, Gregory; Cho, Yang Seok

    2008-10-01

    Many studies of cognition and perception use a visual mask to explore the dynamics of information processing of a target. Especially important in these applications is the time between the target and mask stimuli. A plot of some measure of target visibility against stimulus onset asynchrony is called a masking function, which is sometimes monotonically increasing but at other times U-shaped. Theories of backward masking have long hypothesized that temporal integration of the target and mask influences properties of masking but have not connected the influence of integration with the shape of the masking function. With two experiments that vary the spatial properties of the target and mask, the authors provide evidence that temporal integration of the stimuli plays a critical role in determining the shape of the masking function. The resulting data both challenge current theories of backward masking and indicate what changes to the theories are needed to account for the new data. The authors further discuss the implication of the findings for uses of backward masking to explore other aspects of cognition.

  8. The effect of visual scanning exercises integrated into physiotherapy in patients with unilateral spatial neglect poststroke: a matched-pair randomized control trial.

    Science.gov (United States)

    van Wyk, Andoret; Eksteen, Carina A; Rheeder, Paul

    2014-01-01

    Unilateral spatial neglect (USN) is a visual-perceptual disorder that entails the inability to perceive and integrate stimuli on one side of the body, resulting in neglect of that side. Stroke patients with USN present with extensive functional disability and a prolonged duration of therapy input. To determine the effect of saccadic eye movement training with visual scanning exercises (VSEs) integrated with task-specific activities on USN poststroke. A matched-pair randomized control trial was conducted. Subjects were matched according to their functional activity level and allocated to either a control (n = 12) or an experimental group (n = 12). All patients received task-specific activities for a 4-week intervention period. The experimental group received saccadic eye movement training with VSE integrated with task-specific activities as an "add on" intervention. Assessments were conducted weekly over the intervention period. Statistically significant differences were noted on the King-Devick Test (P = .021), Star Cancellation Test (P = .016), and Barthel Index (P = .004). Intensive saccadic eye movement training with VSE integrated with task-specific activities has a significant effect on USN in patients poststroke. Results of this study are supported by findings from previously reviewed literature, in the sense that saccadic eye movement training with VSE as an intervention approach has a significant effect on the visual perceptual processing of participants with USN poststroke. The significantly improved visual perceptual processing translates to significantly better visual function and ability to perform activities of daily living following the stroke. © The Author(s) 2014.

  9. HI-VISUAL: A language supporting visual interaction in programming

    International Nuclear Information System (INIS)

    Monden, N.; Yoshino, Y.; Hirakawa, M.; Tanaka, M.; Ichikawa, T.

    1984-01-01

    This paper presents a language named HI-VISUAL which supports visual interaction in programming. Following a brief description of the language concept, the icon semantics and language primitives characterizing HI-VISUAL are extensively discussed. HI-VISUAL also shows system extensibility, providing the possibility of organizing a high-level application system as an integration of several existing subsystems, and will serve to develop systems in various fields of application, supporting simple and efficient interaction between programmer and computer.

  10. Visualize This The FlowingData Guide to Design, Visualization, and Statistics

    CERN Document Server

    Yau, Nathan

    2011-01-01

    Practical data design tips from a data visualization expert of the modern age Data doesn't decrease; it is ever-increasing and can be overwhelming to organize in a way that makes sense to its intended audience. Wouldn't it be wonderful if we could actually visualize data in such a way that we could maximize its potential and tell a story in a clear, concise manner? Thanks to the creative genius of Nathan Yau, we can. With this full-color book, data visualization guru and author Nathan Yau uses step-by-step tutorials to show you how to visualize and tell stories with data. He explains how to ga

  11. An integrative framework of stress, attention, and visuomotor performance

    Directory of Open Access Journals (Sweden)

    Samuel James Vine

    2016-11-01

    Full Text Available The aim of this article is to present an integrative conceptual framework that depicts the effect of acute stress on the performance of visually guided motor skills. We draw upon seminal theories highlighting the importance of subjective interpretations of stress on subsequent performance and outline how models of disrupted attentional control might explain this effect through impairments in visuomotor control. We first synthesize and critically discuss empirical support for theories examining these relationships in isolation. We then outline our integrative framework that seeks to provide a more complete picture of the interacting influences of stress responses (challenge and threat) and attention in explaining how elevated stress may lead to different visuomotor performance outcomes. We propose a number of mechanisms that explain why evaluations of stress are related to attentional control, and highlight the emotion of anxiety as the most likely candidate to explain why negative reactions to stress lead to disrupted attention and poor visuomotor skill performance. Finally, we propose a number of feedback loops that explain why stress responses are often self-perpetuating, as well as a number of proposed interventions that are designed to help improve or maintain performance in real-world performance environments (e.g., sport, surgery, military, and aviation).

  12. Visual Literacy and Message Design

    Science.gov (United States)

    Pettersson, Rune

    2009-01-01

    Many researchers from different disciplines have explained their views and interpretations and written about visual literacy from their various perspectives. Visual literacy may be applied in almost all areas such as advertising, anatomy, art, biology, business presentations, communication, education, engineering, etc. (Pettersson, 2002a). Despite…

  13. Helping To Integrate The Visually Challenged Into Mainstream Society Through A Low-Cost Braille Device

    Directory of Open Access Journals (Sweden)

    Desirée Jordan

    2013-06-01

    Full Text Available The visually challenged are often alienated from mainstream society because of their disabilities. This problem is even more pronounced in developing countries, which often do not have the resources necessary to integrate this group into their communities or even to help them become independent. It should therefore be the aim of governments in developing countries to provide this vulnerable group with access to assistive technologies at a low cost. This paper describes an ongoing project that aims to provide low-cost assistive technologies to the visually challenged in Barbados. As a part of this project, a study was conducted on a sample of visually challenged members of the Barbados Association for the Blind and Deaf to determine their ICT skills, knowledge of Braille, and their use of assistive technologies. An analysis of the results prompted the design and creation of a low-cost Braille device prototype. The cost of this prototype was about one-half that of a commercially available device, and it can be used without a screen reader. This device should help create equal opportunities for the visually challenged in Barbados and other developing countries. It should also allow the visually challenged to become more independent.

  14. Time-varying spatial data integration and visualization: 4 Dimensions Environmental Observations Platform (4-DEOS)

    Science.gov (United States)

    Paciello, Rossana; Coviello, Irina; Filizzola, Carolina; Genzano, Nicola; Lisi, Mariano; Mazzeo, Giuseppe; Pergola, Nicola; Sileo, Giancanio; Tramutoli, Valerio

    2014-05-01

    In environmental studies the integration of heterogeneous, time-varying data is a very common requirement for investigating, and possibly visualizing, correlations among the physical parameters underlying the dynamics of complex phenomena. Datasets used in such applications often have different spatial and temporal resolutions. In some cases superimposition of asynchronous layers is required. Traditionally, the platforms used to perform spatio-temporal visual data analyses allow users to overlay spatial data, managing time with a 'snapshot' data model, each stack of layers being labeled with a different time. But this kind of architecture incorporates neither temporal indexing nor the third spatial dimension, which is usually given as an independent additional layer. Conversely, the full representation of a generic environmental parameter P(x,y,z,t) in the 4D space-time domain could allow handling asynchronous datasets as well as less traditional data products (e.g. vertical sections, punctual time series, etc.). In this paper we present the 4 Dimensions Environmental Observation Platform (4-DEOS), a system based on a Client-Broker-Server web services architecture. This platform is a new open source solution for both timely access and easy integration and visualization of heterogeneous (maps, vertical profiles or sections, punctual time series, etc.), asynchronous, geospatial products. The innovative aspect of the 4-DEOS system is that users can analyze data/products individually while moving through time, also having the possibility to stop the display of some data/products and focus on other parameters to better study their temporal evolution. This platform gives the opportunity to choose between two distinct display modes, for a time interval or for a single instant. Users can choose to visualize data/products in two ways: i) showing each parameter in a dedicated window or ii) visualizing all parameters overlapped in a single window. A sliding time bar allows

  15. IVAG: An Integrative Visualization Application for Various Types of Genomic Data Based on R-Shiny and the Docker Platform.

    Science.gov (United States)

    Lee, Tae-Rim; Ahn, Jin Mo; Kim, Gyuhee; Kim, Sangsoo

    2017-12-01

    Next-generation sequencing (NGS) technology has become a trend in the genomics research area. There are many software programs and automated pipelines to analyze NGS data, which can ease the pain for traditional scientists who are not familiar with computer programming. However, downstream analyses, such as finding differentially expressed genes or visualizing linkage disequilibrium maps and genome-wide association study (GWAS) data, still remain a challenge. Here, we introduce a dockerized web application written in R using the Shiny platform to visualize pre-analyzed RNA sequencing and GWAS data. In addition, we have integrated a genome browser based on the JBrowse platform and an automated intermediate parsing process required for custom track construction, so that users can easily build and navigate their personal genome tracks with in-house datasets. This application will help scientists perform a series of downstream analyses and obtain a more integrative understanding of various types of genomic data by interactively visualizing them with customizable options.

  16. Modelling audiovisual integration of affect from videos and music.

    Science.gov (United States)

    Gao, Chuanji; Wedell, Douglas H; Kim, Jongwan; Weber, Christine E; Shinkareva, Svetlana V

    2018-05-01

    Two experiments examined how affective values from visual and auditory modalities are integrated. Experiment 1 paired music and videos drawn from three levels of valence while holding arousal constant. Experiment 2 included a parallel combination of three levels of arousal while holding valence constant. In each experiment, participants rated their affective states after unimodal and multimodal presentations. Experiment 1 revealed a congruency effect in which stimulus combinations of the same extreme valence resulted in more extreme state ratings than component stimuli presented in isolation. An interaction between music and video valence reflected the greater influence of negative affect. Video valence was found to have a significantly greater effect on combined ratings than music valence. The pattern of data was explained by a five parameter differential weight averaging model that attributed greater weight to the visual modality and increased weight with decreasing values of valence. Experiment 2 revealed a congruency effect only for high arousal combinations and no interaction effects. This pattern was explained by a three parameter constant weight averaging model with greater weight for the auditory modality and a very low arousal value for the initial state. These results demonstrate key differences in audiovisual integration between valence and arousal.
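
    The differential-weight averaging account described here can be written down in a few lines. The sketch below uses a generic information-integration averaging rule with an initial (resting) state; the weights and values are illustrative assumptions, not the paper's fitted parameters:

```python
def averaged_affect(video_val, music_val, w_video, w_music, w0=0.5, s0=0.0):
    """Weighted-averaging prediction of the felt affect for a
    video + music pairing.

    w0 and s0 are the weight and value of the initial (resting) state.
    """
    total = w0 + w_video + w_music
    return (w0 * s0 + w_video * video_val + w_music * music_val) / total


# Visual dominance: a larger video weight lets a positive video (+1)
# outweigh an equally negative soundtrack (-1).
combined = averaged_affect(1.0, -1.0, w_video=2.0, w_music=1.0)

# Congruency effect: two matching positive sources yield a more extreme
# rating than the video presented alone.
bimodal = averaged_affect(1.0, 1.0, w_video=2.0, w_music=1.0)
video_only = averaged_affect(1.0, 0.0, w_video=2.0, w_music=0.0)
```

    With the video weight above the music weight, a positive video dominates an equally negative soundtrack, and two congruent positive sources produce a more extreme rating than the video alone, because together they dilute the neutral initial state more.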

  17. Subcortical orientation biases explain orientation selectivity of visual cortical cells.

    Science.gov (United States)

    Vidyasagar, Trichur R; Jayakumar, Jaikishan; Lloyd, Errol; Levichkina, Ekaterina V

    2015-04-01

    The primary visual cortex of carnivores and primates shows an orderly progression of domains of neurons that are selective to a particular orientation of visual stimuli such as bars and gratings. We recorded from single thalamic afferent fibers that terminate in these domains to address the issue of whether the orientation sensitivity of these fibers could form the basis of the remarkable orientation selectivity exhibited by most cortical cells. We first performed optical imaging of intrinsic signals to obtain a map of orientation domains on the dorsal aspect of the anaesthetized cat's area 17. After confirming the orientation preferences of single neurons within one or two domains in each animal using electrophysiological recordings, we pharmacologically silenced the cortex to leave only the afferent terminals active. The inactivation of cortical neurons was achieved by the superfusion of either kainic acid or muscimol. Responses of single geniculate afferents were then recorded using high-impedance electrodes. We found that the orientation preferences of the afferents matched closely with those of the cells in the orientation domains that they terminated in (Pearson's r = 0.633, n = 22, P = 0.002). This suggests a possible subcortical origin for cortical orientation selectivity. © 2015 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.

  18. The Visual Geophysical Exploration Environment: A Multi-dimensional Scientific Visualization

    Science.gov (United States)

    Pandya, R. E.; Domenico, B.; Murray, D.; Marlino, M. R.

    2003-12-01

    The Visual Geophysical Exploration Environment (VGEE) is an online learning environment designed to help undergraduate students understand fundamental Earth system science concepts. The guiding principle of the VGEE is the importance of hands-on interaction with scientific visualization and data. The VGEE consists of four elements: 1) an online, inquiry-based curriculum for guiding student exploration; 2) a suite of El Nino-related data sets adapted for student use; 3) a learner-centered interface to a scientific visualization tool; and 4) a set of concept models (interactive tools that help students understand fundamental scientific concepts). There are two key innovations featured in this interactive poster session. One is the integration of concept models and the visualization tool. Concept models are simple, interactive, Java-based illustrations of fundamental physical principles. We developed eight concept models and integrated them into the visualization tool to enable students to probe data. The ability to probe data using a concept model addresses the common problem of transfer: the difficulty students have in applying theoretical knowledge to everyday phenomena. The other innovation is a visualization environment and data that are discoverable in digital libraries, and installed, configured, and used for investigations over the web. By collaborating with the Integrated Data Viewer developers, we were able to embed a web-launchable visualization tool and access to distributed data sets into the online curricula. The Thematic Real-time Environmental Data Distributed Services (THREDDS) project is working to provide catalogs of datasets that can be used in new VGEE curricula under development. By cataloging these curricula in the Digital Library for Earth System Education (DLESE), learners and educators can discover the data and visualization tool within a framework that guides their use.

  19. Lack of color integration in visual short-term memory binding.

    Science.gov (United States)

    Parra, Mario A; Cubelli, Roberto; Della Sala, Sergio

    2011-10-01

    Bicolored objects are retained in visual short-term memory (VSTM) less efficiently than unicolored objects. This is unlike shape-color combinations, whose retention in VSTM does not differ from that observed for shapes only. It is debated whether this is due to a lack of color integration and whether this may reflect the function of separate memory mechanisms. Participants judged whether the colors of bicolored objects (each with an external and an internal color) were the same or different across two consecutive screens. Colors had to be remembered either individually or in combination. In Experiment 1, external colors in the combined colors condition were remembered better than the internal colors, and performance for both was worse than that in the individual colors condition. The lack of color integration observed in Experiment 1 was further supported by a reduced capacity of VSTM to retain color combinations, relative to individual colors (Experiment 2). An additional account was found in Experiment 3, which showed spared color-color binding in the presence of impaired shape-color binding in a brain-damaged patient, thus suggesting that these two memory mechanisms are different.

  20. Visual Communication: Integrating Visual Instruction into Business Communication Courses

    Science.gov (United States)

    Baker, William H.

    2006-01-01

    Business communication courses are ideal for teaching visual communication principles and techniques. Many assignments lend themselves to graphic enrichment, such as flyers, handouts, slide shows, Web sites, and newsletters. Microsoft Publisher and Microsoft PowerPoint are excellent tools for these assignments, with Publisher being best for…

  1. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    Science.gov (United States)

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.

  2. Ontology-driven data integration and visualization for exploring regional geologic time and paleontological information

    Science.gov (United States)

    Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo

    2018-06-01

    Initiatives of open data promote the online publication and sharing of large amounts of geologic data. How to retrieve information and discover knowledge from the big data is an ongoing challenge. In this paper, we developed an ontology-driven data integration and visualization pilot system for exploring information of regional geologic time, paleontology, and fundamental geology. The pilot system (http://www2.cs.uidaho.edu/%7Emax/gts/)

  3. Experiences of Individuals With Visual Impairments in Integrated Physical Education: A Retrospective Study.

    Science.gov (United States)

    Haegele, Justin A; Zhu, Xihe

    2017-12-01

    The purpose of this retrospective study was to examine the experiences of adults with visual impairments during school-based integrated physical education (PE). An interpretative phenomenological analysis (IPA) research approach was used and 16 adults (ages 21-48 years; 10 women, 6 men) with visual impairments acted as participants for this study. The primary sources of data were semistructured audiotaped telephone interviews and reflective field notes, which were recorded during and immediately following each interview. Thematic development was undertaken utilizing a 3-step analytical process guided by IPA. Based on the data analysis, 3 interrelated themes emerged from the participant transcripts: (a) feelings about "being put to the side," frustration and inadequacy; (b) "She is blind, she can't do it," debilitating feelings from physical educators' attitudes; and (c) "not self-esteem raising," feelings about peer interactions. The 1st theme described the participants' experiences and ascribed meaning to exclusionary practices. The 2nd theme described the participants' frustration over being treated differently by their PE teachers because of their visual impairments. Lastly, "not self-esteem raising," feelings about peer interactions demonstrated how participants felt about issues regarding challenging social situations with peers in PE. Utilizing an IPA approach, the researchers uncovered 3 interrelated themes that depicted central feelings, experiences, and reflections, which informed the meaning of the participants' PE experiences. The emerged themes provide unique insight into the embodied experiences of those with visual impairments in PE and fill a previous gap in the extant literature.

  4. Visual feature integration indicated by phase-locked frontal-parietal EEG signals.

    Science.gov (United States)

    Phillips, Steven; Takeda, Yuji; Singh, Archana

    2012-01-01

    The capacity to integrate multiple sources of information is a prerequisite for complex cognitive ability, such as finding a target uniquely identifiable by the conjunction of two or more features. Recent studies identified greater frontal-parietal synchrony during conjunctive than non-conjunctive (feature) search. Whether this difference also reflects greater information integration, rather than just differences in cognitive strategy (e.g., top-down versus bottom-up control of attention), or task difficulty is uncertain. Here, we examine the first possibility by parametrically varying the number of integrated sources from one to three and measuring phase-locking values (PLV) of frontal-parietal EEG electrode signals, as indicators of synchrony. Linear regressions, under hierarchical false-discovery rate control, indicated significant positive slopes for number of sources on PLV in the 30-38 Hz, 175-250 ms post-stimulus frequency-time band for pairs in the sagittal plane (i.e., F3-P3, Fz-Pz, F4-P4), after equating conditions for behavioural performance (to exclude effects due to task difficulty). No such effects were observed for pairs in the transverse plane (i.e., F3-F4, C3-C4, P3-P4). These results provide support for the idea that anterior-posterior phase-locking in the lower gamma-band mediates integration of visual information. They also provide a potential window into cognitive development, seen as developing the capacity to integrate more sources of information.
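
    The phase-locking value used here as the synchrony indicator has a standard definition: the magnitude of the trial-averaged unit phasor of the phase difference between two signals. A self-contained sketch with synthetic phases (illustrative; not the study's EEG analysis pipeline):

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    """PLV across trials: |mean of exp(i * phase difference)|.
    1 means a constant phase lag across trials; values near 0 mean
    the phase relation is random."""
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))

rng = np.random.default_rng(0)
n_trials = 500
base = rng.uniform(0, 2 * np.pi, n_trials)

# Locked: a constant 0.5 rad lag plus small jitter -> PLV near 1.
locked = phase_locking_value(base, base - 0.5 + 0.1 * rng.standard_normal(n_trials))
# Unlocked: independent random phases -> PLV near 0.
unlocked = phase_locking_value(base, rng.uniform(0, 2 * np.pi, n_trials))
```

    In practice the per-trial phases would be extracted from bandpass-filtered electrode signals (e.g. in the 30-38 Hz band) rather than generated synthetically.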

  5. Visual Ecology and the Development of Visually Guided Behavior in the Cuttlefish

    Directory of Open Access Journals (Sweden)

    Anne-Sophie Darmaillacq

    2017-06-01

    Full Text Available Cuttlefish are highly visual animals, a fact reflected in the large size of their eyes and visual-processing centers of their brain. Adults detect their prey visually, navigate using visual cues such as landmarks or the e-vector of polarized light and display intense visual patterns during mating and agonistic encounters. Although much is known about the visual system in adult cuttlefish, few studies have investigated its development and that of visually-guided behavior in juveniles. This review summarizes the results of studies of visual development in embryos and young juveniles. The visual system is the last to develop, as in vertebrates, and is functional before hatching. Indeed, embryonic exposure to prey, shelters or complex background alters postembryonic behavior. Visual acuity and lateralization, and polarization sensitivity improve throughout the first months after hatching. The production of body patterning in juveniles is not the simple stimulus-response process commonly presented in the literature. Rather, it likely requires the complex integration of visual information, and is subject to inter-individual differences. Though the focus of this review is vision in cuttlefish, it is important to note that other senses, particularly sensitivity to vibration and to waterborne chemical signals, also play a role in behavior. Considering the multimodal sensory dimensions of natural stimuli and their integration and processing by individuals offer new exciting avenues of future inquiry.

  6. Visual Ecology and the Development of Visually Guided Behavior in the Cuttlefish.

    Science.gov (United States)

    Darmaillacq, Anne-Sophie; Mezrai, Nawel; O'Brien, Caitlin E; Dickel, Ludovic

    2017-01-01

    Cuttlefish are highly visual animals, a fact reflected in the large size of their eyes and visual-processing centers of their brain. Adults detect their prey visually, navigate using visual cues such as landmarks or the e -vector of polarized light and display intense visual patterns during mating and agonistic encounters. Although much is known about the visual system in adult cuttlefish, few studies have investigated its development and that of visually-guided behavior in juveniles. This review summarizes the results of studies of visual development in embryos and young juveniles. The visual system is the last to develop, as in vertebrates, and is functional before hatching. Indeed, embryonic exposure to prey, shelters or complex background alters postembryonic behavior. Visual acuity and lateralization, and polarization sensitivity improve throughout the first months after hatching. The production of body patterning in juveniles is not the simple stimulus-response process commonly presented in the literature. Rather, it likely requires the complex integration of visual information, and is subject to inter-individual differences. Though the focus of this review is vision in cuttlefish, it is important to note that other senses, particularly sensitivity to vibration and to waterborne chemical signals, also play a role in behavior. Considering the multimodal sensory dimensions of natural stimuli and their integration and processing by individuals offer new exciting avenues of future inquiry.

  7. Temporal windows in visual processing: "prestimulus brain state" and "poststimulus phase reset" segregate visual transients on different temporal scales.

    Science.gov (United States)

    Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David

    2014-01-22

    Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.

  8. Oak Ridge Bio-surveillance Toolkit (ORBiT): Integrating Big-Data Analytics with Visual Analysis for Public Health Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Ramanathan, Arvind [ORNL]; Pullum, Laura L [ORNL]; Steed, Chad A [ORNL]; Chennubhotla, Chakra [University of Pittsburgh School of Medicine, Pittsburgh PA]; Quinn, Shannon [University of Pittsburgh School of Medicine, Pittsburgh PA]

    2013-01-01

    In this position paper, we describe the design and implementation of the Oak Ridge Bio-surveillance Toolkit (ORBiT): a collection of novel statistical and machine learning tools implemented for (1) integrating heterogeneous traditional (e.g. emergency room visits, prescription sales data, etc.) and non-traditional (social media such as Twitter and Instagram) data sources, (2) analyzing large-scale datasets and (3) presenting the results from the analytics as a visual interface for the end-user to interact and provide feedback. We present examples of how ORBiT can be used to summarize extremely large-scale datasets effectively and how user interactions can translate into the data analytics process for bio-surveillance. We also present a strategy to estimate parameters relevant to disease spread models from near real time data feeds and show how these estimates can be integrated with disease spread models for large-scale populations. We conclude with a perspective on how integrating data and visual analytics could lead to better forecasting and prediction of disease spread as well as improved awareness of disease susceptible regions.

  9. Contextual interactions in grating plaid configurations are explained by natural image statistics and neural modeling

    Directory of Open Access Journals (Sweden)

    Udo Alexander Ernst

    2016-10-01

    Full Text Available Processing natural scenes requires the visual system to integrate local features into global object descriptions. To achieve coherent representations, the human brain uses statistical dependencies to guide weighting of local feature conjunctions. Pairwise interactions among feature detectors in early visual areas may form the early substrate of these local feature bindings. To investigate local interaction structures in visual cortex, we combined psychophysical experiments with computational modeling and natural scene analysis. We first measured contrast thresholds for 2x2 grating patch arrangements (plaids), which differed in spatial frequency composition (low, high, or mixed), number of grating patch co-alignments (0, 1, or 2), and inter-patch distances (1° and 2° of visual angle). Contrast thresholds for the different configurations were compared to the prediction of probability summation (PS) among detector families tuned to the four retinal positions. For 1° distance, the thresholds for all configurations were larger than predicted by PS, indicating inhibitory interactions. For 2° distance, thresholds were significantly lower compared to PS when the plaids were homogeneous in spatial frequency and orientation, but not when spatial frequencies were mixed or there was at least one misalignment. Next, we constructed a neural population model with horizontal laminar structure, which reproduced the detection thresholds after adaptation of connection weights. Consistent with prior work, contextual interactions were medium-range inhibition and long-range, orientation-specific excitation. However, inclusion of orientation-specific, inhibitory interactions between populations with different spatial frequency preferences was crucial for explaining detection thresholds. Finally, for all plaid configurations we computed their likelihood of occurrence in natural images. The likelihoods turned out to be inversely related to the detection thresholds obtained
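
    The probability-summation baseline can be sketched in its simplest, independent-detectors form (illustrative only; the study's actual PS prediction may use a different pooling formulation, such as Quick pooling, and the detection probability below is hypothetical):

```python
def probability_summation(p_detect):
    """Independent-detector probability summation: the configuration is
    detected if at least one local detector responds."""
    p_miss = 1.0
    for p in p_detect:
        p_miss *= 1.0 - p
    return 1.0 - p_miss

# Four patch locations, each detected with probability 0.3 in isolation.
p_plaid = probability_summation([0.3] * 4)
```

    Measured thresholds above this baseline indicate inhibitory interactions among the detectors; thresholds below it indicate facilitation.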

  10. Long-term musical training may improve different forms of visual attention ability.

    Science.gov (United States)

    Rodrigues, Ana Carolina; Loureiro, Maurício Alves; Caramelli, Paulo

    2013-08-01

    Many studies have suggested that structural and functional cerebral neuroplastic processes result from long-term musical training, which in turn may produce cognitive differences between musicians and non-musicians. We aimed to investigate whether intensive, long-term musical practice is associated with improvements in three different forms of visual attention ability: selective, divided and sustained attention. Musicians from symphony orchestras (n=38) and non-musicians (n=38), who were comparable in age, gender and education, completed three neuropsychological tests measuring reaction time and accuracy. Musicians showed better performance relative to non-musicians on four variables of the three visual attention tests, and such an advantage could not solely be explained by better sensorimotor integration. Moreover, in the group of musicians, significant correlations were observed between the age at the commencement of musical studies and reaction time in all visual attention tests. The results suggest that musicians present augmented ability in different forms of visual attention, thus illustrating the possible cognitive benefits of long-term musical training. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Visualization of the Eastern Renewable Generation Integration Study: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Gruchalla, Kenny; Novacheck, Joshua; Bloom, Aaron

    2016-12-01

    The Eastern Renewable Generation Integration Study (ERGIS) explores the operational impacts of the widespread adoption of wind and solar photovoltaic (PV) resources in the U.S. Eastern Interconnection and Quebec Interconnection (collectively, EI). In order to understand some of the economic and reliability challenges of managing hundreds of gigawatts of wind and PV generation, we developed state-of-the-art tools, data, and models for simulating power system operations using hourly unit commitment and 5-minute economic dispatch over an entire year. Using NREL's high-performance computing capabilities and new methodologies to model operations, we found that the EI, as simulated with evolutionary change in 2026, could balance the variability and uncertainty of wind and PV at a 5-minute level under a variety of conditions. A large-scale display and a combination of multiple coordinated views and small multiples were used to visually analyze the four large, highly multivariate scenarios with high spatial and temporal resolutions.

  12. Duration estimates within a modality are integrated sub-optimally

    Directory of Open Access Journals (Sweden)

    Ming Bo eCai

    2015-08-01

    Full Text Available Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in a different way? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provided a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task.
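
    The statistically optimal benchmark against which such weighting is compared is inverse-variance (maximum-likelihood) cue combination. A minimal sketch, with arbitrary illustrative numbers rather than the study's estimates:

```python
def mli_combine(estimates, variances):
    """Optimal (maximum-likelihood) integration of independent Gaussian
    cues: weight each estimate by its reliability (inverse variance).
    The combined variance is smaller than any single cue's variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined_estimate = sum(w * e for w, e in zip(weights, estimates)) / total
    combined_variance = 1.0 / total
    return combined_estimate, combined_variance

# Two duration estimates (arbitrary units) with unequal reliabilities:
# the combined estimate sits closer to the more reliable cue.
est, var = mli_combine([1.2, 0.8], [0.04, 0.12])
```

    Sub-optimal integration, as reported here, means the empirically observed weights or combined variability deviate from this prediction.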

  13. Traffic Visualization

    DEFF Research Database (Denmark)

    Picozzi, Matteo; Verdezoto, Nervo; Pouke, Matti

    2013-01-01

    In this paper, we present a space-time visualization to provide a city's decision-makers with the ability to analyse and uncover important "city events" in an understandable manner for city planning activities. An interactive Web mashup visualization is presented that integrates several visualization techniques to give a rapid overview of traffic data. We illustrate our approach as a case study for traffic visualization systems, using datasets from the city of Oulu that can be extended to other city planning activities. We also report the feedback of real users (traffic management employees, traffic police...

  14. Pedagogical Praxis Surrounding the Integration of Photography, Visual Literacy, Digital Literacy, and Educational Technology into Business Education Classrooms: A Focus Group Study

    Science.gov (United States)

    Schlosser, Peter Allen

    2010-01-01

    This paper reports on an investigation into how Marketing and Business Education Teachers utilize and integrate educational technology into curriculum through the use of photography. The ontology of this visual, technological, and language interface is explored with an eye toward visual literacy, digital literacy, and pedagogical praxis, focusing…

  15. Secondary visual workload capability with primary visual and kinesthetic-tactual displays

    Science.gov (United States)

    Gilson, R. D.; Burke, M. W.; Jagacinski, R. J.

    1978-01-01

    Subjects performed a cross-adaptive tracking task with a visual secondary display and either a visual or a quickened kinesthetic-tactual (K-T) primary display. The quickened K-T display resulted in superior secondary task performance. Comparisons of secondary workload capability with integrated and separated visual displays indicated that the superiority of the quickened K-T display was not simply due to the elimination of visual scanning. When subjects did not have to perform a secondary task, there was no significant difference between visual and quickened K-T displays in performing a critical tracking task.

  16. A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech.

    Directory of Open Access Journals (Sweden)

    John F Magnotti

    2017-02-01

    Full Text Available Audiovisual speech integration combines information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba). We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
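
    The core causal-inference computation can be sketched generically (a standard Gaussian causal-inference model with illustrative parameters, not the authors' CIMS implementation; here "cue value" abstracts the audiovisual discrepancy between syllables):

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_common_cause(x_aud, x_vis, sig_aud, sig_vis, sig_prior=2.0, p_common=0.5):
    """Posterior probability that the auditory and visual cues share one
    source, marginalizing the latent source location(s) on a grid."""
    grid = [i * 0.05 for i in range(-200, 201)]
    ds = grid[1] - grid[0]
    # C = 1: a single latent source s generates both cues.
    like_c1 = sum(gauss(x_aud, s, sig_aud) * gauss(x_vis, s, sig_vis)
                  * gauss(s, 0.0, sig_prior) for s in grid) * ds
    # C = 2: two independent sources generate the cues separately.
    like_a = sum(gauss(x_aud, s, sig_aud) * gauss(s, 0.0, sig_prior) for s in grid) * ds
    like_v = sum(gauss(x_vis, s, sig_vis) * gauss(s, 0.0, sig_prior) for s in grid) * ds
    like_c2 = like_a * like_v
    return p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)

# Similar cues favor a common cause (integrate); discrepant cues favor
# separate causes (do not integrate).
near = p_common_cause(0.2, -0.2, 0.5, 0.5)
far = p_common_cause(2.5, -2.5, 0.5, 0.5)
```

    A percept is then formed by integrating the cues when the common-cause posterior is high and keeping them separate otherwise, which is what lets such a model integrate McGurk stimuli while rejecting other incongruent combinations.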

  17. Integration of anatomical and external response mappings explains crossing effects in tactile localization: A probabilistic modeling approach.

    Science.gov (United States)

    Badde, Stephanie; Heed, Tobias; Röder, Brigitte

    2016-04-01

    To act upon a tactile stimulus, its original skin-based, anatomical spatial code has to be transformed into an external, posture-dependent reference frame, a process known as tactile remapping. When the limbs are crossed, anatomical and external location codes are in conflict, leading to a decline in tactile localization accuracy. It is unknown whether this impairment originates from the integration of the resulting external localization response with the original, anatomical one or from a failure of tactile remapping in crossed postures. We fitted probabilistic models based on these diverging accounts to the data from three tactile localization experiments. Hand crossing disturbed tactile left-right location choices in all experiments. Furthermore, the size of these crossing effects was modulated by stimulus configuration and task instructions. The best model accounted for these results by integration of the external response mapping with the original, anatomical one, while applying identical integration weights for uncrossed and crossed postures. Thus, the model explained the data without assuming failures of remapping. Moreover, performance differences across tasks were accounted for by non-individual parameter adjustments, indicating that individual participants' task adaptation results from one common functional mechanism. These results suggest that remapping is an automatic and accurate process, and that the observed localization impairments in touch result from a cognitively controlled integration process that combines anatomically and externally coded responses.
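
    A toy version of this integration account (the weights and logistic response rule below are hypothetical; the fitted models in the study are richer): anatomical and external left/right codes are combined with posture-independent weights, which produces crossed-hands errors without any failure of remapping.

```python
import math

def p_respond_right(anatomical_code, external_code, w_anat=0.4, w_ext=0.6):
    """Combine an anatomical (skin-based) and an external (posture-based)
    left/right code (+1 = 'right', -1 = 'left') with fixed weights, then
    map the pooled evidence to a choice probability via a logistic rule."""
    evidence = w_anat * anatomical_code + w_ext * external_code
    return 1.0 / (1.0 + math.exp(-4.0 * evidence))

# Uncrossed hands: both codes agree, so localization is near-perfect.
uncrossed = p_respond_right(+1, +1)
# Crossed hands: the right hand lies in left external space, the codes
# conflict, and accuracy drops -- with the same weights as uncrossed.
crossed = p_respond_right(+1, -1)
```

    The key point of the account is that the crossing effect falls out of the integration step itself, not from changing the weights or breaking the remapping in crossed postures.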

  18. What You See Is What You Remember: Visual Chunking by Temporal Integration Enhances Working Memory.

    Science.gov (United States)

    Akyürek, Elkan G; Kappelmann, Nils; Volkert, Marc; van Rijn, Hedderik

    2017-12-01

    Human memory benefits from information clustering, which can be accomplished by chunking. Chunking typically relies on expertise and strategy, and it is unknown whether perceptual clustering over time, through temporal integration, can also enhance working memory. The current study examined the attentional and working memory costs of temporal integration of successive target stimulus pairs embedded in rapid serial visual presentation. ERPs were measured as a function of behavioral reports: One target, two separate targets, or two targets reported as a single integrated target. N2pc amplitude, reflecting attentional processing, depended on the actual number of successive targets. The memory-related CDA and P3 components instead depended on the perceived number of targets irrespective of their actual succession. The report of two separate targets was associated with elevated amplitude, whereas integrated as well as actual single targets exhibited lower amplitude. Temporal integration thus provided an efficient means of processing sensory input, offloading working memory so that the features of two targets were consolidated and maintained at a cost similar to that of a single target.

  19. Integration of genomic information with biological networks using Cytoscape.

    Science.gov (United States)

    Bauer-Mehren, Anna

    2013-01-01

Cytoscape is open-source software for visualizing, analyzing, and modeling biological networks. This chapter explains how to use Cytoscape to analyze the functional effect of sequence variations in the context of biological networks such as protein-protein interaction networks and signaling pathways. The chapter is divided into five parts: (1) obtaining information about the functional effect of sequence variation in a Cytoscape-readable format, (2) loading and displaying different types of biological networks in Cytoscape, (3) integrating the genomic information (SNPs and mutations) with the biological networks, (4) analyzing the effect of the genomic perturbation on the network structure using Cytoscape's built-in functions, and (5) briefly outlining how the integrated data can help in building mathematical network models for analyzing the effect of the sequence variation on the dynamics of the biological system. Each part is illustrated by step-by-step instructions on an example use case and visualized by many screenshots and figures.

  20. Imagining Change: An Integrative Approach toward Explaining the Motivational Role of Mental Imagery in Pro-environmental Behavior

    Science.gov (United States)

    Boomsma, Christine; Pahl, Sabine; Andrade, Jackie

    2016-01-01

    Climate change and other long-term environmental issues are often perceived as abstract and difficult to imagine. The images a person associates with environmental change, i.e., a person’s environmental mental images, can be influenced by the visual information they come across in the public domain. This paper reviews the literature on this topic across social, environmental, and cognitive psychology, and the wider social sciences; thereby responding to a call for more critical investigations into people’s responses to visual information. By integrating the literature we come to a better understanding of the lack in vivid and concrete environmental mental imagery reported by the public, the link between environmental mental images and goals, and how affectively charged external images could help in making mental imagery less abstract. Preliminary research reports on the development of a new measure of environmental mental imagery and three tests of the relationship between environmental mental imagery, pro-environmental goals and behavior. Furthermore, the paper provides a program of research, drawing upon approaches from different disciplines, to set out the next steps needed to examine how and why we should encourage the public to imagine environmental change. PMID:27909415

  1. Visual Analytics and Storytelling through Video

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Pak C.; Perrine, Kenneth A.; Mackey, Patrick S.; Foote, Harlan P.; Thomas, Jim

    2005-10-31

    This paper supplements a video clip submitted to the Video Track of IEEE Symposium on Information Visualization 2005. The original video submission applies a two-way storytelling approach to demonstrate the visual analytics capabilities of a new visualization technique. The paper presents our video production philosophy, describes the plot of the video, explains the rationale behind the plot, and finally, shares our production experiences with our readers.

  2. Integrating Spherical Panoramas and Maps for Visualization of Cultural Heritage Objects Using Virtual Reality Technology.

    Science.gov (United States)

    Koeva, Mila; Luleva, Mila; Maldjanski, Plamen

    2017-04-11

Development and virtual representation of 3D models of Cultural Heritage (CH) objects has triggered great interest over the past decade. The main reason for this is the rapid development in the fields of photogrammetry and remote sensing, laser scanning, and computer vision. The advantages of using 3D models for restoration, preservation, and documentation of valuable historical and architectural objects have been repeatedly demonstrated by scientists in the field. Moreover, 3D model visualization in virtual reality has been recognized as an efficient, fast, and easy way of representing a variety of objects worldwide for present-day users, who have stringent requirements and high expectations. However, the main focus of recent research is the visual, geometric, and textural characteristics of a single concrete object, while integration of large numbers of models with additional information, such as historical overview, detailed description, and location, is missing. Such integrated information can be beneficial, not only for tourism but also for accurate documentation. For that reason, we demonstrate in this paper an integration of high-resolution spherical panoramas, a variety of maps, GNSS, sound, video, and text information for representation of numerous cultural heritage objects. These are then displayed in a web-based portal with an intuitive interface. The users have the opportunity to choose freely from the provided information, and decide for themselves what is interesting to visit. Based on the created web application, we provide suggestions and guidelines for similar studies. We selected objects located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. The methods used in this research are applicable to any type of spherical or cylindrical images and can be easily followed and applied in various domains.
After a visual and metric assessment of the panoramas and the evaluation of

  3. PathText: a text mining integrator for biological pathway visualizations

    Science.gov (United States)

    Kemper, Brian; Matsuzaki, Takuya; Matsuoka, Yukiko; Tsuruoka, Yoshimasa; Kitano, Hiroaki; Ananiadou, Sophia; Tsujii, Jun'ichi

    2010-01-01

    Motivation: Metabolic and signaling pathways are an increasingly important part of organizing knowledge in systems biology. They serve to integrate collective interpretations of facts scattered throughout literature. Biologists construct a pathway by reading a large number of articles and interpreting them as a consistent network, but most of the models constructed currently lack direct links to those articles. Biologists who want to check the original articles have to spend substantial amounts of time to collect relevant articles and identify the sections relevant to the pathway. Furthermore, with the scientific literature expanding by several thousand papers per week, keeping a model relevant requires a continuous curation effort. In this article, we present a system designed to integrate a pathway visualizer, text mining systems and annotation tools into a seamless environment. This will enable biologists to freely move between parts of a pathway and relevant sections of articles, as well as identify relevant papers from large text bases. The system, PathText, is developed by Systems Biology Institute, Okinawa Institute of Science and Technology, National Centre for Text Mining (University of Manchester) and the University of Tokyo, and is being used by groups of biologists from these locations. Contact: brian@monrovian.com. PMID:20529930

  4. ICT Integration in Mathematics Initial Teacher Training and Its Impact on Visualization: The Case of GeoGebra

    Science.gov (United States)

    Dockendorff, Monika; Solar, Horacio

    2018-01-01

    This case study investigates the impact of the integration of information and communications technology (ICT) in mathematics visualization skills and initial teacher education programmes. It reports on the influence GeoGebra dynamic software use has on promoting mathematical learning at secondary school and on its impact on teachers' conceptions…

  5. Comparison of animated jet stream visualizations

    Science.gov (United States)

    Nocke, Thomas; Hoffmann, Peter

    2016-04-01

    The visualization of 3D atmospheric phenomena in space and time is still a challenging problem. In particular, multiple solutions of animated jet stream visualizations have been produced in recent years, which were designed to visually analyze and communicate the jet and related impacts on weather circulation patterns and extreme weather events. This PICO integrates popular and new jet animation solutions and inter-compares them. The applied techniques (e.g. stream lines or line integral convolution) and parametrizations (color mapping, line lengths) are discussed with respect to visualization quality criteria and their suitability for certain visualization tasks (e.g. jet patterns and jet anomaly analysis, communicating its relevance for climate change).

  6. Can responses to basic non-numerical visual features explain neural numerosity responses?

    NARCIS (Netherlands)

    Harvey, Ben M; Dumoulin, Serge O

    2017-01-01

    Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and

  7. Object Representations in Human Visual Cortex Formed Through Temporal Integration of Dynamic Partial Shape Views.

    Science.gov (United States)

    Orlov, Tanya; Zohary, Ehud

    2018-01-17

    We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. These findings provide strong evidence for a global encoding of shape in the LOC regardless of integration processes required to generate the shape percept. 
SIGNIFICANCE STATEMENT Visual objects are recognized through spatial integration of features available simultaneously on

  8. The Tölz Temporal Topography Study: mapping the visual field across the life span. Part II: cognitive factors shaping visual field maps.

    Science.gov (United States)

    Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans

    2012-08-01

    Part I described the topography of visual performance over the life span. Performance decline was explained only partly by deterioration of the optical apparatus. Part II therefore examines the influence of higher visual and cognitive functions. Visual field maps for 95 healthy observers of static perimetry, double-pulse resolution (DPR), reaction times, and contrast thresholds, were correlated with measures of visual attention (alertness, divided attention, spatial cueing), visual search, and the size of the attention focus. Correlations with the attentional variables were substantial, particularly for variables of temporal processing. DPR thresholds depended on the size of the attention focus. The extraction of cognitive variables from the correlations between topographical variables and participant age substantially reduced those correlations. There is a systematic top-down influence on the aging of visual functions, particularly of temporal variables, that largely explains performance decline and the change of the topography over the life span.

  9. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase one, volume 1 : summary report.

    Science.gov (United States)

    2009-12-01

The Integrated Remote Sensing and Visualization System (IRSV) is being designed to accommodate the needs of today's Bridge Engineers at the state and local level from the following aspects: Better understanding and enforcement of a complex ...

  10. Rod phototransduction determines the trade-off of temporal integration and speed of vision in dark-adapted toads.

    Science.gov (United States)

    Haldin, Charlotte; Nymark, Soile; Aho, Ann-Christine; Koskelainen, Ari; Donner, Kristian

    2009-05-06

    Human vision is approximately 10 times less sensitive than toad vision on a cool night. Here, we investigate (1) how far differences in the capacity for temporal integration underlie such differences in sensitivity and (2) whether the response kinetics of the rod photoreceptors can explain temporal integration at the behavioral level. The toad was studied as a model that allows experimentation at different body temperatures. Sensitivity, integration time, and temporal accuracy of vision were measured psychophysically by recording snapping at worm dummies moving at different velocities. Rod photoresponses were studied by ERG recording across the isolated retina. In both types of experiments, the general timescale of vision was varied by using two temperatures, 15 and 25 degrees C. Behavioral integration times were 4.3 s at 15 degrees C and 0.9 s at 25 degrees C, and rod integration times were 4.2-4.3 s at 15 degrees C and 1.0-1.3 s at 25 degrees C. Maximal behavioral sensitivity was fivefold lower at 25 degrees C than at 15 degrees C, which can be accounted for by inability of the "warm" toads to integrate light over longer times than the rods. However, the long integration time at 15 degrees C, allowing high sensitivity, degraded the accuracy of snapping toward quickly moving worms. We conclude that temporal integration explains a considerable part of all variation in absolute visual sensitivity. The strong correlation between rods and behavior suggests that the integration time of dark-adapted vision is set by rod phototransduction at the input to the visual system. This implies that there is an inexorable trade-off between temporal integration and resolution.
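The trade-off the authors describe can be captured by a toy complete-summation model, in which detection requires a fixed amount of integrated light, so the threshold intensity falls in proportion to the integration time while temporal resolution coarsens with it (a sketch with illustrative numbers, not the paper's data):

```python
def detection_threshold(t_int, required_quanta=100.0):
    """Minimum detectable stimulus intensity (quanta/s) under complete
    temporal summation: intensity * t_int = const (Bloch's-law-like).
    Longer integration windows allow dimmer stimuli to be detected."""
    return required_quanta / t_int

def resolvable_interval(t_int):
    """Events closer in time than the integration window are fused,
    so temporal resolution degrades as t_int grows."""
    return t_int

# Warming from 15 C (t_int ~ 4.3 s) to 25 C (t_int ~ 0.9 s) raises the
# threshold roughly fivefold (lower sensitivity) while sharpening
# temporal resolution by the same factor.
```

In this toy model the fivefold sensitivity difference between the two temperatures falls directly out of the ratio of integration times, mirroring the behavioral result.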

  11. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect, in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive … from each of the faces and from the voice on the auditory speech percept. We found that directing visual spatial attention towards a face increased the influence of that face on auditory perception. However, the influence of the voice on auditory perception did not change, suggesting that audiovisual integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration…

  12. Software attribute visualization for high integrity software

    Energy Technology Data Exchange (ETDEWEB)

    Pollock, G.M.

    1998-03-01

    This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.

  13. Integrated pathway-based transcription regulation network mining and visualization based on gene expression profiles.

    Science.gov (United States)

    Kibinge, Nelson; Ono, Naoaki; Horie, Masafumi; Sato, Tetsuo; Sugiura, Tadao; Altaf-Ul-Amin, Md; Saito, Akira; Kanaya, Shigehiko

    2016-06-01

Conventionally, workflows examining transcription regulation networks from gene expression data involve distinct analytical steps. There is a need for pipelines that unify data mining and inference deduction into a singular framework to enhance interpretation and hypothesis generation. We propose a workflow that merges network construction with gene expression data mining, focusing on regulation processes in the context of transcription-factor-driven gene regulation. The pipeline implements pathway-based modularization of expression profiles into functional units to improve biological interpretation. The integrated workflow was implemented as a web application (TransReguloNet) with functions that enable pathway visualization and comparison of transcription factor activity between sample conditions defined in the experimental design. The pipeline merges differential expression, network construction, pathway-based abstraction, clustering and visualization. The framework was applied in analysis of actual expression datasets related to lung, breast and prostate cancer. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Visual updating across saccades by working memory integration

    NARCIS (Netherlands)

    Oostwoud Wijdenes, L.; Marshall, L.; Bays, P.M.

    2015-01-01

    We explore the visual world through saccadic eye movements, but saccades also present a challenge to visual processing, by shifting externally-stable objects from one retinal location to another. The brain could solve this problem in two ways: by overwriting preceding input and starting afresh with

  15. Integration of Audio Visual Multimedia for Special Education Pre-Service Teachers' Self Reflections in Developing Teaching Competencies

    Science.gov (United States)

    Sediyani, Tri; Yufiarti; Hadi, Eko

    2017-01-01

This study aims to develop a learning model that integrates multimedia and audio-visual materials for learners' self-reflection. The multimedia was developed as a tool for prospective teachers, as learners in the education of children with special needs, to reflect on their teaching competencies before entering the world of education. Research methods to…

  16. Explaining Physical Activity in Children with Visual Impairments: A Family Systems Approach

    Science.gov (United States)

    Ayvazoglu, Nalan R.; Oh, Hyun-Kyoung; Kozub, Francis M.

    2006-01-01

    Using a mixed design this study explored physical activity in children with visual impairments from a family perspective. Quantitative findings revealed varied amounts of physical activity; younger children were more active than older participants. Further, parents were involved in moderate to vigorous physical activity 0% to 21% of the time when…

  17. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Interactions of visual attention and quality perception

    NARCIS (Netherlands)

    Redi, J.A.; Liu, H.; Zunino, R.; Heynderickx, I.E.J.R.

    2011-01-01

    Several attempts to integrate visual saliency information in quality metrics are described in literature, albeit with contradictory results. The way saliency is integrated in quality metrics should reflect the mechanisms underlying the interaction between image quality assessment and visual

  19. Using Culture to Explain Behavior: An Integrative Cultural Approach

    Science.gov (United States)

    Shepherd, Hana R.; Stephens, Nicole M.

    2010-01-01

    While savings rates among low-income families vary greatly, a 2008 National Poverty Center report finds that over 40 percent of low-income families fail to save any money. For decades policy makers and social scientists have sought to explain this phenomenon. Even after accounting for the fact that low-income families have less money to save, why…

  20. Visual Electricity Demonstrator

    Science.gov (United States)

    Lincoln, James

    2017-09-01

    The Visual Electricity Demonstrator (VED) is a linear diode array that serves as a dynamic alternative to an ammeter. A string of 48 red light-emitting diodes (LEDs) blink one after another to create the illusion of a moving current. Having the current represented visually builds an intuitive and qualitative understanding about what is happening in a circuit. In this article, I describe several activities for this device and explain how using this technology in the classroom can enhance the understanding and appreciation of physics.

  1. Improving Multisensor Positioning of Land Vehicles with Integrated Visual Odometry for Next-Generation Self-Driving Cars

    Directory of Open Access Journals (Sweden)

    Muhammed Tahsin Rahman

    2018-01-01

For their complete realization, autonomous vehicles (AVs) fundamentally rely on the Global Navigation Satellite System (GNSS) to provide positioning and navigation information. However, in areas such as urban cores, parking lots, and under dense foliage, which are all commonly frequented by AVs, GNSS signals suffer from blockage, interference, and multipath. These effects cause high levels of errors and long durations of service discontinuity that mar the performance of current systems. The prevalence of vision and low-cost inertial sensors provides an attractive opportunity to further increase the positioning and navigation accuracy in such GNSS-challenged environments. This paper presents enhancements to existing multisensor integration systems utilizing the inertial navigation system (INS) to aid in Visual Odometry (VO) outlier feature rejection. A scheme called Aided Visual Odometry (AVO) is developed and integrated with a high performance mechanization architecture utilizing vehicle motion and orientation sensors. The resulting solution exhibits improved state covariance convergence and navigation accuracy, while reducing computational complexity. Experimental verification of the proposed solution is illustrated through three real road trajectories, over two different land vehicles, and using two low-cost inertial measurement units (IMUs).
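The INS-aided outlier rejection idea can be sketched as a simple gating test: a feature match is kept only when its observed image displacement agrees with the displacement predicted from the inertial solution (the function name, threshold, and coordinates below are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def reject_outliers(matches_prev, matches_curr, predicted_flow, gate_px=3.0):
    """INS-aided outlier rejection for visual odometry (illustrative).
    matches_prev/matches_curr: (N, 2) pixel coordinates of matched
    features in consecutive frames; predicted_flow: (2,) displacement
    predicted from the inertial navigation solution. Returns a boolean
    inlier mask: True where the observed displacement lies within the
    gating distance of the prediction."""
    disp = matches_curr - matches_prev            # observed flow per feature
    err = np.linalg.norm(disp - predicted_flow, axis=1)
    return err < gate_px

prev = np.array([[100.0, 50.0], [200.0, 80.0]])
curr = np.array([[102.0, 50.0], [230.0, 90.0]])   # second match is spurious
mask = reject_outliers(prev, curr, predicted_flow=np.array([2.0, 0.0]))
# mask -> [True, False]
```

A full system would gate in a statistically normalized (e.g. covariance-weighted) error space rather than with a fixed pixel threshold, but the structure is the same.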

2. The Tölz Temporal Topography Study: Mapping the visual field across the life span. Part II: Cognitive factors shaping visual field maps

    OpenAIRE

    Poggel, Dorothe A.; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans

    2012-01-01

    Part I described the topography of visual performance over the life span. Performance decline was explained only partly by deterioration of the optical apparatus. Part II therefore examines the influence of higher visual and cognitive functions. Visual field maps for 95 healthy observers of static perimetry, double-pulse resolution (DPR), reaction times, and contrast thresholds, were correlated with measures of visual attention (alertness, divided attention, spatial cueing), visual search, an...

  3. The Creative Dimension of Visuality

    DEFF Research Database (Denmark)

    Michelsen, Anders Ib

    2013-01-01

This essay reflects critically on the notion of visuality, a centrepiece of current theory on visual culture and its underlying idea of a structural 'discursive determination' of visual phenomena. Is the visual really to be addressed through the post-war heritage of discourse and representation analysis relying on language/linguistics as a model for explaining culture? More specifically, how can the creative novelty of visual culture be addressed by a notion of discourse? This essay will argue that the debate on visual culture is lacking with regard to discerning the creative dimension of its … and 'the invisible' to the notion of collective creativity and 'the imaginary institution of society' of Cornelius Castoriadis. In the theoretical relationship between Merleau-Ponty and Castoriadis it is possible to indicate a notion of visuality as a creative dimension…

  4. A hierarchy of timescales explains distinct effects of local inhibition of primary visual cortex and frontal eye fields.

    Science.gov (United States)

    Cocchi, Luca; Sale, Martin V; L Gollo, Leonardo; Bell, Peter T; Nguyen, Vinh T; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B

    2016-09-06

    Within the primate visual system, areas at lower levels of the cortical hierarchy process basic visual features, whereas those at higher levels, such as the frontal eye fields (FEF), are thought to modulate sensory processes via feedback connections. Despite these functional exchanges during perception, there is little shared activity between early and late visual regions at rest. How interactions emerge between regions encompassing distinct levels of the visual hierarchy remains unknown. Here we combined neuroimaging, non-invasive cortical stimulation and computational modelling to characterize changes in functional interactions across widespread neural networks before and after local inhibition of primary visual cortex or FEF. We found that stimulation of early visual cortex selectively increased feedforward interactions with FEF and extrastriate visual areas, whereas identical stimulation of the FEF decreased feedback interactions with early visual areas. Computational modelling suggests that these opposing effects reflect a fast-slow timescale hierarchy from sensory to association areas.

  5. Visual attention in posterior stroke

    DEFF Research Database (Denmark)

    Fabricius, Charlotte; Petersen, Anders; Iversen, Helle K

Objective: Impaired visual attention is common following strokes in the territory of the middle cerebral artery, particularly in the right hemisphere. However, attentional effects of more posterior lesions are less clear. The aim of this study was to characterize visual processing speed and apprehension span following posterior cerebral artery (PCA) stroke. We also relate these attentional parameters to visual word recognition, as previous studies have suggested that reduced visual speed and span may explain pure alexia. Methods: Nine patients with MR-verified focal lesions in the PCA-territory (four left PCA; four right PCA; one bilateral, all >1 year post stroke) were compared to 25 controls using single case statistics. Visual attention was characterized by a whole report paradigm allowing for hemifield-specific speed and span measurements. We also characterized visual field defects…

  6. Visual-Haptic Integration: Cue Weights are Varied Appropriately, to Account for Changes in Haptic Reliability Introduced by Using a Tool

    OpenAIRE

    Chie Takahashi; Simon J Watt

    2011-01-01

    Tools such as pliers systematically change the relationship between an object's size and the hand opening required to grasp it. Previous work suggests the brain takes this into account, integrating visual and haptic size information that refers to the same object, independent of the similarity of the ‘raw’ visual and haptic signals (Takahashi et al., VSS 2009). Variations in tool geometry also affect the reliability (precision) of haptic size estimates, however, because they alter the change ...
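The reliability-weighted combination this record builds on is the standard maximum-likelihood cue-integration rule, in which each cue is weighted by its inverse variance; a minimal sketch (function name and numbers are illustrative, not from the study):

```python
def integrate_cues(mu_v, var_v, mu_h, var_h):
    """Maximum-likelihood combination of a visual and a haptic size
    estimate (mean, variance pairs). Each cue's weight is proportional
    to its reliability (inverse variance); the fused estimate has lower
    variance than either cue alone."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_h)
    w_h = 1 - w_v
    mu = w_v * mu_v + w_h * mu_h
    var = 1 / (1 / var_v + 1 / var_h)
    return mu, var

# Equally reliable cues -> equal weights and halved variance:
mu, var = integrate_cues(mu_v=5.0, var_v=1.0, mu_h=6.0, var_h=1.0)
# mu == 5.5, var == 0.5
```

Under this rule, a tool that degrades haptic precision (raises var_h) should shift weight toward vision, which is the adjustment the abstract describes.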

  7. Direct Visual Editing of Node Attributes in Graphs

    Directory of Open Access Journals (Sweden)

    Christian Eichner

    2016-10-01

There are many expressive visualization techniques for analyzing graphs. Yet, there is only little research on how existing visual representations can be employed to support data editing. An increasingly relevant task when working with graphs is the editing of node attributes. We propose an integrated visualize-and-edit approach to editing attribute values via direct interaction with the visual representation. The visualize part is based on node-link diagrams paired with attribute-dependent layouts. The edit part is as easy as moving nodes via drag-and-drop gestures. We present dedicated interaction techniques for editing quantitative as well as qualitative attribute data values. The benefit of our novel integrated approach is that one can directly edit the data while the visualization constantly provides feedback on the implications of the data modifications. Preliminary user feedback indicates that our integrated approach can be a useful complement to standard non-visual editing via external tools.
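In an attribute-dependent layout, dragging a node can be translated back into an attribute edit by inverting the layout mapping; a minimal sketch for a linear quantitative axis (function name and ranges are illustrative assumptions):

```python
def y_to_value(y, y_min, y_max, v_min, v_max):
    """Map a node's y-position in an attribute-dependent layout back to
    an attribute value by inverting the linear layout mapping, so that
    dropping a dragged node edits the underlying quantitative attribute."""
    t = (y - y_min) / (y_max - y_min)   # normalized position along the axis
    return v_min + t * (v_max - v_min)

# Dropping a node at the middle of a 0..400 px axis spanning values 0..100:
# y_to_value(200, 0, 400, 0, 100) -> 50.0
```

Qualitative attributes would instead snap the drop position to the nearest category band, but the inverse-mapping structure is the same.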

  8. Visual communication on social media Case: Suomen Partiolaiset

    OpenAIRE

    Tuominen, Enni

    2015-01-01

The purpose of this study was to investigate what kind of visual messages the Central Association of Scouts and Guides in Finland use in their social media, how the messages are perceived and how they could be optimized. The theoretical part explains the key concepts of social media and how it is used among Finnish youth. The chosen social media platforms, Instagram and Twitter, are also looked into, followed by chapters explaining the science of studying social media monitoring, visual m...

  9. GenomeCAT: a versatile tool for the analysis and integrative visualization of DNA copy number variants.

    Science.gov (United States)

    Tebel, Katrin; Boldt, Vivien; Steininger, Anne; Port, Matthias; Ebert, Grit; Ullmann, Reinhard

    2017-01-06

    The analysis of DNA copy number variants (CNV) has increasing impact in the field of genetic diagnostics and research. However, the interpretation of CNV data derived from high resolution array CGH or NGS platforms is complicated by the considerable variability of the human genome. Therefore, tools for multidimensional data analysis and comparison of patient cohorts are needed to assist in the discrimination of clinically relevant CNVs from others. We developed GenomeCAT, a standalone Java application for the analysis and integrative visualization of CNVs. GenomeCAT is composed of three modules dedicated to the inspection of single cases, comparative analysis of multidimensional data and group comparisons aiming at the identification of recurrent aberrations in patients sharing the same phenotype, respectively. Its flexible import options ease the comparative analysis of one's own results derived from microarray or NGS platforms with data from literature or public depositories. Multidimensional data obtained from different experiment types can be merged into a common data matrix to enable common visualization and analysis. All results are stored in the integrated MySQL database, but can also be exported as tab delimited files for further statistical calculations in external programs. GenomeCAT offers a broad spectrum of visualization and analysis tools that assist in the evaluation of CNVs in the context of other experiment data and annotations. The use of GenomeCAT does not require any specialized computer skills. The various R packages implemented for data analysis are fully integrated into GenomeCAT's graphical user interface and the installation process is supported by a wizard. The flexibility in terms of data import and export in combination with the ability to create a common data matrix makes the program also well suited as an interface between genomic data from heterogeneous sources and external software tools.
Due to the modular architecture the functionality of

  10. Visual-Haptic Integration: Cue Weights are Varied Appropriately, to Account for Changes in Haptic Reliability Introduced by Using a Tool

    Directory of Open Access Journals (Sweden)

    Chie Takahashi

    2011-10-01

    Full Text Available Tools such as pliers systematically change the relationship between an object's size and the hand opening required to grasp it. Previous work suggests the brain takes this into account, integrating visual and haptic size information that refers to the same object, independent of the similarity of the ‘raw’ visual and haptic signals (Takahashi et al., VSS 2009). Variations in tool geometry also affect the reliability (precision) of haptic size estimates, however, because they alter the change in hand opening caused by a given change in object size. Here, we examine whether the brain appropriately adjusts the weights given to visual and haptic size signals when tool geometry changes. We first estimated each cue's reliability by measuring size-discrimination thresholds in vision-alone and haptics-alone conditions. We varied haptic reliability using tools with different object-size:hand-opening ratios (1:1, 0.7:1, and 1.4:1). We then measured the weights given to vision and haptics with each tool, using a cue-conflict paradigm. The weight given to haptics varied with tool type in a manner that was well predicted by the single-cue reliabilities (MLE model; Ernst and Banks, 2002). This suggests that the process of visual-haptic integration appropriately accounts for variations in haptic reliability introduced by different tool geometries.
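    The MLE model cited above (Ernst and Banks, 2002) predicts that each cue's weight is proportional to its reliability (the inverse variance of the single-cue estimate), and that the integrated estimate is more precise than either cue alone. A minimal sketch, with hypothetical threshold values rather than data from the study:

```python
def mle_weights(sigma_vision: float, sigma_haptic: float) -> tuple[float, float]:
    """Reliability-proportional cue weights (reliability = 1 / sigma^2)."""
    r_v, r_h = 1.0 / sigma_vision**2, 1.0 / sigma_haptic**2
    w_v = r_v / (r_v + r_h)
    return w_v, 1.0 - w_v

def combined_sigma(sigma_vision: float, sigma_haptic: float) -> float:
    """Predicted sd of the integrated estimate; never worse than either cue alone."""
    return (sigma_vision**-2 + sigma_haptic**-2) ** -0.5

# Hypothetical single-cue discrimination thresholds (arbitrary units):
# a higher tool gain gives a larger change in hand opening per unit object
# size, hence more reliable haptics and more weight on the haptic cue.
sigma_v = 0.5
for gain, sigma_h in [(0.7, 1.0), (1.0, 0.7), (1.4, 0.5)]:
    w_v, w_h = mle_weights(sigma_v, sigma_h)
    print(f"gain {gain}: w_vision={w_v:.2f}, w_haptic={w_h:.2f}, "
          f"combined sigma={combined_sigma(sigma_v, sigma_h):.2f}")
```

    The qualitative pattern matches the abstract: as the haptic threshold shrinks relative to the visual one, the haptic weight grows, and the combined sigma is always below the smaller single-cue sigma.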

  11. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    Science.gov (United States)

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
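    The statistical-learning approach described here can be illustrated with a minimal expectation-maximization fit of a two-component, 1-D Gaussian mixture to synthetic cue values. This is a generic GMM sketch under invented parameters, not the authors' simulation code:

```python
import math

def em_gmm(data, k=2, iters=50):
    """Minimal EM for a 1-D Gaussian mixture; returns (weights, means, stds)."""
    srt = sorted(data)
    mu = [srt[int(len(data) * (j + 0.5) / k)] for j in range(k)]  # quantile init
    sigma = [1.0] * k
    pi = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each data point
        resp = []
        for x in data:
            p = [pi[j] / (sigma[j] * math.sqrt(2 * math.pi))
                 * math.exp(-(x - mu[j]) ** 2 / (2 * sigma[j] ** 2))
                 for j in range(k)]
            s = sum(p) or 1e-300
            resp.append([pj / s for pj in p])
        # M-step: re-estimate mixture weights, means, and stds
        for j in range(k):
            nj = sum(r[j] for r in resp)
            pi[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            sigma[j] = max(math.sqrt(
                sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, data)) / nj), 1e-3)
    return pi, mu, sigma

# Synthetic "cue" samples from two well-separated phonological categories.
import random
random.seed(1)
data = [random.gauss(-2.0, 0.5) for _ in range(200)] + \
       [random.gauss(2.0, 0.5) for _ in range(200)]
weights, means, stds = em_gmm(data)
print(sorted(round(m, 1) for m in means))  # category means recovered near -2 and 2
```

    Extending the same machinery to joint auditory-visual cue vectors (multivariate mixtures) is what lets the model learn audiovisual correspondences from distributional statistics alone.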

  12. Knowledge and Perceptions of Visual Communications Curriculum in Arkansas Secondary Agricultural Classrooms: A Closer Look at Experiential Learning Integrations

    Science.gov (United States)

    Pennington, Kristin; Calico, Carley; Edgar, Leslie D.; Edgar, Don W.; Johnson, Donald M.

    2015-01-01

    The University of Arkansas developed and integrated visual communications curriculum related to agricultural communications into secondary agricultural programs throughout the state. The curriculum was developed, pilot tested, revised, and implemented by selected secondary agriculture teachers. The primary purpose of this study was to evaluate…

  13. Impaired integration of object knowledge and visual input in a case of ventral simultanagnosia with bilateral damage to area V4.

    Science.gov (United States)

    Leek, E Charles; d'Avossa, Giovanni; Tainturier, Marie-Josèphe; Roberts, Daniel J; Yuen, Sung Lai; Hu, Mo; Rafal, Robert

    2012-01-01

    This study examines how brain damage can affect the cognitive processes that support the integration of sensory input and prior knowledge during shape perception. It is based on the first detailed study of acquired ventral simultanagnosia, which was found in a patient (M.T.) with posterior occipitotemporal lesions encompassing V4 bilaterally. Despite showing normal object recognition for single items in both accuracy and response times (RTs), and intact low-level vision assessed across an extensive battery of tests, M.T. was impaired in object identification with overlapping figures displays. Task performance was modulated by familiarity: Unlike controls, M.T. was faster with overlapping displays of abstract shapes than with overlapping displays of common objects. His performance with overlapping common object displays was also influenced by both the semantic relatedness and visual similarity of the display items. These findings challenge claims that visual perception is driven solely by feedforward mechanisms and show how brain damage can selectively impair high-level perceptual processes supporting the integration of stored knowledge and visual sensory input.

  14. LocusTrack: Integrated visualization of GWAS results and genomic annotation.

    Science.gov (United States)

    Cuellar-Partida, Gabriel; Renteria, Miguel E; MacGregor, Stuart

    2015-01-01

    Genome-wide association studies (GWAS) are an important tool for the mapping of complex traits and diseases. Visual inspection of genomic annotations may be used to generate insights into the biological mechanisms underlying GWAS-identified loci. We developed LocusTrack, a web-based application that annotates and creates plots of regional GWAS results and incorporates user-specified tracks that display annotations such as linkage disequilibrium (LD), phylogenetic conservation, chromatin state, and other genomic and regulatory elements. Currently, LocusTrack can integrate annotation tracks from the UCSC genome-browser as well as from any tracks provided by the user. LocusTrack is an easy-to-use application and can be accessed at the following URL: http://gump.qimr.edu.au/general/gabrieC/LocusTrack/. Users can upload and manage GWAS results and select from and/or provide annotation tracks using simple and intuitive menus. LocusTrack scripts and associated data can be downloaded from the website and run locally.

  15. The consummatory origins of visually guided reaching in human infants: a dynamic integration of whole-body and upper-limb movements.

    Science.gov (United States)

    Foroud, Afra; Whishaw, Ian Q

    2012-06-01

    Reaching-to-eat (skilled reaching) is a natural behaviour that involves reaching for, grasping and withdrawing a target to be placed into the mouth for eating. It is an action performed daily by adults and is among the first complex behaviours to develop in infants. During development, visually guided reaching becomes increasingly refined to the point that grasping of small objects with precision grips of the digits occurs at about one year of age. Integration of the hand, upper limbs, and whole body is required for successful reaching, but the ontogeny of this integration has not been described. The present longitudinal study used Laban Movement Analysis, a behavioural descriptive method, to investigate the developmental progression of the use and integration of axial, proximal, and distal movements performed during visually guided reaching. Four infants (from 7 to 40 weeks of age) were presented with graspable objects (toys or food items). The first prereaching stage was associated with activation of mouth, limb, and hand movements to a visually presented target. Next, reaching attempts consisted of first, the advancement of the head with an opening mouth and then with the head, trunk and opening mouth. Eventually, the axial movements gave way to the refined action of one upper limb supported by axial adjustments. These findings are discussed in relation to the biological objective of reaching, the evolutionary origins of reaching, and the decomposition of reaching after neurological injury. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Can theories of visual representation help to explain asymmetries in amygdala function?

    OpenAIRE

    McMenamin, Brenton W.; Marsolek, Chad J.

    2013-01-01

    Emotional processing differs between the left and right hemispheres of the brain, and functional differences have been reported more specifically between the left amygdala and right amygdala, subcortical structures heavily implicated in emotional processing. However, the empirical pattern of amygdalar asymmetries is inconsistent with extant theories of emotional asymmetries. Here we review this discrepancy, and we hypothesize that hemispheric differences in visual object processing help to ex...

  17. Associative visual agnosia: a case study.

    Science.gov (United States)

    Charnallet, A; Carbonnel, S; David, D; Moreaud, O

    2008-01-01

    We report a case of massive associative visual agnosia. In the light of current theories of identification and semantic knowledge organization, a deficit involving both levels of structural description system and visual semantics must be assumed to explain the case. We suggest, in line with a previous case study, an alternative account in the framework of (non abstractive) episodic models of memory.

  18. Individual variation in the propensity for prospective thought is associated with functional integration between visual and retrosplenial cortex.

    Science.gov (United States)

    Villena-Gonzalez, Mario; Wang, Hao-Ting; Sormaz, Mladen; Mollo, Giovanna; Margulies, Daniel S; Jefferies, Elizabeth A; Smallwood, Jonathan

    2018-02-01

    It is well recognized that the default mode network (DMN) is involved in states of imagination, although the cognitive processes that this association reflects are not well understood. The DMN includes many regions that function as cortical "hubs", including the posterior cingulate/retrosplenial cortex, anterior temporal lobe and the hippocampus. This suggests that the role of the DMN in cognition may reflect a process of cortical integration. In the current study we tested whether functional connectivity from uni-modal regions of cortex into the DMN is linked to features of imaginative thought. We found that strong intrinsic communication between visual and retrosplenial cortex was correlated with the degree of social thoughts about the future. Using an independent dataset, we show that the same region of retrosplenial cortex is functionally coupled to regions of primary visual cortex as well as core regions that make up the DMN. Finally, we compared the functional connectivity of the retrosplenial cortex, with a region of medial prefrontal cortex implicated in the integration of information from regions of the temporal lobe associated with future thought in a prior study. This analysis shows that the retrosplenial cortex is preferentially coupled to medial occipital, temporal lobe regions and the angular gyrus, areas linked to episodic memory, scene construction and navigation. In contrast, the medial prefrontal cortex shows preferential connectivity with motor cortex and lateral temporal and prefrontal regions implicated in language, motor processes and working memory. Together these findings suggest that integrating neural information from visual cortex into retrosplenial cortex may be important for imagining the future and may do so by creating a mental scene in which prospective simulations play out. We speculate that the role of the DMN in imagination may emerge from its capacity to bind together distributed representations from across the cortex in a

  19. Visual functions and disability in diabetic retinopathy patients

    Directory of Open Access Journals (Sweden)

    Gauri Shankar Shrestha

    2014-01-01

    Conclusion: Impairment of near visual acuity, contrast sensitivity, and peripheral visual field correlated significantly with different types of visual disability. Hence, these clinical tests should be an integral part of the visual assessment of diabetic eyes.

  20. Ultrascale Visualization of Climate Data

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Doutriaux, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Sean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Ross G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pugmire, Dave [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Steed, Chad A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Childs, Hank [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Krishnan, Harinarayan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Silva, Claudio T. [New York University, New York, NY (United States). Center for Urban Sciences; Santos, Emanuele [Universidade Federal do Ceara, Ceara (Brazil); Koop, David [New York University, New York, NY (United States); Ellqvist, Tommy [New York University, New York, NY (United States); Poco, Jorge [Polytechnic Institute of New York University, New York, NY (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States); Chaudhary, Aashish [Kitware Inc., Clifton Park, NY (United States); Bauer, Andy [Kitware Inc., Clifton Park, NY (United States); Pletzer, Alexander [Tech-X Corporation, Boulder, CO (United States); Kindig, Dave [Tech-X Corporation, Boulder, CO (United States); Potter, Gerald [National Aeronautics and Space Administration (NASA), Washington, DC (United States); Maxwell, Thomas P. 
[National Aeronautics and Space Administration (NASA), Washington, DC (United States)

    2013-09-01

    To support interactive visualization and analysis of complex, large-scale climate data sets, UV-CDAT integrates a powerful set of scientific computing libraries and applications to foster more efficient knowledge discovery. Connected through a provenance framework, the UV-CDAT components can be loosely coupled for fast integration or tightly coupled for greater functionality and communication with other components. This framework addresses many challenges in the interactive visual analysis of distributed large-scale data for the climate community.

  1. How important is lateral masking in visual search?

    NARCIS (Netherlands)

    Wertheim, AH; Hooge, ITC; Krikke, K; Johnson, A

    Five experiments are presented, providing empirical support for the hypothesis that the sensory phenomenon of lateral masking may explain many well-known visual search phenomena that are commonly assumed to be governed by cognitive attentional mechanisms. Experiment I showed that when the same visual

  2. Visual-haptic integration with pliers and tongs: signal ‘weights’ take account of changes in haptic sensitivity caused by different tools

    Directory of Open Access Journals (Sweden)

    Chie eTakahashi

    2014-02-01

    Full Text Available When we hold an object while looking at it, estimates from visual and haptic cues to size are combined in a statistically optimal fashion, whereby the ‘weight’ given to each signal reflects their relative reliabilities. This allows object properties to be estimated more precisely than would otherwise be possible. Tools such as pliers and tongs systematically perturb the mapping between object size and the hand opening. This could complicate visual-haptic integration because it may alter the reliability of the haptic signal, thereby disrupting the determination of appropriate signal weights. To investigate this we first measured the reliability of haptic size estimates made with virtual pliers-like tools (created using a stereoscopic display and force-feedback robots) with different ‘gains’ between hand opening and object size. Haptic reliability in tool use was straightforwardly determined by a combination of sensitivity to changes in hand opening and the effects of tool geometry. The precise pattern of sensitivity to hand opening, which violated Weber’s law, meant that haptic reliability changed with tool gain. We then examined whether the visuo-motor system accounts for these reliability changes. We measured the weight given to visual and haptic stimuli when both were available, again with different tool gains, by measuring the perceived size of stimuli in which visual and haptic sizes were varied independently. The weight given to each sensory cue changed with tool gain in a manner that closely resembled the predictions of optimal sensory integration. The results are consistent with the idea that different tool geometries are modelled by the brain, allowing it to calculate not only the distal properties of objects felt with tools, but also the certainty with which those properties are known. These findings highlight the flexibility of human sensory integration and tool-use, and potentially provide an approach for optimising the

  3. Sensory processing patterns predict the integration of information held in visual working memory.

    Science.gov (United States)

    Lowe, Matthew X; Stevenson, Ryan A; Wilson, Kristin E; Ouslis, Natasha E; Barense, Morgan D; Cant, Jonathan S; Ferber, Susanne

    2016-02-01

    Given the limited resources of visual working memory, multiple items may be remembered as an averaged group or ensemble. As a result, local information may be ill-defined, but these ensemble representations provide accurate diagnostics of the natural world by combining gist information with item-level information held in visual working memory. Some neurodevelopmental disorders are characterized by sensory processing profiles that predispose individuals to avoid or seek out sensory stimulation, fundamentally altering their perceptual experience. Here, we report that such processing styles affect the computation of ensemble statistics in the general population. We identified stable adult sensory processing patterns to demonstrate that individuals with low sensory thresholds who show a greater proclivity to engage in active response strategies to prevent sensory overstimulation are less likely to integrate mean size information across a set of similar items and are therefore more likely to be biased away from the mean size representation of an ensemble display. We therefore propose the study of ensemble processing should extend beyond the statistics of the display, and should also consider the statistics of the observer. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Object-based attention benefits reveal selective abnormalities of visual integration in autism.

    Science.gov (United States)

    Falter, Christine M; Grant, Kate C Plaisted; Davis, Greg

    2010-06-01

    A pervasive integration deficit could provide a powerful and elegant account of cognitive processing in autism spectrum disorders (ASD). However, in the case of visual Gestalt grouping, typically assessed by tasks that require participants explicitly to introspect on their own grouping perception, clear evidence for such a deficit remains elusive. To resolve this issue, we adopt an index of Gestalt grouping from the object-based attention literature that does not require participants to assess their own grouping perception. Children with ASD and mental- and chronological-age matched typically developing children (TD) performed speeded orientation discriminations of two diagonal lines. The lines were superimposed on circles that were either grouped together or segmented on the basis of color, proximity or these two dimensions in competition. The magnitude of performance benefits evident for grouped circles, relative to ungrouped circles, provided an index of grouping under various conditions. Children with ASD showed comparable grouping by proximity to the TD group, but reduced grouping by similarity. ASD seems characterized by a selective bias away from grouping by similarity combined with typical levels of grouping by proximity, rather than by a pervasive integration deficit.

  5. Situational Awareness Applied to Geology Field Mapping using Integration of Semantic Data and Visualization Techniques

    Science.gov (United States)

    Houser, P. I. Q.

    2017-12-01

    21st century earth science is data-intensive, characterized by heterogeneous, sometimes voluminous collections representing phenomena at different scales collected for different purposes and managed in disparate ways. However, much of the earth's surface still requires boots-on-the-ground, in-person fieldwork in order to detect the subtle variations from which humans can infer complex structures and patterns. Nevertheless, field experiences can and should be enabled and enhanced by a variety of emerging technologies. The goal of the proposed research project is to pilot test emerging data integration, semantic and visualization technologies for evaluation of their potential usefulness in the field sciences, particularly in the context of field geology. The proposed project will investigate new techniques for data management and integration enabled by semantic web technologies, along with new techniques for augmented reality that can operate on such integrated data to enable in situ visualization in the field. The research objectives include: Develop new technical infrastructure that applies target technologies to field geology; Test, evaluate, and assess the technical infrastructure in a pilot field site; Evaluate the capabilities of the systems for supporting and augmenting field science; and Assess the generality of the system for implementation in new and different types of field sites. Our hypothesis is that these technologies will enable what we call "field science situational awareness" - a cognitive state formerly attained only through long experience in the field - that is highly desirable but difficult to achieve in time- and resource-limited settings. 
Expected outcomes include elucidation of how, and in what ways, these technologies are beneficial in the field; enumeration of the steps and requirements to implement these systems; and cost/benefit analyses that evaluate under what conditions the investments of time and resources are advisable to construct

  6. Multiscale sampling model for motion integration.

    Science.gov (United States)

    Sherbakov, Lena; Yazdanbakhsh, Arash

    2013-09-30

    Biologically plausible strategies for visual scene integration across spatial and temporal domains continue to be a challenging topic. The fundamental question we address is whether classical problems in motion integration, such as the aperture problem, can be solved in a model that samples the visual scene at multiple spatial and temporal scales in parallel. We hypothesize that fast interareal connections that allow feedback of information between cortical layers are the key processes that disambiguate motion direction. We developed a neural model showing how the aperture problem can be solved using different spatial sampling scales between LGN, V1 layer 4, V1 layer 6, and area MT. Our results suggest that multiscale sampling, rather than feedback explicitly, is the key process that gives rise to end-stopped cells in V1 and enables area MT to solve the aperture problem without the need for calculating intersecting constraints or crafting intricate patterns of spatiotemporal receptive fields. Furthermore, the model explains why end-stopped cells no longer emerge in the absence of V1 layer 6 activity (Bolz & Gilbert, 1986), why V1 layer 4 cells are significantly more end-stopped than V1 layer 6 cells (Pack, Livingstone, Duffy, & Born, 2003), and how it is possible to have a solution to the aperture problem in area MT with no solution in V1 in the presence of driving feedback. In summary, while much research in the field focuses on how a laminar architecture can give rise to complicated spatiotemporal receptive fields to solve problems in the motion domain, we show that one can reframe motion integration as an emergent property of multiscale sampling achieved concurrently within lamina and across multiple visual areas.

  7. Visual perceptual abilities of Chinese-speaking and English-speaking children.

    Science.gov (United States)

    Lai, Mun Yee; Leung, Frederick Koon Shing

    2012-04-01

    This paper reports an investigation of Chinese-speaking and English-speaking children's general visual perceptual abilities. The Developmental Test of Visual Perception was administered to 41 native Chinese-speaking children of mean age 5 yr. 4 mo. in Hong Kong and 35 English-speaking children of mean age 5 yr. 2 mo. in Melbourne. Of interest were the two interrelated components of visual perceptual abilities, namely, motor-reduced visual perceptual and visual-motor integration perceptual abilities, which require either verbal or motoric responses in completing visual tasks. Chinese-speaking children significantly outperformed the English-speaking children on general visual perceptual abilities. When comparing the results of each of the two different components, the Chinese-speaking students' performance on visual-motor integration was far better than that of their counterparts (ES = 2.70), while the two groups of students performed similarly on motor-reduced visual perceptual abilities. Cultural factors such as written language format may be contributing to the enhanced performance of Chinese-speaking children's visual-motor integration abilities, but there may be validity questions in the Chinese version.

  8. Integrating sentiment analysis and term associations with geo-temporal visualizations on customer feedback streams

    Science.gov (United States)

    Hao, Ming; Rohrdantz, Christian; Janetzko, Halldór; Keim, Daniel; Dayal, Umeshwar; Haug, Lars-Erik; Hsu, Mei-Chun

    2012-01-01

    Twitter currently receives over 190 million tweets (small text-based Web posts) and manufacturing companies receive over 10 thousand web product surveys a day, in which people share their thoughts regarding a wide range of products and their features. A large number of tweets and customer surveys include opinions about products and services. However, with Twitter being a relatively new phenomenon, these tweets are underutilized as a source for determining customer sentiments. To explore high-volume customer feedback streams, we integrate three time series-based visual analysis techniques: (1) feature-based sentiment analysis that extracts, measures, and maps customer feedback; (2) a novel idea of term associations that identify attributes, verbs, and adjectives frequently occurring together; and (3) new pixel cell-based sentiment calendars, geo-temporal map visualizations and self-organizing maps to identify co-occurring and influential opinions. We have combined these techniques into a well-fitted solution for an effective analysis of large customer feedback streams such as for movie reviews (e.g., Kung-Fu Panda) or web surveys (buyers).
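    A rough sketch of the "term associations" idea is counting term pairs that co-occur in the same feedback snippet. The mini-corpus below is invented; the actual system identifies attributes, verbs, and adjectives with more sophisticated extraction:

```python
from collections import Counter
from itertools import combinations

# Hypothetical feedback snippets (stand-ins for tweets / survey answers).
feedback = [
    "battery life short",
    "battery drains fast",
    "screen bright battery ok",
    "screen crisp colors bright",
]

# Count unordered term pairs that co-occur within the same snippet.
pair_counts = Counter()
for snippet in feedback:
    terms = sorted(set(snippet.split()))
    pair_counts.update(combinations(terms, 2))

# The most frequent pairs are the strongest term associations.
print(pair_counts.most_common(3))
```

    In the invented corpus, "bright" and "screen" co-occur twice and so surface as the top association; real pipelines would normalize the counts and filter by part of speech.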

  9. Graphics and visualization principles & algorithms

    CERN Document Server

    Theoharis, T; Platis, Nikolaos; Patrikalakis, Nicholas M

    2008-01-01

    Computer and engineering collections strong in applied graphics and analysis of visual data via computer will find Graphics & Visualization: Principles and Algorithms makes an excellent classroom text as well as supplemental reading. It integrates coverage of computer graphics and other visualization topics, from shadow generation and particle tracing to spatial subdivision and vector data visualization, and it provides a thorough review of literature from multiple experts, making for a comprehensive review essential to any advanced computer study.-California Bookw

  10. Visualization of vessel traffic

    NARCIS (Netherlands)

    Willems, C.M.E.

    2011-01-01

    Moving objects are captured in multivariate trajectories, often large data with multiple attributes. We focus on vessel traffic as a source of such data. Patterns appearing from visually analyzing attributes are used to explain why certain movements have occurred. In this research, we have developed

  11. Associative Visual Agnosia: A Case Study

    Directory of Open Access Journals (Sweden)

    A. Charnallet

    2008-01-01

    Full Text Available We report a case of massive associative visual agnosia. In the light of current theories of identification and semantic knowledge organization, a deficit involving both levels of structural description system and visual semantics must be assumed to explain the case. We suggest, in line with a previous case study [1], an alternative account in the framework of (non abstractive) episodic models of memory [4].

  12. Associative Visual Agnosia: A Case Study

    OpenAIRE

    Charnallet, A.; Carbonnel, S.; David, D.; Moreaud, O.

    2008-01-01

    We report a case of massive associative visual agnosia. In the light of current theories of identification and semantic knowledge organization, a deficit involving both levels of structural description system and visual semantics must be assumed to explain the case. We suggest, in line with a previous case study [1], an alternative account in the framework of (non abstractive) episodic models of memory [4].

  13. Attention and multisensory integration of emotions in schizophrenia

    Directory of Open Access Journals (Sweden)

    Mikhail eZvyagintsev

    2013-10-01

    The impairment of multisensory integration in schizophrenia is often explained by deficits of attentional selection. Emotion perception, however, does not always depend on attention because affective stimuli can capture attention automatically. In our study, we specify the role of attention in the multisensory perception of emotional stimuli in schizophrenia. We evaluated attention by interference between conflicting auditory and visual information in two multisensory paradigms in patients with schizophrenia and healthy participants. In the first paradigm, interference occurred between physical features of the dynamic auditory and visual stimuli. In the second paradigm, interference occurred between the emotional content of the auditory and visual stimuli, namely fearful and sad emotions. In patients with schizophrenia, the interference effect was observed in both paradigms. In contrast, in healthy participants, the interference occurred in the emotional paradigm only. These findings indicate that the information leakage between different modalities in patients with schizophrenia occurs at the perceptual level, which is intact in healthy participants. However, healthy participants can have problems with the separation of fearful and sad emotions similar to those of patients with schizophrenia.

  14. The role of pulvinar in the transmission of information in the visual hierarchy.

    Science.gov (United States)

    Cortes, Nelson; van Vreeswijk, Carl

    2012-01-01

    Visual receptive field (RF) attributes in the visual cortex of primates have been explained mainly from cortical connections: visual RFs progress from simple to complex through cortico-cortical pathways from lower to higher levels in the visual hierarchy. This feedforward flow of information is paired with top-down processes through the feedback pathway. Although the hierarchical organization explains the spatial properties of RFs, it is unclear how a non-linear transmission of activity through the visual hierarchy can yield smooth contrast response functions at all levels of the hierarchy. Depending on the gain, non-linear transfer functions create either a bimodal response to contrast or no contrast dependence of the response in the highest level of the hierarchy. One possible mechanism to regulate this transmission of visual contrast information from low to high levels involves an external component that shortcuts the flow of information through the hierarchy. A candidate for this shortcut is the pulvinar nucleus of the thalamus. To investigate the representation of stimulus contrast, a hierarchical model network of ten cortical areas is examined. In each level of the network, the activity from the previous layer is integrated and then non-linearly transmitted to the next level. The arrangement of interactions creates a gradient from simple to complex RFs of increasing size as one moves from lower to higher cortical levels. The visual input is modeled as a Gaussian random input whose width codes for the contrast. This input is applied to the first area. The output activity ratio among different contrast values is analyzed for the last level to observe sensitivity to contrast and contrast-invariant tuning. For a purely cortical system, the output of the last area can be approximately contrast invariant, but the sensitivity to contrast is poor. To account for an alternative visual processing pathway, non-reciprocal connections from and to a parallel pulvinar-like structure
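
    The layered transmission described above can be sketched numerically. The sigmoid transfer function, gain value, and threshold here are illustrative assumptions, not the paper's exact model; the sketch only shows why a high-gain non-linearity repeated over ten levels drives responses bimodal:

```python
import math

def transfer(x, gain):
    """Non-linear (sigmoidal) transfer from one cortical level to the next."""
    return 1.0 / (1.0 + math.exp(-gain * (x - 0.5)))

def propagate(contrast, gain, levels=10):
    """Feed a contrast value through a purely feedforward ten-level hierarchy."""
    activity = contrast
    for _ in range(levels):
        activity = transfer(activity, gain)
    return activity

# With a high gain, the last level loses graded contrast information:
# contrasts below threshold collapse toward 0, those above saturate near 1.
low = propagate(0.4, gain=8.0)
high = propagate(0.6, gain=8.0)
print(low, high)
```

    A shortcut that re-injects activity from an early level into a later one (the pulvinar-like structure of the abstract) would let the late levels keep a smoother dependence on contrast; modeling that pathway is what the full paper addresses.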

  15. Inner resources for survival: integrating interpersonal psychotherapy with spiritual visualization with homeless youth.

    Science.gov (United States)

    Mastropieri, Biagio; Schussel, Lorne; Forbes, David; Miller, Lisa

    2015-06-01

    Homeless youth have particular need to develop inner resources to confront the stress, the abusive environment of street life, and the paucity of external resources. Research suggests that treatment supporting spiritual awareness and growth may create a foundation for coping, relationships, and negotiating styles to mitigate distress. The current pilot study tests the feasibility, acceptability, and helpfulness of an interpersonal spiritual group psychotherapy, interpersonal psychotherapy (IPT) integrated with spiritual visualization (SV), offered through a homeless shelter, toward improving interpersonal coping and ameliorating symptoms of depression, distress, and anxiety in homeless youth. An exploratory pilot of integrative group psychotherapy (IPT + SV) for homeless young adults was conducted in New York City on the residential floor of a shelter-based transitional living program. Thirteen young adult men (mean age 20.3 years, SD = 1.06) participated in a weekly evening psychotherapy group (55 % African-American, 18 % biracial, 18 % Hispanic, 9 % Caucasian). Measures of psychological functioning were assessed at pre-intervention and post-intervention using the General Health Questionnaire (GHQ-12), Patient Health Questionnaire (PHQ-9, GAD-7), and the Inventory of Interpersonal Problems (IIP-32). A semi-structured exit interview and a treatment satisfaction questionnaire were also employed to assess acceptability following treatment. Among the homeless young adults who participated in the group treatment, significant decreases in symptoms of general distress and depression were found between baseline and termination of treatment, along with trend-level improvement in overall interpersonal functioning and levels of general anxiety. High utilization and treatment satisfaction showed the intervention to be both feasible and acceptable. Offered as an adjunct to the services-as-usual model at homeless shelters serving young adults, interpersonal psychotherapy

  16. The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information.

    Science.gov (United States)

    Buchan, Julie N; Munhall, Kevin G

    2012-01-01

    Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect the audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and that this effect is relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task. Gaze was less centralized on the face, less time was spent looking at the mouth and more time was spent looking at the eyes, when a concurrent cognitive load task was added to the speech task.

  17. Visual-Motor Integration in Children With Mild Intellectual Disability: A Meta-Analysis.

    Science.gov (United States)

    Memisevic, Haris; Djordjevic, Mirjana

    2018-01-01

    Visual-motor integration (VMI) skills, defined as the coordination of fine motor and visual perceptual abilities, are a very good indicator of a child's overall level of functioning. Research has clearly established that children with intellectual disability (ID) have deficits in VMI skills. This article presents a meta-analytic review of 10 research studies involving 652 children with mild ID for which a VMI skills assessment was also available. We measured the standardized mean difference (Hedges' g) between scores on VMI tests of these children with mild ID and either typically developing children's VMI test scores in these studies or normative mean values on VMI tests used by the studies. While mild ID is defined in part by intelligence scores that are two to three standard deviations below those of typically developing children, the standardized mean difference of VMI differences between typically developing children and children with mild ID in this meta-analysis was 1.75 (95% CI [1.11, 2.38]). Thus, the intellectual and adaptive skill deficits of children with mild ID may be greater (perhaps especially due to their abstract and conceptual reasoning deficits) than their relative VMI deficits. We discuss the possible meaning of this relative VMI strength among children with mild ID and suggest that their stronger VMI skills may be a target for intensive academic interventions as a means of attenuating problems in adaptive functioning.
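
    The effect size reported above, Hedges' g, is a standardized mean difference with a small-sample correction. The following sketch shows the standard computation; the group means, SDs, and sample sizes are illustrative values, not numbers from the meta-analysis:

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference with small-sample correction (Hedges' g)."""
    # Pooled standard deviation across the two groups.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    # Cohen's d, then the small-sample correction factor J.
    d = (mean1 - mean2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * d

# Illustrative comparison: typically developing group vs. group with mild ID.
g = hedges_g(mean1=100, mean2=82.5, sd1=15, sd2=10, n1=40, n2=40)
print(round(g, 2))  # → 1.36
```

    A meta-analysis then pools such per-study g values (weighted by their variances) into the overall estimate, here 1.75 with a 95% CI of [1.11, 2.38].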

  18. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    Science.gov (United States)

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al. Journal of Neuroscience, 20(17), 6594-6611 2000; Qiu et al. Nature Neuroscience, 10(11), 1492-1499 2007; Chen et al. Neuron, 82(3), 682-694 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.

  19. Visual Literacy in Bloom: Using Bloom's Taxonomy to Support Visual Learning Skills

    Science.gov (United States)

    Arneson, Jessie B.; Offerdahl, Erika G.

    2018-01-01

    "Vision and Change" identifies science communication as one of the core competencies in undergraduate biology. Visual representations are an integral part of science communication, allowing ideas to be shared among and between scientists and the public. As such, development of scientific visual literacy should be a desired outcome of…

  20. Reciprocal Engagement Between a Scientist and Visual Displays

    Science.gov (United States)

    Nolasco, Michelle Maria

    In this study the focus of investigation was the reciprocal engagement between a professional scientist and the visual displays with which he interacted. Visual displays are considered inextricable from everyday scientific endeavors and their interpretation requires a "back-and-forthness" between the viewers and the objects being viewed. The query that drove this study was: How does a scientist engage with visual displays during the explanation of his understanding of extremely small biological objects? The conceptual framework was based in embodiment where the scientist's talk, gesture, and body position were observed and microanalyzed. The data consisted of open-ended interviews that positioned the scientist to interact with visual displays when he explained the structure and function of different sub-cellular features. Upon microanalyzing the scientist's talk, gesture, and body position during his interactions with two different visual displays, four themes were uncovered: Naming, Layering, Categorizing, and Scaling. Naming occurred when the scientist added markings to a pre-existing, hand-drawn visual display. The markings had meaning as stand-alone labels and iconic symbols. Also, the markings transformed the pre-existing visual display, which resulted in its function as a new visual object. Layering occurred when the scientist gestured over images so that his gestures aligned with one or more of the image's features, but did not touch the actual visual display. Categorizing occurred when the scientist used contrasting categories, e.g. straight vs. not straight, to explain his understanding about different characteristics that the small biological objects held. Scaling occurred when the scientist used gesture to resize an image's features so that they fit his bodily scale. Three main points were drawn from this study. First, the scientist employed a variety of embodied strategies—coordinated talk, gesture, and body position—when he explained the structure

  1. SEURAT: visual analytics for the integrated analysis of microarray data.

    Science.gov (United States)

    Gribov, Alexander; Sill, Martin; Lück, Sonja; Rücker, Frank; Döhner, Konstanze; Bullinger, Lars; Benner, Axel; Unwin, Antony

    2010-06-03

    In translational cancer research, gene expression data is collected together with clinical data and genomic data arising from other chip based high throughput technologies. Software tools for the joint analysis of such high dimensional data sets together with clinical data are required. We have developed an open source software tool which provides interactive visualization capability for the integrated analysis of high-dimensional gene expression data together with associated clinical data, array CGH data and SNP array data. The different data types are organized by a comprehensive data manager. Interactive tools are provided for all graphics: heatmaps, dendrograms, barcharts, histograms, eventcharts and a chromosome browser, which displays genetic variations along the genome. All graphics are dynamic and fully linked so that any object selected in a graphic will be highlighted in all other graphics. For exploratory data analysis the software provides unsupervised data analytics like clustering, seriation algorithms and biclustering algorithms. The SEURAT software meets the growing needs of researchers to perform joint analysis of gene expression, genomical and clinical data.

  2. SEURAT: Visual analytics for the integrated analysis of microarray data

    Directory of Open Access Journals (Sweden)

    Bullinger Lars

    2010-06-01

    Background: In translational cancer research, gene expression data is collected together with clinical data and genomic data arising from other chip based high throughput technologies. Software tools for the joint analysis of such high dimensional data sets together with clinical data are required. Results: We have developed an open source software tool which provides interactive visualization capability for the integrated analysis of high-dimensional gene expression data together with associated clinical data, array CGH data and SNP array data. The different data types are organized by a comprehensive data manager. Interactive tools are provided for all graphics: heatmaps, dendrograms, barcharts, histograms, eventcharts and a chromosome browser, which displays genetic variations along the genome. All graphics are dynamic and fully linked so that any object selected in a graphic will be highlighted in all other graphics. For exploratory data analysis the software provides unsupervised data analytics like clustering, seriation algorithms and biclustering algorithms. Conclusions: The SEURAT software meets the growing needs of researchers to perform joint analysis of gene expression, genomical and clinical data.

  3. We have yet to see the "visual argument"

    NARCIS (Netherlands)

    Popa, O.E.

    2016-01-01

    In this paper, I defend two skeptical claims regarding current research on visual arguments and I explain how these claims reflect upon past and future research. The first claim is that qualifying an argument as being visual amounts to a category mistake; the second claim is that past analyses of

  4. Storytelling and Visualization: An Extended Survey

    OpenAIRE

    Chao Tong; Richard Roberts; Rita Borgo; Sean Walton; Robert S. Laramee; Kodzo Wegba; Aidong Lu; Yun Wang; Huamin Qu; Qiong Luo; Xiaojuan Ma

    2018-01-01

    Throughout history, storytelling has been an effective way of conveying information and knowledge. In the field of visualization, storytelling is rapidly gaining momentum and evolving cutting-edge techniques that enhance understanding. Many communities have commented on the importance of storytelling in data visualization. Storytellers are increasingly integrating complex visualizations into their narratives. In this paper, we present a survey of storytelling literature in visual...

  5. Short and Long-Term Attentional Firing Rates Can Be Explained by ST-Neuron Dynamics

    Directory of Open Access Journals (Sweden)

    Oscar J. Avella Gonzalez

    2018-03-01

    Attention modulates neural selectivity and optimizes the allocation of cortical resources during visual tasks. A large number of experimental studies in primates and humans provide ample evidence for this. As an underlying principle of visual attention, some theoretical models suggested the existence of a gain element that enhances contrast of the attended stimuli. In contrast, the Selective Tuning model of attention (ST) proposes an attentional mechanism based on suppression of irrelevant signals. In this paper, we present an updated characterization of the ST-neuron proposed by the Selective Tuning model, and suggest that the inclusion of adaptation currents (Ih) to ST-neurons may explain the temporal profiles of the firing rates recorded in single V4 cells during attentional tasks. Furthermore, using the model we show that the interaction between stimulus-selectivity of a neuron and attention shapes the profile of the firing rate, and is enough to explain its fast modulation and other discontinuities observed, when the neuron responds to a sudden switch of stimulus, or when one stimulus is added to another during a visual task.

  6. VISUAL3D - An EIT network on visualization of geomodels

    Science.gov (United States)

    Bauer, Tobias

    2017-04-01

    When it comes to interpretation of data and understanding of deep geological structures and bodies at different scales, modelling tools and modelling experience are vital for deep exploration. Geomodelling provides a platform for integration of different types of data, including new kinds of information (e.g., new improved measuring methods). EIT Raw Materials, initiated by the EIT (European Institute of Innovation and Technology) and funded by the European Commission, is the largest and strongest consortium in the raw materials sector worldwide. The VISUAL3D network of infrastructure is an initiative by EIT Raw Materials and aims at bringing together partners with 3D-4D-visualisation infrastructure and 3D-4D-modelling experience. The recently formed network collaboration interlinks hardware, software and expert knowledge in modelling visualization and output. A special focus will be on linking research, education and industry, integrating multi-disciplinary data, and visualizing the data in three and four dimensions. By aiding network collaborations we aim at improving the combination of geomodels with differing file formats and data characteristics. This will create increased competency in modelling visualization and the ability to interchange and communicate models more easily. By combining knowledge and experience in geomodelling with expertise in Virtual Reality visualization, partners of EIT Raw Materials, but also external parties, will have the possibility to visualize, analyze and validate their geomodels in immersive VR-environments. The current network combines partners from universities, research institutes, geological surveys and industry with a strong background in geological 3D-modelling and 3D visualization and comprises: Luleå University of Technology, Geological Survey of Finland, Geological Survey of Denmark and Greenland, TUBA Freiberg, Uppsala University, Geological Survey of France, RWTH Aachen, DMT, KGHM Cuprum, Boliden, Montan

  7. Direct experimental visualization of the global Hamiltonian progression of two-dimensional Lagrangian flow topologies from integrable to chaotic state

    Energy Technology Data Exchange (ETDEWEB)

    Baskan, O.; Clercx, H. J. H [Fluid Dynamics Laboratory, Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Speetjens, M. F. M. [Energy Technology Laboratory, Department of Mechanical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Metcalfe, G. [Commonwealth Scientific and Industrial Research Organisation, Melbourne, Victoria 3190 (Australia); Swinburne University of Technology, Department of Mechanical Engineering, Hawthorn VIC 3122 (Australia)

    2015-10-15

    Countless theoretical/numerical studies on transport and mixing in two-dimensional (2D) unsteady flows lean on the assumption that Hamiltonian mechanisms govern the Lagrangian dynamics of passive tracers. However, experimental studies specifically investigating said mechanisms are rare. Moreover, they typically concern local behavior in specific states (usually far away from the integrable state) and generally expose this indirectly by dye visualization. Laboratory experiments explicitly addressing the global Hamiltonian progression of the Lagrangian flow topology entirely from integrable to chaotic state, i.e., the fundamental route to efficient transport by chaotic advection, appear non-existent. This motivates our study on experimental visualization of this progression by direct measurement of Poincaré sections of passive tracer particles in a representative 2D time-periodic flow. This admits (i) accurate replication of the experimental initial conditions, facilitating true one-to-one comparison of simulated and measured behavior, and (ii) direct experimental investigation of the ensuing Lagrangian dynamics. The analysis reveals a close agreement between computations and observations and thus experimentally validates the full global Hamiltonian progression at a great level of detail.

  8. Direct experimental visualization of the global Hamiltonian progression of two-dimensional Lagrangian flow topologies from integrable to chaotic state.

    Science.gov (United States)

    Baskan, O; Speetjens, M F M; Metcalfe, G; Clercx, H J H

    2015-10-01

    Countless theoretical/numerical studies on transport and mixing in two-dimensional (2D) unsteady flows lean on the assumption that Hamiltonian mechanisms govern the Lagrangian dynamics of passive tracers. However, experimental studies specifically investigating said mechanisms are rare. Moreover, they typically concern local behavior in specific states (usually far away from the integrable state) and generally expose this indirectly by dye visualization. Laboratory experiments explicitly addressing the global Hamiltonian progression of the Lagrangian flow topology entirely from integrable to chaotic state, i.e., the fundamental route to efficient transport by chaotic advection, appear non-existent. This motivates our study on experimental visualization of this progression by direct measurement of Poincaré sections of passive tracer particles in a representative 2D time-periodic flow. This admits (i) accurate replication of the experimental initial conditions, facilitating true one-to-one comparison of simulated and measured behavior, and (ii) direct experimental investigation of the ensuing Lagrangian dynamics. The analysis reveals a close agreement between computations and observations and thus experimentally validates the full global Hamiltonian progression at a great level of detail.

  9. Predicting Visual Disability in Glaucoma With Combinations of Vision Measures.

    Science.gov (United States)

    Lin, Stephanie; Mihailovic, Aleksandra; West, Sheila K; Johnson, Chris A; Friedman, David S; Kong, Xiangrong; Ramulu, Pradeep Y

    2018-04-01

    We characterized vision in glaucoma using seven visual measures, with the goals of determining the dimensionality of vision, and how many and which visual measures best model activity limitation. We analyzed cross-sectional data from 150 older adults with glaucoma, collecting seven visual measures: integrated visual field (VF) sensitivity, visual acuity, contrast sensitivity (CS), area under the log CS function, color vision, stereoacuity, and visual acuity with noise. Principal component analysis was used to examine the dimensionality of vision. Multivariable regression models using one, two, or three vision tests (and nonvisual predictors) were compared to determine which was best associated with Rasch-analyzed Glaucoma Quality of Life-15 (GQL-15) person measure scores. The participants had a mean age of 70.2 and IVF sensitivity of 26.6 dB, suggesting mild-to-moderate glaucoma. All seven vision measures loaded similarly onto the first principal component (eigenvectors, 0.220-0.442), which explained 56.9% of the variance in vision scores. In models for GQL scores, the maximum adjusted R² values obtained were 0.263, 0.296, and 0.301 when using one, two, and three vision tests in the models, respectively, though several models in each category had similar adjusted R² values. All three of the best-performing models contained CS. Vision in glaucoma is a multidimensional construct that can be described by several variably-correlated vision measures. Measuring more than two vision tests does not substantially improve models for activity limitation. A sufficient description of disability in glaucoma can be obtained using one to two vision tests, especially VF and CS.
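
    The dimensionality analysis reported above can be reproduced in outline: standardize the seven measures, then inspect the eigenvalues of their correlation matrix. The data below are random placeholders generated around a single shared factor, not the study's measurements; the factor weight 0.8 is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder scores: 150 participants x 7 correlated vision measures,
# built from one shared latent "vision" factor plus independent noise.
latent = rng.normal(size=(150, 1))
noise = rng.normal(size=(150, 7))
scores = latent + 0.8 * noise

# Standardize each measure, then take eigenvalues of the correlation matrix.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # descending order

# Fraction of variance explained by the first principal component.
explained = eigvals[0] / eigvals.sum()
print(round(explained, 3))
```

    A dominant first eigenvalue (here roughly two-thirds of the total, versus the study's 56.9%) is what licenses describing the seven measures as reflecting largely one underlying dimension of vision.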

  10. Visual Arts as a Tool for Phenomenology

    Directory of Open Access Journals (Sweden)

    Anna S. CohenMiller

    2017-12-01

    In this article I explain the process and benefits of using visual arts as a tool within a transcendental phenomenological study. I present and discuss drawings created and described by four participants over the course of twelve interviews. Findings suggest the utility of visual arts methods within the phenomenological toolset to encourage participant voice through easing communication and facilitating understanding.

  11. Orientation is different: Interaction between contour integration and feature contrasts in visual search.

    Science.gov (United States)

    Jingling, Li; Tseng, Chia-Huei; Zhaoping, Li

    2013-09-10

    Salient items usually capture attention and are beneficial to visual search. Jingling and Tseng (2013), nevertheless, have discovered that a salient collinear column can impair local visual search. The display used in that study had 21 rows and 27 columns of bars, all uniformly horizontal (or vertical) except for one column of bars orthogonally oriented to all other bars, making this unique column of collinear (or noncollinear) bars salient in the display. Observers discriminated an oblique target bar superimposed on one of the bars either in the salient column or in the background. Interestingly, responses were slower for a target in a salient collinear column than in the background. This opens a theoretical question of how contour integration interacts with salience computation, which is addressed here by an examination of how salience modulated the search impairment from the collinear column. We show that the collinear column needs to have a high orientation contrast with its neighbors to exert search interference. A collinear column of high contrast in color or luminance did not produce the same impairment. Our results show that orientation-defined salience interacted with collinear contour differently from other feature dimensions, which is consistent with the neuronal properties in V1.

  12. Integration of Distinct Objects in Visual Working Memory Depends on Strong Objecthood Cues Even for Different-Dimension Conjunctions.

    Science.gov (United States)

    Balaban, Halely; Luria, Roy

    2016-05-01

    What makes an integrated object in visual working memory (WM)? Past evidence suggested that WM holds all features of multidimensional objects together, but struggles to integrate color-color conjunctions. This difficulty was previously attributed to a challenge in same-dimension integration, but here we argue that it arises from the integration of 2 distinct objects. To test this, we examined the integration of distinct different-dimension features (a colored square and a tilted bar). We monitored the contralateral delay activity, an event-related potential component sensitive to the number of objects in WM. The results indicated that color and orientation belonging to distinct objects in a shared location were not integrated in WM (Experiment 1), even following a common fate Gestalt cue (Experiment 2). These conjunctions were better integrated in a less demanding task (Experiment 3), and in the original WM task, but with a less individuating version of the original stimuli (Experiment 4). Our results identify the critical factor in WM integration at same- versus separate-objects, rather than at same- versus different-dimensions. Compared with the perfect integration of an object's features, the integration of several objects is demanding, and depends on an interaction between the grouping cues and task demands, among other factors.

  13. Creating visual explanations improves learning.

    Science.gov (United States)

    Bobek, Eliza; Tversky, Barbara

    2016-01-01

    Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.

  14. We have yet to see the "visual argument"

    OpenAIRE

    Popa, O.E.

    2016-01-01

    In this paper, I defend two skeptical claims regarding current research on visual arguments and I explain how these claims reflect upon past and future research. The first claim is that qualifying an argument as being visual amounts to a category mistake; the second claim is that past analyses of visual arguments fall short at both ends of the “production line” in that the input is not visual and the output is not an argument. Based on the developed critique, I discuss how the study of images in co...

  15. AppEEARS: A Simple Tool that Eases Complex Data Integration and Visualization Challenges for Users

    Science.gov (United States)

    Maiersperger, T.

    2017-12-01

    The Application for Extracting and Exploring Analysis-Ready Samples (AppEEARS) offers a simple and efficient way to perform discovery, processing, visualization, and acquisition across large quantities and varieties of Earth science data. AppEEARS brings significant value to a very broad array of user communities by 1) significantly reducing data volumes, at-archive, based on user-defined space-time-variable subsets, 2) promoting interoperability across a wide variety of datasets via format and coordinate reference system harmonization, 3) increasing the velocity of both data analysis and insight by providing analysis-ready data packages and by allowing interactive visual exploration of those packages, and 4) ensuring veracity by making data quality measures more apparent and usable and by providing standards-based metadata and processing provenance. Development and operation of AppEEARS is led by the National Aeronautics and Space Administration (NASA) Land Processes Distributed Active Archive Center (LP DAAC). The LP DAAC also partners with several other archives to extend the capability across a larger federation of geospatial data providers. Over one hundred datasets are currently available, covering a diversity of variables including land cover, population, elevation, vegetation indices, and land surface temperature. Many hundreds of users have already used this new web-based capability to make the complex tasks of data integration and visualization much simpler and more efficient.
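
    The space-time-variable subsetting that AppEEARS performs at-archive can be illustrated in miniature: keep only the records that fall inside a bounding box and time window for one named variable. The record layout below is a hypothetical sketch for illustration only; it is not the AppEEARS API.

```python
def subset(records, bbox, t0, t1, variable):
    """Keep records inside a lon/lat bounding box, within [t0, t1],
    for one named variable (hypothetical record layout)."""
    lon0, lat0, lon1, lat1 = bbox
    return [r for r in records
            if lon0 <= r["lon"] <= lon1
            and lat0 <= r["lat"] <= lat1
            and t0 <= r["time"] <= t1
            and r["var"] == variable]

# Three records: only the first is inside the box, window, and variable filter.
recs = [
    {"lon": 10.0, "lat": 50.0, "time": 5, "var": "LST", "value": 290.1},
    {"lon": 40.0, "lat": 50.0, "time": 5, "var": "LST", "value": 300.2},
    {"lon": 10.5, "lat": 50.5, "time": 9, "var": "NDVI", "value": 0.6},
]
out = subset(recs, (0.0, 45.0, 20.0, 55.0), 0, 10, "LST")
```

    Filtering at the archive in this way is what reduces data volume before anything is transferred to the user.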

  16. A cortical edge-integration model of object-based lightness computation that explains effects of spatial context and individual differences

    Science.gov (United States)

    Rudd, Michael E.

    2014-01-01

    Previous work has demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd and Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from either spatial variation in surface reflectance or illumination. This implies a role for top-down factors in any edge-based model of lightness (Rudd and Zemach, 2005). Here, I show how the separate influences of grouping and attention on lightness can be modeled in tandem by a cortical mechanism that first employs top-down signals to spatially select regions of interest for lightness computation. An object-based network computation, involving neurons that code for border-ownership, then automatically sets the neural gains applied to edge signals surviving the earlier spatial selection stage. Only the borders that survive both processing stages are spatially integrated to compute lightness. The model assumptions are consistent with those of the cortical lightness model presented earlier by Rudd (2010, 2013), and with neurophysiological data indicating extraction of local edge information in V1, network computations to establish figure-ground relations and border ownership in V2, and edge integration to encode lightness and darkness signals in V4. PMID:25202253
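
    The edge-integration computation described in this abstract can be sketched numerically: a target's lightness is estimated by summing weighted steps in log luminance along a path from a common background region to the target. The path luminances and the default unit weights below are illustrative placeholders, not values from Rudd's fitted models.

```python
import math

def edge_integration_lightness(luminances, weights=None):
    """Estimate target lightness by summing weighted steps in log
    luminance along a path of regions ending at the target (edge
    integration in the style of Rudd and Zemach)."""
    steps = [math.log(b) - math.log(a)
             for a, b in zip(luminances, luminances[1:])]
    if weights is None:
        weights = [1.0] * len(steps)  # equal edge weights by default
    return sum(w * s for w, s in zip(weights, steps))

# Path: common background (100 cd/m^2) -> surround (50) -> target (25)
val = edge_integration_lightness([100.0, 50.0, 25.0])
```

    With equal weights the result reduces to the log luminance ratio between target and background; unequal weights model the grouping and attentional gain effects the abstract discusses.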

  17. A Cortical Edge-integration Model of Object-Based Lightness Computation that Explains Effects of Spatial Context and Individual Differences

    Directory of Open Access Journals (Sweden)

    Michael E Rudd

    2014-08-01

    Full Text Available Previous work demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd & Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed the perceptual weights applied to edges in lightness computation can be influenced by the observer’s interpretation of luminance steps as resulting from either spatial variation in surface reflectance or illumination. This implies a role for top-down factors in any edge-based model of lightness (Rudd & Zemach, 2005). Here, I show how the separate influences of grouping and attention on lightness can be modeled together by a cortical mechanism that first employs top-down signals to spatially select regions of interest for lightness computation. An object-based network computation, involving neurons that code for border-ownership, then automatically sets the neural gains applied to edge signals surviving the earlier spatial selection stage. Only the borders that survive both processing stages are spatially integrated to compute lightness. The model assumptions are consistent with those of the cortical lightness model presented earlier by Rudd (2010, 2013), and with neurophysiological data indicating extraction of local edge information in V1, network computations to establish figure-ground relations and border ownership in V2, and edge integration to encode lightness and darkness signals in V4.

  19. Visual and cross-modal cues increase the identification of overlapping visual stimuli in Balint's syndrome.

    Science.gov (United States)

    D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina

    2017-10-01

    Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were efficacious in terms of increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal modality both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in the cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.

  20. Two items remembered as precisely as one: how integral features can improve visual working memory.

    Science.gov (United States)

    Bae, Gi Yeul; Flombaum, Jonathan I

    2013-10-01

    In the ongoing debate about the efficacy of visual working memory for more than three items, a consensus has emerged that memory precision declines as memory load increases from one to three. Many studies have reported that memory precision seems to be worse for two items than for one. We argue that memory for two items appears less precise than that for one only because two items present observers with a correspondence challenge that does not arise when only one item is stored--the need to relate observations to their corresponding memory representations. In three experiments, we prevented correspondence errors in two-item trials by varying sample items along task-irrelevant but integral (as opposed to separable) dimensions. (Initial experiments with a classic sorting paradigm identified integral feature relationships.) In three memory experiments, our manipulation produced equally precise representations of two items and of one item.

  1. Visual coherence for large-scale line-plot visualizations

    KAUST Repository

    Muigg, Philipp

    2011-06-01

    Displaying a large number of lines within a limited amount of screen space is a task that is common to many different classes of visualization techniques such as time-series visualizations, parallel coordinates, link-node diagrams, and phase-space diagrams. This paper addresses the challenging problems of cluttering and overdraw inherent to such visualizations. We generate a 2x2 tensor field during line rasterization that encodes the distribution of line orientations through each image pixel. Anisotropic diffusion of a noise texture is then used to generate a dense, coherent visualization of line orientation. In order to represent features of different scales, we employ a multi-resolution representation of the tensor field. The resulting technique can easily be applied to a wide variety of line-based visualizations. We demonstrate this for parallel coordinates, a time-series visualization, and a phase-space diagram. Furthermore, we demonstrate how to integrate a focus+context approach by incorporating a second tensor field. Our approach achieves interactive rendering performance for large data sets containing millions of data items, due to its image-based nature and ease of implementation on GPUs. Simulation results from computational fluid dynamics are used to evaluate the performance and usefulness of the proposed method. © 2011 The Author(s).
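
    The per-pixel 2x2 tensor described above can be sketched as follows: each line direction passing through a pixel contributes the outer product of its unit direction vector, and the eigenstructure of the accumulated tensor then summarizes the local distribution of line orientations. This is a minimal single-pixel illustration, not the paper's GPU rasterization.

```python
import math

def orientation_tensor(angles):
    """Accumulate a 2x2 orientation tensor T = sum(d d^T) from the
    directions (in radians) of lines crossing one pixel."""
    t = [[0.0, 0.0], [0.0, 0.0]]
    for a in angles:
        d = (math.cos(a), math.sin(a))  # unit direction vector
        for i in range(2):
            for j in range(2):
                t[i][j] += d[i] * d[j]
    return t

# Two horizontal lines and one vertical line through the same pixel
T = orientation_tensor([0.0, 0.0, math.pi / 2])
```

    The diagonal of T then reflects how strongly each axis is represented (here two horizontal contributions versus one vertical), which is the information the anisotropic diffusion step exploits.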

  2. Visual coherence for large-scale line-plot visualizations

    KAUST Repository

    Muigg, Philipp; Hadwiger, Markus; Doleisch, Helmut; Gröller, Eduard M.

    2011-01-01

    Displaying a large number of lines within a limited amount of screen space is a task that is common to many different classes of visualization techniques such as time-series visualizations, parallel coordinates, link-node diagrams, and phase-space diagrams. This paper addresses the challenging problems of cluttering and overdraw inherent to such visualizations. We generate a 2x2 tensor field during line rasterization that encodes the distribution of line orientations through each image pixel. Anisotropic diffusion of a noise texture is then used to generate a dense, coherent visualization of line orientation. In order to represent features of different scales, we employ a multi-resolution representation of the tensor field. The resulting technique can easily be applied to a wide variety of line-based visualizations. We demonstrate this for parallel coordinates, a time-series visualization, and a phase-space diagram. Furthermore, we demonstrate how to integrate a focus+context approach by incorporating a second tensor field. Our approach achieves interactive rendering performance for large data sets containing millions of data items, due to its image-based nature and ease of implementation on GPUs. Simulation results from computational fluid dynamics are used to evaluate the performance and usefulness of the proposed method. © 2011 The Author(s).

  3. Pedagogy and Quality in Indian Slum School Settings: A Bernsteinian Analysis of Visual Representations in the Integrated Child Development Service

    Science.gov (United States)

    Chawla-Duggan, Rita

    2016-01-01

    This paper focuses upon the micro level of the pre-school classroom, taking the example of the Indian Integrated Child Development Service (ICDS), and the discourse of "child-centred" pedagogy that is often associated with quality pre-schooling. Through an analysis of visual data, semi-structured and film elicitation interviews drawn…

  4. The Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT): Data Analysis and Visualization for Geoscience Data

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Dean [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Doutriaux, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Sean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Ross [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Steed, Chad [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Krishnan, Harinarayan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Silva, Claudio [NYU Polytechnic School of Engineering, New York, NY (United States); Chaudhary, Aashish [Kitware, Inc., Clifton Park, NY (United States); Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pugmire, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Childs, Hank [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Prabhat, Mr. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Bauer, Andrew [Kitware, Inc., Clifton Park, NY (United States); Pletzer, Alexander [Tech-X Corp., Boulder, CO (United States); Poco, Jorge [NYU Polytechnic School of Engineering, New York, NY (United States); Ellqvist, Tommy [NYU Polytechnic School of Engineering, New York, NY (United States); Santos, Emanuele [Federal Univ. of Ceara, Fortaleza (Brazil); Potter, Gerald [NASA Johnson Space Center, Houston, TX (United States); Smith, Brian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Maxwell, Thomas [NASA Johnson Space Center, Houston, TX (United States); Kindig, David [Tech-X Corp., Boulder, CO (United States); Koop, David [NYU Polytechnic School of Engineering, New York, NY (United States)

    2013-05-01

    To support interactive visualization and analysis of complex, large-scale climate data sets, UV-CDAT integrates a powerful set of scientific computing libraries and applications to foster more efficient knowledge discovery. Connected through a provenance framework, the UV-CDAT components can be loosely coupled for fast integration or tightly coupled for greater functionality and communication with other components. This framework addresses many challenges in the interactive visual analysis of distributed large-scale data for the climate community.

  5. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase one, volume 2 : knowledge modeling and database development.

    Science.gov (United States)

    2009-12-01

    The Integrated Remote Sensing and Visualization System (IRSV) is being designed to accommodate the needs of today's Bridge Engineers at the state and local level from several aspects that were documented in Volume One, Summary Report. The followi...

  6. A Hierarchical Visualization Analysis Model of Power Big Data

    Science.gov (United States)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed to target different abstraction modules such as transaction, engine, computation, control, and storage. The usually separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  7. Integration of interactive three-dimensional image post-processing software into undergraduate radiology education effectively improves diagnostic skills and visual-spatial ability

    Energy Technology Data Exchange (ETDEWEB)

    Rengier, Fabian, E-mail: fabian.rengier@web.de [University Hospital Heidelberg, Department of Diagnostic and Interventional Radiology, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany); Häfner, Matthias F. [University Hospital Heidelberg, Department of Radiation Oncology, Im Neuenheimer Feld 400, 69120 Heidelberg (Germany); Unterhinninghofen, Roland [Karlsruhe Institute of Technology (KIT), Institute for Anthropomatics, Department of Informatics, Adenauerring 2, 76131 Karlsruhe (Germany); Nawrotzki, Ralph; Kirsch, Joachim [University of Heidelberg, Institute of Anatomy and Cell Biology, Im Neuenheimer Feld 307, 69120 Heidelberg (Germany); Kauczor, Hans-Ulrich [University Hospital Heidelberg, Department of Diagnostic and Interventional Radiology, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany); Giesel, Frederik L. [University of Heidelberg, Institute of Anatomy and Cell Biology, Im Neuenheimer Feld 307, 69120 Heidelberg (Germany); University Hospital Heidelberg, Department of Nuclear Medicine, Im Neuenheimer Feld 400, 69120 Heidelberg (Germany)

    2013-08-15

    Purpose: Integrating interactive three-dimensional post-processing software into undergraduate radiology teaching might be a promising approach to synergistically improve both visual-spatial ability and radiological skills, thereby reducing students’ deficiencies in image interpretation. The purpose of this study was to test our hypothesis that a hands-on radiology course for medical students using interactive three-dimensional image post-processing software improves radiological knowledge, diagnostic skills and visual-spatial ability. Materials and methods: A hands-on radiology course was developed using interactive three-dimensional image post-processing software. The course consisted of seven seminars held on a weekly basis. The 25 participating fourth- and fifth-year medical students learnt to systematically analyse cross-sectional imaging data and correlated the two-dimensional images with three-dimensional reconstructions. They were instructed by experienced radiologists and collegiate tutors. The improvement in radiological knowledge, diagnostic skills and visual-spatial ability was assessed immediately before and after the course by multiple-choice tests comprising 64 questions each. Wilcoxon signed rank test for paired samples was applied. Results: The total number of correctly answered questions improved from 36.9 ± 4.8 to 49.5 ± 5.4 (p < 0.001) which corresponded to a mean improvement of 12.6 (95% confidence interval 9.9–15.3) or 19.8%. Radiological knowledge improved by 36.0% (p < 0.001), diagnostic skills for cross-sectional imaging by 38.7% (p < 0.001), diagnostic skills for other imaging modalities – which were not included in the course – by 14.0% (p = 0.001), and visual-spatial ability by 11.3% (p < 0.001). Conclusion: The integration of interactive three-dimensional image post-processing software into undergraduate radiology education effectively improves radiological reasoning, diagnostic skills and visual-spatial ability, and thereby

  8. Integration of interactive three-dimensional image post-processing software into undergraduate radiology education effectively improves diagnostic skills and visual-spatial ability

    International Nuclear Information System (INIS)

    Rengier, Fabian; Häfner, Matthias F.; Unterhinninghofen, Roland; Nawrotzki, Ralph; Kirsch, Joachim; Kauczor, Hans-Ulrich; Giesel, Frederik L.

    2013-01-01

    Purpose: Integrating interactive three-dimensional post-processing software into undergraduate radiology teaching might be a promising approach to synergistically improve both visual-spatial ability and radiological skills, thereby reducing students’ deficiencies in image interpretation. The purpose of this study was to test our hypothesis that a hands-on radiology course for medical students using interactive three-dimensional image post-processing software improves radiological knowledge, diagnostic skills and visual-spatial ability. Materials and methods: A hands-on radiology course was developed using interactive three-dimensional image post-processing software. The course consisted of seven seminars held on a weekly basis. The 25 participating fourth- and fifth-year medical students learnt to systematically analyse cross-sectional imaging data and correlated the two-dimensional images with three-dimensional reconstructions. They were instructed by experienced radiologists and collegiate tutors. The improvement in radiological knowledge, diagnostic skills and visual-spatial ability was assessed immediately before and after the course by multiple-choice tests comprising 64 questions each. Wilcoxon signed rank test for paired samples was applied. Results: The total number of correctly answered questions improved from 36.9 ± 4.8 to 49.5 ± 5.4 (p < 0.001) which corresponded to a mean improvement of 12.6 (95% confidence interval 9.9–15.3) or 19.8%. Radiological knowledge improved by 36.0% (p < 0.001), diagnostic skills for cross-sectional imaging by 38.7% (p < 0.001), diagnostic skills for other imaging modalities – which were not included in the course – by 14.0% (p = 0.001), and visual-spatial ability by 11.3% (p < 0.001). Conclusion: The integration of interactive three-dimensional image post-processing software into undergraduate radiology education effectively improves radiological reasoning, diagnostic skills and visual-spatial ability, and thereby

  9. Learning sorting algorithms through visualization construction

    Science.gov (United States)

    Cetin, Ibrahim; Andrews-Larson, Christine

    2016-01-01

    Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed visualizations on students' programming achievement and students' attitudes toward computer programming, and (ii) explore how this kind of instruction supports students' learning according to their self-reported experiences in the course. The study was conducted with 58 pre-service teachers who were enrolled in their second programming class. They expect to teach information technology and computing-related courses at the primary and secondary levels. An embedded experimental model was utilized as a research design. Students in the experimental group were given instruction that required students to construct visualizations related to sorting, whereas students in the control group viewed pre-made visualizations. After the instructional intervention, eight students from each group were selected for semi-structured interviews. The results showed that the intervention based on visualization construction resulted in significantly better acquisition of sorting concepts. However, there was no significant difference between the groups with respect to students' attitudes toward computer programming. Qualitative data analysis indicated that students in the experimental group constructed necessary abstractions through their engagement in visualization construction activities. The authors of this study argue that the students' active engagement in the visualization construction activities explains only one side of students' success. The other side can be explained through the instructional approach, constructionism in this case, used to design instruction. The conclusions and implications of this study can be used by researchers and
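
    A visualization-construction exercise of the kind studied here might have students produce the frame sequence a sorting animation would draw. The sketch below is an instructional illustration, not material from the study: it records the list after each insertion step of an insertion sort.

```python
def insertion_sort_states(values):
    """Insertion sort that records the list after each insertion,
    yielding the frame sequence a student visualization would draw."""
    a = list(values)
    states = [list(a)]  # initial frame
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift larger elements right
            j -= 1
        a[j + 1] = key
        states.append(list(a))  # one frame per inserted element
    return states

frames = insertion_sort_states([3, 1, 2])
```

    Each element of `frames` is one frame of the animation, ending with the sorted list; constructing such frames by hand is the kind of abstraction-building activity the experimental group engaged in.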

  10. Impaired Visual Motor Coordination in Obese Adults.

    LENUS (Irish Health Repository)

    Gaul, David

    2016-09-01

    Objective. To investigate whether obesity alters the sensory motor integration process and movement outcome during a visual rhythmic coordination task. Methods. 88 participants (44 obese and 44 matched control) sat on a chair equipped with a wrist pendulum oscillating in the sagittal plane. The task was to swing the pendulum in synchrony with a moving visual stimulus displayed on a screen. Results. Obese participants demonstrated significantly (p < 0.01) higher values for continuous relative phase (CRP), indicating a poorer level of coordination, increased movement variability (p < 0.05), and a larger amplitude (p < 0.05) than their healthy-weight counterparts. Conclusion. These results highlight the existence of visual sensory integration deficiencies in obese participants. The obese group had greater difficulty in synchronizing their movement with a visual stimulus. Considering that visual motor coordination is an essential component of many activities of daily living, any impairment could significantly affect quality of life.

  11. An Integrated Model to Explain Inter-Relationships in Travel ...

    African Journals Online (AJOL)

    This study focuses on the decision making process of international tourists traveling to Tanzania. An integrated approach is proposed to understand the interrelationships among tourist motivations, expectations, place identity and place dependence. Specifically, travel motivations directly affect tourist's expectations and ...

  12. Facial fluid synthesis for assessment of acne vulgaris using luminescent visualization system through optical imaging and integration of fluorescent imaging system

    Science.gov (United States)

    Balbin, Jessie R.; Dela Cruz, Jennifer C.; Camba, Clarisse O.; Gozo, Angelo D.; Jimenez, Sheena Mariz B.; Tribiana, Aivje C.

    2017-06-01

    Acne vulgaris, commonly called acne, is a skin problem that occurs when oil and dead skin cells clog up a person's pores; this happens because hormonal changes make the skin oilier. The problem is that people do not have a real assessment of the sensitivity of their skin in terms of the fluid development on their faces that tends to produce acne vulgaris, and thus suffer more complications. This research aims to assess acne vulgaris using a luminescent visualization system through optical imaging and the integration of image processing algorithms. Specifically, it aims to design a prototype for facial fluid analysis using a luminescent visualization system through optical imaging and the integration of a fluorescent imaging system, and to classify the different facial fluids present in each person. Throughout the process, some structures and layers of the face are excluded, leaving only a mapped facial structure with acne regions. Facial fluid regions are distinguished from the acne regions as they are characterized differently.

  13. Perceptual integration without conscious access

    NARCIS (Netherlands)

    Fahrenfort, Johannes J.; Van Leeuwen, Jonathan; Olivers, Christian N.L.; Hogendoorn, Hinze

    2017-01-01

    The visual system has the remarkable ability to integrate fragmentary visual input into a perceptually organized collection of surfaces and objects, a process we refer to as perceptual integration. Despite a long tradition of perception research, it is not known whether access to consciousness is

  14. The effect of early visual deprivation on the neural bases of multisensory processing

    OpenAIRE

    Guerreiro, Maria J. S.; Putzar, Lisa; Röder, Brigitte

    2015-01-01

    Animal studies have shown that congenital visual deprivation reduces the ability of neurons to integrate cross-modal inputs. Guerreiro et al. reveal that human patients who suffer transient congenital visual deprivation because of cataracts lack multisensory integration in auditory and multisensory areas as adults, and suppress visual processing during audio-visual stimulation.

  15. Early vision and visual attention

    Directory of Open Access Journals (Sweden)

    Gvozdenović Vasilije P.

    2003-01-01

    Full Text Available The question whether visual perception is spontaneous and sudden, or runs through several phases mediated by higher cognitive processes, has been raised ever since the early work of the Gestalt psychologists. In the early 1980s, Treisman proposed the feature integration theory of attention (FIT), based on the findings of neuroscience. Soon after the theory was published, a new line of research appeared investigating several visual perception phenomena. The most widely researched were the key constructs of FIT, such as the types of visual search and the role of attention. The following review describes the main studies of early vision and visual attention.

  16. Arsenic removal from contaminated groundwater by membrane-integrated hybrid plant: optimization and control using Visual Basic platform.

    Science.gov (United States)

    Chakrabortty, S; Sen, M; Pal, P

    2014-03-01

    A simulation software package (ARRPA) has been developed on the Microsoft Visual Basic platform for optimization and control of a novel membrane-integrated arsenic separation plant, in the absence of any such software to date. The user-friendly, menu-driven software is based on a dynamic linearized mathematical model developed for the hybrid treatment scheme. The model captures the chemical kinetics in the pre-treating chemical reactor and the separation and transport phenomena involved in nanofiltration. The software has been validated through extensive experimental investigations. The agreement between the outputs of the computer simulation program and the experimental findings is excellent and consistent under varying operating conditions, reflecting the high degree of accuracy and reliability of the software. High values of the overall correlation coefficient (R² = 0.989) and Willmott d-index (0.989) are indicators of the capability of the software in analyzing the performance of the plant. The software permits pre-analysis and manipulation of input data, helps in optimization, and exhibits the performance of an integrated plant visually on a graphical platform. Performance analysis of the whole system as well as of the individual units is possible using the tool. The software, the first of its kind in its domain and built in the well-known Microsoft Excel environment, is likely to be very useful in the successful design, optimization and operation of an advanced hybrid treatment plant for the removal of arsenic from contaminated groundwater.
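
    The two agreement statistics named in the abstract have standard definitions; a minimal sketch of both, on made-up data (the arrays below are illustrative only, not values from the paper):

```python
import numpy as np

# Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
def r_squared(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Willmott's index of agreement:
# d = 1 - sum((P - O)^2) / sum((|P - Obar| + |O - Obar|)^2).
def willmott_d(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    o_bar = observed.mean()
    num = np.sum((predicted - observed) ** 2)
    den = np.sum((np.abs(predicted - o_bar) + np.abs(observed - o_bar)) ** 2)
    return 1.0 - num / den

# Hypothetical measured vs. simulated values, for demonstration only.
observed  = [0.95, 0.80, 0.62, 0.50, 0.33, 0.21]
predicted = [0.93, 0.82, 0.60, 0.52, 0.35, 0.20]
print(round(r_squared(observed, predicted), 3))
print(round(willmott_d(observed, predicted), 3))
```

    Both indices approach 1 as predictions track observations, which is how values such as 0.989 are read as evidence of model fidelity.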

  17. A Brief Period of Postnatal Visual Deprivation Alters the Balance between Auditory and Visual Attention.

    Science.gov (United States)

    de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier

    2016-11-21

    Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.
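
    The race-model test mentioned above has a simple computational form: Miller's inequality states that under a race (no integration), the redundant-target CDF can never exceed the sum of the unimodal CDFs, F_AV(t) ≤ F_A(t) + F_V(t). A hedged sketch with fabricated reaction times (not the study's data or code):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of reaction times evaluated at time t."""
    rts = np.sort(np.asarray(rts, float))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_model_violation(rt_a, rt_v, rt_av, times):
    """Largest positive value of F_AV(t) - min(1, F_A(t) + F_V(t))."""
    diffs = [ecdf(rt_av, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
             for t in times]
    return max(diffs)

# Illustrative RTs in ms, drawn from arbitrary distributions.
rng = np.random.default_rng(0)
rt_a  = rng.normal(320, 40, 200)   # auditory targets alone
rt_v  = rng.normal(350, 40, 200)   # visual targets alone
rt_av = rng.normal(260, 30, 200)   # redundant targets, strongly facilitated
times = np.linspace(150, 500, 50)

violation = race_model_violation(rt_a, rt_v, rt_av, times)
print(violation > 0)   # a positive value indicates a race-model violation
```

    A redundancy gain that surpasses this bound, as reported for both groups, is the conventional evidence for genuine multisensory integration rather than statistical facilitation.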

  18. Promoting Healthful Exercise for Visually Impaired Persons with Diabetes.

    Science.gov (United States)

    Weitzman, D. M.

    1993-01-01

    This article discusses the importance of exercise for many people with visual impairments and diabetes. It lists precautions for the person with visual impairments and diabetes and specifies who should not exercise, explains "diabetes-specific" benefits of exercise, suggests a format for a safe workout, and includes an example of a successful…

  19. Visual dictionaries as intermediate features in the human brain

    Directory of Open Access Journals (Sweden)

    Kandan eRamakrishnan

    2015-01-01

    Full Text Available The human visual system is assumed to transform low-level visual features into object and scene representations via features of intermediate complexity. How the brain computationally represents intermediate features is still unclear. To further elucidate this, we compared the biologically plausible HMAX model and the Bag of Words (BoW) model from computer vision. Both computational models use visual dictionaries, candidate features of intermediate complexity, to represent visual scenes, and both have proven effective in automatic object and scene recognition. The models differ, however, in the computation of visual dictionaries and in their pooling techniques. We investigated where in the brain and to what extent human fMRI responses to a short video can be accounted for by multiple hierarchical levels of the HMAX and BoW models. Brain activity of 20 subjects obtained while viewing a short video clip was analyzed voxel-wise using a distance-based variation partitioning method. Results revealed that both HMAX and BoW explain a significant amount of brain activity in early visual regions V1, V2 and V3. However, BoW exhibits more consistency across subjects in accounting for brain activity than HMAX. Furthermore, visual dictionary representations from HMAX and BoW explain a significant amount of brain activity in higher areas believed to process intermediate features. Overall, our results indicate that, although both HMAX and BoW account for activity in the human visual system, BoW seems to represent neural responses in low- and intermediate-level visual areas of the brain more faithfully.
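
    The visual-dictionary step the two models share can be illustrated compactly: local descriptors are assigned to their nearest "visual word", and an image becomes a histogram of word counts. The dictionary below is random for demonstration; real pipelines learn it (e.g. by clustering), and, as the abstract notes, HMAX and BoW differ precisely in how the dictionary and pooling are computed.

```python
import numpy as np

rng = np.random.default_rng(1)
dictionary = rng.normal(size=(16, 8))    # 16 visual words, 8-D descriptors (illustrative)
descriptors = rng.normal(size=(100, 8))  # local features extracted from one image

# Assign each descriptor to its nearest word (squared Euclidean distance).
d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
words = d2.argmin(axis=1)

# The image representation: a normalized histogram over the dictionary.
hist = np.bincount(words, minlength=16) / len(words)
print(hist.sum())   # histogram is normalized
```

    Representations of this kind, computed at several hierarchical levels, are what the study regressed against voxel-wise fMRI responses.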

  20. Coding the presence of visual objects in a recurrent neural network of visual cortex.

    Science.gov (United States)

    Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard

    2007-01-01

    Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.

  1. Evidence for optimal integration of visual feature representations across saccades

    NARCIS (Netherlands)

    Oostwoud Wijdenes, L.; Marshall, L.; Bays, P.M.

    2015-01-01

    We explore the visual world through saccadic eye movements, but saccades also present a challenge to visual processing by shifting externally stable objects from one retinal location to another. The brain could solve this problem in two ways: by overwriting preceding input and starting afresh with

  2. Integrating land cover and terrain characteristics to explain plague ...

    African Journals Online (AJOL)

    Literature suggests that higher resolution remote sensing data integrated in Geographic Information System (GIS) can provide greater possibility to refine the analysis of land cover and terrain characteristics for explanation of abundance and distribution of plague hosts and vectors and hence of health risk hazards to ...

  3. Local and global limits on visual processing in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Marc S Tibber

    Full Text Available Schizophrenia has been linked to impaired performance on a range of visual processing tasks (e.g. detection of coherent motion and contour detection). It has been proposed that this is due to a general inability to integrate visual information at a global level. To test this theory, we assessed the performance of people with schizophrenia on a battery of tasks designed to probe voluntary averaging in different visual domains. Twenty-three outpatients with schizophrenia (mean age: 40±8 years; 3 female) and 20 age-matched control participants (mean age: 39±9 years; 3 female) performed a motion coherence task and three equivalent noise (averaging) tasks, the latter allowing independent quantification of local and global limits on visual processing of motion, orientation and size. All performance measures were indistinguishable between the two groups (ps > 0.05, one-way ANCOVAs), with one exception: participants with schizophrenia pooled fewer estimates of local orientation than controls when estimating average orientation (p = 0.01, one-way ANCOVA). These data do not support the notion of a generalised visual integration deficit in schizophrenia. Instead, they suggest that distinct visual dimensions are differentially affected in schizophrenia, with a specific impairment in the integration of visual orientation information.
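
    The equivalent-noise paradigm used here rests on a standard observer model: the variance of an averaging judgment is the sum of internal (local) noise and external stimulus noise, divided by the number of local samples pooled, so thresholds separate "local" from "global" limits. A sketch with illustrative parameter values (not the study's estimates):

```python
import numpy as np

def observed_threshold(sigma_ext, sigma_int, n_samp):
    """Equivalent-noise model: sigma_obs^2 = (sigma_int^2 + sigma_ext^2) / n_samp."""
    return np.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_samp)

# Hypothetical local noise (deg) and number of pooled samples.
sigma_int, n_samp = 4.0, 8

for sigma_ext in (0.0, 2.0, 8.0, 32.0):
    print(sigma_ext, round(observed_threshold(sigma_ext, sigma_int, n_samp), 2))
```

    At low external noise the threshold is limited by internal noise; at high external noise it is limited by the pooling count, which is why fitting this curve yields independent local and global estimates, and why a reduced n_samp for orientation is the specific deficit reported.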

  4. Structural Model of the Relationships among Cognitive Processes, Visual Motor Integration, and Academic Achievement in Students with Mild Intellectual Disability (MID)

    Science.gov (United States)

    Taha, Mohamed Mostafa

    2016-01-01

    This study aimed to test a proposed structural model of the relationships and existing paths among cognitive processes (attention and planning), visual motor integration, and academic achievement in reading, writing, and mathematics. The study sample consisted of 50 students with mild intellectual disability or MID. The average age of these…

  5. Modelling individual differences in visual categorization.

    Science.gov (United States)

    Shen, Jianhong; Palmeri, Thomas J

    Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review focuses both on how formal models of visual categorization have captured individual differences and on how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a bit of historical perspective, starting with models that predicted no individual differences, to those that captured group differences, to those that predict true individual differences, and on to more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization.

  6. "Usability of data integration and visualization software for multidisciplinary pediatric intensive care: a human factors approach to assessing technology".

    Science.gov (United States)

    Lin, Ying Ling; Guerguerian, Anne-Marie; Tomasi, Jessica; Laussen, Peter; Trbovich, Patricia

    2017-08-14

    Intensive care clinicians use several sources of data to inform decision-making. We set out to evaluate a new interactive data integration platform called T3™ made available for pediatric intensive care. Three primary functions are supported: tracking of physiologic signals, displaying trajectory, and triggering decisions by highlighting data or estimating risk of patient instability. We designed a human factors study to identify interface usability issues, to measure ease of use, and to describe interface features that may enable or hinder clinical tasks. Twenty-two participants, consisting of bedside intensive care physicians, nurses, and respiratory therapists, tested the T3™ interface in a simulation laboratory setting. Twenty tasks were performed with a true-to-setting, fully functional prototype populated with physiological and therapeutic intervention patient data. Primary data visualization was time series; secondary visualizations were: 1) shading of out-of-target values, 2) mini-trends with exaggerated maxima and minima (sparklines), and 3) a bar graph of a 16-parameter indicator. Task completion was video recorded and assessed using a use error rating scale. Usability issues were classified in the context of task and type of clinician. A severity rating scale was used to rate the potential clinical impact of usability issues. Time series supported tracking a single parameter but only partially supported determining patient trajectory using multiple parameters. Visual pattern overload was observed with multiple parameter data streams. Automated data processing using shading and sparklines was often ignored, but the 16-parameter data reduction algorithm, displayed as a persistent bar graph, was visually intuitive. However, by selecting or automatically processing data, triggering aids distorted the raw data that clinicians use regularly. Consequently, clinicians could not rely on new data representations because they did not know how they were…

  7. Distributed XQuery-Based Integration and Visualization of Multimodality Brain Mapping Data.

    Science.gov (United States)

    Detwiler, Landon T; Suciu, Dan; Franklin, Joshua D; Moore, Eider B; Poliakov, Andrew V; Lee, Eunjung S; Corina, David P; Ojemann, George A; Brinkley, James F

    2009-01-01

    This paper addresses the need for relatively small groups of collaborating investigators to integrate distributed and heterogeneous data about the brain. Although various national efforts facilitate large-scale data sharing, these approaches are generally too "heavyweight" for individual or small groups of investigators, with the result that most data sharing among collaborators continues to be ad hoc. Our approach to this problem is to create a "lightweight" distributed query architecture, in which data sources are accessible via web services that accept arbitrary query languages but return XML results. A Distributed XQuery Processor (DXQP) accepts distributed XQueries in which subqueries are shipped to the remote data sources to be executed, with the resulting XML integrated by DXQP. A web-based application called DXBrain accesses DXQP, allowing a user to create, save and execute distributed XQueries, and to view the results in various formats including a 3-D brain visualization. Example results are presented using distributed brain mapping data sources obtained in studies of language organization in the brain, but any other XML source could be included. The advantage of this approach is that it is very easy to add and query a new source, the tradeoff being that the user needs to understand XQuery and the schemata of the underlying sources. For small numbers of known sources this burden is not onerous for a knowledgeable user, leading to the conclusion that the system helps to fill the gap between ad hoc local methods and large scale but complex national data sharing efforts.

  8. Distributed XQuery-based integration and visualization of multimodality brain mapping data

    Directory of Open Access Journals (Sweden)

    Landon T Detwiler

    2009-01-01

    Full Text Available This paper addresses the need for relatively small groups of collaborating investigators to integrate distributed and heterogeneous data about the brain. Although various national efforts facilitate large-scale data sharing, these approaches are generally too “heavyweight” for individual or small groups of investigators, with the result that most data sharing among collaborators continues to be ad hoc. Our approach to this problem is to create a “lightweight” distributed query architecture, in which data sources are accessible via web services that accept arbitrary query languages but return XML results. A Distributed XQuery Processor (DXQP accepts distributed XQueries in which subqueries are shipped to the remote data sources to be executed, with the resulting XML integrated by DXQP. A web-based application called DXBrain accesses DXQP, allowing a user to create, save and execute distributed XQueries, and to view the results in various formats including a 3-D brain visualization. Example results are presented using distributed brain mapping data sources obtained in studies of language organization in the brain, but any other XML source could be included. The advantage of this approach is that it is very easy to add and query a new source, the tradeoff being that the user needs to understand XQuery and the schemata of the underlying sources. For small numbers of known sources this burden is not onerous for a knowledgeable user, leading to the conclusion that the system helps to fill the gap between ad hoc local methods and large scale but complex national data sharing efforts.
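
    The mediator pattern described in the two DXQP abstracts above can be sketched compactly: subqueries go out to independent sources that each return XML, and the mediator stitches the fragments into one result document. The "sources" below are stand-in functions with hypothetical data; the real system dispatches XQuery subqueries to web services.

```python
import xml.etree.ElementTree as ET

def source_structures(_subquery):    # hypothetical remote source 1
    return "<structures><s name='Broca'/><s name='Wernicke'/></structures>"

def source_activations(_subquery):   # hypothetical remote source 2
    return "<activations><a site='Broca' z='3.1'/></activations>"

def integrate(subqueries):
    """Ship each subquery to its source and merge the XML replies."""
    root = ET.Element("result")
    for source, query in subqueries:
        root.append(ET.fromstring(source(query)))  # each source returns an XML fragment
    return ET.tostring(root, encoding="unicode")

merged = integrate([(source_structures, "//s"), (source_activations, "//a")])
print(merged)
```

    Because every source is just "a service that accepts a query and returns XML", adding a new source costs one entry in the dispatch list, which is the "lightweight" property the papers emphasize.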

  9. Visual motion perception predicts driving hazard perception ability.

    Science.gov (United States)

    Lacherez, Philippe; Au, Sandra; Wood, Joanne M

    2014-02-01

    To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving and has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.

  10. Spatial integration and cortical dynamics.

    OpenAIRE

    Gilbert, C D; Das, A; Ito, M; Kapadia, M; Westheimer, G

    1996-01-01

    Cells in adult primary visual cortex are capable of integrating information over much larger portions of the visual field than was originally thought. Moreover, their receptive field properties can be altered by the context within which local features are presented and by changes in visual experience. The substrate for both spatial integration and cortical plasticity is likely to be found in a plexus of long-range horizontal connections, formed by cortical pyramidal cells, which link cells wi...

  11. The picture superiority effect in categorization: visual or semantic?

    Science.gov (United States)

    Job, R; Rumiati, R; Lotto, L

    1992-09-01

    Two experiments are reported whose aim was to replicate and generalize the results presented by Snodgrass and McCullough (1986) on the effect of visual similarity in the categorization process. For pictures, Snodgrass and McCullough's results were replicated because Ss took longer to discriminate elements from 2 categories when they were visually similar than when they were visually dissimilar. However, unlike Snodgrass and McCullough, an analogous increase was also observed for word stimuli. The pattern of results obtained here can be explained most parsimoniously with reference to the effect of semantic similarity, or semantic and visual relatedness, rather than to visual similarity alone.

  12. Functional MRI of the visual cortex and visual testing in patients with previous optic neuritis

    DEFF Research Database (Denmark)

    Langkilde, Annika Reynberg; Frederiksen, J.L.; Rostrup, Egill

    2002-01-01

    The volume of cortical activation as detected by functional magnetic resonance imaging (fMRI) in the visual cortex has previously been shown to be reduced following optic neuritis (ON). In order to understand the cause of this change, we studied the cortical activation, both the size of the activated area and the signal change following ON, and compared the results with results of neuroophthalmological testing. We studied nine patients with previous acute ON, and 10 healthy persons served as controls, using fMRI with visual stimulation. In addition to a reduced activated volume, patients showed… to both the results of the contrast sensitivity test and to the Snellen visual acuity. Our results indicate that fMRI is a useful method for the study of ON, even in cases where the visual acuity is severely impaired. The reduction in activated volume could be explained as a reduced neuronal input…

  13. Visual and kinesthetic locomotor imagery training integrated with auditory step rhythm for walking performance of patients with chronic stroke.

    Science.gov (United States)

    Kim, Jin-Seop; Oh, Duck-Won; Kim, Suhn-Yeop; Choi, Jong-Duk

    2011-02-01

    To compare the effects of visual and kinesthetic locomotor imagery training on walking performance and to determine the clinical feasibility of incorporating auditory step rhythm into the training. Randomized crossover trial. Laboratory of a Department of Physical Therapy. Fifteen subjects with post-stroke hemiparesis. Four locomotor imagery trainings on walking performance: visual locomotor imagery training, kinesthetic locomotor imagery training, visual locomotor imagery training with auditory step rhythm, and kinesthetic locomotor imagery training with auditory step rhythm. The timed up-and-go test and electromyographic and kinematic analyses of the affected lower limb during one gait cycle. After the interventions, significant differences were found in the timed up-and-go test results between the visual locomotor imagery training (25.69 ± 16.16 to 23.97 ± 14.30) and the kinesthetic locomotor imagery training with auditory step rhythm (22.68 ± 12.35 to 15.77 ± 8.58) (P < …). The kinesthetic locomotor imagery training exhibited significantly increased activation in a greater number of muscles and increased angular displacement of the knee and ankle joints compared with the visual locomotor imagery training, and these effects were more prominent when auditory step rhythm was integrated into each form of locomotor imagery training. The activation of the hamstring during the swing phase and the gastrocnemius during the stance phase, as well as kinematic data of the knee joint, were significantly different for posttest values between the visual locomotor imagery training and the kinesthetic locomotor imagery training with auditory step rhythm (P < …), with greater changes in the kinesthetic locomotor imagery training than in the visual locomotor imagery training. The auditory step rhythm together with the locomotor imagery training produces a greater positive effect in improving the walking performance of patients with post-stroke hemiparesis.

  14. Modeling the Time-Course of Responses for the Border Ownership Selectivity Based on the Integration of Feedforward Signals and Visual Cortical Interactions.

    Science.gov (United States)

    Wagatsuma, Nobuhiko; Sakai, Ko

    2016-01-01

    Border ownership (BO) indicates which side of a contour owns a border, and it plays a fundamental role in figure-ground segregation. The majority of neurons in V2 and V4 areas of monkeys exhibit BO selectivity. A physiological work reported that the responses of BO-selective cells show a rapid transition when a presented square is flipped along its classical receptive field (CRF) so that the opposite BO is presented, whereas the transition is significantly slower when a square with a clear BO is replaced by an ambiguous edge, e.g., when the square is enlarged greatly. The rapid transition seemed to reflect the influence of feedforward processing on BO selectivity. Herein, we investigated the role of feedforward signals and cortical interactions for time-courses in BO-selective cells by modeling a visual cortical network comprising V1, V2, and posterior parietal (PP) modules. In our computational model, the recurrent pathways among these modules gradually established the visual progress and the BO assignments. Feedforward inputs mainly determined the activities of these modules. Surrounding suppression/facilitation of early-level areas modulates the activities of V2 cells to provide BO signals. Weak feedback signals from the PP module enhanced the contrast gain extracted in V1, which underlies the attentional modulation of BO signals. Model simulations exhibited time-courses depending on the BO ambiguity, which were caused by the integration delay of V1 and V2 cells and the local inhibition therein given the difference in input stimulus. However, our model did not fully explain the characteristics of crucially slow transition: the responses of BO-selective physiological cells indicated the persistent activation several times longer than that of our model after the replacement with the ambiguous edge. Furthermore, the time-course of BO-selective model cells replicated the attentional modulation of response time in human psychophysical experiments. 
These attentional…

  15. Modeling the Time-Course of Responses for the Border Ownership Selectivity Based on the Integration of Feedforward Signals and Visual Cortical Interactions

    Science.gov (United States)

    Wagatsuma, Nobuhiko; Sakai, Ko

    2017-01-01

    Border ownership (BO) indicates which side of a contour owns a border, and it plays a fundamental role in figure-ground segregation. The majority of neurons in V2 and V4 areas of monkeys exhibit BO selectivity. A physiological work reported that the responses of BO-selective cells show a rapid transition when a presented square is flipped along its classical receptive field (CRF) so that the opposite BO is presented, whereas the transition is significantly slower when a square with a clear BO is replaced by an ambiguous edge, e.g., when the square is enlarged greatly. The rapid transition seemed to reflect the influence of feedforward processing on BO selectivity. Herein, we investigated the role of feedforward signals and cortical interactions for time-courses in BO-selective cells by modeling a visual cortical network comprising V1, V2, and posterior parietal (PP) modules. In our computational model, the recurrent pathways among these modules gradually established the visual progress and the BO assignments. Feedforward inputs mainly determined the activities of these modules. Surrounding suppression/facilitation of early-level areas modulates the activities of V2 cells to provide BO signals. Weak feedback signals from the PP module enhanced the contrast gain extracted in V1, which underlies the attentional modulation of BO signals. Model simulations exhibited time-courses depending on the BO ambiguity, which were caused by the integration delay of V1 and V2 cells and the local inhibition therein given the difference in input stimulus. However, our model did not fully explain the characteristics of crucially slow transition: the responses of BO-selective physiological cells indicated the persistent activation several times longer than that of our model after the replacement with the ambiguous edge. Furthermore, the time-course of BO-selective model cells replicated the attentional modulation of response time in human psychophysical experiments. 
These attentional…

  16. Visualization of uncertainty and ensemble data: Exploration of climate modeling and weather forecast data with integrated ViSUS-CDAT systems

    International Nuclear Information System (INIS)

    Potter, Kristin; Pascucci, Valerio; Johhson, Chris; Wilson, Andrew; Bremer, Peer-Timo; Williams, Dean; Doutriaux, Charles

    2009-01-01

    Climate scientists and meteorologists are working towards a better understanding of atmospheric conditions and global climate change. To explore the relationships present in numerical predictions of the atmosphere, ensemble datasets are produced that combine time- and spatially-varying simulations generated using multiple numeric models, sampled input conditions, and perturbed parameters. These data sets mitigate as well as describe the uncertainty present in the data by providing insight into the effects of parameter perturbation, sensitivity to initial conditions, and inconsistencies in model outcomes. As such, massive amounts of data are produced, creating challenges both in data analysis and in visualization. This work presents an approach to understanding ensembles by using a collection of statistical descriptors to summarize the data, and displaying these descriptors using a variety of visualization techniques which are familiar to domain experts. The resulting techniques are integrated into the ViSUS/Climate Data and Analysis Tools (CDAT) system, designed to provide a directly accessible, complex visualization framework to atmospheric researchers.
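
    The summarization idea can be made concrete: an ensemble of runs is reduced, per grid point, to statistical descriptors (mean, spread, quartiles) that can then be rendered with familiar techniques. A minimal sketch on synthetic data (the specific descriptors used by ViSUS-CDAT may differ):

```python
import numpy as np

rng = np.random.default_rng(42)
# 50 ensemble members, each a field on a 10x10 grid (synthetic temperatures).
ensemble = rng.normal(loc=15.0, scale=2.0, size=(50, 10, 10))

# Collapse the member axis into per-grid-point summary fields.
descriptors = {
    "mean":   ensemble.mean(axis=0),
    "stddev": ensemble.std(axis=0),
    "q25":    np.percentile(ensemble, 25, axis=0),
    "median": np.percentile(ensemble, 50, axis=0),
    "q75":    np.percentile(ensemble, 75, axis=0),
}
print(descriptors["mean"].shape)   # each descriptor is one field per grid point
```

    Each descriptor is itself a spatial field, so it can be displayed with whatever colormap or contour technique the analyst already uses, which is the point of summarizing before visualizing.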

  17. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 1 : outreach and commercialization of IRSV prototype.

    Science.gov (United States)

    2012-03-01

    The Integrated Remote Sensing and Visualization System (IRSV) was developed in Phase One of this project in order to accommodate the needs of today's Bridge Engineers at the state and local level. Overall goals of this project are: Better u...

  18. [Intraoperative multidimensional visualization].

    Science.gov (United States)

    Sperling, J; Kauffels, A; Grade, M; Alves, F; Kühn, P; Ghadimi, B M

    2016-12-01

    Modern intraoperative techniques of visualization are increasingly being applied in general and visceral surgery. The combination of diverse techniques provides the possibility of multidimensional intraoperative visualization of specific anatomical structures. Thus, it is possible to differentiate between normal tissue and tumor tissue and therefore exactly define tumor margins. The aim of intraoperative visualization of tissue that is to be resected and tissue that should be spared is to lead to a rational balance between oncological and functional results. Moreover, these techniques help to analyze the physiology and integrity of tissues. Using these methods surgeons are able to analyze tissue perfusion and oxygenation. However, to date it is not clear to what extent these imaging techniques are relevant in the clinical routine. The present manuscript reviews the relevant modern visualization techniques focusing on intraoperative computed tomography and magnetic resonance imaging as well as augmented reality, fluorescence imaging and optoacoustic imaging.

  19. Quantized Visual Awareness

    Directory of Open Access Journals (Sweden)

    W Alexander Escobar

    2013-11-01

    Full Text Available The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say that visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons in at least the human striate cortex. Circuits with specific topologies will reproducibly result in visual awareness that corresponds to basic aspects of vision like color, motion and depth. These quanta of awareness (qualia) are produced by the feedforward sweep that occurs through the geniculocortical pathway but are not integrated into a conscious experience until recurrent processing from centers like V4 or V5 selects the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits, and these likely exist across the kingdom Animalia. Thus establishing qualia as the fundamental nature of visual awareness will not only provide a deeper understanding of awareness, but also allow for a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom.

  20. A Fuzzy Integral Ensemble Method in Visual P300 Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Francesco Cavrini

    2016-01-01

    Full Text Available We evaluate the applicability of classifier combination based on fuzzy measures and integrals to Brain-Computer Interfaces (BCI) based on electroencephalography. In particular, we present an ensemble method that can be applied to a variety of systems and evaluate it in the context of a visual P300-based BCI. Offline analysis of data from 5 subjects suggests that the proposed classification strategy is suitable for BCI. Indeed, the achieved performance is significantly greater than the average of the base classifiers and, broadly speaking, similar to that of the best one. Thus the proposed methodology allows realizing systems that can be used by different subjects without the need for a preliminary configuration phase in which the best classifier for each user has to be identified. Moreover, the ensemble is often capable of detecting uncertain situations and turning them from misclassifications into abstentions, thereby improving the level of safety in BCI for environmental or device control.
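    The abstract does not give the authors' implementation; as a generic sketch, one standard way to aggregate classifier confidences with a fuzzy measure is the discrete Choquet integral (the classifier names and measure values below are illustrative assumptions):

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of classifier confidences w.r.t. a fuzzy measure.

    scores: dict mapping classifier name -> confidence in [0, 1]
    mu:     dict mapping frozenset of classifier names -> measure in [0, 1],
            monotone, with mu(empty set) = 0 and mu(all classifiers) = 1
    """
    total, prev = 0.0, 0.0
    remaining = frozenset(scores)
    # Walk the scores in ascending order, weighting each increment by the
    # measure of the coalition of classifiers scoring at least that much.
    for name, s in sorted(scores.items(), key=lambda kv: kv[1]):
        total += (s - prev) * mu[remaining]
        prev = s
        remaining = remaining - {name}
    return total
```

    For an additive measure the Choquet integral reduces to a weighted average of the confidences; a non-additive measure lets the ensemble reward or discount specific coalitions of classifiers.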

  1. The Effects of Visual Attention Span and Phonological Decoding in Reading Comprehension in Dyslexia: A Path Analysis

    OpenAIRE

    Chen, C.; Schneps, M.; Masyn, K.; Thomson, J.

    2016-01-01

    Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a path analysis to examine the direct and indirect paths between visual attention span and reading comprehension whil...

  2. Computer-Based Tutoring of Visual Concepts: From Novice to Experts.

    Science.gov (United States)

    Sharples, Mike

    1991-01-01

    Description of ways in which computers might be used to teach visual concepts discusses hypermedia systems; describes computer-generated tutorials; explains the use of computers to create learning aids such as concept maps, feature spaces, and structural models; and gives examples of visual concept teaching in medical education. (10 references)…

  3. Audio-Visual Speech Recognition Using MPEG-4 Compliant Visual Features

    Directory of Open Access Journals (Sweden)

    Petar S. Aleksic

    2002-11-01

    Full Text Available We describe an audio-visual automatic continuous speech recognition system, which significantly improves speech recognition performance over a wide range of acoustic noise levels, as well as under clean audio conditions. The system utilizes facial animation parameters (FAPs) supported by the MPEG-4 standard for the visual representation of speech. We also describe a robust and automatic algorithm we have developed to extract FAPs from visual data, which does not require hand labeling or extensive training procedures. Principal component analysis (PCA) was performed on the FAPs in order to decrease the dimensionality of the visual feature vectors, and the derived projection weights were used as visual features in the audio-visual automatic speech recognition (ASR) experiments. Both single-stream and multistream hidden Markov models (HMMs) were used to model the ASR system, integrate audio and visual information, and perform relatively large-vocabulary (approximately 1000 words) speech recognition experiments. The experiments performed use clean audio data and audio data corrupted by stationary white Gaussian noise at various SNRs. The proposed system reduces the word error rate (WER) by 20% to 23% relative to audio-only speech recognition WERs, at various SNRs (0–30 dB) with additive white Gaussian noise, and by 19% relative to the audio-only speech recognition WER under clean audio conditions.
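    The PCA step — projecting each frame's FAP vector onto a few principal directions and using the projection weights as visual features — can be sketched via the SVD. This is a generic sketch, not the paper's code; the function name and matrix shapes are assumptions:

```python
import numpy as np

def pca_project(X, k):
    """Project each row of X (one FAP vector per video frame) onto k principal components."""
    Xc = X - X.mean(axis=0)                          # center each FAP dimension
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                              # top-k principal directions
    weights = Xc @ components.T                      # projection weights = visual features
    return weights, components
```

    The rows of `weights` (one low-dimensional vector per frame) would then feed the visual stream of the HMM.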

  4. Motor-auditory-visual integration: The role of the human mirror neuron system in communication and communication disorders.

    Science.gov (United States)

    Le Bel, Ronald M; Pineda, Jaime A; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an integration of motor-auditory-visual information processing related to aspects of language learning including action understanding and recognition. Such integration may also form the basis for language-related constructs such as theory of mind. In this article, we review the MNS as it relates to the cognitive development of language in typically developing children and in children at risk for communication disorders, such as children with autism spectrum disorder (ASD) or hearing impairment. Studying MNS development in these children may help illuminate an important role of the MNS in children with communication disorders. Studies with deaf children are especially important because they offer potential insights into how the MNS is reorganized when one modality, such as audition, is deprived during early cognitive development, and this may have long-term consequences on language maturation and theory of mind abilities. Readers will be able to (1) understand the concept of mirror neurons, (2) identify cortical areas associated with the MNS in animal and human studies, (3) discuss the use of mu suppression in the EEG for measuring the MNS in humans, and (4) discuss MNS dysfunction in children with ASD.

  5. Psychological Adjustment and Levels of Self Esteem in Children with Visual-Motor Integration Difficulties Influences the Results of a Randomized Intervention Trial

    Science.gov (United States)

    Lahav, Orit; Apter, Alan; Ratzon, Navah Z.

    2013-01-01

    This study evaluates how much the effects of intervention programs are influenced by pre-existing psychological adjustment and self-esteem levels in kindergarten and first grade children with poor visual-motor integration skills, from low socioeconomic backgrounds. One hundred and sixteen mainstream kindergarten and first-grade children, from low…

  6. Visual effects and rehabilitation after stroke

    Directory of Open Access Journals (Sweden)

    Fiona Rowe

    2017-03-01

    Full Text Available Strokes, or cerebrovascular accidents (CVA), are common, particularly in older people. The problems of motor function and speech are well known. This article explains the common visual problems which can occur with a stroke and gives information about diagnosis and management.

  7. Effects of prey abundance, distribution, visual contrast and morphology on selection by a pelagic piscivore

    Science.gov (United States)

    Hansen, Adam G.; Beauchamp, David A.

    2014-01-01

    Most predators eat only a subset of possible prey. However, studies evaluating diet selection rarely measure prey availability in a manner that accounts for temporal–spatial overlap with predators, the sensory mechanisms employed to detect prey, and constraints on prey capture. We evaluated the diet selection of cutthroat trout (Oncorhynchus clarkii) feeding on a diverse planktivore assemblage in Lake Washington to test the hypothesis that the diet selection of piscivores would reflect random (opportunistic) as opposed to non-random (targeted) feeding, after accounting for predator–prey overlap, visual detection and capture constraints. Diets of cutthroat trout were sampled in autumn 2005, when the abundance of transparent, age-0 longfin smelt (Spirinchus thaleichthys) was low, and 2006, when the abundance of smelt was nearly seven times higher. Diet selection was evaluated separately using depth-integrated and depth-specific (accounted for predator–prey overlap) prey abundance. The abundance of different prey was then adjusted for differences in detectability and vulnerability to predation to see whether these factors could explain diet selection. In 2005, cutthroat trout fed non-randomly by selecting against the smaller, transparent age-0 longfin smelt, but for the larger age-1 longfin smelt. After adjusting prey abundance for visual detection and capture, cutthroat trout fed randomly. In 2006, depth-integrated and depth-specific abundance explained the diets of cutthroat trout well, indicating random feeding. Feeding became non-random after adjusting for visual detection and capture. Cutthroat trout selected strongly for age-0 longfin smelt, but against similar sized threespine stickleback (Gasterosteus aculeatus) and larger age-1 longfin smelt in 2006. Overlap with juvenile sockeye salmon (O. nerka) was minimal in both years, and sockeye salmon were rare in the diets of cutthroat trout. The direction of the shift between random and non-random selection…

  8. Brain activity patterns uniquely supporting visual feature integration after traumatic brain injury

    Directory of Open Access Journals (Sweden)

    Anjali eRaja Beharelle

    2011-12-01

    Full Text Available Traumatic brain injury (TBI) patients typically respond more slowly and with more variability than controls during tasks of attention requiring speeded reaction time. These behavioral changes are attributable, at least in part, to diffuse axonal injury (DAI), which affects integrated processing in distributed systems. Here we use a multivariate method sensitive to distributed neural activity to compare brain activity patterns of patients with chronic phase moderate-to-severe TBI to those of controls during performance on a visual feature-integration task assessing complex attentional processes that has previously shown sensitivity to TBI. The TBI patients were carefully screened to be free of large focal lesions that can affect performance and brain activation independently of DAI. The task required subjects to hold either one or three features of a target in mind while suppressing responses to distracting information. In controls, the multi-feature condition activated a distributed network including limbic, prefrontal, and medial temporal structures. TBI patients engaged this same network in the single-feature and baseline conditions. In multi-feature presentations, TBI patients alone activated additional frontal, parietal, and occipital regions. These results are consistent with neuroimaging studies using tasks assessing different cognitive domains, where increased spread of brain activity changes was associated with TBI. Our results also extend previous findings that brain activity for relatively moderate task demands in TBI patients is similar to that associated with high task demands in controls.

  9. Ethnic differences in maternal dietary patterns are largely explained by socio-economic score and integration score: a population-based study.

    Science.gov (United States)

    Sommer, Christine; Sletner, Line; Jenum, Anne K; Mørkrid, Kjersti; Andersen, Lene F; Birkeland, Kåre I; Mosdøl, Annhild

    2013-01-01

    The impact of socio-economic position and integration level on the observed ethnic differences in dietary habits has received little attention. To identify and describe dietary patterns in a multi-ethnic population of pregnant women, to explore ethnic differences in odds ratio (OR) for belonging to a dietary pattern, when adjusted for socio-economic status and integration level, and to examine whether the dietary patterns were reflected in levels of biomarkers related to obesity and hyperglycaemia. This cross-sectional study was a part of the STORK Groruddalen study. In total, 757 pregnant women, of whom 59% were of a non-Western origin, completed a food frequency questionnaire in gestational week 28±2. Dietary patterns were extracted through cluster analysis using Ward's method. Four robust clusters were identified, where cluster 4 was considered the healthier dietary pattern and cluster 1 the least healthy. All non-European women as compared to Europeans had higher OR for belonging to the unhealthier dietary patterns 1-3 vs. cluster 4. Women from the Middle East and Africa had the highest OR, 21.5 (95% CI 10.6-43.7), of falling into cluster 1 vs. 4 as compared to Europeans. The ORs decreased substantially after adjusting for socio-economic score and integration score. A non-European ethnic origin and low socio-economic and integration scores were associated with higher ORs for belonging to clusters 1, 2, and 3 as compared to cluster 4. Significant differences in fasting and 2-h glucose, fasting insulin, glycosylated haemoglobin (HbA1c), insulin resistance (HOMA-IR), and total cholesterol were observed across the dietary patterns. After adjusting for ethnicity, differences in fasting insulin (p=0.015) and HOMA-IR (p=0.040) across clusters remained significant, despite low power. The results indicate that socio-economic and integration level may explain a large proportion of the ethnic differences in dietary patterns.
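    Ward's-method cluster extraction of the kind described is available off the shelf in SciPy; the sketch below uses synthetic food-frequency profiles (the data and shapes are illustrative assumptions, not the study's data):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical food-frequency profiles: one row per woman, one column per food group
rng = np.random.default_rng(1)
profiles = np.vstack([rng.normal(0.0, 1.0, (50, 8)),   # one dietary pattern
                      rng.normal(4.0, 1.0, (50, 8))])  # a clearly different pattern

Z = linkage(profiles, method='ward')               # Ward's minimum-variance linkage
clusters = fcluster(Z, t=2, criterion='maxclust')  # cut the tree into 2 clusters
```

    Ward's method merges, at each step, the pair of clusters whose union least increases the total within-cluster variance, which tends to produce compact, similarly sized clusters.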

  10. Visuals Matter! Designing and using effective visual representations to support project and portfolio decisions

    DEFF Research Database (Denmark)

    Geraldi, Joana; Arlt, Mario

    This book is the result of a two-year research project, funded by the Project Management Institute and University College London, on data visualization in the project and portfolio management contexts. Visuals are powerful and constitute an integral part of analyzing problems and making decisions. They can help managers to be sharper and quicker, especially if visuals are used in a mindful manner. The intent of this book is to increase the awareness of project, program and portfolio practitioners and scholars about the importance of visuals and to provide practical recommendations on how they can be used and designed mindfully. The research, which underpins this book, focuses on the impact of visuals on cognition of data in project portfolio decisions. The complexity of portfolio problems often exceeds human cognitive limitations as a result of a number of factors, such as the large number…

  11. A Visual Profile of Queensland Indigenous Children.

    Science.gov (United States)

    Hopkins, Shelley; Sampson, Geoff P; Hendicott, Peter L; Wood, Joanne M

    2016-03-01

    Little is known about the prevalence of refractive error, binocular vision, and other visual conditions in Australian Indigenous children. This is important given the association of these visual conditions with reduced reading performance in the wider population, which may also contribute to the suboptimal reading performance reported in this population. The aim of this study was to develop a visual profile of Queensland Indigenous children. Vision testing was performed on 595 primary schoolchildren in Queensland, Australia. Vision parameters measured included visual acuity, refractive error, color vision, nearpoint of convergence, horizontal heterophoria, fusional vergence range, accommodative facility, AC/A ratio, visual motor integration, and rapid automatized naming. Near heterophoria, nearpoint of convergence, and near fusional vergence range were used to classify convergence insufficiency (CI). Although refractive error (Indigenous, 10%; non-Indigenous, 16%; p = 0.04) and strabismus (Indigenous, 0%; non-Indigenous, 3%; p = 0.03) were significantly less common in Indigenous children, CI was twice as prevalent (Indigenous, 10%; non-Indigenous, 5%; p = 0.04). Reduced visual information processing skills were more common in Indigenous children (reduced visual motor integration [Indigenous, 28%; non-Indigenous, 16%; p < 0.01] and slower rapid automatized naming [Indigenous, 67%; non-Indigenous, 59%; p = 0.04]). The prevalence of visual impairment (reduced visual acuity) and color vision deficiency was similar between groups. Indigenous children have less refractive error and strabismus than their non-Indigenous peers. However, CI and reduced visual information processing skills were more common in this group. Given that vision screenings primarily target visual acuity assessment and strabismus detection, this is an important finding as many Indigenous children with CI and reduced visual information processing may be missed. Emphasis should be placed on identifying…

  12. Visual Representations of the Water Cycle in Science Textbooks

    Science.gov (United States)

    Vinisha, K.; Ramadas, J.

    2013-01-01

    Visual representations, including photographs, sketches and schematic diagrams, are a valuable yet often neglected aspect of textbooks. Visual means of communication are particularly helpful in introducing abstract concepts in science. For effective communication, visuals and text need to be appropriately integrated within the textbook. This study…

  13. A visual description of 2-component spinor calculus

    International Nuclear Information System (INIS)

    Hellsten, H.

    1975-07-01

    Spinors and algebraic operations on them are given a visual description. This structural interpretation of spinors is to be contrasted with the well known quadratic relation between spinors and visual objects (vectors, flagpoles). The interpretation in the present paper is founded on the observation that the product of two successive half-turn rotations about the legs of an angle is a rotation through twice that angle. This observation makes it possible to explain visually the doubling of angles which occurs when vectors are constructed out of spinors. It is seen that, using this explanation, spinor calculus can, in close analogy to 3-dimensional Euclidean vector calculus, be given a purely visual meaning. (Auth.)
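    The half-turn observation underlying the abstract is easy to verify numerically with rotation matrices (this is a generic check of the geometric fact, not material from the paper):

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a 3D axis through the origin (Rodrigues' formula)."""
    ax, ay, az = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -az, ay], [az, 0.0, -ax], [-ay, ax, 0.0]])  # K @ v = axis x v
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

theta = np.pi / 6                                              # 30 deg between the two legs
half_turn_1 = rot([1.0, 0.0, 0.0], np.pi)                      # half turn about leg 1
half_turn_2 = rot([np.cos(theta), np.sin(theta), 0.0], np.pi)  # half turn about leg 2
composed = half_turn_2 @ half_turn_1   # equals a rotation by 2*theta about the z-axis
```

    Composing the two half turns about axes 30° apart yields a 60° rotation about their common perpendicular, exhibiting the angle doubling the abstract refers to.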

  14. Incremental Visualizer for Visible Objects

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    This paper discusses the integration of a database back-end and a visualizer front-end into one tightly coupled system. The main aim is to shorten the data pipeline from database to visualization by using incremental extraction of visible objects in fly-through scenarios. We also argue that passing only relevant data from the database will substantially reduce the overall load of the visualization system. We propose the system Incremental Visualizer for Visible Objects (IVVO), which considers visible objects and enables incremental visualization along the observer's movement path. IVVO is a novel solution which allows data to be visualized and loaded on the fly from the database and which regards visibilities of objects. We run a set of experiments to show that IVVO is feasible in terms of I/O operations and CPU load. We consider the example of data which uses…

  15. Integration of biological networks and gene expression data using Cytoscape

    DEFF Research Database (Denmark)

    Cline, M.S.; Smoot, M.; Cerami, E.

    2007-01-01

    Cytoscape is a free software package for visualizing, modeling and analyzing molecular and genetic interaction networks. This protocol explains how to use Cytoscape to analyze the results of mRNA expression profiling, and other functional genomics and proteomics experiments, in the context of an interaction network obtained for genes of interest. Five major steps are described: (i) obtaining a gene or protein network, (ii) displaying the network using layout algorithms, (iii) integrating with gene expression and other functional attributes, (iv) identifying putative complexes and functional modules and (v) identifying enriched Gene Ontology annotations in the network. These steps provide a broad sample of the types of analyses performed by Cytoscape.

  16. IIS--Integrated Interactome System: a web-based platform for the annotation, analysis and visualization of protein-metabolite-gene-drug interactions by integrating a variety of data sources and tools.

    Science.gov (United States)

    Carazzolle, Marcelo Falsarella; de Carvalho, Lucas Miguel; Slepicka, Hugo Henrique; Vidal, Ramon Oliveira; Pereira, Gonçalo Amarante Guimarães; Kobarg, Jörg; Meirelles, Gabriela Vaz

    2014-01-01

    High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives in the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and result in a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) Annotation module, which assigns annotations from several databases for the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather novel identified interactions, protein and metabolite expression/concentration levels, subcellular localization and computed topological metrics, GO biological processes and KEGG pathways enrichment. This module generates a XGMML file that can be imported into Cytoscape or be visualized directly on the web. We have developed IIS by the integration of diverse databases following the need of appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two

  17. Sledge-Hammer Integration

    Science.gov (United States)

    Ahner, Henry

    2009-01-01

    Integration (here visualized as a pounding process) is mathematically realized by simple transformations, successively smoothing the bounding curve into a straight line and the region-to-be-integrated into an area-equivalent rectangle. The relationship to Riemann sums, and to the trapezoid and midpoint methods of numerical integration, is…
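    For comparison with the abstract's references to Riemann sums and the trapezoid and midpoint rules, here is a generic sketch of those two classical methods (not the article's transformation-based construction):

```python
def midpoint(f, a, b, n=1000):
    """Midpoint rule: sum rectangles whose heights are sampled at interval centers."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n=1000):
    """Trapezoid rule: average the endpoint heights of each subinterval."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))
```

    Both rules replace the region to be integrated with area-equivalent rectangles or trapezoids, which is the same idea the article realizes through successive smoothing transformations.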

  18. Bayesian integration of position and orientation cues in perception of biological and non-biological dynamic forms

    Directory of Open Access Journals (Sweden)

    Steven Matthew Thurman

    2014-02-01

    Full Text Available Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches, in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic…
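    In the Gaussian case, the reliability-weighted integration described above reduces to the standard maximum-likelihood cue-combination rule. A generic sketch (function and variable names are illustrative, not the paper's model):

```python
def fuse_cues(mu_pos, sigma_pos, mu_ori, sigma_ori):
    """Precision-weighted fusion of two Gaussian cue estimates (MLE combination)."""
    w_pos = sigma_pos ** -2 / (sigma_pos ** -2 + sigma_ori ** -2)  # reliability weight
    mu = w_pos * mu_pos + (1.0 - w_pos) * mu_ori                   # fused estimate
    sigma = (sigma_pos ** -2 + sigma_ori ** -2) ** -0.5            # fused uncertainty
    return mu, sigma
```

    As the orientation cue grows noisier (`sigma_ori` up), `w_pos` approaches 1 and position dominates, matching the trade-off the abstract reports; the fused uncertainty is always smaller than either cue alone.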

  19. Constituting fully integrated visual analysis system for Cu(II) on TiO₂/cellulose paper.

    Science.gov (United States)

    Li, Shun-Xing; Lin, Xiaofeng; Zheng, Feng-Ying; Liang, Wenjie; Zhong, Yanxue; Cai, Jiabai

    2014-07-15

    As a cheap and abundant porous material, cellulose filter paper was used to immobilize nano-TiO₂ and denoted as TiO₂/cellulose paper (TCP). With high adsorption capacity for Cu(II) (more than 1.65 mg), TCP was used as an adsorbent, photocatalyst, and colorimetric sensor at the same time. Under the optimum adsorption conditions, i.e., pH 6.5 and 25 °C, the adsorption ratio of Cu(II) was higher than 96.1%. Humic substances from the matrix could be enriched onto TCP, but the interference of their colors with colorimetric detection could be eliminated by photodegradation. In the presence of hydroxylamine, neocuproine, as a selective indicator, was added onto TCP, and a visual color change from white to orange was generated. The concentration of Cu(II) was quantified from the color intensity images using image processing software. This fully integrated visual analysis system was successfully applied for the detection of Cu(II) in 10.0 L of drinking water and seawater with a preconcentration factor of 10⁴. The log-linear calibration curve for Cu(II) was linear in the range of 0.5–50.0 μg L⁻¹ with a determination coefficient (R²) of 0.985, and its detection limit was 0.073 μg L⁻¹.
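    A log-linear calibration of the kind reported can be fit as an ordinary least-squares line in log concentration. The standard concentrations and intensities below are synthetic illustrations, not the paper's data:

```python
import numpy as np

# Synthetic calibration standards (μg/L) and mean color intensities (arbitrary units)
conc = np.array([0.5, 1.0, 5.0, 10.0, 25.0, 50.0])
intensity = 24.0 * np.log10(conc) + 60.0 + np.random.default_rng(7).normal(0, 0.5, conc.size)

slope, intercept = np.polyfit(np.log10(conc), intensity, 1)      # log-linear fit
pred = slope * np.log10(conc) + intercept
r2 = 1 - np.sum((intensity - pred) ** 2) / np.sum((intensity - intensity.mean()) ** 2)

def to_conc(signal):
    """Invert the calibration: color intensity -> Cu(II) concentration (μg/L)."""
    return 10 ** ((signal - intercept) / slope)
```

    Inverting the fitted line, as `to_conc` does, is how a measured color intensity would be converted back to a Cu(II) concentration within the calibrated range.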

  20. Introduction of computing in physics learning visual programing

    International Nuclear Information System (INIS)

    Kim, Cheung Seop

    1999-12-01

    This book introduces physics and programming, foundations of Visual Basic, the grammar of Visual Basic, visual programming, solution of equations, matrix calculations, solution of simultaneous equations, differentiation, differential equations, simultaneous differential equations and second-order differential equations, integration, and solution of partial differential equations. It also covers the BASIC language, terms of Visual Basic, usage of methods, graphic methods, the step-by-step method, the false-position method, Gauss elimination, the difference method, and the Euler method.
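    The Euler method listed above is the simplest of the numerical schemes the book covers; for consistency with the other examples on this page, here is a generic sketch in Python rather than Visual Basic:

```python
import math

def euler(f, t0, y0, h, steps):
    """Explicit Euler: advance y' = f(t, y) from (t0, y0) by `steps` steps of size h."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# y' = y with y(0) = 1, so y(1) should approximate e
approx = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
```

    The global error of the explicit Euler method shrinks linearly with the step size h, which is why the other, higher-order methods in the book exist.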

  1. Visual Attention in Posterior Stroke and Relations to Alexia

    DEFF Research Database (Denmark)

    Petersen, Anders; Vangkilde, Signe; Fabricius, Charlotte

    2016-01-01

    Impaired visual attention is common following strokes in the territory of the middle cerebral artery, particularly in the right hemisphere, while attentional effects of more posterior lesions are less clear. Commonly, such deficits are investigated in relation to specific syndromes like visual agnosia or pure alexia. The aim of this study was to characterize visual processing speed and apprehension span following posterior cerebral artery (PCA) stroke. In addition, the relationship between these attentional parameters and single word reading is investigated, as previous studies have suggested that reduced visual speed and span may explain pure alexia. Eight patients with unilateral PCA strokes (four left hemisphere, four right hemisphere) were selected on the basis of lesion location, rather than the presence of any visual symptoms. Visual attention was characterized by a whole report paradigm…

  2. Deploying web-based visual exploration tools on the grid

    Energy Technology Data Exchange (ETDEWEB)

    Jankun-Kelly, T.J.; Kreylos, Oliver; Shalf, John; Ma, Kwan-Liu; Hamann, Bernd; Joy, Kenneth; Bethel, E. Wes

    2002-02-01

    We discuss a web-based portal for the exploration, encapsulation, and dissemination of visualization results over the Grid. This portal integrates three components: an interface client for structured visualization exploration, a visualization web application to manage the generation and capture of the visualization results, and a centralized portal application server to access and manage grid resources. We demonstrate the usefulness of the developed system using an example for Adaptive Mesh Refinement (AMR) data visualization.

  3. Visualizing Cloud Properties and Satellite Imagery: A Tool for Visualization and Information Integration

    Science.gov (United States)

    Chee, T.; Nguyen, L.; Smith, W. L., Jr.; Spangenberg, D.; Palikonda, R.; Bedka, K. M.; Minnis, P.; Thieman, M. M.; Nordeen, M.

    2017-12-01

    Providing public access to research products, including cloud macro- and microphysical properties and satellite imagery, is a key concern for the NASA Langley Research Center Cloud and Radiation Group. This work describes a web-based visualization tool and API that allow end users to easily create customized views of dynamically generated cloud products, satellite imagery, ground site data and satellite ground track information. The tool has two uses: one to visualize the dynamically created imagery, and the other to provide access to the dynamically generated imagery directly at a later time. Internally, we leverage our practical experience with large, scalable application practices to develop a system that has the largest potential for scalability as well as the ability to be deployed on the cloud to accommodate scalability issues. We build upon the NASA Langley Cloud and Radiation Group's experience with making real-time and historical satellite cloud product information, satellite imagery, ground site data and satellite track information accessible and easily searchable. This tool is the culmination of our prior experience with dynamic imagery generation and provides a way to build a "mash-up" of dynamically generated imagery and related kinds of information that are visualized together to add value to disparate but related information. In support of NASA strategic goals, our group aims to make as much scientific knowledge, observations and products as possible available to the citizen science, research and interested communities, as well as for automated systems to acquire the same information for data mining or other analytic purposes. This tool and the underlying APIs provide a valuable research tool to a wide audience, both as a standalone research tool and as an easily accessed data source that can be mined or used with existing tools.

  4. Developing Explanations and Developing Understanding: Students Explain the Phases of the Moon Using Visual Representations

    Science.gov (United States)

    Parnafes, Orit

    2012-01-01

    This article presents a theoretical model of the process by which students construct and elaborate explanations of scientific phenomena using visual representations. The model describes progress in the underlying conceptual processes in students' explanations as a reorganization of fine-grained knowledge elements based on the Knowledge in Pieces…

  5. Architecture for Teraflop Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Breckenridge, A.R.; Haynes, R.A.

    1999-04-09

    Sandia Laboratories' computational scientists are addressing a very important question: How do we get insight from the human combined with the computer-generated information? The answer inevitably leads to using scientific visualization. Going one technology leap further is teraflop visualization, where the computing model and interactive graphics are an integral whole to provide computing for insight. In order to implement our teraflop visualization architecture, all hardware installed or software coded will be based on open modules and dynamic extensibility principles. We will illustrate these concepts with examples in our three main research areas: (1) authoring content (the computer), (2) enhancing precision and resolution (the human), and (3) adding behaviors (the physics).

  6. Are multiple visual short-term memory storages necessary to explain the retro-cue effect?

    Science.gov (United States)

    Makovski, Tal

    2012-06-01

    Recent research has shown that change detection performance is enhanced when, during the retention interval, attention is cued to the location of the upcoming test item. This retro-cue advantage has led some researchers to suggest that visual short-term memory (VSTM) is divided into a durable, limited-capacity storage and a more fragile, high-capacity storage. Consequently, performance is poor on the no-cue trials because fragile VSTM is overwritten by the test display and only durable VSTM is accessible under these conditions. In contrast, performance is improved in the retro-cue condition because attention keeps fragile VSTM accessible. The aim of the present study was to test the assumptions underlying this two-storage account. Participants were asked to encode an array of colors for a change detection task involving no-cue and retro-cue trials. A retro-cue advantage was found even when the cue was presented after a visual (Experiment 1) or a central (Experiment 2) interference. Furthermore, the magnitude of the interference was comparable between the no-cue and retro-cue trials. These data undermine the main empirical support for the two-storage account and suggest that the presence of a retro-cue benefit cannot be used to differentiate between different VSTM storages.
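For readers unfamiliar with the paradigm: change detection performance in such studies is commonly summarized with Cowan's K, K = N × (hits − false alarms). The sketch below uses hypothetical hit and false-alarm rates chosen for illustration only, not values from this study:

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K capacity estimate for single-probe change detection:
    K = N * (H - FA), where N is the memory set size."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical rates for illustration only (not data from the study):
k_no_cue    = cowans_k(6, 0.70, 0.30)   # no-cue trials
k_retro_cue = cowans_k(6, 0.85, 0.25)   # retro-cue trials
```

On these made-up numbers the retro-cue condition yields the higher capacity estimate (3.6 vs. 2.4 items), mirroring the retro-cue advantage described above.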

  7. Information processing in the primate visual system - An integrated systems perspective

    Science.gov (United States)

    Van Essen, David C.; Anderson, Charles H.; Felleman, Daniel J.

    1992-01-01

    The primate visual system contains dozens of distinct areas in the cerebral cortex and several major subcortical structures. These subdivisions are extensively interconnected in a distributed hierarchical network that contains several intertwined processing streams. A number of strategies are used for efficient information processing within this hierarchy. These include linear and nonlinear filtering, passage through information bottlenecks, and coordinated use of multiple types of information. In addition, dynamic regulation of information flow within and between visual areas may provide the computational flexibility needed for the visual system to perform a broad spectrum of tasks accurately and at high resolution.

  9. Visualization and analysis of atomistic simulation data with OVITO–the Open Visualization Tool

    International Nuclear Information System (INIS)

    Stukowski, Alexander

    2010-01-01

The Open Visualization Tool (OVITO) is a 3D visualization software package designed for post-processing atomistic data obtained from molecular dynamics or Monte Carlo simulations. Unique analysis, editing and animation functions are integrated into its easy-to-use graphical user interface. The software is written in object-oriented C++, is controllable via Python scripts, and is easily extendable through a plug-in interface. It is distributed as open-source software and can be downloaded from http://ovito.sourceforge.net/

  10. Defective chromatic and achromatic visual pathways in developmental dyslexia: Cues for an integrated intervention programme.

    Science.gov (United States)

    Bonfiglio, Luca; Bocci, Tommaso; Minichilli, Fabrizio; Crecchi, Alessandra; Barloscio, Davide; Spina, Donata Maria; Rossi, Bruno; Sartucci, Ferdinando

    2017-01-01

In addition to confirming the involvement of the magnocellular system in developmental dyslexia (DD), the aim was primarily to search for a possible involvement of the parvocellular system and, furthermore, to complete the assessment of the visual chromatic axis by also analysing the koniocellular system. Visual evoked potentials (VEPs) in response to achromatic stimuli with low luminance contrast and low spatial frequency, and to isoluminant red/green and blue/yellow stimuli with high spatial frequency, were recorded in 10 dyslexic children and 10 age- and sex-matched healthy subjects. Dyslexic children showed delayed VEPs to both achromatic stimuli (magnocellular-dorsal stream) and isoluminant red/green and blue/yellow stimuli (parvocellular-ventral and koniocellular streams). To our knowledge, this is the first time that a dysfunction of colour vision has been demonstrated in an objective way (i.e., by means of electrophysiological methods) in children with DD. These results invite speculation about approaches for promoting learning to read and/or improving the existing reading skills of children with, or at risk of, DD. The working hypothesis would be to combine two integrated interventions in a single programme aimed at fostering the function of both the magnocellular and the parvocellular streams.

  11. The effect of early visual deprivation on the neural bases of multisensory processing.

    Science.gov (United States)

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2015-06-01

    Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Engineering visualization utilizing advanced animation

    Science.gov (United States)

    Sabionski, Gunter R.; Robinson, Thomas L., Jr.

    1989-01-01

Engineering visualization is the use of computer graphics to depict engineering analysis and simulation in visual form, from project planning through documentation. Graphics displays let engineers see data represented dynamically, which permits quick evaluation of results. The current state of graphics hardware and software generally allows the creation of two types of 3D graphics. The use of animated video as an engineering visualization tool is presented. The engineering, animation, and videography aspects of animated video production are each discussed. Specific issues include the integration of staffing expertise, hardware, software, and the various production processes. A detailed explanation of the animation process reveals the capabilities of this unique engineering visualization method. Automation of the animation and video production processes is covered, and future directions are proposed.

  13. A deafening flash! Visual interference of auditory signal detection.

    Science.gov (United States)

    Fassnidge, Christopher; Cecconi Marcotti, Claudia; Freeman, Elliot

    2017-03-01

    In some people, visual stimulation evokes auditory sensations. How prevalent and how perceptually real is this? 22% of our neurotypical adult participants responded 'Yes' when asked whether they heard faint sounds accompanying flash stimuli, and showed significantly better ability to discriminate visual 'Morse-code' sequences. This benefit might arise from an ability to recode visual signals as sounds, thus taking advantage of superior temporal acuity of audition. In support of this, those who showed better visual relative to auditory sequence discrimination also had poorer auditory detection in the presence of uninformative visual flashes, though this was independent of awareness of visually-evoked sounds. Thus a visually-evoked auditory representation may occur subliminally and disrupt detection of real auditory signals. The frequent natural correlation between visual and auditory stimuli might explain the surprising prevalence of this phenomenon. Overall, our results suggest that learned correspondences between strongly correlated modalities may provide a precursor for some synaesthetic abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Assessment of visual communication by information theory

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.

    1994-01-01

    This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.

  15. Enhanced alpha-oscillations in visual cortex during anticipation of self-generated visual stimulation.

    Science.gov (United States)

    Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray

    2014-11-01

    The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.

  16. Visual cues and listening effort: individual variability.

    Science.gov (United States)

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2011-10-01

    To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.

  17. Visualization of wind farms

    International Nuclear Information System (INIS)

    Pahlke, T.

    1994-01-01

    With the increasing number of wind energy installations the visual impact of single wind turbines or wind parks is a growing problem for landscape preservation, leading to resistance of local authorities and nearby residents against wind energy projects. To increase acceptance and to form a basis for planning considerations, it is necessary to develop instruments for the visualization of planned wind parks, showing their integration in the landscape. Photorealistic montages and computer animation including video sequences may be helpful in 'getting the picture'. (orig.)

  18. 2011 IEEE Visualization Contest winner: Visualizing unsteady vortical behavior of a centrifugal pump.

    Science.gov (United States)

    Otto, Mathias; Kuhn, Alexander; Engelke, Wito; Theisel, Holger

    2012-01-01

    In the 2011 IEEE Visualization Contest, the dataset represented a high-resolution simulation of a centrifugal pump operating below optimal speed. The goal was to find suitable visualization techniques to identify regions of rotating stall that impede the pump's effectiveness. The winning entry split analysis of the pump into three parts based on the pump's functional behavior. It then applied local and integration-based methods to communicate the unsteady flow behavior in different regions of the dataset. This research formed the basis for a comparison of common vortex extractors and more recent methods. In particular, integration-based methods (separation measures, accumulated scalar fields, particle path lines, and advection textures) are well suited to capture the complex time-dependent flow behavior. This video (http://youtu.be/oD7QuabY0oU) shows simulations of unsteady flow in a centrifugal pump.

  19. A link between visual disambiguation and visual memory.

    Science.gov (United States)

    Hegdé, Jay; Kersten, Daniel

    2010-11-10

    Sensory information in the retinal image is typically too ambiguous to support visual object recognition by itself. Theories of visual disambiguation posit that to disambiguate, and thus interpret, the incoming images, the visual system must integrate the sensory information with previous knowledge of the visual world. However, the underlying neural mechanisms remain unclear. Using functional magnetic resonance imaging (fMRI) of human subjects, we have found evidence for functional specialization for storing disambiguating information in memory versus interpreting incoming ambiguous images. Subjects viewed two-tone, "Mooney" images, which are typically ambiguous when seen for the first time but are quickly disambiguated after viewing the corresponding unambiguous color images. Activity in one set of regions, including a region in the medial parietal cortex previously reported to play a key role in Mooney image disambiguation, closely reflected memory for previously seen color images but not the subsequent disambiguation of Mooney images. A second set of regions, including the superior temporal sulcus, showed the opposite pattern, in that their responses closely reflected the subjects' percepts of the disambiguated Mooney images on a stimulus-to-stimulus basis but not the memory of the corresponding color images. Functional connectivity between the two sets of regions was stronger during those trials in which the disambiguated percept was stronger. This functional interaction between brain regions that specialize in storing disambiguating information in memory versus interpreting incoming ambiguous images may represent a general mechanism by which previous knowledge disambiguates visual sensory information.

  20. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity

    Directory of Open Access Journals (Sweden)

    Mark eLaing

    2015-10-01

The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only or auditory-visual (AV) trials in the scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different (AV incongruent) modulation rates. Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for auditory-visual integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.

  1. Sound improves diminished visual temporal sensitivity in schizophrenia

    NARCIS (Netherlands)

    de Boer-Schellekens, L.; Stekelenburg, J.J.; Maes, J.P.; van Gool, A.R.; Vroomen, J.

    2014-01-01

    Visual temporal processing and multisensory integration (MSI) of sound and vision were examined in individuals with schizophrenia using a visual temporal order judgment (TOJ) task. Compared to a non-psychiatric control group, persons with schizophrenia were less sensitive judging the temporal order

  2. Early vision and visual attention

    OpenAIRE

    Gvozdenović Vasilije P.

    2003-01-01

The question of whether visual perception is spontaneous and sudden, or unfolds through several phases mediated by higher cognitive processes, has been raised ever since the early work of the Gestalt psychologists. In the early 1980s, Treisman proposed the feature integration theory of attention (FIT), based on findings from neuroscience. Soon after the theory was published, a new scientific approach emerged, investigating several visual perception phenomena. The most widely researched were the key constru...

  3. ARTEFACTOS DIALÓGICOS: UNA PROPUESTA PARA INTEGRAR LA EDUCACIÓN DE ARTES MUSICALES Y VISUALES (DIALOGIC ARTIFACTS: A PROPOSAL TO INTEGRATE THE EDUCATION OF MUSICAL AND VISUAL ARTS

    Directory of Open Access Journals (Sweden)

    Arenas Navarrete Mario

    2011-08-01

The purpose of this essay is to propose the creation of units that integrate the musical and visual arts through the participation of students of approximately 12 to 17 years of age in the creation of "Dialogic Artifacts", that is, kinetic and interactive sound sculptures. The particularity of these installation-sculptures is that they establish, and make explicit, various kinds of dialogue with nature. They represent the crystallization of a process begun in 1986 in the Department of Music of the Universidad de La Serena, Chile, characterized by a defence of disciplinary transversality in opposition to specialism. University students and children from its Experimental Music School have taken part, together with teachers, visual artists, composers and researchers. For students to build these artifacts requires, as a prerequisite, their empowerment and the development of their agency and creativity so that, in collaboration with teachers of different artistic, scientific and humanistic subjects, they bring the material and structural configuration of these devices within the aesthetic gaze, thereby integrating rationalism and expressiveness. All of this is viewed through the epistemic filter provided by intercultural education, so as to capture and project ancestry, gestures, modes, iconographies, idiolects, identities and heritage. Abstract: In this essay, we propose to create units of integration of the musical and visual arts through the participation of students ranging approximately from 12 to 17 years of age, for the creation of the "Dialogical Artifacts", i.e., kinetic and interactive sound sculptures. The particularity of these artifacts lies in the fact that they establish and make explicit different types of dialogues with nature from a transversal perspective of the curriculum. This initiative was taken for the first

  4. Multimodal integration in statistical learning

    DEFF Research Database (Denmark)

    Mitchell, Aaron; Christiansen, Morten Hyllekvist; Weiss, Dan

    2014-01-01

Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally … facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
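The segmentation mechanism probed in such statistical-learning studies is usually modeled as tracking transitional probabilities between adjacent syllables and placing word boundaries where the probability dips. A minimal generic sketch of that computation (the nonsense-word stream and threshold below are invented for illustration, not the study's materials):

```python
from collections import defaultdict

def transitional_probabilities(syllables):
    """Forward transitional probability P(next | current) from bigram counts."""
    pair_counts = defaultdict(int)
    first_counts = defaultdict(int)
    for a, b in zip(syllables, syllables[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, tps, threshold=0.75):
    """Place a word boundary wherever the forward TP dips below threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# An invented stream of three nonsense 'words' (tupiro, golabu, bidaku):
stream = ("tu pi ro go la bu tu pi ro bi da ku "
          "go la bu bi da ku tu pi ro").split()
tps = transitional_probabilities(stream)
words = segment(stream, tps)
```

Within-word transitions (e.g., tu→pi) have probability 1.0 in this stream, while across-word transitions (e.g., ro→go) fall to 0.5, so the segmenter recovers the three embedded words.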

  5. Organization, maturation and plasticity of multisensory integration: Insights from computational modelling studies

    Directory of Open Access Journals (Sweden)

    Cristiano eCuppini

    2011-05-01

In this paper, we present two neural network models, devoted to two specific and widely investigated aspects of multisensory integration, in order to show the potential of computational models to give insight into the neural mechanisms underlying the organization, development and plasticity of multisensory integration in the brain. The first model considers visual-auditory interaction in a midbrain structure named the Superior Colliculus (SC). The model is able to reproduce and explain the main physiological features of multisensory integration in SC neurons and to describe how the SC's integrative capability, not present at birth, develops gradually during postnatal life depending on sensory experience with cross-modal stimuli. The second model tackles the problem of how tactile stimuli on a body part and visual (or auditory) stimuli close to the same body part are integrated in multimodal parietal neurons to form the perception of peripersonal (i.e., near) space. The model investigates how the extension of peripersonal space, where multimodal integration occurs, may be modified by experience, such as the use of a tool to interact with far space. The utility of the modelling approach relies on several aspects: (i) the two models, although devoted to different problems and simulating different brain regions, share some common mechanisms (lateral inhibition and excitation, non-linear neuron characteristics, recurrent connections, competition, Hebbian rules of potentiation and depression) that may govern more generally the fusion of the senses in the brain, and the learning and plasticity of multisensory integration; (ii) the models may help the interpretation of behavioural and psychophysical responses in terms of neural activity and synaptic connections; (iii) the models can make testable predictions that can help guide future experiments in order to validate, reject, or modify the main assumptions.
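To make the Hebbian ingredient concrete, here is a deliberately minimal single-unit sketch, not the authors' full network model: a unit whose visual and auditory weights start weak ("at birth") and grow under a Hebbian rule with Oja-style normalization during exposure to coincident cross-modal stimuli, after which a bimodal stimulus drives the unit more than either modality alone.

```python
import random

def train_multisensory_unit(trials=2000, lr=0.01, seed=1):
    """Toy developmental sketch: an SC-like unit whose visual and auditory
    weights start weak and are shaped by Hebbian potentiation (with Oja
    decay for stability) during coincident cross-modal stimulation."""
    rng = random.Random(seed)
    w = [0.1, 0.1]                       # [visual, auditory] synaptic weights
    for _ in range(trials):
        s = rng.random()                 # intensity of a common external source
        v, a = s, s                      # spatially/temporally coincident inputs
        y = w[0] * v + w[1] * a          # postsynaptic activity
        w[0] += lr * y * (v - y * w[0])  # Hebbian growth; Oja term bounds it
        w[1] += lr * y * (a - y * w[1])
    return w

w = train_multisensory_unit()
visual_only = w[0] * 1.0                # response to a visual-only stimulus
bimodal     = w[0] * 1.0 + w[1] * 1.0   # response to the cross-modal stimulus
```

The Oja decay term (−y²w) is one standard way to keep pure Hebbian growth bounded; the models described above instead use potentiation and depression rules inside a recurrent network with lateral inhibition, but the developmental logic, integration emerging from cross-modal experience, is the same.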

  6. Attention modulates trans-saccadic integration.

    Science.gov (United States)

    Stewart, Emma E M; Schütz, Alexander C

    2018-01-01

    With every saccade, humans must reconcile the low resolution peripheral information available before a saccade, with the high resolution foveal information acquired after the saccade. While research has shown that we are able to integrate peripheral and foveal vision in a near-optimal manner, it is still unclear which mechanisms may underpin this important perceptual process. One potential mechanism that may moderate this integration process is visual attention. Pre-saccadic attention is a well documented phenomenon, whereby visual attention shifts to the location of an upcoming saccade before the saccade is executed. While it plays an important role in other peri-saccadic processes such as predictive remapping, the role of attention in the integration process is as yet unknown. This study aimed to determine whether the presentation of an attentional distractor during a saccade impaired trans-saccadic integration, and to measure the time-course of this impairment. Results showed that presenting an attentional distractor impaired integration performance both before saccade onset, and during the saccade, in selected subjects who showed integration in the absence of a distractor. This suggests that visual attention may be a mechanism that facilitates trans-saccadic integration. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  7. Perceptual integration without conscious access.

    Science.gov (United States)

    Fahrenfort, Johannes J; van Leeuwen, Jonathan; Olivers, Christian N L; Hogendoorn, Hinze

    2017-04-04

    The visual system has the remarkable ability to integrate fragmentary visual input into a perceptually organized collection of surfaces and objects, a process we refer to as perceptual integration. Despite a long tradition of perception research, it is not known whether access to consciousness is required to complete perceptual integration. To investigate this question, we manipulated access to consciousness using the attentional blink. We show that, behaviorally, the attentional blink impairs conscious decisions about the presence of integrated surface structure from fragmented input. However, despite conscious access being impaired, the ability to decode the presence of integrated percepts remains intact, as shown through multivariate classification analyses of electroencephalogram (EEG) data. In contrast, when disrupting perception through masking, decisions about integrated percepts and decoding of integrated percepts are impaired in tandem, while leaving feedforward representations intact. Together, these data show that access consciousness and perceptual integration can be dissociated.

  8. An attempt to explain the uranium 238 resonance integral discrepancy

    International Nuclear Information System (INIS)

    Tellier, H.; Grandotto, M.

    1978-01-01

Studies of the uranium 238 resonance integral discrepancy were carried out for light water reactor physics. It was shown that using recently published resonance parameters and substituting a multilevel formalism for the usual Breit and Wigner formula reduced the well-known discrepancy between two values of the uranium 238 effective resonance integral: the value calculated from nuclear data and the one deduced from critical experiments. Since the cross section computed with these assumptions agrees quite well with the Oak Ridge transmission data, it was used to obtain the self-shielding effect and the capture rate in light water lattices. The multiplication factor calculated with this method is found to be very close to the experimental value. Preliminary results for a set of benchmarks relative to several types of thermal neutron reactors lead to very low discrepancies. The reactivity loss is only 130 × 10^-5 instead of 650 × 10^-5 in the case of the usual libraries and the single-level formula.
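For orientation, the single-level shape at issue is the Breit-Wigner (Lorentzian) resonance, and a resonance integral is the 1/E-weighted integral of the cross section. The sketch below evaluates that integral numerically with purely illustrative parameters loosely resembling the 6.67 eV resonance of uranium 238; these are not evaluated nuclear data, and the multilevel interference discussed in the abstract is deliberately omitted.

```python
def breit_wigner_capture(E, E0, gamma, sigma_peak):
    """Single-level Breit-Wigner (Lorentzian) capture cross section, in barns."""
    half = gamma / 2.0
    return sigma_peak * half ** 2 / ((E - E0) ** 2 + half ** 2)

def resonance_integral(E0, gamma, sigma_peak, e_lo, e_hi, steps=100_000):
    """I = integral of sigma(E) dE / E over [e_lo, e_hi], trapezoidal rule."""
    h = (e_hi - e_lo) / steps
    total = 0.0
    for i in range(steps + 1):
        E = e_lo + i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * breit_wigner_capture(E, E0, gamma, sigma_peak) / E
    return total * h

# Illustrative parameters only, loosely resembling the 6.67 eV resonance
# of uranium 238 (not evaluated nuclear data):
I = resonance_integral(E0=6.67, gamma=0.025, sigma_peak=7000.0,
                       e_lo=5.0, e_hi=9.0)
```

For a narrow resonance the analytic value is approximately pi * sigma_peak * (gamma / 2) / E0, about 41 barns for these made-up parameters; a multilevel formalism modifies this shape through interference between neighbouring resonances, which is the effect the abstract credits with reducing the discrepancy.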

  9. Ibie'ka (Ideographs): Developing Visual Signs for Expressing ...

    African Journals Online (AJOL)

    Visual signs (ideographs) are artistic codified expressions that promote social and cultural integration. They are normally based on popular conventions which over a period of time become generally accepted. In pre-western literate Africa, apart from oral communication, visual codes were employed within social groups.

  10. Explaining Away Intuitions

    Directory of Open Access Journals (Sweden)

    Jonathan Ichikawa

    2009-12-01

What is it to explain away an intuition? Philosophers regularly attempt to explain intuitions away, but it is often unclear what the success conditions for their project consist in. I attempt to articulate some of these conditions, taking philosophical case studies as guides, and arguing that many attempts to explain away intuitions underestimate the challenge the project of explaining away involves. I will conclude, therefore, that explaining away intuitions is a more difficult task than has sometimes been appreciated; I also suggest, however, that the importance of explaining away intuitions has often been exaggerated.

  11. A Neural Signature of Divisive Normalization at the Level of Multisensory Integration in Primate Cortex.

    Science.gov (United States)

    Ohshiro, Tomokazu; Angelaki, Dora E; DeAngelis, Gregory C

    2017-07-19

    Studies of multisensory integration by single neurons have traditionally emphasized empirical principles that describe nonlinear interactions between inputs from two sensory modalities. We previously proposed that many of these empirical principles could be explained by a divisive normalization mechanism operating in brain regions where multisensory integration occurs. This normalization model makes a critical diagnostic prediction: a non-preferred sensory input from one modality, which activates the neuron on its own, should suppress the response to a preferred input from another modality. We tested this prediction by recording from neurons in macaque area MSTd that integrate visual and vestibular cues regarding self-motion. We show that many MSTd neurons exhibit the diagnostic form of cross-modal suppression, whereas unisensory neurons in area MT do not. The normalization model also fits population responses better than a model based on subtractive inhibition. These findings provide strong support for a divisive normalization mechanism in multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.
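The divisive normalization mechanism described above can be illustrated with a minimal sketch (not the authors' fitted model; the weights, exponent, and semi-saturation constant below are arbitrary choices): each neuron's linear cue combination is divided by the pooled activity of the whole population, so a weak non-preferred input that barely drives a given neuron still inflates the pool and suppresses that neuron's response.

```python
import numpy as np

def multisensory_responses(x_vis, x_ves, weights, alpha=1.0, n=2.0):
    """Divisive normalization over a population of multisensory neurons.

    weights: (N, 2) array of (visual, vestibular) modality weights.
    Returns the (N,) vector of normalized responses.
    """
    drive = weights @ np.array([x_vis, x_ves])   # linear cue combination
    drive = np.maximum(drive, 0.0) ** n          # expansive nonlinearity
    pool = drive.mean()                          # shared normalization pool
    return drive / (alpha ** n + pool)

# Population spanning a range of visual-vestibular weight ratios.
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=(100, 2))
w[0] = [1.0, 0.05]   # a strongly visual-preferring neuron

r_vis_only = multisensory_responses(1.0, 0.0, w)[0]
r_bimodal  = multisensory_responses(1.0, 0.4, w)[0]
# Cross-modal suppression: the weak vestibular input mainly inflates the
# normalization pool, so the visual-preferring neuron's response drops.
print(r_bimodal < r_vis_only)
```

This reproduces the diagnostic prediction in the abstract: a non-preferred cross-modal input suppresses the response to the preferred input, something a purely subtractive model does not naturally produce.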

  12. Mapping scientific frontiers : the quest for knowledge visualization.

    Energy Technology Data Exchange (ETDEWEB)

    Boyack, Kevin W.

    2003-08-01

    - supermassive black holes, cross-domain applications of Pathfinder networks, mass extinction debates, impact of Don Swanson's work, and mad cow disease and vCJD in humans - succeed in explaining how visualization can be used to show the development of, competition between, and eventual acceptance (or replacement) of scientific paradigms. Although not addressed specifically, Chen's work nonetheless makes the persuasive argument that visual maps alone are not sufficient to explain 'the making of science' to a non-expert in a particular field. Rather, expert knowledge is still required to interpret these maps and to explain the paradigms. This combination of visual maps and expert knowledge, used jointly to good effect in the book, becomes a potent means for explaining progress in science to the expert and non-expert alike. Work to extend the GSA technique to explore latent domain knowledge (important work that falls below the citation thresholds typically used in GSA) is also explored here.

  13. Development of an exergy-electrical analogy for visualizing and modeling building integrated energy systems

    International Nuclear Information System (INIS)

    Saloux, E.; Teyssedou, A.; Sorin, M.

    2015-01-01

    Highlights: • The exergy-electrical analogy is developed for energy systems used in buildings. • This analogy has been developed for a complete set of system arrangement options. • Different possibilities of inter-connection are illustrated using analog switches. • Adaptability and utility of the diagram over traditional ones are emphasized. - Abstract: An exergy-electrical analogy, similar to the electrical analogy for heat transfer, is developed and applied to the case of integrated energy systems operating in buildings. Its construction is presented for the case of space heating with electric heaters, heat pumps and solar collectors. The proposed analogy has been applied to a set of system arrangement options for satisfying the building heating demand (space heating, domestic hot water); different alternatives for connecting the units have been represented with switches in a visualization scheme. On this basis, a solar-assisted heat pump using ice storage has been investigated. The diagram directly permits energy paths and their associated exergy destruction to be visualized; hence, sources of irreversibility are identifiable. It can be helpful for understanding the global process and its operation as well as for identifying exergy losses. The method used to construct the diagram makes it easily adaptable to other units, structures, or models, depending on the complexity of the process. The use of switches could be very useful for optimization purposes.

  14. N1 enhancement in synesthesia during visual and audio-visual perception in semantic cross-modal conflict situations: an ERP study

    Directory of Open Access Journals (Sweden)

    Christopher Sinke

    2014-01-01

    Full Text Available Synesthesia entails a special kind of sensory perception, where stimulation in one sensory modality leads to an internally generated perceptual experience of another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as here the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed changed multimodal integration, thus suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task in combination with animate and inanimate objects presented visually or auditory-visually, in an audio-visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found an enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.

  15. Visualizing Contour Trees within Histograms

    DEFF Research Database (Denmark)

    Kraus, Martin

    2010-01-01

    Many of the topological features of the isosurfaces of a scalar volume field can be compactly represented by its contour tree. Unfortunately, the contour trees of most real-world volume data sets are too complex to be visualized by dot-and-line diagrams. Therefore, we propose a new visualization that is suitable for large contour trees and efficiently conveys the topological structure of the most important isosurface components. This visualization is integrated into a histogram of the volume data; thus, it offers strictly more information than a traditional histogram. We present algorithms to automatically compute the graph layout and to calculate appropriate approximations of the contour tree and the surface area of the relevant isosurface components. The benefits of this new visualization are demonstrated with the help of several publicly available volume data sets.

  16. Effect of Common Visual Dysfunctions on Reading.

    Science.gov (United States)

    McPartland, Brian P.

    1985-01-01

    Six common visual dysfunctions are briefly explained and their relationships to reading noted: (1) ametropia, refractive error; (2) inaccurate saccades, the small jumping eye movements used in reading; (3) inefficient binocularity/fusion; (4) insufficient convergence/divergence; (5) heterophoria, imbalance in extra-ocular muscles; and (6)…

  17. Visual Sample Plan (VSP) - FIELDS Integration

    Energy Technology Data Exchange (ETDEWEB)

    Pulsipher, Brent A.; Wilson, John E.; Gilbert, Richard O.; Hassig, Nancy L.; Carlson, Deborah K.; Bing-Canar, John; Cooper, Brian; Roth, Chuck

    2003-04-19

    Two software packages, VSP 2.1 and FIELDS 3.5, are being used by environmental scientists to plan the number and type of samples required to meet project objectives, display those samples on maps, query a database of past sample results, produce spatial models of the data, and analyze the data in order to arrive at defensible decisions. VSP 2.0 is an interactive tool to calculate optimal sample size and optimal sample location based on user goals, risk tolerance, and variability in the environment and in lab methods. FIELDS 3.0 is a set of tools to explore the sample results in a variety of ways to make defensible decisions with quantified levels of risk and uncertainty. However, FIELDS 3.0 has only a small sample-design module. VSP 2.0, on the other hand, offers over 20 sampling goals, allowing the user to account for site-specific assumptions such as non-normality of sample results and separate field and laboratory measurement variability, make two-sample comparisons, perform confidence-interval estimation, use sequential search sampling methods, and much more. Over 1,000 copies of VSP are in use today. FIELDS is used in nine of the ten U.S. EPA regions, by state regulatory agencies, and most recently by several countries internationally. Both software packages have been peer-reviewed, enjoy broad usage, and have been accepted by regulatory agencies as well as site project managers as key tools to help collect data and make environmental cleanup decisions. Recently, the two software packages were integrated, allowing the user to take advantage of the many design options of VSP and the analysis and modeling options of FIELDS. The transition between the two is simple for the user – VSP can be called from within FIELDS, automatically passing a map to VSP and automatically retrieving sample locations and design information when the user returns to FIELDS. This paper will describe the integration, give a demonstration of the integrated package, and give users download
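The optimal-sample-size calculations mentioned above rest on standard statistical power formulas. As a hedged illustration, this is the textbook one-sided z-test sample size, not necessarily the exact computation VSP performs:

```python
from math import ceil
from statistics import NormalDist

def sample_size_mean(sigma, delta, alpha=0.05, beta=0.10):
    """Samples needed to detect a mean shift of `delta` with a one-sided
    z-test: n = ((z_{1-alpha} + z_{1-beta}) * sigma / delta)^2.
    `sigma` is total standard deviation (field plus lab variability);
    `alpha` is the false-positive rate, `beta` the false-negative rate."""
    z = NormalDist()
    n = ((z.inv_cdf(1 - alpha) + z.inv_cdf(1 - beta)) * sigma / delta) ** 2
    return ceil(n)

# e.g. total sd of 2 concentration units, must detect a shift of 1 unit
print(sample_size_mean(sigma=2.0, delta=1.0))
```

The same structure explains why separating field and laboratory variability matters: it changes the sigma that enters the formula, and hence the required n.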

  18. Development of Visual CINDER Code with Visual C#.NET

    International Nuclear Information System (INIS)

    Kim, Oyeon

    2016-01-01

    CINDER code, CINDER'90 or CINDER2008, which is integrated with the Monte Carlo code MCNPX, is widely used to calculate the inventory of nuclides in irradiated materials. The MCNPX code adds decay processes to the particle transport scheme that traditionally only covered prompt processes. The integration schemes serve not only the reactor community (MCNPX burnup) but also the accelerator community (residual production information). The big benefit of providing these options lies in the easy cross comparison of the transmutation codes, since the calculations are based on exactly the same material, neutron flux and isotope production/destruction inputs. However, it is just frustratingly cumbersome to use. In addition, multiple human interventions may increase the possibility of making errors. The number of significant digits in the input data varies in steps, which may cause big errors for highly nonlinear problems. Thus, it is worthwhile to find a new way to wrap all the codes and procedures in one consistent package which can provide ease of use. The Visual CINDER code development is underway with the Visual C#.NET framework. It provides a few benefits for atomic transmutation simulation with the CINDER code. A few interesting and useful properties of the Visual C#.NET framework are introduced. We also showed that the wrapper could make the simulation accurate for highly nonlinear transmutation problems and also increase the possibility of directly combining the radiation transport code MCNPX with the CINDER code. Direct combination of CINDER with MCNPX in a wrapper will provide more functionalities for radiation shielding and protection studies.

  19. Development of Visual CINDER Code with Visual C#.NET

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Oyeon [Institute for Modeling and Simulation Convergence, Daegu (Korea, Republic of)

    2016-10-15

    CINDER code, CINDER'90 or CINDER2008, which is integrated with the Monte Carlo code MCNPX, is widely used to calculate the inventory of nuclides in irradiated materials. The MCNPX code adds decay processes to the particle transport scheme that traditionally only covered prompt processes. The integration schemes serve not only the reactor community (MCNPX burnup) but also the accelerator community (residual production information). The big benefit of providing these options lies in the easy cross comparison of the transmutation codes, since the calculations are based on exactly the same material, neutron flux and isotope production/destruction inputs. However, it is just frustratingly cumbersome to use. In addition, multiple human interventions may increase the possibility of making errors. The number of significant digits in the input data varies in steps, which may cause big errors for highly nonlinear problems. Thus, it is worthwhile to find a new way to wrap all the codes and procedures in one consistent package which can provide ease of use. The Visual CINDER code development is underway with the Visual C#.NET framework. It provides a few benefits for atomic transmutation simulation with the CINDER code. A few interesting and useful properties of the Visual C#.NET framework are introduced. We also showed that the wrapper could make the simulation accurate for highly nonlinear transmutation problems and also increase the possibility of directly combining the radiation transport code MCNPX with the CINDER code. Direct combination of CINDER with MCNPX in a wrapper will provide more functionalities for radiation shielding and protection studies.

  20. Frequency modulation of neural oscillations according to visual task demands.

    Science.gov (United States)

    Wutz, Andreas; Melcher, David; Samaha, Jason

    2018-02-06

    Temporal integration in visual perception is thought to occur within cycles of occipital alpha-band (8-12 Hz) oscillations. Successive stimuli may be integrated when they fall within the same alpha cycle and segregated for different alpha cycles. Consequently, the speed of alpha oscillations correlates with the temporal resolution of perception, such that lower alpha frequencies provide longer time windows for perceptual integration and higher alpha frequencies correspond to faster sampling and segregation. Can the brain's rhythmic activity be dynamically controlled to adjust its processing speed according to different visual task demands? We recorded magnetoencephalography (MEG) while participants switched between task instructions for temporal integration and segregation, holding stimuli and task difficulty constant. We found that the peak frequency of alpha oscillations decreased when visual task demands required temporal integration compared with segregation. Alpha frequency was strategically modulated immediately before and during stimulus processing, suggesting a preparatory top-down source of modulation. Its neural generators were located in occipital and inferotemporal cortex. The frequency modulation was specific to alpha oscillations and did not occur in the delta (1-3 Hz), theta (3-7 Hz), beta (15-30 Hz), or gamma (30-50 Hz) frequency range. These results show that alpha frequency is under top-down control to increase or decrease the temporal resolution of visual perception.

  1. Visual DMDX: A web-based authoring tool for DMDX, a Windows display program with millisecond accuracy.

    Science.gov (United States)

    Garaizar, Pablo; Reips, Ulf-Dietrich

    2015-09-01

    DMDX is a software package for the experimental control and timing of stimulus display for Microsoft Windows systems. DMDX is reliable, flexible, millisecond accurate, and can be downloaded free of charge; therefore it has become very popular among experimental researchers. However, setting up a DMDX-based experiment is burdensome because of its command-based interface. Further, DMDX relies on RTF files in which parts of the stimuli, design, and procedure of an experiment are defined in a complicated (DMASTR-compatible) syntax. Other experiment software, such as E-Prime, PsychoPy, and WEXTOR, became successful as a result of integrated visual authoring tools. Such an intuitive interface was lacking for DMDX. We therefore created and present here Visual DMDX (http://visualdmdx.com/), an HTML5-based web interface to set up experiments and export them to the DMDX item file format in RTF. Visual DMDX offers most of the features available from the rich DMDX/DMASTR syntax, and it is a useful tool to support researchers who are new to DMDX. Both old and modern versions of DMDX syntax are supported. Further, with Visual DMDX, we go beyond DMDX by having added export to JSON (a versatile web format), easy backup, and a preview option for experiments. In two examples, one experiment each on lexical decision making and affective priming, we explain in a step-by-step fashion how to create experiments using Visual DMDX. We release Visual DMDX under an open-source license to foster collaboration in its continuous improvement.
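The JSON export mentioned above can be pictured as serializing a trial list; the field names below are hypothetical, chosen for illustration rather than taken from Visual DMDX's actual schema:

```python
import json

# Hypothetical field names -- Visual DMDX's real JSON schema may differ.
experiment = {
    "title": "Lexical decision demo",
    "trials": [
        {"item": 1, "stimulus": "table", "type": "word",    "timeout_ms": 2000},
        {"item": 2, "stimulus": "blick", "type": "nonword", "timeout_ms": 2000},
    ],
}
spec = json.dumps(experiment, indent=2)      # what a web tool would export
print(json.loads(spec)["trials"][0]["stimulus"])  # round-trips cleanly
```

The appeal of such a format over DMASTR-style RTF is exactly this round-tripping: a structured spec can be validated, previewed, and backed up without parsing a command syntax.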

  2. Writing virtual environments for software visualization

    CERN Document Server

    Jeffery, Clinton

    2015-01-01

    This book describes the software for creating networked, 3D multi-user virtual environments that allow users to create and remotely share visualizations of program behavior. The authors cover the major features of collaborative virtual environments and how to program them in a very high-level language, and show how visualization can enable important advances in our ability to understand and reduce the costs of maintaining software. The book also examines the application of popular game-like software technologies. • Discusses the acquisition of program behavior data to be visualized • Demonstrates the integration of multiple 2D and 3D dynamic views within a 3D scene • Presents the network messaging capabilities to share those visualizations

  3. Development of ASME Code Section XI visual examination requirements

    International Nuclear Information System (INIS)

    Cook, J.F.

    1990-01-01

    Section XI of the American Society of Mechanical Engineers Boiler and Pressure Vessel Code (ASME Code) defines three types of nondestructive examinations: visual, surface, and volumetric. Visual examination is important since it is the primary examination method for many safety-related components and systems and is also used as a backup examination for the components and systems which receive surface or volumetric examinations. Recent activity in the Section XI Code organization to improve the rules for visual examinations is reviewed, and the technical basis for the new rules, which cover illumination, vision acuity, and performance demonstration, is explained.

  4. Recovery of visual-field defects after occipital lobe infarction: a perimetric study.

    Science.gov (United States)

    Çelebisoy, Mehmet; Çelebisoy, Neşe; Bayam, Ece; Köse, Timur

    2011-06-01

    To assess the temporal course of homonymous visual-field defects due to occipital lobe infarction, by using automated perimetry. Thirty-two patients with ischaemic infarction of the occipital lobe were studied prospectively, using a Humphrey Visual Field Analyser II. The visual field of each eye was divided into central, paracentral and peripheral zones. The mean visual sensitivity of each zone was calculated and used for the statistical analysis. The results of the initial examination, performed within 2 weeks of stroke, were compared with the results of the sixth-month control. The lesions were assigned by MRI to the localisations optic radiation, striate cortex, occipital pole, and occipital convexity. A statistically significant improvement was noted, especially for the lower quadrants. Lesions of the occipital pole and convexity were not significantly associated with visual-field recovery. However, involvement of the striate cortex and extensive lesions involving all the areas studied were significantly associated with poor prognosis. Homonymous visual-field defects in our patients improved within 6 months. Restoration of the lower quadrants and especially the peripheral zones was noted. Incomplete damage to the striate cortex, which has a varying pattern of vascular supply, could explain this finding. The cortical magnification factor, whereby the receptive-field size of striate cortex cells increases with visual-field eccentricity, may explain the more significant improvement in the peripheral zones.

  5. Storytelling and Visualization: An Extended Survey

    Directory of Open Access Journals (Sweden)

    Chao Tong

    2018-03-01

    Full Text Available Throughout history, storytelling has been an effective way of conveying information and knowledge. In the field of visualization, storytelling is rapidly gaining momentum and evolving cutting-edge techniques that enhance understanding. Many communities have commented on the importance of storytelling in data visualization. Storytellers are integrating complex visualizations into their narratives in growing numbers. In this paper, we present a survey of storytelling literature in visualization and give an overview of the common and important elements in storytelling visualization. We also describe the challenges in this field as well as a novel classification of the literature on storytelling in visualization. Our classification scheme highlights the open and unsolved problems in this field as well as the more mature storytelling sub-fields. The survey offers a concise overview and a starting point into this rapidly evolving research trend and provides a deeper understanding of the topic.

  6. 2011 IEEE Visualization Contest Winner: Visualizing Unsteady Vortical Behavior of a Centrifugal Pump

    KAUST Repository

    Otto, Mathias

    2012-09-01

    In the 2011 IEEE Visualization Contest, the dataset represented a high-resolution simulation of a centrifugal pump operating below optimal speed. The goal was to find suitable visualization techniques to identify regions of rotating stall that impede the pump's effectiveness. The winning entry split analysis of the pump into three parts based on the pump's functional behavior. It then applied local and integration-based methods to communicate the unsteady flow behavior in different regions of the dataset. This research formed the basis for a comparison of common vortex extractors and more recent methods. In particular, integration-based methods (separation measures, accumulated scalar fields, particle path lines, and advection textures) are well suited to capture the complex time-dependent flow behavior. This video (http://youtu.be/oD7QuabY0oU) shows simulations of unsteady flow in a centrifugal pump. © 2012 IEEE.

  7. Storage and binding of object features in visual working memory

    OpenAIRE

    Bays, Paul M; Wu, Emma Y; Husain, Masud

    2010-01-01

    An influential conception of visual working memory is of a small number of discrete memory “slots”, each storing an integrated representation of a single visual object, including all its component features. When a scene contains more objects than there are slots, visual attention controls which objects gain access to memory.

  8. Interactive Visual Intervention Planning: Interactive Visualization for Intervention Planning in Particle Accelerator Environments with Ionizing Radiation

    CERN Document Server

    Fabry, Thomas; Feral, Bruno

    2013-01-01

    Intervention planning is crucial for maintenance operations in particle accelerator environments with ionizing radiation, during which the radiation dose contracted by maintenance workers should be reduced to a minimum. In this context, we discuss the visualization aspects of a new software tool, which integrates interactive exploration of a scene depicting an accelerator facility augmented with residual radiation level simulations, with the visualization of intervention data such as the followed trajectory and maintenance tasks. The visualization of each of these aspects has its effect on the final predicted contracted radiation dose. In this context, we explore the possible benefits of a user study, with the goal of enhancing the visual conditions in which the intervention planner using the software tool is minimizing the radiation dose.

  9. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    Science.gov (United States)

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot provide a satisfactory quality of experience for radiologists. This paper developed a medical system that can get medical images from the picture archiving and communication system (PACS) on the mobile device over the wireless network. In the proposed application, the mobile device got patient information and medical images through a proxy server connecting to the PACS server. Meanwhile, the proxy server integrated a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide radiologists with shape, brightness, depth and location information generated from the original sectional images. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of the medical images over the wireless network of the proposed application were also discussed. The results demonstrated that this proposed medical application could provide a smooth interactive experience in WLAN and 3G networks.
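Of the three rendering techniques listed, maximum intensity projection is the simplest to sketch: each output pixel keeps the brightest voxel along its viewing ray. A minimal NumPy illustration (not the proxy server's implementation):

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """Collapse a 3D scalar volume along `axis`, keeping the brightest
    voxel along each ray -- the MIP image a server would send to a client."""
    return volume.max(axis=axis)

# Toy volume with one bright "lesion" voxel.
vol = np.zeros((4, 4, 4))
vol[2, 1, 3] = 1.0
mip = maximum_intensity_projection(vol, axis=0)  # project along depth
print(mip[1, 3])  # the bright voxel survives the projection
```

Because MIP collapses an entire volume to a single image, it is also why the render must run server-side: only the small projected image, not the full voxel grid, needs to cross the wireless link.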

  10. Introduction to Vector Field Visualization

    Science.gov (United States)

    Kao, David; Shen, Han-Wei

    2010-01-01

    Vector field visualization techniques are essential to help us understand the complex dynamics of flow fields. These can be found in a wide range of applications such as the study of flows around an aircraft, the blood flow in our heart chambers, ocean circulation models, and severe weather predictions. The vector fields from these various applications can be visually depicted using a number of techniques such as particle traces and advecting textures. In this tutorial, we present several fundamental algorithms in flow visualization, including particle integration, particle tracking in time-dependent flows, and seeding strategies. For flows near surfaces, a wide variety of synthetic texture-based algorithms have been developed to depict near-body flow features. The most common approach is based on the Line Integral Convolution (LIC) algorithm. There also exist extensions of LIC to support more flexible texture generation for 3D flow data. This tutorial reviews these algorithms. Tensor fields are found in several real-world applications and also require the aid of visualization to help users understand their data sets. Examples where one can find tensor fields include mechanics, to see how materials respond to external forces; civil engineering and the geomechanics of roads and bridges; and the study of neural pathways via diffusion tensor imaging. This tutorial will provide an overview of the different tensor field visualization techniques, discuss basic tensor decompositions, and go into detail on glyph-based, deformation-based, and streamline-based methods. Practical examples will be used when presenting the methods, and applications from case studies will be used as part of the motivation.
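The particle integration at the core of such flow visualizations is conventionally done with a fourth-order Runge-Kutta scheme. A minimal sketch for a steady 2D field given as a callable (an illustration, not code from the tutorial):

```python
import numpy as np

def trace_particle(v, seed, dt=0.01, steps=1000):
    """Integrate a streamline through a steady 2D vector field v(p)
    using fourth-order Runge-Kutta, the standard particle integrator."""
    p = np.asarray(seed, dtype=float)
    path = [p.copy()]
    for _ in range(steps):
        k1 = v(p)
        k2 = v(p + 0.5 * dt * k1)
        k3 = v(p + 0.5 * dt * k2)
        k4 = v(p + dt * k3)
        p = p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(p.copy())
    return np.array(path)

# Circular field: particles should orbit at constant radius.
field = lambda p: np.array([-p[1], p[0]])
path = trace_particle(field, seed=(1.0, 0.0))
radii = np.linalg.norm(path, axis=1)
print(abs(radii.max() - 1.0) < 1e-6)  # RK4 keeps the orbit tight
```

In real data the field is sampled on a grid, so `v(p)` would interpolate between grid cells; time-dependent path lines additionally evaluate the field at the particle's current time at each substep.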

  11. Long-Term Memory Search across the Visual Brain

    Directory of Open Access Journals (Sweden)

    Milan Fedurco

    2012-01-01

    Full Text Available Signal transmission from the human retina to visual cortex and connectivity of visual brain areas are relatively well understood. How specific visual perceptions transform into corresponding long-term memories remains unknown. Here, I will review recent Blood Oxygenation Level-Dependent functional Magnetic Resonance Imaging (BOLD fMRI) studies in humans together with molecular biology studies (animal models) aiming to understand how the retinal image gets transformed into so-called visual (retinotopic) maps. The broken object paradigm has been chosen in order to illustrate the complexity of multisensory perception of simple objects subject to visual, rather than semantic, memory encoding. The author explores how amygdala projections to the visual cortex affect memory formation and proposes the choice of experimental techniques needed to explain our massive visual memory capacity. Maintenance of visual long-term memories is suggested to require recycling of GluR2-containing α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPARs) and β2-adrenoreceptors at the postsynaptic membrane, which critically depends on the catalytic activity of the N-ethylmaleimide-sensitive factor (NSF) and the protein kinase PKMζ.

  12. Neurophysiology of visual aura in migraine

    International Nuclear Information System (INIS)

    Shibata, Koichi

    2007-01-01

    Visual processing in migraine has been targeted because the visual symptoms that are commonly associated with attacks, either in the form of aura or other more subtle symptoms, indicate that the visual pathways are involved in migrainous pathophysiology. The visual aura of the migraine attack has been explained by the cortical spreading depression (CSD) of Leão, a neuroelectric event beginning in the occipital cortex and propagating into contiguous brain regions. Clinical observations suggest that hyperexcitability occurs not only during the attack, typically in the form of photophobia, but also between attacks. Numerous human neuroimaging, neurophysiological and psychophysical studies have identified differences in cortical visual processing in migraine. The possibility of imaging the typical visual aura with BOLD functional MRI has revealed multiple neurovascular events in the occipital cortex within a single attack that closely resemble CSD. As transient synchronized neuronal excitation precedes CSD, changes in cortical excitability underlie the migraine attack. Independent evidence for altered neuronal excitability in migraineurs between attacks emerges from visual evoked potentials (VEPs) and transcranial magnetic stimulation (TMS), recordings of cortical potentials and psychophysics. Recently, both TMS and psychophysical studies measuring visual performance in migraineurs have used measures that presumably tap primary visual (V1) and visual association cortex. Our VEP and blink reflex study showed that migraine patients exhibiting allodynia might show central sensitization of brainstem trigeminal neurons and had contrast modulation dysfunction during cortical visual processing in V1 and visual association cortex between attacks. In pathophysiology of migraine, these neurophysiological and psychophysical studies indicate that abnormal visual and trigeminal hyperexcitability might persist between migraine attacks. The influence of migraine on cortical

  13. Storage of features, conjunctions and objects in visual working memory.

    Science.gov (United States)

    Vogel, E K; Woodman, G F; Luck, S J

    2001-02-01

    Working memory can be divided into separate subsystems for verbal and visual information. Although the verbal system has been well characterized, the storage capacity of visual working memory has not yet been established for simple features or for conjunctions of features. The authors demonstrate that it is possible to retain information about only 3-4 colors or orientations in visual working memory at one time. Observers are also able to retain both the color and the orientation of 3-4 objects, indicating that visual working memory stores integrated objects rather than individual features. Indeed, objects defined by a conjunction of four features can be retained in working memory just as well as single-feature objects, allowing many individual features to be retained when distributed across a small number of objects. Thus, the capacity of visual working memory must be understood in terms of integrated objects rather than individual features.
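Capacity estimates of the 3-4 item kind reported above are conventionally derived from change-detection performance with Cowan's K formula, K = N(H − F), where N is set size, H the hit rate, and F the false alarm rate. A sketch of that standard computation (not necessarily the authors' exact analysis):

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K capacity estimate for single-probe change detection:
    K = N * (H - F). Assumes a probed item is in memory with
    probability K/N and that out-of-memory probes are guesses."""
    return set_size * (hit_rate - false_alarm_rate)

# An observer tested with 8 items, hitting 60% of changes with 15% false alarms:
print(cowans_k(8, 0.60, 0.15))  # roughly 3.6 items held in memory
```

The object-based storage claim follows from applying the same estimate to multi-feature objects: if K stays near 3-4 whether objects carry one feature or four, capacity is counted in integrated objects, not features.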

  14. Spatial integration and cortical dynamics.

    Science.gov (United States)

    Gilbert, C D; Das, A; Ito, M; Kapadia, M; Westheimer, G

    1996-01-23

    Cells in adult primary visual cortex are capable of integrating information over much larger portions of the visual field than was originally thought. Moreover, their receptive field properties can be altered by the context within which local features are presented and by changes in visual experience. The substrate for both spatial integration and cortical plasticity is likely to be found in a plexus of long-range horizontal connections, formed by cortical pyramidal cells, which link cells within each cortical area over distances of 6-8 mm. The relationship between horizontal connections and cortical functional architecture suggests a role in visual segmentation and spatial integration. The distribution of lateral interactions within striate cortex was visualized with optical recording, and their functional consequences were explored by using comparable stimuli in human psychophysical experiments and in recordings from alert monkeys. They may represent the substrate for perceptual phenomena such as illusory contours, surface fill-in, and contour saliency. The dynamic nature of receptive field properties and cortical architecture has been seen over time scales ranging from seconds to months. One can induce a remapping of the topography of visual cortex by making focal binocular retinal lesions. Shorter-term plasticity of cortical receptive fields was observed following brief periods of visual stimulation. The mechanisms involved entailed, for the short-term changes, altering the effectiveness of existing cortical connections, and for the long-term changes, sprouting of axon collaterals and synaptogenesis. The mutability of cortical function implies a continual process of calibration and normalization of the perception of visual attributes that is dependent on sensory experience throughout adulthood and might further represent the mechanism of perceptual learning.

  15. The role of human ventral visual cortex in motion perception

    Science.gov (United States)

    Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene

    2013-01-01

    Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030

  16. Towards academic generativity: working collaboratively with visual ...

    African Journals Online (AJOL)

    We follow this with a collaborative reflection, in which we explain how we have noticed similarities in both the connotative and denotative histories of our artefacts and gained an alternative perspective on our interests and practices as educational researchers. The article demonstrates how, by working with visual artefacts ...

  17. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity.

    Science.gov (United States)

    Laing, Mark; Rees, Adrian; Vuong, Quoc C

    2015-01-01

    The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only, or auditory-visual (AV) trials in the fMRI scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for AV integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.

  18. Compromised Integrity of Central Visual Pathways in Patients With Macular Degeneration.

    Science.gov (United States)

    Malania, Maka; Konrad, Julia; Jägle, Herbert; Werner, John S; Greenlee, Mark W

    2017-06-01

    Macular degeneration (MD) affects the central retina and leads to gradual loss of foveal vision. Although photoreceptors are primarily affected in MD, the retinal nerve fiber layer (RNFL) and central visual pathways may also be altered subsequent to photoreceptor degeneration. Here we investigate whether retinal damage caused by MD alters microstructural properties of visual pathways using diffusion-weighted magnetic resonance imaging. Six MD patients and six healthy control subjects participated in the study. Retinal images were obtained by spectral-domain optical coherence tomography (SD-OCT). Diffusion tensor images (DTI) and high-resolution T1-weighted structural images were collected for each subject. We used diffusion-based tensor modeling and probabilistic fiber tractography to identify the optic tract (OT) and optic radiations (OR), as well as nonvisual pathways (corticospinal tract and anterior fibers of corpus callosum). Fractional anisotropy (FA) and axial and radial diffusivity values (AD, RD) were calculated along the nonvisual and visual pathways. Measurement of RNFL thickness reveals that the temporal circumpapillary retinal nerve fiber layer was significantly thinner in eyes with macular degeneration than in normal eyes. While we did not find significant differences in diffusion properties in nonvisual pathways, patients showed significant changes in diffusion scalars (FA, RD, and AD) both in OT and OR. The results indicate that the RNFL and the white matter of the visual pathways are significantly altered in MD patients. Damage to the photoreceptors in MD leads to atrophy of the ganglion cell axons and to corresponding changes in microstructural properties of central visual pathways.

  19. PACOM: A Versatile Tool for Integrating, Filtering, Visualizing, and Comparing Multiple Large Mass Spectrometry Proteomics Data Sets.

    Science.gov (United States)

    Martínez-Bartolomé, Salvador; Medina-Aunon, J Alberto; López-García, Miguel Ángel; González-Tejedo, Carmen; Prieto, Gorka; Navajas, Rosana; Salazar-Donate, Emilio; Fernández-Costa, Carolina; Yates, John R; Albar, Juan Pablo

    2018-04-06

    Mass-spectrometry-based proteomics has evolved into a high-throughput technology in which numerous large-scale data sets are generated from diverse analytical platforms. Furthermore, several scientific journals and funding agencies have emphasized the storage of proteomics data in public repositories to facilitate its evaluation, inspection, and reanalysis. (1) As a consequence, public proteomics data repositories are growing rapidly. However, tools are needed to integrate multiple proteomics data sets to compare different experimental features or to perform quality control analysis. Here, we present a new Java stand-alone tool, Proteomics Assay COMparator (PACOM), that is able to import, combine, and simultaneously compare numerous proteomics experiments to check the integrity of the proteomic data as well as verify data quality. With PACOM, the user can detect sources of error that may have been introduced in any step of a proteomics workflow and that influence the final results. Data sets can be easily compared and integrated, and data quality and reproducibility can be visually assessed through a rich set of graphical representations of proteomics data features as well as a wide variety of data filters. Its flexibility and easy-to-use interface make PACOM a unique tool for daily use in a proteomics laboratory. PACOM is available at https://github.com/smdb21/pacom.

  20. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase one, volume 3 : use of scanning LiDAR in structural evaluation of bridges.

    Science.gov (United States)

    2009-12-01

    This volume introduces several applications of remote bridge inspection technologies studied in : this Integrated Remote Sensing and Visualization (IRSV) study using ground-based LiDAR : systems. In particular, the application of terrestrial LiDAR fo...

  1. Corridor One: An Integrated Distance Visualization Environment for SSI and ASCI Applications

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Rick [ANL, PI; Leigh, Jason [UIC, PI

    2002-07-14

    Scenarios describe realistic uses of DVC/Distance technologies in several years. Four scenarios are described: Distributed Decision Making; Remote Interactive Computing; Remote Visualization: (a) Remote Immersive Visualization and (b) Remote Scientific Visualization; Remote Virtual Prototyping. Scenarios serve as drivers for the road maps and enable us to check that the functionality and technology in the road maps match application needs. There are four major DVC/Distance technology areas we cover: Networking and QoS; Remote Computing; Remote Visualization; Remote Data. Each road map consists of two parts, a functionality matrix (what can be done) and a technology matrix (underlying technology). That is, functionality matrices show the desired operational characteristics, while technology matrices show the underlying technology needed. In practice, there isn't always a clean break between functionality and technology, but it still seems useful to try and separate things this way.

  2. Visual Motor Integration as a Screener for Responders and Non-Responders in Preschool and Early School Years: Implications for Inclusive Assessment in Oman

    Science.gov (United States)

    Emam, Mahmoud Mohamed; Kazem, Ali Mahdi

    2016-01-01

    Visual motor integration (VMI) is the ability of the eyes and hands to work together in smooth, efficient patterns. In Oman, there are few effective methods to assess VMI skills in children in inclusive settings. The current study investigated the performance of preschool and early school years responders and non-responders on a VMI test. The full…

  3. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    Full Text Available BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps), we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
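    The sine-wave versus square-wave manipulation above can be illustrated with a minimal sketch (the helper names `sine_envelope` and `square_envelope` are hypothetical, not the study's stimulus code): a square-wave envelope contains abrupt, transient transitions, whereas a sinusoidal envelope of the same rate changes only gradually.

    ```python
    import math

    def sine_envelope(t, freq_hz):
        """Smooth sinusoidal amplitude modulation, values in [0, 1]."""
        return 0.5 * (1.0 + math.sin(2.0 * math.pi * freq_hz * t))

    def square_envelope(t, freq_hz):
        """Abrupt square-wave modulation: transient on/off transitions."""
        return 1.0 if math.sin(2.0 * math.pi * freq_hz * t) >= 0.0 else 0.0

    # Sample both envelopes over one second at 1 kHz for a 2 Hz modulation.
    ts = [i / 1000.0 for i in range(1000)]
    sine = [sine_envelope(t, 2.0) for t in ts]
    square = [square_envelope(t, 2.0) for t in ts]

    # The square wave's largest sample-to-sample jump spans the full amplitude
    # range, while the sine envelope changes only gradually -- the "transient"
    # property the study identifies as necessary for efficient search.
    max_step_sine = max(abs(b - a) for a, b in zip(sine, sine[1:]))
    max_step_square = max(abs(b - a) for a, b in zip(square, square[1:]))
    print(max_step_sine, max_step_square)
    ```

    At this sampling rate the sinusoidal envelope never jumps by more than about 0.006 between samples, while the square wave jumps by the full range at each transition.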

  4. Where vision meets memory: prefrontal-posterior networks for visual object constancy during categorization and recognition.

    Science.gov (United States)

    Schendan, Haline E; Stern, Chantal E

    2008-07-01

    Objects seen from unusual relative to more canonical views require more time to categorize and recognize, and, according to object model verification theories, additionally recruit prefrontal processes for cognitive control that interact with parietal processes for mental rotation. To test this using functional magnetic resonance imaging, people categorized and recognized known objects from unusual and canonical views. Canonical views activated some components of a default network more on categorization than recognition. Activation to unusual views showed that both ventral and dorsal visual pathways, and prefrontal cortex, have key roles in visual object constancy. Unusual views activated object-sensitive and mental rotation (and not saccade) regions in ventrocaudal intraparietal, transverse occipital, and inferotemporal sulci, and ventral premotor cortex for verification processes of model testing on any task. A collateral-lingual sulci "place" area activated for mental rotation, working memory, and unusual views on correct recognition and categorization trials to accomplish detailed spatial matching. Ventrolateral prefrontal cortex and object-sensitive lateral occipital sulcus activated for mental rotation and unusual views on categorization more than recognition, supporting verification processes of model prediction. This visual knowledge framework integrates vision and memory theories to explain how distinct prefrontal-posterior networks enable meaningful interactions with objects in diverse situations.

  5. Three-dimensional visualization of functional brain tissue and functional magnetic resonance imaging-integrated neuronavigation in the resection of brain tumor adjacent to motor cortex

    International Nuclear Information System (INIS)

    Han Tong; Cui Shimin; Tong Xiaoguang; Liu Li; Xue Kai; Liu Meili; Liang Siquan; Zhang Yunting; Zhi Dashi

    2011-01-01

    Objective: To assess the value of three-dimensional visualization of functional brain tissue and functional magnetic resonance imaging (fMRI)-integrated neuronavigation in the resection of brain tumors adjacent to the motor cortex. Method: Sixty patients with tumors located near the central sulcus were enrolled. Thirty patients were randomly assigned to the function group and 30 to the control group. Patients in the function group underwent fMRI to localize the functional brain tissues, and the functional information was then transferred to the neurosurgical navigator. The patients in the control group underwent navigated surgery without functional information. The therapeutic effect, excision rate, improvement of motor function, and quality of life during follow-up were analyzed. Result: Visualization of functional brain tissues and fMRI-integrated neuronavigation were accomplished in all patients in the function group. The locations of the tumor, central sulcus and motor cortex were marked during the operation. The fMRI-integrated information played an important role both before and during the operation. Pre-operation: designing the location of the skin flap and bone window, determining the relationship between the tumor and the motor cortex, and planning the resection pathway. Intra-operation: real-time navigation of the relationship between the tumor and the motor cortex, and assisting localization of the motor cortex with intraoperative ultrasound to correct for the displacement caused by CSF outflow and tumor collapse. The patients in the function group had better results than the patients in the control group in therapeutic effect (u=2.646, P=0.008), excision rate (χ²=7.200, P<0.01), improvement of motor function (u=2.231, P=0.026), and quality of life (KPS uc=2.664, P=0.008; Zubrod-ECOG-WHO uc=2.135, P=0.033). Conclusions: Using preoperative three-dimensional visualization of functional brain tissue and fMRI-integrated neuronavigation, combined with accurate intraoperative ...

  6. Research for the design of visual fatigue based on the computer visual communication

    Science.gov (United States)

    Deng, Hu-Bin; Ding, Bao-min

    2013-03-01

    With the rapid development of computer networks, the role of network communication in social, economic and political life has become increasingly important and taken on a special character. Computer network communication, through modern media and by way of visual communication, affects the public's emotions, spirit, career and other aspects of life. Its rapid growth has also brought problems: much design fails to convey its message to the public in a well-resolved visual form. This not only leads to erroneous messages being conveyed, but also causes physical and psychological fatigue in audiences, known as visual fatigue. In order to reduce this fatigue when people seek useful information at a computer, and to let the audience obtain the most useful information in a short time, this article gives a detailed account of its causes, proposes effective solutions, explains them through specific examples, and discusses the development prospects of visual communication in future computer design applications.

  7. Storytelling in Interactive 3D Geographic Visualization Systems

    Directory of Open Access Journals (Sweden)

    Matthias Thöny

    2018-03-01

    Full Text Available The objective of interactive geographic maps is to provide geographic information to a large audience in a captivating and intuitive way. Storytelling helps to create exciting experiences and to explain complex or otherwise hidden relationships of geospatial data. Furthermore, interactive 3D applications offer a wide range of attractive elements for advanced visual story creation and offer the possibility to convey the same story in many different ways. In this paper, we discuss and analyze storytelling techniques in 3D geographic visualizations so that authors and developers working with geospatial data can use these techniques to conceptualize their visualization and interaction design. Finally, we outline two examples which apply the given concepts.

  8. Using Visual Analogies To Teach Introductory Statistical Concepts

    Directory of Open Access Journals (Sweden)

    Jessica S. Ancker

    2017-07-01

    Full Text Available Introductory statistical concepts are some of the most challenging to convey in quantitative literacy courses. Analogies supplemented by visual illustrations can be highly effective teaching tools. This literature review shows that to exploit the power of analogies, teachers must select analogies familiar to the audience, explicitly link the analog with the target concept, and avert misconceptions by explaining where the analogy fails. We provide guidance for instructors and a series of visual analogies for use in teaching medical and health statistics.

  9. Is Visual Imagery Really Visual? Overlooked Evidence from Neuropsychology.

    Science.gov (United States)

    1987-08-07

    ... the study of imagery. British Journal of Psychology, 47, 101-114. Bauer, R. M., & Rubens, A. B. (1985). Agnosia. In K. M. Heilman & E. Valenstein (Eds.), Clinical Neuropsychology (2nd ed.). New York: Oxford University Press. Beauvois, M. F., & Saillant, B. (1985). Optic aphasia for colours and colour agnosia ... integrative visual agnosia. Brain ... Roland, P. E. (1982). Cortical regulation of selective attention in man. Journal of Neurophysiology, 48, 1059-1078.

  10. From Big Data to Big Displays High-Performance Visualization at Blue Brain

    KAUST Repository

    Eilemann, Stefan

    2017-10-19

    Blue Brain has pushed high-performance visualization (HPV) to complement its HPC strategy since its inception in 2007. In 2011, this strategy has been accelerated to develop innovative visualization solutions through increased funding and strategic partnerships with other research institutions. We present the key elements of this HPV ecosystem, which integrates C++ visualization applications with novel collaborative display systems. We motivate how our strategy of transforming visualization engines into services enables a variety of use cases, not only for the integration with high-fidelity displays, but also to build service oriented architectures, to link into web applications and to provide remote services to Python applications.

  11. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    Science.gov (United States)

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  12. Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.

    Science.gov (United States)

    Morrill, Ryan J; Hasenstaub, Andrea R

    2018-03-14

    The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information is integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors 0270-6474/18/382854-09$15.00/0.

  13. Explaining Physics – What Skills does a good Explainer Need?

    CERN Multimedia

    CERN. Geneva; Bartels, Hauke

    2018-01-01

    Explaining physics in a way that it is both scientifically correct and comprehensible is a highly demanding practice. But are explanations an effective way to teach physics? Under which circumstances should a physics teacher explain – and is there such a thing as a guideline for effective instructional explanations? Of course, explaining is more than just presenting content knowledge in clear language – but what more? In our talk, we want to discuss empirical studies on instructional explanations from science education and psychology to address these questions. Among other things, we will refer to results from a large study aiming to research whether teacher education contributes to the development of explaining skills. Besides, we will give insights into a project that seeks to measure explaining skills with an interactive online test instrument.

  14. Information, entropy and fidelity in visual communication

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1992-01-01

    This paper presents an assessment of visual communication that integrates the critical limiting factors of image gathering and display with the digital processing that is used to code and restore images. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image.

  15. Information, entropy, and fidelity in visual communication

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-ur

    1992-10-01

    This paper presents an assessment of visual communication that integrates the critical limiting factors of image gathering and display with the digital processing that is used to code and restore images. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image.
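    The entropy of the encoded data mentioned above can be illustrated with a minimal, generic sketch (plain Shannon entropy of a quantized sample stream; `shannon_entropy` is an illustrative helper, not the authors' formulation):

    ```python
    import math
    from collections import Counter

    def shannon_entropy(samples):
        """Shannon entropy (bits per sample) of a discrete data stream,
        e.g. the quantized pixel values of an encoded image."""
        counts = Counter(samples)
        n = len(samples)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # A uniform 4-level signal carries log2(4) = 2 bits per sample ...
    print(shannon_entropy([0, 1, 2, 3] * 4))  # 2.0
    # ... while a heavily skewed signal carries far less,
    # which is what makes efficient coding possible.
    print(shannon_entropy([0] * 15 + [1]))
    ```

    Lower entropy means fewer bits are needed to encode the gathered image data without loss, which is why entropy bounds the trade-off between coding cost and restored visual quality.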

  16. Savant Genome Browser 2: visualization and analysis for population-scale genomics.

    Science.gov (United States)

    Fiume, Marc; Smith, Eric J M; Brook, Andrew; Strbenac, Dario; Turner, Brian; Mezlini, Aziz M; Robinson, Mark D; Wodak, Shoshana J; Brudno, Michael

    2012-07-01

    High-throughput sequencing (HTS) technologies are providing an unprecedented capacity for data generation, and there is a corresponding need for efficient data exploration and analysis capabilities. Although most existing tools for HTS data analysis are developed for either automated (e.g. genotyping) or visualization (e.g. genome browsing) purposes, such tools are most powerful when combined. For example, integration of visualization and computation allows users to iteratively refine their analyses by updating computational parameters within the visual framework in real-time. Here we introduce the second version of the Savant Genome Browser, a standalone program for visual and computational analysis of HTS data. Savant substantially improves upon its predecessor and existing tools by introducing innovative visualization modes and navigation interfaces for several genomic datatypes, and synergizing visual and automated analyses in a way that is powerful yet easy even for non-expert users. We also present a number of plugins that were developed by the Savant Community, which demonstrate the power of integrating visual and automated analyses using Savant. The Savant Genome Browser is freely available (open source) at www.savantbrowser.com.

  17. Integration of interactive three-dimensional image post-processing software into undergraduate radiology education effectively improves diagnostic skills and visual-spatial ability.

    Science.gov (United States)

    Rengier, Fabian; Häfner, Matthias F; Unterhinninghofen, Roland; Nawrotzki, Ralph; Kirsch, Joachim; Kauczor, Hans-Ulrich; Giesel, Frederik L

    2013-08-01

    Integrating interactive three-dimensional post-processing software into undergraduate radiology teaching might be a promising approach to synergistically improve both visual-spatial ability and radiological skills, thereby reducing students' deficiencies in image interpretation. The purpose of this study was to test our hypothesis that a hands-on radiology course for medical students using interactive three-dimensional image post-processing software improves radiological knowledge, diagnostic skills and visual-spatial ability. A hands-on radiology course was developed using interactive three-dimensional image post-processing software. The course consisted of seven seminars held on a weekly basis. The 25 participating fourth- and fifth-year medical students learnt to systematically analyse cross-sectional imaging data and correlated the two-dimensional images with three-dimensional reconstructions. They were instructed by experienced radiologists and collegiate tutors. The improvement in radiological knowledge, diagnostic skills and visual-spatial ability was assessed immediately before and after the course by multiple-choice tests comprising 64 questions each. The Wilcoxon signed rank test for paired samples was applied. The total number of correctly answered questions improved significantly, from 36.9±4.8 to 49.5±5.4, and visual-spatial ability improved by 11.3%. Integrating interactive three-dimensional image post-processing software into undergraduate radiology education thus effectively improves radiological reasoning, diagnostic skills and visual-spatial ability, and thereby even diagnostic skills for imaging modalities not included in the course. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
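    The paired pre/post comparison used above (Wilcoxon signed rank test) can be sketched in pure Python; the scores below are made-up illustrative numbers, not the study's data:

    ```python
    def wilcoxon_w(pre, post):
        """W statistic of the Wilcoxon signed-rank test for paired samples:
        rank the absolute pre/post differences (ties get averaged ranks),
        then W is the smaller of the positive- and negative-rank sums."""
        diffs = [b - a for a, b in zip(pre, post) if b != a]  # drop zero diffs
        order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
        ranks = [0.0] * len(diffs)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
                j += 1
            avg = (i + j) / 2 + 1  # average rank of the tie group (1-based)
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        w_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
        w_neg = sum(r for d, r in zip(diffs, ranks) if d < 0)
        return min(w_pos, w_neg)

    # Hypothetical pre/post test scores for 6 students: every nonzero change
    # is positive, so the negative-rank sum (and hence W) is 0.
    pre = [36, 40, 33, 38, 35, 41]
    post = [49, 52, 45, 50, 44, 41]
    print(wilcoxon_w(pre, post))  # 0.0
    ```

    A small W indicates that almost all paired differences point in the same direction; significance is then read from the Wilcoxon distribution for the given sample size.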

  18. Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.

    Science.gov (United States)

    Putzar, Lisa; Gondan, Matthias; Röder, Brigitte

    2012-01-01

    People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on visual and/or crossmodal input since birth.

  20. Organization and visualization of medical images in radiotherapy

    International Nuclear Information System (INIS)

    Lorang, T.

    2001-05-01

    In modern radiotherapy, various imaging equipment is used to acquire views from inside human bodies. Tomographic imaging equipment acquires stacks of cross-sectional images, software implementations derive three-dimensional volumes from planar images to allow for visualization of reconstructed cross-sections at any orientation and location, and higher-level visualization systems allow for transparent views and surface rendering. Of upcoming interest in radiotherapy is mutual information, i.e. the integration of information from multiple imaging equipment, resp. from the same imaging equipment at different time stamps and varying acquisition parameters. Huge amounts of images are nowadays acquired at radiotherapy centers, requiring organization of images with respect to patient, acquisition and equipment to allow for visualization of images in a comparative and integrative manner. Especially for integration of image information from different equipment, geometrical information is required to allow for registration of images resp. volumes. DICOM 3.0 has been introduced as a standard for information interchange with respect to medical imaging. Geometric information of cross-sections, demographic information of patients and medical information of acquisitions and equipment are covered by this standard, allowing for a high level of automation with respect to organization and visualization of medical images. Reconstructing cross-sectional images from volumes at any orientation and location is required for the purposes of registration and multi-planar views. Resampling and addressing of discrete volume data need to be implemented efficiently to allow for simultaneous visualization of multiple cross-sectional images, especially with respect to multiple, non-isotropic volume data sets. (author)
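The reconstruction of a cross-section at arbitrary orientation described above amounts to addressing voxels along a plane defined by an origin and two in-plane direction vectors. A minimal NumPy sketch of that addressing scheme, assuming nearest-neighbour resampling for brevity (production viewers would use trilinear interpolation, and all names and values here are illustrative):

```python
import numpy as np

def reslice(volume, origin, u, v, shape):
    """Nearest-neighbour resampling of an oblique cross-section.

    origin : plane origin in voxel coordinates (axis0, axis1, axis2)
    u, v   : in-plane direction vectors (one per output row / column step)
    shape  : (rows, cols) of the output image
    """
    rows, cols = shape
    r = np.arange(rows).reshape(1, rows, 1)
    c = np.arange(cols).reshape(1, 1, cols)
    o = np.asarray(origin, float).reshape(3, 1, 1)
    uu = np.asarray(u, float).reshape(3, 1, 1)
    vv = np.asarray(v, float).reshape(3, 1, 1)
    # Voxel coordinates of every output pixel: origin + r*u + c*v
    coords = o + uu * r + vv * c            # shape (3, rows, cols)
    idx = np.rint(coords).astype(int)
    # Clamp to the volume bounds so edge pixels repeat the border voxel
    for axis, size in enumerate(volume.shape):
        np.clip(idx[axis], 0, size - 1, out=idx[axis])
    return volume[idx[0], idx[1], idx[2]]

# Example: an axis-aligned cut through a simple gradient volume
vol = np.arange(64, dtype=float).reshape(4, 4, 4)
img = reslice(vol, origin=(0, 0, 0), u=(1, 0, 0), v=(0, 1, 0), shape=(4, 4))
```

The same addressing works unchanged for genuinely oblique planes; only the `u` and `v` vectors differ.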

  1. Change in vision, visual disability, and health after cataract surgery.

    Science.gov (United States)

    Helbostad, Jorunn L; Oedegaard, Maria; Lamb, Sarah E; Delbaere, Kim; Lord, Stephen R; Sletvold, Olav

    2013-04-01

    Cataract surgery improves vision and visual functioning; the effect on general health is not established. We investigated whether vision, visual functioning, and general health follow the same trajectory of change in the year after cataract surgery and whether changes in vision explain changes in visual disability and general health. One hundred forty-eight persons, with a mean (SD) age of 78.9 (5.0) years (70% bilateral surgery), were assessed before and 6 weeks and 12 months after surgery. Visual disability and general health were assessed by the CatQuest-9SF and the Short Form-36. Corrected binocular visual acuity, visual field, stereo acuity, and contrast vision improved (P < …), with the improvement in visual acuity evident up to 12 months (P = 0.034). Cataract surgery had an effect on visual disability 1 year later (P < …). Changes in vision were not associated with changes in visual disability and general health 6 weeks after surgery. Vision improved and visual disability decreased in the year after surgery, whereas changes in general health and visual functioning were short-term effects. The lack of associations between changes in vision and self-reported disability and general health suggests that the degree of vision change and self-reported health do not have a linear relationship.

  2. THE KINETICS OF MULTIBRANCH INTEGRATION ON THE DENDRITIC ARBOR OF CA1 PYRAMIDAL NEURONS

    Directory of Open Access Journals (Sweden)

    Sunggu eYang

    2014-05-01

    Full Text Available The process by which synaptic inputs separated in time and space are integrated by the dendritic arbor to produce a sequence of action potentials is among the most fundamental signal transformations that takes place within the central nervous system. Some aspects of this complex process, such as integration at the level of individual dendritic branches, have been extensively studied. But other aspects, such as how inputs from multiple branches are combined, and the kinetics of that integration have not been systematically examined. Using a 3D digital holographic photolysis technique to overcome the challenges posed by the complexities of the 3D anatomy of the dendritic arbor of CA1 pyramidal neurons for conventional photolysis, we show that integration on a single dendrite is fundamentally different from that on multiple dendrites. Multibranch integration occurring at oblique and basal dendrites allows somatic action potential firing of the cell to faithfully follow the driving stimuli over a significantly wider frequency range than what is possible with single branch integration. However, multibranch integration requires greater input strength to drive the somatic action potentials. This tradeoff between sensitivity and kinetics may explain the puzzling report of the predominance of multibranch, rather than single branch, integration from in vivo recordings during presentation of visual stimuli.

  3. Widespread correlation patterns of fMRI signal across visual cortex reflect eccentricity organization

    Science.gov (United States)

    Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan EB; Kastner, Sabine; Hasson, Uri

    2015-01-01

    The human visual system can be divided into over two dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization, with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations, including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas. DOI: http://dx.doi.org/10.7554/eLife.03952.001 PMID:25695154
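The core analysis pattern described above, correlating BOLD time courses between cortical sites and reading off which pairs co-fluctuate, can be sketched in a few lines. The data below are synthetic and the site labels hypothetical; the point is only the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t = 200  # number of time points

# Synthetic BOLD-like time series for four cortical sites: the first two
# share a common signal (standing in for overlapping / iso-eccentric
# representations), the other two are independent.
shared = rng.standard_normal(n_t)
sites = np.vstack([
    shared + 0.1 * rng.standard_normal(n_t),   # site A
    shared + 0.1 * rng.standard_normal(n_t),   # site B, co-fluctuates with A
    rng.standard_normal(n_t),                  # site C, independent
    rng.standard_normal(n_t),                  # site D, independent
])

# Pairwise Pearson correlation matrix across sites
corr = np.corrcoef(sites)
```

Sites sharing signal show near-unit correlation, independent sites near zero, which is the contrast the study reads out across its eight mapped areas.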

  4. An introduction to Space Weather Integrated Modeling

    Science.gov (United States)

    Zhong, D.; Feng, X.

    2012-12-01

    The need for a software toolkit that integrates space weather models and data is one of many challenges we face when applying the models to space weather forecasting. To meet this challenge, we have developed Space Weather Integrated Modeling (SWIM), which is capable of analyzing and visualizing the results from a diverse set of space weather models. SWIM has a modular design and is written in Python, using NumPy, matplotlib, and the Visualization ToolKit (VTK). SWIM provides a data management module to read a variety of spacecraft data products and the specific data format of the Solar-Interplanetary Conservation Element/Solution Element MHD model (SIP-CESE MHD model) for the study of solar-terrestrial phenomena. Data analysis, visualization and graphical user interface modules are also presented in a user-friendly way to run the integrated models and visualize the 2-D and 3-D data sets interactively. With these tools we can locally or remotely analyze the model results rapidly, for example by extracting data at specific locations in time-sequence data sets, plotting interplanetary magnetic field lines, multi-slicing of solar wind speed, volume rendering of solar wind density, animating time-sequence data sets, and comparing model results with observational data. To speed up the analysis, an in-situ visualization interface is used to support visualizing the data 'on the fly'. We have also accelerated some critical time-consuming analysis and visualization methods with the aid of GPUs and multi-core CPUs. We have used this tool to visualize the data of the SIP-CESE MHD model in real time, and integrated the Database Model of shock arrival, the Shock Propagation Model, the Dst forecasting model and the SIP-CESE MHD model developed by the SIGMA Weather Group at the State Key Laboratory of Space Weather/CAS.
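The "extraction of data at specific locations in time-sequence data sets" and the slicing operations mentioned above reduce to plain array indexing once model output is held as a NumPy array. A minimal sketch, with entirely made-up shapes and values (SWIM's real data model is not reproduced here):

```python
import numpy as np

# Hypothetical time sequence of solar-wind speed volumes, shape (t, z, y, x)
speed = np.linspace(300.0, 800.0, 5 * 8 * 8 * 8).reshape(5, 8, 8, 8)

def probe(seq, z, y, x):
    """Time series at one grid location across the whole sequence."""
    return seq[:, z, y, x]

def slice_xy(seq, t, z):
    """One 2-D cut (fixed z) of the volume at time step t, for plotting."""
    return seq[t, z]

series = probe(speed, 4, 4, 4)   # 1-D array, one value per time step
cut = slice_xy(speed, 0, 4)      # 2-D array suitable for an image plot
```

Routines like multi-slicing or animation are then loops over such cuts handed to the plotting layer (matplotlib or VTK in SWIM's case).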

  5. Integration of Multiple Cues for Visual Gloss Evaluation

    OpenAIRE

    Leloup, Frédéric B.; Hanselaer, Peter; Pointer, Michael R.; Dutré, Philip

    2012-01-01

    This study reports on a psychophysical experiment with real stimuli that differ in multiple visual gloss criteria. Four samples were presented to 15 observers under different conditions of illumination, resulting in a series of 16 stimuli. Through pairwise comparisons, a gloss scale was derived and the observers' strategy to evaluate gloss was investigated. The preference probability matrix P indicated a dichotomy among observers. A first group of observers used the distinctnes...

  6. Demands Set Upon Modern Cartographic Visualization

    Directory of Open Access Journals (Sweden)

    Stanislav Frangeš

    2007-05-01

    Full Text Available Scientific cartography has the task of developing and researching new methods of cartographic visualization. General demands are set upon modern cartographic visualization, which encompasses digital cartography and computer graphics: legibility, clearness, accuracy, plainness and aesthetics. In this paper, it is explained in detail which conditions should be fulfilled in order to satisfy these general demands. In order to satisfy the demand of legibility, one should respect the conditions of minimal sizes, appropriate graphical density and good differentiation of known features. The demand of clearness is met by fulfilling the conditions of simplicity, contrast and layer arrangement of the cartographic representation. Accuracy, as a demand on cartographic visualization, can be divided into positional accuracy and accuracy of signs. For fulfilling the demand of plainness, the conditions of symbolism, traditionalism and hierarchic organization should be met. The demand of aesthetics will be met if the conditions of beauty and harmony are fulfilled.

  7. Visual Perceptual Learning and Models.

    Science.gov (United States)

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.

  8. The visual air quality predicted by conventional and scanning teleradiometers and an integrating nephelometer

    Energy Technology Data Exchange (ETDEWEB)

    Malm, W [U.S. Environmental Protection Agency, Las Vegas, NV; Pitchford, A; Tree, R; Walther, E; Pearson, M; Archer, S

    1981-12-01

    Many Class I areas have unique vistas which require an observer to look over complex terrain containing basins, valleys, and canyons. These topographic features tend to form pollution ''basins'' and ''corridors'' that trap and funnel air pollutants under certain meteorological conditions. For example, on numerous days, layers of haze in the San Juan River Basin obscure various vista elements, including the Chuska Mountains as viewed from Mesa Verde National Park, CO. Measurements by an integrating nephelometer and a conventional teleradiometer at one location in Mesa Verde do not quantify such inhomogeneities. In this paper, data from these instruments are compared to data derived from scanning teleradiometer measurements of photographic slide images. The slides, surrogates of the real three-dimensional scene, were projected and scanned to determine relative sky and vista radiance at 40 points within a vertical slice of the vista. Comparison of the corresponding visual range data sets for each instrument for September and December 1979 demonstrates the utility of the scanning teleradiometer.

  9. Towards The Deep Model : Understanding Visual Recognition Through Computational Models

    OpenAIRE

    Wang, Panqu

    2017-01-01

    Understanding how visual recognition is achieved in the human brain is one of the most fundamental questions in vision research. In this thesis I seek to tackle this problem from a neurocomputational modeling perspective. More specifically, I build machine learning-based models to simulate and explain cognitive phenomena related to human visual recognition, and I improve computational models using brain-inspired principles to excel at computer vision tasks.I first describe how a neurocomputat...

  10. The Role of Visual-Spatial Abilities in Dyslexia: Age Differences in Children's Reading?

    Science.gov (United States)

    Giovagnoli, Giulia; Vicari, Stefano; Tomassetti, Serena; Menghini, Deny

    2016-01-01

    Reading is a highly complex process in which integrative neurocognitive functions are required. Visual-spatial abilities play a pivotal role because of the multi-faceted visual sensory processing involved in reading. Several studies show that children with developmental dyslexia (DD) fail to develop effective visual strategies and that some reading difficulties are linked to visual-spatial deficits. However, the relationship between visual-spatial skills and reading abilities is still a controversial issue. Crucially, the role that age plays has not been investigated in depth in this population, and it is still not clear if visual-spatial abilities differ across educational stages in DD. The aim of the present study was to investigate visual-spatial abilities in children with DD and in age-matched normal readers (NR) according to different educational stages: in children attending primary school and in children and adolescents attending secondary school. Moreover, in order to verify whether visual-spatial measures could predict reading performance, a regression analysis has been performed in younger and older children. The results showed that younger children with DD performed significantly worse than NR in a mental rotation task, a more-local visual-spatial task, a more-global visual-perceptual task and a visual-motor integration task. However, older children with DD showed deficits in the more-global visual-perceptual task, in a mental rotation task and in a visual attention task. In younger children, the regression analysis documented that reading abilities are predicted by the visual-motor integration task, while in older children only the more-global visual-perceptual task predicted reading performances. Present findings showed that visual-spatial deficits in children with DD were age-dependent and that visual-spatial abilities engaged in reading varied across different educational stages. 
In order to better understand their potential role in affecting reading

  11. Audio-Visual Integration Modifies Emotional Judgment in Music

    Directory of Open Access Journals (Sweden)

    Shen-Yuan Su

    2011-10-01

    Full Text Available The conventional view that perceived emotion in music is derived mainly from auditory signals has led to neglect of the contribution of visual images. In this study, we manipulated mode (major vs. minor) and examined the influence of a video image on emotional judgment in music. Melodies in either major or minor mode were controlled for tempo and rhythm and played to the participants. We found that Taiwanese participants, like Westerners, judged major melodies as expressing positive, and minor melodies negative, emotions. The major or minor melodies were then paired with video images of the singers, which were either emotionally congruent or incongruent with their modes. Results showed that participants perceived stronger positive or negative emotions with congruent audio-visual stimuli. Compared to listening to music alone, stronger emotions were perceived when an emotionally congruent video image was added and weaker emotions were perceived when an incongruent image was added. We therefore demonstrate that mode is important for perceiving the emotional valence in music, and that treating musical art as a purely auditory event might lose the enhanced emotional strength perceived in music, since going to a concert may lead to stronger perceived emotion than listening to the CD at home.

  12. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    Full Text Available This paper examines the questions: what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  13. Postdictive modulation of visual orientation.

    Science.gov (United States)

    Kawabe, Takahiro

    2012-01-01

    The present study investigated how visual orientation is modulated by subsequent orientation inputs. Observers were presented a near-vertical Gabor patch as a target, followed by a left- or right-tilted second Gabor patch as a distracter in the spatial vicinity of the target. The task of the observers was to judge whether the target was right- or left-tilted (Experiment 1) or whether the target was vertical or not (Supplementary experiment). The judgment was biased toward the orientation of the distracter (the postdictive modulation of visual orientation). The judgment bias peaked when the target and distracter were temporally separated by 100 ms, indicating a specific temporal mechanism for this phenomenon. However, when the visibility of the distracter was reduced via backward masking, the judgment bias disappeared. On the other hand, the low-visibility distracter could still cause a simultaneous orientation contrast, indicating that the distracter orientation is still processed in the visual system (Experiment 2). Our results suggest that the postdictive modulation of visual orientation stems from spatiotemporal integration of visual orientation on the basis of a slow feature matching process.

  14. Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism

    Science.gov (United States)

    Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter

    2011-01-01

    The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…

  15. Investigating the Visual-Motor Integration Skills of 60-72-Month-Old Children at High and Low Socio-Economic Status as Regard the Age Factor

    Science.gov (United States)

    Ercan, Zülfiye Gül; Ahmetoglu, Emine; Aral, Neriman

    2011-01-01

    This study aims to define whether age creates any differences in the visual-motor integration skills of 60-72 months old children at low and high socio-economic status. The study was conducted on a total of 148 children consisting of 78 children representing low socio-economic status and 70 children representing high socio-economic status in the…

  16. Visual working memory as visual attention sustained internally over time.

    Science.gov (United States)

    Chun, Marvin M

    2011-05-01

    Visual working memory and visual attention are intimately related, such that working memory encoding and maintenance reflects actively sustained attention to a limited number of visual objects and events important for ongoing cognition and action. Although attention is typically considered to operate over perceptual input, a recent taxonomy proposes to additionally consider how attention can be directed to internal perceptual representations in the absence of sensory input, as well as other internal memories, choices, and thoughts (Chun, Golomb, & Turk-Browne, 2011). Such internal attention enables prolonged binding of features into integrated objects, along with enhancement of relevant sensory mechanisms. These processes are all limited in capacity, although different types of working memory and attention, such as spatial vs. object processing, operate independently with separate capacity. Overall, the success of maintenance depends on the ability to inhibit both external (perceptual) and internal (cognitive) distraction. Working memory is the interface by which attentional mechanisms select and actively maintain relevant perceptual information from the external world as internal representations within the mind. Copyright © 2011. Published by Elsevier Ltd.

  17. Cloud-based Networked Visual Servo Control

    OpenAIRE

    Wu, Haiyan; Lu, Lei; Chen, Chih-Chung; Hirche, Sandra; Kühnlenz, Kolja

    2013-01-01

    The performance of vision-based control systems, in particular of highly dynamic vision-based motion control systems, is often limited by the low sampling rate of the visual feedback caused by the long image processing time. In order to overcome this problem, the networked visual servo control, which integrates networked computational resources for cloud image processing, is considered in this article. The main contributions of this article are i) a real-time transport protocol for transmitti...

  18. The effect of combined sensory and semantic components on audio-visual speech perception in older adults

    Directory of Open Access Journals (Sweden)

    Corrina eMaguinness

    2011-12-01

    Full Text Available Previous studies have found that perception in older people benefits from multisensory over uni-sensory information. As normal speech recognition is affected by both the auditory input and the visual lip-movements of the speaker, we investigated the efficiency of audio and visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence to assess whether audio-visual integration is affected by top-down semantic processing. We presented participants with audio-visual sentences in which the visual component was either blurred or not blurred. We found that there was a greater cost in recall performance for semantically meaningless speech in the audio-visual blur compared to the audio-visual no-blur condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech, and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

  19. Application for TJ-II Signals Visualization: User's Guide

    International Nuclear Information System (INIS)

    Sanchez, E.; Portas, A. B.; Cremy, C.; Vega, J.

    2000-01-01

    This document describes the functionalities of the application developed by the Data Acquisition Group for TJ-II signal visualization. There are two versions of the application: the on-line version, used for signal visualization during TJ-II operation, and the off-line version, used for signal visualization outside TJ-II operation. Both versions of the application consist of a graphical user interface developed with X/Motif, in which most of the actions can be performed using the mouse buttons. The functionalities of both versions of the application are described in this user's guide, beginning at the application start-up and explaining in detail all the options that it provides and the actions that can be done with each graphic control. (Author) 8 refs

  20. Implied motion language can influence visual spatial memory.

    Science.gov (United States)

    Vinson, David W; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

    2017-07-01

    How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

  1. Students and teachers as developers of visual designs with AR for visual arts education

    DEFF Research Database (Denmark)

    Buhl, Mie

    mobile technology and Augmented Reality (AR). The project exemplified a strategy for visual learning design where diverse stakeholders’ competences were involved throughout the design process. Visual arts education in Denmark is challenged by the national curricula’s requirement of integrating digital...... technology in visual learning processes. Since 1984, information technology has been mandatory in the school subject as well as in teacher education. Still, many digital resources such as Photoshop and Paint offer remediating more traditional means for pictorial production, which give rise......

  2. CAUSES OF VISUAL DISABILITY IN PATIENTS WITH VISUAL DISABILITY CERTIFICATES OBTAINED IN A TERTIARY CARE HOSPITAL IN MUMBAI

    Directory of Open Access Journals (Sweden)

    Vikas Vijaykumar Kamat

    2016-12-01

    Full Text Available BACKGROUND Visual disability is a major public health problem in developing countries. Ocular diseases cause partial or total blindness. Causes can be treatable or non-treatable; non-treatable causes lead to permanent visual disability. Persons with disabilities are issued certificates stating the percentage of disability when they apply for certificates for various benefits. MATERIALS AND METHODS Records of the individuals who had been issued visual disability certificates during the period of 1st March 2011 to 30th June 2013 were obtained from the Medical Records Office of the hospital and the information was analysed. RESULTS Out of 132 individuals with visual disability certificates, 97 were males and 35 were females. Avoidable causes of visual impairment were found in 43.18% of individuals, comprising corneal opacity, diabetic retinopathy, glaucoma, traumatic retinal detachment and postoperative retinal detachment. Unavoidable causes were found in 56.82% of individuals, comprising congenital diseases, optic nerve atrophy, hereditary causes, retinitis pigmentosa and age-related macular degeneration. The largest number of individuals were issued certificates of 40% visual disability, and the smallest of 20% visual disability. The largest group of individuals (48.49%) demanded disability certificates for benefits in jobs. CONCLUSION The high number of congenital eye diseases explains the need for genetic counselling. Gender-based inequality in obtaining visual disability certificates should be minimised through awareness and education. Avoiding trauma to the eyes can to a large extent reduce visual disability due to corneal scarring and infections. Early diagnosis and treatment are necessary to prevent blindness from avoidable causes like diabetic retinopathy, glaucoma and retinopathy of prematurity.

  3. Measuring temporal summation in visual detection with a single-photon source.

    Science.gov (United States)

    Holmes, Rebecca; Victora, Michelle; Wang, Ranxiao Frances; Kwiat, Paul G

    2017-11-01

    Temporal summation is an important feature of the visual system which combines visual signals that arrive at different times. Previous research estimated complete summation to last for 100 ms for stimuli judged "just detectable." We measured the full range of temporal summation for much weaker stimuli using a new paradigm and a novel light source, developed in the field of quantum optics, for generating small numbers of photons with precise timing characteristics and reduced variance in photon number. Dark-adapted participants judged whether a light was presented to the left or right of their fixation in each trial. In Experiment 1, stimuli contained a stream of photons delivered at a constant rate while the duration was systematically varied. Accuracy should increase with duration as long as the later photons can be integrated with the preceding ones into a single signal. The temporal integration window was estimated as the point at which performance no longer improved, and was found to be 650 ms on average. In Experiment 2, the duration of the visual stimuli was kept short (100 ms or …) while the number of photons was varied, to explore the efficiency of summation over the integration window compared to Experiment 1. There was some indication that temporal summation remains efficient over the integration window, although there is variation between individuals. The relatively long integration window measured in this study may be relevant to studies of the absolute visual threshold, i.e., tests of single-photon vision, where "single" photons should be separated by more than the integration window to avoid summation. Copyright © 2017 Elsevier Ltd. All rights reserved.
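The estimation rule described above (the integration window ends where performance stops improving with added duration) can be sketched as a simple change-point scan. The threshold, accuracy curve, and durations below are illustrative assumptions, not the paper's actual fitting procedure:

```python
import numpy as np

def integration_window(durations, accuracy, eps=0.01):
    """Return the first duration beyond which accuracy no longer
    improves by more than eps (a crude plateau detector)."""
    for i in range(1, len(durations)):
        if accuracy[i] - accuracy[i - 1] <= eps:
            return durations[i - 1]
    return durations[-1]

# Hypothetical left/right accuracy curve that saturates near 650 ms
dur = np.array([100, 250, 400, 650, 900, 1200])   # stimulus durations, ms
acc = np.array([0.58, 0.66, 0.73, 0.78, 0.785, 0.78])
w = integration_window(dur, acc)
```

A real analysis would fit a saturating curve and average the estimate across participants, but the plateau logic is the same.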

  4. Does Bilateral Market and Financial Integration Explain International Co-Movement Patterns?

    Directory of Open Access Journals (Sweden)

    Mobeen Ur Rehman

    2016-05-01

    Full Text Available This study aims to explore the effect of market integration, foreign portfolio equity holdings and inflation rates on international stock market linkages between Pakistan and India. To measure stock equity interlinkage, we constructed an international co-movement index through rolling beta estimation. The market integration variable between these two countries is constructed using the International Capital Asset Pricing Model (ICAPM). To check the impact of market integration, foreign portfolio equity holdings and the inflation rate on Pakistan-India stock market co-movement, we applied autoregressive distributed lag (ARDL) estimation, chosen because of the different stationarity levels of the included variables. The speed of convergence is measured by the introduction of an error correction term (ECT), followed by variance decomposition analysis. Results of the study indicated the presence of a long-term relationship among the included variables, along with significant variance in bilateral co-movement due to the inflation rate differential. The significance of inflation rate differences between these two countries is in accordance with portfolio balance theory, which states that investors possess information about macroeconomic variables and readjust their portfolios accordingly for effective diversification.
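
The rolling-beta co-movement index mentioned above can be sketched as an ordinary-least-squares beta of one market's returns on the other's, re-estimated over a sliding window. The return series, window length, and function name here are illustrative assumptions, not the study's data:

```python
import numpy as np

def rolling_beta(r_a, r_b, window=60):
    """Beta of market A's returns on market B's over a sliding window:
    beta_t = cov(r_a, r_b) / var(r_b), one estimate per window position."""
    betas = []
    for t in range(window, len(r_a) + 1):
        a, b = r_a[t - window:t], r_b[t - window:t]
        betas.append(np.cov(a, b)[0, 1] / np.var(b, ddof=1))
    return np.array(betas)

# Illustrative returns: market A moves at half of market B plus noise,
# so the rolling beta should hover around 0.5.
rng = np.random.default_rng(0)
r_b = rng.normal(0, 0.01, 500)
r_a = 0.5 * r_b + rng.normal(0, 0.002, 500)
print(rolling_beta(r_a, r_b).mean())
```

A time series of such betas is what would feed the co-movement index; the ARDL step would then regress it on the integration and inflation variables.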

  5. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study.

    Science.gov (United States)

    Kumar, G Vinodh; Halder, Tamesh; Jaiswal, Amit K; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. However, whether and how the network across the whole brain participates in multisensory perception remains an open question. We posit that large-scale functional connectivity among neural populations situated in distributed brain sites may provide valuable insights into the processing and fusing of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs were computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha- and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception around a temporal window of 300-600 ms following onset of stimuli. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times, along with pre-stimulus decreases of lower-frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags.
Thus, our
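
The pairwise-coherence computation underlying the global measure described above can be sketched as band-averaged magnitude-squared coherence pooled over all sensor pairs. This is a simplified stand-in for the paper's time-frequency global coherence, and the sampling rate, frequency band, and simulated "sensors" are all illustrative assumptions:

```python
import numpy as np
from scipy.signal import coherence

def global_band_coherence(signals, fs, band, nperseg=256):
    """Mean magnitude-squared coherence over all sensor pairs,
    averaged within a frequency band (lo, hi) in Hz."""
    n = len(signals)
    vals = []
    for i in range(n):
        for j in range(i + 1, n):
            f, cxy = coherence(signals[i], signals[j], fs=fs, nperseg=nperseg)
            mask = (f >= band[0]) & (f <= band[1])
            vals.append(cxy[mask].mean())
    return float(np.mean(vals))

# Illustrative "sensors": a shared 40 Hz (gamma-band) component plus
# independent noise, so gamma-band coherence should be elevated.
rng = np.random.default_rng(1)
t = np.arange(0, 4, 1 / 256)
shared = np.sin(2 * np.pi * 40 * t)
sensors = [shared + rng.normal(0, 1, t.size) for _ in range(4)]
print(global_band_coherence(sensors, fs=256, band=(35, 45)))
```

A time-resolved version (coherence per sliding window, then a sum of changes across pairs) would be needed to reproduce the paper's dynamic measure.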

  6. Deficits in vision and visual attention associated with motor performance of very preterm/very low birth weight children.

    Science.gov (United States)

    Geldof, Christiaan J A; van Hus, Janeline W P; Jeukens-Visser, Martine; Nollet, Frans; Kok, Joke H; Oosterlaan, Jaap; van Wassenaer-Leemhuis, Aleid G

    2016-01-01

    To extend understanding of impaired motor functioning of very preterm (VP)/very low birth weight (VLBW) children by investigating its relationship with visual attention, visual and visual-motor functioning. Motor functioning (Movement Assessment Battery for Children, MABC-2; Manual Dexterity, Aiming & Catching, and Balance components), as well as visual attention (attention network and visual search tests), vision (oculomotor, visual sensory and perceptive functioning), visual-motor integration (Beery Visual Motor Integration), and neurological status (Touwen examination) were comprehensively assessed in a sample of 106 5.5-year-old VP/VLBW children. Stepwise linear regression analyses were conducted to investigate multivariate associations between deficits in visual attention, oculomotor, visual sensory, perceptive and visual-motor integration functioning, abnormal neurological status, neonatal risk factors, and MABC-2 scores. Abnormal MABC-2 Total or component scores occurred in 23-36% of VP/VLBW children. Visual and visual-motor functioning accounted for 9-11% of the variance in MABC-2 Total, Manual Dexterity and Balance scores. Only visual perceptive deficits were associated with Aiming & Catching. Abnormal neurological status accounted for an additional 19-30% of the variance in MABC-2 Total, Manual Dexterity and Balance scores, and 5% of the variance in Aiming & Catching; neonatal risk factors accounted for 3-6% of the variance in MABC-2 Total, Manual Dexterity and Balance scores. Motor functioning is weakly associated with visual and visual-motor integration deficits and moderately associated with abnormal neurological status, indicating that motor performance reflects long-term vulnerability following very preterm birth, and that visual deficits are of minor importance in understanding the motor functioning of VP/VLBW children. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. CMS tracker visualization tools

    Energy Technology Data Exchange (ETDEWEB)

    Mennea, M.S. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Osborne, I. [Northeastern University, 360 Huntington Avenue, Boston, MA 02115 (United States); Regano, A. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Zito, G. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy)]. E-mail: giuseppe.zito@ba.infn.it

    2005-08-21

    This document will review the design considerations, implementations and performance of the CMS Tracker visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16,540 modules, each of which is a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  8. CMS tracker visualization tools

    CERN Document Server

    Zito, G; Osborne, I; Regano, A

    2005-01-01

    This document will review the design considerations, implementations and performance of the CMS Tracker visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16,540 modules, each of which is a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  9. CMS tracker visualization tools

    International Nuclear Information System (INIS)

    Mennea, M.S.; Osborne, I.; Regano, A.; Zito, G.

    2005-01-01

    This document will review the design considerations, implementations and performance of the CMS Tracker visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16,540 modules, each of which is a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  10. THE RELEVANCE OF THE VISUAL ARTS CURRICULUM IN THE PREPARATION OF PRE-SERVICE VISUAL ARTS TEACHERS IN UGANDA

    Directory of Open Access Journals (Sweden)

    Julius Ssegantebuka

    2017-08-01

    Full Text Available The research examined the relevance of the visual arts curriculum content with a view to assessing the extent to which it equips pre-service visual arts teachers with the knowledge and skills required for effective teaching. The study adopted a descriptive case study design. Data were collected from three purposively selected National Teacher Colleges (NTCs); six tutors and 90 final-year pre-service visual arts teachers participated in this study. The research findings showed that teacher education institutions are inadequately preparing pre-service visual arts teachers because of gaps in the Visual Arts Curriculum (VAC) used in NTCs. Some of these gaps are attributed to the structure of the visual arts curriculum tutors use in NTCs: the curriculum lacks explicit visual arts assessment strategies; it has wide, combined visual arts content to be covered within a short period of two years; and there is limited knowledge of the available art materials, tools and equipment. The research recommended restructuring the VAC to accommodate more practical work, and introducing specialized knowledge in visual arts education (VAE) to enable tutors to decipher practical knowledge from the theory studied, so as to adopt an integrated approach in the VAE curriculum.

  11. Infant visual attention and object recognition.

    Science.gov (United States)

    Reynolds, Greg D

    2015-05-15

    This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Combined effects of expectations and visual uncertainty upon detection and identification of a target in the fog.

    Science.gov (United States)

    Quétard, Boris; Quinton, Jean-Charles; Colomb, Michèle; Pezzulo, Giovanni; Barca, Laura; Izaute, Marie; Appadoo, Owen Kevin; Mermillod, Martial

    2015-09-01

    Detecting a pedestrian while driving in the fog is one situation where the prior expectation about the target presence is integrated with the noisy visual input. We focus on how these sources of information influence the oculomotor behavior and are integrated within an underlying decision-making process. The participants had to judge whether high-/low-density fog scenes displayed on a computer screen contained a pedestrian or a deer by executing a mouse movement toward the response button (mouse-tracking). A variable road sign was added on the scene to manipulate expectations about target identity. We then analyzed the timing and amplitude of the deviation of mouse trajectories toward the incorrect response and, using an eye tracker, the detection time (before fixating the target) and the identification time (fixations on the target). Results revealed that expectation of the correct target results in earlier decisions with less deviation toward the alternative response, this effect being partially explained by the facilitation of target identification.

  13. Integrating Statistical Visualization Research into the Political Science Classroom

    Science.gov (United States)

    Draper, Geoffrey M.; Liu, Baodong; Riesenfeld, Richard F.

    2011-01-01

    The use of computer software to facilitate learning in political science courses is well established. However, the statistical software packages used in many political science courses can be difficult to use and counter-intuitive. We describe the results of a preliminary user study suggesting that visually-oriented analysis software can help…

  14. Combining computational analyses and interactive visualization for document exploration and sensemaking in jigsaw.

    Science.gov (United States)

    Görg, Carsten; Liu, Zhicheng; Kihm, Jaeyeon; Choo, Jaegul; Park, Haesun; Stasko, John

    2013-10-01

    Investigators across many disciplines and organizations must sift through large collections of text documents to understand and piece together information. Whether they are fighting crime, curing diseases, deciding what car to buy, or researching a new field, inevitably investigators will encounter text documents. Taking a visual analytics approach, we integrate multiple text analysis algorithms with a suite of interactive visualizations to provide a flexible and powerful environment that allows analysts to explore collections of documents while sensemaking. Our particular focus is on the process of integrating automated analyses with interactive visualizations in a smooth and fluid manner. We illustrate this integration through two example scenarios: an academic researcher examining InfoVis and VAST conference papers and a consumer exploring car reviews while pondering a purchase decision. Finally, we provide lessons learned toward the design and implementation of visual analytics systems for document exploration and understanding.

  15. On the assessment of visual communication by information theory

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.

    1993-01-01

    This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.

  16. Professional Visual Basic 2010 and .NET 4

    CERN Document Server

    Sheldon, Bill; Sharkey, Kent

    2010-01-01

    Intermediate and advanced coverage of Visual Basic 2010 and .NET 4 for professional developers. If you've already covered the basics and want to dive deep into VB and .NET topics that professional programmers use most, this is your book. You'll find a quick review of introductory topics-always helpful-before the author team of experts moves you quickly into such topics as data access with ADO.NET, Language Integrated Query (LINQ), security, ASP.NET web programming with Visual Basic, Windows workflow, threading, and more. You'll explore all the new features of Visual Basic 2010 as well as all t

  17. Visualizing turbulent mixing of gases and particles

    Science.gov (United States)

    Ma, Kwan-Liu; Smith, Philip J.; Jain, Sandeep

    1995-01-01

    A physical model and interactive computer graphics techniques have been developed for the visualization of the basic physical process of stochastic dispersion and mixing from steady-state CFD calculations. The mixing of massless particles and inertial particles is visualized by transforming the vector field from a traditionally Eulerian reference frame into a Lagrangian reference frame. Groups of particles are traced through the vector field for the mean path as well as their statistical dispersion about the mean position by using added scalar information about the root mean square value of the vector field and its Lagrangian time scale. In this way, clouds of particles in a turbulent environment are traced, not just mean paths. In combustion simulations of many industrial processes, good mixing is required to achieve a sufficient degree of combustion efficiency. The ability to visualize this multiphase mixing can not only help identify poor mixing but also explain the mechanism for poor mixing. The information gained from the visualization can be used to improve the overall combustion efficiency in utility boilers or propulsion devices. We have used this technique to visualize steady-state simulations of the combustion performance in several furnace designs.
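
The Lagrangian cloud tracing described above — a mean path plus statistical dispersion driven by the RMS of the vector field and its Lagrangian time scale — can be sketched as a discrete Langevin step per particle. The flow field and all parameter values below are illustrative, not taken from the furnace simulations:

```python
import numpy as np

def trace_cloud(positions, mean_vel, u_rms, t_lagrangian, dt, steps, rng):
    """Advect a cloud of particles: mean advection plus an exponentially
    correlated random velocity fluctuation (discrete Langevin model)."""
    u_fluc = np.zeros_like(positions)
    a = np.exp(-dt / t_lagrangian)      # velocity autocorrelation decay
    b = u_rms * np.sqrt(1.0 - a * a)    # keeps fluctuation variance at u_rms**2
    for _ in range(steps):
        u_fluc = a * u_fluc + b * rng.normal(size=positions.shape)
        positions = positions + (mean_vel(positions) + u_fluc) * dt
    return positions

# Illustrative uniform mean flow in +x; dispersion spreads the cloud
# about the mean path instead of tracing the mean path alone.
rng = np.random.default_rng(0)
start = np.zeros((1000, 2))
end = trace_cloud(start, lambda p: np.array([1.0, 0.0]), u_rms=0.2,
                  t_lagrangian=0.5, dt=0.01, steps=200, rng=rng)
print(end.mean(axis=0), end.std(axis=0))
```

Visualizing the cloud's mean and spread at successive steps gives exactly the "cloud of particles" picture the abstract describes, rather than a single streamline.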

  18. Visually guided adjustments of body posture in the roll plane

    OpenAIRE

    Tarnutzer, A A; Bockisch, C J; Straumann, D

    2013-01-01

    Body position relative to gravity is continuously updated to prevent falls. Therefore, the brain integrates input from the otoliths, truncal graviceptors, proprioception and vision. Without visual cues, the estimated direction of gravity depends mainly on otolith input and becomes more variable with increasing roll tilt. In contrast, the discrimination threshold for object orientation shows little modulation with varying roll orientation of the visual stimulus. Providing earth-stationary visual cues,...

  19. Auditory, visual, and auditory-visual perceptions of emotions by young children with hearing loss versus children with normal hearing.

    Science.gov (United States)

    Most, Tova; Michaelis, Hilit

    2012-08-01

    This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children aged 4.0-6.6 years with prelingual sensorineural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.

  20. Quantitative and comparative visualization applied to cosmological simulations

    International Nuclear Information System (INIS)

    Ahrens, James; Heitmann, Katrin; Habib, Salman; Ankeny, Lee; McCormick, Patrick; Inman, Jeff; Armstrong, Ryan; Ma, Kwan-Liu

    2006-01-01

    Cosmological simulations follow the formation of nonlinear structure in dark and luminous matter. The associated simulation volumes and dynamic range are very large, making visualization both a necessary and challenging aspect of the analysis of these datasets. Our goal is to understand sources of inconsistency between different simulation codes that are started from the same initial conditions. Quantitative visualization supports the definition of, and reasoning about, analytically defined features of interest. Comparative visualization supports the ability to visually study, side by side, multiple related visualizations of these simulations. For instance, a scientist can visually distinguish that there are fewer halos (localized lumps of tracer particles) in low-density regions for one simulation code out of a collection. This qualitative result will enable the scientist to develop a hypothesis, such as loss of halos in low-density regions due to limited resolution, to explain the inconsistency between the different simulations. Quantitative support then allows one to confirm or reject the hypothesis. If the hypothesis is rejected, this step may lead to new insights and a new hypothesis, not available from the purely qualitative analysis. We will present methods to significantly improve the scientific analysis process by incorporating quantitative analysis as the driver for visualization. Aspects of this work are included as part of two visualization tools: ParaView, an open-source large-data visualization tool, and Scout, an analysis-language-based, hardware-accelerated visualization tool.
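
The halo comparison in the example above — counting localized lumps of tracer particles in two simulation outputs — can be sketched with a crude density-grid counter applied side by side. This cell-threshold definition of a "halo" is an illustrative stand-in for real halo finders such as friends-of-friends, and the particle sets are synthetic:

```python
import numpy as np

def count_halos(particles, grid=32, threshold=8):
    """Count grid cells whose particle count meets a threshold --
    a crude proxy for 'localized lumps of tracer particles'."""
    hist, _ = np.histogramdd(particles, bins=(grid,) * 3,
                             range=[(0, 1)] * 3)
    return int((hist >= threshold).sum())

# Two illustrative "simulations" in a unit box: one with clustered
# particles (halos), one uniform (no halos).
rng = np.random.default_rng(0)
centers = rng.uniform(0.1, 0.9, size=(50, 3))
clustered = (centers[rng.integers(0, 50, 20000)]
             + rng.normal(0, 0.005, (20000, 3))).clip(0, 1)
uniform = rng.uniform(0, 1, (20000, 3))
print(count_halos(clustered), count_halos(uniform))
```

Running the same counter on outputs from two codes started from identical initial conditions is the quantitative step that confirms or rejects the visual impression of "fewer halos."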