WorldWideScience

Sample records for integral notam visualization

  1. INVESTIGATION OF THE DRAWBACKS OF THE CURRENT NOTAM SYSTEM

    Directory of Open Access Journals (Sweden)

    Mykola Bogunenko

    2013-10-01

    Full Text Available The article deals with an analysis of the current NOTAM system, an investigation of the factors that limit the NOTAM format, and an analysis of such drawbacks as the need for human interpretation, information overload of the end user, geographical inaccuracy, lack of self-containment, and hidden applicability.

  2. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  3. Aspects of ontology visualization and integration

    NARCIS (Netherlands)

    Dmitrieva, Joelia Borisovna

    2011-01-01

    In this thesis we will describe and discuss methodologies for ontology visualization and integration. Two visualization methods will be elaborated. In one method the ontology is visualized with the node-link technique, and with the other method the ontology is visualized with the containment

  4. Usability of EFBs for Viewing NOTAMs and AIS/MET Data Link Messages

    Science.gov (United States)

    Evans, Emory T.; Young, Steven D.; Daniels, Tammi S.; Myer, Robert R.

    2014-01-01

    Electronic Flight Bags (EFB) are increasingly integral to flight deck information management. A piloted simulation study was conducted at NASA Langley Research Center, one aspect of which was to evaluate the usability and acceptability of EFBs for viewing and managing Notices to Airmen (NOTAMs) and data-linked aeronautical information services (AIS) and meteorological information (MET). The study simulated approaches and landings at Memphis International Airport (KMEM) using various flight scenarios and weather conditions. Ten two-pilot commercial airline crews participated, utilizing the Cockpit Motion Facility's Research Flight Deck (CMF/RFD) simulator. Each crew completed approximately two dozen flights over a two-day period. Two EFBs were installed, one for each pilot. Study data were collected in the form of questionnaire/interview responses, audio/video recordings, oculometer recordings, and aircraft/system state data. Preliminary usability results are reported primarily based on pilot interviews and responses to questions focused on ease of learning, ease of use, usefulness, satisfaction, and acceptability. Analysis of the data from the other objective measures (e.g., oculometer) is ongoing and will be reported in a future publication. This paper covers how the EFB functionality was set up for the study; the NOTAM, AIS/MET data link, and weather messages that were presented; questionnaire results; selected pilot observations; and conclusions.

  5. Integrated Data Visualization and Virtual Reality Tool

    Science.gov (United States)

    Dryer, David A.

    1998-01-01

    The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design and changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.

  6. Visual Learning in Application of Integration

    Science.gov (United States)

    Bt Shafie, Afza; Barnachea Janier, Josefina; Bt Wan Ahmad, Wan Fatimah

    Innovative use of technology can improve the way Mathematics is taught. It can enhance students' learning of concepts through visualization. Visualization in Mathematics refers to the use of texts, pictures, graphs and animations to hold the attention of learners so that they learn the concepts. This paper describes the use of a developed multimedia courseware as an effective tool for visual learning of mathematics. The focus is on the application of integration, a topic in Engineering Mathematics 2. The course is offered to foundation students at Universiti Teknologi PETRONAS. A questionnaire was distributed to obtain feedback on the visual representation and on students' attitudes towards using visual representation as a learning tool. The questionnaire consists of three sections: courseware design (Part A), courseware usability (Part B) and attitudes towards using the courseware (Part C). The results showed that the use of visual representation benefited students in learning the topic.

  7. Implicit integration in a case of integrative visual agnosia.

    Science.gov (United States)

    Aviezer, Hillel; Landau, Ayelet N; Robertson, Lynn C; Peterson, Mary A; Soroker, Nachum; Sacher, Yaron; Bonneh, Yoram; Bentin, Shlomo

    2007-05-15

    We present a case (SE) with integrative visual agnosia following ischemic stroke affecting the right dorsal and the left ventral pathways of the visual system. Despite his inability to identify global hierarchical letters [Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-383], and his dense object agnosia, SE showed normal global-to-local interference when responding to local letters in Navon hierarchical stimuli and significant picture-word identity priming in a semantic decision task for words. Since priming was absent if these features were scrambled, it stands to reason that these effects were not due to priming by distinctive features. The contrast between priming effects induced by coherent and scrambled stimuli is consistent with implicit but not explicit integration of features into a unified whole. We went on to show that possible/impossible object decisions were facilitated by words in a word-picture priming task, suggesting that prompts could activate perceptually integrated images in a backward fashion. We conclude that the absence of SE's ability to identify visual objects except through tedious serial construction reflects a deficit in accessing an integrated visual representation through bottom-up visual processing alone. However, top-down generated images can help activate these visual representations through semantic links.

  8. Spatial integration in mouse primary visual cortex

    OpenAIRE

    Vaiceliunaite, Agne; Erisken, Sinem; Franzen, Florian; Katzner, Steffen; Busse, Laura

    2013-01-01

    Responses of many neurons in primary visual cortex (V1) are suppressed by stimuli exceeding the classical receptive field (RF), an important property that might underlie the computation of visual saliency. Traditionally, it has proven difficult to disentangle the underlying neural circuits, including feedforward, horizontal intracortical, and feedback connectivity. Since circuit-level analysis is particularly feasible in the mouse, we asked whether neural signatures of spatial integration in ...

  9. Learning STEM Through Integrative Visual Representations

    Science.gov (United States)

    Virk, Satyugjit Singh

    Previous cognitive models of memory have not comprehensively taken into account the internal cognitive load of chunking isolated information and have emphasized the external cognitive load of visual presentation only. Under the Virk Long Term Working Memory Multimedia Model of cognitive load, drawing from the Cowan model, students presented with integrated animations of the key neural signal transmission subcomponents where the interrelationships between subcomponents are visually and verbally explicit, were hypothesized to perform significantly better on free response and diagram labeling questions, than students presented with isolated animations of these subcomponents. This is because the internal attentional cognitive load of chunking these concepts is greatly reduced and hence the overall cognitive load is less for the integrated visuals group than the isolated group, despite the higher external load for the integrated group of having the interrelationships between subcomponents presented explicitly. Experiment 1 demonstrated that integrating the subcomponents of the neuron significantly enhanced comprehension of the interconnections between cellular subcomponents and approached significance for enhancing comprehension of the layered molecular correlates of the cellular structures and their interconnections. Experiment 2 corrected time on task confounds from Experiment 1 and focused on the cellular subcomponents of the neuron only. Results from the free response essay subcomponent subscores did demonstrate significant differences in favor of the integrated group as well as some evidence from the diagram labeling section. Results from free response, short answer and What-If (problem solving), and diagram labeling detailed interrelationship subscores demonstrated the integrated group did indeed learn the extra material they were presented with. This data demonstrating the integrated group learned the extra material they were presented with provides some initial

  10. Temporal integration windows for naturalistic visual sequences.

    Directory of Open Access Journals (Sweden)

    Scott L Fairhall

    Full Text Available There is increasing evidence that the brain possesses mechanisms to integrate incoming sensory information as it unfolds over time-periods of 2-3 seconds. The ubiquity of this mechanism across modalities, tasks, perception and production has led to the proposal that it may underlie our experience of the subjective present. A critical test of this claim is that this phenomenon should be apparent in naturalistic visual experiences. We tested this using movie-clips as a surrogate for our day-to-day experience, temporally scrambling them to require (re-)integration within and beyond the hypothesized 2-3 second interval. Two independent experiments demonstrate a step-wise increase in the difficulty to follow stimuli at the hypothesized 2-3 second scrambling condition. Moreover, only this difference could not be accounted for by low-level visual properties. This provides the first evidence that this 2-3 second integration window extends to complex, naturalistic visual sequences more consistent with our experience of the subjective present.
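
    As an illustration of the temporal-scrambling manipulation described above, the sketch below chops a frame sequence into fixed-length windows and shuffles them. The frame rate, window length, and data are placeholder values, not those of the study.

```python
import random

def scramble_clip(frames, window_s, fps=25, seed=0):
    """Split a frame sequence into windows of window_s seconds and shuffle them,
    mimicking a temporal-scrambling manipulation (illustrative only)."""
    win = max(1, int(round(window_s * fps)))
    chunks = [frames[i:i + win] for i in range(0, len(frames), win)]
    rng = random.Random(seed)
    rng.shuffle(chunks)
    return [f for chunk in chunks for f in chunk]

# Example: 10 s of "frames" scrambled at a 2.5 s window
frames = list(range(250))
print(scramble_clip(frames, window_s=2.5)[:10])
```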

  11. Spatial integration in mouse primary visual cortex.

    Science.gov (United States)

    Vaiceliunaite, Agne; Erisken, Sinem; Franzen, Florian; Katzner, Steffen; Busse, Laura

    2013-08-01

    Responses of many neurons in primary visual cortex (V1) are suppressed by stimuli exceeding the classical receptive field (RF), an important property that might underlie the computation of visual saliency. Traditionally, it has proven difficult to disentangle the underlying neural circuits, including feedforward, horizontal intracortical, and feedback connectivity. Since circuit-level analysis is particularly feasible in the mouse, we asked whether neural signatures of spatial integration in mouse V1 are similar to those of higher-order mammals and investigated the role of parvalbumin-expressing (PV+) inhibitory interneurons. Analogous to what is known from primates and carnivores, we demonstrate that, in awake mice, surround suppression is present in the majority of V1 neurons and is strongest in superficial cortical layers. Anesthesia with isoflurane-urethane, however, profoundly affects spatial integration: it reduces the laminar dependency, decreases overall suppression strength, and alters the temporal dynamics of responses. We show that these effects of brain state can be parsimoniously explained by assuming that anesthesia affects contrast normalization. Hence, the full impact of suppressive influences in mouse V1 cannot be studied under anesthesia with isoflurane-urethane. To assess the neural circuits of spatial integration, we targeted PV+ interneurons using optogenetics. Optogenetic depolarization of PV+ interneurons was associated with increased RF size and decreased suppression in the recorded population, similar to effects of lowering stimulus contrast, suggesting that PV+ interneurons contribute to spatial integration by affecting overall stimulus drive. We conclude that the mouse is a promising model for circuit-level mechanisms of spatial integration, which relies on the combined activity of different types of inhibitory interneurons.

  12. Visual-motor integration functioning in a South African middle ...

    African Journals Online (AJOL)

    Visual-motor integration functioning has been identified as playing an integral role in different aspects of a child's development. Sensory-motor development is not only foundational to the physical maturation process, but is also imperative for progress with formal learning activities. Deficits in visual-motor integration have ...

  13. Integrated Visualization Environment for Science Mission Modeling, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed work will provide NASA with an integrated visualization environment providing greater insight and a more intuitive representation of large technical...

  14. Visual Communication: Integrating Visual Instruction into Business Communication Courses

    Science.gov (United States)

    Baker, William H.

    2006-01-01

    Business communication courses are ideal for teaching visual communication principles and techniques. Many assignments lend themselves to graphic enrichment, such as flyers, handouts, slide shows, Web sites, and newsletters. Microsoft Publisher and Microsoft PowerPoint are excellent tools for these assignments, with Publisher being best for…

  15. Software attribute visualization for high integrity software

    Energy Technology Data Exchange (ETDEWEB)

    Pollock, G.M.

    1998-03-01

    This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.

  16. Exploring the Integration of Data Mining and Data Visualization

    Science.gov (United States)

    Zhang, Yi

    2011-01-01

    Due to the rapid advances in computing and sensing technologies, enormous amounts of data are being generated everyday in various applications. The integration of data mining and data visualization has been widely used to analyze these massive and complex data sets to discover hidden patterns. For both data mining and visualization to be…

  17. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  18. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream

    Science.gov (United States)

    Douglas, Danielle; Newsome, Rachel N; Man, Louisa LY

    2018-01-01

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. PMID:29393853

  19. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream.

    Science.gov (United States)

    Martin, Chris B; Douglas, Danielle; Newsome, Rachel N; Man, Louisa Ly; Barense, Morgan D

    2018-02-02

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. © 2018, Martin et al.

  20. Integration of today's digital state with tomorrow's visual environment

    Science.gov (United States)

    Fritsche, Dennis R.; Liu, Victor; Markandey, Vishal; Heimbuch, Scott

    1996-03-01

    New developments in visual communication technologies, and the increasingly digital nature of the industry infrastructure as a whole, are converging to enable new visual environments with an enhanced visual component in interaction, entertainment, and education. New applications and markets can be created, but this depends on the ability of the visual communications industry to provide market solutions that are cost effective and user friendly. Industry-wide cooperation in the development of integrated, open architecture applications enables the realization of such market solutions. This paper describes the work being done by Texas Instruments, in the development of its Digital Light Processing™ technology, to support the development of new visual communications technologies and applications.

  1. Visual Sample Plan (VSP) - FIELDS Integration

    Energy Technology Data Exchange (ETDEWEB)

    Pulsipher, Brent A.; Wilson, John E.; Gilbert, Richard O.; Hassig, Nancy L.; Carlson, Deborah K.; Bing-Canar, John; Cooper, Brian; Roth, Chuck

    2003-04-19

    Two software packages, VSP 2.1 and FIELDS 3.5, are being used by environmental scientists to plan the number and type of samples required to meet project objectives, display those samples on maps, query a database of past sample results, produce spatial models of the data, and analyze the data in order to arrive at defensible decisions. VSP 2.0 is an interactive tool to calculate optimal sample size and optimal sample location based on user goals, risk tolerance, and variability in the environment and in lab methods. FIELDS 3.0 is a set of tools to explore the sample results in a variety of ways to make defensible decisions with quantified levels of risk and uncertainty. However, FIELDS 3.0 has a small sample design module. VSP 2.0, on the other hand, has over 20 sampling goals, allowing the user to input site-specific assumptions such as non-normality of sample results, separate variability between field and laboratory measurements, make two-sample comparisons, perform confidence interval estimation, use sequential search sampling methods, and much more. Over 1,000 copies of VSP are in use today. FIELDS is used in nine of the ten U.S. EPA regions, by state regulatory agencies, and most recently by several international countries. Both software packages have been peer-reviewed, enjoy broad usage, and have been accepted by regulatory agencies as well as site project managers as key tools to help collect data and make environmental cleanup decisions. Recently, the two software packages were integrated, allowing the user to take advantage of the many design options of VSP, and the analysis and modeling options of FIELDS. The transition between the two is simple for the user – VSP can be called from within FIELDS, automatically passing a map to VSP and automatically retrieving sample locations and design information when the user returns to FIELDS. This paper will describe the integration, give a demonstration of the integrated package, and give users download
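
    VSP itself implements many sampling designs; purely as an illustration of the kind of calculation it automates, the sketch below applies the textbook normal-approximation formula n = (z·σ/d)² for estimating a mean to within a margin d at a chosen confidence level. The variability and margin values are made up, and this is not VSP's algorithm.

```python
import math
from statistics import NormalDist

def sample_size_for_mean(sigma, margin, confidence=0.95):
    """Textbook normal-approximation sample size n = (z * sigma / d)^2 for
    estimating a mean to within +/- margin at the given confidence level."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided critical value
    return math.ceil((z * sigma / margin) ** 2)

# Hypothetical numbers: field variability sigma = 12 ppm, desired margin +/- 5 ppm
print(sample_size_for_mean(sigma=12, margin=5))  # -> 23
```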

  2. Brain activity related to integrative processes in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Aaside, C T; Humphreys, G W

    2002-01-01

    We report evidence from a PET activation study that the inferior occipital gyri (likely to include area V2) and the posterior parts of the fusiform and inferior temporal gyri are involved in the integration of visual elements into perceptual wholes (single objects). Of these areas, the fusiform a... ...that perceptual and memorial processes can be dissociated on both functional and anatomical grounds. No evidence was obtained for the involvement of the parietal lobes in the integration of single objects.

  3. NMDA receptor antagonist ketamine impairs feature integration in visual perception

    NARCIS (Netherlands)

    Meuwese, Julia D I; van Loon, Anouk M; Scholte, H Steven; Lirk, Philipp B; Vulink, Nienke C C; Hollmann, Markus W; Lamme, Victor A F

    2013-01-01

    Recurrent interactions between neurons in the visual cortex are crucial for the integration of image elements into coherent objects, such as in figure-ground segregation of textured images. Blocking N-methyl-D-aspartate (NMDA) receptors in monkeys can abolish neural signals related to figure-ground

  4. Integration and Visualization of Epigenome and Mobilome Data in Crops

    OpenAIRE

    Robakowska Hyzorek, Dagmara; Mirouze, Marie; Larmande, Pierre

    2016-01-01

    In the coming years, the study of the interaction between the epigenome and the mobilome is likely to give insights on the role of TEs on genome stability and evolution. In the present project we have created tools to collect epigenetic datasets from different laboratories and databases and translate them to a standard format to be integrated, analyzed and finally visualized.

  5. Visual-Auditory Integration during Speech Imitation in Autism

    Science.gov (United States)

    Williams, Justin H. G.; Massaro, Dominic W.; Peel, Natalie J.; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional "mirror neuron" systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a "virtual" head (Baldi), delivered speech stimuli for…

  6. Visual-vestibular integration motion perception reporting

    Science.gov (United States)

    Harm, Deborah L.; Reschke, Millard R.; Parker, Donald E.

    1999-01-01

    Self-orientation and self/surround-motion perception derive from a multimodal sensory process that integrates information from the eyes, vestibular apparatus, proprioceptive and somatosensory receptors. Results from short and long duration spaceflight investigations indicate that: (1) perceptual and sensorimotor function was disrupted during the initial exposure to microgravity and gradually improved over hours to days (individuals adapt), (2) the presence and/or absence of information from different sensory modalities differentially affected the perception of orientation, self-motion and surround-motion, (3) perceptual and sensorimotor function was initially disrupted upon return to Earth-normal gravity and gradually recovered to preflight levels (individuals readapt), and (4) the longer the exposure to microgravity, the more complete the adaptation, the more profound the postflight disturbances, and the longer the recovery period to preflight levels. While much has been learned about perceptual and sensorimotor reactions and adaptation to microgravity, there is much remaining to be learned about the mechanisms underlying the adaptive changes, and about how intersensory interactions affect perceptual and sensorimotor function during voluntary movements. During space flight, SMS and perceptual disturbances have led to reductions in performance efficiency and sense of well-being. During entry and immediately after landing, such disturbances could have a serious impact on the ability of the commander to land the Orbiter and on the ability of all crew members to egress from the Orbiter, particularly in a non-nominal condition or following extended stays in microgravity. An understanding of spatial orientation and motion perception is essential for developing countermeasures for Space Motion Sickness (SMS) and perceptual disturbances during spaceflight and upon return to Earth. Countermeasures for optimal performance in flight and a successful return to Earth require

  7. Visual Data Analysis as an Integral Part of Environmental Management

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Joerg; Bethel, E. Wes; Horsman, Jennifer L.; Hubbard, Susan S.; Krishnan, Harinarayan; Romosan, Alexandru; Keating, Elizabeth H.; Monroe, Laura; Strelitz, Richard; Moore, Phil; Taylor, Glenn; Torkian, Ben; Johnson, Timothy C.; Gorton, Ian

    2012-10-01

    The U.S. Department of Energy's (DOE) Office of Environmental Management (DOE/EM) currently supports an effort to understand and predict the fate of nuclear contaminants and their transport in natural and engineered systems. Geologists, hydrologists, physicists and computer scientists are working together to create models of existing nuclear waste sites, to simulate their behavior and to extrapolate it into the future. We use visualization as an integral part in each step of this process. In the first step, visualization is used to verify model setup and to estimate critical parameters. High-performance computing simulations of contaminant transport produces massive amounts of data, which is then analyzed using visualization software specifically designed for parallel processing of large amounts of structured and unstructured data. Finally, simulation results are validated by comparing simulation results to measured current and historical field data. We describe in this article how visual analysis is used as an integral part of the decision-making process in the planning of ongoing and future treatment options for the contaminated nuclear waste sites. Lessons learned from visually analyzing our large-scale simulation runs will also have an impact on deciding on treatment measures for other contaminated sites.

  8. Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?

    Science.gov (United States)

    Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling

    2016-01-01

    Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on the previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and Event-Related Potential (Experiment 2 and 3) measures showed congruency effect in only the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. PMID:26890366

  9. Turkish Preschool Teachers' Beliefs on Integrated Curriculum: Integration of Visual Arts with Other Activities

    Science.gov (United States)

    Ozturk, Elif; Erden, Feyza Tantekin

    2011-01-01

    This study investigates preschool teachers' beliefs about integrated curriculum and, more specifically, their beliefs about integration of visual arts with other activities. The participants of this study consisted of 255 female preschool teachers who are employed in preschools in Ankara, Turkey. For the study, teachers were asked to complete…

  10. Visual feature integration theory: past, present, and future.

    Science.gov (United States)

    Quinlan, Philip T

    2003-09-01

    Visual feature integration theory was one of the most influential theories of visual information processing in the last quarter of the 20th century. This article provides an exposition of the theory and a review of the associated data. In the past much emphasis has been placed on how the theory explains performance in various visual search tasks. The relevant literature is discussed and alternative accounts are described. Amendments to the theory are also set out. Many other issues concerning internal processes and representations implicated by the theory are reviewed. The article closes with a synopsis of what has been learned from consideration of the theory, and it is concluded that some of the issues may remain intractable unless appropriate neuroscientific investigations are carried out.

  11. Visualization and Integrated Data Mining of Disparate Information

    Energy Technology Data Exchange (ETDEWEB)

    Saffer, Jeffrey D.(OMNIVIZ, INC); Albright, Cory L.(BATTELLE (PACIFIC NW LAB)); Calapristi, Augustin J.(BATTELLE (PACIFIC NW LAB)); Chen, Guang (OMNIVIZ, INC); Crow, Vernon L.(BATTELLE (PACIFIC NW LAB)); Decker, Scott D.(BATTELLE (PACIFIC NW LAB)); Groch, Kevin M.(BATTELLE (PACIFIC NW LAB)); Havre, Susan L.(BATTELLE (PACIFIC NW LAB)); Malard, Joel (BATTELLE (PACIFIC NW LAB)); Martin, Tonya J.(BATTELLE (PACIFIC NW LAB)); Miller, Nancy E.(BATTELLE (PACIFIC NW LAB)); Monroe, Philip J.(OMNIVIZ, INC); Nowell, Lucy T.(BATTELLE (PACIFIC NW LAB)); Payne, Deborah A.(BATTELLE (PACIFIC NW LAB)); Reyes Spindola, Jorge F.(BATTELLE (PACIFIC NW LAB)); Scarberry, Randall E.(OMNIVIZ, INC); Sofia, Heidi J.(BATTELLE (PACIFIC NW LAB)); Stillwell, Lisa C.(OMNIVIZ, INC); Thomas, Gregory S.(BATTELLE (PACIFIC NW LAB)); Thurston, Sarah J.(OMNIVIZ, INC); Williams, Leigh K.(BATTELLE (PACIFIC NW LAB)); Zabriskie, Sean J.(OMNIVIZ, INC); MG Hicks

    2001-05-11

    The volumes and diversity of information in the discovery, development, and business processes within the chemical and life sciences industries require new approaches for analysis. Traditional list- or spreadsheet-based methods are easily overwhelmed by large amounts of data. Furthermore, generating strong hypotheses and, just as importantly, ruling out weak ones, requires integration across different experimental and informational sources. We have developed a framework for this integration, including common conceptual data models for multiple data types and linked visualizations that provide an overview of the entire data set, a measure of how each data record is related to every other record, and an assessment of the associations within the data set.
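
    The record-to-record relatedness measure mentioned above is not specified here; as a generic stand-in, the sketch below computes a cosine-similarity matrix over numeric feature vectors, one simple way to relate every record to every other record. The feature values are invented for the example.

```python
import numpy as np

def pairwise_cosine(records):
    """Cosine similarity between every pair of records (rows of a numeric
    feature matrix); a simple stand-in for a record-relatedness measure."""
    X = np.asarray(records, dtype=float)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.clip(norms, 1e-12, None)   # avoid division by zero
    return Xn @ Xn.T

# Three toy records described by four numeric features
print(pairwise_cosine([[1, 0, 2, 0], [0.9, 0.1, 1.8, 0], [0, 3, 0, 1]]))
```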

  12. Pathview Web: user friendly pathway visualization and data integration.

    Science.gov (United States)

    Luo, Weijun; Pant, Gaurav; Bhavnasi, Yeshvant K; Blanchard, Steven G; Brouwer, Cory

    2017-07-03

    Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. Integrating mechanisms of visual guidance in naturalistic language production.

    Science.gov (United States)

    Coco, Moreno I; Keller, Frank

    2015-05-01

    Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice span of the cued object and its perceptual competitor are similar; its latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
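
    The study quantifies the complexity of visual responses partly via the entropy of attentional landscapes. The exact definition used by the authors is not given here; the sketch below shows one plausible variant, the Shannon entropy of a 2D fixation histogram, with the bin count, scene size, and fixation coordinates all assumed for the example.

```python
import numpy as np

def attention_entropy(fix_x, fix_y, width, height, bins=16):
    """Shannon entropy (bits) of a 2D histogram of fixation positions:
    one simple way to quantify how spread out an attentional landscape is."""
    hist, _, _ = np.histogram2d(fix_x, fix_y, bins=bins,
                                range=[[0, width], [0, height]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy fixations on an 800x600 scene
print(attention_entropy([100, 120, 400, 600], [80, 90, 300, 500], 800, 600))
```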

  14. Treelink: data integration, clustering and visualization of phylogenetic trees.

    Science.gov (United States)

    Allende, Christian; Sohn, Erik; Little, Cedric

    2015-12-29

    Phylogenetic trees are central to a wide range of biological studies. In many of these studies, tree nodes need to be associated with a variety of attributes. For example, in studies concerned with viral relationships, tree nodes are associated with epidemiological information, such as location, age and subtype. Gene trees used in comparative genomics are usually linked with taxonomic information, such as functional annotations and events. A wide variety of tree visualization and annotation tools have been developed in the past, however none of them are intended for an integrative and comparative analysis. Treelink is platform-independent software for linking datasets and sequence files to phylogenetic trees. The application allows an automated integration of datasets to trees for operations such as classifying a tree based on a field or showing the distribution of selected data attributes in branches and leaves. Genomic and proteomic sequences can also be linked to the tree and extracted from internal and external nodes. A novel clustering algorithm to simplify trees and display the most divergent clades was also developed, where validation can be achieved using the data integration and classification function. Integrated geographical information allows ancestral character reconstruction for phylogeographic plotting based on parsimony and likelihood algorithms. Our software can successfully integrate phylogenetic trees with different data sources, and perform operations to differentiate and visualize those differences within a tree. File support includes the most popular formats such as newick and csv. Exporting visualizations as images, cluster outputs and genomic sequences is supported. Treelink is available as a web and desktop application at http://www.treelinkapp.com .
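
    Treelink is a stand-alone web and desktop application; the sketch below is not its code, only a minimal illustration of the core idea of linking a csv attribute table to the leaves of a newick tree. It assumes Biopython's Phylo module is available, and the tree and table contents are made up.

```python
import csv
from io import StringIO
from Bio import Phylo  # Biopython; assumed dependency

# Hypothetical inputs standing in for a real newick file and attribute table
newick = StringIO("((A:0.1,B:0.2):0.05,(C:0.3,D:0.4):0.1);")
table = StringIO("name,location,subtype\nA,Peru,1a\nB,Chile,1b\nC,Peru,2\nD,Bolivia,2\n")

tree = Phylo.read(newick, "newick")
attributes = {row["name"]: row for row in csv.DictReader(table)}

# Attach the csv attributes to matching leaves, as a tree-linking tool might
for leaf in tree.get_terminals():
    meta = attributes.get(leaf.name, {})
    print(leaf.name, meta.get("location"), meta.get("subtype"))
```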

  15. Deficit in visual temporal integration in autism spectrum disorders.

    Science.gov (United States)

    Nakano, Tamami; Ota, Haruhisa; Kato, Nobumasa; Kitazawa, Shigeru

    2010-04-07

    Individuals with autism spectrum disorders (ASD) are superior in processing local features. Frith and Happe conceptualize this cognitive bias as 'weak central coherence', implying that a local enhancement derives from a weakness in integrating local elements into a coherent whole. The suggested deficit has been challenged, however, because individuals with ASD were not found to be inferior to normal controls in holistic perception. In these opposing studies, however, subjects were encouraged to ignore local features and attend to the whole. Therefore, no one has directly tested whether individuals with ASD are able to integrate local elements over time into a whole image. Here, we report a weakness of individuals with ASD in naming familiar objects moved behind a narrow slit, which was worsened by the absence of local salient features. The results indicate that individuals with ASD have a clear deficit in integrating local visual information over time into a global whole, providing direct evidence for the weak central coherence hypothesis.

  16. A Visual Interface Diagram For Mapping Functions In Integrated Products

    DEFF Research Database (Denmark)

    Ingerslev, Mattias; Oliver Jespersen, Mikkel; Göhler, Simon Moritz

    2015-01-01

    In product development there is a recognized tendency towards increased functionality for each new product generation. This leads to more integrated and complex products, with the risk of development delays and quality issues as a consequence of lacking overview and transparency. The work described in this article has been conducted in collaboration with Novo Nordisk on the insulin injection device FlexTouch® as case product. The FlexTouch® reflects the characteristics of an integrated product with several functions shared between a relatively low number of parts. In this article we present a novel way of visualizing relations between parts and functions in highly integrated mechanical products. The result is an interface diagram that supports design teams in communication, decision making and design management. The diagram gives the designer an overview of the couplings and dependencies within a product ...

  17. SCSODC: Integrating Ocean Data for Visualization Sharing and Application

    International Nuclear Information System (INIS)

    Xu, C; Xie, Q; Li, S; Wang, D

    2014-01-01

    The South China Sea Ocean Data Center (SCSODC) was founded in 2010 in order to improve collecting and managing of ocean data of the South China Sea Institute of Oceanology (SCSIO). The mission of SCSODC is to ensure the long term scientific stewardship of ocean data, information and products – collected through research groups, monitoring stations and observation cruises – and to facilitate the efficient use and distribution to possible users. However, data sharing and applications were limited due to the characteristics of distribution and heterogeneity that made it difficult to integrate the data. To surmount those difficulties, the Data Sharing System has been developed by the SCSODC using the most appropriate information management and information technology. The Data Sharing System uses open standards and tools to promote the capability to integrate ocean data and to interact with other data portals or users and includes a full range of processes such as data discovery, evaluation and access combining C/S and B/S mode. It provides a visualized management interface for the data managers and a transparent and seamless data access and application environment for users. Users are allowed to access data using the client software and to access interactive visualization application interface via a web browser. The architecture, key technologies and functionality of the system are discussed briefly in this paper. It is shown that the system of SCSODC is able to implement web visualization sharing and seamless access to ocean data in a distributed and heterogeneous environment

  18. Conditioning Influences Audio-Visual Integration by Increasing Sound Saliency

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    2011-10-01

    Full Text Available We investigated the effect of prior conditioning of an auditory stimulus on audiovisual integration in a series of four psychophysical experiments. The experiments factorially manipulated the conditioning procedure (picture vs monetary conditioning) and multisensory paradigm (2AFC visual detection vs redundant target paradigm). In the conditioning sessions, subjects were presented with three pure tones (= conditioned stimulus, CS) that were paired with neutral, positive, or negative unconditioned stimuli (US; monetary: +50 euro cents, –50 cents, 0 cents; pictures: highly pleasant, unpleasant, and neutral IAPS). In a 2AFC visual selective attention paradigm, detection of near-threshold Gabors was improved by concurrent sounds that had previously been paired with a positive (monetary) or negative (picture) outcome relative to neutral sounds. In the redundant target paradigm, sounds previously paired with positive (monetary) or negative (picture) outcomes increased response speed to both auditory and audiovisual targets similarly. Importantly, prior conditioning did not increase the multisensory response facilitation (i.e., (A + V)/2 – AV) or the race model violation. Collectively, our results suggest that prior conditioning primarily increases the saliency of the auditory stimulus per se rather than influencing audiovisual integration directly. In turn, conditioned sounds are rendered more potent for increasing response accuracy or speed in detection of visual targets.
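
    The facilitation measure and race-model test named in the abstract are standard in the multisensory literature. The sketch below computes both on toy reaction-time data: the redundancy gain (A + V)/2 – AV and Miller's race-model inequality, under which the bimodal response-time CDF should not exceed the sum of the unimodal CDFs. The reaction times are invented for illustration.

```python
import numpy as np

def facilitation(rt_a, rt_v, rt_av):
    """Redundancy gain as in the abstract: mean unimodal RT minus bimodal RT,
    i.e. (A + V)/2 - AV, in the same units as the RTs."""
    return (np.mean(rt_a) + np.mean(rt_v)) / 2 - np.mean(rt_av)

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Miller's race-model inequality: the bimodal CDF should not exceed the
    sum of the unimodal CDFs; positive values indicate a violation."""
    cdf = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    return np.maximum(cdf(rt_av, t_grid) - (cdf(rt_a, t_grid) + cdf(rt_v, t_grid)), 0)

# Toy reaction times (ms); real analyses use per-subject, per-condition data
rt_a, rt_v, rt_av = [420, 450, 480], [430, 460, 500], [390, 400, 430]
print(facilitation(rt_a, rt_v, rt_av))            # positive -> redundancy gain
print(race_model_violation(rt_a, rt_v, rt_av, np.arange(350, 550, 50)))
```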

  19. Ray-based approach to integrated 3D visual communication

    Science.gov (United States)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in the next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. Then, the following discussions are concentrated on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, which is an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including some efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of virtual object surface for the compression of a tremendous amount of data, and a light ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  20. SCSODC: Integrating Ocean Data for Visualization Sharing and Application

    Science.gov (United States)

    Xu, C.; Li, S.; Wang, D.; Xie, Q.

    2014-02-01

    The South China Sea Ocean Data Center (SCSODC) was founded in 2010 in order to improve collecting and managing of ocean data of the South China Sea Institute of Oceanology (SCSIO). The mission of SCSODC is to ensure the long term scientific stewardship of ocean data, information and products - collected through research groups, monitoring stations and observation cruises - and to facilitate the efficient use and distribution to possible users. However, data sharing and applications were limited due to the characteristics of distribution and heterogeneity that made it difficult to integrate the data. To surmount those difficulties, the Data Sharing System has been developed by the SCSODC using the most appropriate information management and information technology. The Data Sharing System uses open standards and tools to promote the capability to integrate ocean data and to interact with other data portals or users and includes a full range of processes such as data discovery, evaluation and access combining C/S and B/S mode. It provides a visualized management interface for the data managers and a transparent and seamless data access and application environment for users. Users are allowed to access data using the client software and to access interactive visualization application interface via a web browser. The architecture, key technologies and functionality of the system are discussed briefly in this paper. It is shown that the system of SCSODC is able to implement web visualization sharing and seamless access to ocean data in a distributed and heterogeneous environment.

  1. Visual-auditory integration for visual search: a behavioral study in barn owls

    Directory of Open Access Journals (Sweden)

    Yael eHazan

    2015-02-01

    Full Text Available Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual- auditory integration at the neuronal level. However, behavioral data on visual- auditory integration in barn owls are lacking. The goal of this study was to examine if the integration of visual and auditory signals contributes to the process of guiding attention towards salient stimuli. We attached miniature wireless video cameras on barn owls' heads (OwlCam to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades. From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely towards the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search

  2. An Integrated Tone Mapping for High Dynamic Range Image Visualization

    Science.gov (United States)

    Liang, Lei; Pan, Jeng-Shyang; Zhuang, Yongjun

    2018-01-01

    There are two types of tone mapping operators for high dynamic range (HDR) image visualization. HDR images mapped by perceptual operators have a strong sense of realism but lose local details. Empirical operators can maximize the local detail of an HDR image, but the realism is weaker. A common tone mapping operator suitable for all applications is not available. This paper proposes a novel integrated tone mapping framework which can achieve conversion between empirical operators and perceptual operators. In this framework, the empirical operator is rendered based on an improved saliency map, which simulates the visual attention mechanism of the human eye in a natural scene. The results of objective evaluation prove the effectiveness of the proposed solution.
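
    The proposed saliency-based operator is not reproduced here; for orientation only, the sketch below implements a classic global (perceptual-style) tone curve of the L/(1+L) form, which compresses a high-dynamic-range luminance map into displayable values. The key value and test data are arbitrary.

```python
import numpy as np

def reinhard_global(hdr_luminance, key=0.18, eps=1e-6):
    """A classic global tone mapping operator, L/(1+L), shown only to
    illustrate the kind of operator discussed; it is not the saliency-based
    integrated operator proposed in the paper."""
    L = np.asarray(hdr_luminance, dtype=np.float64)
    L_avg = np.exp(np.mean(np.log(L + eps)))   # log-average luminance
    L_scaled = key * L / L_avg                 # map the scene key to mid-gray
    return L_scaled / (1.0 + L_scaled)         # compress to [0, 1)

# Toy HDR luminance spanning several orders of magnitude
hdr = np.array([[0.01, 0.1], [10.0, 1000.0]])
print(reinhard_global(hdr))
```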

  3. Neural Circuit to Integrate Opposing Motions in the Visual Field.

    Science.gov (United States)

    Mauss, Alex S; Pankova, Katarina; Arenz, Alexander; Nern, Aljoscha; Rubin, Gerald M; Borst, Alexander

    2015-07-16

    When navigating in their environment, animals use visual motion cues as feedback signals that are elicited by their own motion. Such signals are provided by wide-field neurons sampling motion directions at multiple image points as the animal maneuvers. Each one of these neurons responds selectively to a specific optic flow-field representing the spatial distribution of motion vectors on the retina. Here, we describe the discovery of a group of local, inhibitory interneurons in the fruit fly Drosophila key for filtering these cues. Using anatomy, molecular characterization, activity manipulation, and physiological recordings, we demonstrate that these interneurons convey direction-selective inhibition to wide-field neurons with opposite preferred direction and provide evidence for how their connectivity enables the computation required for integrating opposing motions. Our results indicate that, rather than sharpening directional selectivity per se, these circuit elements reduce noise by eliminating non-specific responses to complex visual information. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    Science.gov (United States)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
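
    A minimal sketch of the line integral convolution idea described above: each output pixel is the average of a noise texture sampled along the local streamline, traced forward and backward through the vector field. The fixed streamline length, step size, and nearest-neighbour sampling are simplifications for illustration, not the authors' implementation.

```python
import numpy as np

def lic_2d(vx, vy, noise, length=10, step=0.5):
    """Minimal 2D line integral convolution: for each pixel, average a noise
    texture along the local streamline traced forward and backward."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            total, count = 0.0, 0
            for sign in (+1.0, -1.0):              # forward and backward trace
                x, y = float(j), float(i)
                for _ in range(length):
                    xi, yi = int(round(x)), int(round(y))
                    if not (0 <= xi < w and 0 <= yi < h):
                        break
                    total += noise[yi, xi]
                    count += 1
                    u, v = vx[yi, xi], vy[yi, xi]
                    norm = np.hypot(u, v)
                    if norm < 1e-12:               # stop at stagnation points
                        break
                    x += sign * step * u / norm
                    y += sign * step * v / norm
            out[i, j] = total / max(count, 1)
    return out

# Example: circular flow imaged over white noise
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
vx, vy = -(yy - h / 2.0), (xx - w / 2.0)
image = lic_2d(vx, vy, np.random.rand(h, w))
print(image.shape)
```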

  5. NMDA receptor antagonist ketamine impairs feature integration in visual perception.

    Science.gov (United States)

    Meuwese, Julia D I; van Loon, Anouk M; Scholte, H Steven; Lirk, Philipp B; Vulink, Nienke C C; Hollmann, Markus W; Lamme, Victor A F

    2013-01-01

    Recurrent interactions between neurons in the visual cortex are crucial for the integration of image elements into coherent objects, such as in figure-ground segregation of textured images. Blocking N-methyl-D-aspartate (NMDA) receptors in monkeys can abolish neural signals related to figure-ground segregation and feature integration. However, it is unknown whether this also affects perceptual integration itself. Therefore, we tested whether ketamine, a non-competitive NMDA receptor antagonist, reduces feature integration in humans. We administered a subanesthetic dose of ketamine to healthy subjects who performed a texture discrimination task in a placebo-controlled double blind within-subject design. We found that ketamine significantly impaired performance on the texture discrimination task compared to the placebo condition, while performance on a control fixation task was much less impaired. This effect is not merely due to task difficulty or a difference in sedation levels. We are the first to show a behavioral effect on feature integration by manipulating the NMDA receptor in humans.

  6. NMDA receptor antagonist ketamine impairs feature integration in visual perception.

    Directory of Open Access Journals (Sweden)

    Julia D I Meuwese

    Full Text Available Recurrent interactions between neurons in the visual cortex are crucial for the integration of image elements into coherent objects, such as in figure-ground segregation of textured images. Blocking N-methyl-D-aspartate (NMDA) receptors in monkeys can abolish neural signals related to figure-ground segregation and feature integration. However, it is unknown whether this also affects perceptual integration itself. Therefore, we tested whether ketamine, a non-competitive NMDA receptor antagonist, reduces feature integration in humans. We administered a subanesthetic dose of ketamine to healthy subjects who performed a texture discrimination task in a placebo-controlled double blind within-subject design. We found that ketamine significantly impaired performance on the texture discrimination task compared to the placebo condition, while performance on a control fixation task was much less impaired. This effect is not merely due to task difficulty or a difference in sedation levels. We are the first to show a behavioral effect on feature integration by manipulating the NMDA receptor in humans.

  7. Property Integration: Componentless Design Techniques and Visualization Tools

    DEFF Research Database (Denmark)

    El-Halwagi, Mahmoud M; Glasgow, I.M.; Eden, Mario Richard

    2004-01-01

    Property integration is defined as a functionality-based, holistic approach to the allocation and manipulation of streams and processing units, which is based on tracking, adjusting, assigning, and matching functionalities throughout the process. Revised lever arm rules are devised to allow optimal allocation while maintaining intra- and interstream conservation of the property-based clusters. The property integration problem is mapped into the cluster domain. This dual problem is solved in terms of clusters and then mapped to the primal problem in the property domain. Several new rules are derived for graphical techniques, particularly systematic rules and visualization techniques for the identification of optimal mixing of streams and their allocation to units. Furthermore, a derivation of the correspondence between clustering arms and fractional contribution of streams is presented. This correspondence...

  8. Visualization of RNA structure models within the Integrative Genomics Viewer.

    Science.gov (United States)

    Busan, Steven; Weeks, Kevin M

    2017-07-01

    Analyses of the interrelationships between RNA structure and function are increasingly important components of genomic studies. The SHAPE-MaP strategy enables accurate RNA structure probing and realistic structure modeling of kilobase-length noncoding RNAs and mRNAs. Existing tools for visualizing RNA structure models are not suitable for efficient analysis of long, structurally heterogeneous RNAs. In addition, structure models are often advantageously interpreted in the context of other experimental data and gene annotation information, for which few tools currently exist. We have developed a module within the widely used and well supported open-source Integrative Genomics Viewer (IGV) that allows visualization of SHAPE and other chemical probing data, including raw reactivities, data-driven structural entropies, and data-constrained base-pair secondary structure models, in context with linear genomic data tracks. We illustrate the usefulness of visualizing RNA structure in the IGV by exploring structure models for a large viral RNA genome, comparing bacterial mRNA structure in cells with its structure under cell- and protein-free conditions, and comparing a noncoding RNA structure modeled using SHAPE data with a base-pairing model inferred through sequence covariation analysis. © 2017 Busan and Weeks; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  9. An Integrated Biomechanical Model for Microgravity-Induced Visual Impairment

    Science.gov (United States)

    Nelson, Emily S.; Best, Lauren M.; Myers, Jerry G.; Mulugeta, Lealem

    2012-01-01

    When gravitational unloading occurs upon entry to space, astronauts experience a major shift in the distribution of their bodily fluids, with a net headward movement. Measurements have shown that intraocular pressure spikes, and there is a strong suspicion that intracranial pressure also rises. Some astronauts in both short- and long-duration spaceflight develop visual acuity changes, which may or may not reverse upon return to earth gravity. To date, of the 36 U.S. astronauts who have participated in long-duration space missions on the International Space Station, 15 crew members have developed minor to severe visual decrements and anatomical changes. These ophthalmic changes include hyperopic shift, optic nerve distension, optic disc edema, globe flattening, choroidal folds, and elevated cerebrospinal fluid pressure. In order to understand the physical mechanisms behind these phenomena, NASA is developing an integrated model that appropriately captures whole-body fluids transport through lumped-parameter models for the cerebrospinal and cardiovascular systems. This data feeds into a finite element model for the ocular globe and retrobulbar subarachnoid space through time-dependent boundary conditions. Although tissue models and finite element representations of the corneo-scleral shell, retina, choroid and optic nerve head have been integrated to study pathological conditions such as glaucoma, the retrobulbar subarachnoid space behind the eye has received much less attention. This presentation will describe the development and scientific foundation of our holistic model.

  10. Integrative real-time geographic visualization of energy resources

    International Nuclear Information System (INIS)

    Sorokine, A.; Shankar, M.; Stovall, J.; Bhaduri, B.; King, T.; Fernandez, S.; Datar, N.; Omitaomu, O.

    2009-01-01

    'Full text:' Several models forecast that climatic changes will increase the frequency of disastrous events like droughts, hurricanes, and snow storms. Responding to these events, and also to power outages caused by system errors such as the 2003 North American blackout, requires an interconnect-wide real-time monitoring system for various energy resources. Such a system should be capable of providing situational awareness to its users in the government and energy utilities by dynamically visualizing the status of the elements of the energy grid infrastructure and supply chain in geographic contexts. We demonstrate an approach that relies on Google Earth and similar standards-based platforms as client-side geographic viewers with a data-dependent server component. The users of the system can view status information in spatial and temporal contexts. These data can be integrated with a wide range of geographic sources including all standard Google Earth layers and a large number of energy and environmental data feeds. In addition, we show a real-time spatio-temporal data sharing capability across the users of the system, novel methods for visualizing dynamic network data, and fine-grained access to very large multi-resolution geographic datasets for faster delivery of the data. The system can be extended to integrate contingency analysis results and other grid models to assess recovery and repair scenarios in the case of major disruption. (author)
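
    The client/server pattern this record describes (a KML viewer such as Google Earth periodically pulling dynamically generated status data) can be sketched as follows. This is only a minimal illustration: the URL, refresh interval, and asset names are invented for the example and are not part of the system described above.

```python
# Minimal sketch of server-generated KML plus a NetworkLink that makes the
# viewer re-fetch it on an interval. All names and values are illustrative.
NETWORK_LINK_KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <NetworkLink>
    <name>Grid status (auto-refresh)</name>
    <Link>
      <href>http://example.org/grid_status.kml</href>
      <refreshMode>onInterval</refreshMode>
      <refreshInterval>60</refreshInterval>
    </Link>
  </NetworkLink>
</kml>
"""

def placemark(name, lon, lat, status):
    """Return one KML Placemark describing a monitored asset."""
    return ("<Placemark>"
            f"<name>{name}</name>"
            f"<description>status: {status}</description>"
            f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
            "</Placemark>")

def status_document(assets):
    """Wrap per-asset placemarks into the KML document the link fetches."""
    body = "".join(placemark(*a) for a in assets)
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            f"{body}</Document></kml>")

if __name__ == "__main__":
    with open("grid_status.kml", "w") as f:
        f.write(status_document([("Substation A", -84.3, 35.9, "online")]))
```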

  11. Collinear facilitation and contour integration in autism: evidence for atypical visual integration.

    Science.gov (United States)

    Jachim, Stephen; Warren, Paul A; McLoughlin, Niall; Gowen, Emma

    2015-01-01

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impaired social interaction, atypical communication and a restricted repertoire of interests and activities. Altered sensory and perceptual experiences are also common, and a notable perceptual difference between individuals with ASD and controls is their superior performance in visual tasks where it may be beneficial to ignore global context. This superiority may be the result of atypical integrative processing. To explore this claim we investigated visual integration in adults with ASD (diagnosed with Asperger's Syndrome) using two psychophysical tasks thought to rely on integrative processing: collinear facilitation and contour integration. We measured collinear facilitation at different flanker orientation offsets and contour integration for both open and closed contours. Our results indicate that compared to matched controls, ASD participants show (i) reduced collinear facilitation, despite equivalent performance without flankers; and (ii) less benefit from closed contours in contour integration. These results indicate weaker visuospatial integration in adults with ASD and suggest that further studies using these types of paradigms would provide knowledge on how contextual processing is altered in ASD.

  12. Collinear facilitation and contour integration in autism: evidence for atypical visual integration

    Directory of Open Access Journals (Sweden)

    Stephen eJachim

    2015-03-01

    Full Text Available Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impaired social interaction, atypical communication and a restricted repertoire of interests and activities. Altered sensory and perceptual experiences are also common, and a notable perceptual difference between individuals with ASD and controls is their superior performance in visual tasks where it may be beneficial to ignore global context. This superiority may be the result of atypical integrative processing. To explore this claim we investigated visual integration in adults with ASD (diagnosed with Asperger’s Syndrome) using two psychophysical tasks thought to rely on integrative processing - collinear facilitation and contour integration. We measured collinear facilitation at different flanker orientation offsets and contour integration for both open and closed contours. Our results indicate that compared to matched controls, ASD participants show (i) reduced collinear facilitation, despite equivalent performance without flankers, and (ii) less benefit from closed contours in contour integration. These results indicate weaker visuospatial integration in adults with ASD and suggest that further studies using these types of paradigms would provide knowledge on how contextual processing is altered in ASD.

  13. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    Science.gov (United States)

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
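
    The "optimal cue integration" models discussed in this record reduce, for two independent Gaussian cues, to the standard maximum-likelihood combination rule shown below. This is the textbook form of the prediction (symbols chosen here for illustration), not an equation quoted from the paper.

```latex
\hat{s}_{\mathrm{comb}} = w_{\mathrm{vis}}\,\hat{s}_{\mathrm{vis}} + w_{\mathrm{vest}}\,\hat{s}_{\mathrm{vest}},
\qquad
w_{\mathrm{vis}} = \frac{1/\sigma_{\mathrm{vis}}^{2}}{1/\sigma_{\mathrm{vis}}^{2} + 1/\sigma_{\mathrm{vest}}^{2}},
\qquad
\sigma_{\mathrm{comb}}^{2} = \frac{\sigma_{\mathrm{vis}}^{2}\,\sigma_{\mathrm{vest}}^{2}}{\sigma_{\mathrm{vis}}^{2} + \sigma_{\mathrm{vest}}^{2}}
\le \min\!\left(\sigma_{\mathrm{vis}}^{2},\,\sigma_{\mathrm{vest}}^{2}\right).
```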

  14. Cognitive and Developmental Influences in Visual-Motor Integration Skills in Young Children

    Science.gov (United States)

    Decker, Scott L.; Englund, Julia A.; Carboni, Jessica A.; Brooks, Janell H.

    2011-01-01

    Measures of visual-motor integration skills continue to be widely used in psychological assessments with children. However, the construct validity of many visual-motor integration measures remains unclear. In this study, we investigated the relative contributions of maturation and cognitive skills to the development of visual-motor integration…

  15. Collinear integration affects visual search at V1.

    Science.gov (United States)

    Chow, Hiu Mei; Jingling, Li; Tseng, Chia-huei

    2013-08-29

    Perceptual grouping plays an indispensable role in figure-ground segregation and attention distribution. For example, a column pops out if it contains element bars orthogonal to uniformly oriented element bars. Jingling and Tseng (2013) have reported that contextual grouping in a column matters to visual search behavior: When a column is grouped into a collinear (snakelike) structure, a target positioned on it became harder to detect than on other noncollinear (ladderlike) columns. How and where perceptual grouping interferes with selective attention is still largely unknown. This article contributes to this little-studied area by asking whether collinear contour integration interacts with visual search before or after binocular fusion. We first identified that the previously mentioned search impairment occurs with a distractor of five or nine elements but not one element in a 9 × 9 search display. To pinpoint the site of this effect, we presented the search display with a short collinear bar (one element) to one eye and the extending collinear bars to the other eye, such that when properly fused, the combined binocular collinear length (nine elements) exceeded the critical length. No collinear search impairment was observed, implying that collinear information before binocular fusion shaped participants' search behavior, although contour extension from the other eye after binocular fusion enhanced the effect of collinearity on attention. Our results suggest that attention interacts with perceptual grouping as early as V1.

  16. Integrated visualization of remote sensing data using Google Earth

    Science.gov (United States)

    Castella, M.; Rigo, T.; Argemi, O.; Bech, J.; Pineda, N.; Vilaclara, E.

    2009-09-01

    The need for advanced visualization tools for meteorological data has led in recent years to the development of sophisticated software packages either by observing systems manufacturers or by third-party solution providers. For example, manufacturers of remote sensing systems such as weather radars or lightning detection systems include zoom, product selection, and archive access capabilities, as well as quantitative tools for data analysis, as standard features which are highly appreciated in weather surveillance or post-event case study analysis. However, the fact that each manufacturer has its own visualization system and data formats hampers the usability and integration of different data sources. In this context, Google Earth (GE) offers the possibility of combining several graphical information types in a unique visualization system which can be easily accessed by users. The Meteorological Service of Catalonia (SMC) has been evaluating the use of GE as a visualization platform for surveillance tasks in adverse weather events. First experiences are related to the real-time integration of remote sensing data: radar, lightning, and satellite. The tool shows the animation of the combined products over the last hour, giving a good picture of the meteorological situation. One of the main advantages of this product is that it is easy to install on many computers and does not have high computational requirements. In addition, GE provides information about the areas most affected by heavy rain or other weather phenomena. On the other hand, the main disadvantage is that the product offers only qualitative information; quantitative data are only available through the graphical display (i.e., through color scales that are not associated with physical values users can access easily). The procedure developed to run in real time is divided into three parts. First of all, a crontab file launches different applications, depending on the data type
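
    The cron-driven pipeline described above could, for example, end each cycle by wrapping the latest radar composite in a time-stamped KML GroundOverlay for Google Earth to animate. The sketch below is only illustrative; file names, bounding box, and timing are assumptions, not details from the record.

```python
# Sketch: emit a KML GroundOverlay that drapes the latest radar image over
# its bounding box, with a TimeStamp so the viewer can animate recent products.
from datetime import datetime, timezone

def radar_overlay_kml(image_href, north, south, east, west, when=None):
    """Return a KML string for one time-stamped radar GroundOverlay."""
    when = when or datetime.now(timezone.utc)
    stamp = when.strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>Radar composite {stamp}</name>
    <TimeStamp><when>{stamp}</when></TimeStamp>
    <Icon><href>{image_href}</href></Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>"""

if __name__ == "__main__":
    # Illustrative bounds roughly covering Catalonia; values are placeholders.
    kml = radar_overlay_kml("radar_latest.png", north=43.0, south=40.0,
                            east=3.5, west=0.0)
    with open("radar_latest.kml", "w") as f:
        f.write(kml)
```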

  17. Integration of Visual and Vestibular Information Used to Discriminate Rotational Self-Motion

    Directory of Open Access Journals (Sweden)

    Florian Soyka

    2011-10-01

    Full Text Available Do humans integrate visual and vestibular information in a statistically optimal fashion when discriminating rotational self-motion stimuli? Recent studies are inconclusive as to whether such integration occurs when discriminating heading direction. In the present study eight participants were consecutively rotated twice (2 s sinusoidal acceleration) on a chair about an earth-vertical axis in vestibular-only, visual-only, and visual-vestibular trials. The visual stimulus was a video of a moving stripe pattern, synchronized with the inertial motion. Peak acceleration of the reference stimulus was varied and participants reported which rotation was perceived as faster. Just-noticeable differences (JNDs) were estimated by fitting psychometric functions. The visual-vestibular JND measurements are too high compared to the predictions based on the unimodal JND estimates, and there is no JND reduction between visual-vestibular and visual-alone estimates. These findings may be explained by visual capture. Alternatively, the visual precision may not be equal between visual-vestibular and visual-alone conditions, since it has been shown that visual motion sensitivity is reduced during inertial self-motion. Therefore, measuring visual-alone JNDs with an underlying uncorrelated inertial motion might yield higher visual-alone JNDs compared to the stationary measurement. Theoretical calculations show that higher visual-alone JNDs would result in predictions consistent with the JND measurements for the visual-vestibular condition.
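
    The "predictions based on the unimodal JND estimates" mentioned in this record follow from the maximum-likelihood combination rule shown earlier in the document: under optimal integration, squared JNDs combine harmonically, so the bimodal JND should not exceed the smaller unimodal one. A small numeric illustration with made-up values (not the study's data):

```python
import math

def predicted_bimodal_jnd(jnd_visual, jnd_vestibular):
    """Optimal-integration prediction for the bimodal JND.

    Squared JNDs (proportional to variances) combine harmonically, so the
    predicted bimodal JND is always <= the smaller unimodal JND.
    """
    return math.sqrt((jnd_visual ** 2 * jnd_vestibular ** 2) /
                     (jnd_visual ** 2 + jnd_vestibular ** 2))

# Made-up unimodal JNDs (arbitrary units), purely for illustration:
print(predicted_bimodal_jnd(0.20, 0.30))   # ~0.166, below both inputs
```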

  18. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase one, volume 4 : use of knowledge integrated visual analytics system in supporting bridge management.

    Science.gov (United States)

    2009-12-01

    The goals of integration should be: supporting domain-oriented data analysis through the use of a knowledge-augmented visual analytics system. In this project, we focus on providing interactive data exploration for bridge management. ...

  19. The relationship between better-eye and integrated visual field mean deviation and visual disability.

    Science.gov (United States)

    Arora, Karun S; Boland, Michael V; Friedman, David S; Jefferys, Joan L; West, Sheila K; Ramulu, Pradeep Y

    2013-12-01

    To determine the extent of difference between better-eye visual field (VF) mean deviation (MD) and integrated VF (IVF) MD among Salisbury Eye Evaluation (SEE) subjects and a larger group of glaucoma clinic subjects and to assess how those measures relate to objective and subjective measures of ability/performance in SEE subjects. Retrospective analysis of population- and clinic-based samples of adults. A total of 490 SEE and 7053 glaucoma clinic subjects with VF loss (MD ≤-3 decibels [dB] in at least 1 eye). Visual field testing was performed in each eye, and IVF MD was calculated. Differences between better-eye and IVF MD were calculated for SEE and clinic-based subjects. In SEE subjects with VF loss, models were constructed to compare the relative impact of better-eye and IVF MD on driving habits, mobility, self-reported vision-related function, and reading speed. Difference between better-eye and IVF MD and relationship of better-eye and IVF MD with performance measures. The median difference between better-eye and IVF MD was 0.41 dB (interquartile range [IQR], -0.21 to 1.04 dB) and 0.72 dB (IQR, 0.04-1.45 dB) for SEE subjects and clinic-based patients with glaucoma, respectively, with differences of ≥ 2 dB between the 2 MDs observed in 9% and 18% of the groups, respectively. Among SEE subjects with VF loss, both MDs demonstrated similar associations with multiple ability and performance metrics as judged by the presence/absence of a statistically significant association between the MD and the metric, the magnitude of observed associations (odds ratios, rate ratios, or regression coefficients associated with 5-dB decrements in MD), and the extent of variability in the metric explained by the model (R(2)). Similar associations of similar magnitude also were noted for the subgroup of subjects with glaucoma and subjects in whom better-eye and IVF MD differed by ≥ 2 dB. The IVF MD rarely differs from better-eye MD, and similar associations between VF loss and

  20. Visualization of the Eastern Renewable Generation Integration Study: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Gruchalla, Kenny; Novacheck, Joshua; Bloom, Aaron

    2016-12-01

    The Eastern Renewable Generation Integration Study (ERGIS) explores the operational impacts of the widespread adoption of wind and solar photovoltaic (PV) resources in the U.S. Eastern Interconnection and Quebec Interconnection (collectively, EI). In order to understand some of the economic and reliability challenges of managing hundreds of gigawatts of wind and PV generation, we developed state-of-the-art tools, data, and models for simulating power system operations using hourly unit commitment and 5-minute economic dispatch over an entire year. Using NREL's high-performance computing capabilities and new methodologies to model operations, we found that the EI, as simulated with evolutionary change in 2026, could balance the variability and uncertainty of wind and PV at a 5-minute level under a variety of conditions. A large-scale display and a combination of multiple coordinated views and small multiples were used to visually analyze the four large, highly multivariate scenarios with high spatial and temporal resolutions.

  1. SEURAT: visual analytics for the integrated analysis of microarray data.

    Science.gov (United States)

    Gribov, Alexander; Sill, Martin; Lück, Sonja; Rücker, Frank; Döhner, Konstanze; Bullinger, Lars; Benner, Axel; Unwin, Antony

    2010-06-03

    In translational cancer research, gene expression data is collected together with clinical data and genomic data arising from other chip based high throughput technologies. Software tools for the joint analysis of such high dimensional data sets together with clinical data are required. We have developed an open source software tool which provides interactive visualization capability for the integrated analysis of high-dimensional gene expression data together with associated clinical data, array CGH data and SNP array data. The different data types are organized by a comprehensive data manager. Interactive tools are provided for all graphics: heatmaps, dendrograms, barcharts, histograms, eventcharts and a chromosome browser, which displays genetic variations along the genome. All graphics are dynamic and fully linked so that any object selected in a graphic will be highlighted in all other graphics. For exploratory data analysis the software provides unsupervised data analytics like clustering, seriation algorithms and biclustering algorithms. The SEURAT software meets the growing needs of researchers to perform joint analysis of gene expression, genomical and clinical data.

  2. SEURAT: Visual analytics for the integrated analysis of microarray data

    Directory of Open Access Journals (Sweden)

    Bullinger Lars

    2010-06-01

    Full Text Available Background: In translational cancer research, gene expression data is collected together with clinical data and genomic data arising from other chip based high throughput technologies. Software tools for the joint analysis of such high dimensional data sets together with clinical data are required. Results: We have developed an open source software tool which provides interactive visualization capability for the integrated analysis of high-dimensional gene expression data together with associated clinical data, array CGH data and SNP array data. The different data types are organized by a comprehensive data manager. Interactive tools are provided for all graphics: heatmaps, dendrograms, barcharts, histograms, eventcharts and a chromosome browser, which displays genetic variations along the genome. All graphics are dynamic and fully linked so that any object selected in a graphic will be highlighted in all other graphics. For exploratory data analysis the software provides unsupervised data analytics like clustering, seriation algorithms and biclustering algorithms. Conclusions: The SEURAT software meets the growing needs of researchers to perform joint analysis of gene expression, genomical and clinical data.

  3. An Investigation of Visual Contour Integration Ability in Relation to Writing Performance in Primary School Students

    Science.gov (United States)

    Li-Tsang, Cecilia W. P.; Wong, Agnes S. K.; Chan, Jackson Y.; Lee, Amos Y. T.; Lam, Miko C. Y.; Wong, C. W.; Lu, Zhonglin

    2012-01-01

    A previous study found a visual deficit in contour integration in English readers with dyslexia (Simmers & Bex, 2001). Visual contour integration may play an even more significant role in Chinese handwriting particularly due to its logographic presentation (Lam, Au, Leung, & Li-Tsang, 2011). The current study examined the relationship…

  4. A Motor-Skills Programme to Enhance Visual Motor Integration of Selected Pre-School Learners

    Science.gov (United States)

    Africa, Eileen K.; van Deventer, Karel J.

    2017-01-01

    Pre-schoolers are in a window period for motor skill development. Visual-motor integration (VMI) is the foundation for academic and sport skills. Therefore, it must develop before formal schooling. This study attempted to improve VMI skills. VMI skills were measured with the "Beery-Buktenica developmental test of visual-motor integration 6th…

  5. Enhancing creative problem solving in an integrated visual art and geometry program: A pilot study

    NARCIS (Netherlands)

    Schoevers, E.M.; Kroesbergen, E.H.; Pitta-Pantazi, D.

    2017-01-01

    This article describes a new pedagogical method, an integrated visual art and geometry program, which has the aim to increase primary school students' creative problem solving and geometrical ability. This paper presents the rationale for integrating visual art and geometry education. Furthermore

  6. Predictors of Visual-Motor Integration in Children with Intellectual Disability

    Science.gov (United States)

    Memisevic, Haris; Sinanovic, Osman

    2012-01-01

    The aim of this study was to assess the influence of sex, age, level and etiology of intellectual disability on visual-motor integration in children with intellectual disability. The sample consisted of 90 children with intellectual disability between 7 and 15 years of age. Visual-motor integration was measured using the Acadia test of…

  7. Keeping in Touch With the Visual System: Spatial Alignment and Multisensory Integration of Visual-Somatosensory Inputs

    Directory of Open Access Journals (Sweden)

    Jeannette Rose Mahoney

    2015-08-01

    Full Text Available Correlated sensory inputs coursing along the individual sensory processing hierarchies arrive at multisensory convergence zones in cortex where inputs are processed in an integrative manner. The exact hierarchical level of multisensory convergence zones and the timing of their inputs are still under debate, although increasingly, evidence points to multisensory integration at very early sensory processing levels. The objective of the current study was to determine, both psychophysically and electrophysiologically, whether differential visual-somatosensory integration patterns exist for stimuli presented to the same versus opposite hemifields. Using high-density electrical mapping and complementary psychophysical data, we examined multisensory integrative processing for combinations of visual and somatosensory inputs presented to both left and right spatial locations. We assessed how early during sensory processing visual-somatosensory (VS) interactions were seen in the event-related potential and whether spatial alignment of the visual and somatosensory elements resulted in differential integration effects. Reaction times to all VS pairings were significantly faster than those to the unisensory conditions, regardless of spatial alignment, pointing to engagement of integrative multisensory processing in all conditions. In support, electrophysiological results revealed significant differences between multisensory simultaneous VS and summed V+S responses, regardless of the spatial alignment of the constituent inputs. Nonetheless, multisensory effects were earlier in the aligned conditions, and were found to be particularly robust in the case of right-sided inputs (beginning at just 55 ms). In contrast to previous work on audio-visual and audio-somatosensory inputs, the current work suggests a degree of spatial specificity to the earliest detectable multisensory integrative effects in response to visual-somatosensory pairings.
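
    The comparison of "multisensory simultaneous VS and summed V+S responses" described above can be sketched as an additive-model test on trial-averaged potentials. The arrays, sampling rate, and analysis window below are synthetic placeholders, not the study's data or pipeline.

```python
import numpy as np

def multisensory_interaction(erp_vs, erp_v, erp_s, times, window=(0.055, 0.100)):
    """Mean difference VS - (V + S) inside an analysis window (seconds).

    erp_vs, erp_v, erp_s : 1D trial-averaged potentials of equal length.
    A nonzero value indicates a deviation from simple summation, i.e. a
    multisensory interaction.
    """
    mask = (times >= window[0]) & (times <= window[1])
    return float(np.mean(erp_vs[mask] - (erp_v[mask] + erp_s[mask])))

# Synthetic demo at 1 kHz sampling, for illustration only.
t = np.arange(-0.1, 0.4, 0.001)
v = np.exp(-((t - 0.10) ** 2) / 0.001)          # mock visual ERP
s = 0.8 * np.exp(-((t - 0.08) ** 2) / 0.001)    # mock somatosensory ERP
vs = 0.9 * (v + s)                              # sub-additive bisensory ERP
print(multisensory_interaction(vs, v, s, t))
```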

  8. Executive functions as predictors of visual-motor integration in children with intellectual disability.

    Science.gov (United States)

    Memisevic, Haris; Sinanovic, Osman

    2013-12-01

    The goal of this study was to assess the relationship between visual-motor integration and executive functions, and in particular, the extent to which executive functions can predict visual-motor integration skills in children with intellectual disability. The sample consisted of 90 children (54 boys, 36 girls; M age = 11.3 yr., SD = 2.7, range 7-15) with intellectual disabilities of various etiologies. The measures of executive functions were 8 subscales of the Behavioral Rating Inventory of Executive Function (BRIEF), consisting of Inhibition, Shifting, Emotional Control, Initiating, Working memory, Planning, Organization of material, and Monitoring. Visual-motor integration was measured with the Acadia test of visual-motor integration (VMI). Regression analysis revealed that the BRIEF subscales explained 38% of the variance in VMI scores. Of all the BRIEF subscales, only two were statistically significant predictors of visual-motor integration: Working memory and Monitoring. Possible implications of this finding are further elaborated.

  9. Evidence for optimal integration of visual feature representations across saccades

    NARCIS (Netherlands)

    Oostwoud Wijdenes, L.; Marshall, L.; Bays, P.M.

    2015-01-01

    We explore the visual world through saccadic eye movements, but saccades also present a challenge to visual processing by shifting externally stable objects from one retinal location to another. The brain could solve this problem in two ways: by overwriting preceding input and starting afresh with

  10. Visual updating across saccades by working memory integration

    NARCIS (Netherlands)

    Oostwoud Wijdenes, L.; Marshall, L.; Bays, P.M.

    2015-01-01

    We explore the visual world through saccadic eye movements, but saccades also present a challenge to visual processing, by shifting externally-stable objects from one retinal location to another. The brain could solve this problem in two ways: by overwriting preceding input and starting afresh with

  11. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    Science.gov (United States)

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. Here we circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Exploring the Link between Visual Perception, Visual-Motor Integration, and Reading in Normal Developing and Impaired Children using DTVP-2.

    Science.gov (United States)

    Bellocchi, Stéphanie; Muneaux, Mathilde; Huau, Andréa; Lévêque, Yohana; Jover, Marianne; Ducrot, Stéphanie

    2017-08-01

    Reading is known to be primarily a linguistic task. However, to successfully decode written words, children also need to develop good visual-perception skills. Furthermore, motor skills are implicated in letter recognition and reading acquisition. Three studies have been designed to determine the link between reading, visual perception, and visual-motor integration using the Developmental Test of Visual Perception version 2 (DTVP-2). Study 1 tests how visual perception and visual-motor integration in kindergarten predict reading outcomes in Grade 1, in typically developing children. Study 2 is aimed at finding out if these skills can be seen as clinical markers in dyslexic children (DD). Study 3 determines if visual-motor integration and motor-reduced visual perception can distinguish DD children according to whether or not they exhibit developmental coordination disorder (DCD). Results showed that phonological awareness and visual-motor integration predicted reading outcomes one year later. DTVP-2 demonstrated similarities and differences in visual-motor integration and motor-reduced visual perception between children with DD, DCD, and both of these deficits. DTVP-2 is a suitable tool to investigate links between visual perception, visual-motor integration and reading, and to differentiate cognitive profiles of children with developmental disabilities (i.e. DD, DCD, and comorbid children). Copyright © 2017 John Wiley & Sons, Ltd.

  13. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Suppressed visual looming stimuli are not integrated with auditory looming signals: Evidence from continuous flash suppression.

    Science.gov (United States)

    Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond

    2015-01-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  15. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex.

    Science.gov (United States)

    Sugihara, Tadashi; Diltz, Mark D; Averbeck, Bruno B; Romanski, Lizabeth M

    2006-10-25

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O'Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication.

  16. Parallel development of contour integration and visual contrast sensitivity at low spatial frequencies

    DEFF Research Database (Denmark)

    Benedek, Krisztina; Janáky, Márta; Braunitzer, Gábor

    2010-01-01

    It has been suggested that visual contrast sensitivity and contour integration functions exhibit a late maturation during adolescence. However, the relationship between these functions has not been investigated. The aim of this study was to assess the development of visual contrast sensitivity...

  17. Integration of visual and inertial cues in perceived heading of self-motion

    NARCIS (Netherlands)

    Winkel, K.N. de; Weesie, H.M.; Werkhoven, P.J.; Groen, E.L.

    2010-01-01

    In the present study, we investigated whether the perception of heading of linear self-motion can be explained by Maximum Likelihood Integration (MLI) of visual and non-visual sensory cues. MLI predicts smaller variance for multisensory judgments compared to unisensory judgments. Nine participants

  18. Integration of visual and inertial cues in the perception of angular self-motion

    NARCIS (Netherlands)

    Winkel, K.N. de; Soyka, F.; Barnett-Cowan, M.; Bülthoff, H.H.; Groen, E.L.; Werkhoven, P.J.

    2013-01-01

    The brain is able to determine angular self-motion from visual, vestibular, and kinesthetic information. There is compelling evidence that both humans and non-human primates integrate visual and inertial (i.e., vestibular and kinesthetic) information in a statistically optimal fashion when

  19. Supporting Knowledge Integration in Chemistry with a Visualization-Enhanced Inquiry Unit

    Science.gov (United States)

    Chiu, Jennifer L.; Linn, Marcia C.

    2014-01-01

    This paper describes the design and impact of an inquiry-oriented online curriculum that takes advantage of dynamic molecular visualizations to improve students' understanding of chemical reactions. The visualization-enhanced unit uses research-based guidelines following the knowledge integration framework to help students develop coherent…

  20. DEVELOPMENT OF FINE MOTOR COORDINATION AND VISUAL-MOTOR INTEGRATION IN PRESCHOOL CHILDREN

    OpenAIRE

    MEMISEVIC Haris; HADZIC Selmir

    2015-01-01

    Fine motor skills are a prerequisite for many everyday activities and they are a good predictor of a child's later academic outcome. The goal of the present study was to assess the effects of age on the development of fine motor coordination and visual-motor integration in preschool children. The sample for this study consisted of 276 preschool children from Canton Sarajevo, Bosnia and Herzegovina. We assessed children's motor skills with the Beery Visual Motor Integration Test and Lafayette Pegbo...

  1. Integrating Statistical Visualization Research into the Political Science Classroom

    Science.gov (United States)

    Draper, Geoffrey M.; Liu, Baodong; Riesenfeld, Richard F.

    2011-01-01

    The use of computer software to facilitate learning in political science courses is well established. However, the statistical software packages used in many political science courses can be difficult to use and counter-intuitive. We describe the results of a preliminary user study suggesting that visually-oriented analysis software can help…

  2. The Integration of Visual Expression in Music Education for Children

    Science.gov (United States)

    Roels, Johanna Maria; Van Petegem, Peter

    2014-01-01

    This study is the result of a two-year experimental collaboration with children from my piano class. Together, the children and I designed a method that uses visual expression as a starting point for composing and visualising music-theoretical concepts. In this method various dimensions of musicality such as listening, creating, noting down and…

  3. Integrating 3D Visualization and GIS in Planning Education

    Science.gov (United States)

    Yin, Li

    2010-01-01

    Most GIS-related planning practices and education are currently limited to two-dimensional mapping and analysis although 3D GIS is a powerful tool to study the complex urban environment in its full spatial extent. This paper reviews current GIS and 3D visualization uses and development in planning practice and education. Current literature…

  4. Deconstruction of spatial integrity in visual stimulus detected by modulation of synchronized activity in cat visual cortex.

    Science.gov (United States)

    Zhou, Zhiyi; Bernard, Melanie R; Bonds, A B

    2008-04-02

    Spatiotemporal relationships among contour segments can influence synchronization of neural responses in the primary visual cortex. We performed a systematic study to dissociate the impact of spatial and temporal factors in the signaling of contour integration via synchrony. In addition, we characterized the temporal evolution of this process to clarify potential underlying mechanisms. With a 10 x 10 microelectrode array, we recorded the simultaneous activity of multiple cells in the cat primary visual cortex while stimulating with drifting sine-wave gratings. We preserved temporal integrity and systematically degraded spatial integrity of the sine-wave gratings by adding spatial noise. Neural synchronization was analyzed in the time and frequency domains by conducting cross-correlation and coherence analyses. The general association between neural spike trains depends strongly on spatial integrity, with coherence in the gamma band (35-70 Hz) showing greater sensitivity to the change of spatial structure than other frequency bands. Analysis of the temporal dynamics of synchronization in both time and frequency domains suggests that spike timing synchronization is triggered nearly instantaneously by coherent structure in the stimuli, whereas frequency-specific oscillatory components develop more slowly, presumably through network interactions. Our results suggest that, whereas temporal integrity is required for the generation of synchrony, spatial integrity is critical in triggering subsequent gamma band synchronization.
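
    The frequency-domain part of the analysis described above (coherence between simultaneously recorded responses, with emphasis on the 35-70 Hz gamma band) can be sketched on binned spike trains as follows; the bin width, segment length, and synthetic data are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import coherence

def gamma_band_coherence(spikes_a, spikes_b, fs=1000.0, band=(35.0, 70.0)):
    """Mean magnitude-squared coherence in the gamma band.

    spikes_a, spikes_b : 1D arrays of 0/1 spike indicators sampled at fs (Hz).
    """
    f, cxy = coherence(spikes_a, spikes_b, fs=fs, nperseg=256)
    in_band = (f >= band[0]) & (f <= band[1])
    return float(cxy[in_band].mean())

# Synthetic example: two trains sharing a common 50 Hz rate modulation.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 1000.0)
rate = 0.02 * (1 + np.sin(2 * np.pi * 50 * t))   # spike probability per bin
train_a = (rng.random(t.size) < rate).astype(float)
train_b = (rng.random(t.size) < rate).astype(float)
print(gamma_band_coherence(train_a, train_b))
```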

  5. SVIP-N 1.0: An integrated visualization platform for neutronics analysis

    International Nuclear Information System (INIS)

    Luo Yuetong; Long Pengcheng; Wu Guoyong; Zeng Qin; Hu Liqin; Zou Jun

    2010-01-01

    Post-processing is an important part of neutronics analysis, and SVIP-N 1.0 (scientific visualization integrated platform for neutronics analysis) is designed to ease post-processing of neutronics analysis through visualization technologies. The main capabilities of SVIP-N 1.0 include: (1) the ability to manage neutronics analysis results; (2) the ability to preprocess neutronics analysis results; (3) the ability to visualize neutronics analysis result data in different ways. The paper describes the system architecture and main features of SVIP-N, some advanced visualization techniques used in SVIP-N 1.0, and some preliminary applications, such as ITER.

  6. Integration of Multiple Cues for Visual Gloss Evaluation

    OpenAIRE

    Leloup, Frédéric B.; Hanselaer, Peter; Pointer, Michael R.; Dutré, Philip

    2012-01-01

    This study reports on a psychophysical experiment with real stimuli that differ in multiple visual gloss criteria. Four samples were presented to 15 observers under different conditions of illumination, resulting in a series of 16 stimuli. Through pairwise comparisons, a gloss scale was derived and the observers' strategy to evaluate gloss was investigated. The preference probability matrix P indicated a dichotomy among observers. A first group of observers used the distinctnes...

  7. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF.

    Directory of Open Access Journals (Sweden)

    Nouman Ali

    Full Text Available With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration.
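
    In the spirit of the record above (and not the authors' code), a bag-of-visual-words pipeline that integrates SIFT and SURF descriptors could look like the sketch below. It assumes opencv-contrib-python built with the non-free SURF module and two vocabularies (KMeans models) already trained on sampled descriptors; vocabulary sizes and names are illustrative.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def bovw_histogram(descriptors, vocabulary):
    """Quantize descriptors against a trained KMeans vocabulary and return
    an L1-normalized visual-word histogram."""
    words = vocabulary.predict(descriptors.astype(np.float64))
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / (hist.sum() or 1.0)

def combined_signature(image_path, sift_vocab, surf_vocab):
    """Concatenate SIFT and SURF visual-word histograms for one image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, d_sift = cv2.SIFT_create().detectAndCompute(gray, None)
    _, d_surf = cv2.xfeatures2d.SURF_create().detectAndCompute(gray, None)
    return np.concatenate([bovw_histogram(d_sift, sift_vocab),
                           bovw_histogram(d_surf, surf_vocab)])

def train_vocabulary(descriptor_stack, k=200):
    """Fit a k-word vocabulary on a stack of training descriptors."""
    return KMeans(n_clusters=k, n_init=10).fit(descriptor_stack.astype(np.float64))
```

    Retrieval would then rank database images by a distance between these concatenated histograms, so the combined signature carries the scale/rotation robustness of SIFT alongside the illumination robustness of SURF.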

  8. A biologically inspired neural model for visual and proprioceptive integration including sensory training.

    Science.gov (United States)

    Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi

    2013-12-01

    Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian inference for a single cause (source) and causal Bayesian inference for two causes (for two senses such as the visual and auditory systems). In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding, which is able to mimic multisensory integration in neural centers of the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process of visual and proprioceptive information in humans. The training process in multisensory integration has received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioception training and eight subjects with visual training. Results of the experiments show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) the means of visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrement is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors in the main test stage, because in the main test the subject has to focus on two senses. The results of the experiments in this paper are in agreement with the results of the neural model.

  9. Integrated Visualization Environment for Science Mission Modeling, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA is emphasizing the use of larger, more integrated models in conjunction with systems engineering tools and decision support systems. These tools place a...

  10. Structural Integrity Inspection and Visualization System, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Based on the successful feasibility demonstration in Phase I, Physical Optics Corporation (POC) proposes to continue the development of a novel Structural Integrity...

  11. Can Cultural Behavior Have a Negative Impact on the Development of Visual Integration Pathways?

    Science.gov (United States)

    Pretorius, E.; Naude, H.; van Vuuren, C. J.

    2002-01-01

    Contends that cultural practices such as carrying the baby on the mother's back for prolonged periods can impact negatively on the development of visual integration pathways during the sensorimotor stage by preventing adequate crawling. Maintains that crawling is essential for cross-modality integration and that higher mental functions may…

  12. Auditory-visual integration of emotional signals in a virtual environment for cynophobia.

    Science.gov (United States)

    Taffou, Marine; Chapoulie, Emmanuelle; David, Adrien; Guerchouche, Rachid; Drettakis, George; Viaud-Delmon, Isabelle

    2012-01-01

    Cynophobia (dog phobia) has relevant components in both the visual and auditory modalities. In order to investigate the efficacy of virtual reality (VR) exposure-based treatment for cynophobia, we studied the efficiency of auditory-visual environments in generating presence and emotion. We conducted an evaluation test with healthy participants sensitive to cynophobia in order to assess the capacity of auditory-visual virtual environments (VE) to generate fear reactions. Our application involves both high-fidelity visual stimulation displayed in an immersive space and 3D sound. This specificity enables us to present and spatially manipulate fearful stimuli in the auditory modality, the visual modality, or both. Our specific presentation of animated dog stimuli creates an environment that is highly arousing, suggesting that VR is a promising tool for cynophobia treatment and that manipulating auditory-visual integration might provide a way to modulate affect.

  13. An integrative view of storage of low- and high-level visual dimensions in visual short-term memory.

    Science.gov (United States)

    Magen, Hagit

    2017-03-01

    Efficient performance in an environment filled with complex objects is often achieved through the temporal maintenance of conjunctions of features from multiple dimensions. The most striking finding in the study of binding in visual short-term memory (VSTM) is equal memory performance for single features and for integrated multi-feature objects, a finding that has been central to several theories of VSTM. Nevertheless, research on binding in VSTM focused almost exclusively on low-level features, and little is known about how items from low- and high-level visual dimensions (e.g., colored manmade objects) are maintained simultaneously in VSTM. The present study tested memory for combinations of low-level features and high-level representations. In agreement with previous findings, Experiments 1 and 2 showed decrements in memory performance when non-integrated low- and high-level stimuli were maintained simultaneously compared to maintaining each dimension in isolation. However, contrary to previous findings the results of Experiments 3 and 4 showed decrements in memory performance even when integrated objects of low- and high-level stimuli were maintained in memory, compared to maintaining single-dimension objects. Overall, the results demonstrate that low- and high-level visual dimensions compete for the same limited memory capacity, and offer a more comprehensive view of VSTM.

  14. Audio-Visual Integration Modifies Emotional Judgment in Music

    Directory of Open Access Journals (Sweden)

    Shen-Yuan Su

    2011-10-01

    Full Text Available The conventional view that perceived emotion in music is derived mainly from auditory signals has led to neglect of the contribution of visual image. In this study, we manipulated mode (major vs. minor) and examined the influence of a video image on emotional judgment in music. Melodies in either major or minor mode were controlled for tempo and rhythm and played to the participants. We found that Taiwanese participants, like Westerners, judged major melodies as expressing positive, and minor melodies negative, emotions. The major or minor melodies were then paired with video images of the singers, which were either emotionally congruent or incongruent with their modes. Results showed that participants perceived stronger positive or negative emotions with congruent audio-visual stimuli. Compared to listening to music alone, stronger emotions were perceived when an emotionally congruent video image was added and weaker emotions were perceived when an incongruent image was added. We therefore demonstrate that mode is important to perceive the emotional valence in music and that treating musical art as a purely auditory event might lose the enhanced emotional strength perceived in music, since going to a concert may lead to stronger perceived emotion than listening to the CD at home.

  15. Object integration requires attention: visual search for Kanizsa figures in parietal extinction

    OpenAIRE

    Gögler, N.; Finke, K.; Keller, I.; Muller, Hermann J.; Conci, M.

    2016-01-01

    The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective att...

  16. Visualizing Cloud Properties and Satellite Imagery: A Tool for Visualization and Information Integration

    Science.gov (United States)

    Chee, T.; Nguyen, L.; Smith, W. L., Jr.; Spangenberg, D.; Palikonda, R.; Bedka, K. M.; Minnis, P.; Thieman, M. M.; Nordeen, M.

    2017-12-01

    Providing public access to research products, including cloud macro- and microphysical properties and satellite imagery, is a key concern for the NASA Langley Research Center Cloud and Radiation Group. This work describes a web-based visualization tool and API that allow end users to easily create customized cloud product imagery, satellite imagery, ground site data, and satellite ground track information that is generated dynamically. The tool has two uses: one to visualize the dynamically created imagery, and the other to provide direct access to the dynamically generated imagery at a later time. Internally, we leverage our practical experience with large, scalable applications to develop a system with the greatest potential for scalability as well as the ability to be deployed on the cloud to accommodate scalability issues. We build upon the NASA Langley Cloud and Radiation Group's experience with making real-time and historical satellite cloud product information, satellite imagery, ground site data, and satellite track information accessible and easily searchable. This tool is the culmination of our prior experience with dynamic imagery generation and provides a way to build a "mash-up" of dynamically generated imagery and related kinds of information that are visualized together to add value to disparate but related information. In support of NASA strategic goals, our group aims to make as much scientific knowledge, observations, and products as possible available to the citizen science, research, and interested communities, as well as to automated systems that can acquire the same information for data mining or other analytic purposes. This tool and the underlying APIs provide a valuable research resource to a wide audience, both as a standalone research tool and as an easily accessed data source that can be mined or used with existing tools.
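
    The record does not document the tool's actual API, so the request below is a purely hypothetical sketch of how a dynamically generated image might be fetched over HTTP: the endpoint URL, parameter names, and product identifiers are all made-up placeholders, not the real service.

    ```python
    # Hypothetical sketch only: endpoint, parameters, and product names are assumptions,
    # not the real NASA Langley service.
    import requests

    BASE_URL = "https://example.cloud-imagery.invalid/api/imagery"  # placeholder URL

    params = {
        "product": "cloud_optical_depth",    # hypothetical product identifier
        "satellite": "GOES-16",              # hypothetical satellite selector
        "datetime": "2017-06-01T18:00:00Z",  # requested observation time
        "bbox": "-110,25,-80,45",            # lon/lat bounding box (W,S,E,N)
        "format": "png",
    }

    resp = requests.get(BASE_URL, params=params, timeout=30)
    resp.raise_for_status()

    with open("cloud_product.png", "wb") as f:
        f.write(resp.content)  # save the dynamically generated image for later use
    ```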

  17. Asymmetric Temporal Integration of Layer 4 and Layer 2/3 Inputs in Visual Cortex

    OpenAIRE

    Hang, Giao B.; Dan, Yang

    2010-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices...

  18. Visual Cycle Modulation as an Approach toward Preservation of Retinal Integrity

    OpenAIRE

    Bavik, Claes; Henry, Susan Hayes; Zhang, Yan; Mitts, Kyoko; McGinn, Tim; Budzynski, Ewa; Pashko, Andriy; Lieu, Kuo Lee; Zhong, Sheng; Blumberg, Bruce; Kuksa, Vladimir; Orme, Mark; Scott, Ian; Fawzi, Ahmad; Kubota, Ryo

    2015-01-01

    © 2015 Bavik et al. Increased exposure to blue or visible light, fluctuations in oxygen tension, and the excessive accumulation of toxic retinoid byproducts places a tremendous amount of stress on the retina. Reduction of visual chromophore biosynthesis may be an effective method to reduce the impact of these stressors and preserve retinal integrity. A class of non-retinoid, small molecule compounds that target key proteins of the visual cycle have been developed. The first candidate in this ...

  19. Four-dimensional Microscope-Integrated Optical Coherence Tomography to Visualize Suture Depth in Strabismus Surgery.

    Science.gov (United States)

    Pasricha, Neel D; Bhullar, Paramjit K; Shieh, Christine; Carrasco-Zevallos, Oscar M; Keller, Brenton; Izatt, Joseph A; Toth, Cynthia A; Freedman, Sharon F; Kuo, Anthony N

    2017-02-14

    The authors report the use of swept-source microscope-integrated optical coherence tomography (SS-MIOCT), capable of live four-dimensional (three-dimensional across time) intraoperative imaging, to directly visualize suture depth during lateral rectus resection. Key surgical steps visualized in this report included needle depth during partial and full-thickness muscle passes along with scleral passes. [J Pediatr Ophthalmol Strabismus. 2017;54:e1-e5.]. Copyright 2017, SLACK Incorporated.

  20. Integrated Visualization of Multi-sensor Ocean Data across the Web

    Science.gov (United States)

    Platt, F.; Thompson, C. K.; Roberts, J. T.; Tsontos, V. M.; Hin Lam, C.; Arms, S. C.; Quach, N.

    2017-12-01

    Whether for research or operational decision support, oceanographic applications rely on the visualization of multivariate in situ and remote sensing data as an integral part of analysis workflows. However, given their inherently 3D-spatial and temporally dynamic nature, the visual representation of marine in situ data in particular poses a challenge. The Oceanographic In situ data Interoperability Project (OIIP) is a collaborative project funded under the NASA/ACCESS program that seeks to leverage and enhance higher TRL (technology readiness level) informatics technologies to address key data interoperability and integration issues associated with in situ ocean data, including the dearth of effective web-based visualization solutions. Existing web tools for the visualization of key in situ data types - point, profile, trajectory series - are limited in their support for integrated, dynamic and coordinated views of the spatiotemporal characteristics of the data. Via the extension of the JPL Common Mapping Client (CMC) software framework, OIIP seeks to provide improved visualization support for oceanographic in situ data sets. More specifically, this entails improved representation of both horizontal and vertical aspects of these data, which inherently are depth resolved and time referenced, as well as the visual synchronization with relevant remotely-sensed gridded data products, such as sea surface temperature and salinity. Electronic tagging datasets, which are a focal use case for OIIP, provide a representative, if somewhat complex, visualization challenge in this regard. Critical to the achievement of these development objectives has been compilation of a well-rounded set of visualization use cases and requirements based on a series of end-user consultations aimed at understanding their satellite-in situ visualization needs. Here we summarize progress on aspects of the technical work and our approach.

  1. Robot vision language RVL/V: An integration scheme of visual processing and manipulator control

    International Nuclear Information System (INIS)

    Matsushita, T.; Sato, T.; Hirai, S.

    1984-01-01

    RVL/V is a robot vision language designed to write a program for visual processing and manipulator control of a hand-eye system. This paper describes the design of RVL/V and the current implementation of the system. Visual processing is performed on one-dimensional range data of the object surface. Model-based instructions execute object detection, measurement and view control. The hierarchy of visual data and processing is introduced to give RVL/V generality. A new scheme to integrate visual information and manipulator control is proposed. The effectiveness of the model-based visual processing scheme based on profile data is demonstrated by a hand-eye experiment

  2. A framework for interactive visual analysis of heterogeneous marine data in an integrated problem solving environment

    Science.gov (United States)

    Liu, Shuai; Chen, Ge; Yao, Shifeng; Tian, Fenglin; Liu, Wei

    2017-07-01

    This paper presents a novel integrated marine visualization framework which focuses on processing, analyzing the multi-dimension spatiotemporal marine data in one workflow. Effective marine data visualization is needed in terms of extracting useful patterns, recognizing changes, and understanding physical processes in oceanography researches. However, the multi-source, multi-format, multi-dimension characteristics of marine data pose a challenge for interactive and feasible (timely) marine data analysis and visualization in one workflow. And, global multi-resolution virtual terrain environment is also needed to give oceanographers and the public a real geographic background reference and to help them to identify the geographical variation of ocean phenomena. This paper introduces a data integration and processing method to efficiently visualize and analyze the heterogeneous marine data. Based on the data we processed, several GPU-based visualization methods are explored to interactively demonstrate marine data. GPU-tessellated global terrain rendering using ETOPO1 data is realized and the video memory usage is controlled to ensure high efficiency. A modified ray-casting algorithm for the uneven multi-section Argo volume data is also presented and the transfer function is designed to analyze the 3D structure of ocean phenomena. Based on the framework we designed, an integrated visualization system is realized. The effectiveness and efficiency of the framework is demonstrated. This system is expected to make a significant contribution to the demonstration and understanding of marine physical process in a virtual global environment.
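
    The paper's modified ray-casting algorithm and transfer function are not reproduced in this record; the snippet below is only a generic, minimal sketch of front-to-back compositing along one ray through a scalar volume, with an illustrative transfer function standing in for the authors' design.

    ```python
    import numpy as np

    def transfer_function(value):
        """Illustrative transfer function: map a normalized scalar (0..1)
        to an RGB color and an opacity. Not the authors' design."""
        rgb = np.array([value, 0.2, 1.0 - value])   # warm-to-cool ramp
        alpha = value ** 2 * 0.1                    # emphasize high values
        return rgb, alpha

    def raycast(samples):
        """Front-to-back compositing of scalar samples along one ray."""
        color = np.zeros(3)
        alpha_acc = 0.0
        for v in samples:
            rgb, a = transfer_function(v)
            color += (1.0 - alpha_acc) * a * rgb    # accumulate premultiplied color
            alpha_acc += (1.0 - alpha_acc) * a      # accumulate opacity
            if alpha_acc > 0.99:                    # early ray termination
                break
        return color, alpha_acc

    # Example: samples interpolated along a ray through an Argo-like scalar field
    samples = np.clip(np.sin(np.linspace(0, np.pi, 64)), 0, 1)
    print(raycast(samples))
    ```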

  3. Comparison of Syllabi and Inclusion of Recommendations for Interdisciplinary Integration of Visual Arts Contents

    Directory of Open Access Journals (Sweden)

    Eda Birsa

    2017-09-01

    We applied qualitative analysis to the syllabi of all subjects from the 1st up to the 5th grade of basic school in Slovenia in order to find out in what ways they contain recommendations for interdisciplinary integration. We classified them into three categories: references to subjects, implicit references, and explicit references. The classification into these categories has shown that certain concepts foreseen for integration with visual arts education in individual subjects for a certain grade or for a particular educational cycle cannot be found in the visual arts syllabus.

  4. Visual integration dysfunction in schizophrenia arises by the first psychotic episode and worsens with illness duration

    OpenAIRE

    Keane, Brian P.; Paterno, Danielle; Kastner, Sabine; Silverstein, Steven M.

    2016-01-01

    Visual integration dysfunction characterizes schizophrenia, but prior studies have not yet established whether the problem arises by the first psychotic episode or worsens with illness duration. To investigate the issue, we compared chronic schizophrenia patients (SZs), first episode psychosis patients (FEs), and well-matched healthy controls on a brief but sensitive psychophysical task in which subjects attempted to locate an integrated shape embedded in noise. Task difficulty depended on th...

  5. Visual integration enhances associative memory equally for young and older adults without reducing hippocampal encoding activation.

    Science.gov (United States)

    Memel, Molly; Ryan, Lee

    2017-06-01

    The ability to remember associations between previously unrelated pieces of information is often impaired in older adults (Naveh-Benjamin, 2000). Unitization, the process of creating a perceptually or semantically integrated representation that includes both items in an associative pair, attenuates age-related associative deficits (Bastin et al., 2013; Ahmad et al., 2015; Zheng et al., 2015). Compared to non-unitized pairs, unitized pairs may rely less on hippocampally-mediated binding associated with recollection, and more on familiarity-based processes mediated by perirhinal cortex (PRC) and parahippocampal cortex (PHC). While unitization of verbal materials improves associative memory in older adults, less is known about the impact of visual integration. The present study determined whether visual integration improves associative memory in older adults by minimizing the need for hippocampal (HC) recruitment and shifting encoding to non-hippocampal medial temporal structures, such as the PRC and PHC. Young and older adults were presented with a series of objects paired with naturalistic scenes while undergoing fMRI scanning, and were later given an associative memory test. Visual integration was varied by presenting the object either next to the scene (Separated condition) or visually integrated within the scene (Combined condition). Visual integration improved associative memory among young and older adults to a similar degree by increasing the hit rate for intact pairs, but without increasing false alarms for recombined pairs, suggesting enhanced recollection rather than increased reliance on familiarity. Also contrary to expectations, visual integration resulted in increased hippocampal activation in both age groups, along with increases in PRC and PHC activation. Activation in all three MTL regions predicted discrimination performance during the Separated condition in young adults, while only a marginal relationship between PRC activation and performance was

  6. The working memory Ponzo illusion: Involuntary integration of visuospatial information stored in visual working memory.

    Science.gov (United States)

    Shen, Mowei; Xu, Haokui; Zhang, Haihang; Shui, Rende; Zhang, Meng; Zhou, Jifan

    2015-08-01

    Visual working memory (VWM) has been traditionally viewed as a mental structure subsequent to visual perception that stores the final output of perceptual processing. However, VWM has recently been emphasized as a critical component of online perception, providing storage for the intermediate perceptual representations produced during visual processing. This interactive view holds the core assumption that VWM is not the terminus of perceptual processing; rather, the stored visual information continues to undergo perceptual processing if necessary. The current study tests this assumption, demonstrating an example of involuntary integration of the VWM content, by creating the Ponzo illusion in VWM: when the Ponzo illusion figure was divided into its individual components and sequentially encoded into VWM, the temporally separated components were involuntarily integrated, leading to the distorted length perception of the two horizontal lines. This VWM Ponzo illusion was replicated when the figure components were presented in different combinations and presentation order. The magnitude of the illusion was significantly correlated between VWM and perceptual versions of the Ponzo illusion. These results suggest that the information integration underlying the VWM Ponzo illusion is constrained by the laws of visual perception and similarly affected by the common individual factors that govern its perception. Thus, our findings provide compelling evidence that VWM functions as a buffer serving perceptual processes at early stages. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Visual Criterion for Understanding the Notion of Convergence of Integrals in One Parameter

    Science.gov (United States)

    Alves, Francisco Regis Vieira

    2014-01-01

    Admittedly, the notion of generalized (improper) integrals depending on one parameter plays a fundamental role. In view of that, in this paper we discuss and characterize an approach to promote the visualization of this mathematical concept. We also indicate the possibilities of graphical interpretation of formal properties related to the notion of…
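
    The paper's own worked examples are not given in the record; as a minimal numerical illustration of the underlying idea, the sketch below evaluates truncated integrals of x^(-p) on [1, b] for growing b, showing convergence to 1/(p-1) when the parameter p exceeds 1 and divergence otherwise. The choice of integrand is an assumption made only for illustration.

    ```python
    import numpy as np
    from scipy.integrate import quad

    def tail_integral(p, upper):
        """Numerically evaluate the truncated integral of x**(-p) from 1 to `upper`."""
        val, _ = quad(lambda x: x ** (-p), 1.0, upper)
        return val

    # As the upper limit grows, the truncated integrals approach 1/(p-1) when p > 1
    # and grow without bound when p <= 1, which is what a visual criterion would show.
    for p in (0.8, 1.0, 1.5, 2.0):
        values = [tail_integral(p, u) for u in (10, 100, 1000, 10000)]
        print(f"p = {p}: " + ", ".join(f"{v:8.3f}" for v in values))
    ```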

  8. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    Science.gov (United States)

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (strongly tightened with the other teaching material), within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  9. Visual-Motor Integration in Children with Prader-Willi Syndrome

    Science.gov (United States)

    Lo, S. T.; Collin, P. J. L.; Hokken-Koelega, A. C. S.

    2015-01-01

    Background: Prader-Willi syndrome (PWS) is characterised by hypotonia, hypogonadism, short stature, obesity, behavioural problems, intellectual disability, and delay in language, social and motor development. There is very limited knowledge about visual-motor integration in children with PWS. Method: Seventy-three children with PWS aged 7-17 years…

  10. Object representation in the bottlenose dolphin (Tursiops truncatus): integration of visual and echoic information.

    Science.gov (United States)

    Harley, H E; Roitblat, H L; Nachtigall, P E

    1996-04-01

    A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic, both vision and echolocation; visual, vision only; echoic, echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of sample and alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.

  11. Integration of Visual and Proprioceptive Limb Position Information in Human Posterior Parietal, Premotor, and Extrastriate Cortex.

    Science.gov (United States)

    Limanowski, Jakub; Blankenburg, Felix

    2016-03-02

    The brain constructs a flexible representation of the body from multisensory information. Previous work on monkeys suggests that the posterior parietal cortex (PPC) and ventral premotor cortex (PMv) represent the position of the upper limbs based on visual and proprioceptive information. Human experiments on the rubber hand illusion implicate similar regions, but since such experiments rely on additional visuo-tactile interactions, they cannot isolate visuo-proprioceptive integration. Here, we independently manipulated the position (palm or back facing) of passive human participants' unseen arm and of a photorealistic virtual 3D arm. Functional magnetic resonance imaging (fMRI) revealed that matching visual and proprioceptive information about arm position engaged the PPC, PMv, and the body-selective extrastriate body area (EBA); activity in the PMv moreover reflected interindividual differences in congruent arm ownership. Further, the PPC, PMv, and EBA increased their coupling with the primary visual cortex during congruent visuo-proprioceptive position information. These results suggest that human PPC, PMv, and EBA evaluate visual and proprioceptive position information and, under sufficient cross-modal congruence, integrate it into a multisensory representation of the upper limb in space. The position of our limbs in space constantly changes, yet the brain manages to represent limb position accurately by combining information from vision and proprioception. Electrophysiological recordings in monkeys have revealed neurons in the posterior parietal and premotor cortices that seem to implement and update such a multisensory limb representation, but this has been difficult to demonstrate in humans. Our fMRI experiment shows that human posterior parietal, premotor, and body-selective visual brain areas respond preferentially to a virtual arm seen in a position corresponding to one's unseen hidden arm, while increasing their communication with regions conveying visual

  12. MONGKIE: an integrated tool for network analysis and visualization for multi-omics data.

    Science.gov (United States)

    Jang, Yeongjun; Yu, Namhee; Seo, Jihae; Kim, Sun; Lee, Sanghyuk

    2016-03-18

    Network-based integrative analysis is a powerful technique for extracting biological insights from multilayered omics data such as somatic mutations, copy number variations, and gene expression data. However, integrated analysis of multi-omics data is quite complicated and can hardly be done in an automated way. Thus, a powerful interactive visual mining tool supporting diverse analysis algorithms for identification of driver genes and regulatory modules is much needed. Here, we present a software platform that integrates network visualization with omics data analysis tools seamlessly. The visualization unit supports various options for displaying multi-omics data as well as unique network models for describing sophisticated biological networks such as complex biomolecular reactions. In addition, we implemented diverse in-house algorithms for network analysis including network clustering and over-representation analysis. Novel functions include facile definition and optimized visualization of subgroups, comparison of a series of data sets in an identical network by data-to-visual mapping and subsequent overlaying function, and management of custom interaction networks. Utility of MONGKIE for network-based visual data mining of multi-omics data was demonstrated by analysis of the TCGA glioblastoma data. MONGKIE was developed in Java based on the NetBeans plugin architecture, thus being OS-independent with intrinsic support of module extension by third-party developers. We believe that MONGKIE would be a valuable addition to network analysis software by supporting many unique features and visualization options, especially for analysing multi-omics data sets in cancer and other diseases.
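
    MONGKIE's own implementation is not shown in the record; as a minimal sketch of the over-representation analysis it mentions, the following computes a hypergeometric enrichment p-value for a gene set against a candidate gene list, with all counts invented for illustration.

    ```python
    from scipy.stats import hypergeom

    # Illustrative numbers only (not from the MONGKIE paper):
    M = 20000   # genes in the background (e.g., all annotated genes)
    K = 150     # genes annotated to the pathway of interest
    n = 300     # genes in the candidate list (e.g., a network cluster)
    k = 12      # candidate genes that fall in the pathway

    # P(X >= k): probability of observing at least k pathway genes by chance
    p_value = hypergeom.sf(k - 1, M, K, n)
    print(f"over-representation p-value = {p_value:.3e}")
    ```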

  13. A Dynamic Bayesian Observer Model Reveals Origins of Bias in Visual Path Integration.

    Science.gov (United States)

    Lakshminarasimhan, Kaushik J; Petsalis, Marina; Park, Hyeshin; DeAngelis, Gregory C; Pitkow, Xaq; Angelaki, Dora E

    2018-06-20

    Path integration is a strategy by which animals track their position by integrating their self-motion velocity. To identify the computational origins of bias in visual path integration, we asked human subjects to navigate in a virtual environment using optic flow and found that they generally traveled beyond the goal location. Such a behavior could stem from leaky integration of unbiased self-motion velocity estimates or from a prior expectation favoring slower speeds that causes velocity underestimation. Testing both alternatives using a probabilistic framework that maximizes expected reward, we found that subjects' biases were better explained by a slow-speed prior than imperfect integration. When subjects integrate paths over long periods, this framework intriguingly predicts a distance-dependent bias reversal due to buildup of uncertainty, which we also confirmed experimentally. These results suggest that visual path integration in noisy environments is limited largely by biases in processing optic flow rather than by leaky integration. Copyright © 2018 Elsevier Inc. All rights reserved.
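
    The dynamic Bayesian observer model itself is not reproduced here; under assumed parameter values, the sketch below simply contrasts the two candidate explanations named in the abstract: leaky integration of veridical optic-flow velocity versus perfect integration of velocity scaled down by a slow-speed prior.

    ```python
    import numpy as np

    def travelled_distance(velocity, dt, leak=0.0, gain=1.0):
        """Integrate a velocity signal with an optional leak and multiplicative gain.
        leak > 0 models leaky integration; gain < 1 models velocity underestimation
        (e.g., from a prior favoring slow speeds). All parameter values are assumptions."""
        x = 0.0
        for v in velocity:
            x += (gain * v - leak * x) * dt
        return x

    dt = 0.05                       # time step (s)
    t = np.arange(0, 8, dt)         # 8 s of simulated self-motion
    v = np.full_like(t, 2.0)        # constant 2 m/s optic-flow speed

    true_dist = travelled_distance(v, dt)                  # 16 m actually travelled
    leaky = travelled_distance(v, dt, leak=0.1)            # estimate falls short of 16 m
    slow_prior = travelled_distance(v, dt, gain=0.8)       # estimate also falls short
    # Either shortfall in the internal estimate would lead a subject to travel past the goal.
    print(true_dist, leaky, slow_prior)
    ```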

  14. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimuli discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the alterative pattern was similar to that for younger adults with the expansion of SOA; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that the response for older adults was slowed and provided empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  15. FACILITATING INTEGRATED SPATIO-TEMPORAL VISUALIZATION AND ANALYSIS OF HETEROGENEOUS ARCHAEOLOGICAL AND PALAEOENVIRONMENTAL RESEARCH DATA

    Directory of Open Access Journals (Sweden)

    C. Willmes

    2012-07-01

    In the context of the Collaborative Research Centre 806 "Our way to Europe" (CRC806), a research database is being developed for integrating data from the disciplines of archaeology, the geosciences and the cultural sciences to facilitate integrated access to heterogeneous data sources. A practice-oriented data integration concept and its implementation are presented in this contribution. The data integration approach is based on the application of Semantic Web technology and is applied to the domains of archaeological and palaeoenvironmental data. The aim is to provide integrated spatio-temporal access to an existing wealth of data to facilitate research on the integrated data basis. For the web portal of the CRC806 research database (CRC806-Database), a number of interfaces and applications have been evaluated, developed and implemented for exposing the data to interactive analysis and visualizations.

  16. Causes of blindness and visual impairment among students in integrated schools for the blind in Nepal.

    Science.gov (United States)

    Shrestha, Jyoti Baba; Gnyawali, Subodh; Upadhyay, Madan Prasad

    2012-12-01

    To identify the causes of blindness and visual impairment among students in integrated schools for the blind in Nepal. A total of 778 students from all 67 integrated schools for the blind in Nepal were examined using the World Health Organization/Prevention of Blindness Eye Examination Record for Children with Blindness and Low Vision during the study period of 3 years. Among 831 students enrolled in the schools, 778 (93.6%) participated in the study. Mean age of students examined was 13.7 years, and the male to female ratio was 1.4:1. Among the students examined, 85.9% were blind, 10% had severe visual impairment and 4.1% were visually impaired. The cornea (22.8%) was the most common anatomical site of visual impairment, its most frequent cause being vitamin A deficiency, followed by the retina (18.4%) and lens (17.6%). Hereditary and childhood factors were responsible for visual loss in 27.9% and 22.0% of students, respectively. Etiology could not be determined in 46% of cases. Overall, 40.9% of students had avoidable causes of visual loss. Vision could be improved to a level better than 6/60 in 3.6% of students refracted. More than one third of students were visually impaired for potentially avoidable reasons, indicating lack of eye health awareness and eye care services in the community. The cause of visual impairment remained unknown in a large number of students, which indicates the need for introduction of modern diagnostic tools.

  17. The Effect of a Computerized Visual Perception and Visual-Motor Integration Training Program on Improving Chinese Handwriting of Children with Handwriting Difficulties

    Science.gov (United States)

    Poon, K. W.; Li-Tsang, C. W .P.; Weiss, T. P. L.; Rosenblum, S.

    2010-01-01

    This study aimed to investigate the effect of a computerized visual perception and visual-motor integration training program to enhance Chinese handwriting performance among children with learning difficulties, particularly those with handwriting problems. Participants were 26 primary-one children who were assessed by educational psychologists and…

  18. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    Energy Technology Data Exchange (ETDEWEB)

    Data Analysis and Visualization (IDAV) and the Department of Computer Science, University of California, Davis, One Shields Avenue, Davis, CA 95616, USA; International Research Training Group "Visualization of Large and Unstructured Data Sets", University of Kaiserslautern, Germany; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Genomics Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Life Sciences Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Computer Science Division, University of California, Berkeley, CA, USA; Computer Science Department, University of California, Irvine, CA, USA; All authors are with the Berkeley Drosophila Transcription Network Project, Lawrence Berkeley National Laboratory; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Biggin, Mark D.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; Keranen, Soile V. E.; Eisen, Michael B.; Knowles, David W.; Malik, Jitendra; Hagen, Hans; Hamann, Bernd

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
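
    The framework's clustering pipeline is not detailed in this record; the sketch below only illustrates the general workflow of clustering expression profiles with k-means and comparing several values of k, using synthetic data in place of the 3D gene expression measurements.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in for per-cell expression of a few genes (rows = cells)
    expression = np.vstack([
        rng.normal(loc=0.2, scale=0.05, size=(300, 4)),
        rng.normal(loc=0.6, scale=0.05, size=(300, 4)),
        rng.normal(loc=0.9, scale=0.05, size=(300, 4)),
    ])

    # Evaluate several candidate values of k, as a user-guided workflow might do
    for k in range(2, 6):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(expression)
        score = silhouette_score(expression, labels)
        print(f"k = {k}: silhouette = {score:.3f}")
    ```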

  19. The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data

    Science.gov (United States)

    Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris

    2010-05-01

    Data integration is increasingly important as we strive to combine data from disparate sources and assemble better models of the complex processes operating at the Earth's surface and within its interior. These data are often large, multi-dimensional, and subject to differing conventions for data structures, file formats, coordinate spaces, and units of measure. When visualized, these data require differing, and sometimes conflicting, conventions for visual representations, dimensionality, symbology, and interaction. All of this makes the visualization of integrated Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data integration and visualization suite of applications and libraries being developed by the GEON project at the University of California, San Diego, USA. Funded by the NSF, the project is leveraging virtual globe technology from NASA's WorldWind to create interactive 3D visualization tools that combine and layer data from a wide variety of sources to create a holistic view of features at, above, and beneath the Earth's surface. The OEF architecture is open, cross-platform, modular, and based upon Java. The OEF's modular approach to software architecture yields an array of mix-and-match software components for assembling custom applications. Available modules support file format handling, web service communications, data management, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats used in the field. Each one imports data into a general-purpose common data model supporting multidimensional regular and irregular grids, topography, feature geometry, and more. Data within these data models may be manipulated, combined, reprojected, and visualized. The OEF's visualization features support a variety of conventional and new visualization techniques for looking at topography, tomography, point clouds, imagery, maps, and feature geometry. 3D data such as

  20. Integrative Genomics Viewer (IGV): high-performance genomics data visualization and exploration.

    Science.gov (United States)

    Thorvaldsdóttir, Helga; Robinson, James T; Mesirov, Jill P

    2013-03-01

    Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today's sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license.

  1. Visualizing Volume to Help Students Understand the Disk Method on Calculus Integral Course

    Science.gov (United States)

    Tasman, F.; Ahmad, D.

    2018-04-01

    Much research has shown that students have difficulty in understanding the concepts of integral calculus. Therefore, this research designed a classroom activity, following the design research method, to assist students in understanding the integral concept, especially in calculating the volume of solids of revolution using the disc method. In order to support students' development in understanding integral concepts, this research uses a realistic mathematics approach by integrating GeoGebra software. First-year university students taking a calculus course (approximately 30 people) were chosen to implement the classroom activity that had been designed. The results of the retrospective analysis show that visualizing the volume of solids of revolution using GeoGebra software can assist students in understanding the disc method as one way of calculating the volume of a solid of revolution.
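
    The activity's GeoGebra worksheets are not included in the record; as a minimal numerical sketch of the disc method itself, V = pi * integral of f(x)^2 dx, the snippet below checks the approximation against the known volume of a sphere obtained by rotating f(x) = sqrt(r^2 - x^2) about the x-axis.

    ```python
    import numpy as np

    def disc_method_volume(f, a, b, n=100_000):
        """Approximate the volume of revolution about the x-axis:
        V = pi * integral of f(x)^2 dx, via the midpoint rule with n discs."""
        x = np.linspace(a, b, n + 1)
        mid = 0.5 * (x[:-1] + x[1:])          # midpoint of each slab
        dx = (b - a) / n
        return np.pi * np.sum(f(mid) ** 2) * dx

    r = 2.0
    sphere = disc_method_volume(lambda x: np.sqrt(np.maximum(r**2 - x**2, 0.0)), -r, r)
    print(sphere, 4 / 3 * np.pi * r**3)       # numerical vs. exact volume of a sphere
    ```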

  2. FISH Oracle 2: a web server for integrative visualization of genomic data in cancer research.

    Science.gov (United States)

    Mader, Malte; Simon, Ronald; Kurtz, Stefan

    2014-03-31

    A comprehensive view on all relevant genomic data is instrumental for understanding the complex patterns of molecular alterations typically found in cancer cells. One of the most effective ways to rapidly obtain an overview of genomic alterations in large amounts of genomic data is the integrative visualization of genomic events. We developed FISH Oracle 2, a web server for the interactive visualization of different kinds of downstream-processed genomics data typically available in cancer research. A powerful search interface and a fast visualization engine provide a highly interactive visualization of such data. High-quality image export enables life scientists to easily communicate their results. A comprehensive data administration component allows users to keep track of the available data sets. We applied FISH Oracle 2 to published data and found evidence that, in colorectal cancer cells, the gene TTC28 may be inactivated in two different ways, a fact that has not been published before. The interactive nature of FISH Oracle 2 and the possibility to store, select and visualize large amounts of downstream-processed data support life scientists in generating hypotheses. The export of high-quality images supports explanatory data visualization, simplifying the communication of new biological findings. A FISH Oracle 2 demo server and the software are available at http://www.zbh.uni-hamburg.de/fishoracle.

  3. VarB Plus: An Integrated Tool for Visualization of Genome Variation Datasets

    KAUST Repository

    Hidayah, Lailatul

    2012-07-01

    Research on genomic sequences has been improving significantly as more advanced technology for sequencing has been developed. This opens enormous opportunities for sequence analysis. Various analytical tools have been built for purposes such as sequence assembly, read alignments, genome browsing, comparative genomics, and visualization. From the visualization perspective, there is an increasing trend towards use of large-scale computation. However, more than power is required to produce an informative image. This is a challenge that we address by providing several ways of representing biological data in order to advance the inference endeavors of biologists. This thesis focuses on visualization of variations found in genomic sequences. We develop several visualization functions and embed them in an existing variation visualization tool as extensions. The tool we improved is named VarB, hence the nomenclature for our enhancement is VarB Plus. To the best of our knowledge, besides VarB, there is no tool that provides the capability of dynamic visualization of genome variation datasets as well as statistical analysis. Dynamic visualization allows users to toggle different parameters on and off and see the results on the fly. The statistical analysis includes Fixation Index, Relative Variant Density, and Tajima’s D. Hence we focused our efforts on this tool. The scope of our work includes plots of per-base genome coverage, Principal Coordinate Analysis (PCoA), integration with a read alignment viewer named LookSeq, and visualization of geo-biological data. In addition to description of embedded functionalities, significance, and limitations, future improvements are discussed. The result is four extensions embedded successfully in the original tool, which is built on the Qt framework in C++. Hence it is portable to numerous platforms. Our extensions have shown acceptable execution time in a beta testing with various high-volume published datasets, as well as positive
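
    The thesis's exact statistical implementation is not reproduced here; as a minimal sketch of one statistic it names, the Fixation Index, the snippet below computes a simple two-population FST from allele frequencies as (HT - HS)/HT, using invented frequencies.

    ```python
    import numpy as np

    def fst_two_populations(p1, p2):
        """Simple two-population fixation index for a biallelic site:
        FST = (HT - HS) / HT, assuming equal subpopulation sizes."""
        p_bar = 0.5 * (p1 + p2)
        h_t = 2.0 * p_bar * (1.0 - p_bar)                        # pooled expected heterozygosity
        h_s = np.mean([2 * p1 * (1 - p1), 2 * p2 * (1 - p2)])    # mean within-population heterozygosity
        return (h_t - h_s) / h_t if h_t > 0 else 0.0

    # Invented allele frequencies at three variant sites
    for p1, p2 in [(0.5, 0.5), (0.7, 0.3), (0.95, 0.05)]:
        print(p1, p2, round(fst_two_populations(p1, p2), 3))
    ```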

  4. Attention and Visual Motor Integration in Young Children with Uncorrected Hyperopia.

    Science.gov (United States)

    Kulp, Marjean Taylor; Ciner, Elise; Maguire, Maureen; Pistilli, Maxwell; Candy, T Rowan; Ying, Gui-Shuang; Quinn, Graham; Cyert, Lynn; Moore, Bruce

    2017-10-01

    Among 4- and 5-year-old children, deficits in measures of attention, visual-motor integration (VMI) and visual perception (VP) are associated with moderate, uncorrected hyperopia (3 to 6 diopters [D]) accompanied by reduced near visual function (near visual acuity worse than 20/40 or stereoacuity worse than 240 seconds of arc). To compare attention, visual motor, and visual perceptual skills in uncorrected hyperopes and emmetropes attending preschool or kindergarten and evaluate their associations with visual function. Participants were 4 and 5 years of age with either hyperopia (≥3 to ≤6 D, astigmatism ≤1.5 D, anisometropia ≤1 D) or emmetropia (hyperopia ≤1 D; astigmatism, anisometropia, and myopia each attention (sustained, receptive, and expressive), VMI, and VP. Binocular visual acuity, stereoacuity, and accommodative accuracy were also assessed at near. Analyses were adjusted for age, sex, race/ethnicity, and parent's/caregiver's education. Two hundred forty-four hyperopes (mean, +3.8 ± [SD] 0.8 D) and 248 emmetropes (+0.5 ± 0.5 D) completed testing. Mean sustained attention score was worse in hyperopes compared with emmetropes (mean difference, -4.1; P Attention score was worse in 4 to 6 D hyperopes compared with emmetropes (by -2.6, P = .01). Hyperopes with reduced near visual acuity (20/40 or worse) had worse scores than emmetropes (-6.4, P attention; -3.0, P = .004 for Receptive Attention; -0.7, P = .006 for VMI; -1.3, P = .008 for VP). Hyperopes with stereoacuity of 240 seconds of arc or worse scored significantly worse than emmetropes (-6.7, P attention; -3.4, P = .03 for Expressive Attention; -2.2, P = .03 for Receptive Attention; -0.7, P = .01 for VMI; -1.7, P visual function generally performed similarly to emmetropes. Moderately hyperopic children were found to have deficits in measures of attention. Hyperopic children with reduced near visual function also had lower scores on VMI and VP than emmetropic children.

  5. Object integration requires attention: Visual search for Kanizsa figures in parietal extinction.

    Science.gov (United States)

    Gögler, Nadine; Finke, Kathrin; Keller, Ingo; Müller, Hermann J; Conci, Markus

    2016-11-01

    The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective attention. Here, we combined these two approaches to investigate object integration in visual search in a group of five patients with left-sided parietal extinction. Our search paradigm was designed to assess the effect of left- and right-grouped nontargets on detecting a Kanizsa target square. The results revealed comparable reaction time (RT) performance in patients and controls when they were presented with displays consisting of a single to-be-grouped item that had to be classified as target vs. nontarget. However, when display size increased to two items, patients showed an extinction-specific pattern of enhanced RT costs for nontargets that induced a partial shape grouping on the right, i.e., in the attended hemifield (relative to the ungrouped baseline). Together, these findings demonstrate a competitive advantage for right-grouped objects, which in turn indicates that in parietal extinction, attentional competition between objects particularly limits integration processes in the contralesional, i.e., left hemifield. These findings imply a crucial contribution of selective attentional resources to visual object integration. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Higher integrity of the motor and visual pathways in long-term video game players.

    Science.gov (United States)

    Zhang, Yang; Du, Guijin; Yang, Yongxin; Qin, Wen; Li, Xiaodong; Zhang, Quan

    2015-01-01

    Long-term video game players (VGPs) exhibit superior visual and motor skills compared with non-video game control subjects (NVGCs). However, the neural basis underlying the enhanced behavioral performance remains largely unknown. To clarify this issue, the present study compared white matter integrity within the corticospinal tracts (CST), the superior longitudinal fasciculus (SLF), the inferior longitudinal fasciculus (ILF), and the inferior fronto-occipital fasciculus (IFOF) between the VGPs and the NVGCs using diffusion tensor imaging. Compared with the NVGCs, voxel-wise comparisons revealed significantly higher fractional anisotropy (FA) values in some regions within the left CST, left SLF, bilateral ILF, and IFOF in VGPs. Furthermore, higher FA values in the left CST at the level of the cerebral peduncle predicted a faster response in visual attention tasks. These results suggest that higher white matter integrity in the motor and higher-tier visual pathways is associated with long-term video game playing, which may contribute to the understanding of how video game play influences motor and visual performance.
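
    The study's DTI processing pipeline is not described in the record; the snippet below only illustrates the standard definition of fractional anisotropy computed from the three eigenvalues of a diffusion tensor, with illustrative eigenvalues.

    ```python
    import numpy as np

    def fractional_anisotropy(eigenvalues):
        """Standard FA definition from the three eigenvalues of a diffusion tensor."""
        lam = np.asarray(eigenvalues, dtype=float)
        mean_lam = lam.mean()
        num = np.sqrt(1.5 * np.sum((lam - mean_lam) ** 2))
        den = np.sqrt(np.sum(lam ** 2))
        return num / den if den > 0 else 0.0

    # Illustrative eigenvalues (units of 10^-3 mm^2/s), not values from the study
    print(fractional_anisotropy([1.7, 0.3, 0.3]))    # high FA: coherent white matter
    print(fractional_anisotropy([0.9, 0.8, 0.85]))   # low FA: nearly isotropic diffusion
    ```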

  7. The effect of integration masking on visual processing in perceptual categorization.

    Science.gov (United States)

    Hélie, Sébastien

    2017-08-01

    Learning to recognize and categorize objects is an essential cognitive skill allowing animals to function in the world. However, animals rarely have access to a canonical view of an object in an uncluttered environment. Hence, it is essential to study categorization under noisy, degraded conditions. In this article, we explore how the brain processes categorization stimuli in low signal-to-noise conditions using multivariate pattern analysis. We used an integration masking paradigm with mask opacity of 50%, 60%, and 70% inside a magnetic resonance imaging scanner. The results show that mask opacity affects blood-oxygen-level dependent (BOLD) signal in visual processing areas (V1, V2, V3, and V4) but does not affect the BOLD signal in brain areas traditionally associated with categorization (prefrontal cortex, striatum, hippocampus). This suggests that when a stimulus is difficult to extract from its background (e.g., low signal-to-noise ratio), the visual system extracts the stimulus and that activity in areas typically associated with categorization are not affected by the difficulty level of the visual conditions. We conclude with implications of this result for research on visual attention, categorization, and the integration of these fields. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Food recognition and recipe analysis: integrating visual content, context and external knowledge

    OpenAIRE

    Herranz, Luis; Min, Weiqing; Jiang, Shuqiang

    2018-01-01

    The central role of food in our individual and social life, combined with recent technological advances, has motivated a growing interest in applications that help to better monitor dietary habits as well as the exploration and retrieval of food-related information. We review how visual content, context and external knowledge can be integrated effectively into food-oriented applications, with special focus on recipe analysis and retrieval, food recommendation, and the restaurant context as em...

  9. Impairments in part-whole representations of objects in two cases of integrative visual agnosia.

    Science.gov (United States)

    Behrmann, Marlene; Williams, Pepper

    2007-10-01

    How complex multipart visual objects are represented perceptually remains a subject of ongoing investigation. One source of evidence that has been used to shed light on this issue comes from the study of individuals who fail to integrate disparate parts of visual objects. This study reports a series of experiments that examine the ability of two such patients with this form of agnosia (integrative agnosia; IA), S.M. and C.R., to discriminate and categorize exemplars of a rich set of novel objects, "Fribbles", whose visual similarity (number of shared parts) and category membership (shared overall shape) can be manipulated. Both patients performed increasingly poorly as the number of parts required for differentiating one Fribble from another increased. Both patients were also impaired at determining when two Fribbles belonged in the same category, a process that relies on abstracting spatial relations between parts. C.R., the less impaired of the two, but not S.M., eventually learned to categorize the Fribbles but required substantially more training than normal perceivers. S.M.'s failure is not attributable to a problem in learning to use a label for identification nor is it obviously attributable to a visual memory deficit. Rather, the findings indicate that, although the patients may be able to represent a small number of parts independently, in order to represent multipart images, the parts need to be integrated or chunked into a coherent whole. It is this integrative process that is impaired in IA and appears to play a critical role in the normal object recognition of complex images.

  10. Ontology-driven data integration and visualization for exploring regional geologic time and paleontological information

    Science.gov (United States)

    Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo

    2018-06-01

    Initiatives of open data promote the online publication and sharing of large amounts of geologic data. How to retrieve information and discover knowledge from the big data is an ongoing challenge. In this paper, we developed an ontology-driven data integration and visualization pilot system for exploring information of regional geologic time, paleontology, and fundamental geology. The pilot system (http://www2.cs.uidaho.edu/%7Emax/gts/)

  11. Helping To Integrate The Visually Challenged Into Mainstream Society Through A Low-Cost Braille Device

    Directory of Open Access Journals (Sweden)

    Desirée Jordan

    2013-06-01

    The visually challenged are often alienated from mainstream society because of their disabilities. This problem is even more pronounced in developing countries, which often do not have the resources necessary to integrate this group into their communities or even help them become independent. It should therefore be the aim of governments in developing countries to provide this vulnerable group with access to assistive technologies at a low cost. This paper describes an ongoing project that aims to provide low-cost assistive technologies to the visually challenged in Barbados. As part of this project, a study was conducted on a sample of visually challenged members of the Barbados Association for the Blind and Deaf to determine their ICT skills, knowledge of Braille, and use of assistive technologies. An analysis of the results prompted the design and creation of a low-cost Braille device prototype. The cost of this prototype was about one-half that of a commercially available device, and it can be used without a screen reader. This device should help create equal opportunities for the visually challenged in Barbados and other developing countries. It should also allow the visually challenged to become more independent.

  12. Integration of multidisciplinary technologies for real time target visualization and verification for radiotherapy.

    Science.gov (United States)

    Chang, Wen-Chung; Chen, Chin-Sheng; Tai, Hung-Chi; Liu, Chia-Yuan; Chen, Yu-Jen

    2014-01-01

    The current practice of radiotherapy examines target coverage solely from the digitally reconstructed beam's eye view (BEV), in a way that is only indirectly accessible and not in real time. We aimed to visualize treatment targets in real time from each BEV. Image data of a phantom or patients from ultrasound (US) and computed tomography (CT) scans were captured to perform image registration. We integrated US, CT, US/CT image registration, robotic manipulation of the US probe, a radiation treatment planning system, and a linear accelerator to constitute an innovative target visualization system. The algorithm segmented the target organ in CT images, transformed and reconstructed US images to match each orientation, and generated the image registration in real-time mode with acceptable accuracy. This image transformation allowed physicians to visualize the CT image-reconstructed target via a US probe outside the BEV that was non-coplanar to the beam's plane. It allowed the physicians to remotely control the US probe, which was mounted on a robotic arm, to dynamically trace and monitor in real time the coverage of the target within the BEV during a simulated beam-on situation. This target visualization system may provide a directly and remotely accessible, real-time way to visualize, verify, and ensure tumor targeting during radiotherapy.
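
    The system's US/CT registration algorithm is not specified in the record; as a generic illustration of point-based rigid registration, the sketch below estimates a rotation and translation aligning corresponding landmarks with the Kabsch (SVD) method, using synthetic fiducial points.

    ```python
    import numpy as np

    def rigid_registration(source, target):
        """Kabsch method: least-squares rotation R and translation t such that
        R @ source_i + t approximately equals target_i for corresponding 3D landmarks."""
        src_c = source - source.mean(axis=0)
        tgt_c = target - target.mean(axis=0)
        H = src_c.T @ tgt_c                       # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # correct for reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = target.mean(axis=0) - R @ source.mean(axis=0)
        return R, t

    # Synthetic check: recover a known rotation/translation from noiseless landmarks
    rng = np.random.default_rng(1)
    pts = rng.uniform(-50, 50, size=(6, 3))                 # e.g., fiducials in US space
    angle = np.deg2rad(20)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    t_true = np.array([5.0, -3.0, 12.0])
    moved = pts @ R_true.T + t_true                         # same fiducials in CT space
    R_est, t_est = rigid_registration(pts, moved)
    print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
    ```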

  13. Four-dimensional microscope- integrated optical coherence tomography to enhance visualization in glaucoma surgeries.

    Science.gov (United States)

    Pasricha, Neel Dave; Bhullar, Paramjit Kaur; Shieh, Christine; Viehland, Christian; Carrasco-Zevallos, Oscar Mijail; Keller, Brenton; Izatt, Joseph Adam; Toth, Cynthia Ann; Challa, Pratap; Kuo, Anthony Nanlin

    2017-01-01

    We report the first use of swept-source microscope-integrated optical coherence tomography (SS-MIOCT) capable of live four-dimensional (4D) (three-dimensional across time) imaging intraoperatively to directly visualize tube shunt placement and trabeculectomy surgeries in two patients with severe open-angle glaucoma and elevated intraocular pressure (IOP) that was not adequately managed by medical intervention or prior surgery. We performed tube shunt placement and trabeculectomy surgery and used SS-MIOCT to visualize and record surgical steps that benefitted from the enhanced visualization. In the case of tube shunt placement, SS-MIOCT successfully visualized the scleral tunneling, tube shunt positioning in the anterior chamber, and tube shunt suturing. For the trabeculectomy, SS-MIOCT successfully visualized the scleral flap creation, sclerotomy, and iridectomy. Postoperatively, both patients did well, with IOPs decreasing to the target goal. We found the benefit of SS-MIOCT was greatest in surgical steps requiring depth-based assessments. This technology has the potential to improve clinical outcomes.

  14. Integrated visualization of simulation results and experimental devices in virtual-reality space

    International Nuclear Information System (INIS)

    Ohtani, Hiroaki; Ishiguro, Seiji; Shohji, Mamoru; Kageyama, Akira; Tamura, Yuichi

    2011-01-01

    We succeeded in integrating the visualization of both simulation results and experimental device data in virtual-reality (VR) space using a CAVE system. Simulation results are shown using the Virtual LHD software, which can display magnetic field lines, particle trajectories, and isosurfaces of the plasma pressure of the Large Helical Device (LHD) based on data from the magnetohydrodynamics equilibrium simulation. A three-dimensional mouse, or wand, determines the initial position and pitch angle of a drift particle or the starting point of a magnetic field line, interactively in the VR space. The trajectory of a particle and the streamline of the magnetic field are calculated using the Runge-Kutta-Huta integration method on the basis of the results obtained after pointing to the initial condition. The LHD vessel is visualized objectively based on CAD data. Using these results and data, the simulated LHD plasma can be interactively drawn within an objective rendering of the LHD experimental vessel. Through this integrated visualization, it is possible to grasp the three-dimensional relationship between the positions of the device and the plasma in the VR space, opening a new path for contributions to future research. (author)
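
    The Runge-Kutta-Huta variant used by the authors is not reproduced here; as a minimal sketch of the field-line tracing idea, the snippet below integrates dx/ds = B(x)/|B(x)| with classical fourth-order Runge-Kutta for a simple analytic field that stands in for the MHD equilibrium data.

    ```python
    import numpy as np

    def b_field(x):
        """Toy magnetic field (not LHD equilibrium data): uniform Bz plus an
        azimuthal component around the z-axis."""
        return np.array([-0.3 * x[1], 0.3 * x[0], 1.0])

    def field_line_step(x, ds):
        """One classical RK4 step along the normalized field direction dx/ds = B/|B|."""
        f = lambda p: b_field(p) / np.linalg.norm(b_field(p))
        k1 = f(x)
        k2 = f(x + 0.5 * ds * k1)
        k3 = f(x + 0.5 * ds * k2)
        k4 = f(x + ds * k3)
        return x + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Trace a field line from a starting point chosen interactively (here hard-coded)
    x = np.array([1.0, 0.0, 0.0])
    line = [x]
    for _ in range(2000):
        x = field_line_step(x, ds=0.01)
        line.append(x)
    line = np.array(line)          # points along the field line, ready for plotting
    print(line[-1])
    ```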

  15. Asymmetric temporal integration of layer 4 and layer 2/3 inputs in visual cortex.

    Science.gov (United States)

    Hang, Giao B; Dan, Yang

    2011-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices. We found that the integration is sublinear and temporally asymmetric, with larger responses if layer 2/3 input preceded layer 4 input. The sublinearity depended on inhibition, and the asymmetry was largely attributable to the difference between the two inhibitory inputs. Interestingly, the asymmetric integration was specific to pyramidal neurons, and it strongly affected their spiking output. Thus via cortical inhibition, the temporal order of activation of layer 2/3 and layer 4 pathways can exert powerful control of cortical output during visual processing.

  16. DEVELOPMENT OF FINE MOTOR COORDINATION AND VISUAL-MOTOR INTEGRATION IN PRESCHOOL CHILDREN

    Directory of Open Access Journals (Sweden)

    Haris MEMISEVIC

    2013-03-01

    Full Text Available Fine motor skills are a prerequisite for many everyday activities and they are a good predictor of a child's later academic outcome. The goal of the present study was to assess the effects of age on the development of fine motor coordination and visual-motor integration in preschool children. The sample for this study consisted of 276 preschool children from Canton Sarajevo, Bosnia and Herzegovina. We assessed children's motor skills with the Beery Visual Motor Integration Test and the Lafayette Pegboard Test. Data were analyzed with one-way ANOVA, followed by planned comparisons between the age groups. We also performed a regression analysis to assess the influence of age and motor coordination on visual-motor integration. The results showed that age has a great effect on the development of fine motor skills. Furthermore, the results indicated that there are possible sensitive periods at preschool age in which the development of fine motor skills is accelerated. Early intervention specialists should make thorough evaluations of fine motor skills in preschool children and design motor (re)habilitation programs for children at risk of fine motor delays.

  17. Visual Cycle Modulation as an Approach toward Preservation of Retinal Integrity.

    Directory of Open Access Journals (Sweden)

    Claes Bavik

    Full Text Available Increased exposure to blue or visible light, fluctuations in oxygen tension, and the excessive accumulation of toxic retinoid byproducts places a tremendous amount of stress on the retina. Reduction of visual chromophore biosynthesis may be an effective method to reduce the impact of these stressors and preserve retinal integrity. A class of non-retinoid, small molecule compounds that target key proteins of the visual cycle have been developed. The first candidate in this class of compounds, referred to as visual cycle modulators, is emixustat hydrochloride (emixustat). Here, we describe the effects of emixustat, an inhibitor of the visual cycle isomerase (RPE65), on visual cycle function and preservation of retinal integrity in animal models. Emixustat potently inhibited isomerase activity in vitro (IC50 = 4.4 nM) and was found to reduce the production of visual chromophore (11-cis retinal) in wild-type mice following a single oral dose (ED50 = 0.18 mg/kg). Measure of drug effect on the retina by electroretinography revealed a dose-dependent slowing of rod photoreceptor recovery (ED50 = 0.21 mg/kg) that was consistent with the pattern of visual chromophore reduction. In albino mice, emixustat was shown to be effective in preventing photoreceptor cell death caused by intense light exposure. Pre-treatment with a single dose of emixustat (0.3 mg/kg) provided a ~50% protective effect against light-induced photoreceptor cell loss, while higher doses (1-3 mg/kg) were nearly 100% effective. In Abca4-/- mice, an animal model of excessive lipofuscin and retinoid toxin (A2E) accumulation, chronic (3 month) emixustat treatment markedly reduced lipofuscin autofluorescence and reduced A2E levels by ~60% (ED50 = 0.47 mg/kg). Finally, in the retinopathy of prematurity rodent model, treatment with emixustat during the period of ischemia and reperfusion injury produced a ~30% reduction in retinal neovascularization (ED50 = 0.46 mg/kg). These data demonstrate the ability of

  18. Visual integration dysfunction in schizophrenia arises by the first psychotic episode and worsens with illness duration.

    Science.gov (United States)

    Keane, Brian P; Paterno, Danielle; Kastner, Sabine; Silverstein, Steven M

    2016-05-01

    Visual integration dysfunction characterizes schizophrenia, but prior studies have not yet established whether the problem arises by the first psychotic episode or worsens with illness duration. To investigate the issue, we compared chronic schizophrenia patients (SZs), first episode psychosis patients (FEs), and well-matched healthy controls on a brief but sensitive psychophysical task in which subjects attempted to locate an integrated shape embedded in noise. Task difficulty depended on the number of noise elements co-presented with the shape. For half of the experiment, the entire display was scaled down in size to produce a high spatial frequency (HSF) condition, which has been shown to worsen patient integration deficits. Catch trials-in which the circular target appeared without noise-were also added so as to confirm that subjects were paying adequate attention. We found that controls integrated contours under noisier conditions than FEs, who, in turn, integrated better than SZs. These differences, which were at times large in magnitude (d = 1.7), clearly emerged only for HSF displays. Catch trial accuracy was above 95% for each group and could not explain the foregoing differences. Prolonged illness duration predicted poorer HSF integration across patients, but age had little effect on controls, indicating that the former factor was driving the effect in patients. Taken together, a brief psychophysical task efficiently demonstrates large visual integration impairments in schizophrenia. The deficit arises by the first psychotic episode, worsens with illness duration, and may serve as a biomarker of illness progression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Real-Time Lane Detection on Suburban Streets Using Visual Cue Integration

    Directory of Open Access Journals (Sweden)

    Shehan Fernando

    2014-04-01

    Full Text Available The detection of lane boundaries on suburban streets using images obtained from video constitutes a challenging task. This is mainly due to the difficulties associated with estimating the complex geometric structure of lane boundaries, the quality of lane markings as a result of wear, occlusions by traffic, and shadows caused by road-side trees and structures. Most of the existing techniques for lane boundary detection employ a single visual cue and will only work under certain conditions and where there are clear lane markings. Also, better results are achieved when there are no other on-road objects present. This paper extends our previous work and discusses a novel lane boundary detection algorithm specifically addressing the abovementioned issues through the integration of two visual cues. The first visual cue is based on stripe-like features found on lane lines extracted using a two-dimensional symmetric Gabor filter. The second visual cue is based on a texture characteristic determined using the entropy measure of the predefined neighbourhood around a lane boundary line. The visual cues are then integrated using a rule-based classifier which incorporates a modified sequential covering algorithm to improve robustness. To separate lane boundary lines from other similar features, a road mask is generated using road chromaticity values estimated from CIE L*a*b* colour transformation. Extraneous points around lane boundary lines are then removed by an outlier removal procedure based on studentized residuals. The lane boundary lines are then modelled with Bezier spline curves. To validate the algorithm, extensive experimental evaluation was carried out on suburban streets and the results are presented.
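
    As a minimal sketch of the two cues described above, a symmetric Gabor response for stripe-like markings and a local-entropy texture measure, and leaving out the authors' rule-based integration, road masking, outlier removal and spline fitting, the cue maps could be computed roughly as follows (kernel and window parameters are illustrative, not the published values):

    ```python
    # Illustrative cue computation for lane detection, assuming a grayscale road
    # image `img` as a 2-D numpy array scaled to [0, 1]. Not the authors' code.
    import numpy as np
    from scipy.ndimage import convolve

    def gabor_kernel(size=21, sigma=3.0, theta=np.pi / 2, lam=8.0):
        """Symmetric (cosine-phase) Gabor filter tuned to stripe-like lane markings."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

    def stripe_cue(img):
        """Cue 1: magnitude of the Gabor response (strong on painted lane lines)."""
        return np.abs(convolve(img, gabor_kernel(), mode="nearest"))

    def entropy_cue(img, win=9, bins=16):
        """Cue 2: Shannon entropy of the grey-level histogram in a local window."""
        half = win // 2
        out = np.zeros_like(img)
        for i in range(half, img.shape[0] - half):
            for j in range(half, img.shape[1] - half):
                patch = img[i - half:i + half + 1, j - half:j + half + 1]
                hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
                p = hist[hist > 0] / hist.sum()
                out[i, j] = -np.sum(p * np.log2(p))
        return out
    ```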

  20. Integration of multidisciplinary technologies for real time target visualization and verification for radiotherapy

    Directory of Open Access Journals (Sweden)

    Chang WC

    2014-06-01

    Full Text Available Wen-Chung Chang,1,* Chin-Sheng Chen,2,* Hung-Chi Tai,3 Chia-Yuan Liu,4,5 Yu-Jen Chen3 1Department of Electrical Engineering, National Taipei University of Technology, Taipei, Taiwan; 2Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei, Taiwan; 3Department of Radiation Oncology, Mackay Memorial Hospital, Taipei, Taiwan; 4Department of Internal Medicine, Mackay Memorial Hospital, Taipei, Taiwan; 5Department of Medicine, Mackay Medical College, New Taipei City, Taiwan  *These authors contributed equally to this work Abstract: The current practice of radiotherapy examines target coverage solely from the digitally reconstructed beam's eye view (BEV), in a way that is only indirectly accessible and not in real time. We aimed to visualize treatment targets in real time from each BEV. The image data of phantoms or patients from ultrasound (US) and computed tomography (CT) scans were captured to perform image registration. We integrated US, CT, US/CT image registration, robotic manipulation of US, a radiation treatment planning system, and a linear accelerator to constitute an innovative target visualization system. The algorithm segmented the target organ in CT images, transformed and reconstructed US images to match each orientation, and generated image registration in real time with acceptable accuracy. This image transformation allowed physicians to visualize the CT image-reconstructed target via a US probe outside the BEV that was non-coplanar to the beam's plane. It allowed the physicians to remotely control the US probe, mounted on a robotic arm, to dynamically trace and monitor in real time the coverage of the target within the BEV during a simulated beam-on situation. This target visualization system may provide a directly and remotely accessible, real-time way to visualize, verify, and ensure tumor targeting during radiotherapy. Keywords: ultrasound, computerized tomography

  1. Integration of bio-inspired, control-based visual and olfactory data for the detection of an elusive target

    Science.gov (United States)

    Duong, Tuan A.; Duong, Nghi; Le, Duong

    2017-01-01

    In this paper, we present an integration technique using a bio-inspired, control-based visual and olfactory receptor system to search for elusive targets in practical environments where the targets cannot be clearly detected by either sensory modality alone. The bio-inspired visual system is based on a model of the extended visual pathway, consisting of saccadic eye movements and the visual pathway (vertebrate retina, lateral geniculate nucleus and visual cortex), to enable powerful target detection from noisy, partial, incomplete visual data. The olfactory receptor algorithm, namely spatial invariant independent component analysis, which was developed based on data from Caltech's olfactory receptor-electronic nose (enose), is adopted to enable odorant target detection in an unknown environment. The integration of the two systems is a vital approach and sets a cornerstone for effective, low-cost miniaturized UAVs or fly robots for future DOD and NASA missions, as well as for security systems in Internet of Things environments.

  2. Constituents of Music and Visual-Art Related Pleasure - A Critical Integrative Literature Review.

    Science.gov (United States)

    Tiihonen, Marianne; Brattico, Elvira; Maksimainen, Johanna; Wikgren, Jan; Saarikallio, Suvi

    2017-01-01

    The present literature review investigated how pleasure induced by music and visual-art has been conceptually understood in empirical research over the past 20 years. After an initial selection of abstracts from seven databases (keywords: pleasure, reward, enjoyment, and hedonic), twenty music and eleven visual-art papers were systematically compared. The following questions were addressed: (1) What is the role of the keyword in the research question? (2) Is pleasure considered a result of variation in the perceiver's internal or external attributes? (3) What are the most commonly employed methods and main variables in empirical settings? Based on these questions, our critical integrative analysis aimed to identify which themes and processes emerged as key features for conceptualizing art-induced pleasure. The results demonstrated great variance in how pleasure has been approached: In the music studies pleasure was often a clear object of investigation, whereas in the visual-art studies the term was often embedded into the context of an aesthetic experience, or used otherwise in a descriptive, indirect sense. Music studies often targeted different emotions, their intensity or anhedonia. Biographical and background variables and personality traits of the perceiver were often measured. Next to behavioral methods, a common method was brain imaging which often targeted the reward circuitry of the brain in response to music. Visual-art pleasure was also frequently addressed using brain imaging methods, but the research focused on sensory cortices rather than the reward circuit alone. Compared with music research, visual-art research investigated more frequently pleasure in relation to conscious, cognitive processing, where the variations of stimulus features and the changing of viewing modes were regarded as explanatory factors of the derived experience. Despite valence being frequently applied in both domains, we conclude, that in empirical music research pleasure

  3. Constituents of Music and Visual-Art Related Pleasure – A Critical Integrative Literature Review

    Directory of Open Access Journals (Sweden)

    Marianne Tiihonen

    2017-07-01

    Full Text Available The present literature review investigated how pleasure induced by music and visual-art has been conceptually understood in empirical research over the past 20 years. After an initial selection of abstracts from seven databases (keywords: pleasure, reward, enjoyment, and hedonic), twenty music and eleven visual-art papers were systematically compared. The following questions were addressed: (1) What is the role of the keyword in the research question? (2) Is pleasure considered a result of variation in the perceiver’s internal or external attributes? (3) What are the most commonly employed methods and main variables in empirical settings? Based on these questions, our critical integrative analysis aimed to identify which themes and processes emerged as key features for conceptualizing art-induced pleasure. The results demonstrated great variance in how pleasure has been approached: In the music studies pleasure was often a clear object of investigation, whereas in the visual-art studies the term was often embedded into the context of an aesthetic experience, or used otherwise in a descriptive, indirect sense. Music studies often targeted different emotions, their intensity or anhedonia. Biographical and background variables and personality traits of the perceiver were often measured. Next to behavioral methods, a common method was brain imaging which often targeted the reward circuitry of the brain in response to music. Visual-art pleasure was also frequently addressed using brain imaging methods, but the research focused on sensory cortices rather than the reward circuit alone. Compared with music research, visual-art research investigated more frequently pleasure in relation to conscious, cognitive processing, where the variations of stimulus features and the changing of viewing modes were regarded as explanatory factors of the derived experience. Despite valence being frequently applied in both domains, we conclude, that in empirical music

  5. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration.

    Science.gov (United States)

    Stropahl, Maren; Debener, Stefan

    2017-01-01

    There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensori-neural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises if cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18) and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex by means of EEG source localization in response to human faces and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the auditory system

  6. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration

    Directory of Open Access Journals (Sweden)

    Maren Stropahl

    2017-01-01

    Full Text Available There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensori-neural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises if cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18) and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex by means of EEG source localization in response to human faces and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the

  7. Integrating and Visualizing Tropical Cyclone Data Using the Real Time Mission Monitor

    Science.gov (United States)

    Goodman, H. Michael; Blakeslee, Richard; Conover, Helen; Hall, John; He, Yubin; Regner, Kathryn

    2009-01-01

    The Real Time Mission Monitor (RTMM) is a visualization and information system that fuses multiple Earth science data sources, to enable real time decision-making for airborne and ground validation experiments. Developed at the NASA Marshall Space Flight Center, RTMM is a situational awareness, decision-support system that integrates satellite imagery, radar, surface and airborne instrument data sets, model output parameters, lightning location observations, aircraft navigation data, soundings, and other applicable Earth science data sets. The integration and delivery of this information is made possible using data acquisition systems, network communication links, network server resources, and visualizations through the Google Earth virtual globe application. RTMM is extremely valuable for optimizing individual Earth science airborne field experiments. Flight planners, scientists, and managers appreciate the contributions that RTMM makes to their flight projects. A broad spectrum of interdisciplinary scientists used RTMM during field campaigns including the hurricane-focused 2006 NASA African Monsoon Multidisciplinary Analyses (NAMMA), 2007 NOAA-NASA Aerosonde Hurricane Noel flight, 2007 Tropical Composition, Cloud, and Climate Coupling (TC4), plus a soil moisture (SMAP-VEX) and two arctic research experiments (ARCTAS) in 2008. Improving and evolving RTMM is a continuous process. RTMM recently integrated the Waypoint Planning Tool, a Java-based application that enables aircraft mission scientists to easily develop a pre-mission flight plan through an interactive point-and-click interface. Individual flight legs are automatically calculated "on the fly". The resultant flight plan is then immediately posted to the Google Earth-based RTMM for interested scientists to view the planned flight track and subsequently compare it to the actual real time flight progress. We are planning additional capabilities to RTMM including collaborations with the Jet Propulsion
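
    The "on the fly" leg calculation mentioned above is, at its core, a great-circle distance between consecutive waypoints. The sketch below is not the Waypoint Planning Tool's code; it only illustrates the haversine computation, with invented waypoint coordinates:

    ```python
    # Hedged sketch of an "on the fly" flight-leg calculation: great-circle
    # distance between consecutive waypoints via the haversine formula.
    # Waypoint coordinates are invented for illustration.
    import math

    def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
        """Great-circle distance between two (lat, lon) points in kilometres."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 2 * radius_km * math.asin(math.sqrt(a))

    waypoints = [(16.3, -23.0), (17.1, -24.5), (18.0, -26.2)]  # hypothetical (lat, lon)
    legs = [haversine_km(*waypoints[i], *waypoints[i + 1]) for i in range(len(waypoints) - 1)]
    print(f"leg lengths (km): {legs}, total: {sum(legs):.1f}")
    ```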

  8. Questionnaire-based person trip visualization and its integration to quantitative measurements in Myanmar

    Science.gov (United States)

    Kimijiama, S.; Nagai, M.

    2016-06-01

    With the development of telecommunications in Myanmar, person trip surveys are expected to shift from conversational questionnaires to GPS surveys. Integrating historical questionnaire data with GPS surveys and visualizing both is very important for evaluating chronological changes in trips against socio-economic and environmental events. The objectives of this paper are to: (a) visualize questionnaire-based person trip data, (b) compare the errors between the questionnaire and GPS data sets with respect to sex and age and (c) assess trip behaviour in time series. In total, 345 respondents were selected through stratified random sampling, and each was assessed with both a questionnaire and a GPS survey. Trip information from the questionnaires, such as destinations, was converted using GIS. The results show that errors between the two data sets in the number of trips, total trip distance and total trip duration are 25.5%, 33.2% and 37.2%, respectively. The smallest errors are found among working-age females, mainly those employed in project-related activities generated by foreign investment. Trip distance increased year by year. The study concluded that visualizing questionnaire-based person trip data and integrating it with current quantitative measurements is very useful for exploring historical trip changes and understanding the impacts of socio-economic events.
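
    The error figures quoted above are relative differences between the two data sets. Assuming the GPS survey is treated as the reference (an assumption, since the exact formula is not given here), the metric reduces to a one-liner; the numbers below are placeholders, not the survey data:

    ```python
    # Minimal sketch of a questionnaire-vs-GPS error metric, assuming the GPS
    # record is used as the reference. Values are invented placeholders.
    def percent_error(questionnaire_value, gps_value):
        """Relative error of the questionnaire against the GPS reference, in percent."""
        return abs(questionnaire_value - gps_value) / gps_value * 100.0

    trips_reported, trips_recorded = 6, 8        # trip counts for one respondent
    dist_reported, dist_recorded = 10.0, 14.0    # total trip distance in km
    print(percent_error(trips_reported, trips_recorded))           # 25.0
    print(round(percent_error(dist_reported, dist_recorded), 1))   # 28.6
    ```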

  9. An integrated domain specific language for post-processing and visualizing electrophysiological signals in Java.

    Science.gov (United States)

    Strasser, T; Peters, T; Jagle, H; Zrenner, E; Wilke, R

    2010-01-01

    Electrophysiology of vision - especially the electroretinogram (ERG) - is used as a non-invasive way of functionally testing the visual system. The ERG is a combined electrical response generated by neural and non-neuronal cells in the retina in response to light stimulation. This response can be recorded and used for the diagnosis of numerous disorders. For both clinical practice and clinical trials it is important to process these signals in an accurate and fast way and to provide the results as structured, consistent reports. Therefore, we developed a freely available and open-source framework in Java (http://www.eye.uni-tuebingen.de/project/idsI4sigproc). The framework is focused on easy integration with existing applications. By leveraging well-established software patterns such as pipes-and-filters and fluent interfaces, and by designing the application programming interface (API) as an integrated domain specific language (DSL), the overall framework provides a smooth learning curve. Additionally, it already contains several processing methods and visualization features and can be extended easily by implementing the provided interfaces. In this way, not only can new processing methods be added but the framework can also be adopted for other areas of signal processing. This article describes in detail the structure and implementation of the framework and demonstrates its application through the software package used in clinical practice and clinical trials at the University Eye Hospital Tuebingen, one of the largest departments in the field of visual electrophysiology in Europe.
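
    The framework itself is written in Java; purely to illustrate the pipes-and-filters and fluent-interface ideas it leverages, a minimal Python analogue (with hypothetical filter names, not the framework's API) could look like this:

    ```python
    # Illustrative fluent pipes-and-filters pipeline. Filter names and behaviour
    # are hypothetical and only demonstrate the pattern, not the Java framework.
    class SignalPipeline:
        def __init__(self, samples):
            self._samples = list(samples)

        def detrend(self):
            """Remove the mean from the signal."""
            mean = sum(self._samples) / len(self._samples)
            self._samples = [s - mean for s in self._samples]
            return self  # returning self is what makes the interface "fluent"

        def moving_average(self, width=3):
            """Smooth with a simple centered moving average (edge-padded)."""
            half = width // 2
            padded = [self._samples[0]] * half + self._samples + [self._samples[-1]] * half
            self._samples = [sum(padded[i:i + width]) / width for i in range(len(self._samples))]
            return self

        def result(self):
            return self._samples

    # Chained calls read like a small domain-specific language:
    smoothed = SignalPipeline([0.1, 0.4, 3.0, 0.3, 0.2, 0.5]).detrend().moving_average(3).result()
    ```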

  10. An integrated theory of attention and decision making in visual signal detection.

    Science.gov (United States)

    Smith, Philip L; Ratcliff, Roger

    2009-04-01

    The simplest attentional task, detecting a cued stimulus in an otherwise empty visual field, produces complex patterns of performance. Attentional cues interact with backward masks and with spatial uncertainty, and there is a dissociation in the effects of these variables on accuracy and on response time. A computational theory of performance in this task is described. The theory links visual encoding, masking, spatial attention, visual short-term memory (VSTM), and perceptual decision making in an integrated dynamic framework. The theory assumes that decisions are made by a diffusion process driven by a neurally plausible, shunting VSTM. The VSTM trace encodes the transient outputs of early visual filters in a durable form that is preserved for the time needed to make a decision. Attention increases the efficiency of VSTM encoding, either by increasing the rate of trace formation or by reducing the delay before trace formation begins. The theory provides a detailed, quantitative account of attentional effects in spatial cuing tasks at the level of response accuracy and the response time distributions. (c) 2009 APA, all rights reserved
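
    The decision mechanism named here, a diffusion process driven by the VSTM trace, can be illustrated with a few lines of simulation. This is only a sketch of the general idea (noisy evidence drifting toward one of two boundaries) with invented parameters, not the authors' fitted model:

    ```python
    # Hedged sketch of a two-boundary diffusion decision process. The drift rate
    # stands in for the strength of the (attention-weighted) VSTM trace; all
    # parameter values are illustrative only.
    import random

    def diffusion_trial(drift=0.15, noise=1.0, boundary=1.0, dt=0.001, max_t=3.0):
        """Return (choice, response_time) for one simulated trial."""
        x, t = 0.0, 0.0
        while abs(x) < boundary and t < max_t:
            x += drift * dt + noise * (dt ** 0.5) * random.gauss(0.0, 1.0)
            t += dt
        # Trials that time out without reaching a boundary are scored as "noise".
        return ("signal" if x >= boundary else "noise"), t

    trials = [diffusion_trial() for _ in range(2000)]
    accuracy = sum(1 for choice, _ in trials if choice == "signal") / len(trials)
    mean_rt = sum(rt for _, rt in trials) / len(trials)
    print(accuracy, mean_rt)
    ```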

  11. Experiences of Individuals With Visual Impairments in Integrated Physical Education: A Retrospective Study.

    Science.gov (United States)

    Haegele, Justin A; Zhu, Xihe

    2017-12-01

    The purpose of this retrospective study was to examine the experiences of adults with visual impairments during school-based integrated physical education (PE). An interpretative phenomenological analysis (IPA) research approach was used and 16 adults (ages 21-48 years; 10 women, 6 men) with visual impairments acted as participants for this study. The primary sources of data were semistructured audiotaped telephone interviews and reflective field notes, which were recorded during and immediately following each interview. Thematic development was undertaken utilizing a 3-step analytical process guided by IPA. Based on the data analysis, 3 interrelated themes emerged from the participant transcripts: (a) feelings about "being put to the side," frustration and inadequacy; (b) "She is blind, she can't do it," debilitating feelings from physical educators' attitudes; and (c) "not self-esteem raising," feelings about peer interactions. The 1st theme described the participants' experiences and ascribed meaning to exclusionary practices. The 2nd theme described the participants' frustration over being treated differently by their PE teachers because of their visual impairments. Lastly, "not self-esteem raising," feelings about peer interactions demonstrated how participants felt about issues regarding challenging social situations with peers in PE. Utilizing an IPA approach, the researchers uncovered 3 interrelated themes that depicted central feelings, experiences, and reflections, which informed the meaning of the participants' PE experiences. The emerged themes provide unique insight into the embodied experiences of those with visual impairments in PE and fill a previous gap in the extant literature.

  12. Object Representations in Human Visual Cortex Formed Through Temporal Integration of Dynamic Partial Shape Views.

    Science.gov (United States)

    Orlov, Tanya; Zohary, Ehud

    2018-01-17

    We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. These findings provide strong evidence for a global encoding of shape in the LOC regardless of integration processes required to generate the shape percept. SIGNIFICANCE STATEMENT Visual objects are recognized through spatial integration of features available simultaneously on

  13. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase one, volume 1 : summary report.

    Science.gov (United States)

    2009-12-01

    The Integrated Remote Sensing and Visualization System (IRSV) is being designed to accommodate the needs of today's Bridge Engineers at the state and local level from the following aspects: better understanding and enforcement of a complex ...

  14. Integrating the Visual Arts Back into the Classroom with Mobile Applications: Teaching beyond the "Click and View" Approach

    Science.gov (United States)

    Katz-Buonincontro, Jen; Foster, Aroutis

    2013-01-01

    Teachers can use mobile applications to integrate the visual arts back into the classroom, but how? This article generates recommendations for selecting and using well-designed mobile applications in the visual arts beyond a "click and view" approach. Using quantitative content analysis, the results show the extent to which a sample of…

  15. Cultivating Common Ground: Integrating Standards-Based Visual Arts, Math and Literacy in High-Poverty Urban Classrooms

    Science.gov (United States)

    Cunnington, Marisol; Kantrowitz, Andrea; Harnett, Susanne; Hill-Ries, Aline

    2014-01-01

    The "Framing Student Success: Connecting Rigorous Visual Arts, Math and Literacy Learning" experimental demonstration project was designed to develop and test an instructional program integrating high-quality, standards-based instruction in the visual arts, math, and literacy. Developed and implemented by arts-in-education organization…

  16. An integrated audio-visual impact tool for wind turbine installations

    International Nuclear Information System (INIS)

    Lymberopoulos, N.; Belessis, M.; Wood, M.; Voutsinas, S.

    1996-01-01

    An integrated software tool was developed for the design of wind parks that takes into account their visual and audio impact. The application is built on a powerful hardware platform and is fully operated through a graphical user interface. The topography, the wind turbines and the daylight conditions are realised digitally. The wind park can be animated in real time and the user can take virtual walks in it while the layout of the park is altered interactively. In parallel, the wind speed levels on the terrain, the emitted noise intensity, the annual energy output and the cash flow can be estimated at any stage of the session and prompt the user for rearrangements. The tool has been used to visually simulate existing wind parks in St. Breok, UK and on Andros Island, Greece. The results lead to the conclusion that such a tool can assist in the public acceptance and licensing procedures for wind parks. (author)
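
    The emitted-noise estimate mentioned above is, in its simplest form, a point-source propagation calculation summed over turbines. The sketch below is a generic assumption, not the tool's acoustic model: it uses spherical spreading (Lp = Lw - 20*log10(r) - 11 dB) with an assumed sound power level and ignores ground effects and air absorption:

    ```python
    # Generic wind-park noise estimate: spherical spreading from each turbine and
    # energetic summation over turbines. Source power level and positions are
    # assumptions made only for illustration.
    import math

    def received_level_db(source_power_db, distance_m):
        """Level at `distance_m` from a point source under spherical spreading."""
        return source_power_db - 20.0 * math.log10(distance_m) - 11.0

    def combined_level_db(levels_db):
        """Energetic (incoherent) sum of several dB levels."""
        return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

    turbine_positions = [(0.0, 0.0), (300.0, 0.0), (600.0, 0.0)]   # metres, illustrative
    listener = (450.0, 400.0)
    levels = [received_level_db(104.0, math.dist(t, listener)) for t in turbine_positions]
    print(round(combined_level_db(levels), 1), "dB at the listener")
    ```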

  17. Integrated pathway-based transcription regulation network mining and visualization based on gene expression profiles.

    Science.gov (United States)

    Kibinge, Nelson; Ono, Naoaki; Horie, Masafumi; Sato, Tetsuo; Sugiura, Tadao; Altaf-Ul-Amin, Md; Saito, Akira; Kanaya, Shigehiko

    2016-06-01

    Conventionally, workflows examining transcription regulation networks from gene expression data involve distinct analytical steps. There is a need for pipelines that unify data mining and inference deduction into a singular framework to enhance interpretation and hypothesis generation. We propose a workflow that merges network construction with gene expression data mining, focusing on regulation processes in the context of transcription-factor-driven gene regulation. The pipeline implements pathway-based modularization of expression profiles into functional units to improve biological interpretation. The integrated workflow was implemented as web application software (TransReguloNet) with functions that enable pathway visualization and comparison of transcription factor activity between sample conditions defined in the experimental design. The pipeline merges differential expression, network construction, pathway-based abstraction, clustering and visualization. The framework was applied in the analysis of actual expression datasets related to lung, breast and prostate cancer. Copyright © 2016 Elsevier Inc. All rights reserved.
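
    As a rough sketch of the first two stages such a pipeline chains together, differential expression followed by grouping of expression profiles into modules, and emphatically not the TransReguloNet implementation or its pathway annotations, a minimal version might read:

    ```python
    # Minimal sketch: per-gene differential expression between two conditions,
    # then clustering of the significant genes into modules. The expression
    # matrix is random placeholder data, not the cancer datasets analysed above.
    import numpy as np
    from scipy import stats
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    genes = [f"gene_{i}" for i in range(200)]
    tumour = rng.normal(loc=5.0, scale=1.0, size=(200, 6))   # 6 tumour samples
    normal = rng.normal(loc=5.0, scale=1.0, size=(200, 6))   # 6 normal samples
    tumour[:20] += 2.0                                       # spike in 20 "regulated" genes

    # Differential expression: Welch t-test and log2 fold change per gene.
    tstat, p = stats.ttest_ind(tumour, normal, axis=1, equal_var=False)
    log2fc = np.log2(tumour.mean(axis=1) / normal.mean(axis=1))
    significant = np.where(p < 0.01)[0]

    # Module detection: hierarchical clustering of the significant genes'
    # expression profiles (a simple stand-in for pathway-based modularization).
    profiles = np.hstack([tumour[significant], normal[significant]])
    modules = fcluster(linkage(profiles, method="average"), t=3, criterion="maxclust")
    print({genes[g]: int(m) for g, m in zip(significant, modules)})
    ```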

  18. Efficient data exchange: Integrating a vector GIS with an object-oriented, 3-D visualization system

    International Nuclear Information System (INIS)

    Kuiper, J.; Ayers, A.; Johnson, R.; Tolbert-Smith, M.

    1996-01-01

    A common problem encountered in Geographic Information System (GIS) modeling is the exchange of data between different software packages to best utilize the unique features of each package. This paper describes a project to integrate two systems through efficient data exchange. The first is a widely used GIS based on a relational data model. This system has a broad set of data input, processing, and output capabilities, but lacks three-dimensional (3-D) visualization and certain modeling functions. The second system is a specialized object-oriented package designed for 3-D visualization and modeling. Although this second system is useful for subsurface modeling and hazardous waste site characterization, it does not provide many of the capabilities of a complete GIS. The system-integration project resulted in an easy-to-use program to transfer information between the systems, making many of the more complex conversion issues transparent to the user. The strengths of both systems are accessible, allowing the scientist more time to focus on analysis. This paper details the capabilities of the two systems, explains the technical issues associated with data exchange and how they were solved, and outlines an example analysis project that used the integrated systems.

  19. Visual feature integration indicated by phase-locked frontal-parietal EEG signals.

    Science.gov (United States)

    Phillips, Steven; Takeda, Yuji; Singh, Archana

    2012-01-01

    The capacity to integrate multiple sources of information is a prerequisite for complex cognitive ability, such as finding a target uniquely identifiable by the conjunction of two or more features. Recent studies identified greater frontal-parietal synchrony during conjunctive than non-conjunctive (feature) search. Whether this difference also reflects greater information integration, rather than just differences in cognitive strategy (e.g., top-down versus bottom-up control of attention), or task difficulty is uncertain. Here, we examine the first possibility by parametrically varying the number of integrated sources from one to three and measuring phase-locking values (PLV) of frontal-parietal EEG electrode signals, as indicators of synchrony. Linear regressions, under hierarchical false-discovery rate control, indicated significant positive slopes for number of sources on PLV in the 30-38 Hz, 175-250 ms post-stimulus frequency-time band for pairs in the sagittal plane (i.e., F3-P3, Fz-Pz, F4-P4), after equating conditions for behavioural performance (to exclude effects due to task difficulty). No such effects were observed for pairs in the transverse plane (i.e., F3-F4, C3-C4, P3-P4). These results provide support for the idea that anterior-posterior phase-locking in the lower gamma-band mediates integration of visual information. They also provide a potential window into cognitive development, seen as developing the capacity to integrate more sources of information.
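
    The phase-locking value itself has a compact definition: the magnitude of the across-trial average of the unit phasor of the phase difference between two channels. The sketch below illustrates that computation for one frequency band; the filter settings and random placeholder data are not the study's pipeline:

    ```python
    # Hedged sketch of the phase-locking value (PLV) between a frontal and a
    # parietal channel: PLV = |mean over trials of exp(i * phase difference)|.
    # Band edges and filter order are placeholders, not the published analysis.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def band_phase(x, fs, lo, hi, order=4):
        """Instantaneous phase of `x` band-pass filtered to [lo, hi] Hz."""
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return np.angle(hilbert(filtfilt(b, a, x)))

    def plv(trials_a, trials_b, fs, lo=30.0, hi=38.0):
        """PLV across trials between two channels (arrays of shape n_trials x n_samples)."""
        diffs = [band_phase(a, fs, lo, hi) - band_phase(b, fs, lo, hi)
                 for a, b in zip(trials_a, trials_b)]
        return np.abs(np.mean(np.exp(1j * np.array(diffs)), axis=0))  # PLV per time point

    fs = 500.0
    rng = np.random.default_rng(1)
    frontal = rng.standard_normal((50, 500))   # 50 trials, 1 s each (placeholder data)
    parietal = rng.standard_normal((50, 500))
    print(plv(frontal, parietal, fs).shape)    # one PLV value per sample
    ```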

  20. What You See Is What You Remember: Visual Chunking by Temporal Integration Enhances Working Memory.

    Science.gov (United States)

    Akyürek, Elkan G; Kappelmann, Nils; Volkert, Marc; van Rijn, Hedderik

    2017-12-01

    Human memory benefits from information clustering, which can be accomplished by chunking. Chunking typically relies on expertise and strategy, and it is unknown whether perceptual clustering over time, through temporal integration, can also enhance working memory. The current study examined the attentional and working memory costs of temporal integration of successive target stimulus pairs embedded in rapid serial visual presentation. ERPs were measured as a function of behavioral reports: One target, two separate targets, or two targets reported as a single integrated target. N2pc amplitude, reflecting attentional processing, depended on the actual number of successive targets. The memory-related CDA and P3 components instead depended on the perceived number of targets irrespective of their actual succession. The report of two separate targets was associated with elevated amplitude, whereas integrated as well as actual single targets exhibited lower amplitude. Temporal integration thus provided an efficient means of processing sensory input, offloading working memory so that the features of two targets were consolidated and maintained at a cost similar to that of a single target.

  1. The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.

    Science.gov (United States)

    Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard

    2016-10-01

    The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and this integration improves facial emotion recognition. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Two items remembered as precisely as one: how integral features can improve visual working memory.

    Science.gov (United States)

    Bae, Gi Yeul; Flombaum, Jonathan I

    2013-10-01

    In the ongoing debate about the efficacy of visual working memory for more than three items, a consensus has emerged that memory precision declines as memory load increases from one to three. Many studies have reported that memory precision seems to be worse for two items than for one. We argue that memory for two items appears less precise than that for one only because two items present observers with a correspondence challenge that does not arise when only one item is stored--the need to relate observations to their corresponding memory representations. In three experiments, we prevented correspondence errors in two-item trials by varying sample items along task-irrelevant but integral (as opposed to separable) dimensions. (Initial experiments with a classic sorting paradigm identified integral feature relationships.) In three memory experiments, our manipulation produced equally precise representations of two items and of one item.

  3. 3D visualization of integrated ground penetrating radar data and EM-61 data to determine buried objects and their characteristics

    International Nuclear Information System (INIS)

    Kadioğlu, Selma; Daniels, Jeffrey J

    2008-01-01

    This paper presents an interactive three-dimensional (3D) visualization of two-dimensional (2D) ground penetrating radar (GPR) data and their integration with electromagnetic induction (EMI) EM-61 data in a 3D volume. The method was used to locate and identify near-surface buried old industrial remains, with their shape, depth and type (metallic/non-metallic), in a brownfield site. The aim of the study is to illustrate a new approach to integrating the two data sets in a 3D image for monitoring and interpretation of buried remains, and the paper methodically indicates the appropriate amplitude–colour and opacity function constructions needed to highlight buried remains in a transparent 3D view. The interactive interpretation of the integrated 3D visualization was performed using transparent 3D sub-blocks of the GPR data set that highlighted individual anomalies in their true locations. Colour assignment and opacity formulation for the data sets were the keys to the integrated 3D visualization and interpretation. The new visualization provided an optimal visual comparison and interpretation of the complex data sets, making it possible to identify and differentiate the metallic and non-metallic remains and to verify the interpretation at exact locations and depths. Therefore, the integrated 3D visualization of the two data sets allowed more successful identification of the buried remains.
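
    The amplitude-colour and opacity construction described above boils down to a transfer function that makes weak amplitudes transparent so that only strong reflectors remain visible. A minimal sketch, with illustrative thresholds rather than the study's values, might be:

    ```python
    # Minimal amplitude-to-colour/opacity transfer function: low GPR amplitudes
    # become fully transparent so only strong reflectors (candidate buried
    # objects) remain visible in the 3-D volume. Thresholds are illustrative.
    import numpy as np

    def transfer_function(amplitude, lo=0.35, hi=0.8):
        """Map normalized amplitudes in [0, 1] to (r, g, b, opacity) tuples."""
        a = np.clip(amplitude, 0.0, 1.0)
        opacity = np.where(a < lo, 0.0, np.clip((a - lo) / (hi - lo), 0.0, 1.0))
        red = a                     # simple warm ramp: strong reflectors show in red/yellow
        green = a ** 2
        blue = np.zeros_like(a)
        return np.stack([red, green, blue, opacity], axis=-1)

    volume = np.random.rand(64, 64, 32)    # placeholder amplitude volume
    rgba = transfer_function(volume)       # shape (64, 64, 32, 4), ready for a volume renderer
    ```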

  4. Neural substrates of reliability-weighted visual-tactile multisensory integration

    Directory of Open Access Journals (Sweden)

    Michael S Beauchamp

    2010-06-01

    Full Text Available As sensory systems deteriorate in aging or disease, the brain must relearn the appropriate weights to assign each modality during multisensory integration. Using blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) of human subjects, we tested a model for the neural mechanisms of sensory weighting, termed "weighted connections". This model holds that the connection weights between early and late areas vary depending on the reliability of the modality, independent of the level of early sensory cortex activity. When subjects detected viewed and felt touches to the hand, a network of brain areas was active, including visual areas in lateral occipital cortex, somatosensory areas in inferior parietal lobe, and multisensory areas in the intraparietal sulcus (IPS). In agreement with the weighted connection model, the connection weight measured with structural equation modeling between somatosensory cortex and IPS increased for somatosensory-reliable stimuli, and the connection weight between visual cortex and IPS increased for visual-reliable stimuli. This double dissociation of connection strengths was similar to the pattern of behavioral responses during incongruent multisensory stimulation, suggesting that weighted connections may be a neural mechanism for behavioral reliability weighting.
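
    At the computational level, the reliability weighting referred to above is usually formalized as inverse-variance weighting of the unimodal estimates. The sketch below shows that standard rule with invented numbers; it is not the study's structural equation model:

    ```python
    # Standard inverse-variance (reliability) weighting of two unimodal estimates.
    # Values are invented for illustration, not data from the study.
    def reliability_weighted_estimate(estimates, sigmas):
        """Combine unimodal estimates with weights proportional to 1/sigma^2."""
        weights = [1.0 / s ** 2 for s in sigmas]
        return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

    visual_estimate, tactile_estimate = 10.0, 14.0   # e.g., perceived touch location (a.u.)
    # When vision is reliable, the combined estimate stays near the visual value:
    print(reliability_weighted_estimate([visual_estimate, tactile_estimate], sigmas=[1.0, 2.0]))  # ~10.8
    # When vision is degraded (larger sigma), the estimate shifts toward touch:
    print(reliability_weighted_estimate([visual_estimate, tactile_estimate], sigmas=[3.0, 2.0]))  # ~12.8
    ```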

  5. AppEEARS: A Simple Tool that Eases Complex Data Integration and Visualization Challenges for Users

    Science.gov (United States)

    Maiersperger, T.

    2017-12-01

    The Application for Extracting and Exploring Analysis-Ready Samples (AppEEARS) offers a simple and efficient way to perform discovery, processing, visualization, and acquisition across large quantities and varieties of Earth science data. AppEEARS brings significant value to a very broad array of user communities by 1) significantly reducing data volumes, at-archive, based on user-defined space-time-variable subsets, 2) promoting interoperability across a wide variety of datasets via format and coordinate reference system harmonization, 3) increasing the velocity of both data analysis and insight by providing analysis-ready data packages and by allowing interactive visual exploration of those packages, and 4) ensuring veracity by making data quality measures more apparent and usable and by providing standards-based metadata and processing provenance. Development and operation of AppEEARS is led by the National Aeronautics and Space Administration (NASA) Land Processes Distributed Active Archive Center (LP DAAC). The LP DAAC also partners with several other archives to extend the capability across a larger federation of geospatial data providers. Over one hundred datasets are currently available, covering a diversity of variables including land cover, population, elevation, vegetation indices, and land surface temperature. Many hundreds of users have already used this new web-based capability to make the complex tasks of data integration and visualization much simpler and more efficient.

  6. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 4 : web-based bridge information database--visualization analytics and distributed sensing.

    Science.gov (United States)

    2012-03-01

    This report introduces the design and implementation of a Web-based bridge information visual analytics system. This project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result combines a GIS ...

  7. Time-varying spatial data integration and visualization: 4 Dimensions Environmental Observations Platform (4-DEOS)

    Science.gov (United States)

    Paciello, Rossana; Coviello, Irina; Filizzola, Carolina; Genzano, Nicola; Lisi, Mariano; Mazzeo, Giuseppe; Pergola, Nicola; Sileo, Giancanio; Tramutoli, Valerio

    2014-05-01

    In environmental studies, the integration of heterogeneous and time-varying data is a very common requirement for investigating and possibly visualizing correlations among the physical parameters underlying the dynamics of complex phenomena. Datasets used in such applications often have different spatial and temporal resolutions. In some cases the superimposition of asynchronous layers is required. Traditionally, the platforms used for spatio-temporal visual data analysis allow spatial data to be overlaid while managing time with a 'snapshot' data model, each stack of layers being labeled with a different time. This kind of architecture, however, incorporates neither temporal indexing nor the third spatial dimension, which is usually given as an independent additional layer. Conversely, a full representation of a generic environmental parameter P(x,y,z,t) in the 4D space-time domain would make it possible to handle asynchronous datasets as well as less traditional data products (e.g., vertical sections, point time series, etc.). In this paper we present the 4 Dimensions Environmental Observation Platform (4-DEOS), a system based on a Client-Broker-Server web services architecture. This platform is a new open source solution for both timely access and easy integration and visualization of heterogeneous (maps, vertical profiles or sections, point time series, etc.), asynchronous, geospatial products. The innovative aspect of the 4-DEOS system is that users can analyze data/products individually while moving through time, with the possibility of pausing the display of some data/products and focusing on other parameters to better study their temporal evolution. The platform gives the opportunity to choose between two distinct display modes, by time interval or by single instant. Users can choose to visualize data/products in two ways: i) showing each parameter in a dedicated window, or ii) visualizing all parameters overlapped in a single window. A sliding time bar allows

  8. Numerical integration methods and layout improvements in the context of dynamic RNA visualization.

    Science.gov (United States)

    Shabash, Boris; Wiese, Kay C

    2017-05-30

    RNA visualization software tools have traditionally presented a static visualization of RNA molecules with limited ability for users to interact with the resulting image once it is complete. Only a few tools allowed for dynamic structures. One such tool is jViz.RNA. Currently, jViz.RNA employs a unique method for the creation of the RNA molecule layout by mapping the RNA nucleotides into vertexes in a graph, which we call the detailed graph, and then utilizes a Newtonian mechanics inspired system of forces to calculate a layout for the RNA molecule. The work presented here focuses on improvements to jViz.RNA that allow the drawing of RNA secondary structures according to common drawing conventions, as well as dramatic run-time performance improvements. This is done first by presenting an alternative method for mapping the RNA molecule into a graph, which we call the compressed graph, and then employing advanced numerical integration methods for the compressed graph representation. Comparing the compressed graph and detailed graph implementations, we find that the compressed graph produces results more consistent with RNA drawing conventions. However, we also find that employing the compressed graph method requires a more sophisticated initial layout to produce visualizations that would require minimal user interference. Comparing the two numerical integration methods demonstrates the higher stability of the Backward Euler method, and its resulting ability to handle much larger time steps, a high priority feature for any software which entails user interaction. The work in this manuscript presents the preferred use of compressed graphs to detailed ones, as well as the advantages of employing the Backward Euler method over the Forward Euler method. These improvements produce more stable as well as visually aesthetic representations of the RNA secondary structures. The results presented demonstrate that both the compressed graph representation, as well as the Backward
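
    The stability argument for Backward Euler can be made concrete on a toy version of the spring-like forces used in such layouts. The sketch below is not jViz.RNA code; a single stiff, damped one-dimensional spring stands in for a graph vertex, and the explicit update diverges at a step size the implicit update handles comfortably:

    ```python
    # Forward vs Backward Euler on a stiff, damped spring (state = [position,
    # velocity]). Constants are illustrative; the point is the stability gap.
    import numpy as np

    k, c, m, h = 200.0, 2.0, 1.0, 0.05            # stiffness, damping, mass, (large) time step
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])  # linear dynamics: x' = A x

    def forward_euler(x, steps):
        for _ in range(steps):
            x = x + h * (A @ x)                   # explicit update: blows up when h is too large
        return x

    def backward_euler(x, steps):
        M = np.linalg.inv(np.eye(2) - h * A)      # implicit update: solve (I - hA) x_new = x_old
        for _ in range(steps):
            x = M @ x
        return x

    x0 = np.array([1.0, 0.0])
    print(forward_euler(x0.copy(), 200))          # diverges: |position| grows without bound
    print(backward_euler(x0.copy(), 200))         # decays toward the rest position at the origin
    ```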

  9. Integration of intraoperative stereovision imaging for brain shift visualization during image-guided cranial procedures

    Science.gov (United States)

    Schaewe, Timothy J.; Fan, Xiaoyao; Ji, Songbai; Roberts, David W.; Paulsen, Keith D.; Simon, David A.

    2014-03-01

    Dartmouth and Medtronic Navigation have established an academic-industrial partnership to develop, validate, and evaluate a multi-modality neurosurgical image-guidance platform for brain tumor resection surgery that is capable of updating the spatial relationships between preoperative images and the current surgical field. A stereovision system has been developed and optimized for intraoperative use through integration with a surgical microscope and an image-guided surgery system. The microscope optics and stereovision CCD sensors are localized relative to the surgical field using optical tracking and can efficiently acquire stereo image pairs from which a localized 3D profile of the exposed surface is reconstructed. This paper reports the first demonstration of intraoperative acquisition, reconstruction and visualization of 3D stereovision surface data in the context of an industry-standard image-guided surgery system. The integrated system is capable of computing and presenting a stereovision-based update of the exposed cortical surface in less than one minute. Alternative methods for visualization of high-resolution, texture-mapped stereovision surface data are also investigated with the objective of determining the technical feasibility of direct incorporation of intraoperative stereo imaging into future iterations of Medtronic's navigation platform.

  10. Beta, but not gamma, band oscillations index visual form-motion integration.

    Directory of Open Access Journals (Sweden)

    Charles Aissani

    Full Text Available Electrophysiological oscillations in different frequency bands co-occur with perceptual, motor and cognitive processes, but their function and respective contributions to these processes need further investigation. Here, we recorded MEG signals and looked for percept-related modulations of alpha, beta and gamma band activity during a perceptual form/motion integration task. Participants reported their bound or unbound perception of ambiguously moving displays that could either be seen as a whole square-like shape moving along a Lissajous figure (bound percept) or as pairs of bars oscillating independently along cardinal axes (unbound percept). We found that beta (15-25 Hz), but not gamma (55-85 Hz), oscillations index perceptual states at the individual and group level. The gamma band activity found in the occipital lobe, although significantly higher during visual stimulation than during baseline, is similar in all perceptual states. Similarly, decreased alpha activity during visual stimulation does not differ between the different percepts. Trial-by-trial classification of perceptual reports based on beta band oscillations was significant in most observers, further supporting the view that modulation of beta power reliably indexes perceptual integration of form/motion stimuli, even at the individual level.

  11. Semantics and the multisensory brain: how meaning modulates processes of audio-visual integration.

    Science.gov (United States)

    Doehrmann, Oliver; Naumer, Marcus J

    2008-11-25

    By using meaningful stimuli, multisensory research has recently started to investigate the impact of stimulus content on crossmodal integration. Variations in this respect have often been termed as "semantic". In this paper we will review work related to the question for which tasks the influence of semantic factors has been found and which cortical networks are most likely to mediate these effects. More specifically, the focus of this paper will be on processing of object stimuli presented in the auditory and visual sensory modalities. Furthermore, we will investigate which cortical regions are particularly responsive to experimental variations of content by comparing semantically matching ("congruent") and mismatching ("incongruent") experimental conditions. In this context, recent neuroimaging studies point toward a possible functional differentiation of temporal and frontal cortical regions, with the former being more responsive to semantically congruent and the latter to semantically incongruent audio-visual (AV) stimulation. To account for these differential effects, we will suggest in the final section of this paper a possible synthesis of these data on semantic modulation of AV integration with findings from neuroimaging studies and theoretical accounts of semantic memory.

  12. Perceptual stimulus-A Bayesian-based integration of multi-visual-cue approach and its application

    Institute of Scientific and Technical Information of China (English)

    XUE JianRu; ZHENG NanNing; ZHONG XiaoPin; PING LinJiang

    2008-01-01

    With the view that a visual cue can be taken as a kind of stimulus, the study of the mechanism of the visual perception process by using visual cues in their probabilistic representation eventually leads to a class of statistical integration of multiple visual cues (IMVC) methods, which have been applied widely in perceptual grouping, video analysis, and other basic problems in computer vision. In this paper, a survey of the basic ideas and recent advances of IMVC methods is presented, with much of the focus on the models and algorithms of IMVC for video analysis within the framework of Bayesian estimation. Furthermore, two typical problems in video analysis, robust visual tracking and the "switching problem" in multi-target tracking (MTT), are taken as test beds to verify a series of Bayesian-based IMVC methods proposed by the authors. Finally, the relations between statistical IMVC and the visual perception process, as well as potential future research work for IMVC, are discussed.
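
    The record above surveys Bayesian integration of multiple visual cues. As a toy illustration of the underlying idea (not the authors' algorithms), the sketch below fuses several Gaussian cue estimates by inverse-variance weighting, which is the Bayesian-optimal combination under a flat prior; the cue values and variances are invented.

```python
# Illustrative sketch: Bayesian fusion of independent Gaussian cue estimates.

def fuse_gaussian_cues(estimates, variances):
    """Return the fused mean and variance (inverse-variance weighted average)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    return mean, 1.0 / total

if __name__ == "__main__":
    # e.g. a target position suggested by a color cue, a motion cue, and an edge cue
    mean, var = fuse_gaussian_cues([10.2, 9.5, 10.8], [0.5, 2.0, 1.0])
    print(f"fused estimate = {mean:.2f}, fused variance = {var:.2f}")
```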

  13. Situational Awareness Applied to Geology Field Mapping using Integration of Semantic Data and Visualization Techniques

    Science.gov (United States)

    Houser, P. I. Q.

    2017-12-01

    21st century earth science is data-intensive, characterized by heterogeneous, sometimes voluminous collections representing phenomena at different scales collected for different purposes and managed in disparate ways. However, much of the earth's surface still requires boots-on-the-ground, in-person fieldwork in order to detect the subtle variations from which humans can infer complex structures and patterns. Nevertheless, field experiences can and should be enabled and enhanced by a variety of emerging technologies. The goal of the proposed research project is to pilot test emerging data integration, semantic and visualization technologies for evaluation of their potential usefulness in the field sciences, particularly in the context of field geology. The proposed project will investigate new techniques for data management and integration enabled by semantic web technologies, along with new techniques for augmented reality that can operate on such integrated data to enable in situ visualization in the field. The research objectives include: Develop new technical infrastructure that applies target technologies to field geology; Test, evaluate, and assess the technical infrastructure in a pilot field site; Evaluate the capabilities of the systems for supporting and augmenting field science; and Assess the generality of the system for implementation in new and different types of field sites. Our hypothesis is that these technologies will enable what we call "field science situational awareness" - a cognitive state formerly attained only through long experience in the field - that is highly desirable but difficult to achieve in time- and resource-limited settings. Expected outcomes include elucidation of how, and in what ways, these technologies are beneficial in the field; enumeration of the steps and requirements to implement these systems; and cost/benefit analyses that evaluate under what conditions the investments of time and resources are advisable to construct

  14. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Directory of Open Access Journals (Sweden)

    Jonathan M P Wilbiks

    Full Text Available Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  15. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Science.gov (United States)

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  16. Lack of multisensory integration in hemianopia: no influence of visual stimuli on aurally guided saccades to the blind hemifield.

    Directory of Open Access Journals (Sweden)

    Antonia F Ten Brink

    Full Text Available In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bi-modal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia.

  17. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    Science.gov (United States)

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bi-modal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  18. Effects of a Memory and Visual-Motor Integration Program for Older Adults Based on Self-Efficacy Theory.

    Science.gov (United States)

    Kim, Eun Hwi; Suh, Soon Rim

    2017-06-01

    This study was conducted to verify the effects of a memory and visual-motor integration program for older adults based on self-efficacy theory. A non-equivalent control group pretest-posttest design was implemented in this quasi-experimental study. The participants were 62 older adults from senior centers and older adult welfare facilities in D and G city (Experimental group=30, Control group=32). The experimental group took part in a 12-session memory and visual-motor integration program over 6 weeks. Data regarding memory self-efficacy, memory, visual-motor integration, and depression were collected from July to October of 2014 and analyzed with independent t-test and Mann-Whitney U test using PASW Statistics (SPSS) 18.0 to determine the effects of the interventions. Memory self-efficacy (t=2.20, p=.031), memory (Z=-2.92, p=.004), and visual-motor integration (Z=-2.49, p=.013) increased significantly in the experimental group as compared to the control group. However, depression (Z=-0.90, p=.367) did not decrease significantly. This program is effective for increasing memory, visual-motor integration, and memory self-efficacy in older adults. Therefore, it can be used to improve cognition and prevent dementia in older adults. © 2017 Korean Society of Nursing Science
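
    The analysis described above compares groups with an independent t-test and a Mann-Whitney U test. The snippet below is a hedged illustration of that comparison using SciPy on simulated scores; the means, spreads, and seed are assumptions chosen only to mirror the reported design (n = 30 vs. n = 32), not the study's data.

```python
# Hedged illustration: parametric and non-parametric two-group comparisons with SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
experimental = rng.normal(loc=75, scale=8, size=30)   # e.g. memory self-efficacy scores
control = rng.normal(loc=70, scale=8, size=32)

t_stat, t_p = stats.ttest_ind(experimental, control, equal_var=True)
u_stat, u_p = stats.mannwhitneyu(experimental, control, alternative="two-sided")
print(f"independent t-test: t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Mann-Whitney U:     U = {u_stat:.1f}, p = {u_p:.3f}")
```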

  19. Integrating Spherical Panoramas and Maps for Visualization of Cultural Heritage Objects Using Virtual Reality Technology.

    Science.gov (United States)

    Koeva, Mila; Luleva, Mila; Maldjanski, Plamen

    2017-04-11

    Development and virtual representation of 3D models of Cultural Heritage (CH) objects has triggered great interest over the past decade. The main reason for this is the rapid development in the fields of photogrammetry and remote sensing, laser scanning, and computer vision. The advantages of using 3D models for restoration, preservation, and documentation of valuable historical and architectural objects have been repeatedly demonstrated by scientists in the field. Moreover, 3D model visualization in virtual reality has been recognized as an efficient, fast, and easy way of representing a variety of objects worldwide for present-day users, who have stringent requirements and high expectations. However, the main focus of recent research is the visual, geometric, and textural characteristics of a single concrete object, while integration of large numbers of models with additional information, such as historical overview, detailed description, and location, is missing. Such integrated information can be beneficial, not only for tourism but also for accurate documentation. For that reason, we demonstrate in this paper an integration of high-resolution spherical panoramas, a variety of maps, GNSS, sound, video, and text information for representation of numerous cultural heritage objects. These are then displayed in a web-based portal with an intuitive interface. The users have the opportunity to choose freely from the provided information, and decide for themselves what is interesting to visit. Based on the created web application, we provide suggestions and guidelines for similar studies. We selected objects located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. The methods used in this research are applicable for any type of spherical or cylindrical images and can be easily followed and applied in various domains. After a visual and metric assessment of the panoramas and the evaluation of

  20. Effects of temporal integration on the shape of visual backward masking functions.

    Science.gov (United States)

    Francis, Gregory; Cho, Yang Seok

    2008-10-01

    Many studies of cognition and perception use a visual mask to explore the dynamics of information processing of a target. Especially important in these applications is the time between the target and mask stimuli. A plot of some measure of target visibility against stimulus onset asynchrony is called a masking function, which can sometimes be monotonic increasing but other times is U-shaped. Theories of backward masking have long hypothesized that temporal integration of the target and mask influences properties of masking but have not connected the influence of integration with the shape of the masking function. With two experiments that vary the spatial properties of the target and mask, the authors provide evidence that temporal integration of the stimuli plays a critical role in determining the shape of the masking function. The resulting data both challenge current theories of backward masking and indicate what changes to the theories are needed to account for the new data. The authors further discuss the implication of the findings for uses of backward masking to explore other aspects of cognition.

  1. The visual-landscape analysis during the integration of high-rise buildings within the historic urban environment

    OpenAIRE

    Akristiniy Vera A.; Dikova Elena A.

    2018-01-01

    The article is devoted to one of the types of urban planning studies - the visual-landscape analysis during the integration of high-rise buildings within the historic urban environment for the purposes of providing pre-design and design studies in terms of preserving the historical urban environment and the implementation of the reconstructional resource of the area. The article forms and systematizes the stages and methods of conducting the visual-landscape analysis, taking into account t...

  2. The visual air quality predicted by conventional and scanning teleradiometers and an integrating nephelometer

    Energy Technology Data Exchange (ETDEWEB)

    Malm, W [U.S. Environmental Protection Agency, Las Vegas, NV; Pitchford, A; Tree, R; Walther, E; Pearson, M; Archer, S

    1981-12-01

    Many Class I areas have unique vistas which require an observer to look over complex terrain containing basins, valleys, and canyons. These topographic features tend to form pollution "basins" and "corridors" that trap and funnel air pollutants under certain meteorological conditions. For example, on numerous days, layers of haze in the San Juan River Basin obscure various vista elements, including the Chuska Mountains as viewed from Mesa Verde National Park, CO. Measurements by an integrating nephelometer and conventional teleradiometer at one location in Mesa Verde do not quantify inhomogeneities. In this paper, data from these instruments are compared to data derived from scanning teleradiometer measurements of photographic slide images. The slides, surrogates of the real three-dimensional scene, were projected and scanned to determine relative sky and vista radiance at 40 points within a vertical slice of the vista. Comparison of the corresponding visual range data sets for each instrument for September and December 1979 demonstrates the utility of the scanning teleradiometer.

  3. Integrating sentiment analysis and term associations with geo-temporal visualizations on customer feedback streams

    Science.gov (United States)

    Hao, Ming; Rohrdantz, Christian; Janetzko, Halldór; Keim, Daniel; Dayal, Umeshwar; Haug, Lars-Erik; Hsu, Mei-Chun

    2012-01-01

    Twitter currently receives over 190 million tweets (small text-based Web posts) and manufacturing companies receive over 10 thousand web product surveys a day, in which people share their thoughts regarding a wide range of products and their features. A large number of tweets and customer surveys include opinions about products and services. However, with Twitter being a relatively new phenomenon, these tweets are underutilized as a source for determining customer sentiments. To explore high-volume customer feedback streams, we integrate three time series-based visual analysis techniques: (1) feature-based sentiment analysis that extracts, measures, and maps customer feedback; (2) a novel idea of term associations that identify attributes, verbs, and adjectives frequently occurring together; and (3) new pixel cell-based sentiment calendars, geo-temporal map visualizations and self-organizing maps to identify co-occurring and influential opinions. We have combined these techniques into a well-fitted solution for an effective analysis of large customer feedback streams such as for movie reviews (e.g., Kung-Fu Panda) or web surveys (buyers).
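
    One of the three techniques above, term association, identifies attributes and opinion words that frequently occur together in feedback. The sketch below is a deliberately simplified, hypothetical version of that idea (the attribute and opinion word lists are invented), counting co-occurrences within each feedback item.

```python
# Simplified sketch: counting attribute/opinion-word co-occurrences in a feedback stream.
from collections import Counter
from itertools import product

ATTRIBUTES = {"battery", "screen", "price"}        # invented example attributes
OPINION_WORDS = {"great", "poor", "cheap", "bright"}  # invented example opinion words

def term_associations(feedback_stream):
    pairs = Counter()
    for text in feedback_stream:
        tokens = set(text.lower().split())
        for attr, op in product(tokens & ATTRIBUTES, tokens & OPINION_WORDS):
            pairs[(attr, op)] += 1
    return pairs

if __name__ == "__main__":
    tweets = ["The screen is bright but the battery is poor",
              "Great price and great screen"]
    print(term_associations(tweets).most_common())
```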

  4. Defective chromatic and achromatic visual pathways in developmental dyslexia: Cues for an integrated intervention programme.

    Science.gov (United States)

    Bonfiglio, Luca; Bocci, Tommaso; Minichilli, Fabrizio; Crecchi, Alessandra; Barloscio, Davide; Spina, Donata Maria; Rossi, Bruno; Sartucci, Ferdinando

    2017-01-01

    As well as seeking confirmation of magnocellular system involvement in developmental dyslexia (DD), the aim was primarily to search for a possible involvement of the parvocellular system and, furthermore, to complete the assessment of the visual chromatic axis by also analysing the koniocellular system. Visual evoked potentials (VEPs) in response to achromatic stimuli with low luminance contrast and low spatial frequency, and isoluminant red/green and blue/yellow stimuli with high spatial frequency were recorded in 10 dyslexic children and 10 age- and sex-matched, healthy subjects. Dyslexic children showed delayed VEPs to both achromatic stimuli (magnocellular-dorsal stream) and isoluminant red/green and blue/yellow stimuli (parvocellular-ventral and koniocellular streams). To our knowledge, this is the first time that a dysfunction of colour vision has been brought to light in an objective way (i.e., by means of electrophysiological methods) in children with DD. These results suggest the need for an approach that promotes learning to read and/or improves the existing reading skills of children with, or at risk of, DD. The working hypothesis would be to combine two integrated interventions in a single programme aimed at fostering the function of both the magnocellular and the parvocellular streams.

  5. Sensory processing patterns predict the integration of information held in visual working memory.

    Science.gov (United States)

    Lowe, Matthew X; Stevenson, Ryan A; Wilson, Kristin E; Ouslis, Natasha E; Barense, Morgan D; Cant, Jonathan S; Ferber, Susanne

    2016-02-01

    Given the limited resources of visual working memory, multiple items may be remembered as an averaged group or ensemble. As a result, local information may be ill-defined, but these ensemble representations provide accurate diagnostics of the natural world by combining gist information with item-level information held in visual working memory. Some neurodevelopmental disorders are characterized by sensory processing profiles that predispose individuals to avoid or seek-out sensory stimulation, fundamentally altering their perceptual experience. Here, we report such processing styles will affect the computation of ensemble statistics in the general population. We identified stable adult sensory processing patterns to demonstrate that individuals with low sensory thresholds who show a greater proclivity to engage in active response strategies to prevent sensory overstimulation are less likely to integrate mean size information across a set of similar items and are therefore more likely to be biased away from the mean size representation of an ensemble display. We therefore propose the study of ensemble processing should extend beyond the statistics of the display, and should also consider the statistics of the observer. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Constituting fully integrated visual analysis system for Cu(II) on TiO₂/cellulose paper.

    Science.gov (United States)

    Li, Shun-Xing; Lin, Xiaofeng; Zheng, Feng-Ying; Liang, Wenjie; Zhong, Yanxue; Cai, Jiabai

    2014-07-15

    As a cheap and abundant porous material, cellulose filter paper was used to immobilize nano-TiO₂ and denoted as TiO₂/cellulose paper (TCP). With a high adsorption capacity for Cu(II) (more than 1.65 mg), TCP was used as an adsorbent, photocatalyst, and colorimetric sensor at the same time. Under the optimum adsorption conditions, i.e., pH 6.5 and 25 °C, the adsorption ratio of Cu(II) was higher than 96.1%. Humic substances from the matrix could be enriched onto TCP, but the interference of their colors with colorimetric detection could be eliminated by photodegradation. In the presence of hydroxylamine, neocuproine, as a selective indicator, was added onto TCP, and a visual color change from white to orange was generated. The concentration of Cu(II) was quantified from the color intensity images using image processing software. This fully integrated visual analysis system was successfully applied for the detection of Cu(II) in 10.0 L of drinking water and seawater with a preconcentration factor of 10⁴. The log-linear calibration curve for Cu(II) was in the range of 0.5-50.0 μg L⁻¹ with a determination coefficient (R²) of 0.985, and its detection limit was 0.073 μg L⁻¹.
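
    The quantification step above relies on a log-linear calibration curve relating color intensity to Cu(II) concentration. The sketch below shows, under assumed intensity readings (only the 0.5-50.0 μg L⁻¹ working range comes from the abstract), how such a curve could be fitted and inverted for an unknown sample.

```python
# Hedged sketch: fit intensity vs. log10(concentration), then invert for an unknown.
import numpy as np

conc_ug_per_L = np.array([0.5, 1.0, 5.0, 10.0, 25.0, 50.0])     # working range from abstract
intensity = np.array([12.0, 18.5, 33.0, 39.5, 47.0, 53.5])      # assumed example readings

slope, intercept = np.polyfit(np.log10(conc_ug_per_L), intensity, 1)

def concentration_from_intensity(i):
    """Invert the log-linear calibration: i = slope*log10(C) + intercept."""
    return 10 ** ((i - intercept) / slope)

if __name__ == "__main__":
    print(f"estimated Cu(II) for intensity 30: "
          f"{concentration_from_intensity(30.0):.2f} ug/L")
```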

  7. Development of an exergy-electrical analogy for visualizing and modeling building integrated energy systems

    International Nuclear Information System (INIS)

    Saloux, E.; Teyssedou, A.; Sorin, M.

    2015-01-01

    Highlights: • The exergy-electrical analogy is developed for energy systems used in buildings. • This analogy has been developed for a complete set of system arrangement options. • Different possibilities of inter-connections are illustrated using analog switches. • Adaptability and utility of the diagram over traditional ones are emphasized. - Abstract: An exergy-electrical analogy, similar to the heat transfer electrical one, is developed and applied to the case of integrated energy systems operating in buildings. Its construction is presented for the case of space heating with electric heaters, heat pumps and solar collectors. The proposed analogy has been applied to a set of system arrangement options for satisfying the building heating demand (space heating, domestic hot water); different alternatives for connecting the units are presented with switches in a visualization scheme. The analogy has been constructed for this situation, and a solar-assisted heat pump using ice storage has been investigated. This diagram directly permits energy paths and their associated exergy destruction to be visualized; hence, sources of irreversibility are identifiable. It can be helpful for the comprehension of the global process and its operation as well as for identifying exergy losses. The method used to construct the diagram makes it easily adaptable to other units, structures, or models depending on the complexity of the process. The use of switches could be very useful for optimization purposes

  8. Visual-Motor Integration in Children With Mild Intellectual Disability: A Meta-Analysis.

    Science.gov (United States)

    Memisevic, Haris; Djordjevic, Mirjana

    2018-01-01

    Visual-motor integration (VMI) skills, defined as the coordination of fine motor and visual perceptual abilities, are a very good indicator of a child's overall level of functioning. Research has clearly established that children with intellectual disability (ID) have deficits in VMI skills. This article presents a meta-analytic review of 10 research studies involving 652 children with mild ID for which a VMI skills assessment was also available. We measured the standardized mean difference (Hedges' g) between scores on VMI tests of these children with mild ID and either typically developing children's VMI test scores in these studies or normative mean values on VMI tests used by the studies. While mild ID is defined in part by intelligence scores that are two to three standard deviations below those of typically developing children, the standardized mean difference of VMI differences between typically developing children and children with mild ID in this meta-analysis was 1.75 (95% CI [1.11, 2.38]). Thus, the intellectual and adaptive skill deficits of children with mild ID may be greater (perhaps especially due to their abstract and conceptual reasoning deficits) than their relative VMI deficits. We discuss the possible meaning of this relative VMI strength among children with mild ID and suggest that their stronger VMI skills may be a target for intensive academic interventions as a means of attenuating problems in adaptive functioning.
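
    The meta-analysis above reports a standardized mean difference (Hedges' g) of 1.75. The sketch below shows the standard computation of Hedges' g, i.e. Cohen's d with a small-sample bias correction; the group means, SDs, and sizes are illustrative values, not data from the ten reviewed studies.

```python
# Worked sketch: Hedges' g for two independent groups (illustrative numbers only).
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd              # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)     # Hedges' small-sample correction
    return d * correction

if __name__ == "__main__":
    # e.g. typically developing children vs. children with mild ID on a VMI test
    print(f"g = {hedges_g(100, 15, 40, 75, 14, 40):.2f}")
```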

  9. Orientation is different: Interaction between contour integration and feature contrasts in visual search.

    Science.gov (United States)

    Jingling, Li; Tseng, Chia-Huei; Zhaoping, Li

    2013-09-10

    Salient items usually capture attention and are beneficial to visual search. Jingling and Tseng (2013), nevertheless, have discovered that a salient collinear column can impair local visual search. The display used in that study had 21 rows and 27 columns of bars, all uniformly horizontal (or vertical) except for one column of bars orthogonally oriented to all other bars, making this unique column of collinear (or noncollinear) bars salient in the display. Observers discriminated an oblique target bar superimposed on one of the bars either in the salient column or in the background. Interestingly, responses were slower for a target in a salient collinear column than in the background. This opens a theoretical question of how contour integration interacts with salience computation, which is addressed here by an examination of how salience modulated the search impairment from the collinear column. We show that the collinear column needs to have a high orientation contrast with its neighbors to exert search interference. A collinear column of high contrast in color or luminance did not produce the same impairment. Our results show that orientation-defined salience interacted with collinear contour differently from other feature dimensions, which is consistent with the neuronal properties in V1.

  10. Inner resources for survival: integrating interpersonal psychotherapy with spiritual visualization with homeless youth.

    Science.gov (United States)

    Mastropieri, Biagio; Schussel, Lorne; Forbes, David; Miller, Lisa

    2015-06-01

    Homeless youth have particular need to develop inner resources to confront the stress, abusive environment of street life, and the paucity of external resources. Research suggests that treatment supporting spiritual awareness and growth may create a foundation for coping, relationships, and negotiating styles to mitigate distress. The current pilot study tests the feasibility, acceptability, and helpfulness of an interpersonal spiritual group psychotherapy, interpersonal psychotherapy (IPT) integrated with spiritual visualization (SV), offered through a homeless shelter, toward improving interpersonal coping and ameliorating symptoms of depression, distress, and anxiety in homeless youth. An exploratory pilot of integrative group psychotherapy (IPT + SV) for homeless young adults was conducted in New York City on the residential floor of a shelter-based transitional living program. Thirteen young adult men (mean age 20.3 years, SD = 1.06) participated in a weekly evening psychotherapy group (55 % African-American, 18 % biracial, 18 % Hispanic, 9 % Caucasian). Measures of psychological functioning were assessed at pre-intervention and post-intervention using the General Health Questionnaire (GHQ-12), Patient Health Questionnaire (PHQ-9, GAD-7), and the Inventory of Interpersonal Problems (IIP-32). A semi-structured exit interview and a treatment satisfaction questionnaire were also employed to assess acceptability following treatment. Among homeless young adults to participate in the group treatment, significant decreases in symptoms of general distress and depression were found between baseline and termination of treatment, and at the level of a trend, improvement in overall interpersonal functioning and levels of general anxiety. High utilization and treatment satisfaction showed the intervention to be both feasible and acceptable. Offered as an adjunct to the services-as-usual model at homeless shelters serving young adults, interpersonal psychotherapy

  11. Distributed XQuery-Based Integration and Visualization of Multimodality Brain Mapping Data.

    Science.gov (United States)

    Detwiler, Landon T; Suciu, Dan; Franklin, Joshua D; Moore, Eider B; Poliakov, Andrew V; Lee, Eunjung S; Corina, David P; Ojemann, George A; Brinkley, James F

    2009-01-01

    This paper addresses the need for relatively small groups of collaborating investigators to integrate distributed and heterogeneous data about the brain. Although various national efforts facilitate large-scale data sharing, these approaches are generally too "heavyweight" for individual or small groups of investigators, with the result that most data sharing among collaborators continues to be ad hoc. Our approach to this problem is to create a "lightweight" distributed query architecture, in which data sources are accessible via web services that accept arbitrary query languages but return XML results. A Distributed XQuery Processor (DXQP) accepts distributed XQueries in which subqueries are shipped to the remote data sources to be executed, with the resulting XML integrated by DXQP. A web-based application called DXBrain accesses DXQP, allowing a user to create, save and execute distributed XQueries, and to view the results in various formats including a 3-D brain visualization. Example results are presented using distributed brain mapping data sources obtained in studies of language organization in the brain, but any other XML source could be included. The advantage of this approach is that it is very easy to add and query a new source, the tradeoff being that the user needs to understand XQuery and the schemata of the underlying sources. For small numbers of known sources this burden is not onerous for a knowledgeable user, leading to the conclusion that the system helps to fill the gap between ad hoc local methods and large scale but complex national data sharing efforts.
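
    The architecture described above ships subqueries to remote web services and integrates the XML fragments they return. The sketch below is a hypothetical illustration of that fan-out-and-merge pattern in Python, not DXQP itself; the endpoint URLs and subquery strings are placeholders.

```python
# Hypothetical sketch: POST a subquery to each remote source, merge the XML fragments.
import urllib.request
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

SOURCES = {
    "http://sourceA.example.org/query": "<subquery>...</subquery>",  # placeholder endpoint
    "http://sourceB.example.org/query": "<subquery>...</subquery>",  # placeholder endpoint
}

def run_subquery(url, query):
    req = urllib.request.Request(url, data=query.encode("utf-8"),
                                 headers={"Content-Type": "application/xml"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return ET.fromstring(resp.read())

def integrate(sources):
    # Run the subqueries in parallel and wrap the returned fragments in one document.
    root = ET.Element("integrated-results")
    with ThreadPoolExecutor() as pool:
        for fragment in pool.map(lambda kv: run_subquery(*kv), sources.items()):
            root.append(fragment)
    return ET.tostring(root, encoding="unicode")
```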

  12. Brain activity patterns uniquely supporting visual feature integration after traumatic brain injury

    Directory of Open Access Journals (Sweden)

    Anjali eRaja Beharelle

    2011-12-01

    Full Text Available Traumatic brain injury (TBI) patients typically respond more slowly and with more variability than controls during tasks of attention requiring speeded reaction time. These behavioral changes are attributable, at least in part, to diffuse axonal injury (DAI), which affects integrated processing in distributed systems. Here we use a multivariate method sensitive to distributed neural activity to compare brain activity patterns of patients with chronic phase moderate-to-severe TBI to those of controls during performance on a visual feature-integration task assessing complex attentional processes that has previously shown sensitivity to TBI. The TBI patients were carefully screened to be free of large focal lesions that can affect performance and brain activation independently of DAI. The task required subjects to hold either one or three features of a target in mind while suppressing responses to distracting information. In controls, the multi-feature condition activated a distributed network including limbic, prefrontal, and medial temporal structures. TBI patients engaged this same network in the single-feature and baseline conditions. In multi-feature presentations, TBI patients alone activated additional frontal, parietal, and occipital regions. These results are consistent with neuroimaging studies using tasks assessing different cognitive domains, where increased spread of brain activity changes was associated with TBI. Our results also extend previous findings that brain activity for relatively moderate task demands in TBI patients is similar to that associated with high task demands in controls.

  13. Lack of color integration in visual short-term memory binding.

    Science.gov (United States)

    Parra, Mario A; Cubelli, Roberto; Della Sala, Sergio

    2011-10-01

    Bicolored objects are retained in visual short-term memory (VSTM) less efficiently than unicolored objects. This is unlike shape-color combinations, whose retention in VSTM does not differ from that observed for shapes only. It is debated whether this is due to a lack of color integration and whether this may reflect the function of separate memory mechanisms. Participants judged whether the colors of bicolored objects (each with an external and an internal color) were the same or different across two consecutive screens. Colors had to be remembered either individually or in combination. In Experiment 1, external colors in the combined colors condition were remembered better than the internal colors, and performance for both was worse than that in the individual colors condition. The lack of color integration observed in Experiment 1 was further supported by a reduced capacity of VSTM to retain color combinations, relative to individual colors (Experiment 2). An additional account was found in Experiment 3, which showed spared color-color binding in the presence of impaired shape-color binding in a brain-damaged patient, thus suggesting that these two memory mechanisms are different.

  14. Object-based attention benefits reveal selective abnormalities of visual integration in autism.

    Science.gov (United States)

    Falter, Christine M; Grant, Kate C Plaisted; Davis, Greg

    2010-06-01

    A pervasive integration deficit could provide a powerful and elegant account of cognitive processing in autism spectrum disorders (ASD). However, in the case of visual Gestalt grouping, typically assessed by tasks that require participants explicitly to introspect on their own grouping perception, clear evidence for such a deficit remains elusive. To resolve this issue, we adopt an index of Gestalt grouping from the object-based attention literature that does not require participants to assess their own grouping perception. Children with ASD and mental- and chronological-age matched typically developing children (TD) performed speeded orientation discriminations of two diagonal lines. The lines were superimposed on circles that were either grouped together or segmented on the basis of color, proximity or these two dimensions in competition. The magnitude of performance benefits evident for grouped circles, relative to ungrouped circles, provided an index of grouping under various conditions. Children with ASD showed comparable grouping by proximity to the TD group, but reduced grouping by similarity. ASD seems characterized by a selective bias away from grouping by similarity combined with typical levels of grouping by proximity, rather than by a pervasive integration deficit.

  15. PathText: a text mining integrator for biological pathway visualizations

    Science.gov (United States)

    Kemper, Brian; Matsuzaki, Takuya; Matsuoka, Yukiko; Tsuruoka, Yoshimasa; Kitano, Hiroaki; Ananiadou, Sophia; Tsujii, Jun'ichi

    2010-01-01

    Motivation: Metabolic and signaling pathways are an increasingly important part of organizing knowledge in systems biology. They serve to integrate collective interpretations of facts scattered throughout literature. Biologists construct a pathway by reading a large number of articles and interpreting them as a consistent network, but most of the models constructed currently lack direct links to those articles. Biologists who want to check the original articles have to spend substantial amounts of time to collect relevant articles and identify the sections relevant to the pathway. Furthermore, with the scientific literature expanding by several thousand papers per week, keeping a model relevant requires a continuous curation effort. In this article, we present a system designed to integrate a pathway visualizer, text mining systems and annotation tools into a seamless environment. This will enable biologists to freely move between parts of a pathway and relevant sections of articles, as well as identify relevant papers from large text bases. The system, PathText, is developed by Systems Biology Institute, Okinawa Institute of Science and Technology, National Centre for Text Mining (University of Manchester) and the University of Tokyo, and is being used by groups of biologists from these locations. Contact: brian@monrovian.com. PMID:20529930

  16. IVAG: An Integrative Visualization Application for Various Types of Genomic Data Based on R-Shiny and the Docker Platform.

    Science.gov (United States)

    Lee, Tae-Rim; Ahn, Jin Mo; Kim, Gyuhee; Kim, Sangsoo

    2017-12-01

    Next-generation sequencing (NGS) technology has become a trend in the genomics research area. There are many software programs and automated pipelines to analyze NGS data, which can ease the pain for traditional scientists who are not familiar with computer programming. However, downstream analyses, such as finding differentially expressed genes or visualizing linkage disequilibrium maps and genome-wide association study (GWAS) data, still remain a challenge. Here, we introduce a dockerized web application written in R using the Shiny platform to visualize pre-analyzed RNA sequencing and GWAS data. In addition, we have integrated a genome browser based on the JBrowse platform and an automated intermediate parsing process required for custom track construction, so that users can easily build and navigate their personal genome tracks with in-house datasets. This application will help scientists perform series of downstream analyses and obtain a more integrative understanding about various types of genomic data by interactively visualizing them with customizable options.

  17. ICT integration in mathematics initial teacher training and its impact on visualization: the case of GeoGebra

    Science.gov (United States)

    Dockendorff, Monika; Solar, Horacio

    2018-01-01

    This case study investigates the impact of the integration of information and communications technology (ICT) in mathematics visualization skills and initial teacher education programmes. It reports on the influence GeoGebra dynamic software use has on promoting mathematical learning at secondary school and on its impact on teachers' conceptions about teaching and learning mathematics. This paper describes how GeoGebra-based dynamic applets - designed and used in an exploratory manner - promote mathematical processes such as conjectures. It also refers to the changes prospective teachers experience regarding the relevance visual dynamic representations acquire in teaching mathematics. This study observes a shift in school routines when incorporating technology into the mathematics classroom. Visualization appears as a basic competence associated to key mathematical processes. Implications of an early integration of ICT in mathematics initial teacher training and its impact on developing technological pedagogical content knowledge (TPCK) are drawn.

  18. Integration of Audio Visual Multimedia for Special Education Pre-Service Teachers' Self Reflections in Developing Teaching Competencies

    Science.gov (United States)

    Sediyani, Tri; Yufiarti; Hadi, Eko

    2017-01-01

    This study aims to develop a model of learning by integrating multimedia and audio-visual self-reflective learners. This multimedia was developed as a tool for prospective teachers as learners in the education of children with special needs to reflect on their teaching competencies before entering the world of education. Research methods to…

  19. Effects of Audio-Visual Integration on the Detection of Masked Speech and Non-Speech Sounds

    Science.gov (United States)

    Eramudugolla, Ranmalee; Henderson, Rachel; Mattingley, Jason B.

    2011-01-01

    Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that…

  20. Pedagogy and Quality in Indian Slum School Settings: A Bernsteinian Analysis of Visual Representations in the Integrated Child Development Service

    Science.gov (United States)

    Chawla-Duggan, Rita

    2016-01-01

    This paper focuses upon the micro level of the pre-school classroom, taking the example of the Indian Integrated Child Development Service (ICDS), and the discourse of "child-centred" pedagogy that is often associated with quality pre-schooling. Through an analysis of visual data, semi-structured and film elicitation interviews drawn…

  1. ICT Integration in Mathematics Initial Teacher Training and Its Impact on Visualization: The Case of GeoGebra

    Science.gov (United States)

    Dockendorff, Monika; Solar, Horacio

    2018-01-01

    This case study investigates the impact of the integration of information and communications technology (ICT) in mathematics visualization skills and initial teacher education programmes. It reports on the influence GeoGebra dynamic software use has on promoting mathematical learning at secondary school and on its impact on teachers' conceptions…

  2. Knowledge and Perceptions of Visual Communications Curriculum in Arkansas Secondary Agricultural Classrooms: A Closer Look at Experiential Learning Integrations

    Science.gov (United States)

    Pennington, Kristin; Calico, Carley; Edgar, Leslie D.; Edgar, Don W.; Johnson, Donald M.

    2015-01-01

    The University of Arkansas developed and integrated visual communications curriculum related to agricultural communications into secondary agricultural programs throughout the state. The curriculum was developed, pilot tested, revised, and implemented by selected secondary agriculture teachers. The primary purpose of this study was to evaluate…

  3. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    Science.gov (United States)

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.

  4. Dynamic stereoscopic selective visual attention (dssva): integrating motion and shape with depth in video segmentation

    OpenAIRE

    López Bonal, María Teresa; Fernández Caballero, Antonio; Saiz Valverde, Sergio

    2008-01-01

    Depth inclusion as an important parameter for dynamic selective visual attention is presented in this article. The model introduced in this paper is based on two previously developed models, dynamic selective visual attention and visual stereoscopy, giving rise to the so-called dynamic stereoscopic selective visual attention method. The three models are based on the accumulative computation problem-solving method. This paper shows how software reusability enables enhancing results in vision r...

  5. Distributed XQuery-based integration and visualization of multimodality brain mapping data

    Directory of Open Access Journals (Sweden)

    Landon T Detwiler

    2009-01-01

    Full Text Available This paper addresses the need for relatively small groups of collaborating investigators to integrate distributed and heterogeneous data about the brain. Although various national efforts facilitate large-scale data sharing, these approaches are generally too “heavyweight” for individual or small groups of investigators, with the result that most data sharing among collaborators continues to be ad hoc. Our approach to this problem is to create a “lightweight” distributed query architecture, in which data sources are accessible via web services that accept arbitrary query languages but return XML results. A Distributed XQuery Processor (DXQP) accepts distributed XQueries in which subqueries are shipped to the remote data sources to be executed, with the resulting XML integrated by DXQP. A web-based application called DXBrain accesses DXQP, allowing a user to create, save and execute distributed XQueries, and to view the results in various formats including a 3-D brain visualization. Example results are presented using distributed brain mapping data sources obtained in studies of language organization in the brain, but any other XML source could be included. The advantage of this approach is that it is very easy to add and query a new source, the tradeoff being that the user needs to understand XQuery and the schemata of the underlying sources. For small numbers of known sources this burden is not onerous for a knowledgeable user, leading to the conclusion that the system helps to fill the gap between ad hoc local methods and large scale but complex national data sharing efforts.

  6. Research on fine management and visualization of ancient architectures based on integration of 2D and 3D GIS technology

    International Nuclear Information System (INIS)

    Jun, Yan; Shaohua, Wang; Jiayuan, Li; Qingwu, Hu

    2014-01-01

    To address ancient architecture data, which are characterized by huge volume, fine granularity, and high precision, a 3D fine management and visualization method for ancient architectures based on the integration of 2D and 3D GIS is proposed. Firstly, after analysing the various data types and characteristics of digital ancient architectures, the main problems and key technologies in 2D and 3D data management are discussed. Secondly, a data storage and indexing model of digital ancient architecture based on 2D and 3D GIS integration is designed, achieving integrative storage and management of 2D and 3D data. Then, through a data retrieval method based on space-time indexing and a hierarchical object model of ancient architecture, 2D and 3D interaction with fine-grained 3D models of ancient architectures is achieved. Finally, taking the fine-grained database of Liangyi Temple on Wudang Mountain as an example, a prototype for the fine management and visualization of integrated 2D and 3D digital ancient buildings of Liangyi Temple was built. Integrated management and visual analysis of a 10 GB fine-grained model of the ancient architecture was realized, providing a new implementation method for the storage, browsing, reconstruction, and architectural-art study of ancient architecture models

  7. Multisensory integration of speech sounds with letters vs. visual speech: only visual speech induces the mismatch negativity

    NARCIS (Netherlands)

    Stekelenburg, J.J.; Keetels, M.N.; Vroomen, J.H.M.

    2018-01-01

    Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect.

  8. Broad-based visual benefits from training with an integrated perceptual-learning video game.

    Science.gov (United States)

    Deveau, Jenni; Lovcik, Gary; Seitz, Aaron R

    2014-06-01

    Perception is the window through which we understand all information about our environment, and therefore deficits in perception due to disease, injury, stroke or aging can have significant negative impacts on individuals' lives. Research in the field of perceptual learning has demonstrated that vision can be improved in both normally seeing and visually impaired individuals; however, a limitation of most perceptual learning approaches is their emphasis on isolating particular mechanisms. In the current study, we adopted an integrative approach where the goal is not to achieve highly specific learning but instead to achieve general improvements to vision. We combined multiple perceptual learning approaches that have individually contributed to increasing the speed, magnitude and generality of learning into a perceptual-learning based video-game. Our results demonstrate broad-based benefits of vision in a healthy adult population. Transfer from the game includes improvements in acuity (measured with self-paced standard eye-charts), improvement along the full contrast sensitivity function, and improvements in peripheral acuity and contrast thresholds. The use of this type of custom video game framework built up from psychophysical approaches takes advantage of the benefits found from video game training while maintaining a tight link to psychophysical designs that enable understanding of the mechanisms of perceptual learning, and it has great potential both as a scientific tool and as a therapy to help improve vision. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. A Fuzzy Integral Ensemble Method in Visual P300 Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Francesco Cavrini

    2016-01-01

    Full Text Available We evaluate the possibility of applying combinations of classifiers using fuzzy measures and integrals to Brain-Computer Interfaces (BCI) based on electroencephalography. In particular, we present an ensemble method that can be applied to a variety of systems and evaluate it in the context of a visual P300-based BCI. Offline analysis of data from 5 subjects lets us argue that the proposed classification strategy is suitable for BCI. Indeed, the achieved performance is significantly greater than the average of the base classifiers and, broadly speaking, similar to that of the best one. Thus the proposed methodology allows realizing systems that can be used by different subjects without the need for a preliminary configuration phase in which the best classifier for each user has to be identified. Moreover, the ensemble is often capable of detecting uncertain situations and turning them from misclassifications into abstentions, thereby improving the level of safety in BCI for environmental or device control.
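
    The ensemble above combines classifiers with fuzzy measures and integrals. As a hedged illustration of the core operation (not the paper's trained system), the sketch below evaluates a Choquet integral of per-classifier confidence scores with respect to a fuzzy measure defined on classifier subsets; the scores and measure values are invented, whereas in practice the measure would be learned from data.

```python
# Illustrative sketch: Choquet integral of classifier confidences over a fuzzy measure.

def choquet_integral(scores, measure):
    """scores: {classifier: confidence in [0, 1]}
    measure: {frozenset of classifiers: value}, with measure[frozenset()] == 0
             and measure over the full set == 1."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    total, prev_g = 0.0, 0.0
    for i, name in enumerate(ordered):
        g = measure[frozenset(ordered[: i + 1])]
        total += scores[name] * (g - prev_g)   # weight by the measure increment
        prev_g = g
    return total

if __name__ == "__main__":
    scores = {"lda": 0.9, "svm": 0.6, "knn": 0.3}          # invented confidences
    measure = {frozenset(): 0.0,
               frozenset({"lda"}): 0.5, frozenset({"svm"}): 0.4,
               frozenset({"knn"}): 0.2,
               frozenset({"lda", "svm"}): 0.8, frozenset({"lda", "knn"}): 0.6,
               frozenset({"svm", "knn"}): 0.5,
               frozenset({"lda", "svm", "knn"}): 1.0}
    print(f"ensemble support: {choquet_integral(scores, measure):.2f}")
```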

  10. LocusTrack: Integrated visualization of GWAS results and genomic annotation.

    Science.gov (United States)

    Cuellar-Partida, Gabriel; Renteria, Miguel E; MacGregor, Stuart

    2015-01-01

    Genome-wide association studies (GWAS) are an important tool for the mapping of complex traits and diseases. Visual inspection of genomic annotations may be used to generate insights into the biological mechanisms underlying GWAS-identified loci. We developed LocusTrack, a web-based application that annotates and creates plots of regional GWAS results and incorporates user-specified tracks that display annotations such as linkage disequilibrium (LD), phylogenetic conservation, chromatin state, and other genomic and regulatory elements. Currently, LocusTrack can integrate annotation tracks from the UCSC genome-browser as well as from any tracks provided by the user. LocusTrack is an easy-to-use application and can be accessed at the following URL: http://gump.qimr.edu.au/general/gabrieC/LocusTrack/. Users can upload and manage GWAS results and select from and/or provide annotation tracks using simple and intuitive menus. LocusTrack scripts and associated data can be downloaded from the website and run locally.
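
    LocusTrack renders regional GWAS plots with annotation tracks. The sketch below is not LocusTrack's code; it draws the simplest version of such a plot, -log10(p) against position with points shaded by LD to an assumed lead variant, using entirely simulated data.

```python
# Minimal sketch: a regional association plot from simulated SNP data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
pos = np.sort(rng.uniform(1.0e6, 1.4e6, 300))          # base-pair positions
r2 = np.clip(1 - np.abs(pos - 1.2e6) / 2.0e5, 0, 1)    # fake LD (r^2) with a lead SNP
neg_log_p = rng.exponential(0.5, pos.size) + 8 * r2    # fake association signal

plt.scatter(pos / 1e6, neg_log_p, c=r2, cmap="viridis", s=12)
plt.colorbar(label=r"$r^2$ with lead SNP")
plt.xlabel("Position on chromosome (Mb)")
plt.ylabel(r"$-\log_{10}(p)$")
plt.title("Regional association plot (simulated)")
plt.savefig("locus_plot.png", dpi=150)
```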

  11. The beneficial attributes of visual art-making in cancer care: An integrative review.

    Science.gov (United States)

    Ennis, G; Kirshbaum, M; Waheed, N

    2018-01-01

    We seek to understand what is known about the use of visual art-making for people who have a cancer diagnosis, and to explore how art-making may help address fatigue in the cancer care context. Art-making involves creating art or craft alone or in a group and does not require an art-therapist as the emphasis is on creativity rather than an overt therapeutic intention. An integrative review was undertaken of qualitative, quantitative and mixed-method studies on art-making for people who have cancer, at any stage of treatment or recovery. An adapted version of Kaplan's Attention Restoration Theory (ART) was used to interpret the themes found in the literature. Fifteen studies were reviewed. Nine concerned art-making programmes and six were focused on individual, non-facilitated art-making. Review results suggested that programme-based art-making may provide participants with opportunities for learning about self, support, enjoyment and distraction. Individual art-making can provide learning about self, diversion and pleasure, self-management of pain, a sense of control, and enhanced social relationships. When viewed through the lens of ART, art-making can be understood as an energy-restoring activity that has the potential to enhance the lives of people with a diagnosis of cancer. © 2017 John Wiley & Sons Ltd.

  12. Contribution of Prosody in Audio-Visual Integration to Emotional Perception of Virtual Characters

    Directory of Open Access Journals (Sweden)

    Ekaterina Volkova

    2011-10-01

    Full Text Available Recent technology provides us with realistic looking virtual characters. Motion capture and elaborate mathematical models supply data for natural looking, controllable facial and bodily animations. With the help of computational linguistics and artificial intelligence, we can automatically assign emotional categories to appropriate stretches of text for a simulation of those social scenarios where verbal communication is important. All this makes virtual characters a valuable tool for creation of versatile stimuli for research on the integration of emotion information from different modalities. We conducted an audio-visual experiment to investigate the differential contributions of emotional speech and facial expressions on emotion identification. We used recorded and synthesized speech as well as dynamic virtual faces, all enhanced for seven emotional categories. The participants were asked to recognize the prevalent emotion of paired faces and audio. Results showed that when the voice was recorded, the vocalized emotion influenced participants' emotion identification more than the facial expression. However, when the voice was synthesized, facial expression influenced participants' emotion identification more than vocalized emotion. Additionally, individuals did worse on identifying either the facial expression or vocalized emotion when the voice was synthesized. Our experimental method can help to determine how to improve synthesized emotional speech.

  13. Integration of Distinct Objects in Visual Working Memory Depends on Strong Objecthood Cues Even for Different-Dimension Conjunctions.

    Science.gov (United States)

    Balaban, Halely; Luria, Roy

    2016-05-01

    What makes an integrated object in visual working memory (WM)? Past evidence suggested that WM holds all features of multidimensional objects together, but struggles to integrate color-color conjunctions. This difficulty was previously attributed to a challenge in same-dimension integration, but here we argue that it arises from the integration of 2 distinct objects. To test this, we examined the integration of distinct different-dimension features (a colored square and a tilted bar). We monitored the contralateral delay activity, an event-related potential component sensitive to the number of objects in WM. The results indicated that color and orientation belonging to distinct objects in a shared location were not integrated in WM (Experiment 1), even following a common fate Gestalt cue (Experiment 2). These conjunctions were better integrated in a less demanding task (Experiment 3), and in the original WM task, but with a less individuating version of the original stimuli (Experiment 4). Our results identify the critical factor in WM integration at same- versus separate-objects, rather than at same- versus different-dimensions. Compared with the perfect integration of an object's features, the integration of several objects is demanding, and depends on an interaction between the grouping cues and task demands, among other factors. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. ToxPi Graphical User Interface 2.0: Dynamic exploration, visualization, and sharing of integrated data models.

    Science.gov (United States)

    Marvel, Skylar W; To, Kimberly; Grimm, Fabian A; Wright, Fred A; Rusyn, Ivan; Reif, David M

    2018-03-05

    Drawing integrated conclusions from diverse source data requires synthesis across multiple types of information. The ToxPi (Toxicological Prioritization Index) is an analytical framework that was developed to enable integration of multiple sources of evidence by transforming data into integrated, visual profiles. Methodological improvements have advanced ToxPi and expanded its applicability, necessitating a new, consolidated software platform to provide functionality, while preserving flexibility for future updates. We detail the implementation of a new graphical user interface for ToxPi (Toxicological Prioritization Index) that provides interactive visualization, analysis, reporting, and portability. The interface is deployed as a stand-alone, platform-independent Java application, with a modular design to accommodate inclusion of future analytics. The new ToxPi interface introduces several features, from flexible data import formats (including legacy formats that permit backward compatibility) to similarity-based clustering to options for high-resolution graphical output. We present the new ToxPi interface for dynamic exploration, visualization, and sharing of integrated data models. The ToxPi interface is freely-available as a single compressed download that includes the main Java executable, all libraries, example data files, and a complete user manual from http://toxpi.org .
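
    The underlying idea of turning several evidence sources into one integrated profile can be illustrated with a simple weighted-slice score. This sketch only loosely mirrors ToxPi-style scoring (per-slice min-max scaling followed by a weighted sum, assuming larger raw values indicate greater concern); the slice names, weights, and values are placeholders:

```python
import numpy as np

def toxpi_like_scores(data, weights):
    """data   : dict slice_name -> list of raw values, one per chemical
       weights: dict slice_name -> non-negative slice weight
       Returns one integrated score in [0, 1] per chemical (larger = higher priority,
       assuming larger raw values indicate greater concern)."""
    total_w = sum(weights.values())
    score = None
    for name, values in data.items():
        v = np.asarray(values, dtype=float)
        lo, hi = v.min(), v.max()
        scaled = (v - lo) / (hi - lo) if hi > lo else np.zeros_like(v)  # min-max per slice
        contrib = (weights[name] / total_w) * scaled
        score = contrib if score is None else score + contrib
    return score

# Placeholder evidence slices for three chemicals.
data = {
    "in_vitro_potency": [0.2, 5.0, 1.1],
    "exposure_estimate": [10.0, 2.0, 7.5],
    "structural_alerts": [1, 3, 0],
}
weights = {"in_vitro_potency": 2.0, "exposure_estimate": 1.0, "structural_alerts": 1.0}

print(toxpi_like_scores(data, weights))
```

    In the ToxPi figures themselves, each slice is typically drawn as a wedge whose radius reflects the scaled value and whose angular width reflects its weight; the score above is simply the quantity such a profile aggregates.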

  15. Introducing the VISAGE project - Visualization for Integrated Satellite, Airborne, and Ground-based data Exploration

    Science.gov (United States)

    Gatlin, P. N.; Conover, H.; Berendes, T.; Maskey, M.; Naeger, A. R.; Wingo, S. M.

    2017-12-01

    A key component of NASA's Earth observation system is its field experiments, for intensive observation of particular weather phenomena, or for ground validation of satellite observations. These experiments collect data from a wide variety of airborne and ground-based instruments, on different spatial and temporal scales, often in unique formats. The field data are often used with high volume satellite observations that have very different spatial and temporal coverage. The challenges inherent in working with such diverse datasets make it difficult for scientists to rapidly collect and analyze the data for physical process studies and validation of satellite algorithms. The newly-funded VISAGE project will address these issues by combining and extending nascent efforts to provide on-line data fusion, exploration, analysis and delivery capabilities. A key building block is the Field Campaign Explorer (FCX), which allows users to examine data collected during field campaigns and simplifies data acquisition for event-based research. VISAGE will extend FCX's capabilities beyond interactive visualization and exploration of coincident datasets, to provide interrogation of data values and basic analyses such as ratios and differences between data fields. The project will also incorporate new, higher level fused and aggregated analysis products from the System for Integrating Multi-platform data to Build the Atmospheric column (SIMBA), which combines satellite and ground-based observations into a common gridded atmospheric column data product; and the Validation Network (VN), which compiles a nationwide database of coincident ground- and satellite-based radar measurements of precipitation for larger scale scientific analysis. The VISAGE proof-of-concept will target "golden cases" from Global Precipitation Measurement Ground Validation campaigns. This presentation will introduce the VISAGE project, initial accomplishments and near term plans.
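
    The basic interrogation VISAGE aims to support, such as ratios and differences between co-located data fields, reduces to simple array arithmetic once the datasets have been regridded to a common grid. A minimal sketch under that assumption (the field names and values are placeholders, not VISAGE data):

```python
import numpy as np

# Two placeholder fields already regridded to the same 2-D grid, e.g. a
# satellite-estimated and a ground-radar rain rate (mm/h).
satellite_rr = np.array([[1.0, 2.5, 0.0], [4.0, 0.5, 3.0]])
ground_rr    = np.array([[1.2, 2.0, 0.1], [3.5, 0.0, 2.8]])

difference = satellite_rr - ground_rr

# Mask near-zero denominators so the ratio stays meaningful.
ratio = np.where(ground_rr > 0.1, satellite_rr / np.maximum(ground_rr, 1e-6), np.nan)

print("difference:\n", difference)
print("ratio:\n", ratio)
```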

  16. Visual-motor integration performance in children with severe specific language impairment.

    Science.gov (United States)

    Nicola, K; Watter, P

    2016-09-01

    This study investigated (1) the visual-motor integration (VMI) performance of children with severe specific language impairment (SLI), and any effect of age, gender, socio-economic status and concomitant speech impairment; and (2) the relationship between language and VMI performance. It is hypothesized that children with severe SLI would present with VMI problems irrespective of gender and socio-economic status; however, VMI deficits will be more pronounced in younger children and those with concomitant speech impairment. Furthermore, it is hypothesized that there will be a relationship between VMI and language performance, particularly in receptive scores. Children enrolled between 2000 and 2008 in a school dedicated to children with severe speech-language impairments were included, if they met the criteria for severe SLI with or without concomitant speech impairment which was verified by a government organization. Results from all initial standardized language and VMI assessments found during a retrospective review of chart files were included. The final study group included 100 children (males = 76), from 4 to 14 years of age with mean language scores at least 2SD below the mean. For VMI performance, 52% of the children scored below -1SD, with 25% of the total group scoring more than 1.5SD below the mean. Age, gender and the addition of a speech impairment did not impact on VMI performance; however, children living in disadvantaged suburbs scored significantly better than children residing in advantaged suburbs. Receptive language scores of the Clinical Evaluation of Language Fundamentals was the only score associated with and able to predict VMI performance. A small subgroup of children with severe SLI will also have poor VMI skills. The best predictor of poor VMI is receptive language scores on the Clinical Evaluation of Language Fundamentals. Children with poor receptive language performance may benefit from VMI assessment and multidisciplinary

  17. Fuels planning: science synthesis and integration; social issues fact sheet 16: Prescribed fire and visual quality

    Science.gov (United States)

    Christine Esposito

    2006-01-01

    Research shows that, while prescribed burning and other fuels treatments can lower visual quality in some situations, they can also improve it in others. This fact sheet reviews the visual aspects of different levels of prescribed burning. Other publications in this series...

  18. The visual-landscape analysis during the integration of high-rise buildings within the historic urban environment

    Science.gov (United States)

    Akristiniy, Vera A.; Dikova, Elena A.

    2018-03-01

    The article is devoted to one of the types of urban planning studies - the visual-landscape analysis performed when integrating high-rise buildings into the historic urban environment - for the purposes of pre-design and design studies aimed at preserving the historical urban environment and realizing the reconstructional resource of the area. The article forms and systematizes the stages and methods of conducting the visual-landscape analysis, taking into account the influence of high-rise buildings on objects of cultural heritage and valuable historical buildings of the city. Practical application of the visual-landscape analysis provides an opportunity to assess how the hypothetical location of high-rise buildings affects the perception of the historically developed environment and to derive optimal building parameters. The contents of the main stages of the visual-landscape analysis and their key aspects are revealed, concerning the construction of predicted zones of visibility for significant, historically valuable urban development objects and for the hypothetically planned high-rise buildings. The obtained data are oriented toward the successive development of the planning and typological structure of the city territory and the preservation of the compositional influence of valuable fragments of the historical environment within the urban landscape. On this basis, an information database is formed to determine the permissible urban development parameters of high-rise buildings so as to preserve the compositional integrity of the urban area.

  19. The visual-landscape analysis during the integration of high-rise buildings within the historic urban environment

    Directory of Open Access Journals (Sweden)

    Akristiniy Vera A.

    2018-01-01

    Full Text Available The article is devoted to one of the types of urban planning studies - the visual-landscape analysis performed when integrating high-rise buildings into the historic urban environment - for the purposes of pre-design and design studies aimed at preserving the historical urban environment and realizing the reconstructional resource of the area. The article forms and systematizes the stages and methods of conducting the visual-landscape analysis, taking into account the influence of high-rise buildings on objects of cultural heritage and valuable historical buildings of the city. Practical application of the visual-landscape analysis provides an opportunity to assess how the hypothetical location of high-rise buildings affects the perception of the historically developed environment and to derive optimal building parameters. The contents of the main stages of the visual-landscape analysis and their key aspects are revealed, concerning the construction of predicted zones of visibility for significant, historically valuable urban development objects and for the hypothetically planned high-rise buildings. The obtained data are oriented toward the successive development of the planning and typological structure of the city territory and the preservation of the compositional influence of valuable fragments of the historical environment within the urban landscape. On this basis, an information database is formed to determine the permissible urban development parameters of high-rise buildings so as to preserve the compositional integrity of the urban area.

  20. Visual and kinesthetic locomotor imagery training integrated with auditory step rhythm for walking performance of patients with chronic stroke.

    Science.gov (United States)

    Kim, Jin-Seop; Oh, Duck-Won; Kim, Suhn-Yeop; Choi, Jong-Duk

    2011-02-01

    To compare the effect of visual and kinesthetic locomotor imagery training on walking performance and to determine the clinical feasibility of incorporating auditory step rhythm into the training. Randomized crossover trial. Laboratory of a Department of Physical Therapy. Fifteen subjects with post-stroke hemiparesis. Four locomotor imagery trainings on walking performance: visual locomotor imagery training, kinesthetic locomotor imagery training, visual locomotor imagery training with auditory step rhythm and kinesthetic locomotor imagery training with auditory step rhythm. The timed up-and-go test and electromyographic and kinematic analyses of the affected lower limb during one gait cycle. After the interventions, significant differences were found in the timed up-and-go test results between the visual locomotor imagery training (25.69 ± 16.16 to 23.97 ± 14.30) and the kinesthetic locomotor imagery training with auditory step rhythm (22.68 ± 12.35 to 15.77 ± 8.58) (P < 0.05). The kinesthetic locomotor imagery training exhibited significantly increased activation in a greater number of muscles and increased angular displacement of the knee and ankle joints compared with the visual locomotor imagery training, and these effects were more prominent when auditory step rhythm was integrated into each form of locomotor imagery training. The activation of the hamstring during the swing phase and the gastrocnemius during the stance phase, as well as kinematic data of the knee joint, were significantly different for posttest values between the visual locomotor imagery training and the kinesthetic locomotor imagery training with auditory step rhythm (P < 0.05), being greater in the kinesthetic locomotor imagery training than in the visual locomotor imagery training.

  1. Impaired Integration of Emotional Faces and Affective Body Context in a Rare Case of Developmental Visual Agnosia

    Science.gov (United States)

    Aviezer, Hillel; Hassin, Ran. R.; Bentin, Shlomo

    2011-01-01

    In the current study we examined the recognition of facial expressions embedded in emotionally expressive bodies in case LG, an individual with a rare form of developmental visual agnosia who suffers from severe prosopagnosia. Neuropsychological testing demonstrated that LG's agnosia is characterized by profoundly impaired visual integration. Unlike individuals with typical developmental prosopagnosia who display specific difficulties with face identity (but typically not expression) recognition, LG was also impaired at recognizing isolated facial expressions. By contrast, he successfully recognized the expressions portrayed by faceless emotional bodies handling affective paraphernalia. When presented with contextualized faces in emotional bodies his ability to detect the emotion expressed by a face did not improve even if it was embedded in an emotionally-congruent body context. Furthermore, in contrast to controls, LG displayed an abnormal pattern of contextual influence from emotionally-incongruent bodies. The results are interpreted in the context of a general integration deficit in developmental visual agnosia, suggesting that impaired integration may extend from the level of the face to the level of the full person. PMID:21482423

  2. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking.

    Science.gov (United States)

    Lin, Zhicheng; He, Sheng

    2012-10-25

    Object identities ("what") and their spatial locations ("where") are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) effect and space-based effect, and (b) manipulated the target's relative location within its frame to probe frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal of the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously being updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects ("files") within the reference frame ("cabinet") are orderly coded relative to the frame.

  3. Information processing in the primate visual system - An integrated systems perspective

    Science.gov (United States)

    Van Essen, David C.; Anderson, Charles H.; Felleman, Daniel J.

    1992-01-01

    The primate visual system contains dozens of distinct areas in the cerebral cortex and several major subcortical structures. These subdivisions are extensively interconnected in a distributed hierarchical network that contains several intertwined processing streams. A number of strategies are used for efficient information processing within this hierarchy. These include linear and nonlinear filtering, passage through information bottlenecks, and coordinated use of multiple types of information. In addition, dynamic regulation of information flow within and between visual areas may provide the computational flexibility needed for the visual system to perform a broad spectrum of tasks accurately and at high resolution.

  4. Information Processing in the Primate Visual System: An Integrated Systems Perspective

    Science.gov (United States)

    van Essen, David C.; Anderson, Charles H.; Felleman, Daniel J.

    1992-01-01

    The primate visual system contains dozens of distinct areas in the cerebral cortex and several major subcortical structures. These subdivisions are extensively interconnected in a distributed hierarchical network that contains several intertwined processing streams. A number of strategies are used for efficient information processing within this hierarchy. These include linear and nonlinear filtering, passage through information bottlenecks, and coordinated use of multiple types of information. In addition, dynamic regulation of information flow within and between visual areas may provide the computational flexibility needed for the visual system to perform a broad spectrum of tasks accurately and at high resolution.

  5. Corridor One: An Integrated Distance Visualization Environment for SSI and ASCI Applications

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Rick [ANL, PI; Leigh, Jason [UIC, PI

    2002-07-14

    Scenarios describe realistic uses of DVC/Distance technologies in several years. Four scenarios are described: Distributed Decision Making; Remote Interactive Computing; Remote Visualization: (a) Remote Immersive Visualization and (b) Remote Scientific Visualization; Remote Virtual Prototyping. Scenarios serve as drivers for the road maps and enable us to check that the functionality and technology in the road maps match application needs. There are four major DVC/Distance technology areas we cover: Networking and QoS; Remote Computing; Remote Visualization; Remote Data. Each road map consists of two parts, a functionality matrix (what can be done) and a technology matrix (underlying technology). That is, functionality matrices show the desired operational characteristics, while technology matrices show the underlying technology needed. In practice, there isn't always a clean break between functionality and technology, but it still seems useful to try and separate things this way.

  6. Integration of Notification with 3D Visualization of Rover Operations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — 3D visualization has proven effective at orienting remote ground controllers about robots operating on a planetary surface. Using such displays, controllers can...

  7. The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information.

    Science.gov (United States)

    Buchan, Julie N; Munhall, Kevin G

    2012-01-01

    Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect the audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and that this effect is relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task. Gaze was less centralized on the face, less time was spent looking at the mouth and more time was spent looking at the eyes, when a concurrent cognitive load task was added to the speech task.

  8. The Integration of Multi-State Clarus Data into Data Visualization Tools

    Science.gov (United States)

    2011-12-20

    This project focused on the integration of all Clarus Data into the Regional Integrated Transportation Information System (RITIS) for real-time situational awareness and historical safety data analysis. The initial outcomes of this project are the fu...

  9. Compromised Integrity of Central Visual Pathways in Patients With Macular Degeneration.

    Science.gov (United States)

    Malania, Maka; Konrad, Julia; Jägle, Herbert; Werner, John S; Greenlee, Mark W

    2017-06-01

    Macular degeneration (MD) affects the central retina and leads to gradual loss of foveal vision. Although photoreceptors are primarily affected in MD, the retinal nerve fiber layer (RNFL) and central visual pathways may also be altered subsequent to photoreceptor degeneration. Here we investigate whether retinal damage caused by MD alters microstructural properties of visual pathways using diffusion-weighted magnetic resonance imaging. Six MD patients and six healthy control subjects participated in the study. Retinal images were obtained by spectral-domain optical coherence tomography (SD-OCT). Diffusion tensor images (DTI) and high-resolution T1-weighted structural images were collected for each subject. We used diffusion-based tensor modeling and probabilistic fiber tractography to identify the optic tract (OT) and optic radiations (OR), as well as nonvisual pathways (corticospinal tract and anterior fibers of corpus callosum). Fractional anisotropy (FA) and axial and radial diffusivity values (AD, RD) were calculated along the nonvisual and visual pathways. Measurement of RNFL thickness reveals that the temporal circumpapillary retinal nerve fiber layer was significantly thinner in eyes with macular degeneration than in normal eyes. While we did not find significant differences in diffusion properties in nonvisual pathways, patients showed significant changes in diffusion scalars (FA, RD, and AD) both in OT and OR. The results indicate that the RNFL and the white matter of the visual pathways are significantly altered in MD patients. Damage to the photoreceptors in MD leads to atrophy of the ganglion cell axons and to corresponding changes in microstructural properties of central visual pathways.

  10. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    Science.gov (United States)

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase one, volume 2 : knowledge modeling and database development.

    Science.gov (United States)

    2009-12-01

    The Integrated Remote Sensing and Visualization System (IRSV) is being designed to accommodate the needs of today's Bridge Engineers at the state and local level from several aspects that were documented in Volume One, Summary Report. The followi...

  12. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 1 : outreach and commercialization of IRSV prototype.

    Science.gov (United States)

    2012-03-01

    The Integrated Remote Sensing and Visualization System (IRSV) was developed in Phase One of this project in order to accommodate the needs of today's Bridge Engineers at the state and local level. Overall goals of this project are: Better u...

  13. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase one, volume 3 : use of scanning LiDAR in structural evaluation of bridges.

    Science.gov (United States)

    2009-12-01

    This volume introduces several applications of remote bridge inspection technologies studied in this Integrated Remote Sensing and Visualization (IRSV) study using ground-based LiDAR systems. In particular, the application of terrestrial LiDAR fo...

  14. Pedagogical Praxis Surrounding the Integration of Photography, Visual Literacy, Digital Literacy, and Educational Technology into Business Education Classrooms: A Focus Group Study

    Science.gov (United States)

    Schlosser, Peter Allen

    2010-01-01

    This paper reports on an investigation into how Marketing and Business Education Teachers utilize and integrate educational technology into curriculum through the use of photography. The ontology of this visual, technological, and language interface is explored with an eye toward visual literacy, digital literacy, and pedagogical praxis, focusing…

  15. Proscription supports robust perceptual integration by suppression in human visual cortex.

    Science.gov (United States)

    Rideaux, Reuben; Welchman, Andrew E

    2018-04-17

    Perception relies on integrating information within and between the senses, but how does the brain decide which pieces of information should be integrated and which kept separate? Here we demonstrate how proscription can be used to solve this problem: certain neurons respond best to unrealistic combinations of features to provide 'what not' information that drives suppression of unlikely perceptual interpretations. First, we present a model that captures both improved perception when signals are consistent (and thus should be integrated) and robust estimation when signals are conflicting. Second, we test for signatures of proscription in the human brain. We show that concentrations of inhibitory neurotransmitter GABA in a brain region intricately involved in integrating cues (V3B/KO) correlate with robust integration. Finally, we show that perturbing excitation/inhibition impairs integration. These results highlight the role of proscription in robust perception and demonstrate the functional purpose of 'what not' sensors in supporting sensory estimation.

  16. How Many Words Is a Picture Worth? Integrating Visual Literacy in Language Learning with Photographs

    Science.gov (United States)

    Baker, Lottie

    2015-01-01

    Cognitive research has shown that the human brain processes images quicker than it processes words, and images are more likely than text to remain in long-term memory. With the expansion of technology that allows people from all walks of life to create and share photographs with a few clicks, the world seems to value visual media more than ever…

  17. Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding.

    Science.gov (United States)

    Atilgan, Huriye; Town, Stephen M; Wood, Katherine C; Jones, Gareth P; Maddox, Ross K; Lee, Adrian K C; Bizley, Jennifer K

    2018-02-07

    How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  18. The Visual Impairment Intracranial Pressure Syndrome in Long Duration NASA Astronauts: An Integrated Approach

    Science.gov (United States)

    Otto, C. A.; Norsk, P.; Shelhamer, M. J.; Davis, J. R.

    2015-01-01

    The Visual Impairment Intracranial Pressure (VIIP) syndrome is currently NASA's number one human space flight risk. The syndrome, which is related to microgravity exposure, manifests with changes in visual acuity (hyperopic shifts, scotomas) and changes in eye structure (optic disc edema, choroidal folds, cotton wool spots, globe flattening, and distended optic nerve sheaths). In some cases, elevated cerebrospinal fluid pressure has been documented postflight, reflecting increased intracranial pressure (ICP). While the eye appears to be the main affected end organ of this syndrome, the ocular effects are thought to be related to the effect of cephalad fluid shift on the vascular system and the central nervous system. The leading hypotheses for the development of VIIP involve microgravity induced head-ward fluid shifts along with a loss of gravity-assisted drainage of venous blood from the brain, both leading to cephalic congestion and increased ICP. Although not all crewmembers have manifested clinical signs or symptoms of the VIIP syndrome, it is assumed that all astronauts exposed to microgravity have some degree of ICP elevation in-flight. Prolonged elevations of ICP can cause long-term reduced visual acuity and loss of peripheral visual fields, and has been reported to cause mild cognitive impairment in the analog terrestrial population of Idiopathic Intracranial Hypertension (IIH). These potentially irreversible health consequences underscore the importance of identifying the factors that lead to this syndrome and mitigating them.

  19. An Integrated Theory of Attention and Decision Making in Visual Signal Detection

    Science.gov (United States)

    Smith, Philip L.; Ratcliff, Roger

    2009-01-01

    The simplest attentional task, detecting a cued stimulus in an otherwise empty visual field, produces complex patterns of performance. Attentional cues interact with backward masks and with spatial uncertainty, and there is a dissociation in the effects of these variables on accuracy and on response time. A computational theory of performance in…

  20. Integrated visualization of multi-angle bioluminescence imaging and micro CT

    NARCIS (Netherlands)

    Kok, P.; Dijkstra, J.; Botha, C.P.; Post, F.H.; Kaijzel, E.; Que, I.; Löwik, C.W.G.M.; Reiber, J.H.C.; Lelieveldt, B.P.F.

    2007-01-01

    This paper explores new methods to visualize and fuse multi-2D bioluminescence imaging (BLI) data with structural imaging modalities such as micro CT and MR. A geometric, back-projection-based 3D reconstruction for superficial lesions from multi-2D BLI data is presented, enabling a coarse estimate

  1. Knowledge Visualizations: A Tool to Achieve Optimized Operational Decision Making and Data Integration

    Science.gov (United States)

    2015-06-01

    based upon a pyramid of feedback loops, FFIRs, or PIRs. Reports, in response to FFIRs and PIRs, forward information up the chain of command as a...

  2. An Integrated Visualization and Basic Molecular Modeling Laboratory for First-Year Undergraduate Medicinal Chemistry

    Science.gov (United States)

    Hayes, Joseph M.

    2014-01-01

    A 3D model visualization and basic molecular modeling laboratory suitable for first-year undergraduates studying introductory medicinal chemistry is presented. The 2 h practical is embedded within a series of lectures on drug design, target-drug interactions, enzymes, receptors, nucleic acids, and basic pharmacokinetics. Serving as a teaching aid…

  3. GenomeCAT: a versatile tool for the analysis and integrative visualization of DNA copy number variants.

    Science.gov (United States)

    Tebel, Katrin; Boldt, Vivien; Steininger, Anne; Port, Matthias; Ebert, Grit; Ullmann, Reinhard

    2017-01-06

    The analysis of DNA copy number variants (CNV) has increasing impact in the field of genetic diagnostics and research. However, the interpretation of CNV data derived from high resolution array CGH or NGS platforms is complicated by the considerable variability of the human genome. Therefore, tools for multidimensional data analysis and comparison of patient cohorts are needed to assist in the discrimination of clinically relevant CNVs from others. We developed GenomeCAT, a standalone Java application for the analysis and integrative visualization of CNVs. GenomeCAT is composed of three modules dedicated to the inspection of single cases, comparative analysis of multidimensional data and group comparisons aiming at the identification of recurrent aberrations in patients sharing the same phenotype, respectively. Its flexible import options ease the comparative analysis of one's own results derived from microarray or NGS platforms with data from literature or public depositories. Multidimensional data obtained from different experiment types can be merged into a common data matrix to enable common visualization and analysis. All results are stored in the integrated MySQL database, but can also be exported as tab delimited files for further statistical calculations in external programs. GenomeCAT offers a broad spectrum of visualization and analysis tools that assist in the evaluation of CNVs in the context of other experiment data and annotations. The use of GenomeCAT does not require any specialized computer skills. The various R packages implemented for data analysis are fully integrated into GenomeCAT's graphical user interface and the installation process is supported by a wizard. The flexibility in terms of data import and export in combination with the ability to create a common data matrix makes the program also well suited as an interface between genomic data from heterogeneous sources and external software tools. Due to the modular architecture the functionality of
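
    The common-data-matrix idea, merging copy-number calls from heterogeneous experiments so they can be visualized side by side, can be sketched with pandas. The example below is an illustrative stand-alone reimplementation with invented segments and fixed genomic bins; it does not reflect GenomeCAT's internal data model:

```python
import numpy as np
import pandas as pd

# Placeholder CNV segments from two platforms: chromosome, start, end, log2 ratio.
array_cgh = pd.DataFrame({"chrom": ["chr1", "chr1"], "start": [0, 2_000_000],
                          "end": [2_000_000, 4_000_000], "log2": [0.6, -0.4]})
ngs_calls = pd.DataFrame({"chrom": ["chr1"], "start": [1_000_000],
                          "end": [3_000_000], "log2": [0.5]})

def to_bins(segments, bin_size=1_000_000, chrom_len=5_000_000):
    """Project segment-level calls onto fixed genomic bins (one value per bin)."""
    starts = np.arange(0, chrom_len, bin_size)
    values = np.full(len(starts), np.nan)
    for _, seg in segments.iterrows():
        hit = (starts >= seg["start"]) & (starts < seg["end"])
        values[hit] = seg["log2"]
    return values

# Common data matrix: one row per genomic bin, one column per experiment.
matrix = pd.DataFrame({
    "bin_start": np.arange(0, 5_000_000, 1_000_000),
    "array_cgh": to_bins(array_cgh),
    "ngs": to_bins(ngs_calls),
})
print(matrix)
```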

  4. Individual variation in the propensity for prospective thought is associated with functional integration between visual and retrosplenial cortex.

    Science.gov (United States)

    Villena-Gonzalez, Mario; Wang, Hao-Ting; Sormaz, Mladen; Mollo, Giovanna; Margulies, Daniel S; Jefferies, Elizabeth A; Smallwood, Jonathan

    2018-02-01

    It is well recognized that the default mode network (DMN) is involved in states of imagination, although the cognitive processes that this association reflects are not well understood. The DMN includes many regions that function as cortical "hubs", including the posterior cingulate/retrosplenial cortex, anterior temporal lobe and the hippocampus. This suggests that the role of the DMN in cognition may reflect a process of cortical integration. In the current study we tested whether functional connectivity from uni-modal regions of cortex into the DMN is linked to features of imaginative thought. We found that strong intrinsic communication between visual and retrosplenial cortex was correlated with the degree of social thoughts about the future. Using an independent dataset, we show that the same region of retrosplenial cortex is functionally coupled to regions of primary visual cortex as well as core regions that make up the DMN. Finally, we compared the functional connectivity of the retrosplenial cortex, with a region of medial prefrontal cortex implicated in the integration of information from regions of the temporal lobe associated with future thought in a prior study. This analysis shows that the retrosplenial cortex is preferentially coupled to medial occipital, temporal lobe regions and the angular gyrus, areas linked to episodic memory, scene construction and navigation. In contrast, the medial prefrontal cortex shows preferential connectivity with motor cortex and lateral temporal and prefrontal regions implicated in language, motor processes and working memory. Together these findings suggest that integrating neural information from visual cortex into retrosplenial cortex may be important for imagining the future and may do so by creating a mental scene in which prospective simulations play out. We speculate that the role of the DMN in imagination may emerge from its capacity to bind together distributed representations from across the cortex in a

  5. An Integrated Web-Based 3d Modeling and Visualization Platform to Support Sustainable Cities

    Science.gov (United States)

    Amirebrahimi, S.; Rajabifard, A.

    2012-07-01

    Sustainable development is seen as the key to preserving the sustainability of cities in the face of ongoing population growth and its negative impacts. This is complex and requires holistic and multidisciplinary decision making. A variety of stakeholders with different backgrounds also need to be considered and involved. Numerous web-based modeling and visualization tools have been designed and developed to support this process. There have been some success stories; however, the majority failed to deliver a comprehensive platform supporting the different aspects of sustainable development. In this work, in the context of SDI and Land Administration, the CSDILA Platform - a 3D visualization and modeling platform - is proposed, which can be used to model and visualize different dimensions to facilitate the achievement of sustainability, in particular in an urban context. The methodology involved the design of a generic framework for the development of an analytical and visualization tool over the web. The CSDILA Platform was then implemented with a number of technologies based on the guidelines provided by the framework. The platform has a modular structure and uses a Service-Oriented Architecture (SOA). It is capable of managing spatial objects in a 4D data store and can flexibly incorporate a variety of developed models using the platform's API. Development scenarios can be modeled and tested using the analysis and modeling component of the platform, and the results are visualized in a seamless 3D environment. The platform was further tested using a number of scenarios and showed promising results and the potential to serve a wider need. In this paper, the design process of the generic framework, the implementation of the CSDILA Platform and the technologies used, as well as findings and future research directions, are presented and discussed.

  6. Rocking Your Writing Program: Integration of Visual Art, Language Arts, & Science

    Science.gov (United States)

    Poldberg, Monique M.,; Trainin, Guy; Andrzejczak, Nancy

    2013-01-01

    This paper explores the integration of art, literacy and science in a second grade classroom, showing how an integrative approach has a positive and lasting influence on student achievement in art, literacy, and science. Ways in which art, science, language arts, and cognition intersect are reviewed. Sample artifacts are presented along with their…

  7. The Impact of Visualization Dashboards on Quality of Care and Clinician Satisfaction: Integrative Literature Review.

    Science.gov (United States)

    Khairat, Saif Sherif; Dukkipati, Aniesha; Lauria, Heather Alico; Bice, Thomas; Travers, Debbie; Carson, Shannon S

    2018-05-31

    Intensive Care Units (ICUs) in the United States admit more than 5.7 million people each year. The ICU level of care helps people with life-threatening illness or injuries and involves close, constant attention by a team of specially-trained health care providers. Delay between condition onset and implementation of necessary interventions can dramatically impact the prognosis of patients with life-threatening diagnoses. Evidence supports a connection between information overload and medical errors. A tool that improves display and retrieval of key clinical information has great potential to benefit patient outcomes. The purpose of this review is to synthesize research on the use of visualization dashboards in health care. The purpose of conducting this literature review is to synthesize previous research on the use of dashboards visualizing electronic health record information for health care providers. A review of the existing literature on this subject can be used to identify gaps in prior research and to inform further research efforts on this topic. Ultimately, this evidence can be used to guide the development, testing, and implementation of a new solution to optimize the visualization of clinical information, reduce clinician cognitive overload, and improve patient outcomes. Articles were included if they addressed the development, testing, implementation, or use of a visualization dashboard solution in a health care setting. An initial search was conducted of literature on dashboards only in the intensive care unit setting, but there were not many articles found that met the inclusion criteria. A secondary follow-up search was conducted to broaden the results to any health care setting. The initial and follow-up searches returned a total of 17 articles that were analyzed for this literature review. Visualization dashboard solutions decrease time spent on data gathering, difficulty of data gathering process, cognitive load, time to task completion, errors

  8. The visual neuroscience of robotic grasping achieving sensorimotor skills through dorsal-ventral stream integration

    CERN Document Server

    Chinellato, Eris

    2016-01-01

    This book presents interdisciplinary research that pursues the mutual enrichment of neuroscience and robotics. Building on experimental work, and on the wealth of literature regarding the two cortical pathways of visual processing - the dorsal and ventral streams - we define and implement, computationally and on a real robot, a functional model of the brain areas involved in vision-based grasping actions. Grasping in robotics is largely an unsolved problem, and we show how the bio-inspired approach is successful in dealing with some fundamental issues of the task. Our robotic system can safely perform grasping actions on different unmodeled objects, denoting especially reliable visual and visuomotor skills. The computational model and the robotic experiments help in validating theories on the mechanisms employed by the brain areas more directly involved in grasping actions. This book offers new insights and research hypotheses regarding such mechanisms, especially for what concerns the interaction between the...

  9. Integration in a nuclear physics experiment of a visualization unit managed by a microprocessor

    International Nuclear Information System (INIS)

    Lefebvre, M.

    1976-01-01

    A microprocessor (Intel 8080) is introduced into the equipment controlling the (e,e'p) experiment that will take place at the linear accelerator operated on the CEA premises (Orme des Merisiers, Gif-sur-Yvette, France). The purpose of the microprocessor is to handle the visualization tasks needed for continuous control of the experiment, leaving more time and memory for data processing by the computing unit. In a future version of the system, control of the helium level in the target might also be handled by the microprocessor. This work is divided into 7 main parts: 1) a presentation of the linear accelerator and its experimental facilities, 2) the Intel 8080 microprocessor and its programming, 3) the implementation of the microprocessor in the electronic system, 4) memory management, 5) data acquisition, 6) the keyboard, and 7) the visualization unit [fr

  10. Heads First: Visual Aftereffects Reveal Hierarchical Integration of Cues to Social Attention.

    Directory of Open Access Journals (Sweden)

    Sarah Cooney

    Full Text Available Determining where another person is attending is an important skill for social interaction that relies on various visual cues, including the turning direction of the head and body. This study reports a novel high-level visual aftereffect that addresses the important question of how these sources of information are combined in gauging social attention. We show that adapting to images of heads turned 25° to the right or left produces a perceptual bias in judging the turning direction of subsequently presented bodies. In contrast, little to no change in the judgment of head orientation occurs after adapting to extremely oriented bodies. The unidirectional nature of the aftereffect suggests that cues from the human body signaling social attention are combined in a hierarchical fashion and is consistent with evidence from single-cell recording studies in nonhuman primates showing that information about head orientation can override information about body posture when both are visible.

  11. Virtual reality devices integration in scientific visualization software in the VtkVRPN framework

    International Nuclear Information System (INIS)

    Journe, G.; Guilbaud, C.

    2005-01-01

    High-quality scientific visualization software relies on ergonomic navigation and exploration. These are essential for efficient data analysis. To help solve this issue, management of virtual reality devices has been developed within the CEA 'VtkVRPN' framework. This framework is based on VTK, a 3D graphical library, and VRPN, a virtual reality devices management library. This document describes the developments done during a post-graduate training course. (authors)
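
    As a rough flavour of what coupling a device loop to a VTK scene involves, the fragment below builds a trivial VTK scene in Python and stubs out a tracker callback. The VRPN side is deliberately left as a placeholder (the on_tracker_update handler and the pose passed to it are hypothetical), since the actual VtkVRPN bindings are not described in the abstract:

```python
import vtk

# Minimal VTK scene: a single cone actor in a render window.
cone = vtk.vtkConeSource()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(cone.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

def on_tracker_update(position):
    """Hypothetical handler for a 6-DOF device report (e.g. delivered by a VRPN
    client loop): move the camera to follow the tracked device."""
    renderer.GetActiveCamera().SetPosition(*position)
    window.Render()

# In a VtkVRPN-style setup a device-management loop would call the handler with
# real poses; here it is invoked once with a made-up position.
on_tracker_update((3.0, 2.0, 10.0))
interactor.Start()
```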

  12. A Visualization Tool for Integrating Research Results at an Underground Mine

    Science.gov (United States)

    Boltz, S.; Macdonald, B. D.; Orr, T.; Johnson, W.; Benton, D. J.

    2016-12-01

    Researchers with the National Institute for Occupational Safety and Health are conducting research at a deep, underground metal mine in Idaho to develop improvements in ground control technologies that reduce the effects of dynamic loading on mine workings, thereby decreasing the risk to miners. This research is multifaceted and includes: photogrammetry, microseismic monitoring, geotechnical instrumentation, and numerical modeling. When managing research involving such a wide range of data, understanding how the data relate to each other and to the mining activity quickly becomes a daunting task. In an effort to combine this diverse research data into a single, easy-to-use system, a three-dimensional visualization tool was developed. The tool was created using the Unity3d video gaming engine and includes the mine development entries, production stopes, important geologic structures, and user-input research data. The tool provides the user with a first-person, interactive experience where they are able to walk through the mine as well as navigate the rock mass surrounding the mine to view and interpret the imported data in the context of the mine and as a function of time. The tool was developed using data from a single mine; however, it is intended to be a generic tool that can be easily extended to other mines. For example, a similar visualization tool is being developed for an underground coal mine in Colorado. The ultimate goal is for NIOSH researchers and mine personnel to be able to use the visualization tool to identify trends that may not otherwise be apparent when viewing the data separately. This presentation highlights the features and capabilities of the mine visualization tool and explains how it may be used to more effectively interpret data and reduce the risk of ground fall hazards to underground miners.

  13. 3D Web Visualization of Environmental Information - Integration of Heterogeneous Data Sources when Providing Navigation and Interaction

    Science.gov (United States)

    Herman, L.; Řezník, T.

    2015-08-01

    3D information is essential for a number of applications used daily in various domains such as crisis management, energy management, urban planning, and cultural heritage, as well as pollution and noise mapping, etc. This paper is devoted to the issue of 3D modelling from the level of individual buildings up to whole cities. The theoretical sections comprise an analysis of cartographic principles for the 3D visualization of spatial data as well as a review of technologies and data formats used in the visualization of 3D models. Emphasis was placed on the verification of available web technologies; for example, the X3DOM library was chosen for the implementation of a proof-of-concept web application. The created web application displays a 3D model of the city district of Nový Lískovec in Brno, the Czech Republic. The developed 3D visualization shows a terrain model, 3D buildings, noise pollution, and other related information. Attention was paid to the areas important for handling heterogeneous input data, the design of interactive functionality, and navigation assistants. The advantages, limitations, and future development of the proposed concept are discussed in the conclusions.

  14. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    Science.gov (United States)

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With growing computing capability and display sizes, mobile devices have been used as tools to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot provide a satisfactory quality of experience for radiologists. This paper presents a medical system that retrieves medical images from the picture archiving and communication system (PACS) on a mobile device over a wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. The proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that automatically changes remote rendering parameters to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network in the proposed application are also discussed. The results demonstrate that the proposed medical application can provide a smooth interactive experience on WLAN and 3G networks.
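
    Of the server-side techniques listed above, maximum intensity projection and multi-planar reconstruction are straightforward to illustrate on a raw volume. A minimal numpy sketch with a synthetic volume (a real deployment would instead load the DICOM series fetched from the PACS):

```python
import numpy as np

# Synthetic CT-like volume, shape (slices, rows, cols); a real system would
# instead stack the sectional DICOM images fetched from the PACS.
rng = np.random.default_rng(1)
volume = rng.normal(0.0, 1.0, size=(64, 128, 128))
volume[20:30, 50:70, 50:70] += 5.0          # a bright "lesion" for the MIP to pick up

# Maximum intensity projection along the slice axis (axial MIP).
mip_axial = volume.max(axis=0)               # shape (128, 128)

# Multi-planar reconstruction: re-slice the same volume along the other axes.
coronal  = volume[:, 64, :]                  # shape (64, 128)
sagittal = volume[:, :, 64]                  # shape (64, 128)

print(mip_axial.shape, coronal.shape, sagittal.shape)
```

    Direct volume rendering, by contrast, additionally needs a transfer function and ray compositing, and is not reproduced here.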

  15. Iowa Flood Information System: Towards Integrated Data Management, Analysis and Visualization

    Science.gov (United States)

    Demir, I.; Krajewski, W. F.; Goska, R.; Mantilla, R.; Weber, L. J.; Young, N.

    2012-04-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts both short-term and seasonal, flood-related data, information and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information on IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 500 communities in Iowa. Multiple view modes in the IFIS accommodate different user types from the general public to researchers and decision makers by providing different levels of tools and detail. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities

  16. "Usability of data integration and visualization software for multidisciplinary pediatric intensive care: a human factors approach to assessing technology".

    Science.gov (United States)

    Lin, Ying Ling; Guerguerian, Anne-Marie; Tomasi, Jessica; Laussen, Peter; Trbovich, Patricia

    2017-08-14

    Intensive care clinicians use several sources of data in order to inform decision-making. We set out to evaluate a new interactive data integration platform called T3™ made available for pediatric intensive care. Three primary functions are supported: tracking of physiologic signals, displaying trajectory, and triggering decisions, by highlighting data or estimating risk of patient instability. We designed a human factors study to identify interface usability issues, to measure ease of use, and to describe interface features that may enable or hinder clinical tasks. Twenty-two participants, consisting of bedside intensive care physicians, nurses, and respiratory therapists, tested the T3™ interface in a simulation laboratory setting. Twenty tasks were performed with a true-to-setting, fully functional, prototype, populated with physiological and therapeutic intervention patient data. Primary data visualization was time series and secondary visualizations were: 1) shading out-of-target values, 2) mini-trends with exaggerated maxima and minima (sparklines), and 3) bar graph of a 16-parameter indicator. Task completion was video recorded and assessed using a use error rating scale. Usability issues were classified in the context of task and type of clinician. A severity rating scale was used to rate potential clinical impact of usability issues. Time series supported tracking a single parameter but partially supported determining patient trajectory using multiple parameters. Visual pattern overload was observed with multiple parameter data streams. Automated data processing using shading and sparklines was often ignored but the 16-parameter data reduction algorithm, displayed as a persistent bar graph, was visually intuitive. However, by selecting or automatically processing data, triggering aids distorted the raw data that clinicians use regularly. Consequently, clinicians could not rely on new data representations because they did not know how they were

  17. Improving Multisensor Positioning of Land Vehicles with Integrated Visual Odometry for Next-Generation Self-Driving Cars

    Directory of Open Access Journals (Sweden)

    Muhammed Tahsin Rahman

    2018-01-01

    Full Text Available For their complete realization, autonomous vehicles (AVs) fundamentally rely on the Global Navigation Satellite System (GNSS) to provide positioning and navigation information. However, in areas such as urban cores, parking lots, and under dense foliage, which are all commonly frequented by AVs, GNSS signals suffer from blockage, interference, and multipath. These effects cause high levels of errors and long durations of service discontinuity that mar the performance of current systems. The prevalence of vision and low-cost inertial sensors provides an attractive opportunity to further increase positioning and navigation accuracy in such GNSS-challenged environments. This paper presents enhancements to existing multisensor integration systems utilizing the inertial navigation system (INS) to aid in Visual Odometry (VO) outlier feature rejection. A scheme called Aided Visual Odometry (AVO) is developed and integrated with a high-performance mechanization architecture utilizing vehicle motion and orientation sensors. The resulting solution exhibits improved state covariance convergence and navigation accuracy, while reducing computational complexity. Experimental verification of the proposed solution is illustrated through three real road trajectories, over two different land vehicles, and using two low-cost inertial measurement units (IMUs).
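
    The AVO scheme itself is not spelled out in the abstract. The sketch below only illustrates the general idea of using an IMU rotation estimate to gate visual-odometry feature matches: a pure-rotation homography predicts each feature's new pixel position, and matches that deviate strongly from the prediction are rejected as outliers. The camera intrinsics, rotation and threshold are invented example values, not the paper's.

        # Hedged sketch of INS/IMU-aided outlier gating for visual odometry.
        # This is a generic illustration, not the paper's AVO formulation.

        import numpy as np

        def rotation_homography(K: np.ndarray, R: np.ndarray) -> np.ndarray:
            """Homography mapping pixels between two views related by rotation R."""
            return K @ R @ np.linalg.inv(K)

        def gate_matches(pts_prev, pts_curr, K, R_imu, thresh_px=5.0):
            """Return a boolean mask of matches consistent with the IMU rotation."""
            H = rotation_homography(K, R_imu)
            ones = np.ones((len(pts_prev), 1))
            homog = np.hstack([pts_prev, ones]) @ H.T   # predicted homogeneous points
            pred = homog[:, :2] / homog[:, 2:3]         # back to pixel coordinates
            err = np.linalg.norm(pred - pts_curr, axis=1)
            return err < thresh_px

        if __name__ == "__main__":
            K = np.array([[700., 0, 320.], [0, 700., 240.], [0, 0, 1.]])
            yaw = np.deg2rad(2.0)                       # small yaw reported by the IMU
            R = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                          [0, 1, 0],
                          [-np.sin(yaw), 0, np.cos(yaw)]])
            pts_prev = np.array([[100., 120.], [400., 300.], [600., 50.]])
            H = rotation_homography(K, R)
            h = np.hstack([pts_prev, np.ones((3, 1))]) @ H.T
            pts_curr = h[:, :2] / h[:, 2:3]
            pts_curr[2] += 40.0                         # corrupt one match on purpose
            print(gate_matches(pts_prev, pts_curr, K, R))   # expected: [ True  True False]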

  18. Motor-auditory-visual integration: The role of the human mirror neuron system in communication and communication disorders.

    Science.gov (United States)

    Le Bel, Ronald M; Pineda, Jaime A; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an integration of motor-auditory-visual information processing related to aspects of language learning including action understanding and recognition. Such integration may also form the basis for language-related constructs such as theory of mind. In this article, we review the MNS system as it relates to the cognitive development of language in typically developing children and in children at-risk for communication disorders, such as children with autism spectrum disorder (ASD) or hearing impairment. Studying MNS development in these children may help illuminate an important role of the MNS in children with communication disorders. Studies with deaf children are especially important because they offer potential insights into how the MNS is reorganized when one modality, such as audition, is deprived during early cognitive development, and this may have long-term consequences on language maturation and theory of mind abilities. Readers will be able to (1) understand the concept of mirror neurons, (2) identify cortical areas associated with the MNS in animal and human studies, (3) discuss the use of mu suppression in the EEG for measuring the MNS in humans, and (4) discuss MNS dysfunction in children with (ASD).

  19. Arsenic removal from contaminated groundwater by membrane-integrated hybrid plant: optimization and control using Visual Basic platform.

    Science.gov (United States)

    Chakrabortty, S; Sen, M; Pal, P

    2014-03-01

    Simulation software (ARRPA) has been developed on the Microsoft Visual Basic platform for optimization and control of a novel membrane-integrated arsenic separation plant, in the backdrop of the absence of such software. The user-friendly, menu-driven software is based on a dynamic linearized mathematical model developed for the hybrid treatment scheme. The model captures the chemical kinetics in the pre-treating chemical reactor and the separation and transport phenomena involved in nanofiltration. The software has been validated through extensive experimental investigations. The agreement between the outputs of the computer simulation program and the experimental findings is excellent and consistent under varying operating conditions, reflecting the high degree of accuracy and reliability of the software. High values of the overall correlation coefficient (R² = 0.989) and Willmott d-index (0.989) are indicators of the capability of the software in analyzing the performance of the plant. The software permits pre-analysis and manipulation of input data, helps in optimization, and exhibits the performance of an integrated plant visually on a graphical platform. Performance analysis of the whole system as well as the individual units is possible using the tool. The software, the first of its kind in its domain and in the well-known Microsoft Excel environment, is likely to be very useful in successful design, optimization and operation of an advanced hybrid treatment plant for removal of arsenic from contaminated groundwater.
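
    The record reports R² = 0.989 and a Willmott d-index of 0.989 as agreement statistics. Purely as a reference for how these two indices are conventionally computed from observed and predicted series (standard textbook definitions, not code from ARRPA, with invented example data):

        # Hedged sketch: coefficient of determination (R^2) and Willmott index of
        # agreement (d) for observed vs. predicted series, standard definitions.

        import numpy as np

        def r_squared(obs, pred):
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            ss_res = np.sum((obs - pred) ** 2)
            ss_tot = np.sum((obs - obs.mean()) ** 2)
            return 1.0 - ss_res / ss_tot

        def willmott_d(obs, pred):
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            num = np.sum((pred - obs) ** 2)
            den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
            return 1.0 - num / den

        if __name__ == "__main__":
            observed = [0.92, 0.88, 0.81, 0.74, 0.69]    # e.g. fractional arsenic removal (invented)
            predicted = [0.90, 0.89, 0.80, 0.76, 0.68]
            print(f"R^2        = {r_squared(observed, predicted):.3f}")
            print(f"Willmott d = {willmott_d(observed, predicted):.3f}")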

  20. Direct experimental visualization of the global Hamiltonian progression of two-dimensional Lagrangian flow topologies from integrable to chaotic state

    Energy Technology Data Exchange (ETDEWEB)

    Baskan, O.; Clercx, H. J. H [Fluid Dynamics Laboratory, Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Speetjens, M. F. M. [Energy Technology Laboratory, Department of Mechanical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Metcalfe, G. [Commonwealth Scientific and Industrial Research Organisation, Melbourne, Victoria 3190 (Australia); Swinburne University of Technology, Department of Mechanical Engineering, Hawthorn VIC 3122 (Australia)

    2015-10-15

    Countless theoretical/numerical studies on transport and mixing in two-dimensional (2D) unsteady flows lean on the assumption that Hamiltonian mechanisms govern the Lagrangian dynamics of passive tracers. However, experimental studies specifically investigating said mechanisms are rare. Moreover, they typically concern local behavior in specific states (usually far away from the integrable state) and generally expose this indirectly by dye visualization. Laboratory experiments explicitly addressing the global Hamiltonian progression of the Lagrangian flow topology entirely from integrable to chaotic state, i.e., the fundamental route to efficient transport by chaotic advection, appear non-existent. This motivates our study on experimental visualization of this progression by direct measurement of Poincaré sections of passive tracer particles in a representative 2D time-periodic flow. This admits (i) accurate replication of the experimental initial conditions, facilitating true one-to-one comparison of simulated and measured behavior, and (ii) direct experimental investigation of the ensuing Lagrangian dynamics. The analysis reveals a close agreement between computations and observations and thus experimentally validates the full global Hamiltonian progression at a great level of detail.

  1. Direct experimental visualization of the global Hamiltonian progression of two-dimensional Lagrangian flow topologies from integrable to chaotic state.

    Science.gov (United States)

    Baskan, O; Speetjens, M F M; Metcalfe, G; Clercx, H J H

    2015-10-01

    Countless theoretical/numerical studies on transport and mixing in two-dimensional (2D) unsteady flows lean on the assumption that Hamiltonian mechanisms govern the Lagrangian dynamics of passive tracers. However, experimental studies specifically investigating said mechanisms are rare. Moreover, they typically concern local behavior in specific states (usually far away from the integrable state) and generally expose this indirectly by dye visualization. Laboratory experiments explicitly addressing the global Hamiltonian progression of the Lagrangian flow topology entirely from integrable to chaotic state, i.e., the fundamental route to efficient transport by chaotic advection, appear non-existent. This motivates our study on experimental visualization of this progression by direct measurement of Poincaré sections of passive tracer particles in a representative 2D time-periodic flow. This admits (i) accurate replication of the experimental initial conditions, facilitating true one-to-one comparison of simulated and measured behavior, and (ii) direct experimental investigation of the ensuing Lagrangian dynamics. The analysis reveals a close agreement between computations and observations and thus experimentally validates the full global Hamiltonian progression at a great level of detail.
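
    A numerical counterpart of the measurement described in the two records above can be sketched by stroboscopically sampling passive tracers once per forcing period in a simple time-periodic model flow. The alternating "sine flow" on the unit torus used below is a standard textbook example of chaotic advection, not the experimental flow of the study; increasing the amplitude parameter a moves the Poincaré sections from a near-integrable to a chaotic state.

        # Hedged sketch: Poincare sections for a generic time-periodic 2D model flow
        # (the alternating sine flow). Illustrative only; not the experimental flow.

        import numpy as np
        import matplotlib.pyplot as plt

        def sine_flow_map(x, y, a):
            """Advance tracers by one full forcing period of the alternating sine flow."""
            x = (x + a * np.sin(2 * np.pi * y)) % 1.0   # first half-period: shear in x
            y = (y + a * np.sin(2 * np.pi * x)) % 1.0   # second half-period: shear in y
            return x, y

        def poincare_section(a, n_tracers=30, n_periods=400, seed=0):
            rng = np.random.default_rng(seed)
            x, y = rng.random(n_tracers), rng.random(n_tracers)
            pts = []
            for _ in range(n_periods):
                x, y = sine_flow_map(x, y, a)
                pts.append(np.column_stack([x, y]))     # stroboscopic sample, once per period
            return np.vstack(pts)

        if __name__ == "__main__":
            fig, axes = plt.subplots(1, 3, figsize=(12, 4))
            for ax, a in zip(axes, (0.05, 0.2, 0.6)):   # near-integrable -> chaotic
                pts = poincare_section(a)
                ax.plot(pts[:, 0], pts[:, 1], ",")
                ax.set_title(f"a = {a}")
                ax.set_xlabel("x"); ax.set_ylabel("y")
            plt.tight_layout()
            plt.savefig("poincare_sections.png", dpi=150)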

  2. Supralinear and Supramodal Integration of Visual and Tactile Signals in Rats: Psychophysics and Neuronal Mechanisms.

    Science.gov (United States)

    Nikbakht, Nader; Tafreshiha, Azadeh; Zoccolan, Davide; Diamond, Mathew E

    2018-02-07

    To better understand how object recognition can be triggered independently of the sensory channel through which information is acquired, we devised a task in which rats judged the orientation of a raised, black and white grating. They learned to recognize two categories of orientation: 0° ± 45° ("horizontal") and 90° ± 45° ("vertical"). Each trial required a visual (V), a tactile (T), or a visual-tactile (VT) discrimination; VT performance was better than that predicted by optimal linear combination of V and T signals, indicating synergy between sensory channels. We examined posterior parietal cortex (PPC) and uncovered key neuronal correlates of the behavioral findings: PPC carried both graded information about object orientation and categorical information about the rat's upcoming choice; single neurons exhibited identical responses under the three modality conditions. Finally, a linear classifier of neuronal population firing replicated the behavioral findings. Taken together, these findings suggest that PPC is involved in the supramodal processing of shape. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
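
    The "optimal linear combination" benchmark mentioned above is, under the common assumption of independent Gaussian noise in each channel, the maximum-likelihood prediction for bimodal sensitivity; writing it out makes clear what "better than predicted" means. The symbols below are generic (sigma = unimodal estimate variance, d' = discrimination sensitivity) and are not taken from the paper's own notation.

        \sigma_{VT}^{2} \;=\; \frac{\sigma_{V}^{2}\,\sigma_{T}^{2}}{\sigma_{V}^{2} + \sigma_{T}^{2}}
        \qquad\Longrightarrow\qquad
        d'_{VT,\mathrm{pred}} \;=\; \sqrt{(d'_{V})^{2} + (d'_{T})^{2}}

    Supralinear integration then corresponds to the observed bimodal sensitivity exceeding this prediction, d'_{VT,obs} > d'_{VT,pred}.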

  3. Cuttlefish dynamic camouflage: responses to substrate choice and integration of multiple visual cues.

    Science.gov (United States)

    Allen, Justine J; Mäthger, Lydia M; Barbosa, Alexandra; Buresch, Kendra C; Sogin, Emilia; Schwartz, Jillian; Chubb, Charles; Hanlon, Roger T

    2010-04-07

    Prey camouflage is an evolutionary response to predation pressure. Cephalopods have extensive camouflage capabilities and studying them can offer insight into effective camouflage design. Here, we examine whether cuttlefish, Sepia officinalis, show substrate or camouflage pattern preferences. In the first two experiments, cuttlefish were presented with a choice between different artificial substrates or between different natural substrates. First, the ability of cuttlefish to show substrate preference on artificial and natural substrates was established. Next, cuttlefish were offered substrates known to evoke the three main camouflage body pattern types these animals show: Uniform or Mottle (which function by background matching), or Disruptive. In a third experiment, cuttlefish were presented with conflicting visual cues on their left and right sides to assess their camouflage response. Given a choice between substrates they might encounter in nature, we found no strong substrate preference except when cuttlefish could bury themselves. Additionally, cuttlefish responded to conflicting visual cues with mixed body patterns in both the substrate preference and split substrate experiments. These results suggest that differences in energy costs for different camouflage body patterns may be minor and that pattern mixing and symmetry may play important roles in camouflage.

  4. What you see is what you remember : Visual chunking by temporal integration enhances working memory

    NARCIS (Netherlands)

    Akyürek, Elkan G.; Kappelmann, Nils; Volkert, Marc; van Rijn, Hedderik

    2017-01-01

    Human memory benefits from information clustering, which can be accomplished by chunking. Chunking typically relies on expertise and strategy and it is unknown whether perceptual clustering over time, through temporal integration, can also enhance working memory. The current study examined the

  5. An integrated approach for visual analysis of a multisource moving objects knowledge base

    NARCIS (Netherlands)

    Willems, N.; van Hage, W.R.; de Vries, G.; Janssens, J.H.M.; Malaisé, V.

    2010-01-01

    We present an integrated and multidisciplinary approach for analyzing the behavior of moving objects. The results originate from ongoing research by four different partners in the Dutch Poseidon project (Embedded Systems Institute (2007)), which aims to develop new methods for Maritime Safety

  6. An Integrated Approach for Visual Analysis of a Multi-Source Moving Objects Knowledge Base

    NARCIS (Netherlands)

    Willems, C.M.E.; van Hage, W.R.; de Vries, G.K.D.; Janssens, J.; Malaisé, V.

    2010-01-01

    We present an integrated and multidisciplinary approach for analyzing the behavior of moving objects. The results originate from ongoing research by four different partners in the Dutch Poseidon project (Embedded Systems Institute (2007)), which aims to develop new methods for Maritime Safety

  7. An integrated approach for visual analysis of a multi-source moving objects knowledge base

    NARCIS (Netherlands)

    Willems, N.; Hage, van W.R.; Vries, de G.; Janssens, J.H.M.; Malaisé, V.

    2010-01-01

    We present an integrated and multidisciplinary approach for analyzing the behavior of moving objects. The results originate from ongoing research by four different partners in the Dutch Poseidon project (Embedded Systems Institute (2007)), which aims to develop new methods for Maritime Safety

  8. Visual Problem Appraisal-Kerala's Coast: A Simulation for Learning about Integrated Coastal Zone Management

    NARCIS (Netherlands)

    Witteveen, L.M.; Enserink, B.

    2007-01-01

    Integrated management of coastal zones is crucial for the sustainable use of scarce and vulnerable natural resources and the economic survival of local and indigenous people. Conflicts of interest in coastal zones are manifold, especially in regions with high population pressure, such as Kerala (in

  9. Time-interval for integration of stabilizing haptic and visual information in subjects balancing under static and dynamic conditions

    Directory of Open Access Journals (Sweden)

    Jean-Louis Honeine

    2014-10-01

    Full Text Available Maintaining equilibrium is basically a sensorimotor integration task. The central nervous system continually and selectively weights and rapidly integrates sensory inputs from multiple sources, and coordinates multiple outputs. The weighting process is based on the availability and accuracy of afferent signals at a given instant, on the time-period required to process each input, and possibly on the plasticity of the relevant pathways. The likelihood that sensory inflow changes while balancing under static or dynamic conditions is high, because subjects can pass from a dark to a well-lit environment or from a tactile-guided stabilization to loss of haptic inflow. This review article presents recent data on the temporal events accompanying sensory transition, on which basic information is fragmentary. The processing time from sensory shift to reaching a new steady state includes the time to (a) subtract or integrate sensory inputs, (b) move from allocentric to egocentric reference or vice versa, and (c) adjust the calibration of motor activity in time and amplitude to the new sensory set. We present examples of processes of integration of posture-stabilizing information, and of the respective sensorimotor time-intervals while allowing or occluding vision or adding or subtracting tactile information. These intervals are short, in the order of 1-2 s for different postural conditions, modalities and deliberate or passive shift. They are just longer for haptic than visual shift, just shorter on withdrawal than on addition of stabilizing input, and on deliberate than unexpected mode. The delays are the shortest (for haptic shift) in blind subjects. Since automatic balance stabilization may be vulnerable to sensory-integration delays and to interference from concurrent cognitive tasks in patients with sensorimotor problems, insight into the processing time for balance control represents a critical step in the design of new balance- and locomotion training

  10. Time-interval for integration of stabilizing haptic and visual information in subjects balancing under static and dynamic conditions

    Science.gov (United States)

    Honeine, Jean-Louis; Schieppati, Marco

    2014-01-01

    Maintaining equilibrium is basically a sensorimotor integration task. The central nervous system (CNS) continually and selectively weights and rapidly integrates sensory inputs from multiple sources, and coordinates multiple outputs. The weighting process is based on the availability and accuracy of afferent signals at a given instant, on the time-period required to process each input, and possibly on the plasticity of the relevant pathways. The likelihood that sensory inflow changes while balancing under static or dynamic conditions is high, because subjects can pass from a dark to a well-lit environment or from a tactile-guided stabilization to loss of haptic inflow. This review article presents recent data on the temporal events accompanying sensory transition, on which basic information is fragmentary. The processing time from sensory shift to reaching a new steady state includes the time to (a) subtract or integrate sensory inputs; (b) move from allocentric to egocentric reference or vice versa; and (c) adjust the calibration of motor activity in time and amplitude to the new sensory set. We present examples of processes of integration of posture-stabilizing information, and of the respective sensorimotor time-intervals while allowing or occluding vision or adding or subtracting tactile information. These intervals are short, in the order of 1–2 s for different postural conditions, modalities and deliberate or passive shift. They are just longer for haptic than visual shift, just shorter on withdrawal than on addition of stabilizing input, and on deliberate than unexpected mode. The delays are the shortest (for haptic shift) in blind subjects. Since automatic balance stabilization may be vulnerable to sensory-integration delays and to interference from concurrent cognitive tasks in patients with sensorimotor problems, insight into the processing time for balance control represents a critical step in the design of new balance- and locomotion training devices

  11. Pulseq-Graphical Programming Interface: Open source visual environment for prototyping pulse sequences and integrated magnetic resonance imaging algorithm development.

    Science.gov (United States)

    Ravi, Keerthi Sravan; Potdar, Sneha; Poojar, Pavan; Reddy, Ashok Kumar; Kroboth, Stefan; Nielsen, Jon-Fredrik; Zaitsev, Maxim; Venkatesan, Ramesh; Geethanath, Sairam

    2018-03-11

    To provide a single open-source platform for comprehensive MR algorithm development inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis. We integrated the "Pulseq" platform for vendor-independent pulse programming with Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner. For comparison, Pulseq files using either MATLAB only ("MATLAB-Pulseq") or Python only ("Python-Pulseq") were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three variants of implementation were compared on two operating systems. In vitro phantom images indicate equivalence with the vendor supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all the three implementations were fast (a few seconds). The software is capable of user-interface based development and/or command line programming. The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Ratio and Peripheral Nerve Stimulation computations. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. PACOM: A Versatile Tool for Integrating, Filtering, Visualizing, and Comparing Multiple Large Mass Spectrometry Proteomics Data Sets.

    Science.gov (United States)

    Martínez-Bartolomé, Salvador; Medina-Aunon, J Alberto; López-García, Miguel Ángel; González-Tejedo, Carmen; Prieto, Gorka; Navajas, Rosana; Salazar-Donate, Emilio; Fernández-Costa, Carolina; Yates, John R; Albar, Juan Pablo

    2018-04-06

    Mass-spectrometry-based proteomics has evolved into a high-throughput technology in which numerous large-scale data sets are generated from diverse analytical platforms. Furthermore, several scientific journals and funding agencies have emphasized the storage of proteomics data in public repositories to facilitate its evaluation, inspection, and reanalysis. (1) As a consequence, public proteomics data repositories are growing rapidly. However, tools are needed to integrate multiple proteomics data sets to compare different experimental features or to perform quality control analysis. Here, we present a new Java stand-alone tool, Proteomics Assay COMparator (PACOM), that is able to import, combine, and simultaneously compare numerous proteomics experiments to check the integrity of the proteomic data as well as verify data quality. With PACOM, the user can detect sources of error that may have been introduced in any step of a proteomics workflow and that influence the final results. Data sets can be easily compared and integrated, and data quality and reproducibility can be visually assessed through a rich set of graphical representations of proteomics data features as well as a wide variety of data filters. Its flexibility and easy-to-use interface make PACOM a unique tool for daily use in a proteomics laboratory. PACOM is available at https://github.com/smdb21/pacom .
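
    PACOM itself is a Java desktop tool; purely to illustrate the kind of cross-experiment comparison it automates, the short sketch below computes the overlap of protein identifications between two hypothetical result sets. The accession numbers are invented and no PACOM code or file format is reproduced here.

        # Hedged illustration (not PACOM code): comparing protein identifications
        # from two proteomics runs by simple set overlap.

        run_a = {"P01308", "P68871", "P02768", "Q9Y6K9", "P04637"}   # invented accessions
        run_b = {"P01308", "P02768", "P04637", "O75475"}

        shared = run_a & run_b
        union = run_a | run_b
        jaccard = len(shared) / len(union)

        print(f"shared identifications : {sorted(shared)}")
        print(f"only in run A          : {sorted(run_a - run_b)}")
        print(f"only in run B          : {sorted(run_b - run_a)}")
        print(f"Jaccard overlap        : {jaccard:.2f}")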

  13. Tornado Warning Perception and Response: Integrating the Roles of Visual Design, Demographics, and Hazard Experience.

    Science.gov (United States)

    Schumann, Ronald L; Ash, Kevin D; Bowser, Gregg C

    2018-02-01

    Recent advancements in severe weather detection and warning dissemination technologies have reduced, but not eliminated, large-casualty tornado hazards in the United States. Research on warning cognition and behavioral response by the public has the potential to further reduce tornado-related deaths and injuries; however, less research has been conducted in this area compared to tornado research in the physical sciences. Extant research in this vein tends to bifurcate. One branch of studies derives from classic risk perception, which investigates cognitive, affective, and sociocultural factors in relation to concern and preparation for uncertain risks. Another branch focuses on psychological, social, and cultural factors implicated in warning response for rapid onset hazards, with attention paid to previous experience and message design. Few studies link risk perceptions with cognition and response as elicited by specific examples of warnings. The present study unites risk perception, cognition, and response approaches by testing the contributions of hypothesized warning response drivers in one set of path models. Warning response is approximated by perceived fear and intended protective action as reported by survey respondents when exposed to hypothetical tornado warning scenarios. This study considers the roles of hazard knowledge acquisition, information-seeking behaviors, previous experience, and sociodemographic factors while controlling for the effects of the visual warning graphic. Findings from the study indicate the primacy of a user's visual interpretation of a warning graphic in shaping tornado warning response. Results also suggest that information-seeking habits, previous tornado experience, and local disaster culture play strong influencing roles in warning response. © 2017 Society for Risk Analysis.

  14. Interactive Visualization Systems and Data Integration Methods for Supporting Discovery in Collections of Scientific Information

    Science.gov (United States)

    2011-05-01

    projection for exploratory analysis. It also enables quantitative analysis. We show that this combination can be used to assist users with the...cited in "Undiscovered Public Knowledge" Swanson succeeds in integrating Wilson's ideas with Karl Popper's critique of positivism from the 1934 "Logik...data that was performed in the prior studies. In this study, quantitative properties of the graph were used to identify records that merit

  15. An integrated methodology for process improvement and delivery system visualization at a multidisciplinary cancer center.

    Science.gov (United States)

    Singprasong, Rachanee; Eldabi, Tillal

    2013-01-01

    Multidisciplinary cancer centers require an integrated, collaborative, and streamlined workflow in order to provide high-quality patient care. Due to the complex nature of cancer care and continuing changes to treatment techniques and technologies, it is a constant struggle for centers to obtain a systemic and holistic view of the treatment workflow for improving the delivery systems. Project management techniques, a responsibility matrix and a swim-lane activity diagram representing the sequence of activities can be combined for data collection, presentation, and evaluation of patient care. This paper presents this integrated methodology, using multidisciplinary meetings and a walking-the-route approach for data collection, an integrated responsibility matrix and swim-lane activity diagram with activity times for data representation, and a 5-why and gap analysis approach for data analysis. This enables collection of the right level of detail in a shorter time frame by identifying process flaws and deficiencies while being independent of the nature of the patient's disease or treatment techniques. A case study of a multidisciplinary regional cancer centre is used to illustrate the effectiveness of the proposed methodology and demonstrates that the methodology is simple to understand, allowing for minimal training of staff and rapid implementation. © 2011 National Association for Healthcare Quality.

  16. Enhancing situational awareness by means of visualization and information integration of sensor networks

    Science.gov (United States)

    Timonen, Jussi; Vankka, Jouko

    2013-05-01

    This paper presents a solution for an information integration and sharing architecture, which is able to receive data simultaneously from multiple different sensor networks. Creating a Common Operational Picture (COP) object along with the base map of the building plays a key role in the research. The object is combined with desired map sources and then shared to the mobile devices worn by soldiers in the field. The sensor networks we used focus on indoor location techniques, and a simple set of symbols is created to present the information, as an addition to NATO APP6B symbols. A core element in this research is the MUSAS (Mobile Urban Situational Awareness System), a demonstration environment that implements the central functionalities. Information integration of the system is handled by the Internet Communications Engine (Ice) middleware, as well as the server, which hosts COP information and maps. The entire system is closed, such that it does not need any external service, and the information transfer with the mobile devices is organized by a tactical 5 GHz WLAN solution. The demonstration environment is implemented using only commercial off-the-shelf (COTS) products. We have presented a field experiment event in which the system was able to integrate and share real-time information from a blue force tracking system, a received signal strength indicator (RSSI) based intrusion detection system, and a robot using simultaneous localization and mapping (SLAM) technology, where all the inputs were based on real activities. The event was held in a training area for urban area warfare.

  17. Visual-Haptic Integration: Cue Weights are Varied Appropriately, to Account for Changes in Haptic Reliability Introduced by Using a Tool

    OpenAIRE

    Chie Takahashi; Simon J Watt

    2011-01-01

    Tools such as pliers systematically change the relationship between an object's size and the hand opening required to grasp it. Previous work suggests the brain takes this into account, integrating visual and haptic size information that refers to the same object, independent of the similarity of the ‘raw’ visual and haptic signals (Takahashi et al., VSS 2009). Variations in tool geometry also affect the reliability (precision) of haptic size estimates, however, because they alter the change ...
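
    The reliability-weighted integration this record refers to is usually formalized with the standard maximum-likelihood cue-combination rule below; this is the generic textbook form, not necessarily the authors' exact formulation. Here S_V and S_H denote the visual and haptic size estimates and sigma_V^2, sigma_H^2 their variances; as a tool makes the haptic estimate less reliable (larger sigma_H^2), the weight shifts toward vision.

        \hat{S} \;=\; w_V\,\hat{S}_V + w_H\,\hat{S}_H,
        \qquad
        w_V \;=\; \frac{1/\sigma_V^{2}}{1/\sigma_V^{2} + 1/\sigma_H^{2}},
        \qquad
        w_H \;=\; 1 - w_V.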

  18. Integrated and visual performance evaluation model for thermal systems and its application to an HTGR cogeneration system

    International Nuclear Information System (INIS)

    Qi, Zhang; Yoshikawa, Hidekazu; Ishii, Hirotake; Shimoda, Hiroshi

    2010-01-01

    An integrated and visual model, EXCEM-MFM (EXergy, Cost, Energy and Mass - Multilevel Flow Model), is proposed in this study to comprehensively analyze and evaluate the performance of thermal systems by coupling two models: the EXCEM model and MFM. In the EXCEM-MFM model, MFM is used to provide analysis frameworks for the four parameters exergy, cost, energy and mass, and EXCEM is used to calculate the flow values of these four parameters for MFM based on the provided framework. In this study, we used the tools and technologies of computer science and software engineering to materialize the model. Moreover, the feasibility and application potential of the proposed EXCEM-MFM model have been demonstrated by an example application: a comprehensive performance study of a typical High Temperature Gas Reactor (HTGR) cogeneration system taking into account thermodynamic and economic perspectives. (author)

  19. Integrated trimodal SSEP experimental setup for visual, auditory and tactile stimulation

    Science.gov (United States)

    Kuś, Rafał; Spustek, Tomasz; Zieleniewska, Magdalena; Duszyk, Anna; Rogowski, Piotr; Suffczyński, Piotr

    2017-12-01

    Objective. Steady-state evoked potentials (SSEPs), the brain responses to repetitive stimulation, are commonly used in both clinical practice and scientific research. Particular brain mechanisms underlying SSEPs in different modalities (i.e. visual, auditory and tactile) are very complex and still not completely understood. Each response has distinct resonant frequencies and exhibits a particular brain topography. Moreover, the topography can be frequency-dependent, as in case of auditory potentials. However, to study each modality separately and also to investigate multisensory interactions through multimodal experiments, a proper experimental setup appears to be of critical importance. The aim of this study was to design and evaluate a novel SSEP experimental setup providing a repetitive stimulation in three different modalities (visual, tactile and auditory) with a precise control of stimuli parameters. Results from a pilot study with a stimulation in a particular modality and in two modalities simultaneously prove the feasibility of the device to study SSEP phenomenon. Approach. We developed a setup of three separate stimulators that allows for a precise generation of repetitive stimuli. Besides sequential stimulation in a particular modality, parallel stimulation in up to three different modalities can be delivered. Stimulus in each modality is characterized by a stimulation frequency and a waveform (sine or square wave). We also present a novel methodology for the analysis of SSEPs. Main results. Apart from constructing the experimental setup, we conducted a pilot study with both sequential and simultaneous stimulation paradigms. EEG signals recorded during this study were analyzed with advanced methodology based on spatial filtering and adaptive approximation, followed by statistical evaluation. Significance. We developed a novel experimental setup for performing SSEP experiments. In this sense our study continues the ongoing research in this field. On the
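
    To make the stimulus parameters in this record concrete (each modality is driven at a stimulation frequency with a sine or square waveform), here is a minimal generic waveform generator; the frequencies, duration and sample rate are arbitrary example values, not those of the published setup.

        # Hedged sketch: generating sine or square repetitive-stimulation waveforms
        # at a given frequency. Illustrative parameters only.

        import numpy as np

        def stimulus(freq_hz: float, duration_s: float, waveform: str = "sine",
                     sample_rate: int = 44100) -> np.ndarray:
            t = np.arange(int(duration_s * sample_rate)) / sample_rate
            phase = 2 * np.pi * freq_hz * t
            if waveform == "sine":
                return np.sin(phase)
            if waveform == "square":
                return np.sign(np.sin(phase))
            raise ValueError("waveform must be 'sine' or 'square'")

        if __name__ == "__main__":
            visual = stimulus(15.0, 2.0, "square")    # e.g. a 15 Hz flicker envelope (example value)
            tactile = stimulus(23.0, 2.0, "sine")     # e.g. a 23 Hz vibration envelope (example value)
            auditory = stimulus(40.0, 2.0, "sine")    # e.g. a 40 Hz modulation envelope (example value)
            print(visual.shape, tactile.shape, auditory.shape)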

  20. Visual-vestibular integration as a function of adaptation to space flight and return to Earth

    Science.gov (United States)

    Reschke, Millard R.; Bloomberg, Jacob J.; Harm, Deborah L.; Huebner, William P.; Krnavek, Jody M.; Paloski, William H.; Berthoz, Alan

    1999-01-01

    Research on perception and control of self-orientation and self-motion addresses interactions between action and perception. Self-orientation and self-motion, and the perception of that orientation and motion are required for and modified by goal-directed action. Detailed Supplementary Objective (DSO) 604 Operational Investigation-3 (OI-3) was designed to investigate the integrated coordination of head and eye movements within a structured environment where perception could modify responses and where response could be compensatory for perception. A full understanding of this coordination required definition of spatial orientation models for the microgravity environment encountered during spaceflight.

  1. Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.

    Science.gov (United States)

    Koch, S; Bosch, H; Giereth, M; Ertl, T

    2011-05-01

    Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. Already the amount of patent data to be analyzed poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for interactive analysis of patent information has been developed addressing scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.

  2. Integration of graphene sensor with electrochromic device on modulus-gradient polymer for instantaneous strain visualization

    Science.gov (United States)

    Yang, Tingting; Zhong, Yujia; Tao, Dashuai; Li, Xinming; Zang, Xiaobei; Lin, Shuyuan; Jiang, Xin; Li, Zhihong; Zhu, Hongwei

    2017-09-01

    In nature, some animals change their deceptive coloration for camouflage, temperature preservation or communication. This astonishing function has inspired scientists to replicate the color changing abilities of animals with artificial skin. Recently, some studies have focused on the smart materials and devices with reversible color changing or light-emitting properties for instantaneous strain visualization. However, most of these works only show eye-detectable appearance change when subjected to large mechanical deformation (100%-500% strain), and conspicuous color change at small strain remains rarely explored. In the present study, we developed a user-interactive electronic skin with human-readable optical output by assembling a highly sensitive resistive strain sensor with a stretchable organic electrochromic device (ECD) together. We explored the substrate effect on the electromechanical behavior of graphene and designed a strategy of modulus-gradient structure to employ graphene as both the highly sensitive strain sensing element and the insensitive stretchable electrode of the ECD layer. Subtle strain (0-10%) was enough to evoke an obvious color change, and the RGB value of the color quantified the magnitude of the applied strain. Such high sensitivity to smaller strains (0-10%) with color changing capability will potentially enhance the function of wearable devices, robots and prosthetics in the future.

  3. Developing Mobile- and BIM-Based Integrated Visual Facility Maintenance Management System

    Directory of Open Access Journals (Sweden)

    Yu-Cheng Lin

    2013-01-01

    Full Text Available Facility maintenance management (FMM) has become an important topic for research on the operation phase of the construction life cycle. Managing FMM effectively is extremely difficult owing to various factors and environments. One of the difficulties is the performance of 2D graphics when depicting maintenance service. Building information modeling (BIM) uses precise geometry and relevant data to support the maintenance service of facilities depicted in 3D object-oriented CAD. This paper proposes a new and practical methodology with application to FMM using BIM technology. Using BIM technology, this study proposes a BIM-based facility maintenance management (BIMFMM) system for maintenance staff in the operation and maintenance phase. The BIMFMM system is then applied in a selected case study of a commercial building project in Taiwan to verify the proposed methodology and demonstrate its effectiveness in FMM practice. Using the BIMFMM system, maintenance staff can access and review 3D BIM models for updating related maintenance records in a digital format. Moreover, this study presents a generic system architecture and its implementation. The combined results demonstrate that a BIMFMM-like system can be an effective visual FMM tool.

  4. Active contour-based visual tracking by integrating colors, shapes, and motions.

    Science.gov (United States)

    Hu, Weiming; Zhou, Xue; Li, Wei; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen

    2013-05-01

    In this paper, we present a framework for active contour-based visual tracking using level sets. The main components of our framework include contour-based tracking initialization, color-based contour evolution, adaptive shape-based contour evolution for non-periodic motions, dynamic shape-based contour evolution for periodic motions, and the handling of abrupt motions. For the initialization of contour-based tracking, we develop an optical flow-based algorithm for automatically initializing contours at the first frame. For the color-based contour evolution, Markov random field theory is used to measure correlations between values of neighboring pixels for posterior probability estimation. For adaptive shape-based contour evolution, the global shape information and the local color information are combined to hierarchically evolve the contour, and a flexible shape updating model is constructed. For the dynamic shape-based contour evolution, a shape mode transition matrix is learnt to characterize the temporal correlations of object shapes. For the handling of abrupt motions, particle swarm optimization is adopted to capture the global motion which is applied to the contour in the current frame to produce an initial contour in the next frame.
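
    Of the components listed above, the particle-swarm step used for abrupt motions is the easiest to isolate. The sketch below runs a generic particle swarm optimization over candidate object centres, scoring each candidate with a synthetic placeholder similarity function; it illustrates the optimization idea only and is not the authors' tracker or appearance model.

        # Hedged sketch: particle swarm optimization over candidate object centers,
        # standing in for the "abrupt motion" step. The similarity function is a
        # synthetic placeholder, not a real appearance model.

        import numpy as np

        def similarity(pos, true_center=np.array([140.0, 85.0])):
            """Placeholder appearance score: peaks at a hidden 'true' object center."""
            return np.exp(-np.sum((pos - true_center) ** 2) / (2 * 20.0 ** 2))

        def pso_search(n_particles=30, n_iters=50, bounds=(0.0, 200.0), seed=1):
            rng = np.random.default_rng(seed)
            pos = rng.uniform(*bounds, size=(n_particles, 2))
            vel = np.zeros_like(pos)
            pbest = pos.copy()
            pbest_val = np.array([similarity(p) for p in pos])
            gbest = pbest[pbest_val.argmax()].copy()
            w, c1, c2 = 0.7, 1.5, 1.5                   # inertia and acceleration terms
            for _ in range(n_iters):
                r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, *bounds)
                vals = np.array([similarity(p) for p in pos])
                improved = vals > pbest_val
                pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
                gbest = pbest[pbest_val.argmax()].copy()
            return gbest

        if __name__ == "__main__":
            print("estimated object center:", pso_search().round(1))  # should land near [140, 85]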

  5. Attentional selection in visual perception, memory and action: a quest for cross-domain integration

    Science.gov (United States)

    Schneider, Werner X.; Einhäuser, Wolfgang; Horstmann, Gernot

    2013-01-01

    For decades, the cognitive and neural sciences have benefitted greatly from a separation of mind and brain into distinct functional domains. The tremendous success of this approach notwithstanding, it is self-evident that such a view is incomplete. Goal-directed behaviour of an organism requires the joint functioning of perception, memory and sensorimotor control. A prime candidate for achieving integration across these functional domains are attentional processes. Consequently, this Theme Issue brings together studies of attentional selection from many fields, both experimental and theoretical, that are united in their quest to find overreaching integrative principles of attention between perception, memory and action. In all domains, attention is understood as combination of competition and priority control ('bias'), with the task as a decisive driving factor to ensure coherent goal-directed behaviour and cognition. Using vision as the predominant model system for attentional selection, many studies of this Theme Issue focus special emphasis on eye movements as a selection process that is both a fundamental action and serves a key function in perception. The Theme Issue spans a wide range of methods, from measuring human behaviour in the real world to recordings of single neurons in the non-human primate brain. We firmly believe that combining such a breadth in approaches is necessary not only for attentional selection, but also to take the next decisive step in all of the cognitive and neural sciences: to understand cognition and behaviour beyond isolated domains. PMID:24018715

  6. Attentional selection in visual perception, memory and action: a quest for cross-domain integration.

    Science.gov (United States)

    Schneider, Werner X; Einhäuser, Wolfgang; Horstmann, Gernot

    2013-10-19

    For decades, the cognitive and neural sciences have benefitted greatly from a separation of mind and brain into distinct functional domains. The tremendous success of this approach notwithstanding, it is self-evident that such a view is incomplete. Goal-directed behaviour of an organism requires the joint functioning of perception, memory and sensorimotor control. A prime candidate for achieving integration across these functional domains are attentional processes. Consequently, this Theme Issue brings together studies of attentional selection from many fields, both experimental and theoretical, that are united in their quest to find overreaching integrative principles of attention between perception, memory and action. In all domains, attention is understood as combination of competition and priority control ('bias'), with the task as a decisive driving factor to ensure coherent goal-directed behaviour and cognition. Using vision as the predominant model system for attentional selection, many studies of this Theme Issue focus special emphasis on eye movements as a selection process that is both a fundamental action and serves a key function in perception. The Theme Issue spans a wide range of methods, from measuring human behaviour in the real world to recordings of single neurons in the non-human primate brain. We firmly believe that combining such a breadth in approaches is necessary not only for attentional selection, but also to take the next decisive step in all of the cognitive and neural sciences: to understand cognition and behaviour beyond isolated domains.

  7. Exploratory Nuclear Reactor Safety Analysis and Visualization via Integrated Topological and Geometric Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Maljovec, Dan [Univ. of Utah, Salt Lake City, UT (United States); Wang, Bei [Univ. of Utah, Salt Lake City, UT (United States); Pascucci, Valerio [Univ. of Utah, Salt Lake City, UT (United States); Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pernice, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Nourgaliev, Robert [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2013-10-01

    and 2) topology-based methodologies to interactively visualize multidimensional data and extract risk-informed insights. Regarding item 1) we employ learning algorithms that aim to infer/predict simulation outcomes and decide the coordinate in the input space of the next sample that maximizes the amount of information that can be gained from it. Such methodologies can be used to both explore and exploit the input space. The latter is especially used for safety analysis purposes to focus samples along the limit surface, i.e. the boundaries in the input space between system failure and system success. Regarding item 2) we present a software tool that is designed to analyze multi-dimensional data. We model a large-scale nuclear simulation dataset as a high-dimensional scalar function defined over a discrete sample of the domain. First, we provide structural analysis of such a function at multiple scales and provide insight into the relationship between the input parameters and the output. Second, we enable exploratory analysis for users, where we help the users to differentiate features from noise through multi-scale analysis on an interactive platform, based on domain knowledge and data characterization. Our analysis is performed by exploiting the topological and geometric properties of the domain, building statistical models based on its topological segmentations and providing interactive visual interfaces to facilitate such explorations.
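
    As a toy illustration only of the limit-surface-focused sampling idea under item 1) above (not the algorithms of this report), the following sketch uses a k-nearest-neighbour surrogate over already-run samples of a synthetic failure criterion and always "simulates" next the candidate input whose predicted failure probability is closest to 0.5, so new runs concentrate along the failure/success boundary.

        # Hedged toy sketch of limit-surface-focused adaptive sampling. The failure
        # criterion is synthetic, not a real reactor-safety model.

        import numpy as np

        def run_simulation(x):
            """Synthetic stand-in for a safety simulation: 1 = failure, 0 = success."""
            return int(x[0] ** 2 + 1.5 * x[1] > 1.0)

        def knn_failure_prob(candidates, X, y, k=5):
            d = np.linalg.norm(candidates[:, None, :] - X[None, :, :], axis=2)
            nearest = np.argsort(d, axis=1)[:, :k]
            return y[nearest].mean(axis=1)

        def adaptive_sample(n_initial=20, n_adaptive=60, seed=3):
            rng = np.random.default_rng(seed)
            X = rng.uniform(0, 1, size=(n_initial, 2))
            y = np.array([run_simulation(x) for x in X])
            for _ in range(n_adaptive):
                cand = rng.uniform(0, 1, size=(500, 2))
                p = knn_failure_prob(cand, X, y)
                x_next = cand[np.argmin(np.abs(p - 0.5))]   # most ambiguous candidate
                X = np.vstack([X, x_next])
                y = np.append(y, run_simulation(x_next))
            return X, y

        if __name__ == "__main__":
            X, y = adaptive_sample()
            new_pts = X[20:]
            g = np.abs(new_pts[:, 0] ** 2 + 1.5 * new_pts[:, 1] - 1.0)
            print(f"median |limit-surface residual| of adaptively placed samples: {np.median(g):.3f}")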

  8. Traffic Visualization

    DEFF Research Database (Denmark)

    Picozzi, Matteo; Verdezoto, Nervo; Pouke, Matti

    2013-01-01

    In this paper, we present a space-time visualization to provide a city's decision-makers with the ability to analyse and uncover important "city events" in an understandable manner for city planning activities. An interactive Web mashup visualization is presented that integrates several visualization... techniques to give a rapid overview of traffic data. We illustrate our approach as a case study for traffic visualization systems, using datasets from the city of Oulu that can be extended to other city planning activities. We also report the feedback of real users (traffic management employees, traffic police...

  9. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    Science.gov (United States)

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.

  10. Geoscience information integration and visualization research of Shandong Province, China based on ArcGIS engine

    Science.gov (United States)

    Xu, Mingzhu; Gao, Zhiqiang; Ning, Jicai

    2014-10-01

    To improve the access efficiency of geoscience data, efficient data models and storage solutions should be used. Geoscience data are usually classified by format or coordinate system in existing storage solutions. When the data are large, this is not conducive to searching for geographic features. In this study, a geographic information integration system for Shandong province, China was developed based on ArcGIS Engine, .NET, and SQL Server technology. It uses the Geodatabase spatial data model and ArcSDE to organize and store spatial and attribute data, and establishes a geoscience database of Shandong. Seven function modules were designed: map browsing, database and subject management, layer control, map query, spatial analysis and map symbolization. The system's ability to browse and manage data by geoscience subject makes it convenient for geographic researchers and decision-making departments to use the data.

  11. Integration and Exploitation of Advanced Visualization and Data Technologies to Teach STEM Subjects

    Science.gov (United States)

    Brandon, M. A.; Garrow, K. H.

    2014-12-01

    We live in an age where the volume of content available online to the general public is staggering. Integration of data from new technologies gives us amazing educational opportunities when appropriate narratives are provided. We prepared a distance-learning, credit-bearing module that showcased many currently available data sets and state-of-the-art technologies. It has been completed by many thousands of students with good feedback. Module highlights were the wide-ranging and varied online activities, which taught a wide range of STEM content. For example: it is well known that on Captain Scott's Terra Nova Expedition 1910-13, three researchers completed "the worst journey in the world" to study emperor penguins. Using their primary records and clips from location-filmed television documentaries we can tell their story and the reasons why it was important. However, using state-of-the-art content we can go much further. Using satellite data, students can trace the path the researchers took and observe the penguin colony that they studied. Linking to modern Open Access literature, students learn how they can estimate the numbers of animals in this and similar locations. Then, by linking to freely available data from Antarctic Automatic Weather Stations, students can learn quantitatively about the climatic conditions the animals are enduring in real time. They can then download and compare this with the regional climatic record to see if their observations are what could be expected. By considering the environment the penguins live in, students can be taught about the evolutionary and behavioural adaptations the animals have undergone to survive. In this one activity we can teach a wide range of key learning points in an engaging and coherent way. It opened some students' eyes to the range of possibilities available to learn about our own and other planets. The addition and integration of new state-of-the-art techniques and data sets only increases the opportunities to

  12. Visual Education

    DEFF Research Database (Denmark)

    Buhl, Mie; Flensborg, Ingelise

    2010-01-01

    The intrinsic breadth of various types of images creates new possibilities and challenges for visual education. The digital media have moved the boundaries between images and other kinds of modalities (e.g. writing, speech and sound) and have augmented the possibilities for integrating... to emerge in the interlocutory space of a global visual repertoire and diverse local interpretations. The two perspectives represent challenges for future visual education which require visual competences, not only within the arts but also within the subjects of natural sciences, social sciences, languages...

  13. Integration of simulations and visualizations into classroom contexts through role playing

    Science.gov (United States)

    Moysey, S. M.

    2016-12-01

    While simulations create a novel way to engage students, the idea of numerical modeling may be overwhelming to a wide swath of students - particularly non-geoscience majors or those students early in their earth science education. Yet even for these students, simulations and visualizations remain a powerful way to explore concepts and take ownership over their learning. One approach to bring these tools into the classroom is to introduce them as a component of a larger role-playing activity. I present two specific examples of how I have done this within a general education course broadly focused on water resources sustainability. In the first example, we have created an online multi-player watershed management game where players make management decisions for their individual farms, which in turn set the parameters for a watershed-scale groundwater model that continuously runs in the background. Through the simulation students were able to influence the behavior of the environment and see feedbacks on their individual land within the game. Though the original intent was to focus student learning on the hydrologic aspects of the watershed behavior, I have found that the value of the simulation is actually in allowing students to become immersed in a way that enables deep conversations about topics ranging from environmental policy to social justice. The second example presents an overview of a role playing activity focused on a multi-party negotiation of water rights in the Klamath watershed. In this case each student takes on a different role in the negotiation (e.g., farmer, energy producer, government, environmental advocate, etc.) and is presented with a rich set of data tying environmental and economic factors to the operation of reservoirs. In this case the simulation model is very simple, i.e., a mass balance calculator that students use to predict the consequences of their management decisions. The simplicity of the simulator, however, allows for
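
    The "mass balance calculator" of the second example can be captured in a few lines. The sketch below is a generic monthly reservoir water balance with invented numbers (it is not the actual Klamath classroom tool); students would swap in different release policies and compare the resulting storage and spill trajectories.

        # Hedged sketch: a generic reservoir mass-balance calculator of the kind
        # students could use to test management decisions. All volumes and flows
        # are invented illustrative numbers (units: million cubic meters per month).

        def simulate_reservoir(storage, inflows, release_policy, capacity=1200.0,
                               evap_fraction=0.01):
            """Step the balance S[t+1] = S[t] + inflow - release - evaporation,
            clipping storage to [0, capacity]; anything above capacity spills."""
            history = []
            for month, inflow in enumerate(inflows):
                release = release_policy(month, storage)
                evap = evap_fraction * storage
                storage = storage + inflow - release - evap
                spill = max(0.0, storage - capacity)
                storage = min(max(storage, 0.0), capacity)
                history.append((month, round(storage, 1), round(release, 1), round(spill, 1)))
            return history

        if __name__ == "__main__":
            inflows = [90, 120, 200, 260, 180, 110, 60, 40, 35, 50, 70, 85]  # a wet spring (invented)
            # One candidate policy: steady irrigation/environmental release each month.
            steady = lambda month, storage: 100.0
            for month, s, r, spill in simulate_reservoir(800.0, inflows, steady):
                print(f"month {month:2d}: storage={s:7.1f}  release={r:6.1f}  spill={spill:5.1f}")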

  14. DIGITALIZATION CULTURE VS ARCHAEOLOGICAL VISUALIZATION: INTEGRATION OF PIPELINES AND OPEN ISSUES

    Directory of Open Access Journals (Sweden)

    L. Cipriani

    2017-02-01

    Full Text Available Scholars with different backgrounds have carried out extensive surveys centred on how 3D digital models, data acquisition and processing have changed over the years in the fields of archaeology and architecture, and more generally in the Cultural Heritage panorama: the current framework focused on reality-based modelling is split into several branches: acquisition, communication and analysis of buildings (Pintus et alii, 2014). Despite the wide set of well-structured and all-encompassing surveys on IT applications in Cultural Heritage, several open issues still seem to be present, in particular when the purpose of digital simulacra is to fit with the "pre-informatics" legacy of architectural/archaeological representation (historical drawings with their graphic codes and aesthetics). Starting from a series of heterogeneous matters that came up while studying two Italian UNESCO sites, this paper aims at underlining the importance of integrating different pipelines from different technological fields, in order to achieve multipurpose models, capable of complying with graphic codes of traditional survey, as well as semantic enrichment, and, last but not least, data compression/portability and texture reliability under different lighting simulation.

  15. Holistic integration of gaze cues in visual face and body perception: Evidence from the composite design.

    Science.gov (United States)

    Vrancken, Leia; Germeys, Filip; Verfaillie, Karl

    2017-01-01

    A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.

  16. Integrating Patient-Reported Outcomes into Spine Surgical Care through Visual Dashboards: Lessons Learned from Human-Centered Design.

    Science.gov (United States)

    Hartzler, Andrea L; Chaudhuri, Shomir; Fey, Brett C; Flum, David R; Lavallee, Danielle

    2015-01-01

    The collection of patient-reported outcomes (PROs) draws attention to issues of importance to patients-physical function and quality of life. The integration of PRO data into clinical decisions and discussions with patients requires thoughtful design of user-friendly interfaces that consider user experience and present data in personalized ways to enhance patient care. Whereas most prior work on PROs focuses on capturing data from patients, little research details how to design effective user interfaces that facilitate use of this data in clinical practice. We share lessons learned from engaging health care professionals to inform design of visual dashboards, an emerging type of health information technology (HIT). We employed human-centered design (HCD) methods to create visual displays of PROs to support patient care and quality improvement. HCD aims to optimize the design of interactive systems through iterative input from representative users who are likely to use the system in the future. Through three major steps, we engaged health care professionals in targeted, iterative design activities to inform the development of a PRO Dashboard that visually displays patient-reported pain and disability outcomes following spine surgery. Design activities to engage health care administrators, providers, and staff guided our work from design concept to specifications for dashboard implementation. Stakeholder feedback from these health care professionals shaped user interface design features, including predefined overviews that illustrate at-a-glance trends and quarterly snapshots, granular data filters that enable users to dive into detailed PRO analytics, and user-defined views to share and reuse. Feedback also revealed important considerations for quality indicators and privacy-preserving sharing and use of PROs. Our work illustrates a range of engagement methods guided by human-centered principles and design recommendations for optimizing PRO Dashboards for patient

  17. Integrating Patient-Reported Outcomes into Spine Surgical Care through Visual Dashboards: Lessons Learned from Human-Centered Design

    Science.gov (United States)

    Hartzler, Andrea L.; Chaudhuri, Shomir; Fey, Brett C.; Flum, David R.; Lavallee, Danielle

    2015-01-01

    Introduction: The collection of patient-reported outcomes (PROs) draws attention to issues of importance to patients—physical function and quality of life. The integration of PRO data into clinical decisions and discussions with patients requires thoughtful design of user-friendly interfaces that consider user experience and present data in personalized ways to enhance patient care. Whereas most prior work on PROs focuses on capturing data from patients, little research details how to design effective user interfaces that facilitate use of this data in clinical practice. We share lessons learned from engaging health care professionals to inform design of visual dashboards, an emerging type of health information technology (HIT). Methods: We employed human-centered design (HCD) methods to create visual displays of PROs to support patient care and quality improvement. HCD aims to optimize the design of interactive systems through iterative input from representative users who are likely to use the system in the future. Through three major steps, we engaged health care professionals in targeted, iterative design activities to inform the development of a PRO Dashboard that visually displays patient-reported pain and disability outcomes following spine surgery. Findings: Design activities to engage health care administrators, providers, and staff guided our work from design concept to specifications for dashboard implementation. Stakeholder feedback from these health care professionals shaped user interface design features, including predefined overviews that illustrate at-a-glance trends and quarterly snapshots, granular data filters that enable users to dive into detailed PRO analytics, and user-defined views to share and reuse. Feedback also revealed important considerations for quality indicators and privacy-preserving sharing and use of PROs. Conclusion: Our work illustrates a range of engagement methods guided by human-centered principles and design

  18. Immediate integration of prosodic information from speech and visual information from pictures in the absence of focused attention: a mismatch negativity study.

    Science.gov (United States)

    Li, X; Yang, Y; Ren, G

    2009-06-16

    Language is often perceived together with visual information. Recent experimental evidence indicates that, during spoken language comprehension, the brain can immediately integrate visual information with semantic or syntactic information from speech. Here we used the mismatch negativity to further investigate whether prosodic information from speech can be immediately integrated into a visual scene context, and especially the time course and automaticity of this integration process. Sixteen Chinese native speakers participated in the study. The materials included Chinese spoken sentences and picture pairs. In the audiovisual situation, relative to the concomitant pictures, the spoken sentence was appropriately accented in the standard stimuli, but inappropriately accented in the two kinds of deviant stimuli. In the purely auditory situation, the speech sentences were presented without pictures. It was found that the deviants evoked mismatch responses in both audiovisual and purely auditory situations; the mismatch negativity in the purely auditory situation peaked at the same time as, but was weaker than, that evoked by the same deviant speech sounds in the audiovisual situation. This pattern of results suggests immediate integration of prosodic information from speech and visual information from pictures in the absence of focused attention.

  19. Psychological Adjustment and Levels of Self Esteem in Children with Visual-Motor Integration Difficulties Influences the Results of a Randomized Intervention Trial

    Science.gov (United States)

    Lahav, Orit; Apter, Alan; Ratzon, Navah Z.

    2013-01-01

    This study evaluates how much the effects of intervention programs are influenced by pre-existing psychological adjustment and self-esteem levels in kindergarten and first grade children with poor visual-motor integration skills, from low socioeconomic backgrounds. One hundred and sixteen mainstream kindergarten and first-grade children, from low…

  20. Investigating the Visual-Motor Integration Skills of 60-72-Month-Old Children at High and Low Socio-Economic Status as Regard the Age Factor

    Science.gov (United States)

    Ercan, Zülfiye Gül; Ahmetoglu, Emine; Aral, Neriman

    2011-01-01

    This study aims to define whether age creates any differences in the visual-motor integration skills of 60-72 months old children at low and high socio-economic status. The study was conducted on a total of 148 children consisting of 78 children representing low socio-economic status and 70 children representing high socio-economic status in the…

  1. Visual Motor Integration as a Screener for Responders and Non-Responders in Preschool and Early School Years: Implications for Inclusive Assessment in Oman

    Science.gov (United States)

    Emam, Mahmoud Mohamed; Kazem, Ali Mahdi

    2016-01-01

    Visual motor integration (VMI) is the ability of the eyes and hands to work together in smooth, efficient patterns. In Oman, there are few effective methods to assess VMI skills in children in inclusive settings. The current study investigated the performance of preschool and early school years responders and non-responders on a VMI test. The full…

  2. Structural Model of the Relationships among Cognitive Processes, Visual Motor Integration, and Academic Achievement in Students with Mild Intellectual Disability (MID)

    Science.gov (United States)

    Taha, Mohamed Mostafa

    2016-01-01

    This study aimed to test a proposed structural model of the relationships and existing paths among cognitive processes (attention and planning), visual motor integration, and academic achievement in reading, writing, and mathematics. The study sample consisted of 50 students with mild intellectual disability or MID. The average age of these…

  3. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study.

    Science.gov (United States)

    Kumar, G Vinodh; Halder, Tamesh; Jaiswal, Amit K; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. However, if and how the network across the whole brain participates during multisensory perceptual processing remains an open question. We posit that large-scale functional connectivity among neural populations situated in distributed brain sites may provide valuable insights into the processing and fusing of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs were computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception around a temporal window of 300-600 ms following onset of stimuli. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times along with pre-stimulus decreases of lower frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus, our
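
    As a simplified stand-in for the sensor-pair coherence summary described above, the sketch below averages magnitude-squared coherence over all channel pairs in a chosen frequency band. It is an assumption-laden illustration (synthetic data, fixed band, Welch-based coherence), not the study's exact time-frequency global coherence pipeline.

```python
# Band-averaged coherence across all EEG sensor pairs (illustrative sketch;
# not a reproduction of the study's time-frequency global coherence method).
import numpy as np
from itertools import combinations
from scipy.signal import coherence

def global_band_coherence(eeg, fs, band=(30.0, 45.0), nperseg=256):
    """eeg: array of shape (n_channels, n_samples).
    Returns mean magnitude-squared coherence over all channel pairs in `band` (Hz)."""
    lo, hi = band
    pair_values = []
    for i, j in combinations(range(eeg.shape[0]), 2):
        f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=nperseg)
        mask = (f >= lo) & (f <= hi)
        pair_values.append(cxy[mask].mean())
    return float(np.mean(pair_values))

# Example with synthetic data: 8 channels, 2 s at 256 Hz sampling rate.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 512))
print(global_band_coherence(eeg, fs=256, band=(30, 45)))
```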

  4. AtlantOS WP2, Enhancement of ship-based observing networks - Bathymetric integration and visualization of Europe's data holdings

    Science.gov (United States)

    Wölfl, Anne-Cathrin; Devey, Colin; Augustin, Nico

    2017-04-01

    The European Horizon 2020 research and innovation project AtlantOS - Optimising and Enhancing the Integrated Atlantic Ocean Observing Systems - aims to improve the present-day ocean observing activities in the Atlantic Ocean by establishing a sustainable, efficient and integrated Atlantic Ocean Observing System. 62 partners from 18 countries are working on solutions I) to improve international collaboration in the design, implementation and benefit sharing of ocean observing, II) to promote engagement and innovation in all aspects of ocean observing, III) to facilitate free and open access to ocean data and information, IV) to enable and disseminate methods of achieving quality and authority of ocean information, V) to strengthen the Global Ocean Observing System (GOOS) and to sustain observing systems that are critical for the Copernicus Marine Environment Monitoring Service and its applications and VI) to contribute to the aims of the Galway Statement on Atlantic Ocean Cooperation. The Work Package 2 of the AtlantOS project focuses on improving, expanding, integrating and innovating ship-based observations. One of the tasks is the provision of Europe's existing and future bathymetric data sets from the Atlantic Ocean in accessible formats enabling easy processing and visualization for stakeholders. Furthermore, a new concept has recently been implemented, where three large German research vessels continuously collect bathymetric data during their transits. All data sets are gathered and processed with the help of national data centers and partner institutions and integrated into existing open access data systems, such as Pangaea in Germany, EMODnet at European level and GMRT (Global Multi-Resolution Topography synthesis) at international level. The processed data will be linked to the original data holdings, that can easily be accessed if required. The overall aim of this task is to make bathymetric data publicly available for specialists and non-specialists both

  5. No double-dissociation between optic ataxia and visual agnosia: Multiple sub-streams for multiple visuo-manual integrations

    NARCIS (Netherlands)

    Pisella, L.; Binkofski, F.; Lasek, K.; Toni, I.; Rossetti, Y.

    2006-01-01

    The current dominant view of the visual system is marked by the functional and anatomical dissociation between a ventral stream specialised for perception and a dorsal stream specialised for action. The "double-dissociation" between visual agnosia (VA), a deficit of visual recognition, and optic

  6. Fuels planning: science synthesis and integration; forest structure and fire hazard fact sheet 03: visualizing forest structure and fuels

    Science.gov (United States)

    Rocky Mountain Research Station USDA Forest Service

    2004-01-01

    The software described in this fact sheet provides managers with tools for visualizing forest and fuels information. Computer-based landscape simulations can help visualize stand and landscape conditions and the effects of different management treatments and fuel changes over time. These visualizations can assist forest planning by considering a range of management...

  7. Role of high-resolution image integration to visualize left phrenic nerve and coronary arteries during epicardial ventricular tachycardia ablation.

    Science.gov (United States)

    Yamashita, Seigo; Sacher, Frédéric; Mahida, Saagar; Berte, Benjamin; Lim, Han S; Komatsu, Yuki; Amraoui, Sana; Denis, Arnaud; Derval, Nicolas; Laurent, François; Montaudon, Michel; Hocini, Mélèze; Haïssaguerre, Michel; Jaïs, Pierre; Cochet, Hubert

    2015-04-01

    Epicardial ventricular tachycardia (VT) ablation is associated with risks of coronary artery (CA) and phrenic nerve (PN) injury. We investigated the role of multidetector computed tomography in visualizing CA and PN during VT ablation. Ninety-five consecutive patients (86 men; age, 57 ± 15) with VT underwent cardiac multidetector computed tomography. The PN detection rate and anatomic variability were analyzed. In 49 patients undergoing epicardial mapping, real-time multidetector computed tomographic integration was used to display CAs/PN locations in 3-dimensional mapping systems. Elimination of local abnormal ventricular activities (LAVAs) was used as ablation end point. The distribution of CAs/PN with respect to LAVA was analyzed and compared between VT etiologies. Multidetector computed tomography detected PN in 81 patients (85%). Epicardial LAVAs were observed in 44 of 49 patients (15 ischemic cardiomyopathy, 15 nonischemic cardiomyopathy, and 14 arrhythmogenic right ventricular cardiomyopathy) with a mean of 35 ± 37 LAVA points/patient. LAVAs were located within 1 cm from CAs and PN in 35 (80%) and 18 (37%) patients, respectively. The prevalence of LAVA adjacent to CAs was higher in nonischemic cardiomyopathy and arrhythmogenic right ventricular cardiomyopathy than in ischemic cardiomyopathy (100% versus 86% versus 53%; P < 0.01). The prevalence of LAVAs adjacent to PN was higher in nonischemic cardiomyopathy than in ischemic cardiomyopathy (93% versus 27%; P < 0.001). Epicardial ablation was performed in 37 patients (76%). Epicardial LAVAs could not be eliminated because of the proximity to CAs or PN in 8 patients (18%). The epicardial electrophysiological VT substrate is often close to CAs and PN in patients with nonischemic cardiomyopathy. High-resolution image integration is potentially useful to minimize risks of PN and CA injury during epicardial VT ablation. © 2015 American Heart Association, Inc.

  8. Influence of visual control, conduction, and central integration on static and dynamic balance in healthy older adults.

    Science.gov (United States)

    Perrin, P P; Jeandel, C; Perrin, C A; Béné, M C

    1997-01-01

    Aging is associated with decreased balance abilities, resulting in an increased risk of falls. In order to assess the visual, somatosensory, and central signals involved in balance control, sophisticated methods of posturography assessment have been developed, using static and dynamic tests, sometimes combined with electromyographic measurements. We applied such methods to a population of healthy older adults in order to assess the respective importance of each of these sensory inputs in aging individuals. Posture control parameters were recorded on a force-measuring platform in 41 healthy young (age 28.5 +/- 5.9 years) and 50 older (age 69.8 +/- 5.9 years) adults, using a static test and two dynamic tests performed by all individuals first with eyes open, then with eyes closed. The distance covered by the center of foot pressure, sway area, and anteroposterior oscillations were significantly higher, with eyes open or closed, in older people than in young subjects. Significant differences were noted in dynamic tests, with longer latency responses in the group of older people. Dynamic recordings in a sinusoidal test had a more regular pattern when performed with eyes open in both groups and evidenced significantly greater instability in older people. These data suggest that vision remains important in maintaining postural control while conduction and central integration become less efficient with age.
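
    For readers unfamiliar with these posturographic quantities, the sketch below shows one way such measures (path length of the centre of pressure, anteroposterior oscillation range, and sway area) can be computed from centre-of-pressure samples. Metric definitions vary between platforms and studies, so this is illustrative rather than the authors' exact formulas.

```python
# Common static posturography metrics from centre-of-pressure (COP) samples
# (illustrative definitions; actual platforms may compute these differently).
import numpy as np
from scipy.spatial import ConvexHull

def sway_metrics(cop_xy):
    """cop_xy: array of shape (n_samples, 2) with (mediolateral, anteroposterior)
    COP coordinates in cm. Returns path length, AP oscillation range, sway area."""
    steps = np.diff(cop_xy, axis=0)
    path_length = np.sum(np.linalg.norm(steps, axis=1))   # distance covered by COP
    ap_range = cop_xy[:, 1].max() - cop_xy[:, 1].min()    # anteroposterior oscillation
    sway_area = ConvexHull(cop_xy).volume                 # in 2-D, hull "volume" is an area
    return path_length, ap_range, sway_area

# Synthetic 30 s recording at 100 Hz, random-walk COP trace.
rng = np.random.default_rng(1)
cop = np.cumsum(rng.normal(scale=0.05, size=(3000, 2)), axis=0)
print(sway_metrics(cop))
```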

  9. The effect of visual scanning exercises integrated into physiotherapy in patients with unilateral spatial neglect poststroke: a matched-pair randomized control trial.

    Science.gov (United States)

    van Wyk, Andoret; Eksteen, Carina A; Rheeder, Paul

    2014-01-01

    Unilateral spatial neglect (USN) is a visual-perceptual disorder that entails the inability to perceive and integrate stimuli on one side of the body, resulting in the neglect of one side of the body. Stroke patients with USN present with extensive functional disability and extended durations of therapy input. To determine the effect of saccadic eye movement training with visual scanning exercises (VSEs) integrated with task-specific activities on USN poststroke. A matched-pair randomized control trial was conducted. Subjects were matched according to their functional activity level and allocated to either a control (n = 12) or an experimental group (n = 12). All patients received task-specific activities for a 4-week intervention period. The experimental group received saccadic eye movement training with VSE integrated with task-specific activities as an "add-on" intervention. Assessments were conducted weekly over the intervention period. Statistically significant differences were noted on the King-Devick Test (P = .021), Star Cancellation Test (P = .016), and Barthel Index (P = .004). Intensive saccadic eye movement training with VSE integrated with task-specific activities has a significant effect on USN in patients poststroke. Results of this study are supported by findings from previously reviewed literature in the sense that saccadic eye movement training with VSE as an intervention approach has a significant effect on the visual perceptual processing of participants with USN poststroke. The significantly improved visual perceptual processing translates to significantly better visual function and ability to perform activities of daily living following the stroke. © The Author(s) 2014.

  10. Visual search, visual streams, and visual architectures.

    Science.gov (United States)

    Green, M

    1991-10-01

    Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

  11. Interactive balance training integrating sensor-based visual feedback of movement performance: a pilot study in older adults.

    Science.gov (United States)

    Schwenk, Michael; Grewal, Gurtej S; Honarvar, Bahareh; Schwenk, Stefanie; Mohler, Jane; Khalsa, Dharma S; Najafi, Bijan

    2014-12-13

    Wearable sensor technology can accurately measure body motion and provide incentive feedback during exercise. The aim of this pilot study was to evaluate the effectiveness and user experience of a balance training program in older adults integrating data from wearable sensors into a human-computer interface designed for interactive training. Senior living community residents (mean age 84.6) with confirmed fall risk were randomized to an intervention (IG, n = 17) or control group (CG, n = 16). The IG underwent 4 weeks (twice a week) of balance training including weight shifting and virtual obstacle crossing tasks with visual/auditory real-time joint movement feedback using wearable sensors. The CG received no intervention. Outcome measures included changes in center of mass (CoM) sway, ankle and hip joint sway measured during eyes open (EO) and eyes closed (EC) balance tests at baseline and post-intervention. Ankle-hip postural coordination was quantified by a reciprocal compensatory index (RCI). Physical performance was quantified by the Alternate-Step-Test (AST), Timed-up-and-go (TUG), and gait assessment. User experience was measured by a standardized questionnaire. After the intervention, sway of CoM, hip, and ankle were reduced in the IG compared to the CG during both the EO and EC conditions (p = .007-.042). Improvement was obtained for AST (p = .037), TUG (p = .024), and fast gait speed (p = .010), but not normal gait speed (p = .264). Effect sizes were moderate for all outcomes. RCI did not change significantly. Users expressed a positive training experience including fun, safety, and helpfulness of sensor feedback. Results of this proof-of-concept study suggest that older adults at risk of falling can benefit from the balance training program. Study findings may help to inform future exercise interventions integrating wearable sensors for guided game-based training in home and community environments. Future studies should evaluate the

  12. Visualization of uncertainty and ensemble data: Exploration of climate modeling and weather forecast data with integrated ViSUS-CDAT systems

    International Nuclear Information System (INIS)

    Potter, Kristin; Pascucci, Valerio; Johhson, Chris; Wilson, Andrew; Bremer, Peer-Timo; Williams, Dean; Doutriaux, Charles

    2009-01-01

    Climate scientists and meteorologists are working towards a better understanding of atmospheric conditions and global climate change. To explore the relationships present in numerical predictions of the atmosphere, ensemble datasets are produced that combine time- and spatially-varying simulations generated using multiple numerical models, sampled input conditions, and perturbed parameters. These data sets mitigate as well as describe the uncertainty present in the data by providing insight into the effects of parameter perturbation, sensitivity to initial conditions, and inconsistencies in model outcomes. As such, massive amounts of data are produced, creating challenges both in data analysis and in visualization. This work presents an approach to understanding ensembles by using a collection of statistical descriptors to summarize the data, and displaying these descriptors using a variety of visualization techniques that are familiar to domain experts. The resulting techniques are integrated into the ViSUS/Climate Data and Analysis Tools (CDAT) system designed to provide a directly accessible, complex visualization framework to atmospheric researchers.
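
    A minimal sketch of the kind of per-grid-cell statistical descriptors such a summarization step might compute is shown below. It is an illustration under assumed data shapes (a stack of 2D fields, one per ensemble member), not the ViSUS/CDAT implementation itself.

```python
# Per-grid-cell statistical descriptors for an ensemble of 2D fields
# (illustrative sketch; not the ViSUS/CDAT code).
import numpy as np

def ensemble_descriptors(ensemble):
    """ensemble: array of shape (n_members, ny, nx), e.g. temperature fields from
    multiple model runs. Returns a dict of summary fields, one value per grid cell."""
    return {
        "mean": ensemble.mean(axis=0),
        "stddev": ensemble.std(axis=0),   # spread as a simple uncertainty proxy
        "min": ensemble.min(axis=0),
        "max": ensemble.max(axis=0),
        "iqr": np.percentile(ensemble, 75, axis=0) - np.percentile(ensemble, 25, axis=0),
    }

# Example: 20 synthetic members on a coarse 2-degree global grid.
rng = np.random.default_rng(2)
fake_ensemble = 15 + rng.normal(size=(20, 90, 180))
summaries = ensemble_descriptors(fake_ensemble)
print({name: field.shape for name, field in summaries.items()})
```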

  13. Visual-haptic integration with pliers and tongs: signal ‘weights’ take account of changes in haptic sensitivity caused by different tools

    Directory of Open Access Journals (Sweden)

    Chie eTakahashi

    2014-02-01

    Full Text Available When we hold an object while looking at it, estimates from visual and haptic cues to size are combined in a statistically optimal fashion, whereby the ‘weight’ given to each signal reflects their relative reliabilities. This allows object properties to be estimated more precisely than would otherwise be possible. Tools such as pliers and tongs systematically perturb the mapping between object size and the hand opening. This could complicate visual-haptic integration because it may alter the reliability of the haptic signal, thereby disrupting the determination of appropriate signal weights. To investigate this we first measured the reliability of haptic size estimates made with virtual pliers-like tools (created using a stereoscopic display and force-feedback robots) with different ‘gains’ between hand opening and object size. Haptic reliability in tool use was straightforwardly determined by a combination of sensitivity to changes in hand opening and the effects of tool geometry. The precise pattern of sensitivity to hand opening, which violated Weber’s law, meant that haptic reliability changed with tool gain. We then examined whether the visuo-motor system accounts for these reliability changes. We measured the weight given to visual and haptic stimuli when both were available, again with different tool gains, by measuring the perceived size of stimuli in which visual and haptic sizes were varied independently. The weight given to each sensory cue changed with tool gain in a manner that closely resembled the predictions of optimal sensory integration. The results are consistent with the idea that different tool geometries are modelled by the brain, allowing it to calculate not only the distal properties of objects felt with tools, but also the certainty with which those properties are known. These findings highlight the flexibility of human sensory integration and tool-use, and potentially provide an approach for optimising the
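
    The statistically optimal combination rule referred to here is the standard reliability-weighted model (Ernst and Banks, 2002), in which each cue's weight is proportional to the inverse of its variance and the combined estimate is never less precise than either cue alone. A small sketch with made-up numbers:

```python
# Reliability-weighted (maximum-likelihood) cue combination; the sizes and
# standard deviations below are made up purely for illustration.

def combine_cues(size_visual, sigma_visual, size_haptic, sigma_haptic):
    """Return the combined size estimate, the cue weights, and the predicted
    standard deviation of the combined estimate."""
    r_v, r_h = 1.0 / sigma_visual**2, 1.0 / sigma_haptic**2   # reliabilities (1/variance)
    w_v, w_h = r_v / (r_v + r_h), r_h / (r_v + r_h)           # normalised weights
    combined = w_v * size_visual + w_h * size_haptic
    sigma_combined = (1.0 / (r_v + r_h)) ** 0.5               # <= min(sigma_v, sigma_h)
    return combined, (w_v, w_h), sigma_combined

# A high-gain tool that makes the haptic signal noisier lowers its weight:
print(combine_cues(size_visual=50.0, sigma_visual=2.0,
                   size_haptic=54.0, sigma_haptic=4.0))
```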

  14. Integrating 2D Mouse Emulation with 3D Manipulation for Visualizations on a Multi-Touch Table

    NARCIS (Netherlands)

    Vlaming, Luc; Collins, Christopher; Hancock, Mark; Nacenta, Miguel; Isenberg, Tobias; Carpendale, Sheelagh

    2010-01-01

    We present the Rizzo, a multi-touch virtual mouse that has been designed to provide the fine grained interaction for information visualization on a multi-touch table. Our solution enables touch interaction for existing mouse-based visualizations. Previously, this transition to a multi-touch

  15. Facial fluid synthesis for assessment of acne vulgaris using luminescent visualization system through optical imaging and integration of fluorescent imaging system

    Science.gov (United States)

    Balbin, Jessie R.; Dela Cruz, Jennifer C.; Camba, Clarisse O.; Gozo, Angelo D.; Jimenez, Sheena Mariz B.; Tribiana, Aivje C.

    2017-06-01

    Acne vulgaris, commonly called acne, is a skin problem that occurs when oil and dead skin cells clog a person's pores, typically because hormonal changes make the skin oilier. The underlying problem is that people have no real assessment of their skin's sensitivity in terms of the facial fluids that tend to develop into acne vulgaris, which leads to further complications. This research aims to assess acne vulgaris using a luminescent visualization system through optical imaging and the integration of image processing algorithms. Specifically, it aims to design a prototype for facial fluid analysis using a luminescent visualization system through optical imaging and the integration of a fluorescent imaging system, and to classify the different facial fluids present in each person. Throughout the process, some structures and layers of the face are excluded, leaving only a mapped facial structure with acne regions. Facial fluid regions are distinguished from the acne regions as they are characterized differently.

  16. Visual-Haptic Integration: Cue Weights are Varied Appropriately, to Account for Changes in Haptic Reliability Introduced by Using a Tool

    Directory of Open Access Journals (Sweden)

    Chie Takahashi

    2011-10-01

    Full Text Available Tools such as pliers systematically change the relationship between an object's size and the hand opening required to grasp it. Previous work suggests the brain takes this into account, integrating visual and haptic size information that refers to the same object, independent of the similarity of the ‘raw’ visual and haptic signals (Takahashi et al., VSS 2009). Variations in tool geometry also affect the reliability (precision) of haptic size estimates, however, because they alter the change in hand opening caused by a given change in object size. Here, we examine whether the brain appropriately adjusts the weights given to visual and haptic size signals when tool geometry changes. We first estimated each cue's reliability by measuring size-discrimination thresholds in vision-alone and haptics-alone conditions. We varied haptic reliability using tools with different object-size:hand-opening ratios (1:1, 0.7:1, and 1.4:1). We then measured the weights given to vision and haptics with each tool, using a cue-conflict paradigm. The weight given to haptics varied with tool type in a manner that was well predicted by the single-cue reliabilities (MLE model; Ernst and Banks, 2002). This suggests that the process of visual-haptic integration appropriately accounts for variations in haptic reliability introduced by different tool geometries.

  17. Integrating Visualization Applications, such as ParaView, into HEP Software Frameworks for In-situ Event Displays

    Science.gov (United States)

    Lyon, A. L.; Kowalkowski, J. B.; Jones, C. D.

    2017-10-01

    ParaView is a high performance visualization application not widely used in High Energy Physics (HEP). It is a long-standing open source project led by Kitware and involves several Department of Energy (DOE) and Department of Defense (DOD) laboratories. Furthermore, it has been adopted by many DOE supercomputing centers and other sites. ParaView achieves unique speed and efficiency by using state-of-the-art techniques developed by the academic visualization community that are often not found in applications written by the HEP community. In-situ visualization of events, where event details are visualized during processing/analysis, is a common task for experiment software frameworks. Kitware supplies Catalyst, a library that enables scientific software to serve visualization objects to client ParaView viewers, yielding a real-time event display. Connecting ParaView to the Fermilab art framework will be described and the capabilities it brings discussed.

  18. Integrating Visualization Applications, such as ParaView, into HEP Software Frameworks for In-situ Event Displays

    Energy Technology Data Exchange (ETDEWEB)

    Lyon, A. L. [Fermilab]; Kowalkowski, J. B. [Fermilab]; Jones, C. D. [Fermilab]

    2017-11-22

    ParaView is a high performance visualization application not widely used in High Energy Physics (HEP). It is a long-standing open source project led by Kitware and involves several Department of Energy (DOE) and Department of Defense (DOD) laboratories. Furthermore, it has been adopted by many DOE supercomputing centers and other sites. ParaView achieves unique speed and efficiency by using state-of-the-art techniques developed by the academic visualization community that are often not found in applications written by the HEP community. In-situ visualization of events, where event details are visualized during processing/analysis, is a common task for experiment software frameworks. Kitware supplies Catalyst, a library that enables scientific software to serve visualization objects to client ParaView viewers, yielding a real-time event display. Connecting ParaView to the Fermilab art framework will be described and the capabilities it brings discussed.

  19. Integration of spectral domain optical coherence tomography with microperimetry generates unique datasets for the simultaneous identification of visual function and retinal structure in ophthalmological applications

    Science.gov (United States)

    Koulen, Peter; Gallimore, Gary; Vincent, Ryan D.; Sabates, Nelson R.; Sabates, Felix N.

    2011-06-01

    Conventional perimeters are used routinely in various eye disease states to evaluate the central visual field and to quantitatively map sensitivity. However, standard automated perimetry proves difficult for retina and specifically macular disease due to the need for central and steady fixation. Advances in instrumentation have led to microperimetry, which incorporates eye tracking for placement of macular sensitivity values onto an image of the macular fundus thus enabling a precise functional and anatomical mapping of the central visual field. Functional sensitivity of the retina can be compared with the observed structural parameters that are acquired with high-resolution spectral domain optical coherence tomography and by integration of scanning laser ophthalmoscope-driven imaging. Findings of the present study generate a basis for age-matched comparison of sensitivity values in patients with macular pathology. Microperimetry registered with detailed structural data performed before and after intervention treatments provides valuable information about macular function, disease progression and treatment success. This approach also allows for the detection of disease or treatment related changes in retinal sensitivity when visual acuity is not affected and can drive the decision making process in choosing different treatment regimens and guiding visual rehabilitation. This has immediate relevance for applications in central retinal vein occlusion, central serous choroidopathy, age-related macular degeneration, familial macular dystrophy and several other forms of retina related visual disability.

  20. The consummatory origins of visually guided reaching in human infants: a dynamic integration of whole-body and upper-limb movements.

    Science.gov (United States)

    Foroud, Afra; Whishaw, Ian Q

    2012-06-01

    Reaching-to-eat (skilled reaching) is a natural behaviour that involves reaching for, grasping and withdrawing a target to be placed into the mouth for eating. It is an action performed daily by adults and is among the first complex behaviours to develop in infants. During development, visually guided reaching becomes increasingly refined to the point that grasping of small objects with precision grips of the digits occurs at about one year of age. Integration of the hand, upper-limbs, and whole body are required for successful reaching, but the ontogeny of this integration has not been described. The present longitudinal study used Laban Movement Analysis, a behavioural descriptive method, to investigate the developmental progression of the use and integration of axial, proximal, and distal movements performed during visually guided reaching. Four infants (from 7 to 40 weeks age) were presented with graspable objects (toys or food items). The first prereaching stage was associated with activation of mouth, limb, and hand movements to a visually presented target. Next, reaching attempts consisted of first, the advancement of the head with an opening mouth and then with the head, trunk and opening mouth. Eventually, the axial movements gave way to the refined action of one upper-limb supported by axial adjustments. These findings are discussed in relation to the biological objective of reaching, the evolutionary origins of reaching, and the decomposition of reaching after neurological injury. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Tagging effects of passive integrated transponder and visual implant elastomer on the small-bodied white sands pupfish (Cyprinodon tularosa)

    Science.gov (United States)

    Peterson, Damon; Trantham, Randi B.; Trantham, Tulley G.; Caldwell, Colleen A.

    2018-01-01

    One of the greatest limiting factors of studies designed to obtain growth, movement, and survival in small-bodied fishes is the selection of a viable tag. The tag must be relatively small with respect to body size as to impart minimal sub-lethal effects on growth and mobility, as well as be retained throughout the life of the fish or duration of the study. Thus, body size of the model species becomes a major limiting factor; yet few studies have obtained empirical evidence of the minimum fish size and related tagging effects. The probability of surviving a tagging event was quantified in White Sands pupfish (Cyprinodon tularosa) across a range of sizes (19–60 mm) to address the hypothesis that body size predicts tagging survival. We compared tagging related mortality, individual taggers, growth, and tag retention in White Sands pupfish implanted with 8-mm passive integrated transponder (PIT), visual implant elastomer (VIE), and control (handled similarly, but no tag implantation) over a 75 d period. Initial body weight was a good predictor of the probability of survival in PIT- and VIE-tagged fish. As weight increased by 1 g, the fish were 4.73 times more likely to survive PIT-tag implantation compared to the control fish with an estimated suitable tagging size at 1.1 g (TL: 39.29 ± 0.41 mm). Likewise, VIE-tagged animals were 2.27 times more likely to survive a tagging event compared to the control group for every additional 1 g with an estimated size suitable for tagging of 0.9 g (TL: 36.9 ± 0.36 mm) fish. Growth rates of PIT- and VIE-tagged White Sands pupfish were similar to the control groups. This research validated two popular tagging methodologies in the White Sands pupfish, thus providing a valuable tool for characterizing vital rates in other small-bodied fishes.
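
    The reported "times more likely to survive per additional gram" figures are odds ratios of the kind produced by a logistic regression of survival on body weight. The sketch below shows the general approach on synthetic data; it is not the study's dataset, model specification, or exact estimates.

```python
# Estimating a tagging-survival odds ratio per gram of body weight with logistic
# regression (synthetic data; illustrative of the approach, not the study's model).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
weight_g = rng.uniform(0.3, 3.0, size=200)            # fish body weight in grams
# Simulate survival with an assumed log-odds slope of ~1.55 (odds ratio ~4.7/gram).
logit_p = -1.0 + 1.55 * weight_g
survived = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(weight_g)                         # intercept + weight predictor
result = sm.Logit(survived, X).fit(disp=False)
odds_ratio_per_gram = np.exp(result.params[1])        # exponentiated slope = odds ratio
print(f"Estimated odds ratio per additional gram: {odds_ratio_per_gram:.2f}")
```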

  2. Two visual targets for the price of one? : Pupil dilation shows reduced mental effort through temporal integration

    NARCIS (Netherlands)

    Wolff, Michael J; Scholz, Sabine; Akyürek, Elkan G; van Rijn, Hedderik

    In dynamic sensory environments, successive stimuli may be combined perceptually and represented as a single, comprehensive event by means of temporal integration. Such perceptual segmentation across time is intuitively plausible. However, the possible costs and benefits of temporal integration in

  3. Impaired integration of object knowledge and visual input in a case of ventral simultanagnosia with bilateral damage to area V4.

    Science.gov (United States)

    Leek, E Charles; d'Avossa, Giovanni; Tainturier, Marie-Josèphe; Roberts, Daniel J; Yuen, Sung Lai; Hu, Mo; Rafal, Robert

    2012-01-01

    This study examines how brain damage can affect the cognitive processes that support the integration of sensory input and prior knowledge during shape perception. It is based on the first detailed study of acquired ventral simultanagnosia, which was found in a patient (M.T.) with posterior occipitotemporal lesions encompassing V4 bilaterally. Despite showing normal object recognition for single items in both accuracy and response times (RTs), and intact low-level vision assessed across an extensive battery of tests, M.T. was impaired in object identification with overlapping figures displays. Task performance was modulated by familiarity: Unlike controls, M.T. was faster with overlapping displays of abstract shapes than with overlapping displays of common objects. His performance with overlapping common object displays was also influenced by both the semantic relatedness and visual similarity of the display items. These findings challenge claims that visual perception is driven solely by feedforward mechanisms and show how brain damage can selectively impair high-level perceptual processes supporting the integration of stored knowledge and visual sensory input.

  4. Integration of interactive three-dimensional image post-processing software into undergraduate radiology education effectively improves diagnostic skills and visual-spatial ability.

    Science.gov (United States)

    Rengier, Fabian; Häfner, Matthias F; Unterhinninghofen, Roland; Nawrotzki, Ralph; Kirsch, Joachim; Kauczor, Hans-Ulrich; Giesel, Frederik L

    2013-08-01

    Integrating interactive three-dimensional post-processing software into undergraduate radiology teaching might be a promising approach to synergistically improve both visual-spatial ability and radiological skills, thereby reducing students' deficiencies in image interpretation. The purpose of this study was to test our hypothesis that a hands-on radiology course for medical students using interactive three-dimensional image post-processing software improves radiological knowledge, diagnostic skills and visual-spatial ability. A hands-on radiology course was developed using interactive three-dimensional image post-processing software. The course consisted of seven seminars held on a weekly basis. The 25 participating fourth- and fifth-year medical students learnt to systematically analyse cross-sectional imaging data and correlated the two-dimensional images with three-dimensional reconstructions. They were instructed by experienced radiologists and collegiate tutors. The improvement in radiological knowledge, diagnostic skills and visual-spatial ability was assessed immediately before and after the course by multiple-choice tests comprising 64 questions each. The Wilcoxon signed rank test for paired samples was applied. The total number of correctly answered questions improved significantly from 36.9±4.8 to 49.5±5.4, and visual-spatial ability improved by 11.3%. Integrating interactive three-dimensional image post-processing software into undergraduate radiology education effectively improves radiological reasoning, diagnostic skills and visual-spatial ability, and thereby even diagnostic skills for imaging modalities not included in the course. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Integration

    DEFF Research Database (Denmark)

    Emerek, Ruth

    2004-01-01

    The contribution discusses the different understandings of integration in Denmark, and what can be understood by successful integration...

  6. Oak Ridge Bio-surveillance Toolkit (ORBiT): Integrating Big-Data Analytics with Visual Analysis for Public Health Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Ramanathan, Arvind [ORNL]; Pullum, Laura L [ORNL]; Steed, Chad A [ORNL]; Chennubhotla, Chakra [University of Pittsburgh School of Medicine, Pittsburgh PA]; Quinn, Shannon [University of Pittsburgh School of Medicine, Pittsburgh PA]

    2013-01-01

    In this position paper, we describe the design and implementation of the Oak Ridge Bio-surveillance Toolkit (ORBiT): a collection of novel statistical and machine learning tools implemented for (1) integrating heterogeneous traditional (e.g. emergency room visits, prescription sales data, etc.) and non-traditional (social media such as Twitter and Instagram) data sources, (2) analyzing large-scale datasets and (3) presenting the results from the analytics as a visual interface for the end-user to interact and provide feedback. We present examples of how ORBiT can be used to summarize extremely large-scale datasets effectively and how user interactions can translate into the data analytics process for bio-surveillance. We also present a strategy to estimate parameters relevant to disease spread models from near real-time data feeds and show how these estimates can be integrated with disease spread models for large-scale populations. We conclude with a perspective on how integrating data and visual analytics could lead to better forecasting and prediction of disease spread as well as improved awareness of disease-susceptible regions.
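
    As a generic illustration of estimating disease-spread parameters from a stream of case counts, the sketch below fits the transmission rate of a simple SIR model by least squares. It is a hypothetical toy with made-up population sizes and rates, not ORBiT's actual estimation code.

```python
# Toy example: fit the transmission rate beta of an SIR model to observed
# infection counts (illustrative only; not ORBiT's estimation strategy).
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize_scalar

def sir(y, t, beta, gamma, n):
    s, i, r = y
    return [-beta * s * i / n, beta * s * i / n - gamma * i, gamma * i]

def infected_curve(beta, t, n=100_000, i0=10, gamma=0.1):
    """Number of infected individuals over time for a given transmission rate."""
    y0 = [n - i0, i0, 0]
    return odeint(sir, y0, t, args=(beta, gamma, n))[:, 1]

# Synthetic "observed" infections generated with beta = 0.3 plus 5% noise.
t = np.arange(0, 60)
rng = np.random.default_rng(4)
observed = infected_curve(0.3, t) * (1 + 0.05 * rng.standard_normal(len(t)))

loss = lambda beta: np.sum((infected_curve(beta, t) - observed) ** 2)
best = minimize_scalar(loss, bounds=(0.05, 1.0), method="bounded")
print(f"Estimated beta: {best.x:.3f}")
```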

  7. A Digital Mixed Methods Research Design: Integrating Multimodal Analysis with Data Mining and Information Visualization for Big Data Analytics

    Science.gov (United States)

    O'Halloran, Kay L.; Tan, Sabine; Pham, Duc-Son; Bateman, John; Vande Moere, Andrew

    2018-01-01

    This article demonstrates how a digital environment offers new opportunities for transforming qualitative data into quantitative data in order to use data mining and information visualization for mixed methods research. The digital approach to mixed methods research is illustrated by a framework which combines qualitative methods of multimodal…

  8. Fuels planning: science synthesis and integration; social issues fact sheet 13: Strategies for managing fuels and visual quality

    Science.gov (United States)

    Christine Esposito

    2006-01-01

    The public's acceptance of forest management practices, including fuels reduction, is heavily based on how forests look. Fuels managers can improve their chances of success by considering aesthetics when making management decisions. This fact sheet reviews a three-part general strategy for managing fuels and visual quality: planning, implementation, and monitoring...

  9. Integrated analysis and visualization of group differences in structural and functional brain connectivity: Applications in typical ageing and schizophrenia

    NARCIS (Netherlands)

    C.D. Langen (Carolyn); T.J.H. White (Tonya); M.A. Ikram (Arfan); M.W. Vernooij (Meike); W.J. Niessen (Wiro)

    2015-01-01

    Structural and functional brain connectivity are increasingly used to identify and analyze group differences in studies of brain disease. This study presents methods to analyze uni- and bi-modal brain connectivity and evaluate their ability to identify differences. Novel visualizations of

  10. Integrated Analysis and Visualization of Group Differences in Structural and Functional Brain Connectivity : Applications in Typical Ageing and Schizophrenia

    NARCIS (Netherlands)

    Langen, C.D.; White, T.; Ikram, M.A.; Vernooij, M.W.; Niessen, W.J.

    2015-01-01

    Structural and functional brain connectivity are increasingly used to identify and analyze group differences in studies of brain disease. This study presents methods to analyze uni- and bi-modal brain connectivity and evaluate their ability to identify differences. Novel visualizations of

  11. Motor-Auditory-Visual Integration: The Role of the Human Mirror Neuron System in Communication and Communication Disorders

    Science.gov (United States)

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an…

  12. Integrated Analysis and Visualization of Group Differences in Structural and Functional Brain Connectivity: Applications in Typical Ageing and Schizophrenia.

    Directory of Open Access Journals (Sweden)

    Carolyn D Langen

    Full Text Available Structural and functional brain connectivity are increasingly used to identify and analyze group differences in studies of brain disease. This study presents methods to analyze uni- and bi-modal brain connectivity and evaluate their ability to identify differences. Novel visualizations of significantly different connections comparing multiple metrics are presented. On the global level, "bi-modal comparison plots" show the distribution of uni- and bi-modal group differences and the relationship between structure and function. Differences between brain lobes are visualized using "worm plots". Group differences in connections are examined with an existing visualization, the "connectogram". These visualizations were evaluated in two proof-of-concept studies: (1) middle-aged versus elderly subjects; and (2) patients with schizophrenia versus controls. Each included two measures derived from diffusion weighted images and two from functional magnetic resonance images. The structural measures were minimum cost path between two anatomical regions according to the "Statistical Analysis of Minimum cost path based Structural Connectivity" method and the average fractional anisotropy along the fiber. The functional measures were Pearson's correlation and partial correlation of mean regional time series. The relationship between structure and function was similar in both studies. Uni-modal group differences varied greatly between connectivity types. Group differences were identified in both studies globally, within brain lobes and between regions. In the aging study, minimum cost path was highly effective in identifying group differences on all levels; fractional anisotropy and mean correlation showed smaller differences on the brain lobe and regional levels. In the schizophrenia study, minimum cost path and fractional anisotropy showed differences on the global level and within brain lobes; mean correlation showed small differences on the lobe level. Only

  13. [Integrity].

    Science.gov (United States)

    Gómez Rodríguez, Rafael Ángel

    2014-01-01

    To say that someone possesses integrity is to claim that that person is largely predictable in their responses to specific situations, and that he or she can judge prudently and act correctly. There is a close interrelationship between integrity and autonomy, and autonomy rests on the deeper moral claim of all humans to integrity of the person. Integrity has two senses of significance for medical ethics: one refers to the integrity of the person in its bodily, psychosocial and intellectual elements; in the second sense, integrity is a virtue. Another facet of integrity of the person is the integrity of the values we cherish and espouse. The physician must be a person of integrity if the integrity of the patient is to be safeguarded. Respect for autonomy has reduced violations in the past, but the character and virtues of the physician are the ultimate safeguard of the patient's autonomy. A very important field in medicine is scientific research. It is the character of the investigator that determines the moral quality of research. The problem arises when legitimate self-interests are replaced by selfish ones, particularly when human subjects are involved. The final safeguard of the moral quality of research is the character and conscience of the investigator. Teaching must be relevant in the scientific field, but the most effective way to teach virtue ethics is through the example of a respected scientist.

  14. Propuesta para la enseñanza del concepto de integral, un acercamiento visual con GeoGebra

    OpenAIRE

    López, Armando

    2010-01-01

    Teaching the concept of the integral in upper secondary education is considered a key element of the Integral Calculus course of the Reforma Integral del Bachillerato, as indicated by the study programmes of the technological baccalaureate, SEP (2008). The new programmes demand a methodology centred on learning, where prior knowledge must be the preamble for delving into the study of key concepts, so it is important to propose activities to address these concepts...

  15. Public perceptions of west-side forests: improving visual impact assessments and designing thinnings and harvests for scenic integrity

    Science.gov (United States)

    Robert G. Ribe

    2013-01-01

    Perceptions of public forests’ acceptability can be influenced by aesthetic qualities, at both broad and project levels, affecting managers’ social license to act. Legal and methodological issues related to measuring and managing forest aesthetics in NEPA and NFMA decision-making are discussed. It is argued that conventional visual impact assessments—using...

  16. SensorDB: a virtual laboratory for the integration, visualization and analysis of varied biological sensor data.

    Science.gov (United States)

    Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T

    2015-01-01

    To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.
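
    As an illustration of the kind of interactive time series aggregation such a platform serves, a minimal sketch follows; this is generic pandas code, not SensorDB's actual API, and the sensor name and values are invented.

        import numpy as np
        import pandas as pd

        idx = pd.date_range("2015-01-01", periods=24 * 60, freq="min")            # one day at 1-minute resolution
        readings = pd.Series(20 + np.random.randn(len(idx)), index=idx, name="canopy_temp_C")

        hourly = readings.resample("1h").agg(["mean", "min", "max"])               # down-sample for visualization
        print(hourly.head())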

  17. Defective imitation of finger configurations in patients with damage in the right or left hemispheres: An integration disorder of visual and somatosensory information?

    Science.gov (United States)

    Okita, Manabu; Yukihiro, Takashi; Miyamoto, Kenzo; Morioka, Shu; Kaba, Hideto

    2017-04-01

    To explore the mechanism underlying the imitation of finger gestures, we devised a simple imitation task in which the patients were instructed to replicate finger configurations in two conditions: one in which they could see their hand (visual feedback: VF) and one in which they could not see their hand (non-visual feedback: NVF). Patients with left brain damage (LBD) or right brain damage (RBD), respectively, were categorized into two groups based on their scores on the imitation task in the NVF condition: the impaired imitation groups (I-LBD and I-RBD) who failed two or more of the five patterns and the control groups (C-LBD and C-RBD) who made one or no errors. We also measured the movement-production times for imitation. The I-RBD group performed significantly worse than the C-RBD group even in the VF condition. In contrast, the I-LBD group was selectively impaired in the NVF condition. The I-LBD group performed the imitations at a significantly slower rate than the C-LBD group in both the VF and NVF conditions. These results suggest that impaired imitation in patients with LBD is partly due to an abnormal integration of visual and somatosensory information based on the task specificity of the NVF condition. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Reduced ventral cingulum integrity and increased behavioral problems in children with isolated optic nerve hypoplasia and mild to moderate or no visual impairment.

    Science.gov (United States)

    Webb, Emma A; O'Reilly, Michelle A; Clayden, Jonathan D; Seunarine, Kiran K; Dale, Naomi; Salt, Alison; Clark, Chris A; Dattani, Mehul T

    2013-01-01

    To assess the prevalence of behavioral problems in children with isolated optic nerve hypoplasia, mild to moderate or no visual impairment, and no developmental delay. To identify white matter abnormalities that may provide neural correlates for any behavioral abnormalities identified. Eleven children with isolated optic nerve hypoplasia (mean age 5.9 years) underwent behavioral assessment and brain diffusion tensor imaging. Twenty-four controls with isolated short stature (mean age 6.4 years) underwent MRI, 11 of whom also completed behavioral assessments. Fractional anisotropy images were processed using tract-based spatial statistics. Partial correlation between ventral cingulum, corpus callosum and optic radiation fractional anisotropy, and child behavioral checklist scores (controlled for age at scan and sex) was performed. Children with optic nerve hypoplasia had significantly higher scores on the child behavioral checklist (p<0.05) than controls (4 had scores in the clinically significant range). Ventral cingulum, corpus callosum and optic radiation fractional anisotropy were significantly reduced in children with optic nerve hypoplasia. Right ventral cingulum fractional anisotropy correlated with total and externalising child behavioral checklist scores (r = -0.52, p<0.02; r = -0.46, p<0.049, respectively). There were no significant correlations between left ventral cingulum, corpus callosum or optic radiation fractional anisotropy and behavioral scores. Our findings suggest that children with optic nerve hypoplasia and mild to moderate or no visual impairment require behavioral assessment to determine the presence of clinically significant behavioral problems. Reduced structural integrity of the ventral cingulum correlated with behavioral scores, suggesting that these white matter abnormalities may be clinically significant. The presence of reduced fractional anisotropy in the optic radiations of children with mild to moderate or no visual impairment raises questions as to the pathogenesis of these changes which will need to be addressed by future studies.

  19. Integration of interactive three-dimensional image post-processing software into undergraduate radiology education effectively improves diagnostic skills and visual-spatial ability

    Energy Technology Data Exchange (ETDEWEB)

    Rengier, Fabian, E-mail: fabian.rengier@web.de [University Hospital Heidelberg, Department of Diagnostic and Interventional Radiology, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany); Häfner, Matthias F. [University Hospital Heidelberg, Department of Radiation Oncology, Im Neuenheimer Feld 400, 69120 Heidelberg (Germany); Unterhinninghofen, Roland [Karlsruhe Institute of Technology (KIT), Institute for Anthropomatics, Department of Informatics, Adenauerring 2, 76131 Karlsruhe (Germany); Nawrotzki, Ralph; Kirsch, Joachim [University of Heidelberg, Institute of Anatomy and Cell Biology, Im Neuenheimer Feld 307, 69120 Heidelberg (Germany); Kauczor, Hans-Ulrich [University Hospital Heidelberg, Department of Diagnostic and Interventional Radiology, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany); Giesel, Frederik L. [University of Heidelberg, Institute of Anatomy and Cell Biology, Im Neuenheimer Feld 307, 69120 Heidelberg (Germany); University Hospital Heidelberg, Department of Nuclear Medicine, Im Neuenheimer Feld 400, 69120 Heidelberg (Germany)

    2013-08-15

    Purpose: Integrating interactive three-dimensional post-processing software into undergraduate radiology teaching might be a promising approach to synergistically improve both visual-spatial ability and radiological skills, thereby reducing students’ deficiencies in image interpretation. The purpose of this study was to test our hypothesis that a hands-on radiology course for medical students using interactive three-dimensional image post-processing software improves radiological knowledge, diagnostic skills and visual-spatial ability. Materials and methods: A hands-on radiology course was developed using interactive three-dimensional image post-processing software. The course consisted of seven seminars held on a weekly basis. The 25 participating fourth- and fifth-year medical students learnt to systematically analyse cross-sectional imaging data and correlated the two-dimensional images with three-dimensional reconstructions. They were instructed by experienced radiologists and collegiate tutors. The improvement in radiological knowledge, diagnostic skills and visual-spatial ability was assessed immediately before and after the course by multiple-choice tests comprising 64 questions each. Wilcoxon signed rank test for paired samples was applied. Results: The total number of correctly answered questions improved from 36.9 ± 4.8 to 49.5 ± 5.4 (p < 0.001) which corresponded to a mean improvement of 12.6 (95% confidence interval 9.9–15.3) or 19.8%. Radiological knowledge improved by 36.0% (p < 0.001), diagnostic skills for cross-sectional imaging by 38.7% (p < 0.001), diagnostic skills for other imaging modalities – which were not included in the course – by 14.0% (p = 0.001), and visual-spatial ability by 11.3% (p < 0.001). Conclusion: The integration of interactive three-dimensional image post-processing software into undergraduate radiology education effectively improves radiological reasoning, diagnostic skills and visual-spatial ability, and thereby
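
    The paired pre/post comparison reported above can be reproduced in outline with a Wilcoxon signed rank test; the score lists below are invented placeholders, not the study's data.

        from scipy.stats import wilcoxon

        pre_scores  = [34, 38, 31, 40, 36, 35, 42, 33, 37, 39]   # hypothetical pre-course test scores
        post_scores = [47, 52, 45, 55, 49, 50, 56, 44, 51, 48]   # hypothetical post-course test scores

        stat, p = wilcoxon(pre_scores, post_scores)               # paired, non-parametric comparison
        print(f"W = {stat:.1f}, p = {p:.4f}")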

  20. Integration of interactive three-dimensional image post-processing software into undergraduate radiology education effectively improves diagnostic skills and visual-spatial ability

    International Nuclear Information System (INIS)

    Rengier, Fabian; Häfner, Matthias F.; Unterhinninghofen, Roland; Nawrotzki, Ralph; Kirsch, Joachim; Kauczor, Hans-Ulrich; Giesel, Frederik L.

    2013-01-01

    Purpose: Integrating interactive three-dimensional post-processing software into undergraduate radiology teaching might be a promising approach to synergistically improve both visual-spatial ability and radiological skills, thereby reducing students’ deficiencies in image interpretation. The purpose of this study was to test our hypothesis that a hands-on radiology course for medical students using interactive three-dimensional image post-processing software improves radiological knowledge, diagnostic skills and visual-spatial ability. Materials and methods: A hands-on radiology course was developed using interactive three-dimensional image post-processing software. The course consisted of seven seminars held on a weekly basis. The 25 participating fourth- and fifth-year medical students learnt to systematically analyse cross-sectional imaging data and correlated the two-dimensional images with three-dimensional reconstructions. They were instructed by experienced radiologists and collegiate tutors. The improvement in radiological knowledge, diagnostic skills and visual-spatial ability was assessed immediately before and after the course by multiple-choice tests comprising 64 questions each. Wilcoxon signed rank test for paired samples was applied. Results: The total number of correctly answered questions improved from 36.9 ± 4.8 to 49.5 ± 5.4 (p < 0.001) which corresponded to a mean improvement of 12.6 (95% confidence interval 9.9–15.3) or 19.8%. Radiological knowledge improved by 36.0% (p < 0.001), diagnostic skills for cross-sectional imaging by 38.7% (p < 0.001), diagnostic skills for other imaging modalities – which were not included in the course – by 14.0% (p = 0.001), and visual-spatial ability by 11.3% (p < 0.001). Conclusion: The integration of interactive three-dimensional image post-processing software into undergraduate radiology education effectively improves radiological reasoning, diagnostic skills and visual-spatial ability, and thereby

  1. IIS--Integrated Interactome System: a web-based platform for the annotation, analysis and visualization of protein-metabolite-gene-drug interactions by integrating a variety of data sources and tools.

    Science.gov (United States)

    Carazzolle, Marcelo Falsarella; de Carvalho, Lucas Miguel; Slepicka, Hugo Henrique; Vidal, Ramon Oliveira; Pereira, Gonçalo Amarante Guimarães; Kobarg, Jörg; Meirelles, Gabriela Vaz

    2014-01-01

    High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives in the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and result in a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) Annotation module, which assigns annotations from several databases for the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather novel identified interactions, protein and metabolite expression/concentration levels, subcellular localization and computed topological metrics, GO biological processes and KEGG pathways enrichment. This module generates a XGMML file that can be imported into Cytoscape or be visualized directly on the web. We have developed IIS by the integration of diverse databases following the need of appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two
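
    A minimal sketch of the network-building step described above, using networkx rather than IIS itself; networkx has no XGMML writer, so GraphML (which Cytoscape can also import) stands in for the XGMML export, and the gene names and attribute values are illustrative only.

        import networkx as nx

        g = nx.Graph()
        g.add_node("BRCA1", expression=2.1)
        g.add_node("BARD1", expression=0.8)
        g.add_node("TP53", expression=1.4)
        g.add_edge("BRCA1", "BARD1", interaction="physical")
        g.add_edge("BRCA1", "TP53", interaction="physical")

        nx.write_graphml(g, "interactome.graphml")   # file importable into Cytoscape
        print(dict(g.degree()))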

  2. Three-dimensional visualization of functional brain tissue and functional magnetic resonance imaging-integrated neuronavigation in the resection of brain tumor adjacent to motor cortex

    International Nuclear Information System (INIS)

    Han Tong; Cui Shimin; Tong Xiaoguang; Liu Li; Xue Kai; Liu Meili; Liang Siquan; Zhang Yunting; Zhi Dashi

    2011-01-01

    Objective: To assess the value of three-dimensional visualization of functional brain tissue and functional magnetic resonance imaging (fMRI)-integrated neuronavigation in the resection of brain tumors adjacent to the motor cortex. Methods: Sixty patients with tumors located near the central sulcus were enrolled. Thirty patients were randomly assigned to the function group and 30 to the control group. Patients in the function group underwent fMRI to localize functional brain tissue, and this functional information was transferred to the neurosurgical navigator. Patients in the control group underwent surgery with navigation but without functional information. Therapeutic effect, excision rate, improvement of motor function, and quality of survival during follow-up were analyzed. Results: Visualization of functional brain tissue and fMRI-integrated neuronavigation were accomplished in all patients in the function group. The locations of the tumor, central sulcus and motor cortex were marked during the operation. The fMRI-integrated information played an important role both pre- and post-operation. Pre-operation: designing the location of the skin flap and bone window, determining the relationship between the tumor and the motor cortex, and designing the resection pathway. Post-operation: real-time navigation of the relationship between the tumor and the motor cortex, and assistance in localizing the motor cortex using intraoperative ultrasound to correct displacement caused by CSF outflow and tumor collapse. Patients in the function group had better results than patients in the control group in therapeutic effect (u=2.646, P=0.008), excision rate (χ²=7.200, P<0.01), improvement of motor function (u=2.231, P=0.026), and quality of survival (KPS u_c=2.664, P=0.008; Zubrod-ECOG-WHO u_c=2.135, P=0.033). Conclusions: Using preoperative three-dimensional visualization of functional brain tissue and fMRI-integrated neuronavigation technology, combining intraoperative accurate

  3. Integral data test of HENDL1.0/MG and visualBUS with neutronics shielding experiments. Pt.1

    International Nuclear Information System (INIS)

    Gao Chunjing; Deng Tieru; Xu Dezheng; Li Jingjing; Wu Yican

    2004-01-01

    HENDL1.0/MG, a multi-group working library of the Hybrid Evaluated Nuclear Data Library, was developed in-house by the FDS Team at ASIPP (Institute of Plasma Physics, Chinese Academy of Sciences) on the basis of several national data libraries. To validate and qualify the process of producing HENDL1.0/MG, simulation calculations of a series of existing spherical-shell benchmark experiments (Al, Mo, Co, Ti, Mn, W, Be and V) have been performed with HENDL1.0/MG and the multifunctional neutronics code system VisualBUS, also developed in-house by the FDS Team. (authors)

  4. INTEGRATION ASPECTS OF THE LANGUAGE OF THE MAP IN THE VISUALIZATION OF INFORMATION IN THE INTERNET ERA

    Directory of Open Access Journals (Sweden)

    A. K. Suvorov

    2014-01-01

    Full Text Available The article addresses the development of new principles for the language of maps associated with the use of the Internet, computers and mobile devices. It is shown that mapping in modern society, in the Internet era, is based on ready-made visual images of reality, on the realization of people's creative potential through the manipulation of these images, on the posting of personal information on the Internet, and on the execution of design, mapping and other work on remote services over a Web connection. The hermeneutic principles of mapping developed by the author are described.

  5. ARTEFACTOS DIALÓGICOS: UNA PROPUESTA PARA INTEGRAR LA EDUCACIÓN DE ARTES MUSICALES Y VISUALES (DIALOGIC ARTIFACTS: A PROPOSAL TO INTEGRATE THE EDUCATION OF MUSICAL AND VISUAL ARTS

    Directory of Open Access Journals (Sweden)

    Arenas Navarrete Mario

    2011-08-01

    Full Text Available Resumen: The purpose of this essay is to propose the creation of units that integrate the musical and visual arts through the participation of students of approximately 12 to 17 years of age in the creation of "Artefactos Dialógicos" (Dialogic Artifacts), that is, kinetic, interactive sound sculptures. The particularity of these installation-sculptures is that they establish and make explicit various types of dialogue with nature. They represent the crystallization of a process begun in 1986 in the Department of Music of the Universidad de La Serena, Chile, characterized by the defense of disciplinary transversality in opposition to specialism. University students and children from its Experimental School of Music have taken part, along with teachers, visual artists, composers and researchers. Expecting students to build these artifacts requires, as a prerequisite, their empowerment and the development of their capacity for agency and creativity so that, in collaboration with teachers of different artistic, scientific and humanistic subjects, they bring the material and structural configuration of these devices into the aesthetic gaze, thereby integrating rationalism and expressiveness. All of this is viewed through the epistemic filter provided by intercultural education, so as to capture and project ancestry, gestures, modes, iconographies, idiolects, identities and heritage. Abstract: In this essay, we propose to create units of integration of the musical and visual arts through the participation of students ranging approximately from 12 to 17 years of age, for the creation of the "Dialogical Artifacts", i.e., kinetic and interactive sound sculptures. The particularity of these artifacts lies in the fact that they establish and make explicit different types of dialogues with nature from a transversal perspective of the curriculum. This initiative was taken for the first

  6. Integrating visual dietary documentation in mobile-phone-based self-management application for adolescents with type 1 diabetes.

    Science.gov (United States)

    Frøisland, Dag Helge; Årsand, Eirik

    2015-05-01

    The goal of modern diabetes treatment is to a large extent focused on self-management to achieve and maintain a healthy, low HbA1c. Despite all new technical diabetes tools and support, including advanced blood glucose meters and insulin delivery systems, diabetes patients still struggle to achieve international treatment goals for HbA1c. We developed a mobile-phone-based tool to capture and visualize adolescents' food intake. Our aim was to improve understanding of carbohydrate counting and to facilitate doctor-adolescent communication with regard to daily treatment. Furthermore, we wanted to evaluate the effect of the designed tool with regard to empowerment, self-efficacy, and self-treatment. The study concludes that implementing a visualization tool is an important contribution to helping young people understand the basics of diabetes and to empowering them to define their treatment challenges. By capturing a picture of their own food, a person's sense of being in charge can be strengthened and better self-treatment achieved. © 2015 Diabetes Technology Society.

  7. The EOP Visualization Module Integrated into the Plasma On-Line Nuclear Power Plant Safety Monitoring and Assessment System

    International Nuclear Information System (INIS)

    Hornaes, Arne; Hulsund, John Einar; Vegh, Janos; Major, Csaba; Horvath, Csaba; Lipcsei, Sandor; Kapocs, Gyoergy

    2001-01-01

    An ambitious project to replace the unit information systems (UISs) at the Hungarian Paks nuclear power plant was started in 1998-99. The basic aim of the reconstruction project is to install a modern, distributed UIS architecture on all four Paks VVER-440 units. The new UIS includes an on-line plant safety monitoring and assessment system (PLASMA), which contains a critical safety functions monitoring module and provides extensive operator support during the execution of the new, symptom-oriented emergency operating procedures (EOPs). PLASMA includes a comprehensive EOP visualization module, based on the COPMA-III procedure-handling software developed by the Organization for Economic Cooperation and Development, Halden Reactor Project. Intranet technology is applied for the presentation of the EOPs with the use of a standard hypertext markup language (HTML) browser as a visualization tool. The basic design characteristics of the system, with a detailed description of its user interface and functions of the new EOP display module, are presented

  8. Gender differences in the processing of standard emotional visual stimuli: integrating ERP and fMRI results

    Science.gov (United States)

    Yang, Lei; Tian, Jie; Wang, Xiaoxiang; Hu, Jin

    2005-04-01

    The comprehensive understanding of human emotion processing needs consideration both in the spatial distribution and the temporal sequencing of neural activity. The aim of our work is to identify brain regions involved in emotional recognition as well as to follow the time sequence in the millisecond-range resolution. The effect of activation upon visual stimuli in different gender by International Affective Picture System (IAPS) has been examined. Hemodynamic and electrophysiological responses were measured in the same subjects. Both fMRI and ERP study were employed in an event-related study. fMRI have been obtained with 3.0 T Siemens Magnetom whole-body MRI scanner. 128-channel ERP data were recorded using an EGI system. ERP is sensitive to millisecond changes in mental activity, but the source localization and timing is limited by the ill-posed 'inversed' problem. We try to investigate the ERP source reconstruction problem in this study using fMRI constraint. We chose ICA as a pre-processing step of ERP source reconstruction to exclude the artifacts and provide a prior estimate of the number of dipoles. The results indicate that male and female show differences in neural mechanism during emotion visual stimuli.

  9. VISUAL-SEVEIF, a tool for integrating fire behavior simulation and economic evaluation of the impact of Wildfires

    Science.gov (United States)

    Francisco Rodríguez y Silva; Juan Ramón Molina Martínez; Miguel Ángel Herrera Machuca; Jesús Mª Rodríguez Leal

    2013-01-01

    Progress made in recent years in fire science, particularly as applied to forest fire protection, coupled with the increased power offered by mathematical processors integrated into computers, has led to important developments in the field of dynamic and static simulation of forest fires. Furthermore, and similarly, econometric models applied to economic...

  10. VISUALIZATION FROM INTRAOPERATIVE SWEPT-SOURCE MICROSCOPE-INTEGRATED OPTICAL COHERENCE TOMOGRAPHY IN VITRECTOMY FOR COMPLICATIONS OF PROLIFERATIVE DIABETIC RETINOPATHY.

    Science.gov (United States)

    Gabr, Hesham; Chen, Xi; Zevallos-Carrasco, Oscar M; Viehland, Christian; Dandrige, Alexandria; Sarin, Neeru; Mahmoud, Tamer H; Vajzovic, Lejla; Izatt, Joseph A; Toth, Cynthia A

    2018-01-10

    To evaluate the use of live volumetric (4D) intraoperative swept-source microscope-integrated optical coherence tomography in vitrectomy for proliferative diabetic retinopathy complications. In this prospective study, we analyzed a subgroup of patients with proliferative diabetic retinopathy complications who required vitrectomy and who were imaged by the research swept-source microscope-integrated optical coherence tomography system. In near real time, images were displayed in stereo heads-up display facilitating intraoperative surgeon feedback. Postoperative review included scoring image quality, identifying different diabetic retinopathy-associated pathologies and reviewing the intraoperatively documented surgeon feedback. Twenty eyes were included. Indications for vitrectomy were tractional retinal detachment (16 eyes), combined tractional-rhegmatogenous retinal detachment (2 eyes), and vitreous hemorrhage (2 eyes). Useful, good-quality 2D (B-scans) and 4D images were obtained in 16/20 eyes (80%). In these eyes, multiple diabetic retinopathy complications could be imaged. Swept-source microscope-integrated optical coherence tomography provided surgical guidance, e.g., in identifying dissection planes under fibrovascular membranes, and in determining residual membranes and traction that would benefit from additional peeling. In 4/20 eyes (20%), acceptable images were captured, but they were not useful due to high tractional retinal detachment elevation which was challenging for imaging. Swept-source microscope-integrated optical coherence tomography can provide important guidance during surgery for proliferative diabetic retinopathy complications through intraoperative identification of different complications and facilitation of intraoperative decision making.

  11. How does interhemispheric communication in visual word recognition work? Deciding between early and late integration accounts of the split fovea theory.

    Science.gov (United States)

    Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J

    2009-02-01

    It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision experiments are reported in which words were presented at different fixation positions. In Experiment 1, a masked form priming task was used with primes that had two adjacent letters transposed. The results showed that although the fixation position had a substantial influence on the transposed letter priming effect, the priming was not smaller when the transposed letters were sent to different hemispheres than when they were projected to the same hemisphere. In Experiment 2, stimuli were presented that either had high frequency hemifield competitors or could be identified unambiguously on the basis of the information in one hemifield. Again, the lexical decision times did not vary as a function of hemifield competitors. These results are consistent with the early integration account, as presented in the SERIOL model of visual word recognition.

  12. A wireless partially glaciated watershed in a virtual globe: Integrating data, models, and visualization to increase climate change understanding

    Science.gov (United States)

    Jones, J.; Hood, E.; Fatland, D. R.; Berner, L.; Heavner, M.; Connor, C.; O'Brien, W.

    2008-12-01

    SEAMONSTER, a NASA funded sensor web project, is the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education and Research. SEAMONSTER is operating in the partially glaciated Mendenhall and Lemon Creek Watersheds, in the Juneau area, on the margins of the Juneau Icefield. These watersheds are studied for both 1. long term monitoring of changes, and 2. detection and analysis of transient events (such as glacier lake outburst floods). The diverse sensors (meteorological, dual frequency GPS, water quality, lake level, etc), power and bandwidth constraints, and competing time scales of interest require autonomous reactivity of the sensor web. The sensors are deployed throughout two partially glaciated watersheds and facilitated data acquisition in temperate rain forest, alpine, lacustrine, and glacial environments. Understanding these environments is important for public understanding of climate change. These environments are geographically isolated, limiting public access to, and understanding of, such locales. In an effort to inform the general public and primary educators about the basic processes occurring in these unique natural systems, we have developed an interactive website. This web portal supplements and enhances environmental science primary education by providing educators and students with interactive access to basic information from the glaciological, hydrological, and meteorological systems we are studying. In addition, we have developed an interactive virtual tour of the Lemon Glacier and its watershed. The focus of this presentation is using the data gathered by the SEAMONSTER sensor web, coupled with a temperature-indexed glacial melt model, to educate students and the public on topics ranging from modeling responses due to environmental changes to glacial hydrology. The interactive SEAMONSTER web site is the primary source for visualizing the data, while Google Earth can be used to visualize the isolated Lemon Creek watershed
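
    A temperature-indexed (degree-day) melt model of the kind referenced above reduces to melt = DDF * max(T, 0); the degree-day factor and temperature series below are assumed values for illustration only.

        daily_mean_temp_c = [-2.0, 0.5, 3.1, 4.8, 2.2, -0.4, 6.0]   # placeholder daily mean temperatures (degC)
        degree_day_factor = 4.5                                      # assumed DDF, mm w.e. per degC per day

        melt_mm = [degree_day_factor * max(t, 0.0) for t in daily_mean_temp_c]
        print(f"weekly melt: {sum(melt_mm):.1f} mm w.e.")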

  13. System visualization of integrated biofuels and high value chemicals developed within the MacroAlgaeBiorefinery (MAB3) project

    DEFF Research Database (Denmark)

    Seghetta, Michele; Hasler, Berit; Bastianoni, Simone

    MacroAlgaeBiorefinery (MAB3) may function as a production platform and raw-material supplier for future sustainable production chains of biofuels and high-value chemicals. Biofuels are an interesting energy source, but challenges in terms of the composition of the biomass and the resulting energy...... efficiencies have to be compensated for to make biofuel prices competitive in replacing fossil fuel. Since it is difficult to increase the yield of a single biorefinery, the overall system productivity can be improved by integrating different sub-systems. In this study, macroalgae cultivation in Denmark...... is integrated with a biogas biorefinery, a bioethanol biorefinery and a fish feed industry. The modeled system is able to adapt itself to different amounts and qualities of feedstock and to maximize valuable outputs (e.g. biofuels and chemicals). Macroalgae are harvested and utilized as feedstock in bioethanol

  14. Interactive balance training integrating sensor-based visual feedback of movement performance: a pilot study in older adults

    OpenAIRE

    Schwenk, Michael; Grewal, Gurtej S; Honarvar, Bahareh; Schwenk, Stefanie; Mohler, Jane; Khalsa, Dharma S; Najafi, Bijan

    2014-01-01

    Background Wearable sensor technology can accurately measure body motion and provide incentive feedback during exercising. The aim of this pilot study was to evaluate the effectiveness and user experience of a balance training program in older adults integrating data from wearable sensors into a human-computer interface designed for interactive training. Methods Senior living community residents (mean age 84.6) with confirmed fall risk were randomized to an intervention (IG, n = 17) or contro...

  15. Reduced ventral cingulum integrity and increased behavioral problems in children with isolated optic nerve hypoplasia and mild to moderate or no visual impairment.

    Directory of Open Access Journals (Sweden)

    Emma A Webb

    Full Text Available OBJECTIVES: To assess the prevalence of behavioral problems in children with isolated optic nerve hypoplasia, mild to moderate or no visual impairment, and no developmental delay. To identify white matter abnormalities that may provide neural correlates for any behavioral abnormalities identified. PATIENTS AND METHODS: Eleven children with isolated optic nerve hypoplasia (mean age 5.9 years) underwent behavioral assessment and brain diffusion tensor imaging. Twenty four controls with isolated short stature (mean age 6.4 years) underwent MRI, 11 of whom also completed behavioral assessments. Fractional anisotropy images were processed using tract-based spatial statistics. Partial correlation between ventral cingulum, corpus callosum and optic radiation fractional anisotropy, and child behavioral checklist scores (controlled for age at scan and sex) was performed. RESULTS: Children with optic nerve hypoplasia had significantly higher scores on the child behavioral checklist (p<0.05) than controls (4 had scores in the clinically significant range). Ventral cingulum, corpus callosum and optic radiation fractional anisotropy were significantly reduced in children with optic nerve hypoplasia. Right ventral cingulum fractional anisotropy correlated with total and externalising child behavioral checklist scores (r = -0.52, p<0.02; r = -0.46, p<0.049, respectively). There were no significant correlations between left ventral cingulum, corpus callosum or optic radiation fractional anisotropy and behavioral scores. CONCLUSIONS: Our findings suggest that children with optic nerve hypoplasia and mild to moderate or no visual impairment require behavioral assessment to determine the presence of clinically significant behavioral problems. Reduced structural integrity of the ventral cingulum correlated with behavioral scores, suggesting that these white matter abnormalities may be clinically significant. The presence of reduced fractional anisotropy in the optic
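
    The partial correlation controlled for age at scan and sex can be sketched by residualizing both variables on the covariates and correlating the residuals; all arrays below are random placeholders, not the study's measurements.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(1)
        n = 22
        age = rng.uniform(4, 8, n)                               # placeholder ages (years)
        sex = rng.integers(0, 2, n).astype(float)                # placeholder sex indicator
        fa = 0.45 + 0.01 * age + 0.02 * rng.standard_normal(n)   # synthetic fractional anisotropy
        cbcl = 55 - 20 * fa + 5 * rng.standard_normal(n)         # synthetic behavioral checklist scores

        def residualize(y, covariates):
            # Residuals of y after an ordinary least squares fit on an intercept plus covariates.
            x = np.column_stack([np.ones(len(y))] + covariates)
            beta, *_ = np.linalg.lstsq(x, y, rcond=None)
            return y - x @ beta

        r, p = pearsonr(residualize(fa, [age, sex]), residualize(cbcl, [age, sex]))
        print(f"partial r = {r:.2f}, p = {p:.3f}")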

  16. Convergent validity of the Integrated Visual and Auditory Continuous Performance Test (IVA+Plus): associations with working memory, processing speed, and behavioral ratings.

    Science.gov (United States)

    Arble, Eamonn; Kuentzel, Jeffrey; Barnett, Douglas

    2014-05-01

    Though the Integrated Visual and Auditory Continuous Performance Test (IVA + Plus) is commonly used by researchers and clinicians, few investigations have assessed its convergent and discriminant validity, especially with regard to its use with children. The present study details correlates of the IVA + Plus using measures of cognitive ability and ratings of child behavior (parent and teacher), drawing upon a sample of 90 psychoeducational evaluations. Scores from the IVA + Plus correlated significantly with the Working Memory and Processing Speed Indexes from the Fourth Edition of the Wechsler Intelligence Scales for Children (WISC-IV), though fewer and weaker significant correlations were seen with behavior ratings scales, and significant associations also occurred with WISC-IV Verbal Comprehension and Perceptual Reasoning. The overall pattern of relations is supportive of the validity of the IVA + Plus; however, general cognitive ability was associated with better performance on most of the primary scores of the IVA + Plus, suggesting that interpretation should take intelligence into account.

  17. Proposal of the visual inspection of the integrity of the storage cells of spent fuel from the nuclear power plant of Laguna Verde

    International Nuclear Information System (INIS)

    Gonzalez M, J. L.; Rivero G, T.; Merino C, F. J.; Santander C, L. E.

    2015-09-01

    As part of the evaluation of the structural integrity of the components of nuclear plants, particularly those applying for life extension, it is necessary to carry out inspections and nondestructive tests to determine their condition. In many cases these activities take place in areas with high levels of radiation and contamination that are difficult to access, so remotely operated equipment or robotic systems are required. Among other components, the frames and cells of the spent fuel storage pools are structures subject to a program of tests and inspections, and they become especially relevant because the nuclear power plant of Laguna Verde (NPP-LV) is processing the license to extend the operational life of its reactors. Among the non-destructive tests that can be used to verify the physical condition of the frames and storage cells is remote visual inspection, a test that makes it possible to determine the physical integrity of the components by means of one or more video cameras designed for applications in underwater environments with radiation; these are used to identify and locate adverse conditions such as blisters, protuberances, pitting, cracks, stains or buckling, which could affect the three main functions for which the storage components are designed: to maintain the physical integrity of the spent fuel, to store it properly while guaranteeing its free insertion and removal, and to ensure that the store as a whole meets the criticality criterion that k-eff remain below 0.95 throughout the life of the plant. This paper describes a proposal to carry out the visual inspection of the spent fuel storage cells of the NPP-LV using a probe that includes one or more video cameras together with a recorder and the corresponding control program. Based on the results obtained, the nuclear power plant personnel can make decisions regarding remedial actions or apply complementary methods to verify that the cells and frames have not lost their physical integrity, or in particular that the cover

  18. Towards an integrative model of visual short-term memory maintenance: Evidence from the effects of attentional control, load, decay, and their interactions in childhood.

    Science.gov (United States)

    Shimi, Andria; Scerif, Gaia

    2017-12-01

    Over the past decades there has been a surge of research aiming to shed light on the nature of capacity limits to visual short-term memory (VSTM). However, an integrative account of this evidence is currently missing. We argue that investigating parameters constraining VSTM in childhood suggests a novel integrative model of VSTM maintenance, and that this in turn informs mechanisms of VSTM maintenance in adulthood. Over 3 experiments with 7-year-olds and young adults (total N=206), we provide evidence for multiple cognitive processes interacting to constrain VSTM performance. While age-related increases in storage capacity are indisputable, we replicate the finding that attentional processes control what information will be encoded and maintained in VSTM in the face of increased competition. Therefore, a process central to the current model is attentional refreshment, a mechanism that is thought to reactivate and strengthen the signal of the visual representations. Critically, here we also show that attentional influences on VSTM are further constrained by additional factors, traditionally studied to the exclusion of each other, such as memory load and temporal decay. We propose that these processes work synergistically in an elegant manner to capture the adult end state, whereas their less refined efficiency and modulations in childhood account for the smaller VSTM capacity that 7-year-olds demonstrate compared to older individuals. We conclude that going beyond the investigation of single cognitive mechanisms, to their interactions, holds the promise of understanding both developing and fully developed maintenance in VSTM. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. ngs.plot: Quick mining and visualization of next-generation sequencing data by integrating genomic databases.

    Science.gov (United States)

    Shen, Li; Shao, Ningyi; Liu, Xiaochuan; Nestler, Eric

    2014-04-15

    Understanding the relationship between the millions of functional DNA elements and their protein regulators, and how they work in conjunction to manifest diverse phenotypes, is key to advancing our understanding of the mammalian genome. Next-generation sequencing technology is now used widely to probe these protein-DNA interactions and to profile gene expression at a genome-wide scale. As the cost of DNA sequencing continues to fall, the interpretation of the ever increasing amount of data generated represents a considerable challenge. We have developed ngs.plot - a standalone program to visualize enrichment patterns of DNA-interacting proteins at functionally important regions based on next-generation sequencing data. We demonstrate that ngs.plot is not only efficient but also scalable. We use a few examples to demonstrate that ngs.plot is easy to use and yet very powerful to generate figures that are publication ready. We conclude that ngs.plot is a useful tool to help fill the gap between massive datasets and genomic information in this era of big sequencing data.
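
    The quantity ngs.plot renders, the average enrichment profile around reference points such as transcription start sites, can be illustrated with a small sketch; the coverage array and TSS positions below are synthetic, and this is not ngs.plot's own code or command-line interface.

        import numpy as np

        rng = np.random.default_rng(2)
        coverage = rng.poisson(3, size=1_000_000).astype(float)   # per-base read coverage of one chromosome
        tss = rng.integers(5_000, 995_000, size=500)              # reference points (e.g. TSS coordinates)
        flank = 3000

        profile = np.zeros(2 * flank)
        for pos in tss:
            profile += coverage[pos - flank: pos + flank]
        profile /= len(tss)                                       # mean enrichment per position around the TSS

        print(profile[:5])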

  20. B-CAN: a resource sharing platform to improve the operation, visualization and integrated analysis of TCGA breast cancer data.

    Science.gov (United States)

    Wen, Can-Hong; Ou, Shao-Min; Guo, Xiao-Bo; Liu, Chen-Feng; Shen, Yan-Bo; You, Na; Cai, Wei-Hong; Shen, Wen-Jun; Wang, Xue-Qin; Tan, Hai-Zhu

    2017-12-12

    Breast cancer is a high-risk heterogeneous disease with myriad subtypes and complicated biological features. The Cancer Genome Atlas (TCGA) breast cancer database provides researchers with the large-scale genome and clinical data via web portals and FTP services. Researchers are able to gain new insights into their related fields, and evaluate experimental discoveries with TCGA. However, it is difficult for researchers who have little experience with database and bioinformatics to access and operate on because of TCGA's complex data format and diverse files. For ease of use, we build the breast cancer (B-CAN) platform, which enables data customization, data visualization, and private data center. The B-CAN platform runs on Apache server and interacts with the backstage of MySQL database by PHP. Users can customize data based on their needs by combining tables from original TCGA database and selecting variables from each table. The private data center is applicable for private data and two types of customized data. A key feature of the B-CAN is that it provides single table display and multiple table display. Customized data with one barcode corresponding to many records and processed customized data are allowed in Multiple Tables Display. The B-CAN is an intuitive and high-efficient data-sharing platform.

  1. PopHR: a knowledge-based platform to support integration, analysis, and visualization of population health data.

    Science.gov (United States)

    Shaban-Nejad, Arash; Lavigne, Maxime; Okhmatovskaia, Anya; Buckeridge, David L

    2017-01-01

    Population health decision makers must consider complex relationships between multiple concepts measured with differential accuracy from heterogeneous data sources. Population health information systems are currently limited in their ability to integrate data and present a coherent portrait of population health. Consequentially, these systems can provide only basic support for decision makers. The Population Health Record (PopHR) is a semantic web application that automates the integration and extraction of massive amounts of heterogeneous data from multiple distributed sources (e.g., administrative data, clinical records, and survey responses) to support the measurement and monitoring of population health and health system performance for a defined population. The design of the PopHR draws on the theories of the determinants of health and evidence-based public health to harmonize and explicitly link information about a population with evidence about the epidemiology and control of chronic diseases. Organizing information in this manner and linking it explicitly to evidence is expected to improve decision making related to the planning, implementation, and evaluation of population health and health system interventions. In this paper, we describe the PopHR platform and discuss the architecture, design, key modules, and its implementation and use. © 2016 New York Academy of Sciences.

  2. WebGimm: An integrated web-based platform for cluster analysis, functional analysis, and interactive visualization of results.

    Science.gov (United States)

    Joshi, Vineet K; Freudenberg, Johannes M; Hu, Zhen; Medvedovic, Mario

    2011-01-17

    Cluster analysis methods have been extensively researched, but the adoption of new methods is often hindered by technical barriers in their implementation and use. WebGimm is a free cluster analysis web-service, and an open source general purpose clustering web-server infrastructure designed to facilitate easy deployment of integrated cluster analysis servers based on clustering and functional annotation algorithms implemented in R. Integrated functional analyses and interactive browsing of both the clustering structure and the functional annotations provide a complete analytical environment for cluster analysis and interpretation of results. The Java Web Start client-based interface is modeled after the familiar cluster/treeview packages, making its use intuitive to a wide array of biomedical researchers. For biomedical researchers, WebGimm provides an avenue to access state of the art clustering procedures. For bioinformatics methods developers, WebGimm offers a convenient avenue to deploy their newly developed clustering methods. WebGimm server, software and manuals can be freely accessed at http://ClusterAnalysis.org/.
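
    A minimal Python sketch of the kind of hierarchical cluster analysis a service such as WebGimm exposes (WebGimm itself wraps algorithms implemented in R); the expression matrix is synthetic and the parameter choices are illustrative.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(3)
        expression = rng.standard_normal((30, 8))            # 30 genes x 8 conditions (placeholder data)

        tree = linkage(expression, method="average", metric="correlation")
        labels = fcluster(tree, t=4, criterion="maxclust")   # cut the tree into 4 clusters
        print(labels)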

  3. Collaborative Visualization and Analysis of Multi-dimensional, Time-dependent and Distributed Data in the Geosciences Using the Unidata Integrated Data Viewer

    Science.gov (United States)

    Meertens, C. M.; Murray, D.; McWhirter, J.

    2004-12-01

    Over the last five years, UNIDATA has developed an extensible and flexible software framework for analyzing and visualizing geoscience data and models. The Integrated Data Viewer (IDV), initially developed for visualization and analysis of atmospheric data, has broad interdisciplinary application across the geosciences including atmospheric, ocean, and most recently, earth sciences. As part of the NSF-funded GEON Information Technology Research project, UNAVCO has enhanced the IDV to display earthquakes, GPS velocity vectors, and plate boundary strain rates. These and other geophysical parameters can be viewed simultaneously with three-dimensional seismic tomography and mantle geodynamic model results. Disparate data sets of different formats, variables, geographical projections and scales can automatically be displayed in a common projection. The IDV is efficient and fully interactive allowing the user to create and vary 2D and 3D displays with contour plots, vertical and horizontal cross-sections, plan views, 3D isosurfaces, vector plots and streamlines, as well as point data symbols or numeric values. Data probes (values and graphs) can be used to explore the details of the data and models. The IDV is a freely available Java application using Java3D and VisAD and runs on most computers. UNIDATA provides easy-to-follow instructions for download, installation and operation of the IDV. The IDV primarily uses netCDF, a self-describing binary file format, to store multi-dimensional data, related metadata, and source information. The IDV is designed to work with OPeNDAP-equipped data servers that provide real-time observations and numerical models from distributed locations. Users can capture and share screens and animations, or exchange XML "bundles" that contain the state of the visualization and embedded links to remote data files. A real-time collaborative feature allows groups of users to remotely link IDV sessions via the Internet and simultaneously view and
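
    Opening a remote OPeNDAP dataset of the kind the IDV consumes takes one call in xarray; the URL and the variable name below are hypothetical placeholders, not a real endpoint.

        import xarray as xr

        url = "https://example.org/thredds/dodsC/model/output.nc"    # hypothetical OPeNDAP endpoint
        ds = xr.open_dataset(url)                                     # lazy access with server-side subsetting
        subset = ds["temperature"].sel(time="2004-12-01", method="nearest")
        print(subset)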

  4. Ascorbic acid surface modified TiO₂-thin layers as a fully integrated analysis system for visual simultaneous detection of organophosphorus pesticides.

    Science.gov (United States)

    Li, Shunxing; Liang, Wenjie; Zheng, Fengying; Lin, Xiaofeng; Cai, Jiabai

    2014-11-06

    TiO₂ photocatalysis and colorimetric detection are coupled with thin layer chromatography (TLC) for the first time to develop a fully integrated analysis system. Titania@polystyrene hybrid microspheres were surface modified with ascorbic acid, denoted AA-TiO₂@PS, and used as the stationary phase for TLC. Because the affinity between AA-TiO₂@PS and organophosphorus pesticides (OPs) was different for different species of OPs (including chlopyrifos, malathion, parathion, parathion-methyl, and methamidophos), OPs could be separated simultaneously by the mobile phase in 12.0 min with different Rf values. After surface modification, the UV-vis wavelength response range of AA-TiO₂@PS was expanded to 650 nm. Under visible-light irradiation, all of the OPs could be photodegraded to PO₄(3-) in 25.0 min. Based on the chromogenic reaction between PO₄(3-) and chromogenic agents (ammonium molybdate and ascorbic acid), OPs were quantified from color intensity images using a scanner in conjunction with image processing software. So, AA-TiO₂@PS was respectively used as the stationary phase of TLC for efficient separation of OPs, as a photocatalyst for species transformation of phosphorus, and as a colorimetric probe for on-field simultaneous visual detection of OPs in natural water. Linear calibration curves for each OP ranged from 19.3 nmol P L(-1) to 2.30 μmol P L(-1). This integrated analysis system was simple, inexpensive, easy to operate, and sensitive.
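
    The final quantification step, a linear calibration of scanned color intensity against phosphate standards, can be sketched as follows; the intensity values are invented for illustration.

        import numpy as np

        standards_umol_per_l = np.array([0.02, 0.10, 0.50, 1.00, 2.30])   # PO4(3-) calibration standards
        intensity = np.array([0.031, 0.118, 0.542, 1.065, 2.410])          # mean scanned color intensities (assumed)

        slope, intercept = np.polyfit(standards_umol_per_l, intensity, 1)  # linear calibration curve
        unknown_intensity = 0.75
        concentration = (unknown_intensity - intercept) / slope
        print(f"estimated concentration: {concentration:.2f} umol P/L")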

  5. Interactions of visual attention and quality perception

    NARCIS (Netherlands)

    Redi, J.A.; Liu, H.; Zunino, R.; Heynderickx, I.E.J.R.

    2011-01-01

    Several attempts to integrate visual saliency information in quality metrics are described in literature, albeit with contradictory results. The way saliency is integrated in quality metrics should reflect the mechanisms underlying the interaction between image quality assessment and visual

  6. The VIPER project (Visualization Integration Platform for Exploration Research): a biologically inspired autonomous reconfigurable robotic platform for diverse unstructured environments

    Science.gov (United States)

    Schubert, Oliver J.; Tolle, Charles R.

    2004-09-01

    highly unstructured environment, but also gains robotic manipulation abilities, normally relegated as secondary add-ons within existing vehicles, all within one small condensed package. The prototype design presented includes a Beowulf-style computing system for advanced guidance calculations and visualization computations. All of the design and implementation pertaining to the SEW robot discussed in this paper is the product of a student team under the summer fellowship program at the DOE's INEEL.

  7. 'Integration'

    DEFF Research Database (Denmark)

    Olwig, Karen Fog

    2011-01-01

    , while the countries have adopted disparate policies and ideologies, differences in the actual treatment and attitudes towards immigrants and refugees in everyday life are less clear, due to parallel integration programmes based on strong similarities in the welfare systems and in cultural notions...... of equality in the three societies. Finally, it shows that family relations play a central role in immigrants’ and refugees’ establishment of a new life in the receiving societies, even though the welfare society takes on many of the social and economic functions of the family....

  8. Visual functions and disability in diabetic retinopathy patients

    Directory of Open Access Journals (Sweden)

    Gauri Shankar Shrestha

    2014-01-01

    Conclusion: Impairment of near visual acuity, contrast sensitivity, and peripheral visual field correlated significantly with different types of visual disability. Hence, these clinical tests should be an integral part of the visual assessment of diabetic eyes.

  9. Modeling the Time-Course of Responses for the Border Ownership Selectivity Based on the Integration of Feedforward Signals and Visual Cortical Interactions.

    Science.gov (United States)

    Wagatsuma, Nobuhiko; Sakai, Ko

    2016-01-01

    Border ownership (BO) indicates which side of a contour owns a border, and it plays a fundamental role in figure-ground segregation. The majority of neurons in V2 and V4 areas of monkeys exhibit BO selectivity. A physiological work reported that the responses of BO-selective cells show a rapid transition when a presented square is flipped along its classical receptive field (CRF) so that the opposite BO is presented, whereas the transition is significantly slower when a square with a clear BO is replaced by an ambiguous edge, e.g., when the square is enlarged greatly. The rapid transition seemed to reflect the influence of feedforward processing on BO selectivity. Herein, we investigated the role of feedforward signals and cortical interactions for time-courses in BO-selective cells by modeling a visual cortical network comprising V1, V2, and posterior parietal (PP) modules. In our computational model, the recurrent pathways among these modules gradually established the visual progress and the BO assignments. Feedforward inputs mainly determined the activities of these modules. Surrounding suppression/facilitation of early-level areas modulates the activities of V2 cells to provide BO signals. Weak feedback signals from the PP module enhanced the contrast gain extracted in V1, which underlies the attentional modulation of BO signals. Model simulations exhibited time-courses depending on the BO ambiguity, which were caused by the integration delay of V1 and V2 cells and the local inhibition therein given the difference in input stimulus. However, our model did not fully explain the characteristics of crucially slow transition: the responses of BO-selective physiological cells indicated the persistent activation several times longer than that of our model after the replacement with the ambiguous edge. Furthermore, the time-course of BO-selective model cells replicated the attentional modulation of response time in human psychophysical experiments. These attentional

  10. Modeling the Time-Course of Responses for the Border Ownership Selectivity Based on the Integration of Feedforward Signals and Visual Cortical Interactions

    Science.gov (United States)

    Wagatsuma, Nobuhiko; Sakai, Ko

    2017-01-01

    Border ownership (BO) indicates which side of a contour owns a border, and it plays a fundamental role in figure-ground segregation. The majority of neurons in V2 and V4 areas of monkeys exhibit BO selectivity. A physiological work reported that the responses of BO-selective cells show a rapid transition when a presented square is flipped along its classical receptive field (CRF) so that the opposite BO is presented, whereas the transition is significantly slower when a square with a clear BO is replaced by an ambiguous edge, e.g., when the square is enlarged greatly. The rapid transition seemed to reflect the influence of feedforward processing on BO selectivity. Herein, we investigated the role of feedforward signals and cortical interactions for time-courses in BO-selective cells by modeling a visual cortical network comprising V1, V2, and posterior parietal (PP) modules. In our computational model, the recurrent pathways among these modules gradually established the visual progress and the BO assignments. Feedforward inputs mainly determined the activities of these modules. Surrounding suppression/facilitation of early-level areas modulates the activities of V2 cells to provide BO signals. Weak feedback signals from the PP module enhanced the contrast gain extracted in V1, which underlies the attentional modulation of BO signals. Model simulations exhibited time-courses depending on the BO ambiguity, which were caused by the integration delay of V1 and V2 cells and the local inhibition therein given the difference in input stimulus. However, our model did not fully explain the characteristics of crucially slow transition: the responses of BO-selective physiological cells indicated the persistent activation several times longer than that of our model after the replacement with the ambiguous edge. Furthermore, the time-course of BO-selective model cells replicated the attentional modulation of response time in human psychophysical experiments. These attentional

  11. Ascorbic acid surface modified TiO2-thin layers as a fully integrated analysis system for visual simultaneous detection of organophosphorus pesticides

    Science.gov (United States)

    Li, Shunxing; Liang, Wenjie; Zheng, Fengying; Lin, Xiaofeng; Cai, Jiabai

    2014-11-01

    TiO2 photocatalysis and colorimetric detection are coupled with thin layer chromatography (TLC) for the first time to develop a fully integrated analysis system. Titania@polystyrene hybrid microspheres were surface modified with ascorbic acid, denoted AA-TiO2@PS, and used as the stationary phase for TLC. Because the affinity between AA-TiO2@PS and organophosphorus pesticides (OPs) was different for different species of OPs (including chlorpyrifos, malathion, parathion, parathion-methyl, and methamidophos), OPs could be separated simultaneously by the mobile phase in 12.0 min with different Rf values. After surface modification, the UV-vis wavelength response range of AA-TiO2@PS was expanded to 650 nm. Under visible-light irradiation, all of the OPs could be photodegraded to PO43- in 25.0 min. Based on the chromogenic reaction between PO43- and chromogenic agents (ammonium molybdate and ascorbic acid), OPs were quantified from color intensity images using a scanner in conjunction with image processing software. Thus, AA-TiO2@PS served simultaneously as the stationary phase of TLC for efficient separation of OPs, as a photocatalyst for species transformation of phosphorus, and as a colorimetric probe for on-field simultaneous visual detection of OPs in natural water. Linear calibration curves for each OP ranged from 19.3 nmol P L-1 to 2.30 μmol P L-1. This integrated analysis system was simple, inexpensive, easy to operate, and sensitive.
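
    The quantification step described above (color intensity read from a scanned image and converted to concentration through a linear calibration curve) can be sketched generically. The snippet below is a minimal illustration, not the authors' code; the image file, region coordinates, and calibration points are hypothetical placeholders.

      # Minimal sketch of colorimetric quantification via a linear calibration curve.
      # Assumptions: scikit-image is available; "spot.png", the ROI and the
      # calibration points are hypothetical placeholders.
      import numpy as np
      from skimage import io, color

      img = io.imread("spot.png")                 # scanned TLC plate (hypothetical file)
      gray = color.rgb2gray(img)                  # work on intensity only
      roi = gray[100:150, 200:250]                # region containing one analyte spot
      intensity = 1.0 - roi.mean()                # darker spot -> larger signal (assumed)

      # Hypothetical calibration: known concentrations (umol P/L) vs. measured signal
      conc = np.array([0.02, 0.5, 1.0, 1.5, 2.3])
      signal = np.array([0.05, 0.21, 0.39, 0.58, 0.86])
      slope, intercept = np.polyfit(signal, conc, 1)   # linear fit: signal -> concentration

      print(f"estimated concentration: {slope * intensity + intercept:.2f} umol P/L")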

  12. The utility of quantitative electroencephalography and Integrated Visual and Auditory Continuous Performance Test as auxiliary tools for the Attention Deficit Hyperactivity Disorder diagnosis.

    Science.gov (United States)

    Kim, JunWon; Lee, YoungSik; Han, DougHyun; Min, KyungJoon; Kim, DoHyun; Lee, ChangWon

    2015-03-01

    This study investigated the clinical utility of quantitative electroencephalography (QEEG) and the Integrated Visual and Auditory Continuous Performance Test (IVA+CPT) as auxiliary tools for assessing Attention Deficit Hyperactivity Disorder (ADHD). All 157 subjects were assessed using the Korean version of the Diagnostic Interview Schedule for Children Version IV (DISC-IV). We measured EEG absolute power in 21 channels and conducted the IVA+CPT. We analyzed QEEG according to the frequency range: delta (1-4Hz), theta (4-8Hz), slow alpha (8-10Hz), fast alpha (10-13.5Hz), and beta (13.5-30Hz). To remove artifacts, independent component analysis (ICA) was conducted, and the tester confirmed the results again. All of the IVA+CPT quotients showed significant differences between the ADHD and control groups. The ADHD group showed significantly increased delta and theta activity compared with the control group. The z-scores of theta were negatively correlated with the scores of IVA+CPT in the ADHD combined type, and those of beta were positively correlated. IVA+CPT and QEEG significantly discriminated between the ADHD and control groups. The commission error of the IVA+CPT showed an accuracy of 82.1%, and the omission error of the IVA+CPT showed an accuracy of 78.6%. The IVA+CPT and QEEG are expected to be valuable tools for aiding accurate ADHD diagnosis. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
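
    As a generic illustration of the band-wise QEEG analysis described (delta 1-4 Hz, theta 4-8 Hz, and so on), the sketch below computes absolute band power for a single channel with Welch's method. It is not the authors' pipeline; the sampling rate and the placeholder signal are assumptions.

      # Minimal sketch: absolute band power of one EEG channel via Welch's method.
      # `eeg` and `fs` are placeholders; real recordings and artifact removal (ICA)
      # would precede this step.
      import numpy as np
      from scipy.signal import welch

      fs = 256.0
      eeg = np.random.randn(int(60 * fs))     # placeholder signal; real data goes here

      bands = {"delta": (1, 4), "theta": (4, 8), "slow_alpha": (8, 10),
               "fast_alpha": (10, 13.5), "beta": (13.5, 30)}

      freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # 4-second windows
      for name, (lo, hi) in bands.items():
          mask = (freqs >= lo) & (freqs < hi)
          power = np.trapz(psd[mask], freqs[mask])          # integrate PSD over the band
          print(f"{name}: {power:.3f}")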

  13. Graphics and visualization principles & algorithms

    CERN Document Server

    Theoharis, T; Platis, Nikolaos; Patrikalakis, Nicholas M

    2008-01-01

    Computer and engineering collections strong in applied graphics and analysis of visual data via computer will find Graphics & Visualization: Principles and Algorithms makes an excellent classroom text as well as supplemental reading. It integrates coverage of computer graphics and other visualization topics, from shadow generation and particle tracing to spatial subdivision and vector data visualization, and it provides a thorough review of literature from multiple experts, making for a comprehensive review essential to any advanced computer study.-California Bookw

  14. Development of Web GIS for complex processing and visualization of climate geospatial datasets as an integral part of dedicated Virtual Research Environment

    Science.gov (United States)

    Gordov, Evgeny; Okladnikov, Igor; Titov, Alexander

    2017-04-01

    For comprehensive usage of large geospatial meteorological and climate datasets it is necessary to create a distributed software infrastructure based on the spatial data infrastructure (SDI) approach. Currently, it is generally accepted that the development of client applications as integrated elements of such an infrastructure should be based on modern web and GIS technologies. The paper describes the Web GIS for complex processing and visualization of geospatial (mainly in NetCDF and PostGIS formats) datasets as an integral part of the dedicated Virtual Research Environment for comprehensive study of ongoing and possible future climate change and analysis of its implications, providing full information and computing support for the study of economic, political and social consequences of global climate change at the global and regional levels. The Web GIS consists of two basic software parts: 1. A server-side part comprising the PHP applications of the SDI geoportal, realizing the functionality of interaction with the computational core backend and the WMS/WFS/WPS cartographical services, as well as implementing an open API for browser-based client software. Being the secondary one, this part provides a limited set of procedures accessible via a standard HTTP interface. 2. A front-end part representing the Web GIS client, developed as a "single page application" based on the JavaScript libraries OpenLayers (http://openlayers.org/), ExtJS (https://www.sencha.com/products/extjs), and GeoExt (http://geoext.org/). It implements the application business logic and provides an intuitive user interface similar to that of popular desktop GIS applications such as uDIG, QuantumGIS, etc. The Boundless/OpenGeo architecture was used as a basis for the Web GIS client development. In accordance with general INSPIRE requirements for data visualization, the Web GIS provides such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, displaying map
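
    The WMS cartographical service mentioned above can be exercised with a plain OGC WMS GetMap request; the sketch below builds such a request with the `requests` library. The endpoint URL, layer name, and bounding box are hypothetical placeholders, not the project's actual geoportal.

      # Minimal sketch of an OGC WMS 1.1.1 GetMap request.
      # The endpoint, layer name and bounding box are hypothetical.
      import requests

      params = {
          "SERVICE": "WMS",
          "VERSION": "1.1.1",
          "REQUEST": "GetMap",
          "LAYERS": "climate:tas_anomaly",      # hypothetical layer name
          "STYLES": "",
          "SRS": "EPSG:4326",
          "BBOX": "60,50,110,80",               # minx,miny,maxx,maxy in lon/lat
          "WIDTH": "800",
          "HEIGHT": "480",
          "FORMAT": "image/png",
      }
      resp = requests.get("https://example.org/geoserver/wms", params=params, timeout=30)
      resp.raise_for_status()
      with open("tas_anomaly.png", "wb") as f:
          f.write(resp.content)                 # the rendered map image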

  15. CpGAVAS, an integrated web server for the annotation, visualization, analysis, and GenBank submission of completely sequenced chloroplast genome sequences

    Science.gov (United States)

    2012-01-01

    Background The complete sequences of chloroplast genomes provide a wealth of information regarding the evolutionary history of species. With the advance of next-generation sequencing technology, the number of completely sequenced chloroplast genomes is expected to increase exponentially, and powerful computational tools for annotating the genome sequences are in urgent need. Results We have developed a web server, CPGAVAS. The server accepts a complete chloroplast genome sequence as input. First, it predicts protein-coding and rRNA genes based on the identification and mapping of the most similar, full-length protein, cDNA and rRNA sequences by integrating results from the Blastx, Blastn, protein2genome and est2genome programs. Second, tRNA genes and inverted repeats (IR) are identified using tRNAscan, ARAGORN and vmatch, respectively. Third, it calculates the summary statistics for the annotated genome. Fourth, it generates a circular map ready for publication. Fifth, it can create a Sequin file for GenBank submission. Last, it allows the extraction of protein and mRNA sequences for a given list of genes and species. The annotation results in GFF3 format can be edited using any compatible annotation editing tools. The edited annotations can then be uploaded to CPGAVAS for update and re-analyses repeatedly. Using known chloroplast genome sequences as a test set, we show that CPGAVAS performs comparably to another application, DOGMA, while having several superior functionalities. Conclusions CPGAVAS allows the semi-automatic and complete annotation of a chloroplast genome sequence, and the visualization, editing and analysis of the annotation results. It will become an indispensable tool for researchers studying chloroplast genomes. The software is freely accessible from http://www.herbalgenomics.org/cpgavas. PMID:23256920
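
    One of the listed capabilities (extracting protein and mRNA sequences for a given list of genes) can be illustrated independently of CPGAVAS with Biopython: the sketch below reads an annotated chloroplast GenBank record and pulls out CDS and protein sequences for selected genes. The file name and gene list are hypothetical, and this is not the server's own code.

      # Minimal sketch, not CPGAVAS itself: extract CDS and protein sequences for
      # selected genes from an annotated chloroplast GenBank file (names hypothetical).
      from Bio import SeqIO

      wanted = {"rbcL", "matK", "psbA"}                  # hypothetical gene list
      record = SeqIO.read("chloroplast.gb", "genbank")

      for feature in record.features:
          if feature.type != "CDS":
              continue
          gene = feature.qualifiers.get("gene", ["?"])[0]
          if gene not in wanted:
              continue
          cds = feature.extract(record.seq)              # spliced nucleotide sequence
          protein = feature.qualifiers.get("translation", [""])[0]
          print(f">{gene} ({len(cds)} bp, {len(protein)} aa)")
          print(cds)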

  16. CpGAVAS, an integrated web server for the annotation, visualization, analysis, and GenBank submission of completely sequenced chloroplast genome sequences

    Directory of Open Access Journals (Sweden)

    Liu Chang

    2012-12-01

    Full Text Available Abstract Background The complete sequences of chloroplast genomes provide a wealth of information regarding the evolutionary history of species. With the advance of next-generation sequencing technology, the number of completely sequenced chloroplast genomes is expected to increase exponentially, and powerful computational tools for annotating the genome sequences are in urgent need. Results We have developed a web server, CPGAVAS. The server accepts a complete chloroplast genome sequence as input. First, it predicts protein-coding and rRNA genes based on the identification and mapping of the most similar, full-length protein, cDNA and rRNA sequences by integrating results from the Blastx, Blastn, protein2genome and est2genome programs. Second, tRNA genes and inverted repeats (IR) are identified using tRNAscan, ARAGORN and vmatch, respectively. Third, it calculates the summary statistics for the annotated genome. Fourth, it generates a circular map ready for publication. Fifth, it can create a Sequin file for GenBank submission. Last, it allows the extraction of protein and mRNA sequences for a given list of genes and species. The annotation results in GFF3 format can be edited using any compatible annotation editing tools. The edited annotations can then be uploaded to CPGAVAS for update and re-analyses repeatedly. Using known chloroplast genome sequences as a test set, we show that CPGAVAS performs comparably to another application, DOGMA, while having several superior functionalities. Conclusions CPGAVAS allows the semi-automatic and complete annotation of a chloroplast genome sequence, and the visualization, editing and analysis of the annotation results. It will become an indispensable tool for researchers studying chloroplast genomes. The software is freely accessible from http://www.herbalgenomics.org/cpgavas.

  17. Expression Profiling of Human Pluripotent Stem Cell-Derived Cardiomyocytes Exposed to Doxorubicin-Integration and Visualization of Multi-Omics Data.

    Science.gov (United States)

    Holmgren, Gustav; Sartipy, Peter; Andersson, Christian X; Lindahl, Anders; Synnergren, Jane

    2018-05-01

    Anthracyclines, such as doxorubicin, are highly efficient chemotherapeutic agents against a variety of cancers. However, anthracyclines are also among the most cardiotoxic therapeutic drugs presently on the market. Chemotherapy-induced cardiomyopathy is one of the leading causes of disease and mortality in cancer survivors. The exact mechanisms responsible for doxorubicin-induced cardiomyopathy are not completely known, but the fact that the cardiotoxicity is dose-dependent, and that there is variation in time-to-onset of toxicity as well as gender and age differences, suggests that several mechanisms may be involved. In this study, we investigated doxorubicin-induced cardiotoxicity in human pluripotent stem cell-derived cardiomyocytes using proteomics. In addition, different sources of omics data (protein, mRNA, and microRNA) from the same experimental setup were further combined and analyzed using newly developed methods to identify differential expression in data of various origin and types. Subsequently, the results were integrated in order to generate a combined visualization of the findings. In our experimental model system, we exposed cardiomyocytes derived from human pluripotent stem cells to doxorubicin for up to 2 days, followed by a wash-out period of an additional 12 days. Besides an effect on the cell morphology and cardiomyocyte functionality, the data show a strong effect of doxorubicin on all molecular levels investigated. Differential expression patterns that show a linkage between the proteome, transcriptome, and the regulatory microRNA network were identified. These findings help to increase the understanding of the mechanisms behind anthracycline-induced cardiotoxicity and suggest putative biomarkers for this condition.
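
    The integration of differential-expression results from several omics layers can be sketched generically with pandas: tables of log fold-changes from the proteome, transcriptome, and microRNA analyses are joined on gene identifiers so that concordant changes can be inspected side by side. The file and column names below are hypothetical placeholders, not the authors' data or methods.

      # Minimal sketch of combining differential-expression tables from three omics layers.
      # File and column names are hypothetical placeholders.
      import pandas as pd

      protein = pd.read_csv("proteome_de.csv")       # columns: gene, log2fc_protein, padj_protein
      mrna = pd.read_csv("transcriptome_de.csv")     # columns: gene, log2fc_mrna, padj_mrna
      mirna = pd.read_csv("mirna_targets_de.csv")    # columns: gene, log2fc_mirna (per target)

      merged = (protein.merge(mrna, on="gene", how="inner")
                       .merge(mirna, on="gene", how="left"))

      # Keep genes changing in the same direction at both protein and mRNA level.
      concordant = merged[
          (merged["padj_protein"] < 0.05)
          & (merged["padj_mrna"] < 0.05)
          & (merged["log2fc_protein"] * merged["log2fc_mrna"] > 0)
      ]
      print(concordant.sort_values("log2fc_protein").head(20))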

  18. Math for visualization, visualizing math

    NARCIS (Netherlands)

    Wijk, van J.J.; Hart, G.; Sarhangi, R.

    2013-01-01

    I present an overview of our work in visualization, and reflect on the role of mathematics therein. First, mathematics can be used as a tool to produce visualizations, which is illustrated with examples from information visualization, flow visualization, and cartography. Second, mathematics itself

  19. Visual art and visual perception

    NARCIS (Netherlands)

    Koenderink, Jan J.

    2015-01-01

    Visual art and visual perception ‘Visual art’ has become a minor cul-de-sac orthogonal to THE ART of the museum directors and billionaire collectors. THE ART is conceptual, instead of visual. Among its cherished items are the tins of artist’s shit (Piero Manzoni, 1961, Merda d’Artista) “worth their

  20. Redefining the L2 Listening Construct within an Integrated Writing Task: Considering the Impacts of Visual-Cue Interpretation and Note-Taking

    Science.gov (United States)

    Cubilo, Justin; Winke, Paula

    2013-01-01

    Researchers debate whether listening tasks should be supported by visuals. Most empirical research in this area has been conducted on the effects of visual support on listening comprehension tasks employing multiple-choice questions. The present study seeks to expand this research by investigating the effects of video listening passages (vs.…

  1. Flow visualization

    CERN Document Server

    Merzkirch, Wolfgang

    1974-01-01

    Flow Visualization describes the most widely used methods for visualizing flows. Flow visualization evaluates certain properties of a flow field directly accessible to visual perception. Organized into five chapters, this book first presents the methods that create a visible flow pattern that could be investigated by visual inspection, such as simple dye and density-sensitive visualization methods. It then deals with the application of electron beams and streaming birefringence. Optical methods for compressible flows, hydraulic analogy, and high-speed photography are discussed in other cha

  2. Integrated Tsunami Database: simulation and identification of seismic tsunami sources, 3D visualization and post-disaster assessment on the shore

    Science.gov (United States)

    Krivorot'ko, Olga; Kabanikhin, Sergey; Marinin, Igor; Karas, Adel; Khidasheli, David

    2013-04-01

    One of the most important problems of tsunami investigation is the reconstruction of the seismic tsunami source. The non-profit organization WAPMERR (http://wapmerr.org) has provided a historical database of alleged tsunami sources around the world, obtained with the help of information about seaquakes. WAPMERR also has a database of observations of tsunami waves in coastal areas. The main idea of the presentation is the determination of tsunami source parameters using seismic data and observations of the tsunami waves on the shore, and the expansion and refinement of the database of presupposed tsunami sources for rapid and accurate prediction of hazards and assessment of risks and consequences. We also present 3D visualization of real-time tsunami wave propagation and loss assessment, characterizing the nature of the building stock in cities at risk, and monitoring by satellite images using the modern GIS technology ITRIS (Integrated Tsunami Research and Information System) developed by WAPMERR and Informap Ltd. Special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. The most suitable physical models for the simulation of tsunamis are based on the shallow water equations. We consider the initial-boundary value problem in Ω := {(x, y) ∈ R² : x ∈ (0, Lx), y ∈ (0, Ly), Lx, Ly > 0} for the well-known linear shallow water equations in the Cartesian coordinate system, written in terms of the liquid flow components in dimensional form. Here η(x, y, t) denotes the vertical displacement of the free water surface, i.e. the amplitude of the tsunami wave, and q(x, y) is the initial amplitude of the tsunami wave. The lateral boundary is assumed to be a non-reflecting boundary of the domain, that is, it allows the free passage of the propagating waves. Assume that the free-surface oscillation data at points (xm, ym) are given as measured output data from tsunami records: fm(t) := η(xm, ym, t), (xm
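
    The equations themselves did not survive text extraction in the record above. For reference only, the standard linearized shallow-water system over a water depth H(x, y), of the kind the abstract refers to, can be written as follows (this is the textbook form with the initial condition η = q, not necessarily the authors' exact formulation):

      \[
      \begin{aligned}
        \frac{\partial \eta}{\partial t} + \frac{\partial (H u)}{\partial x} + \frac{\partial (H v)}{\partial y} &= 0, \\
        \frac{\partial u}{\partial t} + g\,\frac{\partial \eta}{\partial x} &= 0, \qquad
        \frac{\partial v}{\partial t} + g\,\frac{\partial \eta}{\partial y} = 0, \\
        \eta(x, y, 0) &= q(x, y), \qquad u(x, y, 0) = v(x, y, 0) = 0,
      \end{aligned}
      \]

    where u(x, y, t) and v(x, y, t) are the flow components, g is the gravitational acceleration, and the non-reflecting condition mentioned in the abstract is imposed on the lateral boundary of Ω.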

  3. Visual field

    Science.gov (United States)

    ... your visual field. How the Test is Performed: Confrontation visual field exam. This is a quick and ...

  4. Data visualization

    CERN Document Server

    Azzam, Tarek

    2013-01-01

    Do you communicate data and information to stakeholders? In Part 1, we introduce recent developments in the quantitative and qualitative data visualization field and provide a historical perspective on data visualization, its potential role in evaluation practice, and future directions. Part 2 delivers concrete suggestions for optimally using data visualization in evaluation, as well as suggestions for best practices in data visualization design. It focuses on specific quantitative and qualitative data visualization approaches that include data dashboards, graphic recording, and geographic information systems (GIS). Readers will get a step-by-step process for designing an effective data dashboard system for programs and organizations, and various suggestions to improve their utility.

  5. Visual Literacy and Visual Thinking.

    Science.gov (United States)

    Hortin, John A.

    It is proposed that visual literacy be defined as the ability to understand (read) and use (write) images and to think and learn in terms of images. This definition includes three basic principles: (1) visuals are a language and thus analogous to verbal language; (2) a visually literate person should be able to understand (read) images and use…

  6. Visual Literacy and Visual Culture.

    Science.gov (United States)

    Messaris, Paul

    Familiarity with specific images or sets of images plays a role in a culture's visual heritage. Two questions can be asked about this type of visual literacy: Is this a type of knowledge that is worth building into the formal educational curriculum of our schools? What are the educational implications of visual literacy? There is a three-part…

  7. The integration of temporally shifted visual feedback in a synchronization task: The role of perceptual stability in a visuo-proprioceptive conflict situation.

    Science.gov (United States)

    Ceux, Tanja; Montagne, Gilles; Buekers, Martinus J

    2010-12-01

    The present study examined whether the beneficial role of coherently grouped visual motion structures for performing complex (interlimb) coordination patterns can be generalized to synchronization behavior in a visuo-proprioceptive conflict situation. To achieve this goal, 17 participants had to synchronize a self-moved circle, representing the arm movement, with a visual target signal corresponding to five temporally shifted visual feedback conditions (0%, 25%, 50%, 75%, and 100% of the target cycle duration) in three synchronization modes (in-phase, anti-phase, and intermediate). The results showed that the perception of a newly generated perceptual Gestalt between the visual feedback of the arm and the target signal facilitated the synchronization performance in the preferred in-phase synchronization mode in contrast to the less stable anti-phase and intermediate mode. Our findings suggest that the complexity of the synchronization mode defines to what extent the visual and/or proprioceptive information source affects the synchronization performance in the present unimanual synchronization task. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. [Intraoperative multidimensional visualization].

    Science.gov (United States)

    Sperling, J; Kauffels, A; Grade, M; Alves, F; Kühn, P; Ghadimi, B M

    2016-12-01

    Modern intraoperative techniques of visualization are increasingly being applied in general and visceral surgery. The combination of diverse techniques provides the possibility of multidimensional intraoperative visualization of specific anatomical structures. Thus, it is possible to differentiate between normal tissue and tumor tissue and therefore exactly define tumor margins. The aim of intraoperative visualization of tissue that is to be resected and tissue that should be spared is to lead to a rational balance between oncological and functional results. Moreover, these techniques help to analyze the physiology and integrity of tissues. Using these methods surgeons are able to analyze tissue perfusion and oxygenation. However, to date it is not clear to what extent these imaging techniques are relevant in the clinical routine. The present manuscript reviews the relevant modern visualization techniques focusing on intraoperative computed tomography and magnetic resonance imaging as well as augmented reality, fluorescence imaging and optoacoustic imaging.

  9. HI-VISUAL: A language supporting visual interaction in programming

    International Nuclear Information System (INIS)

    Monden, N.; Yoshino, Y.; Hirakawa, M.; Tanaka, M.; Ichikawa, T.

    1984-01-01

    This paper presents a language named HI-VISUAL which supports visual interaction in programming. Following a brief description of the language concept, the icon semantics and language primitives characterizing HI-VISUAL are extensively discussed. HI-VISUAL also shows system extendability, providing the possibility of organizing a high-level application system as an integration of several existing subsystems, and will serve for developing systems in various fields of application, supporting simple and efficient interactions between programmer and computer. In this paper, the authors have presented a language named HI-VISUAL. Following a brief description of the language concept, the icon semantics and language primitives characterizing HI-VISUAL were extensively discussed

  10. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    Directory of Open Access Journals (Sweden)

    Sebastian McBride

    Full Text Available Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.

  11. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    Science.gov (United States)

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
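
    A minimal numerical sketch of requirements 2 and 4-6 above (a priority map that converges bottom-up and top-down signals, a saccade threshold, task relevance from excitation and inhibition, and inhibition of return) is given below. It is an illustration of the general scheme only, with made-up arrays and weights, not the authors' robotic implementation; for simplicity, relevance is expressed as excitation minus inhibition rather than a ratio.

      # Minimal sketch of a priority map combining bottom-up salience with top-down bias,
      # plus inhibition of return and a saccade threshold. All values are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      salience = rng.random((32, 32))          # bottom-up feature conspicuity (egocentric map)
      task_bias = np.zeros((32, 32))
      task_bias[8:12, 20:24] = 1.0             # top-down excitation for a task-relevant region
      inhibition_of_return = np.zeros((32, 32))

      SACCADE_THRESHOLD = 1.2                  # requirement 5: threshold to elicit a saccade

      for step in range(5):
          # Requirements 4 and 6: converge signals into a single priority map.
          priority = salience + 0.8 * task_bias - inhibition_of_return
          y, x = np.unravel_index(np.argmax(priority), priority.shape)
          if priority[y, x] < SACCADE_THRESHOLD:
              break                            # nothing salient/relevant enough to fixate
          print(f"saccade {step}: fixate ({x}, {y}), priority {priority[y, x]:.2f}")
          inhibition_of_return[y, x] += 1.0    # requirement 2: medium-term inhibition of return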

  12. Visual Constructive and Visual-Motor Skills in Deaf Native Signers

    Science.gov (United States)

    Hauser, Peter C.; Cohen, Julie; Dye, Matthew W. G.; Bavelier, Daphne

    2007-01-01

    Visual constructive and visual-motor skills in the deaf population were investigated by comparing performance of deaf native signers (n = 20) to that of hearing nonsigners (n = 20) on the Beery-Buktenica Developmental Test of Visual-Motor Integration, Rey-Osterrieth Complex Figure Test, Wechsler Memory Scale Visual Reproduction subtest, and…

  13. Ultrascale Visualization of Climate Data

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Doutriaux, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Sean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Ross G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pugmire, Dave [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Steed, Chad A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Childs, Hank [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Krishnan, Harinarayan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Silva, Claudio T. [New York University, New York, NY (United States). Center for Urban Sciences; Santos, Emanuele [Universidade Federal do Ceara, Ceara (Brazil); Koop, David [New York University, New York, NY (United States); Ellqvist, Tommy [New York University, New York, NY (United States); Poco, Jorge [Polytechnic Institute of New York University, New York, NY (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States); Chaudhary, Aashish [Kitware Inc., Clifton Park, NY (United States); Bauer, Andy [Kitware Inc., Clifton Park, NY (United States); Pletzer, Alexander [Tech-X Corporation, Boulder, CO (United States); Kindig, Dave [Tech-X Corporation, Boulder, CO (United States); Potter, Gerald [National Aeronautics and Space Administration (NASA), Washington, DC (United States); Maxwell, Thomas P. [National Aeronautics and Space Administration (NASA), Washington, DC (United States)

    2013-09-01

    To support interactive visualization and analysis of complex, large-scale climate data sets, UV-CDAT integrates a powerful set of scientific computing libraries and applications to foster more efficient knowledge discovery. Connected through a provenance framework, the UV-CDAT components can be loosely coupled for fast integration or tightly coupled for greater functionality and communication with other components. This framework addresses many challenges in the interactive visual analysis of distributed large-scale data for the climate community.
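
    UV-CDAT bundles its own tool chain; purely as a generic illustration of the kind of task it supports (reading a NetCDF climate field and producing a quick-look map), the sketch below uses xarray and matplotlib. The file and variable names are hypothetical, and this is not UV-CDAT code.

      # Generic quick-look of a NetCDF climate field (not UV-CDAT; names are hypothetical).
      import xarray as xr
      import matplotlib.pyplot as plt

      ds = xr.open_dataset("tas_Amon_model_historical.nc")   # hypothetical CMIP-style file
      tas = ds["tas"]                                        # near-surface air temperature

      tas.mean(dim="time").plot()                            # time-mean map on default axes
      plt.title("Time-mean near-surface air temperature")
      plt.savefig("tas_mean.png", dpi=150)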

  14. Visualization Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Evaluates and improves the operational effectiveness of existing and emerging electronic warfare systems. By analyzing and visualizing simulation results...

  15. Distributed Visualization

    Data.gov (United States)

    National Aeronautics and Space Administration — Distributed Visualization allows anyone, anywhere, to see any simulation, at any time. Development focuses on algorithms, software, data formats, data systems and...

  16. Visual Impairment

    Science.gov (United States)

    KidsHealth / For Teens / Visual Impairment ...

  17. Visual attention

    NARCIS (Netherlands)

    Evans, K.K.; Horowitz, T.S.; Howe, P.; Pedersini, R.; Reijnen, E.; Pinto, Y.; Wolfe, J.M.

    2011-01-01

    A typical visual scene we encounter in everyday life is complex and filled with a huge amount of perceptual information. The term, ‘visual attention’ describes a set of mechanisms that limit some processing to a subset of incoming stimuli. Attentional mechanisms shape what we see and what we can act

  18. Early vision and visual attention

    Directory of Open Access Journals (Sweden)

    Gvozdenović Vasilije P.

    2003-01-01

    Full Text Available The question of whether visual perception is spontaneous and sudden, or runs through several phases mediated by higher cognitive processes, has been raised ever since the early work of the Gestalt psychologists. In the early 1980s, Treisman proposed the feature integration theory of attention (FIT), based on the findings of neuroscience. Soon after she published her theory, a new scientific approach appeared, investigating several visual perception phenomena. The most widely researched were the key constructs of FIT, such as the types of visual search and the role of attention. The following review describes the main studies of early vision and visual attention.

  19. CMS tracker visualization tools

    CERN Document Server

    Zito, G; Osborne, I; Regano, A

    2005-01-01

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each of which is a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  20. CMS tracker visualization tools

    Energy Technology Data Exchange (ETDEWEB)

    Mennea, M.S. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Osborne, I. [Northeastern University, 360 Huntington Avenue, Boston, MA 02115 (United States); Regano, A. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Zito, G. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy)]. E-mail: giuseppe.zito@ba.infn.it

    2005-08-21

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each of which is a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  1. Visualization of wind farms

    International Nuclear Information System (INIS)

    Pahlke, T.

    1994-01-01

    With the increasing number of wind energy installations the visual impact of single wind turbines or wind parks is a growing problem for landscape preservation, leading to resistance of local authorities and nearby residents against wind energy projects. To increase acceptance and to form a basis for planning considerations, it is necessary to develop instruments for the visualization of planned wind parks, showing their integration in the landscape. Photorealistic montages and computer animation including video sequences may be helpful in 'getting the picture'. (orig.)

  2. CMS tracker visualization tools

    International Nuclear Information System (INIS)

    Mennea, M.S.; Osborne, I.; Regano, A.; Zito, G.

    2005-01-01

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each of which is a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  3. Quantized Visual Awareness

    Directory of Open Access Journals (Sweden)

    W Alexander Escobar

    2013-11-01

    Full Text Available The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say that visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons in at least the human striate cortex. Circuits with specific topologies will reproducibly result in visual awareness that corresponds to basic aspects of vision like color, motion and depth. These quanta of awareness (qualia) are produced by the feedforward sweep that occurs through the geniculocortical pathway but are not integrated into a conscious experience until recurrent processing from centers like V4 or V5 selects the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits, and these likely exist across the kingdom Animalia. Thus establishing qualia as the fundamental nature of visual awareness will not only provide a deeper understanding of awareness, but also allow for a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom.

  4. Visual cognition

    Science.gov (United States)

    Cavanagh, Patrick

    2011-01-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated part, of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719

  5. Visual cognition.

    Science.gov (United States)

    Cavanagh, Patrick

    2011-07-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label "visual cognition" is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated part, of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Incremental Visualizer for Visible Objects

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    This paper discusses the integration of a database back-end and a visualizer front-end into one tightly coupled system. The main aim is to reduce the data pipeline from database to visualization by using incremental data extraction of visible objects in fly-through scenarios. We also argue that passing only relevant data from the database will substantially reduce the overall load of the visualization system. We propose the system Incremental Visualizer for Visible Objects (IVVO), which considers visible objects and enables incremental visualization along the observer movement path. IVVO is a novel solution which allows data to be visualized and loaded on the fly from the database and which regards visibilities of objects. We run a set of experiments to show that IVVO is feasible in terms of I/O operations and CPU load. We consider the example of data which uses...
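
    The core idea (deliver from the database only the objects that become newly visible as the observer moves, instead of reloading the full scene) can be sketched in a few lines. The in-memory "database" and distance-based visibility rule below are simplifications for illustration, not IVVO's actual implementation.

      # Minimal sketch of incremental extraction of visible objects along an observer path.
      # The list-based "database" and the distance-based visibility test are simplifications.
      import math

      objects = [  # (object id, x, y) -- stand-in for rows in a spatial database
          (1, 0.0, 0.0), (2, 5.0, 1.0), (3, 20.0, 3.0), (4, 9.0, -2.0), (5, 40.0, 0.0),
      ]
      VISIBILITY_RADIUS = 10.0
      delivered = set()   # ids already sent to the visualizer front-end

      def newly_visible(observer_x, observer_y):
          """Return only the objects that became visible since the last query."""
          increment = []
          for oid, x, y in objects:
              if oid in delivered:
                  continue
              if math.hypot(x - observer_x, y - observer_y) <= VISIBILITY_RADIUS:
                  delivered.add(oid)
                  increment.append(oid)
          return increment

      for pos in [(0.0, 0.0), (10.0, 0.0), (35.0, 0.0)]:   # fly-through path
          print(f"at {pos}: load objects {newly_visible(*pos)}")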

  7. Architecture for Teraflop Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Breckenridge, A.R.; Haynes, R.A.

    1999-04-09

    Sandia Laboratories' computational scientists are addressing a very important question: How do we get insight from the human combined with the computer-generated information? The answer inevitably leads to using scientific visualization. Going one technology leap further is teraflop visualization, where the computing model and interactive graphics are an integral whole to provide computing for insight. In order to implement our teraflop visualization architecture, all hardware installed or software coded will be based on open modules and dynamic extensibility principles. We will illustrate these concepts with examples in our three main research areas: (1) authoring content (the computer), (2) enhancing precision and resolution (the human), and (3) adding behaviors (the physics).

  8. Visual cognition

    Energy Technology Data Exchange (ETDEWEB)

    Pinker, S.

    1985-01-01

    This book consists of essays covering issues in visual cognition presenting experimental techniques from cognitive psychology, methods of modeling cognitive processes on computers from artificial intelligence, and methods of studying brain organization from neuropsychology. Topics considered include: parts of recognition; visual routines; upward direction; mental rotation, and discrimination of left and right turns in maps; individual differences in mental imagery, computational analysis and the neurological basis of mental imagery: componental analysis.

  9. Web-based Data Visualization of the MGClimDeX Climate Model Output: An Integrated Perspective of Climate Change Impact on Natural Resources in Highly Vulnerable Regions.

    Science.gov (United States)

    Martinez-Rey, J.; Brockmann, P.; Cadule, P.; Nangini, C.

    2016-12-01

    Earth System Models allow us to understand the interactions between climate and biogeological processes. These models generate very large amounts of data, which are usually reduced to a small number of static figures shown in highly specialized scientific publications. However, the potential impacts of climate change demand a broader perspective on the ways in which climate model results of this kind are disseminated, particularly in the amount and variety of data and in the target audience. This issue is of great importance for scientific projects that seek broad dissemination of their key results to different audiences. The MGClimDeX project, which assesses the climate change impact on La Martinique island in the Lesser Antilles, will provide tools and means to help the key stakeholders (those responsible for addressing the critical social, economic, and environmental issues) take the appropriate adaptation and mitigation measures in order to prevent future risks associated with climate variability and change and their effects on human activities. The MGClimDeX project will do so by using model output and data visualization techniques within the next year, showing the cross-connected impacts of climate change on various sectors (agriculture, forestry, ecosystems, water resources and fisheries). To address the challenge of representing large sets of model output, we use back-end data processing and front-end web-based visualization techniques, going from conventional netCDF model output stored on hub servers to highly interactive, data-powered visualizations in the browser. We use the well-known JavaScript library D3.js extended with DC.js (a dimensional charting library used for all the front-end interactive filtering), in combination with Bokeh, a Python library used to synthesize the data, all framed in the essential HTML+CSS scripts. The resulting websites exist as standalone information units or embedded into journals or scientific
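
    As a minimal illustration of the front-end stack named above, the sketch below uses Bokeh (the Python library mentioned in the abstract) to emit a small standalone interactive HTML chart. The data are placeholders; this is not the MGClimDeX code.

      # Minimal Bokeh sketch: write a standalone interactive HTML chart (placeholder data).
      from bokeh.plotting import figure, output_file, save

      years = list(range(2000, 2021))
      anomaly = [0.02 * (y - 2000) for y in years]      # placeholder temperature anomaly

      p = figure(title="Placeholder temperature anomaly", x_axis_label="year",
                 y_axis_label="anomaly (K)", width=600, height=300)
      p.line(years, anomaly, line_width=2)
      p.scatter(years, anomaly, size=5)

      output_file("anomaly.html")                       # self-contained HTML with embedded JS
      save(p)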

  10. Virtual reality devices integration in scientific visualization software in the VtkVRPN framework; Integration de peripheriques de realite virtuelle dans des applications de visualisation scientifique au sein de la plate-forme VtkVRPN

    Energy Technology Data Exchange (ETDEWEB)

    Journe, G.; Guilbaud, C

    2005-07-01

    High-quality scientific visualization software relies on ergonomic navigation and exploration, which are essential for efficient data analysis. To help address this issue, management of virtual reality devices has been developed inside the CEA 'VtkVRPN' framework. This framework is based on VTK, a 3D graphics library, and VRPN, a virtual reality device management library. This document describes the developments done during a post-graduate training course. (authors)
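
    The VtkVRPN framework itself is not documented in this record; as background only, the plain VTK rendering pipeline it builds on looks like the minimal Python sketch below (a cone source rendered in an interactive window). The VRPN device handling is deliberately omitted.

      # Minimal plain-VTK pipeline (source -> mapper -> actor -> renderer -> window).
      # Background illustration only; it does not include the VRPN device integration.
      import vtk

      cone = vtk.vtkConeSource()
      cone.SetResolution(32)

      mapper = vtk.vtkPolyDataMapper()
      mapper.SetInputConnection(cone.GetOutputPort())

      actor = vtk.vtkActor()
      actor.SetMapper(mapper)

      renderer = vtk.vtkRenderer()
      renderer.AddActor(actor)
      renderer.SetBackground(0.1, 0.1, 0.2)

      window = vtk.vtkRenderWindow()
      window.AddRenderer(renderer)
      window.SetSize(640, 480)

      interactor = vtk.vtkRenderWindowInteractor()
      interactor.SetRenderWindow(window)
      interactor.Initialize()
      window.Render()
      interactor.Start()                       # mouse-driven interactive navigation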

  11. Storytelling and Visualization: An Extended Survey

    OpenAIRE

    Chao Tong; Richard Roberts; Rita Borgo; Sean Walton; Robert S. Laramee; Kodzo Wegba; Aidong Lu; Yun Wang; Huamin Qu; Qiong Luo; Xiaojuan Ma

    2018-01-01

    Throughout history, storytelling has been an effective way of conveying information and knowledge. In the field of visualization, storytelling is rapidly gaining momentum and evolving cutting-edge techniques that enhance understanding. Many communities have commented on the importance of storytelling in data visualization. Storytellers are integrating complex visualizations into their narratives in growing numbers. In this paper, we present a survey of storytelling literature in visual...

  12. Visualizing water

    Science.gov (United States)

    Baart, F.; van Gils, A.; Hagenaars, G.; Donchyts, G.; Eisemann, E.; van Velzen, J. W.

    2016-12-01

    A compelling visualization is captivating, beautiful and narrative. Here we show how melding the skills of computer graphics, art, statistics, and environmental modeling can be used to generate innovative, attractive and very informative visualizations. We focus on the topic of visualizing forecasts and measurements of water (water level, waves, currents, density, and salinity). For the field of computer graphics and the arts, water is an important topic because it occurs in many natural scenes. For environmental modeling and statistics, water is an important topic because water is essential for transport, a healthy environment, fruitful agriculture, and a safe environment. The different disciplines take different approaches to visualizing water. In computer graphics, one focuses on creating water that looks as realistic as possible. This focus on realistic perception (versus the focus on physical balance pursued by environmental scientists) has resulted in fascinating renderings, as seen in recent games and movies. Visualization techniques for statistical results have benefited from advances in design and journalism, resulting in enthralling infographics. The field of environmental modeling has absorbed advances in contemporary cartography, as seen in the latest interactive data-driven maps. We systematically review the designs of emerging types of water visualizations. The examples that we analyze range from dynamically animated forecasts, interactive paintings, infographics, and modern cartography to web-based photorealistic rendering. By characterizing the intended audience, the design choices, the scales (e.g. time, space), and the explorability, we provide a set of guidelines and genres. The unique contributions of the different fields show how innovations in the current state of the art of water visualization have benefited from inter-disciplinary collaborations.

  13. AFM visualization of sub-50nm polyplex disposition to the nuclear pore complex without compromising the integrity of the nuclear envelope

    DEFF Research Database (Denmark)

    Andersen, Helene; Parhamifar, Ladan; Hunter, A Christy

    2016-01-01

    that were microinjected into the oocytes of Xenopus laevis, as an example of a non-dividing cell, is exclusive to the nuclear pore complex (NPC). AFM images show NPCs clogged only with sub-50nm polyplexes. This mode of disposition neither altered the morphology/integrity of the nuclear membrane nor the NPC...

  14. VisComposer: A Visual Programmable Composition Environment for Information Visualization

    Directory of Open Access Journals (Sweden)

    Honghui Mei

    2018-03-01

    Full Text Available As the amount of data being collected has increased, the need for tools that enable the visual exploration of data has also grown. This has led to the development of a variety of widely used programming frameworks for information visualization. Unfortunately, such frameworks demand comprehensive visualization and coding skills and require users to develop visualizations from scratch. An alternative is to create interactive visualization design environments that require little to no programming. However, these tools support only a small portion of visual forms. We present a programmable integrated development environment (IDE), VisComposer, that supports the development of expressive visualizations using a drag-and-drop visual interface. VisComposer exposes programmability by letting users customize desired components within a modularized visualization composition pipeline, effectively bridging the capability gap between expert coders and visualization artists. The implemented system empowers users to compose comprehensive visualizations with real-time preview and optimization features, and supports prototyping, sharing and reuse of the effects by means of an intuitive visual composer. Visual programming and textual programming integrated in our system allow users to compose more complex visual effects while retaining simplicity of use. We demonstrate the performance of VisComposer with a variety of examples and an informal user evaluation. Keywords: Information Visualization, Visualization authoring, Interactive development environment

  15. Visual comparison for information visualization

    KAUST Repository

    Gleicher, M.; Albers, D.; Walker, R.; Jusufi, I.; Hansen, C. D.; Roberts, J. C.

    2011-01-01

    Data analysis often involves the comparison of complex objects. With the ever increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we see that all designs are assembled from the building blocks of juxtaposition, superposition and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools. © The Author(s) 2011.

  16. Visual comparison for information visualization

    KAUST Repository

    Gleicher, M.

    2011-09-07

    Data analysis often involves the comparison of complex objects. With the ever increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we see that all designs are assembled from the building blocks of juxtaposition, superposition and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools. © The Author(s) 2011.
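
    The three building blocks named in the taxonomy above (juxtaposition, superposition, and explicit encoding) map directly onto familiar chart layouts. The sketch below illustrates the categories with matplotlib on two placeholder series; it is an illustration of the taxonomy, not the authors' software.

      # Illustrating the three comparison building blocks with two placeholder series:
      # superposition (overlaid), juxtaposition (side by side), explicit encoding (difference).
      import numpy as np
      import matplotlib.pyplot as plt

      x = np.linspace(0, 10, 200)
      a, b = np.sin(x), np.sin(x) + 0.3 * np.cos(2 * x)

      fig, axes = plt.subplots(1, 3, figsize=(12, 3))

      axes[0].plot(x, a)
      axes[0].plot(x, b)
      axes[0].set_title("superposition (overlay)")

      axes[1].plot(x, a)
      axes[1].set_title("juxtaposition (A)")      # paired with the next panel
      axes[2].plot(x, b, color="tab:orange")
      axes[2].set_title("juxtaposition (B)")

      fig.tight_layout()
      fig.savefig("comparison_designs.png", dpi=150)

      # Explicit encoding: plot the derived difference directly.
      plt.figure(figsize=(4, 3))
      plt.plot(x, b - a)
      plt.title("explicit encoding (B - A)")
      plt.tight_layout()
      plt.savefig("explicit_encoding.png", dpi=150)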

  17. Visual cognition

    Energy Technology Data Exchange (ETDEWEB)

    Pinker, S.

    1985-01-01

    This collection of research papers on visual cognition first appeared as a special issue of Cognition: International Journal of Cognitive Science. The study of visual cognition has seen enormous progress in the past decade, bringing important advances in our understanding of shape perception, visual imagery, and mental maps. Many of these discoveries are the result of converging investigations in different areas, such as cognitive and perceptual psychology, artificial intelligence, and neuropsychology. This volume is intended to highlight a sample of work at the cutting edge of this research area for the benefit of students and researchers in a variety of disciplines. The tutorial introduction that begins the volume is designed to help the nonspecialist reader bridge the gap between the contemporary research reported here and earlier textbook introductions or literature reviews.

  18. Visualizing Transformation

    DEFF Research Database (Denmark)

    Pedersen, Pia

    2012-01-01

    Transformation, defined as the step of extracting, arranging and simplifying data into visual form (M. Neurath, 1974), was developed in connection with ISOTYPE (International System Of TYpographic Picture Education) and might well be the most important legacy of Isotype to the field of graphic design. Recently transformation has attracted renewed interest because of the book The Transformer written by Robin Kinross and Marie Neurath. My on-going research project, summarized in this paper, identifies and depicts the essential principles of data visualization underlying the process of transformation with reference to Marie Neurath's sketches on the Bilston Project. The material has been collected at the Otto and Marie Neurath Collection housed at the University of Reading, UK. By using data visualization as a research method to look directly into the process of transformation, the project

  19. Small Unmanned Aircraft Systems Integration into the National Airspace System Visual-Line-of-Sight Human-in-the-Loop Experiment

    Science.gov (United States)

    Trujillo, Anna C.; Ghatas, Rania W.; Mcadaragh, Raymon; Burdette, Daniel W.; Comstock, James R.; Hempley, Lucas E.; Fan, Hui

    2015-01-01

    As part of the Unmanned Aircraft Systems (UAS) in the National Airspace System (NAS) project, research on integrating small UAS (sUAS) into the NAS was underway by a human-systems integration (HSI) team at the NASA Langley Research Center. Minimal to no research has been conducted on the safe, effective, and efficient manner in which to integrate these aircraft into the NAS. sUAS are defined as aircraft weighing 55 pounds or less. The objective of this human system integration team was to build a UAS Ground Control Station (GCS) and to develop a research test-bed and database that provides data, proof of concept, and human factors guidelines for GCS operations in the NAS. The objectives of this experiment were to evaluate the effectiveness and safety of flying sUAS in Class D and Class G airspace utilizing manual control inputs and voice radio communications between the pilot, mission control, and air traffic control. The design of the experiment included three sets of GCS display configurations, in addition to a hand-held control unit. The three different display configurations were VLOS, VLOS + Primary Flight Display (PFD), and VLOS + PFD + Moving Map (Map). Test subject pilots had better situation awareness of their vehicle position, altitude, airspeed, location over the ground, and mission track using the Map display configuration. This configuration allowed the pilots to complete the mission objectives with less workload, at the expense of having better situation awareness of other aircraft. The subjects were better able to see other aircraft when using the VLOS display configuration. However, their mission performance, as well as their ability to aviate and navigate, was reduced compared to runs that included the PFD and Map displays.

  20. IMP 2.0: a multi-species functional genomics portal for integration, visualization and prediction of protein functions and networks.

    Science.gov (United States)

    Wong, Aaron K; Krishnan, Arjun; Yao, Victoria; Tadych, Alicja; Troyanskaya, Olga G

    2015-07-01

    IMP (Integrative Multi-species Prediction), originally released in 2012, is an interactive web server that enables molecular biologists to interpret experimental results and to generate hypotheses in the context of a large cross-organism compendium of functional predictions and networks. The system provides biologists with a framework to analyze their candidate gene sets in the context of functional networks, expanding or refining their sets using functional relationships predicted from integrated high-throughput data. IMP 2.0 integrates updated prior knowledge and data collections from the last three years in the seven supported organisms (Homo sapiens, Mus musculus, Rattus norvegicus, Drosophila melanogaster, Danio rerio, Caenorhabditis elegans, and Saccharomyces cerevisiae) and extends function prediction coverage to include human disease. IMP identifies homologs with conserved functional roles for disease knowledge transfer, allowing biologists to analyze disease contexts and predictions across all organisms. Additionally, IMP 2.0 implements a new flexible platform for experts to generate custom hypotheses about biological processes or diseases, making sophisticated data-driven methods easily accessible to researchers. IMP does not require any registration or installation and is freely available for use at http://imp.princeton.edu. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Early vision and visual attention

    OpenAIRE

    Gvozdenović Vasilije P.

    2003-01-01

    The question of whether visual perception is spontaneous and sudden or proceeds through several phases mediated by higher cognitive processes has been raised ever since the early work of the Gestalt psychologists. In the early 1980s, Treisman proposed the feature integration theory of attention (FIT), based on findings from neuroscience. Soon after the theory was published, a new scientific approach emerged, investigating several visual perception phenomena. The most widely researched were the key constru...

  2. Visual-motor integration and fine motor skills at 6½ years of age and associations with neonatal brain volumes in children born extremely preterm in Sweden: a population-based cohort study.

    Science.gov (United States)

    Bolk, Jenny; Padilla, Nelly; Forsman, Lea; Broström, Lina; Hellgren, Kerstin; Åden, Ulrika

    2018-02-17

    This exploratory study aimed to investigate associations between neonatal brain volumes and visual-motor integration (VMI) and fine motor skills in children born extremely preterm (EPT) when they reached 6½ years of age. It was a prospective, population-based cohort study conducted in Stockholm, Sweden, over 3 years. Participants were all children born before a gestational age of 27 weeks during 2004-2007 in Stockholm, without major morbidities and impairments, who underwent MRI at term-equivalent age. Brain volumes were calculated using morphometric analyses in regions known to be involved in VMI and fine motor functions. VMI was assessed with the Beery-Buktenica Developmental Test of Visual-Motor Integration, sixth edition, and fine motor skills were assessed with the manual dexterity subtest of the Movement Assessment Battery for Children, second edition, at 6½ years. Associations between the brain volumes and VMI and fine motor skills were evaluated using partial correlation, adjusted for total cerebral parenchyma and sex. Among the 107 children born at a gestational age below 27 weeks, neonatal brain volumes were associated with VMI skills (r=0.54, P=0.01). Associations were also seen between fine motor skills and the volume of the cerebellum (r=0.42, P=0.02), brainstem (r=0.47, P=0.008) and grey matter (r=-0.38, P=0.04). Neonatal brain volumes in areas known to be involved in VMI and fine motor skills were associated with scores for these two functions when children born EPT without major brain lesions or cerebral palsy were evaluated at 6½ years of age. Establishing clear associations between early brain volume alterations and later VMI and/or fine motor skills could make early interventions possible. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
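
    The analysis step described above (partial correlation between a regional volume and a motor score, adjusted for total cerebral parenchyma and sex) can be sketched as follows; the variable names, simulated data and residual-based approach are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of a partial correlation: correlate the residuals of the two
# variables of interest after regressing out the covariates. Variable names
# (volumes, score, covariates) are placeholders, not the study's data.
import numpy as np

def partial_corr(x, y, covariates):
    """Pearson correlation of x and y after removing linear effects of covariates."""
    n = len(x)
    design = np.column_stack([np.ones(n), covariates])        # intercept + covariates
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n = 100
total_volume = rng.normal(1000, 80, n)           # e.g. total cerebral parenchyma
sex = rng.integers(0, 2, n).astype(float)
regional_volume = 0.1 * total_volume + rng.normal(0, 5, n)
motor_score = 0.5 * regional_volume + rng.normal(0, 5, n)

r = partial_corr(regional_volume, motor_score,
                 np.column_stack([total_volume, sex]))
print(f"partial r = {r:.2f}")
```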

  3. Visual attention.

    Science.gov (United States)

    Evans, Karla K; Horowitz, Todd S; Howe, Piers; Pedersini, Roccardo; Reijnen, Ester; Pinto, Yair; Kuzmova, Yoana; Wolfe, Jeremy M

    2011-09-01

    A typical visual scene we encounter in everyday life is complex and filled with a huge amount of perceptual information. The term, 'visual attention' describes a set of mechanisms that limit some processing to a subset of incoming stimuli. Attentional mechanisms shape what we see and what we can act upon. They allow for concurrent selection of some (preferably, relevant) information and inhibition of other information. This selection permits the reduction of complexity and informational overload. Selection can be determined both by the 'bottom-up' saliency of information from the environment and by the 'top-down' state and goals of the perceiver. Attentional effects can take the form of modulating or enhancing the selected information. A central role for selective attention is to enable the 'binding' of selected information into unified and coherent representations of objects in the outside world. In the overview on visual attention presented here we review the mechanisms and consequences of selection and inhibition over space and time. We examine theoretical, behavioral and neurophysiologic work done on visual attention. We also discuss the relations between attention and other cognitive processes such as automaticity and awareness. WIREs Cogni Sci 2011 2 503-514 DOI: 10.1002/wcs.127 For further resources related to this article, please visit the WIREs website. Copyright © 2011 John Wiley & Sons, Ltd.

  4. Visualizing Series

    Science.gov (United States)

    Unal, Hasan

    2008-01-01

    The importance of visualisation and multiple representations in mathematics has been stressed, especially in a context of problem solving. Hanna and Sidoli comment that "Diagrams and other visual representations have long been welcomed as heuristic accompaniments to proof, where they not only facilitate the understanding of theorems and their…

  5. Plant Measurement, Sampling and Analysis for Accountancy Purposes with Particular Reference to Separation Plants at Windscale; Mesures, Echantillonnages et Analyses en Usine a des Fins Comptables, Notamment dans les Installations de Separation de Windscale; Izmereniya, vzyatie obraztsov i analizy v tselyakh ucheta na opyte ustanovki razdeleniya radioizotopov v uindskejle; Medicion, Muestreo y Analisis para Fines Contables, Especialmente en las Plantas de Separacion de Windscale

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, A. S.; Elliott, F.; Powell, R.; Swinburn, K. A. [United Kingdom Atomic Energy Authority, Windscale and Calder Works, Cumberland (United Kingdom)

    1966-02-15

    requires to be corrected for. Precisions of all the methods are given and the methods actually used for the New Separation Plant are indicated. (author) [French abstract, translated to English:] All figures of interest to special-materials accountancy involve one or more of the following parameters: measurement, sampling and analysis. For plant measurement, weighing is preferred whenever possible; thus the calculation of the plutonium entering the New Separation Plant is based on the weight of the incoming uranium bars. The authors examine the various methods of volume measurement, namely (a) the pneumercator and (b) a radiotracer method using radiocaesium. The paper indicates the accepted accuracy and precision of each method. Batch solutions are sampled with vacuum pipettes after complete homogenization. For continuous operations requiring high accuracy, a continuously operating sampler has been designed and is used on the input circuit of the New Separation Plant. The usual sampling method at Windscale is to drill metal ingots; other solids are homogenized as far as possible (plutonium oxide, for example, is mixed in a conical Y-blender) before sampling. As regards chemical analysis, the precision required of a given method depends on the number of determinations made in each accounting period: an accurate but imprecise method demands a large number of analyses, and it may be more economical to reduce the number of determinations and increase their precision. The plutonium assay methods are examined in detail, including (a) radiochemistry, (b) colorimetry using thoronol, (c) separation of the plutonium complexed with EDTA and back-titration of the excess EDTA, (d

  6. Getting the picture: A mixed-methods inquiry into how visual representations are interpreted by students, incorporated within textbooks, and integrated into middle-school science classrooms

    Science.gov (United States)

    Lee, Victor Raymond

    Modern-day middle school science textbooks are heavily populated with colorful images, technical diagrams, and other forms of visual representations. These representations are commonly perceived by educators to be useful aids to support student learning of unfamiliar scientific ideas. However, as the number of representations in science textbooks has seemingly increased in recent decades, concerns have been voiced that many of these current representations actually undermine instructional goals; they may be introducing substantial conceptual and interpretive difficulties for students. To date, very little empirical work has been done to examine how the representations used in instructional materials have changed, and what influence these changes exert on student understanding. Furthermore, there has also been limited attention given to the extent to which current representational-use routines in science classrooms may mitigate or limit interpretive difficulties. This dissertation seeks to do three things: First, it examines the nature of the relationship between published representations and students' reasoning about the natural world. Second, it considers the ways in which representations are used in textbooks and how that has changed over a span of five decades. Third, this dissertation provides an in-depth look into how middle school science classrooms naturally use these visual representations and what kinds of support are being provided. With respect to the three goals of this dissertation, three pools of data were collected and analyzed for this study. First, interview data was collected in which 32 middle school students interpreted and reasoned with a set of more and less problematic published textbook representations. Quantitative analyses of the interview data suggest that, counter to what has been anticipated in the literature, there were no significant differences in the conceptualizations of students in the different groups. An accompanying

  7. Proposal of the visual inspection of the integrity of the storage cells of spent fuel from the nuclear power plant of Laguna Verde; Propuesta para la inspeccion visual de la integridad de las celdas de almacenamiento de combustible gastado de la Central Laguna Verde (CLV)

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez M, J. L.; Rivero G, T.; Merino C, F. J. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Santander C, L. E., E-mail: francisco.merino@inin.gob.mx [Comision Federal de Electricidad, Central Nucleoelectrica Laguna Verde, Carretera Veracruz-Medellin Km 7.5, Col. Dos Bocas, 94271 Medellin, Veracruz (Mexico)

    2015-09-15

    As part of the evaluation of the structural integrity of the components of nuclear plants, particularly those applying for life extension, it is necessary to carry out inspections and non-destructive tests to determine the condition they are in. In many cases these activities take place in areas with high levels of radiation and contamination that are difficult to access, so remotely operated equipment or robotic systems must be used. Among others, the frames and cells of the spent fuel storage pools are structures subject to a program of tests and inspections, and they become especially relevant because the nuclear power plant of Laguna Verde (NPP-LV) is processing the license to extend the operational life of its reactors. Among the non-destructive tests that can be used to verify the physical condition of the frames and storage cells is remote visual inspection, a test that allows the physical integrity of the components to be assessed with one or more video cameras designed for underwater environments with radiation. These cameras are used to identify and locate adverse conditions such as blisters, protuberances, pitting, cracks, stains or buckling, which could affect the three main functions for which the storage components are designed: to maintain the physical integrity of the spent fuel, to store it properly while guaranteeing its free insertion and removal, and to ensure that the store as a whole meets the criticality criterion that k{sub eff} remains below 0.95 throughout the life of the plant. This paper describes a proposal to carry out the visual inspection of the spent fuel storage cells of the NPP-LV using a probe that includes one or more video cameras together with their recorder and the corresponding control program. Based on the results obtained, the nuclear power plant personnel can make decisions regarding remedial actions or apply complementary methods to verify that the cells and frames have not lost their physical integrity, or in particular that the cover

  8. SU-F-I-19: MRI Positive Contrast Visualization of Prostate Brachytherapy Seeds Using An Integrated Laplacian-Based Phase Processing

    Energy Technology Data Exchange (ETDEWEB)

    Soliman, A; Safigholi, H [Sunnybrook Research Institute, Toronto, ON (Canada); Sunnybrook Health Sciences Center, Toronto, ON (Canada); Nosrati, R [Sunnybrook Health Sciences Center, Toronto, ON (Canada); Ryerson University, Toronto, ON (Canada); Owrangi, A; Morton, G [Sunnybrook Health Sciences Center, Toronto, ON (Canada); University of Toronto, Toronto, ON (Canada); Song, W [Sunnybrook Research Institute, Toronto, ON (Canada); Sunnybrook Health Sciences Center, Toronto, ON (Canada); Ryerson University, Toronto, ON (Canada); University of Toronto, Toronto, ON (Canada)

    2016-06-15

    Purpose: To propose a new method that provides a positive contrast visualization of the prostate brachytherapy seeds using the phase information from MR images. Additionally, the feasibility of using the processed phase information to distinguish seeds from calcifications is explored. Methods: A gel phantom was constructed using 2% agar dissolved in 1 L of distilled water. Contrast agents were added to adjust the relaxation times. Four iodine-125 (Eckert & Ziegler SML86999) dummy seeds were placed at different orientations with respect to the main magnetic field (B0). Calcifications were obtained from a sheep femur cortical bone due to its close similarity to human bone tissue composition. Five samples of calcifications were shaped into different dimensions with lengths ranging between 1.2 – 6.1 mm.MR imaging was performed on a 3T Philips Achieva using an 8-channel head coil. Eight images were acquired at eight echo-times using a multi-gradient echo sequence. Spatial resolution was 0.7 × 0.7 × 2 mm, TR/TE/dTE = 20.0/2.3/2.3 ms and BW = 541 Hz/pixel. Complex images were acquired and fed into a two-step processing pipeline: the first includes phase unwrapping and background phase removal using Laplacian operator (Wei et al. 2013). The second step applies a specific phase mask on the resulting tissue phase from the first step to provide the desired positive contrast of the seeds and to, potentially, differentiate them from the calcifications. Results: The phase-processing was performed in less than 30 seconds. The proposed method has successfully resulted in a positive contrast of the brachytherapy seeds. Additionally, the final processed phase image showed difference between the appearance of seeds and calcifications. However, the shape of the seeds was slightly distorted compared to the original dimensions. Conclusion: It is feasible to provide a positive contrast of the seeds from MR images using Laplacian operator-based phase processing.
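
    A generic sketch of the Laplacian-based phase-unwrapping step named in this pipeline is given below; it follows the common FFT formulation rather than the authors' implementation, and the background-removal and phase-mask steps are omitted.

```python
# Hedged sketch of Laplacian-based phase unwrapping in the spirit of the pipeline
# described above (the abstract cites Wei et al. 2013; this is a generic FFT
# implementation of the idea, not the authors' code). Background-field removal
# and the final positive-contrast phase mask are omitted.
import numpy as np

def _k2(shape):
    ky = np.fft.fftfreq(shape[0]) * 2 * np.pi
    kx = np.fft.fftfreq(shape[1]) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    return KX**2 + KY**2

def laplacian(f):
    return np.real(np.fft.ifft2(-_k2(f.shape) * np.fft.fft2(f)))

def inverse_laplacian(g):
    k2 = _k2(g.shape)
    k2[0, 0] = 1.0                       # placeholder to avoid division by zero
    G = np.fft.fft2(g) / -k2
    G[0, 0] = 0.0                        # the mean (DC term) is undefined; set to zero
    return np.real(np.fft.ifft2(G))

def unwrap_phase_laplacian(wrapped):
    # Estimate the Laplacian of the true phase from the wrapped phase,
    # then invert the Laplacian to recover an unwrapped estimate
    # (up to a constant, with periodic-boundary artifacts at the edges).
    lap_true = (np.cos(wrapped) * laplacian(np.sin(wrapped))
                - np.sin(wrapped) * laplacian(np.cos(wrapped)))
    return inverse_laplacian(lap_true)

# Toy example: a smooth quadratic phase that exceeds the [-pi, pi) range.
y, x = np.mgrid[-64:64, -64:64] / 64.0
true_phase = 8 * (x**2 + y**2)
wrapped = np.angle(np.exp(1j * true_phase))
estimate = unwrap_phase_laplacian(wrapped)
print("range of wrapped phase:   ", wrapped.min(), wrapped.max())
print("range of unwrapped result:", estimate.min(), estimate.max())
```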

  9. Sledge-Hammer Integration

    Science.gov (United States)

    Ahner, Henry

    2009-01-01

    Integration (here visualized as a pounding process) is mathematically realized by simple transformations, successively smoothing the bounding curve into a straight line and the region-to-be-integrated into an area-equivalent rectangle. The relationship to Riemann sums, and to the trapezoid and midpoint methods of numerical integration, is…
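
    For reference, the trapezoid and midpoint methods mentioned above are easy to state concretely; the following sketch uses an arbitrary test integral.

```python
# Midpoint and trapezoid rules, the two numerical-integration methods the
# abstract relates to its smoothing picture. Function and interval are
# arbitrary illustrative choices.
import numpy as np

def midpoint_rule(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    mids = 0.5 * (x[:-1] + x[1:])
    return np.sum(f(mids)) * (b - a) / n

def trapezoid_rule(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

f = np.sin
exact = 1 - np.cos(2.0)            # integral of sin(x) on [0, 2]
for n in (4, 16, 64):
    m = midpoint_rule(f, 0.0, 2.0, n)
    t = trapezoid_rule(f, 0.0, 2.0, n)
    print(f"n={n:3d}  midpoint err={m - exact:+.2e}  trapezoid err={t - exact:+.2e}")
```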

  10. Visual Storytelling

    OpenAIRE

    Huang, Ting-Hao; Ferraro, Francis; Mostafazadeh, Nasrin; Misra, Ishan; Agrawal, Aishwarya; Devlin, Jacob; Girshick, Ross; He, Xiaodong; Kohli, Pushmeet; Batra, Dhruv; Zitnick, C. Lawrence; Parikh, Devi; Vanderwende, Lucy; Galley, Michel

    2016-01-01

    We introduce the first dataset for sequential vision-to-language, and explore how this data may be used for the task of visual storytelling. The first release of this dataset, SIND v.1, includes 81,743 unique photos in 20,211 sequences, aligned to both descriptive (caption) and story language. We establish several strong baselines for the storytelling task, and motivate an automatic metric to benchmark progress. Modelling concrete description as well as figurative and social language, as prov...

  11. Flow visualization

    International Nuclear Information System (INIS)

    Weinstein, L.M.

    1991-01-01

    Flow visualization techniques are reviewed, with particular attention given to those applicable to liquid helium flows. Three techniques capable of obtaining qualitative and quantitative measurements of complex 3D flow fields are discussed, including focusing schlieren, particle image velocimetry, and holocinematography (HCV). It is concluded that HCV appears to be uniquely capable of obtaining full time-varying, 3D velocity field data, but it is limited to the low speeds typical of liquid helium facilities. 8 refs

  12. Normative data set identifying properties of the macula across age groups: integration of visual function and retinal structure with microperimetry and spectral-domain optical coherence tomography.

    Science.gov (United States)

    Sabates, Felix N; Vincent, Ryan D; Koulen, Peter; Sabates, Nelson R; Gallimore, Gary

    2011-01-01

    A normative database of functional and structural parameters of the macula from normal subjects was established to identify reference points for the diagnosis of patients with macular disease using microperimetry and scanning laser ophthalmoscope/spectral-domain optical coherence tomography (SD-OCT). This was a community-based, prospective, cross-sectional study of 169 eyes from subjects aged 21 years to 85 years with best-corrected visual acuity of 20/25 or better and without any ocular disease. Full-threshold macular microperimetry combined with the acquisition of structural parameters of the macula with scanning laser ophthalmoscope/SD-OCT was recorded (SD-OCT/scanning laser ophthalmoscope with add-on Microperimetry module; OPKO). Fixation, central, subfield, and mean retinal thickness were acquired together with macular sensitivity function. Thickness and sensitivity as primary outcome measures were mapped and superimposed correlating topographically differentiated macular thickness with sensitivity. Statistical evaluation was performed with age, gender, and ethnicity as covariates. Subfield and mean retinal thickness and sensitivity were measured with macular microperimetry combined with SD-OCT and differentiated by macular topography and subjects' age, gender, and ethnicity. Mean retinal sensitivity and thickness were calculated for 169 healthy eyes (mean age, 48 ± 17 years). A statistically significant decrease in sensitivity was found only in the age group of participants ≥ 70 years and in peripheral portions of the macula in individuals aged ≥60 years and was more pronounced in the area surrounding the fovea than in the center of the macula, while retinal thickness did not change with age. No statistically significant differences in the primary outcome measures or their correlations were found when using gender or ethnicity as a covariate. A database for normal macular thickness and sensitivity was generated with a combined microperimetry SD

  13. Engineering visualization utilizing advanced animation

    Science.gov (United States)

    Sabionski, Gunter R.; Robinson, Thomas L., Jr.

    1989-01-01

    Engineering visualization is the use of computer graphics to depict engineering analysis and simulation in visual form from project planning through documentation. Graphics displays let engineers see data represented dynamically which permits the quick evaluation of results. The current state of graphics hardware and software generally allows the creation of two types of 3D graphics. The use of animated video as an engineering visualization tool is presented. The engineering, animation, and videography aspects of animated video production are each discussed. Specific issues include the integration of staffing expertise, hardware, software, and the various production processes. A detailed explanation of the animation process reveals the capabilities of this unique engineering visualization method. Automation of animation and video production processes are covered and future directions are proposed.

  14. Visual Perceptual Learning and Models.

    Science.gov (United States)

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
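
    The reweighting account mentioned above can be caricatured with a toy delta-rule model; the channel structure, noise levels and learning rate below are illustrative assumptions, not the models reviewed in the paper.

```python
# Toy reweighting model of perceptual learning: noisy sensory channels feed a
# weighted decision variable, and a delta rule slowly reweights informative
# channels, improving accuracy with practice. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_trials = 20, 4000
# Only the first few channels actually carry the signal.
sensitivity = np.zeros(n_channels)
sensitivity[:4] = 1.0

w = rng.normal(0, 0.1, n_channels)      # initial (unhelpful) readout weights
lr = 0.002
correct = np.zeros(n_trials)

for t in range(n_trials):
    label = rng.choice([-1.0, 1.0])                      # stimulus category
    response = sensitivity * label + rng.normal(0, 1.0, n_channels)
    decision = np.tanh(w @ response)                     # bounded decision variable
    correct[t] = (np.sign(decision) == label)
    w += lr * (label - decision) * response              # delta-rule reweighting

print("accuracy, first 500 trials:", correct[:500].mean())
print("accuracy, last 500 trials :", correct[-500:].mean())
```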

  15. Visualizing Contour Trees within Histograms

    DEFF Research Database (Denmark)

    Kraus, Martin

    2010-01-01

    Many of the topological features of the isosurfaces of a scalar volume field can be compactly represented by its contour tree. Unfortunately, the contour trees of most real-world volume data sets are too complex to be visualized by dot-and-line diagrams. Therefore, we propose a new visualization...... that is suitable for large contour trees and efficiently conveys the topological structure of the most important isosurface components. This visualization is integrated into a histogram of the volume data; thus, it offers strictly more information than a traditional histogram. We present algorithms...... to automatically compute the graph layout and to calculate appropriate approximations of the contour tree and the surface area of the relevant isosurface components. The benefits of this new visualization are demonstrated with the help of several publicly available volume data sets....
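
    The basic ingredient of such a visualization, relating histogram bins to the topology of iso-level components, can be sketched by counting connected components of the superlevel set at each bin threshold; this is only an illustrative fragment, not the paper's contour-tree layout algorithm.

```python
# Hedged sketch relating a histogram of a 2D scalar field to isocontour topology:
# for each bin threshold, count connected components of the superlevel set.
# This illustrates only the counting ingredient, not the contour-tree layout.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
# Toy scalar field: a few Gaussian bumps.
y, x = np.mgrid[0:128, 0:128]
field = np.zeros((128, 128))
for cx, cy in rng.uniform(20, 108, size=(5, 2)):
    field += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 200.0)

counts, edges = np.histogram(field, bins=16)
for lo, count in zip(edges[:-1], counts):
    labeled, n_components = ndimage.label(field >= lo)
    print(f"threshold {lo:5.2f}: {count:6d} samples in bin, "
          f"{n_components} superlevel-set component(s)")
```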

  16. Deploying web-based visual exploration tools on the grid

    Energy Technology Data Exchange (ETDEWEB)

    Jankun-Kelly, T.J.; Kreylos, Oliver; Shalf, John; Ma, Kwan-Liu; Hamann, Bernd; Joy, Kenneth; Bethel, E. Wes

    2002-02-01

    We discuss a web-based portal for the exploration, encapsulation, and dissemination of visualization results over the Grid. This portal integrates three components: an interface client for structured visualization exploration, a visualization web application to manage the generation and capture of the visualization results, and a centralized portal application server to access and manage grid resources. We demonstrate the usefulness of the developed system using an example for Adaptive Mesh Refinement (AMR) data visualization.

  17. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    This paper examines the questions: what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  18. Assessment of visual communication by information theory

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.

    1994-01-01

    This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.

  19. Information, entropy, and fidelity in visual communication

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-ur

    1992-10-01

    This paper presents an assessment of visual communication that integrates the critical limiting factors of image gathering and display with the digital processing that is used to code and restore images. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image.

  20. Information, entropy and fidelity in visual communication

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1992-01-01

    This paper presents an assessment of visual communication that integrates the critical limiting factors of image gathering and display with the digital processing that is used to code and restore images. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image.
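
    For orientation, the two mathematical criteria named in these abstracts are conventionally written as follows; these are the standard Shannon definitions together with a generic distortion-based fidelity term, quoted as an assumption rather than taken from the papers.

```latex
% Standard discrete Shannon entropy and mutual information between scene S and
% encoded data D, plus a generic distortion-style fidelity term between the
% ideal image R and the restored image \hat{R} (assumed forms, for orientation).
\begin{align}
  H(S) &= -\sum_{s} p(s)\,\log_2 p(s),\\
  \mathcal{I}(S;D) &= \sum_{s,d} p(s,d)\,\log_2 \frac{p(s,d)}{p(s)\,p(d)},\\
  \text{fidelity} &\;\propto\; -\,\mathbb{E}\!\left[\lVert R - \hat{R}\rVert^{2}\right].
\end{align}
```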

  1. The effect of early visual deprivation on the neural bases of multisensory processing

    OpenAIRE

    Guerreiro, Maria J. S.; Putzar, Lisa; Röder, Brigitte

    2015-01-01

    Animal studies have shown that congenital visual deprivation reduces the ability of neurons to integrate cross-modal inputs. Guerreiro et al. reveal that human patients who suffer transient congenital visual deprivation because of cataracts lack multisensory integration in auditory and multisensory areas as adults, and suppress visual processing during audio-visual stimulation.

  2. Perceptual integration without conscious access

    NARCIS (Netherlands)

    Fahrenfort, Johannes J.; Van Leeuwen, Jonathan; Olivers, Christian N.L.; Hogendoorn, Hinze

    2017-01-01

    The visual system has the remarkable ability to integrate fragmentary visual input into a perceptually organized collection of surfaces and objects, a process we refer to as perceptual integration. Despite a long tradition of perception research, it is not known whether access to consciousness is

  3. Visualization rhetoric: framing effects in narrative visualization.

    Science.gov (United States)

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation. © 2011 IEEE

  4. Spatial integration and cortical dynamics.

    OpenAIRE

    Gilbert, C D; Das, A; Ito, M; Kapadia, M; Westheimer, G

    1996-01-01

    Cells in adult primary visual cortex are capable of integrating information over much larger portions of the visual field than was originally thought. Moreover, their receptive field properties can be altered by the context within which local features are presented and by changes in visual experience. The substrate for both spatial integration and cortical plasticity is likely to be found in a plexus of long-range horizontal connections, formed by cortical pyramidal cells, which link cells wi...

  5. Visual coherence for large-scale line-plot visualizations

    KAUST Repository

    Muigg, Philipp

    2011-06-01

    Displaying a large number of lines within a limited amount of screen space is a task that is common to many different classes of visualization techniques such as time-series visualizations, parallel coordinates, link-node diagrams, and phase-space diagrams. This paper addresses the challenging problems of cluttering and overdraw inherent to such visualizations. We generate a 2x2 tensor field during line rasterization that encodes the distribution of line orientations through each image pixel. Anisotropic diffusion of a noise texture is then used to generate a dense, coherent visualization of line orientation. In order to represent features of different scales, we employ a multi-resolution representation of the tensor field. The resulting technique can easily be applied to a wide variety of line-based visualizations. We demonstrate this for parallel coordinates, a time-series visualization, and a phase-space diagram. Furthermore, we demonstrate how to integrate a focus+context approach by incorporating a second tensor field. Our approach achieves interactive rendering performance for large data sets containing millions of data items, due to its image-based nature and ease of implementation on GPUs. Simulation results from computational fluid dynamics are used to evaluate the performance and usefulness of the proposed method. © 2011 The Author(s).
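
    The per-pixel 2x2 orientation tensor described above can be sketched as follows; the rasterization is crude point sampling, the anisotropic-diffusion step is omitted, and all names are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch of accumulating a per-pixel 2x2 orientation tensor from many
# line segments, the first step of the technique described above. Each sample
# along a segment adds the outer product d d^T of the segment's unit direction.
import numpy as np

H, W = 64, 64
tensor = np.zeros((H, W, 2, 2))

rng = np.random.default_rng(3)
segments = rng.uniform(0, 64, size=(200, 4))      # x0, y0, x1, y1 per segment

for x0, y0, x1, y1 in segments:
    d = np.array([x1 - x0, y1 - y0])
    length = np.hypot(*d)
    if length < 1e-6:
        continue
    d /= length                                    # unit direction
    outer = np.outer(d, d)                         # 2x2 orientation contribution
    n_samples = max(int(length) * 2, 2)
    for t in np.linspace(0.0, 1.0, n_samples):     # crude point-sampled rasterization
        px = int(x0 + t * (x1 - x0))
        py = int(y0 + t * (y1 - y0))
        if 0 <= px < W and 0 <= py < H:
            tensor[py, px] += outer

# Dominant orientation per pixel = eigenvector of the largest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(tensor.reshape(-1, 2, 2))
dominant = eigvecs[:, :, 1].reshape(H, W, 2)       # eigh sorts eigenvalues ascending
print("pixels with any line coverage:", int((eigvals[:, 1] > 0).sum()))
```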

  6. Visual coherence for large-scale line-plot visualizations

    KAUST Repository

    Muigg, Philipp; Hadwiger, Markus; Doleisch, Helmut; Gröller, Eduard M.

    2011-01-01

    Displaying a large number of lines within a limited amount of screen space is a task that is common to many different classes of visualization techniques such as time-series visualizations, parallel coordinates, link-node diagrams, and phase-space diagrams. This paper addresses the challenging problems of cluttering and overdraw inherent to such visualizations. We generate a 2x2 tensor field during line rasterization that encodes the distribution of line orientations through each image pixel. Anisotropic diffusion of a noise texture is then used to generate a dense, coherent visualization of line orientation. In order to represent features of different scales, we employ a multi-resolution representation of the tensor field. The resulting technique can easily be applied to a wide variety of line-based visualizations. We demonstrate this for parallel coordinates, a time-series visualization, and a phase-space diagram. Furthermore, we demonstrate how to integrate a focus+context approach by incorporating a second tensor field. Our approach achieves interactive rendering performance for large data sets containing millions of data items, due to its image-based nature and ease of implementation on GPUs. Simulation results from computational fluid dynamics are used to evaluate the performance and usefulness of the proposed method. © 2011 The Author(s).

  7. Secondary visual workload capability with primary visual and kinesthetic-tactual displays

    Science.gov (United States)

    Gilson, R. D.; Burke, M. W.; Jagacinski, R. J.

    1978-01-01

    Subjects performed a cross-adaptive tracking task with a visual secondary display and either a visual or a quickened kinesthetic-tactual (K-T) primary display. The quickened K-T display resulted in superior secondary task performance. Comparisons of secondary workload capability with integrated and separated visual displays indicated that the superiority of the quickened K-T display was not simply due to the elimination of visual scanning. When subjects did not have to perform a secondary task, there was no significant difference between visual and quickened K-T displays in performing a critical tracking task.

  8. The Economical Application of Non-Destructive Testing to Reactor Components, Especially Jacket Tubing; Avantages Economiques du Controle Non Destructif des Pieces de Reacteurs, Notamment des Tubes de Gainage; Ehkonomicheskoe primenenie nedestruktivnykh ispytanij dlya reaktornykh komponentov, v chastnosti obolochechnykh trub; Aplicacion en Condiciones Economicas de Ensayos No Destructivos a las Piezas de los Reactores, en Especial a los Tubos de Revestimiento

    Energy Technology Data Exchange (ETDEWEB)

    Renken, C. J. [Metallurgy Division Argonne National Laboratory Argonne, IL (United States)

    1965-10-15

    ...non-destructive tests in drawing up specifications. It is also incumbent on the manufacturer to take advantage of non-destructive testing to maintain product quality through the various stages of fabrication, and to use the test results to determine the stages at which defects are most likely to appear. It frequently happens that non-destructive tests carried out early in fabrication cannot be replaced, economically or otherwise, by inspection of the finished or semi-finished product. The author cites examples in support of this point, notably concerning fuel jacket tubing and coolant circuits. He describes in some detail the application of various non-destructive tests during the development of jackets and channels. He compares the fabrication and inspection costs of several fuel-jacket designs used by Argonne National Laboratory. Although inspection of the finished product can be reduced to a minimum as a result of these tests, it cannot be eliminated entirely in every case. The author discusses in detail, from the standpoint of economy, the testing of plates and tubes, particularly the latter. His discussion deals mainly with components made of stainless steel, Zircaloy and certain refractory metals. He shows, through various examples, that while radiography and liquid penetrants can be useful or even essential measures during testing, critical inspection of thin-walled tubing must usually be carried out either ultrasonically or by an electromagnetic method, for reasons that are both technical and economic. The author describes the optimum range of application of these two methods, as well as the wide range over which the results obtained with well-designed and properly operated ultrasonic and electromagnetic instruments are practically equivalent.

  9. Multisensory integration, sensory substitution and visual rehabilitation

    DEFF Research Database (Denmark)

    Proulx, Michael J; Ptito, Maurice; Amedi, Amir

    2014-01-01

    Sensory substitution has advanced remarkably over the past 35 years since first introduced to the scientific literature by Paul Bach-y-Rita. In this issue dedicated to his memory, we describe a collection of reviews that assess the current state of neuroscience research on sensory substitution...

  10. Visual literacy in HCI

    NARCIS (Netherlands)

    Overton, K.; Sosa-Tzec, O.; Smith, N.; Blevis, E.; Odom, W.; Hauser, S.; Wakkary, R.L.

    2016-01-01

    The goal of this workshop is to develop ideas about and expand a research agenda for visual literacy in HCI. By visual literacy, we mean the competency (i) to understand visual materials, (ii) to create visuals materials, and (iii) to think visually [2]. There are three primary motivations for this

  11. A Visual Profile of Queensland Indigenous Children.

    Science.gov (United States)

    Hopkins, Shelley; Sampson, Geoff P; Hendicott, Peter L; Wood, Joanne M

    2016-03-01

    Little is known about the prevalence of refractive error, binocular vision, and other visual conditions in Australian Indigenous children. This is important given the association of these visual conditions with reduced reading performance in the wider population, which may also contribute to the suboptimal reading performance reported in this population. The aim of this study was to develop a visual profile of Queensland Indigenous children. Vision testing was performed on 595 primary schoolchildren in Queensland, Australia. Vision parameters measured included visual acuity, refractive error, color vision, nearpoint of convergence, horizontal heterophoria, fusional vergence range, accommodative facility, AC/A ratio, visual motor integration, and rapid automatized naming. Near heterophoria, nearpoint of convergence, and near fusional vergence range were used to classify convergence insufficiency (CI). Although refractive error (Indigenous, 10%; non-Indigenous, 16%; p = 0.04) and strabismus (Indigenous, 0%; non-Indigenous, 3%; p = 0.03) were significantly less common in Indigenous children, CI was twice as prevalent (Indigenous, 10%; non-Indigenous, 5%; p = 0.04). Reduced visual information processing skills were more common in Indigenous children (reduced visual motor integration [Indigenous, 28%; non-Indigenous, 16%; p < 0.01] and slower rapid automatized naming [Indigenous, 67%; non-Indigenous, 59%; p = 0.04]). The prevalence of visual impairment (reduced visual acuity) and color vision deficiency was similar between groups. Indigenous children have less refractive error and strabismus than their non-Indigenous peers. However, CI and reduced visual information processing skills were more common in this group. Given that vision screenings primarily target visual acuity assessment and strabismus detection, this is an important finding as many Indigenous children with CI and reduced visual information processing may be missed. Emphasis should be placed on identifying

  12. Comparison of animated jet stream visualizations

    Science.gov (United States)

    Nocke, Thomas; Hoffmann, Peter

    2016-04-01

    The visualization of 3D atmospheric phenomena in space and time is still a challenging problem. In particular, multiple solutions of animated jet stream visualizations have been produced in recent years, which were designed to visually analyze and communicate the jet and related impacts on weather circulation patterns and extreme weather events. This PICO integrates popular and new jet animation solutions and inter-compares them. The applied techniques (e.g. stream lines or line integral convolution) and parametrizations (color mapping, line lengths) are discussed with respect to visualization quality criteria and their suitability for certain visualization tasks (e.g. jet patterns and jet anomaly analysis, communicating its relevance for climate change).

  13. Impaired Visual Motor Coordination in Obese Adults.

    LENUS (Irish Health Repository)

    Gaul, David

    2016-09-01

    Objective. To investigate whether obesity alters the sensory motor integration process and movement outcome during a visual rhythmic coordination task. Methods. 88 participants (44 obese and 44 matched control) sat on a chair equipped with a wrist pendulum oscillating in the sagittal plane. The task was to swing the pendulum in synchrony with a moving visual stimulus displayed on a screen. Results. Obese participants demonstrated significantly (p < 0.01) higher values for continuous relative phase (CRP) indicating poorer level of coordination, increased movement variability (p < 0.05), and a larger amplitude (p < 0.05) than their healthy weight counterparts. Conclusion. These results highlight the existence of visual sensory integration deficiencies for obese participants. The obese group have greater difficulty in synchronizing their movement with a visual stimulus. Considering that visual motor coordination is an essential component of many activities of daily living, any impairment could significantly affect quality of life.
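
    Continuous relative phase (CRP) is commonly computed from the instantaneous phases of the two signals; the Hilbert-transform sketch below is a generic illustration with simulated signals and may differ from the authors' exact procedure.

```python
# Generic sketch of continuous relative phase (CRP) between a visual stimulus
# signal and a movement signal, using instantaneous phase from the Hilbert
# transform. The simulated signals and the phase convention are assumptions.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(4)
fs = 100.0                                           # sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
drift = np.cumsum(rng.normal(0, 0.02, t.size))       # slow random phase drift
stimulus = np.sin(2 * np.pi * 1.0 * t)               # 1 Hz visual driver
movement = np.sin(2 * np.pi * 1.0 * t - 0.6 + 0.2 * drift)   # lagging, noisy response

phase_stim = np.unwrap(np.angle(hilbert(stimulus - stimulus.mean())))
phase_move = np.unwrap(np.angle(hilbert(movement - movement.mean())))
crp = np.degrees(phase_stim - phase_move)

print(f"mean |CRP|      = {np.abs(crp).mean():6.1f} deg  (coordination quality)")
print(f"CRP variability = {crp.std():6.1f} deg  (movement variability)")
```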

  14. Visual Ecology and the Development of Visually Guided Behavior in the Cuttlefish

    Directory of Open Access Journals (Sweden)

    Anne-Sophie Darmaillacq

    2017-06-01

    Cuttlefish are highly visual animals, a fact reflected in the large size of their eyes and visual-processing centers of their brain. Adults detect their prey visually, navigate using visual cues such as landmarks or the e-vector of polarized light and display intense visual patterns during mating and agonistic encounters. Although much is known about the visual system in adult cuttlefish, few studies have investigated its development and that of visually-guided behavior in juveniles. This review summarizes the results of studies of visual development in embryos and young juveniles. The visual system is the last to develop, as in vertebrates, and is functional before hatching. Indeed, embryonic exposure to prey, shelters or complex background alters postembryonic behavior. Visual acuity and lateralization, and polarization sensitivity improve throughout the first months after hatching. The production of body patterning in juveniles is not the simple stimulus-response process commonly presented in the literature. Rather, it likely requires the complex integration of visual information, and is subject to inter-individual differences. Though the focus of this review is vision in cuttlefish, it is important to note that other senses, particularly sensitivity to vibration and to waterborne chemical signals, also play a role in behavior. Considering the multimodal sensory dimensions of natural stimuli and their integration and processing by individuals offer new exciting avenues of future inquiry.

  15. Visual Ecology and the Development of Visually Guided Behavior in the Cuttlefish.

    Science.gov (United States)

    Darmaillacq, Anne-Sophie; Mezrai, Nawel; O'Brien, Caitlin E; Dickel, Ludovic

    2017-01-01

    Cuttlefish are highly visual animals, a fact reflected in the large size of their eyes and visual-processing centers of their brain. Adults detect their prey visually, navigate using visual cues such as landmarks or the e -vector of polarized light and display intense visual patterns during mating and agonistic encounters. Although much is known about the visual system in adult cuttlefish, few studies have investigated its development and that of visually-guided behavior in juveniles. This review summarizes the results of studies of visual development in embryos and young juveniles. The visual system is the last to develop, as in vertebrates, and is functional before hatching. Indeed, embryonic exposure to prey, shelters or complex background alters postembryonic behavior. Visual acuity and lateralization, and polarization sensitivity improve throughout the first months after hatching. The production of body patterning in juveniles is not the simple stimulus-response process commonly presented in the literature. Rather, it likely requires the complex integration of visual information, and is subject to inter-individual differences. Though the focus of this review is vision in cuttlefish, it is important to note that other senses, particularly sensitivity to vibration and to waterborne chemical signals, also play a role in behavior. Considering the multimodal sensory dimensions of natural stimuli and their integration and processing by individuals offer new exciting avenues of future inquiry.

  16. Sketchy Rendering for Information Visualization.

    Science.gov (United States)

    Wood, J; Isenberg, P; Isenberg, T; Dykes, J; Boukhelifa, N; Slingsby, A

    2012-12-01

    We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These primitives allow higher-level graphical features such as bar charts, line charts, treemaps and node-link diagrams to be drawn in a sketchy style with a specified degree of sketchiness. The framework is designed to be easily integrated into existing visualization implementations with minimal programming modification or design effort. We show examples of use for statistical graphics, conveying spatial imprecision and for enhancing aesthetic and narrative qualities of visualization. We evaluate user perception of sketchiness of areal features through a series of stimulus-response tests in order to assess users' ability to place sketchiness on a ratio scale, and to estimate area. Results suggest relative area judgment is compromised by sketchy rendering and that its influence is dependent on the shape being rendered. They show that degree of sketchiness may be judged on an ordinal scale but that its judgement varies strongly between individuals. We evaluate higher-level impacts of sketchiness through user testing of scenarios that encourage user engagement with data visualization and willingness to critique visualization design. Results suggest that where a visualization is clearly sketchy, engagement may be increased and that attitudes to participating in visualization annotation are more positive. The results of our work have implications for effective information visualization design that go beyond the traditional role of sketching as a tool for prototyping or its use for an indication of general uncertainty.
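
    The idea of a line primitive with a controllable degree of sketchiness can be caricatured outside Processing as follows; the jitter model (low-frequency wobble plus repeated strokes) is an assumption, not the framework's actual algorithm.

```python
# Caricature of a "sketchy" line primitive: draw a line as a few overlapping,
# slightly perturbed strokes, with the perturbation scaled by a sketchiness
# parameter. The jitter model is an assumption, not the paper's algorithm.
import numpy as np
import matplotlib.pyplot as plt

def sketchy_line(ax, p0, p1, sketchiness=1.0, strokes=3, rng=None):
    rng = rng or np.random.default_rng()
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    t = np.linspace(0.0, 1.0, 24)
    normal = np.array([-(p1 - p0)[1], (p1 - p0)[0]])
    normal /= np.linalg.norm(normal) + 1e-9
    for _ in range(strokes):
        # Smooth low-frequency wobble along the line, zero at both endpoints.
        wobble = (sketchiness * rng.normal(0, 1) * np.sin(np.pi * t)
                  + sketchiness * 0.5 * rng.normal(0, 1) * np.sin(2 * np.pi * t))
        pts = p0[None, :] + t[:, None] * (p1 - p0)[None, :] + wobble[:, None] * normal
        ax.plot(pts[:, 0], pts[:, 1], color="black", alpha=0.6, linewidth=1)

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, s in zip(axes, (0.0, 1.0, 3.0)):
    sketchy_line(ax, (0, 0), (10, 6), sketchiness=s, rng=np.random.default_rng(5))
    ax.set_title(f"sketchiness = {s}")
    ax.set_aspect("equal")
plt.show()
```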

  17. Storage and binding of object features in visual working memory

    OpenAIRE

    Bays, Paul M; Wu, Emma Y; Husain, Masud

    2010-01-01

    An influential conception of visual working memory is of a small number of discrete memory “slots”, each storing an integrated representation of a single visual object, including all its component features. When a scene contains more objects than there are slots, visual attention controls which objects gain access to memory.

  18. Visual Representations of the Water Cycle in Science Textbooks

    Science.gov (United States)

    Vinisha, K.; Ramadas, J.

    2013-01-01

    Visual representations, including photographs, sketches and schematic diagrams, are a valuable yet often neglected aspect of textbooks. Visual means of communication are particularly helpful in introducing abstract concepts in science. For effective communication, visuals and text need to be appropriately integrated within the textbook. This study…

  19. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores-neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
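
    The normalization referred to here is the canonical divisive-normalization computation; its standard textbook form (not a formula from this paper) is:

```latex
% Canonical divisive normalization: the response of unit i is its driven input
% raised to a power, divided by a pooled sum over a normalization pool plus a
% semisaturation constant (standard textbook form, quoted for orientation only).
\[
  R_i \;=\; \frac{\gamma\, D_i^{\,n}}{\sigma^{\,n} + \sum_{j \in \text{pool}} D_j^{\,n}}
\]
```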

  20. Learning Visualizations by Analogy: Promoting Visual Literacy through Visualization Morphing.

    Science.gov (United States)

    Ruchikachorn, Puripant; Mueller, Klaus

    2015-09-01

    We propose the concept of teaching (and learning) unfamiliar visualizations by analogy, that is, demonstrating an unfamiliar visualization method by linking it to another more familiar one, where the in-betweens are designed to bridge the gap of these two visualizations and explain the difference in a gradual manner. As opposed to a textual description, our morphing explains an unfamiliar visualization through purely visual means. We demonstrate our idea by ways of four visualization pair examples: data table and parallel coordinates, scatterplot matrix and hyperbox, linear chart and spiral chart, and hierarchical pie chart and treemap. The analogy is commutative i.e. any member of the pair can be the unfamiliar visualization. A series of studies showed that this new paradigm can be an effective teaching tool. The participants could understand the unfamiliar visualization methods in all of the four pairs either fully or at least significantly better after they observed or interacted with the transitions from the familiar counterpart. The four examples suggest how helpful visualization pairings be identified and they will hopefully inspire other visualization morphings and associated transition strategies to be identified.

  1. Storytelling and Visualization: An Extended Survey

    Directory of Open Access Journals (Sweden)

    Chao Tong

    2018-03-01

    Throughout history, storytelling has been an effective way of conveying information and knowledge. In the field of visualization, storytelling is rapidly gaining momentum and evolving cutting-edge techniques that enhance understanding. Many communities have commented on the importance of storytelling in data visualization. Storytellers tend to be integrating complex visualizations into their narratives in growing numbers. In this paper, we present a survey of storytelling literature in visualization and present an overview of the common and important elements in storytelling visualization. We also describe the challenges in this field as well as a novel classification of the literature on storytelling in visualization. Our classification scheme highlights the open and unsolved problems in this field as well as the more mature storytelling sub-fields. The survey offers a concise overview and a starting point into this rapidly evolving research trend and provides a deeper understanding of this topic.

  2. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    Directory of Open Access Journals (Sweden)

    Magnus eAlm

    2015-07-01

    Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged adults (50-60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. Contrastingly, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues, induced by speech-reading proficiency, may gradually shift females' AV perceptual strategy towards more visually dominated responses.

  3. Direct Visual Editing of Node Attributes in Graphs

    Directory of Open Access Journals (Sweden)

    Christian Eichner

    2016-10-01

    Full Text Available There are many expressive visualization techniques for analyzing graphs. Yet, there is only little research on how existing visual representations can be employed to support data editing. An increasingly relevant task when working with graphs is the editing of node attributes. We propose an integrated visualize-and-edit approach to editing attribute values via direct interaction with the visual representation. The visualize part is based on node-link diagrams paired with attribute-dependent layouts. The edit part is as easy as moving nodes via drag-and-drop gestures. We present dedicated interaction techniques for editing quantitative as well as qualitative attribute data values. The benefit of our novel integrated approach is that one can directly edit the data while the visualization constantly provides feedback on the implications of the data modifications. Preliminary user feedback indicates that our integrated approach can be a useful complement to standard non-visual editing via external tools.
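
    A minimal sketch of the core editing idea, under assumptions not taken from the paper (a dictionary-based graph, a linear position encoding, and hypothetical names such as on_drop and 'load'): when a node's vertical position encodes a quantitative attribute, dropping the node at a new y-coordinate can be inverted back into an edited attribute value.

        # Minimal sketch: invert a linear position encoding so that a drag-and-drop
        # gesture (a new y-coordinate) becomes an edit of a quantitative attribute.
        def y_to_value(y, y_min, y_max, v_min, v_max):
            """Map screen y back to an attribute value, clamped to [v_min, v_max]."""
            t = (y - y_min) / (y_max - y_min)
            return min(max(v_min + t * (v_max - v_min), v_min), v_max)

        # Hypothetical graph with a quantitative 'load' attribute per node.
        nodes = {"a": {"load": 0.2}, "b": {"load": 0.8}}

        def on_drop(node_id, new_y, y_min=0.0, y_max=400.0, v_min=0.0, v_max=1.0):
            """Update the attribute from the drop position and return the new value."""
            nodes[node_id]["load"] = y_to_value(new_y, y_min, y_max, v_min, v_max)
            return nodes[node_id]["load"]

        print(on_drop("a", 300.0))   # node 'a' dragged three quarters up the axis -> 0.75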

  4. Visual Literacy in Bloom: Using Bloom's Taxonomy to Support Visual Learning Skills

    Science.gov (United States)

    Arneson, Jessie B.; Offerdahl, Erika G.

    2018-01-01

    "Vision and Change" identifies science communication as one of the core competencies in undergraduate biology. Visual representations are an integral part of science communication, allowing ideas to be shared among and between scientists and the public. As such, development of scientific visual literacy should be a desired outcome of…

  5. Is Visual Imagery Really Visual? Overlooked Evidence from Neuropsychology.

    Science.gov (United States)

    1987-08-07

    …the study of imagery. British Journal of Psychology, 47, 101-114. Bauer, R. M., & Rubens, A. B. (1985). Agnosia. In K. M. Heilman & E. Valenstein (Eds.), Clinical Neuropsychology (2nd edition). New York: Oxford University Press. Beauvois, M. F., & Saillant, B. (1985). Optic aphasia for colours and colour agnosia … integrative visual agnosia. Brain. Roland, P. E. (1982). Cortical regulation of selective attention in man. Journal of Neurophysiology, 48, 1059-1078.

  6. A link between visual disambiguation and visual memory.

    Science.gov (United States)

    Hegdé, Jay; Kersten, Daniel

    2010-11-10

    Sensory information in the retinal image is typically too ambiguous to support visual object recognition by itself. Theories of visual disambiguation posit that to disambiguate, and thus interpret, the incoming images, the visual system must integrate the sensory information with previous knowledge of the visual world. However, the underlying neural mechanisms remain unclear. Using functional magnetic resonance imaging (fMRI) of human subjects, we have found evidence for functional specialization for storing disambiguating information in memory versus interpreting incoming ambiguous images. Subjects viewed two-tone, "Mooney" images, which are typically ambiguous when seen for the first time but are quickly disambiguated after viewing the corresponding unambiguous color images. Activity in one set of regions, including a region in the medial parietal cortex previously reported to play a key role in Mooney image disambiguation, closely reflected memory for previously seen color images but not the subsequent disambiguation of Mooney images. A second set of regions, including the superior temporal sulcus, showed the opposite pattern, in that their responses closely reflected the subjects' percepts of the disambiguated Mooney images on a stimulus-to-stimulus basis but not the memory of the corresponding color images. Functional connectivity between the two sets of regions was stronger during those trials in which the disambiguated percept was stronger. This functional interaction between brain regions that specialize in storing disambiguating information in memory versus interpreting incoming ambiguous images may represent a general mechanism by which previous knowledge disambiguates visual sensory information.

  7. Visualizing light with electrons

    Science.gov (United States)

    Fitzgerald, J. P. S.; Word, R. C.; Koenenkamp, R.

    2014-03-01

    In multiphoton photoemission electron microscopy (nP-PEEM), electrons are emitted from surfaces at a rate proportional to the surface electromagnetic field amplitude. We use 2P-PEEM to produce nanometer-scale visualizations of diffracted and waveguided light fields around various microstructures. We use Fourier analysis of the interference patterns to determine the phase and amplitude of surface fields in relation to the incident light. To provide quick and intuitive simulations of surface fields, we employ two-dimensional Fresnel-Kirchhoff integration, a technique based on freely propagating waves and Huygens' principle. We find generally good agreement between simulations and experiment. Additionally, diffracted-wave simulations exhibit greater phase accuracy, indicating that these waves are well represented by a two-dimensional approximation. The authors gratefully acknowledge funding of this research by the US-DOE Basic Science Office under Contract DE-FG02-10ER46406.
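
    A minimal sketch of a Huygens-type superposition in two dimensions, in the spirit of the Fresnel-Kirchhoff approach mentioned above (the wavelength, aperture geometry, and grid are arbitrary placeholders, and the cylindrical-wave form is a simplification rather than the authors' code): sample points along an aperture act as secondary sources, their wavelets are summed on a grid, and the amplitude and phase of the resulting field can then be read off.

        # Minimal sketch: sum secondary-source wavelets (Huygens' principle) on a 2D
        # grid and extract the amplitude and phase of the superposed field.
        import numpy as np

        wavelength = 0.5                           # arbitrary units
        k = 2 * np.pi / wavelength
        aperture = np.linspace(-1.0, 1.0, 81)      # source positions along x at y = 0

        x = np.linspace(-3.0, 3.0, 200)
        y = np.linspace(0.1, 4.0, 200)             # observation half-plane y > 0
        X, Y = np.meshgrid(x, y)

        field = np.zeros_like(X, dtype=complex)
        for xs in aperture:                        # one outgoing wavelet per source
            r = np.hypot(X - xs, Y)
            field += np.exp(1j * k * r) / np.sqrt(r)   # 2D (cylindrical) falloff

        amplitude = np.abs(field)                  # interference/diffraction pattern
        phase = np.angle(field)                    # relative phase of the surface field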

  8. A Hierarchical Visualization Analysis Model of Power Big Data

    Science.gov (United States)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed to target different abstraction modules such as transaction, engine, computation, control and storage. The traditionally separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  9. Introduction of computing in physics learning visual programing

    International Nuclear Information System (INIS)

    Kim, Cheung Seop

    1999-12-01

    This book introduces physics and programming, the foundations of Visual Basic, the grammar of Visual Basic, visual programming, solution of equations, matrix calculations, solution of simultaneous equations, differentiation, differential equations, simultaneous and second-order differential equations, integration, and the solution of partial differential equations. It also covers the BASIC language, Visual Basic terminology, the use of methods, graphical methods, the step-by-step method, the false-position method, Gauss elimination, the difference method and the Euler method.
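
    As a small illustration of one of the numerical methods listed above, the explicit Euler method can be written in a few lines; the sketch below uses Python rather than the book's Visual Basic, and the test equation y' = -y is only an example.

        # Minimal sketch of the explicit Euler method: y_{k+1} = y_k + h * f(t_k, y_k).
        def euler(f, t0, y0, h, n_steps):
            """Return lists of t and y values for y' = f(t, y) with y(t0) = y0."""
            ts, ys = [t0], [y0]
            for _ in range(n_steps):
                ys.append(ys[-1] + h * f(ts[-1], ys[-1]))
                ts.append(ts[-1] + h)
            return ts, ys

        # Example: y' = -y, y(0) = 1; the numerical solution approximates exp(-t).
        ts, ys = euler(lambda t, y: -y, 0.0, 1.0, 0.1, 50)
        print(ys[-1])   # close to exp(-5), up to first-order error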

  10. Postdictive modulation of visual orientation.

    Directory of Open Access Journals (Sweden)

    Takahiro Kawabe

    Full Text Available The present study investigated how visual orientation is modulated by subsequent orientation inputs. Observers were presented a near-vertical Gabor patch as a target, followed by a left- or right-tilted second Gabor patch as a distracter in the spatial vicinity of the target. The task of the observers was to judge whether the target was right- or left-tilted (Experiment 1) or whether the target was vertical or not (Supplementary experiment). The judgment was biased toward the orientation of the distracter (the postdictive modulation of visual orientation). The judgment bias peaked when the target and distracter were temporally separated by 100 ms, indicating a specific temporal mechanism for this phenomenon. However, when the visibility of the distracter was reduced via backward masking, the judgment bias disappeared. On the other hand, the low-visibility distracter could still cause a simultaneous orientation contrast, indicating that the distracter orientation is still processed in the visual system (Experiment 2). Our results suggest that the postdictive modulation of visual orientation stems from spatiotemporal integration of visual orientation on the basis of a slow feature matching process.

  11. Postdictive modulation of visual orientation.

    Science.gov (United States)

    Kawabe, Takahiro

    2012-01-01

    The present study investigated how visual orientation is modulated by subsequent orientation inputs. Observers were presented a near-vertical Gabor patch as a target, followed by a left- or right-tilted second Gabor patch as a distracter in the spatial vicinity of the target. The task of the observers was to judge whether the target was right- or left-tilted (Experiment 1) or whether the target was vertical or not (Supplementary experiment). The judgment was biased toward the orientation of the distracter (the postdictive modulation of visual orientation). The judgment bias peaked when the target and distracter were temporally separated by 100 ms, indicating a specific temporal mechanism for this phenomenon. However, when the visibility of the distracter was reduced via backward masking, the judgment bias disappeared. On the other hand, the low-visibility distracter could still cause a simultaneous orientation contrast, indicating that the distracter orientation is still processed in the visual system (Experiment 2). Our results suggest that the postdictive modulation of visual orientation stems from spatiotemporal integration of visual orientation on the basis of a slow feature matching process.

  12. VISUAL3D - An EIT network on visualization of geomodels

    Science.gov (United States)

    Bauer, Tobias

    2017-04-01

    When it comes to the interpretation of data and the understanding of deep geological structures and bodies at different scales, modelling tools and modelling experience are vital for deep exploration. Geomodelling provides a platform for integration of different types of data, including new kinds of information (e.g., from new, improved measuring methods). EIT Raw Materials, initiated by the EIT (European Institute of Innovation and Technology) and funded by the European Commission, is the largest and strongest consortium in the raw materials sector worldwide. The VISUAL3D network of infrastructure is an initiative by EIT Raw Materials and aims at bringing together partners with 3D-4D-visualisation infrastructure and 3D-4D-modelling experience. The recently formed network collaboration interlinks hardware, software and expert knowledge in modelling, visualization and output. A special focus will be the linking of research, education and industry, the integration of multi-disciplinary data, and the visualization of these data in three and four dimensions. By aiding network collaborations we aim at improving the combination of geomodels with differing file formats and data characteristics. This will create increased competency in modelling visualization and the ability to interchange and communicate models more easily. By combining knowledge and experience in geomodelling with expertise in Virtual Reality visualization, partners of EIT Raw Materials, as well as external parties, will have the possibility to visualize, analyze and validate their geomodels in immersive VR environments. The current network combines partners from universities, research institutes, geological surveys and industry with a strong background in geological 3D-modelling and 3D visualization and comprises: Luleå University of Technology, Geological Survey of Finland, Geological Survey of Denmark and Greenland, TUBA Freiberg, Uppsala University, Geological Survey of France, RWTH Aachen, DMT, KGHM Cuprum, Boliden, Montan

  13. Introduction: Critical Visual Theory

    Directory of Open Access Journals (Sweden)

    Peter Ludes

    2014-03-01

    Full Text Available The studies selected for publication in this special issue on Critical Visual Theory can be divided into three thematic groups: (1) image making as power making, (2) commodification and recanonization, and (3) approaches to critical visual theory. The approaches to critical visual theory adopted by the authors of this issue may be subsumed under the following headings: (3.1) critical visual discourse and visual memes in general and Anonymous visual discourse in particular, (3.2) collective memory and gendered gaze, and (3.3) visual capitalism, global north and south.

  14. Cortical visual impairment

    OpenAIRE

    Koželj, Urša

    2013-01-01

    In this thesis we discuss cortical visual impairment, the leading such diagnosis in the developed world, since 20 percent of children with blindness or low vision are diagnosed with it. The objectives of the thesis are to define cortical visual impairment, to describe the signs suggestive of it, and to search for the causes behind the growing number of such diagnoses. There are many signs of cortical visual impairment. ...

  15. Relativity of Visual Communication

    OpenAIRE

    Arto Mutanen

    2016-01-01

    Communication is sharing and conveying information. In visual communication, visual messages in particular have to be formulated and interpreted. The interpretation is relative to a method of information presentation, which is a human construction. This also holds in the case of visual languages. The notions of syntax and semantics for visual languages are not as well founded as they are for natural languages. Visual languages are both syntactically and semantically dense. The density is conn...

  16. Visual object recognition and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian

    This thesis is based on seven published papers. The majority of the papers address two topics in visual object recognition: (i) category-effects at pre-semantic stages, and (ii) the integration of visual elements into elaborate shape descriptions corresponding to whole objects or large object parts...... (shape configuration). In the early writings these two topics were examined more or less independently. In later works, findings concerning category-effects and shape configuration merge into an integrated model, termed RACE, advanced to explain category-effects arising at pre-semantic stages in visual...... in visual long-term memory. In the thesis it is described how this simple model can account for a wide range of findings on category-specificity in both patients with brain damage and normal subjects. Finally, two hypotheses regarding the neural substrates of the model's components - and how activation...

  17. Introduction to Vector Field Visualization

    Science.gov (United States)

    Kao, David; Shen, Han-Wei

    2010-01-01

    Vector field visualization techniques are essential to help us understand the complex dynamics of flow fields. These can be found in a wide range of applications such as the study of flows around an aircraft, blood flow in our heart chambers, ocean circulation models, and severe weather predictions. The vector fields from these various applications can be visually depicted using a number of techniques such as particle traces and advecting textures. In this tutorial, we present several fundamental algorithms in flow visualization, including particle integration, particle tracking in time-dependent flows, and seeding strategies. For flows near surfaces, a wide variety of synthetic texture-based algorithms have been developed to depict near-body flow features. The most common approach is based on the Line Integral Convolution (LIC) algorithm. There also exist extensions of LIC that support more flexible texture generation for 3D flow data. This tutorial reviews these algorithms. Tensor fields are found in several real-world applications and also require the aid of visualization to help users understand their data sets. Examples where one can find tensor fields include mechanics, to see how materials respond to external forces; civil engineering and the geomechanics of roads and bridges; and the study of neural pathways via diffusion tensor imaging. This tutorial provides an overview of the different tensor field visualization techniques, discusses basic tensor decompositions, and goes into detail on glyph-based, deformation-based, and streamline-based methods. Practical examples are used when presenting the methods, and applications from case studies are used as part of the motivation.
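
    The particle-integration step at the heart of streamline-based flow visualization can be sketched as follows; the rotational test field, step size, and step count are placeholders rather than data from any of the applications above, and classic fourth-order Runge-Kutta is used as one common choice of integrator.

        # Minimal sketch: trace a streamline in a steady 2D vector field with RK4 steps.
        import numpy as np

        def velocity(p):
            """Analytic test field: counter-clockwise rotation about the origin."""
            x, y = p
            return np.array([-y, x])

        def rk4_step(p, h):
            k1 = velocity(p)
            k2 = velocity(p + 0.5 * h * k1)
            k3 = velocity(p + 0.5 * h * k2)
            k4 = velocity(p + h * k3)
            return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        def trace_streamline(seed, h=0.05, n_steps=200):
            """Advect a particle from the seed point and collect its path."""
            path = [np.asarray(seed, dtype=float)]
            for _ in range(n_steps):
                path.append(rk4_step(path[-1], h))
            return np.array(path)

        path = trace_streamline((1.0, 0.0))   # stays close to the unit circle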

  18. Cloud-based Networked Visual Servo Control

    OpenAIRE

    Wu, Haiyan; Lu, Lei; Chen, Chih-Chung; Hirche, Sandra; Kühnlenz, Kolja

    2013-01-01

    The performance of vision-based control systems, in particular of highly dynamic vision-based motion control systems, is often limited by the low sampling rate of the visual feedback caused by the long image processing time. In order to overcome this problem, the networked visual servo control, which integrates networked computational resources for cloud image processing, is considered in this article. The main contributions of this article are i) a real-time transport protocol for transmitti...

  19. Visualization analysis and design

    CERN Document Server

    Munzner, Tamara

    2015-01-01

    Visualization Analysis and Design provides a systematic, comprehensive framework for thinking about visualization in terms of principles and design choices. The book features a unified approach encompassing information visualization techniques for abstract data, scientific visualization techniques for spatial data, and visual analytics techniques for interweaving data transformation and analysis with interactive visual exploration. It emphasizes the careful validation of effectiveness and the consideration of function before form. The book breaks down visualization design according to three questions: what data users need to see, why users need to carry out their tasks, and how the visual representations proposed can be constructed and manipulated. It walks readers through the use of space and color to visually encode data in a view, the trade-offs between changing a single view and using multiple linked views, and the ways to reduce the amount of data shown in each view. The book concludes with six case stu...

  20. Relativity of Visual Communication

    Directory of Open Access Journals (Sweden)

    Arto Mutanen

    2016-03-01

    Full Text Available Communication is sharing and conveying information. In visual communication, visual messages in particular have to be formulated and interpreted. The interpretation is relative to a method of information presentation, which is a human construction. This also holds in the case of visual languages. The notions of syntax and semantics for visual languages are not as well founded as they are for natural languages. Visual languages are both syntactically and semantically dense. The density is connected to the compositionality of the (pictorial) languages. In the paper, Charles Sanders Peirce’s theory of signs is used in characterizing visual languages. This allows us to relate visual languages to natural languages. The foundation of information presentation methods for visual languages is the logic of perception, but only if perception is understood as propositional perception. This allows us to better understand the relativity of information presentation methods, and hence to evaluate the cultural relativity of visual communication.

  1. Professional Visual Basic 2010 and .NET 4

    CERN Document Server

    Sheldon, Bill; Sharkey, Kent

    2010-01-01

    Intermediate and advanced coverage of Visual Basic 2010 and .NET 4 for professional developers. If you've already covered the basics and want to dive deep into VB and .NET topics that professional programmers use most, this is your book. You'll find a quick review of introductory topics (always helpful) before the author team of experts moves you quickly into such topics as data access with ADO.NET, Language Integrated Query (LINQ), security, ASP.NET web programming with Visual Basic, Windows workflow, threading, and more. You'll explore all the new features of Visual Basic 2010 as well as all t

  2. Using Visualization in Cockpit Decision Support Systems

    Energy Technology Data Exchange (ETDEWEB)

    Aragon, Cecilia R.

    2005-07-01

    In order to safely operate their aircraft, pilots must make rapid decisions based on integrating and processing large amounts of heterogeneous information. Visual displays are often the most efficient method of presenting safety-critical data to pilots in real time. However, care must be taken to ensure the pilot is provided with the appropriate amount of information to make effective decisions and not become cognitively overloaded. The results of two usability studies of a prototype airflow hazard visualization cockpit decision support system are summarized. The studies demonstrate that such a system significantly improves the performance of helicopter pilots landing under turbulent conditions. Based on these results, design principles and implications for cockpit decision support systems using visualization are presented.

  3. Visualizing the Verbal and Verbalizing the Visual.

    Science.gov (United States)

    Braden, Roberts A.

    This paper explores relationships of visual images to verbal elements, beginning with a discussion of visible language as represented by words printed on the page. The visual flexibility inherent in typography is discussed in terms of the appearance of the letters and the denotative and connotative meanings represented by type, typographical…

  4. A Visual Test for Visual "Literacy."

    Science.gov (United States)

    Messaris, Paul

    Four different principles of visual manipulation constitute a minimal list of what a visually "literate" viewer should know about, but certain problems exist which are inherent in measuring viewers' awareness of each of them. The four principles are: (1) paraproxemics, or camera work which derives its effectiveness from an analogy to the…

  5. Interaction for visualization

    CERN Document Server

    Tominski, Christian

    2015-01-01

    Visualization has become a valuable means for data exploration and analysis. Interactive visualization combines expressive graphical representations and effective user interaction. Although interaction is an important component of visualization approaches, much of the visualization literature tends to pay more attention to the graphical representation than to interaction. The goal of this work is to strengthen the interaction side of visualization. Based on a brief review of general aspects of interaction, we develop an interaction-oriented view on visualization. This view comprises five key as

  6. Visual memory and visual perception: when memory improves visual search.

    Science.gov (United States)

    Riou, Benoit; Lesourd, Mathieu; Brunel, Lionel; Versace, Rémy

    2011-08-01

    This study examined the relationship between memory and perception in order to identify the influence of a memory dimension in perceptual processing. Our aim was to determine whether the variation of typical size between items (i.e., the size in real life) affects visual search. In two experiments, the congruency between typical size difference and perceptual size difference was manipulated in a visual search task. We observed that congruency between the typical and perceptual size differences decreased reaction times in the visual search (Exp. 1), and noncongruency between these two differences increased reaction times in the visual search (Exp. 2). We argue that these results highlight that memory and perception share some resources and reveal the intervention of typical size difference on the computation of the perceptual size difference.

  7. Temporal windows in visual processing: "prestimulus brain state" and "poststimulus phase reset" segregate visual transients on different temporal scales.

    Science.gov (United States)

    Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David

    2014-01-22

    Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.

  8. Integrated Systems Engineering Framework (ISEF)

    Data.gov (United States)

    Federal Laboratory Consortium — The ISEF is an integrated SE framework built to create and capture knowledge using a decision-centric method, high-quality data visualizations, intuitive navigation...

  9. Visual Artist or Visual Designer? Visual Communication Design Education

    OpenAIRE

    Arsoy, Aysu

    2010-01-01

    ABSTRACT: Design tools and contents have been digitalized, forming the contemporary fields of the visual arts and design. Corporate culture demands techno-social experts who understand the arts, design, culture and society, while also having a high level of technological proficiency. New departments have opened offering alternatives in art and design education such as Visual Communication Design (VCD) and are dedicated to educating students in the practical aspect of using digital technologi...

  10. Visual memory and learning in extremely low-birth-weight/extremely preterm adolescents compared with controls: a geographic study.

    Science.gov (United States)

    Molloy, Carly S; Wilson-Ching, Michelle; Doyle, Lex W; Anderson, Vicki A; Anderson, Peter J

    2014-04-01

    Contemporary data on visual memory and learning in survivors born extremely preterm (EP) or with extremely low birth weight (ELBW) are sparse. Visual learning and memory data were available for 221 (74.2%) EP/ELBW subjects and 159 (60.7%) controls. EP/ELBW adolescents exhibited significantly poorer performance across visual memory and learning variables compared with controls. Visual learning and delayed visual memory were particularly problematic and remained so after controlling for visual-motor integration and visual perception and after excluding adolescents with neurosensory disability and/or low IQ. Overall, EP/ELBW adolescents have poorer visual memory and learning outcomes compared with controls, which cannot be entirely explained by poor visual perceptual or visual constructional skills or intellectual impairment.

  11. Perceptual integration without conscious access.

    Science.gov (United States)

    Fahrenfort, Johannes J; van Leeuwen, Jonathan; Olivers, Christian N L; Hogendoorn, Hinze

    2017-04-04

    The visual system has the remarkable ability to integrate fragmentary visual input into a perceptually organized collection of surfaces and objects, a process we refer to as perceptual integration. Despite a long tradition of perception research, it is not known whether access to consciousness is required to complete perceptual integration. To investigate this question, we manipulated access to consciousness using the attentional blink. We show that, behaviorally, the attentional blink impairs conscious decisions about the presence of integrated surface structure from fragmented input. However, despite conscious access being impaired, the ability to decode the presence of integrated percepts remains intact, as shown through multivariate classification analyses of electroencephalogram (EEG) data. In contrast, when disrupting perception through masking, decisions about integrated percepts and decoding of integrated percepts are impaired in tandem, while leaving feedforward representations intact. Together, these data show that access consciousness and perceptual integration can be dissociated.
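
    The decoding logic referred to above can be sketched, purely for illustration, with synthetic data: a cross-validated classifier is trained to predict the presence of an integrated percept from multichannel activity patterns, and above-chance accuracy indicates that the pattern carries that information. The channel and trial counts, the noise level, and the logistic-regression classifier are assumptions, not the authors' pipeline.

        # Minimal sketch: cross-validated multivariate "decoding" on synthetic data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_channels = 200, 64
        labels = rng.integers(0, 2, n_trials)                    # integrated percept: yes/no
        pattern = rng.normal(size=n_channels)                    # class-dependent topography
        eeg = (rng.normal(size=(n_trials, n_channels))
               + 0.3 * np.outer(labels - 0.5, pattern))          # noisy single-trial data

        scores = cross_val_score(LogisticRegression(max_iter=1000), eeg, labels, cv=5)
        print(scores.mean())   # above 0.5 means the integrated percept is decodable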

  12. Visual acuity test

    Science.gov (United States)

    ... this page: //medlineplus.gov/ency/article/003396.htm The visual acuity test is used to determine the smallest ...

  13. The Visual System

    Medline Plus

    Full Text Available ... to blinding eye diseases, visual disorders, mechanisms of visual function, preservation of sight, and the special health problems and requirements of the blind.”

  14. Topological Methods for Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2016-04-07

    This slide presentation describes basic topological concepts, including topological spaces, homeomorphisms, homotopy, and Betti numbers. Scalar field topology explores finding topological features in scalar fields and their visualization, and vector field topology explores finding topological features in vector fields and their visualization.
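
    One small ingredient of scalar field topology, locating extremal grid points, can be sketched as below; the sinusoidal test field and the 4-neighbour rule are illustrative simplifications (identifying saddles, building merge trees, or computing Betti numbers takes considerably more machinery than shown here).

        # Minimal sketch: flag local minima/maxima of a sampled 2D scalar field by
        # comparing each interior grid point with its four axis-aligned neighbours.
        import numpy as np

        x, y = np.meshgrid(np.linspace(-2, 2, 80), np.linspace(-2, 2, 80))
        f = np.sin(2 * x) * np.cos(2 * y)          # toy scalar field

        def classify_extrema(field):
            minima, maxima = [], []
            for i in range(1, field.shape[0] - 1):
                for j in range(1, field.shape[1] - 1):
                    nbrs = [field[i - 1, j], field[i + 1, j],
                            field[i, j - 1], field[i, j + 1]]
                    if field[i, j] < min(nbrs):
                        minima.append((i, j))
                    elif field[i, j] > max(nbrs):
                        maxima.append((i, j))
            return minima, maxima

        minima, maxima = classify_extrema(f)
        print(len(minima), len(maxima))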

  15. The Visual System

    Medline Plus

    Full Text Available ... National Eye Institute’s mission is to “conduct and support research, training, health information dissemination, and other programs with respect to blinding eye diseases, visual disorders, mechanisms of visual function, preservation of sight, ...

  16. Constructing visual representations

    DEFF Research Database (Denmark)

    Huron, Samuel; Jansen, Yvonne; Carpendale, Sheelagh

    2014-01-01

    The accessibility of infovis authoring tools to a wide audience has been identified as a major research challenge. A key task in the authoring process is the development of visual mappings. While the infovis community has long been deeply interested in finding effective visual mappings, comparatively little attention has been placed on how people construct visual mappings. In this paper, we present the results of a study designed to shed light on how people transform data into visual representations. We asked people to create, update and explain their own information visualizations using only tangible building blocks. We learned that all participants, most of whom had little experience in visualization authoring, were readily able to create and talk about their own visualizations. Based on our observations, we discuss participants’ actions during the development of their visual representations.

  17. The Visual System

    Medline Plus

    Full Text Available ... NIH), the National Eye Institute’s mission is to “conduct and support research, training, health information dissemination, and other programs with respect to blinding eye diseases, visual disorders, mechanisms of visual function, preservation of ...

  18. Visualization of Social Networks

    NARCIS (Netherlands)

    Boertjes, E.M.; Kotterink, B.; Jager, E.J.

    2011-01-01

    Current visualizations of social networks are mostly some form of node-link diagram. Depending on the type of social network, this can be some tree visualization with a strict hierarchical structure or a more generic network visualization.

  19. Visual Control of Locomotion

    National Research Council Canada - National Science Library

    Loomis, Jack M; Beall, Andrew C

    2005-01-01

    The accomplishments were threefold. First, a software tool for rendering virtual environments was developed, a tool useful for other researchers interested in visual perception and visual control of action...

  20. Visual explorer facilitator's guide

    CERN Document Server

    Palus, Charles J

    2010-01-01

    Grounded in research and practice, the Visual Explorer™ Facilitator's Guide provides a method for supporting collaborative, creative conversations about complex issues through the power of images. The guide is available as a component in the Visual Explorer Facilitator's Letter-sized Set, Visual Explorer Facilitator's Post card-sized Set, Visual Explorer Playing Card-sized Set, and is also available as a stand-alone title for purchase to assist multiple tool users in an organization.