WorldWideScience

Sample records for visual performance model

  1. A human visual model-based approach of the visual attention and performance evaluation

    Science.gov (United States)

    Le Meur, Olivier; Barba, Dominique; Le Callet, Patrick; Thoreau, Dominique

    2005-03-01

    In this paper, a coherent computational model of visual selective attention for color pictures is described and its performance is precisely evaluated. The model, based on several important behaviours of the human visual system, is composed of four parts: visibility, perception, perceptual grouping and saliency map construction. This paper focuses mainly on performance assessment, carried out through extended subjective and objective comparisons with real fixation points captured by an eye-tracking system while observers viewed the images in a task-free mode. From this ground truth, qualitative and quantitative comparisons have been made in terms of the linear correlation coefficient (CC) and the Kullback-Leibler divergence (KL). On a set of 10 natural color images, the results show that the linear correlation coefficient and the Kullback-Leibler divergence are about 0.71 and 0.46, respectively. The CC and KL measures obtained with this model are improved by about 4% and 7%, respectively, compared to the best model proposed by L. Itti. Moreover, by comparing the ability of our model to predict the eye movements produced by an average observer, we can conclude that our model succeeds quite well in predicting the spatial locations of the most important areas of the image content.
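
    As a rough illustration of the two comparison metrics named above, the following sketch (assuming NumPy, with random toy maps standing in for a real saliency map and a fixation-density map) computes the linear correlation coefficient and the Kullback-Leibler divergence between two maps; the exact normalization and smoothing used by the authors may differ.

```python
import numpy as np

def correlation_coefficient(saliency, fixation):
    """Pearson linear correlation (CC) between two maps of equal shape."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    f = (fixation - fixation.mean()) / (fixation.std() + 1e-12)
    return float((s * f).mean())

def kl_divergence(saliency, fixation, eps=1e-12):
    """KL divergence of the fixation distribution from the predicted saliency
    distribution, after normalizing both maps to sum to one."""
    p = fixation / (fixation.sum() + eps)   # "ground truth" distribution
    q = saliency / (saliency.sum() + eps)   # model prediction
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy example: random maps in place of a predicted saliency map and an
# eye-tracking-derived fixation-density map.
rng = np.random.default_rng(0)
saliency_map = rng.random((64, 64))
fixation_map = rng.random((64, 64))
print(correlation_coefficient(saliency_map, fixation_map))
print(kl_divergence(saliency_map, fixation_map))
```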

  2. Visual Perceptual Learning and Models.

    Science.gov (United States)

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.

  3. Enhanced visual performance in obsessive compulsive personality disorder.

    Science.gov (United States)

    Ansari, Zohreh; Fadardi, Javad Salehi

    2016-12-01

    Visual performance is considered the commanding modality in human perception. We tested whether people with obsessive-compulsive personality disorder (OCPD) perform differently on visual performance tasks than people without OCPD. One hundred ten students of Ferdowsi University of Mashhad and non-student participants were tested with the Structured Clinical Interview for DSM-IV Axis II Personality Disorders (SCID-II), among whom 18 (mean age = 29.55; SD = 5.26; 84% female) met the criteria for OCPD classification; controls were 20 persons (mean age = 27.85; SD = 5.26; 84% female) who did not meet the OCPD criteria. Both groups were tested on a modified Flicker task covering two dimensions of visual performance (i.e., visual acuity: detecting the location of change, complexity, and size; and visual contrast sensitivity). The OCPD group responded more accurately on pairs related to size, complexity, and contrast, but spent more time detecting a change on pairs related to complexity and contrast. OCPD individuals thus seem to have more accurate visual performance than non-OCPD controls. The findings support the relationship between personality characteristics and visual performance within the framework of a top-down processing model. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  4. A Closed-Loop Model of Operator Visual Attention, Situation Awareness, and Performance Across Automation Mode Transitions.

    Science.gov (United States)

    Johnson, Aaron W; Duda, Kevin R; Sheridan, Thomas B; Oman, Charles M

    2017-03-01

    This article describes a closed-loop, integrated human-vehicle model designed to help understand the underlying cognitive processes that influenced changes in subject visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment. Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator's estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness. We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator's estimates of system states. The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention ≤3.6% for all simulator instruments. The model's predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data. Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates. Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator's visual attention during control mode transitions can produce reallocations in situation awareness of certain states.
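
    The abstract builds on the SEEV (salience, effort, expectancy, value) model extended with state uncertainty. The sketch below is an illustrative SEEV-style scoring of areas of interest, not the published model: the weights, the additive uncertainty term, and the normalization to attention percentages are assumptions made for demonstration.

```python
import numpy as np

def seev_attention_shares(salience, effort, expectancy, value,
                          uncertainty=None,
                          weights=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Predicted share of visual attention per area of interest (AOI) under a
    SEEV-style score: wS*S - wE*EF + wX*EX + wV*V (+ wU*U, optional).
    Effort enters negatively; all other terms attract attention."""
    wS, wE, wX, wV, wU = weights
    score = (wS * np.asarray(salience) - wE * np.asarray(effort)
             + wX * np.asarray(expectancy) + wV * np.asarray(value))
    if uncertainty is not None:              # uncertainty-driven sampling term
        score = score + wU * np.asarray(uncertainty)
    score = np.clip(score, 0.0, None)
    return 100.0 * score / (score.sum() + 1e-12)

# Toy example with three hypothetical instruments (altitude, attitude, fuel).
shares = seev_attention_shares(
    salience=[0.8, 0.5, 0.2], effort=[0.1, 0.2, 0.4],
    expectancy=[0.7, 0.6, 0.1], value=[0.9, 0.8, 0.3],
    uncertainty=[0.2, 0.1, 0.05])
print(shares.round(1))   # percentage of attention predicted per instrument
```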

  5. Modeling visual problem solving as analogical reasoning.

    Science.gov (United States)

    Lovett, Andrew; Forbus, Kenneth

    2017-01-01

    We present a computational model of visual problem solving, designed to solve problems from the Raven's Progressive Matrices intelligence test. The model builds on the claim that analogical reasoning lies at the heart of visual problem solving, and intelligence more broadly. Images are compared via structure mapping, aligning the common relational structure in 2 images to identify commonalities and differences. These commonalities or differences can themselves be reified and used as the input for future comparisons. When images fail to align, the model dynamically rerepresents them to facilitate the comparison. In our analysis, we find that the model matches adult human performance on the Standard Progressive Matrices test, and that problems which are difficult for the model are also difficult for people. Furthermore, we show that model operations involving abstraction and rerepresentation are particularly difficult for people, suggesting that these operations may be critical for performing visual problem solving, and reasoning more generally, at the highest level. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Peripheral visual performance enhancement by neurofeedback training.

    Science.gov (United States)

    Nan, Wenya; Wan, Feng; Lou, Chin Ian; Vai, Mang I; Rosa, Agostinho

    2013-12-01

    Peripheral visual performance is an important ability for everyone, and a positive inter-individual correlation is found between the peripheral visual performance and the alpha amplitude during the performance test. This study investigated the effect of alpha neurofeedback training on the peripheral visual performance. A neurofeedback group of 13 subjects finished 20 sessions of alpha enhancement feedback within 20 days. The peripheral visual performance was assessed by a new dynamic peripheral visual test on the first and last training day. The results revealed that the neurofeedback group showed significant enhancement of the peripheral visual performance as well as the relative alpha amplitude during the peripheral visual test. It was not the case in the non-neurofeedback control group, which performed the tests within the same time frame as the neurofeedback group but without any training sessions. These findings suggest that alpha neurofeedback training was effective in improving peripheral visual performance. To the best of our knowledge, this is the first study to show evidence for performance improvement in peripheral vision via alpha neurofeedback training.

  7. Visualization and Analysis of Climate Simulation Performance Data

    Science.gov (United States)

    Röber, Niklas; Adamidis, Panagiotis; Behrens, Jörg

    2015-04-01

    Visualization is the key process of transforming abstract (scientific) data into a graphical representation, to aid in the understanding of the information hidden within the data. Climate simulation data sets are typically quite large, time varying, and consist of many different variables sampled on an underlying grid. A large variety of climate models - and sub-models - exist to simulate various aspects of the climate system. Generally, one is mainly interested in the physical variables produced by the simulation runs, but model developers are also interested in performance data measured along with these simulations. Climate simulation models are carefully developed complex software systems, designed to run in parallel on large HPC systems. An important goal thereby is to utilize the entire hardware as efficiently as possible, that is, to distribute the workload as evenly as possible among the individual components. This is a very challenging task, and detailed performance data, such as timings, cache misses, etc., have to be used to locate and understand performance problems in order to optimize the model implementation. Furthermore, the correlation of performance data to the processes of the application and the sub-domains of the decomposed underlying grid is vital when addressing communication and load imbalance issues. High resolution climate simulations are carried out on tens to hundreds of thousands of cores, thus yielding a vast amount of profiling data, which cannot be analyzed without appropriate visualization techniques. This PICO presentation displays and discusses the ICON simulation model, which is jointly developed by the Max Planck Institute for Meteorology and the German Weather Service in partnership with DKRZ. The visualization and analysis of the model's performance data allow us to optimize and fine-tune the model, as well as to understand its execution on the HPC system. We show and discuss our workflow, as well as present new ideas and

  8. Novel mathematical neural models for visual attention

    DEFF Research Database (Denmark)

    Li, Kang

    for the visual attention theories and spiking neuron models for single spike trains. Statistical inference and model selection are performed and various numerical methods are explored. The designed methods also give a framework for neural coding under visual attention theories. We conduct both analysis on real......Visual attention has been extensively studied in psychology, but some fundamental questions remain controversial. We focus on two questions in this study. First, we investigate how a neuron in visual cortex responds to multiple stimuli inside the receptive field, described by either a response...... system, supported by simulation study. Finally, we present the decoding of multiple temporal stimuli under these visual attention theories, also in a realistic biophysical situation with simulations....

  9. Business Model Visualization

    OpenAIRE

    Zagorsek, Branislav

    2013-01-01

    A business model describes the company’s most important activities, its proposed value, and the compensation for that value. Business model visualization makes it possible to simply and systematically capture and describe the most important components of the business model, while standardization of the concept allows comparison between companies. There are several ways to visualize the model. The aim of this paper is to describe the options for business model visualization and business mod...

  10. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition.

    Directory of Open Access Journals (Sweden)

    Na Shu

    Humans can easily understand other people's actions through visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper aiming for automatic action recognition. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cells tuned to different speeds and orientations in time for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field caused by lateral connections of spiking neuron networks in V1, we propose a surround suppressive operator to further process spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider a characteristic of the neural code: a mean motion map based on the analysis of spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with the state-of-the-art approaches demonstrate the superior performance of the proposed model.
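
    As a rough sketch of the spatiotemporal filtering stage described above, the following code builds a simplified three-dimensional (space-time) Gabor kernel tuned to an orientation, spatial frequency and speed; the parameterization is hypothetical and far simpler than the correlative Gabor family and surround suppression used in the paper.

```python
import numpy as np

def spatiotemporal_gabor(size=15, frames=7, orientation=0.0,
                         spatial_freq=0.1, speed=1.0, sigma=3.0, tau=2.0):
    """Space-time Gabor kernel tuned to `orientation` (radians), `spatial_freq`
    (cycles/pixel) and `speed` (pixels/frame); a simplified stand-in for a V1
    simple-cell receptive field."""
    half, half_t = size // 2, frames // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    kernel = np.empty((frames, size, size))
    for i, t in enumerate(range(-half_t, half_t + 1)):
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2) - t**2 / (2 * tau**2))
        carrier = np.cos(2 * np.pi * spatial_freq * (xr - speed * t))
        kernel[i] = envelope * carrier
    return kernel

# Filtering a grayscale video block of shape (T, H, W) would then be a 3-D
# correlation with this kernel, e.g. scipy.ndimage.correlate(video, kernel).
print(spatiotemporal_gabor().shape)   # (7, 15, 15)
```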

  11. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    Science.gov (United States)

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).

  12. Choosing colors for map display icons using models of visual search.

    Science.gov (United States)

    Shive, Joshua; Francis, Gregory

    2013-04-01

    We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.
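
    A toy sketch of the general idea (not the authors' fitted model): predict search time from target eccentricity and target-distractor color distinctiveness, then assign palette colors to icons greedily so that predicted search times stay low. The functional form and the parameters a, b and c are placeholders, not fitted values.

```python
import numpy as np

def predicted_search_time(target_rgb, distractor_rgbs, eccentricity_deg,
                          a=1.2, b=0.05, c=2.0):
    """Toy model: search time grows with eccentricity and shrinks with the
    minimum color distance between the target icon and its distractors."""
    target = np.asarray(target_rgb, dtype=float)
    distances = [np.linalg.norm(target - np.asarray(d, dtype=float))
                 for d in distractor_rgbs]
    distinctiveness = min(distances) + 1e-6
    return a + b * eccentricity_deg + c / distinctiveness

def choose_colors(icons, palette):
    """Greedy assignment: each icon gets the palette color minimizing its
    predicted search time given the colors already assigned to other icons."""
    assigned = {}
    for name, eccentricity in icons:
        others = list(assigned.values())
        if not others:                    # first icon: any color is fine
            assigned[name] = palette[0]
            continue
        assigned[name] = min(palette, key=lambda color: predicted_search_time(
            color, others, eccentricity))
    return assigned

palette = [(255, 0, 0), (0, 128, 255), (0, 200, 0), (255, 200, 0)]
icons = [("hospital", 2.0), ("fuel", 6.0), ("airport", 10.0)]   # name, ecc (deg)
print(choose_colors(icons, palette))
```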

  13. Effect of the small-world structure on encoding performance in the primary visual cortex: an electrophysiological and modeling analysis.

    Science.gov (United States)

    Shi, Li; Niu, Xiaoke; Wan, Hong

    2015-05-01

    Biological networks have been widely reported to present small-world properties. However, the effects of small-world network structure on a population's encoding performance remain poorly understood. To address this issue, we applied a small world-based framework to quantify and analyze the response dynamics of cell assemblies recorded from rat primary visual cortex, and further established a population encoding model based on a small world-based generalized linear model (SW-GLM). The electrophysiological experimental results show that the small world-based population responses to different topological shapes present significant variation (t test, p < 0.05; effect size: Hedge's g > 0.8), while no significant variation was found for control networks without considering their spatial connectivity (t test, p > 0.05; effect size: Hedge's g < 0.5). Furthermore, the numerical experimental results show that the predicted response under SW-GLM is more accurate and reliable compared to the control model without small-world structure, and the decoding performance is also improved by about 10% by taking the small-world structure into account. The above results, providing electrophysiological and theoretical evidence respectively, suggest the important role of the small-world neural structure in encoding visual information across the neural population. The study contributes to a better understanding of the population encoding mechanisms of the visual cortex.
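
    To give a concrete feel for what a small world-based coupling structure in a population encoding model might look like, the sketch below builds a Watts-Strogatz adjacency mask with networkx and uses it to gate the coupling terms of a Poisson-GLM-style conditional intensity. The filters, parameters and fitting procedure of the actual SW-GLM are not reproduced here.

```python
import numpy as np
import networkx as nx

def small_world_coupling(n_neurons=20, k=4, p_rewire=0.1, seed=0):
    """Watts-Strogatz small-world graph used as a coupling mask: neuron i may
    be influenced by neuron j's recent spikes only if (i, j) is an edge."""
    graph = nx.watts_strogatz_graph(n_neurons, k, p_rewire, seed=seed)
    return nx.to_numpy_array(graph, dtype=bool)

def poisson_glm_rates(stimulus_drive, spikes_prev, weights, mask, baseline=-2.0):
    """Conditional intensity per neuron for one time bin:
    lambda_i = exp(baseline + stimulus drive + masked coupling input)."""
    coupling = (weights * mask) @ spikes_prev
    return np.exp(baseline + stimulus_drive + coupling)

rng = np.random.default_rng(1)
mask = small_world_coupling()
weights = rng.normal(0.0, 0.1, size=mask.shape)
spikes_prev = rng.integers(0, 2, size=mask.shape[0])       # spikes in last bin
rates = poisson_glm_rates(rng.normal(0.0, 0.5, mask.shape[0]),
                          spikes_prev, weights, mask)
print(rates.round(3))
```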

  14. Integrated and visual performance evaluation model for thermal systems and its application to an HTGR cogeneration system

    International Nuclear Information System (INIS)

    Qi, Zhang; Yoshikawa, Hidekazu; Ishii, Hirotake; Shimoda, Hiroshi

    2010-01-01

    An integrated and visual model, EXCEM-MFM (EXergy, Cost, Energy and Mass - Multilevel Flow Model), has been proposed in this study to comprehensively analyze and evaluate the performance of thermal systems by coupling two models: the EXCEM model and MFM. In the EXCEM-MFM model, MFM is used to provide analysis frameworks for the four parameters (exergy, cost, energy and mass), and EXCEM is used to calculate the flow values of these four parameters for MFM based on the provided framework. In this study, we used the tools and technologies of computer science and software engineering to implement the model. Moreover, the feasibility and application potential of the proposed EXCEM-MFM model have been demonstrated through an example application: a comprehensive performance study of a typical High Temperature Gas Reactor (HTGR) cogeneration system that takes into account thermodynamic and economic perspectives. (author)

  15. Enhanced Massive Visualization of Engines Performance

    International Nuclear Information System (INIS)

    Rostand, N D; Eglantine, H; Jerôme, L

    2012-01-01

    Today, we are witnessing an increasing complexity of transport systems designed to meet requirements of safety, security, reliability and efficiency. Such transport is generally equipped with drive systems, and engine manufacturers must nevertheless meet energy-efficiency performance requirements throughout their operation. To this end, this article proposes a performance monitoring solution for a large fleet of engines in operation. It uses as a reference a pre-calibrated physical model developed by the engine manufacturer with respect to the performance objectives. The physical model is first decomposed into critical performance modules, and is then updated on current observations extracted at specific predefined operating conditions in order to derive the residual error status of each engine tested. Through a process of standardization of the remaining contextual differences, the solution offers a synthesis map to visualize the evolution of the performance of each engine throughout its operation. This article describes the theoretical implementation methodology, based mainly on universal mathematical foundations, and argues for the benefits of its industrialization in the light of the proactive findings.

  16. Visual Middle-Out Modeling of Problem Spaces

    DEFF Research Database (Denmark)

    Valente, Andrea

    2009-01-01

    Modeling is a complex and central activity in many domains. Domain experts and designers usually work by drawing and create models from the middle out; however, visual and middle-out modeling is poorly supported by software tools. In order to define a new class of software-based modeling...... tools, we propose a scenario and identify some requirements. Those requirements are contrasted against features of existing tools from various application domains, and the results show a general lack of support for custom visualization and incremental knowledge specification, poor handling of temporal...... information, and little generative capability. Satisfaction of the requirements proved difficult, and our first two prototypes did not perform well. A new and streamlined prototype is currently under development: it should enable some useful form of middle-out modeling. Application domains will range from

  17. Slushy weightings for the optimal pilot model. [considering visual tracking task

    Science.gov (United States)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Secondly, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results and, in general, good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.

  18. High performance visual display for HENP detectors

    International Nuclear Information System (INIS)

    McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel

    2001-01-01

    A high-end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long, even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on the BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicting with the many graphics development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of the detector and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactive control, including the ability to slice, search and mark areas of the detector. We incorporate the ability to make a high-quality still image of a view of the detector and the ability to generate animations and a fly-through of the detector and output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain a real-time visual display for events accumulated during simulations

  19. Modelling individual difference in visual categorization.

    Science.gov (United States)

    Shen, Jianhong; Palmeri, Thomas J

    Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review focuses both on how formal models of visual categorization have captured individual differences and on how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a bit of a historical perspective, starting with models that predicted no individual differences, moving to those that captured group differences, then to those that predict true individual differences, and finally to more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization.

  1. A model for visual memory encoding.

    Science.gov (United States)

    Nenert, Rodolphe; Allendorfer, Jane B; Szaflarski, Jerzy P

    2014-01-01

    Memory encoding engages multiple concurrent and sequential processes. While the individual processes involved in successful encoding have been examined in many studies, a sequence of events and the importance of modules associated with memory encoding has not been established. For this reason, we sought to perform a comprehensive examination of the network for memory encoding using data-driven methods and to determine the directionality of the information flow in order to build a viable model of visual memory encoding. Forty healthy controls ages 19-59 performed a visual scene encoding task. FMRI data were preprocessed using SPM8 and then processed using independent component analysis (ICA) with the reliability of the identified components confirmed using ICASSO as implemented in GIFT. The directionality of the information flow was examined using Granger causality analyses (GCA). All participants performed the fMRI task well above the chance level (>90% correct on both active and control conditions) and the post-fMRI testing recall revealed correct memory encoding at 86.33 ± 5.83%. ICA identified involvement of components of five different networks in the process of memory encoding, and the GCA allowed for the directionality of the information flow to be assessed, from visual cortex via the ventral stream to the attention network and then to the default mode network (DMN). Two additional networks involved in this process were the cerebellar and the auditory-insular network. This study provides evidence that successful visual memory encoding is dependent on multiple modules that are part of other networks that are only indirectly related to the main process. This model may help to identify the node(s) of the network that are affected by specific disease processes and explain the presence of memory encoding difficulties in patients in whom focal or global network dysfunction exists.
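
    A minimal sketch of the kind of pairwise Granger causality test referenced above, using statsmodels on two synthetic component time courses (the lag structure and component names are invented; the study applied GCA to ICA component time series derived from fMRI).

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic example: does a "visual" component help predict an "attention"
# component two samples later?
rng = np.random.default_rng(42)
n = 300
visual = rng.standard_normal(n)
attention = np.zeros(n)
for t in range(2, n):
    attention[t] = (0.6 * visual[t - 2] + 0.2 * attention[t - 1]
                    + 0.5 * rng.standard_normal())

# Column order matters: the test asks whether the second column
# Granger-causes the first one.
data = np.column_stack([attention, visual])
results = grangercausalitytests(data, maxlag=3, verbose=False)
for lag, (tests, _) in results.items():
    print(lag, round(tests["ssr_ftest"][1], 4))   # F-test p-value per lag
```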

  2. Immersive visualization of dynamic CFD model results

    International Nuclear Information System (INIS)

    Comparato, J.R.; Ringel, K.L.; Heath, D.J.

    2004-01-01

    With immersive visualization the engineer has the means for vividly understanding problem causes and discovering opportunities to improve design. Software can generate an interactive world in which collaborators experience the results of complex mathematical simulations such as computational fluid dynamic (CFD) modeling. Such software, while providing unique benefits over traditional visualization techniques, presents special development challenges. The visualization of large quantities of data interactively requires both significant computational power and shrewd data management. On the computational front, commodity hardware is outperforming large workstations in graphical quality and frame rates. Also, 64-bit commodity computing shows promise in enabling interactive visualization of large datasets. Initial interactive transient visualization methods and examples are presented, as well as development trends in commodity hardware and clustering. Interactive, immersive visualization relies on relevant data being stored in active memory for fast response to user requests. For large or transient datasets, data management becomes a key issue. Techniques for dynamic data loading and data reduction are presented as means to increase visualization performance. (author)

  3. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Gollaz Morales, Jose Alejandro

    2012-09-01

    Measuring visual closeness of 3-D models is an important issue for different problems and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing normal difference as a parameter in a weighted metric in Metro’s sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.
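
    As an illustration of folding normal differences into a sampled distance metric, the sketch below stacks point positions with weighted unit normals into a 6-D feature space and reports Hausdorff-like maximum and mean nearest-neighbor distances; the weighting scheme is an assumption, not the thesis' exact metric.

```python
import numpy as np
from scipy.spatial import cKDTree

def weighted_visual_distance(pts_a, nrm_a, pts_b, nrm_b, w_normal=0.5):
    """One-sided distance from model A to model B in a 6-D space combining
    positions and (weighted) unit normals, so that nearby surfaces with very
    different orientations still count as visually different."""
    feat_a = np.hstack([pts_a, w_normal * nrm_a])
    feat_b = np.hstack([pts_b, w_normal * nrm_b])
    dists, _ = cKDTree(feat_b).query(feat_a)
    return dists.max(), dists.mean()      # max ~ Hausdorff-like, plus the mean

# Toy example: two noisy samplings of the unit sphere, where each point also
# serves as its own outward normal.
rng = np.random.default_rng(3)
a = rng.standard_normal((2000, 3))
a /= np.linalg.norm(a, axis=1, keepdims=True)
b = a + rng.normal(0.0, 0.01, a.shape)
b /= np.linalg.norm(b, axis=1, keepdims=True)
print(weighted_visual_distance(a, a, b, b))
```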

  4. Performance improvements from imagery: evidence that internal visual imagery is superior to external visual imagery for slalom performance

    Directory of Open Access Journals (Sweden)

    Nichola eCallow

    2013-10-01

    We report three experiments investigating the hypothesis that use of internal visual imagery (IVI) would be superior to external visual imagery (EVI) for the performance of different slalom-based motor tasks. In Experiment 1, three groups of participants (IVI, EVI, and a control group) performed a driving-simulation slalom task. The IVI group achieved significantly quicker lap times than the EVI and control groups. In Experiment 2, participants performed a downhill running slalom task under both IVI and EVI conditions. Performance was again quickest in the IVI compared to the EVI condition, with no differences in accuracy. Experiment 3 used the same group design as Experiment 1, but with participants performing a downhill ski-slalom task. Results revealed the IVI group to be significantly more accurate than the control group, with no significant differences in the time taken to complete the task. These results support the beneficial effects of IVI for slalom-based tasks and significantly advance our knowledge of the differential effects of visual imagery perspectives on motor performance.

  5. Effects of lighting and task parameters on visual acuity and performance

    Energy Technology Data Exchange (ETDEWEB)

    Halonen, L.

    1993-12-31

    Lighting and task parameters and their effects on visual acuity and visual performance are dealt with. The parameters studied are target contrast, target size and subject's age, together with adaptation luminance, the luminance ratio between the task and its surroundings, and temporal change in luminances. Experiments were carried out to examine the effects of luminance and light spectrum on visual acuity. Young normally sighted, older and low vision people participated in the measurements. In the young and older subject groups, visual acuity remained unchanged at contrasts 0.93 and 0.63 over the luminance range of 15-630 cd/m². The results show that at contrasts 0.03-0.93, young and older subjects' visual acuity remained unchanged in the luminance range of 105-630 cd/m². In the low vision group, changes in luminance between 25-860 cd/m² did not have significant effects on visual acuity measured at the high contrast of 0.93; at low contrast, slight individual changes were found. The colour temperature of the light sources was varied between 2900-9500 K in the experiment. In the older, young and low vision subject groups the light spectrum did not have significant effects on visual acuity, except for two retinitis pigmentosa subjects. On the basis of the visual acuity experiments, a three-dimensional visual acuity model (VA-HUT) has been developed. The model predicts visual acuity as a function of luminance, target contrast and observer age. On the basis of the visual acuity experiments, visual acuity reserve values have been calculated for different text sizes

  6. Intraocular Telescopic System Design: Optical and Visual Simulation in a Human Eye Model

    OpenAIRE

    Zoulinakis, Georgios; Ferrer-Blasco, Teresa

    2017-01-01

    Purpose. To design an intraocular telescopic system (ITS) for magnifying the retinal image and to simulate its optical and visual performance after implantation in a human eye model. Methods. Design and simulation were carried out with ray-tracing and optical design software. Two different ITS were designed, and their visual performance was simulated using the Liou-Brennan eye model. The differences between the two ITS were the placement of their lenses in the eye model and their powers. Ray tracing in bot...

  7. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that the visual weighting is not applied by lifting the coefficients in the wavelet domain, but is instead implemented through code-stream organization. It retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution-progressive coding, good robustness against error-bit spread and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time and offers VIsual Progressive (VIP) coding.

  8. Macular pigment and visual performance in glare: benefits for photostress recovery, disability glare, and visual discomfort.

    Science.gov (United States)

    Stringham, James M; Garcia, Paul V; Smith, Peter A; McLin, Leon N; Foutch, Brian K

    2011-09-22

    One theory of macular pigment's (MP) presence in the fovea is that it improves visual performance in glare. This study sought to determine the effect of MP level on three aspects of visual performance in glare: photostress recovery, disability glare, and visual discomfort. Twenty-six subjects participated in the study. Spatial profiles of MP optical density were assessed with heterochromatic flicker photometry. Glare was delivered via high-bright-white LEDs. For the disability glare and photostress recovery portions of the experiment, the visual task consisted of correct identification of a 1° Gabor patch's orientation. Visual discomfort during the glare presentation was assessed with a visual discomfort rating scale. Pupil diameter was monitored with an infrared (IR) camera. MP level correlated significantly with all the outcome measures. Higher MP optical densities (MPODs) resulted in faster photostress recovery times, improved disability glare contrast thresholds, and less visual discomfort (P = 0.002). Smaller pupil diameter during glare presentation significantly correlated with higher visual discomfort ratings (P = 0.037). MP correlates with three aspects of visual performance in glare. Unlike previous studies of MP and glare, the present study used free-viewing conditions, in which effects of iris pigmentation and pupil size could be accounted for. The effects described, therefore, can be extended more confidently to real-world, practical visual performance benefits. Greater iris constriction resulted (paradoxically) in greater visual discomfort. This finding may be attributable to the neurobiologic mechanism that mediates the pain elicited by light.

  9. Adaptive Performance-Constrained in Situ Visualization of Atmospheric Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dorier, Matthieu; Sisneros, Roberto; Bautista Gomez, Leonard; Peterka, Tom; Orf, Leigh; Rahmani, Lokman; Antoniu, Gabriel; Bouge, Luc

    2016-09-12

    While many parallel visualization tools now provide in situ visualization capabilities, the trend has been to feed such tools with large amounts of unprocessed output data and let them render everything at the highest possible resolution. This leads to an increased run time of simulations that still have to complete within a fixed-length job allocation. In this paper, we tackle the challenge of enabling in situ visualization under performance constraints. Our approach shuffles data across processes according to its content and filters out part of it in order to feed a visualization pipeline with only a reorganized subset of the data produced by the simulation. Our framework leverages fast, generic evaluation procedures to score blocks of data, using information theory, statistics, and linear algebra. It monitors its own performance and adapts dynamically to achieve appropriate visual fidelity within predefined performance constraints. Experiments on the Blue Waters supercomputer with the CM1 simulation show that our approach enables a 5x speedup with respect to the initial visualization pipeline and is able to meet performance constraints.
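
    A toy version of content-based filtering for in situ visualization: score blocks of a field by histogram entropy and keep only the highest-scoring fraction under a performance budget. The scoring function, block size and budget are illustrative; the paper's framework combines several scores and adapts the budget at run time.

```python
import numpy as np

def block_entropy(block, bins=32):
    """Shannon entropy of a block's value histogram; higher entropy is treated
    here as 'more interesting' content worth rendering."""
    hist, _ = np.histogram(block, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_blocks(field, block_shape=(16, 16), budget=0.25):
    """Score every non-overlapping block and keep only the top `budget`
    fraction for the downstream visualization pipeline."""
    bh, bw = block_shape
    scored = []
    for i in range(0, field.shape[0] - bh + 1, bh):
        for j in range(0, field.shape[1] - bw + 1, bw):
            scored.append(((i, j), block_entropy(field[i:i + bh, j:j + bw])))
    scored.sort(key=lambda item: item[1], reverse=True)
    keep = max(1, int(budget * len(scored)))
    return [pos for pos, _ in scored[:keep]]

field = np.random.default_rng(0).random((128, 128))
print(len(select_blocks(field)), "of", (128 // 16) ** 2, "blocks kept")
```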

  10. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  11. Modeling human comprehension of data visualizations

    Energy Technology Data Exchange (ETDEWEB)

    Matzen, Laura E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Haass, Michael Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Divis, Kristin Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wilson, Andrew T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.

  12. Effect of Cognitive Demand on Functional Visual Field Performance in Senior Drivers with Glaucoma

    Directory of Open Access Journals (Sweden)

    Viswa Gangeddula

    2017-08-01

    Purpose: To investigate the effect of cognitive demand on functional visual field performance in drivers with glaucoma. Method: This study included 20 drivers with open-angle glaucoma and 13 age- and sex-matched controls. Visual field performance was evaluated under different degrees of cognitive demand: a static visual field condition (C1), a dynamic visual field condition (C2), and a dynamic visual field condition with active driving (C3), using an interactive, desktop driving simulator. The number of correct responses (accuracy) and response times on the visual field task were compared between groups and between conditions using Kruskal–Wallis tests. General linear models were employed to compare cognitive workload, recorded in real time through pupillometry, between groups and conditions. Results: Adding cognitive demand (C2 and C3) to the static visual field test (C1) adversely affected accuracy and response times in both groups (p < 0.05). However, drivers with glaucoma performed worse than did control drivers when the static condition changed to a dynamic condition [C2 vs. C1 accuracy; glaucoma: median difference (Q1–Q3): 3 (2–6.50) vs. controls: 2 (0.50–2.50); p = 0.05] and to a dynamic condition with active driving [C3 vs. C1 accuracy; glaucoma: 2 (2–6) vs. controls: 1 (0.50–2); p = 0.02]. Overall, drivers with glaucoma exhibited greater cognitive workload than controls (p = 0.02). Conclusion: Cognitive demand disproportionately affects functional visual field performance in drivers with glaucoma. Our results may inform the development of a performance-based visual field test for drivers with glaucoma.

  13. A physiologically based nonhomogeneous Poisson counter model of visual identification.

    Science.gov (United States)

    Christensen, Jeppe H; Markussen, Bo; Bundesen, Claus; Kyllingsbæk, Søren

    2018-04-30

    A physiologically based nonhomogeneous Poisson counter model of visual identification is presented. The model was developed in the framework of a Theory of Visual Attention (Bundesen, 1990; Kyllingsbæk, Markussen, & Bundesen, 2012) and meant for modeling visual identification of objects that are mutually confusable and hard to see. The model assumes that the visual system's initial sensory response consists in tentative visual categorizations, which are accumulated by leaky integration of both transient and sustained components comparable with those found in spike density patterns of early sensory neurons. The sensory response (tentative categorizations) feeds independent Poisson counters, each of which accumulates tentative object categorizations of a particular type to guide overt identification performance. We tested the model's ability to predict the effect of stimulus duration on observed distributions of responses in a nonspeeded (pure accuracy) identification task with eight response alternatives. The time courses of correct and erroneous categorizations were well accounted for when the event-rates of competing Poisson counters were allowed to vary independently over time in a way that mimicked the dynamics of receptive field selectivity as found in neurophysiological studies. Furthermore, the initial sensory response yielded theoretical hazard rate functions that closely resembled empirically estimated ones. Finally, supplied with a Naka-Rushton type contrast gain control, the model provided an explanation for Bloch's law. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
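
    A simplified simulation of the core mechanism described above: independent Poisson counters driven by a transient-plus-sustained rate profile, with the fullest counter at stimulus offset determining the identification response. The rate profile, the absence of leakage and all parameter values are assumptions for illustration only.

```python
import numpy as np

def simulate_identification(rate_fn, duration, dt=0.001, seed=None):
    """Simulate K independent nonhomogeneous Poisson counters. `rate_fn` maps
    a time t (s) to an array of K event rates (Hz); the response is the
    alternative whose counter holds the most events at `duration`."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(len(rate_fn(0.0)), dtype=int)
    for t in np.arange(0.0, duration, dt):
        lam = np.clip(rate_fn(t), 0.0, None) * dt   # expected events per bin
        counts += rng.poisson(lam)
    winners = np.flatnonzero(counts == counts.max())
    return rng.choice(winners), counts              # ties broken at random

# Toy transient-plus-sustained rates for 8 confusable alternatives, where
# alternative 0 (the "correct" category) gets a slightly higher sustained rate.
def rates(t):
    transient = 80.0 * np.exp(-t / 0.05)
    sustained = np.full(8, 20.0)
    sustained[0] = 30.0
    return transient + sustained

choice, counts = simulate_identification(rates, duration=0.2, seed=7)
print(choice, counts)
```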

  14. Power spectrum model of visual masking: simulations and empirical data.

    Science.gov (United States)

    Serrano-Pedraza, Ignacio; Sierra-Vázquez, Vicente; Derrington, Andrew M

    2013-06-01

    In the study of the spatial characteristics of the visual channels, the power spectrum model of visual masking is one of the most widely used. When the task is to detect a signal masked by visual noise, this classical model assumes that the signal and the noise are previously processed by a bank of linear channels and that the power of the signal at threshold is proportional to the power of the noise passing through the visual channel that mediates detection. The model also assumes that this visual channel will have the highest ratio of signal power to noise power at its output. According to this, there are masking conditions where the highest signal-to-noise ratio (SNR) occurs in a channel centered in a spatial frequency different from the spatial frequency of the signal (off-frequency looking). Under these conditions the channel mediating detection could vary with the type of noise used in the masking experiment and this could affect the estimation of the shape and the bandwidth of the visual channels. It is generally believed that notched noise, white noise and double bandpass noise prevent off-frequency looking, and high-pass, low-pass and bandpass noises can promote it independently of the channel's shape. In this study, by means of a procedure that finds the channel that maximizes the SNR at its output, we performed numerical simulations using the power spectrum model to study the characteristics of masking caused by six types of one-dimensional noise (white, high-pass, low-pass, bandpass, notched, and double bandpass) for two types of channel's shape (symmetric and asymmetric). Our simulations confirm that (1) high-pass, low-pass, and bandpass noises do not prevent the off-frequency looking, (2) white noise satisfactorily prevents the off-frequency looking independently of the shape and bandwidth of the visual channel, and interestingly we proved for the first time that (3) notched and double bandpass noises prevent off-frequency looking only when the noise
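
    A compact numerical sketch of the channel-selection rule at the heart of the model: compute the output signal-to-noise ratio for a bank of log-Gaussian channels and pick the channel with the highest SNR, which for high-pass masking noise can end up centered away from the signal frequency (off-frequency looking). The channel shape, bandwidth and stimulus spectra are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def channel_gain(freqs, center, octave_bandwidth=1.0):
    """Log-Gaussian gain of a visual channel centered on `center` (c/deg)."""
    sigma = octave_bandwidth / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * (np.log2(freqs / center) / sigma) ** 2)

def best_channel(signal_power, noise_power, freqs, centers):
    """Center of the channel with the highest output signal-to-noise ratio,
    i.e. the channel assumed to mediate detection (possibly off-frequency)."""
    snrs = [(signal_power * channel_gain(freqs, c) ** 2).sum()
            / ((noise_power * channel_gain(freqs, c) ** 2).sum() + 1e-12)
            for c in centers]
    return centers[int(np.argmax(snrs))], snrs

freqs = np.linspace(0.25, 32.0, 512)                      # cycles/deg
signal = np.where(np.abs(freqs - 4.0) < 0.1, 1.0, 0.0)    # 4 c/deg grating
noise = np.where(freqs > 4.0, 1.0, 0.0)                   # high-pass noise
centers = 2.0 ** np.arange(-1.0, 5.5, 0.25)               # candidate centers
print(best_channel(signal, noise, freqs, centers)[0])     # likely below 4 c/deg
```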

  15. Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.

    Science.gov (United States)

    Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin

    2018-01-01

    We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.

  16. The influence of anaesthetists' experience on workload, performance and visual attention during simulated critical incidents.

    Science.gov (United States)

    Schulz, Christian M; Schneider, Erich; Kohlbecher, Stefan; Hapfelmeier, Alexander; Heuser, Fabian; Wagner, Klaus J; Kochs, Eberhard F; Schneider, Gerhard

    2014-10-01

    Development of accurate Situation Awareness (SA) depends on experience and may be impaired during excessive workload. In order to gain adequate SA for decision making and performance, anaesthetists need to distribute visual attention effectively. We therefore hypothesized that in more experienced anaesthetists, performance is better and the increase in physiological workload is smaller during critical incidents. Additionally, we investigated the relation between physiological workload indicators and the distribution of visual attention. In fifteen anaesthetists, the increase of pupil size and heart rate was assessed in the course of a simulated critical incident. Simulator log files were used for performance assessment. An eye-tracking device (EyeSeeCam) provided data about the anaesthetists' distribution of visual attention. Performance was assessed as time until definitive treatment. T tests and multivariate generalized linear models (MANOVA) were used for retrospective statistical analysis. Mean pupil diameter increase was 8.1% (SD ± 4.3) in the less experienced and 15.8% (±10.4) in the more experienced subjects (p = 0.191). Mean heart rate increase was 10.2% (±6.7) and 10.5% (±8.3, p = 0.956), respectively. Performance did not depend on experience. Pupil diameter and heart rate increases were associated with a shift of visual attention from monitoring towards manual tasks (not significant). For the first time, the following four variables were assessed simultaneously: physiological workload indicators, performance, experience, and distribution of visual attention between "monitoring" and "manual" tasks. However, we were unable to detect significant interactions between these variables. This experimental model could prove valuable in the investigation of gaining and maintaining SA in the operating theatre.

  17. Visual Search Performance in Patients with Vision Impairment: A Systematic Review.

    Science.gov (United States)

    Senger, Cassia; Margarido, Maria Rita Rodrigues Alves; De Moraes, Carlos Gustavo; De Fendi, Ligia Issa; Messias, André; Paula, Jayter Silva

    2017-11-01

    Patients with visual impairment constantly face challenges in achieving an independent and productive life, which depends on both good visual discrimination and good search capacities. Given that visual search is a critical skill for several daily tasks and could be used as an index of overall visual function, we investigated the relationship between vision impairment and visual search performance. A comprehensive search was undertaken using the electronic PubMed, EMBASE, LILACS, and Cochrane databases from January 1980 to December 2016, applying the following terms: "visual search", "visual search performance", "visual impairment", "visual exploration", "visual field", "hemianopia", "search time", "vision lost", "visual loss", and "low vision". Two hundred seventy-six studies from 12,059 electronic database files were selected, and 40 of them were included in this review. The studies included participants of all ages and both sexes, with sample sizes ranging from 5 to 199 participants. Visual impairment was associated with worse visual search performance in several ophthalmologic conditions, whether artificially induced or related to specific eye and neurological diseases. This systematic review details all the described circumstances interfering with visual search tasks, highlights the need for developing technical standards, and outlines patterns for diagnosis and therapy using visual search capabilities.

  18. VISUAL ART TEACHERS AND PERFORMANCE ASSESSMENT ...

    African Journals Online (AJOL)

    Charles

    Senior Secondary school visual art teachers constituted the sample of this ... and Performance Assessment Methods in Nigerian Senior Secondary Schools – Bello .... definition includes knowledge, skills, attitudes, metacognition and strategic ...

  19. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3-D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and the comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes to specific objects for static cameras and backgrounds. Relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The degree of visual fatigue is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be obtained according to the proposed algorithm. Compared with conventional algorithms, which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
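
    A bare-bones version of the prediction step described above: fit a multiple linear regression from per-shot factors to subjective fatigue ratings and use it to score new shots. The feature names and every number below are invented for illustration; the paper's factor definitions and weights differ.

```python
import numpy as np

# Hypothetical per-shot features: spatial structure, motion scale, and the
# fraction of disparities inside the comfort zone (all values made up).
features = np.array([
    [0.8, 0.2, 0.9],
    [0.5, 0.6, 0.7],
    [0.3, 0.9, 0.4],
    [0.7, 0.4, 0.8],
    [0.2, 0.8, 0.3],
    [0.6, 0.5, 0.6],
])
fatigue = np.array([1.5, 2.8, 4.1, 2.0, 4.5, 2.6])   # subjective ratings (1-5)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(len(features)), features])
coef, *_ = np.linalg.lstsq(X, fatigue, rcond=None)

def predict_fatigue(shot_features):
    """Predicted fatigue score for a new shot's feature vector."""
    return float(coef[0] + coef[1:] @ np.asarray(shot_features, dtype=float))

print(coef.round(3))
print(round(predict_fatigue([0.4, 0.7, 0.5]), 2))
```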

  20. Bio-inspired modeling and implementation of the ocelli visual system of flying insects.

    Science.gov (United States)

    Gremillion, Gregory; Humbert, J Sean; Krapp, Holger G

    2014-12-01

    Two visual sensing modalities in insects, the ocelli and compound eyes, provide signals used for flight stabilization and navigation. In this article, a generalized model of the ocellar visual system is developed for a 3-D visual simulation environment based on behavioral, anatomical, and electrophysiological data from several species. A linear measurement model is estimated from Monte Carlo simulation in a cluttered urban environment, relating state changes of the vehicle to the outputs of the ocellar model. A fully analog printed-circuit-board sensor based on this model is designed and fabricated. Open-loop characterization of the sensor in response to visual stimuli induced by self-motion is performed. Closed-loop stabilizing feedback from the sensor, in combination with optic flow sensors, is implemented onboard a quadrotor micro-air vehicle and its impulse response is characterized.
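
    As an illustration of the estimation step, the following sketch fits a linear measurement model y = Cx by least squares from paired samples of vehicle state changes and sensor outputs; the dimensions, noise level, and data are hypothetical stand-ins for the Monte Carlo simulation described above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical Monte Carlo samples: vehicle attitude/rate state changes (n x 4)
# and the corresponding ocellar sensor outputs (n x 3), with measurement noise.
n = 2000
states = rng.normal(size=(n, 4))
C_true = rng.normal(size=(3, 4))
outputs = states @ C_true.T + 0.1 * rng.normal(size=(n, 3))

# Least-squares estimate of the measurement matrix C in y = C x.
C_hat, *_ = np.linalg.lstsq(states, outputs, rcond=None)
C_hat = C_hat.T
print(np.abs(C_hat - C_true).max())   # small residual estimation error
```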

  1. visCOS: An R-package to evaluate model performance of hydrological models

    Science.gov (United States)

    Klotz, Daniel; Herrnegger, Mathew; Wesemann, Johannes; Schulz, Karsten

    2016-04-01

    The evaluation of model performance is a central part of (hydrological) modelling. Much attention has been given to the development of evaluation criteria and diagnostic frameworks (Klemeš, 1986; Gupta et al., 2008; among many others). Nevertheless, many applications exist for which objective functions do not yet provide satisfying summaries. Thus, the necessity to visualize results arises in order to explore a wider range of model capacities, be it strengths or deficiencies. Visualizations are usually devised for specific projects and these efforts are often not distributed to a broader community (e.g. via open source software packages). Hence, the opportunity to explicitly discuss a state-of-the-art presentation technique is often missed. We therefore present a comprehensive R-package for evaluating model performance by visualizing and exploring different aspects of hydrological time-series. The presented package comprises a set of useful plots and visualization methods, which complement existing packages, such as hydroGOF (Zambrano-Bigiarini et al., 2012). It is derived from practical applications of the hydrological models COSERO and COSEROreg (Kling et al., 2014). visCOS, providing an interface in R, represents an easy-to-use software package for visualizing and assessing model performance and can be used in the process of model calibration or model development. The package provides functions to load hydrological data into R, clean the data, process, visualize, explore and finally save the results in a consistent way. Together with an interactive zoom function for the time series, an online calculation of the objective functions for variable time-windows is included. Common hydrological objective functions, such as the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can also be evaluated and visualized in different ways for defined sub-periods like hydrological years or seasonal sections. Many hydrologists use long-term water-balances as a
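
    The package itself is written in R; purely as an illustration of the two objective functions it evaluates, here is a short Python sketch of the standard Nash-Sutcliffe and Kling-Gupta (2009) efficiency formulas with made-up runoff series (not code from visCOS).

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, < 0 is worse than the mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta efficiency (2009 formulation)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]        # linear correlation
    alpha = sim.std() / obs.std()          # variability ratio
    beta = sim.mean() / obs.mean()         # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.2, 3.4, 2.8, 4.1, 2.0, 1.7])   # observed runoff (made up)
sim = np.array([1.0, 3.0, 3.1, 3.8, 2.3, 1.5])   # simulated runoff (made up)
print(nse(sim, obs), kge(sim, obs))
```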

  2. Promoting Visualization Skills through Deconstruction Using Physical Models and a Visualization Activity Intervention

    Science.gov (United States)

    Schiltz, Holly Kristine

    Visualization skills are important in learning chemistry, as these skills have been shown to correlate to high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of students taking inorganic chemistry who worked with the physical model systems demonstrated difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations. In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect instructors

  3. Performance of Single-Use FlexorVue vs Reusable BoaVision Ureteroscope for Visualization of Calices and Stone Extraction in an Artificial Kidney Model.

    Science.gov (United States)

    Schlager, Daniel; Hein, Simon; Obaid, Moaaz Abdulghani; Wilhelm, Konrad; Miernik, Arkadiusz; Schoenthaler, Martin

    2017-11-01

    To evaluate and compare Flexor®Vue™, a semidisposable endoscopic deflection system with a disposable ureteral sheath and reusable visualization source, and a nondisposable fiber optic ureteroscope in a standard in vitro setting. FlexorVue and a reusable fiber optic flexible ureteroscope were each tested in an artificial kidney model. The experimental setup included the visualization of colored pearls and the extraction of calculi with two different extraction devices (NCircle® and NGage®). The procedures were performed by six experienced surgeons. Visualization time, access to calices, successful stone retraction, and time required were recorded. In addition, the surgeons' workload and subjective performance were determined according to the National Aeronautics and Space Administration-task load index (NASA-TLX). We referred to the Likert scale to assess maneuverability, handling, and image quality. Nearly all calices (99%) were correctly identified using the reusable scope, indicating full kidney access, whereas 74% of the calices were visualized using FlexorVue, of which 81% were correctly identified. Access to the lower poles of the kidney model was significantly less likely with the disposable device, and time to completion was significantly longer (755 s vs 153 s). NASA-TLX scores were significantly higher using FlexorVue. The conventional reusable device also demonstrated superior maneuverability, handling, and image quality. FlexorVue offers a semidisposable deflecting endoscopic system allowing basic ureteroscopic and cystoscopic procedures. For its use as an addition or replacement for current reusable scopes, it requires substantial technical improvements.

  4. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. A dual-trace model for visual sensory memory.

    Science.gov (United States)

    Cappiello, Marcus; Zhang, Weiwei

    2016-11-01

    Visual sensory memory refers to a transient memory lingering briefly after the stimulus offset. Although previous literature suggests that visual sensory memory is supported by a fine-grained trace for continuous representation and a coarse-grained trace of categorical information, simultaneous separation and assessment of these traces can be difficult without a quantitative model. The present study used a continuous estimation procedure to test a novel mathematical model of the dual-trace hypothesis of visual sensory memory according to which visual sensory memory could be modeled as a mixture of 2 von Mises (2VM) distributions differing in standard deviation. When visual sensory memory and working memory (WM) for colors were distinguished using different experimental manipulations in the first 3 experiments, the 2VM model outperformed Zhang and Luck's (2008) standard mixture model (SM) representing a mixture of a single memory trace and random guesses, even though SM outperformed 2VM for WM. Experiment 4 generalized 2VM's advantage over SM in fitting visual sensory memory data from color to orientation. Furthermore, a single trace model and 4 other alternative models were ruled out, suggesting the necessity and sufficiency of dual traces for visual sensory memory. Together these results support the dual-trace model of visual sensory memory and provide a preliminary inquiry into the nature of information loss from visual sensory memory to WM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
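
    To make the 2VM idea concrete, the sketch below fits a mixture of two zero-centered von Mises components with different concentrations to synthetic response errors by maximum likelihood; the data, starting values, and bounds are illustrative, and this is not the authors' fitting routine.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

# Synthetic response errors (radians) relative to the true feature value:
# a narrow (fine-grained) and a broad (coarse-grained) component.
rng = np.random.default_rng(0)
errors = np.concatenate([vonmises.rvs(8.0, size=300, random_state=rng),
                         vonmises.rvs(1.5, size=200, random_state=rng)])

def neg_log_lik(params):
    w, kappa_fine, kappa_coarse = params
    pdf = (w * vonmises.pdf(errors, kappa_fine)
           + (1.0 - w) * vonmises.pdf(errors, kappa_coarse))
    return -np.sum(np.log(pdf + 1e-12))

res = minimize(neg_log_lik, x0=[0.5, 10.0, 1.0],
               bounds=[(0.01, 0.99), (0.1, 100.0), (0.01, 50.0)],
               method="L-BFGS-B")
w, kf, kc = res.x
print(f"mixture weight={w:.2f}, fine kappa={kf:.1f}, coarse kappa={kc:.1f}")
```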

  6. VISUAL ART TEACHERS AND PERFORMANCE ASSESSMENT ...

    African Journals Online (AJOL)

    Charles

    qualitative research design; an aspect of descriptive survey research aiming at ... the competence and use of assessment strategies is determined by the type of ... Visual Art Teachers and Performance Assessment Methods in Nigerian Senior ...

  7. A Hyperbolic Ontology Visualization Tool for Model Application Programming Interface Documentation

    Science.gov (United States)

    Hyman, Cody

    2011-01-01

    Spacecraft modeling, a critically important part of validating planned spacecraft activities, is currently carried out using a time-consuming method of mission-to-mission model implementations and integration. A current project in early development, Integrated Spacecraft Analysis (ISCA), aims to remedy this hindrance by providing reusable architectures and reducing the time spent integrating models with planning and sequencing tools. The principal objective of this internship was to develop a user interface for an experimental ontology-based structure visualization of navigation and attitude control system modeling software. To satisfy this, a number of tree and graph visualization tools were researched and a Java-based hyperbolic graph viewer was selected for experimental adaptation. Early results show promise in the ability to organize and display large amounts of spacecraft model documentation efficiently and effectively through a web browser. This viewer serves as a conceptual implementation for future development, but trials with both ISCA developers and end users should be performed to truly evaluate the effectiveness of continued development of such visualizations.

  8. Modeling the effect of selection history on pop-out visual search.

    Directory of Open Access Journals (Sweden)

    Yuan-Chi Tseng

    While attentional effects in visual selection tasks have traditionally been assigned "top-down" or "bottom-up" origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals, and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision making task.
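
    A rough sketch of the modeled mechanism: a drift-diffusion trial in which the starting point is biased toward the expected target color, which shortens correct-response times. Parameter values are arbitrary and this toy simulator is not the Ratcliff diffusion fit used in the study.

```python
import numpy as np

def simulate_ddm(drift, bias=0.5, threshold=1.0, noise=1.0, dt=0.001,
                 max_t=3.0, rng=None):
    """Simulate one drift-diffusion trial.

    bias is the relative starting point (0.5 = unbiased); values > 0.5
    start the accumulator closer to the upper ('expected target') bound.
    Returns (choice, reaction_time).
    """
    rng = rng or np.random.default_rng()
    x = (2.0 * bias - 1.0) * threshold     # map bias in (0, 1) onto (-a, +a)
    t = 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= threshold else 0), t

rng = np.random.default_rng(1)
# Bias toward the previously selected target color speeds correct responses.
trials = [simulate_ddm(drift=1.2, bias=0.65, rng=rng) for _ in range(500)]
correct = [rt for choice, rt in trials if choice == 1]
print(f"accuracy={len(correct)/len(trials):.2f}, mean RT={np.mean(correct):.3f}s")
```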

  9. Illustrative visualization of 3D city models

    Science.gov (United States)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  10. Aerial somersault performance under three visual conditions.

    Science.gov (United States)

    Hondzinski, J M; Darling, W G

    2001-07-01

    Experiments were designed to examine the visual contributions to performance of back aerial double somersaults by collegiate acrobats. Somersaults were performed on a trampoline under three visual conditions: (a) NORMAL acuity; (b) REDUCED acuity (subjects wore special contacts that blocked light reflected onto the central retina); and (c) NO VISION. Videotaped skill performances were rated by two NCAA judges and digitized for kinematic analyses. Subjects' performance scores were similar in NORMAL and REDUCED conditions and lowest in the NO VISION condition. Control of body movement, indicated by time-to-contact, was most variable in the NO VISION condition. Profiles of angular head and neck velocity revealed that when subjects could see, they slowed their heads prior to touchdown in time to process optical flow information and prepare for landing. There was not always enough time to process vision associated with object identification and prepare for touchdown. It was concluded that collegiate acrobats do not need to identify objects for their best back aerial double somersault performance.

  11. Visual and flight performance recovery after PRK or LASIK in helicopter pilots.

    Science.gov (United States)

    Van de Pol, Corina; Greig, Joanna L; Estrada, Art; Bissette, Gina M; Bower, Kraig S

    2007-06-01

    Refractive surgery, specifically photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK), is becoming more accepted in the military environment. Determination of the impact on visual performance in the more demanding aviation environment was the impetus for this study. A prospective evaluation of 20 Black Hawk pilots pre-surgically and at 1 wk, 1 mo, and 6 mo postsurgery was conducted to assess both PRK and LASIK visual and flight performance outcomes on the return of aviators to duty. Of 20 pilots, 19 returned to flight status at 1 mo after surgery; 1 PRK subject was delayed due to corneal haze and subjective visual symptoms. Improvements were seen under simulator night and night vision goggle flight after LASIK; no significant changes in flight performance were measured in the aircraft. Results indicated a significantly faster recovery of all visual performance outcomes 1 wk after LASIK vs. PRK, with no difference between procedures at 1 and 6 mo. Low contrast acuity and contrast sensitivity only weakly correlated to flight performance in the early post-operative period. Overall flight performance assessed in this study after PRK and LASIK was stable or improved from baseline, indicating a resilience of performance despite measured decrements in visual performance, especially in PRK. More visually demanding flight tasks may be impacted by subtle changes in visual performance. Contrast tests are more sensitive to the effects of refractive surgical intervention and may prove to be a better indicator of visual recovery for return to flight status.

  12. Visualizing projected Climate Changes - the CMIP5 Multi-Model Ensemble

    Science.gov (United States)

    Böttinger, Michael; Eyring, Veronika; Lauer, Axel; Meier-Fleischer, Karin

    2017-04-01

    Large ensembles add an additional dimension to climate model simulations. Internal variability of the climate system can be assessed for example by multiple climate model simulations with small variations in the initial conditions or by analyzing the spread in large ensembles made by multiple climate models under common protocols. This spread is often used as a measure of uncertainty in climate projections. In the context of the fifth phase of the WCRP's Coupled Model Intercomparison Project (CMIP5), more than 40 different coupled climate models were employed to carry out a coordinated set of experiments. Time series of the development of integral quantities such as the global mean temperature change for all models visualize the spread in the multi-model ensemble. A similar approach can be applied to 2D-visualizations of projected climate changes such as latitude-longitude maps showing the multi-model mean of the ensemble by adding a graphical representation of the uncertainty information. This has been demonstrated for example with static figures in chapter 12 of the last IPCC report (AR5) using different so-called stippling and hatching techniques. In this work, we focus on animated visualizations of multi-model ensemble climate projections carried out within CMIP5 as a way of communicating climate change results to the scientific community as well as to the public. We take a closer look at measures of robustness or uncertainty used in recent publications suitable for animated visualizations. Specifically, we use the ESMValTool [1] to process and prepare the CMIP5 multi-model data in combination with standard visualization tools such as NCL and the commercial 3D visualization software Avizo to create the animations. We compare different visualization techniques such as height fields or shading with transparency for creating animated visualization of ensemble mean changes in temperature and precipitation including corresponding robustness measures. [1] Eyring, V

  13. Lateralized visual behavior in bottlenose dolphins (Tursiops truncatus) performing audio-visual tasks: the right visual field advantage.

    Science.gov (United States)

    Delfour, F; Marten, K

    2006-01-10

    Analyzing cerebral asymmetries in various species helps in understanding brain organization. The left and right sides of the brain (lateralization) are involved in different cognitive and sensory functions. This study focuses on dolphin visual lateralization as expressed by spontaneous eye preference when performing a complex cognitive task; we examine lateralization when processing different visual stimuli displayed on an underwater touch-screen (two-dimensional figures, three-dimensional figures and dolphin/human video sequences). Three female bottlenose dolphins (Tursiops truncatus) were submitted to a 2-, 3-, or 4-choice visual/auditory discrimination problem, without any food reward: the subjects had to correctly match visual and acoustic stimuli together. In order to visualize and to touch the underwater target, the dolphins had to come close to the touch-screen and to position themselves using monocular vision (left or right eye) and/or binocular naso-ventral vision. The results showed an ability to associate simple visual forms and auditory information using an underwater touch-screen. Moreover, the subjects showed a spontaneous tendency to use monocular vision. Contrary to previous findings, our results did not clearly demonstrate right eye preference in spontaneous choice. However, the individuals' scores of correct answers were correlated with right eye vision, demonstrating the advantage of this visual field in visual information processing and suggesting a left hemispheric dominance. We also demonstrated that the nature of the presented visual stimulus does not seem to have any influence on the animals' monocular vision choice.

  14. Quantifying and Visualizing Uncertainties in Molecular Models

    OpenAIRE

    Rasheed, Muhibur; Clement, Nathan; Bhowmick, Abhishek; Bajaj, Chandrajit

    2015-01-01

    Computational molecular modeling and visualization has seen significant progress in recent years with several molecular modeling and visualization software systems in use today. Nevertheless, the molecular biology community lacks techniques and tools for the rigorous analysis, quantification and visualization of the associated errors in molecular structure and its associated properties. This paper attempts to fill this vacuum with the introduction of a systematic statistical framework whe...

  15. Mathematical modeling and visualization of functional neuroimages

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup

    This dissertation presents research results regarding mathematical modeling in the context of the analysis of functional neuroimages. Specifically, the research focuses on pattern-based analysis methods that recently have become popular within the neuroimaging community. Such methods attempt...... sets are characterized by relatively few data observations in a high dimensional space. The process of building models in such data sets often requires strong regularization. Often, the degree of model regularization is chosen in order to maximize prediction accuracy. We focus on the relative influence...... be carefully selected, so that the model and its visualization enhance our ability to interpret the brain. The second part concerns interpretation of nonlinear models and procedures for extraction of ‘brain maps’ from nonlinear kernel models. We assess the performance of the sensitivity map as means...

  16. Mathematical modeling and visualization of functional neuroimages

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup

    This dissertation presents research results regarding mathematical modeling in the context of the analysis of functional neuroimages. Specifically, the research focuses on pattern-based analysis methods that recently have become popular analysis tools within the neuroimaging community. Such methods...... neuroimaging data sets are characterized by relatively few data observations in a high dimensional space. The process of building models in such data sets often requires strong regularization. Often, the degree of model regularization is chosen in order to maximize prediction accuracy. We focus on the relative...... be carefully selected, so that the model and its visualization enhance our ability to interpret brain function. The second part concerns interpretation of nonlinear models and procedures for extraction of ‘brain maps’ from nonlinear kernel models. We assess the performance of the sensitivity map as means...

  17. Statistical modeling for visualization evaluation through data fusion.

    Science.gov (United States)

    Chen, Xiaoyu; Jin, Ran

    2017-11-01

    There is high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference is lacking, which leads to an inefficient visualization and user interface design process. Recently, the advancement of interactive and sensing technologies makes the electroencephalogram (EEG) signals, eye movements as well as visualization logs available in user-centered evaluation. This paper proposes a data fusion model and the application procedure for quantitative and online visualization evaluation. Fifteen participants joined the study, which was based on three different visualization designs. The results provide a regularized regression model which can accurately predict the user's evaluation of task complexity, and indicate the significance of all three types of sensing data sets for visualization evaluation. This model can be widely applied to data visualization evaluation, and to the evaluation of other user-centered designs and data analysis in human factors and ergonomics. Copyright © 2016 Elsevier Ltd. All rights reserved.
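
    A minimal sketch of the fusion-plus-regularized-regression idea: EEG, eye-movement, and interaction-log features (all synthetic here) are concatenated and a ridge regression predicts rated task complexity, evaluated by cross-validation. The feature counts and names are assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials = 90                              # hypothetical number of evaluation trials
eeg = rng.normal(size=(n_trials, 8))       # EEG band-power features
eye = rng.normal(size=(n_trials, 4))       # fixation/saccade statistics
logs = rng.normal(size=(n_trials, 3))      # interaction-log features
X = np.hstack([eeg, eye, logs])            # simple feature-level fusion
y = X @ rng.normal(size=X.shape[1]) + rng.normal(scale=0.5, size=n_trials)

model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```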

  18. Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.

    Science.gov (United States)

    Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu

    2015-11-01

    Recently, many computational models have been proposed to simulate the visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of primate visual cortex, which could achieve position- and scale-tolerant recognition. In our previous work, we have introduced memory and association into the HMAX model to simulate the visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We will mainly focus on the new formation of memory and association in visual processing under different circumstances as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of the objects since people use different features for object recognition. Moreover, to achieve a fast and robust recognition in the retrieval and association process, different types of features are stored in separated clusters and the feature binding of the same object is stimulated in a loop discharge manner; and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects since distinct neural circuits in the human brain are used for identification of various types of objects. Furthermore, active cognition adjustment of occlusion and orientation is implemented in the model to mimic the top-down effect in the human cognition process. Finally, our model is evaluated on two face databases, CAS-PEAL-R1 and AR. The results demonstrate that our model exhibits its efficiency on the visual recognition process with a much lower memory storage requirement and better performance compared with the traditional purely computational

  19. Visual Saliency Models for Text Detection in Real World.

    Directory of Open Access Journals (Sweden)

    Renwu Gao

    This paper evaluates the degree of saliency of texts in natural scenes using visual saliency models. A large scale scene image database with pixel level ground truth is created for this purpose. Using this scene image database and five state-of-the-art models, visual saliency maps that represent the degree of saliency of the objects are calculated. The receiver operating characteristic curve is employed in order to evaluate the saliency of scene texts, which is calculated by visual saliency models. A visualization of the distribution of scene texts and non-texts in the space constructed by three kinds of saliency maps, which are calculated using Itti's visual saliency model with intensity, color and orientation features, is given. This visualization of the distribution indicates that text characters are more salient than their non-text neighbors, and can be captured from the background. Therefore, scene texts can be extracted from the scene images. With this in mind, a new visual saliency architecture, named the hierarchical visual saliency model, is proposed. The hierarchical visual saliency model is based on Itti's model and consists of two stages. In the first stage, Itti's model is used to calculate the saliency map, and Otsu's global thresholding algorithm is applied to extract the salient region that we are interested in. In the second stage, Itti's model is applied to the salient region to calculate the final saliency map. An experimental evaluation demonstrates that the proposed model outperforms Itti's model in terms of captured scene texts.
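
    A simplified sketch of the two-stage architecture, substituting a basic center-surround intensity contrast for Itti's full intensity/color/orientation model: stage one thresholds the global saliency map with Otsu's method, and stage two recomputes saliency inside the salient region. The helper names and parameters are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu

def simple_saliency(gray):
    """Center-surround intensity contrast as a stand-in for Itti's model."""
    center = gaussian_filter(gray, sigma=2)
    surround = gaussian_filter(gray, sigma=10)
    sal = np.abs(center - surround)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)

def hierarchical_saliency(gray):
    # Stage 1: global saliency map, thresholded with Otsu to find the salient region.
    stage1 = simple_saliency(gray)
    mask = stage1 > threshold_otsu(stage1)
    # Stage 2: recompute saliency restricted to the salient region.
    stage2 = simple_saliency(gray * mask)
    return stage2 * mask

gray = np.random.default_rng(0).random((120, 160))   # placeholder grayscale image
final_map = hierarchical_saliency(gray)
print(final_map.shape, final_map.max())
```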

  20. Interactive 4D Visualization of Sediment Transport Models

    Science.gov (United States)

    Butkiewicz, T.; Englert, C. M.

    2013-12-01

    Coastal sediment transport models simulate the effects that waves, currents, and tides have on near-shore bathymetry and features such as beaches and barrier islands. Understanding these dynamic processes is integral to the study of coastline stability, beach erosion, and environmental contamination. Furthermore, analyzing the results of these simulations is a critical task in the design, placement, and engineering of coastal structures such as seawalls, jetties, support pilings for wind turbines, etc. Despite the importance of these models, there is a lack of available visualization software that allows users to explore and perform analysis on these datasets in an intuitive and effective manner. Existing visualization interfaces for these datasets often present only one variable at a time, using two-dimensional plan or cross-sectional views. These visual restrictions limit the ability to observe the contents in the proper overall context, both in spatial and multi-dimensional terms. To improve upon these limitations, we use 3D rendering and particle system based illustration techniques to show water column/flow data across all depths simultaneously. We can also encode multiple variables across different perceptual channels (color, texture, motion, etc.) to enrich surfaces with multi-dimensional information. Interactive tools are provided, which can be used to explore the dataset and find regions-of-interest for further investigation. Our visualization package provides an intuitive 4D (3D, time-varying) visualization of sediment transport model output. In addition, we are also integrating real world observations with the simulated data to support analysis of the impact from major sediment transport events. In particular, we have been focusing on the effects of Superstorm Sandy on the Redbird Artificial Reef Site, offshore of Delaware Bay. Based on our pre- and post-storm high-resolution sonar surveys, there has been significant scour and bedform migration around the

  1. Statistical characteristics of aberrations of human eyes after small incision lenticule extraction surgery and analysis of visual performance with individual eye model.

    Science.gov (United States)

    Lou, Qiqi; Wang, Yan; Wang, Zhaoqi; Liu, Yongji; Zhang, Lin; Fang, Hui

    2015-09-01

    Preoperative and postoperative wavefront aberrations of 73 myopic eyes that underwent small incision lenticule extraction surgery are analyzed in this paper. Twenty-eight postoperative individual eye models are constructed to investigate the visual acuity (VA) of human eyes. Results show that in the photopic condition, residual defocus, residual astigmatism, and higher-order aberrations are relatively small. 100% of eyes reach a VA of 0.8 or better, and 89.3% of eyes reach a VA of 1.0 or better. In the scotopic condition, the residual defocus and the higher-order aberrations are, respectively, 1.9 and 8.5 times those in the photopic condition, and the defocus becomes the main factor attenuating visual performance.

  2. Introduction of a methodology for visualization and graphical interpretation of Bayesian classification models.

    Science.gov (United States)

    Balfer, Jenny; Bajorath, Jürgen

    2014-09-22

    Supervised machine learning models are widely used in chemoinformatics, especially for the prediction of new active compounds or targets of known actives. Bayesian classification methods are among the most popular machine learning approaches for the prediction of activity from chemical structure. Much work has focused on predicting structure-activity relationships (SARs) on the basis of experimental training data. By contrast, only a few efforts have thus far been made to rationalize the performance of Bayesian or other supervised machine learning models and better understand why they might succeed or fail. In this study, we introduce an intuitive approach for the visualization and graphical interpretation of naïve Bayesian classification models. Parameters derived during supervised learning are visualized and interactively analyzed to gain insights into model performance and identify features that determine predictions. The methodology is introduced in detail and applied to assess Bayesian modeling efforts and predictions on compound data sets of varying structural complexity. Different classification models and features determining their performance are characterized in detail. A prototypic implementation of the approach is provided.
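
    As a generic illustration of the kind of per-feature parameters such a visualization exposes (not the authors' implementation), the sketch below trains a Bernoulli naive Bayes classifier on synthetic binary fingerprints and ranks features by their log-odds contribution toward the "active" class; the bit indices and labels are invented.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
n_compounds, n_bits = 200, 64
X = rng.integers(0, 2, size=(n_compounds, n_bits))        # binary fingerprints
# Hypothetical activity labels loosely tied to a few "pharmacophore" bits.
y = (X[:, [3, 17, 42]].sum(axis=1) + rng.normal(0, 0.5, n_compounds) > 1.5).astype(int)

clf = BernoulliNB(alpha=1.0).fit(X, y)

# Per-feature log-odds of a set bit under the active vs inactive class;
# large positive values mark bits that push predictions toward "active".
log_odds = clf.feature_log_prob_[1] - clf.feature_log_prob_[0]
top = np.argsort(log_odds)[::-1][:5]
print("most activity-determining bits:", top, log_odds[top].round(2))
```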

  3. Perceptual learning improves visual performance in juvenile amblyopia.

    Science.gov (United States)

    Li, Roger W; Young, Karen G; Hoenig, Pia; Levi, Dennis M

    2005-09-01

    To determine whether practicing a position-discrimination task improves visual performance in children with amblyopia and to determine the mechanism(s) of improvement. Five children (age range, 7-10 years) with amblyopia practiced a positional acuity task in which they had to judge which of three pairs of lines was misaligned. Positional noise was produced by distributing the individual patches of each line segment according to a Gaussian probability function. Observers were trained at three noise levels (including 0), with each observer performing between 3000 and 4000 responses in 7 to 10 sessions. Trial-by-trial feedback was provided. Four of the five observers showed significant improvement in positional acuity. In those four observers, on average, positional acuity with no noise improved by approximately 32% and with high noise by approximately 26%. A position-averaging model was used to parse the improvement into an increase in efficiency or a decrease in equivalent input noise. Two observers showed increased efficiency (51% and 117% improvements) with no significant change in equivalent input noise across sessions. The other two observers showed both a decrease in equivalent input noise (18% and 29%) and an increase in efficiency (17% and 71%). All five observers showed substantial improvement in Snellen acuity (approximately 26%) after practice. Perceptual learning can improve visual performance in amblyopic children. The improvement can be parsed into two important factors: decreased equivalent input noise and increased efficiency. Perceptual learning techniques may add an effective new method to the armamentarium of amblyopia treatments.

  4. High performance visual display for HENP detectors

    CERN Document Server

    McGuigan, M; Spiletic, J; Fine, V; Nevski, P

    2001-01-01

    A high-end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on a BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicting with the many graphics development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of the detector and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactiv...

  5. High Performance Molecular Visualization: In-Situ and Parallel Rendering with EGL

    Science.gov (United States)

    Stone, John E.; Messmer, Peter; Sisneros, Robert; Schulten, Klaus

    2016-01-01

    Large scale molecular dynamics simulations produce terabytes of data that are impractical to transfer to remote facilities. It is therefore necessary to perform visualization tasks in-situ as the data are generated, or by running interactive remote visualization sessions and batch analyses co-located with direct access to high performance storage systems. A significant challenge for deploying visualization software within clouds, clusters, and supercomputers involves the operating system software required to initialize and manage graphics acceleration hardware. Recently, it has become possible for applications to use the Embedded-system Graphics Library (EGL) to eliminate the requirement for windowing system software on compute nodes, thereby eliminating a significant obstacle to broader use of high performance visualization applications. We outline the potential benefits of this approach in the context of visualization applications used in the cloud, on commodity clusters, and supercomputers. We discuss the implementation of EGL support in VMD, a widely used molecular visualization application, and we outline benefits of the approach for molecular visualization tasks on petascale computers, clouds, and remote visualization servers. We then provide a brief evaluation of the use of EGL in VMD, with tests using developmental graphics drivers on conventional workstations and on Amazon EC2 G2 GPU-accelerated cloud instance types. We expect that the techniques described here will be of broad benefit to many other visualization applications. PMID:27747137

  6. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM) are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus...... on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We...
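
    One common operationalization of the sensitivity map is the mean squared derivative of the kernel decision function with respect to each input feature. The sketch below approximates this numerically for an RBF-kernel SVM on synthetic data; it is an illustration of the concept, not the authors' fMRI pipeline.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # synthetic "voxel" patterns
y = (X[:, 2] * X[:, 5] > 0).astype(int)        # nonlinear labeling rule
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

def sensitivity_map(clf, X, eps=1e-3):
    """Mean squared numerical gradient of the decision function per feature."""
    sens = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += eps
        Xm[:, j] -= eps
        grad = (clf.decision_function(Xp) - clf.decision_function(Xm)) / (2 * eps)
        sens[j] = np.mean(grad ** 2)
    return sens

print(np.argsort(sensitivity_map(clf, X))[::-1][:3])   # most influential features
```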

  7. UV-blocking spectacle lens protects against UV-induced decline of visual performance.

    Science.gov (United States)

    Liou, Jyh-Cheng; Teng, Mei-Ching; Tsai, Yun-Shan; Lin, En-Chieh; Chen, Bo-Yie

    2015-01-01

    Excessive exposure to sunlight may be a risk factor for ocular diseases and reduced visual performance. This study was designed to examine the ability of an ultraviolet (UV)-blocking spectacle lens to prevent visual acuity decline and ocular surface disorders in a mouse model of UVB-induced photokeratitis. Mice were divided into 4 groups (10 mice per group): (1) a blank control group (no exposure to UV radiation), (2) a UVB/no lens group (mice exposed to UVB rays, but without lens protection), (3) a UVB/UV400 group (mice exposed to UVB rays and protected using the CR-39™ spectacle lens [UV400 coating]), and (4) a UVB/photochromic group (mice exposed to UVB rays and protected using the CR-39™ spectacle lens [photochromic coating]). We investigated UVB-induced changes in visual acuity and in corneal smoothness, opacity, and lissamine green staining. We also evaluated the correlation between visual acuity decline and changes to the corneal surface parameters. Tissue sections were prepared and stained immunohistochemically to evaluate the structural integrity of the cornea and conjunctiva. In blank controls, the cornea remained undamaged, whereas in UVB-exposed mice, the corneal surface was disrupted; this disruption significantly correlated with a concomitant decline in visual acuity. Both the UVB/UV400 and UVB/photochromic groups had sharper visual acuity and a healthier corneal surface than the UVB/no lens group. Eyes in both protected groups also showed better corneal and conjunctival structural integrity than unprotected eyes. Furthermore, there were fewer apoptotic cells and less polymorphonuclear leukocyte infiltration in corneas protected by the spectacle lenses. The model established herein reliably determines the protective effect of UV-blocking ophthalmic biomaterials, because the in vivo protection against UV-induced ocular damage and visual acuity decline was easily defined.

  8. External and Internal Representations in the Acquisition and Use of Knowledge: Visualization Effects on Mental Model Construction

    Science.gov (United States)

    Schnotz, Wolfgang; Kurschner, Christian

    2008-01-01

    This article investigates whether different formats of visualizing information result in different mental models constructed in learning from pictures, whether the different mental models lead to different patterns of performance in subsequently presented tasks, and how these visualization effects can be modified by further external…

  9. A physiologically based nonhomogeneous Poisson counter model of visual identification

    DEFF Research Database (Denmark)

    Christensen, Jeppe H; Markussen, Bo; Bundesen, Claus

    2018-01-01

    A physiologically based nonhomogeneous Poisson counter model of visual identification is presented. The model was developed in the framework of a Theory of Visual Attention (Bundesen, 1990; Kyllingsbæk, Markussen, & Bundesen, 2012) and is meant for modeling visual identification of objects that are ... that mimicked the dynamics of receptive field selectivity as found in neurophysiological studies. Furthermore, the initial sensory response yielded theoretical hazard rate functions that closely resembled empirically estimated ones. Finally, supplied with a Naka-Rushton type contrast gain control, the model...
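
    A toy sketch of the two ingredients named above: a Naka-Rushton contrast gain that scales a time-varying processing rate, and a race between Poisson counters in which the first event determines the identification. All parameter values are arbitrary, and this simulation is not the fitted model from the paper.

```python
import numpy as np

def naka_rushton(contrast, r_max=100.0, c50=0.2, n=2.0):
    """Naka-Rushton contrast gain: processing rate as a function of contrast."""
    return r_max * contrast**n / (contrast**n + c50**n)

def poisson_race(rates_fn, categories, dt=0.001, max_t=1.0, rng=None):
    """Race between Poisson counters with time-varying rates.

    rates_fn(t) returns one rate (events/s) per category at time t;
    the first counter to register an event determines the identification.
    """
    rng = rng or np.random.default_rng()
    t = 0.0
    while t < max_t:
        events = rng.random(len(categories)) < np.asarray(rates_fn(t)) * dt
        if events.any():
            return categories[int(np.argmax(events))], t
        t += dt
    return None, max_t

# Hypothetical ramping sensory response scaled by contrast gain.
contrast = 0.35
gain = naka_rushton(contrast)
ramp = lambda t: gain * (1 - np.exp(-t / 0.05)) * np.array([0.7, 0.2, 0.1])

choice, rt = poisson_race(ramp, categories=["A", "B", "C"],
                          rng=np.random.default_rng(2))
print(choice, round(rt, 3))
```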

  10. Audio-Visual Speech Recognition Using MPEG-4 Compliant Visual Features

    Directory of Open Access Journals (Sweden)

    Petar S. Aleksic

    2002-11-01

    We describe an audio-visual automatic continuous speech recognition system, which significantly improves speech recognition performance over a wide range of acoustic noise levels, as well as under clean audio conditions. The system utilizes facial animation parameters (FAPs) supported by the MPEG-4 standard for the visual representation of speech. We also describe a robust and automatic algorithm we have developed to extract FAPs from visual data, which does not require hand labeling or extensive training procedures. Principal component analysis (PCA) was performed on the FAPs in order to decrease the dimensionality of the visual feature vectors, and the derived projection weights were used as visual features in the audio-visual automatic speech recognition (ASR) experiments. Both single-stream and multistream hidden Markov models (HMMs) were used to model the ASR system, integrate audio and visual information, and perform relatively large vocabulary (approximately 1000 words) speech recognition experiments. The experiments were performed using clean audio data and audio data corrupted by stationary white Gaussian noise at various SNRs. The proposed system reduces the word error rate (WER) by 20% to 23% relative to audio-only speech recognition WERs at various SNRs (0–30 dB) with additive white Gaussian noise, and by 19% relative to the audio-only speech recognition WER under clean audio conditions.
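
    A small sketch of the dimensionality-reduction step: PCA applied to hypothetical FAP trajectories, with the projection weights retained as the per-frame visual features that would feed the HMM recognizer. The frame count and FAP dimensionality are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical FAP trajectories: 500 video frames x 10 lip/jaw parameters.
faps = rng.normal(size=(500, 10))

# Keep enough components to explain ~95% of the variance; the projection
# weights become the per-frame visual feature vector for the recognizer.
pca = PCA(n_components=0.95).fit(faps)
visual_features = pca.transform(faps)
print(visual_features.shape, pca.explained_variance_ratio_.round(2))
```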

  11. Shape representation modulating the effect of motion on visual search performance.

    Science.gov (United States)

    Yang, Lindong; Yu, Ruifeng; Lin, Xuelian; Liu, Na

    2017-11-02

    The effect of motion on visual search has been extensively investigated, but the effect of uniform linear motion of the display on search performance in tasks with different target-distractor shape representations has rarely been explored. The present study conducted three visual search experiments. In Experiments 1 and 2, participants finished two search tasks that differed in target-distractor shape representations under static and dynamic conditions. Two tasks with clear and blurred stimuli were performed in Experiment 3. The experiments revealed that target-distractor shape representation modulated the effect of motion on visual search performance. For tasks with low target-distractor shape similarity, motion negatively affected search performance, which was consistent with previous studies. However, for tasks with high target-distractor shape similarity, if the target differed from distractors in that a gap with a linear contour was added to the target, and the corresponding part of the distractors had a curved contour, motion positively influenced search performance. Motion blur contributed to the performance enhancement under dynamic conditions. The findings are useful for understanding the influence of target-distractor shape representation on dynamic visual search performance when the display has uniform linear motion.

  12. Towards The Deep Model : Understanding Visual Recognition Through Computational Models

    OpenAIRE

    Wang, Panqu

    2017-01-01

    Understanding how visual recognition is achieved in the human brain is one of the most fundamental questions in vision research. In this thesis I seek to tackle this problem from a neurocomputational modeling perspective. More specifically, I build machine learning-based models to simulate and explain cognitive phenomena related to human visual recognition, and I improve computational models using brain-inspired principles to excel at computer vision tasks. I first describe how a neurocomputat...

  13. A biologically inspired neural model for visual and proprioceptive integration including sensory training.

    Science.gov (United States)

    Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi

    2013-12-01

    Humans perceive the surrounding world by integrating information through different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian and causal Bayesian inferences for a single cause (source) and two causes (for two senses such as the visual and auditory systems), respectively. In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding, which is able to mimic multisensory integration in neural centers of the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process of visual and proprioceptive information in humans. The training process in multisensory integration has received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioception training and eight subjects with visual training. Results of the experiments show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) the means of visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrement is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors in the main test stage, because in the main test the subject has to focus on two senses. The results of the experiments in this paper are in agreement with the results of the neural model
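
    For reference, the single-cause Bayesian integration that the recurrent network reproduces reduces to reliability-weighted averaging of the two cues; the sketch below uses hypothetical hand-position estimates and variances. In this view, sensory training effectively lowers the variance of the trained modality and thereby shifts the weights toward it.

```python
import numpy as np

def integrate_cues(x_vis, var_vis, x_prop, var_prop):
    """Reliability-weighted (single-cause Bayesian) fusion of two position cues."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_prop
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return x_hat, var_hat

# Hypothetical hand-position estimates (cm) along one axis.
x_hat, var_hat = integrate_cues(x_vis=10.2, var_vis=0.5, x_prop=11.0, var_prop=2.0)
print(round(x_hat, 2), round(var_hat, 2))   # fused estimate leans toward the visual cue
```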

  14. Alpha-Band Rhythms in Visual Task Performance: Phase-Locking by Rhythmic Sensory Stimulation

    Science.gov (United States)

    de Graaf, Tom A.; Gross, Joachim; Paterson, Gavin; Rusch, Tessa; Sack, Alexander T.; Thut, Gregor

    2013-01-01

    Oscillations are an important aspect of neuronal activity. Interestingly, oscillatory patterns are also observed in behaviour, such as in visual performance measures after the presentation of a brief sensory event in the visual or another modality. These oscillations in visual performance cycle at the typical frequencies of brain rhythms, suggesting that perception may be closely linked to brain oscillations. We here investigated this link for a prominent rhythm of the visual system (the alpha-rhythm, 8–12 Hz) by applying rhythmic visual stimulation at alpha-frequency (10.6 Hz), known to lead to a resonance response in visual areas, and testing its effects on subsequent visual target discrimination. Our data show that rhythmic visual stimulation at 10.6 Hz: 1) has specific behavioral consequences, relative to stimulation at control frequencies (3.9 Hz, 7.1 Hz, 14.2 Hz), and 2) leads to alpha-band oscillations in visual performance measures, that 3) correlate in precise frequency across individuals with resting alpha-rhythms recorded over parieto-occipital areas. The most parsimonious explanation for these three findings is entrainment (phase-locking) of ongoing perceptually relevant alpha-band brain oscillations by rhythmic sensory events. These findings are in line with occipital alpha-oscillations underlying periodicity in visual performance, and suggest that rhythmic stimulation at frequencies of intrinsic brain-rhythms can be used to reveal influences of these rhythms on task performance to study their functional roles. PMID:23555873

  15. Effect of prematurity and low birth weight in visual abilities and school performance.

    Science.gov (United States)

    Perez-Roche, T; Altemir, I; Giménez, G; Prieto, E; González, I; Peña-Segura, J L; Castillo, O; Pueyo, V

    2016-12-01

    Prematurity and low birth weight are known risk factors for cognitive and developmental impairments, and school failure. Visual perceptual and visual motor skills seem to be among the most affected cognitive domains in these children. To assess the influence of prematurity and low birth weight on visual cognitive skills and school performance. We performed a prospective cohort study, which included 80 boys and girls in an age range from 5 to 13. Subjects were grouped by gestational age at birth (preterm vs term) and by birth weight (small for gestational age (SGA) vs appropriate for gestational age (AGA)), and visual cognitive skills and school performance were assessed in the children. Figure-ground skill and visual motor integration were significantly decreased in the preterm birth group compared with term control subjects (figure-ground: 45.7 vs 66.5, p=0.012; visual motor integration, TVAS: 9.9 vs 11.8, p=0.018), while outcomes of visual memory (29.0 vs 47.7, p=0.012), form constancy (33.3 vs 52.8, p=0.019), figure-ground (37.4 vs 65.6, p=0.001), and visual closure (43.7 vs 62.6, p=0.016) testing were lower in the SGA (vs AGA) group. Visual cognitive difficulties corresponded with worse performance in mathematics (r=0.414, p=0.004) and reading (r=0.343, p=0.018). Specific patterns of visual perceptual and visual motor deficits are displayed by children born preterm or SGA, which hinder mathematics and reading performance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Developing a functioning visualization and analysis system for performance assessment

    International Nuclear Information System (INIS)

    Jones, M.L.

    1992-01-01

    Various commercial software packages and customized programs provide the ability to analyze and visualize the geology of Yucca Mountain. Starting with sparse, irregularly spaced data, a series of gridded models has been developed representing the thermal/mechanical units within the mountain. Using computer aided design (CAD) software and scientific visualization software, the units can be manipulated, analyzed, and graphically displayed. The outputs are typically gridded terrain models, along with files of three-dimensional coordinates, distances, and other dimensional values. Contour maps, profiles, and shaded surfaces are the output for visualization.

  17. Visualizing weighted networks: a performance comparison of adjacency matrices versus node-link diagrams

    Science.gov (United States)

    McIntire, John P.; Osesina, O. Isaac; Bartley, Cecilia; Tudoreanu, M. Eduard; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    Ensuring the proper and effective ways to visualize network data is important for many areas of academia, applied sciences, the military, and the public. Fields such as social network analysis, genetics, biochemistry, intelligence, cybersecurity, neural network modeling, transit systems, communications, etc. often deal with large, complex network datasets that can be difficult to interact with, study, and use. There have been surprisingly few human factors performance studies on the relative effectiveness of different graph drawings or network diagram techniques to convey information to a viewer. This is particularly true for weighted networks which include the strength of connections between nodes, not just information about which nodes are linked to other nodes. We describe a human factors study in which participants performed four separate network analysis tasks (finding a direct link between given nodes, finding an interconnected node between given nodes, estimating link strengths, and estimating the most densely interconnected nodes) on two different network visualizations: an adjacency matrix with a heat-map versus a node-link diagram. The results should help shed light on effective methods of visualizing network data for some representative analysis tasks, with the ultimate goal of improving usability and performance for viewers of network data displays.
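
    A minimal sketch of the two display types compared in the study, built with networkx and matplotlib on a small invented weighted graph: a heat-mapped adjacency matrix and a node-link diagram with edge width encoding link strength.

```python
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np

# Small weighted network (hypothetical link strengths).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 0.9), ("A", "C", 0.3), ("B", "C", 0.6),
    ("C", "D", 0.8), ("D", "E", 0.2), ("B", "E", 0.5),
])

fig, (ax_mat, ax_net) = plt.subplots(1, 2, figsize=(9, 4))

# Adjacency matrix with a heat map encoding link strength.
nodes = sorted(G.nodes)
A = nx.to_numpy_array(G, nodelist=nodes, weight="weight")
im = ax_mat.imshow(A, cmap="viridis")
ax_mat.set_xticks(range(len(nodes)))
ax_mat.set_xticklabels(nodes)
ax_mat.set_yticks(range(len(nodes)))
ax_mat.set_yticklabels(nodes)
fig.colorbar(im, ax=ax_mat, label="link strength")

# Node-link diagram with edge width encoding link strength.
pos = nx.spring_layout(G, seed=3)
widths = [5 * G[u][v]["weight"] for u, v in G.edges]
nx.draw_networkx(G, pos, ax=ax_net, width=widths, node_color="lightgray")

plt.tight_layout()
plt.show()
```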

  18. Non-conscious visual cues related to affect and action alter perception of effort and endurance performance

    Directory of Open Access Journals (Sweden)

    Anthony William Blanchfield

    2014-12-01

    The psychobiological model of endurance performance proposes that endurance performance is determined by a decision-making process based on perception of effort and potential motivation. Recent research has reported that effort-based decision-making during cognitive tasks can be altered by non-conscious visual cues relating to affect and action. The effect of these non-conscious visual cues on effort and performance during physical tasks is, however, unknown. We report two experiments investigating the effect of subliminal priming with visual cues related to affect and action on perception of effort and endurance performance. In Experiment 1, thirteen individuals were subliminally primed with happy or sad faces as they cycled to exhaustion in a counterbalanced and randomized crossover design. A paired t-test (happy vs. sad faces) revealed that individuals cycled for significantly longer (178 s, p = .04) when subliminally primed with happy faces. A 2 x 5 (condition x iso-time) ANOVA also revealed a significant main effect of condition on rating of perceived exertion (RPE) during the time to exhaustion (TTE) test, with lower RPE when subjects were subliminally primed with happy faces (p = .04). In Experiment 2, a single-subject randomization tests design found that subliminal priming with action words facilitated a significantly longer (399 s) TTE in comparison to inaction words (p = .04). Like Experiment 1, this greater TTE was accompanied by a significantly lower RPE (p = .03). These experiments are the first to show that subliminal visual cues relating to affect and action can alter perception of effort and endurance performance. Non-conscious visual cues may therefore influence the effort-based decision-making process that is proposed to determine endurance performance. Accordingly, the findings raise notable implications for individuals who may encounter such visual cues during endurance competitions, training, or health-related exercise.

  19. Towards the quantitative evaluation of visual attention models.

    Science.gov (United States)

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt to organize and classify models, but they are not sufficient for quantifying which classes of models are most capable of explaining the available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.
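
    For the saliency-prediction variant of such benchmarks, agreement between a model's output and human fixation data is typically scored with measures such as linear correlation and KL divergence. The sketch below is a generic illustration of that scoring step (the maps are random stand-ins, not data from any of the benchmarks discussed), assuming numpy is available.

        # Score a model saliency map against a human fixation density map.
        import numpy as np

        def normalize(p, eps=1e-12):
            """Turn a non-negative map into a probability distribution."""
            p = p.astype(float) + eps
            return p / p.sum()

        def cc(pred, gt):
            """Pearson (linear) correlation between two saliency maps."""
            return np.corrcoef(pred.ravel(), gt.ravel())[0, 1]

        def kl_divergence(pred, gt):
            """KL(gt || pred): penalizes predictions that miss fixated regions."""
            p, q = normalize(gt), normalize(pred)
            return float(np.sum(p * np.log(p / q)))

        rng = np.random.default_rng(1)
        model_map = rng.random((48, 64))     # hypothetical model saliency map
        fixation_map = rng.random((48, 64))  # hypothetical human fixation density
        print(f"CC = {cc(model_map, fixation_map):.3f}, "
              f"KL = {kl_divergence(model_map, fixation_map):.3f}")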

  20. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    Science.gov (United States)

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  1. Functionality and Performance Visualization of the Distributed High Quality Volume Renderer (HVR)

    KAUST Repository

    Shaheen, Sara

    2012-07-01

    Volume rendering systems are designed to enable scientists and a variety of experts to interactively explore volume data through 3D views of the volume. However, volume rendering is a computationally intensive task, and parallel distributed volume rendering systems and multi-threading architectures have been suggested as natural solutions for achieving acceptable rendering performance on very large volume datasets, such as electron microscopy (EM) data. This in turn adds another level of complexity when developing and working with volume rendering systems. Given that distributed parallel volume rendering systems are among the most complex systems to develop, trace, and debug, traditional debugging tools do not provide enough support, and there is great demand for tools that facilitate working with such systems. This can be achieved by utilizing the power of computer graphics to design visual representations that reflect how the system works and that visualize its current performance state. The work presented falls within the field of software visualization, in which visualization is used to aid the understanding of software. This thesis presents a number of visual representations that reflect functionality and performance aspects of the distributed HVR, a high-quality volume renderer that uses various techniques to visualize large volumes interactively. The work visualizes different stages of the parallel volume rendering pipeline of HVR, along with means of performance analysis through a number of flexible and dynamic visualizations that reflect the current state of the system and can be manipulated at runtime. These visualizations are aimed at facilitating debugging, understanding, and analyzing the distributed HVR.

  2. Common and Innovative Visuals: A sparsity modeling framework for video.

    Science.gov (United States)

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
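
    The paper's estimator is based on compressed sensing; purely as a schematic stand-in for the idea of a common/innovative split, the sketch below separates a per-pixel temporal median "common" frame from sparse "innovative" residuals via soft thresholding. The threshold, frame sizes, and synthetic data are assumptions for illustration, not the authors' algorithm.

        import numpy as np

        def common_innovative(frames, thresh=0.1):
            """Schematic common/innovative split for a frame stack (T, H, W) in [0, 1].

            The common frame captures content shared across the segment; the
            innovative frames keep only large (sparse) deviations from it."""
            common = np.median(frames, axis=0)               # shared visual content
            residual = frames - common                       # per-frame deviations
            innovative = np.sign(residual) * np.maximum(np.abs(residual) - thresh, 0.0)
            return common, innovative                        # soft-thresholded, hence sparse

        rng = np.random.default_rng(0)
        background = rng.random((32, 32))
        frames = np.stack([background + 0.05 * rng.standard_normal((32, 32)) for _ in range(8)])
        frames[4, 10:16, 10:16] += 0.5                       # a "moving object" in one frame
        common, innovative = common_innovative(frames)
        print(common.shape, np.count_nonzero(innovative) / innovative.size)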

  3. Introducing memory and association mechanism into a biologically inspired visual model.

    Science.gov (United States)

    Qiao, Hong; Li, Yinlin; Tang, Tang; Wang, Peng

    2014-09-01

    A famous biologically inspired hierarchical model (the HMAX model), which was proposed recently and corresponds to areas V1 to V4 of the ventral pathway in primate visual cortex, has been successfully applied to multiple visual recognition tasks. The model is able to achieve position- and scale-tolerant recognition, which is a central problem in pattern recognition. In this paper, based on further biological experimental evidence, we introduce a memory and association mechanism into the HMAX model. The main contributions of the work are: 1) mimicking the active memory and association mechanism and adding top-down adjustment to the HMAX model, which is the first attempt to add active adjustment to this well-known model; and 2) from the perspective of information, algorithms based on the new model can reduce computational storage and still achieve good recognition performance. The new model is also applied to object recognition. Preliminary experimental results show that our method is efficient and has a much lower memory requirement.
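
    As context for the base model being extended, the sketch below illustrates only the generic feed-forward S1/C1 stage of an HMAX-style hierarchy (Gabor filtering followed by local max pooling); it does not implement the authors' memory, association, or top-down components, and the filter parameters and input are assumptions for illustration.

        import numpy as np
        from scipy.ndimage import maximum_filter
        from scipy.signal import convolve2d

        def gabor_kernel(size=11, wavelength=5.0, theta=0.0, sigma=3.0, gamma=0.5):
            """Build one Gabor filter (an S1-style unit) at orientation theta."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            yr = -x * np.sin(theta) + y * np.cos(theta)
            g = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
            return g - g.mean()                      # zero mean: flat regions give no response

        def s1_c1(image, orientations=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4), pool=8):
            """S1: Gabor filtering at several orientations; C1: local max pooling for tolerance."""
            s1 = [np.abs(convolve2d(image, gabor_kernel(theta=t), mode="same")) for t in orientations]
            c1 = [maximum_filter(r, size=pool)[::pool, ::pool] for r in s1]
            return np.stack(c1)                      # (orientations, pooled rows, pooled cols)

        img = np.random.rand(64, 64)                 # stand-in for a grayscale input image
        print(s1_c1(img).shape)                      # e.g. (4, 8, 8)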

  4. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    Science.gov (United States)

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. Eye

  5. Visualization study of operators' plant knowledge model

    International Nuclear Information System (INIS)

    Kanno, Tarou; Furuta, Kazuo; Yoshikawa, Shinji

    1999-03-01

    Nuclear plants are typically very complicated systems, and extremely high levels of safety are required in their operation. Since it is never possible to include all possible anomaly scenarios in an education/training curriculum, plant knowledge formation is desired for operators to enable them to act against unexpected anomalies based on knowledge-based decision making. The authors have conducted a study on operators' plant knowledge models for the purpose of supporting operators' efforts in forming this kind of plant knowledge. In this report, an integrated plant knowledge model consisting of configuration space, causality space, goal space, and status space is proposed. The authors examined the appropriateness of this model and developed a prototype system that supports knowledge formation by visualizing the operators' knowledge model and the decision-making process in knowledge-based actions on a software system. Finally, the feasibility of this prototype as a supportive method in operator education/training to enhance operators' ability in knowledge-based performance has been evaluated. (author)

  6. Motor-cognitive dual-task performance: effects of a concurrent motor task on distinct components of visual processing capacity.

    Science.gov (United States)

    Künstler, E C S; Finke, K; Günther, A; Klingner, C; Witte, O; Bublak, P

    2018-01-01

    Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the 'theory of visual attention' (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity. Moreover, goodness-of-fit values and bootstrapping estimates were derived to test whether the TVA-model is validly applicable also under dual task conditions, and whether the robustness of parameter estimates is comparable in single- and dual-task conditions. 24 subjects of middle to higher age performed a continuous tapping task, and a visual processing task (whole report of briefly presented letter arrays) under both single- and dual-task conditions. Results suggest a decline of both visual processing capacity and VSTM storage capacity under dual-task conditions, while the perceptual threshold remained unaffected by a concurrent motor task. In addition, goodness-of-fit values and bootstrapping estimates support the notion that participants processed the visual task in a qualitatively comparable, although quantitatively less efficient way under dual-task conditions. The results support a capacity sharing account of motor-cognitive dual tasking and suggest that even performing a relatively simple motor task relies on central attentional capacity that is necessary for efficient visual information uptake.

  7. Visual search performance among persons with schizophrenia as a function of target eccentricity.

    Science.gov (United States)

    Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M

    2010-03-01

    The current study investigated one possible mechanism of impaired visual attention among patients with schizophrenia: a reduced visual span. Visual span is the region of the visual field from which one can extract information during a single eye fixation. This study hypothesized that schizophrenia-related visual search impairment is mediated, in part, by a smaller visual span. To test this hypothesis, 23 patients with schizophrenia and 22 healthy controls completed a visual search task in which the target was pseudorandomly presented at different distances from the center of the display. Response times were analyzed as a function of search condition (feature vs. conjunctive), display size, and target eccentricity. Consistent with previous reports, patients' search times were more adversely affected as the number of search items increased in the conjunctive search condition. Importantly, however, patients' conjunctive search times were also impacted to a greater degree by target eccentricity. Moreover, a significant impairment in patients' visual search performance was only evident when targets were more eccentric, and their performance was more similar to healthy controls when the target was located closer to the center of the search display. These results support the hypothesis that a narrower visual span may underlie impaired visual search performance among patients with schizophrenia. Copyright 2010 APA, all rights reserved

  8. Modeling, analysis, and visualization of anisotropy

    CERN Document Server

    Özarslan, Evren; Hotz, Ingrid

    2017-01-01

    This book focuses on the modeling, processing and visualization of anisotropy, irrespective of the context in which it emerges, using state-of-the-art mathematical tools. As such, it differs substantially from conventional reference works, which are centered on a particular application. It covers the following topics: (i) the geometric structure of tensors, (ii) statistical methods for tensor field processing, (iii) challenges in mapping neural connectivity and structural mechanics, (iv) processing of uncertainty, and (v) visualizing higher-order representations. In addition to original research contributions, it provides insightful reviews. This multidisciplinary book is the sixth in a series that aims to foster scientific exchange between communities employing tensors and other higher-order representations of directionally dependent data. A significant number of the chapters were co-authored by the participants of the workshop titled Multidisciplinary Approaches to Multivalued Data: Modeling, Visualization,...

  9. Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task.

    Science.gov (United States)

    Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald

    2017-12-15

    The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30, 0, and 30 azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some

  10. Power profiles and short-term visual performance of soft contact lenses.

    Science.gov (United States)

    Papas, Eric; Dahms, Anne; Carnt, Nicole; Tahhan, Nina; Ehrmann, Klaus

    2009-04-01

    To investigate the manner in which contemporary soft contact lenses differ in the distribution of optical power within their optic zones and establish if these variations affect the vision of wearers or the prescribing procedure for back vertex power (BVP). By using a Visionix VC 2001 contact lens power analyzer, power profiles were measured across the optic zones of the following contemporary contact lenses: ACUVUE 2, ACUVUE ADVANCE, O2OPTIX, NIGHT & DAY and PureVision. Single BVP measures were obtained using a Nikon projection lensometer. Visual performance was assessed in 28 masked subjects who wore each lens type in random order. Measurements taken were high and low contrast visual acuity in normal illumination (250 cd/m²), high contrast acuity in reduced illumination (5 cd/m²), subjective visual quality using a numerical rating scale, and visual satisfaction rating using a Likert scale. Marked differences in the distribution of optical power across the optic zone were evident among the lens types. No significant differences were found for any of the visual performance variables (p > 0.05, analysis of variance with repeated measures and Friedman test). Variations in power profile between contemporary soft lens types exist but do not, in general, result in measurable visual performance differences in the short term, nor do they substantially influence the BVP required for optimal correction.

  11. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This brochure describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  12. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This presentation describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  13. Visual determinants of reduced performance on the Stroop color-word test in normal aging individuals.

    Science.gov (United States)

    van Boxtel, M P; ten Tusscher, M P; Metsemakers, J F; Willems, B; Jolles, J

    2001-10-01

    It is unknown to what extent the performance on the Stroop color-word test is affected by reduced visual function in older individuals. We tested the impact of common deficiencies in visual function (reduced distant and close acuity, reduced contrast sensitivity, and color weakness) on Stroop performance among 821 normal individuals aged 53 and older. After adjustment for age, sex, and educational level, low contrast sensitivity was associated with more time needed on card 1 (word naming), red/green color weakness with slower card 2 performance (color naming), and reduced distant acuity with slower performance on card 3 (interference). Half of the age-related variance in speed performance was shared with visual function. The actual impact of reduced visual function may be underestimated in this study when some of this age-related variance in Stroop performance is mediated by visual function decrements. It is suggested that reduced visual function has differential effects on Stroop performance which need to be accounted for when the Stroop test is used both in research and in clinical settings. Stroop performance measured from older individuals with unknown visual status should be interpreted with caution.

  14. An object-based visual attention model for robotic applications.

    Science.gov (United States)

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

    By extending the integrated competition hypothesis, this paper presents an object-based visual attention model, which selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, the proto-object-based saliency is evaluated. The most salient proto-object is selected for attention, and it is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.
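
    A minimal sketch of the bottom-up side of such a model is shown below: a location-based saliency map built from center-surround contrast of intensity and opponent-color features. This only illustrates the generic Itti/Koch-style computation; the authors' proto-object segmentation, LTM, and top-down biasing are not reproduced, and the image, filter scales, and normalization are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def center_surround(feature, sigma_c=2, sigma_s=8):
            """Difference-of-Gaussians contrast: fine 'center' minus coarse 'surround'."""
            return np.abs(gaussian_filter(feature, sigma_c) - gaussian_filter(feature, sigma_s))

        def bottom_up_saliency(rgb):
            """Combine intensity, red-green and blue-yellow conspicuity into one map."""
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            intensity = (r + g + b) / 3.0
            rg = r - g                                  # red-green opponency
            by = b - (r + g) / 2.0                      # blue-yellow opponency
            maps = [center_surround(f) for f in (intensity, rg, by)]
            maps = [(m - m.min()) / (m.max() - m.min() + 1e-12) for m in maps]
            return sum(maps) / len(maps)

        image = np.random.default_rng(0).random((64, 64, 3))   # stand-in camera frame
        sal = bottom_up_saliency(image)
        print("most salient location:", np.unravel_index(np.argmax(sal), sal.shape))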

  15. Visual Motor and Perceptual Task Performance in Astigmatic Students

    Directory of Open Access Journals (Sweden)

    Erin M. Harvey

    2017-01-01

    Purpose. To determine if spectacle-corrected and uncorrected astigmats show reduced performance on visual motor and perceptual tasks. Methods. Third through 8th grade students were assigned to the low refractive error control group (astigmatism < 1.00 D, myopia < 0.75 D, hyperopia < 2.50 D, and anisometropia < 1.50 D) or the bilateral astigmatism group (right and left eye ≥ 1.00 D) based on cycloplegic refraction. Students completed the Beery-Buktenica Developmental Test of Visual Motor Integration (VMI) and Visual Perception (VMIp). Astigmats were randomly assigned to testing with/without correction, and the control group was tested uncorrected. Analyses compared VMI and VMIp scores for corrected and uncorrected astigmats to the control group. Results. The sample included 333 students (control group 170, astigmats tested with correction 75, and astigmats tested uncorrected 88). Mean VMI score in corrected astigmats did not differ from the control group (p=0.829). Uncorrected astigmats had lower VMI scores than the control group (p=0.038) and corrected astigmats (p=0.007). Mean VMIp scores for uncorrected (p=0.209) and corrected astigmats (p=0.124) did not differ from the control group. Uncorrected astigmats had lower mean scores than the corrected astigmats (p=0.003). Conclusions. Uncorrected astigmatism influences visual motor and perceptual task performance. Previously spectacle-treated astigmats do not show developmental deficits on visual motor or perceptual tasks when tested with correction.

  16. Intraocular Telescopic System Design: Optical and Visual Simulation in a Human Eye Model.

    Science.gov (United States)

    Zoulinakis, Georgios; Ferrer-Blasco, Teresa

    2017-01-01

    Purpose. To design an intraocular telescopic system (ITS) for magnifying the retinal image and to simulate its optical and visual performance after implantation in a human eye model. Methods. Design and simulation were carried out with ray-tracing and optical design software. Two different ITS were designed, and their visual performance was simulated using the Liou-Brennan eye model. The differences between the ITS were their lenses' placement in the eye model and their powers. Ray tracing in both centered and decentered situations was carried out for both ITS, while the visual Strehl ratio (VSOTF) was computed using custom MATLAB code. Results. The results show that between 0.4 and 0.8 mm of decentration, the VSOTF does not change much for either far or near target distances. The image projection for these decentrations is in the parafoveal zone, and the quality of the projected image is quite similar. Conclusion. Both systems display similar quality while differing in size; therefore, the choice between them would need to take into account specific parameters of the patient's eye. Quality does not change much between 0.4 and 0.8 mm of decentration for either system, which gives the clinician flexibility to adjust decentration to avoid areas of retinal damage.
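
    The relation between a pupil-plane description of the optics and an image-quality metric of this family can be sketched briefly. The code below computes a plain monochromatic Strehl ratio from a pupil function via the Fourier-optics PSF; it is not the polychromatic, neurally weighted VSOTF used in the paper, and the pupil sampling and defocus term are assumptions for illustration.

        import numpy as np

        def psf_from_pupil(amplitude, wavefront_error_waves):
            """Point-spread function: squared magnitude of the FFT of the complex pupil."""
            pupil = amplitude * np.exp(1j * 2 * np.pi * wavefront_error_waves)
            return np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2

        def strehl_ratio(amplitude, wavefront_error_waves):
            """Peak of the aberrated PSF divided by the diffraction-limited peak."""
            aberrated = psf_from_pupil(amplitude, wavefront_error_waves)
            perfect = psf_from_pupil(amplitude, np.zeros_like(wavefront_error_waves))
            return aberrated.max() / perfect.max()

        n = 256
        y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        r = np.hypot(x, y)
        aperture = (r <= 1.0).astype(float)              # circular pupil
        defocus = 0.1 * (2 * r**2 - 1) * aperture        # ~0.1 waves of defocus
        print(f"Strehl ratio ~ {strehl_ratio(aperture, defocus):.3f}")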

  17. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Maojin Liang

    2017-10-01

    Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation, as well as its relation to CI outcomes. Twenty prelingually deaf children with CI were recruited. Ten children were good CI performers (GCP) and ten were poor performers (PCP). Ten age- and sex-matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the most significant decrease in brain activity was in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We suppose that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential for cortical plasticity. Brain activity evolution appears to be related to CI auditory outcomes.

  18. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants.

    Science.gov (United States)

    Liang, Maojin; Zhang, Junpeng; Liu, Jiahao; Chen, Yuebo; Cai, Yuexin; Wang, Xianjun; Wang, Junbo; Zhang, Xueyuan; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing

    2017-01-01

    Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation, as well as its relation to CI outcomes. Twenty prelingually deaf children with CI were recruited. Ten children were good CI performers (GCP) and ten were poor performers (PCP). Ten age- and sex-matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the most significant decrease in brain activity was in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We suppose that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential for cortical plasticity. Brain activity evolution appears to be related to CI auditory outcomes.

  19. Behavior model for performance assessment

    International Nuclear Information System (INIS)

    Brown-VanHoozer, S. A.

    1999-01-01

    Every individual channels information differently, based on our preference for the sensory modality or representational system (visual, auditory, or kinesthetic) we tend to favor most (our primary representational system, or PRS). Therefore, some of us access and store our information primarily visually first, some auditorily, and others kinesthetically (through feel and touch), which in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task--the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory, and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, which may be summed and averaged for a generalization of behavior(s). However, because such models do not establish a basic understanding of how the behavior was predicated through a decision-making strategy process, predictive models are overall inefficient in their analysis of the means by which behavior was generated. What is seen is the end result

  20. Behavior model for performance assessment.

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S. A.

    1999-07-23

    Every individual channels information differently, based on our preference for the sensory modality or representational system (visual, auditory, or kinesthetic) we tend to favor most (our primary representational system, or PRS). Therefore, some of us access and store our information primarily visually first, some auditorily, and others kinesthetically (through feel and touch), which in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task--the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory, and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, which may be summed and averaged for a generalization of behavior(s). However, because such models do not establish a basic understanding of how the behavior was predicated through a decision-making strategy process, predictive models are overall inefficient in their analysis of the means by which behavior was generated. What is seen is the end result.

  1. Simulator Evaluation of Drivers’ Performance on Rural Highways in relation to Drivers’ Visual Attention Demands

    Directory of Open Access Journals (Sweden)

    Yaqin Qin

    2015-01-01

    The aim of the study is to investigate, by means of a driving simulator experiment, drivers' performance in terms of lateral position, speed, deceleration, steering angle, and braking times on a divided two-lane rural highway in relation to drivers' visual attention demand (VD). In the experiment, a virtual scene of twenty different geometric alignment sections without traffic and the VD testing were designed. Twenty-three experienced drivers with calibrated attention capacity participated in a 30 km drive in an interactive fixed-base simulator. Each participant was required to drive at a controlled speed of 60 km/h along the central lane while repeating random numbers, and was evaluated on VD and driving performance. Three different data analysis techniques were used: (a) statistical tests and hypothesis tests of the curvature change rate (CCR) of the geometric alignments, visual attention demands, and driving performance data, (b) correlation analysis of VD, CCRs, and driving behaviors, and (c) regression analysis of VD and CCRs. Results showed that driving performance can be effectively influenced by the highway alignment and that a prediction model built in this study can evaluate drivers' visual attention demands before the highway is constructed. Interactions among VD, driving behavior, and CCRs were also found.
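
    The prediction model referred to above regresses visual attention demand on the curvature change rate of an alignment. The sketch below illustrates that kind of fit on synthetic stand-in data (the coefficients, units, and sample values are invented for illustration, not the study's results).

        import numpy as np

        # Hypothetical data: curvature change rate (CCR) for 20 alignment sections and
        # the measured visual attention demand (VD, proportion of viewing time).
        rng = np.random.default_rng(42)
        ccr = rng.uniform(0, 400, size=20)
        vd = 0.35 + 0.0008 * ccr + rng.normal(0, 0.03, size=20)

        # Ordinary least-squares fit VD = a + b * CCR, then predict for a planned section.
        b, a = np.polyfit(ccr, vd, deg=1)
        r = np.corrcoef(ccr, vd)[0, 1]
        print(f"VD ~ {a:.3f} + {b:.5f} * CCR   (r = {r:.2f})")
        print("predicted VD for a section with CCR = 250:", round(a + b * 250, 3))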

  2. Impact of low vision rehabilitation on functional vision performance of children with visual impairment.

    Science.gov (United States)

    Ganesh, Suma; Sethi, Sumita; Srivastav, Sonia; Chaudhary, Amrita; Arora, Priyanka

    2013-09-01

    To evaluate the impact of low vision rehabilitation on the functional vision of children with visual impairment. The LV Prasad-Functional Vision Questionnaire, designed specifically to measure functional performance of visually impaired children in developing countries, was used to assess the level of difficulty in performing various tasks pre and post visual rehabilitation in children with documented visual impairment. Chi-square test was used to assess the impact of the rehabilitation intervention on functional vision performance. Mean visual acuity prior to the introduction of low vision devices (LVDs) was 0.90 ± 0.05 for distance and 0.61 ± 0.05 for near. After the intervention, the acuities improved significantly for distance (0.2 ± 0.27). Improvement after visual rehabilitation was especially found in those activities related to the children's studying lifestyle, such as copying from the blackboard. Functional vision improved with visual rehabilitation, especially in those activities related to academic output. It is important for these children to have early visual rehabilitation to decrease the impairment associated with decreased visual output and to enhance their learning abilities.

  3. 3D Visualization of Trees Based on a Sphere-Board Model

    Directory of Open Access Journals (Sweden)

    Jiangfeng She

    2018-01-01

    Because of the need for smooth interaction with tree systems, the billboard and crossed-plane techniques of image-based rendering (IBR) have been used for tree visualization for many years. However, both the billboard-based tree model (BBTM) and the crossed-plane tree model (CPTM) have several notable limitations; for example, they give an impression of slicing when viewed from the top side, and they produce an unimpressive stereoscopic effect and insufficient lighting effects. In this study, a sphere-board-based tree model (SBTM) is proposed to eliminate these defects and to improve the final visual effect. Compared with the BBTM or CPTM, the proposed SBTM uses one or more sphere-like 3D geometric surfaces covered with a virtual texture, which can present more details of the foliage than 2D planes can, to represent the 3D outline of a tree crown. However, the profile edge presented by a continuous surface is overly smooth and regular, and when used to delineate the outline of a tree crown, it makes the tree appear unrealistic. To overcome this shortcoming and achieve a more natural final visual effect, an additional process is applied to the edge of the surface profile. In addition, the SBTM can better support lighting effects because of its cubic geometrical features. Interactive visualization effects for a single tree and a grove are presented in a case study of Sabina chinensis. The results show that the SBTM achieves a better compromise between realism and performance than the BBTM or CPTM.

  4. How visual search relates to visual diagnostic performance : a narrative systematic review of eye-tracking research in radiology

    NARCIS (Netherlands)

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; ten Cate, Olle

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review

  5. Formation of 17-18 yrs age girl students’ visual performance by means of visual training at stage of adaptation to learning loads

    Directory of Open Access Journals (Sweden)

    Bondarenko S.V.

    2015-04-01

    Purpose: to substantiate the health-related training influence of basketball and volleyball elements on the functional state of first-year students' visual analyzers during the period of adaptation to learning loads with a pronounced visual component. Material: 29 students aged 17-18 years without visual pathologies participated in the experiment. Indicators of visual performance were determined with Tagayeva's correction table and processed by Weston's method. Accommodative function was tested by the method of mechanical proximetry. Results: the authors developed and tested two programs of visual training. The influence of visual training on the main components of visual performance (quickness, quality, and integral indicators) was studied, as well as on the eye's accommodative function (through the dynamics of the position of the nearest point of clear vision). Conclusions: applying visual training in physical education classes permits improvement of the indicators of visual analyzer performance as well as minimizing the negative influence of intensive learning loads on the eye's accommodative function.

  6. Visual modeling in an analysis of multidimensional data

    Science.gov (United States)

    Zakharova, A. A.; Vekhter, E. V.; Shklyar, A. V.; Pak, A. J.

    2018-01-01

    The article proposes an approach to solving visualization problems and the subsequent analysis of multidimensional data. Requirements for the properties of visual models created to solve analysis problems are described. As a promising direction for the development of visual analysis tools for multidimensional and voluminous data, the active use of factors of subjective perception and dynamic visualization is suggested. Practical results of solving the problem of multidimensional data analysis are shown using the example of a visual model of empirical data on the current state of research into processes for obtaining silicon carbide by an electric arc method. Solving this problem yields several results: first, an idea of the possibilities for determining a development strategy for the domain; second, an assessment of the reliability of the published data on this subject; and third, insight into how the areas of attention of researchers have changed over time.

  7. Functional Imaging of Audio–Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences?

    Science.gov (United States)

    Muers, Ross S.; Salo, Emma; Slater, Heather; Petkov, Christopher I.

    2017-01-01

    Abstract The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio–visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio–visual selective attention modulates the primate brain, identify sources for “lost” attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. PMID:28419201

  8. Development of the Object-Oriented Dynamic Simulation Models Using Visual C++ Freeware

    Directory of Open Access Journals (Sweden)

    Alexander I. Kozynchenko

    2016-01-01

    The paper focuses mostly on the methodological and programming aspects of developing a versatile desktop framework to provide a basis for high-performance simulation of dynamical models of different kinds and for diverse applications. The paper gives a basic structure for creating a dynamical simulation model in C++ which is built on the Win32 platform with an interactive multi-window interface and uses the lightweight Visual C++ Express as a free integrated development environment. The resulting simulation framework could be a more acceptable alternative to other solutions developed on the basis of commercial tools like Borland C++ or Visual C++ Professional, not to mention the domain-specific languages and more specialized ready-made software such as Matlab, Simulink, and Modelica. This approach seems to be justified in the case of complex research object-oriented dynamical models having nonstandard structure, relationships, algorithms, and solvers, as it allows developing solutions of high flexibility. The essence of the model framework is shown using a case study of the simulation of moving charged particles in an electrostatic field. The simulation model possesses the necessary visualization and control features, such as interactive input, real-time graphical and text output, and start, stop, and rate control.

  9. Interactive Visual Analysis within Dynamic Ocean Models

    Science.gov (United States)

    Butkiewicz, T.

    2012-12-01

    The many observation and simulation based ocean models available today can provide crucial insights for all fields of marine research and can serve as valuable references when planning data collection missions. However, the increasing size and complexity of these models makes leveraging their contents difficult for end users. Through a combination of data visualization techniques, interactive analysis tools, and new hardware technologies, the data within these models can be made more accessible to domain scientists. We present an interactive system that supports exploratory visual analysis within large-scale ocean flow models. The currents and eddies within the models are illustrated using effective, particle-based flow visualization techniques. Stereoscopic displays and rendering methods are employed to ensure that the user can correctly perceive the complex 3D structures of depth-dependent flow patterns. Interactive analysis tools are provided which allow the user to experiment through the introduction of their customizable virtual dye particles into the models to explore regions of interest. A multi-touch interface provides natural, efficient interaction, with custom multi-touch gestures simplifying the otherwise challenging tasks of navigating and positioning tools within a 3D environment. We demonstrate the potential applications of our visual analysis environment with two examples of real-world significance: Firstly, an example of using customized particles with physics-based behaviors to simulate pollutant release scenarios, including predicting the oil plume path for the 2010 Deepwater Horizon oil spill disaster. Secondly, an interactive tool for plotting and revising proposed autonomous underwater vehicle mission pathlines with respect to the surrounding flow patterns predicted by the model; as these survey vessels have extremely limited energy budgets, designing more efficient paths allows for greater survey areas.
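
    The core of the particle-based flow visualization and virtual-dye tools described above is the advection of particles through the model's velocity field. The sketch below shows that step in its simplest form, with a synthetic 2D eddy standing in for an ocean-model current field and plain forward-Euler integration; the grid, time step, and seeding are assumptions for illustration.

        import numpy as np

        def advect(particles, u, v, dt=0.1, steps=50):
            """Move particles (N, 2) through gridded velocity fields u, v; return pathlines."""
            traj = [particles.copy()]
            for _ in range(steps):
                ix = np.clip(particles[:, 0].astype(int), 0, u.shape[1] - 1)
                iy = np.clip(particles[:, 1].astype(int), 0, u.shape[0] - 1)
                particles[:, 0] += dt * u[iy, ix]        # nearest-neighbour velocity lookup
                particles[:, 1] += dt * v[iy, ix]
                traj.append(particles.copy())
            return np.stack(traj)                        # (steps + 1, N, 2)

        ny, nx = 100, 100
        yy, xx = np.mgrid[0:ny, 0:nx]
        u = -(yy - ny / 2) * 0.05                        # synthetic eddy: solid-body rotation
        v = (xx - nx / 2) * 0.05
        seeds = np.array([[60.0, 50.0], [70.0, 50.0], [80.0, 50.0]])   # virtual dye release
        paths = advect(seeds, u, v)
        print("final particle positions:", np.round(paths[-1], 1).tolist())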

  10. Modern Notation of Business Models: A Visual Trend

    OpenAIRE

    Tatiana, Gavrilova; Artem, Alsufyev; Anna-sophia, Yanson

    2014-01-01

    Information overflow and dynamic market changes encourage managers to search for a relevant and eloquent model to describe their business. This paper provides a new framework for visualizing business models, guided by well-shaped visualization based on a mind-mapping technique. Due to the simplicity of perception, this approach has a positive impact on managers' and employees' understanding of companies' business models and promotes a productive exchange of ideas and knowledge. The mindmapping...

  11. Towards computer-based perception by modeling visual perception: A probabilistic theory

    NARCIS (Netherlands)

    Ciftcioglu, O.; Bittermann, M.; Sariyildiz, S.

    2006-01-01

    Studies on computer-based perception by vision modelling are described. Visual perception is mathematically modelled, where the model receives and interprets visual data from the environment. Perception is defined in probabilistic terms so that it is quantified in the same way. Human visual

  12. Bio-physically plausible visualization of highly scattering fluorescent neocortical models for in silico experimentation

    KAUST Repository

    Abdellah, Marwan

    2017-02-15

    Background: We present a visualization pipeline capable of accurate rendering of highly scattering fluorescent neocortical neuronal models. The pipeline is mainly developed to serve the computational neurobiology community. It allows scientists to visualize the results of their virtual experiments that are performed in computer simulations, or in silico. The presented pipeline opens novel avenues for assisting neuroscientists in building biologically accurate models of the brain. These models result from computer simulations of physical experiments that use fluorescence imaging to understand the structural and functional aspects of the brain. Because current visualization workflows have limited capabilities for handling fluorescent volumetric datasets, we propose a physically based optical model that can accurately simulate light interaction with fluorescent-tagged scattering media based on the basic principles of geometric optics and Monte Carlo path tracing. We also develop an automated and efficient framework for generating dense fluorescent tissue blocks from a neocortical column model that is composed of approximately 31000 neurons. Results: Our pipeline is used to visualize a virtual fluorescent tissue block of 50 μm³ that is reconstructed from the somatosensory cortex of the juvenile rat. The fluorescence optical model is qualitatively analyzed and validated against experimental emission spectra of different fluorescent dyes from the Alexa Fluor family. Conclusion: We discuss a scientific visualization pipeline for creating images of synthetic neocortical neuronal models that are tagged virtually with fluorescent labels on a physically plausible basis. The pipeline is applied to analyze and validate simulation data generated from neuroscientific in silico experiments.
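
    The basic mechanism behind Monte Carlo path tracing in a scattering, absorbing medium can be sketched with a simple photon random walk. The example below estimates transmittance through a homogeneous slab with isotropic scattering; fluorescence re-emission, the anisotropic phase function, and the spectral treatment of the paper's model are deliberately omitted, and all coefficients are assumed values.

        import numpy as np

        def simulate_photons(n_photons=5000, mu_s=10.0, mu_a=0.5, slab_depth=1.0, seed=0):
            """Fraction of photons transmitted through a slab (depth in cm), given
            scattering coefficient mu_s and absorption coefficient mu_a (1/cm)."""
            rng = np.random.default_rng(seed)
            mu_t = mu_s + mu_a
            transmitted = 0
            for _ in range(n_photons):
                pos = np.zeros(3)
                direction = np.array([0.0, 0.0, 1.0])        # launched into the slab
                while True:
                    step = -np.log(rng.random()) / mu_t       # sample free path length
                    pos = pos + step * direction
                    if pos[2] >= slab_depth:
                        transmitted += 1                      # exited the far side
                        break
                    if pos[2] < 0.0 or rng.random() < mu_a / mu_t:
                        break                                 # escaped backwards or absorbed
                    cos_t = 2 * rng.random() - 1              # isotropic scattering direction
                    phi = 2 * np.pi * rng.random()
                    sin_t = np.sqrt(1 - cos_t**2)
                    direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
            return transmitted / n_photons

        print("estimated transmittance:", simulate_photons())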

  13. Optical and visual performance of aspheric soft contact lenses.

    Science.gov (United States)

    Efron, Suzanne; Efron, Nathan; Morgan, Philip B

    2008-03-01

    This study was conducted to investigate whether aspheric design soft contact lenses reduce ocular aberrations and result in better visual acuity and subjective appreciation of clinical performance compared with spherical soft contact lenses. A unilateral, double-masked, randomized and controlled study was undertaken in which ocular aberrations and high and low contrast logMAR visual acuity were measured on myopic subjects who wore aspheric design (Biomedics 55 Evolution, CooperVision) and spherical design (Biomedics 55, CooperVision) soft contact lenses. Ten subjects who had about -2.00 D myopia wore -2.00 D lenses and 10 subjects who had about -5.00 D myopia wore -5.00 D lenses. Measurements were made under photopic and mesopic lighting conditions. Subjects were invited to grade comfort, vision in photopic and mesopic conditions, and overall impression with the two lens types on 100 unit visual analogue scales. There was no significant difference in high contrast or low contrast visual acuity between the two lens designs of either power under photopic or mesopic conditions. Both lens designs displayed lower levels of spherical aberration compared with the "no lens" condition under photopic and mesopic light levels (p designs. There were no statistically significant differences in subjective appreciation of clinical performance between lens designs or lens powers. At least with respect to the brand of lenses tested, the fitting of aspheric design soft contact lenses does not result in superior visual acuity, aberration control, or subjective appreciation compared with equivalent spherical design soft contact lenses.

  14. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    International Nuclear Information System (INIS)

    Schroeder, William J.

    2011-01-01

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem

  15. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    Energy Technology Data Exchange (ETDEWEB)

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally

  16. A Hierarchical Visualization Analysis Model of Power Big Data

    Science.gov (United States)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed to target different abstraction modules such as transaction, engine, computation, control and storage. The normally separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  17. Anticipatory alpha phase influences visual working memory performance.

    Science.gov (United States)

    Zanto, Theodore P; Chadick, James Z; Gazzaley, Adam

    2014-01-15

    Alpha band (8-12 Hz) phase dynamics in the visual cortex are thought to reflect fluctuations in cortical excitability that influences perceptual processing. As such, visual stimuli are better detected when their onset is concurrent with specific phases of the alpha cycle. However, it is unclear whether alpha phase differentially influences cognitive performance at specific times relative to stimulus onset (i.e., is the influence of phase maximal before, at, or after stimulus onset?). To address this, participants performed a delayed-recognition working memory (WM) task for visual motion direction during two separate visits. The first visit utilized functional magnetic resonance imaging (fMRI) to identify neural regions associated with task performance. Replicating previous studies, fMRI data showed engagement of visual cortical area V5, as well as a prefrontal cortical region, the inferior frontal junction (IFJ). During the second visit, transcranial magnetic stimulation (TMS) was applied separately to both the right IFJ and right V5 (with the vertex as a control region) while electroencephalography (EEG) was simultaneously recorded. During each trial, a single pulse of TMS (spTMS) was applied at one of six time points (-200, -100, -50, 0, 80, 160 ms) relative to the encoded stimulus onset. Results demonstrated a relationship between the phase of the posterior alpha signal prior to stimulus encoding and subsequent response times to the memory probe two seconds later. Specifically, spTMS to V5, and not the IFJ or vertex, yielded faster response times, indicating improved WM performance, when delivered during the peak, compared to the trough, of the alpha cycle, but only when spTMS was applied 100 ms prior to stimulus onset. These faster responses to the probe correlated with decreased early event related potential (ERP) amplitudes (i.e., P1) to the probe stimuli. Moreover, participants who were least affected by spTMS exhibited greater functional connectivity

  18. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    Science.gov (United States)

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
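
    The final fusion step described above (spatial plus temporal saliency combined through uncertainty weighting) can be sketched as follows. This is a minimal stand-in, not the paper's Gestalt-based weighting: the scalar uncertainty values, the inverse-uncertainty weights, and the toy maps are illustrative assumptions only.

        import numpy as np

        def fuse_saliency(spatial, temporal, u_spatial, u_temporal, eps=1e-6):
            """Fuse spatial and temporal saliency maps with inverse-uncertainty weights."""
            w_s, w_t = 1.0 / (u_spatial + eps), 1.0 / (u_temporal + eps)
            fused = (w_s * spatial + w_t * temporal) / (w_s + w_t)
            # Normalize to [0, 1] for display or evaluation against fixation maps.
            return (fused - fused.min()) / (fused.max() - fused.min() + eps)

        # Toy usage: random maps stand in for real feature-contrast and motion-contrast outputs.
        rng = np.random.default_rng(0)
        s_map, t_map = rng.random((64, 64)), rng.random((64, 64))
        saliency = fuse_saliency(s_map, t_map, u_spatial=0.2, u_temporal=0.5)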

  19. Model of rhythmic ball bouncing using a visually controlled neural oscillator.

    Science.gov (United States)

    Avrin, Guillaume; Siegler, Isabelle A; Makarov, Maria; Rodriguez-Ayerbe, Pedro

    2017-10-01

    The present paper investigates the sensory-driven modulations of central pattern generator dynamics that can be expected to reproduce human behavior during rhythmic hybrid tasks. We propose a theoretical model of human sensorimotor behavior able to account for the observed data from the ball-bouncing task. The novel control architecture is composed of a Matsuoka neural oscillator coupled with the environment through visual sensory feedback. The architecture's ability to reproduce human-like performance during the ball-bouncing task in the presence of perturbations is quantified by comparison of simulated and recorded trials. The results suggest that human visual control of the task is achieved online. The adaptive behavior is made possible by a parametric and state control of the limit cycle emerging from the interaction of the rhythmic pattern generator, the musculoskeletal system, and the environment. NEW & NOTEWORTHY The study demonstrates that a behavioral model based on a neural oscillator controlled by visual information is able to accurately reproduce human modulations in a motor action with respect to sensory information during the rhythmic ball-bouncing task. The model attractor dynamics emerging from the interaction between the neuromusculoskeletal system and the environment met task requirements, environmental constraints, and human behavioral choices without relying on movement planning and explicit internal models of the environment. Copyright © 2017 the American Physiological Society.
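
    For readers unfamiliar with the building block named above, the following is a minimal sketch of a two-neuron Matsuoka oscillator integrated with Euler steps. The parameter values, and the idea of modulating the tonic drive with a visual error signal, are illustrative assumptions rather than the fitted model from the paper.

        import numpy as np

        def matsuoka_step(x, v, drive, dt=0.001, tau_r=0.1, tau_a=0.6, beta=2.5, w=2.0):
            """One Euler step of a two-neuron Matsuoka oscillator.

            x: membrane states (2,), v: adaptation states (2,), drive: tonic input.
            """
            y = np.maximum(x, 0.0)                       # half-wave rectified firing rates
            dx = (-x - beta * v - w * y[::-1] + drive) / tau_r
            dv = (-v + y) / tau_a
            return x + dt * dx, v + dt * dv

        # Simulate a few seconds; a visual error signal (e.g., ball height) could
        # modulate `drive` online to couple the oscillator to the environment.
        x, v = np.array([0.1, 0.0]), np.zeros(2)
        trace = []
        for _ in range(5000):
            x, v = matsuoka_step(x, v, drive=1.0)
            trace.append(max(x[0], 0.0) - max(x[1], 0.0))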

  20. Regional air-quality and acid-deposition modeling and the role for visualization

    International Nuclear Information System (INIS)

    Novak, J.H.; Dennis, R.L.

    1991-11-01

    The U.S. Environmental Protection Agency (EPA) uses air quality and deposition models to advance the scientific understanding of basic physical and chemical processes related to air pollution, and to assess the effectiveness of alternative emissions control strategies. The paper provides a brief technical description of several regional scale atmospheric models, their current use within EPA, and related data analysis issues. Spatial analysis is a key component in the evaluation and interpretation of the model predictions. Thus, the authors highlight several types of analysis enhancements focusing on those related to issues of spatial scale, user access to models and analysis tools, and consolidation of air quality modeling and graphical analysis capabilities. They discuss their initial experience with a Geographical Information System (GIS) pilot project that generated the initial concepts for the design of an integrated modeling and analysis environment. And finally, they present current plans to evolve this modeling/visualization approach to a distributed, heterogeneous computing environment which enables any research scientist or policy analyst to use high performance visualization techniques from his/her desktop

  1. Visual performance of pigeons following hippocampal lesions.

    Science.gov (United States)

    Bingman, V P; Hodos, W

    1992-11-15

    The effect of hippocampal lesions on performance in two psychophysical measures of spatial vision (acuity and size-difference threshold) was examined in 7 pigeons. No difference between the preoperative and postoperative thresholds of the experimental birds was found. The visual performance of pigeons in the psychophysical tasks failed to reveal a role of the hippocampal formation in vision. The results argue strongly that the behavioral deficits found in pigeons with hippocampal lesions when tested in a variety of memory-related spatial tasks are not based on a defect in spatial vision but on impaired spatial cognition.

  2. How Visual Search Relates to Visual Diagnostic Performance: A Narrative Systematic Review of Eye-Tracking Research in Radiology

    Science.gov (United States)

    van der Gijp, A.; Ravesloot, C. J.; Jarodzka, H.; van der Schaaf, M. F.; van der Schaaf, I. C.; van Schaik, J. P.; ten Cate, Th. J.

    2017-01-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology…

  3. Functional Imaging of Audio-Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences?

    Science.gov (United States)

    Rinne, Teemu; Muers, Ross S; Salo, Emma; Slater, Heather; Petkov, Christopher I

    2017-06-01

    The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio-visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio-visual selective attention modulates the primate brain, identify sources for "lost" attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. © The Author 2017. Published by Oxford University Press.

  4. HMMEditor: a visual editing tool for profile hidden Markov model

    Directory of Open Access Journals (Sweden)

    Cheng Jianlin

    2008-03-01

    Full Text Available Abstract Background Profile Hidden Markov Model (HMM) is a powerful statistical model to represent a family of DNA, RNA, and protein sequences. Profile HMM has been widely used in bioinformatics research such as sequence alignment, gene structure prediction, motif identification, protein structure prediction, and biological database search. However, few comprehensive, visual editing tools for profile HMM are publicly available. Results We develop a visual editor for profile Hidden Markov Models (HMMEditor). HMMEditor can visualize the profile HMM architecture, transition probabilities, and emission probabilities. Moreover, it provides functions to edit and save HMM and parameters. Furthermore, HMMEditor allows users to align a sequence against the profile HMM and to visualize the corresponding Viterbi path. Conclusion HMMEditor provides a set of unique functions to visualize and edit a profile HMM. It is a useful tool for biological sequence analysis and modeling. Both HMMEditor software and web service are freely available.
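
    The Viterbi alignment mentioned above can be illustrated with a generic log-space Viterbi decoder over a toy model. This is not HMMEditor's code or its profile-HMM data format; the two states and the transition and emission tables below are hypothetical.

        import numpy as np

        def viterbi(obs, start_p, trans_p, emit_p):
            """Most probable state path through an HMM (log-space Viterbi)."""
            n_states, T = trans_p.shape[0], len(obs)
            logv = np.full((T, n_states), -np.inf)
            back = np.zeros((T, n_states), dtype=int)
            logv[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
            for t in range(1, T):
                for s in range(n_states):
                    scores = logv[t - 1] + np.log(trans_p[:, s])
                    back[t, s] = int(np.argmax(scores))
                    logv[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
            path = [int(np.argmax(logv[-1]))]
            for t in range(T - 1, 0, -1):
                path.append(back[t, path[-1]])
            return path[::-1]

        # Hypothetical two-state model ("match" vs "insert") over a 4-letter alphabet.
        start = np.array([0.9, 0.1])
        trans = np.array([[0.8, 0.2], [0.4, 0.6]])
        emit = np.array([[0.7, 0.1, 0.1, 0.1], [0.25, 0.25, 0.25, 0.25]])
        print(viterbi([0, 0, 3, 1, 0], start, trans, emit))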

  5. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between a HRTF enhanced audio system (3D) and an … with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.

  6. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between an HRTF enhanced audio system (3D) and an … with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.

  7. Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor

    Directory of Open Access Journals (Sweden)

    Jae-Han Park

    2012-06-01

    Full Text Available This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.

  8. Spatial uncertainty model for visual features using a Kinect™ sensor.

    Science.gov (United States)

    Park, Jae-Han; Shin, Yong-Deuk; Bae, Ji-Hun; Baeg, Moon-Hong

    2012-01-01

    This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
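
    A minimal sketch of the covariance propagation idea described in the two records above, assuming a generic disparity-to-Cartesian back-projection and a numerically differentiated Jacobian. The intrinsics (fx, fy, cx, cy), the focal-length-times-baseline product, and the image-space noise covariance are placeholder values, not the Kinect™ calibration data from the paper.

        import numpy as np

        def disparity_to_xyz(p, fx=580.0, fy=580.0, cx=320.0, cy=240.0, fb=43500.0):
            """Back-project (u, v, disparity) to Cartesian XYZ; fb = focal length * baseline."""
            u, v, d = p
            z = fb / d
            return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

        def propagate_covariance(p, cov_uvd, h=1e-3):
            """First-order propagation: Sigma_xyz ~= J * Sigma_uvd * J^T."""
            J = np.zeros((3, 3))
            for j in range(3):
                dp = np.zeros(3)
                dp[j] = h
                J[:, j] = (disparity_to_xyz(p + dp) - disparity_to_xyz(p - dp)) / (2 * h)
            return J @ cov_uvd @ J.T

        feature = np.array([350.0, 260.0, 90.0])       # (u, v, disparity) of one visual feature
        cov_uvd = np.diag([0.5**2, 0.5**2, 0.3**2])    # assumed image-space measurement noise
        print(propagate_covariance(feature, cov_uvd))  # 3x3 spatial covariance (uncertainty ellipsoid)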

  9. Intraocular Telescopic System Design: Optical and Visual Simulation in a Human Eye Model

    Directory of Open Access Journals (Sweden)

    Georgios Zoulinakis

    2017-01-01

    Full Text Available Purpose. To design an intraocular telescopic system (ITS) for magnifying the retinal image and to simulate its optical and visual performance after implantation in a human eye model. Methods. Design and simulation were carried out with ray-tracing and optical design software. Two different ITS were designed, and their visual performance was simulated using the Liou-Brennan eye model. The differences between the two ITS were the placement of their lenses in the eye model and their powers. Ray tracing in both centered and decentered situations was carried out for both ITS, while the visual Strehl ratio (VSOTF) was computed using custom-made MATLAB code. Results. The results show that between 0.4 and 0.8 mm of decentration, the VSOTF does not change much either for far or near target distances. The image projection for these decentrations is in the parafoveal zone, and the quality of the image projected is quite similar. Conclusion. Both systems display similar quality while they differ in size; therefore, the choice between them would need to take into account specific parameters from the patient's eye. Quality does not change much between 0.4 and 0.8 mm of decentration for either system, which gives the clinician flexibility to adjust decentration to avoid areas of retinal damage.

  10. Visualizations of Travel Time Performance Based on Vehicle Reidentification Data

    Energy Technology Data Exchange (ETDEWEB)

    Young, Stanley Ernest [National Renewable Energy Lab, 15013 Denver West Parkway, Golden, CO 80401; Sharifi, Elham [Center for Advanced Transportation Technology, University of Maryland, College Park, Technology Ventures Building, Suite 2200, 5000 College Avenue, College Park, MD 20742; Day, Christopher M. [Joint Transportation Research Program, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47906; Bullock, Darcy M. [Lyles School of Civil Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47906

    2017-01-01

    This paper provides a visual reference of the breadth of arterial performance phenomena based on travel time measures obtained from reidentification technology that has proliferated in the past 5 years. These graphical performance measures take the form of overlay charts and statistical distributions presented as cumulative frequency diagrams (CFDs). With overlays of vehicle travel times from multiple days, dominant traffic patterns over a 24-h period are reinforced and reveal the traffic behavior induced primarily by the operation of traffic control at signalized intersections. A cumulative distribution function in the statistical literature provides a method for comparing traffic patterns from various time frames or locations in a compact visual format that provides intuitive feedback on arterial performance. The CFD may be accumulated hourly, by peak periods, or by time periods specific to signal timing plans that are in effect. Combined, overlay charts and CFDs provide visual tools with which to assess the quality and consistency of traffic movement for various periods throughout the day efficiently, without sacrificing detail, which is a typical byproduct of numeric-based performance measures. These methods are particularly effective for comparing before-and-after median travel times, as well as changes in interquartile range, to assess travel time reliability.
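
    A cumulative frequency diagram of the kind described above is simply an empirical distribution of observed travel times; a minimal sketch with synthetic travel-time samples is shown below.

        import numpy as np

        def cumulative_frequency(travel_times):
            """Empirical cumulative frequency diagram (CFD) for a set of travel times."""
            x = np.sort(np.asarray(travel_times, dtype=float))
            y = np.arange(1, len(x) + 1) / len(x)
            return x, y                     # plot y against x, or compare curves across periods

        # Synthetic AM-peak vs PM-peak samples (seconds); compare medians and spread.
        am = [182, 195, 210, 240, 260, 198, 176, 300, 220, 205]
        pm = [260, 310, 280, 295, 405, 330, 270, 315, 290, 350]
        for label, sample in (("AM", am), ("PM", pm)):
            x, y = cumulative_frequency(sample)
            print(label, "median travel time:", x[np.searchsorted(y, 0.5)])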

  11. Modeling Color Difference for Visualization Design.

    Science.gov (United States)

    Szafir, Danielle Albers

    2018-01-01

    Color is frequently used to encode values in visualizations. For color encodings to be effective, the mapping between colors and values must preserve important differences in the data. However, most guidelines for effective color choice in visualization are based on either color perceptions measured using large, uniform fields in optimal viewing environments or on qualitative intuitions. These limitations may cause data misinterpretation in visualizations, which frequently use small, elongated marks. Our goal is to develop quantitative metrics to help people use color more effectively in visualizations. We present a series of crowdsourced studies measuring color difference perceptions for three common mark types: points, bars, and lines. Our results indicate that people's abilities to perceive color differences vary significantly across mark types. Probabilistic models constructed from the resulting data can provide objective guidance for designers, allowing them to anticipate viewer perceptions in order to inform effective encoding design.
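
    As a rough illustration of the kind of guidance such models can give, the sketch below scores a pair of sRGB colors with the simple CIE76 ΔE*ab difference and compares it against per-mark-type thresholds. The conversion follows the standard sRGB-to-CIELAB path, but the threshold numbers are invented for illustration and are not the probabilistic models fitted in the paper.

        import numpy as np

        def srgb_to_lab(rgb):
            """Convert an sRGB triple in [0, 1] to CIELAB (D65 white point)."""
            rgb = np.asarray(rgb, dtype=float)
            lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
            M = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])
            xyz = (M @ lin) / np.array([0.95047, 1.0, 1.08883])
            f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
            return np.array([116.0 * f[1] - 16.0, 500.0 * (f[0] - f[1]), 200.0 * (f[1] - f[2])])

        def delta_e(rgb1, rgb2):
            """CIE76 color difference between two sRGB colors."""
            return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))

        # Hypothetical per-mark-type thresholds (illustrative only): thinner marks
        # plausibly need a larger difference to be reliably distinguished.
        THRESHOLD = {"bar": 4.0, "point": 6.0, "line": 8.0}
        d = delta_e((0.20, 0.40, 0.80), (0.25, 0.45, 0.75))
        print({mark: d >= t for mark, t in THRESHOLD.items()})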

  12. Measuring the performance of visual to auditory information conversion.

    Directory of Open Access Journals (Sweden)

    Shern Shiou Tan

    Full Text Available BACKGROUND: Visual to auditory conversion systems have been in existence for several decades. Besides being among the front runners in providing visual capabilities to blind users, the auditory cues generated from image sonification systems are still easier to learn and adapt to compared to other similar techniques. Other advantages include low cost, easy customizability, and universality. However, every system developed so far has its own set of strengths and weaknesses. In order to improve these systems further, we propose an automated and quantitative method to measure the performance of such systems. With these quantitative measurements, it is possible to gauge the relative strengths and weaknesses of different systems and rank the systems accordingly. METHODOLOGY: Performance is measured by both the interpretability and also the information preservation of visual to auditory conversions. Interpretability is measured by computing the correlation of inter image distance (IID) and inter sound distance (ISD), whereas the information preservation is computed by applying Information Theory to measure the entropy of both visual and corresponding auditory signals. These measurements provide a basis and some insights on how the systems work. CONCLUSIONS: With an automated interpretability measure as a standard, more image sonification systems can be developed, compared, and then improved. Even though the measure does not test systems as thoroughly as carefully designed psychological experiments, a quantitative measurement like the one proposed here can compare systems to a certain degree without incurring much cost. Underlying this research is the hope that a major breakthrough in image sonification systems will allow blind users to cost effectively regain enough visual functions to allow them to lead secure and productive lives.
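
    The two measurements described above can be sketched directly: correlate inter-image distances with inter-sound distances for interpretability, and estimate entropy for information preservation. The Euclidean distances, the histogram-based entropy estimate, and the toy linear sonification below are simplifying assumptions, not the paper's exact metrics.

        import numpy as np

        def interpretability_score(images, sounds):
            """Correlation between inter-image distances (IID) and inter-sound distances (ISD)."""
            def pairwise(X):
                return np.array([np.linalg.norm(X[i] - X[j])
                                 for i in range(len(X)) for j in range(i + 1, len(X))])
            return float(np.corrcoef(pairwise(images), pairwise(sounds))[0, 1])

        def entropy_bits(signal, bins=32):
            """Shannon entropy of a signal's amplitude histogram, in bits."""
            counts, _ = np.histogram(signal, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return float(-np.sum(p * np.log2(p)))

        rng = np.random.default_rng(1)
        imgs = rng.random((10, 64))           # flattened toy images
        snds = imgs @ rng.random((64, 32))    # a toy linear "sonification" of each image
        print(interpretability_score(imgs, snds), entropy_bits(snds.ravel()))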

  13. MODELLING SYNERGISTIC EYE MOVEMENTS IN THE VISUAL FIELD

    Directory of Open Access Journals (Sweden)

    BARITZ Mihaela

    2015-06-01

    Full Text Available Some theoretical and practical considerations about eye movements in the visual field are presented in the first part of this paper. These movements are synergistic in the human body and make visual perception in 3D space possible. The theoretical background of the eye movement analysis rests on the equations of motion of the eyeball, treated as a rigid body with a fixed point. The external actions and the ordering and execution of the movements are provided by the neural and extraocular muscular system, so the position, stability and movements of the eye can be quantified through inverse kinematics. The purpose of this research is the development of a simulation model of the human binocular visual system, together with an acquisition methodology and an experimental setup for recording and processing eye movement data, presented in the second part of the paper. The model of ocular movements aims to establish binocular synergy and the limits of visual field changes under ocular motor dysfunction. From the biomechanics of the eyeball, a modeling strategy is established for process parameters such as convergence, fixation and lens accommodation, in order to obtain responses from binocular balance. The results of the modelling process and the positions of the eyeball and its axes in the visual field are presented in the final part of the paper.

  14. Effect of different illumination sources on reading and visual performance

    Directory of Open Access Journals (Sweden)

    Male Shiva Ram

    2018-01-01

    Conclusion: This study demonstrates the influence of illumination on reading rate. There were no significant differences between males and females under the different illuminations; however, males preferred CFL and females preferred FLUO for faster reading and visual comfort. Interestingly, neither group preferred LED or TUNG. Although LED is energy-efficient, visual performance under it is poor; it is uncomfortable for prolonged reading and causes early symptoms of fatigue.

  15. Visual art teachers and performance assessment methods in ...

    African Journals Online (AJOL)

    This paper examines the competencies of visual arts teachers in using performance assessment methods, and to ascertain the extent to which the knowledge, skills and experiences of teachers affect their competence in using assessment strategies in their classroom. The study employs a qualitative research design; ...

  16. Desempenho visual na correção de miopia com óculos e lentes de contato gelatinosas Visual performance in myopic correction with spectacles and soft contact lenses

    Directory of Open Access Journals (Sweden)

    Breno Barth

    2008-02-01

    with three different soft contact lenses [Acuvue® 2 (Vistacon J&J Vision Care Inc., USA), Biomedics® 55 (Ocular Science, USA), and Focus® 1-2 week (Ciba Vision Corporation, USA)]. METHODS: An interventional prospective clinical trial studied a sample of 40 myopic patients (-1.00 to -4.50 sph, with or without astigmatism up to -0.75 cyl). Each patient had one eye randomized to visual performance evaluation. RESULTS: The Zywave aberrometer detected an over-refraction and a significant difference between Acuvue® 2 and Biomedics® 55 regarding spherical refractive components and spherical equivalent. Both soft contact lenses showed hypercorrection as compared to Focus® 1-2 week. Visual performance was not significantly different with spectacles and the three soft contact lenses in visual acuity and contrast sensitivity measurements. The wavefront analysis detected a significant difference in a third-order aberration with and without soft contact lenses, with better visual performance with Acuvue® 2 and Biomedics® 55. CONCLUSION: In the evaluation of visual performance with spectacles and soft contact lenses, wavefront analysis was a more sensitive measure of visual function than high-contrast visual acuity and contrast sensitivity. The evaluation model of visual performance with wavefront analysis developed in this investigation may be useful for further similar studies.

  17. Oregon State University Softball: Dynamic Visual Acuity Training for Improving Performance

    OpenAIRE

    Madsen, Bruce; Blair, Kyle

    2017-01-01

    Sports vision training involves eye focusing and movement workouts that center on the visual tracking of objects. The purpose of sports vision training is to improve performance in various sports by improving visual responses and processing, such as by lowering reaction times. In 2015, the Athletic Eye Institute started a sports vision-training program study with the Oregon State University Softball Team in the hopes of increasing the dynamic visual skills of their players. There were two aim...

  18. Influence of visual feedback on human task performance in ITER remote handling

    Energy Technology Data Exchange (ETDEWEB)

    Schropp, Gwendolijn Y.R., E-mail: g.schropp@heemskerk-innovative.nl [Utrecht University, Utrecht (Netherlands); Heemskerk Innovative Technology, Noordwijk (Netherlands); Heemskerk, Cock J.M. [Heemskerk Innovative Technology, Noordwijk (Netherlands); Kappers, Astrid M.L.; Tiest, Wouter M. Bergmann [Helmholtz Institute-Utrecht University, Utrecht (Netherlands); Elzendoorn, Ben S.Q. [FOM-Institute for Plasma Physics Rijnhuizen, Association EURATOM/FOM, Partner in the Trilateral Euregio Clusterand ITER-NL, PO box 1207, 3430 BE Nieuwegein (Netherlands); Bult, David [FOM-Institute for Plasma Physics Rijnhuizen, Association EURATOM/FOM, Partner in the Trilateral Euregio Clusterand ITER-NL, PO box 1207, 3430 BE Nieuwegein (Netherlands)

    2012-08-15

    Highlights: ► The performance of human operators in an ITER-like test facility for remote handling. ► Different sources of visual feedback influence how fast one can complete a maintenance task. ► Insights learned could be used in design of operator work environment or training procedures. - Abstract: In ITER, maintenance operations will be largely performed by remote handling (RH). Before ITER can be put into operation, safety regulations and licensing authorities require proof of maintainability for critical components. Part of the proof will come from using standard components and procedures. Additional verification and validation is based on simulation and hardware tests in 1:1 scale mockups. The Master Slave manipulator system (MS2) Benchmark Product was designed to implement a reference set of maintenance tasks representative for ITER remote handling. Experiments were performed with two versions of the Benchmark Product. In both experiments, the quality of visual feedback varied by exchanging direct view with indirect view (using video cameras) in order to measure and analyze its impact on human task performance. The first experiment showed that both experienced and novice RH operators perform a simple task significantly better with direct visual feedback than with camera feedback. A more complex task showed a large variation in results and could not be completed by many novice operators. Experienced operators commented on both the mechanical design and visual feedback. In a second experiment, a more elaborate task was tested on an improved Benchmark product. Again, the task was performed significantly faster with direct visual feedback than with camera feedback. In post-test interviews, operators indicated that they regarded the lack of 3D perception as the primary factor hindering their performance.

  19. Influence of visual feedback on human task performance in ITER remote handling

    International Nuclear Information System (INIS)

    Schropp, Gwendolijn Y.R.; Heemskerk, Cock J.M.; Kappers, Astrid M.L.; Tiest, Wouter M. Bergmann; Elzendoorn, Ben S.Q.; Bult, David

    2012-01-01

    Highlights: ► The performance of human operators in an ITER-like test facility for remote handling. ► Different sources of visual feedback influence how fast one can complete a maintenance task. ► Insights learned could be used in design of operator work environment or training procedures. - Abstract: In ITER, maintenance operations will be largely performed by remote handling (RH). Before ITER can be put into operation, safety regulations and licensing authorities require proof of maintainability for critical components. Part of the proof will come from using standard components and procedures. Additional verification and validation is based on simulation and hardware tests in 1:1 scale mockups. The Master Slave manipulator system (MS2) Benchmark Product was designed to implement a reference set of maintenance tasks representative for ITER remote handling. Experiments were performed with two versions of the Benchmark Product. In both experiments, the quality of visual feedback varied by exchanging direct view with indirect view (using video cameras) in order to measure and analyze its impact on human task performance. The first experiment showed that both experienced and novice RH operators perform a simple task significantly better with direct visual feedback than with camera feedback. A more complex task showed a large variation in results and could not be completed by many novice operators. Experienced operators commented on both the mechanical design and visual feedback. In a second experiment, a more elaborate task was tested on an improved Benchmark product. Again, the task was performed significantly faster with direct visual feedback than with camera feedback. In post-test interviews, operators indicated that they regarded the lack of 3D perception as the primary factor hindering their performance.

  20. Visual Analysis of Cloud Computing Performance Using Behavioral Lines.

    Science.gov (United States)

    Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu

    2016-02-29

    Cloud computing is an essential technology to Big Data analytics and services. A cloud computing system is often comprised of a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues. But profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visual based analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visual based approach is effective in identifying trends and anomalies of the systems.
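
    A minimal sketch of the node-to-node comparison such a tool needs: each node's sampled metrics (CPU, memory, network) form a multivariate time series, which is z-scored and compared pairwise. The specific distance-to-similarity mapping here is an assumption, not the similarity measure used in the paper.

        import numpy as np

        def node_similarity(profiles):
            """Pairwise similarity between compute nodes' multivariate time series.

            profiles: (n_nodes, n_timesteps, n_metrics), e.g. CPU load, memory, network.
            """
            z = (profiles - profiles.mean(axis=1, keepdims=True)) / (
                profiles.std(axis=1, keepdims=True) + 1e-9)
            n = len(z)
            sim = np.eye(n)
            for i in range(n):
                for j in range(i + 1, n):
                    d = np.linalg.norm(z[i] - z[j]) / z[i].size
                    sim[i, j] = sim[j, i] = 1.0 / (1.0 + d)   # map distance to (0, 1] similarity
            return sim

        rng = np.random.default_rng(2)
        data = rng.random((5, 120, 3))        # 5 nodes, 120 samples, 3 metrics
        print(np.round(node_similarity(data), 3))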

  1. Visualizations and Mental Models - The Educational Implications of GEOWALL

    Science.gov (United States)

    Rapp, D.; Kendeou, P.

    2003-12-01

    Work in the earth sciences has outlined many of the faulty beliefs that students possess concerning particular geological systems and processes. Evidence from educational and cognitive psychology has demonstrated that students often have difficulty overcoming their naïve beliefs about science. Prior knowledge is often remarkably resistant to change, particularly when students' existing mental models for geological principles may be faulty or inaccurate. Figuring out how to help students revise their mental models to include appropriate information is a major challenge. Up until this point, research has tended to focus on whether 2-dimensional computer visualizations are useful tools for helping students develop scientifically correct models. Research suggests that when students are given the opportunity to use dynamic computer-based visualizations, they are more likely to recall the learned information, and are more likely to transfer that knowledge to novel settings. Unfortunately, 2-dimensional visualization systems are often inadequate representations of the material that educators would like students to learn. For example, a 2-dimensional image of the Earth's surface does not adequately convey particular features that are critical for visualizing the geological environment. This may limit the models that students can construct following these visualizations. GEOWALL is a stereo projection system that has attempted to address this issue. It can display multidimensional static geologic images and dynamic geologic animations in a 3-dimensional format. Our current research examines whether multidimensional visualization systems such as GEOWALL may facilitate learning by helping students to develop more complex mental models. This talk will address some of the cognitive issues that influence the construction of mental models, and the difficulty of updating existing mental models. We will also discuss our current work that seeks to examine whether GEOWALL is an

  2. Bio-physically plausible visualization of highly scattering fluorescent neocortical models for in silico experimentation

    KAUST Repository

    Abdellah, Marwan; Bilgili, Ahmet; Eilemann, Stefan; Shillcock, Julian; Markram, Henry; Schürmann, Felix

    2017-01-01

    to visualize the results of their virtual experiments that are performed in computer simulations, or in silico. The impact of the presented pipeline opens novel avenues for assisting the neuroscientists to build biologically accurate models of the brain

  3. From Big Data to Big Displays High-Performance Visualization at Blue Brain

    KAUST Repository

    Eilemann, Stefan; Abdellah, Marwan; Antille, Nicolas; Bilgili, Ahmet; Chevtchenko, Grigory; Dumusc, Raphael; Favreau, Cyrille; Hernando, Juan; Nachbaur, Daniel; Podhajski, Pawel; Villafranca, Jafet; Schürmann, Felix

    2017-01-01

    Blue Brain has pushed high-performance visualization (HPV) to complement its HPC strategy since its inception in 2007. In 2011, this strategy has been accelerated to develop innovative visualization solutions through increased funding and strategic

  4. BDNF Variants May Modulate Long-Term Visual Memory Performance in a Healthy Cohort.

    Science.gov (United States)

    Avgan, Nesli; Sutherland, Heidi G; Spriggens, Lauren K; Yu, Chieh; Ibrahim, Omar; Bellis, Claire; Haupt, Larisa M; Shum, David H K; Griffiths, Lyn R

    2017-03-17

    Brain-derived neurotrophic factor (BDNF) is involved in numerous cognitive functions including learning and memory. BDNF plays an important role in synaptic plasticity in humans and rats, with BDNF shown to be essential for the formation of long-term memories. We previously identified a significant association between the BDNF Val66Met polymorphism (rs6265) and long-term visual memory (p-value = 0.003) in a small cohort (n = 181) comprised of healthy individuals who had been phenotyped for various aspects of memory function. In this study, we have extended the cohort to 597 individuals and examined multiple genetic variants across both the BDNF and BDNF-AS genes for association with visual memory performance as assessed by the Wechsler Memory Scale-Fourth Edition subtests Visual Reproduction I and II (VR I and II). VR I assesses immediate visual memory, whereas VR II assesses long-term visual memory. Genetic association analyses were performed for 34 single nucleotide polymorphisms genotyped on Illumina OmniExpress BeadChip arrays with the immediate and long-term visual memory phenotypes. While none of the BDNF and BDNF-AS variants were shown to be significant for immediate visual memory, we found 10 variants (including the Val66Met polymorphism (p-value = 0.006)) that were nominally associated, and three variants (two variants in BDNF and one variant in the BDNF-AS locus) that were significantly associated with long-term visual memory. Our data therefore suggest a potential role for BDNF, and its anti-sense transcript BDNF-AS, in long-term visual memory performance.

  5. BDNF Variants May Modulate Long-Term Visual Memory Performance in a Healthy Cohort

    Directory of Open Access Journals (Sweden)

    Nesli Avgan

    2017-03-01

    Full Text Available Brain-derived neurotrophic factor (BDNF) is involved in numerous cognitive functions including learning and memory. BDNF plays an important role in synaptic plasticity in humans and rats, with BDNF shown to be essential for the formation of long-term memories. We previously identified a significant association between the BDNF Val66Met polymorphism (rs6265) and long-term visual memory (p-value = 0.003) in a small cohort (n = 181) comprised of healthy individuals who had been phenotyped for various aspects of memory function. In this study, we have extended the cohort to 597 individuals and examined multiple genetic variants across both the BDNF and BDNF-AS genes for association with visual memory performance as assessed by the Wechsler Memory Scale-Fourth Edition subtests Visual Reproduction I and II (VR I and II). VR I assesses immediate visual memory, whereas VR II assesses long-term visual memory. Genetic association analyses were performed for 34 single nucleotide polymorphisms genotyped on Illumina OmniExpress BeadChip arrays with the immediate and long-term visual memory phenotypes. While none of the BDNF and BDNF-AS variants were shown to be significant for immediate visual memory, we found 10 variants (including the Val66Met polymorphism (p-value = 0.006)) that were nominally associated, and three variants (two variants in BDNF and one variant in the BDNF-AS locus) that were significantly associated with long-term visual memory. Our data therefore suggest a potential role for BDNF, and its anti-sense transcript BDNF-AS, in long-term visual memory performance.

  6. Modeling the shape hierarchy for visually guided grasping

    CSIR Research Space (South Africa)

    Rezai, O

    2014-10-01

    Full Text Available The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modeled shape tuning in visual AIP neurons and its relationship with curvature and gradient...

  7. Macular pigment and its contribution to visual performance and experience

    Science.gov (United States)

    Loughman, James; Davison, Peter A.; Nolan, John M.; Akkali, Mukunda C.; Beatty, Stephen

    2010-01-01

    There is now a consensus, based on histological, biochemical and spectral absorption data, that the yellow colour observed at the macula lutea is a consequence of the selective accumulation of dietary xanthophylls in the central retina of the living eye. Scientific research continues to explore the function(s) of MP in the human retina, with two main hypotheses premised on its putative capacity to (1) protect the retina from (photo)-oxidative damage by means of its optical filtration and/or antioxidant properties, the so-called protective hypothesis and (2) influence the quality of visual performance by means of selective short wavelength light absorption prior to photoreceptor light capture, thereby attenuating the effects of chromatic aberration and light scatter, the so-called acuity and visibility hypotheses. The current epidemic of age-related macular degeneration has directed researchers to investigate the protective hypothesis of MP, while there has been a conspicuous lack of work designed to investigate the role of MP in visual performance. The aim of this review is to present and critically appraise the current literature germane to the contribution of MP, if any, to visual performance and experience.

  8. Macular pigment and its contribution to visual performance and experience

    Directory of Open Access Journals (Sweden)

    James Loughman

    2010-04-01

    Full Text Available There is now a consensus, based on histological, biochemical and spectral absorption data, that the yellow colour observed at the macula lutea is a consequence of the selective accumulation of dietary xanthophylls in the central retina of the living eye. Scientific research continues to explore the function(s) of MP in the human retina, with two main hypotheses premised on its putative capacity to (1) protect the retina from (photo)-oxidative damage by means of its optical filtration and/or antioxidant properties, the so-called protective hypothesis and (2) influence the quality of visual performance by means of selective short wavelength light absorption prior to photoreceptor light capture, thereby attenuating the effects of chromatic aberration and light scatter, the so-called acuity and visibility hypotheses. The current epidemic of age-related macular degeneration has directed researchers to investigate the protective hypothesis of MP, while there has been a conspicuous lack of work designed to investigate the role of MP in visual performance. The aim of this review is to present and critically appraise the current literature germane to the contribution of MP, if any, to visual performance and experience.

  9. Modelling auditory attention: Insights from the Theory of Visual Attention (TVA)

    DEFF Research Database (Denmark)

    Roberts, K. L.; Andersen, Tobias; Kyllingsbæk, Søren

    We report initial progress towards creating an auditory analogue of a mathematical model of visual attention: the ‘Theory of Visual Attention’ (TVA; Bundesen, 1990). TVA is one of the best established models of visual attention. It assumes that visual stimuli are initially processed in parallel, and that there is a ‘race’ for selection and representation in visual short term memory (VSTM). In the basic TVA task, participants view a brief display of letters and are asked to report either all of the letters (whole report) or a subset of the letters (e.g., the red letters; partial report). Fitting the model … modelled using a log-logistic function than an exponential function. A more challenging difference is that in the partial report task, there is more target-distractor confusion for auditory than visual stimuli. This failure of object-formation (prior to attentional object-selection) is not yet effectively …
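
    For background, the core rate equation of TVA as commonly cited from Bundesen (1990) is reproduced below; it is included only as context for the auditory adaptation discussed above, and nothing in it is specific to the auditory model. Here η(x,i) is the sensory evidence that element x belongs to category i, β_i a decision bias, w the attentional weights, π_j pertinence values, S the set of elements in the field, and R the set of perceptual categories.

        % TVA rate at which element x is encoded into VSTM as a member of category i,
        % together with the attentional weight of element x (standard TVA notation):
        \[
          v(x,i) \;=\; \eta(x,i)\,\beta_i\,\frac{w_x}{\sum_{z \in S} w_z},
          \qquad
          w_x \;=\; \sum_{j \in R} \eta(x,j)\,\pi_j
        \]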

  10. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    International Nuclear Information System (INIS)

    Ruebel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat

    2008-01-01

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system

  11. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat,

    2008-08-22

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.
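
    The histogram-based parallel coordinates mentioned in both records can be sketched by summarizing each adjacent axis pair with a 2D histogram whose bin counts drive line density, instead of drawing one polyline per data point. The sketch below shows only that binning step; the index/query-driven subsetting and the rendering itself are out of scope, and the array sizes are arbitrary.

        import numpy as np

        def pairwise_histograms(data, bins=32):
            """2D bin counts for each adjacent axis pair of a parallel-coordinates plot."""
            hists = []
            for k in range(data.shape[1] - 1):
                counts, xedges, yedges = np.histogram2d(data[:, k], data[:, k + 1], bins=bins)
                hists.append((counts, xedges, yedges))   # counts drive line density/brightness
            return hists

        rng = np.random.default_rng(3)
        particles = rng.normal(size=(100_000, 4))        # toy stand-in for particle data
        for counts, _, _ in pairwise_histograms(particles):
            print(counts.shape, int(counts.sum()))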

  12. GABAergic modulation of visual gamma and alpha oscillations and its consequences for working memory performance.

    Science.gov (United States)

    Lozano-Soldevilla, Diego; ter Huurne, Niels; Cools, Roshan; Jensen, Ole

    2014-12-15

    Impressive in vitro research in rodents and computational modeling has uncovered the core mechanisms responsible for generating neuronal oscillations. In particular, GABAergic interneurons play a crucial role for synchronizing neural populations. Do these mechanistic principles apply to human oscillations associated with function? To address this, we recorded ongoing brain activity using magnetoencephalography (MEG) in healthy human subjects participating in a double-blind pharmacological study receiving placebo, 0.5 mg and 1.5 mg of lorazepam (LZP; a benzodiazepine upregulating GABAergic conductance). Participants performed a demanding visuospatial working memory (WM) task. We found that occipital gamma power associated with WM recognition increased with LZP dosage. Importantly, the frequency of the gamma activity decreased with dosage, as predicted by models derived from the rat hippocampus. A regionally specific gamma increase correlated with the drug-related performance decrease. Despite the system-wide pharmacological intervention, gamma power drug modulations were specific to visual cortex: sensorimotor gamma power and frequency during button presses remained unaffected. In contrast, occipital alpha power modulations during the delay interval decreased parametrically with drug dosage, predicting performance impairment. Consistent with alpha oscillations reflecting functional inhibition, LZP affected alpha power strongly in early visual regions not required for the task demonstrating a regional specific occipital impairment. GABAergic interneurons are strongly implicated in the generation of gamma and alpha oscillations in human occipital cortex where drug-induced power modulations predicted WM performance. Our findings bring us an important step closer to linking neuronal dynamics to behavior by embracing established animal models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. An insect-inspired model for visual binding I: learning objects and their characteristics.

    Science.gov (United States)

    Northcutt, Brandon D; Dyhr, Jonathan P; Higgins, Charles M

    2017-04-01

    Visual binding is the process of associating the responses of visual interneurons in different visual submodalities all of which are responding to the same object in the visual field. Recently identified neuropils in the insect brain termed optic glomeruli reside just downstream of the optic lobes and have an internal organization that could support visual binding. Working from anatomical similarities between optic and olfactory glomeruli, we have developed a model of visual binding based on common temporal fluctuations among signals of independent visual submodalities. Here we describe and demonstrate a neural network model capable both of refining selectivity of visual information in a given visual submodality, and of associating visual signals produced by different objects in the visual field by developing inhibitory neural synaptic weights representing the visual scene. We also show that this model is consistent with initial physiological data from optic glomeruli. Further, we discuss how this neural network model may be implemented in optic glomeruli at a neuronal level.
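
    A very loose sketch of the binding idea above: channels whose temporal fluctuations co-vary (same object) end up weakly coupled, while uncorrelated channels develop stronger mutual inhibition. The relaxation update and the sinusoidal test signals are illustrative assumptions and do not reproduce the paper's network equations or its optic-glomeruli circuitry.

        import numpy as np

        def learn_inhibitory_weights(signals, lr=0.05, epochs=200):
            """Relax inhibitory weights toward (1 - |temporal correlation|) between channels.

            signals: (n_channels, n_timesteps). Co-varying channels (same object) end up
            weakly coupled; uncorrelated channels (different objects) inhibit each other.
            """
            z = (signals - signals.mean(axis=1, keepdims=True)) / (
                signals.std(axis=1, keepdims=True) + 1e-9)
            corr = np.clip(z @ z.T / signals.shape[1], -1.0, 1.0)
            target = 1.0 - np.abs(corr)
            W = np.zeros_like(corr)
            for _ in range(epochs):
                W += lr * (target - W)        # incremental update toward the correlation-based target
            np.fill_diagonal(W, 0.0)
            return W

        t = np.linspace(0, 10, 500)
        obj_a, obj_b = np.sin(2 * np.pi * 1.0 * t), np.sin(2 * np.pi * 0.3 * t + 1.0)
        channels = np.stack([obj_a, 0.8 * obj_a, obj_b])  # two channels see object A, one sees B
        print(np.round(learn_inhibitory_weights(channels), 2))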

  14. The body voyage as visual representation and art performance

    DEFF Research Database (Denmark)

    Olsén, Jan-Eric

    2011-01-01

    This paper looks at the notion of the body as an interior landscape that is made intelligible through visual representation. It discerns the key figure of the inner corporeal voyage, identifies its main elements and examines how contemporary artists working with performances and installations deal with it. A further aim of the paper is to discuss what kind of image of the body is conveyed through medical visual technologies, such as endoscopy, and to relate it to contemporary discussions on embodiment, embodied vision and bodily presence. The paper concludes with a recent exhibition

  15. The role of visual skills and its impact on skill performance of cricket ...

    African Journals Online (AJOL)

    The aim of this study was to determine the role and the impact of a visual skills training programme on the skills performance of cricket players, and whether visual training programmes are beneficial to competitive sports performance. Highly skilled cricket players (n=13) who were actively participating at a provincial level of ...

  16. Differential learning and memory performance in OEF/OIF veterans for verbal and visual material.

    Science.gov (United States)

    Sozda, Christopher N; Muir, James J; Springer, Utaka S; Partovi, Diana; Cole, Michael A

    2014-05-01

    Memory complaints are particularly salient among veterans who experience combat-related mild traumatic brain injuries and/or trauma exposure, and represent a primary barrier to successful societal reintegration and everyday functioning. Anecdotally within clinical practice, verbal learning and memory performance frequently appears differentially reduced versus visual learning and memory scores. We sought to empirically investigate the robustness of a verbal versus visual learning and memory discrepancy and to explore potential mechanisms for a verbal/visual performance split. Participants consisted of 103 veterans with reported history of mild traumatic brain injuries returning home from U.S. military Operations Enduring Freedom and Iraqi Freedom referred for outpatient neuropsychological evaluation. Findings indicate that visual learning and memory abilities were largely intact while verbal learning and memory performance was significantly reduced in comparison, residing at approximately 1.1 SD below the mean for verbal learning and approximately 1.4 SD below the mean for verbal memory. This difference was not observed in verbal versus visual fluency performance, nor was it associated with estimated premorbid verbal abilities or traumatic brain injury history. In our sample, symptoms of depression, but not posttraumatic stress disorder, were significantly associated with reduced composite verbal learning and memory performance. Verbal learning and memory performance may benefit from targeted treatment of depressive symptomatology. Also, because visual learning and memory functions may remain intact, these might be emphasized when applying neurocognitive rehabilitation interventions to compensate for observed verbal learning and memory difficulties.

  17. Visual prosthesis wireless energy transfer system optimal modeling.

    Science.gov (United States)

    Li, Xueping; Yang, Yuan; Gao, Yong

    2014-01-16

    A wireless energy transfer system is an effective way to solve the energy supply problem of a visual prosthesis, and theoretical modeling of the system is a prerequisite for optimal system design. Starting from the ideal model of the wireless energy transfer system, the model is optimized for the conditions of the visual prosthesis application. In the optimized model, planar spiral coils are taken as the coupling devices between the energy transmitter and receiver, the parasitic capacitance of the transfer coils is considered, and the concept of biological capacitance is introduced to account for the influence of biological tissue on the energy transfer efficiency, making the model more accurate for the actual application. Simulation data from the optimized model are compared with those of the previous ideal model; the results show that under high-frequency conditions, the coil parasitic capacitance and the biological capacitance considered in the optimized model can have a large impact on the wireless energy transfer system. Further comparison with experimental data verifies the validity and accuracy of the proposed model. The optimized model provides theoretical guidance for further research on wireless energy transfer systems and a more precise model reference for solving the power supply problem in clinical applications of visual prostheses.
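
    For orientation, the sketch below evaluates the textbook maximum-efficiency figure of merit for a two-coil inductive resonant link, eta = x / (1 + sqrt(1 + x))^2 with x = k^2*Q1*Q2. It deliberately ignores the coil parasitic capacitance and the biological capacitance that the optimized model adds, so it serves only as an upper-bound estimate; the coupling coefficients and quality factors are placeholders.

        import math

        def link_efficiency(k, q1, q2):
            """Maximum efficiency of a two-coil inductive resonant link: x / (1 + sqrt(1 + x))^2."""
            x = (k ** 2) * q1 * q2
            return x / (1.0 + math.sqrt(1.0 + x)) ** 2

        # Illustrative coupling coefficients and quality factors for small planar spiral coils.
        for k in (0.02, 0.05, 0.10):
            print(f"k={k:.2f}  eta_max={link_efficiency(k, q1=80, q2=40):.3f}")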

  18. Digital Technologies and performative pedagogies: Repositioning the visual

    Directory of Open Access Journals (Sweden)

    Kathryn Grushka

    2010-05-01

    Images are becoming a primary means of information presentation in the digitized global media, and digital technologies have emancipated and democratized the image. This allows images to be reproduced and manipulated on a scale never seen before and opens new possibilities for teachers schooled in critical visuality. This paper reports on an innovative pre-service teacher training course in which a cross-curricular cohort of secondary teachers employed visual performative competencies to produce a series of learning objects on a digital platform. The resulting intertextual narratives demonstrate that the manipulation of image and text offered by digital technologies creates a powerful vehicle for investigating knowledge and understandings, evolving new meaning, and awakening latent creativity in the use of images for meaning making. This research informs the New Literacies and multimodal fields of enquiry and argues that visuality is integral to any pedagogy that purports to be relevant to the contemporary learner. It argues that the visual has been significantly under-valued as a conduit for knowledge acquisition and meaning making in the digital environment and supports the claim that critical literacy, interactivity, experimentation and production are vital to attaining the tenets of transformative education (Buckingham, 2007; Walsh, 2007; Cope & Kalantzis, 2008).

  19. Feature Fusion Based Audio-Visual Speaker Identification Using Hidden Markov Model under Different Lighting Variations

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2014-01-01

    The aim of this paper is to propose a feature-fusion-based Audio-Visual Speaker Identification (AVSI) system for varied illumination environments. Among the different fusion strategies, feature-level fusion has been used for the proposed AVSI system, with a Hidden Markov Model (HMM) used for learning and classification. Since the feature set contains richer information about the raw biometric data than any other level, integration at the feature level is expected to provide better authentication results. In this paper, Mel Frequency Cepstral Coefficients (MFCCs) and Linear Prediction Cepstral Coefficients (LPCCs) are combined to form the audio feature vectors, and Active Shape Model (ASM)-based appearance and shape facial features are concatenated to form the visual feature vectors. These combined audio and visual features are used for the feature fusion. To reduce the dimension of the audio and visual feature vectors, Principal Component Analysis (PCA) is used. The VALID audio-visual database, in which four different illumination levels are considered, is used to measure the performance of the proposed system. Experimental results demonstrate the significance of the proposed audio-visual speaker identification system for various combinations of audio and visual features.
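
    As a rough sketch of the feature-level fusion pipeline (not the authors' code), the example below concatenates per-frame audio and visual feature vectors, reduces them with PCA and scores one Gaussian HMM per enrolled speaker. Random arrays stand in for real MFCC/LPCC and ASM features, and the scikit-learn and hmmlearn libraries are assumptions; neither is named in the paper.

        import numpy as np
        from sklearn.decomposition import PCA
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(0)
        n_speakers, n_frames = 3, 200

        def fused_features(speaker_offset):
            audio = rng.normal(speaker_offset, 1.0, (n_frames, 24))    # MFCC + LPCC stand-in
            visual = rng.normal(-speaker_offset, 1.0, (n_frames, 30))  # ASM shape/appearance stand-in
            return np.hstack([audio, visual])                          # feature-level fusion

        train = [fused_features(s) for s in range(n_speakers)]
        pca = PCA(n_components=10).fit(np.vstack(train))               # shared dimensionality reduction

        models = []
        for feats in train:                                            # one HMM per enrolled speaker
            hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=30, random_state=0)
            hmm.fit(pca.transform(feats))
            models.append(hmm)

        test = pca.transform(fused_features(1))                        # unseen utterance of speaker 1
        scores = [m.score(test) for m in models]                       # log-likelihood per speaker model
        print("identified speaker:", int(np.argmax(scores)))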

  20. Performance modeling of network data services

    Energy Technology Data Exchange (ETDEWEB)

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

    Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories is requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and the remote users connected to them, the networking components must be optimized from a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.
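
    A generic back-of-the-envelope sketch of the kind of estimate such a model yields (a textbook bottleneck calculation, not the Sandia model): end-to-end bulk-transfer time over a path of links, bounded by the most heavily loaded link. The path, capacities, background loads and latencies below are hypothetical.

        def transfer_time(bytes_total, links):
            """links: list of (capacity_bytes_per_s, background_load_fraction, latency_s)."""
            headroom = [cap * (1.0 - load) for cap, load, _ in links]  # bandwidth left per link
            bottleneck = min(headroom)                                 # end-to-end throughput bound
            propagation = sum(lat for _, _, lat in links)
            return bytes_total / bottleneck + propagation

        # Hypothetical source-to-sink path: LAN hop, loaded WAN hop, LAN hop (bytes/s).
        path = [(12.5e6, 0.2, 0.001), (1.25e6, 0.6, 0.030), (12.5e6, 0.1, 0.001)]
        print(f"estimated time to move 1 GB: {transfer_time(1e9, path):.0f} s")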

  1. Integrative Genomics Viewer (IGV): high-performance genomics data visualization and exploration.

    Science.gov (United States)

    Thorvaldsdóttir, Helga; Robinson, James T; Mesirov, Jill P

    2013-03-01

    Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today's sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license.

  2. Development of visual working memory and distractor resistance in relation to academic performance.

    Science.gov (United States)

    Tsubomi, Hiroyuki; Watanabe, Katsumi

    2017-02-01

    Visual working memory (VWM) enables active maintenance of goal-relevant visual information in a readily accessible state. The storage capacity of VWM is severely limited, often as few as 3 simple items. Thus, it is crucial to restrict distractor information from consuming VWM capacity. The current study investigated how VWM storage and distractor resistance develop during childhood in relation to academic performance in the classroom. Elementary school children (7- to 12-year-olds) and adults (total N=140) completed a VWM task with and without visual/verbal distractors during the retention period. The results showed that VWM performance with and without distractors developed at similar rates until reaching adult levels at 10 years of age. In addition, higher VWM performance without distractors was associated with higher academic scores in literacy (reading and writing), mathematics, and science for the younger children (7- to 9-year-olds), whereas these academic scores for the older children (10- to 12-year-olds) were associated with VWM performance with visual distractors. Taken together, these results suggest that VWM storage and distractor resistance develop at a similar rate, whereas their contributions to academic performance differ with age. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Cognitive performance in visual memory and attention are influenced by many factors

    DEFF Research Database (Denmark)

    Wilms, Inge Linda; Nielsen, Simon

    Visual perception serves as the basis for much of higher-level cognitive processing as well as human activity in general. Here we present normative estimates for the following components of visual perception: the visual perceptual threshold, the visual short-term memory capacity, and the visual perceptual encoding/decoding speed (processing speed) of visual short-term memory, based on an assessment of 94 healthy subjects aged 60-75. The estimates are presented at the total sample level as well as at gender level. The estimates were modelled from input from a whole-report assessment based on A Theory ... speed of Visual Short-term Memory (VSTM) but not the capacity of VSTM nor the visual threshold. The estimates will be useful for future studies into the effects of various types of intervention and training on cognition in general and visual attention in particular. ...

  4. Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning.

    Science.gov (United States)

    Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro

    2014-01-01

    Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1-5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat.

  5. Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning

    Science.gov (United States)

    Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro

    2014-01-01

    Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1–5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat. PMID:25076874

  6. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are often faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov Chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems to be a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper therefore suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of the different models to obtain better accuracy.
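
    For illustration, the sketch below propagates a pavement condition distribution through a Markov Chain, the basic mechanics behind the MC model discussed above. The five condition states, the transition matrix entries and the time horizon are invented for the example, not calibrated values.

        import numpy as np

        # P[i, j] = probability of moving from state i+1 to state j+1 in one year (hypothetical).
        P = np.array([[0.85, 0.15, 0.00, 0.00, 0.00],
                      [0.00, 0.80, 0.20, 0.00, 0.00],
                      [0.00, 0.00, 0.75, 0.25, 0.00],
                      [0.00, 0.00, 0.00, 0.70, 0.30],
                      [0.00, 0.00, 0.00, 0.00, 1.00]])
        state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])        # new pavement: all mass in state 1 (best)

        for year in range(0, 21, 5):
            expected = state @ np.arange(1, 6)              # expected condition state (1 best, 5 worst)
            print(f"year {year:2d}: distribution {np.round(state, 2)}, expected state {expected:.2f}")
            state = state @ np.linalg.matrix_power(P, 5)    # advance five years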

  7. A self-organizing model of perisaccadic visual receptive field dynamics in primate visual and oculomotor system.

    Science.gov (United States)

    Mender, Bedeho M W; Stringer, Simon M

    2015-01-01

    We propose and examine a model for how perisaccadic visual receptive field dynamics, observed in a range of primate brain areas such as LIP, FEF, SC, V3, V3A, V2, and V1, may develop through a biologically plausible process of unsupervised visually guided learning. These dynamics are associated with remapping, which is the phenomenon where receptive fields anticipate the consequences of saccadic eye movements. We find that a neural network model using a local associative synaptic learning rule, when exposed to visual scenes in conjunction with saccades, can account for a range of associated phenomena. In particular, our model demonstrates predictive and pre-saccadic remapping, responsiveness shifts around the time of saccades, and remapping from multiple directions.

  8. Similarity, Not Complexity, Determines Visual Working Memory Performance

    Science.gov (United States)

    Jackson, Margaret C.; Linden, David E. J.; Roberts, Mark V.; Kriegeskorte, Nikolaus; Haenschel, Corinna

    2015-01-01

    A number of studies have shown that visual working memory (WM) is poorer for complex versus simple items, traditionally accounted for by higher information load placing greater demands on encoding and storage capacity limits. Other research suggests that it may not be complexity that determines WM performance per se, but rather increased…

  9. An amodal shared resource model of language-mediated visual attention

    Directory of Open Access Journals (Sweden)

    Alastair Charles Smith

    2013-08-01

    Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behaviour and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze.

  10. Visual Representation Determines Search Difficulty: Explaining Visual Search Asymmetries

    Directory of Open Access Journals (Sweden)

    Neil eBruce

    2011-07-01

    In visual search experiments there exist a variety of experimental paradigms in which a symmetric set of experimental conditions yields asymmetric corresponding task performance. There are a variety of examples of this that currently lack a satisfactory explanation. In this paper, we demonstrate that distinct classes of asymmetries may be explained by virtue of a few simple conditions that are consistent with current thinking surrounding computational modeling of visual search and coding in the primate brain. This includes a detailed look at the role that stimulus familiarity plays in the determination of search performance. Overall, we demonstrate that all of these asymmetries have a common origin, namely, they are a consequence of the encoding that appears in the visual cortex. The analysis associated with these cases yields insight into the problem of visual search in general and predictions of novel search asymmetries.

  11. Visual performance with sport-tinted contact lenses in natural sunlight.

    Science.gov (United States)

    Erickson, Graham B; Horn, Fraser C; Barney, Tyler; Pexton, Brett; Baird, Richard Y

    2009-05-01

    The use of tinted and clear contact lenses (CLs) in all aspects of life is becoming a more popular occurrence, particularly in athletic activities. This study broadens previous research regarding performance-tinted CLs and their effects on measures of visual performance. Thirty-three subjects (14 male, 19 female) were fitted with clear B&L Optima 38, 50% visible light transmission Amber and 36% visible light transmission Gray-Green Nike Maxsight CLs in an individualized randomized sequence. Subjects were dark-adapted with welding goggles before testing and in between subtests involving a Bailey-Lovie chart and the Haynes Distance Rock test. The sequence of testing was repeated for each lens modality. The Amber and Gray-Green lenses enabled subjects to recover vision faster in bright sunlight compared with clear lenses. Also, subjects were able to achieve better visual recognition in bright sunlight when compared with clear lenses. Additionally, the lenses allowed the subjects to alternate fixation between a bright and shaded target at a more rapid rate in bright sunlight as compared with clear lenses. Subjects preferred both the Amber and Gray-Green lenses over clear lenses in the bright and shadowed target conditions. The results of the current study show that Maxsight Amber and Gray-Green lenses provide better contrast discrimination in bright sunlight, better contrast discrimination when alternating between bright and shaded target conditions, better speed of visual recovery in bright sunlight, and better overall visual performance in bright and shaded target conditions compared with clear lenses.

  12. The Effect of Modeling and Visualization Resources on Student Understanding of Physical Hydrology

    Science.gov (United States)

    Marshall, Jilll A.; Castillo, Adam J.; Cardenas, M. Bayani

    2015-01-01

    We investigated the effect of modeling and visualization resources on upper-division, undergraduate and graduate students' performance on an open-ended assessment of their understanding of physical hydrology. The students were enrolled in one of five sections of a physical hydrology course. In two of the sections, students completed homework…

  13. Interactive Correlation Analysis and Visualization of Climate Data

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Kwan-Liu [Univ. of California, Davis, CA (United States)

    2016-09-21

    The relationship between our ability to analyze and extract insights from visualization of climate model output and the capability of the available resources to make those visualizations has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old visualization workflow. The traditional methods for visualizing climate output also have not kept pace with changes in the types of grids used, the number of variables involved, and the number of different simulations performed with a climate model or the feature-richness of high-resolution simulations. This project has developed new and faster methods for visualization in order to get the most knowledge out of the new generation of high-resolution climate models. While traditional climate images will continue to be useful, there is need for new approaches to visualization and analysis of climate data if we are to gain all the insights available in ultra-large data sets produced by high-resolution model output and ensemble integrations of climate models such as those produced for the Coupled Model Intercomparison Project. Towards that end, we have developed new visualization techniques for performing correlation analysis. We have also introduced highly scalable, parallel rendering methods for visualizing large-scale 3D data. This project was done jointly with climate scientists and visualization researchers at Argonne National Laboratory and NCAR.
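
    A minimal sketch of one such correlation analysis: the Pearson correlation of every grid cell's time series with a reference index, the kind of map the project renders at scale. The data below are synthetic; a real workflow would read climate model output (e.g., NetCDF files) instead.

        import numpy as np

        rng = np.random.default_rng(1)
        years, nlat, nlon = 120, 24, 48
        index = rng.normal(size=years)                              # reference climate index
        field = 0.5 * index[:, None, None] + rng.normal(size=(years, nlat, nlon))

        anom_f = field - field.mean(axis=0)                         # anomalies in time, per cell
        anom_i = index - index.mean()
        corr = (anom_f * anom_i[:, None, None]).sum(axis=0) / (
            np.sqrt((anom_f ** 2).sum(axis=0)) * np.sqrt((anom_i ** 2).sum()))

        print("correlation map shape:", corr.shape,
              "| mean |r| =", round(float(np.abs(corr).mean()), 2))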

  14. Modeling and visual simulation of Microalgae photobioreactor

    Science.gov (United States)

    Zhao, Ming; Hou, Dapeng; Hu, Dawei

    Microalgae are nutritious, autotrophic organisms with high photosynthetic efficiency that are widely distributed on land and in the sea. They can be used extensively in medicine, food, aerospace, biotechnology, environmental protection and other fields. Photobioreactors are the key equipment for cultivating microalgae at large scale and high density. In this paper, based on a mathematical model of microalgae grown under different light intensities, a three-dimensional visualization model was built and implemented in 3ds Max, Virtools and other three-dimensional software. Because microalgae are photosynthetic organisms, they efficiently produce oxygen and absorb carbon dioxide. The goal of the visual simulation is to display these changes, and their impact on oxygen and carbon dioxide, intuitively. Different temperatures and light intensities were selected to control the photobioreactor, and the dynamic changes of microalgal biomass, oxygen and carbon dioxide were observed, with the aim of providing visualization support for research on microalgae and photobioreactors.
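
    The sketch below shows the kind of growth model that can drive such a visualization: a light-limited, density-capped biomass equation with oxygen and carbon dioxide balances tied to growth through yield coefficients. All parameter values and yields are assumptions for illustration, not the authors'.

        import numpy as np

        mu_max, K_I = 0.06, 120.0        # 1/h and umol photons/m^2/s (assumed)
        X_max = 4.0                      # g/L carrying capacity (assumed)
        Y_O2, Y_CO2 = 1.3, 1.8           # g O2 released / g CO2 consumed per g biomass (assumed)

        def simulate(I, hours=120, dt=0.1, X0=0.05):
            X, O2, CO2 = X0, 0.0, 5.0    # biomass g/L, cumulative O2 g/L, CO2 pool g/L
            for _ in np.arange(0.0, hours, dt):
                mu = mu_max * I / (K_I + I) * (1.0 - X / X_max)   # light- and density-limited growth
                dX = mu * X * dt
                X, O2, CO2 = X + dX, O2 + Y_O2 * dX, max(CO2 - Y_CO2 * dX, 0.0)
            return X, O2, CO2

        for I in (50, 150, 400):         # three light intensities, umol photons/m^2/s
            X, O2, CO2 = simulate(I)
            print(f"I = {I:3d}: biomass {X:.2f} g/L, O2 released {O2:.2f} g/L, CO2 remaining {CO2:.2f} g/L")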

  15. Effect of marihuana and alcohol on visual search performance

    Science.gov (United States)

    1976-10-01

    Two experiments were performed to determine the effects of alcohol and marihuana on visual scanning patterns in a simulated driving situation. In the first experiment 27 male heavy drinkers were divided into 3 groups of 9, defined by three blood alco...

  16. Guidelines for visualizing and annotating rule-based models.

    Science.gov (United States)

    Chylek, Lily A; Hu, Bin; Blinov, Michael L; Emonet, Thierry; Faeder, James R; Goldstein, Byron; Gutenkunst, Ryan N; Haugh, Jason M; Lipniacki, Tomasz; Posner, Richard G; Yang, Jin; Hlavacek, William S

    2011-10-01

    Rule-based modeling provides a means to represent cell signaling systems in a way that captures site-specific details of molecular interactions. For rule-based models to be more widely understood and (re)used, conventions for model visualization and annotation are needed. We have developed the concepts of an extended contact map and a model guide for illustrating and annotating rule-based models. An extended contact map represents the scope of a model by providing an illustration of each molecule, molecular component, direct physical interaction, post-translational modification, and enzyme-substrate relationship considered in a model. A map can also illustrate allosteric effects, structural relationships among molecular components, and compartmental locations of molecules. A model guide associates elements of a contact map with annotation and elements of an underlying model, which may be fully or partially specified. A guide can also serve to document the biological knowledge upon which a model is based. We provide examples of a map and guide for a published rule-based model that characterizes early events in IgE receptor (FcεRI) signaling. We also provide examples of how to visualize a variety of processes that are common in cell signaling systems but not considered in the example model, such as ubiquitination. An extended contact map and an associated guide can document knowledge of a cell signaling system in a form that is visual as well as executable. As a tool for model annotation, a map and guide can communicate the content of a model clearly and with precision, even for large models.

  17. Age and visual impairment decrease driving performance as measured on a closed-road circuit.

    Science.gov (United States)

    Wood, Joanne M

    2002-01-01

    In this study the effects of visual impairment and age on driving were investigated and related to visual function. Participants were 139 licensed drivers (young, middle-aged, and older participants with normal vision, and older participants with ocular disease). Driving performance was assessed during the daytime on a closed-road driving circuit. Visual performance was assessed using a vision testing battery. Age and visual impairment had a significant detrimental effect on recognition tasks (detection and recognition of signs and hazards), time to complete driving tasks (overall course time, reversing, and maneuvering), maneuvering ability, divided attention, and an overall driving performance index. All vision measures were significantly affected by group membership. A combination of motion sensitivity, useful field of view (UFOV), Pelli-Robson letter contrast sensitivity, and dynamic acuity could predict 50% of the variance in overall driving scores. These results indicate that older drivers with either normal vision or visual impairment had poorer driving performance compared with younger or middle-aged drivers with normal vision. The inclusion of tests such as motion sensitivity and the UFOV significantly improve the predictive power of vision tests for driving performance. Although such measures may not be practical for widespread screening, their application in selected cases should be considered.

  18. Application of High-performance Visual Analysis Methods to Laser Wakefield Particle Acceleration Data

    International Nuclear Information System (INIS)

    Rubel, Oliver; Prabhat, Mr.; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2008-01-01

    Our work combines and extends techniques from high-performance scientific data management and visualization to enable scientific researchers to gain insight from extremely large, complex, time-varying laser wakefield particle accelerator simulation data. We extend histogram-based parallel coordinates for use in visual information display as well as an interface for guiding and performing data mining operations, which are based upon multi-dimensional and temporal thresholding and data subsetting operations. To achieve very high performance on parallel computing platforms, we leverage FastBit, a state-of-the-art index/query technology, to accelerate data mining and multi-dimensional histogram computation. We show how these techniques are used in practice by scientific researchers to identify, visualize and analyze a particle beam in a large, time-varying dataset.
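
    A serial numpy stand-in for the query-then-histogram workflow described above; in the actual system, FastBit bitmap indexes and parallel histogram computation make this fast at scale. The particle arrays and thresholds here are synthetic.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 1_000_000
        x = rng.uniform(0.0, 100.0, n)        # longitudinal position (arbitrary units)
        px = rng.normal(0.0, 1.0, n)          # longitudinal momentum
        energy = rng.lognormal(1.0, 0.8, n)   # particle energy

        # Multi-dimensional threshold query: high-energy particles inside a spatial window.
        selected = (x > 60.0) & (x < 80.0) & (energy > 10.0)

        # Conditional 2D histogram of the selected subset (the kind fed to parallel coordinates).
        hist, x_edges, px_edges = np.histogram2d(x[selected], px[selected], bins=(64, 64))
        print(f"selected {selected.sum()} of {n} particles; densest bin holds {int(hist.max())}")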

  19. A Method to Train Marmosets in Visual Working Memory Task and Their Performance.

    Science.gov (United States)

    Nakamura, Katsuki; Koba, Reiko; Miwa, Miki; Yamaguchi, Chieko; Suzuki, Hiromi; Takemoto, Atsushi

    2018-01-01

    Learning and memory processes are similarly organized in humans and monkeys; therefore, monkeys can be ideal models for analyzing human aging processes and neurodegenerative diseases such as Alzheimer's disease. With the development of novel gene modification methods, common marmosets (Callithrix jacchus) have been suggested as an animal model for neurodegenerative diseases. Furthermore, the common marmoset's lifespan is relatively short, which makes it a practical animal model for aging. Working memory deficits are a prominent symptom of both dementia and aging, but no data are currently available for visual working memory in common marmosets. The delayed matching-to-sample task is a powerful tool for evaluating visual working memory in humans and monkeys; therefore, we developed a novel procedure for training common marmosets in such a task. Using visual discrimination and reversal tasks to direct the marmosets' attention to the physical properties of visual stimuli, we successfully trained 11 out of 13 marmosets in the initial stage of the delayed matching-to-sample task and provided the first available data on visual working memory in common marmosets. We found that the marmosets required many trials to initially learn the task (median: 1316 trials), but once the task was learned, the animals needed fewer trials to learn the task with novel stimuli (476 trials or fewer, with the exception of one marmoset). The marmosets could retain visual information for up to 16 s. Our novel training procedure could enable us to use the common marmoset as a useful non-human primate model for studying visual working memory deficits in neurodegenerative diseases and aging.

  20. The Effect of Covert Modeling on Communication Apprehension, Communication Confidence, and Performance.

    Science.gov (United States)

    Nimocks, Mittie J.; Bromley, Patricia L.; Parsons, Theron E.; Enright, Corinne S.; Gates, Elizabeth A.

    This study examined the effect of covert modeling on communication apprehension, public speaking anxiety, and communication competence. Students identified as highly communication apprehensive received covert modeling, a technique in which one first observes a model doing a behavior, then visualizes oneself performing the behavior and obtaining a…

  1. Visual search performance in infants associates with later ASD diagnosis

    Directory of Open Access Journals (Sweden)

    C.H.M. Cheung

    2018-01-01

    An enhanced ability to detect visual targets amongst distractors, known as visual search (VS), has often been documented in Autism Spectrum Disorders (ASD). Yet it is unclear when this behaviour emerges in development and whether it is specific to ASD. We followed up infants at high and low familial risk for ASD to investigate how early VS abilities link to later ASD diagnosis, the potential underlying mechanisms of this association, and the specificity of superior VS to ASD. Clinical diagnosis of ASD as well as dimensional measures of ASD, attention-deficit/hyperactivity disorder (ADHD) and anxiety symptoms were ascertained at 3 years. At 9 and 15 months, but not at age 2 years, high-risk children who later met clinical criteria for ASD (HR-ASD) had better VS performance than those without a later diagnosis and low-risk controls. Although HR-ASD children were also more attentive to the task at 9 months, this did not explain search performance. Superior VS specifically predicted ASD at 3 years but not ADHD or anxiety symptoms. Our results demonstrate that atypical perception and core ASD symptoms of social interaction and communication are closely and selectively associated during early development, and suggest causal links between perceptual and social features of ASD. Keywords: Visual search, Visual attention, ASD, ADHD, Infant, Familial risk

  2. An efficient visual saliency detection model based on Ripplet transform

    Indian Academy of Sciences (India)

    A Diana Andrushia

    Only fragments of this record's abstract were extracted. They indicate that the Ripplet transform has not been well investigated in human visual attention models, that the proposed method produces a saliency map with the same resolution as the input, and that salient regions are obtained independent of their sizes. Keywords: Ripplet transform; visual saliency model; Receiver Operating Characteristics (ROC).

  3. Interaction of hypertension and age in visual selective attention performance.

    Science.gov (United States)

    Madden, D J; Blumenthal, J A

    1998-01-01

    Previous research suggests that some aspects of cognitive performance decline as a joint function of age and hypertension. In this experiment, 51 unmedicated individuals with mild essential hypertension and 48 normotensive individuals, 18-78 years of age, performed a visual search task. The estimated time required to identify a display character and shift attention between display positions increased with age. This attention shift time did not differ significantly between hypertensive and normotensive participants, but regression analyses indicated some mediation of the age effect by blood pressure. For individuals less than 60 years of age, the error rate was greater for hypertensive than for normotensive participants. Although the present design could detect effects of only moderate to large size, the results suggest that effects of hypertension may be more evident in a relatively general measure of performance (mean error rate) than in the speed of shifting visual attention.

  4. Comparative Visual Analysis of Structure-Performance Relations in Complex Bulk-Heterojunction Morphologies

    KAUST Repository

    Aboulhassan, A.

    2017-07-04

    The structure of Bulk-Heterojunction (BHJ) materials, the main component of organic photovoltaic solar cells, is very complex, and the relationship between structure and performance is still largely an open question. Overall, there is a wide spectrum of fabrication configurations resulting in different BHJ morphologies and correspondingly different performances. Current state-of-the-art methods for assessing the performance of BHJ morphologies are either based on global quantification of morphological features or simply on visual inspection of the morphology based on experimental imaging. This makes finding optimal BHJ structures very challenging. Moreover, finding the optimal fabrication parameters to get an optimal structure is still an open question. In this paper, we propose a visual analysis framework to help answer these questions through comparative visualization and parameter space exploration for local morphology features. With our approach, we enable scientists to explore multivariate correlations between local features and performance indicators of BHJ morphologies. Our framework is built on shape-based clustering of local cubical regions of the morphology that we call patches. This enables correlating the features of clusters with intuition-based performance indicators computed from geometrical and topological features of charge paths.

  5. Comparative Visual Analysis of Structure-Performance Relations in Complex Bulk-Heterojunction Morphologies

    KAUST Repository

    Aboulhassan, A.; Sicat, R.; Baum, D.; Wodo, O.; Hadwiger, Markus

    2017-01-01

    The structure of Bulk-Heterojunction (BHJ) materials, the main component of organic photovoltaic solar cells, is very complex, and the relationship between structure and performance is still largely an open question. Overall, there is a wide spectrum of fabrication configurations resulting in different BHJ morphologies and correspondingly different performances. Current state-of-the-art methods for assessing the performance of BHJ morphologies are either based on global quantification of morphological features or simply on visual inspection of the morphology based on experimental imaging. This makes finding optimal BHJ structures very challenging. Moreover, finding the optimal fabrication parameters to get an optimal structure is still an open question. In this paper, we propose a visual analysis framework to help answer these questions through comparative visualization and parameter space exploration for local morphology features. With our approach, we enable scientists to explore multivariate correlations between local features and performance indicators of BHJ morphologies. Our framework is built on shape-based clustering of local cubical regions of the morphology that we call patches. This enables correlating the features of clusters with intuition-based performance indicators computed from geometrical and topological features of charge paths.
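
    To make the patch idea concrete, the sketch below cuts a synthetic two-phase volume into cubical patches, describes each patch with two simple features and clusters the descriptors with k-means. The features, patch size and cluster count are stand-ins chosen for illustration, not a reimplementation of the paper's shape-based clustering.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(3)
        vol = (rng.random((64, 64, 64)) < 0.45).astype(int)   # 1 = donor, 0 = acceptor (synthetic)

        def patch_features(v, size=8):
            feats = []
            for i in range(0, v.shape[0], size):
                for j in range(0, v.shape[1], size):
                    for k in range(0, v.shape[2], size):
                        p = v[i:i + size, j:j + size, k:k + size]
                        donor_fraction = p.mean()
                        # crude interface measure: voxel faces where the phase changes
                        interface = sum(np.abs(np.diff(p, axis=a)).sum() for a in range(3))
                        feats.append([donor_fraction, interface / p.size])
            return np.array(feats, dtype=float)

        X = patch_features(vol)
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
        for c in range(4):
            members = X[labels == c]
            print(f"cluster {c}: {len(members):3d} patches, mean donor fraction "
                  f"{members[:, 0].mean():.2f}, mean interface density {members[:, 1].mean():.2f}")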

  6. Adapting models of visual aesthetics for personalized content creation

    DEFF Research Database (Denmark)

    Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian

    2012-01-01

    This paper introduces a search-based approach to personalized content generation with respect to visual aesthetics. The approach is based on a two-step adaptation procedure where (1) the evaluation function that characterizes the content is adjusted to match the visual aesthetics of users and (2) the content itself is optimized based on the personalized evaluation function. To test the efficacy of the approach we design fitness functions based on universal properties of visual perception, inspired by psychological and neurobiological research. Using these visual properties we generate aesthetically ... spaceships according to their visual taste: the impact of the various visual properties is adjusted based on player preferences, and new content is generated online based on the updated computational model of the player's visual aesthetics. Results are presented which show the potential of the approach ...

  7. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    Science.gov (United States)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output, conversion of ASCII-formatted LiDAR point cloud data into LiDAR image data whose pixel values correspond to the altitude measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our graphical user interface (GUI) visualization system for LiDAR data are demonstrated.
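
    A minimal sketch of the point-cloud-to-image step: grid the x/y plane and set each pixel to the highest return elevation falling in that cell. The synthetic points, the 1 m grid spacing and the file name mentioned in the comment are assumptions for illustration.

        import numpy as np

        # In practice the points would come from something like np.loadtxt("tile.xyz"); here they are faked.
        rng = np.random.default_rng(5)
        pts = np.column_stack([rng.uniform(0, 200, 50_000),    # x (m)
                               rng.uniform(0, 200, 50_000),    # y (m)
                               rng.uniform(10, 45, 50_000)])   # z (m): ground and roof returns

        cell = 1.0                                             # 1 m pixels
        ix = ((pts[:, 0] - pts[:, 0].min()) / cell).astype(int)
        iy = ((pts[:, 1] - pts[:, 1].min()) / cell).astype(int)

        img = np.full((iy.max() + 1, ix.max() + 1), -np.inf)
        np.maximum.at(img, (iy, ix), pts[:, 2])                # keep the highest z per pixel
        img[~np.isfinite(img)] = np.nan                        # cells with no returns
        print("elevation image:", img.shape, "| highest pixel:", round(float(np.nanmax(img)), 1), "m")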

  8. Visualization of Distributed Data Structures for High Performance Fortran-Like Languages

    Directory of Open Access Journals (Sweden)

    Rainer Koppler

    1997-01-01

    This article motivates the use of graphics and visualization for efficient utilization of High Performance Fortran's (HPF's) data distribution facilities. It proposes a graphical toolkit consisting of exploratory and estimation tools which allow the programmer to navigate through complex distributions and to obtain graphical ratings with respect to load distribution and communication. The toolkit has been implemented in a mapping design and visualization tool which is coupled with a compilation system for the HPF predecessor Vienna Fortran. Since this language covers a superset of HPF's facilities, the tool may also be used for visualization of HPF data structures.
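
    As a small illustration of the kind of rating such a toolkit computes, the sketch below maps array indices to processors under BLOCK and CYCLIC distributions and reports the load imbalance for a triangular workload; it is a generic example, not tied to the Vienna Fortran tool itself.

        import numpy as np

        def owners(n, nproc, kind):
            idx = np.arange(n)
            if kind == "BLOCK":
                block = -(-n // nproc)          # HPF-style block size (ceiling division)
                return idx // block
            if kind == "CYCLIC":
                return idx % nproc
            raise ValueError(kind)

        n, nproc = 1000, 7
        work = np.arange(n, dtype=float)        # e.g., a triangular loop: iteration i costs i units
        for kind in ("BLOCK", "CYCLIC"):
            per_proc = np.bincount(owners(n, nproc, kind), weights=work, minlength=nproc)
            print(f"{kind:6s}: work per processor {per_proc.astype(int).tolist()}, "
                  f"imbalance {per_proc.max() / per_proc.mean():.2f}")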

  9. A Visual Basic simulation software tool for performance analysis of a membrane-based advanced water treatment plant.

    Science.gov (United States)

    Pal, P; Kumar, R; Srivastava, N; Chaudhuri, J

    2014-02-01

    A Visual Basic simulation software (WATTPPA) has been developed to analyse the performance of an advanced wastewater treatment plant. This user-friendly and menu-driven software is based on the dynamic mathematical model for an industrial wastewater treatment scheme that integrates chemical, biological and membrane-based unit operations. The software-predicted results agree very well with the experimental findings, as indicated by an overall correlation coefficient of the order of 0.99. The software permits pre-analysis and manipulation of input data, helps in optimization and exhibits the performance of an integrated plant visually on a graphical platform. It allows quick performance analysis of the whole system as well as the individual units. The software, the first of its kind in its domain and in the well-known Microsoft Excel environment, is likely to be very useful in the successful design, optimization and operation of an advanced hybrid treatment plant for hazardous wastewater.

  10. A Neural Network Model of the Visual Short-Term Memory

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Hansen, Lars Kai

    2009-01-01

    In this paper a neural network model of Visual Short-Term Memory (VSTM) is presented. The model links closely with Bundesen’s (1990) well-established mathematical theory of visual attention. We evaluate the model’s ability to fit experimental data from a classical whole and partial report study...

  11. Effect of pupil size on visual acuity in a laboratory model of pseudophakic monovision.

    Science.gov (United States)

    Kawamorita, Takushi; Uozato, Hiroshi; Handa, Tomoya; Ito, Misae; Shimizu, Kimiya

    2010-05-01

    To investigate the effect of pupil size on visual acuity in pseudophakic monovision. For the simulation, a modified Liou-Brennan model eye was used. The model eye was designed to include a centered optical system, corneal asphericity, an iris pupil, a Stiles-Crawford effect, an intraocular lens, and chromatic aberration. Calculation of the modulation transfer function (MTF) was performed with ZEMAX software. Visual acuity was estimated from the MTF and the retinal threshold curve. The sizes of the entrance pupil were 2.0, 2.5, 3.0, and 4.0 mm. Decreasing pupil diameter and increasing myopia progressively improved near visual acuity. For an entrance pupil size of 2.5 mm and a refractive error of -1.50 diopters, the logMAR value (Snellen; metric) in the non-dominant eye at 40 cm was 0.06 (20/23; 6/6.9). Knowledge of the patient's pupil diameter at near fixation can assist surgeons in determining the optimum degree of myopia for successful monovision.
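
    For orientation, the sketch below evaluates only the diffraction-limited part of the optical MTF for the pupil sizes used in the study; defocus, higher-order aberrations, the Stiles-Crawford effect and the retinal threshold curve of the actual model eye are omitted, so the numbers are illustrative rather than comparable to the paper's.

        import numpy as np

        def diffraction_mtf(freq_cpd, pupil_mm, wavelength_nm=555.0):
            """Incoherent diffraction-limited MTF of a circular pupil; frequency in cycles/degree."""
            cutoff = (pupil_mm * 1e-3 / (wavelength_nm * 1e-9)) * np.pi / 180.0   # cycles/degree
            f = np.clip(np.asarray(freq_cpd, dtype=float) / cutoff, 0.0, 1.0)
            return (2.0 / np.pi) * (np.arccos(f) - f * np.sqrt(1.0 - f ** 2))

        for pupil in (2.0, 2.5, 3.0, 4.0):                 # the entrance pupil sizes used in the study
            m30 = float(diffraction_mtf(30.0, pupil))      # 30 cycles/degree roughly corresponds to 20/20
            print(f"pupil {pupil:.1f} mm: diffraction-limited contrast transfer at 30 cpd = {m30:.2f}")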

  12. Similarity relations in visual search predict rapid visual categorization

    Science.gov (United States)

    Mohan, Krithika; Arun, S. P.

    2012-01-01

    How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947

  13. Modeling DNA structure and processes through animation and kinesthetic visualizations

    Science.gov (United States)

    Hager, Christine

    There have been many studies regarding the effectiveness of visual aids that go beyond that of static illustrations. Many of these have been concentrated on the effectiveness of visual aids such as animations and models or even non-traditional visual aid activities like role-playing activities. This study focuses on the effectiveness of three different types of visual aids: models, animation, and a role-playing activity. Students used a modeling kit made of Styrofoam balls and toothpicks to construct nucleotides and then bond nucleotides together to form DNA. Next, students created their own animation to depict the processes of DNA replication, transcription, and translation. Finally, students worked in teams to build proteins while acting out the process of translation. Students were given a pre- and post-test that measured their knowledge and comprehension of the four topics mentioned above. Results show that there was a significant gain in the post-test scores when compared to the pre-test scores. This indicates that the incorporated visual aids were effective methods for teaching DNA structure and processes.

  14. Visual bottle inspection performance in highly paced belt-conveyor systems.

    Science.gov (United States)

    Saito, M; Tanaka, T

    1977-12-01

    The relation between visual work performance and several variables with immediate effects, i.e., lighting, work speed, and work spell, was studied. At the same time, some physiological and behavioral variations obtained in the study were also discussed. Through experiments and surveys, the optimum conditions were found for each variable. Work performance, however, is affected in such a subtle, interactive and dynamic manner that working conditions must be adjusted by taking into account not only the variables with immediate effects but also indirectly related ones that meet the real needs of practical work settings. Improvement of some physical working conditions, such as lighting, produces only a transitory increase in performance, which is unstable unless other determinants of performance are managed properly at the same time. The same applies to setting the optimum work speed and work spell for highly paced visual inspection, and to variations in rejection rate and in physiological functions interacting with individual and other determinants. To maximize understanding of how these determinants act in an integrated way, it is emphasized that work performance must be studied within a comprehensive framework and from a long-term perspective.

  15. Effectiveness of Interventions to Address Visual and Visual-Perceptual Impairments to Improve Occupational Performance in Adults With Traumatic Brain Injury: A Systematic Review.

    Science.gov (United States)

    Berger, Sue; Kaldenberg, Jennifer; Selmane, Romeissa; Carlo, Stephanie

    2016-01-01

    Visual and visual-perceptual impairments occur frequently with traumatic brain injury (TBI) and influence occupational performance. This systematic review examined the effectiveness of interventions within the scope of occupational therapy to improve occupational performance for adults with visual and visual-perceptual impairments as a result of TBI. Medline, PsycINFO, CINAHL, OTseeker, and the Cochrane Database of Systematic Reviews were searched, and 66 full text articles were reviewed. Sixteen articles were included in the review. Strong evidence supports the use of scanning, limited evidence supports the use of adaptive strategies, and mixed evidence supports the use of cognitive interventions to improve occupational performance for adults with TBI. Evidence related to vision therapy varies on the basis of the specific intervention implemented. Although the strength of the research varied, implications are discussed for practice, education, and research. Copyright © 2016 by the American Occupational Therapy Association, Inc.

  16. From Big Data to Big Displays High-Performance Visualization at Blue Brain

    KAUST Repository

    Eilemann, Stefan

    2017-10-19

    Blue Brain has pushed high-performance visualization (HPV) to complement its HPC strategy since its inception in 2007. In 2011, this strategy has been accelerated to develop innovative visualization solutions through increased funding and strategic partnerships with other research institutions. We present the key elements of this HPV ecosystem, which integrates C++ visualization applications with novel collaborative display systems. We motivate how our strategy of transforming visualization engines into services enables a variety of use cases, not only for the integration with high-fidelity displays, but also to build service oriented architectures, to link into web applications and to provide remote services to Python applications.

  17. Towards a visual modeling approach to designing microelectromechanical system transducers

    Science.gov (United States)

    Dewey, Allen; Srinivasan, Vijay; Icoz, Evrim

    1999-12-01

    In this paper, we address initial design capture and system conceptualization of microelectromechanical system transducers based on visual modeling and design. Visual modeling frames the task of generating hardware description language (analog and digital) component models in a manner similar to the task of generating software programming language applications. A structured topological design strategy is employed, whereby microelectromechanical foundry cell libraries are utilized to facilitate the design process of exploring candidate cells (topologies), varying key aspects of the transduction for each topology, and determining which topology best satisfies design requirements. Coupled-energy microelectromechanical system characterizations at a circuit level of abstraction are presented that are based on branch constitutive relations and an overall system of simultaneous differential and algebraic equations. The resulting design methodology is called visual integrated-microelectromechanical VHDL-AMS interactive design (VHDL-AMS is the analog and mixed-signal extension of the VHDL hardware description language).

  18. The identification and modeling of visual cue usage in manual control task experiments

    Science.gov (United States)

    Sweet, Barbara Townsend

    ... variety of perspective scenes. The potential of using the model for visual cue identification was also investigated, with promising results. A third experiment was performed to compare perspective displays with more conventional display types.

  19. Mental practice with interactive 3D visual aids enhances surgical performance.

    Science.gov (United States)

    Yiasemidou, Marina; Glassman, Daniel; Mushtaq, Faisal; Athanasiou, Christos; Williams, Mark-Mon; Jayne, David; Miskovic, Danilo

    2017-10-01

    Evidence suggests that Mental Practice (MP) could be used to finesse surgical skills. However, MP is cognitively demanding and may be dependent on the ability of individuals to produce mental images. In this study, we hypothesised that the provision of interactive 3D visual aids during MP could facilitate surgical skill performance. 20 surgical trainees were case-matched to one of three different preparation methods prior to performing a simulated Laparoscopic Cholecystectomy (LC). Two intervention groups underwent a 25-minute MP session; one with interactive 3D visual aids depicting the relevant surgical anatomy (3D-MP group, n = 5) and one without (MP-Only, n = 5). A control group (n = 10) watched a didactic video of a real LC. Scores relating to technical performance and safety were recorded by a surgical simulator. The Control group took longer to complete the procedure relative to the 3D&MP condition (p = .002). The number of movements was also statistically different across groups (p = .001), with the 3D&MP group making fewer movements relative to controls (p = .001). Likewise, the control group moved further in comparison to the 3D&MP condition and the MP-Only condition (p = .004). No reliable differences were observed for safety metrics. These data provide evidence for the potential value of MP in improving performance. Furthermore, they suggest that 3D interactive visual aids during MP could potentially enhance performance, beyond the benefits of MP alone. These findings pave the way for future RCTs on surgical preparation and performance.

  20. Simulating Visual Attention Allocation of Pilots in an Advanced Cockpit Environment

    Science.gov (United States)

    Frische, F.; Osterloh, J.-P.; Luedtke, A.

    2011-01-01

    This paper describes the results of experiments conducted with human line pilots and a cognitive pilot model during interaction with a new 4D Flight Management System (FMS). The aim of these experiments was to gather human pilot behavior data in order to calibrate the behavior of the model. Human behavior is mainly triggered by visual perception. Thus, the main aspect was to set up a profile of human pilots' visual attention allocation in a cockpit environment containing the new FMS. We first performed statistical analyses of eye tracker data and then compared our results to common results of familiar analyses in standard cockpit environments. The comparison has shown a significant influence of the new system on the visual performance of human pilots. Furthermore, analyses of the pilot model's visual performance have been performed. A comparison to human pilots' visual performance revealed important potential for improvement.

  1. Adapting the Theory of Visual Attention (TVA) to model auditory attention

    DEFF Research Database (Denmark)

    Roberts, Katherine L.; Andersen, Tobias; Kyllingsbæk, Søren

    Mathematical and computational models have provided useful insights into normal and impaired visual attention, but less progress has been made in modelling auditory attention. We are developing a Theory of Auditory Attention (TAA), based on an influential visual model, the Theory of Visual Attention (TVA). We report that TVA provides a good fit to auditory data when the stimuli are closely matched to those used in visual studies. In the basic visual TVA task, participants view a brief display of letters and are asked to report either all of the letters (whole report) or a subset of letters ... the auditory data, producing good estimates of the rate at which information is encoded (C), the minimum exposure duration required for processing to begin (t0), and the relative attentional weight to targets versus distractors (α). Future work will address the issue of target-distractor confusion, and extend ...

  2. Different developmental trajectories across feature types support a dynamic field model of visual working memory development.

    Science.gov (United States)

    Simmering, Vanessa R; Miller, Hilary E; Bohache, Kevin

    2015-05-01

    Research on visual working memory has focused on characterizing the nature of capacity limits as "slots" or "resources" based almost exclusively on adults' performance with little consideration for developmental change. Here we argue that understanding how visual working memory develops can shed new light on the nature of representations. We present an alternative model, the Dynamic Field Theory (DFT), which can capture effects that have been previously attributed either to "slot" or "resource" explanations. The DFT includes a specific developmental mechanism to account for improvements in both resolution and capacity of visual working memory throughout childhood. Here we show how development in the DFT can account for different capacity estimates across feature types (i.e., color and shape). The current paper tests this account by comparing children's (3, 5, and 7 years of age) performance across different feature types. Results showed that capacity for colors increased faster over development than capacity for shapes. A second experiment confirmed this difference across feature types within subjects, but also showed that the difference can be attenuated by testing memory for less familiar colors. Model simulations demonstrate how developmental changes in connectivity within the model, purportedly arising through experience, can capture differences across feature types.
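
    A toy one-dimensional dynamic field in the spirit of the DFT is sketched below: a brief localized input leaves behind a self-sustained activation peak, the field-model stand-in for a working memory representation. The interaction kernel and all parameter values are illustrative choices, not those of the published model.

        import numpy as np

        n, dt, tau, h = 181, 1.0, 10.0, -5.0                  # field size, step (ms), time constant, resting level
        x = np.arange(n)
        f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * np.clip(u, -50, 50)))   # sigmoidal output function

        d = np.arange(n) - n // 2                             # lag axis for the interaction kernel
        # local excitation minus broader lateral inhibition (illustrative values)
        w = 6.0 * np.exp(-d ** 2 / (2 * 4.0 ** 2)) - 2.5 * np.exp(-d ** 2 / (2 * 10.0 ** 2))

        u = np.full(n, h)                                     # field starts at the resting level
        stimulus = 8.0 * np.exp(-(x - 90) ** 2 / (2 * 4.0 ** 2))   # localized input at feature value 90

        for t in range(600):
            s = stimulus if t < 200 else 0.0                  # input is removed after 200 ms
            interaction = np.convolve(f(u), w, mode="same")   # lateral interactions across the field
            u = u + dt / tau * (-u + h + s + interaction)

        print("peak activation after input removal:", round(float(u.max()), 2),
              "at feature value", int(np.argmax(u)))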

  3. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2018-02-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  4. Experimental validation of a Bayesian model of visual acuity.

    LENUS (Irish Health Repository)

    Dalimier, Eugénie

    2009-01-01

    Based on standard procedures used in optometry clinics, we compare measurements of visual acuity for 10 subjects (11 eyes tested) in the presence of natural ocular aberrations and different degrees of induced defocus, with the predictions given by a Bayesian model customized with aberrometric data of the eye. The absolute predictions of the model, without any adjustment, show good agreement with the experimental data, in terms of correlation and absolute error. The efficiency of the model is discussed in comparison with image quality metrics and other customized visual process models. An analysis of the importance and customization of each stage of the model is also given; it stresses the potential high predictive power from precise modeling of ocular and neural transfer functions.

  5. A visual tracking method based on deep learning without online model updating

    Science.gov (United States)

    Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei

    2018-02-01

    The paper proposes a visual tracking method based on deep learning without online model updating. Given the advantages of deep learning in feature representation, the deep model SSD (Single Shot Multibox Detector) is used as the object extractor in the tracking model. In addition, the color histogram feature and the HOG (Histogram of Oriented Gradient) feature are combined to select the tracked object among the detections. During tracking, a multi-scale object search map is built to improve the detection performance of the deep detector and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, and in comparison with six state-of-the-art methods, the proposed method is more robust to challenging factors such as deformation, scale variation, rotation, illumination variation, and background clutter; moreover, its overall performance is better than that of the other six trackers.
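
    The feature-combination step described here, scoring candidate patches by a mix of color-histogram similarity and HOG similarity to the tracked template, can be prototyped along the following lines. This is a hedged sketch of the general idea using scikit-image (0.19+ for channel_axis), not the authors' code; the equal 0.5/0.5 weighting and the 64x64 patch size are assumptions.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def color_hist(patch, bins=16):
    """Normalized per-channel color histogram of an RGB patch with values in [0, 1]."""
    h = np.concatenate([np.histogram(patch[..., c], bins=bins, range=(0, 1))[0]
                        for c in range(3)]).astype(float)
    return h / (h.sum() + 1e-12)

def similarity(template, candidate, w_color=0.5, w_hog=0.5, size=(64, 64)):
    """Score a candidate patch against the tracked template (higher is better)."""
    t = resize(template, size, anti_aliasing=True)
    c = resize(candidate, size, anti_aliasing=True)
    color_sim = np.minimum(color_hist(t), color_hist(c)).sum()   # histogram intersection
    ht, hc = hog(t, channel_axis=-1), hog(c, channel_axis=-1)
    hog_sim = float(ht @ hc / (np.linalg.norm(ht) * np.linalg.norm(hc) + 1e-12))
    return w_color * color_sim + w_hog * hog_sim

# The tracker would keep the best-scoring patch among the detector's candidate boxes:
# best = max(candidate_patches, key=lambda p: similarity(template_patch, p))
rng = np.random.default_rng(0)
template = rng.random((80, 60, 3))
print(similarity(template, template))   # identical patches score ~1.0
```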

  6. The body voyage as visual representation and art performance.

    Science.gov (United States)

    Olsén, Jan Eric

    2011-01-01

    This paper looks at the notion of the body as an interior landscape that is made intelligible through visual representation. It discerns the key figure of the inner corporeal voyage, identifies its main elements and examines how contemporary artists working with performances and installations deal with it. A further aim of the paper is to discuss what kind of image of the body is conveyed through medical visual technologies, such as endoscopy, and to relate it to contemporary discussions on embodiment, embodied vision and bodily presence. The paper concludes with a recent exhibition by the French artist Christian Boltanski, which gives a somewhat different meaning to the idea of the body voyage.

  7. Immersive Data Comprehension: Visualizing Uncertainty in Measurable Models

    Directory of Open Access Journals (Sweden)

    Pere eBrunet

    2015-09-01

    Recent advances in 3D scanning technologies have opened new possibilities in a broad range of applications including cultural heritage, medicine, civil engineering and urban planning. Virtual Reality systems can provide new tools to professionals who want to understand acquired 3D models. In this paper, we review the concept of data comprehension with an emphasis on visualization and inspection tools in immersive setups. We claim that in most application fields, data comprehension requires model measurements which in turn should be based on the explicit visualization of uncertainty. As 3D digital representations are not faithful, information on their fidelity at the local level should be included in the model itself as uncertainty bounds. We propose the concept of Measurable 3D Models as digital models that explicitly encode local uncertainty bounds related to their quality. We claim that professionals and experts can strongly benefit from immersive interaction through new specific, fidelity-aware measurement tools which can facilitate 3D data comprehension. Since noise and processing errors are ubiquitous in acquired datasets, we discuss the estimation, representation and visualization of data uncertainty. We show that, based on typical user requirements in Cultural Heritage and other domains, application-oriented measuring tools in 3D models must consider uncertainty and local error bounds. We also discuss the requirements of immersive interaction tools for the comprehension of huge 3D and nD datasets acquired from real objects.

  8. Standalone visualization tool for three-dimensional DRAGON geometrical models

    International Nuclear Information System (INIS)

    Lukomski, A.; McIntee, B.; Moule, D.; Nichita, E.

    2008-01-01

    DRAGON is a neutron transport and depletion code able to solve one-, two- and three-dimensional problems. To date, DRAGON provides two visualization modules, able to represent respectively two- and three-dimensional geometries. The two-dimensional visualization module generates a postscript file, while the three-dimensional visualization module generates a MATLAB M-file with instructions for drawing the tracks in the DRAGON TRACKING data structure, which implicitly provide a representation of the geometry. The current work introduces a new, standalone, tool based on the open-source Visualization Toolkit (VTK) software package which allows the visualization of three-dimensional geometrical models by reading the DRAGON GEOMETRY data structure and generating an axonometric image which can be manipulated interactively by the user. (author)

  9. Visualizing the process of process modeling with PPMCharts

    NARCIS (Netherlands)

    Claes, J.; Vanderfeesten, I.T.P.; Pinggera, J.; Reijers, H.A.; Weber, B.; Poels, G.; La Rosa, M.; Soffer, P.

    2013-01-01

    In the quest for knowledge about how to make good process models, recent research focus is shifting from studying the quality of process models to studying the process of process modeling (often abbreviated as PPM) itself. This paper reports on our efforts to visualize this specific process in such

  10. Method and apparatus for modeling, visualization and analysis of materials

    KAUST Repository

    Aboulhassan, Amal

    2016-08-25

    A method, apparatus, and computer readable medium are provided for modeling of materials and visualization of properties of the materials. An example method includes receiving data describing a set of properties of a material, and computing, by a processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling, by the processor, the material using the geometric features and the extracted particle paths. The example method further includes generating, by the processor and based on the geometric modeling of the material, one or more visualizations regarding the material, and causing display, by a user interface, of the one or more visualizations.

  11. Expressing Model Constraints Visually with VMQL

    DEFF Research Database (Denmark)

    Störrle, Harald

    2011-01-01

    OCL is the de facto standard language for expressing constraints and queries on UML models. However, OCL expressions are very difficult to create, understand, and maintain, even with the sophisticated tool support now available. In this paper, we propose to use the Visual Model Query Language (VMQL) for specifying constraints on UML models. We examine VMQL's usability by controlled experiments and its expressiveness by a representative sample. We conclude that VMQL is less expressive than OCL, although expressive enough for most of the constraints in the sample. In terms of usability, however, VMQL...

  12. Modelling the shape hierarchy for visually guided grasping

    Directory of Open Access Journals (Sweden)

    Omid eRezai

    2014-10-01

    The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modelled shape tuning in visual AIP neurons and its relationship with curvature and gradient information from the caudal intraparietal area (CIP). The main goal was to gain insight into the kinds of shape parameterizations that can account for AIP tuning and that are consistent with both the inputs to AIP and the role of AIP in grasping. We first experimented with superquadric shape parameters. We considered superquadrics because they occupy a role in robotics that is similar to AIP, in that superquadric fits are derived from visual input and used for grasp planning. We also experimented with an alternative shape parameterization that was based on an Isomap dimension reduction of spatial derivatives of depth (i.e., distance from the observer to the object surface). We considered an Isomap-based model because its parameters lacked discontinuities between similar shapes. When we matched the dimension of the Isomap to the number of superquadric parameters, the superquadric model fit the AIP data somewhat more closely. However, higher-dimensional Isomaps provided excellent fits. Also, we found that the Isomap parameters could be approximated much more accurately than superquadric parameters by feedforward neural networks with CIP-like inputs. We conclude that Isomaps, or perhaps alternative dimension reductions of visual inputs to AIP, provide a promising model of AIP electrophysiology data. However (in contrast with superquadrics), further work is needed to test whether such shape parameterizations actually provide an effective basis for grasp control.
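
    The alternative parameterization this record describes, an Isomap embedding of spatial derivatives of depth, can be sketched with scikit-learn. The depth maps, the choice of first-order gradients as features, the neighborhood size, and the embedding dimension below are illustrative placeholders rather than the study's stimuli or settings.

```python
import numpy as np
from sklearn.manifold import Isomap

def depth_gradient_features(depth_maps):
    """Flatten first-order spatial derivatives of each depth map into a feature vector."""
    feats = []
    for d in depth_maps:
        gy, gx = np.gradient(d)
        feats.append(np.concatenate([gx.ravel(), gy.ravel()]))
    return np.array(feats)

# Placeholder "shapes": smooth random depth surfaces (not the study's stimuli).
rng = np.random.default_rng(0)
depth_maps = [np.cumsum(np.cumsum(rng.standard_normal((32, 32)), 0), 1) for _ in range(60)]

X = depth_gradient_features(depth_maps)
embedding = Isomap(n_neighbors=8, n_components=5).fit_transform(X)
print(embedding.shape)        # (60, 5) low-dimensional shape parameters
```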

  13. Effects of age and auditory and visual dual tasks on closed-road driving performance.

    Science.gov (United States)

    Chaparro, Alex; Wood, Joanne M; Carberry, Trent

    2005-08-01

    This study investigated how driving performance of young and old participants is affected by visual and auditory secondary tasks on a closed driving course. Twenty-eight participants comprising two age groups (younger, mean age = 27.3 years; older, mean age = 69.2 years) drove around a 5.1-km closed-road circuit under both single and dual task conditions. Measures of driving performance included detection and identification of road signs, detection and avoidance of large low-contrast road hazards, gap judgment, lane keeping, and time to complete the course. The dual task required participants to verbally report the sums of pairs of single-digit numbers presented through either a computer speaker (auditorily) or a dashboard-mounted monitor (visually) while driving. Participants also completed a vision and cognitive screening battery, including LogMAR visual acuity, Pelli-Robson letter contrast sensitivity, the Trails test, and the Digit Symbol Substitution (DSS) test. Drivers reported significantly fewer signs, hit more road hazards, misjudged more gaps, and increased their time to complete the course under the dual task (visual and auditory) conditions compared with the single task condition. The older participants also reported significantly fewer road signs and drove significantly more slowly than the younger participants, and this was exacerbated for the visual dual task condition. The results of the regression analysis revealed that cognitive aging (measured by the DSS and Trails test) rather than chronologic age was a better predictor of the declines seen in driving performance under dual task conditions. An overall z score was calculated, which took into account both driving and the secondary task (summing) performance under the two dual task conditions. Performance was significantly worse for the auditory dual task compared with the visual dual task, and the older participants performed significantly worse than the young subjects. These findings demonstrate

  14. Helping students revise disruptive experientially supported ideas about thermodynamics: Computer visualizations and tactile models

    Science.gov (United States)

    Clark, Douglas; Jorde, Doris

    2004-01-01

    This study analyzes the impact of an integrated sensory model within a thermal equilibrium visualization. We hypothesized that this intervention would not only help students revise their disruptive experientially supported ideas about why objects feel hot or cold, but also increase their understanding of thermal equilibrium. The analysis synthesizes test data and interviews to measure the impact of this strategy. Results show that students in the experimental tactile group significantly outperform their control group counterparts on posttests and delayed posttests, not only on tactile explanations, but also on thermal equilibrium explanations. Interview transcripts of experimental and control group students corroborate these findings. Discussion addresses improving the tactile model as well as application of the strategy to other science topics. The discussion also considers possible incorporation of actual kinetic or thermal haptic feedback to reinforce the current audio and visual feedback of the visualization. This research builds on the conceptual change literature about the nature and role of students' experientially supported ideas as well as our understanding of curriculum and visualization design to support students in learning about thermodynamics, a science topic on which students perform poorly as shown by the National Assessment of Educational Progress (NAEP) and Third International Mathematics and Science Study (TIMSS) studies.

  15. Markers of preparatory attention predict visual short-term memory performance.

    Science.gov (United States)

    Murray, Alexandra M; Nobre, Anna C; Stokes, Mark G

    2011-05-01

    Visual short-term memory (VSTM) is limited in capacity. Therefore, it is important to encode only visual information that is most likely to be relevant to behaviour. Here we asked which aspects of selective biasing of VSTM encoding predict subsequent memory-based performance. We measured EEG during a selective VSTM encoding task, in which we varied parametrically the memory load and the precision of recall required to compare a remembered item to a subsequent probe item. On half the trials, a spatial cue indicated that participants only needed to encode items from one hemifield. We observed a typical sequence of markers of anticipatory spatial attention: early attention directing negativity (EDAN), anterior attention directing negativity (ADAN), late directing attention positivity (LDAP); as well as of VSTM maintenance: contralateral delay activity (CDA). We found that individual differences in preparatory brain activity (EDAN/ADAN) predicted cue-related changes in recall accuracy, indexed by memory-probe discrimination sensitivity (d'). Importantly, our parametric manipulation of memory-probe similarity also allowed us to model the behavioural data for each participant, providing estimates for the quality of the memory representation and the probability that an item could be retrieved. We found that selective encoding primarily increased the probability of accurate memory recall; that ERP markers of preparatory attention predicted the cue-related changes in recall probability. Copyright © 2011. Published by Elsevier Ltd.

  16. Determinants of gross motor skill performance in children with visual impairments.

    Science.gov (United States)

    Haibach, Pamela S; Wagner, Matthias O; Lieberman, Lauren J

    2014-10-01

    Children with visual impairments (CWVI) generally perform more poorly in gross motor skills when compared with their sighted peers. This study examined the influence of age, sex, and severity of visual impairment upon locomotor and object control skills in CWVI. Participants included 100 CWVI from across the United States who completed the Test of Gross Motor Development II (TGMD-II). The TGMD-II consists of 12 gross motor skills including 6 object control skills (catching, kicking, striking, dribbling, throwing, and rolling) and 6 locomotor skills (running, sliding, galloping, leaping, jumping, and hopping). The full range of visual impairments according to the United States Association for Blind Athletes (USABA; B3=20/200-20/599, legally blind; B2=20/600 and up, travel vision; B1=totally blind) was assessed. The B1 group performed significantly worse than the B2 (0.000 ≤ p ≤ 0.049) or B3 groups (0.000 ≤ p ≤ 0.005); however, there were no significant differences between B2 and B3 except for the run (p=0.006), catch (p=0.000), and throw (p=0.012). Age and sex did not play an important role in most of the skills, with the exception of boys outperforming girls in striking (p=0.009), dribbling (p=0.013), and throwing (p=0.000), and older children outperforming younger children in dribbling (p=0.002). The significant impact of the severity of visual impairment is likely due to decreased experiences and opportunities for children with more severe visual impairments. In addition, it is likely that these reduced experiences explain the lack of age-related differences in the CWVI. The large disparities in performance between children who are blind and their partially sighted peers give direction for instruction and future research. In addition, there is a critical need for intentional and specific instruction on motor skills at a younger age to enable CWVI to develop their gross motor skills. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Plant Growth Modeling Using L-System Approach and Its Visualization

    Directory of Open Access Journals (Sweden)

    Atris Suyantohadi

    2011-05-01

    The visualization of plant growth modeling using computer simulation has rarely been conducted with the Lindenmayer System (L-System) approach. The L-System has generally been used as a framework for improving and designing realistic models of plant growth. It is a tool for representing plant growth based on grammar syntax and mathematical formulation. This research aimed to design and visualize plant growth structures generated using an L-System. The modeling environment used three-dimensional graphics in standard OpenGL format. The visualization was developed from a set of L-System grammars, and the resulting three-dimensional graphics reflect plant growth as a virtual plant growth system. Using sample L-System grammar rules describing the characteristics of plant growth, the visualization of plant growth structure was produced and demonstrated.
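
    The string-rewriting core of an L-System is compact. The grammar below is a standard textbook branching-plant example, not necessarily the rules used in this paper; its symbols would then be interpreted as turtle-graphics drawing commands in OpenGL or any other renderer.

```python
def lsystem(axiom, rules, iterations):
    """Apply the production rules of an L-System to the axiom `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A classic branching-plant grammar (illustrative, not taken from the paper):
#   F : move forward and draw,  + / - : turn,  [ / ] : push / pop the turtle state
rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
derivation = lsystem("X", rules, iterations=4)
print(len(derivation), derivation[:60], "...")
```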

  18. On-chip visual perception of motion: a bio-inspired connectionist model on FPGA.

    Science.gov (United States)

    Torres-Huitzil, César; Girau, Bernard; Castellanos-Sánchez, Claudio

    2005-01-01

    Visual motion provides useful information for understanding the dynamics of a scene and allowing intelligent systems to interact with their environment. Motion computation is usually restricted by real-time requirements that demand the design and implementation of specific hardware architectures. In this paper, the design of a hardware architecture for a bio-inspired neural model for motion estimation is presented. The motion estimation is based on a strongly localized bio-inspired connectionist model with a particular adaptation of spatio-temporal Gabor-like filtering. The architecture consists of three main modules that perform spatial, temporal, and excitatory-inhibitory connectionist processing. The biomimetic architecture is modeled, simulated and validated in VHDL. The synthesis results on a Field Programmable Gate Array (FPGA) device show the potential achievement of real-time performance with an affordable silicon area.
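
    The spatio-temporal Gabor-like filtering stage mentioned in the record can be prototyped in software before being committed to VHDL. The sketch below builds a space-time Gabor kernel tuned to a given speed and orientation; all parameter values are illustrative and this is not the authors' hardware design.

```python
import numpy as np

def spatiotemporal_gabor(size=15, frames=7, wavelength=6.0, theta=0.0,
                         speed=1.0, sigma_s=3.0, sigma_t=2.0):
    """Return a (frames, size, size) Gabor kernel tuned to motion at `speed`
    pixels/frame along orientation `theta` (radians). Illustrative parameters."""
    ax = np.arange(size) - size // 2
    times = np.arange(frames) - frames // 2
    y, x = np.meshgrid(ax, ax, indexing="ij")
    u = x * np.cos(theta) + y * np.sin(theta)       # coordinate along preferred direction
    kernel = np.empty((frames, size, size))
    for i, ti in enumerate(times):
        phase = 2 * np.pi * (u - speed * ti) / wavelength
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma_s**2) - ti**2 / (2 * sigma_t**2))
        kernel[i] = envelope * np.cos(phase)
    return kernel

# Motion energy of an image sequence (frames, H, W) could then be estimated by
# correlating it with a bank of such kernels at different speeds and orientations.
k = spatiotemporal_gabor()
print(k.shape)
```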

  19. Visual Attention and Math Performance in Survivors of Childhood Acute Lymphoblastic Leukemia.

    Science.gov (United States)

    Richard, Annette E; Hodges, Elise K; Heinrich, Kimberley P

    2018-01-24

    Attentional and academic difficulties, particularly in math, are common in survivors of childhood acute lymphoblastic leukemia (ALL). Of cognitive deficits experienced by survivors of childhood ALL, attention deficits may be particularly responsive to intervention. However, it is unknown whether deficits in particular aspects of attention are associated with deficits in math skills. The current study investigated relationships between math calculation skills, performance on an objective measure of sustained attention, and parent- and teacher-reported attention difficulties. Twenty-four survivors of childhood ALL (Mage = 13.5 years, SD= 2.8 years) completed a computerized measure of sustained attention and response control and a written measure of math calculation skills in the context of a comprehensive clinical neuropsychological evaluation. Parent and teacher ratings of inattention and impulsivity were obtained. Visual response control and visual attention accounted for 26.4% of the variance observed among math performance scores after controlling for IQ (p < .05). Teacher-rated, but not parent-rated, inattention was significantly negatively correlated with math calculation scores. Consistency of responses to visual stimuli on a computerized measure of attention is a unique predictor of variance in math performance among survivors of childhood ALL. Objective testing of visual response control, rather than parent-rated attentional problems, may have clinical utility in identifying ALL survivors at risk for math difficulties. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. A concurrent visualization system for large-scale unsteady simulations. Parallel vector performance on an NEC SX-4

    International Nuclear Information System (INIS)

    Takei, Toshifumi; Doi, Shun; Matsumoto, Hideki; Muramatsu, Kazuhiro

    2000-01-01

    We have developed a concurrent visualization system RVSLIB (Real-time Visual Simulation Library). This paper shows the effectiveness of the system when it is applied to large-scale unsteady simulations, for which the conventional post-processing approach may no longer work, on high-performance parallel vector supercomputers. The system performs almost all of the visualization tasks on a computation server and uses compressed visualized image data for efficient communication between the server and the user terminal. We have introduced several techniques, including vectorization and parallelization, into the system to minimize the computational costs of the visualization tools. The performance of RVSLIB was evaluated by using an actual CFD code on an NEC SX-4. The computational time increase due to the concurrent visualization was at most 3% for a smaller (1.6 million) grid and less than 1% for a larger (6.2 million) one. (author)

  1. Robustness Analysis of Visual QA Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-09-14

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most of the current VQA research only focuses on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, image and these basic questions as input and then outputs the text-based answer of the main question about the given image. We claim that a robust VQA model is one whose performance does not change much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.
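
    The record formulates basic-question generation as a LASSO optimization; roughly, the main question is expressed as a sparse combination of candidate basic questions, and the magnitude of each coefficient ranks that candidate. The sketch below is a hedged reconstruction of that idea with scikit-learn; the question embeddings and the regularization strength are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

def rank_basic_questions(main_embedding, basic_embeddings, alpha=0.05):
    """Rank candidate basic questions by the sparse weights that best reconstruct
    the main question's embedding (illustrative reconstruction of the LASSO idea).

    main_embedding   : (d,) vector for the main question
    basic_embeddings : (n, d) matrix, one row per candidate basic question
    """
    model = Lasso(alpha=alpha, positive=True, max_iter=10000)
    model.fit(basic_embeddings.T, main_embedding)   # solve  B^T w ~= q  with sparse w
    weights = model.coef_
    order = np.argsort(-weights)
    return [(int(i), float(weights[i])) for i in order if weights[i] > 0]

rng = np.random.default_rng(0)
q = rng.normal(size=128)                 # placeholder embedding of the main question
B = rng.normal(size=(50, 128))           # placeholder embeddings of 50 basic questions
print(rank_basic_questions(q, B)[:5])
```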

  2. Robustness Analysis of Visual QA Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong; Alfadly, Modar; Ghanem, Bernard

    2017-01-01

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most of the current VQA research only focuses on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, image and these basic questions as input and then outputs the text-based answer of the main question about the given image. We claim that a robust VQA model is one whose performance does not change much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  3. Molecular simulations and visualization: introduction and overview.

    Science.gov (United States)

    Hirst, Jonathan D; Glowacki, David R; Baaden, Marc

    2014-01-01

    Here we provide an introduction and overview of current progress in the field of molecular simulation and visualization, touching on the following topics: (1) virtual and augmented reality for immersive molecular simulations; (2) advanced visualization and visual analytic techniques; (3) new developments in high performance computing; and (4) applications and model building.

  4. Visual performance on detection tasks with double-targets of the same and different difficulty.

    Science.gov (United States)

    Chan, Alan H S; Courtney, Alan J; Ma, C W

    2002-10-20

    This paper reports a study of measurement of horizontal visual sensitivity limits for 16 subjects in single-target and double-targets detection tasks. Two phases of tests were conducted in the double-targets task; targets of the same difficulty were tested in phase one while targets of different difficulty were tested in phase two. The range of sensitivity for the double-targets test was found to be smaller than that for single-target in both the same and different target difficulty cases. The presence of another target was found to affect performance to a marked degree. Interference effect of the difficult target on detection of the easy one was greater than that of the easy one on the detection of the difficult one. Performance decrement was noted when correct percentage detection was plotted against eccentricity of target in both the single-target and double-targets tests. Nevertheless, the non-significant correlation found between the performance for the two tasks demonstrated that it was impossible to predict quantitatively ability for detection of double targets from the data for single targets. This indicated probable problems in generalizing data for single target visual lobes to those for multiple targets. Also lobe area values obtained from measurements using a single-target task cannot be applied in a mathematical model for situations with multiple occurrences of targets.

  5. Neuromorphic model of magnocellular and parvocellular visual paths: spatial resolution

    International Nuclear Information System (INIS)

    Aguirre, Rolando C; Felice, Carmelo J; Colombo, Elisa M

    2007-01-01

    Physiological studies of the human retina show the existence of at least two visual information processing channels, the magnocellular and the parvocellular ones. Both have different spatial, temporal and chromatic features. This paper focuses on the different spatial resolution of these two channels. We propose a neuromorphic model of these channels that matches the retina's physiology. Considering the Deutsch and Deutsch model (1992), we propose two configurations (one for each visual channel) of the connection between the retina's different cell layers. The responses of the proposed model have similar behaviour to those of the visual cells: each channel has an optimum response at a given stimulus size, with the response decreasing for larger or smaller stimuli. This size is bigger for the magno path than for the parvo path and, in the end, both channels produce a magnification of the borders of a stimulus.

  6. Modeling Human Aesthetic Perception of Visual Textures

    NARCIS (Netherlands)

    Thumfart, Stefan; Jacobs, Richard H. A. H.; Lughofer, Edwin; Eitzinger, Christian; Cornelissen, Frans W.; Groissboeck, Werner; Richter, Roland

    Texture is extensively used in areas such as product design and architecture to convey specific aesthetic information. Using the results of a psychological experiment, we model the relationship between computational texture features and aesthetic properties of visual textures. Contrary to previous

  7. Individual personality differences in goats predict their performance in visual learning and non-associative cognitive tasks.

    Science.gov (United States)

    Nawroth, Christian; Prentice, Pamela M; McElligott, Alan G

    2017-01-01

    Variation in common personality traits, such as boldness or exploration, is often associated with risk-reward trade-offs and behavioural flexibility. To date, only a few studies have examined the effects of consistent behavioural traits on both learning and cognition. We investigated whether certain personality traits ('exploration' and 'sociability') of individuals were related to cognitive performance, learning flexibility and learning style in a social ungulate species, the goat (Capra hircus). We also investigated whether a preference for feature cues rather than impaired learning abilities can explain performance variation in a visual discrimination task. We found that personality scores were consistent across time and context. Less explorative goats performed better in a non-associative cognitive task, in which subjects had to follow the trajectory of a hidden object (i.e. testing their ability for object permanence). We also found that less sociable subjects performed better compared to more sociable goats in a visual discrimination task. Good visual learning performance was associated with a preference for feature cues, indicating personality-dependent learning strategies in goats. Our results suggest that personality traits predict the outcome in visual discrimination and non-associative cognitive tasks in goats and that impaired performance in a visual discrimination tasks does not necessarily imply impaired learning capacities, but rather can be explained by a varying preference for feature cues. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Competition between auditory and visual spatial cues during visual task performance

    NARCIS (Netherlands)

    Koelewijn, T.; Bronkhorst, A.; Theeuwes, J.

    2009-01-01

    There is debate in the crossmodal cueing literature as to whether capture of visual attention by means of sound is a fully automatic process. Recent studies show that when visual attention is endogenously focused sound still captures attention. The current study investigated whether there is

  9. Visual and Computational Modelling of Minority Games

    Directory of Open Access Journals (Sweden)

    Robertas Damaševičius

    2017-02-01

    The paper analyses the Minority Game and focuses on the analysis and computational modelling of several variants (variable payoff, coalition-based, and ternary voting) of the Minority Game using the UAREI (User-Action-Rule-Entities-Interface) model. UAREI is a model for the formal specification of software gamification, and the UAREI visual modelling language is used for the graphical representation of game mechanics. The UAREI model also provides an embedded executable modelling framework to evaluate how the rules of the game will work for the players in practice. We demonstrate the flexibility of the UAREI model for modelling different variants of Minority Game rules for game design.
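
    For context, the basic Minority Game that the UAREI variants build on can be simulated in a few lines; the agent count, memory length, and number of strategies below are illustrative, and the variable-payoff, coalition, and ternary-voting rules of the paper are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(3)

def play_minority_game(n_agents=101, memory=3, n_strategies=2, rounds=500):
    """Minimal basic Minority Game (illustrative core loop only)."""
    n_histories = 2 ** memory
    # Each agent holds a few fixed random strategies: history index -> action (0 or 1).
    strategies = rng.integers(0, 2, size=(n_agents, n_strategies, n_histories))
    scores = np.zeros((n_agents, n_strategies))
    history = 0
    attendance = []
    for _ in range(rounds):
        best = scores.argmax(axis=1)                     # each agent's best strategy so far
        actions = strategies[np.arange(n_agents), best, history]
        n_ones = actions.sum()
        minority = int(n_ones < n_agents / 2)            # the winning (minority) action
        # Virtual scoring: reward every strategy that would have chosen the minority side.
        scores += (strategies[:, :, history] == minority)
        attendance.append(n_ones)
        history = ((history << 1) | minority) % n_histories
    return np.array(attendance)

att = play_minority_game()
print("mean attendance:", att.mean(), "std:", att.std())
```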

  10. High performance geospatial and climate data visualization using GeoJS

    Science.gov (United States)

    Chaudhary, A.; Beezley, J. D.

    2015-12-01

    GeoJS (https://github.com/OpenGeoscience/geojs) is an open-source library developed to support interactive scientific and geospatial visualization of climate and earth science datasets in a web environment. GeoJS has a convenient application programming interface (API) that enables users to harness the fast performance of WebGL and Canvas 2D APIs with sophisticated Scalable Vector Graphics (SVG) features in a consistent and convenient manner. We started the project in response to the need for an open-source JavaScript library that can combine traditional geographic information systems (GIS) and scientific visualization on the web. Many libraries, some of which are open source, support mapping or other GIS capabilities, but lack the features required to visualize scientific and other geospatial datasets. For instance, such libraries are not capable of rendering climate plots from NetCDF files, and some libraries are limited in regards to geoinformatics (infovis in a geospatial environment). While libraries such as d3.js are extremely powerful for these kinds of plots, in order to integrate them into other GIS libraries, the construction of geoinformatics visualizations must be completed manually and separately, or the code must somehow be mixed in an unintuitive way. We developed GeoJS with the following motivations: (1) to create an open-source geovisualization and GIS library that combines scientific visualization with GIS and informatics; (2) to develop an extensible library that can combine data from multiple sources and render them using multiple backends; and (3) to build a library that works well with existing scientific visualization tools such as VTK. We have successfully deployed GeoJS-based applications for multiple domains across various projects. The ClimatePipes project funded by the Department of Energy, for example, used GeoJS to visualize NetCDF datasets from climate data archives. Other projects built visualizations using GeoJS for interactively exploring...

  11. Performance analysis and optimization of an advanced pharmaceutical wastewater treatment plant through a visual basic software tool (PWWT.VB).

    Science.gov (United States)

    Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha

    2016-05-01

    A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data, which helps in optimization, and it presents the predicted performance visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model, developed for a systematically integrated forward osmosis-nanofiltration process for removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values have been observed to corroborate well with the extensive experimental investigations, which were found to be consistent under varying operating conditions such as operating pressure, operating flow rate, and draw solute concentration. A low value of the relative error (RE = 0.09) and a high value of the Willmott d-index (d = 0.981) reflected a high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for the system design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater.
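
    For reference, the two agreement statistics quoted in this record can be computed as below. The exact relative-error definition used by the authors is not given in the abstract, so the mean absolute relative error shown here is an assumption, and the arrays are placeholders rather than the paper's data.

```python
import numpy as np

def relative_error(observed, predicted):
    """Mean absolute relative error between observations and model predictions."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(observed - predicted) / np.abs(observed)))

def willmott_d(observed, predicted):
    """Willmott's index of agreement: 1 - SSE / potential error (0..1, 1 = perfect)."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    o_bar = observed.mean()
    sse = np.sum((predicted - observed) ** 2)
    potential = np.sum((np.abs(predicted - o_bar) + np.abs(observed - o_bar)) ** 2)
    return float(1.0 - sse / potential)

obs = np.array([10.2, 11.1, 9.8, 12.5])      # placeholder measurements
pred = np.array([10.0, 11.4, 10.1, 12.0])    # placeholder model output
print(relative_error(obs, pred), willmott_d(obs, pred))
```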

  12. Improving Mobility Performance in Low Vision With a Distance-Based Representation of the Visual Scene.

    Science.gov (United States)

    van Rheede, Joram J; Wilson, Iain R; Qian, Rose I; Downes, Susan M; Kennard, Christopher; Hicks, Stephen L

    2015-07-01

    Severe visual impairment can have a profound impact on personal independence through its effect on mobility. We investigated whether the mobility of people with vision low enough to be registered as blind could be improved by presenting the visual environment in a distance-based manner for easier detection of obstacles. We accomplished this by developing a pair of "residual vision glasses" (RVGs) that use a head-mounted depth camera and displays to present information about the distance of obstacles to the wearer as brightness, such that obstacles closer to the wearer are represented more brightly. We assessed the impact of the RVGs on the mobility performance of visually impaired participants during the completion of a set of obstacle courses. Participant position was monitored continuously, which enabled us to capture the temporal dynamics of mobility performance. This allowed us to find correlates of obstacle detection and hesitations in walking behavior, in addition to the more commonly used measures of trial completion time and number of collisions. All participants were able to use the smart glasses to navigate the course, and mobility performance improved for those visually impaired participants with the worst prior mobility performance. However, walking speed was slower and hesitations increased with the altered visual representation. A depth-based representation of the visual environment may offer low vision patients improvements in independent mobility. It is important for further work to explore whether practice can overcome the reductions in speed and increased hesitation that were observed in our trial.
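
    The core rendering idea (nearer obstacles shown brighter) amounts to a monotonically decreasing mapping from depth to display intensity. The sketch below is a minimal illustration with assumed near/far clipping limits and a linear ramp, not the device's actual processing pipeline.

```python
import numpy as np

def depth_to_brightness(depth_m, near=0.5, far=4.0):
    """Map a depth image (meters) to an 8-bit brightness image in which closer
    obstacles appear brighter. The near/far clipping limits are illustrative."""
    d = np.clip(depth_m, near, far)
    # Linear ramp: depth == near -> 255, depth == far -> 0.
    brightness = (far - d) / (far - near) * 255.0
    brightness[~np.isfinite(depth_m)] = 0          # invalid depth pixels stay dark
    return brightness.astype(np.uint8)

frame = np.random.uniform(0.3, 6.0, size=(240, 320))   # placeholder depth frame (meters)
img = depth_to_brightness(frame)
print(img.dtype, img.min(), img.max())
```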

  13. Encoding model of temporal processing in human visual cortex.

    Science.gov (United States)

    Stigliani, Anthony; Jeska, Brianna; Grill-Spector, Kalanit

    2017-12-19

    How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominately with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two temporal channel-encoding model to evaluate the contributions of each channel. Different from the general linear model of fMRI that predicts responses directly from the stimulus, the encoding approach first models neural responses to the stimulus from which fMRI responses are derived. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also, reveals differential contributions of temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings propose a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has vast implications, because it can be applied with fMRI to decipher neural computations in millisecond resolution in any part of the brain. Copyright © 2017 the Author(s). Published by PNAS.
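
    In the spirit of the two-temporal-channel approach described here, the sustained channel can be approximated by a neural response that follows the stimulus time course and the transient channel by responses at stimulus onsets and offsets; each predicted neural time course is then convolved with a hemodynamic response function to yield an fMRI predictor. The gamma-shaped HRF and square-wave stimulus below are simple stand-ins for the study's fitted impulse-response functions.

```python
import math
import numpy as np

def gamma_hrf(t, peak=5, undershoot=15, ratio=6.0):
    """Simple gamma-difference HRF (illustrative, not the study's fitted HRF)."""
    return (t ** peak * np.exp(-t) / math.factorial(peak)
            - t ** undershoot * np.exp(-t) / (ratio * math.factorial(undershoot)))

dt = 0.1                                          # seconds per sample
t = np.arange(0, 30, dt)
stimulus = ((t > 5) & (t < 15)).astype(float)     # a 10 s stimulus block

# Sustained channel follows the stimulus; transient channel responds at onset/offset.
sustained = stimulus
transient = np.abs(np.diff(stimulus, prepend=0.0))

hrf = gamma_hrf(t)
bold_sustained = np.convolve(sustained, hrf)[: t.size] * dt
bold_transient = np.convolve(transient, hrf)[: t.size] * dt

# A weighted sum of the two predictors would then be fit to the measured fMRI response.
print(bold_sustained.max(), bold_transient.max())
```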

  14. cellPACK: a virtual mesoscope to model and visualize structural systems biology.

    Science.gov (United States)

    Johnson, Graham T; Autin, Ludovic; Al-Alusi, Mostafa; Goodsell, David S; Sanner, Michel F; Olson, Arthur J

    2015-01-01

    cellPACK assembles computational models of the biological mesoscale, an intermediate scale (10-100 nm) between molecular and cellular biology scales. cellPACK's modular architecture unites existing and novel packing algorithms to generate, visualize and analyze comprehensive three-dimensional models of complex biological environments that integrate data from multiple experimental systems biology and structural biology sources. cellPACK is available as open-source code, with tools for validation of models and with 'recipes' and models for five biological systems: blood plasma, cytoplasm, synaptic vesicles, HIV and a mycoplasma cell. We have applied cellPACK to model distributions of HIV envelope protein to test several hypotheses for consistency with experimental observations. Biologists, educators and outreach specialists can interact with cellPACK models, develop new recipes and perform packing experiments through scripting and graphical user interfaces at http://cellPACK.org/.

  15. An interference model of visual working memory.

    Science.gov (United States)

    Oberauer, Klaus; Lin, Hsuan-Yu

    2017-01-01

    The article introduces an interference model of working memory for information in a continuous similarity space, such as the features of visual objects. The model incorporates the following assumptions: (a) Probability of retrieval is determined by the relative activation of each retrieval candidate at the time of retrieval; (b) activation comes from 3 sources in memory: cue-based retrieval using context cues, context-independent memory for relevant contents, and noise; (c) 1 memory object and its context can be held in the focus of attention, where it is represented with higher precision, and partly shielded against interference. The model was fit to data from 4 continuous-reproduction experiments testing working memory for colors or orientations. The experiments involved variations of set size, kind of context cues, precueing, and retro-cueing of the to-be-tested item. The interference model fit the data better than 2 competing models, the Slot-Averaging model and the Variable-Precision resource model. The interference model also fared well in comparison to several new models incorporating alternative theoretical assumptions. The experiments confirm 3 novel predictions of the interference model: (a) Nontargets intrude in recall to the extent that they are close to the target in context space; (b) similarity between target and nontarget features improves recall, and (c) precueing-but not retro-cueing-the target substantially reduces the set-size effect. The success of the interference model shows that working memory for continuous visual information works according to the same principles as working memory for more discrete (e.g., verbal) contents. Data and model codes are available at https://osf.io/wgqd5/. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
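
    Assumption (a) of the model, that retrieval probability is determined by relative activation, can be written down directly. The sketch below is a stripped-down illustration for a single color-reproduction trial: each candidate item's activation combines cue-based (similarity-weighted) activation, context-independent strength, and background noise, and recall probabilities follow the Luce choice rule. The parameter values and similarity kernel are illustrative, not the fitted values of the published model.

```python
import numpy as np

def recall_probabilities(cue_similarity, item_strengths,
                         w_cue=1.0, w_item=0.3, noise=0.1):
    """Luce-choice recall probabilities over candidate items.

    cue_similarity : (n,) similarity of each stored item's context to the probe cue
    item_strengths : (n,) context-independent memory strength of each item
    """
    activation = w_cue * cue_similarity + w_item * item_strengths + noise
    return activation / activation.sum()

def circular_similarity(target_deg, others_deg, width=20.0):
    """Von-Mises-like similarity between color angles in degrees (width is illustrative)."""
    d = np.deg2rad(np.asarray(others_deg) - target_deg)
    return np.exp((np.cos(d) - 1) * (180.0 / (np.pi * width)))

# Three stored colors; the context cue matches the first (target) item.
colors = [10.0, 40.0, 200.0]
sim = circular_similarity(colors[0], colors)
p = recall_probabilities(sim, item_strengths=np.ones(3))
print(p)   # the nearby non-target (40 deg) intrudes more than the distant one (200 deg)
```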

  16. Visualization of RNA structure models within the Integrative Genomics Viewer.

    Science.gov (United States)

    Busan, Steven; Weeks, Kevin M

    2017-07-01

    Analyses of the interrelationships between RNA structure and function are increasingly important components of genomic studies. The SHAPE-MaP strategy enables accurate RNA structure probing and realistic structure modeling of kilobase-length noncoding RNAs and mRNAs. Existing tools for visualizing RNA structure models are not suitable for efficient analysis of long, structurally heterogeneous RNAs. In addition, structure models are often advantageously interpreted in the context of other experimental data and gene annotation information, for which few tools currently exist. We have developed a module within the widely used and well supported open-source Integrative Genomics Viewer (IGV) that allows visualization of SHAPE and other chemical probing data, including raw reactivities, data-driven structural entropies, and data-constrained base-pair secondary structure models, in context with linear genomic data tracks. We illustrate the usefulness of visualizing RNA structure in the IGV by exploring structure models for a large viral RNA genome, comparing bacterial mRNA structure in cells with its structure under cell- and protein-free conditions, and comparing a noncoding RNA structure modeled using SHAPE data with a base-pairing model inferred through sequence covariation analysis. © 2017 Busan and Weeks; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  17. In silico modeling for tumor growth visualization.

    Science.gov (United States)

    Jeanquartier, Fleur; Jean-Quartier, Claire; Cemernek, David; Holzinger, Andreas

    2016-08-08

    Cancer is a complex disease. Fundamental cell-based studies as well as modeling provide insight into cancer biology and strategies for treatment of the disease. In silico models complement in vivo models. Research on tumor growth involves a plethora of models, each emphasizing isolated aspects of benign and malignant neoplasms. Biologists and clinical scientists are often overwhelmed by the mathematical background knowledge necessary to grasp and to apply a model to their own research. We aim to provide a comprehensive and expandable simulation tool for visualizing tumor growth. This novel Web-based application offers the advantage of a user-friendly graphical interface with several manipulable input variables to correlate different aspects of tumor growth. By refining model parameters we highlight the significance of heterogeneous intercellular interactions for tumor progression. Within this paper we present an implementation of the Cellular Potts Model, graphically presented through Cytoscape.js within a Web application. The tool is available under the MIT license at https://github.com/davcem/cpm-cytoscape and http://styx.cgv.tugraz.at:8080/cpm-cytoscape/ . In-silico methods overcome the lack of wet experimental possibilities and, as dry methods, succeed in terms of reduction, refinement and replacement of animal experimentation, also known as the 3R principles. Our visualization approach to simulation allows for more flexible usage and easy extension to facilitate understanding and gain novel insight. We believe that biomedical research in general and research on tumor growth in particular will benefit from the systems biology perspective.
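
    For readers unfamiliar with the Cellular Potts Model behind cpm-cytoscape, a single Metropolis update can be summarized in a few lines: a lattice site attempts to copy a neighbor's cell identity, and the change is accepted with a Boltzmann probability based on adhesion energy and a volume constraint. The sketch below is a generic minimal CPM with illustrative parameters, not the project's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 40                               # lattice size
grid = np.zeros((L, L), dtype=int)   # 0 = medium, 1 = the single cell
grid[15:25, 15:25] = 1               # start from a square cell

J_CELL_MEDIUM = 16.0                 # adhesion energy between cell and medium (illustrative)
TARGET_VOL, LAM_VOL, TEMP = 100, 1.0, 10.0
NEIGHBORS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def adhesion(a, b):
    return 0.0 if a == b else J_CELL_MEDIUM

def local_energy(g, i, j, sigma):
    """Adhesion energy of site (i, j) if it carried cell id `sigma`."""
    return sum(adhesion(sigma, g[(i + di) % L, (j + dj) % L]) for di, dj in NEIGHBORS)

def metropolis_step(g):
    i, j = rng.integers(L, size=2)
    di, dj = NEIGHBORS[rng.integers(len(NEIGHBORS))]
    src = g[(i + di) % L, (j + dj) % L]          # a neighbor tries to copy its id here
    if src == g[i, j]:
        return
    vol = np.count_nonzero(g == 1)
    dvol = (1 if src == 1 else 0) - (1 if g[i, j] == 1 else 0)
    dE = (local_energy(g, i, j, src) - local_energy(g, i, j, g[i, j])
          + LAM_VOL * ((vol + dvol - TARGET_VOL) ** 2 - (vol - TARGET_VOL) ** 2))
    if dE <= 0 or rng.random() < np.exp(-dE / TEMP):
        g[i, j] = src

for _ in range(20000):
    metropolis_step(grid)
print("cell volume after relaxation:", np.count_nonzero(grid == 1))
```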

  18. The Perspective Structure of Visual Space

    Science.gov (United States)

    2015-01-01

    Luneburg’s model has been the reference for experimental studies of visual space for almost seventy years. His claim for a curved visual space has been a source of inspiration for visual scientists as well as philosophers. The conclusion of many experimental studies has been that Luneburg’s model does not describe visual space in various tasks and conditions. Remarkably, no alternative model has been suggested. The current study explores perspective transformations of Euclidean space as a model for visual space. Computations show that the geometry of perspective spaces is considerably different from that of Euclidean space. Collinearity but not parallelism is preserved in perspective space, and angles are not invariant under translation and rotation. Similar relationships have been shown to be properties of visual space. Alley experiments performed early in the twentieth century have been instrumental in hypothesizing curved visual spaces. Alleys were computed in perspective space and compared with the reconstructed alleys of Blumenfeld. Parallel alleys were accurately described by perspective geometry. Accurate distance alleys were derived from parallel alleys by adjusting the interstimulus distances according to the size-distance invariance hypothesis. Agreement between computed and experimental alleys and accommodation of experimental results that rejected Luneburg’s model show that perspective space is an appropriate model for how we perceive orientations and angles. The model is also appropriate for perceived distance ratios between stimuli but fails to predict perceived distances. PMID:27648222
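
    The claim that collinearity, but not parallelism, survives a perspective transformation is easy to verify numerically. The sketch below applies an arbitrary, illustrative homography to two parallel lines and checks both properties; it is not the computation used in the study.

```python
import numpy as np

H = np.array([[1.0,   0.1,   0.0],    # an arbitrary, illustrative homography
              [0.0,   1.0,   0.0],
              [0.002, 0.001, 1.0]])

def transform(points):
    """Apply the homography H to 2-D points of shape (n, 2) via homogeneous coordinates."""
    p = np.hstack([points, np.ones((len(points), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def cross_z(u, v):
    """z-component of the 2-D cross product (zero for parallel vectors)."""
    return u[..., 0] * v[..., 1] - u[..., 1] * v[..., 0]

def collinear(p, tol=1e-9):
    """True if all points lie on a single line."""
    d = p - p[0]
    return bool(np.all(np.abs(cross_z(d[1], d[1:])) < tol))

line_a = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
line_b = line_a + np.array([0.0, 5.0])            # parallel to line_a

ta, tb = transform(line_a), transform(line_b)
print("collinearity preserved:", collinear(ta) and collinear(tb))
print("still parallel:", abs(cross_z(ta[-1] - ta[0], tb[-1] - tb[0])) < 1e-9)
```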

  19. Theta coupling between V4 and prefrontal cortex predicts visual short-term memory performance.

    Science.gov (United States)

    Liebe, Stefanie; Hoerzer, Gregor M; Logothetis, Nikos K; Rainer, Gregor

    2012-01-29

    Short-term memory requires communication between multiple brain regions that collectively mediate the encoding and maintenance of sensory information. It has been suggested that oscillatory synchronization underlies intercortical communication. Yet, whether and how distant cortical areas cooperate during visual memory remains elusive. We examined neural interactions between visual area V4 and the lateral prefrontal cortex using simultaneous local field potential (LFP) recordings and single-unit activity (SUA) in monkeys performing a visual short-term memory task. During the memory period, we observed enhanced between-area phase synchronization in theta frequencies (3-9 Hz) of LFPs together with elevated phase locking of SUA to theta oscillations across regions. In addition, we found that the strength of intercortical locking was predictive of the animals' behavioral performance. This suggests that theta-band synchronization coordinates action potential communication between V4 and prefrontal cortex that may contribute to the maintenance of visual short-term memories.
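
    Between-area phase synchronization of the kind reported here is commonly quantified with the phase-locking value (PLV): band-pass both LFPs in the theta range, extract instantaneous phase with the Hilbert transform, and average the unit phasors of the phase difference. The sketch below illustrates this on synthetic signals; the filter design is an assumption and this is not the authors' analysis pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_plv(x, y, fs, band=(3.0, 9.0)):
    """Phase-locking value between two signals within the given frequency band."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

fs = 1000.0
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)
theta = np.sin(2 * np.pi * 6 * t)                       # shared 6 Hz component
lfp_v4 = theta + 0.5 * rng.standard_normal(t.size)      # synthetic "V4" LFP
lfp_pfc = np.roll(theta, 20) + 0.5 * rng.standard_normal(t.size)   # lagged "PFC" LFP
print("theta PLV:", theta_plv(lfp_v4, lfp_pfc, fs))
```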

  20. Mapping Disciplinary Values and Rhetorical Concerns through Language: Writing Instruction in the Performing and Visual Arts

    Science.gov (United States)

    Cox, Anicca

    2015-01-01

    Via interview data focused on instructor practices and values, this study sought to describe some of what performing and visual arts instructors do at the university level to effectively teach disciplinary values through writing. The study's research goals explored how relationships to writing process in visual and performing arts support…

  1. Reverse alignment "mirror image" visualization as a laparoscopic training tool improves task performance.

    Science.gov (United States)

    Dunnican, Ward J; Singh, T Paul; Ata, Ashar; Bendana, Emma E; Conlee, Thomas D; Dolce, Charles J; Ramakrishnan, Rakesh

    2010-06-01

    Reverse alignment (mirror image) visualization is a disconcerting situation occasionally faced during laparoscopic operations. This occurs when the camera faces back at the surgeon in the opposite direction from which the surgeon's body and instruments are facing. Most surgeons will attempt to optimize trocar and camera placement to avoid this situation. The authors' objective was to determine whether the intentional use of reverse alignment visualization during laparoscopic training would improve performance. A standard box trainer was configured for reverse alignment, and 34 medical students and junior surgical residents were randomized to train with either forward alignment (DIRECT) or reverse alignment (MIRROR) visualization. Enrollees were tested on both modalities before and after a 4-week structured training program specific to their modality. Student's t test was used to determine differences in task performance between the 2 groups. Twenty-one participants completed the study (10 DIRECT, 11 MIRROR). There were no significant differences in performance time between DIRECT or MIRROR participants during forward or reverse alignment initial testing. At final testing, DIRECT participants had improved times only in forward alignment performance; they demonstrated no significant improvement in reverse alignment performance. MIRROR participants had significant time improvement in both forward and reverse alignment performance at final testing. Reverse alignment imaging for laparoscopic training improves task performance for both reverse alignment and forward alignment tasks. This may be translated into improved performance in the operating room when faced with reverse alignment situations. Minimal lab training can account for drastic adaptation to this environment.

  2. Visualizing request-flow comparison to aid performance diagnosis in distributed systems.

    Science.gov (United States)

    Sambasivan, Raja R; Shafer, Ilari; Mazurek, Michelle L; Ganger, Gregory R

    2013-12-01

    Distributed systems are complex to develop and administer, and performance problem diagnosis is particularly challenging. When performance degrades, the problem might be in any of the system's many components or could be a result of poor interactions among them. Recent research efforts have created tools that automatically localize the problem to a small number of potential culprits, but research is needed to understand what visualization techniques work best for helping distributed systems developers understand and explore their results. This paper compares the relative merits of three well-known visualization approaches (side-by-side, diff, and animation) in the context of presenting the results of one proven automated localization technique called request-flow comparison. Via a 26-person user study, which included real distributed systems developers, we identify the unique benefits that each approach provides for different problem types and usage modes.

  3. Enhancing performance expectancies through visual illusions facilitates motor learning in children.

    Science.gov (United States)

    Bahmani, Moslem; Wulf, Gabriele; Ghadiri, Farhad; Karimi, Saeed; Lewthwaite, Rebecca

    2017-10-01

    In a recent study by Chauvel, Wulf, and Maquestiaux (2015), golf putting performance was found to be affected by the Ebbinghaus illusion. Specifically, adult participants demonstrated more effective learning when they practiced with a hole that was surrounded by small circles, making it look larger, than when the hole was surrounded by large circles, making it look smaller. The present study examined whether this learning advantage would generalize to children who are assumed to be less sensitive to the visual illusion. Two groups of 10-year olds practiced putting golf balls from a distance of 2m, with perceived larger or smaller holes resulting from the visual illusion. Self-efficacy was increased in the group with the perceived larger hole. The latter group also demonstrated more accurate putting performance during practice. Importantly, learning (i.e., delayed retention performance without the illusion) was enhanced in the group that practiced with the perceived larger hole. The findings replicate previous results with adult learners and are in line with the notion that enhanced performance expectancies are key to optimal motor learning (Wulf & Lewthwaite, 2016). Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Visualization Design Environment

    Energy Technology Data Exchange (ETDEWEB)

    Pomplun, A.R.; Templet, G.J.; Jortner, J.N.; Friesen, J.A.; Schwegel, J.; Hughes, K.R.

    1999-02-01

    Improvements in the performance and capabilities of computer software and hardware systems, combined with advances in Internet technologies, have spurred innovative developments in the area of modeling, simulation and visualization. These developments combine to make it possible to create an environment where engineers can design, prototype, analyze, and visualize components in virtual space, saving the time and expenses incurred during numerous design and prototyping iterations. The Visualization Design Centers located at Sandia National Laboratories are facilities built specifically to promote the "design by team" concept. This report focuses on designing, developing and deploying this environment: it details the design of the facility, software infrastructure and hardware systems that comprise this new visualization design environment and describes case studies that document successful application of this environment.

  5. Low-rank and sparse modeling for visual analysis

    CERN Document Server

    Fu, Yun

    2014-01-01

    This book provides a view of low-rank and sparse computing, especially approximation, recovery, representation, scaling, coding, embedding and learning among unconstrained visual data. The book includes chapters covering multiple emerging topics in this new field. It links multiple popular research fields in Human-Centered Computing, Social Media, Image Classification, Pattern Recognition, Computer Vision, Big Data, and Human-Computer Interaction. Contains an overview of the low-rank and sparse modeling techniques for visual analysis by examining both theoretical analysis and real-world applic

  6. Visual Modelling of Data Warehousing Flows with UML Profiles

    Science.gov (United States)

    Pardillo, Jesús; Golfarelli, Matteo; Rizzi, Stefano; Trujillo, Juan

    Data warehousing involves complex processes that transform source data through several stages to deliver suitable information ready to be analysed. Though many techniques for visual modelling of data warehouses from the static point of view have been devised, only a few attempts have been made to model the data flows involved in a data warehousing process. Besides, each attempt was mainly aimed at a specific application, such as ETL, OLAP, what-if analysis, or data mining. Data flows are typically very complex in this domain; for this reason, we argue, designers would greatly benefit from a technique for uniformly modelling data warehousing flows for all applications. In this paper, we propose an integrated visual modelling technique for data cubes and data flows. This technique is based on UML profiling; its feasibility is evaluated by means of a prototype implementation.

  7. Discrete-Slots Models of Visual Working-Memory Response Times

    Science.gov (United States)

    Donkin, Christopher; Nosofsky, Robert M.; Gold, Jason M.; Shiffrin, Richard M.

    2014-01-01

    Much recent research has aimed to establish whether visual working memory (WM) is better characterized by a limited number of discrete all-or-none slots or by a continuous sharing of memory resources. To date, however, researchers have not considered the response-time (RT) predictions of discrete-slots versus shared-resources models. To complement the past research in this field, we formalize a family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection. In the tasks under investigation, a small set of visual items is presented, followed by a test item in 1 of the studied positions for which a change judgment must be made. According to the models, if the studied item in that position is retained in 1 of the discrete slots, then a memory-based evidence-accumulation process determines the choice and the RT; if the studied item in that position is missing, then a guessing-based accumulation process operates. Observed RT distributions are therefore theorized to arise as probabilistic mixtures of the memory-based and guessing distributions. We formalize an analogous set of continuous shared-resources models. The model classes are tested on individual subjects with both qualitative contrasts and quantitative fits to RT-distribution data. The discrete-slots models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models, although there is some evidence for “slots plus resources” when memory set size is very small. PMID:24015956
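
    The mixture logic at the heart of these discrete-slots models can be illustrated with a small simulation. The sketch below is not the authors' fitted model; the accumulator, its parameters, and the slot count are hypothetical stand-ins used only to show how response times arise as a probabilistic mixture of a memory-based and a guessing-based accumulation process.

```python
# Minimal sketch (assumed parameters): RTs as a probabilistic mixture of a
# memory-based and a guessing-based evidence-accumulation process.
import numpy as np

rng = np.random.default_rng(0)

def accumulate_rt(drift, threshold=1.0, noise=1.0, dt=0.001, t0=0.3):
    """Simulate one diffusion-style accumulator; return (RT in seconds, choice)."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t0 + t, evidence > 0  # True = "change" response

def discrete_slots_trial(k_slots=3, set_size=6, drift_memory=2.0, drift_guess=0.0):
    p_in_slot = min(1.0, k_slots / set_size)   # was the probed item retained in a slot?
    if rng.random() < p_in_slot:
        return accumulate_rt(drift_memory)     # memory-based accumulation process
    return accumulate_rt(drift_guess)          # guessing-based accumulation process

rts = np.array([discrete_slots_trial()[0] for _ in range(2000)])
print(f"mean simulated RT = {rts.mean():.3f} s")
```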

  8. Functional MRI mapping of visual function and selective attention for performance assessment and presurgical planning using conjunctive visual search.

    Science.gov (United States)

    Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil

    2014-03-01

    Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI.

  9. Improved custom statistics visualization for CA Performance Center data

    CERN Document Server

    Talevi, Iacopo

    2017-01-01

    The main goal of my project is to understand and experiment with the possibilities that CA Performance Center (CA PC) offers for creating custom applications to display stored information through interesting visual means, such as maps. In particular, I have re-written some of the network statistics web pages in order to fetch data from new statistics modules in CA PC, which has its own API, and to stop using the RRD data.

  10. Software engineering methods for the visualization in the modeling of radiation imaging system

    International Nuclear Information System (INIS)

    Tang Jie; Zhang Li; Chen Zhiqiang; Zhao Ziran; Xiao Yongshun

    2003-01-01

    This thesis presents research on visualization in the modeling of radiation imaging systems, and visualization software was developed using OpenGL and Visual C++ tools. The software can load any model files created by the user for each component of the radiation imaging system, and it easily manages the module dynamic link libraries (DLLs) designed by the user for the possible movements of those components.

  11. A cognitive model for visual attention and its application

    NARCIS (Netherlands)

    Bosse, T.; Maanen, P.P. van; Treur, J.

    2007-01-01

    In this paper a cognitive model for visual attention is introduced. The cognitive model is part of the design of a software agent that supports a naval warfare officer in its task to compile a tactical picture of the situation in the field. An executable formal specification of the cognitive model

  12. Modeling and Visualization of Human Activities for Multicamera Networks

    Directory of Open Access Journals (Sweden)

    Aswin C. Sankaranarayanan

    2009-01-01

    Full Text Available Multicamera networks are becoming complex involving larger sensing areas in order to capture activities and behavior that evolve over long spatial and temporal windows. This necessitates novel methods to process the information sensed by the network and visualize it for an end user. In this paper, we describe a system for modeling and on-demand visualization of activities of groups of humans. Using the prior knowledge of the 3D structure of the scene as well as camera calibration, the system localizes humans as they navigate the scene. Activities of interest are detected by matching models of these activities learnt a priori against the multiview observations. The trajectories and the activity index for each individual summarize the dynamic content of the scene. These are used to render the scene with virtual 3D human models that mimic the observed activities of real humans. In particular, the rendering framework is designed to handle large displays with a cluster of GPUs as well as reduce the cognitive dissonance by rendering realistic weather effects and illumination. We envision use of this system for immersive visualization as well as summarization of videos that capture group behavior.

  13. explICU: A web-based visualization and predictive modeling toolkit for mortality in intensive care patients.

    Science.gov (United States)

    Chen, Robert; Kumar, Vikas; Fitch, Natalie; Jagadish, Jitesh; Lifan Zhang; Dunn, William; Duen Horng Chau

    2015-01-01

    Preventing mortality in intensive care units (ICUs) has been a top priority in American hospitals. Predictive modeling has been shown to be effective in prediction of mortality based upon data from patients' past medical histories from electronic health records (EHRs). Furthermore, visualization of timeline events is imperative in the ICU setting in order to quickly identify trends in patient histories that may lead to mortality. With the increasing adoption of EHRs, a wealth of medical data is becoming increasingly available for secondary uses such as data exploration and predictive modeling. While data exploration and predictive modeling are useful for finding risk factors in ICU patients, the process is time consuming and requires a high level of computer programming ability. We propose explICU, a web service that hosts EHR data, displays timelines of patient events based upon user-specified preferences, performs predictive modeling in the back end, and displays results to the user via intuitive, interactive visualizations.

  14. Performance Measurement and Accommodation: Students with Visual Impairments on Pennsylvania's Alternate Assessment

    Science.gov (United States)

    Zebehazy, Kim T.; Zigmond, Naomi; Zimmerman, George J.

    2012-01-01

    Introduction: This study investigated the use of accommodations and the performance of students with visual impairments and severe cognitive disabilities on the Pennsylvania Alternate System of Assessment (PASA), an alternate performance-based assessment. Methods: Differences in test scores on the most basic level (level A) of the PASA of 286…

  15. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several of the state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, in order to study this temporal network behavior, we would need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool, a visual analytics system, that uses the Dragonfly network to investigate the temporal behavior and optimize the communication performance of a supercomputer. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively helps visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can not only help improve the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  16. COMPARISON OF USER PERFORMANCE WITH INTERACTIVE AND STATIC 3D VISUALIZATION – PILOT STUDY

    Directory of Open Access Journals (Sweden)

    L. Herman

    2016-06-01

    Full Text Available Interactive 3D visualizations of spatial data are currently available and popular through various applications such as Google Earth, ArcScene, etc. Several scientific studies have focused on user performance with 3D visualization, but static perspective views are used as stimuli in most of the studies. The main objective of this paper is to try to identify potential differences in user performance with static perspective views and interactive visualizations. This research is an exploratory study. An experiment was designed as a between-subject study, and a customized testing tool based on open web technologies was used for the experiment. The testing set consists of an initial questionnaire, a training task and four experimental tasks. Selection of the highest point and determination of visibility from the top of a mountain were used as the experimental tasks. The speed and accuracy of each participant's task performance were recorded. The movements and actions in the virtual environment were also recorded for the interactive variant. The results show that participants dealt with the tasks faster when using static visualization. The average error rate was also higher in the static variant. The findings from this pilot study will be used for further testing, especially for formulating hypotheses and designing subsequent experiments.

  17. A Dynamic Systems Theory Model of Visual Perception Development

    Science.gov (United States)

    Coté, Carol A.

    2015-01-01

    This article presents a model for understanding the development of visual perception from a dynamic systems theory perspective. It contrasts with a hierarchical or reductionist model that is often found in the occupational therapy literature. In this proposed model, vision and ocular motor abilities are not foundational to perception; they are seen…

  18. Task-specific visual cues for improving process model understanding

    NARCIS (Netherlands)

    Petrusel, Razvan; Mendling, Jan; Reijers, Hajo A.

    2016-01-01

    Context: Business process models support various stakeholders in managing business processes and designing process-aware information systems. In order to make effective use of these models, they have to be readily understandable. Objective: Prior research has emphasized the potential of visual cues to

  19. Consumer Control Points: Creating a Visual Food Safety Education Model for Consumers.

    Science.gov (United States)

    Schiffman, Carole B.

    Consumer education has always been a primary consideration in the prevention of food-borne illness. Using nutrition education and the new food guide as a model, this paper develops suggestions for a framework of microbiological food safety principles and a compatible visual model for communicating key concepts. Historically, visual food guides in…

  20. Bridging the gap between physiology and behavior: evidence from the sSoTS model of human visual attention.

    Science.gov (United States)

    Mavritsaki, Eirini; Heinke, Dietmar; Allen, Harriet; Deco, Gustavo; Humphreys, Glyn W

    2011-01-01

    We present the case for a role of biologically plausible neural network modeling in bridging the gap between physiology and behavior. We argue that spiking-level networks can allow "vertical" translation between physiological properties of neural systems and emergent "whole-system" performance-enabling psychological results to be simulated from implemented networks and also inferences to be made from simulations concerning processing at a neural level. These models also emphasize particular factors (e.g., the dynamics of performance in relation to real-time neuronal processing) that are not highlighted in other approaches and that can be tested empirically. We illustrate our argument from neural-level models that select stimuli by biased competition. We show that a model with biased competition dynamics can simulate data ranging from physiological studies of single-cell activity (Study 1) to whole-system behavior in human visual search (Study 2), while also capturing effects at an intermediate level, including performance breakdown after neural lesion (Study 3) and data from brain imaging (Study 4). We also show that, at each level of analysis, novel predictions can be derived from the biologically plausible parameters adopted, which we proceed to test (Study 5). We argue that, at least for studying the dynamics of visual attention, the approach productively links single-cell to psychological data.

  1. Visual and auditory digit-span performance in native and nonnative speakers

    NARCIS (Netherlands)

    Olsthoorn, N.M.; Andringa, S.; Hulstijn, J.H.

    2014-01-01

    We compared 121 native and 114 non-native speakers of Dutch (with 35 different first languages) on four digit-span tasks, varying modality (visual/auditory) and direction (forward/backward). An interaction was observed between nativeness and modality, such that, while natives performed better than

  2. Robustness Analysis of Visual Question Answering Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-11-01

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most of the current VQA research only focuses on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, image and these basic questions as input and then outputs the text-based answer of the main question about the given image. We claim that a robust VQA model is one whose performance is not changed much when related basic questions are also made available to it as input. We formulate the basic questions generation problem as a LASSO optimization, and also propose a large scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.
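
    The LASSO formulation mentioned above can be sketched as follows. This is a minimal illustration under assumed details (random vectors stand in for real question embeddings, and the regularization strength is arbitrary): the main question's embedding is expressed as a sparse non-negative combination of basic-question embeddings, and the resulting weights serve as similarity scores for ranking.

```python
# Minimal sketch (assumed encoding): rank basic questions for a main question by
# solving a LASSO problem; nonzero weights act as similarity scores.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
basic_embeddings = rng.standard_normal((500, 300))   # 500 basic questions, 300-d each
main_embedding = rng.standard_normal(300)            # embedding of the main question

# min_w ||q - B^T w||^2 + lambda * ||w||_1, with B's rows the basic-question embeddings
lasso = Lasso(alpha=0.05, positive=True, max_iter=10000)
lasso.fit(basic_embeddings.T, main_embedding)

scores = lasso.coef_
ranked = np.argsort(scores)[::-1]
print("top-3 basic questions:", ranked[:3], "weights:", np.round(scores[ranked[:3]], 3))
```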

  3. Robustness Analysis of Visual Question Answering Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-01-01

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most of the current VQA research only focuses on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, image and these basic questions as input and then outputs the text-based answer of the main question about the given image. We claim that a robust VQA model is one whose performance is not changed much when related basic questions are also made available to it as input. We formulate the basic questions generation problem as a LASSO optimization, and also propose a large scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  4. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Science.gov (United States)

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity in solving equations involving trigonometric functions. All inliers found are used to refine the winner solution through minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against the state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
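
    The hypothesize-and-verify structure described above can be sketched generically. The code below is not the authors' closed-form bicycle-model solver; it substitutes a simple 2-D rigid-motion estimate as the minimal hypothesis, scores hypotheses by a reprojection-style error, and refines the winner on all inliers, mirroring the RANSAC scheme in spirit only.

```python
# Illustrative RANSAC skeleton (assumed motion model and thresholds).
import numpy as np

rng = np.random.default_rng(1)

def estimate_rigid_motion(p, q):
    """Closed-form 2-D rotation + translation from >= 2 point correspondences."""
    pc, qc = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(pc.T @ qc)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # keep a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = q.mean(0) - R @ p.mean(0)
    return R, t

def ransac_motion(p, q, iters=200, thresh=0.05):
    best_inliers = np.zeros(len(p), bool)
    for _ in range(iters):
        idx = rng.choice(len(p), size=2, replace=False)    # minimal sample
        R, t = estimate_rigid_motion(p[idx], q[idx])
        err = np.linalg.norm(q - (p @ R.T + t), axis=1)    # reprojection-style error
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine the winning hypothesis on all inliers
    return estimate_rigid_motion(p[best_inliers], q[best_inliers])

# toy data: rotate by 5 degrees, translate, add noise and gross outliers
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
p = rng.uniform(-1, 1, (100, 2))
q = p @ R_true.T + np.array([0.1, 0.02]) + 0.005 * rng.standard_normal((100, 2))
q[:10] += rng.uniform(-1, 1, (10, 2))
R_est, t_est = ransac_motion(p, q)
print("estimated yaw (deg):", np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])))
```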

  5. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Yanhua Jiang

    2014-09-01

    Full Text Available This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity in solving equations involving trigonometric functions. All inliers found are used to refine the winner solution through minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against the state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments.

  6. Secondary visual workload capability with primary visual and kinesthetic-tactual displays

    Science.gov (United States)

    Gilson, R. D.; Burke, M. W.; Jagacinski, R. J.

    1978-01-01

    Subjects performed a cross-adaptive tracking task with a visual secondary display and either a visual or a quickened kinesthetic-tactual (K-T) primary display. The quickened K-T display resulted in superior secondary task performance. Comparisons of secondary workload capability with integrated and separated visual displays indicated that the superiority of the quickened K-T display was not simply due to the elimination of visual scanning. When subjects did not have to perform a secondary task, there was no significant difference between visual and quickened K-T displays in performing a critical tracking task.

  7. A study on dynamic model of steady-state visual evoked potentials.

    Science.gov (United States)

    Zhang, Shangen; Han, Xu; Chen, Xiaogang; Wang, Yijun; Gao, Shangkai; Gao, Xiaorong

    2018-04-04

    Significant progress has been made in the past two decades to considerably improve the performance of steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs). However, there are still some unsolved problems whose solution may help us improve BCI performance, one of which is that our understanding of the dynamic process of SSVEP is still superficial, especially for the transient-state response. This study introduced an antiphase stimulation method (antiphase: phase 0/π), which can simultaneously separate and extract SSVEP and event-related potential (ERP) signals from EEG and eliminate the interference of ERP with SSVEP. Based on the SSVEP signals obtained by the antiphase stimulation method, the envelope of SSVEP was extracted by the Hilbert transform, and the dynamic model of SSVEP was quantitatively studied by mathematical modeling. The step response of a second-order linear system was used to fit the envelope of SSVEP, and its characteristics were represented by four parameters with physical and physiological meanings: one was amplitude related, one was latency related and two were frequency related. This study attempted to use pre-stimulation paradigms to modulate the dynamic model parameters, and to quantitatively analyze the results by applying the dynamic model to further explore the pre-stimulation methods that had the potential to improve BCI performance. The results showed that the dynamic model fitted the SSVEP well under three pre-stimulation paradigms. The test results revealed that the parameters of the SSVEP dynamic model could be modulated by the pre-stimulation baseline luminance, and the gray baseline luminance pre-stimulation obtained the highest performance. This study proposed a dynamic model which was helpful to understand and utilize the transient characteristics of SSVEP. This study also found that pre-stimulation could be used to adjust the parameters of the SSVEP model and had the potential to improve performance.
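
    A minimal sketch of the modeling step, under assumed signal parameters: the SSVEP envelope is extracted with the Hilbert transform and fitted with the delayed step response of a second-order linear system, whose gain, latency, natural frequency, and damping play the role of the four physically interpretable parameters mentioned above.

```python
# Illustrative sketch (synthetic data; parameter names and values are assumptions).
import numpy as np
from scipy.signal import hilbert
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0, 3, 1 / fs)

def second_order_step(t, K, t0, wn, zeta):
    """Delayed step response of an underdamped second-order linear system."""
    tau = np.clip(t - t0, 0.0, None)
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    y = 1.0 - np.exp(-zeta * wn * tau) * (
        np.cos(wd * tau) + zeta / np.sqrt(1.0 - zeta ** 2) * np.sin(wd * tau))
    return K * y

# synthetic "SSVEP": a 12 Hz oscillation whose amplitude follows such a step response
true_env = second_order_step(t, K=2.0, t0=0.15, wn=8.0, zeta=0.4)
eeg = true_env * np.sin(2 * np.pi * 12 * t) + 0.05 * rng.standard_normal(t.size)

envelope = np.abs(hilbert(eeg))                        # Hilbert envelope
params, _ = curve_fit(second_order_step, t, envelope,
                      p0=[1.0, 0.1, 10.0, 0.5],
                      bounds=([0.0, 0.0, 0.1, 0.01], [10.0, 1.0, 50.0, 0.99]))
print("fitted [K, t0, wn, zeta]:", np.round(params, 3))
```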

  8. Simulating the role of visual selective attention during the development of perceptual completion

    OpenAIRE

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.

    2012-01-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds’ performance on a second measure, the percep...

  9. Learning temporal context shapes prestimulus alpha oscillations and improves visual discrimination performance.

    Science.gov (United States)

    Toosi, Tahereh; K Tousi, Ehsan; Esteky, Hossein

    2017-08-01

    Time is an inseparable component of every physical event that we perceive, yet it is not clear how the brain processes time or how the neuronal representation of time affects our perception of events. Here we asked subjects to perform a visual discrimination task while we changed the temporal context in which the stimuli were presented. We collected electroencephalography (EEG) signals in two temporal contexts. In predictable blocks stimuli were presented after a constant delay relative to a visual cue, and in unpredictable blocks stimuli were presented after variable delays relative to the visual cue. Four subsecond delays of 83, 150, 400, and 800 ms were used in the predictable and unpredictable blocks. We observed that predictability modulated the power of prestimulus alpha oscillations in the parieto-occipital sites: alpha power increased in the 300-ms window before stimulus onset in the predictable blocks compared with the unpredictable blocks. This modulation only occurred in the longest delay period, 800 ms, in which predictability also improved the behavioral performance of the subjects. Moreover, learning the temporal context shaped the prestimulus alpha power: modulation of prestimulus alpha power grew during the predictable block and correlated with performance enhancement. These results suggest that the brain is able to learn the subsecond temporal context of stimuli and use this to enhance sensory processing. Furthermore, the neural correlate of this temporal prediction is reflected in the alpha oscillations. NEW & NOTEWORTHY It is not well understood how the uncertainty in the timing of an external event affects its processing, particularly at subsecond scales. Here we demonstrate how a predictable timing scheme improves visual processing. We found that learning the predictable scheme gradually shaped the prestimulus alpha power. These findings indicate that the human brain is able to extract implicit subsecond patterns in the temporal context of

  10. Unification of three linear models for the transient visual system

    NARCIS (Netherlands)

    Brinker, den A.C.

    1989-01-01

    Three different linear filters are considered as a model describing the experimentally determined triphasic impulse responses of discs. These impulse responses are associated with the transient visual system. Each model reveals a different feature of the system. Unification of the models is

  11. Quality-Related Monitoring and Grading of Granulated Products by Weibull-Distribution Modeling of Visual Images with Semi-Supervised Learning.

    Science.gov (United States)

    Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong

    2016-06-29

    The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products and their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products or fabric textiles, are composed of a large number of independent particles or stochastically stacking locally homogeneous fragments, whose analysis and understanding remains challenging. A method of image statistical modeling-based OPQI for GP quality grading and monitoring by a Weibull distribution (WD) model with a semi-supervised learning classifier is presented. WD-model parameters (WD-MPs) of GP images' spatial structures, obtained with omnidirectional Gaussian derivative filtering (OGDF), which were demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, by integrating two independent classifiers with complementary nature in the face of scarce labeled samples. Effectiveness of the proposed OPQI method was verified and compared in the field of automated rice quality grading with commonly-used methods and showed superior performance, which lays a foundation for the quality control of GPs on assembly lines.
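
    A simplified stand-in for the feature-extraction step is sketched below: Gaussian-derivative filter responses of an image are pooled and fitted with a Weibull distribution, whose shape and scale parameters act as the WD-MP visual features. The filtering here is ordinary separable Gaussian derivatives rather than the paper's omnidirectional variant, and the image is a random placeholder.

```python
# Illustrative sketch (assumed filter settings and placeholder image).
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
image = rng.random((256, 256))                 # placeholder for a product image

# first-order Gaussian derivatives along x and y (sigma is an assumed setting)
gx = gaussian_filter1d(image, sigma=2.0, axis=1, order=1)
gy = gaussian_filter1d(image, sigma=2.0, axis=0, order=1)
magnitude = np.hypot(gx, gy).ravel()
magnitude = magnitude[magnitude > 1e-6]        # drop near-zero responses

# Weibull-model parameters used as visual features
shape, loc, scale = weibull_min.fit(magnitude, floc=0.0)
feature_vector = np.array([shape, scale])
print("Weibull shape/scale features:", np.round(feature_vector, 4))
```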

  12. The Theory of Visual Attention without the race: a new model of visual selection

    DEFF Research Database (Denmark)

    Andersen, Tobias; Kyllingsbæk, Søren

    2012-01-01

    constrained by a limited processing capacity or rate, which is distributed among target and distractor objects with distractor objects receiving a smaller proportion of resources due to attentional filtering. Encoding into a limited visual short-term memory is implemented as a race model. Given its major...

  13. Visual perspective in autobiographical memories: reliability, consistency, and relationship to objective memory performance.

    Science.gov (United States)

    Siedlecki, Karen L

    2015-01-01

    Visual perspective in autobiographical memories was examined in terms of reliability, consistency, and relationship to objective memory performance in a sample of 99 individuals. Autobiographical memories may be recalled from two visual perspectives--a field perspective in which individuals experience the memory through their own eyes, or an observer perspective in which individuals experience the memory from the viewpoint of an observer in which they can see themselves. Participants recalled nine word-cued memories that differed in emotional valence (positive, negative and neutral) and rated their memories on 18 scales. Results indicate that visual perspective was the most reliable memory characteristic overall and is consistently related to emotional intensity at the time of recall and amount of emotion experienced during the memory. Visual perspective is unrelated to memory for words, stories, abstract line drawings or faces.

  14. Digital representations of the real world how to capture, model, and render visual reality

    CERN Document Server

    Magnor, Marcus A; Sorkine-Hornung, Olga; Theobalt, Christian

    2015-01-01

    Create Genuine Visual Realism in Computer Graphics. Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality explains how to portray visual worlds with a high degree of realism using the latest video acquisition technology, computer graphics methods, and computer vision algorithms. It explores the integration of new capture modalities, reconstruction approaches, and visual perception into the computer graphics pipeline. Understand the Entire Pipeline from Acquisition, Reconstruction, and Modeling to Realistic Rendering and Applications: The book covers sensors for

  15. KENO3D Visualization Tool for KENO V.a and KENO-VI Geometry Models

    International Nuclear Information System (INIS)

    Horwedel, J.E.; Bowman, S.M.

    2000-01-01

    Criticality safety analyses often require detailed modeling of complex geometries. Effective visualization tools can enhance checking the accuracy of these models. This report describes the KENO3D visualization tool developed at the Oak Ridge National Laboratory (ORNL) to provide visualization of KENO V.a and KENO-VI criticality safety models. The development of KENO3D is part of the current efforts to enhance the SCALE (Standardized Computer Analyses for Licensing Evaluations) computer software system

  16. MENCARI MODEL EVALUASI DENGAN PENDEKATAN YANG SESUAI UNTUK PENDIDIKAN DESAIN KOMUNIKASI VISUAL

    Directory of Open Access Journals (Sweden)

    Maria N D K Indrayana

    2002-01-01

    Full Text Available The prospects for visual communication design in the coming years look increasingly bright, as shown by the growth in advertising spending over the last three years and by the freedom of the press, which has given rise to many new print and electronic media; the profession will therefore play an increasingly important role. On the other hand, a number of advertisements have been withdrawn during broadcast because of public criticism, and the designer, as the creative party, is considered partly responsible. Against this background, visual communication design education in Indonesia, as the institution that trains future designers, is expected to produce designers of the best quality, so the study process becomes crucial. For that reason, an evaluation model with an approach suited to visual communication design education is needed today. Abstract also available in Bahasa Indonesia. Keywords: evaluation model, education of visual communication design.

  17. A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements.

    Science.gov (United States)

    Mohsenzadeh, Yalda; Dash, Suryadeep; Crawford, J Douglas

    2016-01-01

    In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement; saccades and smooth pursuit. Our proposed model is a non-linear SSM and implemented through a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks.
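
    The updating idea can be illustrated with a far simpler filter than the authors' dual-EKF recurrent network: a linear Kalman filter that shifts the remembered, gaze-centered target position by an efference copy of the eye displacement and, when a visual measurement happens to be available, corrects the estimate. All noise settings below are assumed.

```python
# Minimal sketch (not the authors' dual-EKF/RBF network; noise settings assumed).
import numpy as np

def kalman_update_target(x, P, eye_displacement, z=None,
                         Q=np.eye(2) * 1e-3, R=np.eye(2) * 1e-2):
    """One predict(/correct) step for the remembered 2-D target position x."""
    # predict: the gaze-centered position shifts opposite to the eye movement
    x_pred = x - eye_displacement
    P_pred = P + Q
    if z is None:                        # no visual re-measurement available
        return x_pred, P_pred
    # correct with a (rare) visual measurement of the target
    K = P_pred @ np.linalg.inv(P_pred + R)
    x_new = x_pred + K @ (z - x_pred)
    P_new = (np.eye(2) - K) @ P_pred
    return x_new, P_new

# remembered target 10 deg right of fixation; a 6-deg rightward saccade intervenes
x, P = np.array([10.0, 0.0]), np.eye(2) * 1e-2
x, P = kalman_update_target(x, P, eye_displacement=np.array([6.0, 0.0]))
print("updated gaze-centered target estimate:", x)   # approximately [4, 0]
```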

  18. Visualization of logistic algorithm in Wilson model

    Science.gov (United States)

    Glushchenko, A. S.; Rodin, V. A.; Sinegubov, S. V.

    2018-05-01

    Economic order quantity (EOQ), defined by Wilson's model, is widely used at different stages of the production and distribution of different products. It is useful for making decisions in the management of inventories, providing a more efficient business operation and thus bringing more economic benefits. There is a large amount of reference material and there are extensive computer shells that help solve various logistics problems. However, the use of large computer environments is not always justified and requires special user training. A tense supply schedule in a logistics model is optimal if and only if the planning horizon coincides with the beginning of the next possible delivery. For all other possible planning horizons, this plan is not optimal. It is significant that when the planning horizon changes, the plan changes immediately throughout the entire supply chain. In this paper, an algorithm and a program for visualizing models of the optimal value of supplies and their number, depending on the magnitude of the planning horizon, have been obtained. The program allows one to trace (visually and quickly) all the main parameters of the optimal plan on the charts. The results of the paper represent a part of the authors' research work in the field of optimization of protection and support services of ports in the Russian North.
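
    For reference, the quantity the visualization tracks is the classical Wilson lot size. The sketch below computes it from standard inputs; the demand, ordering-cost, and holding-cost figures are illustrative assumptions.

```python
# Worked sketch of the classical Wilson EOQ (illustrative numbers).
from math import sqrt

def eoq(demand_rate, ordering_cost, holding_cost):
    """Economic order quantity Q* = sqrt(2*D*S/H) and the implied order count."""
    q_star = sqrt(2 * demand_rate * ordering_cost / holding_cost)
    orders_per_period = demand_rate / q_star
    return q_star, orders_per_period

Q, n = eoq(demand_rate=12000, ordering_cost=50.0, holding_cost=2.4)
print(f"optimal order size ~ {Q:.1f} units, ~ {n:.2f} orders per period")
```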

  19. Combined discriminative global and generative local models for visual tracking

    Science.gov (United States)

    Zhao, Liujun; Zhao, Qingjie; Chen, Yanming; Lv, Peng

    2016-03-01

    It is a challenging task to develop an effective visual tracking algorithm due to factors such as pose variation and rotation. Combined discriminative global and generative local appearance models are proposed to address this problem. Specifically, we develop a compact global object representation by extracting the low-frequency coefficients of the color and texture of the object based on the two-dimensional discrete cosine transform. Then, with the global appearance representation, we learn a discriminative metric classifier in an online fashion to differentiate the target object from its background, which is very important to robustly indicate the changes in appearance. Second, we develop a new generative local model that exploits the scale invariant feature transform and its spatial geometric information. To make use of the advantages of the global discriminative model and the generative local model, we incorporate them into a Bayesian inference framework. In this framework, the complementary models help the tracker locate the target more accurately. Furthermore, we use different mechanisms to update global and local templates to capture appearance changes. The experimental results demonstrate that the proposed approach performs favorably against state-of-the-art methods in terms of accuracy.
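
    The global appearance representation can be sketched as follows, under assumed patch size and coefficient count: a grayscale object patch is transformed with a two-dimensional DCT and only the low-frequency (top-left) block of coefficients is kept as a compact descriptor.

```python
# Illustrative sketch (assumed patch size and number of retained coefficients).
import numpy as np
from scipy.fftpack import dct

def low_frequency_dct_features(patch, keep=8):
    """2-D DCT of the patch; return the top-left keep x keep coefficient block."""
    coeffs = dct(dct(patch, axis=0, norm='ortho'), axis=1, norm='ortho')
    return coeffs[:keep, :keep].ravel()

patch = np.random.default_rng(0).random((32, 32))   # placeholder object patch
features = low_frequency_dct_features(patch)
print("global descriptor length:", features.size)   # 64 low-frequency coefficients
```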

  20. Finding the best visualization of an ontology

    DEFF Research Database (Denmark)

    Fabritius, Christina; Madsen, Nadia; Clausen, Jens

    2006-01-01

    An ontology is a classification model for a given domain. In information retrieval ontologies are used to perform broad searches. An ontology can be visualized as nodes and edges. Each node represents an element and each edge a relation between a parent and a child element. Working with an ontology… One method uses a discrete location model to create an initial solution and we propose heuristic methods to further improve the visual result. We evaluate the visual results according to our success criteria and the feedback from users. Running times of the heuristic indicate that an improved version…

  1. Finding the best visualization of an ontology

    DEFF Research Database (Denmark)

    Fabritius, Christina Valentin; Madsen, Nadia Lyngaa; Clausen, Jens

    2004-01-01

    An ontology is a classification model for a given domain. In information retrieval ontologies are used to perform broad searches. An ontology can be visualized as nodes and edges. Each node represents an element and each edge a relation between a parent and a child element. Working with an ontology… One method uses a discrete location model to create an initial solution and we propose heuristic methods to further improve the visual result. We evaluate the visual results according to our success criteria and the feedback from users. Running times of the heuristic indicate that an improved version…

  2. Computerized evaluation of deambulatory pattern before and after visual rehabilitation treatment performed with biofeedback in visually impaired patients suffering from macular degeneration

    Directory of Open Access Journals (Sweden)

    Fernanda Pacella

    2016-09-01

    Full Text Available Aims: The aim of this study was twofold: the primary endpoint was to evaluate the efficacy of visual rehabilitation of visually impaired patients with macular degeneration (AMD). The secondary endpoint was to assess the effect of the rehabilitation treatment on the ambulatory pattern using a computerized evaluation of walking, focusing on the space-time parameters that are affected in patients with visual impairment. Methods: 10 patients with AMD were enrolled (6 males and 4 females) and 15 eyes were examined at the Department of Sense Organs, Faculty of Medicine and Dentistry, Sapienza University of Rome, Italy. Visual rehabilitation was carried out with an MP1 microperimeter using biofeedback examination. Patients were asked to move their eyes in coordination with an audible feedback that alerts the patient when the previously selected fixation target is being fixated properly. All patients underwent 10 sessions lasting 15 minutes each for each eye, once per week. The best corrected visual acuity (BCVA) was assessed at distance with the ETDRS optotype in logMAR, and at near at 25 cm by adding a +4 sphere to the BCVA. For each eye the print body (PB) at a distance of 25 cm was measured; fixation stability over 30 seconds was examined with the microperimeter. Gait analysis was performed with the ELITE system (BTS SpA, Milan, Italy). Results: At the end of the rehabilitation treatment with biofeedback, a marked improvement in BCVA was found. The BCVA before the rehabilitation treatment was ETDRS 12 letters = 0.86 logMAR; at the end of the visual rehabilitation it was 16 letters = 0.78 logMAR. Near visual acuity showed a decrease in the print body measurement (PB) and a statistically significant improvement in fixation stability. Analysis of the spatial and temporal parameters of the gait cycle, aimed at assessing the global aspects of gait (speed, rhythm, symmetry, fluidity, dynamic balance), showed no significant changes

  3. Changes in Drivers’ Visual Performance during the Collision Avoidance Process as a Function of Different Field of Views at Intersections

    Science.gov (United States)

    Yan, Xuedong; Zhang, Xinran; Zhang, Yuting; Li, Xiaomeng; Yang, Zhuo

    2016-01-01

    The intersection field of view (IFOV) indicates an extent that the visual information can be observed by drivers. It has been found that further enhancing IFOV can significantly improve emergent collision avoidance performance at intersections, such as faster brake reaction time, smaller deceleration rate, and lower traffic crash involvement risk. However, it is not known how IFOV affects drivers’ eye movements, visual attention and the relationship between visual searching and traffic safety. In this study, a driving simulation experiment was conducted to uncover the changes in drivers’ visual performance during the collision avoidance process as a function of different field of views at an intersection by using an eye tracking system. The experimental results showed that drivers’ ability in identifying the potential hazard in terms of visual searching was significantly affected by different IFOV conditions. As the IFOVs increased, drivers had longer gaze duration (GD) and more number of gazes (NG) in the intersection surrounding areas and paid more visual attention to capture critical visual information on the emerging conflict vehicle, thus leading to a better collision avoidance performance and a lower crash risk. It was also found that female drivers had a better visual performance and a lower crash rate than male drivers. From the perspective of drivers’ visual performance, the results strengthened the evidence that further increasing intersection sight distance standards should be encouraged for enhancing traffic safety. PMID:27716824

  4. The role of visual and spatial working memory in forming mental models derived from survey and route descriptions.

    Science.gov (United States)

    Meneghetti, Chiara; Labate, Enia; Pazzaglia, Francesca; Hamilton, Colin; Gyselinck, Valérie

    2017-05-01

    This study examines the involvement of spatial and visual working memory (WM) in the construction of flexible spatial models derived from survey and route descriptions. Sixty young adults listened to environment descriptions, 30 from a survey perspective and the other 30 from a route perspective, while they performed spatial (spatial tapping [ST]) and visual (dynamic visual noise [DVN]) secondary tasks - believed to overload the spatial and visual WM components, respectively - or no secondary task (control, C). Their mental representations of the environment were tested by free recall and a verification test with both route and survey statements. Results showed that, for both recall tasks, accuracy was worse in the ST than in the C or DVN conditions. In the verification test, both ST and DVN decreased accuracy for sentences testing spatial relations from the perspective opposite to the one learnt, compared with sentences from the same perspective; only ST had a stronger interference effect than the C condition for sentences from the opposite perspective. Overall, these findings indicate that both visual and spatial WM, and especially the latter, are involved in the construction of perspective-flexible spatial models. © 2016 The British Psychological Society.

  5. Adaptive Correlation Model for Visual Tracking Using Keypoints Matching and Deep Convolutional Feature

    Directory of Open Access Journals (Sweden)

    Yuankun Li

    2018-02-01

    Full Text Available Although correlation filter (CF)-based visual tracking algorithms have achieved appealing results, there are still some problems to be solved. When the target object goes through long-term occlusions or scale variation, the correlation model used in existing CF-based algorithms will inevitably learn some non-target information or partial-target information. In order to avoid model contamination and enhance the adaptability of model updating, we introduce the keypoints matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN) to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker has achieved satisfactory performance in a wide range of challenging tracking scenarios.
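
    The adaptive update idea can be sketched as follows. The matcher, thresholds, and base learning rate below are assumptions, not the paper's settings: a keypoint-matching score between the stored template descriptors and the current frame scales the correlation-filter learning rate, so the model is barely updated when matching suggests occlusion or drift.

```python
# Minimal sketch (assumed names/thresholds): scale the CF learning rate by a
# keypoint-matching score so the template freezes under likely occlusion.
import numpy as np

def matching_score(desc_template, desc_frame, ratio=0.75):
    """Fraction of template descriptors passing a Lowe-style ratio test."""
    if len(desc_template) == 0 or len(desc_frame) < 2:
        return 0.0
    d = np.linalg.norm(desc_template[:, None, :] - desc_frame[None, :, :], axis=2)
    nearest = np.sort(d, axis=1)
    good = nearest[:, 0] < ratio * nearest[:, 1]
    return good.mean()

def update_model(model, new_model, score, eta_max=0.025):
    """Interpolate the CF model with a learning rate scaled by the match score."""
    eta = eta_max * score        # score in [0, 1]; low score => nearly frozen model
    return (1 - eta) * model + eta * new_model

rng = np.random.default_rng(0)
model, new_model = rng.random((31, 31)), rng.random((31, 31))
score = matching_score(rng.random((50, 32)), rng.random((60, 32)))
model = update_model(model, new_model, score)
print("adaptive learning rate used:", round(0.025 * score, 4))
```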

  6. Declarative language design for interactive visualization.

    Science.gov (United States)

    Heer, Jeffrey; Bostock, Michael

    2010-01-01

    We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.

  7. Perceptual Learning in Children With Infantile Nystagmus: Effects on Visual Performance.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke; Goossens, Jeroen

    2016-08-01

    To evaluate whether computerized training with a crowded or uncrowded letter-discrimination task reduces visual impairment (VI) in 6- to 11-year-old children with infantile nystagmus (IN) who suffer from increased foveal crowding, reduced visual acuity, and reduced stereopsis. Thirty-six children with IN were included. Eighteen had idiopathic IN and 18 had oculocutaneous albinism. These children were divided in two training groups matched on age and diagnosis: a crowded training group (n = 18) and an uncrowded training group (n = 18). Training occurred two times per week during 5 weeks (3500 trials per training). Eleven age-matched children with normal vision were included to assess baseline differences in task performance and test-retest learning. Main outcome measures were task-specific performance, distance and near visual acuity (DVA and NVA), intensity and extent of (foveal) crowding at 5 m and 40 cm, and stereopsis. Training resulted in task-specific improvements. Both training groups also showed uncrowded and crowded DVA improvements (0.10 ± 0.02 and 0.11 ± 0.02 logMAR) and improved stereopsis (670 ± 249″). Crowded NVA improved only in the crowded training group (0.15 ± 0.02 logMAR), which was also the only group showing a reduction in near crowding intensity (0.08 ± 0.03 logMAR). Effects were not due to test-retest learning. Perceptual learning with or without distractors reduces the extent of crowding and improves visual acuity in children with IN. Training with distractors improves near vision more than training with single optotypes. Perceptual learning also transfers to DVA and NVA under uncrowded and crowded conditions and even stereopsis. Learning curves indicated that improvements may be larger after longer training.

  8. Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery

    Science.gov (United States)

    Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.

    2017-05-01

    In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced Hi astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy to use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.

  9. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    Science.gov (United States)

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  10. A Global System for Transportation Simulation and Visualization in Emergency Evacuation Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Wei [ORNL; Liu, Cheng [ORNL; Thomas, Neil [ORNL; Bhaduri, Budhendra L [ORNL; Han, Lee [University of Tennessee, Knoxville (UTK)

    2015-01-01

    Simulation-based studies are frequently used for evacuation planning and decision making processes. Given the complexity of transportation systems and data availability, most evacuation simulation models focus on certain geographic areas. With routine improvement of OpenStreetMap road networks and LandScan™ global population distribution data, we present WWEE, a uniform system for world-wide emergency evacuation simulations. WWEE uses a unified data structure for simulation inputs. It also integrates a super-node trip distribution model as the default simulation parameter to improve the system's computational performance. Two levels of visualization tools are implemented for evacuation performance analysis, including link-based macroscopic visualization and vehicle-based microscopic visualization. For left-hand and right-hand traffic patterns in different countries, the authors propose a mirror technique to experiment with both scenarios without significantly changing traffic simulation models. Ten cities in the US, Europe, the Middle East, and Asia are modeled for demonstration. With default traffic simulation models for fast and easy-to-use evacuation estimation and visualization, WWEE also retains the capability of interactive operation for users to adopt customized traffic simulation models. For the first time, WWEE provides a unified platform for global evacuation researchers to estimate and visualize the performance of their strategies for transportation systems under evacuation scenarios.

  11. Measuring and modeling salience with the theory of visual attention.

    Science.gov (United States)

    Krüger, Alexander; Tünnermann, Jan; Scharlau, Ingrid

    2017-08-01

    For almost three decades, the theory of visual attention (TVA) has been successful in mathematically describing and explaining a wide variety of phenomena in visual selection and recognition with high quantitative precision. Interestingly, the influence of feature contrast on attention has been included in TVA only recently, although it has been extensively studied outside the TVA framework. The present approach further develops this extension of TVA's scope by measuring and modeling salience. An empirical measure of salience is achieved by linking different (orientation and luminance) contrasts to a TVA parameter. In the modeling part, the function relating feature contrasts to salience is described mathematically and tested against alternatives by Bayesian model comparison. This model comparison reveals that the power function is an appropriate model of salience growth in the dimensions of orientation and luminance contrast. Furthermore, if contrasts from the two dimensions are combined, salience adds up additively.
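
    A compact way to write down the model the abstract describes, with placeholder symbols (the beta and gamma parameters and the subscripts are ours, not the paper's): per-dimension salience grows as a power function of feature contrast, and the two dimensions combine additively.

    ```latex
    S_d(c_d) = \beta_d \, c_d^{\,\gamma_d}, \qquad d \in \{\text{orientation},\ \text{luminance}\},
    \qquad
    S(c_{\text{ori}}, c_{\text{lum}}) = S_{\text{ori}}(c_{\text{ori}}) + S_{\text{lum}}(c_{\text{lum}})
    ```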

  12. Substantial adverse association of visual and vascular comorbidities on visual disability in multiple sclerosis.

    Science.gov (United States)

    Marrie, Ruth Ann; Cutter, Gary; Tyry, Tuula

    2011-12-01

    Visual comorbidities are common in multiple sclerosis (MS) but the impact of visual comorbidities on visual disability is unknown. We assessed the impact of visual and vascular comorbidities on severity of visual disability in MS. In 2006, we queried participants of the North American Research Committee on Multiple Sclerosis (NARCOMS) about cataracts, glaucoma, uveitis, hypertension, hypercholesterolemia, heart disease, diabetes and peripheral vascular disease. We assessed visual disability using the Vision subscale of Performance Scales. Using Cox regression, we investigated whether visual or vascular comorbidities affected the time between MS symptom onset and the development of mild, moderate and severe visual disability. Of 8983 respondents, 1415 (15.9%) reported a visual comorbidity while 4745 (52.8%) reported a vascular comorbidity. The median (interquartile range) visual score was 1 (0-2). In a multivariable Cox model the risk of mild visual disability was higher among participants with vascular (hazard ratio [HR] 1.45; 95% confidence interval [CI]: 1.39-1.51) and visual comorbidities (HR 1.47; 95% CI: 1.37-1.59). Vascular and visual comorbidities were similarly associated with increased risks of moderate and severe visual disability. Visual and vascular comorbidities are associated with progression of visual disability in MS. Clinicians hearing reports of worsening visual symptoms in MS patients should consider visual comorbidities as contributing factors. Further study of these issues using objective, systematic neuro-ophthalmologic evaluations is warranted.
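
    For readers unfamiliar with the modelling step, the sketch below reproduces the general form of such a multivariable Cox analysis in Python with the lifelines package; the column names and the tiny DataFrame are hypothetical stand-ins, not the NARCOMS data.

    ```python
    # Fit a Cox proportional hazards model for time from MS symptom onset to
    # mild visual disability, with comorbidity indicators as covariates.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        "years_to_mild_disability": [4.0, 7.5, 2.1, 10.0, 6.3, 3.2, 8.8, 5.1],  # follow-up time
        "reached_mild":             [1,   0,   1,   1,    0,   1,   1,   0],    # event indicator
        "visual_comorbidity":       [1,   0,   0,   1,    1,   0,   1,   0],
        "vascular_comorbidity":     [0,   1,   1,   1,    0,   0,   1,   1],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="years_to_mild_disability", event_col="reached_mild")
    cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs for each comorbidity
    ```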

  13. A Web-based Visualization System for Three Dimensional Geological Model using Open GIS

    Science.gov (United States)

    Nemoto, T.; Masumoto, S.; Nonogaki, S.

    2017-12-01

    A three-dimensional geological model is important information in various fields such as environmental assessment, urban planning, resource development, waste management and disaster mitigation. In this study, we have developed a web-based visualization system for 3D geological models using free and open source software. The system has been successfully implemented by integrating the web mapping engine MapServer and the geographic information system GRASS. MapServer handles the mapping of horizontal cross sections of the 3D geological model and a topographic map. GRASS provides the core components for management, analysis and image processing of the geological model. Online access to GRASS functions is enabled through PyWPS, an implementation of the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard. The system has two main functions. The two-dimensional visualization function allows users to generate horizontal and vertical cross sections of the 3D geological model. These images are delivered via the OGC WMS (Web Map Service) and WPS standards. Horizontal cross sections are overlaid on the topographic map; a vertical cross section is generated by clicking a start point and an end point on the map. The three-dimensional visualization function allows users to visualize geological boundary surfaces and a panel diagram from various angles by mouse operation. WebGL, a web technology that brings hardware-accelerated 3D graphics to the browser without additional software, is utilized for 3D visualization. The geological boundary surfaces can be downloaded so that the geologic structure can be incorporated into CAD designs and into models for various simulations. This study was supported by JSPS KAKENHI Grant Number JP16K00158.
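
    A minimal client-side sketch of how such a WMS-delivered cross-section could be requested with OWSLib; the endpoint URL, layer name and bounding box are invented, and only the OGC request pattern itself is meant to be illustrative.

    ```python
    # Request a horizontal cross-section image from a (hypothetical) WMS endpoint.
    from owslib.wms import WebMapService

    wms = WebMapService("https://example.org/cgi-bin/mapserv?map=geology.map",  # hypothetical
                        version="1.1.1")
    img = wms.getmap(layers=["horizontal_section"],      # hypothetical layer name
                     styles=[""],
                     srs="EPSG:4326",
                     bbox=(139.0, 35.0, 140.0, 36.0),    # lon/lat extent (made up)
                     size=(512, 512),
                     format="image/png",
                     transparent=True)
    with open("section.png", "wb") as f:
        f.write(img.read())
    ```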

  14. 3D visualization of ultra-fine ICON climate simulation data

    Science.gov (United States)

    Röber, Niklas; Spickermann, Dela; Böttinger, Michael

    2016-04-01

    Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, ranging from global coupled climate simulations down to very detailed and high-resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well for high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high-resolution data. The ICON model has been used for eddy-resolving ocean simulations; we developed specific plugins for the freely available visualization software ParaView and Vapor, which allow us to read and handle data at this scale. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore, we discuss our current efforts to improve our visualization capabilities, thereby exploring the potential of regular in-situ visualization, as well as of in-situ compression / post visualization.

  15. A Novel Active Imaging Model to Design Visual Systems: A Case of Inspection System for Specular Surfaces

    Directory of Open Access Journals (Sweden)

    Jorge Azorin-Lopez

    2017-06-01

    Full Text Available The use of visual information is a very well known input from different kinds of sensors. However, most perception problems are individually modeled and tackled. It is necessary to provide a general imaging model that allows us to parametrize different input systems as well as their problems and possible solutions. In this paper, we present an active vision model that considers the imaging system as a whole (including the camera, the lighting system, and the object to be perceived) in order to propose solutions for automated visual systems that exhibit perception problems. As a concrete case study, we instantiate the model in a real and still challenging application: automated visual inspection. It is one of the most widely used quality control systems for detecting defects on manufactured objects, but it presents problems for specular products. We model these perception problems taking into account the environmental conditions and camera parameters that allow a system to properly perceive the specific object characteristics needed to determine defects on surfaces. The validation of the model has been carried out using simulations, which provide an efficient way to perform a large set of tests (different environmental conditions and camera parameters) as a step prior to experimentation in real manufacturing environments, which are more complex in terms of instrumentation and more expensive. The results prove the success of the model in adjusting scale, viewpoint and lighting conditions to detect structural and color defects on specular surfaces.

  16. The effect of computer-aided detection markers on visual search and reader performance during concurrent reading of CT colonography

    International Nuclear Information System (INIS)

    Helbren, Emma; Taylor, Stuart A.; Fanshawe, Thomas R.; Mallett, Susan; Phillips, Peter; Boone, Darren; Gale, Alastair; Altman, Douglas G.; Manning, David; Halligan, Steve

    2015-01-01

    We aimed to identify the effect of computer-aided detection (CAD) on visual search and performance in CT Colonography (CTC) of inexperienced and experienced readers. Fifteen endoluminal CTC examinations were recorded, each with one polyp, and two videos were generated, one with and one without a CAD mark. Forty-two readers (17 experienced, 25 inexperienced) interpreted the videos during infrared visual search recording. CAD markers and polyps were treated as regions of interest in data processing. This multi-reader, multi-case study was analysed using multilevel modelling. CAD drew readers' attention to polyps faster, accelerating identification times: median 'time to first pursuit' was 0.48 s (IQR 0.27 to 0.87 s) with CAD, versus 0.58 s (IQR 0.35 to 1.06 s) without. For inexperienced readers, CAD also held visual attention for longer. All visual search metrics used to assess visual gaze behaviour demonstrated statistically significant differences when "with" and "without" CAD were compared. A significant increase in the number of correct polyp identifications across all readers was seen with CAD (74 % without CAD, 87 % with CAD; p < 0.001). CAD significantly alters visual search and polyp identification in readers viewing three-dimensional endoluminal CTC. For polyp and CAD marker pursuit times, CAD generally exerted a larger effect on inexperienced readers. (orig.)
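
    To make the headline metric concrete, the toy function below computes a "time to first pursuit" as the first gaze timestamp falling inside a region of interest; the gaze samples and ROI coordinates are invented, and the study's actual eye-tracking pipeline is certainly more involved.

    ```python
    # First timestamp at which gaze lands inside the polyp (or CAD marker) ROI.

    def time_to_first_pursuit(gaze_samples, roi):
        """gaze_samples: iterable of (t_seconds, x, y); roi: (xmin, ymin, xmax, ymax).
        Returns the first timestamp with gaze inside the ROI, or None if never."""
        xmin, ymin, xmax, ymax = roi
        for t, x, y in gaze_samples:
            if xmin <= x <= xmax and ymin <= y <= ymax:
                return t
        return None

    samples = [(0.10, 50, 40), (0.30, 210, 190), (0.48, 305, 260), (0.70, 310, 255)]
    polyp_roi = (290, 240, 330, 280)
    print(time_to_first_pursuit(samples, polyp_roi))  # -> 0.48
    ```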

  17. The effect of computer-aided detection markers on visual search and reader performance during concurrent reading of CT colonography

    Energy Technology Data Exchange (ETDEWEB)

    Helbren, Emma; Taylor, Stuart A. [University College London, Centre for Medical Imaging, London (United Kingdom)]; Fanshawe, Thomas R.; Mallett, Susan [University of Oxford, Nuffield Department of Primary Care Health Sciences, Oxford (United Kingdom)]; Phillips, Peter [University of Cumbria, Health and Medical Sciences Group, Lancaster (United Kingdom)]; Boone, Darren [Colchester Hospital University NHS Foundation Trust and Anglia University, Colchester (United Kingdom)]; Gale, Alastair [Loughborough University, Applied Vision Research Centre, Loughborough (United Kingdom)]; Altman, Douglas G. [University of Oxford, Centre for Statistics in Medicine, Oxford (United Kingdom)]; Manning, David [Lancaster University, Lancaster Medical School, Faculty of Health and Medicine, Lancaster (United Kingdom)]; Halligan, Steve [University College London, Centre for Medical Imaging, London (United Kingdom); University College Hospital, Gastrointestinal Radiology, University College London, Centre for Medical Imaging, Podium Level 2, London, NW1 2BU (United Kingdom)]

    2015-06-01

    We aimed to identify the effect of computer-aided detection (CAD) on visual search and performance in CT Colonography (CTC) of inexperienced and experienced readers. Fifteen endoluminal CTC examinations were recorded, each with one polyp, and two videos were generated, one with and one without a CAD mark. Forty-two readers (17 experienced, 25 inexperienced) interpreted the videos during infrared visual search recording. CAD markers and polyps were treated as regions of interest in data processing. This multi-reader, multi-case study was analysed using multilevel modelling. CAD drew readers' attention to polyps faster, accelerating identification times: median 'time to first pursuit' was 0.48 s (IQR 0.27 to 0.87 s) with CAD, versus 0.58 s (IQR 0.35 to 1.06 s) without. For inexperienced readers, CAD also held visual attention for longer. All visual search metrics used to assess visual gaze behaviour demonstrated statistically significant differences when "with" and "without" CAD were compared. A significant increase in the number of correct polyp identifications across all readers was seen with CAD (74 % without CAD, 87 % with CAD; p < 0.001). CAD significantly alters visual search and polyp identification in readers viewing three-dimensional endoluminal CTC. For polyp and CAD marker pursuit times, CAD generally exerted a larger effect on inexperienced readers. (orig.)

  18. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    Science.gov (United States)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria - a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. This comparative study

  19. ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.

    Science.gov (United States)

    Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng

    2017-08-30

    While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they use, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance- and subset-level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.

  20. Can theory be embedded in visual interventions to promote self-management? A proposed model and worked example.

    Science.gov (United States)

    Williams, B; Anderson, A S; Barton, K; McGhee, J

    2012-12-01

    Nurses are increasingly involved in a range of strategies to encourage patient behaviours that improve self-management. If nurses are to be involved in, or indeed lead, the development of such interventions, then processes are required that enhance the likelihood that the interventions will yield evidence that is both robust and usable in practice. Although behavioural interventions have been predominantly based on written text or the spoken word, increasing numbers are now drawing on visual media to communicate their message, despite an evidence base that is still only emerging. The use of such media in health interventions is likely to increase due to technological advances enabling easier and cheaper production, and an increasing social preference for visual forms of communication. However, such media are often developed highly pragmatically and intuitively, rather than with theory and evidence informing their content and form. Such a process may be at best inefficient and at worst potentially harmful. This paper performs two functions. Firstly, it discusses and argues why visually based interventions may be a powerful medium for behaviour change; and secondly, it proposes a model, developed from the MRC Framework for the Development and Evaluation of Complex Interventions, to guide the creation of theory-informed visual interventions. It employs a case study of the development of an intervention to motivate involvement in a lifestyle intervention among people with increased cardiovascular risk. In doing this we argue for a step-wise model which includes: (1) the identification of a theoretical basis and associated concepts; (2) the development of visual narrative to establish structure; (3) the visual rendering of narrative and concepts; and (4) the assessment of interpretation and impact among the intended patient group. We go on to discuss the theoretical and methodological limitations of the model. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Subtle alterations in memory systems and normal visual attention in the GAERS model of absence epilepsy.

    Science.gov (United States)

    Marques-Carneiro, J E; Faure, J-B; Barbelivien, A; Nehlig, A; Cassel, J-C

    2016-03-01

    Even if considered benign, absence epilepsy may alter memory and attention, sometimes subtly. Very little is known about behavior and cognitive functions in the Genetic Absence Epilepsy Rats from Strasbourg (GAERS) model of absence epilepsy. We focused on different memory systems and sustained visual attention, using Non Epileptic Controls (NECs) and Wistars as controls. A battery of cognitive/behavioral tests was used. The functionality of reference, working, and procedural memory was assessed in the Morris water maze (MWM), 8-arm radial maze, T-maze and/or double-H maze. Sustained visual attention was evaluated in the 5-choice serial reaction time task. In the MWM, GAERS showed delayed learning and less efficient working memory. In the 8-arm radial maze and T-maze tests, working memory performance was normal in GAERS, although most GAERS preferred an egocentric strategy (based on proprioceptive/kinesthetic information) to solve the task; they could efficiently shift to an allocentric strategy (based on spatial cues) after protocol alteration. Procedural memory and visual attention were mostly unimpaired. Absence epilepsy has been associated with some learning problems in children. In GAERS, the differences in water maze performance (slower learning of the reference memory task and weak impairment of working memory) and in radial arm maze strategies suggest that cognitive alterations may be subtle, task-specific, and that normal performance can be a matter of strategy adaptation. Altogether, these results strengthen the "face validity" of the GAERS model: in humans with absence epilepsy, cognitive alterations are not easily detectable, which is compatible with subtle deficits. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. Visual behaviour analysis and driver cognitive model

    Energy Technology Data Exchange (ETDEWEB)

    Baujon, J.; Basset, M.; Gissinger, G.L. [Mulhouse Univ. (France). MIPS/MIAM Lab.]

    2001-07-01

    Recent studies on driver behaviour have shown that perception - mainly visual but also proprioceptive perception - plays a key role in the "driver-vehicle-road" system and so considerably affects the driver's decision making. Within the framework of the behaviour analysis and studies low-cost system (BASIL), this paper presents a correlative, qualitative and quantitative study, comparing the information given by visual perception and by the trajectory followed. This information will help to obtain a cognitive model of the Rasmussen type according to different driver classes. Many experiments in real driving situations have been carried out for different driver classes and for a given trajectory profile, using a test vehicle and innovative, specially designed, real-time tools, such as the vision system or the positioning module. (orig.)

  3. Symbolic modeling of human anatomy for visualization and simulation

    Science.gov (United States)

    Pommert, Andreas; Schubert, Rainer; Riemer, Martin; Schiemann, Thomas; Tiede, Ulf; Hoehne, Karl H.

    1994-09-01

    Visualization of human anatomy in a 3D atlas requires both spatial and more abstract symbolic knowledge. Within our 'intelligent volume' model which integrates these two levels, we developed and implemented a semantic network model for describing human anatomy. Concepts for structuring (abstraction levels, domains, views, generic and case-specific modeling, inheritance) are introduced. Model, tools for generation and exploration and applications in our 3D anatomical atlas are presented and discussed.

  4. Model visualization for evaluation of biocatalytic processes

    DEFF Research Database (Denmark)

    Law, HEM; Lewis, DJ; McRobbie, I

    2008-01-01

    Biocatalysis offers great potential as an additional, and in some cases as an alternative, synthetic tool for organic chemists, especially as a route to introduce chirality. However, the implementation of scalable biocatalytic processes nearly always requires the introduction of process and/or bi… (S,S-EDDS), a biodegradable chelant, and is characterised by the use of model visualization using "windows of operation".

  5. Virtual hydrology observatory: an immersive visualization of hydrology modeling

    Science.gov (United States)

    Su, Simon; Cruz-Neira, Carolina; Habib, Emad; Gerndt, Andreas

    2009-02-01

    The Virtual Hydrology Observatory will provide students with the ability to observe an integrated hydrology simulation through an instructional interface, using either a desktop-based or an immersive virtual reality setup. The goal of the Virtual Hydrology Observatory application is to facilitate the introduction of field experience and observational skills into hydrology courses through innovative virtual techniques that mimic activities during actual field visits. The simulation part of the application is developed from the integrated atmospheric forecast model, Weather Research and Forecasting (WRF), and the hydrology model, Gridded Surface/Subsurface Hydrologic Analysis (GSSHA). The outputs from both the WRF and GSSHA models are then used to generate the final visualization components of the Virtual Hydrology Observatory. The visualization data processing techniques provided by VTK include 2D Delaunay triangulation and data optimization. Once all the visualization components are generated, they are integrated with the simulation data using the VRFlowVis and VR Juggler software toolkits. VR Juggler is used primarily to provide the Virtual Hydrology Observatory application with a fully immersive, real-time 3D interaction experience, while VRFlowVis provides the integration framework for the hydrologic simulation data, graphical objects and user interaction. A six-sided CAVE™-like system is used to run the Virtual Hydrology Observatory and provide the students with a fully immersive experience.
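
    The 2D Delaunay triangulation step mentioned above can be illustrated with a few lines of VTK's Python bindings; the scattered points below are arbitrary stand-ins for model output locations rather than real WRF/GSSHA data.

    ```python
    # Triangulate scattered (x, y, z) points into a surface with vtkDelaunay2D.
    import vtk

    points = vtk.vtkPoints()
    for x, y, z in [(0, 0, 0.2), (1, 0, 0.5), (0, 1, 0.1), (1, 1, 0.8), (0.5, 0.5, 0.4)]:
        points.InsertNextPoint(x, y, z)

    poly = vtk.vtkPolyData()
    poly.SetPoints(points)

    delaunay = vtk.vtkDelaunay2D()   # triangulates in the XY plane, keeps Z values
    delaunay.SetInputData(poly)
    delaunay.Update()

    surface = delaunay.GetOutput()
    print(surface.GetNumberOfCells())  # number of triangles produced
    ```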

  6. Performance of an iPad Application to Detect Moderate and Advanced Visual Field Loss in Nepal.

    Science.gov (United States)

    Johnson, Chris A; Thapa, Suman; George Kong, Yu Xiang; Robin, Alan L

    2017-10-01

    To evaluate the accuracy and efficiency of Visual Fields Easy (VFE), a free iPad app, for performing suprathreshold perimetric screening. Prospective, cross-sectional validation study. We performed screening visual fields using a calibrated iPad 2 with the VFE application on 206 subjects (411 eyes): 210 normal (NL), 183 glaucoma (GL), and 18 diabetic retinopathy (DR) at Tilganga Institute of Ophthalmology, Kathmandu, Nepal. We correlated the results with a Humphrey Field Analyzer using 24-2 SITA Standard tests on 373 of these eyes (198 NL, 160 GL, 15 DR). The number of missed locations on the VFE correlated with mean deviation (MD, r = 0.79), pattern standard deviation (PSD, r = 0.60), and number of locations that were worse than the 95% confidence limits for total deviation (r = 0.51) and pattern deviation (r = 0.68) using SITA Standard. iPad suprathreshold perimetry was able to detect most visual field deficits with moderate (MD of -6 to -12 dB) and advanced (MD worse than -12 dB) loss, but had greater difficulty in detecting early (MD better than -6 dB) loss, primarily owing to an elevated false-positive response rate. The average time to perform the Visual Fields Easy test was 3 minutes, 18 seconds (standard deviation = 16.88 seconds). The Visual Fields Easy test procedure is a portable, fast, effective procedure for detecting moderate and advanced visual field loss. Improvements are currently underway to monitor eye and head tracking during testing, reduce testing time, improve performance, and eliminate the need to touch the video screen surface. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. A Model-Driven Visualization Tool for Use with Model-Based Systems Engineering Projects

    Science.gov (United States)

    Trase, Kathryn; Fink, Eric

    2014-01-01

    Model-Based Systems Engineering (MBSE) promotes increased consistency between a system's design and its design documentation through the use of an object-oriented system model. The creation of this system model facilitates data presentation by providing a mechanism from which information can be extracted by automated manipulation of model content. Existing MBSE tools enable model creation, but are often too complex for the unfamiliar model viewer to easily use. These tools do not yet provide many opportunities for easing into the development and use of a system model when system design documentation already exists. This study creates a Systems Modeling Language (SysML) Document Traceability Framework (SDTF) for integrating design documentation with a system model, and develops an Interactive Visualization Engine for SysML Tools (InVEST) that exports consistent, clear, and concise views of SysML model data. These exported views are each meaningful to a variety of project stakeholders with differing subjects of concern and depth of technical involvement. InVEST allows a model user to generate multiple views and reports from an MBSE model, including wiki pages and interactive visualizations of data. System data can also be filtered to present only the information relevant to the particular stakeholder, resulting in a view that is both consistent with the larger system model and other model views. Viewing the relationships between system artifacts and documentation, and filtering through data to see specialized views, improves the value of the system as a whole, as data becomes information.

  8. Endogenous visuospatial attention increases visual awareness independent of visual discrimination sensitivity.

    Science.gov (United States)

    Vernet, Marine; Japee, Shruti; Lokey, Savannah; Ahmed, Sara; Zachariou, Valentinos; Ungerleider, Leslie G

    2017-08-12

    Visuospatial attention often improves task performance by increasing signal gain at attended locations and decreasing noise at unattended locations. Attention is also believed to be the mechanism that allows information to enter awareness. In this experiment, we assessed whether orienting endogenous visuospatial attention with cues differentially affects visual discrimination sensitivity (an objective task performance) and visual awareness (the subjective feeling of perceiving) during the same discrimination task. Gabor patch targets were presented laterally, either at low contrast (contrast stimuli) or at high contrast embedded in noise (noise stimuli). Participants reported their orientation either in a 3-alternative choice task (clockwise, counterclockwise, unknown) that allowed for both objective and subjective reports, or in a 2-alternative choice task (clockwise, counterclockwise) that provided a control for objective reports. Signal detection theory models were fit to the experimental data: estimated perceptual sensitivity reflected objective performance; decision criteria, or subjective biases, were a proxy for visual awareness. Attention increased sensitivity (i.e., improved objective performance) for the contrast, but not for the noise stimuli. Indeed, with the latter, attention did not further enhance the already high target signal or reduce the already low uncertainty on its position. Interestingly, for both contrast and noise stimuli, attention resulted in more liberal criteria, i.e., awareness increased. The noise condition is thus an experimental configuration where people think they see the targets they attend to better, even if they do not. This could be explained by an internal representation of their attentional state, which influences awareness independent of objective visual signals. Copyright © 2017. Published by Elsevier Ltd.
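
    For reference, these are the standard signal detection theory quantities implied by the analysis above, in generic textbook form (not the specific model fit reported in the paper), with hit rate H, false-alarm rate F and the inverse normal CDF z(·); a more liberal criterion corresponds to a more negative c.

    ```latex
    d' = z(H) - z(F), \qquad c = -\tfrac{1}{2}\bigl[\, z(H) + z(F) \,\bigr]
    ```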

  9. Sex differences in motor and cognitive abilities predicted from human evolutionary history with some implications for models of the visual system.

    Science.gov (United States)

    Sanders, Geoff

    2013-01-01

    This article expands the knowledge base available to sex researchers by reviewing recent evidence for sex differences in coincidence-anticipation timing (CAT), motor control with the hand and arm, and visual processing of stimuli in near and far space. In CAT, the differences are between the sexes and are therefore typical of other widely reported sex differences. Men perform CAT tasks with greater accuracy and precision than women, who tend to underestimate time to arrival. Null findings arise because significant sex differences are found with easy but not with difficult tasks. The differences in motor control and visual processing are within each sex, and they underlie reciprocal patterns of performance in women and men. Women exert motor control better with the hand than with the arm, whereas men show the reverse pattern. Visual processing is performed better by women with stimuli within hand reach (near space) as opposed to beyond hand reach (far space); men show the reverse pattern. The sex differences seen in each of these three abilities are consistent with the evolutionary selection of men for hunting-related skills and women for gathering-related skills. The implications of the sex differences in visual processing for two visual system models of human vision are discussed.

  10. Motor-cognitive dual-task performance: effects of a concurrent motor task on distinct components of visual processing capacity

    OpenAIRE

    Künstler, E. C. S.; Finke, K.; Günther, A.; Klingner, C.; Witte, O.; Bublak, P.

    2017-01-01

    Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the ‘theory of visual attention’ (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity.

  11. Artistic Visualization of Trajectory Data Using Cloud Model

    Science.gov (United States)

    Wu, T.; Zhou, Y.; Zhang, L.

    2017-09-01

    The rapid advance of location-acquisition technologies boosts the generation of trajectory data, which track the traces of moving objects. A trajectory is typically represented by a sequence of timestamped geographical locations. Data visualization is an efficient means to represent distributions and structures of datasets and reveal hidden patterns in the data. In this paper, we explore a cloud model-based method for the generation of stylized renderings of trajectory data. The artistic visualizations produced by the proposed method are not intended to support data mining or similar tasks; instead, they show the aesthetic effect of the traces of moving objects in a distorted manner. The techniques used to create the images of traces of moving objects include uncertain lines based on an extended cloud model, stroke-based rendering of geolocations in varying styles, and stylistic shading with aesthetic effects for print or electronic displays, as well as various parameters that can be further personalized. The influence of different parameters on the aesthetic qualities of the painted images is investigated, including step size, types of strokes and colour modes, and quantitative comparisons using four aesthetic measures are also included in the experiment. The experimental results suggest that the proposed method has the advantages of uncertainty, simplicity and effectiveness, and that it could inspire professional graphic designers and amateur users who may be interested in playful and creative exploration of artistic visualizations of trajectory data.

  12. Model-based visual navigation of a mobile robot

    International Nuclear Information System (INIS)

    Roening, J.

    1992-08-01

    The thesis considers the problems of visual guidance of a mobile robot. A visual navigation system is formalized, consisting of four basic components: world modelling, navigation sensing, navigation and action. According to this formalization, an experimental system was designed and realized, enabling real-world navigation experiments. A priori knowledge of the world is used for global path finding, aiding scene analysis and providing feedback information to close the control loop between planned and actual movements. Two world models were developed. The first approach was a map-based model especially designed for low-level description of indoor environments. The other was a higher-level, more symbolic representation of the surroundings utilizing the spatial graph concept. Two passive vision approaches were developed to extract navigation information. Passive three-camera stereovision produced a sparse depth map of the scene. Another approach employed a fish-eye lens to map the entire scene of the surroundings without camera scanning. The local path planning of the system is supported by a three-dimensional scene interpreter providing a partial understanding of scene contents. The interpreter consists of data-driven low-level stages and a model-driven high-level stage. Experiments were carried out in a simulator and in a test vehicle constructed in the laboratory. The test vehicle successfully navigated indoors.

  13. Visual Performance of a Quadrifocal (Trifocal) Intraocular Lens Following Removal of the Crystalline Lens.

    Science.gov (United States)

    Kohnen, Thomas; Herzog, Michael; Hemkeppler, Eva; Schönbrunn, Sabrina; De Lorenzo, Nina; Petermann, Kerstin; Böhm, Myriam

    2017-12-01

    To evaluate visual performance after implantation of a quadrifocal intraocular lens (IOL). Setting: Department of Ophthalmology, Goethe University, Frankfurt, Germany. Twenty-seven patients (54 eyes) received bilateral implantation of the PanOptix IOL (AcrySof IQ PanOptix™; Alcon Research, Fort Worth, Texas, USA) pre-enrollment. Exclusion criteria were previous ocular surgeries, corneal astigmatism of >1.5 diopter (D), ocular pathologies, or corneal abnormalities. Intervention or Observational Procedure(s): Postoperative examination at 3 months including manifest refraction; uncorrected visual acuity (UCVA) and distance-corrected visual acuity (DCVA) at 4 m, 80 cm, 60 cm, and 40 cm; slit-lamp examination; defocus testing; contrast sensitivity (CS) under photopic and mesopic conditions; and a questionnaire on subjective quality of vision, optical phenomena, and spectacle independence was performed. At 3 months postoperatively, UCVA and DCVA at 4 m, 80 cm, 60 cm, and 40 cm (logMAR), defocus curves, CS, and quality-of-vision questionnaire results. Mean spherical equivalent was -0.04 ± 0.321 D at 3 months postoperatively. Binocular UCVA at distance, intermediate (80 cm, 60 cm), and near was 0.00 ± 0.094 logMAR, 0.09 ± 0.107 logMAR, 0.00 ± 0.111 logMAR, and 0.01 ± 0.087 logMAR, respectively. Binocular defocus curve showed peaks with best visual acuity (VA) at 0.00 D (-0.07 logMAR) and -2.00 D (-0.02 logMAR). Visual performance of the PanOptix IOL showed good VA at all distances; particularly good intermediate VA (logMAR > 0.1), with best VA at 60 cm; and high patient satisfaction and spectacle independence 3 months postoperatively. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Time-Sharing-Based Synchronization and Performance Evaluation of Color-Independent Visual-MIMO Communication.

    Science.gov (United States)

    Kwon, Tae-Ho; Kim, Jai-Eun; Kim, Ki-Doo

    2018-05-14

    In the field of communication, synchronization is always an important issue. The communication between a light-emitting diode (LED) array (LEA) and a camera is known as visual multiple-input multiple-output (MIMO), for which the data transmitter and receiver must be synchronized for seamless communication. In visual-MIMO, LEDs generally have a faster data rate than the camera. Hence, we propose an effective time-sharing-based synchronization technique with its color-independent characteristics providing the key to overcome this synchronization problem in visual-MIMO communication. We also evaluated the performance of our synchronization technique by varying the distance between the LEA and camera. A graphical analysis is also presented to compare the symbol error rate (SER) at different distances.

  15. Applying the Roofline performance model to the Intel Xeon Phi Knights Landing processor

    OpenAIRE

    Doerfler, D; Deslippe, J; Williams, S; Oliker, L; Cook, B; Kurth, T; Lobet, M; Malas, T; Vay, JL; Vincenti, H

    2016-01-01

    © Springer International Publishing AG 2016. The Roofline Performance Model is a visually intuitive method used to bound the sustained peak floating-point performance of any given arithmetic kernel on any given processor architecture. In the Roofline, performance is nominally measured in floating-point operations per second as a function of arithmetic intensity (operations per byte of data). In this study we determine the Roofline for the Intel Knights Landing (KNL) processor, determining t...
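
    The roofline bound referred to above has a compact standard form (generic notation, not specific to the KNL study): attainable performance is capped either by the peak floating-point rate or by arithmetic intensity times peak memory bandwidth.

    ```latex
    P_{\text{attainable}} = \min\bigl(P_{\text{peak}},\; I \times B_{\text{peak}}\bigr),
    \qquad I = \frac{\text{floating-point operations}}{\text{bytes moved from memory}}
    ```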

  16. A model of visual, aesthetic communication focusing on web sites

    DEFF Research Database (Denmark)

    Thorlacius, Lisbeth

    2002-01-01

    Theory books and method books within the field of web design mainly focus on the technical and functional aspects of the construction of web design. There is a lack of a model which weighs the analysis of the visual and aesthetic aspects against the functional and technical aspects of web design. With a point of departure in Roman Jakobson's linguistic communication model, the reader is introduced to a model which covers the communication aspects, the visual aspects, the aesthetic aspects and the net specific aspects of the analysis of media products. The aesthetic aspects rank low in the eyes of the media producers even though the most outstanding media products often obtained their success due to aesthetic phenomena. The formal aesthetic function and the inexpressible aesthetic function have therefore been prioritised in the model in regard to the construction and analysis of media products.

  17. Visualization in hydrological and atmospheric modeling and observation

    Science.gov (United States)

    Helbig, C.; Rink, K.; Kolditz, O.

    2013-12-01

    In recent years, visualization of geoscientific and climate data has become increasingly important due to challenges such as climate change, flood prediction or the development of water management schemes for arid and semi-arid regions. Models for simulations based on such data often have a large number of heterogeneous input data sets, ranging from remote sensing data and geometric information (such as GPS data) to sensor data from specific observations sites. Data integration using such information is not straightforward and a large number of potential problems may occur due to artifacts, inconsistencies between data sets or errors based on incorrectly calibrated or stained measurement devices. Algorithms to automatically detect various of such problems are often numerically expensive or difficult to parameterize. In contrast, combined visualization of various data sets is often a surprisingly efficient means for an expert to detect artifacts or inconsistencies as well as to discuss properties of the data. Therefore, the development of general visualization strategies for atmospheric or hydrological data will often support researchers during assessment and preprocessing of the data for model setup. When investigating specific phenomena, visualization is vital for assessing the progress of the ongoing simulation during runtime as well as evaluating the plausibility of the results. We propose a number of such strategies based on established visualization methods that - are applicable to a large range of different types of data sets, - are computationally inexpensive to allow application for time-dependent data - can be easily parameterized based on the specific focus of the research. Examples include the highlighting of certain aspects of complex data sets using, for example, an application-dependent parameterization of glyphs, iso-surfaces or streamlines. In addition, we employ basic rendering techniques allowing affine transformations, changes in opacity as well

  18. Behavioral model of visual perception and recognition

    Science.gov (United States)

    Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.

    1993-09-01

    In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separated processing of `what' (object features) and `where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using `where' information; (3) representation of `what' information in an object-based frame of reference (OFR). However, most recent models of vision based on OFR have demonstrated the ability of invariant recognition of only simple objects like letters or binary objects without background, i.e. objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This has provided our model with the ability to represent complex objects in gray-level images invariantly, but it demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision which extracts a set of primary features (edges) in each fixation, and a high-level subsystem consisting of `what' (Sensory Memory) and `where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. FFR provides both the invariant representation of object features in Sensory Memory and shifts of attention in Motor Memory. Object recognition consists in successive recall (from Motor Memory) and execution of shifts of attention and

  19. Using the Freiburg Acuity and Contrast Test to measure visual performance in USAF personnel after PRK.

    Science.gov (United States)

    Dennis, Richard J; Beer, Jeremy M A; Baldwin, J Bruce; Ivan, Douglas J; Lorusso, Frank J; Thompson, William T

    2004-07-01

    Photorefractive keratectomy (PRK) may be an alternative to spectacle and contact lens wear for United States Air Force (USAF) aircrew and may offer some distinct advantages in operational situations. However, any residual corneal haze or scar formation from PRK could exacerbate the disabling effects of a bright glare source on a complex visual task. The USAF recently completed a longitudinal clinical evaluation of the long-term effects of PRK on visual performance, including the experiment described herein. After baseline data were collected, 20 nonflying active duty USAF personnel underwent PRK. Visual performance was then measured at 6, 12, and 24 months after PRK. Visual acuity (VA) and contrast sensitivity (CS) data were collected by using the Freiburg Acuity and Contrast Test (FrACT), with the subject viewing half of the runs through a polycarbonate windscreen. Experimental runs were completed under 3 glare conditions: no glare source and with either a broadband or a green laser (532-nm) glare annulus (luminance approximately 6090 cd/m²) surrounding the Landolt C stimulus. Systematic effects of PRK on VA relative to baseline were not identified. However, VA was almost 2 full Snellen lines worse with the laser glare source in place versus the broadband glare source. A significant drop-off was observed in CS performance after PRK under conditions of no glare and broadband glare; this was the case both with and without the windscreen. As with VA, laser glare disrupted CS performance significantly and more than broadband glare did. PRK does not appear to have affected VA, but the changes in CS might represent a true decline in visual performance. The greater disruptive effects from laser versus broadband glare may be a result of increased masking from coherent spatial noise (speckle) surrounding the laser stimulus.

  20. Cognitive aging on latent constructs for visual processing capacity: a novel structural equation modeling framework with causal assumptions based on a theory of visual attention.

    Science.gov (United States)

    Nielsen, Simon; Wilms, L Inge

    2014-01-01

    We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A testbattery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges, and variables of demography. The study demonstrates that SEM is a sensitive method to detect cognitive aging effects even within a narrow age-range, and a useful approach to structure the relationships between measured variables, and the cognitive functional foundation they supposedly represent.

  1. Cognitive ageing on latent constructs for visual processing capacity: A novel Structural Equation Modelling framework with causal assumptions based on A Theory of Visual Attention

    Directory of Open Access Journals (Sweden)

    Simon eNielsen

    2015-01-01

    Full Text Available We examined the effects of normal ageing on visual cognition in a sample of 112 healthy adults aged 60-75. A testbattery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive ageing affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modelling (SEM; Model 2), informed by functional structures that were modelled with path analyses in SEM (Model 1). The results show that ageing effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective ageing effects on processing speed, and inconsistent with other studies reporting ageing effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges, and variables of demography. The study demonstrates that SEM is a sensitive method to detect cognitive ageing effects even within a narrow age-range, and a useful approach to structure the relationships between measured variables, and the cognitive functional foundation they supposedly represent.

  2. Using a visual plate waste study to monitor menu performance.

    Science.gov (United States)

    Connors, Priscilla L; Rozell, Sarah B

    2004-01-01

    Two visual plate waste studies were conducted in 1-week phases over a 1-year period in an acute care hospital. A total of 383 trays were evaluated in the first phase and 467 in the second. Food items were ranked for consumption from a low (1) to high (6) score, with a score of 4.0 set as the benchmark denoting a minimum level of acceptable consumption. In the first phase two entrees, four starches, all of the vegetables, sliced white bread, and skim milk scored below the benchmark. As a result six menu items were replaced and one was modified. In the second phase all entrees scored at or above 4.0, as did seven vegetables, and a dinner roll that replaced sliced white bread. Skim milk continued to score below the benchmark. A visual plate waste study assists in benchmarking performance, planning menu changes, and assessing effectiveness.

  3. Visual methodologies and participatory action research: Performing women's community-based health promotion in post-Katrina New Orleans.

    Science.gov (United States)

    Lykes, M Brinton; Scheib, Holly

    2016-01-01

    Recovery from disaster and displacement involves multiple challenges including accompanying survivors, documenting effects, and rethreading community. This paper demonstrates how African-American and Latina community health promoters and white university-based researchers engaged visual methodologies and participatory action research (photoPAR) as resources in cross-community praxis in the wake of Hurricane Katrina and the flooding of New Orleans. Visual techniques, including but not limited to photonarratives, facilitated the health promoters': (1) care for themselves and each other as survivors of and responders to the post-disaster context; (2) critical interrogation of New Orleans' entrenched pre- and post-Katrina structural racism as contributing to the racialised effects of and responses to Katrina; and (3) meaning-making and performances of women's community-based, cross-community health promotion within this post-disaster context. This feminist antiracist participatory action research project demonstrates how visual methodologies contributed to the co-researchers' cross-community self- and other caring, critical bifocality, and collaborative construction of a contextually and culturally responsive model for women's community-based health promotion post 'unnatural disaster'. Selected limitations as well as the potential for future cross-community antiracist feminist photoPAR in post-disaster contexts are discussed.

  4. Information visualization of the minority game

    Science.gov (United States)

    Jiang, W.; Herbert, R. D.; Webber, R.

    2008-02-01

    Many dynamical systems produce large quantities of data. How can the system be understood from the output data? Often people are simply overwhelmed by the data. Traditional tools such as tables and plots are often not adequate, and new techniques are needed to help people to analyze the system. In this paper, we propose the use of two spacefilling visualization tools to examine the output from a complex agent-based financial model. We measure the effectiveness and performance of these tools through usability experiments. Based on the experimental results, we develop two new visualization techniques that combine the advantages and discard the disadvantages of the information visualization tools. The model we use is an evolutionary version of the Minority Game which simulates a financial market.
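
    To make the underlying model concrete, a toy (non-evolutionary) Minority Game round is sketched below: each agent independently picks side 0 or 1, and the agents on the less-chosen side win. This is only the basic game, not the evolutionary variant or the visualization code used in the paper.

    ```python
    # One round of the basic Minority Game with an odd number of agents.
    import random

    rng = random.Random(0)

    def play_round(n_agents=101):
        """Each agent picks side 0 or 1; agents on the minority side win."""
        choices = [rng.randint(0, 1) for _ in range(n_agents)]
        ones = sum(choices)
        minority = 1 if ones < n_agents - ones else 0
        return minority, sum(1 for c in choices if c == minority)

    for t in range(5):
        side, winners = play_round()
        print(f"round {t}: minority side = {side}, winners = {winners}")
    ```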

  5. Information visualization of the minority game

    International Nuclear Information System (INIS)

    Jiang, W; Herbert, R D; Webber, R

    2008-01-01

    Many dynamical systems produce large quantities of data. How can the system be understood from the output data? Often people are simply overwhelmed by the data. Traditional tools such as tables and plots are often not adequate, and new techniques are needed to help people to analyze the system. In this paper, we propose the use of two spacefilling visualization tools to examine the output from a complex agent-based financial model. We measure the effectiveness and performance of these tools through usability experiments. Based on the experimental results, we develop two new visualization techniques that combine the advantages and discard the disadvantages of the information visualization tools. The model we use is an evolutionary version of the Minority Game which simulates a financial market

  6. Visual perception and medical imaging

    International Nuclear Information System (INIS)

    Jaffe, C.C.

    1985-01-01

    Medical imaging represents a particularly distinct discipline for image processing since it uniquely depends on the "expert observer" and yet models of the human visual system are totally inadequate at the complex level to allow satisfactory prediction of observer response to a given image modification. An illustration of the difficulties in assessing observer performance is shown by a series of optical illusions which demonstrate that net cognitive behavior is not readily predictable. Although many of these phenomena are often considered as exceptional visual events, the setting of complex images makes it difficult to entirely exclude at least partial operation of these impairments during performance of the diagnostic medical imaging task.

  7. A GIS-Enabled, Michigan-Specific, Hierarchical Groundwater Modeling and Visualization System

    Science.gov (United States)

    Liu, Q.; Li, S.; Mandle, R.; Simard, A.; Fisher, B.; Brown, E.; Ross, S.

    2005-12-01

    Efficient management of groundwater resources relies on a comprehensive database that represents the characteristics of the natural groundwater system as well as analysis and modeling tools to describe the impacts of decision alternatives. Many agencies in Michigan have spent several years compiling expensive and comprehensive surface water and groundwater inventories and other related spatial data that describe their respective areas of responsibility. However, most often this wealth of descriptive data has only been utilized for basic mapping purposes. The benefits from analyzing these data, using GIS analysis functions or externally developed analysis models or programs, have yet to be systematically realized. In this talk, we present a comprehensive software environment that allows Michigan groundwater resources managers and frontline professionals to make more effective use of the available data and improve their ability to manage and protect groundwater resources, address potential conflicts, design cleanup schemes, and prioritize investigation activities. In particular, we take advantage of the Interactive Ground Water (IGW) modeling system and convert it to a customized software environment specifically for analyzing, modeling, and visualizing the Michigan statewide groundwater database. The resulting Michigan IGW modeling system (IGW-M) is completely window-based, fully interactive, and seamlessly integrated with a GIS mapping engine. The system operates in real-time (on the fly) providing dynamic, hierarchical mapping, modeling, spatial analysis, and visualization. Specifically, IGW-M allows water resources and environmental professionals in Michigan to: * Access and utilize the extensive data from the statewide groundwater database, interactively manipulate GIS objects, and display and query the associated data and attributes; * Analyze and model the statewide groundwater database, interactively convert GIS objects into numerical model features

  8. The development of hand-centred visual representations in the primate brain: a computer modelling study using natural visual scenes.

    Directory of Open Access Journals (Sweden)

    Juan Manuel Galeazzi

    2015-12-01

    Full Text Available Neurons that respond to visual targets in a hand-centred frame of reference have been found within various areas of the primate brain. We investigate how hand-centred visual representations may develop in a neural network model of the primate visual system called VisNet, when the model is trained on images of the hand seen against natural visual scenes. The simulations show how such neurons may develop through a biologically plausible process of unsupervised competitive learning and self-organisation. In an advance on our previous work, the visual scenes consisted of multiple targets presented simultaneously with respect to the hand. Three experiments are presented. First, VisNet was trained with computerized images consisting of a realistic image of a hand and a variety of natural objects, presented against different textured backgrounds during training. The network was then tested with just one textured object near the hand in order to verify whether the output cells were capable of building hand-centred representations with a single localised receptive field. We explain the underlying principles of the statistical decoupling that allows the output cells of the network to develop single localised receptive fields even when the network is trained with multiple objects. In a second simulation we examined how some of the cells with hand-centred receptive fields decreased their shape selectivity and started responding to a localised region of hand-centred space as the number of objects presented in overlapping locations during training increased. Lastly, we explored the same learning principles by training the network with natural visual scenes collected by volunteers. These results provide an important step in showing how single, localised, hand-centred receptive fields could emerge under more ecologically realistic visual training conditions.

  9. Robot Visual Tracking via Incremental Self-Updating of Appearance Model

    Directory of Open Access Journals (Sweden)

    Danpei Zhao

    2013-09-01

    Full Text Available This paper proposes a target tracking method called Incremental Self-Updating Visual Tracking for robot platforms. Our tracker treats the tracking problem as a binary classification between the target and the background. Greyscale, HOG and LBP features are used to represent the target and are integrated into a particle filter framework. To track the target over long sequences, the tracker has to update its model to follow the most recent appearance of the target. To address the wasted computation and the lack of a principled model-updating strategy in traditional methods, an effective online self-updating strategy is devised to choose the optimal moments to update. The decision to update the appearance model is based on the change in the classifier's discriminative capability between the current frame and the previously updated frame. By adjusting the update step adaptively, needless updates and the associated computation can be avoided while keeping the model stable. Moreover, the appearance model is kept from drifting severely when the target undergoes temporary occlusion. The experimental results show that the proposed tracker achieves robust and efficient performance on several challenging benchmark video sequences with complex changes in posture, scale, illumination and occlusion.
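
    As a rough sketch of the self-updating idea described above (not the authors' implementation), the snippet below refreshes an appearance model only when its discriminative score has drifted beyond a threshold since the last update; the scores, threshold, and helper names are illustrative assumptions.

        import numpy as np

        def discriminative_score(target_scores, background_scores):
            """Simple separability measure between target and background responses."""
            return float(np.mean(target_scores) - np.mean(background_scores))

        def should_update(current_score, score_at_last_update, threshold=0.2):
            """Update the appearance model only if the classifier's discriminative
            capability has changed noticeably since the last update."""
            return abs(current_score - score_at_last_update) > threshold

        # hypothetical per-frame classifier responses
        last_update_score = 0.8
        frame_score = discriminative_score(np.array([0.9, 0.7, 0.8]),
                                           np.array([0.3, 0.2, 0.4]))
        if should_update(frame_score, last_update_score):
            last_update_score = frame_score   # retrain/refresh the model here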

  10. The impact of visual impairment on the ability to perform activities of daily living for persons with severe/profound intellectual disability.

    Science.gov (United States)

    Dijkhuizen, Annemarie; Hilgenkamp, Thessa I M; Krijnen, Wim P; van der Schans, Cees P; Waninge, Aly

    2016-01-01

    The ability to perform activities of daily living (ADL) as a component of participation is one of the factors that contribute to quality of life. The ability to perform ADL for persons experiencing severe/profound intellectual disability (ID) may be reduced due to their cognitive and physical capacities. However, until recently, the impact of the highly prevalent visual impairments on the performance of ADL had not been established for this group. The purpose of this prospective cross-sectional study was to investigate the impact of visual impairment on the performance of ADL for persons with a severe/profound intellectual disability. The Barthel Index (BI) and Comfortable Walking Speed (CWS) were used to measure the ability to perform ADL in 240 persons with severe/profound ID and Gross Motor Functioning Classification System (GMFCS) levels I, II or III; this included 120 persons with visual impairment. The impact of visual impairment on ADL was analyzed with linear regression. The results of the study demonstrated that visual impairment slightly affects the ability to perform ADL (BI) for persons experiencing a severe/profound intellectual disability. GMFCS level II or III, profound ID level, and visual impairment each have the effect of lowering BI scores. GMFCS level II or III and profound ID level each have the effect of increasing CWS scores, which indicates a lower walking speed. A main effect of visual impairment is present on CWS, but our results do show a substantive interaction effect between GMFCS level III and visual impairment on Comfortable Walking Speed in persons with a severe/profound intellectual disability. Visual impairment has a slight effect on the ability to perform ADL in persons experiencing severe/profound ID. Copyright © 2015 Elsevier Ltd. All rights reserved.
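
    The regression analysis mentioned above could in principle be set up as in the hedged sketch below, which uses statsmodels with a small synthetic data frame; the variable names and values are hypothetical placeholders, not the study's data.

        import pandas as pd
        import statsmodels.formula.api as smf

        # hypothetical stand-in data: Barthel Index (BI), GMFCS level,
        # profound vs. severe ID, and visual impairment status
        df = pd.DataFrame({
            "BI":    [85, 60, 40, 90, 55, 35, 70, 45, 80],
            "gmfcs": ["I", "II", "III", "I", "II", "III", "I", "III", "II"],
            "profound_id":       [0, 1, 1, 0, 0, 1, 1, 0, 0],
            "visual_impairment": [0, 0, 1, 1, 1, 0, 1, 1, 0],
        })

        # main-effects model of ADL ability; an interaction term such as
        # C(gmfcs):visual_impairment could be added in the same way
        model = smf.ols("BI ~ C(gmfcs) + profound_id + visual_impairment", data=df).fit()
        print(model.summary())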

  11. Organizational strategy influence on visual memory performance after stroke: cortical/subcortical and left/right hemisphere contrasts.

    Science.gov (United States)

    Lange, G; Waked, W; Kirshblum, S; DeLuca, J

    2000-01-01

    To examine how organizational strategy at encoding influences visual memory performance in stroke patients. Case control study. Postacute rehabilitation hospital. Stroke patients with right hemisphere damage (n = 20) versus left hemisphere damage (n = 15), and stroke patients with cortical damage (n = 11) versus subcortical damage (n = 19). Main outcome measures were organizational strategy scores and recall performance on the Rey-Osterrieth Complex Figure (ROCF). Results demonstrated significantly greater organizational impairment and less accurate copy performance (i.e., encoding of visuospatial information on the ROCF) in the right compared with the left hemisphere group, and in the cortical relative to the subcortical group. Organizational strategy and copy accuracy scores were significantly related to each other. Poorer organizational strategy scores were significantly associated with lower absolute amounts of immediate and delayed recall. However, relative to the amount of visual information originally encoded, memory performance did not differ between groups. These findings suggest that visual memory impairments after stroke may be caused by a lack of organizational strategy affecting information encoding, rather than by an impairment in memory storage or retrieval.

  12. A Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual LBP Feature and Visual Attention Model

    Science.gov (United States)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, the problem is difficult in complex backgrounds, such as waves, clouds, and small islands. To address these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically-inspired visual features. The model first selects salient candidate regions across large-scale images using a mechanism that combines biologically-inspired visual features from a visual attention model with local binary patterns (CVLBP). Unlike traditional approaches, the proposed algorithm is fast and focuses directly on suspected ship areas, avoiding a separate land/sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using the visual attention model, and detail signatures using LBP features, consistent with the sparse distribution of ships in the images. These features are then used to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are obtained, some false alarms such as waves and small ribbon clouds remain, so simple shape and texture analyses are adopted to distinguish ships from non-ships in the suspicious areas. Experimental results show that the proposed method is insensitive to waves, clouds, illumination and ship size.
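
    As a rough, hedged sketch of the chip-classification step (not the authors' pipeline), the snippet below describes each image chip with a uniform LBP histogram and trains an SVM on placeholder labels; the chip data, labels, and parameter choices are assumptions for illustration.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def lbp_histogram(chip, p=8, r=1.0):
            # uniform LBP codes take values 0 .. p+1
            codes = local_binary_pattern(chip, P=p, R=r, method="uniform")
            hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
            return hist

        rng = np.random.default_rng(0)
        chips = rng.random((20, 64, 64))          # placeholder image chips
        labels = rng.integers(0, 2, size=20)      # placeholder ship / non-ship labels

        features = np.array([lbp_histogram(c) for c in chips])
        clf = SVC(kernel="rbf").fit(features, labels)
        print(clf.predict(features[:3]))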

  13. An Artificial Emotion Model For Visualizing Emotion of Characters

    OpenAIRE

    Junseok Ham; Chansun Jung; Junhyung Park; Jihye Ryeo; Ilju Ko

    2009-01-01

    It is hard to convey emotion through speech alone when we watch a character in a movie or a play, because the size, kind, and quantity of the emotion cannot be estimated. This paper therefore proposes an artificial emotion model for visualizing the current emotion by its color and location within the emotion model. The artificial emotion model is designed considering the causality of generated emotion, differences in personality, differences in continual emotional stimuli, and the co-relation of various emo...

  14. Model Interpretation of Topological Spatial Analysis for the Visually Impaired (Blind) Implemented in Google Maps

    Directory of Open Access Journals (Sweden)

    Marcelo Franco Porto

    2013-06-01

    Full Text Available Technological innovations promote the availability of geographic information on the Internet through Web GIS such as Google Earth and Google Maps. These systems contribute to the teaching and diffusion of geographical knowledge, instigating recognition of the space we live in and leading to the creation of a spatial identity. In these products available on the Web, the interpretation and analysis of spatial information give priority to one of the human senses: vision. Because this representation of information is transmitted visually (images and vectors), a portion of the population is excluded from part of this knowledge, since categories of analysis of geographic data such as borders, territory, and space can only be understood by people who can see. This paper deals with the development of a model of interpretation of topological spatial analysis based on the synthesis of voice and sounds that can be used by the visually impaired (blind). The implementation of a prototype in Google Maps and the usability tests performed are also examined. For the development work it was necessary to define the model of topological spatial analysis, focusing on computational implementation, which allows users to interpret the spatial relationships of regions (countries, states and municipalities), recognizing their limits, neighborhoods and extension, as well as their spatial relationships. With this goal in mind, several interface and usability guidelines were drawn up for use by the visually impaired (blind). We conducted a detailed study of the Google Maps API (Application Programming Interface), which was the environment selected for prototype development, and studied the information available to the users of that system. The prototype implementing the proposed model was developed, based on the synthesis of voice and sounds, in the C# language in the .NET environment. To measure the efficiency and effectiveness of the prototype, usability

  15. Macular Carotenoid Supplementation Improves Visual Performance, Sleep Quality, and Adverse Physical Symptoms in Those with High Screen Time Exposure.

    Science.gov (United States)

    Stringham, James M; Stringham, Nicole T; O'Brien, Kevin J

    2017-06-29

    The dramatic rise in the use of smartphones, tablets, and laptop computers over the past decade has raised concerns about potentially deleterious health effects of increased "screen time" (ST) and associated short-wavelength (blue) light exposure. We determined baseline associations and effects of 6 months' supplementation with the macular carotenoids (MC) lutein, zeaxanthin, and mesozeaxanthin on the blue-absorbing macular pigment (MP) and measures of sleep quality, visual performance, and physical indicators of excessive ST. Forty-eight healthy young adults with at least 6 h of daily near-field ST exposure participated in this placebo-controlled trial. Visual performance measures included contrast sensitivity, critical flicker fusion, disability glare, and photostress recovery. Physical indicators of excessive screen time and sleep quality were assessed via questionnaire. MP optical density (MPOD) was assessed via heterochromatic flicker photometry. At baseline, MPOD was correlated significantly with all visual performance measures (p < 0.05). Following supplementation, significant improvements versus placebo were found in eye strain, eye fatigue, and all visual performance measures (p < 0.05 for all). Increased MPOD significantly improves visual performance and, in turn, improves several undesirable physical outcomes associated with excessive ST. The improvement in sleep quality was not directly related to increases in MPOD, and may be due to systemic reduction in oxidative stress and inflammation.

  16. SeiVis: An interactive visual subsurface modeling application

    KAUST Repository

    Hollt, Thomas; Freiler, Wolfgang; Gschwantner, Fritz M.; Doleisch, Helmut; Heinemann, Gabor F.; Hadwiger, Markus

    2012-01-01

    The most important resources to fulfill today’s energy demands are fossil fuels, such as oil and natural gas. When exploiting hydrocarbon reservoirs, a detailed and credible model of the subsurface structures is crucial in order to minimize economic and ecological risks. Creating such a model is an inverse problem: reconstructing structures from measured reflection seismics. The major challenge here is twofold: First, the structures in highly ambiguous seismic data are interpreted in the time domain. Second, a velocity model has to be built from this interpretation to match the model to depth measurements from wells. If it is not possible to obtain a match at all positions, the interpretation has to be updated, going back to the first step. This results in a lengthy back and forth between the different steps, or in an unphysical velocity model in many cases. This paper presents a novel, integrated approach to interactively creating subsurface models from reflection seismics. It integrates the interpretation of the seismic data using an interactive horizon extraction technique based on piecewise global optimization with velocity modeling. Computing and visualizing the effects of changes to the interpretation and velocity model on the depth-converted model on the fly creates an integrated feedback loop that enables a completely new connection between the seismic data in the time domain and the well data in the depth domain. Using a novel joint time/depth visualization, depicting side-by-side views of the original and the resulting depth-converted data, domain experts can directly fit their interpretation in time domain to spatial ground truth data. We have conducted a domain expert evaluation, which illustrates that the presented workflow enables the creation of exact subsurface models much more rapidly than previous approaches. © 2012 IEEE.
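
    The time-to-depth step discussed above can be illustrated with the small sketch below, which converts a horizon picked in two-way travel time to depth under an assumed layered interval-velocity model; the layer boundaries and velocities are illustrative numbers, not data from the paper.

        import numpy as np

        def time_to_depth(two_way_time_s, layer_top_times_s, interval_velocities_ms):
            """Convert a two-way travel time to depth by summing layer thicknesses:
            depth = sum(v_i * dt_i / 2) over the layers above the horizon."""
            depth = 0.0
            for i, v in enumerate(interval_velocities_ms):
                top = layer_top_times_s[i]
                base = (layer_top_times_s[i + 1] if i + 1 < len(layer_top_times_s)
                        else np.inf)
                dt = np.clip(two_way_time_s, top, base) - top
                depth += v * dt / 2.0          # one-way time = two-way time / 2
            return depth

        # horizon picked at 1.2 s two-way time in a two-layer velocity model
        print(time_to_depth(1.2, layer_top_times_s=[0.0, 0.8],
                            interval_velocities_ms=[2000.0, 3000.0]))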

  17. SeiVis: An interactive visual subsurface modeling application

    KAUST Repository

    Hollt, Thomas

    2012-12-01

    The most important resources to fulfill today’s energy demands are fossil fuels, such as oil and natural gas. When exploiting hydrocarbon reservoirs, a detailed and credible model of the subsurface structures is crucial in order to minimize economic and ecological risks. Creating such a model is an inverse problem: reconstructing structures from measured reflection seismics. The major challenge here is twofold: First, the structures in highly ambiguous seismic data are interpreted in the time domain. Second, a velocity model has to be built from this interpretation to match the model to depth measurements from wells. If it is not possible to obtain a match at all positions, the interpretation has to be updated, going back to the first step. This results in a lengthy back and forth between the different steps, or in an unphysical velocity model in many cases. This paper presents a novel, integrated approach to interactively creating subsurface models from reflection seismics. It integrates the interpretation of the seismic data using an interactive horizon extraction technique based on piecewise global optimization with velocity modeling. Computing and visualizing the effects of changes to the interpretation and velocity model on the depth-converted model on the fly creates an integrated feedback loop that enables a completely new connection between the seismic data in the time domain and the well data in the depth domain. Using a novel joint time/depth visualization, depicting side-by-side views of the original and the resulting depth-converted data, domain experts can directly fit their interpretation in time domain to spatial ground truth data. We have conducted a domain expert evaluation, which illustrates that the presented workflow enables the creation of exact subsurface models much more rapidly than previous approaches. © 2012 IEEE.

  18. SeiVis: An Interactive Visual Subsurface Modeling Application.

    Science.gov (United States)

    Hollt, T; Freiler, W; Gschwantner, F; Doleisch, H; Heinemann, G; Hadwiger, M

    2012-12-01

    The most important resources to fulfill today's energy demands are fossil fuels, such as oil and natural gas. When exploiting hydrocarbon reservoirs, a detailed and credible model of the subsurface structures is crucial in order to minimize economic and ecological risks. Creating such a model is an inverse problem: reconstructing structures from measured reflection seismics. The major challenge here is twofold: First, the structures in highly ambiguous seismic data are interpreted in the time domain. Second, a velocity model has to be built from this interpretation to match the model to depth measurements from wells. If it is not possible to obtain a match at all positions, the interpretation has to be updated, going back to the first step. This results in a lengthy back and forth between the different steps, or in an unphysical velocity model in many cases. This paper presents a novel, integrated approach to interactively creating subsurface models from reflection seismics. It integrates the interpretation of the seismic data using an interactive horizon extraction technique based on piecewise global optimization with velocity modeling. Computing and visualizing the effects of changes to the interpretation and velocity model on the depth-converted model on the fly creates an integrated feedback loop that enables a completely new connection between the seismic data in the time domain and the well data in the depth domain. Using a novel joint time/depth visualization, depicting side-by-side views of the original and the resulting depth-converted data, domain experts can directly fit their interpretation in time domain to spatial ground truth data. We have conducted a domain expert evaluation, which illustrates that the presented workflow enables the creation of exact subsurface models much more rapidly than previous approaches.

  19. ARTISTIC VISUALIZATION OF TRAJECTORY DATA USING CLOUD MODEL

    Directory of Open Access Journals (Sweden)

    T. Wu

    2017-09-01

    Full Text Available Rapid advances in location acquisition technologies have boosted the generation of trajectory data, which track the traces of moving objects. A trajectory is typically represented by a sequence of timestamped geographical locations. Data visualization is an efficient means to represent the distributions and structures of datasets and to reveal hidden patterns in the data. In this paper, we explore a cloud model-based method for generating stylized renderings of trajectory data. The artistic visualizations produced by the proposed method are not intended to support data mining or similar tasks; instead, they show the aesthetic effect of the traces of moving objects in a deliberately distorted manner. The techniques used to create the images include uncertain lines based on an extended cloud model, stroke-based rendering of geolocations in varying styles, and stylistic shading with aesthetic effects for print or electronic displays, as well as various parameters that can be further personalized. The influence of different parameters, including step size, type of stroke, and colour mode, on the aesthetic qualities of the painted images is investigated, and quantitative comparisons using four aesthetic measures are also included in the experiments. The experimental results suggest that the proposed method has the advantages of uncertainty representation, simplicity and effectiveness, and that it could inspire professional graphic designers and amateur users who may be interested in playful and creative exploration of artistic visualization of trajectory data.
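
    As a hedged sketch of the uncertainty idea, the snippet below uses a standard normal cloud generator (expectation Ex, entropy En, hyper-entropy He) to scatter drops around the points of a toy trajectory; the trajectory, parameters, and rendering choices are illustrative assumptions rather than the paper's algorithm.

        import numpy as np
        import matplotlib.pyplot as plt

        def cloud_drops(ex, en, he, n, rng):
            en_prime = rng.normal(en, he, size=n)        # per-drop entropy
            return rng.normal(ex, np.abs(en_prime))      # drops around the expectation

        rng = np.random.default_rng(1)
        t = np.linspace(0, 4 * np.pi, 60)
        traj = np.column_stack([t, np.sin(t)])           # toy timestamped trace

        for x, y in traj:
            xs = cloud_drops(x, en=0.15, he=0.05, n=30, rng=rng)
            ys = cloud_drops(y, en=0.15, he=0.05, n=30, rng=rng)
            plt.scatter(xs, ys, s=4, alpha=0.2, color="steelblue")
        plt.axis("off")
        plt.show()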

  20. Visual Environment for Rich Data Interpretation (VERDI) program for environmental modeling systems

    Science.gov (United States)

    VERDI is a flexible, modular, Java-based program used for visualizing multivariate gridded meteorology, emissions and air quality modeling data created by environmental modeling systems such as the CMAQ model and WRF.

  1. Explaining neural signals in human visual cortex with an associative learning model.

    Science.gov (United States)

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.

  2. Generalized information fusion and visualization using spatial voting and data modeling

    Science.gov (United States)

    Jaenisch, Holger M.; Handley, James W.

    2013-05-01

    We present a novel and innovative information fusion and visualization framework for multi-source intelligence (multiINT) data using Spatial Voting (SV) and Data Modeling. We describe how different sources of information can be converted into numerical form for further processing downstream, followed by a short description of how this information can be fused using the SV grid. As an illustrative example, we show the modeling of cyberspace as cyber layers for the purpose of tracking cyber personas. Finally, we describe a path ahead for creating interactive agile networks through defender-customized Cyber-cubes for network configuration and attack visualization.

  3. A picture is worth a thousand words: helping students visualize a conceptual model.

    Science.gov (United States)

    Johnson, S E

    1989-01-01

    Communicating the functional applicability of a conceptual framework to nursing students can be a challenge of considerable magnitude. Nurse educators are convinced that nursing practice and process should stem from theory. However, when attempting to teach this, many educators have struggled with the expressions of confused, skeptical students. To provide a better understanding of a nursing model, the author uses a visual representation of the Neuman Systems Model variables. The student can then visualize application of the Model to nursing practice.

  4. Sparse representation, modeling and learning in visual recognition theory, algorithms and applications

    CERN Document Server

    Cheng, Hong

    2015-01-01

    This unique text/reference presents a comprehensive review of the state of the art in sparse representations, modeling and learning. The book examines both the theoretical foundations and details of algorithm implementation, highlighting the practical application of compressed sensing research in visual recognition and computer vision. Topics and features: provides a thorough introduction to the fundamentals of sparse representation, modeling and learning, and the application of these techniques in visual recognition; describes sparse recovery approaches, robust and efficient sparse represen

  5. Visual performance in cataract patients with low levels of postoperative astigmatism: full correction versus spherical equivalent correction

    Directory of Open Access Journals (Sweden)

    Lehmann RP

    2012-03-01

    Full Text Available Robert P Lehmann (Lehmann Eye Center, Nacogdoches, TX, USA), Diane M Houtman (Alcon Research Ltd, Fort Worth, TX, USA). Purpose: To evaluate whether visual performance could be improved in pseudophakic subjects by correcting low levels of postoperative astigmatism. Methods: An exploratory, noninterventional study was conducted using subjects who had been implanted with an aspheric intraocular lens and had 0.5–0.75 diopter postoperative astigmatism. Monocular visual performance using full correction was compared with visual performance using spherical equivalent correction. Testing consisted of high- and low-contrast visual acuity, contrast sensitivity, and reading acuity and speed using the Radner Reading Charts. Results: Thirty-eight of 40 subjects completed testing. Visual acuities at three contrast levels (100%, 25%, and 9%) were significantly better using full correction than when using spherical equivalent correction (all P < 0.001). For contrast sensitivity testing under photopic, mesopic, and mesopic-with-glare conditions, only one out of twelve outcomes demonstrated a significant improvement with full correction compared with spherical equivalent correction (at six cycles per degree under mesopic conditions without glare, P = 0.046). Mean reading speed was numerically faster with full correction across all print sizes, reaching statistical significance at logarithm of the reading acuity determination (logRAD) sizes 0.2, 0.7, and 1.1 (P < 0.05). Statistically significant differences also favored full correction in logRAD score (P = 0.0376), corrected maximum reading speed (P < 0.001), and logarithm of the minimum angle of resolution/logRAD ratio (P < 0.001). Conclusions: In this study of pseudophakic subjects with low levels of postoperative astigmatism, full correction yielded significantly better reading performance and high- and low-contrast visual acuity than spherical equivalent correction, suggesting that cataractous patients may benefit from surgical

  6. 3D MODELLING AND INTERACTIVE WEB-BASED VISUALIZATION OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    M. N. Koeva

    2016-06-01

    Full Text Available Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism, and for their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects, various technological methods can be applied for each specific case. The selected objects in this particular research are located in Bulgaria – a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding the principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in the interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. This

  7. The influence of assistive technology devices on the performance of activities by visually impaired

    Directory of Open Access Journals (Sweden)

    Suzana Rabello

    2014-04-01

    Full Text Available Objective: To establish the influence of assistive technology devices (ATDs) on the performance of activities by visually impaired schoolchildren in the resource room. Methods: A qualitative study that comprised observation and an educational intervention in the resource room. The study population comprised six visually impaired schoolchildren aged 12 to 14 years. The participants underwent an eye examination, were prescribed ATDs comprising optical and non-optical devices, and were given an orientation on the use of computers. The participants were assessed based on eye/object distance, font size, and time to read a computer screen and printed text. Results: The ophthalmological conditions included corneal opacity, retinochoroiditis, retinopathy of prematurity, aniridia, and congenital cataracts. Far visual acuity varied from 20/200 to 20/800 and near visual acuity from 0.8 to 6 M. Telescopes, spherical lenses, and support magnifying glasses were prescribed. After the intervention, three of the five participants with low vision could decrease the font size on the computer screen, and most participants (83.3%) reduced their reading time at the second observation session. Relative to the printed text, all the participants with low vision were able to read text written in smaller font sizes and reduced their reading time at the second observation session. Conclusion: Reading skills improved after the use of ATDs, which allowed the participants to perform their school tasks on a par with their classmates.

  8. Simulating the role of visual selective attention during the development of perceptual completion.

    Science.gov (United States)

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P

    2012-11-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds' performance on a second measure, the perceptual unity task. Two parameters in the model - corresponding to areas in the occipital and parietal cortices - were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. © 2012 Blackwell Publishing Ltd.

  9. Visual search performance in infants associates with later ASD diagnosis.

    Science.gov (United States)

    Cheung, C H M; Bedford, R; Johnson, M H; Charman, T; Gliga, T

    2018-01-01

    An enhanced ability to detect visual targets amongst distractors, known as visual search (VS), has often been documented in Autism Spectrum Disorders (ASD). Yet, it is unclear when this behaviour emerges in development and whether it is specific to ASD. We followed up infants at high and low familial risk for ASD to investigate how early VS abilities link to later ASD diagnosis, the potential underlying mechanisms of this association, and the specificity of superior VS to ASD. Clinical diagnosis of ASD as well as dimensional measures of ASD, attention-deficit/hyperactivity disorder (ADHD) and anxiety symptoms were ascertained at 3 years. At 9 and 15 months, but not at age 2 years, high-risk children who later met clinical criteria for ASD (HR-ASD) had better VS performance than those without a later diagnosis and than low-risk controls. Although HR-ASD children were also more attentive to the task at 9 months, this did not explain their search performance. Superior VS specifically predicted ASD at 3 years, but not ADHD or anxiety symptoms. Our results demonstrate that atypical perception and the core ASD symptoms of social interaction and communication are closely and selectively associated during early development, and suggest causal links between perceptual and social features of ASD. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  10. Performance of dynamic safety barriers-Structuring, modelling and visualization

    OpenAIRE

    Wikdahl, Olga

    2014-01-01

    The main objective of this master thesis is to discuss the performance of dynamic safety barriers. A comprehensive literature review is performed in order to understand what a dynamic safety barrier is. Three different concepts of dynamic safety barriers, based on different meanings of "dynamic", were derived from the literature review: - dynamic safety barriers related to motion or physical force; - dynamic safety barriers as barriers updated from dynamic risk analysis; - dynamic safety ...

  11. The Effects of an Auditory Versus a Visual Presentation of Information on Soldier Performance

    National Research Council Canada - National Science Library

    Glumm, Monica

    1999-01-01

    This report describes a field study designed to measure the effects of an auditory versus a visual presentation of position information on soldier performance of land navigation and target acquisition tasks...

  12. A mouse model of visual perceptual learning reveals alterations in neuronal coding and dendritic spine density in the visual cortex

    Directory of Open Access Journals (Sweden)

    Yan Wang

    2016-03-01

    Full Text Available Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and properties of VPL on spatial vision in C57BL/6J mice using a two-alternative, forced-choice visual water task. Briefly, the mice underwent prolonged training at near the individual threshold of contrast or spatial frequency (SF) for pattern discrimination or visual detection for 35 consecutive days. Following training, the contrast-threshold trained mice showed an 87% improvement in contrast sensitivity (CS) and a 55% gain in visual acuity (VA). Similarly, the SF-threshold trained mice exhibited comparable and long-lasting improvements in VA and significant gains in CS over a wide range of SFs. Furthermore, learning largely transferred across eyes and stimulus orientations. Interestingly, learning could transfer from a pattern discrimination task to a visual detection task, but not vice versa. We validated that this VPL fully restored VA in adult amblyopic mice and old mice. Taken together, these data indicate that mice, as a species, exhibit reliable VPL. Intrinsic signal optical imaging revealed that mice with perceptual training had higher cut-off SFs in primary visual cortex (V1) than those without perceptual training. Moreover, perceptual training induced an increase in the dendritic spine density in layer 2/3 pyramidal neurons of V1. These results indicated functional and structural alterations in V1 during VPL. Overall, our VPL mouse model will provide a platform for investigating the neurobiological basis of VPL.

  13. A Mouse Model of Visual Perceptual Learning Reveals Alterations in Neuronal Coding and Dendritic Spine Density in the Visual Cortex.

    Science.gov (United States)

    Wang, Yan; Wu, Wei; Zhang, Xian; Hu, Xu; Li, Yue; Lou, Shihao; Ma, Xiao; An, Xu; Liu, Hui; Peng, Jing; Ma, Danyi; Zhou, Yifeng; Yang, Yupeng

    2016-01-01

    Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and properties of VPL on spatial vision in C57BL/6J mice using a two-alternative, forced-choice visual water task. Briefly, the mice underwent prolonged training at near the individual threshold of contrast or spatial frequency (SF) for pattern discrimination or visual detection for 35 consecutive days. Following training, the contrast-threshold trained mice showed an 87% improvement in contrast sensitivity (CS) and a 55% gain in visual acuity (VA). Similarly, the SF-threshold trained mice exhibited comparable and long-lasting improvements in VA and significant gains in CS over a wide range of SFs. Furthermore, learning largely transferred across eyes and stimulus orientations. Interestingly, learning could transfer from a pattern discrimination task to a visual detection task, but not vice versa. We validated that this VPL fully restored VA in adult amblyopic mice and old mice. Taken together, these data indicate that mice, as a species, exhibit reliable VPL. Intrinsic signal optical imaging revealed that mice with perceptual training had higher cut-off SFs in primary visual cortex (V1) than those without perceptual training. Moreover, perceptual training induced an increase in the dendritic spine density in layer 2/3 pyramidal neurons of V1. These results indicated functional and structural alterations in V1 during VPL. Overall, our VPL mouse model will provide a platform for investigating the neurobiological basis of VPL.

  14. A foreground object features-based stereoscopic image visual comfort assessment model

    Science.gov (United States)

    Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.

    2014-11-01

    Since stereoscopic images provide observers with a viewing experience that is realistic but can be uncomfortable, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws most attention when humans observe stereoscopic images, this paper proposes a new foreground-object-based visual comfort assessment (VCA) metric. First, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the segment with the largest average disparity. Second, three visual features of the foreground object, namely its average disparity, average width, and spatial complexity, are computed from the perspective of visual attention. However, the object's width and complexity do not influence perceived visual comfort as consistently as disparity does. Third, in accordance with this psychological phenomenon, the images are divided into four categories on the basis of disparity and width, and four different models are applied to predict visual comfort more precisely. Experimental results show that the proposed VCA metric outperforms existing metrics and achieves high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
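
    A rough sketch of the feature-extraction step is given below: it assumes a dense disparity map is already available, uses a crude above-median threshold in place of the paper's segmentation, and approximates spatial complexity by gradient energy; all of these choices are illustrative assumptions.

        import numpy as np

        def foreground_features(disparity):
            # crude "foreground" mask: pixels with above-median disparity
            mask = disparity > np.median(disparity)
            cols = np.where(mask.any(axis=0))[0]
            avg_disparity = float(disparity[mask].mean())
            avg_width = float(cols.max() - cols.min() + 1) / disparity.shape[1]
            # spatial complexity approximated by gradient energy inside the object
            gy, gx = np.gradient(disparity)
            complexity = float(np.hypot(gx, gy)[mask].mean())
            return avg_disparity, avg_width, complexity

        disparity = np.random.default_rng(0).random((120, 160))   # placeholder map
        print(foreground_features(disparity))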

  15. Visual imagery and the user model applied to fuel handling at EBR-II

    International Nuclear Information System (INIS)

    Brown-VanHoozer, S.A.

    1995-01-01

    The material presented in this paper is based on two studies involving visual display designs and the user's perspective model of a system. The studies involved a methodology known as Neuro-Linguistic Programming (NLP) and its use in expanding design choices, which included the "comfort parameters" and "perspective reality" of the user's model of the world. In developing visual displays for the EBR-II fuel handling system, the focus would be to incorporate the comfort parameters that overlap across the representation systems (visual, auditory and kinesthetic); then incorporate the comfort parameters of the most prominent group of the population; and last, blend in the comfort parameters of the other two representational systems. The focus of this informal study was to use the techniques of meta-modeling and synesthesia to develop a virtual environment that closely resembled the operator's perspective of the fuel handling system of Argonne's Experimental Breeder Reactor-II. The informal study was conducted using NLP as the behavioral model in a virtual reality (VR) setting.

  16. Titan I propulsion system modeling and possible performance improvements

    Science.gov (United States)

    Giusti, Oreste

    This thesis features the Titan I propulsion systems and offers data-supported suggestions for improvements to increase performance. The original propulsion systems were modeled both graphically in CAD and via equations. Due to the limited availability of published information, it was necessary to create a more detailed, secondary set of models. Various engineering equations, pertinent to rocket engine design, were implemented in order to generate the desired extra detail. This study describes how these new models were then imported into the ESI CFD Suite. Various parameters are applied to these imported models as inputs; these include, for example, bi-propellant combinations, pressures, temperatures, and mass flow rates. The results were then processed with ESI VIEW, which is visualization software. The output files were analyzed for forces in the nozzle, and various results were generated, including sea-level thrust and specific impulse (Isp). Experimental data are provided to compare the original engine configuration models with the derivative suggested-improvement models.
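
    For reference, the quantities mentioned above follow from the standard nozzle relations thrust F = mdot*v_e + (p_e - p_a)*A_e and Isp = F/(mdot*g0); the small sketch below evaluates them for purely illustrative numbers, not Titan I data.

        def thrust_and_isp(mdot_kg_s, exit_velocity_m_s, exit_pressure_pa,
                           ambient_pressure_pa, exit_area_m2):
            g0 = 9.80665                                   # standard gravity, m/s^2
            thrust = (mdot_kg_s * exit_velocity_m_s
                      + (exit_pressure_pa - ambient_pressure_pa) * exit_area_m2)
            isp = thrust / (mdot_kg_s * g0)                # specific impulse, s
            return thrust, isp

        F, isp = thrust_and_isp(mdot_kg_s=290.0, exit_velocity_m_s=2400.0,
                                exit_pressure_pa=70e3, ambient_pressure_pa=101.3e3,
                                exit_area_m2=0.8)
        print(f"sea-level thrust = {F/1000:.0f} kN, Isp = {isp:.0f} s")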

  17. Differential up-regulation of Vesl-1/Homer 1 protein isoforms associated with decline in visual performance in a preclinical glaucoma model

    Science.gov (United States)

    Kaja, Simon; Naumchuk, Yuliya; Grillo, Stephanie L.; Borden, Priscilla K.; Koulen, Peter

    2014-01-01

    Glaucoma is a multifactorial progressive ocular pathology, clinically presenting with damage to the retina and optic nerve and ultimately leading to blindness. Retinal ganglion cell loss in glaucoma ultimately results in vision loss. Vesl/Homer proteins are scaffolding proteins that are critical for maintaining synaptic integrity by clustering, organizing and functionally regulating synaptic proteins. Current anti-glaucoma therapies target intraocular pressure (IOP) as the sole modifiable clinical parameter. Long-term pharmacotherapy and surgical treatment do not prevent gradual visual field loss as the disease progresses, highlighting the need for new complementary, alternative and comprehensive treatment approaches. Vesl/Homer expression was measured in the retinae of DBA/2J mice, a preclinical genetic glaucoma model with spontaneous mutations resulting in a phenotype reminiscent of chronic human pigmentary glaucoma. Vesl/Homer proteins were differentially expressed in the aged, glaucomatous DBA/2J retina, both at the transcriptional and translational levels. Immunoreactivity for the long Vesl-1L/Homer 1c isoform, but not for the immediate early gene product Vesl-1S/Homer 1a, was increased in the synaptic layers of the retina. This increased protein level of Vesl-1L/Homer 1c was correlated with phenotypes of increased disease severity and a decrease in visual performance. The increased expression of Vesl-1L/Homer 1c in the glaucomatous retina likely results in increased intracellular Ca2+ release through enhancement of synaptic coupling. The ensuing Ca2+ toxicity may thus activate neurodegenerative pathways and lead to the progressive loss of synaptic function in glaucoma. Our data suggest that higher levels of Vesl-1L/Homer 1c generate a more severe disease phenotype and may represent a viable target for therapy development. PMID:24219919

  18. What are the visual features underlying rapid object recognition?

    Directory of Open Access Journals (Sweden)

    Sébastien M Crouzet

    2011-11-01

    Full Text Available Research progress in machine vision has been very significant in recent years. Robust face detection and identification algorithms are already readily available to consumers, and modern computer vision algorithms for generic object recognition are now coping with the richness and complexity of natural visual scenes. Unlike early vision models of object recognition that emphasized the role of figure-ground segmentation and spatial information between parts, recent successful approaches are based on the computation of loose collections of image features without prior segmentation or any explicit encoding of spatial relations. While these remain simplistic models of visual processing, they suggest that, in principle, bottom-up activation of a loose collection of image features could support the rapid recognition of natural object categories and provide an initial coarse visual representation before more complex visual routines and attentional mechanisms take place. Focusing on biologically-plausible computational models of (bottom-up) pre-attentive visual recognition, we review some of the key visual features that have been described in the literature. We discuss the consistency of these feature-based representations with classical theories from visual psychology and test their ability to account for human performance on a rapid object categorization task.

  19. Visual momentum: an example of cognitive models applied to interface design

    International Nuclear Information System (INIS)

    Woods, D.D.

    1982-01-01

    The growth of computer applications has radically changed the nature of the man-machine interface. Through increased automation, the nature of the human's task has shifted from an emphasis on perceptual-motor skills to an emphasis on cognitive activities (e.g., problem solving and decision making). The result is a need to improve the cognitive coupling of person and machine. The goal of this paper is to describe how knowledge from cognitive psychology can be used to provide guidance to display system designers and to solve human performance problems in person-machine systems. The mechanism is to explore one example of a principle of man-machine interaction - visual momentum - that was developed on the basis of a general model of human front-end cognitive processing.

  20. Advances and limitations of visual conditioning protocols in harnessed bees.

    Science.gov (United States)

    Avarguès-Weber, Aurore; Mota, Theo

    2016-10-01

    Bees are excellent invertebrate models for studying visual learning and memory mechanisms, because of their sophisticated visual system and impressive cognitive capacities associated with a relatively simple brain. Visual learning in free-flying bees has traditionally been studied using an operant conditioning paradigm. This well-established protocol, however, can hardly be combined with invasive procedures for studying the neurobiological basis of visual learning. Different efforts have been made to develop protocols in which harnessed honey bees could associate visual cues with reinforcement, though learning performances remain poorer than those obtained with free-flying animals. Especially in the last decade, the intention of improving the visual learning performances of harnessed bees has led many authors to adopt distinct visual conditioning protocols, altering parameters such as the harnessing method, the nature and duration of visual stimulation, the number of trials, and the inter-trial intervals, among others. As a result, the literature provides data that are hardly comparable and sometimes contradictory. In the present review, we provide an extensive analysis of the literature available on visual conditioning of harnessed bees, with special emphasis on the comparison of the diverse conditioning parameters adopted by different authors. Together with this comparative overview, we discuss how these diverse conditioning parameters could modulate the visual learning performances of harnessed bees. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Stress Induction and Visual Working Memory Performance: The Effects of Emotional and Non-Emotional Stimuli

    Directory of Open Access Journals (Sweden)

    Zahra Khayyer

    2017-05-01

    Full Text Available Background Some studies have shown working memory impairment following stressful situations. Researchers have also found that working memory performance depends on many different factors, such as the emotional load of stimuli and gender. Objectives The present study aimed to determine the effects of stress induction on visual working memory (VWM) performance among female and male university students. Methods This quasi-experimental research employed a posttest-only control group design (within-group study). A total of 62 university students (32 males and 30 females) were randomly selected and allocated to experimental and control groups (mean age = 23.73 years). Stress was induced using the cold pressor test (CPT), and an n-back task was then implemented to evaluate visual working memory function (the number of correct items, reaction times, and the number of wrong items) through emotional and non-emotional pictures. One hundred pictures with different valences were selected from the International Affective Picture System (IAPS). Results The results showed that stress impaired different visual working memory functions (P < 0.002 for correct scores, P < 0.001 for reaction time, and P < 0.002 for wrong items). Conclusions In general, stress significantly decreases VWM performance. Females were more strongly affected by stress than males, and VWM performance was better for emotional stimuli than for non-emotional stimuli.

  2. A low complexity visualization tool that helps to perform complex systems analysis

    International Nuclear Information System (INIS)

    Beiro, M G; Alvarez-Hamelin, J I; Busch, J R

    2008-01-01

    In this paper, we present an extension of large network visualization (LaNet-vi), a tool to visualize large-scale networks using the k-core decomposition. One of the new features is how vertices compute their angular position. While in the previous version this was done using shell clusters, in this version we use the angular coordinate of vertices in higher k-shells, and arrange the highest shell according to a clique decomposition. The time complexity goes from O(n√n) to O(n) under bounds on a heavy-tailed degree distribution. The tool also performs a k-core-connectivity analysis, highlighting vertices that are not k-connected; e.g., this property is useful for measuring robustness or quality of service (QoS) capabilities in communication networks. Finally, the current version of LaNet-vi can draw labels and all the edges using transparencies, yielding an accurate visualization. Based on the obtained figure, it is possible to distinguish different sources and types of complex networks at a glance, in a sort of 'network iris-print'.
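
    For readers who want to reproduce the underlying decomposition, the sketch below computes k-core (shell) indices on a toy heavy-tailed graph with networkx; the generated graph and the reported summary are illustrative, not LaNet-vi itself.

        import networkx as nx

        G = nx.barabasi_albert_graph(500, 3, seed=42)   # heavy-tailed toy network
        core_number = nx.core_number(G)                 # node -> k-shell index

        max_core = max(core_number.values())
        innermost = [n for n, k in core_number.items() if k == max_core]
        print(f"max core index: {max_core}, nodes in innermost shell: {len(innermost)}")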

  3. A low complexity visualization tool that helps to perform complex systems analysis

    Science.gov (United States)

    Beiró, M. G.; Alvarez-Hamelin, J. I.; Busch, J. R.

    2008-12-01

    In this paper, we present an extension of large network visualization (LaNet-vi), a tool to visualize large scale networks using the k-core decomposition. One of the new features is how vertices compute their angular position. While in the previous version this was done using shell clusters, in this version we use the angular coordinate of vertices in higher k-shells, and arrange the highest shell according to a clique decomposition. The time complexity goes from O(n√n) to O(n) under bounds on a heavy-tailed degree distribution. The tool also performs a k-core-connectivity analysis, highlighting vertices that are not k-connected; e.g., this property is useful for measuring robustness or quality of service (QoS) capabilities in communication networks. Finally, the current version of LaNet-vi can draw labels and all the edges using transparencies, yielding an accurate visualization. Based on the obtained figure, it is possible to distinguish different sources and types of complex networks at a glance, in a sort of 'network iris-print'.

  4. NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data

    Science.gov (United States)

    Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.

    2005-01-01

    NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) the data conversion part is designed to convert binary raw data to and from NetCDF data; it can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) the visualization part is designed for displaying grid map series (playing forward or backward) with a simple map legend, and for displaying temporal trend curves for data at individual map pixels; and 3) the modeling interface is designed for environmental model development, providing a set of integrated NetCDF functions for processing NetCDF data. To demonstrate that NCWin can easily extend the functions of some current GIS software and Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint to show NetCDF map animations are given.
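
    The raw-to-NetCDF conversion idea can be sketched in Python with the netCDF4 package, as below; the array contents, dimension names, and output file are hypothetical stand-ins for a BSQ raster that NCWin would convert.

        import numpy as np
        from netCDF4 import Dataset

        # stand-in for a BSQ raster that would normally be read with np.fromfile
        raw = np.random.default_rng(0).random((12, 180, 360)).astype("f4")

        with Dataset("example_ndvi.nc", "w") as nc:
            nc.createDimension("time", raw.shape[0])
            nc.createDimension("lat", raw.shape[1])
            nc.createDimension("lon", raw.shape[2])
            var = nc.createVariable("ndvi", "f4", ("time", "lat", "lon"))
            var.units = "1"          # dimensionless index
            var[:] = raw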

  5. Model-Based Synthesis of Visual Speech Movements from 3D Video

    Directory of Open Access Journals (Sweden)

    Edge, James D.

    2009-01-01

    We describe a method for the synthesis of visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured using a 3D stereo face capture system and split up into phonetic units. A dynamic parameterisation of this data is constructed which maintains the relationship between lip shapes and velocities; within this parameterisation a model of how lips move is built and is used in the animation of visual speech movements from speech audio input. The mapping from audio parameters to lip movements is disambiguated by selecting only the most similar stored phonetic units to the target utterance during synthesis. By combining properties of model-based synthesis (e.g., HMMs, neural nets) with unit selection we improve the quality of our speech synthesis.
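    The unit-selection step described above can be sketched as a nearest-neighbour search over stored phonetic units in an audio parameter space; the feature dimensions and data below are synthetic placeholders, not the paper's actual parameterisation.

      import numpy as np

      rng = np.random.default_rng(0)

      # Stored phonetic units: each summarised by a vector of audio parameters,
      # with corresponding lip-shape parameters learned from 3D capture.
      unit_features = rng.normal(size=(500, 12))      # 500 units, 12-dim audio features
      unit_lip_params = rng.normal(size=(500, 8))     # matching lip-shape parameters

      def select_units(target, k=5):
          """Return lip parameters of the k stored units closest to the target audio frame."""
          d = np.linalg.norm(unit_features - target, axis=1)
          nearest = np.argsort(d)[:k]
          return unit_lip_params[nearest]

      target_frame = rng.normal(size=12)
      candidates = select_units(target_frame)
      print(candidates.mean(axis=0))   # e.g. blend the selected candidates into one lip pose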

  6. Psyplot: Visualizing rectangular and triangular Climate Model Data with Python

    Science.gov (United States)

    Sommer, Philipp

    2016-04-01

    The development and use of climate models often requires the visualization of geo-referenced data. Creating visualizations should be fast, attractive, flexible, easily applicable and easily reproducible. There is a wide range of software tools available for visualizing raster data, but they often are inaccessible to many users (e.g. because they are difficult to use in a script or have low flexibility). In order to facilitate easy visualization of geo-referenced data, we developed a new framework called "psyplot," which can aid earth system scientists with their daily work. It is written purely in the programming language Python and primarily built upon the python packages matplotlib, cartopy and xray. The package can visualize data stored on the hard disk (e.g. NetCDF, GeoTIFF, or any other file format supported by the xray package), or directly from memory or Climate Data Operators (CDOs). Furthermore, data can be visualized on a rectangular grid (following or not following the CF Conventions) and on a triangular grid (following the CF or UGRID Conventions). Psyplot visualizes 2D scalar and vector fields, enabling the user to easily manage and format multiple plots at the same time, and to export the plots into all common picture formats and movies covered by the matplotlib package. The package can currently be used in an interactive python session or in python scripts, and will soon be extended for use with a graphical user interface (GUI). Finally, the psyplot framework enables flexible configuration, allows easy integration into other scripts that use matplotlib, and provides a flexible foundation for further development.
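    Assuming psyplot and its psy-maps plugin are installed, a 2D field can be mapped in a few lines along the lines of the package's documented usage; the file name and variable below are hypothetical.

      import psyplot.project as psy

      # Plot a 2D scalar field from a (hypothetical) NetCDF file on a map projection.
      p = psy.plot.mapplot("temperature.nc", name="t2m", cmap="RdBu_r")

      p.export("t2m.png")   # save the figure to disk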

  7. Language, visuality, and the body. On the return of discourse in contemporary performance

    Directory of Open Access Journals (Sweden)

    Vangelis Athanassopoulos

    2013-12-01

    This article deals with the return of discourse in experimental performance-based artistic practices. By putting this return in a historical perspective, we wish to address the questions it raises on the relation between language, image, and the body, resituating the avant-garde heritage in a contemporary context where intermediality and transdisciplinarity tend to become the norm rather than the exception. The discussion of the status and function of discourse in this context calls on the field of theatre and its ambivalent role in modern aesthetics, both as a specifically determined artistic discipline, and as a blending of heterogeneous elements, which defy the assigned limitations of creative practice. The confrontation of Antonin Artaud's writings with Michael Fried's conception of theatricality aims to bring to the fore the cultural transformations and historical paradoxes which inform the shift from theatre to performance as an experimental field situated “between” the arts and embracing a wide range of practices, from visual arts to music and dance. The case of lecture-performance enables us to call attention to the internal contradictions of the “educational” interpretation of such experimental practices and their autonomization inside the limits of a specific artistic genre. The main argument is that, despite the plurality of its origins and its claims to intermediality and transdisciplinarity, lecture-performance as a genre is attracted by or gravitates around the extended field of the visual arts. By focusing on the work of Jérôme Bel, Noé Soulier, Giuseppe Chico, Barbara Matijevic, and Carole Douillard, we stress some of the ways in which contemporary discursive strategies displace the visual spectacle toward a conception of the body as the limit of signification.

  8. Memory-guided saccade processing in visual form agnosia (patient DF).

    Science.gov (United States)

    Rossit, Stéphanie; Szymanek, Larissa; Butler, Stephen H; Harvey, Monika

    2010-01-01

    According to Milner and Goodale's model (The visual brain in action, Oxford University Press, Oxford, 2006) areas in the ventral visual stream mediate visual perception and off-line actions, whilst regions in the dorsal visual stream mediate the on-line visual control of action. Strong evidence for this model comes from a patient (DF), who suffers from visual form agnosia after bilateral damage to the ventro-lateral occipital region, sparing V1. It has been reported that she is normal in immediate reaching and grasping, yet severely impaired when asked to perform delayed actions. Here we investigated whether this dissociation would extend to saccade execution. Neurophysiological studies and TMS work in humans have shown that the posterior parietal cortex (PPC), on the right in particular (supposedly spared in DF), is involved in the control of memory-guided saccades. Surprisingly though, we found that, just as reported for reaching and grasping, DF's saccadic accuracy was much reduced in the memory compared to the stimulus-guided condition. These data support the idea of a tight coupling of eye and hand movements and further suggest that dorsal stream structures may not be sufficient to drive memory-guided saccadic performance.

  9. An Integrated Biomechanical Model for Microgravity-Induced Visual Impairment

    Science.gov (United States)

    Nelson, Emily S.; Best, Lauren M.; Myers, Jerry G.; Mulugeta, Lealem

    2012-01-01

    When gravitational unloading occurs upon entry to space, astronauts experience a major shift in the distribution of their bodily fluids, with a net headward movement. Measurements have shown that intraocular pressure spikes, and there is a strong suspicion that intracranial pressure also rises. Some astronauts in both short- and long-duration spaceflight develop visual acuity changes, which may or may not reverse upon return to earth gravity. To date, of the 36 U.S. astronauts who have participated in long-duration space missions on the International Space Station, 15 crew members have developed minor to severe visual decrements and anatomical changes. These ophthalmic changes include hyperopic shift, optic nerve distension, optic disc edema, globe flattening, choroidal folds, and elevated cerebrospinal fluid pressure. In order to understand the physical mechanisms behind these phenomena, NASA is developing an integrated model that appropriately captures whole-body fluids transport through lumped-parameter models for the cerebrospinal and cardiovascular systems. This data feeds into a finite element model for the ocular globe and retrobulbar subarachnoid space through time-dependent boundary conditions. Although tissue models and finite element representations of the corneo-scleral shell, retina, choroid and optic nerve head have been integrated to study pathological conditions such as glaucoma, the retrobulbar subarachnoid space behind the eye has received much less attention. This presentation will describe the development and scientific foundation of our holistic model.
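    NASA's integrated model is far more elaborate, but the lumped-parameter idea can be illustrated with a generic two-compartment sketch in which fluid redistributes headward after unloading; every value below is an arbitrary placeholder with no physiological calibration.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Generic two-compartment lumped-parameter sketch (illustrative only):
      # v_head and v_lower are fluid volumes; flow is driven by a pressure difference,
      # with pressures proportional to volume through compliances C_head, C_lower.
      C_head, C_lower = 1.0, 5.0      # arbitrary compliances
      R = 2.0                          # arbitrary flow resistance

      def fluid_shift(t, y):
          v_head, v_lower = y
          p_head, p_lower = v_head / C_head, v_lower / C_lower
          q = (p_lower - p_head) / R   # flow toward the head while p_lower > p_head
          return [q, -q]

      # In 1 g most fluid sits in the lower body; unloading lets it redistribute.
      sol = solve_ivp(fluid_shift, (0.0, 50.0), [1.0, 9.0], dense_output=True)
      t = np.linspace(0.0, 50.0, 6)
      print(np.round(sol.sol(t), 3))   # headward volume rises toward equilibrium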

  10. 3-dimensional orthodontics visualization system with dental study models and orthopantomograms

    Science.gov (United States)

    Zhang, Hua; Ong, S. H.; Foong, K. W. C.; Dhar, T.

    2005-04-01

    The aim of this study is to develop a system that provides 3-dimensional visualization of orthodontic treatments. Dental plaster models and the corresponding orthopantomogram (dental panoramic tomogram) are first digitized and fed into the system. A semi-automatic segmentation technique is applied to the plaster models to detect the dental arches, tooth interstices and gum margins, which are used to extract individual crown models. A 3-dimensional representation of the roots, generated by deforming generic tooth models with the orthopantomogram using radial basis functions, is attached to the corresponding crowns to enable visualization of complete teeth. An optional algorithm to close the gaps between deformed roots and actual crowns by using multiquadric radial basis functions is also presented, which is capable of generating a smooth mesh representation of complete 3-dimensional teeth. The user interface is carefully designed to achieve a flexible system with as much user friendliness as possible. Manual calibration and correction are possible throughout the data processing steps to compensate for occasional misbehavior of the automatic procedures. By allowing users to move and re-arrange individual teeth (with their roots) on a full dentition, this orthodontic visualization system provides an easy and accurate way of simulating and planning orthodontic treatment. Its capability of presenting 3-dimensional root information with only study models and an orthopantomogram is especially useful for patients who do not undergo CT scanning, which is not a routine procedure in most orthodontic cases.
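    A minimal sketch of the radial-basis-function deformation idea, using SciPy's multiquadric Rbf interpolator: displacements known at a few control points are interpolated to every vertex of a generic root mesh. The control points, displacements, and mesh below are synthetic placeholders.

      import numpy as np
      from scipy.interpolate import Rbf

      rng = np.random.default_rng(1)

      # Control points on a generic tooth model and their target displacements
      # (e.g. derived from crown landmarks and the orthopantomogram outline).
      ctrl = rng.uniform(-1.0, 1.0, size=(20, 3))
      disp = 0.1 * rng.normal(size=(20, 3))

      # One multiquadric RBF per displacement component.
      rbfs = [Rbf(ctrl[:, 0], ctrl[:, 1], ctrl[:, 2], disp[:, i],
                  function="multiquadric") for i in range(3)]

      # Apply the interpolated displacement field to the full mesh vertices.
      verts = rng.uniform(-1.0, 1.0, size=(1000, 3))
      deformed = verts + np.stack(
          [f(verts[:, 0], verts[:, 1], verts[:, 2]) for f in rbfs], axis=1)
      print(deformed.shape)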

  11. Implementation of ICARE learning model using visualization animation on biotechnology course

    Science.gov (United States)

    Hidayat, Habibi

    2017-12-01

    ICARE is a learning model that engages students directly as active participants in the learning process through animated visualization media. ICARE comprises five key elements of children's and adults' learning experience: introduction, connection, application, reflection, and extension. The ICARE system is used to ensure that participants have the opportunity to apply what they have learned, so that the message delivered by the lecturer can be understood and retained by students over a long period. The ICARE learning model with animated visualization was therefore considered capable of improving learning outcomes and interest in learning in the Biotechnology course. Applying this model motivated students to participate in the learning process, and learning outcomes increased compared with before. In the Biotechnology course, applying the ICARE learning model with animated visualization improved student results, with the average score rising from 70.98 on the mid-term test (with a percentage of 75%) to 71.57 on the final test (with a percentage of 68.63%). Interest in learning also increased across cycles, as shown by student activity: an average of 33.5 (adequate category) in the first cycle, 36.5 (good category) in the second cycle, and 36.5 (good category) in the third cycle.

  12. Alzheimer disease: functional abnormalities in the dorsal visual pathway.

    LENUS (Irish Health Repository)

    Bokde, Arun L W

    2012-02-01

    PURPOSE: To evaluate whether patients with Alzheimer disease (AD) have altered activation compared with age-matched healthy control (HC) subjects during a task that typically recruits the dorsal visual pathway. MATERIALS AND METHODS: The study was performed in accordance with the Declaration of Helsinki, with institutional ethics committee approval, and all subjects provided written informed consent. Two tasks were performed to investigate neural function: face matching and location matching. Twelve patients with mild AD and 14 age-matched HC subjects were included. Brain activation was measured by using functional magnetic resonance imaging. Group statistical analyses were based on a mixed-effects model corrected for multiple comparisons. RESULTS: Task performance was not statistically different between the two groups, and within groups there were no differences in task performance. In the HC group, the visual perception tasks selectively activated the visual pathways. Conversely in the AD group, there was no selective activation during performance of these same tasks. Along the dorsal visual pathway, the AD group recruited additional regions, primarily in the parietal and frontal lobes, for the location-matching task. There were no differences in activation between groups during the face-matching task. CONCLUSION: The increased activation in the AD group may represent a compensatory mechanism for decreased processing effectiveness in early visual areas of patients with AD. The findings support the idea that the dorsal visual pathway is more susceptible to putative AD-related neuropathologic changes than is the ventral visual pathway.

  13. Model My Watershed: A high-performance cloud application for public engagement, watershed modeling and conservation decision support

    Science.gov (United States)

    Aufdenkampe, A. K.; Tarboton, D. G.; Horsburgh, J. S.; Mayorga, E.; McFarland, M.; Robbins, A.; Haag, S.; Shokoufandeh, A.; Evans, B. M.; Arscott, D. B.

    2017-12-01

    The Model My Watershed Web app (https://app.wikiwatershed.org/) and the BiG-CZ Data Portal (http://portal.bigcz.org/) are web applications that share a common codebase and a common goal to deliver high-performance discovery, visualization and analysis of geospatial data in an intuitive user interface in the web browser. Model My Watershed (MMW) was designed as a decision support system for watershed conservation implementation. BiG CZ Data Portal was designed to provide context and background data for research sites. Users begin by creating an Area of Interest, via an automated watershed delineation tool, a free draw tool, selection of a predefined area such as a county or USGS Hydrological Unit (HUC), or uploading a custom polygon. Both Web apps visualize and provide summary statistics of land use, soil groups, streams, climate and other geospatial information. MMW then allows users to run a watershed model to simulate different scenarios of human impacts on stormwater runoff and water-quality. BiG CZ Data Portal allows users to search for scientific and monitoring data within the Area of Interest, which also serves as a prototype for the upcoming Monitor My Watershed web app. Both systems integrate with CUAHSI cyberinfrastructure, including visualizing observational data from CUAHSI Water Data Center and storing user data via CUAHSI HydroShare. Both systems also integrate with the new EnviroDIY Water Quality Data Portal (http://data.envirodiy.org/), a system for crowd-sourcing environmental monitoring data using open-source sensor stations (http://envirodiy.org/mayfly/) and based on the Observations Data Model v2.

  14. Visual tracking speed is related to basketball-specific measures of performance in NBA players.

    Science.gov (United States)

    Mangine, Gerald T; Hoffman, Jay R; Wells, Adam J; Gonzalez, Adam M; Rogowski, Joseph P; Townsend, Jeremy R; Jajtner, Adam R; Beyer, Kyle S; Bohner, Jonathan D; Pruna, Gabriel J; Fragala, Maren S; Stout, Jeffrey R

    2014-09-01

    The purpose of this study was to determine the relationship of visual tracking speed (VTS) and reaction time (RT) to basketball-specific measures of performance. Twelve professional basketball players were tested before the 2012-13 season. Visual tracking speed was obtained from 1 core session (20 trials) of the multiple object tracking test, whereas RT was measured by fixed- and variable-region choice reaction tests, using a light-based testing device. Performance in VTS and RT was compared with basketball-specific measures of performance (assists [AST]; turnovers [TO]; assist-to-turnover ratio [AST/TO]; steals [STL]) during the regular basketball season. All performance measures were reported per 100 minutes played. Performance differences between backcourt (guards; n = 5) and frontcourt (forward/centers; n = 7) positions were also examined. Relationships were most likely present between VTS and several basketball-specific performance measures, including AST (r = 0.78). Backcourt players were most likely to outperform frontcourt players in AST and very likely to do so for VTS, TO, and AST/TO. In conclusion, VTS seems to be related to a basketball player's ability to see and respond to various stimuli on the basketball court, resulting in more positive plays as reflected by a greater number of AST and STL and lower turnovers.

  15. Visual Constructive and Visual-Motor Skills in Deaf Native Signers

    Science.gov (United States)

    Hauser, Peter C.; Cohen, Julie; Dye, Matthew W. G.; Bavelier, Daphne

    2007-01-01

    Visual constructive and visual-motor skills in the deaf population were investigated by comparing performance of deaf native signers (n = 20) to that of hearing nonsigners (n = 20) on the Beery-Buktenica Developmental Test of Visual-Motor Integration, Rey-Osterrieth Complex Figure Test, Wechsler Memory Scale Visual Reproduction subtest, and…

  16. Multisensory teamwork: using a tactile or an auditory display to exchange gaze information improves performance in joint visual search.

    Science.gov (United States)

    Wahn, Basil; Schwandt, Jessika; Krüger, Matti; Crafa, Daina; Nunnendorf, Vanessa; König, Peter

    2016-06-01

    In joint tasks, adjusting to the actions of others is critical for success. For joint visual search tasks, research has shown that when search partners visually receive information about each other's gaze, they use this information to adjust to each other's actions, resulting in faster search performance. The present study used a visual, a tactile and an auditory display, respectively, to provide search partners with information about each other's gaze. Results showed that search partners performed faster when the gaze information was received via a tactile or auditory display in comparison to receiving it via a visual display or receiving no gaze information. Findings demonstrate the effectiveness of tactile and auditory displays for receiving task-relevant information in joint tasks and are applicable to circumstances in which little or no visual information is available or the visual modality is already taxed with a demanding task such as air-traffic control. Practitioner Summary: The present study demonstrates that tactile and auditory displays are effective for receiving information about actions of others in joint tasks. Findings are either applicable to circumstances in which little or no visual information is available or when the visual modality is already taxed with a demanding task.

  17. Eksplorasi Pose dalam Pemotretan Model Melalui Kajian Visual Relief Karmawibhangga [Exploring Poses in Model Photography through a Visual Study of the Karmawibhangga Reliefs]

    Directory of Open Access Journals (Sweden)

    Noor Latif CM

    2015-10-01

    The Karmawibhangga relief panels located at the foot of Borobudur are visual artifacts that contain fragments of past life with very high historical value. The 160 Karmawibhangga panels depict the reality of people's lives at the time, framed by moral messages. The reliefs provide many visual references that can be excavated and reconstructed for the benefit of today's creative industries. This research explored one small part of these masterpieces of the past through photography. The visual understanding of the artists who created these reliefs is evident in the beauty of the gestures used to build stories that are well worth re-examining. Visual communication through gestures in the Karmawibhangga reliefs invites new assumptions about how dialects of body language differ from present-day conditions. Model-genre photography is very useful here in connection with the development of locally nuanced photographic scholarship. Efforts to develop tradition and culture through new media are expected to become creative commodities with very strong product differentiation.

  18. FRAMEWORK AND APPLICATION FOR MODELING CONTROL ROOM CREW PERFORMANCE AT NUCLEAR POWER PLANTS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L Boring; David I Gertman; Tuan Q Tran; Brian F Gore

    2008-09-01

    This paper summarizes an emerging project regarding the utilization of high-fidelity MIDAS simulations for visualizing and modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (i) the estimation of human error associated with advanced control room equipment and configurations, (ii) the investigative determination of contributory cognitive factors for risk significant scenarios involving control room operating crews, and (iii) the certification of reduced staffing levels in advanced control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of cognition, elements of situation awareness, and risk associated with human performance in next generation control rooms.

  19. FRAMEWORK AND APPLICATION FOR MODELING CONTROL ROOM CREW PERFORMANCE AT NUCLEAR POWER PLANTS

    International Nuclear Information System (INIS)

    Ronald L Boring; David I Gertman; Tuan Q Tran; Brian F Gore

    2008-01-01

    This paper summarizes an emerging project regarding the utilization of high-fidelity MIDAS simulations for visualizing and modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (1) the estimation of human error associated with advanced control room equipment and configurations, (2) the investigative determination of contributory cognitive factors for risk significant scenarios involving control room operating crews, and (3) the certification of reduced staffing levels in advanced control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of cognition, elements of situation awareness, and risk associated with human performance in next generation control rooms

  20. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov [Division of Imaging, Diagnostics, and Software Reliability, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland 20993 (United States)

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on the fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying
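    hybridMANTIS performs detailed columnar optical transport on the GPU; the following is only a toy, serial random-walk sketch of optical photon transport in a scintillator slab, with invented attenuation and absorption parameters, to illustrate the kind of quantity such a simulation tallies (e.g., exit positions for a point response).

      import numpy as np

      rng = np.random.default_rng(42)

      thickness = 0.5          # slab thickness in mm (arbitrary)
      mean_free_path = 0.05    # mm, arbitrary
      absorb_prob = 0.1        # absorption probability per interaction, arbitrary

      exit_x = []
      for _ in range(10_000):                      # photons launched at the origin
          pos = np.zeros(3)
          while True:
              step = rng.exponential(mean_free_path)
              # Sample an isotropic direction.
              cos_t = rng.uniform(-1.0, 1.0)
              phi = rng.uniform(0.0, 2.0 * np.pi)
              sin_t = np.sqrt(1.0 - cos_t ** 2)
              pos += step * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
              if pos[2] <= 0.0 or pos[2] >= thickness:
                  exit_x.append(pos[0])            # photon leaves the slab
                  break
              if rng.random() < absorb_prob:
                  break                            # photon absorbed
      print("collected photons:", len(exit_x), "spread (std of exit x):", np.std(exit_x))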

  1. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    International Nuclear Information System (INIS)

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-01-01

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on the fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying

  2. More insight into the interplay of response selection and visual attention in dual-tasks: masked visual search and response selection are performed in parallel.

    Science.gov (United States)

    Reimer, Christina B; Schubert, Torsten

    2017-09-15

    Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common or on distinct capacity limitations. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time, that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. Task 1 was the same as in Experiments 1 and 2
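    The locus-of-slack logic can be sketched numerically: compare the set-size slope of search times at the short versus the long SOA; roughly equal slopes indicate an additive (sequential) pattern, while a flatter slope at the short SOA indicates an underadditive (parallel) pattern. The RT values below are fabricated for illustration only.

      import numpy as np

      set_sizes = np.array([4, 8, 16])

      # Fabricated mean Task 2 search times (ms) per SOA condition.
      rt_short_soa = np.array([820.0, 850.0, 905.0])   # short SOA
      rt_long_soa = np.array([560.0, 640.0, 790.0])    # long SOA

      slope_short = np.polyfit(set_sizes, rt_short_soa, 1)[0]
      slope_long = np.polyfit(set_sizes, rt_long_soa, 1)[0]

      print(f"set-size effect: {slope_short:.1f} ms/item (short SOA), "
            f"{slope_long:.1f} ms/item (long SOA)")
      if slope_short < slope_long:
          print("underadditive interaction -> consistent with parallel processing")
      else:
          print("additive pattern -> consistent with sequential processing")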

  3. Visualizing Network Traffic to Understand the Performance of Massively Parallel Simulations

    KAUST Repository

    Landge, A. G.

    2012-12-01

    The performance of massively parallel applications is often heavily impacted by the cost of communication among compute nodes. However, determining how to best use the network is a formidable task, made challenging by the ever increasing size and complexity of modern supercomputers. This paper applies visualization techniques to aid parallel application developers in understanding the network activity by enabling a detailed exploration of the flow of packets through the hardware interconnect. In order to visualize this large and complex data, we employ two linked views of the hardware network. The first is a 2D view that represents the network structure as one of several simplified planar projections. This view is designed to allow a user to easily identify trends and patterns in the network traffic. The second is a 3D view that augments the 2D view by preserving the physical network topology and providing a context that is familiar to the application developers. Using the massively parallel multi-physics code pF3D as a case study, we demonstrate that our tool provides valuable insight that we use to explain and optimize pF3D's performance on an IBM Blue Gene/P system. © 1995-2012 IEEE.

  4. Dynamic visual noise interferes with storage in visual working memory.

    Science.gov (United States)

    Dean, Graham M; Dewhurst, Stephen A; Whittaker, Annalise

    2008-01-01

    Several studies have demonstrated that dynamic visual noise (DVN) does not interfere with memory for random matrices. This has led to suggestions that (a) visual working memory is distinct from imagery, and (b) visual working memory is not a gateway between sensory input and long-term storage. A comparison of the interference effects of DVN with memory for matrices and colored textures shows that DVN can interfere with visual working memory, probably at a level of visual detail not easily supported by long-term memory structures or the recoding of the visual pattern elements. The results support a gateway model of visuospatial working memory and raise questions about the most appropriate ways to measure and model the different levels of representation of information that can be held in visual working memory.

  5. Bingo! Externally-Supported Performance Intervention for Deficient Visual Search in Normal Aging, Parkinson’s Disease and Alzheimer’s Disease

    Science.gov (United States)

    Laudate, Thomas M.; Neargarder, Sandy; Dunne, Tracy E.; Sullivan, Karen D.; Joshi, Pallavi; Gilmore, Grover C.; Riedel, Tatiana M.; Cronin-Golomb, Alice

    2011-01-01

    External support may improve task performance regardless of an individual’s ability to compensate for cognitive deficits through internally-generated mechanisms. We investigated whether performance of a complex, familiar visual search task (the game of bingo) could be enhanced in groups with suboptimal vision by providing external support through manipulation of task stimuli. Participants were 19 younger adults, 14 individuals with probable Alzheimer’s disease (AD), 13 AD-matched healthy adults, 17 non-demented individuals with Parkinson’s disease (PD), and 20 PD-matched healthy adults. We varied stimulus contrast, size, and visual complexity during game play. The externally-supported performance interventions of increased stimulus size and decreased complexity resulted in improved performance in all groups. The AD group additionally benefited from increased contrast, presumably by compensating for their contrast sensitivity deficit. The general finding of improved performance across healthy and afflicted groups suggests the value of visual support as an easy-to-apply intervention to enhance cognitive performance. PMID:22066941

  6. Influence of visual clutter on the effect of navigated safety inspection: a case study on elevator installation.

    Science.gov (United States)

    Liao, Pin-Chao; Sun, Xinlu; Liu, Mei; Shih, Yu-Nien

    2018-01-11

    Navigated safety inspection based on task-specific checklists can increase the hazard detection rate, though theoretically with interference from scene complexity. Visual clutter, a proxy of scene complexity, can theoretically impair visual search performance, but its impact on navigated safety inspection performance remains to be explored for the optimization of navigated inspection. This research aims to explore whether the relationship between working memory and hazard detection rate is moderated by visual clutter. Based on a perceptive model of hazard detection, we: (a) developed a mathematical influence model for construction hazard detection; (b) designed an experiment to observe hazard detection rates with adjusted working memory under different levels of visual clutter, while using an eye-tracking device to observe participants' visual search processes; and (c) used logistic regression to analyze the developed model under various levels of visual clutter. The effect of a strengthened working memory on the detection rate through increased search efficiency is more apparent under high visual clutter. This study confirms the role of visual clutter in construction-navigated inspections, thus serving as a foundation for the optimization of inspection planning.
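    The moderation analysis described above can be sketched with a logistic regression that includes an interaction term; the variable names and simulated data below are hypothetical and are not the study's data.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(7)
      n = 400

      # Hypothetical predictors: working-memory strength and scene visual clutter.
      wm = rng.normal(size=n)
      clutter = rng.normal(size=n)

      # Simulate detection outcomes with a working-memory-by-clutter interaction.
      logit_p = 0.2 + 0.8 * wm - 0.5 * clutter + 0.6 * wm * clutter
      detected = rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))

      df = pd.DataFrame({"detected": detected.astype(int), "wm": wm, "clutter": clutter})
      model = smf.logit("detected ~ wm * clutter", data=df).fit(disp=False)
      print(model.params)   # the wm:clutter coefficient captures the moderation effect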

  7. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.

    Science.gov (United States)

    Rolls, Edmund T

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.
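    A minimal numpy sketch of the trace learning idea: the postsynaptic activity is low-pass filtered over time (a short-term memory trace) and the weight update associates that trace with the current input. The learning rate, trace constant, and inputs are arbitrary, and this is not the full VisNet architecture.

      import numpy as np

      rng = np.random.default_rng(3)

      n_inputs, n_steps = 50, 200
      alpha, eta = 0.01, 0.8          # learning rate and trace decay (arbitrary)

      w = rng.normal(scale=0.1, size=n_inputs)
      y_trace = 0.0

      for t in range(n_steps):
          x = rng.random(n_inputs)              # input vector (e.g. transformed views of an object)
          y = max(0.0, w @ x)                   # simple rectified output neuron
          y_trace = eta * y_trace + (1.0 - eta) * y   # short-term memory trace of the output
          w += alpha * y_trace * x              # associative update uses the trace, not just y
          w /= np.linalg.norm(w)                # keep the weight vector bounded

      print(np.round(w[:5], 3))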

  8. An Integrated Web-Based 3D Modeling and Visualization Platform to Support Sustainable Cities

    Science.gov (United States)

    Amirebrahimi, S.; Rajabifard, A.

    2012-07-01

    Sustainable development is seen as the key solution for preserving the sustainability of cities in the face of ongoing population growth and its negative impacts. This is complex and requires holistic, multidisciplinary decision making. A variety of stakeholders with different backgrounds also needs to be considered and involved. Numerous web-based modeling and visualization tools have been designed and developed to support this process. There have been some success stories; however, the majority failed to provide a comprehensive platform supporting the different aspects of sustainable development. In this work, in the context of SDI and land administration, the CSDILA Platform, a 3D visualization and modeling platform, was proposed; it can be used to model and visualize different dimensions to facilitate the achievement of sustainability, in particular in an urban context. The methodology involved the design of a generic framework for the development of an analytical and visualization tool for the web. The CSDILA Platform was then implemented with a number of technologies, following the guidelines provided by the framework. The platform has a modular structure and uses a service-oriented architecture (SOA). It is capable of managing spatial objects in a 4D data store and can flexibly incorporate a variety of developed models through the platform's API. Development scenarios can be modeled and tested using the analysis and modeling component of the platform, and the results are visualized in a seamless 3D environment. The platform was further tested using a number of scenarios and showed promising results and the potential to serve wider needs. This paper presents and discusses the design process of the generic framework, the implementation of the CSDILA Platform and the technologies used, as well as findings and future research directions.

  9. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models

    Science.gov (United States)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.

    2015-12-01

    A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25~5km horizontal grid spacings. The main advantage of the CRM is that it can allow explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products, due to the large data volume (~10TB) and the complexity of CRM physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) model, to be accessed and processed by Hadoop; (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs; (3) a technique that visualizes Hadoop-resident data with IDL; (4) a technique that subsets Hadoop-resident data, compliant with the SCL data model, with HIVE or Impala via HUE's Web interface; and (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a
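    The NetCDF-to-CSV conversion step of the SCL pipeline can be sketched serially in Python (the converter described above is parallel); the file name, variable, and dimension layout below are hypothetical.

      import csv
      from netCDF4 import Dataset

      # Flatten one 3D variable (time, y, x) of a hypothetical CRM output file into CSV rows.
      with Dataset("gce_output.nc") as nc, open("gce_output.csv", "w", newline="") as out:
          writer = csv.writer(out)
          writer.writerow(["time", "y", "x", "value"])
          var = nc.variables["precip"]
          for t in range(var.shape[0]):
              frame = var[t, :, :]
              for j in range(frame.shape[0]):
                  for i in range(frame.shape[1]):
                      writer.writerow([t, j, i, float(frame[j, i])])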

  10. Deep neural networks rival the representation of primate IT cortex for core visual object recognition.

    Directory of Open Access Journals (Sweden)

    Charles F Cadieu

    2014-12-01

    The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.

  11. Enhancing reading performance through action video games: the role of visual attention span.

    Science.gov (United States)

    Antzaka, A; Lallier, M; Meyer, S; Diard, J; Carreiras, M; Valdois, S

    2017-11-06

    Recent studies have reported that Action Video Game (AVG) training improves not only certain attentional components, but also reading fluency in children with dyslexia. We aimed to investigate the shared attentional components of AVG playing and reading, by studying whether the Visual Attention (VA) span, a component of visual attention that has previously been linked to both reading development and dyslexia, is improved in frequent players of AVGs. Thirty-six French fluent adult readers, matched on chronological age and text reading proficiency, composed two groups: frequent AVG players and non-players. Participants performed behavioural tasks measuring the VA span, and a challenging reading task (reading of briefly presented pseudo-words). AVG players performed better on both tasks, and performance on the two tasks was correlated. These results further support the transfer of the attentional benefits of playing AVGs to reading, and indicate that the VA span could be a core component mediating this transfer. The correlation between VA span and pseudo-word reading also supports the involvement of the VA span even in adult reading. Future studies could combine VA span training with defining features of AVGs, in order to build a new generation of remediation software.

  12. An analysis of mathematical connection ability based on student learning style on visualization auditory kinesthetic (VAK) learning model with self-assessment

    Science.gov (United States)

    Apipah, S.; Kartono; Isnarto

    2018-03-01

    This research aims to analyze the quality of VAK learning with self-assessment with respect to students' mathematical connection ability, and to analyze students’ mathematical connection ability by learning style within the VAK learning model with self-assessment. The research applies a mixed-methods approach with a concurrent embedded design. The subjects are grade VIII students of State Junior High School 9 Semarang with visual, auditory, and kinesthetic learning styles. Learning-style data are collected with questionnaires, mathematical connection ability data with tests, and self-assessment data with assessment sheets. The quality of learning is assessed qualitatively across the planning, implementation, and assessment stages. The results of the mathematical connection ability test are analyzed quantitatively with a mean test, a mastery (completeness) test, a test of mean differences, and a test of differences in proportions. The results show that the VAK learning model produces well-qualified learning from both the qualitative and quantitative perspectives. Students with a visual learning style show the highest mathematical connection ability, students with a kinesthetic learning style show average mathematical connection ability, and students with an auditory learning style show the lowest mathematical connection ability.

  13. Measuring and Modeling Shared Visual Attention

    Science.gov (United States)

    Mulligan, Jeffrey B.; Gontar, Patrick

    2016-01-01

    Multi-person teams are sometimes responsible for critical tasks, such as flying an airliner. Here we present a method using gaze tracking data to assess shared visual attention, a term we use to describe the situation where team members are attending to a common set of elements in the environment. Gaze data are quantized with respect to a set of N areas of interest (AOIs); these are then used to construct a time series of N dimensional vectors, with each vector component representing one of the AOIs, all set to 0 except for the component corresponding to the currently fixated AOI, which is set to 1. The resulting sequence of vectors can be averaged in time, with the result that each vector component represents the proportion of time that the corresponding AOI was fixated within the given time interval. We present two methods for comparing sequences of this sort, one based on computing the time-varying correlation of the averaged vectors, and another based on a chi-square test testing the hypothesis that the observed gaze proportions are drawn from identical probability distributions. We have evaluated the method using synthetic data sets, in which the behavior was modeled as a series of "activities," each of which was modeled as a first-order Markov process. By tabulating distributions for pairs of identical and disparate activities, we are able to perform a receiver operating characteristic (ROC) analysis, allowing us to choose appropriate criteria and estimate error rates. We have applied the methods to data from airline crews, collected in a high-fidelity flight simulator (Haslbeck, Gontar & Schubert, 2014). We conclude by considering the problem of automatic (blind) discovery of activities, using methods developed for text analysis.
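    The AOI-vector construction and the two comparison measures described above can be sketched directly; the gaze sequences below are synthetic and the window length is arbitrary.

      import numpy as np
      from scipy.stats import chi2_contingency

      rng = np.random.default_rng(11)
      n_aois, n_samples, window = 5, 600, 100

      # Synthetic gaze streams for two crew members: one fixated AOI per time sample.
      gaze_a = rng.integers(0, n_aois, size=n_samples)
      gaze_b = rng.integers(0, n_aois, size=n_samples)

      def aoi_proportions(gaze):
          """One-hot encode the fixated AOI per sample, then average within time windows."""
          one_hot = np.eye(n_aois)[gaze]
          return one_hot.reshape(-1, window, n_aois).mean(axis=1)

      prop_a, prop_b = aoi_proportions(gaze_a), aoi_proportions(gaze_b)

      # Time-varying correlation of the windowed AOI proportions.
      corr = [np.corrcoef(a, b)[0, 1] for a, b in zip(prop_a, prop_b)]
      print("per-window correlations:", np.round(corr, 2))

      # Chi-square test: are the two observers' AOI counts drawn from the same distribution?
      counts = np.stack([np.bincount(gaze_a, minlength=n_aois),
                         np.bincount(gaze_b, minlength=n_aois)])
      chi2, p, dof, _ = chi2_contingency(counts)
      print(f"chi2 = {chi2:.2f}, p = {p:.3f}")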

  14. The Effect of Visual Representation Style in Problem-Solving : A Perspective from Cognitive Processes

    NARCIS (Netherlands)

    Nyamsuren, Enkhbold; Taatgen, Niels A.

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a

  15. A Mouse Model of Visual Perceptual Learning Reveals Alterations in Neuronal Coding and Dendritic Spine Density in the Visual Cortex

    OpenAIRE

    Wang, Yan; Wu, Wei; Zhang, Xian; Hu, Xu; Li, Yue; Lou, Shihao; Ma, Xiao; An, Xu; Liu, Hui; Peng, Jing; Ma, Danyi; Zhou, Yifeng; Yang, Yupeng

    2016-01-01

    Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and p...

  16. Real-Time Agent-Based Modeling Simulation with in-situ Visualization of Complex Biological Systems: A Case Study on Vocal Fold Inflammation and Healing.

    Science.gov (United States)

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2016-05-01

    We present an efficient and scalable scheme for implementing agent-based modeling (ABM) simulation with In Situ visualization of large complex systems on heterogeneous computing platforms. The scheme is designed to make optimal use of the resources available on a heterogeneous platform consisting of a multicore CPU and a GPU, resulting in minimal to no resource idle time. Furthermore, the scheme was implemented under a client-server paradigm that enables remote users to visualize and analyze simulation data as it is being generated at each time step of the model. Performance of a simulation case study of vocal fold inflammation and wound healing with 3.8 million agents shows 35× and 7× speedup in execution time over single-core and multi-core CPU respectively. Each iteration of the model took less than 200 ms to simulate, visualize and send the results to the client. This enables users to monitor the simulation in real-time and modify its course as needed.
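    The referenced scheme couples millions of agents with GPU-based in-situ rendering; the following is only a generic, serial agent-based-modeling sketch (random-walking agents switching between two states on a grid), with all parameters invented, to show the shape of a per-iteration simulate-then-report loop.

      import numpy as np

      rng = np.random.default_rng(5)
      grid, n_agents, n_steps = 100, 5000, 10
      heal_prob = 0.05                              # arbitrary transition probability

      pos = rng.integers(0, grid, size=(n_agents, 2))
      inflamed = np.ones(n_agents, dtype=bool)      # start all agents in the "inflamed" state

      for step in range(n_steps):
          # Each agent takes one random step on the grid (with wrap-around).
          pos = (pos + rng.integers(-1, 2, size=(n_agents, 2))) % grid
          # Some inflamed agents switch to the "healed" state.
          inflamed &= rng.random(n_agents) >= heal_prob
          # In-situ reporting hook: here we just print the per-step summary that a
          # remote client/visualizer would receive each iteration.
          print(f"step {step}: inflamed agents = {int(inflamed.sum())}")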

  17. 3D Modelling and Visualization Based on the Unity Game Engine - Advantages and Challenges

    Science.gov (United States)

    Buyuksalih, I.; Bayburt, S.; Buyuksalih, G.; Baskaraca, A. P.; Karim, H.; Rahman, A. A.

    2017-11-01

    3D city modelling is increasingly popular and is becoming a valuable tool for managing big cities. Urban and energy planning, landscape, noise and sewage modelling, underground mapping, and navigation are among the applications and fields that depend on 3D modelling for effective operation. Several research areas and implementation projects have been carried out to provide the most reliable 3D data format for sharing and functionality, as well as platforms for visualization and analysis. For instance, the BIMTAS company has recently completed a project to estimate potential solar energy on 3D buildings for the whole of Istanbul and is now focusing on 3D underground utility mapping for a pilot case study. The research and implementation standard in the 3D city model domain (the 3D data sharing and visualization schema) is the CityGML schema version 2.0. However, there are some limitations and issues in the implementation phase for large datasets. Most of the limitations were due to the visualization, database integration, and analysis platform (the Unity3D game engine), as highlighted in this paper.

  18. Visual imagery and the user model applied to fuel handling at EBR-II

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S.A.

    1995-06-01

    The material presented in this paper is based on two studies involving visual display designs and the user's perspective model of a system. The studies involved a methodology known as Neuro-Linguistic Programming (NLP) and its use in expanding design choices, which included the "comfort parameters" and "perspective reality" of the user's model of the world. In developing visual displays for the EBR-II fuel handling system, the focus would be to first incorporate the comfort parameters that overlap across the representational systems (visual, auditory, and kinesthetic), then incorporate the comfort parameters of the most prominent group of the population, and last, blend in the comfort parameters of the other two representational systems. The focus of this informal study was to use the techniques of meta-modeling and synesthesia to develop a virtual environment that closely resembled the operator's perspective of the fuel handling system of Argonne's Experimental Breeder Reactor-II. An informal study was conducted using NLP as the behavioral model in a virtual reality (VR) setting.

  19. KENO3D visualization tool for KENO V.a geometry models

    International Nuclear Information System (INIS)

    Bowman, S.M.; Horwedel, J.E.

    1999-01-01

    The standardized computer analyses for licensing evaluations (SCALE) computer software system developed at Oak Ridge National Laboratory (ORNL) is widely used and accepted around the world for criticality safety analyses. SCALE includes the well-known KENO V.a three-dimensional Monte Carlo criticality computer code. Criticality safety analyses often require detailed modeling of complex geometries. Checking the accuracy of these models can be enhanced by effective visualization tools. To address this need, ORNL has recently developed a powerful state-of-the-art visualization tool called KENO3D that enables KENO V.a users to interactively display their three-dimensional geometry models. The interactive options include the following: (1) having shaded or wireframe images; (2) showing standard views, such as top view, side view, front view, and isometric three-dimensional view; (3) rotating the model; (4) zooming in on selected locations; (5) selecting parts of the model to display; (6) editing colors and displaying legends; (7) displaying properties of any unit in the model; (8) creating cutaway views; (9) removing units from the model; and (10) printing the image or saving the image to common graphics formats.

  20. Using Interactive Visualization to Analyze Solid Earth Data and Geodynamics Models

    Science.gov (United States)

    Kellogg, L. H.; Kreylos, O.; Billen, M. I.; Hamann, B.; Jadamec, M. A.; Rundle, J. B.; van Aalsburg, J.; Yikilmaz, M. B.

    2008-12-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. Major projects such as EarthScope and GeoEarthScope are producing the data needed to characterize the structure and kinematics of Earth's surface and interior at unprecedented resolution. At the same time, high-performance computing enables high-precision and fine- detail simulation of geodynamics processes, complementing the observational data. To facilitate interpretation and analysis of these datasets, to evaluate models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. VR has traditionally been used primarily as a presentation tool allowing active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for accelerated scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. Our approach to VR takes advantage of the specialized skills of geoscientists who are trained to interpret geological and geophysical data generated from field observations. Interactive tools allow the scientist to explore and interpret geodynamic models, tomographic models, and topographic observations, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulations or field observations. The use of VR technology enables us to improve our interpretation of crust and mantle structure and of geodynamical processes. Mapping tools based on computer visualization allow virtual "field studies" in inaccessible regions, and an interactive tool allows us to construct digital fault models for use in numerical models. Using the interactive tools on a high-end platform such as an immersive virtual reality

  1. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.

    Science.gov (United States)

    Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun

    2016-01-01

    Humans can easily classify different kinds of objects, whereas it is quite difficult for computers. As a challenging open problem, object classification has been receiving extensive interest, with broad prospects. Inspired by neuroscience, the deep learning concept was proposed. The convolutional neural network (CNN), as one of the methods of deep learning, can be used to solve the classification problem. However, most deep learning methods, including CNN, ignore the human visual information-processing mechanism that operates when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we put forward a new classification method that combines a visual attention model and a CNN. Firstly, we use the visual attention model to simulate the human visual selection mechanism. Secondly, we use the CNN to simulate how humans select features, extracting the local features of the selected areas. Finally, our classification method depends not only on those local features but also adds human semantic features to classify objects. Our classification method has clear advantages in terms of biological plausibility. Experimental results demonstrated that our method significantly improves classification efficiency.
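
    The combination described above can be outlined with a short sketch. The record does not give the authors' attention model or network in code, so the example below uses a generic spectral-residual saliency map (an assumption, not the authors' model) to pick the most salient patch, which would then be handed to any CNN classifier.

        # Hedged sketch: spectral-residual saliency (an assumed stand-in for the
        # paper's attention model) selects the region that a CNN would then classify.
        import numpy as np
        from scipy.ndimage import convolve, gaussian_filter

        def spectral_residual_saliency(gray):
            """Return a normalized saliency map for a 2-D grayscale image."""
            f = np.fft.fft2(gray)
            log_amp, phase = np.log(np.abs(f) + 1e-8), np.angle(f)
            residual = log_amp - convolve(log_amp, np.ones((3, 3)) / 9.0, mode='wrap')
            sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
            sal = gaussian_filter(sal, sigma=3)
            return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

        def most_salient_patch(gray, size=64):
            """Crop a size x size patch centred on the saliency peak (the 'attended' region)."""
            sal = spectral_residual_saliency(gray)
            y, x = np.unravel_index(np.argmax(sal), sal.shape)
            y0 = int(np.clip(y - size // 2, 0, gray.shape[0] - size))
            x0 = int(np.clip(x - size // 2, 0, gray.shape[1] - size))
            return gray[y0:y0 + size, x0:x0 + size]   # this patch would be fed to the CNN

        image = np.random.rand(256, 256)               # stand-in for a real photograph
        print(most_salient_patch(image).shape)         # (64, 64)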

  2. The impact of visual layout factors on performance in Web pages: a cross-language study.

    Science.gov (United States)

    Parush, Avi; Shwarts, Yonit; Shtub, Avy; Chandra, M Jeya

    2005-01-01

    Visual layout has a strong impact on performance and is a critical factor in the design of graphical user interfaces (GUIs) and Web pages. Many design guidelines employed in Web page design were inherited from human performance literature and GUI design studies and practices. However, few studies have investigated the more specific patterns of performance with Web pages that may reflect some differences between Web page and GUI design. We investigated interactions among four visual layout factors in Web page design (quantity of links, alignment, grouping indications, and density) in two experiments: one with pages in Hebrew, entailing right-to-left reading, and the other with English pages, entailing left-to-right reading. Some performance patterns (measured by search times and eye movements) were similar between languages. Performance was particularly poor in pages with many links and variable densities, but it improved with the presence of uniform density. Alignment was not shown to be a performance-enhancing factor. The findings are discussed in terms of the similarities and differences in the impact of layout factors between GUIs and Web pages. Actual or potential applications of this research include specific guidelines for Web page design.

  3. The contributions of visual and central attention to visual working memory.

    Science.gov (United States)

    Souza, Alessandra S; Oberauer, Klaus

    2017-10-01

    We investigated the role of two kinds of attention (visual and central attention) for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention (visual or central) was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.

  4. Software complex for geophysical data visualization

    Science.gov (United States)

    Kryukov, Ilya A.; Tyugin, Dmitry Y.; Kurkin, Andrey A.; Kurkina, Oxana E.

    2013-04-01

    The effectiveness of current research in geophysics is largely determined by the degree of implementation of data processing and visualization procedures that use modern information technology. Realistic and informative visualization of the results of three-dimensional modeling of geophysical processes contributes significantly to the naturalness of physical modeling and a detailed view of the phenomena. The main difficulty in this case is to interpret the results of the calculations: it is necessary to be able to observe the various parameters of the three-dimensional models, build sections on different planes to evaluate certain characteristics, and make a rapid assessment. Programs for interpretation and visualization of simulations are used all over the world, for example, software systems such as ParaView, Golden Software Surfer, Voxler, Flow Vision and others. However, it is not always possible to solve the problem of visualization with the help of a single software package. Preprocessing, data transfer between the packages and setting up a uniform visualization style can turn into long and routine work. In addition, special display modes are sometimes required for specific data, and existing products tend to offer more common features and are not always fully applicable to certain special cases. Rendering of dynamic data may require scripting languages, which does not relieve the user from writing code. Therefore, the task was to develop a new and original software complex for the visualization of simulation results. Let us briefly list the primary features that were developed. The software complex is a graphical application with a convenient and simple user interface that displays the results of the simulation. The complex is also able to interactively manage the image, resize the image without loss of quality, apply a two-dimensional or three-dimensional regular grid, set the coordinate axes with data labels, and produce slices of the data.

  5. Urinary oxytocin positively correlates with performance in facial visual search in unmarried males, without specific reaction to infant face.

    Science.gov (United States)

    Saito, Atsuko; Hamada, Hiroki; Kikusui, Takefumi; Mogi, Kazutaka; Nagasawa, Miho; Mitsui, Shohei; Higuchi, Takashi; Hasegawa, Toshikazu; Hiraki, Kazuo

    2014-01-01

    The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggests that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers.

  6. Connection Between the Originality Level of Pupils' Visual Expression in Visual Arts Lessons and Their Level of Tolerance for Diversity

    Directory of Open Access Journals (Sweden)

    Miroslav Huzjak

    2017-09-01

    The aim of this research was to examine the connection between the originality level in children's expression during visual art lessons and their level of tolerance for difference. The participants comprised primary school pupils from grades one, two and three, a total of 110. It was confirmed that there was a statistically significant difference between the pupils who had an introduction to the lesson using the didactic model of visual problem-based teaching and those who had not. Learning and setting art terminology, the analysis of motifs and explanation, as well as demonstration of art techniques resulted in a higher level of creativity in visual performance, as well as a higher level of tolerance. It can be concluded that, with the proper choice of didactic models in teaching the visual arts, a wide range of pupil attitudes and beliefs can be improved.

  7. Conceptual and visual features contribute to visual memory for natural images.

    Directory of Open Access Journals (Sweden)

    Gesche M Huebner

    We examined the role of conceptual and visual similarity in a memory task for natural images. The important novelty of our approach was that visual similarity was determined using an algorithm [1] instead of being judged subjectively. This similarity index takes colours and spatial frequencies into account. For each target, four distractors were selected that were (1) conceptually and visually similar, (2) only conceptually similar, (3) only visually similar, or (4) neither conceptually nor visually similar to the target image. Participants viewed 219 images with the instruction to memorize them. Memory for a subset of these images was tested subsequently. In Experiment 1, participants performed a two-alternative forced choice recognition task and in Experiment 2, a yes/no recognition task. In Experiment 3, testing occurred after a delay of one week. We analyzed the distribution of errors depending on distractor type. Performance was lowest when the distractor image was conceptually and visually similar to the target image, indicating that both factors matter in such a memory task. After delayed testing, these differences disappeared. Overall performance was high, indicating a large-capacity, detailed visual long-term memory.
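
    The similarity algorithm cited as [1] is not reproduced in the record, so the sketch below is only a hypothetical index in the same spirit: it combines a colour-histogram distance with a spatial-frequency (FFT amplitude) distance. The weights, bin counts and normalization are arbitrary assumptions, not the published algorithm.

        # Hedged sketch of a colour + spatial-frequency similarity index (illustrative only).
        import numpy as np

        def visual_similarity(img_a, img_b, bins=8, w_color=0.5):
            """img_*: (H, W, 3) RGB arrays in [0, 1]; returns a similarity in [0, 1]."""
            def color_hist(img):
                h, _ = np.histogramdd(img.reshape(-1, 3), bins=bins, range=[(0, 1)] * 3)
                return h.ravel() / h.sum()
            def freq_profile(img):
                amp = np.abs(np.fft.fft2(img.mean(axis=2)))   # amplitude spectrum of luminance
                return amp.ravel() / amp.sum()
            d_color = 0.5 * np.abs(color_hist(img_a) - color_hist(img_b)).sum()   # in [0, 1]
            d_freq = 0.5 * np.abs(freq_profile(img_a) - freq_profile(img_b)).sum()
            return 1.0 - (w_color * d_color + (1 - w_color) * d_freq)

        a = np.random.rand(64, 64, 3)
        print(visual_similarity(a, a))                                  # 1.0 for identical images
        print(visual_similarity(a, np.random.rand(64, 64, 3)) < 1.0)    # True for a different image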

  8. An amalgamation of 3D city models in urban air quality modelling for improving visual impact analysis

    DEFF Research Database (Denmark)

    Ujang, U.; Anton, F.; Ariffin, A.

    2015-01-01

    ... is predominantly vehicular engines, the situation will become worse when pollutants are trapped between buildings, disperse inside the street canyon, and move vertically to create a recirculation vortex. Studying and visualizing the recirculation zone in 3D is conceivable by using 3D city models ... engineers and policy makers to design the street geometry (building height and width, green areas, pedestrian walks, road width, etc.).

  9. Rural–Urban Disparity in Students’ Academic Performance in Visual Arts Education

    Directory of Open Access Journals (Sweden)

    Nana Afia Amponsaa Opoku-Asare

    2015-12-01

    Rural–urban disparity in economic and social development in Ghana has led to disparities in educational resources and variations in students' achievement in different parts of the country. Nonetheless, senior high schools (SHSs) in rural and urban areas follow the same curriculum, and their students write the same West Africa Senior Secondary Certificate Examination (WASSCE), which qualifies them to access higher education in Ghana's public universities. Urban SHSs are also recognized nationwide as good schools whose students make it to university. Moreover, performance patterns with regard to admission of SHS graduates into university also vary between rural and urban schools; consequently, some parents do everything to get their children into urban SHSs, even consenting to placement in visual arts, a program deemed appropriate only for academically weak students. This study therefore adopted a qualitative-quantitative research approach, with interviews, observation, and questionnaire administration, to investigate the critical factors that affect the academic performance of SHS students, using those in visual arts as a case study. Findings from six public SHSs in Kumasi—two each in rural, peri-urban, and urban areas—revealed that urban schools perform better than rural and peri-urban schools because they attract and admit junior high school graduates with excellent Basic Education Certificate Examination (BECE) grades, and have better infrastructure, more qualified teachers, and prestigious names and character that motivate their students to do well. This suggests that bridging the rural–urban gap in educational resources could promote quality teaching and learning, and thereby raise academic achievement for SHS students in Ghana.

  10. Scientific Visualization & Modeling for Earth Systems Science Education

    Science.gov (United States)

    Chaudhury, S. Raj; Rodriguez, Waldo J.

    2003-01-01

    Providing research experiences for undergraduate students in Earth Systems Science (ESS) poses several challenges at smaller academic institutions that might lack dedicated resources for this area of study. This paper describes the development of an innovative model that involves students with majors in diverse scientific disciplines in authentic ESS research. In studying global climate change, experts typically use scientific visualization techniques applied to remote sensing data collected by satellites. In particular, many problems related to environmental phenomena can be quantitatively addressed by investigations based on datasets from scientific endeavours such as the Earth Radiation Budget Experiment (ERBE). Working with data products stored at NASA's Distributed Active Archive Centers, visualization software specifically designed for students, and an advanced, immersive Virtual Reality (VR) environment, students engage in guided research projects during a structured 6-week summer program. Over the 5-year span, this program has afforded the opportunity for students majoring in biology, chemistry, mathematics, computer science, physics, engineering and science education to work collaboratively in teams on research projects that emphasize the use of scientific visualization in studying the environment. Recently, a hands-on component has been added through science student partnerships with school teachers in data collection and reporting for the GLOBE Program (Global Learning and Observations to Benefit the Environment).

  11. Assessing Sexual Dicromatism: The Importance of Proper Parameterization in Tetrachromatic Visual Models.

    Directory of Open Access Journals (Sweden)

    Pierre-Paul Bitton

    Perceptual models of animal vision have greatly contributed to our understanding of animal-animal and plant-animal communication. The receptor-noise model of color contrasts has been central to this research as it quantifies the difference between two colors for any visual system of interest. However, if the properties of the visual system are unknown, assumptions regarding parameter values must be made, generally with unknown consequences. In this study, we conduct a sensitivity analysis of the receptor-noise model using avian visual system parameters to systematically investigate the influence of variation in light environment, photoreceptor sensitivities, photoreceptor densities, and light transmission properties of the ocular media and the oil droplets. We calculated the chromatic contrast of 15 plumage patches to quantify a dichromatism score for 70 species of Galliformes, a group of birds that display a wide range of sexual dimorphism. We found that the photoreceptor densities and the wavelength of maximum sensitivity of the short-wavelength-sensitive photoreceptor 1 (SWS1) can change dichromatism scores by 50% to 100%. In contrast, the light environment, transmission properties of the oil droplets, transmission properties of the ocular media, and the peak sensitivities of the cone photoreceptors had a smaller impact on the scores. By investigating the effect of varying two or more parameters simultaneously, we further demonstrate that improper parameterization could lead to differences between calculated and actual contrasts of more than 650%. Our findings demonstrate that improper parameterization of tetrachromatic visual models can have very large effects on measures of dichromatism scores, potentially leading to erroneous inferences. We urge more complete characterization of avian retinal properties and recommend that researchers either determine whether their species of interest possess an ultraviolet or near-ultraviolet sensitive SWS1
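
    As a concrete illustration of the kind of calculation whose parameters are being varied here, the sketch below gives one common formulation of receptor-noise chromatic contrast for a tetrachromatic eye, assuming Weber-fraction noise scaled by relative cone densities. The Weber fraction, densities and quantum catches are illustrative assumptions, not values from the study.

        # Hedged sketch of a receptor-noise (Vorobyev-Osorio style) chromatic contrast
        # for four cone classes; all numeric parameters below are assumptions.
        import numpy as np

        def chromatic_contrast(q_a, q_b, weber=0.1, densities=(1.0, 2.0, 2.0, 4.0)):
            """Return the chromatic contrast between two stimuli in just-noticeable differences."""
            q_a, q_b = np.asarray(q_a, float), np.asarray(q_b, float)
            df = np.log(q_a / q_b)                                     # receptor signal differences
            e = weber / np.sqrt(np.asarray(densities) / max(densities))  # noise per cone channel
            e1, e2, e3, e4 = e
            d1, d2, d3, d4 = df
            num = ((e1*e2)**2 * (d4-d3)**2 + (e1*e3)**2 * (d4-d2)**2 +
                   (e1*e4)**2 * (d3-d2)**2 + (e2*e3)**2 * (d4-d1)**2 +
                   (e2*e4)**2 * (d3-d1)**2 + (e3*e4)**2 * (d2-d1)**2)
            den = (e1*e2*e3)**2 + (e1*e2*e4)**2 + (e1*e3*e4)**2 + (e2*e3*e4)**2
            return np.sqrt(num / den)

        # Two hypothetical plumage patches: shifting only the SWS1 catch changes the score.
        print(chromatic_contrast([0.20, 0.40, 0.55, 0.70], [0.25, 0.40, 0.55, 0.70]))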

  12. Fluorescence Imaging and Streamline Visualization of Hypersonic Flow over Rapid Prototype Wind-Tunnel Models

    Science.gov (United States)

    Danehy, Paul M.; Alderfer, David W.; Inman, Jennifer A.; Berger, Karen T.; Buck, Gregory M.; Schwartz, Richard J.

    2008-01-01

    Reentry models for use in hypersonic wind tunnel tests were fabricated using a stereolithography apparatus. These models were produced in one day or less, which is a significant time savings compared to the manufacture of ceramic or metal models. The models were tested in the NASA Langley Research Center 31-Inch Mach 10 Air Tunnel. Only a few of the models survived repeated tests in the tunnel, and several failure modes of the models were identified. Planar laser-induced fluorescence (PLIF) of nitric oxide (NO) was used to visualize the flowfields in the wakes of these models. Pure NO was either seeded through tubes plumbed into the model or via a tube attached to the strut holding the model, which provided localized addition of NO into the model's wake through a porous metal cylinder attached to the end of the tube. Models included several 2-inch diameter Inflatable Reentry Vehicle Experiment (IRVE) models and 5-inch diameter Crew Exploration Vehicle (CEV) models. Various model configurations and NO seeding methods were used, including a new streamwise visualization method based on PLIF. Virtual Diagnostics Interface (ViDI) technology, developed at NASA Langley Research Center, was used to visualize the data sets in post processing. The use of calibration "dotcards" was investigated to correct for camera perspective and lens distortions in the PLIF images.

  13. Influencias del desarrollo de las habilidades visuales en el rendimiento deportivo en deportistas élite de raquetball Effect of the development of visual skills on the performance of racketball elite athletes

    Directory of Open Access Journals (Sweden)

    Agustín Fernández Sánchez

    2007-12-01

    ... and development of visual skills are related to sport performance. The objectives of this paper were to determine how the development of visual skills affects the sports performance of racketball players, to evaluate the development of these skills, to define the most developed ones, to compare them by sex, and to associate them with sports achievements and with ocular and/or systemic pathologies. Binocularity was the most developed skill. It was concluded that a large number of racketball athletes presented with developed visual skills, particularly binocularity, visual acuity (static and dynamic), and visualization. The most developed skills in females were visual acuity (static and dynamic), accommodation-convergence, binocularity, and visualization, whereas males showed binocularity, visual acuity (static and dynamic), and visualization. In the case of the female racketball team, the highest development of visual skills was not associated with the best sports achievements, but in the case of the male team this association did exist. No ocular or systemic pathologies related to the studied visual skills were found. It was recommended to perform this study in all athletes at the beginning of their sports life, and also to set up a specialized ophthalmologic service for vision control, visual training to potentiate the less developed skills, and regular check-ups.

  14. Impaired Driving Performance as Evidence of a Magnocellular Deficit in Dyslexia and Visual Stress.

    Science.gov (United States)

    Fisher, Carri; Chekaluk, Eugene; Irwin, Julia

    2015-11-01

    High comorbidity and an overlap in symptomology have been demonstrated between dyslexia and visual stress. Several researchers have hypothesized an underlying or causal influence that may account for this relationship. The magnocellular theory of dyslexia proposes that a deficit in visuo-temporal processing can explain symptomology for both disorders. If the magnocellular theory holds true, individuals who experience symptomology for these disorders should show impairment on a visuo-temporal task, such as driving. Eighteen male participants formed the sample for this study. Self-report measures assessed dyslexia and visual stress symptomology as well as participant IQ. Participants completed a drive simulation in which errors in response to road signs were measured. Bivariate correlations revealed significant associations between scores on measures of dyslexia and visual stress. Results also demonstrated that self-reported symptomology predicts magnocellular impairment as measured by performance on a driving task. Results from this study suggest that a magnocellular deficit offers a likely explanation for individuals who report high symptomology across both conditions. While conclusions about the impact of these disorders on driving performance should not be derived from this research alone, this study provides a platform for the development of future research, utilizing a clinical population and on-road driving assessment techniques. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Minimal effects of visual memory training on auditory performance of adult cochlear implant users.

    Science.gov (United States)

    Oba, Sandra I; Galvin, John J; Fu, Qian-Jie

    2013-01-01

    Auditory training has been shown to significantly improve cochlear implant (CI) users' speech and music perception. However, it is unclear whether posttraining gains in performance were due to improved auditory perception or to generally improved attention, memory, and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory, were assessed in 10 CI users before, during, and after training with a nonauditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Posttraining gains were much smaller with the nonauditory VDS training than observed in previous auditory training studies with CI users. The results suggest that posttraining gains observed in previous studies were not solely attributable to improved attention or memory and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception.

  16. Super-resolution pupil filtering for visual performance enhancement using adaptive optics

    Science.gov (United States)

    Zhao, Lina; Dai, Yun; Zhao, Junlei; Zhou, Xiaojun

    2018-05-01

    Ocular aberration correction can significantly improve visual function of the human eye. However, even under ideal aberration correction conditions, pupil diffraction restricts the resolution of retinal images. Pupil filtering is a simple super-resolution (SR) method that can overcome this diffraction barrier. In this study, a 145-element piezoelectric deformable mirror was used as a pupil phase filter because of its programmability and high fitting accuracy. Continuous phase-only filters were designed based on Zernike polynomial series and fitted through closed-loop adaptive optics. SR results were validated using double-pass point spread function images. Contrast sensitivity was further assessed to verify the SR effect on visual function. An F-test was conducted for nested models to statistically compare the different contrast sensitivity functions (CSFs). The results indicated that CSFs with the proposed SR filter were significantly higher than with diffraction-limited correction alone, supporting the use of SR pupil filtering in vision optical correction of the human eye.
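
    The record does not give the filter design itself; the sketch below only illustrates how the point spread function of a candidate phase-only Zernike filter can be evaluated numerically with Fourier optics. The choice of a single spherical-aberration term and its coefficient are arbitrary assumptions, and whether a given Zernike combination actually narrows the PSF core has to be checked case by case.

        # Hedged sketch: PSF of a phase-only pupil filter built from one Zernike term.
        import numpy as np

        n = 256
        y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]          # unit-pupil coordinates
        rho = np.hypot(x, y)
        inside = rho <= 1.0                            # circular pupil support

        def psf(phase):
            """Far-field intensity of a phase-only filter over the pupil (Fourier optics)."""
            pupil = inside * np.exp(1j * phase)
            return np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2

        z_spherical = np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1)   # Zernike Z(4,0), Noll-normalized
        flat = psf(np.zeros_like(rho))                 # diffraction-limited reference
        filtered = psf(0.5 * 2 * np.pi * z_spherical)  # 0.5-wave coefficient (assumed value)

        centre = n // 2
        print(flat[centre, centre] > filtered[centre, centre])     # on-axis (Strehl) intensity drops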

  17. Hierarchical and Matrix Structures in a Large Organizational Email Network: Visualization and Modeling Approaches

    OpenAIRE

    Sims, Benjamin H.; Sinitsyn, Nikolai; Eidenbenz, Stephan J.

    2014-01-01

    This paper presents findings from a study of the email network of a large scientific research organization, focusing on methods for visualizing and modeling organizational hierarchies within large, complex network datasets. In the first part of the paper, we find that visualization and interpretation of complex organizational network data is facilitated by integration of network data with information on formal organizational divisions and levels. By aggregating and visualizing email traffic b...

  18. VISUAL3D - An EIT network on visualization of geomodels

    Science.gov (United States)

    Bauer, Tobias

    2017-04-01

    When it comes to interpreting data and understanding deep geological structures and bodies at different scales, modelling tools and modelling experience are vital for deep exploration. Geomodelling provides a platform for integration of different types of data, including new kinds of information (e.g., from new, improved measuring methods). EIT Raw Materials, initiated by the EIT (European Institute of Innovation and Technology) and funded by the European Commission, is the largest and strongest consortium in the raw materials sector worldwide. The VISUAL3D network of infrastructure is an initiative by EIT Raw Materials and aims at bringing together partners with 3D-4D visualisation infrastructure and 3D-4D modelling experience. The recently formed network collaboration interlinks hardware, software and expert knowledge in modelling, visualization and output. A special focus is the linking of research, education and industry, the integration of multi-disciplinary data, and the visualization of these data in three and four dimensions. By aiding network collaborations we aim at improving the combination of geomodels with differing file formats and data characteristics. This will create increased competency in modelling and visualization and the ability to interchange and communicate models more easily. By combining knowledge and experience in geomodelling with expertise in Virtual Reality visualization, partners of EIT Raw Materials as well as external parties will be able to visualize, analyze and validate their geomodels in immersive VR environments. The current network combines partners from universities, research institutes, geological surveys and industry with a strong background in geological 3D-modelling and 3D visualization and comprises: Luleå University of Technology, Geological Survey of Finland, Geological Survey of Denmark and Greenland, TUBA Freiberg, Uppsala University, Geological Survey of France, RWTH Aachen, DMT, KGHM Cuprum, Boliden, Montan

  19. Visual Data Mining of Robot Performance Data, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to design and develop VDM/RP, a visual data mining system that will enable analysts to acquire, store, query, analyze, and visualize recent and historical...

  20. Invariant visual object and face recognition: neural and computational bases, and a model, VisNet

    Directory of Open Access Journals (Sweden)

    Edmund T eRolls

    2012-06-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning, based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world, is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Spatial Transformation learning, which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The model has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.

  1. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    Science.gov (United States)

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
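
    For readers unfamiliar with the measure, the sketch below shows in outline how an SSEP amplitude can be quantified: trial-averaged EEG is Fourier-transformed and the amplitude at the checkerboard flicker frequency is read off. The sampling rate, flicker frequency and simulated signal are assumptions for illustration only, not the study's recording parameters.

        # Hedged sketch: quantify a steady-state response at the stimulus flicker frequency.
        import numpy as np

        def ssep_amplitude(eeg, fs, tag_freq):
            """eeg: (n_trials, n_samples) array; returns the amplitude at tag_freq in the trial average."""
            avg = eeg.mean(axis=0)                       # averaging suppresses non-phase-locked noise
            spectrum = np.fft.rfft(avg) / len(avg) * 2   # single-sided amplitude spectrum
            freqs = np.fft.rfftfreq(len(avg), d=1.0 / fs)
            idx = np.argmin(np.abs(freqs - tag_freq))    # bin closest to the flicker frequency
            return np.abs(spectrum[idx])

        fs, flicker = 500.0, 12.0                        # assumed: 500 Hz sampling, 12 Hz flicker
        t = np.arange(0, 2.0, 1.0 / fs)
        trials = 0.5 * np.sin(2 * np.pi * flicker * t) + np.random.randn(40, t.size)
        print(round(float(ssep_amplitude(trials, fs, flicker)), 2))   # close to 0.5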

  2. Visualization of landscape changes and threatening environmental processes using a digital landscape model

    International Nuclear Information System (INIS)

    Svatonova, H; Rybansky, M

    2014-01-01

    Visualizations supported by new geoinformation technologies prove to be appropriate tools for presenting and sharing research results with professionals and the general public. The object of the research was to evaluate the benefits of visualizations for non-expert users. The subjects of evaluation were: the success rate of interpreting the information; the forming of a realistic idea of an unknown landscape; and the preference of the users when selecting the appropriate visualization for the purpose of solving the task. The tasks concerned: assessing the current situation and changes of the landscape; assessing erosion in the landscape; and the ways of visualizing them. To prepare and process the landscape visualizations, it was necessary to select areas that allow tracking of land use changes and representative environmental processes. Then the digital landscape model was created and a number of visualizations were generated. The results of visualization testing show that the users prefer maps to orthophotos, that they are able to formulate correct statements concerning the landscape with the help of visualizations, and that simulated fly-throughs represent a very suitable tool supporting the formation of realistic ideas about the landscape.

  3. Visual field

    Science.gov (United States)

    ... your visual field. How the Test is Performed: Confrontation visual field exam. This is a quick and ...

  4. Neural correlates of olfactory and visual memory performance in 3D-simulated mazes after intranasal insulin application.

    Science.gov (United States)

    Brünner, Yvonne F; Rodriguez-Raecke, Rea; Mutic, Smiljana; Benedict, Christian; Freiherr, Jessica

    2016-10-01

    This fMRI study intended to establish 3D-simulated mazes with olfactory and visual cues and to examine the effect of intranasally applied insulin on memory performance in healthy subjects. The effect of insulin on hippocampus-dependent brain activation was explored using a double-blind, placebo-controlled design. Following intranasal administration of either insulin (40 IU) or placebo, 16 male subjects participated in two experimental MRI sessions with olfactory and visual mazes. Each maze included two separate runs. The first was an encoding maze during which subjects learned eight olfactory or eight visual cues at different target locations. The second was a recall maze during which subjects were asked to remember the target cues at spatial locations. For the eleven subjects included in the fMRI analysis we were able to validate brain activation for odor perception and visuospatial tasks. However, we did not observe an enhancement of declarative memory performance in our behavioral data or of hippocampal activity in response to insulin application in the fMRI analysis. It is therefore possible that intranasal insulin application is sensitive to methodological variations, e.g., the timing of task execution and the dose applied. Findings from this study suggest that our method of 3D-simulated mazes is feasible for studying neural correlates of olfactory and visual memory performance. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Encoding color information for visual tracking: Algorithms and benchmark.

    Science.gov (United States)

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
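
    As a toy illustration of what "encoding a chromatic model" into a tracker can mean in practice, the sketch below converts an RGB patch into normalized rg chromaticity plus intensity before a simple template-matching score is computed. The specific color models and the sixteen trackers benchmarked in the paper are not reproduced here; the color space, feature choice and matching score are assumptions.

        # Hedged sketch: chromatic feature encoding feeding a toy template-matching tracker step.
        import numpy as np

        def rg_intensity(frame):
            """frame: (H, W, 3) RGB in [0, 1] -> (H, W, 3) [r, g, intensity] features."""
            s = frame.sum(axis=2, keepdims=True) + 1e-8
            rg = frame[..., :2] / s                      # illumination-robust chromaticities
            intensity = frame.mean(axis=2, keepdims=True)
            return np.concatenate([rg, intensity], axis=2)

        def match_score(template, candidate):
            """Negative sum-of-squared-differences score in the chromatic feature space."""
            return -np.sum((rg_intensity(template) - rg_intensity(candidate)) ** 2)

        patch_t = np.random.rand(32, 32, 3)              # appearance template from frame t
        patch_t1 = np.clip(patch_t + 0.05 * np.random.randn(32, 32, 3), 0, 1)   # frame t+1 candidate
        print(match_score(patch_t, patch_t1) > match_score(patch_t, np.random.rand(32, 32, 3)))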

  6. Urinary oxytocin positively correlates with performance in facial visual search in unmarried males, without specific reaction to infant face

    Directory of Open Access Journals (Sweden)

    Atsuko eSaito

    2014-07-01

    The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggests that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers.

  7. Uncertainty Visualization Using Copula-Based Analysis in Mixed Distribution Models.

    Science.gov (United States)

    Hazarika, Subhashis; Biswas, Ayan; Shen, Han-Wei

    2018-01-01

    Distributions are often used to model uncertainty in many scientific datasets. To preserve the correlation among the spatially sampled grid locations in the dataset, various standard multivariate distribution models have been proposed in the visualization literature. These models treat each grid location as a univariate random variable which models the uncertainty at that location. Standard multivariate distributions (both parametric and nonparametric) assume that all the univariate marginals are of the same type/family of distribution. But in reality, different grid locations show different statistical behavior which may not be modeled best by the same type of distribution. In this paper, we propose a new multivariate uncertainty modeling strategy to address the needs of uncertainty modeling in scientific datasets. Our proposed method is based on a statistically sound multivariate technique called Copula, which makes it possible to separate the process of estimating the univariate marginals and the process of modeling dependency, unlike the standard multivariate distributions. The modeling flexibility offered by our proposed method makes it possible to design distribution fields which can have different types of distribution (Gaussian, histogram, KDE, etc.) at the grid locations, while maintaining the correlation structure at the same time. Depending on the results of various standard statistical tests, we can choose an optimal distribution representation at each location, resulting in more cost-efficient modeling without significantly sacrificing analysis quality. To demonstrate the efficacy of our proposed modeling strategy, we extract and visualize uncertain features like isocontours and vortices in various real-world datasets. We also study various model-selection criteria to help users in the task of univariate model selection.
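
    The key idea (separating the marginals from the dependence structure) can be shown with a small sketch using a Gaussian copula, which is only one possible copula and not necessarily the one used in the paper; the marginals, correlation value and sample size below are arbitrary assumptions.

        # Hedged sketch: Gaussian copula linking two grid locations with different marginals.
        import numpy as np
        from scipy import stats

        def sample_gaussian_copula(n, rho, marginals):
            """Draw n samples whose dependence is Gaussian with correlation rho and whose
            i-th column follows marginals[i] (any frozen scipy.stats distribution)."""
            cov = np.array([[1.0, rho], [rho, 1.0]])
            z = np.random.multivariate_normal(np.zeros(2), cov, size=n)
            u = stats.norm.cdf(z)                        # uniform scores carry the dependence
            return np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(marginals)])

        samples = sample_gaussian_copula(
            n=5000, rho=0.8,
            marginals=[stats.norm(loc=10, scale=2),      # location A: Gaussian uncertainty
                       stats.expon(scale=3)])            # location B: skewed uncertainty
        print(np.corrcoef(samples.T)[0, 1])              # dependence is preserved (roughly 0.7-0.8)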

  8. Three-dimensional visual feature representation in the primary visual cortex.

    Science.gov (United States)

    Tanaka, Shigeru; Moon, Chan-Hong; Fukuda, Mitsuhiro; Kim, Seong-Gi

    2011-12-01

    In the cat primary visual cortex, it is accepted that neurons optimally responding to similar stimulus orientations are clustered in a column extending from the superficial to deep layers. The cerebral cortex is, however, folded inside a skull, which makes gyri and fundi. The primary visual area of cats, area 17, is located on the fold of the cortex called the lateral gyrus. These facts raise the question of how to reconcile the tangential arrangement of the orientation columns with the curvature of the gyrus. In the present study, we show a possible configuration of feature representation in the visual cortex using a three-dimensional (3D) self-organization model. We took into account preferred orientation, preferred direction, ocular dominance and retinotopy, assuming isotropic interaction. We performed computer simulation only in the middle layer at the beginning and expanded the range of simulation gradually to other layers, which was found to be a unique method in the present model for obtaining orientation columns spanning all the layers in the flat cortex. Vertical columns of preferred orientations were found in the flat parts of the model cortex. On the other hand, in the curved parts, preferred orientations were represented in wedge-like columns rather than straight columns, and preferred directions were frequently reversed in the deeper layers. Singularities associated with orientation representation appeared as warped lines in the 3D model cortex. Direction reversal appeared on the sheets that were delimited by orientation-singularity lines. These structures emerged from the balance between periodic arrangements of preferred orientations and vertical alignment of the same orientations. Our theoretical predictions about orientation representation were confirmed by multi-slice, high-resolution functional MRI in the cat visual cortex. We obtained a close agreement between theoretical predictions and experimental observations. The present study throws a

  9. Visualization of protein folding funnels in lattice models.

    Directory of Open Access Journals (Sweden)

    Antonio B Oliveira

    Protein folding occurs in a very high-dimensional phase space with an exponentially large number of states, and according to the energy landscape theory it exhibits a topology resembling a funnel. In this statistical approach, the folding mechanism is unveiled by describing the local minima in an effective one-dimensional representation. Other approaches based on potential energy landscapes address the hierarchical structure of local energy minima through disconnectivity graphs. In this paper, we introduce a metric to describe the distance between any two conformations, which also allows us to go beyond the one-dimensional representation and visualize the folding funnel in 2D and 3D. In this way it is possible to assess the folding process in detail, e.g., by identifying the connectivity between conformations and establishing the paths to reach the native state, in addition to regions where trapping may occur. Unlike the disconnectivity-map method, which is based on the kinetic connections between states, our methodology is based on structural similarities inferred from the new metric. The method was developed for a 27-mer protein lattice model, folded into a 3×3×3 cube. Five sequences were studied and distinct funnels were generated in an analysis restricted to conformations from the transition state to the native configuration. Consistent with the expected results from the energy landscape theory, folding routes can be visualized to probe different regions of the phase space, as well as to determine the difficulty in folding of the distinct sequences. Changes in the landscape due to mutations were visualized, with the comparison between wild-type and mutated local minima in a single map, which serves to identify different trapping regions. The extension of this approach to more realistic models and its use in combination with other approaches are discussed.
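
    The paper defines its own conformational metric, which the record does not reproduce; the sketch below therefore uses an assumed contact-map overlap distance only to illustrate how pairwise distances between lattice conformations could be computed before projecting the landscape into 2D or 3D.

        # Hedged sketch: structural distance between lattice conformations via contact maps.
        import numpy as np

        def contact_map(coords, cutoff=1.0):
            """coords: (N, 3) integer lattice positions -> boolean non-bonded contact matrix."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            contacts = (d <= cutoff) & ~np.eye(len(coords), dtype=bool)
            for i in range(len(coords) - 1):             # exclude trivial chain-neighbour contacts
                contacts[i, i + 1] = contacts[i + 1, i] = False
            return contacts

        def conformation_distance(coords_a, coords_b):
            """Number of contacts present in one conformation but not the other (Hamming distance)."""
            return int(np.sum(contact_map(coords_a) != contact_map(coords_b)) // 2)

        # Two toy 4-mer conformations on a cubic lattice: a bend versus a straight chain.
        bent = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
        straight = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
        print(conformation_distance(bent, straight))     # 1: the bent chain has one extra contact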

  10. The visual attention span deficit in dyslexia is visual and not verbal.

    Science.gov (United States)

    Lobier, Muriel; Zoubrinetzky, Rachel; Valdois, Sylviane

    2012-06-01

    The visual attention (VA) span deficit hypothesis of dyslexia posits that letter string deficits are a consequence of impaired visual processing. Alternatively, some have interpreted this deficit as resulting from a visual-to-phonology code mapping impairment. This study aims to disambiguate between the two interpretations by investigating performance in a non-verbal character string visual categorization task with verbal and non-verbal stimuli. Results show that VA span ability predicts performance for the non-verbal visual processing task in normal reading children. Furthermore, VA span impaired dyslexic children are also impaired for the categorization task independently of stimuli type. This supports the hypothesis that the underlying impairment responsible for the VA span deficit is visual, not verbal. Copyright © 2011 Elsevier Srl. All rights reserved.

  11. Visual processing speed in old age.

    Science.gov (United States)

    Habekost, Thomas; Vogel, Asmus; Rostrup, Egill; Bundesen, Claus; Kyllingsbaek, Søren; Garde, Ellen; Ryberg, Charlotte; Waldemar, Gunhild

    2013-04-01

    Mental speed is a common concept in theories of cognitive aging, but it is difficult to get measures of the speed of a particular psychological process that are not confounded by the speed of other processes. We used Bundesen's (1990) Theory of Visual Attention (TVA) to obtain specific estimates of processing speed in the visual system controlled for the influence of response latency and individual variations of the perception threshold. A total of 33 non-demented old people (69-87 years) were tested for the ability to recognize briefly presented letters. Performance was analyzed by the TVA model. Visual processing speed decreased approximately linearly with age and was on average halved from 70 to 85 years. Less dramatic aging effects were found for the perception threshold and the visual apprehension span. In the visual domain, cognitive aging seems to be most clearly related to reductions in processing speed. © 2012 The Authors. Scandinavian Journal of Psychology © 2012 The Scandinavian Psychological Associations.
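
    As an outline of how such TVA-based processing-speed estimates are obtained, the sketch below fits the single-item exponential accrual function p(t) = 1 - exp(-v(t - t0)) for t > t0 to accuracy across exposure durations. The durations and "observed" accuracies are simulated stand-ins, not the study's data, and the full TVA whole-report model has more parameters than shown here.

        # Hedged sketch: fit a TVA-style processing speed v and perception threshold t0.
        import numpy as np
        from scipy.optimize import curve_fit

        def tva_accuracy(t, v, t0):
            """Probability of encoding a single briefly presented letter exposed for t seconds."""
            return np.where(t > t0, 1.0 - np.exp(-v * (t - t0)), 0.0)

        durations = np.array([0.02, 0.04, 0.08, 0.15, 0.30])          # exposure durations (s)
        true_v, true_t0 = 25.0, 0.015                                  # assumed: items/s, threshold
        observed = tva_accuracy(durations, true_v, true_t0) + np.random.normal(0, 0.02, 5)

        (v_hat, t0_hat), _ = curve_fit(tva_accuracy, durations, observed,
                                       p0=[10.0, 0.01], bounds=([0, 0], [200, 0.1]))
        print(round(v_hat, 1), round(t0_hat * 1000, 1))                # speed (items/s), threshold (ms)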

  12. Gambling in the visual periphery: a conjoint-measurement analysis of human ability to judge visual uncertainty.

    Directory of Open Access Journals (Sweden)

    Hang Zhang

    Recent work in motor control demonstrates that humans take their own motor uncertainty into account, adjusting the timing and goals of movement so as to maximize expected gain. Visual sensitivity varies dramatically with retinal location and target, and models of optimal visual search typically assume that the visual system takes retinal inhomogeneity into account in planning eye movements. Such models can then use the entire retina rather than just the fovea to speed search. Using a simple decision task, we evaluated human ability to compensate for retinal inhomogeneity. We first measured observers' sensitivity for targets, varying contrast and eccentricity. Observers then repeatedly chose between targets differing in eccentricity and contrast, selecting the one they would prefer to attempt: e.g., a low contrast target at 2° versus a high contrast target at 10°. Observers knew they would later attempt some of their chosen targets and receive rewards for correct classifications. We evaluated performance in three ways. Equivalence: Do observers' judgments agree with their actual performance? Do they correctly trade off eccentricity and contrast and select the more discriminable target in each pair? Transitivity: Are observers' choices self-consistent? Dominance: Do observers understand that increased contrast improves performance? Decreased eccentricity? All observers exhibited patterned failures of equivalence, and seven out of eight observers failed transitivity. There were significant but small failures of dominance. All these failures together reduced their winnings by 10%-18%.

  13. Impact of the motion and visual complexity of the background on players' performance in video game-like displays.

    Science.gov (United States)

    Caroux, Loïc; Le Bigot, Ludovic; Vibert, Nicolas

    2013-01-01

    The visual interfaces of virtual environments such as video games often show scenes where objects are superimposed on a moving background. Three experiments were designed to better understand the impact of the complexity and/or overall motion of two types of visual backgrounds often used in video games on the detection and use of superimposed, stationary items. The impact of background complexity and motion was assessed during two typical video game tasks: a relatively complex visual search task and a classic, less demanding shooting task. Background motion impaired participants' performance only when they performed the shooting game task, and only when the simplest of the two backgrounds was used. In contrast, and independently of background motion, performance on both tasks was impaired when the complexity of the background increased. Eye movement recordings demonstrated that most of the findings reflected the impact of low-level features of the two backgrounds on gaze control.

  14. Feasibility and performance evaluation of generating and recording visual evoked potentials using ambulatory Bluetooth based system.

    Science.gov (United States)

    Ellingson, Roger M; Oken, Barry

    2010-01-01

    This report contains the design overview and key performance measurements demonstrating the feasibility of generating and recording ambulatory visual stimulus evoked potentials using the previously reported custom Complementary and Alternative Medicine physiologic data collection and monitoring system, CAMAS. The methods used to generate visual stimuli on a PDA device and the design of an optical coupling device to convert the display to an electrical waveform recorded by the CAMAS base unit are presented. The optical sensor signal, synchronized to the visual stimulus, emulates the brain's synchronized EEG signal input to CAMAS that is normally reviewed for the evoked potential response. Most importantly, the PDA also sends a marker message over the wireless Bluetooth connection to the CAMAS base unit, synchronized to the visual stimulus, which is the critical averaging reference component for obtaining VEP results. Results show that the variance in the latency of the wireless marker messaging link is low enough to support the generation and recording of visual evoked potentials. The averaged sensor waveforms at multiple CPU speeds are presented and demonstrate the suitability of the Bluetooth interface for a portable ambulatory visual evoked potential implementation on our CAMAS platform.
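
    The averaging step that the Bluetooth markers make possible can be outlined as follows; the channel count, sampling rate, epoch window and simulated signal are assumptions, and real use would add artifact rejection.

        # Hedged sketch: marker-synchronized epoch averaging to recover an evoked potential.
        import numpy as np

        def average_evoked(eeg, marker_samples, fs, pre=0.1, post=0.4):
            """eeg: 1-D recording; marker_samples: stimulus-onset sample indices."""
            n_pre, n_post = int(pre * fs), int(post * fs)
            epochs = [eeg[m - n_pre:m + n_post] for m in marker_samples
                      if m - n_pre >= 0 and m + n_post <= len(eeg)]
            epochs = np.array(epochs)
            epochs -= epochs[:, :n_pre].mean(axis=1, keepdims=True)   # baseline correction
            return epochs.mean(axis=0)                                 # the averaged VEP

        fs = 250.0                                                     # assumed sampling rate (Hz)
        eeg = np.random.randn(int(60 * fs))                            # one minute of noisy signal
        markers = np.arange(500, len(eeg) - 200, 300)                  # stimulus onsets, ~every 1.2 s
        vep = average_evoked(eeg, markers, fs)
        print(vep.shape)                                               # (125,) samples: -100 ms..+400 ms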

  15. Combined acoustical and visual performance of noise barriers in mitigating the environmental impact of motorways.

    Science.gov (United States)

    Jiang, Like; Kang, Jian

    2016-02-01

    This study investigated the overall performance of noise barriers in mitigating the environmental impact of motorways, taking into consideration their effects on reducing noise and visual intrusions of moving traffic, but also on potentially inducing visual impact themselves. A laboratory experiment was carried out, using computer-visualised video scenes and motorway traffic noise recordings to present experimental scenarios covering two traffic levels, two distances of receiver to road, two types of background landscape, and five barrier conditions: motorway only, motorway with tree belt, and motorway with a 3 m timber barrier, a 5 m timber barrier, or a 5 m transparent barrier. Responses from 30 university-student participants were gathered and perceived barrier performance analysed. The results show that noise barriers were always beneficial in mitigating the environmental impact of motorways, or made no significant changes in environmental quality when the impact of motorways was low. Overall, barriers only offered a mitigation effect similar to that of the tree belt, but showed some potential to be more advantageous when the traffic level was high. The 5 m timber barrier tended to perform better than the 3 m one at a distance of 300 m but not at 100 m, possibly due to its negative visual effect when viewed from closer up. The transparent barrier did not perform much differently from the timber barriers but tended to be the least effective in most scenarios. Some low positive correlations were found between aesthetic preference for barriers and environmental impact reduction by the barriers. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Simulating the Role of Visual Selective Attention during the Development of Perceptual Completion

    Science.gov (United States)

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.

    2012-01-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of…

  17. A model of selective visual attention for a stereo pair of images

    Science.gov (United States)

    Park, Min Chul; Kim, Sung Kyu; Son, Jung-Young

    2005-11-01

    The human visual attention system has a remarkable ability to interpret complex scenes with ease and simplicity by selecting or focusing on a small region of the visual field without scanning the whole image. In this paper, a novel selective visual attention model using a 3D image display system for a stereo pair of images is proposed. It is based on the feature integration theory and locates the ROI (region of interest) or FOA (focus of attention). The disparity map obtained from a stereo pair of images is exploited as one of the spatial visual features to form a set of topographic feature maps in our approach. Though the true human cognitive mechanism for this analysis and integration process might differ from our assumptions, the proposed attention system matches well with the results found by human observers.
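
    In the spirit of the feature-integration approach described above, the sketch below combines normalized feature maps (including a disparity map) into a single saliency map whose peak gives the focus of attention. The normalization, weights and toy maps are assumptions, not the paper's exact model.

        # Hedged sketch: feature-integration-style saliency with a disparity channel.
        import numpy as np

        def normalize(m):
            m = m - m.min()
            return m / (m.max() + 1e-8)

        def saliency_with_disparity(intensity, color, orientation, disparity,
                                    weights=(1.0, 1.0, 1.0, 1.5)):
            maps = [normalize(m) for m in (intensity, color, orientation, disparity)]
            sal = sum(w * m for w, m in zip(weights, maps))
            foa = np.unravel_index(np.argmax(sal), sal.shape)          # focus of attention
            return sal / sal.max(), foa

        h, w = 120, 160
        intensity, color, orientation = (np.random.rand(h, w) for _ in range(3))
        disparity = np.zeros((h, w))
        disparity[40:60, 80:100] = 1.0                                 # a near object pops out
        sal, foa = saliency_with_disparity(intensity, color, orientation, disparity)
        print(foa)                                                     # lands inside the near object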

  18. Should visual speech cues (speechreading) be considered when fitting hearing aids?

    Science.gov (United States)

    Grant, Ken

    2002-05-01

    When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.

  19. The Visual Geophysical Exploration Environment: A Multi-dimensional Scientific Visualization

    Science.gov (United States)

    Pandya, R. E.; Domenico, B.; Murray, D.; Marlino, M. R.

    2003-12-01

    The Visual Geophysical Exploration Environment (VGEE) is an online learning environment designed to help undergraduate students understand fundamental Earth system science concepts. The guiding principle of the VGEE is the importance of hands-on interaction with scientific visualization and data. The VGEE consists of four elements: 1) an online, inquiry-based curriculum for guiding student exploration; 2) a suite of El Nino-related data sets adapted for student use; 3) a learner-centered interface to a scientific visualization tool; and 4) a set of concept models (interactive tools that help students understand fundamental scientific concepts). There are two key innovations featured in this interactive poster session. One is the integration of concept models and the visualization tool. Concept models are simple, interactive, Java-based illustrations of fundamental physical principles. We developed eight concept models and integrated them into the visualization tool to enable students to probe data. The ability to probe data using a concept model addresses the common problem of transfer: the difficulty students have in applying theoretical knowledge to everyday phenomena. The other innovation is a visualization environment and data that are discoverable in digital libraries, and installed, configured, and used for investigations over the web. By collaborating with the Integrated Data Viewer developers, we were able to embed a web-launchable visualization tool and access to distributed data sets into the online curricula. The Thematic Real-time Environmental Data Distributed Services (THREDDS) project is working to provide catalogs of datasets that can be used in new VGEE curricula under development. By cataloging these curricula in the Digital Library for Earth System Education (DLESE), learners and educators can discover the data and visualization tool within a framework that guides their use.

  20. Discovery learning model with geogebra assisted for improvement mathematical visual thinking ability

    Science.gov (United States)

    Juandi, D.; Priatna, N.

    2018-05-01

    The main goal of this study is to improve the mathematical visual thinking ability of high school students through implementation of the Discovery Learning Model with GeoGebra assistance. The study used a quasi-experimental method with a non-random pretest-posttest control design. The sample consisted of 62 grade XI senior high school students in one school in Bandung district. Data were collected through documentation, observation, written tests, interviews, daily journals, and student worksheets. The results of this study are: 1) the improvement in mathematical visual thinking ability of students who learned with the Discovery Learning Model with GeoGebra assistance is significantly higher than that of students who received conventional instruction; 2) there is a difference in the improvement of students' mathematical visual thinking ability between treatment groups formed by prior mathematical knowledge (high, medium, and low); 3) the improvement in the high prior-knowledge group is significantly higher than in the medium and low groups; and 4) the quality of improvement for the high and low prior-knowledge groups falls in the moderate category, while a high-category quality of improvement was achieved by students with medium prior knowledge.

  1. Visual and cognitive predictors of performance on brake reaction test: Salisbury eye evaluation driving study.

    Science.gov (United States)

    Zhang, Lei; Baldwin, Kevin; Munoz, Beatriz; Munro, Cynthia; Turano, Kathleen; Hassan, Shirin; Lyketsos, Constantine; Bandeen-Roche, Karen; West, Sheila K

    2007-01-01

    Concern for driving safety has prompted research into understanding factors related to performance. Brake reaction speed (BRS), the speed with which a person reacts to a sudden change in driving conditions, is one measure of performance. Our aim was to determine the visual, cognitive, and physical factors predicting BRS in a population sample of 1425 older drivers. The Maryland Department of Motor Vehicles roster of persons aged 67-87 and residing in Salisbury, MD, was used for recruitment of the study population. Procedures included the following: habitual binocular visual acuity using ETDRS charts, contrast sensitivity using a Pelli-Robson chart, visual fields assessed with an 81-point screening Humphrey field test at a single intensity threshold, and a questionnaire to ascertain medical conditions. Cognitive status was assessed using a standard battery of tests for attention, memory, visuo-spatial ability, and scanning. BRS was assessed using a computer-driven device that measured separately the initial reaction speed (IRS) (from the light change to red until the foot is removed from the accelerator) and the physical response speed (PRS) (from removing the foot from the accelerator to full brake depression). Five trial times were averaged, and time was converted to speed. Median brake reaction times varied from 384 to 5688 milliseconds across drivers. Age, gender, and cognition predicted total BRS, a non-informative result as there are two distinct parts to the task. Once the two were separated, a decrease in IRS was associated with low scores on cognitive factors and with missing points on the visual field. A decrease in PRS was associated with having three or more physical complaints related to the legs and feet, and with poorer visual search. Vision was not related to PRS. We have demonstrated the importance of segregating the speeds for the two tasks involved in brake reaction. Only the IRS depends on vision. Persons in good physical condition may perform poorly on brake reaction tests if their vision or cognition is compromised.

  2. Combined factors effect of menstrual cycle and background noise on visual inspection task performance: a simulation-based task.

    Science.gov (United States)

    Wijayanto, Titis; Tochihara, Yutaka; Wijaya, Andi R; Hermawati, Setia

    2009-11-01

    It is well known that women are physiologically and psychologically influenced by the menstrual cycle. In addition, the presence of background noise may affect task performance. So far, it has proven difficult to describe how the menstrual cycle and background noise affect task performance; some researchers have found a performance increment during menstruation or in the presence of noise, others have found performance deterioration, while others still have reported no dominant effect of either the menstrual cycle or the presence of noise on performance. However, no study to date has investigated the combined effect of the menstrual cycle and the presence of background noise on task performance. Therefore, the purpose of this study was to examine the combined effect of the menstrual cycle and background noise on visual inspection task performance, indexed by Signal Detection Theory (SDT) metrics: the sensitivity index (d') and the response criterion index (beta). For this purpose, ten healthy female students (21.5 ± 1.08 years) with a regular menstrual cycle participated in this study. A VDT-based visual inspection task was used for the experiment in a 3x2 factorial design. Two factors were analyzed as the main factors in this study: menstrual phase (pre-menstruation (PMS), menstruation (M), and post-menstruation (PM)) and background noise (with 80 dB(A) background noise and without noise). The results showed that the sensitivity index (d') of SDT was significantly affected across all the menstrual cycle conditions and by the presence of background noise. On the other hand, no significant effect of the menstrual cycle or of the presence of background noise was observed on the subjects' response tendency in visual inspection, as indexed by beta. According to the response criteria of individual subjects, however, the presence of noise affected the tendency of some subjects in detecting the object and making decisions during the visual inspection task.
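    For reference, the two Signal Detection Theory indices used above can be computed from hit and false-alarm rates as follows; this is the textbook formulation, and the example rates are illustrative rather than taken from the study.

```python
import numpy as np
from scipy.stats import norm

def sdt_indices(hit_rate, fa_rate):
    """Return (d_prime, beta) from hit and false-alarm rates.

    d' = z(H) - z(F); beta = exp((z(F)**2 - z(H)**2) / 2), where z is the
    inverse of the standard normal CDF. Rates of exactly 0 or 1 should be
    corrected (e.g. with a log-linear rule) before calling this function.
    """
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f
    beta = float(np.exp((z_f ** 2 - z_h ** 2) / 2.0))
    return d_prime, beta

# Example: 85% hits and 20% false alarms on a visual inspection block.
print(sdt_indices(0.85, 0.20))   # -> approximately (1.88, 0.83)
```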

  3. Improved discrimination of visual stimuli following repetitive transcranial magnetic stimulation.

    Directory of Open Access Journals (Sweden)

    Michael L Waterston

    Full Text Available BACKGROUND: Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently, rTMS is often assumed to introduce a "virtual lesion" in stimulated brain regions, with correspondingly diminished behavioral performance. METHODOLOGY/PRINCIPAL FINDINGS: Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. CONCLUSIONS/SIGNIFICANCE: Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception.

  4. Cerebral Glucose Metabolism is Associated with Verbal but not Visual Memory Performance in Community-Dwelling Older Adults.

    Science.gov (United States)

    Gardener, Samantha L; Sohrabi, Hamid R; Shen, Kai-Kai; Rainey-Smith, Stephanie R; Weinborn, Michael; Bates, Kristyn A; Shah, Tejal; Foster, Jonathan K; Lenzo, Nat; Salvado, Olivier; Laske, Christoph; Laws, Simon M; Taddei, Kevin; Verdile, Giuseppe; Martins, Ralph N

    2016-03-31

    Increasing evidence suggests that Alzheimer's disease (AD) sufferers show region-specific reductions in cerebral glucose metabolism, as measured by [18F]-fluoro-2-deoxyglucose positron emission tomography (18F-FDG PET). We investigated the preclinical disease stage by cross-sectionally examining the association between global cognition, verbal and visual memory, and 18F-FDG PET standardized uptake value ratio (SUVR) in 43 healthy control individuals, subsequently focusing on differences between subjective memory complainers and non-memory complainers. The 18F-FDG PET regions of interest investigated include the hippocampus, amygdala, posterior cingulate, superior parietal, entorhinal cortices, frontal cortex, temporal cortex, and inferior parietal region. In the cohort as a whole, verbal logical memory immediate recall was positively associated with 18F-FDG PET SUVR in both the left hippocampus and right amygdala. There were no associations observed between global cognition, delayed recall in logical memory, or visual reproduction and 18F-FDG PET SUVR. Following stratification of the cohort into subjective memory complainers and non-complainers, verbal logical memory immediate recall was positively associated with 18F-FDG PET SUVR in the right amygdala in those with subjective memory complaints. There were no significant associations observed in non-memory complainers between 18F-FDG PET SUVR in regions of interest and cognitive performance. We observed subjective memory complaint-specific associations between 18F-FDG PET SUVR and immediate verbal memory performance in our cohort; however, we found no associations with delayed recall of verbal memory or with visual memory performance. It is here argued that the neural mechanisms underlying verbal and visual memory performance may in fact differ in their pathways, and the characteristic reduction of 18F-FDG PET SUVR observed in this and previous studies likely reflects the pathophysiological changes in specific

  5. Boxes of Model Building and Visualization.

    Science.gov (United States)

    Turk, Dušan

    2017-01-01

    Macromolecular crystallography and electron microscopy (single-particle and in situ tomography) are merging into a single approach used by the two coalescing scientific communities. The merger is a consequence of technical developments that enabled determination of atomic structures of macromolecules by electron microscopy. Technological progress in experimental methods of macromolecular structure determination, computer hardware, and software changed and continues to change the nature of model building and visualization of molecular structures. However, the increase in automation and availability of structure validation are reducing interactive manual model building to fiddling with details. On the other hand, interactive modeling tools increasingly rely on search and complex energy calculation procedures, which make manually driven changes in geometry increasingly powerful and at the same time less demanding. Thus, the need for accurate manual positioning of a model is decreasing. The user's push only needs to be sufficient to bring the model within the increasing convergence radius of the computing tools. It seems that we can now better than ever determine an average single structure. The tools work better, requirements for engagement of human brain are lowered, and the frontier of intellectual and scientific challenges has moved on. The quest for resolution of new challenges requires out-of-the-box thinking. A few issues such as model bias and correctness of structure, ongoing developments in parameters defining geometric restraints, limitations of the ideal average single structure, and limitations of Bragg spot data are discussed here, together with the challenges that lie ahead.

  6. Visual coherence for large-scale line-plot visualizations

    KAUST Repository

    Muigg, Philipp

    2011-06-01

    Displaying a large number of lines within a limited amount of screen space is a task that is common to many different classes of visualization techniques such as time-series visualizations, parallel coordinates, link-node diagrams, and phase-space diagrams. This paper addresses the challenging problems of cluttering and overdraw inherent to such visualizations. We generate a 2x2 tensor field during line rasterization that encodes the distribution of line orientations through each image pixel. Anisotropic diffusion of a noise texture is then used to generate a dense, coherent visualization of line orientation. In order to represent features of different scales, we employ a multi-resolution representation of the tensor field. The resulting technique can easily be applied to a wide variety of line-based visualizations. We demonstrate this for parallel coordinates, a time-series visualization, and a phase-space diagram. Furthermore, we demonstrate how to integrate a focus+context approach by incorporating a second tensor field. Our approach achieves interactive rendering performance for large data sets containing millions of data items, due to its image-based nature and ease of implementation on GPUs. Simulation results from computational fluid dynamics are used to evaluate the performance and usefulness of the proposed method. © 2011 The Author(s).
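    A rough sketch of the per-pixel 2x2 orientation-tensor accumulation described above, assuming line segments are supplied as endpoint pairs in pixel coordinates; the point-sampling loop stands in for the GPU line rasterization used in the paper.

```python
import numpy as np

def accumulate_orientation_tensors(segments, height, width, samples_per_px=2):
    """Accumulate a 2x2 tensor per pixel encoding local line orientations.

    segments: iterable of ((x0, y0), (x1, y1)) line endpoints in pixel coords.
    Returns an array of shape (height, width, 2, 2); each entry is the sum of
    outer products d d^T of the unit direction vectors of lines crossing
    that pixel.
    """
    T = np.zeros((height, width, 2, 2))
    for (x0, y0), (x1, y1) in segments:
        d = np.array([x1 - x0, y1 - y0], dtype=float)
        length = np.hypot(d[0], d[1])
        if length == 0:
            continue
        d /= length
        outer = np.outer(d, d)
        n = max(2, int(length * samples_per_px))
        for t in np.linspace(0.0, 1.0, n):
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            if 0 <= x < width and 0 <= y < height:
                T[y, x] += outer
    return T

# The dominant orientation per pixel is the leading eigenvector of T[y, x];
# anisotropic diffusion of a noise texture would then be steered by it.
```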

  8. Large Scale Topographic Maps Generalisation and Visualization Based on New Methodology

    OpenAIRE

    Dinar, Ilma; Ključanin, Slobodanka; Poslončec-Petrić, Vesna

    2015-01-01

    Integrating spatial data from different sources results in visualization, which is the last step in the process of creating digital basic topographic maps. The sources used for visualization are the existing real estate cadastre database, orthophoto plans, and digital terrain models. Analogue cadastre plans were scanned and georeferenced according to existing regulations and used for toponyms. Visualization of topologically inspected geometric primitives was performed based on the ''Collection of cartog...

  9. VISIBIOweb: visualization and layout services for BioPAX pathway models

    Science.gov (United States)

    Dilek, Alptug; Belviranli, Mehmet E.; Dogrusoz, Ugur

    2010-01-01

    With recent advancements in techniques for cellular data acquisition, information on cellular processes has been increasing at a dramatic rate. Visualization is critical to analyzing and interpreting complex information; representing cellular processes or pathways is no exception. VISIBIOweb is a free, open-source, web-based pathway visualization and layout service for pathway models in BioPAX format. With VISIBIOweb, one can obtain well-laid-out views of pathway models using the standard notation of the Systems Biology Graphical Notation (SBGN), and can embed such views within one's web pages as desired. Pathway views may be navigated using zoom and scroll tools; pathway object properties, including any external database references available in the data, may be inspected interactively. The automatic layout component of VISIBIOweb may also be accessed programmatically from other tools using Hypertext Transfer Protocol (HTTP). The web site is free and open to all users and there is no login requirement. It is available at: http://visibioweb.patika.org. PMID:20460470

  10. Deficits in vision and visual attention associated with motor performance of very preterm/very low birth weight children.

    Science.gov (United States)

    Geldof, Christiaan J A; van Hus, Janeline W P; Jeukens-Visser, Martine; Nollet, Frans; Kok, Joke H; Oosterlaan, Jaap; van Wassenaer-Leemhuis, Aleid G

    2016-01-01

    To extend understanding of impaired motor functioning of very preterm (VP)/very low birth weight (VLBW) children by investigating its relationship with visual attention, visual and visual-motor functioning. Motor functioning (Movement Assessment Battery for Children, MABC-2; Manual Dexterity, Aiming & Catching, and Balance component), as well as visual attention (attention network and visual search tests), vision (oculomotor, visual sensory and perceptive functioning), visual-motor integration (Beery Visual Motor Integration), and neurological status (Touwen examination) were comprehensively assessed in a sample of 106 5.5-year-old VP/VLBW children. Stepwise linear regression analyses were conducted to investigate multivariate associations between deficits in visual attention, oculomotor, visual sensory, perceptive and visual-motor integration functioning, abnormal neurological status, neonatal risk factors, and MABC-2 scores. Abnormal MABC-2 Total or component scores occurred in 23-36% of VP/VLBW children. Visual and visual-motor functioning accounted for 9-11% of variance in MABC-2 Total, Manual Dexterity and Balance scores. Visual perceptive deficits only were associated with Aiming & Catching. Abnormal neurological status accounted for an additional 19-30% of variance in MABC-2 Total, Manual Dexterity and Balance scores, and 5% of variance in Aiming & Catching, and neonatal risk factors for 3-6% of variance in MABC-2 Total, Manual Dexterity and Balance scores. Motor functioning is weakly associated with visual and visual-motor integration deficits and moderately associated with abnormal neurological status, indicating that motor performance reflects long term vulnerability following very preterm birth, and that visual deficits are of minor importance in understanding motor functioning of VP/VLBW children. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Building Envelope Thermal Performance Assessment Using Visual Programming and BIM, based on ETTV requirement of Green Mark and GreenRE

    Directory of Open Access Journals (Sweden)

    Taki Eddine Seghier

    2017-09-01

    Full Text Available Meeting green building design requirements and achieving the targeted credit points under a specific green rating system is known to be a very challenging task. Building Information Modeling (BIM) design processes and tools have already brought considerable advancements to green building design and performance analysis. However, the green building design process still lacks tools and workflows that can provide real-time feedback on building sustainability and rating during the design stage. In this paper, a new workflow for green building design assessment and rating is proposed based on the integration of a Visual Programming Language (VPL) and BIM. The aim of this study is thus to develop a BIM-VPL based tool to support building envelope design and assessment. The performance metric of focus in this research is the building Envelope Thermal Transfer Value (ETTV), which is an Energy Efficiency (EE) prerequisite requirement (worth up to 15 credits) in both the Green Mark and GreenRE rating systems. The development of the tool begins by creating a generic integration framework between BIM-VPL functionalities and the ETTV requirements. Data are then extracted from the BIM 3D model and managed using Revit, Excel and Dynamo for visual scripting. A sample project consisting of a hypothetical residential building is run, and its envelope ETTV performance and rating score are obtained to validate the tool. This tool supports the project team in building envelope design and assessment by allowing them to select the most appropriate façade configuration according to its performance efficiency and green rating. Furthermore, it serves as proof of concept that building sustainability rating and compliance checking can be processed automatically through customized workflows developed on top of BIM and VPL technologies.
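    A simplified sketch of how an ETTV-style envelope check could be scripted once wall and window areas and thermal properties have been extracted from the BIM model. The coefficients below are placeholders rather than the official Green Mark/GreenRE values, and the function and parameter names are assumptions made for illustration.

```python
def ettv(wall_area, window_area, u_wall, u_window, shading_coeff,
         correction_factor=1.0, a=12.0, b=3.4, c=211.0):
    """Envelope Thermal Transfer Value (W/m^2) for one facade.

    Follows the generic three-term structure: conduction through opaque
    walls, conduction through glazing, and solar radiation through glazing.
    a, b, c are placeholder coefficients; substitute the values prescribed
    by the rating system (Green Mark / GreenRE) being targeted.
    """
    gross = wall_area + window_area
    wwr = window_area / gross                      # window-to-wall ratio
    return (a * (1.0 - wwr) * u_wall
            + b * wwr * u_window
            + c * wwr * correction_factor * shading_coeff)

# Example: a facade with 60 m2 of wall (U = 2.5) and 40 m2 of glazing
# (U = 5.7, SC = 0.5); all values are illustrative only.
print(round(ettv(60, 40, u_wall=2.5, u_window=5.7, shading_coeff=0.5), 1))
```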

  12. Mental Imagery and Visual Working Memory

    Science.gov (United States)

    Keogh, Rebecca; Pearson, Joel

    2011-01-01

    Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory - but not iconic visual memory - can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage. PMID:22195024

  13. Mental imagery and visual working memory.

    Directory of Open Access Journals (Sweden)

    Rebecca Keogh

    Full Text Available Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory--but not iconic visual memory--can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage.

  15. An Investigation into how Character’s Visual Appearance Affects Gamer Performance

    OpenAIRE

    Leppänen, Janne

    2017-01-01

    The purpose of this thesis was to investigate how a gamer perceives the playable character’s visual looks in an FPS (first person shooter) type of computer game. The main focus was on how this affects immersion and especially the gamer performance while playing the game. The goal was also to explain the concept of enclothed cognition and its use in the thesis to support the research. A series of tests was set up for multiple test subjects with the purpose of proving that the playable...

  16. Interactive WebGL-based 3D visualizations for EAST experiment

    International Nuclear Information System (INIS)

    Xia, J.Y.; Xiao, B.J.; Li, Dan; Wang, K.R.

    2016-01-01

    Highlights: • Developing a user-friendly interface to visualize the EAST experimental data and the device is important to scientists and engineers. • The Web3D visualization system is based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. • The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. • The original CAD model was discretized into different layers with different simplification to enable realistic rendering and improve performance. - Abstract: In recent years EAST (Experimental Advanced Superconducting Tokamak) experimental data are being shared and analyzed by an increasing number of international collaborators. Developing a user-friendly interface to visualize the data, meta data and the relevant parts of the device is becoming more and more important to aid scientists and engineers. Compared with the previous virtual EAST system based on VRML/Java3D [1] (Li et al., 2014), a new technology is being adopted to create a 3D visualization system based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. It offers a highly interactive interface allowing scientists to roam inside EAST device and view the complex 3-D structure of the machine. It includes technical details of the device and various diagnostic components, and provides visualization of diagnostic metadata with a direct link to each signal name and its stored data. In order for the quick access to the device 3D model, the original CAD model was discretized into different layers with different simplification. It allows users to search for plasma videos in any experiment and analyze the video frame by frame. In this paper, we present the implementation details to enable realistic rendering and improve performance.

  17. Interactive WebGL-based 3D visualizations for EAST experiment

    Energy Technology Data Exchange (ETDEWEB)

    Xia, J.Y., E-mail: jyxia@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Xiao, B.J. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Li, Dan [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Wang, K.R. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China)

    2016-11-15

    Highlights: • Developing a user-friendly interface to visualize the EAST experimental data and the device is important to scientists and engineers. • The Web3D visualization system is based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. • The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. • The original CAD model was discretized into different layers with different simplification to enable realistic rendering and improve performance. - Abstract: In recent years EAST (Experimental Advanced Superconducting Tokamak) experimental data are being shared and analyzed by an increasing number of international collaborators. Developing a user-friendly interface to visualize the data, meta data and the relevant parts of the device is becoming more and more important to aid scientists and engineers. Compared with the previous virtual EAST system based on VRML/Java3D [1] (Li et al., 2014), a new technology is being adopted to create a 3D visualization system based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. It offers a highly interactive interface allowing scientists to roam inside EAST device and view the complex 3-D structure of the machine. It includes technical details of the device and various diagnostic components, and provides visualization of diagnostic metadata with a direct link to each signal name and its stored data. In order for the quick access to the device 3D model, the original CAD model was discretized into different layers with different simplification. It allows users to search for plasma videos in any experiment and analyze the video frame by frame. In this paper, we present the implementation details to enable realistic rendering and improve performance.

  18. [The Performance Analysis for Lighting Sources in Highway Tunnel Based on Visual Function].

    Science.gov (United States)

    Yang, Yong; Han, Wen-yuan; Yan, Ming; Jiang, Hai-feng; Zhu, Li-wei

    2015-10-01

    Under mesopic vision conditions, the spectral luminous efficiency function is represented by a family of curves whose peak wavelength and magnitude depend on the light spectrum, the background luminance, and other factors. The effect of a light source on visibility therefore cannot be characterized by a single optical parameter. In this experiment, the reaction time of visual cognition is used as the evaluation index, and visual cognition is tested with a visual function method under different speeds and luminous environments. The light sources include high-pressure sodium, an electrodeless fluorescent lamp, and white LEDs at three color temperatures (ranging from 1958 to 5537 K). The background luminance values, between 1 and 5 cd/m2, correspond to the basic section of highway tunnel lighting and to general outdoor lighting, and all of them lie within the mesopic range. The test results show that, under the same speed and luminance conditions, the reaction time of visual cognition for high-color-temperature sources is shorter than for low-color-temperature sources, and the reaction time for a visual target at high speed is shorter than at low speed; at the end moment, however, the visual angle subtended by the target in the observer's visual field was larger at low speed than at high speed. Based on the MOVE model, the equivalent luminance of human mesopic vision was calculated for the different emission spectra and background luminances produced by the test light sources. Compared with the photopic result, the coefficient of variation (CV) of the reaction-time curve corresponding to the mesopic equivalent luminance is smaller. Under mesopic conditions, the discrepancy between the equivalent luminance of different light sources and the photopic value is one of the main reasons for causing the

  19. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    Science.gov (United States)

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line

  20. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models.

    Science.gov (United States)

    Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus

    2017-02-01

    Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies
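    A compact illustration of the fixed-RSA comparison described above, assuming model_features and brain_responses are image-by-unit and image-by-voxel matrices; correlation-distance RDMs compared by Spearman correlation are a common choice, and the mixed-RSA reweighting step is omitted.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix (condensed form):
    1 - Pearson correlation between the response patterns of each image pair."""
    return pdist(patterns, metric="correlation")

def fixed_rsa(model_features, brain_responses):
    """Spearman correlation between the model RDM and the brain RDM."""
    rho, _ = spearmanr(rdm(model_features), rdm(brain_responses))
    return rho

# Toy data: 50 images, 200 model features, 300 voxels with partly shared structure.
rng = np.random.default_rng(1)
images = rng.standard_normal((50, 200))
voxels = images[:, :150] @ rng.standard_normal((150, 300))
print(round(fixed_rsa(images, voxels), 3))
```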

  1. Modeling visual search using three-parameter probability functions in a hierarchical Bayesian framework.

    Science.gov (United States)

    Lin, Yi-Shin; Heinke, Dietmar; Humphreys, Glyn W

    2015-04-01

    In this study, we applied Bayesian-based distributional analyses to examine the shapes of response time (RT) distributions in three visual search paradigms, which varied in task difficulty. In further analyses we investigated two common observations in visual search-the effects of display size and of variations in search efficiency across different task conditions-following a design that had been used in previous studies (Palmer, Horowitz, Torralba, & Wolfe, Journal of Experimental Psychology: Human Perception and Performance, 37, 58-71, 2011; Wolfe, Palmer, & Horowitz, Vision Research, 50, 1304-1311, 2010) in which parameters of the response distributions were measured. Our study showed that the distributional parameters in an experimental condition can be reliably estimated by moderate sample sizes when Monte Carlo simulation techniques are applied. More importantly, by analyzing trial RTs, we were able to extract paradigm-dependent shape changes in the RT distributions that could be accounted for by using the EZ2 diffusion model. The study showed that Bayesian-based RT distribution analyses can provide an important means to investigate the underlying cognitive processes in search, including stimulus grouping and the bottom-up guidance of attention.
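    The study's specific three-parameter functions and hierarchical Bayesian fitting are not reproduced here; as a simpler stand-in, the sketch below fits one common three-parameter RT distribution, the ex-Gaussian, to a single condition by maximum likelihood with SciPy.

```python
import numpy as np
from scipy.stats import exponnorm

# Simulate 500 RTs (seconds) from an ex-Gaussian: Normal(mu, sigma) + Exp(tau).
rng = np.random.default_rng(2)
mu, sigma, tau = 0.45, 0.06, 0.20
rts = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

# SciPy parameterizes the ex-Gaussian as exponnorm(K, loc, scale) with
# loc = mu, scale = sigma and K = tau / sigma.
K_hat, loc_hat, scale_hat = exponnorm.fit(rts)
print(f"mu ~ {loc_hat:.3f}, sigma ~ {scale_hat:.3f}, tau ~ {K_hat * scale_hat:.3f}")
```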

  2. Visual and kinesthetic locomotor imagery training integrated with auditory step rhythm for walking performance of patients with chronic stroke.

    Science.gov (United States)

    Kim, Jin-Seop; Oh, Duck-Won; Kim, Suhn-Yeop; Choi, Jong-Duk

    2011-02-01

    To compare the effect of visual and kinesthetic locomotor imagery training on walking performance and to determine the clinical feasibility of incorporating auditory step rhythm into the training. Randomized crossover trial. Laboratory of a Department of Physical Therapy. Fifteen subjects with post-stroke hemiparesis. Four forms of locomotor imagery training were compared: visual locomotor imagery training, kinesthetic locomotor imagery training, visual locomotor imagery training with auditory step rhythm, and kinesthetic locomotor imagery training with auditory step rhythm. Outcome measures were the timed up-and-go test and electromyographic and kinematic analyses of the affected lower limb during one gait cycle. After the interventions, significant differences were found in the timed up-and-go test results between the visual locomotor imagery training (25.69 ± 16.16 to 23.97 ± 14.30) and the kinesthetic locomotor imagery training with auditory step rhythm (22.68 ± 12.35 to 15.77 ± 8.58). The kinesthetic locomotor imagery training exhibited significantly increased activation in a greater number of muscles and increased angular displacement of the knee and ankle joints compared with the visual locomotor imagery training, and these effects were more prominent when auditory step rhythm was integrated into each form of locomotor imagery training. Activation of the hamstring during the swing phase and of the gastrocnemius during the stance phase, as well as kinematic data of the knee joint, differed significantly at posttest between the visual locomotor imagery training and the kinesthetic locomotor imagery training with auditory step rhythm, with larger changes in the kinesthetic locomotor imagery training than in the visual locomotor imagery training. Auditory step rhythm combined with locomotor imagery training produces a greater positive effect in improving the walking performance of patients with post-stroke hemiparesis.

  3. Artist-Teachers' In-Action Mental Models While Teaching Visual Arts

    Science.gov (United States)

    Russo-Zimet, Gila

    2017-01-01

    Studies have examined the assumption that teachers have previous perceptions, beliefs and knowledge about learning (Cochran-Smith & Villegas, 2015). This study presented the In-Action Mental Model of twenty leading artist-teachers while teaching Visual Arts in three Israeli art institutions of higher Education. Data was collected in two…

  4. Assessment of three-dimensional high-definition visualization technology to perform microvascular anastomosis.

    Science.gov (United States)

    Wong, Alex K; Davis, Gabrielle B; Nguyen, T JoAnna; Hui, Kenneth J W S; Hwang, Brian H; Chan, Linda S; Zhou, Zhao; Schooler, Wesley G; Chandrasekhar, Bala S; Urata, Mark M

    2014-07-01

    Traditional visualization techniques in microsurgery require strict positioning in order to maintain the field of visualization. However, static posturing over time may lead to musculoskeletal strain and injury. Three-dimensional high-definition (3DHD) visualization technology may be a useful adjunct to limiting static posturing and improving ergonomics in microsurgery. In this study, we aimed to investigate the benefits of using the 3DHD technology over traditional techniques. A total of 14 volunteers consisting of novice and experienced microsurgeons performed femoral anastomoses on male Sprague-Dawley retired breeder rats using traditional techniques as well as the 3DHD technology and compared the two techniques. Participants subsequently completed a questionnaire regarding their preference in terms of operational parameters, ergonomics, overall quality, and educational benefits. Efficiency was also evaluated by mean times to complete the anastomosis with each technique. A total of 27 anastomoses were performed, 14 of 14 using the traditional microscope and 13 of 14 using the 3DHD technology. Preference toward the traditional modality was noted with respect to the parameters of precision, field adjustments, zoom and focus, depth perception, and overall quality. The 3DHD technique was preferred for improved stamina and less back and eye strain. Participants believed that the 3DHD technique was the better method for learning microsurgery. Longer mean time of anastomosis completion was noted in participants utilizing the 3DHD technique. The 3DHD technology may prove to be valuable in improving proper ergonomics in microsurgery. In addition, it may be useful in medical education when applied to the learning of new microsurgical skills. More studies are warranted to determine its efficacy and safety in a clinical setting. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  5. Improving the Audio Game-Playing Performances of People with Visual Impairments through Multimodal Training

    Science.gov (United States)

    Balan, Oana; Moldoveanu, Alin; Moldoveanu, Florica; Nagy, Hunor; Wersenyi, Gyorgy; Unnporsson, Runar

    2017-01-01

    Introduction: As the number of people with visual impairments (that is, those who are blind or have low vision) is continuously increasing, rehabilitation and engineering researchers have identified the need to design sensory-substitution devices that would offer assistance and guidance to these people for performing navigational tasks. Auditory…

  6. Poor Performance on Serial Visual Tasks in Persons with Reading Disabilities: Impaired Working Memory?

    Science.gov (United States)

    Ram-Tsur, Ronit; Faust, Miriam; Zivotofsky, Ari Z.

    2008-01-01

    The present study investigates the performance of persons with reading disabilities (PRD) on a variety of sequential visual-comparison tasks that have different working-memory requirements. In addition, mediating relationships between the sequential comparison process and attention and memory skills were looked for. Our findings suggest that PRD…

  7. The Role of Visual and Auditory Stimuli in Continuous Performance Tests: Differential Effects on Children With ADHD.

    Science.gov (United States)

    Simões, Eunice N; Carvalho, Ana L Novais; Schmidt, Sergio L

    2018-04-01

    Continuous performance tests (CPTs) usually utilize visual stimuli. A previous investigation showed that inattention is partially independent of modality, but response inhibition is modality-specific. Here we aimed to compare performance on visual and auditory CPTs in ADHD and in healthy controls. The sample consisted of 160 elementary and high school students (43 ADHD, 117 controls). For each sensory modality, five variables were extracted: commission errors (CEs) and omission errors (OEs), reaction time (RT), variability of reaction time (VRT), and coefficient of variability (CofV = VRT / RT). The ADHD group exhibited higher rates for all test variables. The discriminant analysis indicated that auditory OE was the most reliable variable for discriminating between groups, followed by visual CE, auditory CE, and auditory CofV. Discriminant equation classified ADHD with 76.3% accuracy. Auditory parameters in the inattention domain (OE and VRT) can discriminate ADHD from controls. For the hyperactive/impulsive domain (CE), the two modalities are equally important.
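    A small helper showing how the five CPT variables listed above could be derived, under the assumption that each trial is logged with a target flag, a response flag, and a reaction time; the names and the toy data are illustrative.

```python
import numpy as np

def cpt_metrics(is_target, responded, rt_ms):
    """Return OE, CE, RT, VRT and CofV from trial-level CPT data.

    is_target, responded: boolean sequences, one entry per trial.
    rt_ms: reaction times in ms (only meaningful where responded is True).
    """
    is_target, responded, rt_ms = map(np.asarray, (is_target, responded, rt_ms))
    oe = int(np.sum(is_target & ~responded))       # omission errors (missed targets)
    ce = int(np.sum(~is_target & responded))       # commission errors (false alarms)
    hit_rts = rt_ms[is_target & responded]         # correct responses only
    rt, vrt = float(hit_rts.mean()), float(hit_rts.std(ddof=1))
    return {"OE": oe, "CE": ce, "RT": rt, "VRT": vrt, "CofV": vrt / rt}

# Example with four trials: two hits, one missed target, one false alarm.
print(cpt_metrics([True, True, True, False],
                  [True, True, False, True],
                  [420, 515, np.nan, 380]))
```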

  8. Clinical performance of two visual scoring systems in detecting and assessing activity status of occlusal caries in primary teeth

    DEFF Research Database (Denmark)

    Braga, M M; Ekstrand, K R; Martignon, S

    2010-01-01

    This study aimed to compare the clinical performance of two sets of visual scoring criteria for detecting caries severity and assessing caries activity status in occlusal surfaces. Two visual scoring systems--the Nyvad criteria (NY) and the ICDAS-II including an adjunct system for lesion activity...

  9. Simultaneous modeling of visual saliency and value computation improves predictions of economic choice.

    Science.gov (United States)

    Towal, R Blythe; Mormann, Milica; Koch, Christof

    2013-10-01

    Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift-diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions.
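    A toy simulation of the winning additive scheme, assuming each on-screen item gets its own accumulator whose drift is a one-third/two-thirds mix of its normalized saliency and value; the threshold, noise level, and time step are arbitrary illustrative choices, not the fitted parameters of the study.

```python
import numpy as np

def simulate_first_fixation(saliency, value, w_s=1/3, w_v=2/3,
                            threshold=1.0, noise=0.5, dt=0.01, rng=None):
    """Race of drift-diffusion accumulators, one per display item.

    Drift of item i = w_s * saliency[i] + w_v * value[i]; the first
    accumulator to reach `threshold` determines the first fixation.
    Returns (winning item index, time in seconds).
    """
    rng = rng or np.random.default_rng()
    saliency, value = np.asarray(saliency, float), np.asarray(value, float)
    drift = w_s * saliency + w_v * value
    x = np.zeros_like(drift)
    t = 0.0
    while np.all(x < threshold):
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(len(x))
        t += dt
    return int(np.argmax(x)), t

# Four food items: item 2 has the highest value, item 0 the highest saliency.
winner, rt = simulate_first_fixation([0.9, 0.2, 0.4, 0.1], [0.3, 0.5, 0.9, 0.2],
                                     rng=np.random.default_rng(3))
print(winner, round(rt, 2))
```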

  10. A Biophysical Neural Model To Describe Spatial Visual Attention

    International Nuclear Information System (INIS)

    Hugues, Etienne; Jose, Jorge V.

    2008-01-01

    Visual scenes carry enormous amounts of spatial and temporal information that are transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of neural activity in the visual area known as V4 when the animal is paying attention directly towards a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations without attention. To reproduce the known neuronal response variability, we found that the neurons should receive about equal, or balanced, levels of excitatory and inhibitory input, with both levels high, as they are under in vivo conditions. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.
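    A generic leaky integrate-and-fire sketch of the balanced-input idea, not the authors' V4 network model: with strong but roughly cancelling excitatory and inhibitory Poisson drive, the membrane potential fluctuates and spiking becomes irregular. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, T = 1e-4, 5.0                        # 0.1 ms steps, 5 s of simulated time
tau = 0.02                               # membrane time constant (s)
v_rest, v_thresh, v_reset = -70e-3, -55e-3, -60e-3
rate_e, rate_i = 9000.0, 8000.0          # strong, approximately balanced drive (Hz, summed over synapses)
w_e, w_i = 0.5e-3, -0.5e-3               # PSP sizes (V)

v, spikes = v_rest, []
for step in range(int(T / dt)):
    n_e = rng.poisson(rate_e * dt)       # excitatory input events this step
    n_i = rng.poisson(rate_i * dt)       # inhibitory input events this step
    v += dt / tau * (v_rest - v) + n_e * w_e + n_i * w_i
    if v >= v_thresh:
        spikes.append(step * dt)
        v = v_reset

isi = np.diff(spikes)
print(f"{len(spikes)} spikes, ISI CV = {isi.std() / isi.mean():.2f}")
```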

  11. Testing the generality of the zoom-lens model: Evidence for visual-pathway specific effects of attended-region size on perception.

    Science.gov (United States)

    Goodhew, Stephanie C; Lawrence, Rebecca K; Edwards, Mark

    2017-05-01

    There are volumes of information available to process in visual scenes. Visual spatial attention is a critically important selection mechanism that prevents these volumes from overwhelming our visual system's limited-capacity processing resources. We were interested in understanding the effect of the size of the attended area on visual perception. The prevailing model of attended-region size across cognition, perception, and neuroscience is the zoom-lens model. This model stipulates that the magnitude of perceptual processing enhancement is inversely related to the size of the attended region, such that a narrow attended-region facilitates greater perceptual enhancement than a wider region. Yet visual processing is subserved by two major visual pathways (magnocellular and parvocellular) that operate with a degree of independence in early visual processing and encode contrasting visual information. Historically, testing of the zoom-lens has used measures of spatial acuity ideally suited to parvocellular processing. This, therefore, raises questions about the generality of the zoom-lens model to different aspects of visual perception. We found that while a narrow attended-region facilitated spatial acuity and the perception of high spatial frequency targets, it had no impact on either temporal acuity or the perception of low spatial frequency targets. This pattern also held up when targets were not presented centrally. This supports the notion that visual attended-region size has dissociable effects on magnocellular versus parvocellular mediated visual processing.

  12. Robot vision language RVL/V: An integration scheme of visual processing and manipulator control

    International Nuclear Information System (INIS)

    Matsushita, T.; Sato, T.; Hirai, S.

    1984-01-01

    RVL/V is a robot vision language designed to write a program for visual processing and manipulator control of a hand-eye system. This paper describes the design of RVL/V and the current implementation of the system. Visual processing is performed on one-dimensional range data of the object surface. Model-based instructions execute object detection, measurement and view control. The hierarchy of visual data and processing is introduced to give RVL/V generality. A new scheme to integrate visual information and manipulator control is proposed. The effectiveness of the model-based visual processing scheme based on profile data is demonstrated by a hand-eye experiment

  13. Individual Differences in Reported Visual Imagery and Memory Performance.

    Science.gov (United States)

    McKelvie, Stuart J.; Demers, Elizabeth G.

    1979-01-01

    High- and low-visualizing males, identified by the self-report VVIQ, participated in a memory experiment involving abstract words, concrete words, and pictures. High-visualizers were superior on all items in short-term recall but superior only on pictures in long-term recall, supporting the VVIQ's validity. (Author/SJL)

  14. Visual product architecture modelling for structuring data in a PLM system

    DEFF Research Database (Denmark)

    Bruun, Hans Peter Lomholt; Mortensen, Niels Henrik

    2012-01-01

    The goal of this paper is to determine the role of a product architecture model to support communication and to form the basis for developing and maintaining information of product structures in a PLM system. This paper contains descriptions of a modelling tool to represent a product architecture....... Moreover, it is discussed how the sometimes intangible elements and phenomena within an architecture model can be visually modeled in order to form the basis for a data model in a PLM system. © 2012 International Federation for Information Processing....

  15. VisTrails SAHM: visualization and workflow management for species habitat modeling

    Science.gov (United States)

    Morisette, Jeffrey T.; Jarnevich, Catherine S.; Holcombe, Tracy R.; Talbert, Colin B.; Ignizio, Drew A.; Talbert, Marian; Silva, Claudio; Koop, David; Swanson, Alan; Young, Nicholas E.

    2013-01-01

    The Software for Assisted Habitat Modeling (SAHM) has been created to both expedite habitat modeling and help maintain a record of the various input data, pre- and post-processing steps and modeling options incorporated in the construction of a species distribution model through the established workflow management and visualization VisTrails software. This paper provides an overview of the VisTrails:SAHM software including a link to the open source code, a table detailing the current SAHM modules, and a simple example modeling an invasive weed species in Rocky Mountain National Park, USA.

  16. IFIS Model-Plus: A Web-Based GUI for Visualization, Comparison and Evaluation of Distributed Flood Forecasts and Hindcasts

    Science.gov (United States)

    Krajewski, W. F.; Della Libera Zanchetta, A.; Mantilla, R.; Demir, I.

    2017-12-01

    This work explores the use of hydroinformatics tools to provide a user-friendly, accessible interface for executing and assessing the output of real-time flood forecasts using distributed hydrological models. The main result is the implementation of a web system that uses an Iowa Flood Information System (IFIS)-based environment for graphical display of rainfall-runoff simulation results for both real-time and past storm events. It communicates with the ASYNCH ODE solver to perform large-scale distributed hydrological modeling based on segmentation of the terrain into hillslope-link hydrologic units. The cyber-platform also allows hindcasting of model performance by testing multiple model configurations and assumptions about vertical flows in the soils. The scope of the currently implemented system is the entire set of watersheds contributing to the territory of the state of Iowa. The interface provides resources for visualizing animated maps of different modeled water-related states of the environment, including flood-wave propagation with classification of flood magnitude, runoff generation, surface soil moisture, and the total water column in the soil. Additional tools for comparing different model configurations and performing model evaluation against observed variables at monitored sites are also available. The user-friendly interface has been published to the web under the URL http://ifis.iowafloodcenter.org/ifis/sc/modelplus/.

  17. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    Science.gov (United States)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of Things (IoT) is a kind of intelligent network that can be used to locate, track, identify, and supervise people and objects. One of the important core technologies of the intelligent visual Internet of Things (IVIOT) is the intelligent visual tag system. In this paper, visual feature extraction and the establishment of visual tags for the human face are investigated using the ORL face database. Firstly, we use the principal component analysis (PCA) algorithm for face feature extraction; then we adopt a support vector machine (SVM) for classification and face recognition; finally, we establish a visual tag for each classified face. We conducted an experiment on a group of face images, and the results show that the proposed algorithm performs well and can present the visual tags of objects conveniently.
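
    As a concrete illustration of the PCA-then-SVM pipeline described above, the following sketch uses scikit-learn with the Olivetti faces dataset (the AT&T/ORL face database). The number of principal components and the SVM settings are illustrative choices, not those reported in the paper.

```python
# Sketch of the PCA -> SVM pipeline: project face images onto principal components,
# then classify identity with a support vector machine. Parameter values are
# illustrative, not the paper's configuration.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()                 # 400 images, 40 subjects, 64x64 pixels
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

clf = make_pipeline(PCA(n_components=80, whiten=True), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)

# The predicted identity would then be attached to the image as its "visual tag".
print("Recognition accuracy:", clf.score(X_te, y_te))
```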

  18. Social media mining and visualization for point-of-interest recommendation

    Institute of Scientific and Technical Information of China (English)

    Ren Xingyi; Song Meina; E Haihong; Song Junde

    2017-01-01

    With the rapid growth of location-based social networks (LBSNs), point-of-interest (POI) recommendation has become an important research problem. As one of the most representative social media platforms, Twitter provides various real-life information for POI recommendation in real time. Although POI recommendation has been actively studied, tweet images have not been well utilized for this research problem. State-of-the-art visual features like convolutional neural network (CNN) features have shown significant performance gains over the traditional bag-of-visual-words in unveiling an image's semantics. Unfortunately, they have not been employed for POI recommendation from social websites. Hence, how to make the most of tweet images to improve the performance of POI recommendation and visualization remains open. In this paper, we thoroughly study the impact of tweet images on POI recommendation for different POI categories using various visual features. A novel topic model called social media Twitter-latent Dirichlet allocation (SM-TwitterLDA), which jointly models five Twitter features (i.e., text, image, location, timestamp and hashtag), is designed to discover POIs from the sheer amount of tweets. Moreover, each POI is visualized by representative images selected on three predefined criteria. Extensive experiments have been conducted on a real-life tweet dataset to verify the effectiveness of our method.
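
    The CNN-feature step mentioned above can be sketched as follows: embed each tweet image with a pretrained network so that the resulting vector can feed a downstream topic or recommendation model. The choice of ResNet-18 and the file name are assumptions for illustration, not the authors' configuration.

```python
# Sketch of extracting a CNN feature vector from a tweet image with a pretrained
# torchvision model. ResNet-18 and the file name are illustrative assumptions.
import torch
from torchvision.models import resnet18, ResNet18_Weights
from PIL import Image

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights)
model.fc = torch.nn.Identity()      # drop the classifier; keep the 512-d embedding
model.eval()

preprocess = weights.transforms()   # resize/crop/normalize expected by these weights
img = Image.open("tweet_image.jpg").convert("RGB")   # hypothetical input image

with torch.no_grad():
    feature = model(preprocess(img).unsqueeze(0))    # shape: (1, 512)
print(feature.shape)
```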

  19. Neural markers of age-related reserve and decline in visual processing speed and visual short-term memory capacity

    DEFF Research Database (Denmark)

    Wiegand, Iris

    2013-01-01

    Attentional performance is assumed to be a major source of general cognitive abilities in older age. The present study aimed at identifying neural markers of preserved and declined basic visual attention functions in aging individuals. For groups of younger and older adults, we modeled general ca...

  20. Introduction to Information Visualization (InfoVis) Techniques for Model-Based Systems Engineering

    Science.gov (United States)

    Sindiy, Oleg; Litomisky, Krystof; Davidoff, Scott; Dekens, Frank

    2013-01-01

    This paper presents insights that conform to numerous system modeling languages/representation standards. The insights are drawn from best practices of Information Visualization as applied to aerospace-based applications.

  1. Visual comparison for information visualization

    KAUST Repository

    Gleicher, M.; Albers, D.; Walker, R.; Jusufi, I.; Hansen, C. D.; Roberts, J. C.

    2011-01-01

    Data analysis often involves the comparison of complex objects. With the ever increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we see that all designs are assembled from the building blocks of juxtaposition, superposition and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools. © The Author(s) 2011.
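
    The three building blocks are easy to illustrate with a toy example: for two series A and B, juxtaposition places them in separate panels, superposition overlays them in shared axes, and explicit encoding plots the relationship (here, the difference) directly. The matplotlib sketch below uses synthetic data and is not drawn from the paper.

```python
# Illustration of the three comparison designs named in the taxonomy:
# juxtaposition, superposition, and explicit encoding, for two synthetic series.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
a, b = np.sin(x), np.sin(x) + 0.3 * np.cos(3 * x)

fig, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(14, 3))
ax1.plot(x, a); ax1.set_title("Juxtaposition: A")
ax2.plot(x, b); ax2.set_title("Juxtaposition: B")
ax3.plot(x, a, label="A"); ax3.plot(x, b, label="B")
ax3.legend(); ax3.set_title("Superposition")
ax4.plot(x, b - a); ax4.axhline(0, color="gray", lw=0.5)
ax4.set_title("Explicit encoding: B - A")
plt.tight_layout(); plt.show()
```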

  2. Visual comparison for information visualization

    KAUST Repository

    Gleicher, M.

    2011-09-07

    Data analysis often involves the comparison of complex objects. With the ever increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we see that all designs are assembled from the building blocks of juxtaposition, superposition and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools. © The Author(s) 2011.

  3. Advancements to Visualization Control System (VCS, part of UV-CDAT), a Visualization Package Designed for Climate Scientists

    Science.gov (United States)

    Lipsa, D.; Chaudhary, A.; Williams, D. N.; Doutriaux, C.; Jhaveri, S.

    2017-12-01

    Climate Data Analysis Tools (UV-CDAT, https://uvcdat.llnl.gov) is a data analysis and visualization software package developed at Lawrence Livermore National Laboratory and designed for climate scientists. Core components of UV-CDAT include: 1) the Community Data Management System (CDMS), which provides I/O support and a data model for climate data; 2) CDAT Utilities (GenUtil), which processes data using spatial and temporal averaging and statistical functions; and 3) the Visualization Control System (VCS) for interactive visualization of the data. VCS is a Python visualization package built primarily for climate scientists; however, because of its generality and breadth of functionality, it can be a useful tool for other scientific applications. VCS provides 1D, 2D and 3D visualization functions such as scatter plots and line graphs for 1D data; boxfill, meshfill, isofill and isoline for 2D scalar data; vector glyphs and streamlines for 2D vector data; and 3d_scalar and 3d_vector for 3D data. Specifically for climate data, our plotting routines include projections, Skew-T plots and Taylor diagrams. While VCS provided a user-friendly API, its previous implementation relied on a slow-performing vector-graphics (Cairo) backend, which is suitable for smaller datasets and non-interactive graphics. The LLNL and Kitware team has added a new backend to VCS that uses the Visualization Toolkit (VTK). VTK is one of the most popular open-source, multi-platform scientific visualization libraries, written in C++. Its use of OpenGL and a pipeline-processing architecture results in a highly performant VCS library, and its multitude of available data formats and visualization algorithms makes it easy to adopt new visualization methods and new data formats in VCS. In this presentation, we describe recent contributions to VCS that include new visualization plots, continuous integration testing using Conda and CircleCI, tutorials and examples using Jupyter notebooks as well as
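
    A minimal VCS usage sketch, under the assumption of a standard UV-CDAT installation, is shown below; the file name and variable ("clt") follow the UV-CDAT sample-data convention, and exact call details may vary between releases.

```python
# Minimal VCS sketch (assumes a UV-CDAT/cdms2 environment): open a climate file,
# take one time slice, and render it with the boxfill graphics method.
import cdms2
import vcs

f = cdms2.open("clt.nc")          # hypothetical CDMS-readable climate file
clt = f("clt")[0]                 # first time step: a (lat, lon) slice of cloudiness

canvas = vcs.init()               # create a VCS canvas
boxfill = vcs.createboxfill()     # one of the 2D graphics methods listed above
canvas.plot(clt, boxfill)         # render the slice as a boxfill map
canvas.png("clt_boxfill")         # write the plot to clt_boxfill.png
f.close()
```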

  4. Attentional and visual demands for sprint performance in non-fatigued and fatigued conditions: reliability of a repeated sprint test

    Directory of Open Access Journals (Sweden)

    Diercks Ron L

    2010-05-01

    Full Text Available Abstract Background: Physical performance measures are widely used to assess physical function, providing information about physiological and biomechanical aspects of motor performance. However, they do not provide insight into the attentional and visual demands for motor performance. A figure-of-eight sprint test was therefore developed to measure the attentional and visual demands for repeated-sprint performance. The aims of the study were: 1) to assess the test-retest reliability of the figure-of-eight sprint test, and 2) to study the attentional and visual demands for sprint performance in a non-fatigued and a fatigued condition. Methods: Twenty-seven healthy athletes were included in the study. To determine test-retest reliability, a subgroup of 19 athletes performed the figure-of-eight sprint test twice. The figure-of-eight sprint test consisted of nine 30-second sprints. The sprint test consisted of three test parts: sprinting without any restriction, with an attention-demanding task, and with restricted vision. Increases in sprint times with the attention-demanding task or restricted vision are reflective of the attentional and visual demands for sprinting. Intraclass correlation coefficients (ICCs) and mean differences between test and retest with 95% confidence limits (CL) were used to assess test-retest reliability. Repeated-measures ANOVAs were used for comparisons between the sprint times and fatigue measurements of the test parts in both a non-fatigued and a fatigued condition. Results: The figure-of-eight sprint test showed good test-retest reliability, with ICCs ranging from 0.75 to 0.94 (95% CL: 0.40-0.98). Zero lay within the 95% CL of the mean differences, indicating that no bias existed between sprint performance at test and retest. Sprint times during the test parts with the attention-demanding task (P = 0.01) and restricted vision (P ... Conclusions: High ICCs and the absence of systematic variation indicate good test-retest reliability of the figure-of-eight sprint test.
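
    For readers who want to reproduce the reliability statistic, the sketch below computes test-retest ICCs with the pingouin package; the package choice and the sprint-time values are assumptions for illustration, not the authors' data or software.

```python
# Sketch of a test-retest ICC computation of the kind reported above, using
# pingouin (an assumption; the study's statistical software is not stated).
# The sprint times are invented for a few athletes at test and retest.
import pandas as pd
import pingouin as pg

data = pd.DataFrame({
    "athlete": [1, 1, 2, 2, 3, 3, 4, 4],
    "session": ["test", "retest"] * 4,
    "sprint_time_s": [30.2, 30.5, 28.9, 29.1, 31.4, 31.0, 29.8, 30.0],
})

icc = pg.intraclass_corr(data=data, targets="athlete",
                         raters="session", ratings="sprint_time_s")
print(icc)   # table of ICC variants with confidence intervals
```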

  5. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    Directory of Open Access Journals (Sweden)

    Magnus eAlm

    2015-07-01

    Full Text Available Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged (50-60 years) adults, with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. Contrastingly, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood, recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females' AV perceptual strategy towards more visually dominated responses.

  6. NCL - a workhorse for data analysis and visualization in climate research

    Science.gov (United States)

    Meier-Fleischer, Karin; Boettinger, Michael; Haley, Mary

    2015-04-01

    Coupled earth system models are used for simulating the climate system. In the context of international climate assessment and model intercomparison projects, extensive simulation data sets are produced and have to be analyzed. Supercomputers and storage systems are used over years to perform the simulations, but the data analysis usually takes even more time. Different classes of tools are used for the analysis and visualization of these big data sets. In this PICO, we focus on NCL (NCAR Command Language), an interpreted language developed at the National Center for Atmospheric Research in Boulder, Colorado. NCL allows performing standard analysis operations and producing graphical output in batch mode loosely coupled with the simulations. Thus, for visual monitoring of their simulations, many of DKRZ's users have integrated NCL into their modeling workflows. We present application examples from the tutorial we have developed that focus on typical visualizations of climate model data. Since NCL supports rectilinear, curvilinear and even unstructured grids, it is well prepared to facilitate the visualization of today's climate model data without prior interpolation. NCL includes many features common to modern programming languages, such as types, variables, operators, expressions, conditional statements, loops, and functions and procedures. It provides more than 600 built-in functions specifically for climate model data, facilitating analysis of scalar and vector quantities as well as numerous state-of-the-art 2D visualization methods (contour lines, filled areas, markers, wind arrows or barbs, weather symbols and many more). Important for Earth scientists is also NCL's capability to display data together with the corresponding map background and a choice of the map projection.
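
    NCL code itself is not reproduced here; as a rough Python analogue of the workflow the abstract describes (open a model file, draw filled contours of a field over a map background in a chosen projection), the following xarray/cartopy sketch may help readers place NCL among more familiar tools. File and variable names are placeholders.

```python
# Python analogue (not NCL) of a typical climate-model plot: filled contours of a
# 2D field drawn over coastlines in a chosen map projection.
import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

ds = xr.open_dataset("model_output.nc")          # hypothetical climate model file
tas = ds["tas"].isel(time=0)                     # e.g. near-surface air temperature

ax = plt.axes(projection=ccrs.Robinson())        # map projection for the background
tas.plot.contourf(ax=ax, transform=ccrs.PlateCarree(), levels=15)
ax.coastlines()
plt.title("Filled-contour map (NCL-style plot, drawn with Python)")
plt.savefig("tas_map.png", dpi=150)
```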

  7. Minimal effects of visual memory training on the auditory performance of adult cochlear implant users

    Science.gov (United States)

    Oba, Sandra I.; Galvin, John J.; Fu, Qian-Jie

    2014-01-01

    Auditory training has been shown to significantly improve cochlear implant (CI) users’ speech and music perception. However, it is unclear whether post-training gains in performance were due to improved auditory perception or to generally improved attention, memory and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory were assessed in ten CI users before, during, and after training with a non-auditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Post-training gains were much smaller with the non-auditory VDS training than observed in previous auditory training studies with CI users. The results suggest that post-training gains observed in previous studies were not solely attributable to improved attention or memory, and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception. PMID:23516087

  8. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    Science.gov (United States)

    Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland

    2011-01-01

    Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully-designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
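
    VisNet itself is not reproduced here, but the core self-organising-map update that each layer's learning is described in terms of can be sketched in a few lines: find the best-matching unit for an input, then pull that unit and its grid neighbours toward the input. Grid size, learning rate, and neighbourhood width below are arbitrary illustrative values.

```python
# Core Kohonen SOM update step (illustrative only; not the VisNet architecture).
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 64            # 10x10 map of 64-d weight vectors
weights = rng.normal(size=(grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def som_step(x, weights, lr=0.1, sigma=2.0):
    # Best-matching unit = grid cell whose weight vector is closest to the input x.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # A Gaussian neighbourhood around the BMU scales the update of every unit.
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)
    return weights

for _ in range(1000):                        # train on random stand-in inputs
    weights = som_step(rng.normal(size=dim), weights)
```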

  9. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    Directory of Open Access Journals (Sweden)

    James Matthew Tromans

    Full Text Available Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully-designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.

  10. Visual cognition

    Science.gov (United States)

    Cavanagh, Patrick

    2011-01-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated, part of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719

  11. Visual cognition.

    Science.gov (United States)

    Cavanagh, Patrick

    2011-07-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label "visual cognition" is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated, part of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Comparison between visual half-field performance and cerebral blood flow changes as indicators of language dominance.

    Science.gov (United States)

    Krach, S; Chen, L M; Hartje, W

    2006-03-01

    The determination of hemispheric language dominance (HLD) can be accomplished in two ways. One approach relies on hemispheric differences in cerebral blood flow velocity (CBFV) changes during language activity, while the other approach makes use of performance differences between the left and right visual field when verbal stimuli are presented in a tachistoscopic visual field paradigm. Since both methodologically different approaches claim to assess functional HLD, it seems plausible to expect that the respective laterality indices (LI) would correspond. To test this expectation we measured language lateralisation in 58 healthy right-handed, left-handed, and ambidextrous subjects with both approaches. CBFV changes were recorded with functional transcranial Doppler sonography (fTCD). We applied a lexical decision task with bilateral visual field presentation of abstract nouns and, in addition, a task of mental word generation. In the lexical decision task, a highly significant right visual field advantage was observed for number of correct responses and reaction times, while at the same time and contrary to expectation the increase of CBFV was significantly higher in the right than left hemisphere. During mental word generation, the acceleration of CBF was significantly higher in the left hemisphere. A comparison between individual LI derived from CBF measurement during mental word generation and from visual field performances in the lexical decision task showed a moderate correspondence in classifying the subjects' HLD. However, the correlation between the corresponding individual LI was surprisingly low and not significant. The results are discussed with regard to the issue of a limited reliability of behavioural LI on the one hand and the possibility of a fundamental difference between the behavioural and the physiological indicators of laterality on the other hand.

  13. Influence of combined visual and vestibular cues on human perception and control of horizontal rotation

    Science.gov (United States)

    Zacharias, G. L.; Young, L. R.

    1981-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a nonlinear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
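
    A toy discrete-time version of the complementary-channel idea may make the structure concrete: the visual cue is low-pass filtered, the vestibular cue is passed through the complementary high-pass, and the two channels sum to the velocity estimate. The first-order filters, crossover frequency, and synthetic signals below are illustrative and are not the paper's fitted model.

```python
# Illustrative complementary filter: LP(visual) + HP(vestibular), where the two
# filters share a crossover so their gains sum to one. All values are invented.
import numpy as np

dt, f_c = 0.01, 0.1                                  # step [s], crossover frequency [Hz]
alpha = dt / (dt + 1.0 / (2 * np.pi * f_c))          # first-order smoothing factor

t = np.arange(0, 60, dt)
true_vel = np.sin(2 * np.pi * 0.05 * t)              # slow self-rotation velocity [rad/s]
rng = np.random.default_rng(0)
visual = true_vel + 0.05 * rng.standard_normal(t.size)                  # reliable at low freq
vestibular = true_vel + np.cumsum(0.002 * rng.standard_normal(t.size))  # drifts at low freq

lp_visual = lp_vestibular = 0.0
estimate = np.zeros_like(t)
for i in range(t.size):
    lp_visual += alpha * (visual[i] - lp_visual)          # visual pathway: low-pass
    lp_vestibular += alpha * (vestibular[i] - lp_vestibular)
    hp_vestibular = vestibular[i] - lp_vestibular         # vestibular pathway: high-pass
    estimate[i] = lp_visual + hp_vestibular               # channels sum complementarily
```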

  14. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    Science.gov (United States)

    Parke, F. I.

    1981-01-01

    Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of parameters of the fluid flow (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in the values of these parameters is important for comprehending and checking the generated data, for identifying the regions of interest in the flow, and for effectively communicating information about the flow to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied, and the results are presented.

  15. Performance of the Sellick maneuver significantly improves when residents and trained nurses use a visually interactive guidance device in simulation

    International Nuclear Information System (INIS)

    Connor, Christopher W; Saffary, Roya; Feliz, Eddy

    2013-01-01

    We examined the proper performance of the Sellick maneuver, a maneuver used to reduce the risk of aspiration of stomach contents during induction of general anesthesia, using a novel device that measures and visualizes the force applied to the cricoid cartilage using thin-film force sensitive resistors in a form suitable for in vivo use. Performance was tested in three stages with twenty anaesthesiology residents and twenty trained operating room nurses. Firstly, subjects applied force to the cricoid cartilage as was customary to them. Secondly, subjects used the device to guide the application of that force. Thirdly, subjects were again asked to perform the manoeuvre without visual guidance. Each test lasted 1 min and the amount of force applied was measured throughout. Overall, the Sellick maneuver was often not applied properly, with large variance between individual subjects. Performance and inter-subject consistency improved to a very highly significant degree when subjects were able to use the device as a visual guide (p < 0.001). Subsequent significant improvements in performances during the last, unguided test demonstrated that the device initiated learning. (paper)

  16. Performance of the Sellick maneuver significantly improves when residents and trained nurses use a visually interactive guidance device in simulation

    Energy Technology Data Exchange (ETDEWEB)

    Connor, Christopher W; Saffary, Roya; Feliz, Eddy [Department of Anesthesiology Boston Medical Center, Boston, MA (United States)

    2013-12-15

    We examined the proper performance of the Sellick maneuver, a maneuver used to reduce the risk of aspiration of stomach contents during induction of general anesthesia, using a novel device that measures and visualizes the force applied to the cricoid cartilage using thin-film force sensitive resistors in a form suitable for in vivo use. Performance was tested in three stages with twenty anaesthesiology residents and twenty trained operating room nurses. Firstly, subjects applied force to the cricoid cartilage as was customary to them. Secondly, subjects used the device to guide the application of that force. Thirdly, subjects were again asked to perform the manoeuvre without visual guidance. Each test lasted 1 min and the amount of force applied was measured throughout. Overall, the Sellick maneuver was often not applied properly, with large variance between individual subjects. Performance and inter-subject consistency improved to a very highly significant degree when subjects were able to use the device as a visual guide (p < 0.001). Subsequent significant improvements in performances during the last, unguided test demonstrated that the device initiated learning. (paper)

  17. Visual Representation in GENESIS as a tool for Physical Modeling, Sound Synthesis and Musical Composition

    OpenAIRE

    Villeneuve, Jérôme; Cadoz, Claude; Castagné, Nicolas

    2015-01-01

    The motivation of this paper is to highlight the importance of visual representations for artists when modeling and simulating mass-interaction physical networks in the context of sound synthesis and musical composition. GENESIS is a musician-oriented software environment for sound synthesis and musical composition. However, despite this orientation, a substantial amount of effort has been put into building a rich variety of tools based on static or dynamic visual representations of models an...

  18. The "Carbon Data Explorer": Web-Based Space-Time Visualization of Modeled Carbon Fluxes

    Science.gov (United States)

    Billmire, M.; Endsley, K. A.

    2014-12-01

    The visualization of and scientific "sense-making" from large datasets varying in both space and time is a challenge, one that is still being addressed in a number of different fields. The approaches taken thus far are often specific to a given academic field due to the unique questions that arise in different disciplines; however, basic approaches such as geographic maps and time series plots are still widely useful. The proliferation of model estimates of increasing size and resolution further complicates what ought to be a simple workflow: model some geophysical phenomen(on), obtain results and measure uncertainty, organize and display the data, make comparisons across trials, and share findings. A new tool is in development that is intended to help scientists with the latter parts of that workflow. The tentatively-titled "Carbon Data Explorer" (http://spatial.mtri.org/flux-client/) enables users to access carbon science and related spatio-temporal science datasets over the web. All that is required to access multiple interactive visualizations of carbon science datasets is a compatible web browser and an internet connection. While the application targets atmospheric and climate science datasets, particularly spatio-temporal model estimates of carbon products, the software architecture takes an agnostic approach to the data to be visualized. Any atmospheric, biophysical, or geophysical quantity that varies in space and time, including one or more measures of uncertainty, can be visualized within the application. Within the web application, users have seamless control over a flexible and consistent symbology for map-based visualizations and plots. Where time series data are represented by one or more data "frames" (e.g. a map), users can animate the data. In the "coordinated view," users can make direct comparisons between different frames and different models or model runs, facilitating inter-model comparisons and assessments of spatio-temporal variability. Map

  19. Visual defects in a mouse model of fetal alcohol spectrum disorder.

    Science.gov (United States)

    Lantz, Crystal L; Pulimood, Nisha S; Rodrigues-Junior, Wandilson S; Chen, Ching-Kang; Manhaes, Alex C; Kalatsky, Valery A; Medina, Alexandre Esteves

    2014-01-01

    Alcohol consumption during pregnancy can lead to a multitude of neurological problems in offspring, varying from subtle behavioral changes to severe mental retardation. These alterations are collectively referred to as Fetal Alcohol Spectrum Disorders (FASD). Early alcohol exposure can strongly affect the visual system and children with FASD can exhibit an amblyopia-like pattern of visual acuity deficits even in the absence of optical and oculomotor disruption. Here, we test whether early alcohol exposure can lead to a disruption in visual acuity, using a model of FASD to mimic alcohol consumption in the last months of human gestation. To accomplish this, mice were exposed to ethanol (5 g/kg i.p.) or saline on postnatal days (P) 5, 7, and 9. Two to three weeks later we recorded visually evoked potentials to assess spatial frequency detection and contrast sensitivity, conducted electroretinography (ERG) to further assess visual function and imaged retinotopy using optical imaging of intrinsic signals. We observed that animals exposed to ethanol displayed spatial frequency acuity curves similar to controls. However, ethanol-treated animals showed a significant deficit in contrast sensitivity. Moreover, ERGs revealed a marked decrease in both a- and b-wave amplitudes, and optical imaging suggests that both elevation and azimuth maps in ethanol-treated animals have a 10-20° greater map tilt compared to saline-treated controls. Overall, our findings suggest that binge alcohol drinking restricted to the last months of gestation in humans can lead to marked deficits in visual function.

  20. Visual deficits in a mouse model of Fetal alcohol spectrum disorders

    Directory of Open Access Journals (Sweden)

    Crystal L Lantz

    2014-10-01

    Full Text Available Alcohol consumption during pregnancy can lead to a multitude of neurological problems in offspring, varying from subtle behavioral changes to severe mental retardation. These alterations are collectively referred to as Fetal Alcohol Spectrum Disorders (FASD). Early alcohol exposure can strongly affect the visual system, and children with FASD can exhibit an amblyopia-like pattern of visual acuity deficits even in the absence of optical and oculomotor disruption. Here we test whether early alcohol exposure can lead to a disruption in visual acuity, using a model of FASD to mimic alcohol consumption in the last months of human gestation. To accomplish this, mice were exposed to ethanol (5 g/kg i.p.) or saline on postnatal days (P) 5, 7 and 9. Two to three weeks later we recorded visually evoked potentials (VEPs) to assess spatial frequency detection and contrast sensitivity, conducted electroretinography (ERG) to further assess visual function, and imaged retinotopy using optical imaging of intrinsic signals. We observed that animals exposed to ethanol displayed spatial frequency acuity curves similar to controls. However, ethanol-treated animals showed a significant deficit in contrast sensitivity. Moreover, ERGs revealed a marked decrease in both a- and b-wave amplitudes, and optical imaging suggests that both elevation and azimuth maps in ethanol-treated animals have a 10-20° greater map tilt compared to saline-treated controls. Overall, our findings suggest that binge alcohol drinking restricted to the last months of gestation in humans can lead to marked deficits in visual function.