WorldWideScience

Sample records for visual performance model

  1. The performance & flow visualization studies of three-dimensional (3-D) wind turbine blade models

    Science.gov (United States)

    Sutrisno, Prajitno, Purnomo, W., Setyawan B.

    2016-06-01

    Recently, studies on the design of 3-D wind turbine blades have received less attention even though 3-D blade products are widely sold. In contrast, advanced studies of 3-D helicopter blade tips have been conducted rigorously. Studies in wind turbine blade modeling mostly assume that blade spanwise sections behave as independent two-dimensional airfoils, implying that there is no exchange of momentum in the spanwise direction. Moreover, flow visualization experiments are infrequently conducted. Therefore, a modeling study of wind turbine blades combined with flow visualization experiments is needed to obtain a better understanding. The purpose of this study is to investigate the performance of 3-D wind turbine blade models with backward and forward sweep and to verify the flow patterns using flow visualization. In this research, the blade models are constructed based on the twist and chord distributions following Schmitz's formula. Forward and backward sweep are added to the rotating blades. The added sweep is expected to enhance or diminish outward flow disturbance or the spanwise propagation of stall development on the blade surfaces, yielding a better blade design. Some combinations, i.e., blades with backward sweep, provide a better 3-D favorable rotational force for the rotor system. The performance of the 3-D wind turbine system model is measured by a torque meter employing Prony's braking system. Furthermore, the 3-D flow patterns around the rotating blade models are investigated by applying a tuft-visualization technique to study the appearance of laminar, separated, and boundary-layer flow patterns surrounding the 3-dimensional blade system.
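    As a rough illustration of the design step mentioned above, the sketch below computes chord and twist distributions from the textbook form of Schmitz's formula; the rotor dimensions, tip speed ratio, design lift coefficient, and design angle of attack are assumed values for illustration, not taken from the paper.

    ```python
    import numpy as np

    def schmitz_blade(r, R, B=3, tsr=6.0, cl_design=1.0, alpha_design_deg=7.0):
        """Chord and twist distributions from the textbook Schmitz formulas.

        r: array of local radii [m], R: rotor radius [m], B: blade count,
        tsr: design tip speed ratio, cl_design: design lift coefficient,
        alpha_design_deg: design angle of attack [deg].
        """
        lam_r = tsr * r / R                           # local tip speed ratio
        phi1 = np.arctan2(1.0, lam_r)                 # inflow angle of the undisturbed wind
        chord = (16.0 * np.pi * r / (B * cl_design)) * np.sin(phi1 / 3.0) ** 2
        twist = np.degrees(2.0 / 3.0 * phi1) - alpha_design_deg   # local twist [deg]
        return chord, twist

    # Example: 10 spanwise stations on a 0.30 m model rotor (values illustrative only)
    r = np.linspace(0.05, 0.30, 10)
    chord, twist = schmitz_blade(r, R=0.30)
    ```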

  2. A Closed-Loop Model of Operator Visual Attention, Situation Awareness, and Performance Across Automation Mode Transitions.

    Science.gov (United States)

    Johnson, Aaron W; Duda, Kevin R; Sheridan, Thomas B; Oman, Charles M

    2017-03-01

    This article describes a closed-loop, integrated human-vehicle model designed to help understand the underlying cognitive processes that influenced changes in subject visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment. Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator's estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness. We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator's estimates of system states. The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention ≤3.6% for all simulator instruments. The model's predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data. Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates. Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator's visual attention during control mode transitions can produce reallocations in situation awareness of certain states.
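    A minimal sketch of the attention-allocation idea described above, assuming a simple linear SEEV-style scoring with an added uncertainty term and illustrative (unfitted) weights; the actual model in the paper integrates this with optimal-control state estimation and the crossover model.

    ```python
    import numpy as np

    def seev_attention(salience, effort, expectancy, value, uncertainty=None,
                       weights=(1.0, 1.0, 1.0, 1.0, 1.0)):
        """Toy SEEV-style attention allocation over instruments (areas of interest).

        Each argument is an array with one entry per instrument. The uncertainty
        term is our reading of the paper's modification (attention drawn to states
        the operator is uncertain about); the weights are illustrative, not fitted.
        """
        s, ef, ex, v, u = weights
        score = (s * np.asarray(salience) - ef * np.asarray(effort)
                 + ex * np.asarray(expectancy) + v * np.asarray(value))
        if uncertainty is not None:
            score = score + u * np.asarray(uncertainty)
        score = np.clip(score, 0.0, None)
        return score / score.sum()       # predicted share of dwell time per instrument

    # Example: three hypothetical instruments (altitude, attitude, fuel)
    p = seev_attention(salience=[0.2, 0.5, 0.1], effort=[0.1, 0.1, 0.3],
                       expectancy=[0.6, 0.8, 0.2], value=[0.9, 0.9, 0.4],
                       uncertainty=[0.3, 0.1, 0.6])
    ```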

  3. Visual Perceptual Learning and Models.

    Science.gov (United States)

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.

  4. From visual performance to visual ergonomics: A personal historic view

    NARCIS (Netherlands)

    Vos, J.J.

    2009-01-01

    During the author's active time in vision research a change in attitude took place from 'visual performance' as a criterion to justify higher light levels, to 'visual ergonomics' as a more comprehensive approach to improve visual work conditions. Some personal memories of this transition period may

  5. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes.

    Science.gov (United States)

    Kireeva, Natalia V; Ovchinnikova, Svetlana I; Kuznetsov, Sergey L; Kazennov, Andrey M; Tsivadze, Aslan Yu

    2014-02-01

    This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aim to learn an appropriate distance/similarity function for the case studies considered. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering and retrieval tasks. The paper describes the application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; in silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results are illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metrics affected nearest neighbor relations and the descriptor space.
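    A minimal sketch of the general workflow described above: learn a supervised distance metric, then classify in the learned space. The paper uses the large margin nearest neighbors (LMNN) learner; here scikit-learn's NeighborhoodComponentsAnalysis is substituted as a readily available relative, and the descriptor matrix and toxicity labels are random placeholders.

    ```python
    import numpy as np
    from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))       # stand-in for molecular descriptors
    y = rng.integers(0, 2, size=200)     # stand-in for toxic / non-toxic labels

    # Learned metric feeding a nearest-neighbor classifier and an SVM
    metric_knn = make_pipeline(NeighborhoodComponentsAnalysis(random_state=0),
                               KNeighborsClassifier(n_neighbors=5))
    metric_svm = make_pipeline(NeighborhoodComponentsAnalysis(random_state=0),
                               SVC(kernel="rbf", gamma="scale"))

    print(cross_val_score(metric_knn, X, y, cv=5).mean())
    print(cross_val_score(metric_svm, X, y, cv=5).mean())
    ```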

  6. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    Science.gov (United States)

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
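    The relation between visual acuity and display resolution can be made concrete with the usual back-of-the-envelope calculation below; the 30 cycles/degree figure for 20/20 vision and the Nyquist factor of 2 are textbook approximations, not requirements taken from the paper.

    ```python
    import math

    def pixels_per_degree_required(decimal_acuity=1.0, samples_per_cycle=2.0):
        """Display sampling needed so the pilot's acuity, not the display, is limiting.

        Decimal acuity 1.0 (20/20) resolves roughly 30 cycles/degree (1 arcmin detail),
        and Nyquist requires at least 2 samples per cycle.
        """
        cycles_per_degree = 30.0 * decimal_acuity
        return cycles_per_degree * samples_per_cycle

    def horizontal_pixels(fov_deg, decimal_acuity=1.0):
        # Pixel count across one channel of a given horizontal field of view
        return math.ceil(fov_deg * pixels_per_degree_required(decimal_acuity))

    # e.g. a 60-degree-wide channel viewed by a 20/20 observer -> ~3600 pixels across
    print(horizontal_pixels(60.0))
    ```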

  7. Peripheral visual performance enhancement by neurofeedback training.

    Science.gov (United States)

    Nan, Wenya; Wan, Feng; Lou, Chin Ian; Vai, Mang I; Rosa, Agostinho

    2013-12-01

    Peripheral visual performance is an important ability for everyone, and a positive inter-individual correlation is found between the peripheral visual performance and the alpha amplitude during the performance test. This study investigated the effect of alpha neurofeedback training on the peripheral visual performance. A neurofeedback group of 13 subjects finished 20 sessions of alpha enhancement feedback within 20 days. The peripheral visual performance was assessed by a new dynamic peripheral visual test on the first and last training day. The results revealed that the neurofeedback group showed significant enhancement of the peripheral visual performance as well as the relative alpha amplitude during the peripheral visual test. It was not the case in the non-neurofeedback control group, which performed the tests within the same time frame as the neurofeedback group but without any training sessions. These findings suggest that alpha neurofeedback training was effective in improving peripheral visual performance. To the best of our knowledge, this is the first study to show evidence for performance improvement in peripheral vision via alpha neurofeedback training.
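    For readers unfamiliar with the dependent measure, the sketch below estimates relative alpha amplitude (alpha-band power over broadband power) from a single EEG channel using Welch's method; the band limits and the synthetic signal are illustrative assumptions, and the study's exact feedback computation may differ.

    ```python
    import numpy as np
    from scipy.signal import welch

    def relative_alpha(eeg, fs, band=(8.0, 12.0), total=(1.0, 30.0)):
        """Relative alpha: alpha-band power divided by broadband power."""
        f, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
        mask_a = (f >= band[0]) & (f <= band[1])
        mask_t = (f >= total[0]) & (f <= total[1])
        return psd[mask_a].sum() / psd[mask_t].sum()

    # Example with synthetic data: 60 s of "EEG" at 250 Hz containing a 10 Hz rhythm
    fs = 250
    t = np.arange(0, 60, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    print(relative_alpha(eeg, fs))
    ```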

  8. Visual Intelligent Robot Performance Monitor Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a Visual Intelligent Robot Performance Monitor (VIRPM) that will help crew members maintain situation awareness of robot performance more...

  9. Novel mathematical neural models for visual attention

    DEFF Research Database (Denmark)

    Li, Kang

    Visual attention has been extensively studied in psychology, but some fundamental questions remain controversial. We focus on two questions in this study. First, we investigate how a neuron in visual cortex responds to multiple stimuli inside the receptive field, described by either a response...... for the visual attention theories and spiking neuron models for single spike trains. Statistical inference and model selection are performed and various numerical methods are explored. The designed methods also give a framework for neural coding under visual attention theories. We conduct both analysis on real...... system, supported by simulation study. Finally, we present the decoding of multiple temporal stimuli under these visual attention theories, also in a realistic biophysical situation with simulations....

  10. Performance Visualization for Hearing-Impaired Students

    Directory of Open Access Journals (Sweden)

    Rumi Hiraga

    2005-10-01

    Full Text Available We have been teaching computer music to hearing-impaired students of Tsukuba College of Technology for six years. Although the students have hearing difficulties, almost all of them show an interest in music. Thus, this has been a challenging class that aims to turn their weakness into enjoyment. We thought that performance visualization would be a good way for them to keep their interest in music and to try cooperative performances with others. In this paper, we describe our computer music class and the results of our preliminary experiment on the effectiveness of visual assistance. Though it was not a complete experiment with a sufficient number of subjects, the results showed that the show-ahead and selected-note-only types of performance visualization were needed, depending on the purpose of the visual aid.

  11. High performance visual display for HENP detectors

    CERN Document Server

    McGuigan, M; Spiletic, J; Fine, V; Nevski, P

    2001-01-01

    A high-end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detectors. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on the BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicts with the many graphics development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detectors and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactiv...

  12. Modular target acquisition model & visualization tool

    NARCIS (Netherlands)

    Bijl, P.; Hogervorst, M.A.; Vos, W.K.

    2008-01-01

    We developed a software framework for image-based simulation models in the chain: scene-atmosphere-sensor-image enhancement-display-human observer: EO-VISTA. The goal is to visualize the steps and to quantify (Target Acquisition) task performance. EO-VISTA provides an excellent means to

  13. Enhanced visual performance in obsessive compulsive personality disorder.

    Science.gov (United States)

    Ansari, Zohreh; Fadardi, Javad Salehi

    2016-12-01

    Visual performance is considered a commanding modality in human perception. We tested whether people with obsessive-compulsive personality disorder (OCPD) perform differently on visual performance tasks than people without OCPD. One hundred ten students of Ferdowsi University of Mashhad and non-student participants were tested with the Structured Clinical Interview for DSM-IV Axis II Personality Disorders (SCID-II), among whom 18 (mean age = 29.55; SD = 5.26; 84% female) met the criteria for OCPD classification; controls were 20 persons (mean age = 27.85; SD = 5.26; female = 84%) who did not meet the OCPD criteria. Both groups were tested on a modified Flicker task for two dimensions of visual performance (i.e., visual acuity: detecting the location of change, complexity, and size; and visual contrast sensitivity). The OCPD group responded more accurately on pairs related to size, complexity, and contrast, but spent more time detecting a change on pairs related to complexity and contrast. The OCPD individuals seem to have more accurate visual performance than non-OCPD controls. The findings support the relationship between personality characteristics and visual performance within the framework of a top-down processing model. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  14. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Gollaz Morales, Jose Alejandro

    2012-09-01

    Measuring the visual closeness of 3-D models is an important issue for different problems, and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing the normal difference as a parameter in a weighted metric in Metro's sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive, since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.
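    The following sketch illustrates the core idea of weighting geometric distance by normal difference between sampled surfaces; the nearest-neighbour scheme, the weights, and the random sample points are assumptions for illustration rather than the paper's Metro-based implementation.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def weighted_closeness(points_a, normals_a, points_b, normals_b,
                           w_pos=1.0, w_norm=0.5):
        """One-sided visual-closeness estimate between two sampled surfaces.

        For every sample of model A, find the nearest sample of model B in 3-D,
        then penalise the angular difference of the unit normals.
        """
        tree = cKDTree(points_b)
        dist, idx = tree.query(points_a)                                 # Euclidean part
        ndiff = 1.0 - np.einsum("ij,ij->i", normals_a, normals_b[idx])   # 1 - cos(angle)
        combined = w_pos * dist + w_norm * ndiff
        return combined.max(), combined.mean()       # Hausdorff-like max and mean

    # Usage with random stand-in samples (unit normals assumed)
    rng = np.random.default_rng(1)
    pa, pb = rng.random((500, 3)), rng.random((500, 3))
    na = rng.normal(size=(500, 3)); na /= np.linalg.norm(na, axis=1, keepdims=True)
    nb = rng.normal(size=(500, 3)); nb /= np.linalg.norm(nb, axis=1, keepdims=True)
    print(weighted_closeness(pa, na, pb, nb))
    ```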

  15. Heart Performance Determination by Visualization in Larval Fishes: Influence of Alternative Models for Heart Shape and Volume

    Directory of Open Access Journals (Sweden)

    Prescilla Perrichon

    2017-07-01

    Full Text Available Understanding cardiac function in developing larval fishes is crucial for assessing their physiological condition and overall health. Cardiac output measurements in transparent fish larvae and other vertebrates have long been made by analyzing videos of the beating heart and modeling this structure using a conventional simple prolate spheroid shape model. However, the larval fish heart changes shape during early development and subsequent maturation, yet no consideration has been made of the effect of different heart geometries on cardiac output estimation. The present study assessed the validity of three different heart models (the "standard" prolate spheroid model as well as a cylinder and a cone tip + cylinder model) applied to digital images of complete cardiac cycles in larval mahi-mahi and red drum. The inherent error of each model was determined to allow for more precise calculation of stroke volume and cardiac output. The conventional prolate spheroid and cone tip + cylinder models yielded significantly different stroke volume values at 56 hpf in red drum and from 56 to 104 hpf in mahi. End-diastolic and stroke volumes modeled by just a simple cylinder shape were 30–50% higher compared to the conventional prolate spheroid. However, when these values of stroke volume were multiplied by heart rate to calculate cardiac output, no significant differences between models emerged because of considerable variability in heart rate. Essentially, the conventional prolate spheroid shape model provides the simplest measurement with the lowest variability of stroke volume and cardiac output. However, assessment of heart function—especially if stroke volume is the focus of the study—should consider larval heart shape, with different models being applied on a species-by-species and developmental stage-by-stage basis for the best estimation of cardiac output.
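    A small sketch of the three volume models and the downstream stroke-volume and cardiac-output calculation; the dimensions, heart rate, and the split between the conical tip and the cylindrical body are illustrative assumptions, not measurements from the study.

    ```python
    import math

    def prolate_spheroid(length, diameter):
        """Conventional prolate spheroid: V = (pi/6) * L * D**2."""
        return math.pi / 6.0 * length * diameter ** 2

    def cylinder(length, diameter):
        return math.pi * (diameter / 2.0) ** 2 * length

    def cone_tip_cylinder(length, diameter, tip_fraction=0.3):
        """Cylinder with a conical tip; how the length splits between the two
        parts is an assumption here, not taken from the paper."""
        l_tip = tip_fraction * length
        r = diameter / 2.0
        return math.pi * r ** 2 * (length - l_tip) + math.pi * r ** 2 * l_tip / 3.0

    def cardiac_output(model, l_dia, d_dia, l_sys, d_sys, heart_rate_bpm):
        """Stroke volume = end-diastolic minus end-systolic volume; CO = SV * HR."""
        sv = model(l_dia, d_dia) - model(l_sys, d_sys)
        return sv, sv * heart_rate_bpm

    # Illustrative larval-scale dimensions (micrometres) and heart rate (beats/min)
    for m in (prolate_spheroid, cylinder, cone_tip_cylinder):
        print(m.__name__, cardiac_output(m, 250, 120, 200, 90, 180))
    ```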

  16. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This brochure describes a system dynamics (SD) simulation framework that supports an end-to-end analysis workflow optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  17. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This presentation describes a system dynamics (SD) simulation framework that supports an end-to-end analysis workflow optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  18. Empathy and visual perspective-taking performance.

    Science.gov (United States)

    Mattan, Bradley D; Rotshtein, Pia; Quinn, Kimberly A

    2016-01-01

    This study examined the extent to which visual perspective-taking performance is modulated by trait-level empathy. Participants completed a third-person visual perspective-taking task in which they judged the perspectives of two simultaneously presented avatars, designated "Self" and "Other." Depending on the trial, these avatars either held the same view (i.e., congruent) or a different view (i.e., incongruent). Analyses focused on the relationship between empathy and two perspective-taking phenomena: Selection between competing perspectives (i.e., perspective-congruence effects) and prioritization of the Self avatar's perspective. Empathy was related to improved overall performance on this task and a reduced cost of selecting between conflicting perspectives (i.e., smaller perspective-congruence effects). This effect was asymmetric, with empathy (i.e., empathic concern) levels predicting reduced interference from a conflicting perspective, especially when adopting the Self (vs. Other) avatar's perspective. Taken together, these results highlight the importance of the self-other distinction and mental flexibility components of empathy.

  19. A model for visual memory encoding.

    Directory of Open Access Journals (Sweden)

    Rodolphe Nenert

    Full Text Available Memory encoding engages multiple concurrent and sequential processes. While the individual processes involved in successful encoding have been examined in many studies, a sequence of events and the importance of modules associated with memory encoding has not been established. For this reason, we sought to perform a comprehensive examination of the network for memory encoding using data-driven methods and to determine the directionality of the information flow in order to build a viable model of visual memory encoding. Forty healthy controls ages 19-59 performed a visual scene encoding task. FMRI data were preprocessed using SPM8 and then processed using independent component analysis (ICA) with the reliability of the identified components confirmed using ICASSO as implemented in GIFT. The directionality of the information flow was examined using Granger causality analyses (GCA). All participants performed the fMRI task well above the chance level (>90% correct on both active and control conditions) and the post-fMRI testing recall revealed correct memory encoding at 86.33 ± 5.83%. ICA identified involvement of components of five different networks in the process of memory encoding, and the GCA allowed for the directionality of the information flow to be assessed, from visual cortex via the ventral stream to the attention network and then to the default mode network (DMN). Two additional networks involved in this process were the cerebellar and the auditory-insular network. This study provides evidence that successful visual memory encoding is dependent on multiple modules that are part of other networks that are only indirectly related to the main process. This model may help to identify the node(s) of the network that are affected by specific disease processes and explain the presence of memory encoding difficulties in patients in whom focal or global network dysfunction exists.

  20. A model for visual memory encoding.

    Science.gov (United States)

    Nenert, Rodolphe; Allendorfer, Jane B; Szaflarski, Jerzy P

    2014-01-01

    Memory encoding engages multiple concurrent and sequential processes. While the individual processes involved in successful encoding have been examined in many studies, a sequence of events and the importance of modules associated with memory encoding has not been established. For this reason, we sought to perform a comprehensive examination of the network for memory encoding using data-driven methods and to determine the directionality of the information flow in order to build a viable model of visual memory encoding. Forty healthy controls ages 19-59 performed a visual scene encoding task. FMRI data were preprocessed using SPM8 and then processed using independent component analysis (ICA) with the reliability of the identified components confirmed using ICASSO as implemented in GIFT. The directionality of the information flow was examined using Granger causality analyses (GCA). All participants performed the fMRI task well above the chance level (>90% correct on both active and control conditions) and the post-fMRI testing recall revealed correct memory encoding at 86.33 ± 5.83%. ICA identified involvement of components of five different networks in the process of memory encoding, and the GCA allowed for the directionality of the information flow to be assessed, from visual cortex via the ventral stream to the attention network and then to the default mode network (DMN). Two additional networks involved in this process were the cerebellar and the auditory-insular network. This study provides evidence that successful visual memory encoding is dependent on multiple modules that are part of other networks that are only indirectly related to the main process. This model may help to identify the node(s) of the network that are affected by specific disease processes and explain the presence of memory encoding difficulties in patients in whom focal or global network dysfunction exists.
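    A minimal sketch of the directionality step described above, using the standard Granger causality test from statsmodels on two synthetic time courses standing in for ICA-derived network components; the lag order and the data are illustrative.

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(2)
    n = 300
    visual = rng.normal(size=n)
    # "attention" lags "visual" by 2 samples, plus noise (toy coupling)
    attention = 0.7 * np.roll(visual, 2) + 0.3 * rng.normal(size=n)

    # Column order is [effect, candidate cause]: test whether 'visual'
    # Granger-causes 'attention'; the call prints F-tests for each lag.
    data = np.column_stack([attention, visual])
    results = grangercausalitytests(data, maxlag=3)
    ```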

  1. What Research Says About: Visual Attributes and Skilled Motor Performance.

    Science.gov (United States)

    Isaacs, Larry D.

    Dynamic visual acuity (DVA) is defined as the performer's ability to visually discriminate parts of an object when there is relative motion between the target and the performer. According to research findings, this visual attribute may play a key role in motor-task performance. Researchers have found a significant relationship between DVA and…

  2. Gambling on visual performance: neural correlates of metacognitive choice between visual lotteries

    Science.gov (United States)

    Wu, Shih-Wei; Delgado, Mauricio R.; Maloney, Laurence T.

    2015-01-01

    A lottery is a list of mutually exclusive outcomes together with their associated probabilities of occurrence. Decision making is often modeled as choices between lotteries and—in typical research on decision under risk—the probabilities are given to the subject explicitly in numerical form. In this study, we examined a lottery decision task where the probabilities of receiving various rewards are contingent on the subjects' own visual performance in a random-dot-motion (RDM) discrimination task, a metacognitive or second-order judgment. While there is a large literature concerning the RDM task and there is also a large literature on decision under risk, little is known about metacognitive decisions when the source of uncertainty is visual. Using fMRI with humans, we found distinct fronto-striatal and fronto-parietal networks representing subjects' estimates of their own performance, reward value, and the expected value (EV) of the lotteries. The fronto-striatal network includes the dorsomedial prefrontal cortex and the ventral striatum, involved in reward processing and value-based decision-making. The fronto-parietal network includes the intraparietal sulcus and the ventrolateral prefrontal cortex, which have been shown to be involved in the accumulation of sensory evidence during visual decision making and in metacognitive judgments on visual performance. These results demonstrate that—while valuation of performance-based lotteries involves a common fronto-striatal valuation network—an additional network unique to the estimation of task-related performance is recruited for the integration of probability and reward information when probability is inferred from visual performance. PMID:26388724

  3. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We...... show that the performance of linear models is reduced for certain scan labelings/categorizations in this data set, while the nonlinear models provide more flexibility. We show that the sensitivity map can be used to visualize nonlinear versions of kernel logistic regression, the kernel Fisher...

  4. High Performance Visualization using Query-Driven Visualization and Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Campbell, Scott; Dart, Eli; Shalf, John; Stockinger, Kurt; Wu, Kesheng

    2006-06-15

    Query-driven visualization and analytics is a unique approach to high-performance visualization that offers new capabilities for knowledge discovery and hypothesis testing. The new capabilities, akin to finding needles in haystacks, are the result of combining technologies from the fields of scientific visualization and scientific data management. This approach is crucial for rapid data analysis and visualization in the petascale regime. This article describes how query-driven visualization is applied to a hero-sized network traffic analysis problem.

  5. Relationship between student profile, tool use, participation, and academic performance with the use of Augmented Reality technology for visualized architecture models

    OpenAIRE

    Fonseca Escudero, David; Martí Audí, Nuria; Redondo Domínguez, Ernesto; Navarro Delgado, Isidro; Sánchez Riera, Alberto

    2014-01-01

    In this study, we describe the implementation and evaluation of an experiment with Augmented Reality (AR) technology in the visualization of 3D models and the presentation of architectural projects by students of architecture and building engineering. The proposal is based on the premise that the technology used in AR, such as mobile devices, is familiar to the student. When used in a collaborative manner, the technology is able to achieve a greater level of direct engagement with the propose...

  6. Modeling human comprehension of data visualizations

    Energy Technology Data Exchange (ETDEWEB)

    Matzen, Laura E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Haass, Michael Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Divis, Kristin Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wilson, Andrew T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.

  7. Visual fatigue modeling and analysis for stereoscopic video

    Science.gov (United States)

    Choi, Jaeseob; Kim, Donghyun; Choi, Sunghwan; Sohn, Kwanghoon

    2012-01-01

    In this paper, we propose a visual fatigue prediction method for stereoscopic video. We select visual fatigue factor candidates and determine the equations for each. The candidates are then classified into their principal components, and the validity of each is confirmed using principal component analysis. Visual fatigue is predicted using multiple regression with subjective visual fatigue. In order to determine the best model, we select the visual fatigue factors that have sufficient significance in terms of subjective fatigue according to the stepwise method. The predicted visual fatigue score is presented as a linear combination of the selected visual fatigue factors. Consequently, the proposed algorithm provides more reliable performance in terms of correlation with the subjective test results compared with a conventional algorithm.

  8. The mysterious cognitive abilities of bees: why models of visual processing need to consider experience and individual differences in animal performance.

    Science.gov (United States)

    Dyer, Adrian G

    2012-02-01

    Vision is one of the most important modalities for the remote perception of biologically important stimuli. Insects like honeybees and bumblebees use their colour and spatial vision to solve tasks, such as navigation, or to recognise rewarding flowers during foraging. Bee vision is one of the most intensively studied animal visual systems, and several models have been developed to describe its function. These models have largely assumed that bee vision is determined by mechanistic hard-wired circuits, with little or no consideration for behavioural plasticity or cognitive factors. However, recent work on both bee colour vision and spatial vision suggests that cognitive factors are indeed a very significant factor in determining what a bee sees. Individual bumblebees trade-off speed for accuracy, and will decide on which criteria to prioritise depending upon contextual information. With continued visual experience, honeybees can learn to use non-elemental processing, including configural mechanisms and rule learning, and can access top-down information to enhance learning of sophisticated, novel visual tasks. Honeybees can learn delayed-matching-to-sample tasks and the rules governing this decision making, and even transfer learned rules between different sensory modalities. Finally, bees can learn complex categorisation tasks and display numerical processing abilities for numbers up to and including four. Taken together, this evidence suggests that bees do have a capacity for sophisticated visual behaviours that fit a definition for cognition, and thus simple elemental models of bee vision need to take account of how a variety of factors may influence the type of results one may gain from animal behaviour experiments.

  9. Children's Performance on Two Tasks of Visual and Tactual Discrimination.

    Science.gov (United States)

    Cronin, Virginia

    1982-01-01

    Reports the results of two experiments dealing with children's visual and tactual performance. In the first task, after several presentations of a series, the tactual group made almost errorless discriminations. But with memory demands, tactual performance became poorer than visual performance. Found a large developmental difference. (JAC)

  10. Model of visual search and selection time in linear menus

    OpenAIRE

    Bailly, G.; Oulasvirta, A.; Brumby, D. P.; Howes, A.

    2014-01-01

    This paper presents a novel mathematical model for visual search and selection time in linear menus. Assuming two visual search strategies, serial and directed, and a pointing sub-task, it captures the change of performance with five factors: 1) menu length, 2) menu organization, 3) target position, 4) absence/presence of target, and 5) practice. The novel aspect is that the model is expressed as probability density distribution of gaze, which allows for deriving total selection time. We pres...

  11. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
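    For illustration, the sketch below reproduces the kind of Bland-Altman agreement analysis mentioned above for two modality conditions; the scores are synthetic placeholders, not the study's data.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(4)
    auditory = rng.normal(20, 3, size=30)
    visual = auditory - 1.5 + rng.normal(0, 1.0, size=30)   # visual slightly lower, as reported

    mean_scores = (auditory + visual) / 2.0
    diff_scores = auditory - visual
    bias = diff_scores.mean()
    loa = 1.96 * diff_scores.std(ddof=1)          # 95% limits of agreement

    plt.scatter(mean_scores, diff_scores)
    for y in (bias, bias + loa, bias - loa):      # bias line and the two limits
        plt.axhline(y, linestyle="--")
    plt.xlabel("Mean of auditory and visual scores")
    plt.ylabel("Auditory minus visual")
    plt.show()
    ```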

  12. Macular pigment and visual performance in glare: benefits for photostress recovery, disability glare, and visual discomfort.

    Science.gov (United States)

    Stringham, James M; Garcia, Paul V; Smith, Peter A; McLin, Leon N; Foutch, Brian K

    2011-09-22

    One theory of macular pigment's (MP) presence in the fovea is that it improves visual performance in glare. This study sought to determine the effect of MP level on three aspects of visual performance in glare: photostress recovery, disability glare, and visual discomfort. Twenty-six subjects participated in the study. Spatial profiles of MP optical density were assessed with heterochromatic flicker photometry. Glare was delivered via high-bright-white LEDs. For the disability glare and photostress recovery portions of the experiment, the visual task consisted of correct identification of a 1° Gabor patch's orientation. Visual discomfort during the glare presentation was assessed with a visual discomfort rating scale. Pupil diameter was monitored with an infrared (IR) camera. MP level correlated significantly with all the outcome measures. Higher MP optical densities (MPODs) resulted in faster photostress recovery times, lower disability glare contrast thresholds, and less visual discomfort (P = 0.002). Smaller pupil diameter during glare presentation significantly correlated with higher visual discomfort ratings (P = 0.037). MP correlates with three aspects of visual performance in glare. Unlike previous studies of MP and glare, the present study used free-viewing conditions, in which effects of iris pigmentation and pupil size could be accounted for. The effects described, therefore, can be extended more confidently to real-world, practical visual performance benefits. Greater iris constriction resulted (paradoxically) in greater visual discomfort. This finding may be attributable to the neurobiologic mechanism that mediates the pain elicited by light.

  13. Visualization and Analysis of Climate Simulation Performance Data

    Science.gov (United States)

    Röber, Niklas; Adamidis, Panagiotis; Behrens, Jörg

    2015-04-01

    Visualization is the key process of transforming abstract (scientific) data into a graphical representation to aid in the understanding of the information hidden within the data. Climate simulation data sets are typically quite large, time varying, and consist of many different variables sampled on an underlying grid. A large variety of climate models - and sub-models - exist to simulate various aspects of the climate system. Generally, one is mainly interested in the physical variables produced by the simulation runs, but model developers are also interested in performance data measured along with these simulations. Climate simulation models are carefully developed complex software systems, designed to run in parallel on large HPC systems. An important goal is to utilize the entire hardware as efficiently as possible, that is, to distribute the workload as evenly as possible among the individual components. This is a very challenging task, and detailed performance data, such as timings, cache misses etc., have to be used to locate and understand performance problems in order to optimize the model implementation. Furthermore, the correlation of performance data to the processes of the application and the sub-domains of the decomposed underlying grid is vital when addressing communication and load imbalance issues. High resolution climate simulations are carried out on tens to hundreds of thousands of cores, thus yielding a vast amount of profiling data, which cannot be analyzed without appropriate visualization techniques. This PICO presentation displays and discusses the ICON simulation model, which is jointly developed by the Max Planck Institute for Meteorology and the German Weather Service in partnership with DKRZ. The visualization and analysis of the model's performance data allow us to optimize and fine-tune the model, as well as to understand its execution on the HPC system. We show and discuss our workflow, as well as present new ideas and

  14. VISUAL ART TEACHERS AND PERFORMANCE ASSESSMENT ...

    African Journals Online (AJOL)

    Charles

    ... affect their competence in using assessment strategies in their classroom. The study employs a qualitative research design, an aspect of descriptive survey research aimed at depicting the situation of visual art classroom assessment practices. ... Encourage further the spirit of enquiry and creativity in teachers; 3. Provide ...

  15. Automated visualization of rule-based models

    Science.gov (United States)

    Tapia, Jose-Juan; Faeder, James R.

    2017-01-01

    Frameworks such as BioNetGen, Kappa and Simmune use “reaction rules” to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models. PMID:29131816

  16. Automated visualization of rule-based models.

    Science.gov (United States)

    Sekar, John Arul Prakash; Tapia, Jose-Juan; Faeder, James R

    2017-11-01

    Frameworks such as BioNetGen, Kappa and Simmune use "reaction rules" to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models.

  17. Automated visualization of rule-based models.

    Directory of Open Access Journals (Sweden)

    John Arul Prakash Sekar

    2017-11-01

    Full Text Available Frameworks such as BioNetGen, Kappa and Simmune use "reaction rules" to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models.
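    The atom-rule graph described above can be illustrated with a toy bipartite network in which one node set holds molecular sites/states ("atoms") and the other holds the rules that read or write them; the three rules below are invented placeholders, not BioNetGen syntax or a published model.

    ```python
    import networkx as nx

    # Hypothetical rules with the atoms they read (require) and write (modify)
    rules = {
        "R1_bind":          {"reads": ["A.site", "B.site"], "writes": ["A-B bond"]},
        "R2_phosphorylate": {"reads": ["A-B bond"],         "writes": ["A.P_state"]},
        "R3_unbind":        {"reads": ["A.P_state"],        "writes": ["A.site", "B.site"]},
    }

    g = nx.DiGraph()
    for rule, deps in rules.items():
        g.add_node(rule, kind="rule")
        for atom in deps["reads"]:
            g.add_node(atom, kind="atom")
            g.add_edge(atom, rule)        # atom regulates (is required by) the rule
        for atom in deps["writes"]:
            g.add_node(atom, kind="atom")
            g.add_edge(rule, atom)        # rule produces or modifies the atom

    print(nx.is_bipartite(g.to_undirected()), g.number_of_nodes(), g.number_of_edges())
    ```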

  18. A Digital Simulation of Psychological Correlates of a Model of the Human Visual System.

    Science.gov (United States)

    ... model’s ability to exhibit Gestalt grouping principles and visual illusions. Psychological correlates were obtained by comparing human visual performance ... to the computer model’s performance; the correlation factors were high. Patterns containing Gestalt grouping principles and various visual illusions ...

  19. Using Visualization Techniques in Multilayer Traffic Modeling

    Science.gov (United States)

    Bragg, Arnold

    We describe visualization techniques for multilayer traffic modeling - i.e., traffic models that span several protocol layers, and traffic models of protocols that cross layers. Multilayer traffic modeling is challenging, as one must deal with disparate traffic sources; control loops; the effects of network elements such as IP routers; cross-layer protocols; asymmetries in bandwidth, session lengths, and application behaviors; and an enormous number of complex interactions among the various factors. We illustrate by using visualization techniques to identify relationships, transformations, and scaling; to smooth simulation and measurement data; to examine boundary cases, subtle effects and interactions, and outliers; to fit models; and to compare models with others that have fewer parameters. Our experience suggests that visualization techniques can provide practitioners with extraordinary insight about complex multilayer traffic effects and interactions that are common in emerging next-generation networks.

  20. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition.

    Directory of Open Access Journals (Sweden)

    Na Shu

    Full Text Available Humans can easily understand other people's actions through their visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper aiming at automatic action recognition. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cells tuned to different speeds and orientations in time, for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field caused by lateral connections of spiking neuron networks in V1, we propose a surround suppressive operator to further process spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider the characteristic of the neural code: a mean motion map based on analysis of spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with the state-of-the-art approaches demonstrate the superior performance of the proposed model.
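    A sketch of the kind of three-dimensional (x, y, t) Gabor kernel referred to above, in the generic drifting-grating form commonly used for V1 simple-cell models; the parameterisation and values are assumptions, not the paper's exact filters.

    ```python
    import numpy as np

    def spatiotemporal_gabor(size=32, frames=16, orientation_deg=0.0,
                             spatial_freq=0.1, speed=1.0, sigma=5.0, tau=4.0):
        """Build a 3-D (t, y, x) Gabor kernel tuned to one orientation and speed."""
        half = size // 2
        y, x = np.mgrid[-half:half, -half:half]
        t = np.arange(frames) - frames // 2
        theta = np.deg2rad(orientation_deg)
        xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along preferred orientation
        env_xy = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
        kernel = np.empty((frames, size, size))
        for k, tk in enumerate(t):
            phase = 2 * np.pi * spatial_freq * (xr - speed * tk)   # grating drifts `speed` px/frame
            kernel[k] = env_xy * np.exp(-tk ** 2 / (2 * tau ** 2)) * np.cos(phase)
        return kernel

    # Responses are obtained by correlating a video clip (t, y, x) with one kernel
    # per orientation/speed channel, e.g. via scipy.ndimage.convolve.
    kernel = spatiotemporal_gabor(orientation_deg=45.0, speed=1.5)
    ```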

  1. Refractive surgery, optical aberrations, and visual performance.

    Science.gov (United States)

    Applegate, R A; Howland, H C

    1997-01-01

    Visual optics is taking on new clinical significance. Given that current refractive procedures can and do induce large amounts of higher order ocular aberration that often affects the patient's daily visual function and quality of life, we can no longer relegate the considerations of ocular aberrations to academic discussions. Instead, we need to move toward minimizing (not increasing) the eye's aberrations at the same time we are correcting the eye's spherical and cylindrical refractive error. These are exciting times in refractive surgery, which need to be tempered by the fact that after all the research, clinical, and marketing dust settles, the level to which we improve the quality of the retinal image will be guided by the trade-off between cost and the improvement in the quality of life that refractive surgery offers.

  2. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Lund, Torben Ellegaard

    2011-01-01

    that the performance of linear models is reduced for certain scan labelings/categorizations in this data set, while the nonlinear models provide more flexibility. We show that the sensitivity map can be used to visualize nonlinear versions of kernel logistic regression, the kernel Fisher discriminant, and the SVM...

  3. Visual and Computational Modelling of Minority Games

    Directory of Open Access Journals (Sweden)

    Robertas Damaševičius

    2017-02-01

    Full Text Available The paper analyses the Minority Game and focuses on analysis and computational modelling of several variants (variable payoff, coalition-based and ternary voting) of the Minority Game using the UAREI (User-Action-Rule-Entities-Interface) model. UAREI is a model for formal specification of software gamification, and the UAREI visual modelling language is a language used for graphical representation of game mechanics. The UAREI model also provides an embedded executable modelling framework to evaluate how the rules of the game will work for the players in practice. We demonstrate the flexibility of the UAREI model for modelling different variants of Minority Game rules for game design.
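    For readers unfamiliar with the game being modelled, the sketch below simulates the basic Challet-Zhang Minority Game (an odd number of agents repeatedly pick one of two sides and the minority wins); the variable-payoff, coalition and ternary-voting variants analysed in the paper are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N, M, S, T = 101, 3, 2, 500             # agents, memory bits, strategies/agent, rounds
    strategies = rng.choice([-1, 1], size=(N, S, 2 ** M))   # action table per m-bit history
    scores = np.zeros((N, S))
    history = rng.integers(0, 2 ** M)       # encoded m-bit history of past minority sides
    attendance = []

    for _ in range(T):
        best = scores.argmax(axis=1)                         # each agent plays its best strategy
        actions = strategies[np.arange(N), best, history]
        a_total = actions.sum()
        attendance.append(a_total)
        minority = -np.sign(a_total)                         # side chosen by fewer agents wins
        scores += (strategies[:, :, history] == minority)    # virtual scoring of all strategies
        bit = 1 if minority > 0 else 0
        history = ((history << 1) | bit) % (2 ** M)          # slide the m-bit window

    print(np.var(attendance))                # volatility, the usual quantity of interest
    ```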

  4. Expressing Model Constraints Visually with VMQL

    DEFF Research Database (Denmark)

    Störrle, Harald

    2011-01-01

    OCL is the de facto standard language for expressing constraints and queries on UML models. However, OCL expressions are very difficult to create, understand, and maintain, even with the sophisticated tool support now available. In this paper, we propose to use the Visual Model Query Language (VMQL)...

  5. Stereoscopic visual fatigue assessment and modeling

    Science.gov (United States)

    Wang, Danli; Wang, Tingting; Gong, Yue

    2014-03-01

    Evaluation of stereoscopic visual fatigue is one of the focuses of user experience research. It is measured by either subjective or objective methods. Objective measures are preferred for their capability to quantify the degree of human visual fatigue without being affected by individual variation. However, little research has been conducted on the integration of objective indicators, or on the sensitivity of each objective indicator in reflecting subjective fatigue. This paper proposes a simple yet effective method to evaluate visual fatigue more objectively. The stereoscopic viewing process is divided into a series of sessions, after each of which viewers rate their visual fatigue with subjective scores (SS) according to a five-grade scale, followed by tests of the punctum maximum accommodation (PMA) and visual reaction time (VRT). Throughout the entire viewing process, their eye movements are recorded by an infrared camera. The pupil size (PS) and percentage of eyelid closure over the pupil over time (PERCLOS) are extracted from the videos processed by the algorithm. Based on this method, an experiment with 14 subjects was conducted to assess visual fatigue induced by 3D images on a polarized 3D display. The experiment consisted of 10 sessions (5 min per session), each containing the same 75 images displayed in random order. The results show that PMA, VRT and PERCLOS are the most efficient indicators of subjective visual fatigue, and finally a predictive model is derived from stepwise multiple regression.
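    A minimal sketch of the final modelling step described above: a simple forward-selection loop over the objective indicators (PMA, VRT, PERCLOS) followed by a linear regression onto the subjective scores. The data are random placeholders and the selection criterion is an assumption, not the paper's exact stepwise procedure.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(50, 3))                      # columns stand in for PMA, VRT, PERCLOS
    ss = X @ np.array([0.6, 0.3, 0.5]) + 0.2 * rng.normal(size=50)   # subjective scores

    names, selected, remaining = ["PMA", "VRT", "PERCLOS"], [], [0, 1, 2]
    best = -np.inf
    while remaining:
        # Try adding each remaining indicator; keep the one that helps most
        scores = [(cross_val_score(LinearRegression(), X[:, selected + [j]], ss, cv=5).mean(), j)
                  for j in remaining]
        score, j = max(scores)
        if score <= best:        # stop when adding a factor no longer improves the fit
            break
        best = score
        selected.append(j)
        remaining.remove(j)

    model = LinearRegression().fit(X[:, selected], ss)
    print([names[j] for j in selected], model.coef_, model.intercept_)
    ```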

  6. FLIP for FLAG model visualization

    Energy Technology Data Exchange (ETDEWEB)

    Wooten, Hasani Omar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-15

    A graphical user interface has been developed for FLAG users. FLIP (FLAG Input deck Parser) provides users with an organized view of FLAG models and a means for efficiently and easily navigating and editing nodes, parameters, and variables.

  7. High Performance Computing and Visualization Infrastructure for Simultaneous Parallel Computing and Parallel Visualization Research

    Science.gov (United States)

    2016-11-09

    ...compute nodes with 340 cores, 20 NVIDIA GPGPUs, and 14 Intel Co-Phi processors. The visualization infrastructure is a next-generation tiled Mini CAVE for semi-immersive visualization. ...gas engines, long-range acoustic propagation simulations, numerical modeling of nonlinear nanophotonic devices, and molecular dynamics simulations

  8. Human performance at sea assessed by dynamic visual acuity

    NARCIS (Netherlands)

    Bos, J.E.; Hogervorst, M.A.; Munnoch, K.; Perrault, D.

    2008-01-01

    Human performance may, among other things, depend on the ability to visually discern (small) objects. This ability is generally quantified under static conditions by means of the visual acuity, a measure of the minimum angle resolved by the eye. However, when the subject himself, his or her eyes,

  9. Variational adaptive image denoising model based on human visual system

    Science.gov (United States)

    Li, Wenjun; Liu, Chanjuan; Zou, Hailin

    2011-11-01

    A variational adaptive image denoising model based on the human visual system is proposed by introducing into the Total Variation (TV) model a control parameter p that determines the diffusion intensity. The model adaptively selects the value of p according to the human-visual-system noise visibility of each pixel, which makes the diffusion intensity close to edges smaller than in regions far away from edges. Because this method is more consistent with human perception, the improvement in image quality can be perceived intuitively. Numerical experiments show that the proposed method overcomes the staircase effect and removes noise while preserving significant image details, achieving better performance.
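
    The sketch below gives a rough numerical illustration, under simplifying assumptions, of this kind of adaptive diffusion: a per-pixel exponent p modulates TV-like smoothing so that edges diffuse less than flat regions. It is not the authors' formulation; the edge indicator, step sizes, and fidelity weight are arbitrary choices.

```python
# Adaptive TV-like diffusion sketch: p near 1 at edges (edge preserving),
# p near 2 in flat regions (stronger isotropic smoothing). Illustrative only.
import numpy as np

def adaptive_tv_denoise(f, n_iter=100, dt=0.1, lam=0.05, eps=1e-6):
    u = f.copy()
    for _ in range(n_iter):
        # Forward differences for the gradient.
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        grad_mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        # Simple edge indicator mapping gradient magnitude to p in (1, 2].
        p = 2.0 - grad_mag / (grad_mag + grad_mag.mean())
        w = grad_mag ** (p - 2.0)                    # diffusion weight |grad u|^(p-2)
        # Divergence of (w * grad u) via backward differences.
        div = (w * ux - np.roll(w * ux, 1, axis=1)
               + w * uy - np.roll(w * uy, 1, axis=0))
        u = u + dt * (div + lam * (f - u))           # diffusion + data fidelity
    return u

noisy = np.clip(np.random.default_rng(2).normal(0.5, 0.1, (64, 64)), 0, 1)
denoised = adaptive_tv_denoise(noisy)
print("residual std:", round(float((denoised - noisy).std()), 4))
```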

  10. Integrating Visualizations into Modeling NEST Simulations

    Science.gov (United States)

    Nowke, Christian; Zielasko, Daniel; Weyers, Benjamin; Peyser, Alexander; Hentschel, Bernd; Kuhlen, Torsten W.

    2015-01-01

    Modeling large-scale spiking neural networks showing realistic biological behavior in their dynamics is a complex and tedious task. Since these networks consist of millions of interconnected neurons, their simulation produces an immense amount of data. In recent years it has become possible to simulate even larger networks. However, solutions to assist researchers in understanding the simulation's complex emergent behavior by means of visualization are still lacking. While developing tools to partially fill this gap, we encountered the challenge of integrating these tools easily into the neuroscientists' daily workflow. To understand what makes this so challenging, we looked into the workflows of our collaborators and analyzed how they use visualizations to solve their daily problems. We identified two major issues: first, the analysis process can rapidly change focus, which requires switching the visualization tool that assists in the current problem domain. Second, because of the heterogeneous data that result from simulations, researchers want to relate different data modalities in order to investigate them effectively. Since a monolithic application model, processing and visualizing all data modalities and reflecting all combinations of possible workflows in a holistic way, is most likely impossible to develop and to maintain, a software architecture that offers specialized visualization tools which run simultaneously and can be linked together to reflect the current workflow is a more feasible approach. To this end, we have developed a software architecture that allows neuroscientists to integrate visualization tools more closely into their modeling tasks. In addition, it forms the basis for semantic linking of different visualizations to reflect the current workflow. In this paper, we present this architecture and substantiate the usefulness of our approach with common use cases we encountered in our collaborative work. PMID:26733860

  11. Performance improvements from imagery: evidence that internal visual imagery is superior to external visual imagery for slalom performance

    OpenAIRE

    Nichola Callow; Ross Roberts; Lew Hardy; Dan Jiang; Martin G Edwards

    2013-01-01

    We report three experiments investigating the hypothesis that use of internal visual imagery (IVI) would be superior to external visual imagery (EVI) for the performance of different slalom-based motor tasks. In Experiment 1, three groups of participants (IVI, EVI, and a control group) performed a driving-simulation slalom task. The IVI group achieved significantly quicker lap times than EVI and the control group. In Experiment 2, participants performed a downhill running slalom task under bo...

  12. Lateralized visual behavior in bottlenose dolphins (Tursiops truncatus) performing audio-visual tasks: the right visual field advantage.

    Science.gov (United States)

    Delfour, F; Marten, K

    2006-01-10

    Analyzing cerebral asymmetries in various species helps in understanding brain organization. The left and right sides of the brain (lateralization) are involved in different cognitive and sensory functions. This study focuses on dolphin visual lateralization as expressed by spontaneous eye preference when performing a complex cognitive task; we examine lateralization when processing different visual stimuli displayed on an underwater touch-screen (two-dimensional figures, three-dimensional figures and dolphin/human video sequences). Three female bottlenose dolphins (Tursiops truncatus) were presented with a 2-, 3-, or 4-choice visual/auditory discrimination problem, without any food reward: the subjects had to correctly match visual and acoustic stimuli together. In order to visualize and to touch the underwater target, the dolphins had to come close to the touch-screen and to position themselves using monocular vision (left or right eye) and/or binocular naso-ventral vision. The results showed an ability to associate simple visual forms and auditory information using an underwater touch-screen. Moreover, the subjects showed a spontaneous tendency to use monocular vision. Contrary to previous findings, our results did not clearly demonstrate right eye preference in spontaneous choice. However, the individuals' scores of correct answers were correlated with right eye vision, demonstrating the advantage of this visual field in visual information processing and suggesting a left hemispheric dominance. We also demonstrated that the nature of the presented visual stimulus does not seem to have any influence on the animals' monocular vision choice.

  13. Modeling, analysis, and visualization of anisotropy

    CERN Document Server

    Özarslan, Evren; Hotz, Ingrid

    2017-01-01

    This book focuses on the modeling, processing and visualization of anisotropy, irrespective of the context in which it emerges, using state-of-the-art mathematical tools. As such, it differs substantially from conventional reference works, which are centered on a particular application. It covers the following topics: (i) the geometric structure of tensors, (ii) statistical methods for tensor field processing, (iii) challenges in mapping neural connectivity and structural mechanics, (iv) processing of uncertainty, and (v) visualizing higher-order representations. In addition to original research contributions, it provides insightful reviews. This multidisciplinary book is the sixth in a series that aims to foster scientific exchange between communities employing tensors and other higher-order representations of directionally dependent data. A significant number of the chapters were co-authored by the participants of the workshop titled Multidisciplinary Approaches to Multivalued Data: Modeling, Visualization,...

  14. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  15. Learning expressive percussion performance under different visual feedback conditions

    OpenAIRE

    Brandmeyer, A.; Timmers, R.; Sadakata, M.; Desain, P.

    2010-01-01

    A study was conducted to test the effect of two different forms of real-time visual feedback on expressive percussion performance. Conservatory percussion students performed imitations of recorded teacher performances while receiving either high-level feedback on the expressive style of their performances, low-level feedback on the timing and dynamics of the performed notes, or no feedback. The high-level feedback was based on a Bayesian analysis of the performances, while the low-level feedb...

  16. Modeling Human Aesthetic Perception of Visual Textures

    NARCIS (Netherlands)

    Thumfart, Stefan; Jacobs, Richard H. A. H.; Lughofer, Edwin; Eitzinger, Christian; Cornelissen, Frans W.; Groissboeck, Werner; Richter, Roland

    2011-01-01

    Texture is extensively used in areas such as product design and architecture to convey specific aesthetic information. Using the results of a psychological experiment, we model the relationship between computational texture features and aesthetic properties of visual textures. Contrary to previous

  17. Numerical modeling of eastern connecticut's visual resources

    Science.gov (United States)

    Daniel L. Civco

    1979-01-01

    A numerical model capable of accurately predicting the preference for landscape photographs of selected points in eastern Connecticut is presented. A function of the social attitudes expressed toward thirty-two salient visual landscape features serves as the independent variable in predicting preferences. A technique for objectively assigning adjectives to landscape...

  18. Learning expressive percussion performance under different visual feedback conditions

    NARCIS (Netherlands)

    Brandmeyer, A.; Timmers, R.; Sadakata, M.; Desain, P.W.M.

    2011-01-01

    A study was conducted to test the effect of two different forms of real-time visual feedback on expressive percussion performance. Conservatory percussion students performed imitations of recorded teacher performances while receiving either high-level feedback on the expressive style of their

  19. Photovoltaic array performance model.

    Energy Technology Data Exchange (ETDEWEB)

    Kratochvil, Jay A.; Boyson, William Earl; King, David L.

    2004-08-01

    This document summarizes the equations and applications associated with the photovoltaic array performance model developed at Sandia National Laboratories over the last twelve years. Electrical, thermal, and optical characteristics for photovoltaic modules are included in the model, and the model is designed to use hourly solar resource and meteorological data. The versatility and accuracy of the model have been validated for flat-plate modules (all technologies) and for concentrator modules, as well as for large arrays of modules. Applications include system design and sizing, 'translation' of field performance measurements to standard reporting conditions, system performance optimization, and real-time comparison of measured versus expected system performance.
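
    To make the idea concrete, the sketch below shows a toy single-point calculation in the spirit of such a performance model: module power estimated from irradiance and cell temperature. The equation and coefficients are generic illustrations, not the Sandia model's actual equation set or parameters.

```python
# Toy module-power estimate (illustrative coefficients, not the Sandia model).
def module_power(irradiance_wm2, cell_temp_c,
                 pmp_ref=300.0,        # rated max power at reference conditions (W)
                 e_ref=1000.0,         # reference irradiance (W/m^2)
                 t_ref=25.0,           # reference cell temperature (C)
                 gamma=-0.0045):       # power temperature coefficient (1/C)
    """Estimate DC max power for one module at the given operating conditions."""
    return pmp_ref * (irradiance_wm2 / e_ref) * (1.0 + gamma * (cell_temp_c - t_ref))

# Example: a 40-module array on a clear afternoon.
p_module = module_power(irradiance_wm2=850.0, cell_temp_c=48.0)
print("module:", round(p_module, 1), "W   array:", round(40 * p_module / 1000, 2), "kW")
```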

  20. Can visual arts training improve physician performance?

    Science.gov (United States)

    Katz, Joel T; Khoshbin, Shahram

    2014-01-01

    Clinical educators use medical humanities as a means to improve patient care by training more self-aware, thoughtful, and collaborative physicians. We present three examples of integrating fine arts - a subset of medical humanities - into the preclinical and clinical training as models that can be adapted to other medical environments to address a wide variety of perceived deficiencies. This novel teaching method has promise to improve physician skills, but requires further validation.

  1. Adaptive Performance-Constrained in Situ Visualization of Atmospheric Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dorier, Matthieu; Sisneros, Roberto; Bautista Gomez, Leonard; Peterka, Tom; Orf, Leigh; Rahmani, Lokman; Antoniu, Gabriel; Bouge, Luc

    2016-09-12

    While many parallel visualization tools now provide in situ visualization capabilities, the trend has been to feed such tools with large amounts of unprocessed output data and let them render everything at the highest possible resolution. This leads to an increased run time of simulations that still have to complete within a fixed-length job allocation. In this paper, we tackle the challenge of enabling in situ visualization under performance constraints. Our approach shuffles data across processes according to its content and filters out part of it in order to feed a visualization pipeline with only a reorganized subset of the data produced by the simulation. Our framework leverages fast, generic evaluation procedures to score blocks of data, using information theory, statistics, and linear algebra. It monitors its own performance and adapts dynamically to achieve appropriate visual fidelity within predefined performance constraints. Experiments on the Blue Waters supercomputer with the CM1 simulation show that our approach enables a 5× speedup with respect to the initial visualization pipeline and is able to meet performance constraints.
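
    The sketch below illustrates, in simplified form, the general idea of scoring data blocks and forwarding only the highest-scoring fraction to a visualization pipeline. The entropy-based score and the fixed budget are placeholders and do not reproduce the paper's framework.

```python
# Score simulation data blocks and keep only the most "interesting" subset.
import numpy as np

def entropy_score(block, bins=32):
    """Shannon entropy of a block's value histogram, used as an interest score."""
    hist, _ = np.histogram(block, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
blocks = [rng.normal(0, 1 + 0.2 * i, size=(16, 16, 16)) for i in range(32)]

budget_fraction = 0.25                       # fraction of blocks we can afford to render
scores = np.array([entropy_score(b) for b in blocks])
keep = np.argsort(scores)[::-1][: int(budget_fraction * len(blocks))]
print("forwarding blocks:", sorted(keep.tolist()))
```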

  2. Ocular responses and visual performance after emergent acceleration stress.

    Science.gov (United States)

    Tsai, Ming-Ling; Horng, Chi-Ting; Liu, Chun-Cheng; Shieh, Pochuen; Hung, Chun-Ling; Lu, Da-Wen; Chiang, Shang-Yi; Wu, Yi-Cheng; Chiou, Wen-Yaw

    2011-11-07

    To evaluate visual function after emergent acceleration stress. Sixteen subjects were enrolled in this study. A human ejection seat trainer was used to induce six times gravitational force in the head-to-toe (z-axis) direction (+6 Gz). Visual performance was evaluated using the visual chart and contrast sensitivity (CS) at indicated times. Ocular reactions were assessed with biomicroscopy and topographic mapping. Temporary visual acuity reduction (0.02 ± 0.05 vs. 0.18 ± 0.08 logMAR visual acuity [VA]) and reduced CS at all spatial frequencies were observed immediately after ejection. However, CS returned to the initial range at high spatial frequency by 30 minutes. Emergent acceleration force induces significant ocular responses and visual fluctuation. Prolonged ACD deepening (>15 minutes) and PD (>30 minutes) were noted, but cornea and refraction remained stable. CS at all spatial frequencies revealed remarkable reduction immediately after ejection, and recovered to baseline levels within 30 minutes only at high spatial frequency. Neuroretinal function may be involved in visual fluctuation after acceleration stress, because the visual fluctuation corresponds with the characteristics of neuroretinal function. However, further studies are necessary.

  3. Effect of different illumination sources on reading and visual performance

    Directory of Open Access Journals (Sweden)

    Male Shiva Ram

    2018-01-01

    Conclusion: This study demonstrates the influence of illumination on reading rate. There were no significant differences between males and females under the different illuminations; however, males preferred CFL and females preferred FLUO for faster reading and visual comfort. Interestingly, neither LED nor TUNG was preferred. Although LED is energy-efficient, visual performance under it is poor; it is uncomfortable for prolonged reading and causes early symptoms of fatigue.

  4. Effect of Different Illumination Sources on Reading and Visual Performance.

    Science.gov (United States)

    Ram, Male Shiva; Bhardwaj, Rishi

    2018-01-01

    To investigate visual performance during reading under different illumination sources. This experimental quantitative study included 40 (20 females and 20 males) emmetropic participants with no history of ocular pathology. The participants were randomly assigned to read a near visual task under four different illuminations (400-lux constant): compact fluorescent light (CFL), tungsten light (TUNG), fluorescent tube light (FLUO), and light emitting diode (LED). Subsequently, we evaluated the participants' experiences of eight symptoms of visual comfort. The mean age of the participants was 19.86 ± 1.09 (range: 18-21) years. There was no statistically significant difference between the reading rates of males and females under the different illuminations (P = 0.99); however, the reading rate was fastest among males under CFL, and among females under FLUO. One-way analysis of variance (ANOVA) revealed a strongly significant difference (P = 0.001) between males and females (P = 0.002) regarding visual performance and illuminations. This study demonstrates the influence of illumination on reading rate. There were no significant differences between males and females under the different illuminations; however, males preferred CFL and females preferred FLUO for faster reading and visual comfort. Interestingly, neither LED nor TUNG was preferred. Although LED is energy-efficient, visual performance under it is poor; it is uncomfortable for prolonged reading and causes early symptoms of fatigue.

  5. A probabilistic model for visual inspection of concrete shear walls

    Science.gov (United States)

    Ebrahimkhanlou, Arvin; Salamone, Salvatore

    2017-04-01

    This paper presents a probabilistic model, based on Bayesian networks, to visually assess the state of damage in reinforced concrete shear walls. The goal of this research is to reduce the inspection time and decrease the chance of missing or underestimating the state of damage in such structures. To develop this model, we define six types of visible damage on concrete shear walls. The model describes the causal relationship of such damage signs with the design parameters and damage states of the walls. To train and test the model, a database of all visually documented experimental works on concrete shear walls was collected from the literature. The model is trained on ninety percent of the database, and its performance is successfully validated on the remaining, previously unseen ten percent. The results show that the model can classify the images of yielded and failed walls. Additionally, it can prognosticate the most probable failure scenario for a yielded wall.
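
    The sketch below illustrates the train/test workflow described above with a simple naive-Bayes classifier standing in for the paper's full Bayesian network; the six binary damage indicators and the damage-state labels are synthetic placeholders.

```python
# Stand-in sketch: binary visible-damage indicators predict a damage-state label.
import numpy as np

rng = np.random.default_rng(4)
n = 200
# Six hypothetical binary damage indicators (e.g., diagonal cracks, spalling, ...).
X = rng.integers(0, 2, size=(n, 6))
# Synthetic label: walls with many visible damage signs tend to be "failed" (1).
y = (X.sum(axis=1) + rng.normal(0, 1, n) > 3).astype(int)

split = int(0.9 * n)                                  # 90% train / 10% test
Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]

def fit_naive_bayes(X, y, alpha=1.0):
    """Estimate class priors and per-feature Bernoulli likelihoods (Laplace-smoothed)."""
    priors, likelihoods = {}, {}
    for c in (0, 1):
        Xc = X[y == c]
        priors[c] = len(Xc) / len(X)
        likelihoods[c] = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2 * alpha)
    return priors, likelihoods

def predict(priors, likelihoods, X):
    preds = []
    for x in X:
        logp = {c: np.log(priors[c])
                   + np.sum(x * np.log(likelihoods[c]) + (1 - x) * np.log(1 - likelihoods[c]))
                for c in (0, 1)}
        preds.append(max(logp, key=logp.get))
    return np.array(preds)

priors, likelihoods = fit_naive_bayes(Xtr, ytr)
print("held-out accuracy:", round(float((predict(priors, likelihoods, Xte) == yte).mean()), 3))
```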

  6. Visual performance of two simultaneous vision multifocal contact lenses.

    Science.gov (United States)

    Madrid-Costa, David; García-Lázaro, Santiago; Albarrán-Diego, César; Ferrer-Blasco, Teresa; Montés-Micó, Robert

    2013-01-01

    To evaluate and compare the visual performance of two simultaneous vision multifocal contact lenses (CLs). In this cross-over study, 20 presbyopic subjects were fitted with two different simultaneous vision multifocal CLs (the PureVision Multifocal Low Add and Acuvue Oasys for Presbyopia) in random order. After 1 month, binocular distance visual acuity (BDVA) under photopic (85 cd/m²) and mesopic (3 cd/m²) conditions, binocular near visual acuity (BNVA), binocular distance contrast sensitivity function (CSF) under photopic and mesopic conditions, binocular near CSF and the defocus curve were measured. Subjects were then refitted with the alternative correction and the procedure was repeated. Mean BDVA under photopic conditions was similar for the Acuvue Oasys for Presbyopia and PureVision Multifocal Low Add: 0.01 ± 0.08 and 0.00 ± 0.08 logMAR, respectively (P = 0.45). Under mesopic conditions the values of BDVA were 0.20 ± 0.58 and 0.11 ± 0.09 logMAR, respectively (P = 0.005). Mean BNVA was 0.20 ± 0.05 and 0.15 ± 0.08 logMAR for the Acuvue Oasys and PureVision Low Add, respectively (P = 0.06). Binocular distance CSF testing revealed no statistically significant differences between the lenses under photopic, mesopic or near conditions. Both lenses provided comparable intermediate visual acuity. Both simultaneous vision multifocal CLs provided adequate distance visual quality under photopic and mesopic conditions, and better visual acuity was provided under mesopic conditions by the PureVision lens. Both lenses provided adequate visual performance at intermediate distance, but the near visual acuity appears to be insufficient for early presbyopes who require a moderately demanding near visual quality. Ophthalmic & Physiological Optics © 2012 The College of Optometrists.

  7. The Efficiency of a Visual Skills Training Program on Visual Search Performance

    Directory of Open Access Journals (Sweden)

    Krzepota Justyna

    2015-06-01

    In this study, we conducted an experiment in which we analyzed the possibility of developing visual skills through specifically targeted training of visual search. The aim of our study was to investigate whether, for how long, and to what extent a training program for visual functions could improve visual search. The study involved 24 healthy students from Szczecin University who were divided into two groups: experimental (n = 12) and control (n = 12). In addition to the regular sports and recreational activities of the curriculum, the subjects of the experimental group also participated in an 8-week-long training of visual functions, 3 times a week for 45 min. The Signal Test of the Vienna Test System was performed four times: before entering the study, after the first 4 weeks of the experiment, immediately after its completion, and 4 weeks after the study terminated. The results of this experiment proved that the 8-week-long perceptual training program significantly differentiated the plot of visual detection time. For the visual detection time changes, the first factor, Group, was significant as a main effect (F(1,22) = 6.49, p < 0.05), as was the second factor, Training (F(3,66) = 5.06, p < 0.01). The interaction between the two factors (Group × Training) was F(3,66) = 6.82 (p < 0.001). Similarly, for the number of correct reactions, there was a main effect of the Group factor (F(1,22) = 23.40, p < 0.001), a main effect of the Training factor (F(3,66) = 11.60, p < 0.001), and a significant interaction between the factors (Group × Training: F(3,66) = 10.33, p < 0.001). Our study suggests that 8-week training of visual functions can improve visual search performance.

  8. Predicting visual performance from optical quality metrics in keratoconus.

    Science.gov (United States)

    Schoneveld, Paul; Pesudovs, Konrad; Coster, Douglas J

    2009-05-01

    The aim was to identify optical quality metrics predictive of visual performance in eyes with keratoconus and penetrating keratoplasty (PK) for keratoconus. Fifty-four participants were recruited for this prospective, cross-sectional study. Data were collected from one eye of each participant: 26 keratoconus, 10 PK and 18 normal eyes: average age (mean +/- standard deviation) 45.2 +/- 10.6 years and 56 per cent female. Visual performance was tested by 10 methods including visual acuity (VA), both high and low contrast (HC- and LC-) and high and low luminance (LL-), and Pelli-Robson contrast sensitivity, all tested with and without glare. Corneal first surface wavefront aberrations were calculated from Orbscan corneal topographic data using VOLPro software v7.08 (Sarver and Associates) as a tenth-order Zernike expansion across 3.0 mm, 4.0 mm and 5.0 mm pupils and converted into 31 optical quality metrics. Pearson correlation coefficients and linear regression were used to relate wavefront aberration metrics to visual performance. Visual performance was highly predictable from optical quality, with an average correlation of the order of 0.5. Pupil fraction metrics (for example, PFWc) were responsible for all of the highest correlations at large pupils, for example with HCVA (r = 0.80), LCVA (r = 0.80) and LLLCVA (r = 0.75). Image plane metrics, derived from the optical transfer function (OTF), were responsible for most of the highest correlations at smaller pupils, for example volume under the OTF (VOTF) with HCVA (r = 0.76) and LCVA (r = 0.73). As in normal eyes, visual performance in keratoconus was predictable from optical quality; albeit by different metrics. Optical quality metrics predictive of visual performance in normal eyes, for example visual Strehl, lack the dynamic range to represent visual performance in highly aberrated eyes with keratoconus. Optical quality outcomes for keratoconus could be reported using many different metrics, but pupil fraction
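
    As a schematic of the analysis described here (correlating candidate metrics with acuity and ranking them by predictive strength), the sketch below uses synthetic data and hypothetical metric names; it does not reproduce the study's wavefront computations.

```python
# Rank candidate optical-quality metrics by their Pearson correlation with acuity.
import numpy as np

rng = np.random.default_rng(5)
n_eyes = 54
acuity = rng.normal(0.3, 0.2, n_eyes)                       # logMAR HCVA (synthetic)
metrics = {
    "pupil_fraction_PFWc": 0.8 * acuity + rng.normal(0, 0.1, n_eyes),
    "volume_under_OTF":    0.6 * acuity + rng.normal(0, 0.15, n_eyes),
    "RMS_higher_order":    0.3 * acuity + rng.normal(0, 0.2, n_eyes),
}

def pearson_r(x, y):
    x, y = x - x.mean(), y - y.mean()
    return float((x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum()))

for name, values in sorted(metrics.items(),
                           key=lambda kv: -abs(pearson_r(kv[1], acuity))):
    print(f"{name:22s} r = {pearson_r(values, acuity):+.2f}")
```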

  9. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, P.M.; Madsen, Kristoffer H; Lund, T.E.

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM) are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus...... on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We...... discriminant, and the SVM, and conclude that the sensitivity map is a versatile and computationally efficient tool for visualization of nonlinear kernel models in neuroimaging...

  10. An interference model of visual working memory.

    Science.gov (United States)

    Oberauer, Klaus; Lin, Hsuan-Yu

    2017-01-01

    The article introduces an interference model of working memory for information in a continuous similarity space, such as the features of visual objects. The model incorporates the following assumptions: (a) Probability of retrieval is determined by the relative activation of each retrieval candidate at the time of retrieval; (b) activation comes from 3 sources in memory: cue-based retrieval using context cues, context-independent memory for relevant contents, and noise; (c) 1 memory object and its context can be held in the focus of attention, where it is represented with higher precision, and partly shielded against interference. The model was fit to data from 4 continuous-reproduction experiments testing working memory for colors or orientations. The experiments involved variations of set size, kind of context cues, precueing, and retro-cueing of the to-be-tested item. The interference model fit the data better than 2 competing models, the Slot-Averaging model and the Variable-Precision resource model. The interference model also fared well in comparison to several new models incorporating alternative theoretical assumptions. The experiments confirm 3 novel predictions of the interference model: (a) Nontargets intrude in recall to the extent that they are close to the target in context space; (b) similarity between target and nontarget features improves recall, and (c) precueing-but not retro-cueing-the target substantially reduces the set-size effect. The success of the interference model shows that working memory for continuous visual information works according to the same principles as working memory for more discrete (e.g., verbal) contents. Data and model codes are available at https://osf.io/wgqd5/. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
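
    A minimal numerical sketch of assumptions (a) and (b) of the interference model is given below: each candidate's recall probability follows its relative activation, where activation sums cue-based, context-independent, and noise components. The weights and similarity values are illustrative assumptions, not the authors' fitted parameters.

```python
# Relative-activation (Luce choice) retrieval sketch for a set size of 4.
import numpy as np

rng = np.random.default_rng(6)
set_size = 4
# Similarity of each item's context (location cue) to the probed context.
cue_similarity = np.array([1.0, 0.4, 0.2, 0.1])   # item 0 is the target

a_cue, a_context_free, a_noise = 2.0, 0.5, 0.3    # illustrative weights
activation = (a_cue * cue_similarity              # cue-based retrieval
              + a_context_free                    # context-independent memory
              + a_noise * rng.random(set_size))   # background noise

p_recall = activation / activation.sum()          # relative activation -> probability
for i, p in enumerate(p_recall):
    label = "target" if i == 0 else f"nontarget {i}"
    print(f"{label:12s} p(report) = {p:.2f}")
```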

  11. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, P.M.; Madsen, Kristoffer H; Lund, T.E.

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM) are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus...... on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We...

  12. Functional Synergy Between Postural and Visual Behaviors When Performing a Difficult Precise Visual Task in Upright Stance.

    Science.gov (United States)

    Bonnet, Cédrick T; Szaffarczyk, Sébastien; Baudry, Stéphane

    2017-08-01

    Previous works usually report greater postural stability in precise visual tasks (e.g., gaze-shift tasks) than in stationary-gaze tasks. However, existing cognitive models do not fully support these results, as they assume that performing an attention-demanding task while standing would alter postural stability because of the competition for attention between the tasks. Contrary to these cognitive models, attentional resources may increase to create a synergy between visual and postural brain processes to perform precise oculomotor behaviors. To test this hypothesis, we investigated a difficult searching task and a control free-viewing task. The precise visual task required the 16 young participants to find a target in densely furnished images. The free-viewing task consisted of looking at similar images without searching for anything. As expected, the participants exhibited significantly lower body displacements (linear, angular) and a significantly higher cognitive workload in the precise visual task than in the free-viewing task. Most importantly, our exploration showed functional synergies between visual and postural processes in the searching task, that is, significant negative relationships showing lower head and neck displacements when reaching more extended zones of fixation. These functional synergies seemed to involve a greater attentional demand because they were no longer significant when the cognitive workload was controlled for (partial correlations). In the free-viewing task, only significant positive relationships were found, and they did not involve any change in cognitive workload. An alternative cognitive model and its potential underlying neuroscientific circuit are proposed to explain the supposedly cognitively grounded functional nature of vision-posture synergies in precise visual tasks. Copyright © 2016 Cognitive Science Society, Inc.

  13. Visual texture accurate material appearance measurement, representation and modeling

    CERN Document Server

    Haindl, Michal

    2013-01-01

    This book surveys the state of the art in multidimensional, physically-correct visual texture modeling. Features: reviews the entire process of texture synthesis, including material appearance representation, measurement, analysis, compression, modeling, editing, visualization, and perceptual evaluation; explains the derivation of the most common representations of visual texture, discussing their properties, advantages, and limitations; describes a range of techniques for the measurement of visual texture, including BRDF, SVBRDF, BTF and BSSRDF; investigates the visualization of textural info

  14. Developmental hypothyroidism disrupts visual signal detection performance in rats.

    Science.gov (United States)

    Hasegawa, Masashi; Wada, Hiromi

    2013-03-15

    Thyroid hormones (THs) are essential for proper brain development in mammals. TH insufficiency during early development causes structural and functional abnormalities in the brain, leading to cognitive dysfunction. The specific effects of developmental hypothyroidism on attention have not been well characterized in animal models. The present study was conducted to characterize the effects of developmental hypothyroidism on attention in rats, and tested the hypothesis that hypothyroidism has adverse impacts on attention, by means of a visual signal detection task. Pregnant rats were exposed to the anti-thyroid drug methimazole (0.02% w/v) via drinking water from gestational day 15 through postnatal day (PND) 21 to induce maternal and neonatal hypothyroidism. Male offspring served as subjects for the task, which started on PND 90. A light stimulus (500 ms, 250 ms or 50 ms) was presented in signal trials and not in blank trials. The offspring were required to discriminate these signal events and subsequently press the correct lever. The correct response for signal and non-signal events was considered a hit and a correct rejection, respectively. The hypothyroid offspring exhibited a decreased hit response for short signals (250 ms and 50 ms), which require higher attentional demand. The total number of lever responses during the inter-trial interval (ITI) was also increased in the hypothyroid group. The number of lever responses was negatively correlated with the hit response at 50 ms, but not at 250 ms. These results suggest that developmental hypothyroidism disrupts signal detection performance via impairment of visual attention and altered lever-response behavior. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Superior performance for visually guided pointing in the lower visual field.

    Science.gov (United States)

    Danckert, J; Goodale, M A

    2001-04-01

    The superior hemiretina in primates and humans has a greater density of ganglion cells than the inferior hemiretina, suggesting a bias towards processing information in the lower visual field (loVF). In primates, this over-representation of the loVF is also evident at the level of striate and extrastriate cortex. This is particularly true in some of the visual areas constituting the dorsal "action" pathway, such as area V6A. Here we show that visually guided pointing movements with the hand are both faster and more accurate when performed in the loVF when compared to the same movements made in the upper visual field (upVF). This was true despite the fact that the biomechanics of the movements made did not differ across conditions. The loVF advantage for the control of visually guided pointing movements is unlikely to be due to retinal factors and may instead reflect a functional bias for controlling skilled movements in this region of space. Possible neural correlates for this loVF advantage for visually guided pointing are discussed.

  16. Image jitter enhances visual performance when spatial resolution is impaired.

    Science.gov (United States)

    Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko

    2012-09-06

    Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with AMD under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with duration of 100 or 166 ms and amplitude within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.

  17. Opportunity for verbalization does not improve visual change detection performance : A state-trace analysis

    NARCIS (Netherlands)

    Sense, Florian; Morey, Candice C.; Prince, Melissa; Heathcote, Andrew; Morey, Richard D.

    Evidence suggests that there is a tendency to verbally recode visually-presented information, and that in some cases verbal recoding can boost memory performance. According to multi-component models of working memory, memory performance is increased because task-relevant information is

  18. Visual Motor and Perceptual Task Performance in Astigmatic Students

    Directory of Open Access Journals (Sweden)

    Erin M. Harvey

    2017-01-01

    Purpose. To determine if spectacle-corrected and uncorrected astigmats show reduced performance on visual motor and perceptual tasks. Methods. Third through 8th grade students were assigned to the low refractive error control group (astigmatism < 1.00 D, myopia < 0.75 D, hyperopia < 2.50 D, and anisometropia < 1.50 D) or the bilateral astigmatism group (right and left eye ≥ 1.00 D) based on cycloplegic refraction. Students completed the Beery-Buktenica Developmental Test of Visual Motor Integration (VMI) and Visual Perception (VMIp). Astigmats were randomly assigned to testing with/without correction, and the control group was tested uncorrected. Analyses compared VMI and VMIp scores for corrected and uncorrected astigmats to the control group. Results. The sample included 333 students (control group 170, astigmats tested with correction 75, and astigmats tested uncorrected 88). Mean VMI score in corrected astigmats did not differ from the control group (p=0.829). Uncorrected astigmats had lower VMI scores than the control group (p=0.038) and corrected astigmats (p=0.007). Mean VMIp scores for uncorrected (p=0.209) and corrected astigmats (p=0.124) did not differ from the control group. Uncorrected astigmats had lower mean scores than the corrected astigmats (p=0.003). Conclusions. Uncorrected astigmatism influences visual motor and perceptual task performance. Previously spectacle-treated astigmats do not show developmental deficits on visual motor or perceptual tasks when tested with correction.

  19. Effects of lighting and task parameters on visual acuity and performance

    Energy Technology Data Exchange (ETDEWEB)

    Halonen, L.

    1993-12-31

    Lighting and task parameters and their effects on visual acuity and visual performance are dealt with. The parameters studied are target contrast, target size and subject's age; adaptation luminance, the luminance ratio between the task and its surroundings, and temporal changes in luminance are also studied. Experiments were carried out to examine the effects of luminance and light spectrum on visual acuity. Young normally sighted, older and low vision people participated in the measurements. In the young and older subject groups the visual acuity remained unchanged at contrasts 0.93 and 0.63 over the luminance range of 15-630 cd/m². The results show that at contrasts 0.03-0.93 young and older subjects' visual acuity remained unchanged in the luminance range of 105-630 cd/m². In the low vision group, changes in luminance between 25-860 cd/m² did not have significant effects on visual acuity measured at the high contrast of 0.93; at low contrast, slight individual changes were found. The colour temperature of the light sources was varied between 2900-9500 K in the experiment. In the older, young and low vision subject groups the light spectrum did not have significant effects on visual acuity, except for two retinitis pigmentosa subjects. On the basis of the visual acuity experiments, a three-dimensional visual acuity model (VA-HUT) has been developed. The model predicts visual acuity as a function of luminance, target contrast and observer age. On the basis of the visual acuity experiments, visual acuity reserve values have also been calculated for different text sizes.

  20. Visual art teachers and performance assessment methods in ...

    African Journals Online (AJOL)

    This paper examines the competencies of visual arts teachers in using performance assessment methods, and to ascertain the extent to which the knowledge, skills and experiences of teachers affect their competence in using assessment strategies in their classroom. The study employs a qualitative research design; ...

  1. The role of the visual hardware system in rugby performance ...

    African Journals Online (AJOL)

    This suggests that in the game of rugby the hardware skills may be of less importance and that visual enhancement programmes should focus more on improving the players' software skills. Key words: Vision, hardware, rugby, sports performance. (Af. J. Physical, Health Education, Recreation and Dance: 2003 Special ...

  2. Similarity, Not Complexity, Determines Visual Working Memory Performance

    Science.gov (United States)

    Jackson, Margaret C.; Linden, David E. J.; Roberts, Mark V.; Kriegeskorte, Nikolaus; Haenschel, Corinna

    2015-01-01

    A number of studies have shown that visual working memory (WM) is poorer for complex versus simple items, traditionally accounted for by higher information load placing greater demands on encoding and storage capacity limits. Other research suggests that it may not be complexity that determines WM performance per se, but rather increased…

  3. Effect of marihuana and alcohol on visual search performance

    Science.gov (United States)

    1976-10-01

    Two experiments were performed to determine the effects of alcohol and marihuana on visual scanning patterns in a simulated driving situation. In the first experiment 27 male heavy drinkers were divided into 3 groups of 9, defined by three blood alco...

  4. The body voyage as visual representation and art performance.

    Science.gov (United States)

    Olsén, Jan Eric

    2011-01-01

    This paper looks at the notion of the body as an interior landscape that is made intelligible through visual representation. It discerns the key figure of the inner corporeal voyage, identifies its main elements and examines how contemporary artists working with performances and installations deal with it. A further aim of the paper is to discuss what kind of image of the body is conveyed through medical visual technologies, such as endoscopy, and to relate it to contemporary discussions on embodiment, embodied vision and bodily presence. The paper concludes with a recent exhibition by the French artist Christian Boltanski, which gives a somewhat different meaning to the idea of the body voyage.

  5. The body voyage as visual representation and art performance

    DEFF Research Database (Denmark)

    Olsén, Jan-Eric

    2011-01-01

    This paper looks at the notion of the body as an interior landscape that is made intelligible through visual representation. It discerns the key figure of the inner corporeal voyage, identifies its main elements and examines how contemporary artists working with performances and installations deal...... with it. A further aim of the paper is to discuss what kind of image of the body is conveyed through medical visual technologies, such as endoscopy, and to relate it to contemporary discussions on embodiment, embodied vision and bodily presence. The paper concludes with a recent exhibition...... by the French artist Christian Boltanski, which gives a somewhat different meaning to the idea of the body voyage....

  6. Contrast Insensitivity: The Critical Immaturity in Infant Visual Performance

    Science.gov (United States)

    Brown, Angela M.; Lindsey, Delwin T.

    2009-01-01

    This is a targeted review of the critical immaturities limiting psychophysical luminance contrast detection in human infants. Three-month-old infants are 50 times less sensitive to contrast than adults are. Rod experiments suggest that early-stage immaturities, like the short length of infant rod outer segments, have only a modest direct effect on infant visual performance. Infant contrast sensitivity may resemble adult extrafoveal sensitivity, because the foveal cones of the neonate are immature and may not generate strong enough responses to mediate visual performance. This use of the extrafoveal retina reduces the high-spatial-frequency end of the infant contrast sensitivity function, contributing to poor infant resolution acuity. The remaining difference between infant and adult contrast sensitivity functions may be a simple overall reduction in infant sensitivity. The maximum of the infant contrast sensitivity function increases proportionately with age, and may be numerically near the infant's age in weeks. Contrast discrimination experiments indicate that the critical immaturity that limits infant contrast sensitivity is a mid-level phenomenon, occurring before the site of the contrast gain control. For example, the infant ascending visual pathway might be limited by large amounts of intrinsic noise. These results suggest that there is little effect of inattentiveness to the psychophysical task by ostensibly alert infant patients or subjects. The clinician or researcher can interpret behavioral measurements of infant visual performance with confidence. PMID:19483510

  7. NIF capsule performance modeling

    Directory of Open Access Journals (Sweden)

    Weber S.

    2013-11-01

    Post-shot modeling of NIF capsule implosions was performed in order to validate our physical and numerical models. Cryogenic layered target implosions and experiments with surrogate targets produce an abundance of capsule performance data including implosion velocity, remaining ablator mass, times of peak x-ray and neutron emission, core image size, core symmetry, neutron yield, and x-ray spectra. We have attempted to match the integrated data set with capsule-only simulations by adjusting the drive and other physics parameters within expected uncertainties. The simulations include interface roughness, time-dependent symmetry, and a model of mix. We were able to match many of the measured performance parameters for a selection of shots.

  8. Effect of Cognitive Demand on Functional Visual Field Performance in Senior Drivers with Glaucoma

    Directory of Open Access Journals (Sweden)

    Viswa Gangeddula

    2017-08-01

    Purpose: To investigate the effect of cognitive demand on functional visual field performance in drivers with glaucoma. Method: This study included 20 drivers with open-angle glaucoma and 13 age- and sex-matched controls. Visual field performance was evaluated under different degrees of cognitive demand: a static visual field condition (C1), a dynamic visual field condition (C2), and a dynamic visual field condition with active driving (C3), using an interactive, desktop driving simulator. The number of correct responses (accuracy) and response times on the visual field task were compared between groups and between conditions using Kruskal–Wallis tests. General linear models were employed to compare cognitive workload, recorded in real time through pupillometry, between groups and conditions. Results: Adding cognitive demand (C2 and C3) to the static visual field test (C1) adversely affected accuracy and response times in both groups (p < 0.05). However, drivers with glaucoma performed worse than did control drivers when the static condition changed to a dynamic condition [C2 vs. C1 accuracy; glaucoma: median difference (Q1–Q3) 3 (2–6.50) vs. controls: 2 (0.50–2.50); p = 0.05] and to a dynamic condition with active driving [C3 vs. C1 accuracy; glaucoma: 2 (2–6) vs. controls: 1 (0.50–2); p = 0.02]. Overall, drivers with glaucoma exhibited greater cognitive workload than controls (p = 0.02). Conclusion: Cognitive demand disproportionately affects functional visual field performance in drivers with glaucoma. Our results may inform the development of a performance-based visual field test for drivers with glaucoma.

  9. Effects of visual feedback on manipulation performance and patient ratings.

    Science.gov (United States)

    Triano, John J; Scaringe, John; Bougie, Jacqueline; Rogers, Carolyn

    2006-06-01

    This study examined the effects of an explicit targeted outcome (a criterion standard) and visual feedback on the immediate change in, and the short-term retention of, performance by novice operators for a high-velocity, low-amplitude procedure under realistic conditions. This study used a single-blind randomized experimental design. Forty healthy male (n = 26) and female (n = 14) chiropractic student volunteers with no formal training in spinal manipulative therapy participated. Biomechanical parameters of an L4 mammillary push spinal manipulation procedure performed by novice operators were quantified. Participants were randomly assigned to 2 groups and paired. One group received visual feedback from load-time histories of their performance compared with a criterion standard before a repeat performance. Participants then performed a 10-minute distractive exercise consisting of National Board of Chiropractic Examiners review questions. The second group received no feedback. An independent rating of performance was conducted for each participant by his/her partner. Results were analyzed separately for biomechanical parameters and for partner ratings using the Student t test with appropriate levels of significance. Visual feedback was associated with a change in the biomechanical performance of group 2 of a minimum of 14% and a maximum of 32%. Statistical analysis of the performance ratings favored the feedback group on 4 of the parameters (fast, P < .0008; force, P < .0056; precision, P < .0034; and composite, P < .0016). Quantitative feedback, based on a tangible conceptualization of the target performance, resulted in immediate and significant improvement in all measured parameters. Newly developed skills were retained at least over short intervals even after distractive tasks. Learning what to do with feedback on one's own performance may be more important than the classic teaching of how to do it.

  10. Learning expressive percussion performance under different visual feedback conditions.

    Science.gov (United States)

    Brandmeyer, Alex; Timmers, Renee; Sadakata, Makiko; Desain, Peter

    2011-03-01

    A study was conducted to test the effect of two different forms of real-time visual feedback on expressive percussion performance. Conservatory percussion students performed imitations of recorded teacher performances while receiving either high-level feedback on the expressive style of their performances, low-level feedback on the timing and dynamics of the performed notes, or no feedback. The high-level feedback was based on a Bayesian analysis of the performances, while the low-level feedback was based on the raw participant timing and dynamics data. Results indicated that neither form of feedback led to significantly smaller timing and dynamics errors. However, high-level feedback did lead to a higher proficiency in imitating the expressive style of the target performances, as indicated by a probabilistic measure of expressive style. We conclude that, while potentially disruptive to timing processes involved in music performance due to extraneous cognitive load, high-level visual feedback can improve participant imitations of expressive performance features.

  11. Pitch height modulates visual and haptic bisection performance in musicians

    Directory of Open Access Journals (Sweden)

    Carlotta Lega

    2014-04-01

    Consistent evidence suggests that pitch height may be represented in a spatial format, having both a vertical and a horizontal representation. The spatial representation of pitch height results in response compatibility effects whereby high pitch tones are preferentially associated with up-right responses, and low pitch tones are preferentially associated with down-left responses (i.e., the SMARC effect), with the strength of these associations depending on individuals' musical skills. In this study we investigated whether listening to tones of different pitch affects the representation of external space, as assessed in a visual and haptic line bisection paradigm, in musicians and non-musicians. Low and high pitch tones affected the bisection performance of musicians differently, both when pitch was relevant and when it was irrelevant for the task, and in both the visual and the haptic modality. No effect of pitch height was observed on the bisection performance of non-musicians. Moreover, our data also show that musicians present a (supramodal) rightward bisection bias in both the visual and the haptic modality, extending previous findings limited to the visual modality, and consistent with the idea that intense practice with musical notation and bimanual instrument training affects hemispheric lateralization.

  12. Performance improvements from imagery: evidence that internal visual imagery is superior to external visual imagery for slalom performance

    Directory of Open Access Journals (Sweden)

    Nichola Callow

    2013-10-01

    We report three experiments investigating the hypothesis that use of internal visual imagery (IVI) would be superior to external visual imagery (EVI) for the performance of different slalom-based motor tasks. In Experiment 1, three groups of participants (IVI, EVI, and a control group) performed a driving-simulation slalom task. The IVI group achieved significantly quicker lap times than EVI and the control group. In Experiment 2, participants performed a downhill running slalom task under both IVI and EVI conditions. Performance was again quickest in the IVI compared to the EVI condition, with no differences in accuracy. Experiment 3 used the same group design as Experiment 1, but with participants performing a downhill ski-slalom task. Results revealed the IVI group to be significantly more accurate than the control group, with no significant differences in time taken to complete the task. These results support the beneficial effects of IVI for slalom-based tasks, and significantly advance our knowledge related to the differential effects of visual imagery perspectives on motor performance.

  13. Performance improvements from imagery: evidence that internal visual imagery is superior to external visual imagery for slalom performance.

    Science.gov (United States)

    Callow, Nichola; Roberts, Ross; Hardy, Lew; Jiang, Dan; Edwards, Martin Gareth

    2013-01-01

    We report three experiments investigating the hypothesis that use of internal visual imagery (IVI) would be superior to external visual imagery (EVI) for the performance of different slalom-based motor tasks. In Experiment 1, three groups of participants (IVI, EVI, and a control group) performed a driving-simulation slalom task. The IVI group achieved significantly quicker lap times than EVI and the control group. In Experiment 2, participants performed a downhill running slalom task under both IVI and EVI conditions. Performance was again quickest in the IVI compared to EVI condition, with no differences in accuracy. Experiment 3 used the same group design as Experiment 1, but with participants performing a downhill ski-slalom task. Results revealed the IVI group to be significantly more accurate than the control group, with no significant differences in time taken to complete the task. These results support the beneficial effects of IVI for slalom-based tasks, and significantly advances our knowledge related to the differential effects of visual imagery perspectives on motor performance.

  14. Adaptive luminance contrast for enhancing reading performance and visual comfort on smartphone displays

    Science.gov (United States)

    Na, Nooree; Suk, Hyeon-Jeong

    2014-11-01

    This study developed a model for setting the adaptive luminance contrast between text and background to enhance reading performance and visual comfort on smartphone displays. The study was carried out in two experiments. In Experiment I, a user test was conducted to identify the optimal luminance contrast with regard to subjects' reading performance (measured by lines of text read) and visual comfort (assessed by self-report after reading). Based on the empirical results of the test, an ideal adaptive model, which decreases the luminance contrast gradually with the passage of time, was developed. In Experiment II, a validation test involving reading performance, visual comfort, and physiological stress (measured by brainwave analysis using an electroencephalogram) confirmed that the proposed adaptive luminance contrast is adequate for prolonged text reading on smartphone displays. The developed model enhances both reading performance and visual comfort and also reduces the energy consumption of the smartphone; hence, it is expected that this study will be applied to diverse kinds of visual display terminals.
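    The abstract does not give the model's actual contrast values or time course, so the sketch below only illustrates the general idea of a text/background contrast ratio that decays gradually with reading time; the start ratio, end ratio, and time span are placeholder assumptions, not parameters from the study.

```python
import numpy as np

def adaptive_contrast(elapsed_min, start_ratio=10.0, end_ratio=7.0, span_min=20.0):
    """Text/background luminance-contrast ratio that decreases gradually
    with reading time and then holds at the final value. All numeric
    values are illustrative placeholders, not ratios from the study."""
    frac = np.clip(elapsed_min / span_min, 0.0, 1.0)
    return start_ratio + frac * (end_ratio - start_ratio)

for minute in (0, 5, 10, 20, 30):
    print(f"{minute:2d} min -> contrast ratio {adaptive_contrast(minute):.1f}:1")
```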

  15. Visualizations of Travel Time Performance Based on Vehicle Reidentification Data

    Energy Technology Data Exchange (ETDEWEB)

    Young, Stanley Ernest [National Renewable Energy Lab, 15013 Denver West Parkway, Golden, CO 80401]; Sharifi, Elham [Center for Advanced Transportation Technology, University of Maryland, College Park, Technology Ventures Building, Suite 2200, 5000 College Avenue, College Park, MD 20742]; Day, Christopher M. [Joint Transportation Research Program, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47906]; Bullock, Darcy M. [Lyles School of Civil Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47906]

    2017-01-01

    This paper provides a visual reference for the breadth of arterial performance phenomena based on travel time measures obtained from reidentification technology, which has proliferated in the past 5 years. These graphical performance measures are conveyed through overlay charts and through statistical distributions presented as cumulative frequency diagrams (CFDs). With overlays of vehicle travel times from multiple days, dominant traffic patterns over a 24-h period are reinforced, revealing the traffic behavior induced primarily by the operation of traffic control at signalized intersections. A cumulative distribution function, as used in the statistical literature, provides a method for comparing traffic patterns from various time frames or locations in a compact visual format that gives intuitive feedback on arterial performance. The CFD may be accumulated hourly, by peak periods, or by time periods specific to the signal timing plans in effect. Combined, overlay charts and CFDs provide visual tools with which to assess the quality and consistency of traffic movement for various periods throughout the day efficiently, without sacrificing detail, which is a typical byproduct of numeric-based performance measures. These methods are particularly effective for comparing before-and-after median travel times, as well as changes in interquartile range, to assess travel time reliability.
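    As a rough illustration of the cumulative frequency diagram described above (not code or data from the paper), the following sketch plots one CFD curve of travel times per time-of-day period; the travel-time samples and period labels are made up.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical travel times (seconds) for one arterial segment,
# collected over many days and grouped by hour of day.
rng = np.random.default_rng(0)
travel_times_by_hour = {
    "07:00-08:00": rng.normal(180, 40, 500).clip(min=60),
    "12:00-13:00": rng.normal(140, 25, 500).clip(min=60),
    "17:00-18:00": rng.normal(210, 60, 500).clip(min=60),
}

# A CFD plots sorted travel times against their cumulative
# probability, one curve per time period.
for label, times in travel_times_by_hour.items():
    sorted_times = np.sort(times)
    cum_freq = np.arange(1, len(sorted_times) + 1) / len(sorted_times)
    plt.plot(sorted_times, cum_freq, label=label)

plt.xlabel("Travel time (s)")
plt.ylabel("Cumulative frequency")
plt.legend()
plt.title("Hypothetical CFD of arterial travel times")
plt.show()
```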

  16. Behavioral model of visual perception and recognition

    Science.gov (United States)

    Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.

    1993-09-01

    In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separate processing of 'what' (object features) and 'where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using 'where' information; (3) representation of 'what' information in an object-based frame of reference (OFR). However, most recent models of vision based on the OFR have demonstrated the ability of invariant recognition of only simple objects like letters or binary objects without background, i.e., objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This has provided our model with the ability to represent complex objects in gray-level images invariantly, but it demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision which extracts a set of primary features (edges) in each fixation, and a high-level subsystem consisting of 'what' (Sensory Memory) and 'where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory. Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention and

  17. Model Curriculum Standards: Grades Nine through Twelve. English/Language Arts, Foreign Language, History-Social Science, Mathematics, Science, Visual and Performing Arts. First Edition.

    Science.gov (United States)

    California State Dept. of Education, Sacramento.

    Designed for use with students in grades nine through twelve, the model curriculum standards in this guide were developed in response to Senate Bill 813 (Chapter 498, Statutes of 1983) of the California Legislature that focused on the reestablishment of high expectations for the content of courses taught in secondary schools and for the level of…

  18. The marmoset monkey as a model for visual neuroscience

    Science.gov (United States)

    Mitchell, Jude F.; Leopold, David A.

    2015-01-01

    The common marmoset (Callithrix jacchus) has been valuable as a primate model in biomedical research. Interest in this species has grown recently, in part due to the successful demonstration of transgenic marmosets. Here we examine the prospects of the marmoset model for visual neuroscience research, adopting a comparative framework to place the marmoset within a broader evolutionary context. The marmoset’s small brain bears most of the organizational features of other primates, and its smooth surface offers practical advantages over the macaque for areal mapping, laminar electrode penetration, and two-photon and optical imaging. Behaviorally, marmosets are more limited at performing regimented psychophysical tasks, but do readily accept the head restraint that is necessary for accurate eye tracking and neurophysiology, and can perform simple discriminations. Their natural gaze behavior closely resembles that of other primates, with a tendency to focus on objects of social interest including faces. Their immaturity at birth and routine twinning also makes them ideal for the study of postnatal visual development. These experimental factors, together with the theoretical advantages inherent in comparing anatomy, physiology, and behavior across related species, make the marmoset an excellent model for visual neuroscience. PMID:25683292

  19. Visual-Motor Control of Steering and Awareness of Performance

    Directory of Open Access Journals (Sweden)

    Callum Mole

    2012-05-01

    In order to carry out skilled, visually guided actions, humans need to be able to use feedback to monitor and adjust performance. Such feedback can be relatively low level, with some motor commands being recalibrated rapidly based on visual feedback with little cognitive awareness (e.g., Mon-Williams and Murray 2000). In tasks such as driving, however, awareness of performance could be important for making strategic adjustments in order to respond to road conditions. To investigate whether participants could accurately gauge steering performance, we used a simulated driving scenario. Participants (n=30) were required to steer around a series of bends and stay within a central marked zone. In order to alter the task demands, the speed of travel (fast/slow) and the width of the zone (narrow/wide) were manipulated. After each bend the participants made an explicit percentage judgment of time spent within the required zone. The mean group steering results showed that performance was worst for faster speeds and narrower zones, and this pattern was matched in the awareness judgments. Closer inspection of the data, however, showed that at an individual level these judgments often failed to capture trial performance and were often merely influenced by the visible task characteristics (e.g., a fast and narrow trial). This suggests that participants may erroneously rely on salient cues about task characteristics to assess performance if direct feedback is weak. This has important implications for driving, since individuals may fail to respond to situation characteristics that actually make performance worse.

  20. Measuring the performance of visual to auditory information conversion.

    Directory of Open Access Journals (Sweden)

    Shern Shiou Tan

    BACKGROUND: Visual-to-auditory conversion systems have been in existence for several decades. Besides being among the front runners in providing visual capabilities to blind users, the auditory cues generated by image sonification systems are easier to learn and adapt to than other similar techniques. Other advantages include low cost, easy customizability, and universality. However, every system developed so far has its own set of strengths and weaknesses. In order to improve these systems further, we propose an automated and quantitative method to measure their performance. With these quantitative measurements, it is possible to gauge the relative strengths and weaknesses of different systems and rank them accordingly. METHODOLOGY: Performance is measured by both the interpretability and the information preservation of visual-to-auditory conversions. Interpretability is measured by computing the correlation of inter-image distance (IID) and inter-sound distance (ISD), whereas information preservation is computed by applying information theory to measure the entropy of both the visual and the corresponding auditory signals. These measurements provide a basis and some insight into how the systems work. CONCLUSIONS: With an automated interpretability measure as a standard, more image sonification systems can be developed, compared, and then improved. Even though the measure does not test systems as thoroughly as carefully designed psychological experiments, a quantitative measurement like the one proposed here can compare systems to a certain degree without incurring much cost. Underlying this research is the hope that a major breakthrough in image sonification systems will allow blind users to cost-effectively regain enough visual function to lead secure and productive lives.
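    The two measures described in the methodology can be sketched in a few lines; the feature matrices, the toy "conversion", and the histogram-based entropy estimate below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import entropy, pearsonr

# Hypothetical feature vectors: each row is one image and the
# corresponding sonified signal, reduced to fixed-length features.
rng = np.random.default_rng(1)
image_features = rng.random((20, 64))
sound_features = image_features @ rng.random((64, 32))  # toy "conversion"

# Interpretability: correlation of inter-image distances (IID)
# with inter-sound distances (ISD) over all stimulus pairs.
iid = pdist(image_features)
isd = pdist(sound_features)
interpretability, _ = pearsonr(iid, isd)

# Information preservation: compare the entropy of the visual and
# auditory signals, here estimated from histograms of feature values.
def signal_entropy(x, bins=32):
    counts, _ = np.histogram(x, bins=bins)
    return entropy(counts + 1e-12)  # small constant avoids log(0)

print(f"IID/ISD correlation: {interpretability:.3f}")
print(f"Image entropy: {signal_entropy(image_features):.3f} nats")
print(f"Sound entropy: {signal_entropy(sound_features):.3f} nats")
```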

  1. Improved custom statistics visualization for CA Performance Center data

    CERN Document Server

    Talevi, Iacopo

    2017-01-01

    The main goal of my project is to understand and experiment with the possibilities that CA Performance Center (CA PC) offers for creating custom applications to display stored information through interesting visual means, such as maps. In particular, I have re-written some of the network statistics web pages so that they fetch data from the new statistics modules in CA PC, which has its own API, and stop using the RRD data.

  2. Statistical modeling for visualization evaluation through data fusion.

    Science.gov (United States)

    Chen, Xiaoyu; Jin, Ran

    2017-11-01

    There is a high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference is lacking, which leads to an inefficient visualization and user interface design process. Recently, the advancement of interactive and sensing technologies has made electroencephalogram (EEG) signals, eye movements, and visualization logs available for user-centered evaluation. This paper proposes a data fusion model and an application procedure for quantitative and online visualization evaluation. Fifteen participants joined the study, which was based on three different visualization designs. The results provide a regularized regression model that can accurately predict the user's evaluation of task complexity, and they indicate the significance of all three types of sensing data for visualization evaluation. This model can be widely applied to data visualization evaluation, as well as to other user-centered design evaluation and data analysis in human factors and ergonomics. Copyright © 2016 Elsevier Ltd. All rights reserved.
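    A minimal sketch of the kind of data fusion described above, assuming per-trial EEG, eye-movement, and interaction-log features are simply concatenated and fed to a cross-validated ridge (regularized) regression; the feature names, dimensions, and ratings are hypothetical, not the authors' data or pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical per-trial features from three sensing channels.
rng = np.random.default_rng(2)
n_trials = 90
eeg_band_power   = rng.random((n_trials, 8))   # e.g., band power per channel group
eye_movements    = rng.random((n_trials, 4))   # e.g., fixation count, saccade length
interaction_logs = rng.random((n_trials, 3))   # e.g., clicks, dwell time, zoom events

# Fuse the channels by concatenating their features per trial.
X = np.hstack([eeg_band_power, eye_movements, interaction_logs])
y = rng.integers(1, 8, n_trials)  # self-reported task complexity (1-7)

# Regularized (ridge) regression with a cross-validated penalty.
model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))
model.fit(X, y)
print("Predicted complexity for first trial:", model.predict(X[:1])[0])
```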

  3. Visual Analysis of Cloud Computing Performance Using Behavioral Lines.

    Science.gov (United States)

    Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu

    2016-02-29

    Cloud computing is an essential technology for Big Data analytics and services. A cloud computing system often comprises a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues. But profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visual-based analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear, suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visual-based approach is effective in identifying trends and anomalies of the systems.

  4. Human performance on visually presented Traveling Salesman problems.

    Science.gov (United States)

    Vickers, D; Butavicius, M; Lee, M; Medvedev, A

    2001-01-01

    Little research has been carried out on human performance in optimization problems, such as the Traveling Salesman problem (TSP). Studies by Polivanova (1974, Voprosy Psikhologii, 4, 41-51) and by MacGregor and Ormerod (1996, Perception & Psychophysics, 58, 527-539) suggest that: (1) the complexity of solutions to visually presented TSPs depends on the number of points on the convex hull; and (2) the perception of optimal structure is an innate tendency of the visual system, not subject to individual differences. Results are reported from two experiments. In the first, measures of the total length and completion speed of pathways, and a measure of path uncertainty were compared with optimal solutions produced by an elastic net algorithm and by several heuristic methods. Performance was also compared under instructions to draw the shortest or the most attractive pathway. In the second, various measures of performance were compared with scores on Raven's advanced progressive matrices (APM). The number of points on the convex hull did not determine the relative optimality of solutions, although both this factor and the total number of points influenced solution speed and path uncertainty. Subjects' solutions showed appreciable individual differences, which had a strong correlation with APM scores. The relation between perceptual organization and the process of solving visually presented TSPs is briefly discussed, as is the potential of optimization for providing a conceptual framework for the study of intelligence.
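    Two of the quantities discussed above, the length of a participant's drawn tour and the number of points on the convex hull, can be computed directly; the point set and tour order below are random placeholders, not experimental data.

```python
import numpy as np
from scipy.spatial import ConvexHull

def tour_length(points, order):
    """Total length of a closed tour visiting points in the given order."""
    path = points[np.asarray(order)]
    return float(np.linalg.norm(np.roll(path, -1, axis=0) - path, axis=1).sum())

# Hypothetical 2-D point set and a participant's drawn tour order.
rng = np.random.default_rng(3)
points = rng.random((10, 2))
drawn_order = rng.permutation(10)

hull = ConvexHull(points)
print(f"Points on convex hull: {len(hull.vertices)}")
print(f"Length of drawn tour: {tour_length(points, drawn_order):.3f}")
```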

  5. Manual tapping enhances visual short-term memory performance where visual and motor coordinates correspond.

    Science.gov (United States)

    Sapkota, Raju P; Pardhan, Shahina; van der Linde, Ian

    2013-05-01

    Visuo-manual interaction in visual short-term memory (VSTM) has been investigated little, despite its importance in everyday tasks requiring the coordination of visual perception and manual action. This study examines the influence of a manual action performed during stimulus learning on a subsequent VSTM test for object appearance. The memory display comprised a sequence of briefly presented 1/f noise discs (i.e., possessing spectral properties akin to natural images), wherein each new stimulus was presented at a unique screen location. Participants either did (or did not) perform a concurrent manual action (spatial tapping) task requiring that a hand-held stylus be moved to a position on a touch tablet that corresponded (or did not correspond) to the screen position of each new stimulus as it appeared. At test, a single stimulus was presented, either at one of the original screen positions, or at a new position. Two factors were examined: the execution (or otherwise) of spatial tapping at a corresponding or non-corresponding position, and the presentation of test stimuli either at their original spatial positions, or at new positions. We find that spatial tapping at corresponding positions elevates VSTM performance by more than 15%, but this occurs only when stimulus positions are matched from memory to test display. Our findings suggest that multimodal attentional focus during stimulus encoding (incorporating visual, spatial, and manual components) leads to stronger, more robust memory representations. We posit several possible explanations for this effect. © 2012 The British Psychological Society.

  6. Effect of vibration on visual display terminal work performance.

    Science.gov (United States)

    Hsieh, Yao-Hung; Lin, Chiuhsiang Joe; Chen, Hsiao-Ching

    2007-12-01

    Electronic visual displays are now widely used in daily life, and reading them is often subject to vibration. Using a software simulation of a vibrating environment, the study investigated the effect of vibration on visual performance and fatigue for several numerical display design characteristics, including the font size and the number of digits displayed. Both the frequency and magnitude of vibration had significant effects on reaction time, accuracy, and visual fatigue. Ten graduate students (23-30 years old; M = 25.6), randomly tested in this experiment, were offered about 25 U.S. dollars for their participation. Numbers in vertical presentation were affected more by vertical vibration than those in horizontal presentation. Analysis showed that whenever a display is used in a vibrating environment, an increased font size may be an effective way to compensate for the adverse effect of vibration. Displayed materials must therefore be designed with the motion effect in mind to increase the quality of the screen display.

  7. Digital Technologies and performative pedagogies: Repositioning the visual

    Directory of Open Access Journals (Sweden)

    Kathryn Grushka

    2010-05-01

    Images are becoming a primary means of information presentation in the digitized global media, and digital technologies have emancipated and democratized the image. This allows for the reproduction and manipulation of images on a scale never seen before and opens new possibilities for teachers schooled in critical visuality. This paper reports on an innovative pre-service teacher training course in which a cross-curricular cohort of secondary teachers employed visual performative competencies to produce a series of learning objects on a digital platform. The resulting intertextual narratives demonstrate that the manipulation of image and text offered by digital technologies creates a powerful vehicle for investigating knowledge and understandings, evolving new meaning, and awakening latent creativity in the use of images for meaning making. This research informs the New Literacies and multimodal fields of enquiry and argues that visuality is integral to any pedagogy that purports to be relevant to the contemporary learner. It argues that the visual has been significantly under-valued as a conduit for knowledge acquisition and meaning making in the digital environment, and it supports the claim that critical literacy, interactivity, experimentation, and production are vital to attaining the tenets of transformative education (Buckingham, 2007; Walsh, 2007; Cope & Kalantzis, 2008).

  8. Binocular advantage for prehension movements performed in visually enriched environments requiring visual search

    Directory of Open Access Journals (Sweden)

    Roshani eGnanaseelan

    2014-11-01

    The purpose of this study was to examine the role of binocular vision during a prehension task performed in a visually enriched environment where the target object was surrounded by distractors/obstacles. Fifteen adults reached for and grasped a cylindrical peg while eye movements and upper limb kinematics were recorded. The complexity of the visual environment was manipulated by varying the number of distractors and the saliency of the target. Gaze behavior (i.e., the latency of the primary gaze shift and the frequency of gaze shifts prior to reach initiation) was comparable between viewing conditions. In contrast, a binocular advantage was evident in performance accuracy. Specifically, participants picked up the wrong object twice as often during monocular viewing when the complexity of the environment increased. Reach performance was more efficient during binocular viewing, which was demonstrated by shorter reach reaction time and overall movement time. Reaching movements during the approach phase had higher peak velocity during binocular viewing. During monocular viewing, reach trajectories exhibited a direction bias during the acceleration phase, which was leftward during left eye viewing and rightward during right eye viewing. This bias can be explained by the presence of esophoria in the covered eye. The grasping interval was also extended by ~20% during monocular viewing. In conclusion, binocular vision provides important input for planning and execution of prehension movements in visually enriched environments. A binocular advantage was evident regardless of set size or target saliency, indicating that adults plan their movements more cautiously during monocular viewing, even in relatively simple environments with a highly salient target. Nevertheless, in visually normal adults, monocular input provides sufficient information to engage in online control to correct the initial errors in movement planning.

  9. Using a visual plate waste study to monitor menu performance.

    Science.gov (United States)

    Connors, Priscilla L; Rozell, Sarah B

    2004-01-01

    Two visual plate waste studies were conducted in 1-week phases over a 1-year period in an acute care hospital. A total of 383 trays were evaluated in the first phase and 467 in the second. Food items were ranked for consumption from a low (1) to high (6) score, with a score of 4.0 set as the benchmark denoting a minimum level of acceptable consumption. In the first phase two entrees, four starches, all of the vegetables, sliced white bread, and skim milk scored below the benchmark. As a result six menu items were replaced and one was modified. In the second phase all entrees scored at or above 4.0, as did seven vegetables, and a dinner roll that replaced sliced white bread. Skim milk continued to score below the benchmark. A visual plate waste study assists in benchmarking performance, planning menu changes, and assessing effectiveness.
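    A minimal sketch of the benchmarking step described above: average the 1-6 consumption scores per menu item and flag items that fall below the 4.0 benchmark. The item names and scores are invented for illustration.

```python
# Hypothetical consumption scores (1 = low, 6 = high) for menu items,
# averaged per item and compared against the 4.0 benchmark used above.
BENCHMARK = 4.0
item_scores = {
    "baked chicken": [5, 4, 6, 4, 5],
    "green beans":   [3, 2, 4, 3, 3],
    "dinner roll":   [5, 5, 4, 6, 5],
    "skim milk":     [2, 3, 3, 2, 4],
}

for item, scores in item_scores.items():
    mean_score = sum(scores) / len(scores)
    flag = "OK" if mean_score >= BENCHMARK else "review"
    print(f"{item:14s} mean = {mean_score:.1f}  ({flag})")
```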

  10. Binocular visual performance and summation after correcting higher order aberrations.

    Science.gov (United States)

    Sabesan, Ramkumar; Zheleznyak, Len; Yoon, Geunyoung

    2012-12-01

    Although the ocular higher order aberrations degrade the retinal image substantially, most studies have investigated their effect on vision only under monocular conditions. Here, we have investigated the impact of binocular higher order aberration correction on visual performance and binocular summation by constructing a binocular adaptive optics (AO) vision simulator. Binocular monochromatic aberration correction using AO improved visual acuity and contrast sensitivity significantly. The improvement, however, differed from that achieved under monocular viewing. At high spatial frequency (24 c/deg), the monocular benefit in contrast sensitivity was significantly larger than the benefit achieved binocularly. In addition, binocular summation for higher spatial frequencies was the largest in the presence of the subject's native higher order aberrations and was reduced when these aberrations were corrected. This study thus demonstrates the vast potential of binocular AO vision testing in understanding the impact of ocular optics on habitual binocular vision.

  11. Modeling human judgments of urban visual air quality

    Science.gov (United States)

    Middleton, Paulette; Stewart, Thomas R.; Dennis, Robin L.

    The overall approach to establishing a complete predictive model linking pollutant emissions to human judgments of urban visual air quality (UVAQ) is presented. The field study design and data analysis procedures developed for analyzing the human components of visual air quality assessment are outlined. The air quality simulation model, which relates pollutant emissions to human judgments of the visual cues that make up overall visual air quality judgments, is described. Measured and modeled cues are compared for five typical visual air quality days in the winter of 1981 for Denver, Colorado. The comparisons suggest that the perceptual cue model, based on dispersion and radiative transfer theory, does not adequately predict human judgments of UVAQ cues. Analysis of the limits of predictability of the human judgments and the predictive capability of the model components indicates that the greatest improvements toward achieving a predictive UVAQ model lie in a reformulation of the theoretical descriptions of visual cues.

  12. Common and Innovative Visuals: A sparsity modeling framework for video.

    Science.gov (United States)

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework as CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.

  13. Diagnostic Performance of Visual Screening Tests in the Elderly

    Science.gov (United States)

    Lança, Carla Costa; Carolino, Elisabete

    2011-09-01

    This study aimed to determine and evaluate the diagnostic accuracy of visual screening tests for detecting vision loss in the elderly; it is defined as a study of diagnostic performance. The diagnostic accuracy of five visual tests (near convergence point, near accommodation point, stereopsis, contrast sensitivity, and Amsler grid) was evaluated by means of receiver operating characteristic (ROC) curves, sensitivity, specificity, and positive and negative likelihood ratios (LR+/LR-). Visual acuity was used as the reference standard. A sample of 44 institutionalized elderly people, with a mean age of 76.7 years (±9.32), was collected. The contrast sensitivity and stereopsis curves were the most accurate (areas under the curve of 0.814, p = 0.001, 95% CI [0.653; 0.975], and 0.713, p = 0.027, 95% CI [0.540; 0.887], respectively). The cutoff scores with the best diagnostic validity for the stereopsis test were 0.605 (sensitivity 0.87, specificity 0.54; LR+ 1.89, LR- 0.24) and 0.610 (sensitivity 0.81, specificity 0.54; LR+ 1.75, LR- 0.36). The cutoff score with the highest diagnostic validity for the contrast sensitivity test was 0.530 (sensitivity 0.94, specificity 0.69; LR+ 3.04, LR- 0.09). The contrast sensitivity and stereopsis tests proved to be clinically useful for detecting vision loss in the elderly.
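    The reported validity statistics (sensitivity, specificity, LR+, LR-, and area under the ROC curve) can be reproduced for any screening score and cutoff with a short sketch; the scores, reference labels, and cutoff below are hypothetical, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical screening scores and reference standard
# (1 = vision loss by the visual acuity criterion, 0 = no vision loss).
scores    = np.array([0.61, 0.58, 0.72, 0.48, 0.55, 0.63, 0.70, 0.45])
reference = np.array([1,    0,    1,    0,    1,    0,    1,    0])

def screening_validity(scores, reference, cutoff):
    positive = scores >= cutoff
    tp = np.sum(positive & (reference == 1))
    fn = np.sum(~positive & (reference == 1))
    tn = np.sum(~positive & (reference == 0))
    fp = np.sum(positive & (reference == 0))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, sens / (1 - spec), (1 - sens) / spec  # Sens, Spec, LR+, LR-

sens, spec, lr_pos, lr_neg = screening_validity(scores, reference, cutoff=0.60)
print(f"AUC  = {roc_auc_score(reference, scores):.3f}")
print(f"Sens = {sens:.2f}, Spec = {spec:.2f}, LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")
```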

  14. A dual-trace model for visual sensory memory.

    Science.gov (United States)

    Cappiello, Marcus; Zhang, Weiwei

    2016-11-01

    Visual sensory memory refers to a transient memory lingering briefly after the stimulus offset. Although previous literature suggests that visual sensory memory is supported by a fine-grained trace for continuous representation and a coarse-grained trace of categorical information, simultaneous separation and assessment of these traces can be difficult without a quantitative model. The present study used a continuous estimation procedure to test a novel mathematical model of the dual-trace hypothesis of visual sensory memory, according to which visual sensory memory can be modeled as a mixture of 2 von Mises (2VM) distributions differing in standard deviation. When visual sensory memory and working memory (WM) for colors were distinguished using different experimental manipulations in the first 3 experiments, the 2VM model outperformed Zhang and Luck's (2008) standard mixture model (SM), which represents a mixture of a single memory trace and random guesses, even though SM outperformed 2VM for WM. Experiment 4 generalized the 2VM model's advantage in fitting visual sensory memory data over SM from color to orientation. Furthermore, a single trace model and 4 other alternative models were ruled out, suggesting the necessity and sufficiency of dual traces for visual sensory memory. Together these results support the dual-trace model of visual sensory memory and provide a preliminary inquiry into the nature of information loss from visual sensory memory to WM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
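    A minimal sketch of the 2VM idea, assuming the model is fit to response errors (in radians) by maximum likelihood with a free mixture weight and two free concentrations; the simulated errors and starting values are placeholders, and this is not the authors' fitting code.

```python
import numpy as np
from scipy.stats import vonmises
from scipy.optimize import minimize

def neg_log_likelihood(params, errors):
    """2VM model: mixture of two von Mises components centred on the
    target (error = 0), differing only in concentration (i.e., SD)."""
    w, log_kappa1, log_kappa2 = params
    k1, k2 = np.exp(log_kappa1), np.exp(log_kappa2)
    pdf = w * vonmises.pdf(errors, k1) + (1 - w) * vonmises.pdf(errors, k2)
    return -np.sum(np.log(pdf + 1e-12))

# Hypothetical response errors in radians (target minus report).
rng = np.random.default_rng(4)
errors = np.concatenate([vonmises.rvs(20, size=150, random_state=rng),
                         vonmises.rvs(2, size=50, random_state=rng)])

fit = minimize(neg_log_likelihood, x0=[0.7, np.log(10.0), np.log(1.0)],
               args=(errors,), bounds=[(0.01, 0.99), (-3, 6), (-3, 6)])
w, kappa1, kappa2 = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
print(f"mixture weight = {w:.2f}, kappa1 = {kappa1:.1f}, kappa2 = {kappa2:.1f}")
```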

  15. Inflight performance of the Viking visual imaging subsystem

    Science.gov (United States)

    Klaasen, K. P.; Thorpe, T. E.; Morabito, L. A.

    1977-01-01

    Photography from the Viking Orbiter Visual Imaging Subsystem, taken while en route to and in orbit about Mars, has been analyzed to determine the performance of the cameras. The cameras have remained in good focus. Random and coherent noise levels in flight were the same as measured prior to launch. A recalibration of each instrument allows photometric measurements to accuracies of less than 3% for relative measurements and 9% for absolute measurements. Geometric distortion remained close to the preflight levels of 4 pixels rms and 11 pixels maximum.

  16. Anticipatory alpha phase influences visual working memory performance.

    Science.gov (United States)

    Zanto, Theodore P; Chadick, James Z; Gazzaley, Adam

    2014-01-15

    Alpha band (8-12 Hz) phase dynamics in the visual cortex are thought to reflect fluctuations in cortical excitability that influences perceptual processing. As such, visual stimuli are better detected when their onset is concurrent with specific phases of the alpha cycle. However, it is unclear whether alpha phase differentially influences cognitive performance at specific times relative to stimulus onset (i.e., is the influence of phase maximal before, at, or after stimulus onset?). To address this, participants performed a delayed-recognition, working memory (WM) task for visual motion direction during two separate visits. The first visit utilized functional magnetic resonance (fMRI) imaging to identify neural regions associated with task performance. Replicating previous studies, fMRI data showed engagement of visual cortical area V5, as well as a prefrontal cortical region, the inferior frontal junction (IFJ). During the second visit, transcranial magnetic stimulation (TMS) was applied separately to both the right IFJ and right V5 (with the vertex as a control region) while electroencephalography (EEG) was simultaneously recorded. During each trial, a single pulse of TMS (spTMS) was applied at one of six time points (-200, -100, -50, 0, 80, 160 ms) relative to the encoded stimulus onset. Results demonstrated a relationship between the phase of the posterior alpha signal prior to stimulus encoding and subsequent response times to the memory probe two seconds later. Specifically, spTMS to V5, and not the IFJ or vertex, yielded faster response times, indicating improved WM performance, when delivered during the peak, compared to the trough, of the alpha cycle, but only when spTMS was applied 100 ms prior to stimulus onset. These faster responses to the probe correlated with decreased early event related potential (ERP) amplitudes (i.e., P1) to the probe stimuli. Moreover, participants that were least affected by spTMS exhibited greater functional connectivity

  17. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between a HRTF enhanced audio system (3D...... with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations....

  18. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between an HRTF enhanced audio system (3D...... with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations....

  19. A visual approach for modeling spatiotemporal relations

    NARCIS (Netherlands)

    R.L. Guimarães (Rodrigo); C.S.S. Neto; L.F.G. Soares

    2008-01-01

    Textual programming languages have proven to be difficult to learn and to use effectively for many people. For this reason, visual tools can be useful to abstract away the complexity of such textual languages, minimizing the specification effort. In this paper we present a visual approach for

  20. Promoting Visualization Skills through Deconstruction Using Physical Models and a Visualization Activity Intervention

    Science.gov (United States)

    Schiltz, Holly Kristine

    Visualization skills are important in learning chemistry, as these skills have been shown to correlate with high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: through verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of students taking inorganic chemistry who worked with the physical model systems demonstrated difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations. In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect instructors

  1. A Hierarchical Visualization Analysis Model of Power Big Data

    Science.gov (United States)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed to target different abstraction modules such as transaction, engine, computation, control, and storage. The traditionally separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  2. Modeling and evaluating user behavior in exploratory visual analysis

    Energy Technology Data Exchange (ETDEWEB)

    Reda, Khairi; Johnson, Andrew E.; Papka, Michael E.; Leigh, Jason

    2016-07-25

    Empirical evaluation methods for visualizations have traditionally focused on assessing the outcome of the visual analytic process as opposed to characterizing how that process unfolds. There are only a handful of methods that can be used to systematically study how people use visualizations, making it difficult for researchers to capture and characterize the subtlety of cognitive and interaction behaviors users exhibit during visual analysis. To validate and improve visualization design, however, it is important for researchers to be able to assess and understand how users interact with visualization systems under realistic scenarios. This paper presents a methodology for modeling and evaluating the behavior of users in exploratory visual analysis. We model visual exploration using a Markov chain process comprising transitions between mental, interaction, and computational states. These states and the transitions between them can be deduced from a variety of sources, including verbal transcripts, videos and audio recordings, and log files. This model enables the evaluator to characterize the cognitive and computational processes that are essential to insight acquisition in exploratory visual analysis, and reconstruct the dynamics of interaction between the user and the visualization system. We illustrate this model with two exemplar user studies, and demonstrate the qualitative and quantitative analytical tools it affords.
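    The Markov chain component of this methodology can be illustrated by estimating a transition probability matrix from a coded sequence of analyst states; the state labels and the example sequence below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical coded sequence of analyst states from a think-aloud
# session: M = mental/reasoning, I = interaction, C = computation.
states = ["M", "I", "C", "I", "M", "I", "I", "C", "M", "M", "I", "C"]
labels = sorted(set(states))
index = {s: i for i, s in enumerate(labels)}

# Count transitions and normalize each row to obtain the Markov
# chain's transition probability matrix.
counts = np.zeros((len(labels), len(labels)))
for current, nxt in zip(states, states[1:]):
    counts[index[current], index[nxt]] += 1
transition_matrix = counts / counts.sum(axis=1, keepdims=True)

print(labels)
print(np.round(transition_matrix, 2))
```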

  3. Visual search performance in infants associates with later ASD diagnosis.

    Science.gov (United States)

    Cheung, C H M; Bedford, R; Johnson, M H; Charman, T; Gliga, T

    2016-09-30

    An enhanced ability to detect visual targets amongst distractors, known as visual search (VS), has often been documented in Autism Spectrum Disorders (ASD). Yet, it is unclear when this behaviour emerges in development and if it is specific to ASD. We followed up infants at high and low familial risk for ASD to investigate how early VS abilities link to later ASD diagnosis, the potential underlying mechanisms of this association, and the specificity of superior VS to ASD. Clinical diagnosis of ASD as well as dimensional measures of ASD, attention-deficit/hyperactivity disorder (ADHD) and anxiety symptoms were ascertained at 3 years. At 9 and 15 months, but not at age 2 years, high-risk children who later met clinical criteria for ASD (HR-ASD) had better VS performance than those without later diagnosis and low-risk controls. Although HR-ASD children were also more attentive to the task at 9 months, this did not explain search performance. Superior VS specifically predicted ASD at 3 years but not ADHD or anxiety symptoms. Our results demonstrate that atypical perception and core ASD symptoms of social interaction and communication are closely and selectively associated during early development, and suggest causal links between perceptual and social features of ASD. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  4. Rey Visual Design Learning Test performance correlates with white matter structure

    OpenAIRE

    Begré, Stefan; Kiefer, Claus; von Känel, Roland; Frommer, Angela; Federspiel, Andrea

    2017-01-01

    Objective: Studies exploring the relation of visual memory to white matter are largely lacking. The Rey Visual Design Learning Test (RVDLT) is an elementary visual memory test that is independent of motion, colour, and words. It avoids a significant contribution to visual performance from as many additional higher-order visual brain functions as possible, such as three-dimensional, colour, motion, or word-dependent brain operations. Based on previous results, we hypothesised that test performance would be ...

  5. Administration of Dehydroepiandrosterone (DHEA) Enhances Visual-Spatial Performance in Post-Menopausal Women

    OpenAIRE

    Stangl, Bethany; Hirshman, Elliot; Verbalis, Joseph

    2011-01-01

    The current paper examines the effect of administering Dehydroepiandrosterone (DHEA) on visual-spatial performance in post-menopausal women (N=24, ages 55-80). The concurrent reduction of serum DHEA levels and visual-spatial performance in this population, coupled with the documented effects of DHEA’s androgenic metabolites on visual-spatial performance, suggests that DHEA administration may enhance visual-spatial performance. The current experiment used a double-blind placebo-controlled cross...

  6. Optical information for car following: the driving by visual angle (DVA) model.

    Science.gov (United States)

    Andersen, George J; Sauer, Craig W

    2007-10-01

    The present study developed and tested a model of car following by human drivers. Previous models of car following are based on 3-D parameters such as lead-vehicle speed and distance, information that is not directly available to a driver. Here we present the driving by visual angle (DVA) model, which is based on the visual information (visual angle and rate of change of visual angle) available to the driver. Two experiments in a driving simulator examined car-following performance in response to speed variations of a lead vehicle defined by a sum of sine-wave oscillations and by ramp acceleration functions. In addition, the model was applied to six driving events using real-world driving data. The model provided a good fit to car-following performance in the driving simulation studies as well as in real-world driving performance. A comparison with the advanced interactive microscopic simulator for urban and nonurban networks (AIMSUN) model, which is based on 3-D parameters, suggests that the DVA model was more predictive of driver behavior in matching lead-vehicle speed and distance headway. Car-following behavior can thus be modeled using only the visual information available to the driver, producing performance more predictive of driver behavior than models based on 3-D (speed or distance) information. The DVA model has applications to several traffic safety issues, including automated driving systems and traffic flow models.
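    A minimal sketch of the optical variables the DVA model works with: the visual angle subtended by the lead vehicle and its rate of change. The proportional control law and gain at the end are illustrative assumptions, not the published DVA control law; the headway samples and vehicle width are also made up.

```python
import numpy as np

def visual_angle(width_m, distance_m):
    """Visual angle (radians) subtended by a lead vehicle of a given
    width at a given headway distance."""
    return 2.0 * np.arctan(width_m / (2.0 * distance_m))

# Hypothetical headway time series (m) sampled at 10 Hz for a 1.8 m
# wide lead vehicle; the driver only "sees" theta and theta-dot.
dt = 0.1
distance = np.array([30.0, 29.5, 29.2, 29.0, 29.1, 29.4])
theta = visual_angle(1.8, distance)
theta_dot = np.gradient(theta, dt)  # rate of change of visual angle

# A simple proportional control sketch: decelerate when the lead
# vehicle looms (theta growing), accelerate when it recedes.
gain = 50.0  # hypothetical gain, not a parameter from the DVA model
acceleration_command = -gain * theta_dot
print(np.round(acceleration_command, 3))
```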

  7. DynaView: General Dynamic Visualization Model for SHM

    Directory of Open Access Journals (Sweden)

    Peng Sun

    2012-01-01

    We present a general dynamic visualization model named DynaView for constructing virtual scenes of the structural health monitoring (SHM) process. The model consists of static, dynamic, and interaction submodels and makes the visualization process dynamic and interactive. Taking a simplified reinforced concrete beam structure model as an example, we obtain raw data through examination, conduct a general and practicable assessment of structural damage conditions based on fuzzy pattern recognition to compute the assessment results, construct the DynaView model of the sample structure, and visualize it. The example indicates that the DynaView model is efficient and practically applicable.

  8. Performance analysis and visualization of electric power systems

    Science.gov (United States)

    Dong, Xuejiang; Shinozuka, Masanobu

    2003-08-01

    This paper describes a method of system performance evaluation for electric power networks. The basic element that plays a crucial role here is the fragility information for transmission system equipment. The method utilizes the fragility information for evaluation of system performance degradation of LADWP's (Los Angeles Department of Water and Power's) power network damaged by a severe earthquake by comparing its performance before and after the earthquake event. One of the highlights of this paper is the use of the computer code "PowerWorld" to visualize the state of power flow of the network, segment by segment. Similarly, the method can evaluate quantitatively the effect of various measures of rehabilitation or retrofit performed on equipment and/or facilities of the network. This is done by comparing the system performance with or without the rehabilitation. In this context, the results of experimental and analytical studies carried out by other researchers are used to determine the possible range of fragility enhancement associated with the rehabilitation of transformers in terms of base-isolation systems. In this analysis, 47 scenario earthquakes are used to develop the risk curves for the LADWP's power transmission system. The risk curve can then be correlated to the economic impact of the reduction in power supply due to the earthquake. Recovery aspects of the damaged power system will be studied from this point of view in the future.

  9. Modern Notation of Business Models: A Visual Trend

    OpenAIRE

    Tatiana Gavrilova; Artem Alsufyev; Anna-Sophia Yanson

    2014-01-01

    Information overflow and dynamic market changes encourage managers to search for a relevant and eloquent model to describe their business. This paper provides a new framework for visualizing business models, guided by well-shaped visualization based on a mind mapping technique introduced by Tony Buzan. The authors’ approach amplifies Alexander Osterwalder’s ideas at a new level of abstraction and with a well-structured description of business models. It also seeks to simplify the Osterwalder model...

  10. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Pascucci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Brunst, Holger [Dresden Univ. of Technology (Germany)]

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  11. HMMEditor: a visual editing tool for profile hidden Markov model

    Directory of Open Access Journals (Sweden)

    Cheng Jianlin

    2008-03-01

    Background: A profile Hidden Markov Model (HMM) is a powerful statistical model for representing a family of DNA, RNA, or protein sequences. Profile HMMs have been widely used in bioinformatics research, such as in sequence alignment, gene structure prediction, motif identification, protein structure prediction, and biological database search. However, few comprehensive, visual editing tools for profile HMMs are publicly available. Results: We developed a visual editor for profile Hidden Markov Models (HMMEditor). HMMEditor can visualize the profile HMM architecture, transition probabilities, and emission probabilities. Moreover, it provides functions to edit and save the HMM and its parameters. Furthermore, HMMEditor allows users to align a sequence against the profile HMM and to visualize the corresponding Viterbi path. Conclusion: HMMEditor provides a set of unique functions to visualize and edit a profile HMM. It is a useful tool for biological sequence analysis and modeling. Both the HMMEditor software and web service are freely available.
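    HMMEditor itself is not reproduced here, but the Viterbi path it visualizes can be illustrated with a tiny discrete HMM; the two-state model, alphabet, and probabilities below are toy values chosen for the example, not a real profile HMM.

```python
import numpy as np

def viterbi(observations, start_p, trans_p, emit_p):
    """Most likely state path for a small discrete HMM (log domain)."""
    n_states, n_obs = trans_p.shape[0], len(observations)
    log_delta = np.full((n_obs, n_states), -np.inf)
    backptr = np.zeros((n_obs, n_states), dtype=int)
    log_delta[0] = np.log(start_p) + np.log(emit_p[:, observations[0]])
    for t in range(1, n_obs):
        # scores[i, j]: best log-probability of being in state i at t-1
        # and moving to state j at t.
        scores = log_delta[t - 1][:, None] + np.log(trans_p)
        backptr[t] = np.argmax(scores, axis=0)
        log_delta[t] = scores[backptr[t], range(n_states)] + np.log(emit_p[:, observations[t]])
    path = [int(np.argmax(log_delta[-1]))]
    for t in range(n_obs - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

# Toy 2-state HMM over a 4-letter alphabet (A=0, C=1, G=2, T=3).
start = np.array([0.6, 0.4])
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
emit = np.array([[0.4, 0.1, 0.1, 0.4], [0.25, 0.25, 0.25, 0.25]])
print(viterbi([0, 3, 1, 2, 0], start, trans, emit))
```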

  12. Classification across the senses: Auditory-visual cognitive performance in a California sea lion (Zalophus californianus)

    Science.gov (United States)

    Lindemann, Kristy L.; Reichmuth-Kastak, Colleen; Schusterman, Ronald J.

    2005-09-01

    The model of stimulus equivalence describes how perceptually dissimilar stimuli can become interrelated to form useful categories both within and between the sensory modalities. A recent experiment expanded upon prior work with a California sea lion by examining stimulus classification across the auditory and visual modalities. Acoustic stimuli were associated with an exemplar from one of two pre-existing visual classes in a matching-to-sample paradigm. After direct training of these associations, the sea lion showed spontaneous transfer of the new auditory stimuli to the remaining members of the visual classes. The sea lion's performance on this cross-modal equivalence task was similar to that shown by human subjects in studies of emergent word learning and reading comprehension. Current research with the same animal further examines how stimulus classes can be expanded across modalities. Fast-mapping techniques are used to rapidly establish new auditory-visual relationships between acoustic cues and multiple arbitrary visual stimuli. Collectively, this research illustrates complex cross-modal performances in a highly experienced subject and provides insight into how animals organize information from multiple sensory modalities into meaningful representations.

  13. Visual Data Mining of Robot Performance Data Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to design and develop VDM/RP, a visual data mining system that will enable analysts to acquire, store, query, analyze, and visualize recent and historical...

  14. How Visual Search Relates to Visual Diagnostic Performance: A Narrative Systematic Review of Eye-Tracking Research in Radiology

    Science.gov (United States)

    van der Gijp, A.; Ravesloot, C. J.; Jarodzka, H.; van der Schaaf, M. F.; van der Schaaf, I. C.; van Schaik, J. P.; ten Cate, Th. J.

    2017-01-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology…

  15. How visual search relates to visual diagnostic performance : a narrative systematic review of eye-tracking research in radiology

    NARCIS (Netherlands)

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; ten Cate, Olle

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review

  16. A Spatial Lattice Model Applied for Meteorological Visualization and Analysis

    Directory of Open Access Journals (Sweden)

    Mingyue Lu

    2017-03-01

    studies are conducted by (1) performing a visualization of radar data that is used to describe the reflectivity factor of a raindrop, together with pressure field information acquired from the National Centers for Environmental Prediction (NCEP), and (2) taking cutting analysis as another example in which advanced meteorological analysis is performed. The results show that the proposed spatial lattice model can contribute to the feasible and effective analysis of meteorological information.

  17. UV-blocking spectacle lens protects against UV-induced decline of visual performance.

    Science.gov (United States)

    Liou, Jyh-Cheng; Teng, Mei-Ching; Tsai, Yun-Shan; Lin, En-Chieh; Chen, Bo-Yie

    2015-01-01

    Excessive exposure to sunlight may be a risk factor for ocular diseases and reduced visual performance. This study was designed to examine the ability of an ultraviolet (UV)-blocking spectacle lens to prevent visual acuity decline and ocular surface disorders in a mouse model of UVB-induced photokeratitis. Mice were divided into 4 groups (10 mice per group): (1) a blank control group (no exposure to UV radiation), (2) a UVB/no lens group (mice exposed to UVB rays, but without lens protection), (3) a UVB/UV400 group (mice exposed to UVB rays and protected using the CR-39™ spectacle lens [UV400 coating]), and (4) a UVB/photochromic group (mice exposed to UVB rays and protected using the CR-39™ spectacle lens [photochromic coating]). We investigated UVB-induced changes in visual acuity and in corneal smoothness, opacity, and lissamine green staining. We also evaluated the correlation between visual acuity decline and changes to the corneal surface parameters. Tissue sections were prepared and stained immunohistochemically to evaluate the structural integrity of the cornea and conjunctiva. In blank controls, the cornea remained undamaged, whereas in UVB-exposed mice, the corneal surface was disrupted; this disruption significantly correlated with a concomitant decline in visual acuity. Both the UVB/UV400 and UVB/photochromic groups had sharper visual acuity and a healthier corneal surface than the UVB/no lens group. Eyes in both protected groups also showed better corneal and conjunctival structural integrity than unprotected eyes. Furthermore, there were fewer apoptotic cells and less polymorphonuclear leukocyte infiltration in corneas protected by the spectacle lenses. The model established herein reliably determines the protective effect of UV-blocking ophthalmic biomaterials, because the in vivo protection against UV-induced ocular damage and visual acuity decline was easily defined.

  18. Interactions of emotion and anxiety on visual working memory performance.

    Science.gov (United States)

    Berggren, Nick; Curtis, Hannah M; Derakshan, Nazanin

    2017-08-01

    It is a widely observed finding that emotion and anxiety interact; highly stressed or anxious individuals show robust attentional biases towards external negative information. More generally, research has suggested that exposure to threatening stimuli, as well as the experience of acute stress, may also impair top-down attentional control and working memory. In the current study, we investigated how the influence of emotion and anxiety may interact to influence working memory performance. Participants were required to encode the orientation of four simple shapes, eight shapes, or four shapes while filtering out four other irrelevant shapes from memory. Before memory displays, an irrelevant neutral or fearful face cue was also presented. Memory performance was found to interact with self-reported state anxiety and cue valence; on neutral cue trials, state anxiety was negatively correlated with performance. This effect was absent following a fear cue. In addition, filtering efficiency was negatively associated with state anxiety solely following a fear cue. Our findings suggest that state anxiety's influence on visual working memory can be strongly modulated by external signals of threat. Most crucially, rather than anxious individuals having greater difficulty rejecting external threatening information, we observed that external threat may in its own right generally impair filtering efficiency in anxious individuals.

  19. A Model Performance

    Science.gov (United States)

    Thornton, Bradley D.; Smalley, Robert A.

    2008-01-01

    Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…

  20. Testing Neural Models of the Development of Infant Visual Attention

    OpenAIRE

    Richards, John E.; Hunter, Sharon K.

    2002-01-01

    Several models of the development of infant visual attention have used information about neural development. Most of these models have been based on nonhuman animal studies and have relied on indirect measures of neural development in human infants. This article discusses methods for studying a “neurodevelopmental” model of infant visual attention using indirect and direct measures of cortical activity. We concentrate on the effect of attention on eye movement control and show how animal-base...

  1. PROPER: Performance visualization for optimizing and comparing ranking classifiers in MATLAB.

    Science.gov (United States)

    Jahandideh, Samad; Sharifi, Fatemeh; Jaroszewski, Lukasz; Godzik, Adam

    2015-01-01

    One of the recent challenges of computational biology is development of new algorithms, tools and software to facilitate predictive modeling of big data generated by high-throughput technologies in biomedical research. To meet these demands we developed PROPER - a package for visual evaluation of ranking classifiers for biological big data mining studies in the MATLAB environment. PROPER is an efficient tool for optimization and comparison of ranking classifiers, providing over 20 different two- and three-dimensional performance curves.
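    PROPER itself is a MATLAB package and its curves are not specified in detail here; as a loose illustration of the kind of ranking-classifier evaluation it automates, the sketch below computes one standard two-dimensional performance curve (the ROC curve) in Python. The synthetic scores, labels, and the use of scikit-learn and matplotlib are assumptions for illustration, not part of the PROPER package.

```python
# Illustrative sketch (not the PROPER package): evaluate a ranking classifier
# with one of the standard two-dimensional performance curves, the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Hypothetical ranking scores for 200 negative and 200 positive examples.
labels = np.concatenate([np.zeros(200), np.ones(200)])
scores = np.concatenate([rng.normal(0.0, 1.0, 200),   # negatives
                         rng.normal(1.2, 1.0, 200)])  # positives

fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"AUC = {auc(fpr, tpr):.3f}")

plt.plot(fpr, tpr)
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.title("Ranking-classifier performance curve (ROC)")
plt.show()
```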

  2. An efficient visual saliency detection model based on Ripplet transform

    Indian Academy of Sciences (India)

    A Diana Andrushia

    Abstract. Even though there have been great advancements in computer vision tasks, the development of human visual attention models is still not well investigated. In day-to-day life, one can find ample applications of saliency detection in image and video processing. This paper presents an efficient visual saliency ...

  3. Modeling the shape hierarchy for visually guided grasping

    CSIR Research Space (South Africa)

    Rezai, O

    2014-10-01

    The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modeled shape tuning in visual AIP neurons and its relationship with curvature and gradient...

  4. Mental models, visual reasoning and interaction in information visualization: a top-down perspective.

    Science.gov (United States)

    Liu, Zhicheng; Stasko, John T

    2010-01-01

    Although previous research has suggested that examining the interplay between internal and external representations can benefit our understanding of the role of information visualization (InfoVis) in human cognitive activities, there has been little work detailing the nature of internal representations, the relationship between internal and external representations and how interaction is related to these representations. In this paper, we identify and illustrate a specific kind of internal representation, mental models, and outline the high-level relationships between mental models and external visualizations. We present a top-down perspective of reasoning as model construction and simulation, and discuss the role of visualization in model based reasoning. From this perspective, interaction can be understood as active modeling for three primary purposes: external anchoring, information foraging, and cognitive offloading. Finally we discuss the implications of our approach for design, evaluation and theory development.

  5. Visual Search Performance in Patients with Vision Impairment: A Systematic Review.

    Science.gov (United States)

    Senger, Cassia; Margarido, Maria Rita Rodrigues Alves; De Moraes, Carlos Gustavo; De Fendi, Ligia Issa; Messias, André; Paula, Jayter Silva

    2017-11-01

    Patients with visual impairment are constantly facing challenges to achieve an independent and productive life, which depends upon both a good visual discrimination and search capacities. Given that visual search is a critical skill for several daily tasks and could be used as an index of the overall visual function, we investigated the relationship between vision impairment and visual search performance. A comprehensive search was undertaken using electronic PubMed, EMBASE, LILACS, and Cochrane databases from January 1980 to December 2016, applying the following terms: "visual search", "visual search performance", "visual impairment", "visual exploration", "visual field", "hemianopia", "search time", "vision lost", "visual loss", and "low vision". Two hundred seventy six studies from 12,059 electronic database files were selected, and 40 of them were included in this review. Studies included participants of all ages, both sexes, and the sample sizes ranged from 5 to 199 participants. Visual impairment was associated with worse visual search performance in several ophthalmologic conditions, which were either artificially induced, or related to specific eye and neurological diseases. This systematic review details all the described circumstances interfering with visual search tasks, highlights the need for developing technical standards, and outlines patterns for diagnosis and therapy using visual search capabilities.

  6. A novel visualization model for web search results.

    Science.gov (United States)

    Nguyen, Tien N; Zhang, Jin

    2006-01-01

    This paper presents an interactive visualization system, named WebSearchViz, for visualizing Web search results and facilitating users' navigation and exploration. The metaphor in our model is the solar system with its planets and asteroids revolving around the sun. Location, color, movement, and spatial distance of objects in the visual space are used to represent the semantic relationships between a query and relevant Web pages. Especially, the movement of objects and their speeds add a new dimension to the visual space, illustrating the degree of relevance between a query and Web search results in the context of users' subjects of interest. By interacting with the visual space, users are able to observe the semantic relevance between a query and a resulting Web page with respect to their subjects of interest, context information, or concern. Users' subjects of interest can be dynamically changed, redefined, added, or deleted from the visual space.

  7. Terminology model discovery using natural language processing and visualization techniques.

    Science.gov (United States)

    Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol

    2006-12-01

    Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.

  8. Inter- and Intramodal Encoding of Auditory and Visual Presentation of Material: Effects on Memory Performance

    National Research Council Canada - National Science Library

    De Haan, Edward H. F; Appels, Bregje; Aleman, André; Postma, Albert

    2000-01-01

    ... using visual and auditory presentation and writing and vocalization as encoding activities. The results show a similar memory performance in all conditions apart from the one in which visually presented words had to be written down...

  9. Eye movement feedback fails to improve visual search performance.

    Science.gov (United States)

    Peltier, Chad; Becker, Mark W

    2017-01-01

    Many real-world searches (e.g., radiology and baggage screening) have rare targets. When targets are rare, observers perform rapid, incomplete searches, leading to higher miss rates. To improve search for rare (10% prevalence) targets, we provided eye movement feedback (EMF) to observers during their searches. Although the nature of the EMF varied across experiments, each method informed observers about the regions of the display that had not yet been inspected. We hypothesized that feedback would help guide attention to unsearched areas and increase the proportion of the display searched before making a target-absent response, thereby increasing accuracy. An eye tracker was used to mark fixated areas by either removing a semiopaque gray overlay (Experiments 1 and 4) as portions of the display were fixated or by adding the overlay once the eye left a segment of the image (Experiments 2 and 4). Experiment 3 provided automated EMF, such that a new region was uncovered every 540 milliseconds. Across experiments, we varied whether people searched for "Waldo" in images from "Where's Waldo?" search books or searched for a T among offset Ls. We found weak evidence that EMF improves accuracy in Experiment 1. However, in the remaining experiments, EMF had no effect (Experiment 4), or even reduced accuracy (Experiments 2 and 3). We conclude that the one positive result we found is likely a Type I error and that the EMF method that we used is unlikely to improve visual search performance.

  10. Data-driven approach to dynamic visual attention modelling

    Science.gov (United States)

    Culibrk, Dubravko; Sladojevic, Srdjan; Riche, Nicolas; Mancas, Matei; Crnojevic, Vladimir

    2012-06-01

    Visual attention deployment mechanisms allow the Human Visual System to cope with an overwhelming amount of visual data by dedicating most of the processing power to objects of interest. The ability to automatically detect areas of the visual scene that will be attended to by humans is of interest for a large number of applications, from video coding and video quality assessment to scene understanding. Due to this fact, visual saliency (bottom-up attention) models have generated significant scientific interest in recent years. Most recent work in this area deals with dynamic models of attention that handle moving stimuli (videos) instead of the traditionally used still images. Visual saliency models are usually evaluated against ground-truth eye-tracking data collected from human subjects. However, there are precious few recently published approaches that try to learn saliency from eye-tracking data and, to the best of our knowledge, no approaches that try to do so when dynamic saliency is concerned. The paper attempts to fill this gap and describes an approach to data-driven dynamic saliency model learning. A framework is proposed that enables the use of eye-tracking data to train an arbitrary machine learning algorithm, using arbitrary features derived from the scene. We evaluate the methodology using features from a state-of-the-art dynamic saliency model and show how simple machine learning algorithms can be trained to distinguish between visually salient and non-salient parts of the scene.
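    The framework above is learner- and feature-agnostic, so the following sketch only illustrates the general recipe under invented assumptions: per-block feature vectors derived from a dynamic scene, labels obtained by thresholding a fixation-density map from eye tracking, and a random forest standing in for the "arbitrary machine learning algorithm". The feature definitions, threshold, and classifier choice are not taken from the paper.

```python
# Hypothetical sketch of the data-driven recipe: train an arbitrary classifier to
# separate visually salient from non-salient scene regions using eye-tracking labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Assumed inputs: one row per scene block, columns = features derived from the
# dynamic scene (e.g., motion energy, contrast, flicker); labels come from a
# fixation-density map thresholded into salient (1) / non-salient (0) blocks.
rng = np.random.default_rng(1)
features = rng.normal(size=(5000, 6))                  # placeholder feature matrix
fixation_density = features[:, 0] * 0.8 + rng.normal(scale=0.5, size=5000)
labels = (fixation_density > np.percentile(fixation_density, 80)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0, stratify=labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Evaluate how well the learned model separates salient from non-salient blocks.
scores = clf.predict_proba(X_test)[:, 1]
print(f"ROC AUC on held-out blocks: {roc_auc_score(y_test, scores):.3f}")
```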

  11. Robust visual multitask tracking via composite sparse model

    Science.gov (United States)

    Jin, Bo; Jing, Zhongliang; Wang, Meng; Pan, Han

    2014-11-01

    Recently, multitask learning was applied to visual tracking by learning sparse particle representations in a joint task, which led to the so-called multitask tracking algorithm (MTT). Although MTT shows impressive tracking performances by mining the interdependencies between particles, the individual feature of each particle is underestimated. The utilized L1,q norm regularization assumes all features are shared between all particles and results in nearly identical representation coefficients in nonsparse rows. We propose a composite sparse multitask tracking algorithm (CSMTT). We develop a composite sparse model to formulate the object appearance as a combination of the shared feature component, the individual feature component, and the outlier component. The composite sparsity is achieved via the L and L1,1 norm minimization, and is optimized by the alternating direction method of multipliers, which provides a favorable reconstruction performance and an impressive computational efficiency. Moreover, a dynamical dictionary updating scheme is proposed to capture appearance changes. CSMTT is tested on real-world video sequences under various challenges, and experimental results show that the composite sparse model achieves noticeably lower reconstruction errors and higher computational speeds than traditional sparse models, and CSMTT has consistently better tracking performance than seven state-of-the-art trackers.

  12. Visual modeling in an analysis of multidimensional data

    Science.gov (United States)

    Zakharova, A. A.; Vekhter, E. V.; Shklyar, A. V.; Pak, A. J.

    2018-01-01

    The article proposes an approach to solving visualization problems and the subsequent analysis of multidimensional data. Requirements for the properties of visual models created to solve analysis problems are described. As a promising direction for the development of visual analysis tools for multidimensional and large-volume data, the active use of factors of subjective perception and of dynamic visualization is suggested. Practical results of solving the problem of multidimensional data analysis are shown using the example of a visual model of empirical data on the current state of research into processes for obtaining silicon carbide by an electric arc method. Solving this problem yields several results: first, an idea of the possibilities for determining a development strategy for the domain; second, an assessment of the reliability of the published data on this subject; and third, insight into how the areas of researchers' attention have changed over time.

  13. 3D Building Evacuation Route Modelling and Visualization

    Science.gov (United States)

    Chan, W.; Armenakis, C.

    2014-11-01

    The most common building evacuation approach currently applied is to have evacuation routes planned prior to these emergency events. These routes are usually the shortest and most practical path from each building room to the closest exit. The problem with this approach is that it is not adaptive. It is not responsively configurable relative to the type, intensity, or location of the emergency risk. Moreover, it does not provide any information to the affected persons or to the emergency responders while not allowing for the review of simulated hazard scenarios and alternative evacuation routes. In this paper we address two main tasks. The first is the modelling of the spatial risk caused by a hazardous event leading to choosing the optimal evacuation route for a set of options. The second is to generate a 3D visual representation of the model output. A multicriteria decision making (MCDM) approach is used to model the risk aiming at finding the optimal evacuation route. This is achieved by using the analytical hierarchy process (AHP) on the criteria describing the different alternative evacuation routes. The best route is then chosen to be the alternative with the least cost. The 3D visual representation of the model displays the building, the surrounding environment, the evacuee's location, the hazard location, the risk areas and the optimal evacuation pathway to the target safety location. The work has been performed using ESRI's ArcGIS. Using the developed models, the user can input the location of the hazard and the location of the evacuee. The system then determines the optimum evacuation route and displays it in 3D.
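    As a rough sketch of the AHP step described above, the code below derives criterion weights from a pairwise-comparison matrix (via its principal eigenvector) and scores each alternative evacuation route as a weighted cost, choosing the least-cost route. The criteria, pairwise judgments, and route costs are invented for illustration and are not taken from the paper.

```python
# Hypothetical AHP sketch: weight route-selection criteria from pairwise judgments,
# then pick the evacuation route with the lowest weighted cost.
import numpy as np

criteria = ["path length", "hazard proximity", "congestion"]

# Pairwise comparison matrix (Saaty scale); entry [i, j] = importance of i over j.
# These judgments are illustrative only.
pairwise = np.array([
    [1.0, 1/3, 2.0],
    [3.0, 1.0, 4.0],
    [0.5, 1/4, 1.0],
])

# Criterion weights = normalized principal eigenvector of the pairwise matrix.
eigvals, eigvecs = np.linalg.eig(pairwise)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()

# Each row is one candidate route with normalized per-criterion costs in [0, 1];
# lower is better for every criterion in this toy setup.
route_costs = np.array([
    [0.4, 0.9, 0.2],   # route A: short but close to the hazard
    [0.7, 0.2, 0.5],   # route B: longer but far from the hazard
    [0.6, 0.5, 0.8],   # route C
])

total_cost = route_costs @ weights
best = int(np.argmin(total_cost))
print("criterion weights:", dict(zip(criteria, np.round(weights, 3))))
print(f"optimal evacuation route: route {'ABC'[best]} (cost {total_cost[best]:.3f})")
```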

  14. How do People Make Sense of Unfamiliar Visualizations?: A Grounded Model of Novice's Information Visualization Sensemaking.

    Science.gov (United States)

    Lee, Sukwon; Kim, Sung-Hee; Hung, Ya-Hsin; Lam, Heidi; Kang, Youn-ah; Yi, Ji Soo

    2016-01-01

    In this paper, we investigate how people make sense of unfamiliar information visualizations. In order to achieve this research goal, we conducted a qualitative study by observing 13 participants as they endeavored to make sense of three unfamiliar visualizations (i.e., a parallel-coordinates plot, a chord diagram, and a treemap) that they encountered for the first time. We collected data including audio/video recordings of think-aloud sessions and semi-structured interviews, and analyzed the data using the grounded theory method. The primary result of this study is a grounded model of NOvice's information VIsualization Sensemaking (NOVIS model), which consists of five major cognitive activities: (1) encountering visualization, (2) constructing a frame, (3) exploring visualization, (4) questioning the frame, and (5) floundering on visualization. We introduce the NOVIS model by explaining the five activities with representative quotes from our participants. We also explore the dynamics in the model. Lastly, we compare it with other existing models and share further research directions that arose from our observations.

  15. Robustness Analysis of Visual Question Answering Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-11-01

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, the image and these basic questions as input and then outputs the text-based answer to the main question about the given image. We claim that a robust VQA model is one whose performance does not change much when related basic questions are also made available to it as input. We formulate the basic-question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.
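    The abstract states only that basic-question generation is cast as a LASSO optimization; the sketch below shows one plausible reading under invented assumptions: represent the main question and a pool of candidate basic questions as embedding vectors, reconstruct the main-question embedding from the candidates with an L1-penalized least-squares fit, and rank candidates by coefficient magnitude as similarity scores. The embeddings and the regularization strength are placeholders, not the authors' formulation.

```python
# Hypothetical sketch of LASSO-based ranking of basic questions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
embedding_dim, n_candidates = 128, 50

# Placeholder embeddings: columns of A are candidate basic questions, b is the main question.
A = rng.normal(size=(embedding_dim, n_candidates))
b = A[:, [3, 17, 29]] @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.05, size=embedding_dim)

# Solve min_x 0.5/n * ||A x - b||^2 + alpha * ||x||_1 (scikit-learn's Lasso objective).
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
lasso.fit(A, b)

# Rank candidates by the magnitude of their coefficients (used as similarity scores).
scores = np.abs(lasso.coef_)
ranking = np.argsort(scores)[::-1]
print("top ranked basic questions (indices):", ranking[:5])
print("their similarity scores:", np.round(scores[ranking[:5]], 3))
```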

  16. Robustness Analysis of Visual QA Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-09-14

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, the image and these basic questions as input and then outputs the text-based answer to the main question about the given image. We claim that a robust VQA model is one whose performance does not change much when related basic questions are also made available to it as input. We formulate the basic-question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  17. An Enhancement of Visual Test Performance for Nuclear Fuel Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Choi, Young Soo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shin, Jung Cheol [Korea Nuclear Fuel, Daejeon (Korea, Republic of)

    2009-05-15

    During the overhaul period of a nuclear power plant, the integrity of the neutron-irradiated fuel assembly is evaluated. Nuclear regulations require that nuclear power plants meet the design, operation, and inspection requirements of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (B and PV). Section XI of the ASME B and PV Code provides the specific requirements for inspecting the systems, structures, and components; Section V of the ASME Code provides requirements for inspection methods, including volumetric (e.g., ultrasonic testing), surface (e.g., eddy current testing), and visual testing (VT). Visual testing of a neutron-irradiated fuel assembly is conducted for a variety of purposes, for example to detect discontinuities and imperfections on the surface of fuel rods, to detect evidence of leakage from end-cap welds, and to determine the general mechanical and structural condition of the assembly. VT is performed remotely using a video camera. As the neutron-irradiated fuel assembly is a high dose-rate gamma-ray source, approximately a few kGy, a radiation-hardened underwater camera is used in the VT of the fuel assembly. Utilities today follow the EPRI guidelines for VT-1 tests on nuclear components (BWR Vessel and Internals Project-3 1995). The VT-1 guidelines specify which areas around a weld should be examined, how to measure the sizes of indications found, and how to test the resolving power of the visual equipment used for the test. The EPRI guidelines use two 12-μm (0.0005-in.) wires or notches as a resolution calibration standard. According to the EPRI guidelines (BWRVIP-03 1995), the camera systems employed were marginally able to detect the 0.0005-inch (12-μm) diameter wire on a steel background. In the near future, it will be required that the VT of nuclear fuel assemblies follow the EPRI VT-1 guideline. In order to meet the VT-1 guideline, any system used in VT (ranging from the naked eye to a digital closed-circuit TV

  18. Performance modeling of network data services

    Energy Technology Data Exchange (ETDEWEB)

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

    Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories is requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and the remote users connected to them, the networking components must be optimized from a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.

  19. Sociocultural Factors and Bender Visual Motor Gestalt Performance.

    Science.gov (United States)

    Sapp, Gary L.

    The Bender Visual Motor Gestalt Test (BG), a test of visual-motor integration, is a screening device used to investigate school-related factors that may produce poor academic achievement and learning disabilities. Because BG test stimuli are not obviously related to classroom content, and because BG scores are frequently offered as evidence of…

  20. Competition between auditory and visual spatial cues during visual task performance

    NARCIS (Netherlands)

    Koelewijn, T.; Bronkhorst, A.; Theeuwes, J.

    2009-01-01

    There is debate in the crossmodal cueing literature as to whether capture of visual attention by means of sound is a fully automatic process. Recent studies show that when visual attention is endogenously focused sound still captures attention. The current study investigated whether there is

  1. VCMM: a visual tool for continuum molecular modeling.

    Science.gov (United States)

    Bai, Shiyang; Lu, Benzhuo

    2014-05-01

    This paper describes the design and function of a visualization tool, VCMM, for visualizing and analyzing data, and interfacing solvers for generic continuum molecular modeling. In particular, an emphasis of the program is to treat the data set based on unstructured mesh as used in finite/boundary element simulations, which largely enhances the capabilities of current visualization tools in this area that only support structured mesh. VCMM is segmented into molecular, meshing and numerical modules. The capabilities of molecular module include molecular visualization and force field assignment. Meshing module contains mesh generation, analysis and visualization tools. Numerical module currently provides a few finite/boundary element solvers of continuum molecular modeling, and contains several common visualization tools for the numerical result such as line and plane interpolations, surface probing, volume rendering and stream rendering. Three modules can exchange data with each other and carry out a complete process of modeling. Interfaces are also designed in order to facilitate usage of other mesh generation tools and numerical solvers. We develop a technique to accelerate data retrieval and have combined many graphical techniques in visualization. VCMM is highly extensible, and users can obtain more powerful functions by introducing relevant plug-ins. VCMM can also be useful in other fields such as computational quantum chemistry, image processing, and material science. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Principles of Sonar Performance Modeling

    NARCIS (Netherlands)

    Ainslie, M.A.

    2010-01-01

    Sonar performance modelling (SPM) is concerned with the prediction of quantitative measures of sonar performance, such as probability of detection. It is a multidisciplinary subject, requiring knowledge and expertise in the disparate fields of underwater acoustics, acoustical oceanography, sonar

  3. Planetary subsurface investigation by 3D visualization model.

    Science.gov (United States)

    Seu, R.; Catallo, C.; Tragni, M.; Abbattista, C.; Cinquepalmi, L.

    Subsurface data analysis and visualization represent one of the main aspects of planetary observation (e.g., the search for water or geological characterization). The data are collected by subsurface sounding radars carried as instruments on board deep space missions. These data are generally represented as 2D radargrams in the perspective of the spacecraft track and the z axis (perpendicular to the subsurface), but without direct correlation to other data acquisitions or knowledge about the planet. In many cases there are plenty of data from other sensors of the same mission, or from other missions, with high continuity in time and space, especially around scientific sites of interest (e.g., candidate landing areas or particularly interesting scientific sites). The 2D perspective is good for analysing single acquisitions and for performing detailed analysis of the returned echo, but it is of little use for comparing the very large datasets now available for many planets and moons of the solar system. A better approach is to perform the analysis on a 3D visualization model generated from the entire stack of data. First of all, this approach allows one to navigate the subsurface in all directions, to analyse different sections and slices, and moreover to navigate the iso-surfaces with respect to a value (or interval). The latter makes it possible to isolate one or more iso-surfaces and to remove, in visualization mode, other data not relevant to the analysis; finally, it helps to identify underground 3D bodies. Another aspect is the need to link on-ground data, such as imaging, to the underground data by geographical position and contextual field of view.

  4. Musicians’ Online Performance during Auditory and Visual Statistical Learning Tasks

    Science.gov (United States)

    Mandikal Vasuki, Pragati R.; Sharma, Mridula; Ibrahim, Ronny K.; Arciuli, Joanne

    2017-01-01

    Musicians’ brains are considered to be a functional model of neuroplasticity due to the structural and functional changes associated with long-term musical training. In this study, we examined implicit extraction of statistical regularities from a continuous stream of stimuli—statistical learning (SL). We investigated whether long-term musical training is associated with better extraction of statistical cues in an auditory SL (aSL) task and a visual SL (vSL) task—both using the embedded triplet paradigm. Online measures, characterized by event related potentials (ERPs), were recorded during a familiarization phase while participants were exposed to a continuous stream of individually presented pure tones in the aSL task or individually presented cartoon figures in the vSL task. Unbeknown to participants, the stream was composed of triplets. Musicians showed advantages when compared to non-musicians in the online measure (early N1 and N400 triplet onset effects) during the aSL task. However, there were no differences between musicians and non-musicians for the vSL task. Results from the current study show that musical training is associated with enhancements in extraction of statistical cues only in the auditory domain. PMID:28352223

  5. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Jones, M. Gail

    2018-01-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service…

  6. A Dynamic Systems Theory Model of Visual Perception Development

    Science.gov (United States)

    Coté, Carol A.

    2015-01-01

    This article presents a model for understanding the development of visual perception from a dynamic systems theory perspective. It contrasts to a hierarchical or reductionist model that is often found in the occupational therapy literature. In this proposed model vision and ocular motor abilities are not foundational to perception, they are seen…

  7. The Investigation of Physical Performance Status of Visually and Hearing Impaired Applying Judo Training Program

    Science.gov (United States)

    Karakoc, Onder

    2016-01-01

    This study aimed to investigate the physical performance of visually and hearing impaired athletes undergoing judo training. Thirty-two male volunteer athletes with visual or hearing impairments who were doing judo training participated in this study. The investigation was applied to the visually impaired (N = 12, mean ± SD; age: 25.75 ± 3.55 years, height:…

  8. Visual Middle-Out Modeling of Problem Spaces

    DEFF Research Database (Denmark)

    Valente, Andrea

    2009-01-01

    Modeling is a complex and central activity in many domains. Domain experts and designers usually work by drawing and create models from the middle-out; however, visual and middle-out style modeling is poorly supported by software tools. In order to define a new class of software-based modeling...... tools, we propose a scenario and identify some requirements. Those requirements are contrasted against features of existing tools from various application domains, and the results show general lack of support for custom visualization and incremental knowledge specification, poor handling of temporal...

  9. Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor

    Directory of Open Access Journals (Sweden)

    Jae-Han Park

    2012-06-01

    This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
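    A minimal numerical sketch of the propagation idea, assuming the standard disparity-to-Cartesian mapping X = (u - c_x)·b/d, Y = (v - c_y)·b/d, Z = f·b/d and a diagonal measurement covariance in disparity-image space; the intrinsic parameters and noise values below are placeholders, not the calibrated values from the paper.

```python
# Hedged sketch: propagate a diagonal covariance in disparity-image space (u, v, d)
# to a 3D covariance in Cartesian space via the Jacobian of the mapping.
import numpy as np

# Placeholder sensor parameters (focal length, principal point, baseline).
f, cx, cy, baseline = 580.0, 320.0, 240.0, 0.075

def disparity_to_xyz(u, v, d):
    """Standard stereo/disparity back-projection (assumed model, not from the paper)."""
    z = f * baseline / d
    x = (u - cx) * baseline / d
    y = (v - cy) * baseline / d
    return np.array([x, y, z])

def propagate_covariance(u, v, d, cov_uvd):
    """Sigma_xyz ≈ J · Sigma_uvd · J^T, with J the Jacobian of disparity_to_xyz."""
    J = np.array([
        [baseline / d, 0.0,          -(u - cx) * baseline / d**2],
        [0.0,          baseline / d, -(v - cy) * baseline / d**2],
        [0.0,          0.0,          -f * baseline / d**2],
    ])
    return J @ cov_uvd @ J.T

# Example: a visual feature observed at pixel (400, 260) with disparity 30,
# and an assumed 1-pixel / 0.5-disparity-unit measurement noise.
cov_uvd = np.diag([1.0**2, 1.0**2, 0.5**2])
print("3D position [m]:", disparity_to_xyz(400, 260, 30))
print("3D covariance [m^2]:\n", propagate_covariance(400, 260, 30, cov_uvd))
```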

  10. A feedback model of visual attention

    OpenAIRE

    Spratling, M. W.; Johnson, M H

    2004-01-01

    Feedback connections are a prominent feature of cortical anatomy and are likely to have a significant functional role in neural information processing. We present a neural network model of cortical feedback that successfully simulates neurophysiological data associated with attention. In this domain, our model can be considered a more detailed, and biologically plausible, implementation of the biased competition model of attention. However, our model is more general as it can also explain a v...

  11. Cognitive performance in visual memory and attention are influenced by many factors

    DEFF Research Database (Denmark)

    Wilms, Inge Linda; Nielsen, Simon

    Visual perception serves as the basis for much of the higher level cognitive processing as well as human activity in general. Here we present normative estimates for the following components of visual perception: the visual perceptual threshold, the visual short-term memory capacity and the visual... perceptual encoding/decoding speed (processing speed) of visual short-term memory based on an assessment of 94 healthy subjects aged 60-75. The estimates are presented at total sample level as well as at gender level. The estimates were modelled from input from a whole-report assessment based on A Theory... speed of Visual Short-term Memory (VSTM) but not the capacity of VSTM nor the visual threshold. The estimates will be useful for future studies into the effects of various types of intervention and training on cognition in general and visual attention in particular. (...

  12. Association between multifocal soft contact lens decentration and visual performance

    Directory of Open Access Journals (Sweden)

    Fedtke C

    2016-06-01

    Cathleen Fedtke,1 Klaus Ehrmann,1,2 Varghese Thomas,1 Ravi C Bakaraju1,2 (1The Brien Holden Vision Institute, Clinical Trial Research Centre; 2School of Optometry and Vision Science, The University of New South Wales, Sydney, NSW, Australia). Purpose: The aim of this study was to assess the association between decentration of several commercial multifocal soft contact lenses (MFCLs) and various objective and subjective visual performance variables in presbyopic and non-presbyopic participants. Materials and methods: All presbyopic (age >40 years, near add ≥+1.25 D) and non-presbyopic (age ≥18 years, no near add requirements, spherical equivalent ≤-0.50 D) participants were each fitted bilaterally with six and two MFCLs (test lenses), respectively, and with one single-vision lens (control lens). Lens decentration, ie, the x- and y-differences between the contact lens and pupil centers, was objectively determined. Third-order aberrations were measured and compared. Visual performance (high- and low-contrast acuities and several subjective variables) was analyzed for any associations (Pearson's correlation, r) with MFCL decentration. Results: A total of 17 presbyopic (55.1±6.9 years) and eight non-presbyopic (31.0±3.3 years) participants completed the study. All lenses displayed a temporal–inferior decentration (x=-0.36±0.29 mm, y=-0.28±0.28 mm, mean ± SD). Compared to the control, a significant inferior decentration was found for the Proclear® MFCL Near lens in both groups (y_presbyopic =-0.26 mm, y_non-presbyopic =-0.70 mm) and for the Proclear® MFCL Distance lens in the non-presbyopic group (y_non-presbyopic =-0.69 mm). In both groups, lens-induced vertical coma (C(3, -1)) was, by at least tenfold, significantly more positive for the Proclear® MFCL Distance lens and significantly more negative for the Proclear® MFCL Near lens. In the presbyopic group, the correlation of total MFCL decentration with vision variables was weak (r<|0

  13. Attention, Visual Perception and their Relationship to Sport Performance in Fencing.

    Science.gov (United States)

    Hijazi, Mona Mohamed Kamal

    2013-12-18

    Attention and visual perception are important in fencing, as they affect the levels of performance and achievement in fencers. This study identifies the levels of attention and visual perception among male and female fencers and the relationship between attention and visual perception dimensions and sport performance in fencing. The researcher employed a descriptive method in a sample of 16 fencers during the 2010/2011 season. The sample was comprised of eight males and eight females who participated in the 11-year stage of the Cairo Championships. The Test of Attentional and Interpersonal Style, which was designed by Nideffer and translated by Allawi (1998), was applied. The test consisted of 59 statements that measured seven dimensions. The Test of Visual Perception Skills designed by Alsmadune (2005), which includes seven dimensions, was also used. Among females, a positive and statistically significant correlation between the achievement level and Visual Discrimination, Visual-Spatial Relationships, Visual Sequential Memory, Narrow Attentional Focus and Information Processing was observed, while among males, there was a positive and statistically significant correlation between the achievement level and Visual Discrimination, Visual Sequential Memory, Broad External Attentional Focus and Information Processing. For both males and females, a positive and statistically significant correlation between achievement level and Visual Discrimination, Visual Sequential Memory, Broad External Attentional Focus, Narrow Attentional Focus and Information Processing was found. There were statistically significant differences between males and females in Visual Discrimination and Visual-Form Constancy.

  14. Modeling and visualizing borehole information on virtual globes using KML

    Science.gov (United States)

    Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing

    2014-01-01

    Advances in virtual globes and Keyhole Markup Language (KML) are providing the Earth scientists with the universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts and tube models representing strata. Subsequently, the level-of-detail based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, is developed to automatically convert borehole data into KML documents. A case study of using Borehole2KML to create borehole models in Shanghai shows that the modeling method is applicable to visualize, integrate and disseminate borehole information on the Internet. The method we have developed has potential use in societal service of geological information.
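    Borehole2KML is not publicly documented beyond this abstract; as a loose illustration of the conversion idea, the sketch below writes the coarsest level of detail (drilling-location placemarks) from a toy borehole table into a KML document using only the Python standard library. The field names and coordinates are invented.

```python
# Hypothetical sketch: write borehole drilling locations (the coarsest LOD) to KML.
from xml.sax.saxutils import escape

# Toy borehole records: (identifier, longitude, latitude, ground elevation in metres).
boreholes = [
    ("BH-001", 121.47, 31.23, 4.2),
    ("BH-002", 121.49, 31.24, 3.8),
]

placemarks = []
for name, lon, lat, elev in boreholes:
    placemarks.append(f"""    <Placemark>
      <name>{escape(name)}</name>
      <description>Ground elevation: {elev} m</description>
      <Point>
        <altitudeMode>clampToGround</altitudeMode>
        <coordinates>{lon},{lat},{elev}</coordinates>
      </Point>
    </Placemark>""")

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <name>Borehole drilling locations</name>
{chr(10).join(placemarks)}
  </Document>
</kml>"""

# The resulting file can be loaded into a virtual globe application for 3D viewing.
with open("boreholes.kml", "w", encoding="utf-8") as fh:
    fh.write(kml)
```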

  15. Human Factors Assessment of Vibration Effects on Visual Performance During Launch

    Science.gov (United States)

    Holden, Kritina

    2009-01-01

    The Human Factors Assessment of Vibration Effects on Visual Performance During Launch (Visual Performance) investigation will determine visual performance limits during operational vibration and g-loads on the Space Shuttle, specifically through the determination of minimum readable font size during ascent using planned Orion display formats. Research Summary: The aim of the Human Factors Assessment of Vibration Effects on Visual Performance during Launch (Visual Performance) investigation is to provide supplementary data to that collected by the Thrust Oscillation Seat Detailed Technical Objective (DTO) 695 (Crew Seat DTO), which will measure seat acceleration and vibration from one flight deck and two middeck seats during ascent. While the Crew Seat DTO data alone are important in terms of providing a measure of vibration and g-loading, human performance data are required to fully interpret the operational consequences of the vibration values collected during Space Shuttle ascent. During launch, crewmembers will be requested to view placards with varying font sizes and indicate the minimum readable size. In combination with the Crew Seat DTO, the Visual Performance investigation will: (1) provide flight-validated evidence that will be used to establish vibration limits for visual performance during combined vibration and linear g-loading; (2) provide flight data as inputs to ongoing ground-based simulations, which will further validate crew visual performance under vibration loading in a controlled environment; and (3) provide vibration and performance metrics to help validate procedures for ground tests and analyses of seats, suits, displays and controls, and human-in-the-loop performance.

  16. The Theory of Visual Attention without the race: a new model of visual selection

    DEFF Research Database (Denmark)

    Andersen, Tobias; Kyllingsbæk, Søren

    2012-01-01

    constrained by a limited processing capacity or rate, which is distributed among target and distractor objects with distractor objects receiving a smaller proportion of resources due to attentional filtering. Encoding into a limited visual short-term memory is implemented as a race model. Given its major...

  17. Visually-salient contour detection using a V1 neural model with horizontal connections

    CERN Document Server

    Loxley, P N

    2011-01-01

    A convolution model which accounts for neural activity dynamics in the primary visual cortex is derived and used to detect visually salient contours in images. Image inputs to the model are modulated by long-range horizontal connections, allowing contextual effects in the image to determine visual saliency, i.e. line segments arranged in a closed contour elicit a larger neural response than line segments forming background clutter. The model is tested on 3 types of contour, including a line, a circular closed contour, and a non-circular closed contour. Using a modified association field to describe horizontal connections the model is found to perform well for different parameter values. For each type of contour a different facilitation mechanism is found. Operating as a feed-forward network, the model assigns saliency by increasing the neural activity of line segments facilitated by the horizontal connections. Alternatively, operating as a feedback network, the model can achieve further improvement over sever...
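    As a heavily simplified sketch of the facilitation idea, the code below collapses the orientation-dependent association field into a single elongated kernel and boosts the response of line segments that have collinear neighbours, so a straight contour ends up more salient than isolated background clutter. The kernel shape, gain, and toy stimulus are assumptions, not the model's actual parameters.

```python
# Simplified sketch: boost collinear line segments ("contours") over clutter by
# convolving edge responses with an elongated facilitation kernel standing in
# for the orientation-dependent association field / horizontal connections.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

# Toy edge-response map: a horizontal contour embedded in random clutter.
edges = np.zeros((64, 64))
edges[32, 10:54] = 1.0                                   # the contour
clutter = rng.random((64, 64)) < 0.02
edges[clutter] = 1.0                                     # isolated background segments

# Elongated Gaussian kernel: strong facilitation along the collinear direction.
y, x = np.mgrid[-7:8, -7:8]
kernel = np.exp(-(x**2 / (2 * 6.0**2) + y**2 / (2 * 1.0**2)))
kernel[7, 7] = 0.0                                       # no self-facilitation

# Contextual modulation: each unit's response is amplified by collinear neighbours.
facilitation = convolve2d(edges, kernel, mode="same")
saliency = edges * (1.0 + 0.5 * facilitation)

print("mean saliency on the contour :", saliency[32, 10:54].mean().round(2))
print("mean saliency of clutter     :", saliency[clutter].mean().round(2))
```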

  18. Brain activation during visual working memory correlates with behavioral mobility performance in older adults

    Directory of Open Access Journals (Sweden)

    Toshikazu eKawagoe

    2015-09-01

    Functional mobility and cognitive function often decline with age. We previously found that functional mobility as measured by the Timed Up and Go Test (TUG) was associated with cognitive performance for visually-encoded (i.e., location and face) working memory (WM) in older adults. This suggests a common neural basis between TUG and visual WM. To elucidate this relationship further, the present study aimed to examine the neural basis for the WM-mobility association. In accordance with the well-known neural compensation model in aging, we hypothesized that attentional brain activation for easy WM would increase in participants with lower mobility. The data from 32 healthy older adults were analyzed, including brain activation during easy WM tasks via functional magnetic resonance imaging and mobility performance via both TUG and a simple walking test. WM performance was significantly correlated with TUG but not with simple walking. Some prefrontal brain activations during WM were negatively correlated with TUG performance, while positive correlations were found in subcortical structures including the thalamus, putamen and cerebellum. Moreover, activation of the subcortical regions was significantly correlated with WM performance, with less activation for lower WM performers. These results indicate that older adults with lower mobility used more cortical (frontal) and fewer subcortical resources for easy WM tasks. To date, frontal compensation has been proposed separately in the motor and cognitive domains, where it has been assumed to compensate for dysfunction of other brain areas; however, such dysfunction was less clear in previous studies. The present study observed such dysfunction as degraded activation associated with lower performance, which was found in the subcortical regions. We conclude that a common dysfunction-compensation activation pattern is likely the neural basis for the association between visual WM and functional

  19. Statistical modeling of program performance

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    The task of evaluating program performance often arises in the design of computer systems or during iterative compilation. A traditional way to solve this problem is emulation of program execution on the target system. A modern alternative approach to evaluating program performance is based on statistical modeling of program performance on the computer under investigation. This work introduces Velocitas, a statistical method for modeling program performance, and presents the method and its implementation in the Adaptor framework. An investigation of the method's effectiveness showed that it predicts program performance with high accuracy.

  20. Mathematical modeling and visualization of functional neuroimages

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup

    This dissertation presents research results regarding mathematical modeling in the context of the analysis of functional neuroimages. Specifically, the research focuses on pattern-based analysis methods that recently have become popular within the neuroimaging community. Such methods attempt...... to predict or decode experimentally defined cognitive states based on brain scans. The topics covered in the dissertation are divided into two broad parts: The first part investigates the relative importance of model selection on the brain patterns extracted form analysis models. Typical neuroimaging data...... sets are characterized by relatively few data observations in a high dimensional space. The process of building models in such data sets often requires strong regularization. Often, the degree of model regularization is chosen in order to maximize prediction accuracy. We focus on the relative influence...

  1. Mathematical modeling and visualization of functional neuroimages

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup

    2012-01-01

    This dissertation presents research results regarding mathematical modeling in the context of the analysis of functional neuroimages. Specifically, the research focuses on pattern-based analysis methods that recently have become popular analysis tools within the neuroimaging community. Such methods...... attempt to predict or decode experimentally defined cognitive states based on brain scans. The topics covered in the dissertation are divided into two broad parts: The first part investigates the relative importance of model selection on the brain patterns extracted form analysis models. Typical...... neuroimaging data sets are characterized by relatively few data observations in a high dimensional space. The process of building models in such data sets often requires strong regularization. Often, the degree of model regularization is chosen in order to maximize prediction accuracy. We focus on the relative...

  2. A model of visual, aesthetic communication focusing on web sites

    DEFF Research Database (Denmark)

    Thorlacius, Lisbeth

    2002-01-01

    design. With a point of departure in Roman Jakobson's linguistic communication model, the reader is introduced to a model which covers the communication aspects, the visual aspects, the aesthetic aspects and the net-specific aspects of the analysis of media products. The aesthetic aspects rank low......Theory books and method books within the field of web design mainly focus on the technical and functional aspects of the construction of web design. There is a lack of a model which weighs the analysis of the visual and aesthetic aspects against the functional and technical aspects of web...

  3. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on the specific objects for static cameras and backgrounds; relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
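    A minimal sketch of the final regression step, assuming made-up per-shot factor scores (spatial structure, motion scale, comfort-zone violation) and subjective fatigue ratings; ordinary least squares with an intercept is one way to realize a multiple linear regression combined with subjective evaluation. None of the numbers or factor definitions come from the paper.

```python
# Hypothetical sketch: fit a multiple linear regression that predicts a subjective
# visual-fatigue score from per-shot factors of a stereoscopic video.
import numpy as np

# Columns: spatial-structure factor, motion-scale factor, comfort-zone violation.
factors = np.array([
    [0.2, 0.1, 0.0],
    [0.5, 0.4, 0.1],
    [0.7, 0.8, 0.5],
    [0.3, 0.6, 0.2],
    [0.9, 0.9, 0.8],
    [0.1, 0.2, 0.1],
])
subjective_fatigue = np.array([1.2, 2.0, 3.6, 2.4, 4.5, 1.1])   # e.g., 1-5 ratings

# Ordinary least squares with an intercept term.
X = np.column_stack([np.ones(len(factors)), factors])
coeffs, *_ = np.linalg.lstsq(X, subjective_fatigue, rcond=None)
print("intercept and factor weights:", np.round(coeffs, 3))

# Predict the fatigue degree of a new shot from its factor scores.
new_shot = np.array([1.0, 0.6, 0.7, 0.4])   # leading 1.0 is the intercept term
print("predicted fatigue score:", round(float(new_shot @ coeffs), 2))
```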

  4. High-speed visual feedback for realizing high-performance robotic manipulation

    Science.gov (United States)

    Huang, S.; Bergström, N.; Yamakawa, Y.; Senoo, T.; Ishikawa, M.

    2017-02-01

    High-speed vision sensing is becoming a driving factor in developing new methods for robotic manipulation. In this paper we present two such methods in order to realize high-performance manipulation. First, we present a dynamic compensation approach which aims to achieve simultaneously fast and accurate positioning under various uncertainties (from the system itself to the external environment). Second, a high-speed motion strategy for manipulating flexible objects is introduced to address the issue of deformation uncertainties. Both methods rely on high-speed visual feedback and are model independent, which we believe is essential to ensure good flexibility in a wide range of applications. The high-speed visual feedback tracks the relative error between the working tool and the target in image coordinates, which implies that there is no need for accurate calibrations of the vision system. Tasks for validating these methods were implemented and experimental results are provided to illustrate the effectiveness of the proposed methods.

  5. Task-specific visual cues for improving process model understanding

    NARCIS (Netherlands)

    Petrusel, Razvan; Mendling, Jan; Reijers, Hajo A.

    2016-01-01

    Context: Business process models support various stakeholders in managing business processes and designing process-aware information systems. In order to make effective use of these models, they have to be readily understandable. Objective: Prior research has emphasized the potential of visual cues to

  6. Dynamic peripheral visual performance relates to alpha activity in soccer players

    Science.gov (United States)

    Nan, Wenya; Migotina, Daria; Wan, Feng; Lou, Chin Ian; Rodrigues, João; Semedo, João; Vai, Mang I; Pereira, Jose Gomes; Melicio, Fernando; Da Rosa, Agostinho C.

    2014-01-01

    Many studies have demonstrated the relationship between the alpha activity and the central visual ability, in which the visual ability is usually assessed through static stimuli. Besides static circumstance, however in the real environment there are often dynamic changes and the peripheral visual ability in a dynamic environment (i.e., dynamic peripheral visual ability) is important for all people. So far, no work has reported whether there is a relationship between the dynamic peripheral visual ability and the alpha activity. Thus, the objective of this study was to investigate their relationship. Sixty-two soccer players performed a newly designed peripheral vision task in which the visual stimuli were dynamic, while their EEG signals were recorded from Cz, O1, and O2 locations. The relationship between the dynamic peripheral visual performance and the alpha activity was examined by the percentage-bend correlation test. The results indicated no significant correlation between the dynamic peripheral visual performance and the alpha amplitudes in the eyes-open and eyes-closed resting condition. However, it was not the case for the alpha activity during the peripheral vision task: the dynamic peripheral visual performance showed significant positive inter-individual correlations with the amplitudes in the alpha band (8–12 Hz) and the individual alpha band (IAB) during the peripheral vision task. A potential application of this finding is to improve the dynamic peripheral visual performance by up-regulating alpha activity using neuromodulation techniques. PMID:25426058

  7. Reality monitoring performance and the role of visual imagery in visual hallucinations.

    Science.gov (United States)

    Aynsworth, Charlotte; Nemat, Nazik; Collerton, Daniel; Smailes, David; Dudley, Robert

    2017-10-01

    Auditory Hallucinations may arise from people confusing their own inner speech with external spoken speech. People with visual hallucinations (VH) may similarly confuse vivid mental imagery with external events. This paper reports two experiments exploring confusion between internal and external visual material. Experiment 1 examined reality monitoring in people with psychosis; those with visual hallucinations (n = 16) and those without (n = 15). Experiment 2 used two non-clinical groups of people with high or low predisposition to VH (HVH, n = 26, LVH, n = 21). All participants completed the same reality monitoring task. Participants in Experiment 2 also completed measures of imagery. Psychosis patients with VH demonstrated biased reality monitoring, where they misremembered items that had been presented as words as having been presented as pictures. Patients without VH did not show this bias. In Experiment 2, the HVH group demonstrated the same bias in reality monitoring that psychosis patients with VH had shown. The LVH group did not show this bias. In addition, the HVH group reported more vivid imagery and particularly more negative imagery. Both studies found that people with visual hallucinations or prone-ness to such experiences confused their inner visual experiences with external images. Vivid imagery was also related to proneness to VH. Hence, vivid imagery and reality monitoring confusion could be contributory factors to understanding VH. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Visualizations and Mental Models - The Educational Implications of GEOWALL

    Science.gov (United States)

    Rapp, D.; Kendeou, P.

    2003-12-01

    Work in the earth sciences has outlined many of the faulty beliefs that students possess concerning particular geological systems and processes. Evidence from educational and cognitive psychology has demonstrated that students often have difficulty overcoming their naïve beliefs about science. Prior knowledge is often remarkably resistant to change, particularly when students' existing mental models for geological principles may be faulty or inaccurate. Figuring out how to help students revise their mental models to include appropriate information is a major challenge. Up until this point, research has tended to focus on whether 2-dimensional computer visualizations are useful tools for helping students develop scientifically correct models. Research suggests that when students are given the opportunity to use dynamic computer-based visualizations, they are more likely to recall the learned information, and are more likely to transfer that knowledge to novel settings. Unfortunately, 2-dimensional visualization systems are often inadequate representations of the material that educators would like students to learn. For example, a 2-dimensional image of the Earth's surface does not adequately convey particular features that are critical for visualizing the geological environment. This may limit the models that students can construct following these visualizations. GEOWALL is a stereo projection system that has attempted to address this issue. It can display multidimensional static geologic images and dynamic geologic animations in a 3-dimensional format. Our current research examines whether multidimensional visualization systems such as GEOWALL may facilitate learning by helping students to develop more complex mental models. This talk will address some of the cognitive issues that influence the construction of mental models, and the difficulty of updating existing mental models. We will also discuss our current work that seeks to examine whether GEOWALL is an

  9. MODELLING SYNERGISTIC EYE MOVEMENTS IN THE VISUAL FIELD

    Directory of Open Access Journals (Sweden)

    BARITZ Mihaela

    2015-06-01

    Full Text Available Some theoretical and practical considerations about eye movements in the visual field are presented in the first part of this paper. These movements develop in the human body to be synergistic and allow visual perception in 3D space. The theoretical background of the eye movement analysis rests on establishing the movement equations of the eyeball, treated as a rigid body with a fixed point. The external actions and the ordering and execution of the movements are ensured by the neural and muscular system, so the position, stability, and movements of the eye can be quantified through inverse kinematics. The purpose of this research is to develop a simulation model of the human binocular visual system, together with an acquisition methodology and an experimental setup for recording and processing eye movement data, presented in the second part of the paper. The ocular movement modeling system aims to establish binocular synergy and the limits of visual field changes under ocular motor dysfunctions. From the biomechanical movements of the eyeball, a modeling strategy is established for process parameters such as convergence, fixation, and lens accommodation, in order to obtain responses about binocular balance. The results of the modelling process and the positions of the eyeball and its axes in the visual field are presented in the final part of the paper.
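
    For readers unfamiliar with the "rigid body with a fixed point" formulation, the standard kinematic relation that such movement equations build on can be written as follows (a generic textbook relation, not the paper's specific equations):

```latex
% Velocity of a point r on the eyeball, modeled as a rigid body rotating
% about a fixed centre O with angular velocity omega:
\[
  \dot{\mathbf{r}} \;=\; \boldsymbol{\omega}\times\mathbf{r},
  \qquad
  \mathbf{r}(t) \;=\; \mathbf{R}(t)\,\mathbf{r}(0), \quad \mathbf{R}(t)\in SO(3),
\]
% so gaze is fully determined by the rotation R(t) generated by the extraocular
% muscles, and inverse kinematics recovers R(t) from the observed gaze direction.
```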

  10. Visual performance in medical imaging using liquid crystal displays

    Science.gov (United States)

    Tchou, Philip Marcel

    2007-12-01

    This thesis examined the contrast performance of liquid crystal display (LCD) devices for use in medical imaging. Novel experimental methods were used to measure the ability of medical LCD devices to produce just noticeable contrast. It was demonstrated that medical LCD devices are capable of high performance in medical imaging and are suitable for conducting psychovisual research experiments. Novel methods for measuring and controlling the luminance response of an LCD were presented in Chapter 3 and used to develop software tools to apply DICOM GSDF calibrations. Several medical LCD systems were calibrated, demonstrating that the methods can be used to reliably measure luminance and manipulate fine contrast. Chapter 4 reports on a novel method to generate low contrast bi-level bar patterns by using the full palette of available gray values. The method was used in a two alternative forced choice (2AFC) psychovisual experiment to measure the contrast threshold of human observers. Using a z-score analysis method, the results were found to be consistent with the Barten model of contrast sensitivity. Chapter 5 examined error distortion associated with using z-scores. A maximum likelihood estimation (MLE) method was presented as an alternative and was used to reevaluate the results from Chapter 4. The new results were consistent with the Barten model. Simulations were conducted to evaluate the statistical precision of the MLE method in relation to the number and distribution of trials. In Chapter 6, 2AFC tests were conducted examining contrast thresholds for complex sinusoid, white noise, and filtered noise patterns. The sinusoid test results were consistent with the Barten model while the noise patterns required more contrast for visibility. The effects of adaptation were also demonstrated. A noise visibility index (NVI) was introduced to describe noise power weighted by contrast sensitivity. Just noticeable white and filtered noise patterns exhibited similar NVI
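
    The maximum likelihood estimation step for 2AFC data can be sketched as follows; this is a generic Weibull psychometric fit with SciPy, using made-up trial counts, and only illustrates the approach rather than the thesis's exact model or data:

```python
import numpy as np
from scipy.optimize import minimize

def p_correct(contrast, threshold, slope):
    """2AFC Weibull psychometric function: 50% guess rate, ~100% asymptote."""
    return 0.5 + 0.5 * (1.0 - np.exp(-(contrast / threshold) ** slope))

def neg_log_likelihood(params, contrast, n_trials, n_correct):
    threshold, slope = params
    p = np.clip(p_correct(contrast, threshold, slope), 1e-6, 1 - 1e-6)
    return -np.sum(n_correct * np.log(p) + (n_trials - n_correct) * np.log(1 - p))

# Hypothetical trial counts at a few contrast levels (illustration only).
contrast  = np.array([0.002, 0.004, 0.008, 0.016, 0.032])
n_trials  = np.array([40, 40, 40, 40, 40])
n_correct = np.array([22, 25, 31, 38, 40])

fit = minimize(neg_log_likelihood, x0=[0.008, 2.0],
               args=(contrast, n_trials, n_correct),
               bounds=[(1e-4, 0.1), (0.5, 10.0)])
print("estimated contrast threshold:", fit.x[0])
```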

  11. Self-paced model learning for robust visual tracking

    Science.gov (United States)

    Huang, Wenhui; Gu, Jason; Ma, Xin; Li, Yibin

    2017-01-01

    In visual tracking, learning a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and the frequency of model updating, which contains many details that could affect the tracking results. Self-paced learning (SPL) has recently been attracting considerable interest in the fields of machine learning and computer vision. SPL is inspired by the learning principle underlying the cognitive process of humans, whose learning process is generally from easier samples to more complex aspects of a task. We propose a tracking method that integrates the learning paradigm of SPL into visual tracking, so reliable samples can be automatically selected for model learning. In contrast to many existing model learning strategies in visual tracking, we discover the missing link between sample selection and model learning, which are combined into a single objective function in our approach. Sample weights and model parameters can be learned by minimizing this single objective function. Additionally, to solve the real-valued learning weight of samples, an error-tolerant self-paced function that considers the characteristics of visual tracking is proposed. We demonstrate the robustness and efficiency of our tracker on a recent tracking benchmark data set with 50 video sequences.
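
    As background, the classic self-paced learning scheme alternates between selecting "easy" samples and refitting the model while a pace parameter grows; the sketch below shows that binary-weight variant on a toy regression (the paper itself uses a real-valued, error-tolerant self-paced function tailored to tracking, which is not reproduced here):

```python
import numpy as np

def self_paced_least_squares(X, y, lam=1.0, mu=1.3, iters=20):
    """Alternate between (1) selecting 'easy' samples whose loss < lam and
    (2) refitting the model on the selected samples; lam grows so that harder
    samples are admitted gradually (a binary-weight SPL sketch)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        losses = (X @ w - y) ** 2
        v = (losses < lam).astype(float)          # sample weights (easy = 1)
        if v.sum() == 0:                          # nothing selected yet
            lam *= mu
            continue
        Xw = X * v[:, None]
        w = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)[0]  # weighted refit
        lam *= mu                                 # raise the pace
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
y[:10] += 5.0                                     # a few 'hard'/corrupted samples
print(self_paced_least_squares(X, y))
```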

  12. A biologically inspired neural model for visual and proprioceptive integration including sensory training.

    Science.gov (United States)

    Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi

    2013-12-01

    Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian inference for a single cause (source) and on causal Bayesian inference for two causes (for two senses, such as the visual and auditory systems). In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. The model is based on population coding, which is able to mimic the multisensory integration performed by neural centers in the human brain. The simulation results agree with those obtained by causal Bayesian inference. The model can also simulate the sensory training process for visual and proprioceptive information in humans; this training process has received little attention in the multisensory-integration literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of both modalities, i.e., visual and proprioceptive training, and compares them through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioception training and eight subjects with visual training. The results show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) the mean visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrease is significant for the proprioceptive error and non-significant for the visual error; and (3) visual errors in the training phase, even at its beginning, are much lower than the errors of the main test stage, because in the main test the subject has to attend to two senses. The experimental results of this paper are in agreement with the results of the neural model
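
    For reference, the classical single-cause Bayesian (maximum-likelihood) integration that such recurrent models are compared against combines the two unimodal estimates weighted by their reliabilities; in generic notation (not the paper's own symbols):

```latex
% Fusion of a visual estimate x_v and a proprioceptive estimate x_p of the same
% hand position (one common cause, independent Gaussian noise):
\[
  \hat{x} \;=\; w_v x_v + w_p x_p,
  \qquad
  w_v = \frac{1/\sigma_v^{2}}{1/\sigma_v^{2} + 1/\sigma_p^{2}},
  \quad
  w_p = 1 - w_v,
  \qquad
  \sigma_{\hat{x}}^{2} \;=\; \Bigl(\tfrac{1}{\sigma_v^{2}} + \tfrac{1}{\sigma_p^{2}}\Bigr)^{-1}.
\]
```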

  13. Method and apparatus for modeling, visualization and analysis of materials

    KAUST Repository

    Aboulhassan, Amal

    2016-08-25

    A method, apparatus, and computer readable medium are provided for modeling of materials and visualization of properties of the materials. An example method includes receiving data describing a set of properties of a material, and computing, by a processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling, by the processor, the material using the geometric features and the extracted particle paths. The example method further includes generating, by the processor and based on the geometric modeling of the material, one or more visualizations regarding the material, and causing display, by a user interface, of the one or more visualizations.

  14. Firm Sustainability Performance Index Modeling

    Directory of Open Access Journals (Sweden)

    Che Wan Jasimah Bt Wan Mohamed Radzi

    2015-12-01

    Full Text Available The main objective of this paper is to develop a model for a firm sustainability performance index by applying both classical and Bayesian structural equation modeling (parametric and semi-parametric modeling). Both techniques are applied to research data collected through a survey of the food manufacturing industry in China, Taiwan, and Malaysia. To estimate the firm sustainability performance index, we consider three main indicators: knowledge management, organizational learning, and business strategy. Based on both the Bayesian and the classical methodology, we confirmed that knowledge management and business strategy have a significant impact on the firm sustainability performance index.

  15. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    Science.gov (United States)

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. Eye

  16. Visual prosthesis wireless energy transfer system optimal modeling.

    Science.gov (United States)

    Li, Xueping; Yang, Yuan; Gao, Yong

    2014-01-16

    A wireless energy transfer system is an effective way to solve the energy supply problem of a visual prosthesis, and theoretical modeling of the system is a prerequisite for optimal system design. Starting from the ideal model of the wireless energy transfer system, the model is optimized for the visual prosthesis application conditions. In the optimized model, planar spiral coils are taken as the coupling devices between the energy transmitter and receiver, the effect of the parasitic capacitance of the transfer coil is considered, and the concept of biological capacitance is proposed to account for the influence of biological tissue on the energy transfer efficiency, making the optimized model more accurate for the actual application. Simulation data from the optimized model are compared with those of the previous ideal model; the results show that under high-frequency conditions, the parasitic capacitance of the inductor and the biological capacitance considered in the optimized model can have a large impact on the wireless energy transfer system. A further comparison with experimental data verifies the validity and accuracy of the proposed optimized model. The optimized model provides stronger theoretical guidance for further research on wireless energy transfer systems and a more precise reference model for solving the power supply problem in clinical applications of visual prostheses.
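
    A simplified way to see why the parasitic and biological capacitances matter is their effect on the resonant frequency of each coupling coil; treating them as additional parallel capacitance (a deliberate simplification, not the paper's full circuit model) gives:

```latex
% Resonant frequency of one coupling coil with tuning capacitance C, parasitic
% capacitance C_par of the planar spiral, and tissue-related capacitance C_bio,
% all treated as parallel capacitances (a simplification):
\[
  f_0 \;=\; \frac{1}{2\pi\sqrt{L\,\bigl(C + C_{\mathrm{par}} + C_{\mathrm{bio}}\bigr)}},
\]
% so at high operating frequencies even small C_par and C_bio detune the link
% and change the transfer efficiency unless they are included in the model.
```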

  17. Change and Stability: Examining the Macrostructures of Doctoral Theses in the Visual and Performing Arts

    Science.gov (United States)

    Paltridge, Brian; Starfield, Sue; Ravelli, Louise J.; Tuckwell, Kathryn

    2012-01-01

    This article describes an investigation into the practice-based doctorate in the visual and performing arts, a genre that is still in the process of development. A key feature of these doctorates is that they comprise two components: a visual or performance component, and a written text which accompanies it which in some ways is similar to, but in…

  18. Mapping Disciplinary Values and Rhetorical Concerns through Language: Writing Instruction in the Performing and Visual Arts

    Science.gov (United States)

    Cox, Anicca

    2015-01-01

    Via interview data focused on instructor practices and values, this study sought to describe some of what performing and visual arts instructors do at the university level to effectively teach disciplinary values through writing. The study's research goals explored how relationships to writing process in visual and performing arts support…

  19. Comparison of Reading Performance between Visually Impaired and Normally Sighted Students in Malaysia

    Science.gov (United States)

    Mohammed, Zainora; Omar, Rokiah

    2011-01-01

    The aim of this study is to compare reading performance between visually impaired and normally sighted school children. Participants (n = 299) were divided into three groups: normal vision (NV, n = 193), visually impaired print reader (PR, n = 52), and Braille reader (BR, n = 54). Reading performance was determined by measuring reading rate and…

  20. The role of the visual hardware system in rugby performance ...

    African Journals Online (AJOL)

    This study explores the importance of the 'hardware' factors of the visual system in the game of rugby. A group of professional and club rugby players were tested and the results compared. The results were also compared with the established norms for elite athletes. The findings indicate no significant difference in hardware ...

  1. Visual Effects in the High Performance Aircraft Cockpit

    Science.gov (United States)

    1988-04-01

    Vision is the key sensory mode by which a pilot receives the vast majority of the information required to successfully fly the aircraft. External light wedges, or bezels (see Figure 18), are sometimes mounted over the most important instruments.

  2. Temporal perceptual coding using a visual acuity model

    Science.gov (United States)

    Adzic, Velibor; Cohen, Robert A.; Vetro, Anthony

    2014-02-01

    This paper describes research and results in which a visual acuity (VA) model of the human visual system (HVS) is used to reduce the bitrate of coded video sequences, by eliminating the need to signal transform coefficients when their corresponding frequencies will not be detected by the HVS. The VA model is integrated into the state-of-the-art HEVC HM codec. Compared to the unmodified codec, up to 45% bitrate savings are achieved while maintaining the same subjective quality of the video sequences. Encoding times are reduced as well.
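
    The underlying idea of not signalling coefficients whose frequencies the viewer cannot resolve can be illustrated outside the codec with a toy block-DCT sketch; the fixed cutoff index below is a hypothetical stand-in for the frequency limit a VA model would predict, not the paper's HEVC integration:

```python
import numpy as np
from scipy.fft import dctn, idctn

def suppress_invisible(block, cutoff):
    """Zero DCT coefficients whose combined frequency index exceeds `cutoff`,
    i.e. frequencies assumed to be undetectable by the viewer, so the encoder
    would not need to signal them."""
    coeffs = dctn(block, norm="ortho")
    u, v = np.meshgrid(np.arange(block.shape[0]), np.arange(block.shape[1]),
                       indexing="ij")
    coeffs[u + v > cutoff] = 0.0
    return idctn(coeffs, norm="ortho")

block = np.random.default_rng(1).integers(0, 256, size=(8, 8)).astype(float)
approx = suppress_invisible(block, cutoff=6)
print(np.abs(block - approx).mean())   # distortion introduced by the suppression
```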

  3. Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.

    Science.gov (United States)

    Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin

    2018-01-01

    We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.

  4. Combined visual and dribbling performance in young soccer players of different expertise.

    Science.gov (United States)

    Bekris, Evangelos; Gissis, Ioannis; Ispyrlidis, Ioannis; Mylonis, Eleftherios; Axeti, Georgia

    2018-01-01

    We aimed to evaluate dribbling performance in terms of technique and visual skills assessment of both young experienced (EX, n = 24) and novice (NO, n = 24) soccer players. Both groups performed two dribbling tests with four levels of difficulty in visual signals (A1-A4 and B1-B4; B - half distance of A; 1 - no visual signal; 4 - signal with the shortest flashing time). All players performed slower when visual signals were added to the testing process (by ~2.5 s), and the number of visual mistakes was significantly lower for EX than for NO in all tests, indicating differences in the processing of visual stimuli between young soccer EX and NO players.

  5. An Architectural Model of Visual Motion Understanding

    Science.gov (United States)

    1989-08-01

    The idea of competition for ownership of evidence, which is the essence of feature binding, has occurred in a few other contexts (Levitt, 1986). [Levitt, 1986] Tod S. Levitt, "Model-based Probabilistic Situation Inference in Hierarchical Hypothesis Spaces."

  6. Experimental validation of a Bayesian model of visual acuity.

    LENUS (Irish Health Repository)

    Dalimier, Eugénie

    2009-01-01

    Based on standard procedures used in optometry clinics, we compare measurements of visual acuity for 10 subjects (11 eyes tested) in the presence of natural ocular aberrations and different degrees of induced defocus, with the predictions given by a Bayesian model customized with aberrometric data of the eye. The absolute predictions of the model, without any adjustment, show good agreement with the experimental data, in terms of correlation and absolute error. The efficiency of the model is discussed in comparison with image quality metrics and other customized visual process models. An analysis of the importance and customization of each stage of the model is also given; it stresses the potential high predictive power from precise modeling of ocular and neural transfer functions.

  7. Modeling DNA structure and processes through animation and kinesthetic visualizations

    Science.gov (United States)

    Hager, Christine

    There have been many studies regarding the effectiveness of visual aids that go beyond that of static illustrations. Many of these have been concentrated on the effectiveness of visual aids such as animations and models or even non-traditional visual aid activities like role-playing activities. This study focuses on the effectiveness of three different types of visual aids: models, animation, and a role-playing activity. Students used a modeling kit made of Styrofoam balls and toothpicks to construct nucleotides and then bond nucleotides together to form DNA. Next, students created their own animation to depict the processes of DNA replication, transcription, and translation. Finally, students worked in teams to build proteins while acting out the process of translation. Students were given a pre- and post-test that measured their knowledge and comprehension of the four topics mentioned above. Results show that there was a significant gain in the post-test scores when compared to the pre-test scores. This indicates that the incorporated visual aids were effective methods for teaching DNA structure and processes.

  8. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful ... Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical ...
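
    Assuming the GP here denotes a Gaussian process, a minimal probabilistic classifier of the kind compared in such studies can be sketched with scikit-learn on synthetic stand-in ratios (illustrative only; not the authors' data, features, or pipeline):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
# Synthetic stand-ins for two financial ratios; label 1 = bankrupt.
X = rng.normal(size=(300, 2))
y = (X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gpc.fit(X[:200], y[:200])

# The GP returns class probabilities, which is what gives the probabilistic
# interpretation referred to above, rather than only hard labels.
proba = gpc.predict_proba(X[200:])[:, 1]
print("mean predicted bankruptcy probability:", proba.mean())
```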

  9. Psychological Correlates of a Model of the Human Visual System

    Science.gov (United States)

    A model of the human visual system is investigated for psychological correlates. A priori hypotheses from the model concerned with human...identification of defocused letters as well as identification of rotated letters have been validated with the computer model. Gestalt principles of similarity...is also psychologically correlated. It is further postulated that the human perceptual space is the image domain from spatially filtered transforms of

  10. Towards utilizing GPUs in information visualization: a model and implementation of image-space operations.

    Science.gov (United States)

    McDonnel, Bryan; Elmqvist, Niklas

    2009-01-01

    Modern programmable GPUs represent a vast potential in terms of performance and visual flexibility for information visualization research, but surprisingly few applications even begin to utilize this potential. In this paper, we conjecture that this may be due to the mismatch between the high-level abstract data types commonly visualized in our field, and the low-level floating-point model supported by current GPU shader languages. To help remedy this situation, we present a refinement of the traditional information visualization pipeline that is amenable to implementation using GPU shaders. The refinement consists of a final image-space step in the pipeline where the multivariate data of the visualization is sampled in the resolution of the current view. To concretize the theoretical aspects of this work, we also present a visual programming environment for constructing visualization shaders using a simple drag-and-drop interface. Finally, we give some examples of the use of shaders for well-known visualization techniques.
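
    The final image-space step, sampling the multivariate data at the resolution of the current view, can be mimicked on the CPU with plain NumPy (the paper implements this with GPU shaders; the buffer size and single attribute below are arbitrary choices for illustration):

```python
import numpy as np

def sample_to_view(points, values, width=640, height=480):
    """Rasterise data items into a view-resolution buffer: each pixel stores the
    mean attribute value of the items that fall inside it (0 where empty)."""
    cols = np.clip((points[:, 0] * width).astype(int), 0, width - 1)
    rows = np.clip((points[:, 1] * height).astype(int), 0, height - 1)
    acc = np.zeros((height, width))
    cnt = np.zeros((height, width))
    np.add.at(acc, (rows, cols), values)
    np.add.at(cnt, (rows, cols), 1)
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

pts = np.random.default_rng(2).random((10_000, 2))    # normalised screen coords
vals = np.random.default_rng(3).random(10_000)        # one multivariate attribute
buffer = sample_to_view(pts, vals)
print(buffer.shape, buffer.max())
```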

  11. Modeling the Visual and Linguistic Importance of Objects

    Directory of Open Access Journals (Sweden)

    Moreno Ignazio Coco

    2012-05-01

    Full Text Available Previous work measuring the visual importance of objects has shown that only spatial information, such as object position and size, is predictive of importance, whilst low-level visual information, such as saliency, is not (Spain and Perona 2010, IJCV 91, 59–76). Objects are not important solely on the basis of their appearance. Rather, they are important because of their contextual information (eg, a pen in an office versus in a bathroom), which is needed in tasks requiring cognitive control (eg, visual search; Henderson 2007, PsySci 16 219–222). Given that most visual objects have a linguistic counterpart, their importance depends also on linguistic information, especially in tasks where language is actively involved—eg, naming. In an eye-tracking naming study, where participants are asked to name 5 objects in a scene, we investigated how visual saliency, contextual features, and linguistic information of the mentioned objects predicted their importance. We measured object importance based on the urn model of Spain and Perona (2010) and estimated the predictive role of visual and linguistic features using different regression frameworks: LARS (Efron et al 2004, Annals of Statistics 32 407–499) and LME (Baayen et al 2008, JML 59, 390–412). Our results confirmed the role of spatial information in predicting object importance, and in addition, we found effects of saliency. Crucially to our hypothesis, we demonstrated that the lexical frequency of objects and their contextual fit in the scene significantly contributed to object importance.
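
    A minimal version of the LARS regression step might look like the following, with synthetic stand-ins for the visual and linguistic predictors named above (feature names and data are invented for illustration, not the study's variables):

```python
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(4)
n = 500
# Hypothetical per-object predictors: size, centrality, saliency, log lexical
# frequency, and contextual fit.
X = rng.normal(size=(n, 5))
importance = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2]
              + 0.25 * X[:, 3] + 0.2 * X[:, 4]
              + rng.normal(scale=0.3, size=n))

model = Lars(n_nonzero_coefs=5).fit(X, importance)
print(dict(zip(["size", "centrality", "saliency", "log_freq", "context_fit"],
               model.coef_.round(2))))
```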

  12. Composite performance and dependability modelling

    NARCIS (Netherlands)

    Trivedi, Kishor S.; Muppala, Jogesh K.; Woolet, Steven P.; Haverkort, Boudewijn R.H.M.

    1992-01-01

    Composite performance and dependability analysis is gaining importance in the design of complex, fault-tolerant systems. Markov reward models are most commonly used for this purpose. In this paper, an introduction to Markov reward models including solution techniques and application examples is
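
    For readers new to the formalism, the basic composite measures derived from a Markov reward model are the expected instantaneous and accumulated reward (generic textbook notation, not specific to this paper):

```latex
% State i of the Markov chain carries a reward rate r_i; pi_i(t) is its
% transient state probability.
\[
  E[X(t)] \;=\; \sum_{i} r_i\,\pi_i(t)
  \qquad \text{(expected instantaneous reward, e.g. current performance level),}
\]
\[
  E[Y(t)] \;=\; \sum_{i} r_i \int_{0}^{t} \pi_i(u)\,\mathrm{d}u
  \qquad \text{(expected reward accumulated over } [0,t]\text{).}
\]
```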

  13. Visual performance of acrylic and PMMA intraocular lenses.

    Science.gov (United States)

    Gozum, N; Unal, E Safgonul; Altan-Yaycioglu, R; Gucukoglu, A; Ozgun, C

    2003-03-01

    To evaluate the quality of visual functions after cataract surgery and intraocular lens (IOL) implantation with different lens materials and compare the results with age-matched subjects with clear phakic eyes. Control and pseudophakic groups involved individuals aged between 50 and 75 years, without any accompanying ocular or systemic disease. In all, 50 eyes implanted with foldable acrylic IOLs, and 41 eyes implanted with polymethyl-methacrylate (PMMA) IOLs were compared with 45 phakic eyes as controls. Visual functions were evaluated for contrast sensitivity function and glare disability. The results were compared statistically using one-way analysis of variance (ANOVA). At high luminance levels, the difference among groups for contrast sensitivity was statistically significant for all spatial frequencies (P < 0.05). Glare disability scores were significantly higher in the PMMA-IOL group compared to the control and acrylic-IOL groups. The visual quality achieved in pseudophakic eyes was not as good as in clear phakic eyes in regard to contrast sensitivity and glare. However, acrylic IOLs fared better than PMMA IOLs.

  14. Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning.

    Science.gov (United States)

    Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro

    2014-01-01

    Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1-5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat.

  15. Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning

    Directory of Open Access Journals (Sweden)

    Joyce eBonaccorsi

    2014-07-01

    Full Text Available Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1-5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non invasive procedures based on visual perceptual learning, i.e. the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat.

  16. A blessing, not a curse: experimental evidence for beneficial effects of visual aesthetics on performance.

    Science.gov (United States)

    Moshagen, Morten; Musch, Jochen; Göritz, Anja S

    2009-10-01

    The present experiment investigated the effect of visual aesthetics on performance. A total of 257 volunteers completed a series of search tasks on a website providing health-related information. Four versions of the website were created by manipulating visual aesthetics (high vs. low) and usability (good vs. poor) in a 2 x 2 between-subjects design. Task completion times and error rates were used as performance measures. A main effect of usability on both error rates and completion time was observed. Additionally, a significant interaction of visual aesthetics and usability revealed that high aesthetics enhanced performance under conditions of poor usability. Thus, in contrast to the notion that visual aesthetics may worsen performance, visual aesthetics even compensated for poor usability by speeding up task completion. The practical and theoretical implications of this finding are discussed.

  17. Relationship between Visual Motor Integration and Academic Performance in Elementary School Children

    Directory of Open Access Journals (Sweden)

    KR Banumathe

    2017-05-01

    Full Text Available Objective: To assess the relationship between visual motor integration and academic performance in elementary school children. Method: A cross-sectional study was undertaken on 208 second-standard children from government, government-aided, and private schools. Screening tools were administered to exclude children with visual and auditory deficits, attention deficit hyperactivity disorder, childhood psychiatric symptoms, learning disabilities, and below-average intelligence. The primary measure of visual motor integration was obtained using the Beery-Buktenica Developmental Test of Visual-Motor Integration (VMI). Academic performance was calculated from the mean of all subject marks scored in two consecutive exams and from the teacher's perception of academic performance on a 100-point rating scale. Results & Conclusion: The Pearson product-moment correlation coefficient was used to analyze the relationship. A weak positive correlation was found between visual motor integration and academic performance, which suggests the need for a longitudinal study.

  18. Robust Visual Tracking via Exclusive Context Modeling

    KAUST Repository

    Zhang, Tianzhu

    2015-02-09

    In this paper, we formulate particle filter-based object tracking as an exclusive sparse learning problem that exploits contextual information. To achieve this goal, we propose the context-aware exclusive sparse tracker (CEST) to model particle appearances as linear combinations of dictionary templates that are updated dynamically. Learning the representation of each particle is formulated as an exclusive sparse representation problem, where the overall dictionary is composed of multiple {group} dictionaries that can contain contextual information. With context, CEST is less prone to tracker drift. Interestingly, we show that the popular L₁ tracker [1] is a special case of our CEST formulation. The proposed learning problem is efficiently solved using an accelerated proximal gradient method that yields a sequence of closed form updates. To make the tracker much faster, we reduce the number of learning problems to be solved by using the dual problem to quickly and systematically rank and prune particles in each frame. We test our CEST tracker on challenging benchmark sequences that involve heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that CEST consistently outperforms state-of-the-art trackers.
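
    The closed-form proximal updates mentioned here can be illustrated with the simpler plain-l1 case (ISTA-style soft thresholding); CEST's exclusive group-sparsity prox, context dictionaries, and particle machinery are not reproduced in this sketch:

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(D, y, lam=0.1, iters=200):
    """Minimise 0.5*||D c - y||^2 + lam*||c||_1 by proximal gradient steps:
    a gradient step on the smooth term, then the closed-form soft-thresholding
    prox of the l1 term."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ c - y)
        c = soft_threshold(c - grad / L, lam / L)
    return c

rng = np.random.default_rng(5)
D = rng.normal(size=(50, 20))              # dictionary of appearance templates
y = D[:, 3] * 2.0 + rng.normal(scale=0.05, size=50)   # observed particle patch
print(np.argmax(np.abs(ista(D, y))))       # should recover template index 3
```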

  19. Modeling Visual Symptoms and Visual Skills to Measure Functional Binocular Vision

    Science.gov (United States)

    Powers, M. K.; Fisher, W. P., Jr.; Massof, R. W.

    2016-11-01

    Obtaining a clear image of the world depends on good eye coordination (“binocular vision”). Yet no standard exists by which to determine a threshold for good vs poor binocular vision, as exists for the eye chart and visual acuity. We asked whether data on the signs and symptoms related to binocular vision are sufficiently consistent with children's self-reported visual symptoms to substantiate a construct model of Functional Binocular Vision (FBV), and then whether that model can be used to aggregate clinical and survey observations into a meaningful diagnostic measure. Data on visual symptoms from 1,100 children attending school in Los Angeles were obtained using the Convergence Insufficiency Symptom Survey (CISS); and for more than 300 students in that sample, 35 additional measures were taken, including acuity, cover test near and far, near point of convergence, near point of accommodation, accommodative facility, vergence ranges, tracking ability, and oral reading fluency. A preliminary analysis of data from the 15-item, 5-category CISS and 15 clinical variables from 103 grade school students who reported convergence problems (CISS scores of 16 or higher) suggests that the clinical and survey observations will be optimally combined in a multidimensional model.

  20. Visualization of 3D Geological Models on Google Earth

    Science.gov (United States)

    Choi, Y.; Um, J.; Park, M.

    2013-05-01

    Google Earth combines satellite imagery, aerial photography, thematic maps and various data sets to make a three-dimensional (3D) interactive image of the world. Currently, Google Earth is a popular visualization tool in a variety of fields and plays an increasingly important role not only for private users in daily life, but also for scientists, practitioners, policymakers and stakeholders in research and application. In this study, a method to visualize 3D geological models on Google Earth is presented. COLLAborative Design Activity (COLLADA, an open standard XML schema for establishing interactive 3D applications) was used to represent different 3D geological models such as borehole, fence section, surface-based 3D volume and 3D grid by triangle meshes (a set of triangles connected by their common edges or corners). In addition, we designed Keyhole Markup Language (KML, the XML-based scripting language of Google Earth) codes to import the COLLADA files into the 3D render window of Google Earth. The method was applied to the Grosmont formation in Alberta, Canada. The application showed that the combination of COLLADA and KML enables Google Earth to effectively visualize 3D geological structures and properties.
    [Figure: Visualization of the (a) boreholes, (b) fence sections, (c) 3D volume model, and (d) 3D grid model of the Grosmont formation on Google Earth.]
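
    A minimal example of the kind of KML that loads a COLLADA model into Google Earth is sketched below; the file name and coordinates are placeholders rather than values from the study:

```python
# Minimal KML telling Google Earth to load a COLLADA (.dae) model at a given
# location; geology.dae and the coordinates are placeholders.
KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>3D geological model</name>
    <Model>
      <altitudeMode>relativeToGround</altitudeMode>
      <Location>
        <longitude>-112.80</longitude>
        <latitude>56.70</latitude>
        <altitude>0</altitude>
      </Location>
      <Link><href>geology.dae</href></Link>
    </Model>
  </Placemark>
</kml>
"""

with open("geology.kml", "w", encoding="utf-8") as f:
    f.write(KML)   # open geology.kml in Google Earth to render the model
```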

  1. Functional MRI mapping of visual function and selective attention for performance assessment and presurgical planning using conjunctive visual search.

    Science.gov (United States)

    Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil

    2014-03-01

    Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI.

  2. Ophiucus: RDF-based visualization tool for health simulation models.

    Science.gov (United States)

    Sutcliffe, Andrew; Okhmatovskaia, Anya; Shaban-Nejad, Arash; Buckeridge, David

    2012-01-01

    Simulation modeling of population health is becoming increasingly popular for epidemiology research and public health policy-making. However, the acceptability of population health simulation models is inhibited by their complexity and the lack of established standards to describe these models. To address this issue, we propose Ophiuchus - an RDF (Resource Description Framework: http://www.w3.org/RDF/)-based visualization tool for generating interactive 2D diagrams of population health simulation models, which describe these models in an explicit and formal manner. We present the results of a preliminary system assessment and discuss current limitations of the system.

  3. The Effect of Modeling and Visualization Resources on Student Understanding of Physical Hydrology

    Science.gov (United States)

    Marshall, Jilll A.; Castillo, Adam J.; Cardenas, M. Bayani

    2015-01-01

    We investigated the effect of modeling and visualization resources on upper-division, undergraduate and graduate students' performance on an open-ended assessment of their understanding of physical hydrology. The students were enrolled in one of five sections of a physical hydrology course. In two of the sections, students completed homework…

  4. Visualization of simulated small vessels on computed tomography using a model-based iterative reconstruction technique.

    Science.gov (United States)

    Higaki, Toru; Tatsugami, Fuminari; Fujioka, Chikako; Sakane, Hiroaki; Nakamura, Yuko; Baba, Yasutaka; Iida, Makoto; Awai, Kazuo

    2017-08-01

    This article describes a quantitative evaluation of visualizing small vessels using several image reconstruction methods in computed tomography. Simulated vessels with diameters of 1-6 mm made with a 3D printer were scanned using 320-row detector computed tomography (CT). Hybrid iterative reconstruction (hybrid IR) and model-based iterative reconstruction (MBIR) were performed for the image reconstruction.

  5. Visualization of simulated small vessels on computed tomography using a model-based iterative reconstruction technique

    OpenAIRE

    Toru Higaki; Fuminari Tatsugami; Chikako Fujioka; Hiroaki Sakane; Yuko Nakamura; Yasutaka Baba; Makoto Iida; Kazuo Awai

    2017-01-01

    This article describes a quantitative evaluation of visualizing small vessels using several image reconstruction methods in computed tomography. Simulated vessels with diameters of 1–6 mm made with a 3D printer were scanned using 320-row detector computed tomography (CT). Hybrid iterative reconstruction (hybrid IR) and model-based iterative reconstruction (MBIR) were performed for the image reconstruction.

  6. Visualization of simulated small vessels on computed tomography using a model-based iterative reconstruction technique

    Directory of Open Access Journals (Sweden)

    Toru Higaki

    2017-08-01

    Full Text Available This article describes a quantitative evaluation of visualizing small vessels using several image reconstruction methods in computed tomography. Simulated vessels with diameters of 1–6 mm made with a 3D printer were scanned using 320-row detector computed tomography (CT). Hybrid iterative reconstruction (hybrid IR) and model-based iterative reconstruction (MBIR) were performed for the image reconstruction.

  7. Performance Modeling of Enterprise Grids

    Science.gov (United States)

    Hoffman, Doug L.; Apon, Amy; Dowdy, Larry; Lu, Baochuan; Hamm, Nathan; Ngo, Linh; Bui, Hung

    Modeling has long been recognized as an invaluable tool for predicting the performance behavior of computer systems. Modeling software, both commercial and open source, is widely used as a guide for the development of new systems and the upgrading of existing ones. Tools such as queuing network models, stochastic Petri nets, and event-driven simulation are in common use for stand-alone computer systems and networks. Unfortunately, no set of comprehensive tools exists for modeling complex distributed computing environments such as the ones found in emerging grid deployments. With the rapid advance of grid computing, the need for improved modeling tools specific to the grid environment has become evident. This chapter addresses concepts, methodologies, and tools that are useful when designing, implementing, and tuning the performance of grid and cluster environments
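
    As a reminder of the kind of analytic result such queueing-network tools build on, the elementary M/M/1 single-queue formulas are (textbook relations, not taken from this chapter):

```latex
% Single service station with Poisson arrivals (rate lambda), exponential
% service (rate mu), and utilisation rho = lambda/mu < 1 (M/M/1):
\[
  \bar{N} \;=\; \frac{\rho}{1-\rho},
  \qquad
  \bar{R} \;=\; \frac{1}{\mu - \lambda}
  \qquad \text{(mean number in system and mean response time).}
\]
```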

  8. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2018-02-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  9. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2017-07-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  10. Visualization of RNA structure models within the Integrative Genomics Viewer.

    Science.gov (United States)

    Busan, Steven; Weeks, Kevin M

    2017-07-01

    Analyses of the interrelationships between RNA structure and function are increasingly important components of genomic studies. The SHAPE-MaP strategy enables accurate RNA structure probing and realistic structure modeling of kilobase-length noncoding RNAs and mRNAs. Existing tools for visualizing RNA structure models are not suitable for efficient analysis of long, structurally heterogeneous RNAs. In addition, structure models are often advantageously interpreted in the context of other experimental data and gene annotation information, for which few tools currently exist. We have developed a module within the widely used and well supported open-source Integrative Genomics Viewer (IGV) that allows visualization of SHAPE and other chemical probing data, including raw reactivities, data-driven structural entropies, and data-constrained base-pair secondary structure models, in context with linear genomic data tracks. We illustrate the usefulness of visualizing RNA structure in the IGV by exploring structure models for a large viral RNA genome, comparing bacterial mRNA structure in cells with its structure under cell- and protein-free conditions, and comparing a noncoding RNA structure modeled using SHAPE data with a base-pairing model inferred through sequence covariation analysis. © 2017 Busan and Weeks; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  11. Visual field differences in visual word recognition can emerge purely from perceptual learning: evidence from modeling Chinese character pronunciation.

    Science.gov (United States)

    Hsiao, Janet Hui-Wen

    2011-11-01

    In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority opposite arrangement also exists (PS characters). As the number of phonetic radical types is much greater than semantic radical types, in SP characters the information is skewed to the right, whereas in PS characters it is skewed to the left. Through training a computational model for SP and PS character recognition that takes into account the locations in which the characters appear in the visual field during learning, but does not assume any fundamental hemispheric processing difference, we show that visual field differences can emerge as a consequence of the fundamental structural differences in information between SP and PS characters, as opposed to the fundamental processing differences between the two hemispheres. This modeling result is also consistent with behavioral naming performance. This work provides strong evidence that perceptual learning, i.e., the information structure of word stimuli to which the readers have long been exposed, is one of the factors that accounts for hemispheric asymmetry effects in visual word recognition. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Formation of 17-18 yrs age girl students’ visual performance by means of visual training at stage of adaptation to learning loads

    Directory of Open Access Journals (Sweden)

    Bondarenko S.V.

    2015-04-01

    Full Text Available Purpose: to substantiate the health-related training influence of basketball and volleyball elements on the functional state of first-year students' visual analyzers during the period of adaptation to learning loads with a pronounced visual component. Material: 29 students aged 17-18 years without visual pathologies participated in the experiment. Indicators of visual performance were determined with the Tagayeva correction table and processed by Weston's method. Accommodative function was tested by the method of mechanical proximetry. Results: the authors developed and tested two visual training programs. The influence of visual training on the main components of visual performance (quickness, quality, integral indicators) was studied, as well as the eye's accommodative function (dynamics of the position of the nearest point of clear vision). Conclusions: applying visual training in physical education classes improves indicators of visual analyzer performance and minimizes the negative influence of intensive learning loads on the eye's accommodative function.

  13. Visual Modelling of Data Warehousing Flows with UML Profiles

    Science.gov (United States)

    Pardillo, Jesús; Golfarelli, Matteo; Rizzi, Stefano; Trujillo, Juan

    Data warehousing involves complex processes that transform source data through several stages to deliver suitable information ready to be analysed. Though many techniques for visual modelling of data warehouses from the static point of view have been devised, only few attempts have been made to model the data flows involved in a data warehousing process. Besides, each attempt was mainly aimed at a specific application, such as ETL, OLAP, what-if analysis, data mining. Data flows are typically very complex in this domain; for this reason, we argue, designers would greatly benefit from a technique for uniformly modelling data warehousing flows for all applications. In this paper, we propose an integrated visual modelling technique for data cubes and data flows. This technique is based on UML profiling; its feasibility is evaluated by means of a prototype implementation.

  14. Low-rank and sparse modeling for visual analysis

    CERN Document Server

    Fu, Yun

    2014-01-01

    This book provides a view of low-rank and sparse computing, especially approximation, recovery, representation, scaling, coding, embedding and learning among unconstrained visual data. The book includes chapters covering multiple emerging topics in this new field. It links multiple popular research fields in Human-Centered Computing, Social Media, Image Classification, Pattern Recognition, Computer Vision, Big Data, and Human-Computer Interaction. Contains an overview of the low-rank and sparse modeling techniques for visual analysis by examining both theoretical analysis and real-world applic
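
    The central decomposition behind much of this material can be illustrated compactly: split a data matrix M into a low-rank part L (shared structure) plus a sparse part S (outliers or salient entries). The sketch below is a minimal alternating-thresholding scheme written purely for illustration; it is an assumption standing in for the more sophisticated algorithms covered in the book.

    import numpy as np

    def soft_threshold(X, tau):
        """Entrywise shrinkage operator used for the sparse component."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def low_rank_plus_sparse(M, lam=None, n_iter=50):
        """Split M into a low-rank part L and a sparse part S by alternating
        singular-value thresholding (for L) and soft thresholding (for S)."""
        M = np.asarray(M, dtype=float)
        m, n = M.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        mu = 0.25 * np.abs(M).mean()                 # heuristic thresholding scale
        L = np.zeros_like(M)
        S = np.zeros_like(M)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U * np.maximum(s - mu, 0.0)) @ Vt   # shrink singular values
            S = soft_threshold(M - L, lam * mu)      # shrink entrywise residuals
        return L, S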

  15. Comparative visual performance with monofocal and multifocal intraocular lenses

    Directory of Open Access Journals (Sweden)

    Gundersen KG

    2013-10-01

    Full Text Available Kjell Gunnar Gundersen,1,* Richard Potvin2,* 1Privatsykehuset Haugesund, Haugesund, Norway; 2Science in Vision, Burleson, TX, USA; *These authors contributed equally to this work. Background: To compare near, intermediate, and distance vision, and quality of vision using appropriate subjective questionnaires, when monofocal or apodized diffractive multifocal intraocular lenses (IOLs) are binocularly implanted. Methods: Patients with different binocular IOLs implanted were recruited after surgery and had their visual acuity tested, and quality of vision evaluated, at a single diagnostic visit between 3 and 8 months after second-eye surgery. Lenses tested included an aspheric monofocal and two apodized diffractive multifocal IOLs with slightly different design parameters. A total of 94 patients were evaluated. Results: Subjects with the ReSTOR® +2.5 D IOL had better near and intermediate vision than those subjects with a monofocal IOL. Intermediate vision was similar to, and near vision slightly lower than, that of subjects with a ReSTOR® +3.0 D IOL implanted. The preferred reading distance was slightly farther out for the +2.5 D relative to the +3.0 D lens, and farthest for the monofocal. Visual acuity at the preferred reading distance was equal with the two multifocal IOLs and significantly worse with the monofocal IOL. Quality of vision measures were highest with the monofocal IOL and similar between the two multifocal IOLs. Conclusion: The data indicate that the ReSTOR +2.5 D IOL provided good intermediate and functional near vision for patients who did not want to accept a higher potential for visual disturbances associated with the ReSTOR +3.0 D IOL, but wanted more near vision than a monofocal IOL generally provides. Quality of vision was not significantly different between the multifocal IOLs, but patient self-selection for each lens type may have been a factor. Keywords: multifocal IOL, near vision, cataract, presbyopia

  16. VisFlow - Web-based Visualization Framework for Tabular Data with a Subset Flow Model.

    Science.gov (United States)

    Yu, Bowen; Silva, Claudio T

    2017-01-01

    Data flow systems allow the user to design a flow diagram that specifies the relations between system components which process, filter or visually present the data. Visualization systems may benefit from user-defined data flows as an analysis typically consists of rendering multiple plots on demand and performing different types of interactive queries across coordinated views. In this paper, we propose VisFlow, a web-based visualization framework for tabular data that employs a specific type of data flow model called the subset flow model. VisFlow focuses on interactive queries within the data flow, overcoming the limitation of interactivity from past computational data flow systems. In particular, VisFlow applies embedded visualizations and supports interactive selections, brushing and linking within a visualization-oriented data flow. The model requires all data transmitted by the flow to be a data item subset (i.e. groups of table rows) of some original input table, so that rendering properties can be assigned to the subset unambiguously for tracking and comparison. VisFlow features the analysis flexibility of a flow diagram, and at the same time reduces the diagram complexity and improves usability. We demonstrate the capability of VisFlow on two case studies with domain experts on real-world datasets showing that VisFlow is capable of accomplishing a considerable set of visualization and analysis tasks. The VisFlow system is available as open source on GitHub.
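
    VisFlow itself is a web application, but the core constraint of the subset flow model, that every edge carries a set of rows of one original input table so rendering properties can be attached to rows unambiguously, can be sketched abstractly. The classes and the toy table below are hypothetical illustrations, not part of the VisFlow code base.

    import pandas as pd

    class SubsetNode:
        """A node in a subset-flow diagram: it receives and emits subsets of rows
        (index sets) of one original input table, never derived tables."""
        def __init__(self, table):
            self.table = table

    class FilterNode(SubsetNode):
        def __init__(self, table, predicate):
            super().__init__(table)
            self.predicate = predicate
        def run(self, subset):
            rows = self.table.loc[list(subset)]
            return set(rows[self.predicate(rows)].index)

    class UnionNode(SubsetNode):
        def run(self, *subsets):
            return set().union(*subsets)

    # Toy usage: two filters brushing different ranges, then a union of the selections.
    cars = pd.DataFrame({"mpg": [18, 33, 24, 41], "hp": [150, 70, 110, 65]})
    all_rows = set(cars.index)
    efficient = FilterNode(cars, lambda df: df["mpg"] > 30).run(all_rows)
    weak = FilterNode(cars, lambda df: df["hp"] < 100).run(all_rows)
    selection = UnionNode(cars).run(efficient, weak)   # still a subset of the original rows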

  17. Dynamic scene stitching driven by visual cognition model.

    Science.gov (United States)

    Zou, Li-hui; Zhang, Dezheng; Wulamu, Aziguli

    2014-01-01

    Dynamic scene stitching remains challenging: global key information must be preserved without loss or deformation when multiple motion interferences exist in the image acquisition system. Object clipping, motion blur, or other synthetic defects easily occur in the final stitched image. In our work, we start from the human visual cognitive mechanism and construct a hybrid-saliency-based cognitive model to automatically guide video volume stitching. The model combines three types of visual stimuli, namely intensity, edge contour, and scene depth saliencies. Combined with the manifold-based mosaicing framework, dynamic scene stitching is formulated as a cut-path optimization problem in a constructed space-time graph. The cutting energy function for column width selection is defined according to the proposed visual cognition model, and the optimum cut path minimizes the cognitive saliency difference throughout the whole video volume. Experimental results show that the method effectively avoids synthetic defects caused by different motion interferences and summarizes the key content of the scene without loss. The approach makes full use of the human visual cognitive mechanism for stitching and is of high practical value for environmental surveillance and other applications.
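
    The cut-path search at the heart of the method can be pictured as a shortest-path problem over a saliency-difference cost map. The dynamic-programming sketch below uses a generic seam-style energy with a one-column smoothness constraint; it illustrates the idea only and is not the paper's exact energy function or space-time graph construction.

    import numpy as np

    def min_cost_cut_path(cost):
        """Find a low-cost cut path through a (time x column) cost map.
        cost[t, c] is the penalty (e.g. saliency difference) of cutting at column c in frame t."""
        T, C = cost.shape
        acc = cost.copy().astype(float)
        back = np.zeros((T, C), dtype=int)
        for t in range(1, T):
            for c in range(C):
                lo, hi = max(0, c - 1), min(C, c + 2)      # cut may shift by at most one column
                prev = int(np.argmin(acc[t - 1, lo:hi])) + lo
                back[t, c] = prev
                acc[t, c] += acc[t - 1, prev]
        # backtrack from the cheapest end column
        path = [int(np.argmin(acc[-1]))]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]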

  18. Dynamic Scene Stitching Driven by Visual Cognition Model

    Directory of Open Access Journals (Sweden)

    Li-hui Zou

    2014-01-01

    Full Text Available Dynamic scene stitching remains challenging: global key information must be preserved without loss or deformation when multiple motion interferences exist in the image acquisition system. Object clipping, motion blur, or other synthetic defects easily occur in the final stitched image. In our work, we start from the human visual cognitive mechanism and construct a hybrid-saliency-based cognitive model to automatically guide video volume stitching. The model combines three types of visual stimuli, namely intensity, edge contour, and scene depth saliencies. Combined with the manifold-based mosaicing framework, dynamic scene stitching is formulated as a cut-path optimization problem in a constructed space-time graph. The cutting energy function for column width selection is defined according to the proposed visual cognition model, and the optimum cut path minimizes the cognitive saliency difference throughout the whole video volume. Experimental results show that the method effectively avoids synthetic defects caused by different motion interferences and summarizes the key content of the scene without loss. The approach makes full use of the human visual cognitive mechanism for stitching and is of high practical value for environmental surveillance and other applications.

  19. Does Visual Performance Influence Head Impact Severity Among High School Football Athletes?

    Science.gov (United States)

    Schmidt, Julianne D; Guskiewicz, Kevin M; Mihalik, Jason P; Blackburn, J Troy; Siegmund, Gunter P; Marshall, Stephen W

    2015-11-01

    To compare the odds of sustaining moderate and severe head impacts, rather than mild, between high school football players with high and low visual performance. Prospective quasi-experimental. Clinical Research Center/On-field. Thirty-seven high school varsity football players. Athletes completed the Nike SPARQ Sensory Station visual assessment before the season. Head impact biomechanics were captured at all practices and games using the Head Impact Telemetry System. Each player was classified as either a high or low performer using a median split for each of the following visual performance measures: visual clarity, contrast sensitivity, depth perception, near-far quickness, target capture, perception span, eye-hand coordination, go/no go, and reaction time. We computed the odds of sustaining moderate and severe head impacts against the reference odds of sustaining mild head impacts across groups of high and low performers for each of the visual performance measures. Players with better near-far quickness had increased odds of sustaining moderate [odds ratios (ORs), 1.27; 95% confidence intervals (CIs), 1.04-1.56] and severe head impacts (OR, 1.45; 95% CI, 1.05-2.01) as measured by Head Impact Technology severity profile. High and low performers were at equal odds on all other measures. Better visual performance did not reduce the odds of sustaining higher magnitude head impacts. Visual performance may play less of a role than expected for protecting against higher magnitude head impacts among high school football players. Further research is needed to determine whether visual performance influences concussion risk. Based on our results, we do not recommend using visual training programs at the high school level for the purpose of reducing the odds of sustaining higher magnitude head impacts.

  20. Image categorization based on spatial visual vocabulary model

    Science.gov (United States)

    Wang, Yuxin; He, Changqin; Guo, He; Feng, Zhen; Jia, Qi

    2010-08-01

    In this paper, we propose an approach to recognize scene categories by means of a novel method named spatial visual vocabulary. Firstly, we hierarchically divide images into sub-regions and construct the spatial visual vocabulary by grouping the low-level features collected from every corresponding spatial sub-region into a specified number of clusters using the k-means algorithm. To recognize the category of a scene, the visual vocabulary distributions of all spatial sub-regions are concatenated to form a global feature vector. The classification is obtained using LIBSVM, a support vector machine classifier. Our goal is to find a universal framework which is applicable to various types of features, so two kinds of features are used in the experiments: "V1-like" filters and PACT features. In almost all experimental cases, the proposed model achieves superior results. Source codes are available by email.
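
    A stripped-down version of this pipeline is sketched below. The descriptor arrays, region layout, and scikit-learn components (KMeans in place of a generic k-means implementation, SVC in place of LIBSVM) are illustrative assumptions rather than the authors' exact configuration.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def spatial_bow_features(images_descriptors, n_regions=4, n_words=50):
        """images_descriptors: list (one per image) of lists (one per spatial sub-region)
        of local-descriptor arrays with shape (n_descriptors, d).
        Returns one concatenated visual-word histogram per image, plus the vocabularies."""
        # one vocabulary per sub-region, learnt from all descriptors of that region
        vocabs = []
        for r in range(n_regions):
            region_desc = np.vstack([img[r] for img in images_descriptors])
            vocabs.append(KMeans(n_clusters=n_words, n_init=5, random_state=0).fit(region_desc))
        feats = []
        for img in images_descriptors:
            hists = []
            for r in range(n_regions):
                words = vocabs[r].predict(img[r])
                hists.append(np.bincount(words, minlength=n_words))
            feats.append(np.concatenate(hists))
        return np.array(feats, dtype=float), vocabs

    # A linear SVM (the role LIBSVM plays in the paper) then classifies the histograms:
    # clf = SVC(kernel="linear").fit(train_feats, train_labels)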

  1. On the Efficiency of Image Metrics for Evaluating the Visual Quality of 3D Models.

    Science.gov (United States)

    Lavoue, Guillaume; Larabi, Mohamed Chaker; Vasa, Libor

    2016-08-01

    3D meshes are deployed in a wide range of application processes (e.g., transmission, compression, simplification, watermarking and so on) which inevitably introduce geometric distortions that may alter the visual quality of the rendered data. Hence, efficient model-based perceptual metrics, operating on the geometry of the meshes being compared, have been recently introduced to control and predict these visual artifacts. However, since the 3D models are ultimately visualized on 2D screens, it seems legitimate to use images of the models (i.e., snapshots from different viewpoints) to evaluate their visual fidelity. In this work we investigate the use of image metrics to assess the visual quality of 3D models. For this goal, we conduct a wide-ranging study involving several 2D metrics, rendering algorithms, lighting conditions and pooling algorithms, as well as several mean opinion score databases. The collected data allow us (1) to determine the best set of parameters to use for this image-based quality assessment approach and (2) to compare this approach with the best-performing model-based metrics and determine the use-cases for which each is best suited. We conclude by exploring several applications that illustrate the benefits of image-based quality assessment.
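
    The basic recipe evaluated in the study, render matched snapshots of the reference and distorted meshes, score each view pair with a 2D metric, and pool over viewpoints, can be sketched as follows. SSIM with mean or minimum pooling is only one of the many metric/pooling configurations the paper compares.

    import numpy as np
    from skimage.metrics import structural_similarity

    def image_based_quality(ref_views, dist_views, pooling="mean"):
        """Score a distorted 3D model against its reference by comparing rendered
        grayscale snapshots from matched viewpoints with a 2D metric (SSIM here),
        then pooling the per-view scores."""
        scores = [structural_similarity(r, d, data_range=r.max() - r.min())
                  for r, d in zip(ref_views, dist_views)]
        if pooling == "mean":
            return float(np.mean(scores))
        return float(np.min(scores))      # a pessimistic "worst view" pooling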

  2. Effect of Visual-Spatial Ability on Medical Students' Performance in a Gross Anatomy Course

    Science.gov (United States)

    Lufler, Rebecca S.; Zumwalt, Ann C.; Romney, Carla A.; Hoagland, Todd M.

    2012-01-01

    The ability to mentally manipulate objects in three dimensions is essential to the practice of many clinical medical specialties. The relationship between this type of visual-spatial ability and performance in preclinical courses such as medical gross anatomy is poorly understood. This study determined if visual-spatial ability is associated with…

  3. Crowded task performance in visually impaired children : Comparing magnifier and large print

    NARCIS (Netherlands)

    Huurneman, Bianca; Boonstra, F. Nienke; Verezen, Cornelis A.; Cillessen, Antonius H. N.; van Rens, Ger; Cox, Ralf F. A.

    This study compares the influence of two different types of magnification (magnifier versus large print) on crowded near vision task performance. Fifty-eight visually impaired children aged 4-8 years participated. Participants were divided in two groups, matched on age and near visual acuity (NVA):

  4. Rey Visual Design Learning Test performance correlates with white matter structure.

    Science.gov (United States)

    Begré, Stefan; Kiefer, Claus; von Känel, Roland; Frommer, Angela; Federspiel, Andrea

    2009-04-01

    Studies exploring the relation of visual memory to white matter are largely lacking. The Rey Visual Design Learning Test (RVDLT) is an elementary motion, colour and word independent visual memory test. It avoids, as far as possible, contributions to visual performance from additional higher-order visual brain functions, such as three-dimensional, colour, motion or word-dependent brain operations. Based on previous results, we hypothesised that test performance would be related to the white matter of the dorsal hippocampal commissure, corpus callosum, posterior cingulate, superior longitudinal fascicle and internal capsule. In 14 healthy subjects, we measured intervoxel coherence (IC) by diffusion tensor imaging as an indication of connectivity, and visual memory performance measured by the RVDLT. IC considers the orientation of the adjacent voxels and has a better signal-to-noise ratio than the commonly used fractional anisotropy index. Using voxelwise linear regression analyses of the IC values, we found a significant and direct relationship between 11 clusters and visual memory test performance. The fact that memory performance correlated with white matter structure in the left and right dorsal hippocampal commissure, left and right posterior cingulate, right callosal splenium, left and right superior longitudinal fascicle, right medial orbitofrontal region, left anterior cingulate, and left and right anterior limb of the internal capsule supports our hypothesis. Our observations in healthy subjects suggest that individual differences in brain function related to the performance of a task of higher cognitive demands might partially be associated with structural variation of white matter regions.

  5. From Big Data to Big Displays High-Performance Visualization at Blue Brain

    KAUST Repository

    Eilemann, Stefan

    2017-10-19

    Blue Brain has pushed high-performance visualization (HPV) to complement its HPC strategy since its inception in 2007. In 2011, this strategy has been accelerated to develop innovative visualization solutions through increased funding and strategic partnerships with other research institutions. We present the key elements of this HPV ecosystem, which integrates C++ visualization applications with novel collaborative display systems. We motivate how our strategy of transforming visualization engines into services enables a variety of use cases, not only for the integration with high-fidelity displays, but also to build service oriented architectures, to link into web applications and to provide remote services to Python applications.

  6. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  7. Correlation-based evaluation of visual performance to reduce the statistical error of visual acuity.

    Science.gov (United States)

    Fülep, Csilla; Kovács, Illés; Kránitz, Kinga; Erdei, Gábor

    2017-07-01

    Ophthalmologists evaluate visual acuity tests by the number of correctly recognized optotypes (usually letters) in the different lines of an eye chart. This probability-based scoring results in significant statistical error that can only be decreased by the time-consuming analysis of a larger number of optotypes. In this paper, we present a new, more precise correlation-based scoring method that takes the degree of misidentification into consideration too, rather than the mere fact of it. According to our experimental results, this new method decreases the uncertainty error by 28% if using the same number of optotypes at a given letter size or requires half the optotype number to produce the same error as that of probability-based scoring.
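
    The contrast with conventional letter-by-letter scoring can be shown with a toy example. The similarity table below is invented purely for illustration; the paper derives its weighting from a correlation measure between optotypes rather than from a hand-made lookup.

    import numpy as np

    # Toy optotype-similarity values (an assumption): 1.0 for a correct answer,
    # partial credit for visually similar letters, 0 otherwise.
    SIMILAR = {("C", "O"): 0.6, ("O", "C"): 0.6, ("D", "O"): 0.5, ("O", "D"): 0.5,
               ("H", "N"): 0.4, ("N", "H"): 0.4, ("K", "R"): 0.3, ("R", "K"): 0.3}

    def probability_score(shown, answered):
        """Conventional scoring: fraction of optotypes identified exactly."""
        return float(np.mean([a == s for s, a in zip(shown, answered)]))

    def similarity_score(shown, answered):
        """Graded scoring: misidentifications earn partial credit by similarity."""
        return float(np.mean([1.0 if a == s else SIMILAR.get((s, a), 0.0)
                              for s, a in zip(shown, answered)]))

    shown    = ["C", "D", "H", "K", "N"]
    answered = ["O", "D", "N", "K", "N"]          # two misreads, both to similar letters
    print(probability_score(shown, answered))      # 0.6 (binary scoring)
    print(similarity_score(shown, answered))       # 0.8 (graded scoring)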

  8. Effect of prematurity and low birth weight in visual abilities and school performance.

    Science.gov (United States)

    Perez-Roche, T; Altemir, I; Giménez, G; Prieto, E; González, I; Peña-Segura, J L; Castillo, O; Pueyo, V

    2016-12-01

    Prematurity and low birth weight are known risk factors for cognitive and developmental impairments, and school failure. Visual perceptual and visual motor skills seem to be among the most affected cognitive domains in these children. To assess the influence of prematurity and low birth weight on visual cognitive skills and school performance. We performed a prospective cohort study, which included 80 boys and girls in an age range from 5 to 13. Subjects were grouped by gestational age at birth (preterm vs term) and by birth weight (small vs appropriate for gestational age, SGA vs AGA), and visual cognitive skills and school performance were assessed in all children. Figure-ground skill and visual motor integration were significantly decreased in the preterm birth group, compared with term control subjects (figure-ground: 45.7 vs 66.5, p=0.012; visual motor integration (TVAS): 9.9 vs 11.8, p=0.018), while outcomes of visual memory (29.0 vs 47.7, p=0.012), form constancy (33.3 vs 52.8, p=0.019), figure-ground (37.4 vs 65.6, p=0.001), and visual closure (43.7 vs 62.6, p=0.016) testing were lower in the SGA (vs AGA) group. Visual cognitive difficulties corresponded with worse performance in mathematics (r=0.414, p=0.004) and reading (r=0.343, p=0.018). Specific patterns of visual perceptual and visual motor deficits are displayed by children born preterm or SGA, which hinder mathematics and reading performance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Visualizing projected Climate Changes - the CMIP5 Multi-Model Ensemble

    Science.gov (United States)

    Böttinger, Michael; Eyring, Veronika; Lauer, Axel; Meier-Fleischer, Karin

    2017-04-01

    Large ensembles add an additional dimension to climate model simulations. Internal variability of the climate system can be assessed for example by multiple climate model simulations with small variations in the initial conditions or by analyzing the spread in large ensembles made by multiple climate models under common protocols. This spread is often used as a measure of uncertainty in climate projections. In the context of the fifth phase of the WCRP's Coupled Model Intercomparison Project (CMIP5), more than 40 different coupled climate models were employed to carry out a coordinated set of experiments. Time series of the development of integral quantities such as the global mean temperature change for all models visualize the spread in the multi-model ensemble. A similar approach can be applied to 2D-visualizations of projected climate changes such as latitude-longitude maps showing the multi-model mean of the ensemble by adding a graphical representation of the uncertainty information. This has been demonstrated for example with static figures in chapter 12 of the last IPCC report (AR5) using different so-called stippling and hatching techniques. In this work, we focus on animated visualizations of multi-model ensemble climate projections carried out within CMIP5 as a way of communicating climate change results to the scientific community as well as to the public. We take a closer look at measures of robustness or uncertainty used in recent publications suitable for animated visualizations. Specifically, we use the ESMValTool [1] to process and prepare the CMIP5 multi-model data in combination with standard visualization tools such as NCL and the commercial 3D visualization software Avizo to create the animations. We compare different visualization techniques such as height fields or shading with transparency for creating animated visualization of ensemble mean changes in temperature and precipitation including corresponding robustness measures. [1] Eyring, V
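
    As a sketch of the kind of field such animations encode, the function below computes a multi-model mean change together with one simple robustness mask (the fraction of models agreeing on the sign of the change), which is the sort of quantity that stippling, hatching, or transparency can then visualize. It is a generic illustration, not one of the specific measures compared in this work.

    import numpy as np

    def ensemble_summary(delta, agreement_threshold=0.8):
        """delta: array (n_models, n_lat, n_lon) of projected changes (e.g. temperature).
        Returns the multi-model mean change and a boolean 'robust' mask marking grid cells
        where at least `agreement_threshold` of the models agree on the sign of the change."""
        mean_change = delta.mean(axis=0)
        frac_positive = (delta > 0).mean(axis=0)
        agreement = np.maximum(frac_positive, 1.0 - frac_positive)   # majority-sign fraction
        robust = agreement >= agreement_threshold
        return mean_change, robust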

  10. A dual visual-local feedback model of the vergence eye movement system.

    Science.gov (United States)

    Erkelens, Casper J

    2011-09-27

    Pure vergence movements are the eye movements that we make when we change our binocular fixation between targets differing in distance but not in direction relative to the head. Pure vergence is slow and controlled by visual feedback. Saccades are the rapid eye movements that we make between targets differing in direction. Saccades are extremely fast and controlled by a local, non-visual feedback loop. Usually, we change our fixation between targets that differ in both distance and direction. Then, vergence eye movements are combined with saccades. A number of models have been proposed to explain the dynamics of saccade-related vergence movements. The models have in common that visual input is ignored for the duration of the responses. This type of control is realistic for saccades but not for vergence. Here, I present computations performed to investigate if a model using dual visual and local feedback can replace the current models. Simulations and stability analysis lead to a model that computes an estimate of target vergence instead of retinal disparity and uses this signal as the main drive. Further analysis shows that the model describes the dynamics of pure vergence responses over the full physiological range, saccade-related vergence movements, and vergence adaptation. The structure of the model leads to new hypotheses about the control of vergence.
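
    The flavour of the dual-feedback idea, delayed visual input supplemented by a local, non-visual copy of the ongoing movement so that the internal error estimate stays current, can be conveyed with a heavily simplified first-order simulation. All parameter values below are illustrative assumptions, not quantities from the paper.

    import numpy as np

    def simulate_vergence(target=4.0, dt=0.001, T=1.5, delay=0.08, gain=8.0):
        """Toy step response of a vergence controller (degrees) with delayed visual feedback.
        A local efference copy bridges the visual delay so the internal error estimate stays
        current, a much-simplified stand-in for the dual visual/local feedback loops."""
        n = int(T / dt)
        d = int(delay / dt)
        eye = np.zeros(n)
        for i in range(1, n):
            seen_error = target - (eye[i - 1 - d] if i >= d + 1 else 0.0)   # delayed visual signal
            efference_correction = eye[i - 1] - (eye[i - 1 - d] if i >= d + 1 else 0.0)
            error_estimate = seen_error - efference_correction              # approx. current error
            eye[i] = eye[i - 1] + dt * gain * error_estimate                # first-order plant
        return eye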

  11. Non-conscious visual cues related to affect and action alter perception of effort and endurance performance

    Directory of Open Access Journals (Sweden)

    Anthony William Blanchfield

    2014-12-01

    Full Text Available The psychobiological model of endurance performance proposes that endurance performance is determined by a decision-making process based on perception of effort and potential motivation. Recent research has reported that effort-based decision-making during cognitive tasks can be altered by non-conscious visual cues relating to affect and action. The effect of these non-conscious visual cues on effort and performance during physical tasks is however unknown. We report two experiments investigating the effect of subliminal priming with visual cues related to affect and action on perception of effort and endurance performance. In Experiment 1 thirteen individuals were subliminally primed with happy or sad faces as they cycled to exhaustion in a counterbalanced and randomized crossover design. A paired t-test (happy vs. sad faces) revealed that individuals cycled for significantly longer (178 s, p = .04) when subliminally primed with happy faces. A 2 x 5 (condition x iso-time) ANOVA also revealed a significant main effect of condition on rating of perceived exertion (RPE) during the time to exhaustion (TTE) test with lower RPE when subjects were subliminally primed with happy faces (p = .04). In Experiment 2, a single-subject randomization tests design found that subliminal priming with action words facilitated a significantly longer (399 s, p = .04) TTE in comparison to inaction words. Like Experiment 1, this greater TTE was accompanied by a significantly lower RPE (p = .03). These experiments are the first to show that subliminal visual cues relating to affect and action can alter perception of effort and endurance performance. Non-conscious visual cues may therefore influence the effort-based decision-making process that is proposed to determine endurance performance. Accordingly, the findings raise notable implications for individuals who may encounter such visual cues during endurance competitions, training, or health related exercise.

  12. Non-conscious visual cues related to affect and action alter perception of effort and endurance performance

    Science.gov (United States)

    Blanchfield, Anthony; Hardy, James; Marcora, Samuele

    2014-01-01

    The psychobiological model of endurance performance proposes that endurance performance is determined by a decision-making process based on perception of effort and potential motivation. Recent research has reported that effort-based decision-making during cognitive tasks can be altered by non-conscious visual cues relating to affect and action. The effects of these non-conscious visual cues on effort and performance during physical tasks are however unknown. We report two experiments investigating the effects of subliminal priming with visual cues related to affect and action on perception of effort and endurance performance. In Experiment 1 thirteen individuals were subliminally primed with happy or sad faces as they cycled to exhaustion in a counterbalanced and randomized crossover design. A paired t-test (happy vs. sad faces) revealed that individuals cycled significantly longer (178 s, p = 0.04) when subliminally primed with happy faces. A 2 × 5 (condition × iso-time) ANOVA also revealed a significant main effect of condition on rating of perceived exertion (RPE) during the time to exhaustion (TTE) test with lower RPE when subjects were subliminally primed with happy faces (p = 0.04). In Experiment 2, a single-subject randomization tests design found that subliminal priming with action words facilitated a significantly longer TTE (399 s, p = 0.04) in comparison to inaction words. Like Experiment 1, this greater TTE was accompanied by a significantly lower RPE (p = 0.03). These experiments are the first to show that subliminal visual cues relating to affect and action can alter perception of effort and endurance performance. Non-conscious visual cues may therefore influence the effort-based decision-making process that is proposed to determine endurance performance. Accordingly, the findings raise notable implications for individuals who may encounter such visual cues during endurance competitions, training, or health related exercise. PMID:25566014

  13. Functionality and Performance Visualization of the Distributed High Quality Volume Renderer (HVR)

    KAUST Repository

    Shaheen, Sara

    2012-07-01

    Volume rendering systems are designed to enable scientists and a variety of experts to interactively explore volume data through 3D views of the volume. However, volume rendering is a computationally intensive task. Parallel distributed volume rendering systems and multi-threading architectures have therefore been suggested as natural solutions for achieving acceptable volume rendering performance on very large volume data, such as electron microscopy (EM) data. This in turn adds another level of complexity when developing and manipulating volume rendering systems. Given that distributed parallel volume rendering systems are among the most complex systems to develop, trace, and debug, traditional debugging tools do not provide enough support. Consequently, there is a great demand for tools that facilitate the manipulation of such systems. This can be achieved by using computer graphics to design visual representations that reflect how the system works and that visualize its current performance state. The work presented falls within the field of software visualization, where visualization is used to aid the understanding of software. This thesis presents a number of visual representations that reflect functionality and performance aspects of the distributed HVR, a high-quality volume rendering system that uses various techniques to visualize large volumes interactively. The work visualizes different stages of the parallel volume rendering pipeline of HVR and supports performance analysis through a number of flexible and dynamic visualizations that reflect the current state of the system and can be manipulated at runtime. These visualizations are intended to facilitate debugging, understanding, and analyzing the distributed HVR.

  14. Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.

    Science.gov (United States)

    Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu

    2015-11-01

    Recently, many computational models have been proposed to simulate the visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of the primate visual cortex, which could achieve position- and scale-tolerant recognition. In our previous work, we have introduced memory and association into the HMAX model to simulate the visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We will mainly focus on the new formation of memory and association in visual processing under different circumstances as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of the objects since people use different features for object recognition. Moreover, to achieve a fast and robust recognition in the retrieval and association process, different types of features are stored in separated clusters and the feature binding of the same object is stimulated in a loop discharge manner; and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects since distinct neural circuits in the human brain are used for identification of various types of objects. Furthermore, active cognition adjustment of occlusion and orientation is implemented in the model to mimic the top-down effect in the human cognition process. Finally, our model is evaluated on two face databases CAS-PEAL-R1 and AR. The results demonstrate that our model exhibits its efficiency in the visual recognition process with much lower memory storage requirement and a better performance compared with the traditional purely computational
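
    For readers unfamiliar with the HMAX backbone referred to above, its S1/C1 front end, Gabor filtering followed by local max pooling, which is what provides position and scale tolerance, can be sketched as follows. This is a generic illustration only, not the authors' extended model with memory, association, preliminary cognition, and active adjustment.

    import numpy as np
    from scipy.ndimage import maximum_filter
    from skimage.filters import gabor

    def s1_c1(image, frequencies=(0.1, 0.2), n_orient=4, pool=8):
        """HMAX-style front end: S1 = Gabor filtering at several orientations and scales,
        C1 = local max pooling plus subsampling, which yields position/scale tolerance."""
        c1 = []
        for f in frequencies:
            for k in range(n_orient):
                theta = k * np.pi / n_orient
                real, imag = gabor(image, frequency=f, theta=theta)
                s1 = np.hypot(real, imag)                                 # S1 energy response
                c1.append(maximum_filter(s1, size=pool)[::pool, ::pool])  # C1 max pooling
        return np.stack(c1)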

  15. Research on minimum illumination as a function of visual performance

    Energy Technology Data Exchange (ETDEWEB)

    Atmodipoero, R.T.; Pardede, L. [Building Physics and Acoustics Laboratory, Department of Engineering Physics, Institut Teknologi Bandung, Bandung, West Java (Indonesia)

    2004-07-01

    The objective of this research is to find the minimum illuminance level such that a visual task (in this case, reading) can still be performed properly. The reading object in this research consists of sentences (written in Bahasa Indonesia) printed in Times New Roman font on white A4 paper. Three factors are observed, i.e. font size, luminance contrast (between letters and paper), and reading distance. Measurements were made of the minimum illumination needed by subjects such that they could still read the object properly. The experiment was done in a dark room and the illuminance level on the object from a lamp was adjusted by using a dimmer device. The results of this experiment show that the lowest minimum illuminance was 0.13 lx (for a reading object with font size 16, luminance contrast of 0.93, and distance of 60 cm) and the highest was 15.32 lx (for a reading object with font size 8, luminance contrast of 0.55, and distance of 100 cm). By using the analysis of variance method, it can be shown that reading distance is the most influential factor for the minimum illuminance level, followed by font size and luminance contrast. (author)

  16. The effect of non-linear human visual system components on linear model observers

    Science.gov (United States)

    Zhang, Yani; Pham, Binh T.; Eckstein, Miguel P.

    2004-05-01

    Linear model observers have been used successfully to predict human performance in clinically relevant visual tasks for a variety of backgrounds. On the other hand, there has been another family of models used to predict human visual detection of signals superimposed on one of two identical backgrounds (masks). These masking models usually include a number of non-linear components in the channels that reflect properties of the firing of cells in the primary visual cortex (V1). The relationship between these two traditions of models has not been extensively investigated in the context of detection in noise. In this paper, we evaluated the effect of including some of these non-linear components in a linear channelized Hotelling observer (CHO), and the associated practical implications for medical image quality evaluation. In particular, we evaluate whether the rank order evaluation of two compression algorithms (JPEG vs. JPEG 2000) is changed by inclusion of the non-linear components. The results show that: a) the simpler linear CHO model observer outperforms the CHO model with the non-linear components investigated; and b) the rank order of model observer performance for the compression algorithms did not vary when the non-linear components were included. For the present task, the results suggest that the addition of the physiologically based channel non-linearities to a channelized Hotelling observer might add complexity to the model observers without great impact on medical image quality evaluation.
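
    For reference, the linear CHO baseline against which the non-linear variants are compared can be written in a few lines. The channel matrix (e.g. Gabor or difference-of-Gaussians channel profiles) is left as an input, and the function returns the detectability index d'; this is a textbook-style sketch rather than the exact implementation used in the paper.

    import numpy as np

    def cho_detectability(images_absent, images_present, channels):
        """Linear channelized Hotelling observer (CHO).
        images_*: arrays (n_images, n_pixels); channels: (n_pixels, n_channels)."""
        v0 = images_absent @ channels          # channel responses, signal absent
        v1 = images_present @ channels         # channel responses, signal present
        s = v1.mean(axis=0) - v0.mean(axis=0)  # mean channel-space signal
        K = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
        w = np.linalg.solve(K, s)              # Hotelling template in channel space
        return float(s @ w / np.sqrt(w @ K @ w))   # detectability index d'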

  17. Modelling the shape hierarchy for visually guided grasping

    Directory of Open Access Journals (Sweden)

    Omid eRezai

    2014-10-01

    Full Text Available The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modelled shape tuning in visual AIP neurons and its relationship with curvature and gradient information from the caudal intraparietal area (CIP). The main goal was to gain insight into the kinds of shape parameterizations that can account for AIP tuning and that are consistent with both the inputs to AIP and the role of AIP in grasping. We first experimented with superquadric shape parameters. We considered superquadrics because they occupy a role in robotics that is similar to AIP, in that superquadric fits are derived from visual input and used for grasp planning. We also experimented with an alternative shape parameterization that was based on an Isomap dimension reduction of spatial derivatives of depth (i.e. distance from the observer to the object surface). We considered an Isomap-based model because its parameters lacked discontinuities between similar shapes. When we matched the dimension of the Isomap to the number of superquadric parameters, the superquadric model fit the AIP data somewhat more closely. However, higher-dimensional Isomaps provided excellent fits. Also, we found that the Isomap parameters could be approximated much more accurately than superquadric parameters by feedforward neural networks with CIP-like inputs. We conclude that Isomaps, or perhaps alternative dimension reductions of visual inputs to AIP, provide a promising model of AIP electrophysiology data. However (in contrast with superquadrics) further work is needed to test whether such shape parameterizations actually provide an effective basis for grasp control.
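
    The alternative parameterization explored here, a low-dimensional embedding of spatial derivatives of depth, can be sketched with scikit-learn's Isomap. The feature construction below (first-order depth gradients, flattened and embedded) is a simplified stand-in for the authors' exact preprocessing.

    import numpy as np
    from sklearn.manifold import Isomap

    def isomap_shape_parameters(depth_maps, n_components=8, n_neighbors=10):
        """Reduce spatial derivatives of depth maps (one 2-D distance map per object view)
        to a low-dimensional shape parameterization with Isomap."""
        feats = []
        for d in depth_maps:
            gy, gx = np.gradient(d)                  # first-order spatial derivatives of depth
            feats.append(np.concatenate([gx.ravel(), gy.ravel()]))
        X = np.asarray(feats)
        # n_neighbors must be smaller than the number of views supplied
        return Isomap(n_components=n_components, n_neighbors=n_neighbors).fit_transform(X)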

  18. Designing visual displays and system models for safe reactor operations

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S.A.

    1995-12-31

    The material presented in this paper is based on two studies involving the design of visual displays and the user's prospective model of a system. The studies involve a methodology known as Neuro-Linguistic Programming and its use in expanding design choices from the operator's perspective image. The content of this paper focuses on the studies and how they are applicable to the safety of operating reactors.

  19. Differential learning and memory performance in OEF/OIF veterans for verbal and visual material.

    Science.gov (United States)

    Sozda, Christopher N; Muir, James J; Springer, Utaka S; Partovi, Diana; Cole, Michael A

    2014-05-01

    Memory complaints are particularly salient among veterans who experience combat-related mild traumatic brain injuries and/or trauma exposure, and represent a primary barrier to successful societal reintegration and everyday functioning. Anecdotally within clinical practice, verbal learning and memory performance frequently appears differentially reduced versus visual learning and memory scores. We sought to empirically investigate the robustness of a verbal versus visual learning and memory discrepancy and to explore potential mechanisms for a verbal/visual performance split. Participants consisted of 103 veterans with reported history of mild traumatic brain injuries returning home from U.S. military Operations Enduring Freedom and Iraqi Freedom referred for outpatient neuropsychological evaluation. Findings indicate that visual learning and memory abilities were largely intact while verbal learning and memory performance was significantly reduced in comparison, residing at approximately 1.1 SD below the mean for verbal learning and approximately 1.4 SD below the mean for verbal memory. This difference was not observed in verbal versus visual fluency performance, nor was it associated with estimated premorbid verbal abilities or traumatic brain injury history. In our sample, symptoms of depression, but not posttraumatic stress disorder, were significantly associated with reduced composite verbal learning and memory performance. Verbal learning and memory performance may benefit from targeted treatment of depressive symptomatology. Also, because visual learning and memory functions may remain intact, these might be emphasized when applying neurocognitive rehabilitation interventions to compensate for observed verbal learning and memory difficulties.

  20. Administration of dehydroepiandrosterone (DHEA) enhances visual-spatial performance in postmenopausal women.

    Science.gov (United States)

    Stangl, Bethany; Hirshman, Elliot; Verbalis, Joseph

    2011-10-01

    The current article examines the effect of administering dehydroepiandrosterone (DHEA) on visual-spatial performance in postmenopausal women (N = 24, ages 55-80). The concurrent reduction of serum DHEA levels and visual-spatial performance in this population, coupled with the documented effects of DHEA's androgenic metabolites on visual-spatial performance, suggests that DHEA administration may enhance visual-spatial performance. The current experiment used a double-blind, placebo-controlled crossover design in which 50 mg of oral DHEA was administered daily in the drug condition to explore this hypothesis. Performance on the Mental Rotation, Subject-Ordered Pointing, Fragmented Picture Identification, Perceptual Identification, Same-Different Judgment, and Visual Search tasks and serum levels of DHEA, DHEAS, testosterone, estrone, and cortisol were measured in the DHEA and placebo conditions. In contrast to prior experiments using the current methodology that did not demonstrate effects of DHEA administration on episodic and short-term memory tasks, the current experiment demonstrated large beneficial effects of DHEA administration on Mental Rotation, Subject-Ordered Pointing, Fragmented Picture Identification, Perceptual Identification, and Same-Different Judgment. Moreover, DHEA administration enhanced serum levels of DHEA, DHEAS, testosterone, and estrone, and regression analyses demonstrated that levels of DHEA and its metabolites were positively related to cognitive performance on the visual-spatial tasks in the DHEA condition.

  1. Administration of Dehydroepiandrosterone (DHEA) Enhances Visual-Spatial Performance in Post-Menopausal Women

    Science.gov (United States)

    Stangl, Bethany; Hirshman, Elliot; Verbalis, Joseph

    2013-01-01

    The current paper examines the effect of administering Dehydroepiandrosterone (DHEA) on visual-spatial performance in post-menopausal women (N=24, ages 55-80). The concurrent reduction of serum DHEA levels and visual-spatial performance in this population, coupled with the documented effects of DHEA’s androgenic metabolites on visual-spatial performance, suggest that DHEA administration may enhance visual-spatial performance. The current experiment used a double-blind placebo-controlled crossover design in which 50 mg of oral DHEA was administered daily in the drug condition to explore this hypothesis. Performance on the Mental Rotation, Subject-Ordered Pointing, Fragmented Picture Identification, Perceptual Identification, Same-Different Judgment, and Visual Search tasks and serum levels of DHEA, DHEAS, testosterone, estrone and cortisol were measured in the DHEA and placebo conditions. In contrast to prior experiments using the current methodology that did not demonstrate effects of DHEA administration on episodic and short-term memory tasks, the current experiment demonstrated large beneficial effects of DHEA administration on Mental Rotation, Subject-Ordered Pointing, Fragmented Picture Identification, Perceptual Identification and Same-Different Judgment. Moreover, DHEA administration enhanced serum levels of DHEA, DHEAS, testosterone and estrone, and regression analyses demonstrated that levels of DHEA and its metabolites were positively related to cognitive performance on the visual-spatial tasks in the DHEA condition PMID:21942436

  2. Eksplorasi Pose dalam Pemotretan Model Melalui Kajian Visual Relief Karmawibhangga

    Directory of Open Access Journals (Sweden)

    Noor Latif CM

    2015-10-01

    Full Text Available The Karmawibhangga relief panels located at the foot of Borobudur are visual artifacts that contain fragments of past life of very high historical value. The 160 Karmawibhangga panels depict the reality of people's lives at the time, wrapped in moral messages. The reliefs provide many visual references that can be excavated and reconstructed for the benefit of today's creative industries. This research explored one small part of these masterpieces of the past through photography. The visual artists who created the reliefs understood how to show the beauty of gestures in building a story that remains interesting to re-examine. Visual communication through gestures in the Karmawibhangga reliefs invites new assumptions about differences in body-language dialect relative to current conditions. Model-genre photography is therefore useful for the development of locally nuanced photographic scholarship. Efforts to develop tradition and culture through new media are expected to become a creative commodity with very strong product differentiation.

  3. Modeling and Visualization of Human Activities for Multicamera Networks

    Directory of Open Access Journals (Sweden)

    Aswin C. Sankaranarayanan

    2009-01-01

    Full Text Available Multicamera networks are becoming complex involving larger sensing areas in order to capture activities and behavior that evolve over long spatial and temporal windows. This necessitates novel methods to process the information sensed by the network and visualize it for an end user. In this paper, we describe a system for modeling and on-demand visualization of activities of groups of humans. Using the prior knowledge of the 3D structure of the scene as well as camera calibration, the system localizes humans as they navigate the scene. Activities of interest are detected by matching models of these activities learnt a priori against the multiview observations. The trajectories and the activity index for each individual summarize the dynamic content of the scene. These are used to render the scene with virtual 3D human models that mimic the observed activities of real humans. In particular, the rendering framework is designed to handle large displays with a cluster of GPUs as well as reduce the cognitive dissonance by rendering realistic weather effects and illumination. We envision use of this system for immersive visualization as well as summarization of videos that capture group behavior.

  4. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    Science.gov (United States)

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  5. Interactive Visualizations of Complex Seismic Data and Models

    Science.gov (United States)

    Chai, C.; Ammon, C. J.; Maceira, M.; Herrmann, R. B.

    2016-12-01

    The volume and complexity of seismic data and models have increased dramatically thanks to dense seismic station deployments and advances in data modeling and processing. Seismic observations such as receiver functions and surface-wave dispersion are multidimensional: latitude, longitude, time, amplitude and latitude, longitude, period, and velocity. Three-dimensional seismic velocity models are characterized with three spatial dimensions and one additional dimension for the speed. In these circumstances, exploring the data and models and assessing the data fits is a challenge. A few professional packages are available to visualize these complex data and models. However, most of these packages rely on expensive commercial software or require a substantial time investment to master, and even when that effort is complete, communicating the results to others remains a problem. A traditional approach during the model interpretation stage is to examine data fits and model features using a large number of static displays. Publications include a few key slices or cross-sections of these high-dimensional data, but this prevents others from directly exploring the model and corresponding data fits. In this presentation, we share interactive visualization examples of complex seismic data and models that are based on open-source tools and are easy to implement. Model and data are linked in an intuitive and informative web-browser based display that can be used to explore the model and the features in the data that influence various aspects of the model. We encode the model and data into HTML files and present high-dimensional information using two approaches. The first uses a Python package to pack both data and interactive plots in a single file. The second approach uses JavaScript, CSS, and HTML to build a dynamic webpage for seismic data visualization. The tools have proven useful and led to deeper insight into 3D seismic models and the data that were used to construct them
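
    The first approach, packing data and an interactive plot into a single HTML file from Python, can be sketched with any of several plotting libraries. The example below uses plotly and a toy one-dimensional velocity profile; both are assumptions, since the abstract does not name the specific package or data.

    import numpy as np
    import plotly.graph_objects as go

    depth = np.linspace(0, 100, 50)                  # km, hypothetical 1-D profile
    vs = 3.0 + 0.015 * depth                         # toy shear-velocity model

    fig = go.Figure(go.Scatter(x=vs, y=depth, mode="lines", name="Vs"))
    fig.update_yaxes(autorange="reversed", title="Depth (km)")
    fig.update_xaxes(title="Vs (km/s)")
    # a single self-contained, shareable web page with pan/zoom/hover built in
    fig.write_html("vs_profile.html", include_plotlyjs="cdn")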

  6. caBIG™ VISDA: Modeling, visualization, and discovery for cluster analysis of genomic data

    Directory of Open Access Journals (Sweden)

    Xuan Jianhua

    2008-09-01

    Full Text Available Abstract Background The main limitations of most existing clustering methods used in genomic data analysis include heuristic or random algorithm initialization, the potential of finding poor local optima, the lack of cluster number detection, an inability to incorporate prior/expert knowledge, black-box and non-adaptive designs, in addition to the curse of dimensionality and the discernment of uninformative, uninteresting cluster structure associated with confounding variables. Results In an effort to partially address these limitations, we develop the VIsual Statistical Data Analyzer (VISDA for cluster modeling, visualization, and discovery in genomic data. VISDA performs progressive, coarse-to-fine (divisive hierarchical clustering and visualization, supported by hierarchical mixture modeling, supervised/unsupervised informative gene selection, supervised/unsupervised data visualization, and user/prior knowledge guidance, to discover hidden clusters within complex, high-dimensional genomic data. The hierarchical visualization and clustering scheme of VISDA uses multiple local visualization subspaces (one at each node of the hierarchy and consequent subspace data modeling to reveal both global and local cluster structures in a "divide and conquer" scenario. Multiple projection methods, each sensitive to a distinct type of clustering tendency, are used for data visualization, which increases the likelihood that cluster structures of interest are revealed. Initialization of the full dimensional model is based on first learning models with user/prior knowledge guidance on data projected into the low-dimensional visualization spaces. Model order selection for the high dimensional data is accomplished by Bayesian theoretic criteria and user justification applied via the hierarchy of low-dimensional visualization subspaces. Based on its complementary building blocks and flexible functionality, VISDA is generally applicable for gene clustering, sample

  7. caBIG VISDA: modeling, visualization, and discovery for cluster analysis of genomic data.

    Science.gov (United States)

    Zhu, Yitan; Li, Huai; Miller, David J; Wang, Zuyi; Xuan, Jianhua; Clarke, Robert; Hoffman, Eric P; Wang, Yue

    2008-09-18

    The main limitations of most existing clustering methods used in genomic data analysis include heuristic or random algorithm initialization, the potential of finding poor local optima, the lack of cluster number detection, an inability to incorporate prior/expert knowledge, black-box and non-adaptive designs, in addition to the curse of dimensionality and the discernment of uninformative, uninteresting cluster structure associated with confounding variables. In an effort to partially address these limitations, we develop the VIsual Statistical Data Analyzer (VISDA) for cluster modeling, visualization, and discovery in genomic data. VISDA performs progressive, coarse-to-fine (divisive) hierarchical clustering and visualization, supported by hierarchical mixture modeling, supervised/unsupervised informative gene selection, supervised/unsupervised data visualization, and user/prior knowledge guidance, to discover hidden clusters within complex, high-dimensional genomic data. The hierarchical visualization and clustering scheme of VISDA uses multiple local visualization subspaces (one at each node of the hierarchy) and consequent subspace data modeling to reveal both global and local cluster structures in a "divide and conquer" scenario. Multiple projection methods, each sensitive to a distinct type of clustering tendency, are used for data visualization, which increases the likelihood that cluster structures of interest are revealed. Initialization of the full dimensional model is based on first learning models with user/prior knowledge guidance on data projected into the low-dimensional visualization spaces. Model order selection for the high dimensional data is accomplished by Bayesian theoretic criteria and user justification applied via the hierarchy of low-dimensional visualization subspaces. Based on its complementary building blocks and flexible functionality, VISDA is generally applicable for gene clustering, sample clustering, and phenotype clustering
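
    One coarse-to-fine step of the VISDA scheme, project into a low-dimensional visualization subspace, fit mixture models of increasing order, pick the model order with a Bayesian criterion, and recurse on the discovered clusters, can be sketched as follows. PCA, a Gaussian mixture, and BIC stand in for the richer set of projections, hierarchical mixture models, and user-guided selection in the actual tool.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    def visda_like_split(X, max_k=6):
        """One divisive step: visualize in a local 2-D subspace, choose the cluster
        number by BIC, and return the sub-populations for recursive splitting."""
        Z = PCA(n_components=2).fit_transform(X)          # local visualization subspace
        models = [GaussianMixture(k, random_state=0).fit(Z) for k in range(1, max_k + 1)]
        best = min(models, key=lambda m: m.bic(Z))        # Bayesian model order selection
        labels = best.predict(Z)
        children = [X[labels == c] for c in range(best.n_components)]
        return Z, labels, children                        # recurse on each child cluster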

  8. caBIG™ VISDA: Modeling, visualization, and discovery for cluster analysis of genomic data

    Science.gov (United States)

    Zhu, Yitan; Li, Huai; Miller, David J; Wang, Zuyi; Xuan, Jianhua; Clarke, Robert; Hoffman, Eric P; Wang, Yue

    2008-01-01

    Background The main limitations of most existing clustering methods used in genomic data analysis include heuristic or random algorithm initialization, the potential of finding poor local optima, the lack of cluster number detection, an inability to incorporate prior/expert knowledge, black-box and non-adaptive designs, in addition to the curse of dimensionality and the discernment of uninformative, uninteresting cluster structure associated with confounding variables. Results In an effort to partially address these limitations, we develop the VIsual Statistical Data Analyzer (VISDA) for cluster modeling, visualization, and discovery in genomic data. VISDA performs progressive, coarse-to-fine (divisive) hierarchical clustering and visualization, supported by hierarchical mixture modeling, supervised/unsupervised informative gene selection, supervised/unsupervised data visualization, and user/prior knowledge guidance, to discover hidden clusters within complex, high-dimensional genomic data. The hierarchical visualization and clustering scheme of VISDA uses multiple local visualization subspaces (one at each node of the hierarchy) and consequent subspace data modeling to reveal both global and local cluster structures in a "divide and conquer" scenario. Multiple projection methods, each sensitive to a distinct type of clustering tendency, are used for data visualization, which increases the likelihood that cluster structures of interest are revealed. Initialization of the full dimensional model is based on first learning models with user/prior knowledge guidance on data projected into the low-dimensional visualization spaces. Model order selection for the high dimensional data is accomplished by Bayesian theoretic criteria and user justification applied via the hierarchy of low-dimensional visualization subspaces. Based on its complementary building blocks and flexible functionality, VISDA is generally applicable for gene clustering, sample clustering, and phenotype

  9. Visualizing and modelling complex rockfall slopes using game-engine hosted models

    Science.gov (United States)

    Ondercin, Matthew; Hutchinson, D. Jean; Harrap, Rob

    2015-04-01

    Innovations in computing in the past few decades have resulted in entirely new ways to collect 3d geological data and visualize it. For example, new tools and techniques relying on high performance computing capabilities have become widely available, allowing us to model rockfalls with more attention to complexity of the rock slope geometry and rockfall path, with significantly higher quality base data, and with more analytical options. Model results are used to design mitigation solutions, considering the potential paths of the rockfall events and the energy they impart on impacted structures. Such models are currently implemented as general-purpose GIS tools and in specialized programs. These tools are used to inspect geometrical and geomechanical data, model rockfalls, and communicate results to researchers and the larger community. The research reported here explores the notion that 3D game engines provide a high speed, widely accessible platform on which to build rockfall modelling workflows and to provide a new and accessible outreach method. Taking advantage of the in-built physics capability of the 3D game codes, and ability to handle large terrains, these models are rapidly deployed and generate realistic visualizations of rockfall trajectories. Their utility in this area is as yet unproven, but preliminary research shows that they are capable of producing results that are comparable to existing approaches. Furthermore, modelling of case histories shows that the output matches the behaviour that is observed in the field. The key advantage of game-engine hosted models is their accessibility to the general public and to people with little to no knowledge of rockfall hazards. With much of the younger generation being very familiar with 3D environments such as Minecraft, the idea of a game-like simulation is intuitive and thus offers new ways to communicate to the general public. We present results from using the Unity game engine to develop 3D voxel worlds

  10. The effect of computer-aided detection markers on visual search and reader performance during concurrent reading of CT colonography

    Energy Technology Data Exchange (ETDEWEB)

    Helbren, Emma; Taylor, Stuart A. [University College London, Centre for Medical Imaging, London (United Kingdom); Fanshawe, Thomas R.; Mallett, Susan [University of Oxford, Nuffield Department of Primary Care Health Sciences, Oxford (United Kingdom); Phillips, Peter [University of Cumbria, Health and Medical Sciences Group, Lancaster (United Kingdom); Boone, Darren [Colchester Hospital University NHS Foundation Trust and Anglia University, Colchester (United Kingdom); Gale, Alastair [Loughborough University, Applied Vision Research Centre, Loughborough (United Kingdom); Altman, Douglas G. [University of Oxford, Centre for Statistics in Medicine, Oxford (United Kingdom); Manning, David [Lancaster University, Lancaster Medical School, Faculty of Health and Medicine, Lancaster (United Kingdom); Halligan, Steve [University College London, Centre for Medical Imaging, London (United Kingdom); University College Hospital, Gastrointestinal Radiology, University College London, Centre for Medical Imaging, Podium Level 2, London, NW1 2BU (United Kingdom)

    2015-06-01

    We aimed to identify the effect of computer-aided detection (CAD) on visual search and performance in CT Colonography (CTC) of inexperienced and experienced readers. Fifteen endoluminal CTC examinations were recorded, each with one polyp, and two videos were generated, one with and one without a CAD mark. Forty-two readers (17 experienced, 25 inexperienced) interpreted the videos during infrared visual search recording. CAD markers and polyps were treated as regions of interest in data processing. This multi-reader, multi-case study was analysed using multilevel modelling. CAD drew readers' attention to polyps faster, accelerating identification times: median 'time to first pursuit' was 0.48 s (IQR 0.27 to 0.87 s) with CAD, versus 0.58 s (IQR 0.35 to 1.06 s) without. For inexperienced readers, CAD also held visual attention for longer. All visual search metrics used to assess visual gaze behaviour demonstrated statistically significant differences when "with" and "without" CAD were compared. A significant increase in the number of correct polyp identifications across all readers was seen with CAD (74 % without CAD, 87 % with CAD; p < 0.001). CAD significantly alters visual search and polyp identification in readers viewing three-dimensional endoluminal CTC. For polyp and CAD marker pursuit times, CAD generally exerted a larger effect on inexperienced readers. (orig.)

  11. Visual Narrative Research Methods as Performance in Industrial Design Education

    Science.gov (United States)

    Campbell, Laurel H.; McDonagh, Deana

    2009-01-01

    This article discusses teaching empathic research methodology as performance. The authors describe their collaboration in an activity to help undergraduate industrial design students learn empathy for others when designing products for use by diverse or underrepresented people. The authors propose that an industrial design curriculum would benefit…

  12. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    Science.gov (United States)

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  13. ARTISTIC VISUALIZATION OF TRAJECTORY DATA USING CLOUD MODEL

    Directory of Open Access Journals (Sweden)

    T. Wu

    2017-09-01

    Rapid advance of location acquisition technologies boosts the generation of trajectory data, which track the traces of moving objects. A trajectory is typically represented by a sequence of timestamped geographical locations. Data visualization is an efficient means to represent distributions and structures of datasets and reveal hidden patterns in the data. In this paper, we explore a cloud model-based method for the generation of stylized renderings of trajectory data. The artistic visualizations of the proposed method are not intended to support data mining or similar tasks, but instead show the aesthetic effect of the traces of moving objects in a distorted manner. The techniques used to create the images of traces of moving objects include uncertain lines based on an extended cloud model, stroke-based rendering of geolocations in varying styles, and stylistic shading with aesthetic effects for print or electronic displays, as well as various parameters that can be further personalized. The influence of different parameters on the aesthetic qualities of various painted images is investigated, including step size, types of strokes, and colour modes; quantitative comparisons using four aesthetic measures are also included in the experiment. The experimental results suggest that the proposed method has the advantages of uncertainty, simplicity and effectiveness, and that it may inspire professional graphic designers and amateur users who are interested in playful and creative exploration of artistic visualization of trajectory data.

  14. Artistic Visualization of Trajectory Data Using Cloud Model

    Science.gov (United States)

    Wu, T.; Zhou, Y.; Zhang, L.

    2017-09-01

    Rapid advance of location acquisition technologies boosts the generation of trajectory data, which track the traces of moving objects. A trajectory is typically represented by a sequence of timestamped geographical locations. Data visualization is an efficient means to represent distributions and structures of datasets and reveal hidden patterns in the data. In this paper, we explore a cloud model-based method for the generation of stylized renderings of trajectory data. The artistic visualizations of the proposed method are not intended to support data mining or similar tasks, but instead show the aesthetic effect of the traces of moving objects in a distorted manner. The techniques used to create the images of traces of moving objects include uncertain lines based on an extended cloud model, stroke-based rendering of geolocations in varying styles, and stylistic shading with aesthetic effects for print or electronic displays, as well as various parameters that can be further personalized. The influence of different parameters on the aesthetic qualities of various painted images is investigated, including step size, types of strokes, and colour modes; quantitative comparisons using four aesthetic measures are also included in the experiment. The experimental results suggest that the proposed method has the advantages of uncertainty, simplicity and effectiveness, and that it may inspire professional graphic designers and amateur users who are interested in playful and creative exploration of artistic visualization of trajectory data.
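    The "uncertain line" idea above rests on the standard normal cloud generator, which characterizes a concept by an expectation Ex, entropy En and hyper-entropy He. The sketch below jitters points along a toy trajectory with that generator; it illustrates the general technique only, not the authors' renderer, and all names and parameter values are invented.

```python
# Illustrative sketch only: jitter points along a trajectory with the standard
# normal cloud generator (expectation Ex, entropy En, hyper-entropy He) to get
# an "uncertain line" effect in the spirit of the cloud-model rendering above.
import numpy as np

def cloud_drops(ex, en, he, n, rng):
    """Normal cloud generator: for each drop, draw En' ~ N(En, He^2),
    then the drop value ~ N(Ex, En'^2)."""
    en_prime = rng.normal(en, he, n)
    return rng.normal(ex, np.abs(en_prime))

def uncertain_line(points, en=0.5, he=0.1, drops_per_point=20, seed=0):
    """Scatter cloud drops around each (x, y) trajectory sample."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for x, y in points:
        xs.append(cloud_drops(x, en, he, drops_per_point, rng))
        ys.append(cloud_drops(y, en, he, drops_per_point, rng))
    return np.concatenate(xs), np.concatenate(ys)

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 100)
    trajectory = np.column_stack([t, np.sin(t)])   # toy GPS-like trace
    x, y = uncertain_line(trajectory)
    print(x.shape, y.shape)                        # (2000,) (2000,)
```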

  15. Comparative effect of lens care solutions on blink rate, ocular discomfort and visual performance.

    Science.gov (United States)

    Yang, Shun-nan; Tai, Yu-chi; Sheedy, James E; Kinoshita, Beth; Lampa, Matthew; Kern, Jami R

    2012-09-01

    To help maintain clear vision and ocular surface health, eye blinks occur to distribute natural tears over the ocular surface, especially the corneal surface. Contact lens wearers may suffer from poor vision and dry eye symptoms due to difficulty in lens surface wetting and reduced tear production. Sustained viewing of a computer screen reduces eye blinks and exacerbates such difficulties. The present study evaluated the wetting effect of lens care solutions (LCSs) on blink rate, dry eye symptoms, and vision performance. Sixty-five adult habitual soft contact lens wearers were recruited to adapt to different LCSs (Opti-free, ReNu, and ClearCare) in a cross-over design. Blink rate in pictorial viewing and reading (measured with an eyetracker), dry eye symptoms (measured with the Ocular Surface Disease Index questionnaire), and visual discrimination (identifying tumbling E) immediately before and after eye blinks were measured after 2 weeks of adaptation to LCS. Repeated-measures ANOVA and mixed-model ANCOVA were conducted to evaluate effects of LCS on blink rate, symptom score, and discrimination accuracy. Opti-Free resulted in lower dry eye symptoms (p = 0.018) than ClearCare, and lower spontaneous blink rate (measured in picture viewing) than ClearCare (p = 0.014) and ReNu (p = 0.041). In reading, blink rate was higher for ClearCare compared to ReNu (p = 0.026) and control (p = 0.024). Visual discrimination time was longer for the control (daily disposable lens) than for Opti-Free (p = 0.007), ReNu (p = 0.009), and ClearCare (p = 0.013) immediately before the blink. LCSs differently affected blink rate, subjective dry eye symptoms, and visual discrimination speed. Those with wetting agents led to significantly fewer eye blinks while affording better ocular comfort for contact lens wearers, compared to that without. LCSs with wetting agents also resulted in better visual performance compared to wearing daily disposable contact lenses. These presumably are because of

  16. Does mobility performance of visually impaired adults improve immediately after orientation and mobility training?

    Science.gov (United States)

    Soong, G P; Lovie-Kitchin, J E; Brown, B

    2001-09-01

    Previous studies that have attempted to determine the effect of orientation and mobility training on mobility performance of visually impaired adults have had a number of limitations. With the inclusion of a control group of subjects, this study investigated the effect of orientation and mobility training on mobility performance of a group of visually impaired adults. Vision was measured binocularly as high- and low-contrast visual acuity, letter and edge contrast sensitivity, and Humphrey kinetic visual fields. The subjects' mobility performance was assessed as percentage preferred walking speed (PPWS) and error score before and after mobility training. Orientation and mobility training did not enhance mobility performance compared with the control group, who did not receive training, when performance was measured immediately after training. PPWS improved for both groups with short-term practice only, but there was no improvement in error score due to either practice or training. There was no immediate improvement in mobility performance of visually impaired adults after orientation and mobility training. Familiarity with the route may play an important role in measured improvement of mobility performance after orientation and mobility training.

  17. SeiVis: An interactive visual subsurface modeling application

    KAUST Repository

    Hollt, Thomas

    2012-12-01

    The most important resources to fulfill today’s energy demands are fossil fuels, such as oil and natural gas. When exploiting hydrocarbon reservoirs, a detailed and credible model of the subsurface structures is crucial in order to minimize economic and ecological risks. Creating such a model is an inverse problem: reconstructing structures from measured reflection seismics. The major challenge here is twofold: First, the structures in highly ambiguous seismic data are interpreted in the time domain. Second, a velocity model has to be built from this interpretation to match the model to depth measurements from wells. If it is not possible to obtain a match at all positions, the interpretation has to be updated, going back to the first step. This results in a lengthy back and forth between the different steps, or in an unphysical velocity model in many cases. This paper presents a novel, integrated approach to interactively creating subsurface models from reflection seismics. It integrates the interpretation of the seismic data using an interactive horizon extraction technique based on piecewise global optimization with velocity modeling. Computing and visualizing the effects of changes to the interpretation and velocity model on the depth-converted model on the fly enables an integrated feedback loop that enables a completely new connection of the seismic data in time domain and well data in depth domain. Using a novel joint time/depth visualization, depicting side-by-side views of the original and the resulting depth-converted data, domain experts can directly fit their interpretation in time domain to spatial ground truth data. We have conducted a domain expert evaluation, which illustrates that the presented workflow enables the creation of exact subsurface models much more rapidly than previous approaches. © 2012 IEEE.
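    The loop the abstract describes (interpret horizons in the time domain, build a velocity model, check the depth-converted result against well markers, update) hinges on a simple time-to-depth conversion. The sketch below is a generic layered interval-velocity conversion with invented numbers, not the SeiVis implementation.

```python
# Generic illustration (not the SeiVis algorithm): convert a horizon picked in
# two-way travel time to depth with a layered interval-velocity model, then
# report the mismatch against a well marker so the velocity model (or the
# interpretation) can be updated.
def twt_to_depth(twt_s, layer_tops_s, layer_velocities_ms):
    """Depth of a two-way-time pick, integrating v*dt/2 through the layers.

    layer_tops_s        -- top of each layer in two-way time (s), ascending
    layer_velocities_ms -- interval velocity of each layer (m/s)
    """
    depth = 0.0
    for i, v in enumerate(layer_velocities_ms):
        top = layer_tops_s[i]
        base = layer_tops_s[i + 1] if i + 1 < len(layer_tops_s) else float("inf")
        if twt_s <= top:
            break
        dt = min(twt_s, base) - top
        depth += v * dt / 2.0          # /2: two-way time -> one-way distance
    return depth

if __name__ == "__main__":
    tops = [0.0, 0.8, 1.6]             # s (two-way time), hypothetical layers
    vels = [1800.0, 2400.0, 3200.0]    # m/s, hypothetical interval velocities
    horizon_twt = 1.9                  # s, a picked horizon
    z = twt_to_depth(horizon_twt, tops, vels)
    well_marker_depth = 2150.0         # m, hypothetical well measurement
    print(f"converted depth {z:.0f} m, well mismatch {z - well_marker_depth:+.0f} m")
```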

  18. Extension of a human visual system model for display simulation

    Science.gov (United States)

    Marchessoux, Cédric; Rombaut, Alexis; Kimpe, Tom; Vermeulen, Brecht; Demeester, Piet

    2008-02-01

    In the context of medical display validation, a simulation chain has been developed to facilitate display design and image quality validation. One important part is the human visual observer model to quantify the quality perception of the simulated images. For several years, multiple research groups have been modeling the various aspects of human perception to integrate them in a complete Human Visual System (HVS) model and to develop visible image difference metrics. In our framework, the JNDmetrix is used. It reflects the human subjective assessment of image or video fidelity. Nevertheless, the system is limited and not suitable for our accurate simulations. It is limited to 8-bit integer RGB images, and the model takes into account display parameters like gamma, black offset, ambient light... It needs to be extended. The solutions proposed to extend the HVS model are: precision enhancement to overcome the 8-bit limit, color space conversion between XYZ and RGB, and adaptation to the display parameters. The preprocessing does not introduce any kind of perceived distortion caused, for example, by precision enhancement. With this extension the model is used on a daily basis in the display simulation chain.
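    As an illustration of how the display parameters listed above (gamma, black offset, ambient light) can enter such a chain, the sketch below maps floating-point drive levels to absolute luminance, sidestepping the 8-bit limit by staying in double precision throughout. It is a generic display characteristic model under assumed parameter values, not the extension described in the paper.

```python
# Hedged sketch of the kind of display model implied above: map (possibly
# more-than-8-bit, floating-point) drive levels to absolute luminance using
# gamma, black offset and ambient light, so a perceptual metric can be fed
# physically meaningful values. Parameter names and values are illustrative.
import numpy as np

def drive_to_luminance(v, v_max=1.0, l_max=500.0, l_black=0.5,
                       l_ambient=1.0, gamma=2.2):
    """Luminance (cd/m^2) of normalized drive level v in [0, v_max].

    l_max     -- white luminance of the panel
    l_black   -- black offset (panel leakage)
    l_ambient -- reflected ambient light added on top of the panel output
    """
    v = np.asarray(v, dtype=np.float64)            # float, not 8-bit integers
    return l_black + l_ambient + (l_max - l_black) * (v / v_max) ** gamma

if __name__ == "__main__":
    # 10-bit grayscale ramp expressed as floats: no quantization to 8 bits.
    ramp = np.linspace(0.0, 1.0, 1024)
    lum = drive_to_luminance(ramp)
    print(lum[0], lum[-1])    # ~1.5 cd/m^2 at black, ~501.0 cd/m^2 at white
```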

  19. 3D shape modeling by integration visual and tactile cues

    Science.gov (United States)

    Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming

    2015-10-01

    With the progress in CAD (Computer Aided Design) systems, many mechanical components can be designed efficiently with high precision. However, such systems are unfit for some organic shapes, for example, a toy. In this paper, an easy way of dealing with such shapes is presented, combining visual perception with tangible interaction. The method is divided into three phases: two tangible interaction phases and one visual reconstruction phase. In the first tangible phase, a clay model is used to represent the raw shape, and the designer can change the shape intuitively with his hands. Then the raw shape is scanned into a digital volume model through a low cost vision system. In the last tangible phase, a desktop haptic device from SensAble is used to refine the scanned volume model and convert it into a surface model. A physical clay model and a virtual clay model are both used in this method to deal with the main shape and the details, respectively, and the vision system is used to bridge the two tangible phases. The visual reconstruction system consists only of a camera that acquires the raw shape through a shape-from-silhouettes method. All of the systems are installed on a single desktop, making it convenient for designers. The vision system details and a design example are presented in the paper.
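    The shape-from-silhouettes step mentioned above can be pictured with a minimal voxel-carving sketch: voxels that project inside the silhouette in every view are kept, the rest are carved away. This is a generic, orthographic toy version with an analytic "silhouette", not the authors' camera pipeline; all names and values are invented.

```python
# Minimal shape-from-silhouettes sketch (illustration only): carve a voxel grid
# by keeping voxels whose orthographic projection falls inside the silhouette
# in every view. Views rotate the object about the z axis.
import numpy as np

def carve(silhouette_fn, n_views=8, res=64, half=1.0):
    lin = np.linspace(-half, half, res)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
    keep = np.ones(X.shape, dtype=bool)
    for k in range(n_views):
        a = 2 * np.pi * k / n_views
        # Rotate voxel centres about z, then project orthographically onto (u, z).
        u = X * np.cos(a) + Y * np.sin(a)
        keep &= silhouette_fn(u, Z)
    return keep

def sphere_silhouette(u, z):
    # Toy "scanned object": every view of a sphere of radius 0.6 is a disc.
    return u ** 2 + z ** 2 <= 0.6 ** 2

if __name__ == "__main__":
    res = 64
    hull = carve(sphere_silhouette, n_views=8, res=res)
    voxel = (2.0 / (res - 1)) ** 3     # grid spacing cubed
    print(f"carved volume ~ {hull.sum() * voxel:.3f}"
          f" (true sphere volume = {4 / 3 * np.pi * 0.6 ** 3:.3f})")
```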

  20. An Integrated Biomechanical Model for Microgravity-Induced Visual Impairment

    Science.gov (United States)

    Nelson, Emily S.; Best, Lauren M.; Myers, Jerry G.; Mulugeta, Lealem

    2012-01-01

    When gravitational unloading occurs upon entry to space, astronauts experience a major shift in the distribution of their bodily fluids, with a net headward movement. Measurements have shown that intraocular pressure spikes, and there is a strong suspicion that intracranial pressure also rises. Some astronauts in both short- and long-duration spaceflight develop visual acuity changes, which may or may not reverse upon return to earth gravity. To date, of the 36 U.S. astronauts who have participated in long-duration space missions on the International Space Station, 15 crew members have developed minor to severe visual decrements and anatomical changes. These ophthalmic changes include hyperopic shift, optic nerve distension, optic disc edema, globe flattening, choroidal folds, and elevated cerebrospinal fluid pressure. In order to understand the physical mechanisms behind these phenomena, NASA is developing an integrated model that appropriately captures whole-body fluids transport through lumped-parameter models for the cerebrospinal and cardiovascular systems. This data feeds into a finite element model for the ocular globe and retrobulbar subarachnoid space through time-dependent boundary conditions. Although tissue models and finite element representations of the corneo-scleral shell, retina, choroid and optic nerve head have been integrated to study pathological conditions such as glaucoma, the retrobulbar subarachnoid space behind the eye has received much less attention. This presentation will describe the development and scientific foundation of our holistic model.

  1. Markers of preparatory attention predict visual short-term memory performance.

    Science.gov (United States)

    Murray, Alexandra M; Nobre, Anna C; Stokes, Mark G

    2011-05-01

    Visual short-term memory (VSTM) is limited in capacity. Therefore, it is important to encode only visual information that is most likely to be relevant to behaviour. Here we asked which aspects of selective biasing of VSTM encoding predict subsequent memory-based performance. We measured EEG during a selective VSTM encoding task, in which we varied parametrically the memory load and the precision of recall required to compare a remembered item to a subsequent probe item. On half the trials, a spatial cue indicated that participants only needed to encode items from one hemifield. We observed a typical sequence of markers of anticipatory spatial attention: early attention directing negativity (EDAN), anterior attention directing negativity (ADAN), late directing attention positivity (LDAP); as well as of VSTM maintenance: contralateral delay activity (CDA). We found that individual differences in preparatory brain activity (EDAN/ADAN) predicted cue-related changes in recall accuracy, indexed by memory-probe discrimination sensitivity (d'). Importantly, our parametric manipulation of memory-probe similarity also allowed us to model the behavioural data for each participant, providing estimates for the quality of the memory representation and the probability that an item could be retrieved. We found that selective encoding primarily increased the probability of accurate memory recall; that ERP markers of preparatory attention predicted the cue-related changes in recall probability. Copyright © 2011. Published by Elsevier Ltd.

  2. Visual misperception in aviation: glide path performance in a black hole environment.

    Science.gov (United States)

    Gibb, Randy; Schvaneveldt, Roger; Gray, Rob

    2008-08-01

    We sought to improve understanding of visual perception in aviation to mitigate mishaps in approaches to landing. Research has attempted to identify the most salient visual cues for glide path performance in impoverished visual conditions. Numerous aviation accidents caused by glide path overestimation (GPO) have occurred when a low glide path was induced by a black hole illusion (BHI) in featureless terrain during night approaches. Twenty pilots flew simulated approaches under various visual cues of random terrain objects and approach lighting system (ALS) configurations. Performance was assessed relative to the desired 3 degrees glide path in terms of precision, bias, and stability. With the high-ratio (long, narrow) runway, the overall performance between 8.3 and 0.9 km from the runway depicted a concave approach shape found in BHI mishaps. The addition of random terrain objects failed to improve glide path performance, and an ALS commonly used at airports induced GPO and the resulting low glide path. The worst performance, however, resulted from a combination ALS consisting of both side and approach lights. Surprisingly, novice pilots flew more stable approaches than did experienced pilots. Low, unsafe approaches occur frequently in conditions with limited global and local visual cues. Approach lights lateral of the runway may counter the bias of the BHI. The variability suggested a proactive, cue-seeking behavior among experienced pilots as compared with novice pilots. Visual spatial disorientation training in flight simulators should be used to demonstrate visual misperceptions in black hole environments and reduce pilots' confidence in their limited visual capabilities.
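    To make the glide path numbers concrete, a quick back-of-the-envelope calculation (not taken from the paper) shows the height a 3-degree path implies at the two distances studied, and how far below it a hypothetically shallow 2.5-degree black-hole-style approach would sit.

```python
# Quick geometry check (not from the paper): height implied by a 3-degree
# glide path at the study distances, versus a hypothetical shallow 2.5-degree
# "black hole" approach.
import math

def height_on_path(distance_m, angle_deg=3.0):
    return distance_m * math.tan(math.radians(angle_deg))

for d in (8300.0, 900.0):                       # distances from the study (m)
    h3 = height_on_path(d, 3.0)
    h25 = height_on_path(d, 2.5)                # hypothetical shallow approach
    print(f"{d / 1000:.1f} km: 3 deg -> {h3:.0f} m, 2.5 deg -> {h25:.0f} m "
          f"({h3 - h25:.0f} m low)")
```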

  3. BDNF Variants May Modulate Long-Term Visual Memory Performance in a Healthy Cohort

    Directory of Open Access Journals (Sweden)

    Nesli Avgan

    2017-03-01

    Brain-derived neurotrophic factor (BDNF) is involved in numerous cognitive functions including learning and memory. BDNF plays an important role in synaptic plasticity in humans and rats with BDNF shown to be essential for the formation of long-term memories. We previously identified a significant association between the BDNF Val66Met polymorphism (rs6265) and long-term visual memory (p-value = 0.003) in a small cohort (n = 181) comprised of healthy individuals who had been phenotyped for various aspects of memory function. In this study, we have extended the cohort to 597 individuals and examined multiple genetic variants across both the BDNF and BDNF-AS genes for association with visual memory performance as assessed by the Wechsler Memory Scale-Fourth Edition subtests Visual Reproduction I and II (VR I and II). VR I assesses immediate visual memory, whereas VR II assesses long-term visual memory. Genetic association analyses were performed for 34 single nucleotide polymorphisms genotyped on Illumina OmniExpress BeadChip arrays with the immediate and long-term visual memory phenotypes. While none of the BDNF and BDNF-AS variants were shown to be significant for immediate visual memory, we found 10 variants (including the Val66Met polymorphism (p-value = 0.006)) that were nominally associated, and three variants (two variants in BDNF and one variant in the BDNF-AS locus) that were significantly associated with long-term visual memory. Our data therefore suggests a potential role for BDNF, and its anti-sense transcript BDNF-AS, in long-term visual memory performance.

  4. BDNF Variants May Modulate Long-Term Visual Memory Performance in a Healthy Cohort.

    Science.gov (United States)

    Avgan, Nesli; Sutherland, Heidi G; Spriggens, Lauren K; Yu, Chieh; Ibrahim, Omar; Bellis, Claire; Haupt, Larisa M; Shum, David H K; Griffiths, Lyn R

    2017-03-17

    Brain-derived neurotrophic factor (BDNF) is involved in numerous cognitive functions including learning and memory. BDNF plays an important role in synaptic plasticity in humans and rats with BDNF shown to be essential for the formation of long-term memories. We previously identified a significant association between the BDNF Val66Met polymorphism (rs6265) and long-term visual memory (p-value = 0.003) in a small cohort (n = 181) comprised of healthy individuals who had been phenotyped for various aspects of memory function. In this study, we have extended the cohort to 597 individuals and examined multiple genetic variants across both the BDNF and BDNF-AS genes for association with visual memory performance as assessed by the Wechsler Memory Scale-Fourth Edition subtests Visual Reproduction I and II (VR I and II). VR I assesses immediate visual memory, whereas VR II assesses long-term visual memory. Genetic association analyses were performed for 34 single nucleotide polymorphisms genotyped on Illumina OmniExpress BeadChip arrays with the immediate and long-term visual memory phenotypes. While none of the BDNF and BDNF-AS variants were shown to be significant for immediate visual memory, we found 10 variants (including the Val66Met polymorphism (p-value = 0.006)) that were nominally associated, and three variants (two variants in BDNF and one variant in the BDNF-AS locus) that were significantly associated with long-term visual memory. Our data therefore suggests a potential role for BDNF, and its anti-sense transcript BDNF-AS, in long-term visual memory performance.

  5. How spatial abilities and dynamic visualizations interplay when learning functional anatomy with 3D anatomical models.

    Science.gov (United States)

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material presentation formats, spatial abilities, and anatomical tasks. First, to understand the cognitive challenges a novice learner would be faced with when first exposed to 3D anatomical content, a six-step cognitive task analysis was developed. Following this, an experimental study was conducted to explore how presentation formats (dynamic vs. static visualizations) support learning of functional anatomy, and affect subsequent anatomical tasks derived from the cognitive task analysis. A second aim was to investigate the interplay between spatial abilities (spatial visualization and spatial relation) and presentation formats when the functional anatomy of a 3D scapula and the associated shoulder flexion movement are learned. Findings showed no main effect of the presentation formats on performances, but revealed the predictive influence of spatial visualization and spatial relation abilities on performance. However, an interesting interaction between presentation formats and spatial relation ability for a specific anatomical task was found. This result highlighted the influence of presentation formats when spatial abilities are involved as well as the differentiated influence of spatial abilities on anatomical tasks. © 2015 American Association of Anatomists.

  6. Measuring the effect of the rainfall on the windshield in terms of visual performance.

    Science.gov (United States)

    Bernardin, Frédéric; Bremond, Roland; Ledoux, Vincent; Pinto, Maria; Lemonnier, Sophie; Cavallo, Viola; Colomb, Michèle

    2014-02-01

    Driving through rain results in reduced visual performance, and car designers have proposed countermeasures in order to reduce the impact of rain on driving performance. In this paper, we propose a methodology dedicated to the quantitative estimation of the loss of visual performance due to the falling rain. We have considered the rain falling on the windshield as the main factor which reduces visual performance in driving. A laboratory experiment was conducted with 40 participants. The reduction of visual performance through rain was considered with respect to two driving tasks: the detection of an object on the road (contrast threshold) and reading a road sign. This experiment was conducted in a laboratory under controlled artificial rain. Two levels of rain intensity were compared, as well as two wiper conditions (new and worn), while the reference condition was without rain. The reference driving situation was night driving. Effects of both the rain level and the wipers characteristics were found, which validates the proposed methodology for the quantitative estimation of rain countermeasures in terms of visual performance. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Visualization of Distributed Data Structures for High Performance Fortran-Like Languages

    Directory of Open Access Journals (Sweden)

    Rainer Koppler

    1997-01-01

    This article motivates the usage of graphics and visualization for efficient utilization of High Performance Fortran's (HPF's) data distribution facilities. It proposes a graphical toolkit consisting of exploratory and estimation tools which allow the programmer to navigate through complex distributions and to obtain graphical ratings with respect to load distribution and communication. The toolkit has been implemented in a mapping design and visualization tool which is coupled with a compilation system for the HPF predecessor Vienna Fortran. Since this language covers a superset of HPF's facilities, the tool may also be used for visualization of HPF data structures.
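    For readers unfamiliar with the distributions such a toolkit visualizes, the short sketch below prints which processor owns each element of a one-dimensional array under HPF-style BLOCK and CYCLIC distributions. It is a plain illustration of the standard mapping rules, not part of the toolkit described above.

```python
# Small illustration (not the toolkit itself) of the mappings such a tool
# visualizes: which processor owns each element of a 1-D array under HPF-style
# BLOCK and CYCLIC distributions over P processors.
def block_owner(i, n, p):
    """BLOCK: contiguous chunks of ceil(n/p) elements per processor."""
    chunk = -(-n // p)                 # ceil(n / p)
    return i // chunk

def cyclic_owner(i, n, p):
    """CYCLIC: elements dealt out round-robin."""
    return i % p

if __name__ == "__main__":
    n, p = 16, 4
    print("index :", *[f"{i:2d}" for i in range(n)])
    print("BLOCK :", *[f"{block_owner(i, n, p):2d}" for i in range(n)])
    print("CYCLIC:", *[f"{cyclic_owner(i, n, p):2d}" for i in range(n)])
```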

  8. Towards computer-based perception by modeling visual perception: A probabilistic theory

    NARCIS (Netherlands)

    Ciftcioglu, O.; Bittermann, M.; Sariyildiz, S.

    2006-01-01

    Studies on computer-based perception by vision modelling are described. The visual perception is mathematically modelled, where the model receives and interprets visual data from the environment. The perception is defined in probabilistic terms so that it is quantified in the same way. Human visual

  9. Do prior knowledge, personality and visual perceptual ability predict student performance in microscopic pathology?

    Science.gov (United States)

    Helle, Laura; Nivala, Markus; Kronqvist, Pauliina; Ericsson, K Anders; Lehtinen, Erno

    2010-06-01

    OBJECTIVES There has been long-standing controversy regarding aptitude testing and selection for medical education. Visual perception is considered particularly important for detecting signs of disease as part of diagnostic procedures in, for example, microscopic pathology, radiology and dermatology and as a component of perceptual motor skills in medical procedures such as surgery. In 1968 the Perceptual Ability Test (PAT) was introduced in dental education. The aim of the present pilot study was to explore possible predictors of performance in diagnostic classification based on microscopic observation in the context of an undergraduate pathology course. METHODS A pre- and post-test of diagnostic classification performance, test of visual perceptual skill (Test of Visual Perceptual Skills, 3rd edition [TVPS-3]) and a self-report instrument of personality (Big Five Personality Inventory) were administered. In addition, data on academic performance (performance in histology and cell biology, a compulsory course taken the previous year, in addition to performance on the microscopy examination and final examination) were collected. RESULTS The results indicated that one personality factor (Conscientiousness) and one element of visual perceptual ability (spatial relationship awareness) predicted performance on the pre-test. The only factor to predict performance on the post-test was performance on the pre-test. Similarly, the microscopy examination score was predicted by the pre-test score, in addition to the histology and cell biology grade. The course examination score was predicted by two personality factors (Conscientiousness and lack of Openness) and the histology and cell biology grade. CONCLUSIONS Visual spatial ability may be related to performance in the initial phase of training in microscopic pathology. However, from a practical point of view, medical students are able to learn basic microscopic pathology using worked-out examples, independently of measures

  10. Bio-physically plausible visualization of highly scattering fluorescent neocortical models for in silico experimentation

    KAUST Repository

    Abdellah, Marwan

    2017-02-15

    Background: We present a visualization pipeline capable of accurate rendering of highly scattering fluorescent neocortical neuronal models. The pipeline is mainly developed to serve the computational neurobiology community. It allows the scientists to visualize the results of their virtual experiments that are performed in computer simulations, or in silico. The impact of the presented pipeline opens novel avenues for assisting the neuroscientists to build biologically accurate models of the brain. These models result from computer simulations of physical experiments that use fluorescence imaging to understand the structural and functional aspects of the brain. Due to the limited capabilities of the current visualization workflows to handle fluorescent volumetric datasets, we propose a physically-based optical model that can accurately simulate light interaction with fluorescent-tagged scattering media based on the basic principles of geometric optics and Monte Carlo path tracing. We also develop an automated and efficient framework for generating dense fluorescent tissue blocks from a neocortical column model that is composed of approximately 31000 neurons. Results: Our pipeline is used to visualize a virtual fluorescent tissue block of 50 μm3 that is reconstructed from the somatosensory cortex of juvenile rat. The fluorescence optical model is qualitatively analyzed and validated against experimental emission spectra of different fluorescent dyes from the Alexa Fluor family. Conclusion: We discussed a scientific visualization pipeline for creating images of synthetic neocortical neuronal models that are tagged virtually with fluorescent labels on a physically-plausible basis. The pipeline is applied to analyze and validate simulation data generated from neuroscientific in silico experiments.
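    The optical model above combines geometric optics with Monte Carlo path tracing. The sketch below shows the generic machinery such a simulation relies on (exponential free paths, a Henyey-Greenstein phase function, and probabilistic fluorescence re-emission on absorption) for a one-dimensional slab. Coefficients are invented and the code is a textbook-style illustration, not the paper's renderer.

```python
# Generic Monte Carlo light-transport sketch for a scattering, fluorescent slab
# (standard MCML-style formulas; illustrative only). Photons scatter with a
# Henyey-Greenstein phase function; on absorption they may be re-emitted
# isotropically as fluorescence.
import math
import random

MU_S, MU_A = 10.0, 0.5          # scattering / absorption coefficients (1/mm)
G, QUANTUM_YIELD = 0.9, 0.8     # HG anisotropy, fluorescence quantum yield
SLAB = 1.0                      # slab thickness (mm)

def hg_cos_theta(g, rng):
    """Sample the cosine of the scattering angle from the HG phase function."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0
    t = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - t * t) / (2.0 * g)

def scatter(u, cos_t, phi):
    """Rotate direction u = (ux, uy, uz) by polar angle acos(cos_t), azimuth phi."""
    ux, uy, uz = u
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    if abs(uz) > 0.99999:
        return (sin_t * math.cos(phi), sin_t * math.sin(phi),
                math.copysign(cos_t, uz))
    d = math.sqrt(1.0 - uz * uz)
    return (sin_t * (ux * uz * math.cos(phi) - uy * math.sin(phi)) / d + ux * cos_t,
            sin_t * (uy * uz * math.cos(phi) + ux * math.sin(phi)) / d + uy * cos_t,
            -sin_t * math.cos(phi) * d + uz * cos_t)

def trace(rng):
    z, u, fluoresced = 0.0, (0.0, 0.0, 1.0), False
    mu_t = MU_S + MU_A
    while True:
        z += u[2] * (-math.log(1.0 - rng.random()) / mu_t)   # exponential free path
        if z < 0.0 or z > SLAB:
            return ("transmitted" if z > SLAB else "reflected", fluoresced)
        if rng.random() < MU_A / mu_t:                       # absorption event
            if rng.random() < QUANTUM_YIELD:                 # fluorescence re-emission
                fluoresced = True
                u = scatter(u, 2.0 * rng.random() - 1.0, 2.0 * math.pi * rng.random())
            else:
                return ("absorbed", fluoresced)
        else:                                                # elastic scattering
            u = scatter(u, hg_cos_theta(G, rng), 2.0 * math.pi * rng.random())

if __name__ == "__main__":
    rng = random.Random(0)
    results = [trace(rng) for _ in range(5000)]
    fates = {f: sum(1 for x, _ in results if x == f)
             for f in ("transmitted", "reflected", "absorbed")}
    print(fates, "fluorescent photons:", sum(fl for _, fl in results))
```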

  11. Bio-physically plausible visualization of highly scattering fluorescent neocortical models for in silico experimentation.

    Science.gov (United States)

    Abdellah, Marwan; Bilgili, Ahmet; Eilemann, Stefan; Shillcock, Julian; Markram, Henry; Schürmann, Felix

    2017-02-15

    We present a visualization pipeline capable of accurate rendering of highly scattering fluorescent neocortical neuronal models. The pipeline is mainly developed to serve the computational neurobiology community. It allows the scientists to visualize the results of their virtual experiments that are performed in computer simulations, or in silico. The impact of the presented pipeline opens novel avenues for assisting the neuroscientists to build biologically accurate models of the brain. These models result from computer simulations of physical experiments that use fluorescence imaging to understand the structural and functional aspects of the brain. Due to the limited capabilities of the current visualization workflows to handle fluorescent volumetric datasets, we propose a physically-based optical model that can accurately simulate light interaction with fluorescent-tagged scattering media based on the basic principles of geometric optics and Monte Carlo path tracing. We also develop an automated and efficient framework for generating dense fluorescent tissue blocks from a neocortical column model that is composed of approximately 31000 neurons. Our pipeline is used to visualize a virtual fluorescent tissue block of 50 μm3 that is reconstructed from the somatosensory cortex of juvenile rat. The fluorescence optical model is qualitatively analyzed and validated against experimental emission spectra of different fluorescent dyes from the Alexa Fluor family. We discussed a scientific visualization pipeline for creating images of synthetic neocortical neuronal models that are tagged virtually with fluorescent labels on a physically-plausible basis. The pipeline is applied to analyze and validate simulation data generated from neuroscientific in silico experiments.

  12. Data harmonization and model performance

    Science.gov (United States)

    The Joint Committee on Urban Storm Drainage of the International Association for Hydraulic Research (IAHR) and International Association on Water Pollution Research and Control (IAWPRC) was formed in 1982. The current committee members are (no more than two from a country): B. C. Yen, Chairman (USA); P. Harremoes, Vice Chairman (Denmark); R. K. Price, Secretary (UK); P. J. Colyer (UK), M. Desbordes (France), W. C. Huber (USA), K. Krauth (FRG), A. Sjoberg (Sweden), and T. Sueishi (Japan).The IAHR/IAWPRC Joint Committee is forming a Task Group on Data Harmonization and Model Performance. One objective is to promote international urban drainage data harmonization for easy data and information exchange. Another objective is to publicize available models and data internationally. Comments and suggestions concerning the formation and charge of the Task Group are welcome and should be sent to: B. C. Yen, Dept. of Civil Engineering, Univ. of Illinois, 208 N. Romine St., Urbana, IL 61801.

  13. Anxiety, arousal and visual attention: a mechanistic account of performance variability.

    Science.gov (United States)

    Janelle, Christopher M

    2002-03-01

    Despite extensive research devoted to determining the nature of the relationship between stress and performance, there has been little systematic examination of the mechanisms underlying this relationship. Recently, researchers have begun to empirically address the attentional mechanisms underlying theoretical accounts of how stress, anxiety and arousal influence performance. Given the critical role of visual attention to sport expertise, this paper focuses primarily on literature dealing with how visual cues are differentially identified and processed when performers are anxious. Emerging evidence indicates that gaze behaviour tendencies are reliably altered when performers are anxious, leading to inefficient and often ineffective search strategies. Alterations of these visual search indices are addressed in the context of both self-paced and externally paced sports events. Recommendations concerning the utility of perceptual training programmes and how these training programmes might be used as anxiety regulation interventions are discussed. The theoretical implications and directions for future research are also addressed.

  14. Selecting the optimal healthcare centers with a modified P-median model: a visual analytic perspective.

    Science.gov (United States)

    Jia, Tao; Tao, Hongbing; Qin, Kun; Wang, Yulong; Liu, Chengkun; Gao, Qili

    2014-10-22

    In a conventional P-median model, demand points are typically assigned to the closest supplying facilities, but this method exhibits evident limitations in real cases. This paper proposes a modified P-median model in which exact and approximate strategies are used. The first strategy aims to enumerate all of the possible combinations of P facilities, and the second strategy adopts simulated annealing to allocate resources considering capacity and spatial compactness constraints. These strategies allow us to choose optimal locations by applying visual analytics, which is rarely employed in location allocation planning. This model is applied to a case study in Henan Province, China, where three optimal healthcare centers are selected from candidate cities. First, the weighting factor in the spatial compactness constraint is visually evaluated to obtain a plausible spatial pattern. Second, three optimal healthcare centers, namely, Zhengzhou, Xinxiang, and Nanyang, are identified in a hybrid transportation network by performing visual analytics. Third, alternative healthcare centers are obtained in a road network and compared with the above solution to understand the impacts of transportation network types. The optimal healthcare centers are visually detected by employing an improved P-median model, which considers both geographic accessibility and service quality. The optimal solutions are obtained in two transportation networks, which suggests that high-speed railways and highways each play a significant role.
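    The exact strategy mentioned above (enumerating all combinations of P facilities and assigning each demand point to its nearest open facility) can be written down in a few lines. The toy sketch below uses invented coordinates and weights and omits the capacity and compactness constraints handled by the simulated annealing stage.

```python
# Toy illustration of the exact strategy described above: enumerate every
# combination of P candidate facilities and assign each demand point to its
# nearest open facility (capacity and compactness constraints omitted).
from itertools import combinations
import math

def p_median(demand_pts, weights, candidates, p):
    best_cost, best_set = float("inf"), None
    for facilities in combinations(range(len(candidates)), p):
        cost = sum(w * min(math.dist(d, candidates[f]) for f in facilities)
                   for d, w in zip(demand_pts, weights))
        if cost < best_cost:
            best_cost, best_set = cost, facilities
    return best_set, best_cost

if __name__ == "__main__":
    # Hypothetical demand points (e.g. towns) with population weights,
    # and candidate cities for the healthcare centres.
    demand = [(0, 0), (1, 2), (4, 1), (5, 5), (8, 2), (9, 7)]
    weights = [5, 3, 8, 2, 6, 4]
    candidates = [(1, 1), (4, 2), (8, 3), (9, 6)]
    chosen, cost = p_median(demand, weights, candidates, p=3)
    print("chosen candidate indices:", chosen, "weighted cost:", round(cost, 2))
```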

  15. Scientific Visualization & Modeling for Earth Systems Science Education

    Science.gov (United States)

    Chaudhury, S. Raj; Rodriguez, Waldo J.

    2003-01-01

    Providing research experiences for undergraduate students in Earth Systems Science (ESS) poses several challenges at smaller academic institutions that might lack dedicated resources for this area of study. This paper describes the development of an innovative model that involves students with majors in diverse scientific disciplines in authentic ESS research. In studying global climate change, experts typically use scientific visualization techniques applied to remote sensing data collected by satellites. In particular, many problems related to environmental phenomena can be quantitatively addressed by investigations based on datasets related to the scientific endeavours such as the Earth Radiation Budget Experiment (ERBE). Working with data products stored at NASA's Distributed Active Archive Centers, visualization software specifically designed for students and an advanced, immersive Virtual Reality (VR) environment, students engage in guided research projects during a structured 6-week summer program. Over the 5-year span, this program has afforded the opportunity for students majoring in biology, chemistry, mathematics, computer science, physics, engineering and science education to work collaboratively in teams on research projects that emphasize the use of scientific visualization in studying the environment. Recently, a hands-on component has been added through science student partnerships with school-teachers in data collection and reporting for the GLOBE Program (GLobal Observations to Benefit the Environment).

  16. Occupational performance and quality of life: interrelationships in daily life of visually impaired individuals

    Directory of Open Access Journals (Sweden)

    Paula Becker

    2015-12-01

    Objective: To identify levels of self-perception of occupational performance and quality of life of individuals with visual impairment and to analyse the interrelationship between the indices found. Methods: Descriptive cross-sectional survey with people with visual disabilities enrolled in a visual rehabilitation program. The COPM was applied to measure the self-perception of occupational performance, the SF-36 for quality of life measurement and a socio-demographic questionnaire to describe personal characteristics. Results: Twenty-three subjects were included in the sample: 74% with low vision, 52.2% were female and the mean age was 46.7 years. The self-perception of performance and the emotional aspects domains of participants with low vision were better than those of participants with blindness. The greater the duration of visual impairment, the worse was the self-perception of pain. The vitality domain showed a statistically significant relationship with the general health, performance and satisfaction domains, and the mental health domain was related to general health, pain, performance and vitality. Conclusion: The better the mental health index, the better were the evaluations of the physical, functional and social areas, which indicates the importance of considering mental health in visual rehabilitation programs. The COPM and the SF-36 address the issue of functionality in different ways and their results are not compatible.

  17. The effects of LCD anisotropy on the visual performance of users of different ages.

    Science.gov (United States)

    Oetjen, Sophie; Ziefle, Martina

    2007-08-01

    The present study examined visual discrimination speed and accuracy while using an LCD and a CRT display. LCDs have ergonomic advantages, but their main disadvantage is that they provide inconsistent photometric measures depending on the viewing angle (anisotropy). Independent variables were screen type (LCD and CRT), viewing angle (0 degrees, 11 degrees, 41 degrees, 50 degrees, and 56 degrees) and user's age (teenagers, young adults, and middle-aged adults). Dependent variables were speed and accuracy in a visual discrimination task and user's ratings. The results corroborated the negative impact of LCD anisotropy. Visual discrimination times were 7.6% slower when an LCD was used instead of a CRT. Performance differences increased with increasing viewing angle for both screens, but performance decrements were larger for the LCD. Young adults showed the best visual performance, as compared with teenagers and middle-aged adults. Effects of anisotropy were found for all age groups, although the performance of middle-aged adults was affected more when extended viewing angles were adopted. LCD anisotropy is a limiting factor for visual performance, especially in work settings where fast and accurate reactions are necessary. The outcomes of this research allow the derivation of ergonomic guidelines for electronic reading.

  18. Motor skill performance of school-age children with visual impairments

    NARCIS (Netherlands)

    Houwen, Suzanne

    2008-01-01

    This thesis focuses on the motor skill performance of school-age children with visual impairments (VI). Children with VI are at risk of poor motor skill performance, as vision guides and controls the acquisition, differentiation, and automatization of motor skills. Yet though the presence or absence

  19. Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text.

    Science.gov (United States)

    Anderson, Andrew James; Bruni, Elia; Lopopolo, Alessandro; Poesio, Massimo; Baroni, Marco

    2015-10-15

    Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations. Copyright © 2015 Elsevier Inc. All rights reserved.
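    Representational similarity analysis, the core comparison used above, reduces to correlating the pairwise-dissimilarity structure of two spaces. The sketch below runs it on synthetic stand-ins for image-model features and voxel patterns; all data and parameter choices are invented for illustration.

```python
# Hedged sketch of the representational similarity logic described above:
# correlate the pairwise-dissimilarity structure of a (here synthetic) image-
# based model space with that of simulated voxel patterns for the same items.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_items = 20

# Stand-ins for real data: image-model feature vectors per object word, and
# voxel patterns that partly share the same latent structure.
latent = rng.normal(size=(n_items, 5))
image_features = latent @ rng.normal(size=(5, 50)) + 0.5 * rng.normal(size=(n_items, 50))
voxel_patterns = latent @ rng.normal(size=(5, 200)) + 2.0 * rng.normal(size=(n_items, 200))

# Representational dissimilarity matrices as condensed vectors (upper triangles).
model_rdm = pdist(image_features, metric="correlation")
brain_rdm = pdist(voxel_patterns, metric="correlation")

rho, pval = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation: rho = {rho:.2f}, p = {pval:.3g}")
```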

  20. Model-based analysis of patterned motion processing in mouse primary visual cortex

    Directory of Open Access Journals (Sweden)

    Dylan Richard Muir

    2015-08-01

    Neurons in sensory areas of neocortex show responses tuned to specific features of the environment. In visual cortex, information about features such as edges or textures with particular orientations must be integrated to recognize a visual scene or object. Connectivity studies in rodent cortex have revealed that neurons make specific connections within sub-networks sharing common input tuning. In principle, this sub-network architecture enables local cortical circuits to integrate sensory information. However, whether feature integration indeed occurs locally in rodent primary sensory areas has not been examined directly. We studied local integration of sensory features in primary visual cortex (V1) of the mouse by presenting drifting grating and plaid stimuli, while recording the activity of neuronal populations with two-photon calcium imaging. Using a Bayesian model-based analysis framework, we classified single-cell responses as being selective for either individual grating components or for moving plaid patterns. Rather than relying on trial-averaged responses, our model-based framework takes into account single-trial responses and can easily be extended to consider any number of arbitrary predictive models. Our analysis method was able to successfully classify significantly more responses than traditional partial correlation analysis, and provides a rigorous statistical framework to rank any number of models and reject poorly performing models. We also found a large proportion of cells that respond strongly to only one stimulus class. In addition, a quarter of selectively responding neurons had more complex responses that could not be explained by any simple integration model. Our results show that a broad range of pattern integration processes takes place already at the level of primary visual cortex. This diversity of integration is consistent with processing of visual inputs by local sub-networks within V1 that are tuned to combinations
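    For context, the "traditional partial correlation analysis" that the Bayesian framework is compared against scores each cell's plaid tuning against a pattern prediction and a component prediction, partialling each correlation out of the other. The toy sketch below applies the standard formulas to synthetic tuning curves; it is not the paper's method or data, and the simple comparison of the two partial correlations stands in for the usual significance-based classification.

```python
# Sketch of the traditional partial-correlation analysis referenced above
# (standard formulas; toy tuning curves, not the paper's data). A cell is
# labelled "pattern" or "component" selective depending on which prediction
# its plaid tuning matches after partialling out the other.
import numpy as np

dirs = np.arange(0, 360, 30)                       # stimulus directions (deg)

def von_mises(theta_deg, pref_deg, kappa=2.0):
    return np.exp(kappa * np.cos(np.radians(theta_deg - pref_deg)))

grating_tuning = von_mises(dirs, pref_deg=90)

# Predictions for a plaid made of two gratings 120 deg apart:
component_pred = von_mises(dirs - 60, 90) + von_mises(dirs + 60, 90)
pattern_pred = grating_tuning.copy()

def partial_corr(r_xy, r_xz, r_yz):
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

def classify(plaid_response):
    r_p = np.corrcoef(plaid_response, pattern_pred)[0, 1]
    r_c = np.corrcoef(plaid_response, component_pred)[0, 1]
    r_pc = np.corrcoef(pattern_pred, component_pred)[0, 1]
    R_p = partial_corr(r_p, r_c, r_pc)   # pattern corr, component partialled out
    R_c = partial_corr(r_c, r_p, r_pc)   # component corr, pattern partialled out
    return ("pattern" if R_p > R_c else "component"), R_p, R_c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy_component_cell = component_pred + 0.1 * rng.normal(size=dirs.size)
    noisy_pattern_cell = pattern_pred + 0.1 * rng.normal(size=dirs.size)
    print("component-like cell ->", classify(noisy_component_cell)[0])
    print("pattern-like cell   ->", classify(noisy_pattern_cell)[0])
```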

  1. An open science approach to modeling and visualizing ...

    Science.gov (United States)

    It is expected that cyanobacteria blooms will increase in frequency, duration, and severity as inputs of nutrients increase and the impacts of climate change are realized. Partly in response to this, federal, state, and local entities have ramped up efforts to better understand blooms which has resulted in new life for old datasets, new monitoring programs, and novel uses for non-traditional sources of data. To fully benefit from these datasets, it is also imperative that the full body of work including data, code, and manuscripts be openly available (i.e., open science). This presentation will provide several examples of our work which occurs at the intersection of open science and research on cyanobacteria blooms in lakes and ponds. In particular we will discuss 1) why open science is particularly important for environmental human health issues; 2) the lakemorpho and elevatr R packages and how we use those to model lake morphometry; 3) Shiny server applications to visualize data collected as part of the Cyanobacteria Monitoring Collaborative; and 4) distribution of our research and models via open access publications and as R packages on GitHub. Modelling and visualizing information on cyanobacteria blooms is important as it provides estimates of the extent of potential problems associated with these blooms. Furthermore, conducting this work in the open allows others to access our code, data, and results. In turn, this allows for a greater impact because the

  2. A review of visual MODFLOW applications in groundwater modelling

    Science.gov (United States)

    Hariharan, V.; Shankar, M. Uma

    2017-11-01

    Visual MODFLOW is a Graphical User Interface for the USGS MODFLOW. It is commercial software that is popular among hydrogeologists for its user-friendly features. The software is mainly used for groundwater flow and contaminant transport models under different conditions. This article reviews the versatility of its applications in groundwater modelling over the last 22 years. Agriculture, airfields, constructed wetlands, climate change, drought studies, Environmental Impact Assessment (EIA), landfills, mining operations, river and flood plain monitoring, salt water intrusion, soil profile surveys, watershed analyses, etc., are the areas where the software has reportedly been used to date. The review provides clarity on the scope of the software in groundwater modelling and research.

  3. Neural correlates of delayed visual-motor performance in children treated for brain tumours.

    Science.gov (United States)

    Dockstader, Colleen; Gaetz, William; Bouffet, Eric; Tabori, Uri; Wang, Frank; Bostan, Stefan R; Laughlin, Suzanne; Mabbott, Donald J

    2013-09-01

    Both structural and functional neural integrity is critical for healthy cognitive function and performance. Across studies, it is evident that children who are affected by neurological insult commonly demonstrate impaired cognitive abilities. Children treated with cranial radiation for brain tumours suffer substantial structural damage and exhibit a particularly high correlation between the degree of neural injury and cognitive deficits. However the pathophysiology underlying impaired cognitive performance in this population, and many other paediatric populations affected by neurological injury or disease, is unknown. We wished to investigate the characteristics of neuronal function during visual-motor task performance in a group of children who were treated with cranial radiation for brain tumours. We used Magnetoencephalography to investigate neural function during visual-motor reaction time (RT) task performance in 15 children treated with cranial radiation for Posterior Fossa malignant brain tumours and 17 healthy controls. We found that, relative to controls, the patient group showed: 1) delayed latencies for neural activation in both visual and motor cortices; 2) muted motor responses in the alpha (8-12Hz) and beta (13-29Hz) bandwidths, and 3) potentiated visual and motor responses in the gamma (30-100Hz) bandwidth. Collectively these observations indicate impaired neural processing during visual-motor RT performance in this population and that delays in the speed of visual and motor neuronal processing both contribute to the delays in the behavioural response. As increases in gamma activity are often observed with increases in attention and effort, increased gamma activities in the patient group may reflect compensatory neural activity during task performance. This is the first study to investigate neural function in real-time during cognitive performance in paediatric brain tumour patients. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Effect of light-emitting diode colour temperature on magnifier reading performance of the visually impaired.

    Science.gov (United States)

    Wolffsohn, James S; Palmer, Eshmael; Rubinstein, Martin; Eperjesi, Frank

    2012-09-01

    As light-emitting diodes become more common as the light source for low vision aids, the effect of illumination colour temperature on magnifier reading performance was investigated. Reading ability (maximum reading speed, critical print size, threshold near visual acuity) using Radner charts and subjective preference was assessed for 107 participants with visual impairment using three stand magnifiers with light emitting diode illumination colour temperatures of 2,700 K, 4,500 K and 6,000 K. The results were compared with distance visual acuity, prescribed magnification, age and the primary cause of visual impairment. Reading speed, critical print size and near visual acuity were unaffected by illumination colour temperature (p > 0.05). Reading metrics decreased with worsening acuity and higher levels of prescribed magnification but acuity was unaffected by age. Each colour temperature was preferred and disliked by a similar number of patients and was unrelated to distance visual acuity, prescribed magnification and age (p > 0.05). Patients had better near acuity (p = 0.002), critical print size (p = 0.034) and maximum reading speed (p colour temperature illumination. A range of colour temperature illuminations should be offered to all visually impaired individuals prescribed with an optical magnifier for near tasks to optimise subjective and objective benefits. © 2012 The Authors. Clinical and Experimental Optometry © 2012 Optometrists Association Australia.

  5. Shape representation modulating the effect of motion on visual search performance.

    Science.gov (United States)

    Yang, Lindong; Yu, Ruifeng; Lin, Xuelian; Liu, Na

    2017-11-02

    The effect of motion on visual search has been extensively investigated, but that of uniform linear motion of display on search performance for tasks with different target-distractor shape representations has been rarely explored. The present study conducted three visual search experiments. In Experiments 1 and 2, participants finished two search tasks that differed in target-distractor shape representations under static and dynamic conditions. Two tasks with clear and blurred stimuli were performed in Experiment 3. The experiments revealed that target-distractor shape representation modulated the effect of motion on visual search performance. For tasks with low target-distractor shape similarity, motion negatively affected search performance, which was consistent with previous studies. However, for tasks with high target-distractor shape similarity, if the target differed from distractors in that a gap with a linear contour was added to the target, and the corresponding part of distractors had a curved contour, motion positively influenced search performance. Motion blur contributed to the performance enhancement under dynamic conditions. The findings are useful for understanding the influence of target-distractor shape representation on dynamic visual search performance when display had uniform linear motion.

  6. Central and peripheral visual performance in myopes: contact lenses versus spectacles.

    Science.gov (United States)

    Ehsaei, Asieh; Chisholm, Catharine M; MacIsaac, Jessica C; Mallen, Edward A H; Pacey, Ian E

    2011-06-01

    Myopia is known to degrade visual performance with both optical and retinal changes implicated. Whether contact lenses or spectacles provide better visual performance for myopes is still under debate. The purpose of this study was to examine central and peripheral visual function in myopic subjects corrected with contact lenses versus spectacles. Size thresholds were measured at 13 locations for 20 myopic subjects (mean spherical equivalent refractive error (SE): -6.43±1.22 D and cylinder power: -0.23±0.22 D) corrected with contact lenses (new etafilcon A contact lens, fitted 15 min prior to measurements) versus spectacles. Measurements were taken at both low (δl/l=14%) and high (δl/l=100%) contrast levels. The data were analysed using one way repeated-measures ANOVA. Size thresholds increased with eccentricity in a similar manner for both forms of optical correction. Repeated-measures ANOVA showed no statistically significant difference in central and peripheral visual performance between the two forms of correction for both low and high contrast tasks. The outcome remained the same following correction for spectacle magnification. Eye care practitioners can be confident that modern soft contact lenses do not impair visual performance compared to spectacle lenses for the majority of myopes. Copyright © 2011 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  7. Visual and skill effects on soccer passing performance, kinematics, and outcome estimations.

    Science.gov (United States)

    Basevitch, Itay; Tenenbaum, Gershon; Land, William M; Ward, Paul

    2015-01-01

    The role of visual information and action representations in executing a motor task was examined from a mental representations approach. High-skill (n = 20) and low-skill (n = 20) soccer players performed a passing task to two targets at distances of 9.14 and 18.29 m, under three visual conditions: normal, occluded, and distorted vision (i.e., +4.0 corrective lenses, a visual acuity of approximately 6/75) without knowledge of results. Following each pass, participants estimated the relative horizontal distance from the target as the ball crossed the target plane. Kinematic data during each pass were also recorded for the shorter distance. Results revealed that performance on the motor task decreased as a function of visual information and task complexity (i.e., distance from target) regardless of skill level. High-skill players performed significantly better than low-skill players on both the actual passing and estimation tasks, at each target distance and visual condition. In addition, kinematic data indicated that high-skill participants were more consistent and had different kinematic movement patterns than low-skill participants. Findings contribute to the understanding of the underlying mechanisms required for successful performance in a self-paced, discrete and closed motor task.

  8. Visualization and modeling of impurity atom migration for superdiffusion in semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Wada, T. [Nagoya Sangyo University, Aichi (Japan); Kojiguchi, K. [Nagoya Sangyo University, Aichi (Japan); Nagao, H. [Graduate School of Information Science, Nagoya University (Japan); Fujimoto, H. [Daido Institute of Technology, Nagoya (Japan)]. E-mail: fujimoto@daido-it.ac.jp

    2006-04-01

    Radiation-enhanced superdiffusion in two-layered structures, comprised of an impurity overlayer and a semiconductor substrate, subjected to electron beam irradiation is modeled and visualized using computer graphics animation. The important and experimentally observed large sticking probabilities of impurities at the wafer surface were modeled in the algorithm, and the animation was found to behave as expected under irradiation. Programming of the animation algorithm was performed using an object modeling technique. The animation generated a continuous display of radiation-enhanced superdiffusion that was qualitatively consistent with experimental observations, thereby facilitating understanding of the superdiffusion process.

  9. Objective and subjective visual performance of multifocal contact lenses: pilot study.

    Science.gov (United States)

    Vasudevan, Balamurali; Flores, Michael; Gaib, Sara

    2014-06-01

    The aim of the present study was to compare the objective and subjective visual performance of three different soft multifocal contact lenses. 10 subjects (habitual soft contact lens wearers) between the ages of 40 and 45 years participated in the study. Three different multifocal silicone hydrogel contact lenses (Acuvue Oasys, Air Optix and Biofinity) were fit within the same visit. All the lenses were fit according to the manufacturers' recommendation using the respective fitting guide. Visual performance tests included low and high contrast distance and near visual acuity, contrast sensitivity, range of clear vision and through-focus curve. Objective visual performance tests included measurement of open field accommodative response at different defocus levels and optical aberrations at different viewing distances. Accommodative response was not significantly different between the three types of multifocal contact lenses at each of the accommodative stimulus levels (p>0.05). Accommodative lag increased for higher stimulus levels for all 3 types of contact lenses. Ocular aberrations were not significantly different between these 3 contact lens designs at each of the different viewing distances (p>0.05). In addition, optical aberrations did not significantly differ between different viewing distances for any of these lenses (p>0.05). ANOVA revealed no significant difference in high and low contrast distance visual acuity as well as near visual acuity and contrast sensitivity function between the 3 multifocal contact lenses and spectacles (p>0.05). There was no statistically significant difference in accommodative response, optical aberrations or visual performance between the 3 multifocal contact lenses in early presbyopes. Copyright © 2013 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  10. Visualization of protein folding funnels in lattice models.

    Science.gov (United States)

    Oliveira, Antonio B; Fatore, Francisco M; Paulovich, Fernando V; Oliveira, Osvaldo N; Leite, Vitor B P

    2014-01-01

    Protein folding occurs in a very high dimensional phase space with an exponentially large number of states, and according to the energy landscape theory it exhibits a topology resembling a funnel. In this statistical approach, the folding mechanism is unveiled by describing the local minima in an effective one-dimensional representation. Other approaches based on potential energy landscapes address the hierarchical structure of local energy minima through disconnectivity graphs. In this paper, we introduce a metric to describe the distance between any two conformations, which also allows us to go beyond the one-dimensional representation and visualize the folding funnel in 2D and 3D. In this way it is possible to assess the folding process in detail, e.g., by identifying the connectivity between conformations and establishing the paths to reach the native state, in addition to regions where trapping may occur. Unlike the disconnectivity maps method, which is based on the kinetic connections between states, our methodology is based on structural similarities inferred from the new metric. The method was developed in a 27-mer protein lattice model, folded into a 3×3×3 cube. Five sequences were studied and distinct funnels were generated in an analysis restricted to conformations from the transition-state to the native configuration. Consistent with the expected results from the energy landscape theory, folding routes can be visualized to probe different regions of the phase space, as well as determine the difficulty in folding of the distinct sequences. Changes in the landscape due to mutations were visualized, with the comparison between wild and mutated local minima in a single map, which serves to identify different trapping regions. The extension of this approach to more realistic models and its use in combination with other approaches are discussed.
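
    The abstract does not spell out the conformational metric, so the sketch below assumes a simple contact-map difference between two 27-mer lattice conformations and embeds a distance matrix into 2D with classical multidimensional scaling; it illustrates the general idea only and is not the authors' method.

```python
import numpy as np

def contact_map(coords):
    """Binary contact map of a lattice conformation (non-bonded nearest neighbours)."""
    coords = np.asarray(coords)
    n = len(coords)
    cmap = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 2, n):              # skip chain-bonded neighbours
            if np.abs(coords[i] - coords[j]).sum() == 1:
                cmap[i, j] = cmap[j, i] = 1
    return cmap

def conformation_distance(a, b):
    """Assumed metric: number of contacts present in one conformation but not the other."""
    return int(np.abs(contact_map(a) - contact_map(b)).sum() // 2)

def classical_mds(dist, dim=2):
    """Embed a symmetric distance matrix into `dim` dimensions (classical MDS)."""
    d2 = np.asarray(dist, dtype=float) ** 2
    n = len(d2)
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ d2 @ j                       # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dim]        # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Usage idea: given a list `confs` of sampled conformations (each 27 integer
# lattice coordinates), build D[i, j] = conformation_distance(confs[i], confs[j])
# and plot classical_mds(D), colouring points by energy to draw the funnel.
```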

  11. Learning following prenatal alcohol exposure: performance on verbal and visual multitrial tasks.

    Science.gov (United States)

    Kaemingk, Kris L; Mulvaney, Shelagh; Halverson, Patricia Tanner

    2003-01-01

    Verbal Learning deficits have been reported following prenatal alcohol exposure (PAE). This study examined verbal and visual multitrial learning in children with fetal alcohol syndrome (FAS) or fetal alcohol effects (FAE) and controls matched on age and gender from the same community. In this study, the FAS/FAE group's immediate memory on the Verbal Learning and Visual Learning tasks from the Wide Range Assessment of Memory and Learning (WRAML) was significantly weaker than that of the control group. Although the FAS/FAE group also recalled significantly less information after a delay, they did retain an equivalent proportion of the visual and verbal information as compared to the control group. Thus, the overall pattern of performance on both verbal and visual measures was consistent with that observed in previous studies of Verbal Learning: despite weaker learning, the FAS/FAE group's relative retention of information was no different than that of controls.

  12. Visual perspective in autobiographical memories: reliability, consistency, and relationship to objective memory performance.

    Science.gov (United States)

    Siedlecki, Karen L

    2015-01-01

    Visual perspective in autobiographical memories was examined in terms of reliability, consistency, and relationship to objective memory performance in a sample of 99 individuals. Autobiographical memories may be recalled from two visual perspectives--a field perspective in which individuals experience the memory through their own eyes, or an observer perspective in which individuals experience the memory from the viewpoint of an observer in which they can see themselves. Participants recalled nine word-cued memories that differed in emotional valence (positive, negative and neutral) and rated their memories on 18 scales. Results indicate that visual perspective was the most reliable memory characteristic overall and is consistently related to emotional intensity at the time of recall and amount of emotion experienced during the memory. Visual perspective is unrelated to memory for words, stories, abstract line drawings or faces.

  13. High Performance Real-Time Visualization of Voluminous Scientific Data Through the NOAA Earth Information System (NEIS).

    Science.gov (United States)

    Stewart, J.; Hackathorn, E. J.; Joyce, J.; Smith, J. S.

    2014-12-01

    Within our community, data volume is rapidly expanding. These data have limited value if one cannot interact with or visualize them in a timely manner. The scientific community needs the ability to dynamically visualize, analyze, and interact with these data along with other environmental data in real time, regardless of physical location or data format. Within the National Oceanic and Atmospheric Administration (NOAA), the Earth System Research Laboratory (ESRL) is actively developing the NOAA Earth Information System (NEIS). Previously, the NEIS team investigated methods of data discovery and interoperability. The recent focus has shifted to high-performance real-time visualization, allowing NEIS to bring massive amounts of 4-D data, including output from weather forecast models as well as data from different observations (surface obs, upper air, etc.), into one place. Our server-side architecture provides a real-time stream processing system that uses server-based NVIDIA graphics processing units (GPUs) for data processing, wavelet-based compression, and other preparation techniques for visualization, allowing NEIS to minimize the bandwidth and latency of data delivery to end users. On the client side, users interact with NEIS services through TerraViz, the visualization application developed at ESRL. TerraViz is built with the Unity game engine and takes advantage of GPUs, allowing a user to interact in real time with large data sets in ways that might not have been possible before. Through these technologies, the NEIS team has improved accessibility to 'Big Data' while providing tools that allow novel visualization and seamless integration of data across time and space, regardless of data size, physical location, or data format. These capabilities provide the ability to see global interactions and their importance for weather prediction. Additionally, they allow greater access than currently exists, helping to foster scientific collaboration and new

  14. The development of the LV Prasad-Functional Vision Questionnaire: a measure of functional vision performance of visually impaired children.

    Science.gov (United States)

    Gothwal, Vijaya K; Lovie-Kitchin, Jan E; Nutheti, Rishita

    2003-09-01

    To develop a reliable and valid questionnaire (the LV Prasad-Functional Vision Questionnaire, LVP-FVQ) to assess self-reported functional vision problems of visually impaired school children. The LVP-FVQ consisting of 19 items was administered verbally to 78 visually impaired Indian school children aged 8 to 18 years. Responses for each item were rated on a 5-point scale. A Rasch analysis of the ordinal difficulty ratings was used to estimate interval measures of perceived visual ability for functional vision performance. Content validity of the LVP-FVQ was shown by the good separation index (3.75) and high reliability scores (0.93) for the item parameters. Construct validity was shown with good model fit statistics. Criterion validity of the LVP-FVQ was shown by good discrimination among subjects who answered "seeing much worse" versus "as well as"; "seeing much worse" versus "as well as/a little worse" and "seeing much worse" versus "a little worse," compared with their normal-sighted friends. The task that required the least visual ability was "walking alone in the corridor at school"; the task that required the most was "reading a textbook at arm's length." The estimated person measures of visual ability were linear with logarithm of the minimum angle of resolution (logMAR) acuity and the binocular high contrast distance visual acuity accounted for 32.6% of the variability in the person measure. The LVP-FVQ is a reliable, valid, and simple questionnaire that can be used to measure functional vision in visually impaired children in developing countries such as India.
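
    The Rasch analysis mentioned above maps ordinal item responses onto an interval logit scale. As a hedged illustration only, the sketch below shows the dichotomous Rasch model (the rating-scale extension actually needed for 5-point items is more involved); the ability and difficulty values are made up.

```python
import numpy as np

def rasch_probability(theta, b):
    """Dichotomous Rasch model: probability of a correct/positive response for a
    person of ability `theta` facing an item of difficulty `b` (both in logits)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Expected responses of one person (ability 0.5 logits, illustrative value)
# to items of increasing difficulty.
difficulties = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(rasch_probability(0.5, difficulties))
```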

  15. Model Performance Evaluation and Scenario Analysis (MPESA)

    Science.gov (United States)

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).
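
    The record does not list MPESA's statistics, so the sketch below simply shows two goodness-of-fit measures commonly used when comparing simulated and observed time series from models such as HSPF or SWMM; treat it as a generic illustration, not the tool's implementation.

```python
import numpy as np

def rmse(observed, simulated):
    """Root-mean-square error between observed and simulated series."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return float(np.sqrt(np.mean((observed - simulated) ** 2)))

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model does no
    better than the mean of the observations."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return float(1.0 - np.sum((observed - simulated) ** 2)
                 / np.sum((observed - observed.mean()) ** 2))

obs = [1.2, 3.4, 2.8, 4.1, 3.0]   # made-up observed flows
sim = [1.0, 3.1, 3.0, 4.4, 2.7]   # made-up simulated flows
print(rmse(obs, sim), nash_sutcliffe(obs, sim))
```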

  16. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    Energy Technology Data Exchange (ETDEWEB)

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally

  17. Learning to perform a new movement with robotic assistance: comparison of haptic guidance and visual demonstration

    Directory of Open Access Journals (Sweden)

    Reinkensmeyer DJ

    2006-08-01

    Full Text Available Abstract Background: Mechanical guidance with a robotic device is a candidate technique for teaching people desired movement patterns during motor rehabilitation, surgery, and sports training, but it is unclear how effective this approach is as compared to visual demonstration alone. Further, little is known about motor learning and retention involved with either robot-mediated mechanical guidance or visual demonstration alone. Methods: Healthy subjects (n = 20) attempted to reproduce a novel three-dimensional path after practicing it with mechanical guidance from a robot. Subjects viewed their arm as the robot guided it, so this "haptic guidance" training condition provided both somatosensory and visual input. Learning was compared to reproducing the movement following only visual observation of the robot moving along the path, with the hand in the lap (the "visual demonstration" training condition). Retention was assessed periodically by instructing the subjects to reproduce the path without robotic demonstration. Results: Subjects improved in ability to reproduce the path following practice in the haptic guidance or visual demonstration training conditions, as evidenced by a 30–40% decrease in spatial error across 126 movement attempts in each condition. Performance gains were not significantly different between the two techniques, but there was a nearly significant trend for the visual demonstration condition to be better than the haptic guidance condition (p = 0.09). The 95% confidence interval of the mean difference between the techniques was at most 25% of the absolute error in the last cycle. When asked to reproduce the path repeatedly following either training condition, the subjects' performance degraded significantly over the course of a few trials. The tracing errors were not random, but instead were consistent with a systematic evolution toward another path, as if being drawn to an "attractor path". Conclusion: These results indicate

  18. SDBI 1904: Human Factors Assessment of Vibration Effects on Visual Performance during Launch

    Science.gov (United States)

    Thompson, Shelby G.; Holden, Kritina; Root, Phillip; Ebert, Douglas; Jones, Jeffery; Adelstein, Bernard

    2009-01-01

    The primary objective of the Human Factors Short Duration Bioastronautics Investigation (SDBI) 1904 is to determine visual performance limits during operational vibration and g-loads, specifically through the determination of the minimal usable font sizes using Orion-type display formats. Currently there is little to no data available to quantify human visual performance under these extreme conditions. Existing data on shuttle vibration magnitude and frequency is incomplete, does not address seat and crew vibration in the current configuration, and does not address human visual performance. There have been anecdotal reports of performance decrements from shuttle crews, but no structured data has been collected. The SDBI is a companion effort to the Detailed Test Objective (DTO) 695, which will measure shuttle seat accelerations (vibration) during ascent. Data from the SDBI will serve an important role in interpreting the DTO vibration data. This data will be collected during the ascent phase of three shuttle missions (STS-119, 127, and 128). Both SDBI 1904 and DTO 695 are low impact with respect to flight resources, and combined they represent an efficient and focused problem-solving approach. The SDBI and DTO data will be correlated to determine the nature of perceived visual performance under varying vibrations and g-loads. This project will provide: 1) immediate data for developing preliminary human performance vibration requirements; 2) flight-validated inputs for ongoing and future ground-based research; and 3) information on functional needs that will drive Orion display format design decisions.

  19. 3D-printer visualization of neuron models

    Directory of Open Access Journals (Sweden)

    Robert A McDougal

    2015-06-01

    Full Text Available Neurons come in a wide variety of shapes and sizes. In a quest to understand this neuronal diversity, researchers have three-dimensionally traced tens of thousands of neurons; many of these tracings are freely available through online repositories like NeuroMorpho.Org and ModelDB. Tracings can be visualized on the computer screen, used for statistical analysis of the properties of different cell types, used to simulate neuronal behavior, and more. We introduce the use of 3D printing as a technique for visualizing traced morphologies. Our method for generating printable versions of a cell or group of cells is to expand dendrite and axon diameters and then to transform the wireframe tracing into a 3D object with a neuronal surface generating algorithm like Constructive Tessellated Neuronal Geometry (CTNG). We show that 3D printed cells can be readily examined, manipulated, and compared with other neurons to gain insight into both the biology and the reconstruction process. We share our printable models in a new database, 3DModelDB, and encourage others to do the same with cells that they generate using our code or other methods. To provide additional context, 3DModelDB provides a simulatable version of each cell, links to papers that use or describe it, and links to associated entries in other databases.

  20. 3D-printer visualization of neuron models.

    Science.gov (United States)

    McDougal, Robert A; Shepherd, Gordon M

    2015-01-01

    Neurons come in a wide variety of shapes and sizes. In a quest to understand this neuronal diversity, researchers have three-dimensionally traced tens of thousands of neurons; many of these tracings are freely available through online repositories like NeuroMorpho.Org and ModelDB. Tracings can be visualized on the computer screen, used for statistical analysis of the properties of different cell types, used to simulate neuronal behavior, and more. We introduce the use of 3D printing as a technique for visualizing traced morphologies. Our method for generating printable versions of a cell or group of cells is to expand dendrite and axon diameters and then to transform the tracing into a 3D object with a neuronal surface generating algorithm like Constructive Tessellated Neuronal Geometry (CTNG). We show that 3D printed cells can be readily examined, manipulated, and compared with other neurons to gain insight into both the biology and the reconstruction process. We share our printable models in a new database, 3DModelDB, and encourage others to do the same with cells that they generate using our code or other methods. To provide additional context, 3DModelDB provides a simulatable version of each cell, links to papers that use or describe it, and links to associated entries in other databases.
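
    Tracings in repositories such as NeuroMorpho.Org are commonly distributed in the SWC format (one sample per line: index, type, x, y, z, radius, parent). As a rough illustration of the diameter-expansion step described above, and assuming SWC input, the sketch below scales each radius and clamps it to a printable minimum; the CTNG surface-generation step is not reproduced here.

```python
def thicken_swc(in_path, out_path, scale=2.0, min_radius=5.0):
    """Scale the radius column of an SWC tracing and clamp it to a printable minimum.

    SWC columns: index, type, x, y, z, radius, parent_index."""
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            if line.startswith("#") or not line.strip():
                dst.write(line)                            # keep comments and blank lines
                continue
            idx, typ, x, y, z, radius, parent = line.split()
            radius = max(float(radius) * scale, min_radius)
            dst.write(f"{idx} {typ} {x} {y} {z} {radius:.3f} {parent}\n")

# thicken_swc("neuron.swc", "neuron_printable.swc")   # file names are placeholders
```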

  1. Creating Shared Mental Models: The Support of Visual Language

    Science.gov (United States)

    Landman, Renske B.; van den Broek, Egon L.; Gieskes, José F. B.

    Cooperative design involves multiple stakeholders who often hold different ideas of the problem, the ways to solve it, and its solutions (i.e., mental models; MM). These differences can result in miscommunication, misunderstanding, slower decision-making processes, and less chance of cooperative decisions. In order to facilitate the creation of a shared mental model (sMM), visual languages (VL) are often used. However, little scientific foundation is behind this choice. To determine whether or not this gut feeling is justified, a study was conducted in which various stakeholders had to cooperatively redesign a process chain, with and without VL. To determine whether or not a sMM was created, scores on agreement in individual MMs, communication, and cooperation were analyzed. The results confirmed the assumption that VL can indeed play an important role in the creation of a sMM and, hence, can aid the processes of cooperative design and engineering.

  2. Visualization of Nonlinear Classification Models in Neuroimaging - Signed Sensitivity Maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Schmah, Tanya; Madsen, Kristoffer H

    2012-01-01

    Classification models are becoming increasingly popular tools in the analysis of neuroimaging data sets. Besides obtaining good prediction accuracy, a competing goal is to interpret how the classifier works. From a neuroscientific perspective, we are interested in the brain pattern reflecting... the underlying neural encoding of an experiment defining multiple brain states. In this relation there is a great desire for the researcher to generate brain maps that highlight brain locations of importance to the classifier's decisions. Based on sensitivity analysis, we develop further procedures for model... visualization. Specifically, we focus on the generation of summary maps of a nonlinear classifier that reveal how the classifier works in different parts of the input domain. Each of the maps includes sign information, unlike earlier related methods. The sign information allows the researcher to assess in which...

  3. Visual Representations Of Non-Separable Spatiotemporal Covariance Models

    Science.gov (United States)

    Kolovos, A.; Christakos, G.; Hristopulos, D. T.; Serre, M. L.

    2003-12-01

    Natural processes that relate to climatic variability (such as air circulation, air-water and air-soil energy exchanges) contain inherently stochastic components. Spatiotemporal random fields are frequently employed to model such processes and deal with the uncertainty involved. Covariance functions are statistical tools that are used to express correlations between process values across space and time. This work focuses on a review and visual representation of a series of useful covariance models that have been introduced in the Modern Spatiotemporal Geostatistics literature. Some of their important features are examined and their application can significantly improve the interpretation of space/time correlations that affect the long-term climatic evolution both on a local or a global scale.
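
    One way to represent such a model visually is to evaluate the covariance over a grid of spatial and temporal lags and plot the resulting surface. The sketch below uses a Gneiting-type non-separable form with arbitrary parameters purely as an example; neither the functional form nor the values are taken from the record.

```python
import numpy as np
import matplotlib.pyplot as plt

def gneiting_cov(h, u, sigma2=1.0, a=1.0, c=1.0, beta=0.5):
    """Gneiting-type non-separable space-time covariance (simplified exponents).

    h: spatial lag, u: temporal lag; beta controls the space-time interaction
    (beta = 0 reduces the model to a separable product)."""
    psi = a * u ** 2 + 1.0
    return sigma2 / psi * np.exp(-c * h ** 2 / psi ** beta)

h = np.linspace(0.0, 3.0, 100)      # spatial lags
u = np.linspace(0.0, 3.0, 100)      # temporal lags
H, U = np.meshgrid(h, u)
plt.contourf(H, U, gneiting_cov(H, U), levels=20)
plt.xlabel("spatial lag h")
plt.ylabel("temporal lag u")
plt.title("Non-separable space-time covariance (illustrative parameters)")
plt.colorbar(label="C(h, u)")
plt.show()
```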

  4. Development of visual working memory and distractor resistance in relation to academic performance.

    Science.gov (United States)

    Tsubomi, Hiroyuki; Watanabe, Katsumi

    2017-02-01

    Visual working memory (VWM) enables active maintenance of goal-relevant visual information in a readily accessible state. The storage capacity of VWM is severely limited, often to as few as three simple items. Thus, it is crucial to restrict distractor information from consuming VWM capacity. The current study investigated how VWM storage and distractor resistance develop during childhood in relation to academic performance in the classroom. Elementary school children (7- to 12-year-olds) and adults (total N=140) completed a VWM task with and without visual/verbal distractors during the retention period. The results showed that VWM performance with and without distractors developed at similar rates until reaching adult levels at 10 years of age. In addition, higher VWM performance without distractors was associated with higher academic scores in literacy (reading and writing), mathematics, and science for the younger children (7- to 9-year-olds), whereas these academic scores for the older children (10- to 12-year-olds) were associated with VWM performance with visual distractors. Taken together, these results suggest that VWM storage and distractor resistance develop at a similar rate, whereas their contributions to academic performance differ with age. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. A Hyperbolic Ontology Visualization Tool for Model Application Programming Interface Documentation

    Science.gov (United States)

    Hyman, Cody

    2011-01-01

    Spacecraft modeling, a critically important part of validating planned spacecraft activities, is currently carried out using a time-consuming process of mission-to-mission model implementation and integration. A current project in early development, Integrated Spacecraft Analysis (ISCA), aims to remedy this hindrance by providing reusable architectures and reducing the time spent integrating models with planning and sequencing tools. The principal objective of this internship was to develop a user interface for an experimental ontology-based structure visualization of navigation and attitude control system modeling software. To satisfy this, a number of tree and graph visualization tools were researched, and a Java-based hyperbolic graph viewer was selected for experimental adaptation. Early results show promise in the ability to organize and display large amounts of spacecraft model documentation efficiently and effectively through a web browser. This viewer serves as a conceptual implementation for future development, but trials with both ISCA developers and end users should be performed to truly evaluate the effectiveness of continued development of such visualizations.

  6. Development of internal models and predictive abilities for visual tracking during childhood.

    Science.gov (United States)

    Ego, Caroline; Yüksel, Demet; Orban de Xivry, Jean-Jacques; Lefèvre, Philippe

    2016-01-01

    The prediction of the consequences of our own actions through internal models is an essential component of motor control. Previous studies showed improvement of anticipatory behaviors with age for grasping, drawing, and postural control. Since these actions require visual and proprioceptive feedback, these improvements might reflect both the development of internal models and of feedback control. In contrast, visual tracking of a temporarily invisible target gives specific markers of prediction and internal models for eye movements. Therefore, we recorded eye movements in 50 children (aged 5-19 yr) and in 10 adults, who were asked to pursue a visual target that was temporarily blanked. Results show that the youngest children (5-7 yr) exhibit an overall oculomotor behavior in this task that is qualitatively similar to the one observed in adults. However, the performance of older subjects in terms of accuracy at target reappearance and variability in their behavior was much better than that of the youngest children. This late maturation of predictive mechanisms was reflected in the development with age of the accuracy of the internal models governing the synergy between the saccadic and pursuit systems. Altogether, we hypothesize that the maturation of the interaction between smooth pursuit and saccades, which relies on internal models of the eye and target displacement, is related to the continuous maturation of the cerebellum. Copyright © 2016 the American Physiological Society.

  7. Development of the Object-Oriented Dynamic Simulation Models Using Visual C++ Freeware

    Directory of Open Access Journals (Sweden)

    Alexander I. Kozynchenko

    2016-01-01

    Full Text Available The paper mostly focuses on the methodological and programming aspects of developing a versatile desktop framework to provide the available basis for the high-performance simulation of dynamical models of different kinds and for diverse applications. So the paper gives some basic structure for creating a dynamical simulation model in C++ which is built on the Win32 platform with an interactive multiwindow interface and uses the lightweight Visual C++ Express as a free integrated development environment. The resultant simulation framework could be a more acceptable alternative to other solutions developed on the basis of commercial tools like Borland C++ or Visual C++ Professional, not to mention the domain specific languages and more specialized ready-made software such as Matlab, Simulink, and Modelica. This approach seems to be justified in the case of complex research object-oriented dynamical models having nonstandard structure, relationships, algorithms, and solvers, as it allows developing solutions of high flexibility. The essence of the model framework is shown using a case study of simulation of moving charged particles in the electrostatic field. The simulation model possesses the necessary visualization and control features such as an interactive input, real time graphical and text output, start, stop, and rate control.

  8. Influence of visual feedback on human task performance in ITER remote handling

    Energy Technology Data Exchange (ETDEWEB)

    Schropp, Gwendolijn Y.R., E-mail: g.schropp@heemskerk-innovative.nl [Utrecht University, Utrecht (Netherlands); Heemskerk Innovative Technology, Noordwijk (Netherlands); Heemskerk, Cock J.M. [Heemskerk Innovative Technology, Noordwijk (Netherlands); Kappers, Astrid M.L.; Tiest, Wouter M. Bergmann [Helmholtz Institute-Utrecht University, Utrecht (Netherlands); Elzendoorn, Ben S.Q. [FOM-Institute for Plasma Physics Rijnhuizen, Association EURATOM/FOM, Partner in the Trilateral Euregio Cluster and ITER-NL, PO box 1207, 3430 BE Nieuwegein (Netherlands); Bult, David [FOM-Institute for Plasma Physics Rijnhuizen, Association EURATOM/FOM, Partner in the Trilateral Euregio Cluster and ITER-NL, PO box 1207, 3430 BE Nieuwegein (Netherlands)

    2012-08-15

    Highlights: • The performance of human operators in an ITER-like test facility for remote handling. • Different sources of visual feedback influence how fast one can complete a maintenance task. • Insights learned could be used in design of operator work environment or training procedures. - Abstract: In ITER, maintenance operations will be largely performed by remote handling (RH). Before ITER can be put into operation, safety regulations and licensing authorities require proof of maintainability for critical components. Part of the proof will come from using standard components and procedures. Additional verification and validation is based on simulation and hardware tests in 1:1 scale mockups. The Master Slave manipulator system (MS2) Benchmark Product was designed to implement a reference set of maintenance tasks representative for ITER remote handling. Experiments were performed with two versions of the Benchmark Product. In both experiments, the quality of visual feedback varied by exchanging direct view with indirect view (using video cameras) in order to measure and analyze its impact on human task performance. The first experiment showed that both experienced and novice RH operators perform a simple task significantly better with direct visual feedback than with camera feedback. A more complex task showed a large variation in results and could not be completed by many novice operators. Experienced operators commented on both the mechanical design and visual feedback. In a second experiment, a more elaborate task was tested on an improved Benchmark product. Again, the task was performed significantly faster with direct visual feedback than with camera feedback. In post-test interviews, operators indicated that they regarded the lack of 3D perception as the primary factor hindering their performance.

  9. Comprehensive visual field test & diagnosis system in support of astronaut health and performance

    Science.gov (United States)

    Fink, Wolfgang; Clark, Jonathan B.; Reisman, Garrett E.; Tarbell, Mark A.

    Long duration spaceflight, permanent human presence on the Moon, and future human missions to Mars will require autonomous medical care to address both expected and unexpected risks. An integrated non-invasive visual field test & diagnosis system is presented for the identification, characterization, and automated classification of visual field defects caused by the spaceflight environment. This system will support the onboard medical provider and astronauts on space missions with an innovative, non-invasive, accurate, sensitive, and fast visual field test. It includes a database for examination data, and a software package for automated visual field analysis and diagnosis. The system will be used to detect and diagnose conditions affecting the visual field, while in space and on Earth, permitting the timely application of therapeutic countermeasures before astronaut health or performance are impaired. State-of-the-art perimetry devices are bulky, thereby precluding application in a spaceflight setting. In contrast, the visual field test & diagnosis system requires only a touchscreen-equipped computer or touchpad device, which may already be in use for other purposes (i.e., no additional payload), and custom software. The system has application in routine astronaut assessment (Clinical Status Exam), pre-, in-, and post-flight monitoring, and astronaut selection. It is deployable in operational space environments, such as aboard the International Space Station or during future missions to or permanent presence on the Moon and Mars.

  10. Feasibility and performance evaluation of generating and recording visual evoked potentials using ambulatory Bluetooth based system.

    Science.gov (United States)

    Ellingson, Roger M; Oken, Barry

    2010-01-01

    Report contains the design overview and key performance measurements demonstrating the feasibility of generating and recording ambulatory visual stimulus evoked potentials using the previously reported custom Complementary and Alternative Medicine physiologic data collection and monitoring system, CAMAS. The methods used to generate visual stimuli on a PDA device and the design of an optical coupling device to convert the display to an electrical waveform which is recorded by the CAMAS base unit are presented. The optical sensor signal, synchronized to the visual stimulus emulates the brain's synchronized EEG signal input to CAMAS normally reviewed for the evoked potential response. Most importantly, the PDA also sends a marker message over the wireless Bluetooth connection to the CAMAS base unit synchronized to the visual stimulus which is the critical averaging reference component to obtain VEP results. Results show the variance in the latency of the wireless marker messaging link is consistent enough to support the generation and recording of visual evoked potentials. The averaged sensor waveforms at multiple CPU speeds are presented and demonstrate suitability of the Bluetooth interface for portable ambulatory visual evoked potential implementation on our CAMAS platform.
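
    The marker-synchronised averaging described above can be illustrated with a few lines of array code; the sampling rate, epoch window, and variable names below are assumptions rather than details of the CAMAS system.

```python
import numpy as np

def average_evoked_response(eeg, marker_samples, fs, pre=0.1, post=0.4):
    """Average signal epochs time-locked to stimulus markers.

    eeg: 1-D signal array; marker_samples: sample indices of stimulus onsets;
    fs: sampling rate in Hz; pre/post: epoch window in seconds around each marker."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = [eeg[m - n_pre:m + n_post]
              for m in marker_samples
              if m - n_pre >= 0 and m + n_post <= len(eeg)]
    return np.mean(epochs, axis=0)            # averaged epoch, length n_pre + n_post
```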

  11. Intraocular Telescopic System Design: Optical and Visual Simulation in a Human Eye Model.

    Science.gov (United States)

    Zoulinakis, Georgios; Ferrer-Blasco, Teresa

    2017-01-01

    Purpose. To design an intraocular telescopic system (ITS) for magnifying retinal image and to simulate its optical and visual performance after implantation in a human eye model. Methods. Design and simulation were carried out with a ray-tracing and optical design software. Two different ITS were designed, and their visual performance was simulated using the Liou-Brennan eye model. The difference between the ITS was their lenses' placement in the eye model and their powers. Ray tracing in both centered and decentered situations was carried out for both ITS while visual Strehl ratio (VSOTF) was computed using custom-made MATLAB code. Results. The results show that between 0.4 and 0.8 mm of decentration, the VSOTF does not change much either for far or near target distances. The image projection for these decentrations is in the parafoveal zone, and the quality of the image projected is quite similar. Conclusion. Both systems display similar quality while they differ in size; therefore, the choice between them would need to take into account specific parameters from the patient's eye. Quality does not change too much between 0.4 and 0.8 mm of decentration for either system which gives flexibility to the clinician to adjust decentration to avoid areas of retinal damage.

  12. Intraocular Telescopic System Design: Optical and Visual Simulation in a Human Eye Model

    Directory of Open Access Journals (Sweden)

    Georgios Zoulinakis

    2017-01-01

    Full Text Available Purpose. To design an intraocular telescopic system (ITS) for magnifying retinal image and to simulate its optical and visual performance after implantation in a human eye model. Methods. Design and simulation were carried out with a ray-tracing and optical design software. Two different ITS were designed, and their visual performance was simulated using the Liou-Brennan eye model. The difference between the ITS was their lenses’ placement in the eye model and their powers. Ray tracing in both centered and decentered situations was carried out for both ITS while visual Strehl ratio (VSOTF) was computed using custom-made MATLAB code. Results. The results show that between 0.4 and 0.8 mm of decentration, the VSOTF does not change much either for far or near target distances. The image projection for these decentrations is in the parafoveal zone, and the quality of the image projected is quite similar. Conclusion. Both systems display similar quality while they differ in size; therefore, the choice between them would need to take into account specific parameters from the patient’s eye. Quality does not change too much between 0.4 and 0.8 mm of decentration for either system which gives flexibility to the clinician to adjust decentration to avoid areas of retinal damage.
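
    The visual Strehl ratio on the optical transfer function (VSOTF) used in both versions of this record weights the real part of the OTF by a neural contrast sensitivity function and normalises by the diffraction-limited case. The sketch below assumes those quantities are already sampled on a common spatial-frequency grid; it is not the authors' MATLAB code.

```python
import numpy as np

def vsotf(otf, otf_diffraction_limited, neural_csf):
    """Visual Strehl ratio computed on the optical transfer function.

    All inputs are 2-D arrays sampled on the same spatial-frequency grid:
    otf                     - complex OTF of the tested eye/lens combination,
    otf_diffraction_limited - OTF of the diffraction-limited reference,
    neural_csf              - neural contrast sensitivity weighting."""
    numerator = np.sum(neural_csf * np.real(otf))
    denominator = np.sum(neural_csf * np.real(otf_diffraction_limited))
    return float(numerator / denominator)
```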

  13. Different developmental trajectories across feature types support a dynamic field model of visual working memory development.

    Science.gov (United States)

    Simmering, Vanessa R; Miller, Hilary E; Bohache, Kevin

    2015-05-01

    Research on visual working memory has focused on characterizing the nature of capacity limits as "slots" or "resources" based almost exclusively on adults' performance with little consideration for developmental change. Here we argue that understanding how visual working memory develops can shed new light onto the nature of representations. We present an alternative model, the Dynamic Field Theory (DFT), which can capture effects that have been previously attributed either to "slot" or "resource" explanations. The DFT includes a specific developmental mechanism to account for improvements in both resolution and capacity of visual working memory throughout childhood. Here we show how development in the DFT can account for different capacity estimates across feature types (i.e., color and shape). The current paper tests this account by comparing children's (3, 5, and 7 years of age) performance across different feature types. Results showed that capacity for colors increased faster over development than capacity for shapes. A second experiment confirmed this difference across feature types within subjects, but also showed that the difference can be attenuated by testing memory for less familiar colors. Model simulations demonstrate how developmental changes in connectivity within the model-purportedly arising through experience-can capture differences across feature types.
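
    As a rough illustration of the kind of model referred to above, the sketch below integrates a one-layer dynamic neural field (Amari-type) in which a localized activation peak that outlasts its stimulus stands in for one remembered item; parameters are chosen only so that a peak self-sustains and are not taken from the published DFT.

```python
import numpy as np
from scipy.special import expit

def simulate_field(steps=600, n=181, dt=1.0, tau=20.0, h=-5.0):
    """One-layer dynamic neural field (Amari-type), integrated with Euler steps.

    A localized activation peak that outlasts its stimulus is the stand-in for
    one item held in visual working memory in this illustration."""
    x = np.arange(n)
    u = np.full(n, h, dtype=float)                           # activation at resting level
    d = np.subtract.outer(x, x)
    kernel = 15.0 * np.exp(-d ** 2 / (2 * 5.0 ** 2)) - 5.0   # local excitation, global inhibition
    stimulus = 8.0 * np.exp(-(x - 90) ** 2 / (2 * 4.0 ** 2))
    for step in range(steps):
        s = stimulus if step < 200 else 0.0                  # stimulus switched off at step 200
        f_u = expit(4.0 * u)                                 # sigmoidal output nonlinearity
        u += dt / tau * (-u + h + s + kernel @ f_u)
    return u

field = simulate_field()
print("peak activation after stimulus removal:", round(float(field.max()), 2))
```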

  14. Visually guided tube thoracostomy insertion comparison to standard of care in a large animal model.

    Science.gov (United States)

    Hernandez, Matthew C; Vogelsang, David; Anderson, Jeff R; Thiels, Cornelius A; Beilman, Gregory; Zielinski, Martin D; Aho, Johnathon M

    2017-04-01

    Tube thoracostomy (TT) is a lifesaving procedure for a variety of thoracic pathologies. The most commonly utilized method for placement involves open dissection and blind insertion. Image-guided placement is commonly utilized but is limited by an inability to see the distal placement location. Unfortunately, TT is not without complications. We aim to demonstrate the feasibility of a disposable device to allow for visually directed TT placement compared to the standard of care in a large animal model. Three swine were sequentially orotracheally intubated and anesthetized. TT was conducted utilizing a novel visualization device, the tube thoracostomy visual trocar (TTVT), and the standard of care (open technique). The position of the TT in the chest cavity was recorded using direct thoracoscopic inspection and radiographic imaging with the operator blinded to results. Complications were evaluated using a validated complication grading system. Standard descriptive statistical analyses were performed. Thirty TT were placed, 15 using the TTVT technique and 15 using the standard of care open technique. All of the TT placed using TTVT were without complication and in optimal position. Conversely, 27% of TT placed using the standard of care open technique resulted in complications. Necropsy revealed no injury to intrathoracic organs. Visually directed TT placement using TTVT is feasible and non-inferior to the standard of care in a large animal model. This improvement in instrumentation has the potential to greatly improve the safety of TT. Further study in humans is required. Therapeutic Level II. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Phonological and visual distinctiveness effects in syllogistic reasoning: implications for mental models theory.

    Science.gov (United States)

    Ball, Linden J; Quayle, Jeremy D

    2009-09-01

    Two experiments are reported in which the representational distinctiveness of terms within categorical syllogisms was manipulated in order to examine the assumption of mental models theory that abstract, spatially based representations underpin deduction. In Experiment 1, participants evaluated conclusion validity for syllogisms containing either phonologically distinctive terms (e.g., harks, paps, and fids) or phonologically nondistinctive terms (e.g., fuds, fods, and feds). Logical performance was enhanced with the distinctive contents, suggesting that the phonological properties of syllogism terms can play an important role in deduction. In Experiment 2, participants received either the phonological materials from Experiment 1 or syllogisms involving distinctive or nondistinctive visual contents. Logical inference was again enhanced for the distinctive contents, whether phonological or visual in nature. Our findings suggest a broad involvement of multimodal information in syllogistic reasoning and question the assumed primacy of abstract, spatially organized representations in deduction, as is claimed by mental models theorists.

  16. A. “Saanko luvan?” : sound and visual performance (appendix 1) : B. Music performance as a whole visual and sound - canons and diversions

    OpenAIRE

    Thevenot, Cecile

    2013-01-01

    Whenever music is performed, it is part of a representation. That is the main question explored in this written work: how to understand a concert as a whole, including both its visual and its sound sides, whether these are conceived as part of the creation or presumed not to exist. The first part introduces how music is directed and represented in traditional classical concert spaces. This thesis attempts to approach a common classical concert event in a wider way. Why not present the whole ce...

  17. The role of sensory ocular dominance on through-focus visual performance in monovision presbyopia corrections.

    Science.gov (United States)

    Zheleznyak, Len; Alarcon, Aixa; Dieter, Kevin C; Tadin, Duje; Yoon, Geunyoung

    2015-01-01

    Monovision presbyopia interventions exploit the binocular nature of the visual system by independently manipulating the optical properties of the two eyes. It is unclear, however, how individual variations in ocular dominance affect visual function in monovision corrections. Here, we examined the impact of sensory ocular dominance on visual performance in both traditional and modified monovision presbyopic corrections. We recently developed a binocular adaptive optics vision simulator to correct subjects' native aberrations and induce either modified monovision (1.5 D anisometropia, spherical aberration of +0.1 and -0.4 μm in distance and near eyes, respectively, over 4 mm pupils) or traditional monovision (1.5 D anisometropia). To quantify both the sign and the degree of ocular dominance, we utilized binocular rivalry to estimate stimulus contrast ratios that yield balanced dominance durations for the two eyes. Through-focus visual acuity and contrast sensitivity were measured under two conditions: (a) assigning dominant and nondominant eye to distance and near, respectively, and (b) vice versa. The results revealed that through-focus visual acuity was unaffected by ocular dominance. Contrast sensitivity, however, was significantly improved when the dominant eye coincided with superior optical quality. We hypothesize that a potential mechanism behind this observation is an interaction between ocular dominance and binocular contrast summation, and thus, assignment of the dominant eye to distance or near may be an important factor to optimize contrast threshold performance at different object distances in both modified and traditional monovision.

  18. Ocular responses and visual performance after high-acceleration force exposure.

    Science.gov (United States)

    Tsai, Ming-Ling; Liu, Chun-Cheng; Wu, Yi-Chang; Wang, Chih-Hung; Shieh, Pochuen; Lu, Da-Wen; Chen, Jiann-Torng; Horng, Chi-Ting

    2009-10-01

    To evaluate ocular responses and visual performance after high-acceleration force exposure. Fourteen men were enrolled in the study. A human centrifuge was used to induce nine times the acceleration force in the head-to-toe (z-axis) direction (+9 Gz force). Visual performance was evaluated using the ETDRS (Early Treatment of Diabetic Retinopathy Study) visual chart, and contrast sensitivity (CS) was examined before and after centrifugation. Ocular responses were assessed with biomicroscopy and topographic mapping after gravitational stress. Transient visual acuity reduction (0.02 +/- 0.04 logMar vs. 0.19 +/- 0.07 logMar VA; P acceleration (3.19 +/- 0.26 mm vs. 4.39 +/- 0.27 mm; P spatial frequencies (1.5, 3, and 6 cyc/deg) and did not return to the baseline level by 30 minutes. High-acceleration force may induce transient visual acuity reduction and temporary corneal thickening. Prolonged increase in ACD and pupillary dilation were also observed. The decrease in CS persisted for 30 minutes after centrifugation. The mechanisms underlying these observations are not clear, because there are no previous reports on this topic. Further studies are needed.

  19. Lecture Capture with Real-Time Rearrangement of Visual Elements: Impact on Student Performance

    Science.gov (United States)

    Yu, P.-T.; Wang, B.-Y.; Su, M.-H.

    2015-01-01

    The primary goal of this study is to create and test a lecture-capture system that can rearrange visual elements while recording is still taking place, in such a way that student performance can be positively influenced. The system we have devised is capable of integrating and rearranging multimedia sources, including learning content, the…

  20. Hydro cone lens visual performance and impact on quality of life in irregular corneas.

    Science.gov (United States)

    Ozek, Dilay; Kemer, Ozlem Evren; Bayraktar, Neslihan

    2016-12-01

    The aim of this study is to evaluate the visual performance (efficiency) of HydroCone (Toris K) soft silicone hydrogel lenses in patients with irregular corneas. Copyright © 2016 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  1. The Impact of a Visual Imagery Intervention on Army ROTC Cadets' Marksmanship Performance and Flow Experiences

    Science.gov (United States)

    Rakes, Edward Lee

    2012-01-01

    This investigation used an experimental design to examine how a visual imagery intervention and two levels of challenge would affect the flow experiences and performance of cadets engaged in Army ROTC marksmanship training. I employed MANCOVA analyses, with gender and prior marksmanship training experience as covariates, to assess cadets' (n =…

  2. Motor Skill Performance of Children and Adolescents With Visual Impairments : A Review

    NARCIS (Netherlands)

    Houwen, Suzanne; Visscher, Chris; Lemmink, Koen A. P. M.; Hartman, Esther

    2009-01-01

    This article reviews studies on variables that are related to the motor skill performance of children and adolescents with visual impairments (VI). Three major groups of variables are considered (child, environmental, and task). Thirty-nine studies are included in this review, 26 of which examined

  3. Poor Performance on Serial Visual Tasks in Persons with Reading Disabilities: Impaired Working Memory?

    Science.gov (United States)

    Ram-Tsur, Ronit; Faust, Miriam; Zivotofsky, Ari Z.

    2008-01-01

    The present study investigates the performance of persons with reading disabilities (PRD) on a variety of sequential visual-comparison tasks that have different working-memory requirements. In addition, mediating relationships between the sequential comparison process and attention and memory skills were looked for. Our findings suggest that PRD…

  4. Doctoral Writing in the Visual and Performing Arts: Two Ends of a Continuum

    Science.gov (United States)

    Paltridge, Brian; Starfield, Sue; Ravelli, Louise; Nicholson, Sarah

    2012-01-01

    Doctoral degrees in the visual and performing arts are a fairly recent entrant in the research higher degree landscape in Australian universities. At the same time, a new kind of doctorate is evolving, a doctorate in which significant aspects of the claim for the doctoral characteristics of originality, mastery and contribution to the field are…

  5. Doctoral Writing in the Visual and Performing Arts: Issues and Debates

    Science.gov (United States)

    Paltridge, Brian; Starfield, Sue; Ravelli, Louise; Nicholson, Sarah

    2011-01-01

    Drawing from a larger study of doctorates in the visual and performing arts, we examine here the diversity of relations which can exist between the creative and written components of a doctoral thesis in these fields in terms of diversity of naming practices for these relations, institutional variation in guidelines and expectations, and…

  6. Improving the Audio Game-Playing Performances of People with Visual Impairments through Multimodal Training

    Science.gov (United States)

    Balan, Oana; Moldoveanu, Alin; Moldoveanu, Florica; Nagy, Hunor; Wersenyi, Gyorgy; Unnporsson, Runar

    2017-01-01

    Introduction: As the number of people with visual impairments (that is, those who are blind or have low vision) is continuously increasing, rehabilitation and engineering researchers have identified the need to design sensory-substitution devices that would offer assistance and guidance to these people for performing navigational tasks. Auditory…

  7. The effectiveness of visual simulation training in improving inner circle fielding performance in cricket

    NARCIS (Netherlands)

    Hopwood, M.; Mann, D.L.; Farrow, D.; Neilsen, T.

    2011-01-01

    This study examined the effectiveness of visual-perceptual training for improving fielding performance in cricket. Twelve highly-skilled cricket players completed a video-based decision-making test and an in-situ fielding test before and after a six-week training intervention. During this period,

  8. The Effects of Visual Stimuli on the Spoken Narrative Performance of School-Age African American Children

    Science.gov (United States)

    Mills, Monique T.

    2015-01-01

    Purpose: This study investigated the fictional narrative performance of school-age African American children across 3 elicitation contexts that differed in the type of visual stimulus presented. Method: A total of 54 children in Grades 2 through 5 produced narratives across 3 different visual conditions: no visual, picture sequence, and single…

  9. Desempenho visual na correção de miopia com óculos e lentes de contato gelatinosas Visual performance in myopic correction with spectacles and soft contact lenses

    Directory of Open Access Journals (Sweden)

    Breno Barth

    2008-02-01

    PURPOSE: To evaluate visual performance in myopic correction with spectacles and with three different soft contact lenses [Acuvue® 2 (Vistacon J&J Vision Care Inc., USA), Biomedics® 55 (Ocular Science, USA), and Focus® 1-2 week (Ciba Vision Corporation, USA)]. METHODS: An interventional prospective clinical trial studied a sample of 40 myopic patients (-1.00 to -4.50 sph, with or without astigmatism up to -0.75 cyl). Each patient had one eye randomized to visual performance evaluation. RESULTS: The Zywave aberrometer detected an over-refraction and a significant difference between Acuvue® 2 and Biomedics® 55 regarding spherical refractive components and spherical equivalent. Both soft contact lenses showed hypercorrection as compared to Focus® 1-2 week. Visual performance was not significantly different with spectacles and the three soft contact lenses in visual acuity and contrast sensitivity measurements. The wavefront analysis detected a significant difference in a third-order aberration with and without soft contact lenses, with better visual performance with Acuvue® 2 and Biomedics® 55. CONCLUSION: In the evaluation of visual performance with spectacles and soft contact lenses, wavefront analysis was a more sensitive measure of visual function than high-contrast visual acuity and contrast sensitivity. The evaluation model of visual performance with wavefront analysis developed in this investigation may be useful for further similar studies.

  10. Stochastic sensitivity of a bistable energy model for visual perception

    Science.gov (United States)

    Pisarchik, Alexander N.; Bashkirtseva, Irina; Ryashko, Lev

    2017-01-01

    Modern trends in physiology, psychology and cognitive neuroscience suggest that noise is an essential component of brain functionality and self-organization. With adequate noise the brain as a complex dynamical system can easily access different ordered states and improve signal detection for decision-making by preventing deadlocks. Using a stochastic sensitivity function approach, we analyze how sensitive equilibrium points are to Gaussian noise in a bistable energy model often used for qualitative description of visual perception. The probability distribution of noise-induced transitions between two coexisting percepts is calculated at different noise intensity and system stability. Stochastic squeezing of the hysteresis range and its transition from positive (bistable regime) to negative (intermittency regime) are demonstrated as the noise intensity increases. The hysteresis is more sensitive to noise in the system with higher stability.
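
    Illustrative sketch (not the authors' code): the kind of bistable energy model described above can be mimicked by an overdamped particle in a symmetric double-well potential driven by additive Gaussian noise. The minimal Python example below integrates the dynamics with the Euler-Maruyama scheme and counts noise-induced transitions between the two wells, which stand in for the two coexisting percepts; all parameter values are arbitrary.

```python
import numpy as np

def simulate_percept(noise_sigma=0.4, dt=1e-3, steps=200_000, seed=0):
    """Integrate dx = (x - x^3) dt + sigma dW; U(x) = x^4/4 - x^2/2 has wells at x = +/-1."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps)
    x[0] = 1.0  # start in the right-hand well (percept A)
    for t in range(1, steps):
        drift = x[t - 1] - x[t - 1] ** 3                      # -dU/dx
        x[t] = x[t - 1] + drift * dt + noise_sigma * np.sqrt(dt) * rng.standard_normal()
    return x

def count_switches(x):
    """Count transitions between wells, ignoring samples near the barrier at x = 0."""
    signs = np.sign(x[np.abs(x) > 0.5])
    return int(np.sum(signs[1:] != signs[:-1]))

trajectory = simulate_percept()
print("number of percept switches:", count_switches(trajectory))
```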

  11. cellPACK: a virtual mesoscope to model and visualize structural systems biology.

    Science.gov (United States)

    Johnson, Graham T; Autin, Ludovic; Al-Alusi, Mostafa; Goodsell, David S; Sanner, Michel F; Olson, Arthur J

    2015-01-01

    cellPACK assembles computational models of the biological mesoscale, an intermediate scale (10-100 nm) between molecular and cellular biology scales. cellPACK's modular architecture unites existing and novel packing algorithms to generate, visualize and analyze comprehensive three-dimensional models of complex biological environments that integrate data from multiple experimental systems biology and structural biology sources. cellPACK is available as open-source code, with tools for validation of models and with 'recipes' and models for five biological systems: blood plasma, cytoplasm, synaptic vesicles, HIV and a mycoplasma cell. We have applied cellPACK to model distributions of HIV envelope protein to test several hypotheses for consistency with experimental observations. Biologists, educators and outreach specialists can interact with cellPACK models, develop new recipes and perform packing experiments through scripting and graphical user interfaces at http://cellPACK.org/.
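
    Illustrative sketch (not cellPACK code or its API): the toy example below, under assumed parameters, shows the simplest flavour of a packing algorithm, rejection sampling of non-overlapping spheres inside a box, to give a feel for the packing step that cellPACK generalises to full mesoscale recipes.

```python
import numpy as np

def pack_spheres(n_target, radius, box=100.0, max_tries=5000, seed=0):
    """Place up to n_target non-overlapping spheres of a given radius inside a cubic box."""
    rng = np.random.default_rng(seed)
    centres = []
    tries = 0
    while len(centres) < n_target and tries < max_tries:
        tries += 1
        candidate = rng.uniform(radius, box - radius, size=3)   # keep spheres fully inside the box
        if all(np.linalg.norm(candidate - c) >= 2 * radius for c in centres):
            centres.append(candidate)
    return np.array(centres)

placed = pack_spheres(n_target=200, radius=5.0)
print(f"placed {len(placed)} spheres of radius 5 nm in a 100 nm box")
```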

  12. Relationship between visual motor integration skill and academic performance in kindergarten through third grade.

    Science.gov (United States)

    Taylor Kulp, M

    1999-03-01

    The objective of this study was to examine the relationship between visual motor integration skill and academic performance in kindergarten through third grade. One hundred ninety-one (N = 191) children in kindergarten through third grade (mean age = 7.78 years; 52% male) from an upper-middle class, suburban, primarily Caucasian, elementary school near Cleveland, Ohio were included in this investigation. Visual analysis and visual motor integration skill were assessed with the Beery Developmental Test of Visual Motor Integration (VMI) long form because it is a commonly used test in both optometric and educational practice and has a detailed scoring system. The relationship between performance on the VMI and teachers' ratings of academic achievement was analyzed because teachers' grades are a primary means of assessing school performance. The children's regular classroom teachers rated the children with respect to reading, math, and writing ability. Second and third grade children (N = 98) were also rated on spelling ability. Only experienced teachers were included in the investigation and the validity of the teachers' ratings was substantiated by significant correlations with standardized test scores. Teachers were masked to performance on the VMI until the rating was completed. The Stanford Diagnostic Reading test, 4th edition, was also used as a measure of reading ability in the first graders and the Otis-Lennon School Ability test (OLSAT), 6th edition, was also used as a measure of school-related cognitive ability in the second graders. Performance on the VMI was found to be significantly related to teachers' ratings of the children's reading (p = 0.0001), math (p = 0.0001), writing (p = 0.0001) and spelling (p = 0.0118) ability. An analysis by age group revealed that performance on the VMI was significantly correlated with reading achievement ratings in the 7- and 8-year-olds, as well as in grade students with average ability. Finally, in order to partially control for…

  13. Short-term visual performance of soft multifocal contact lenses for presbyopia

    OpenAIRE

    Jennifer Sha; Bakaraju, Ravi C.; Daniel Tilia; Jiyoon Chung; Shona Delaney; Anna Munro; Klaus Ehrmann; Varghese Thomas; Holden, Brien A.

    2016-01-01

    ABSTRACT Purpose: To compare visual acuity (VA), contrast sensitivity, stereopsis, and subjective visual performance of Acuvue® Oasys® for Presbyopia (AOP), Air Optix® Aqua Multifocal (AOMF), and Air Optix® Aqua Single Vision (AOSV) lenses in patients with presbyopia. Methods: A single-blinded crossover trial was conducted. Twenty patients with mild presbyopia (add ≤+1.25 D) and 22 with moderate/severe presbyopia (add ≥+1.50 D) who wore lenses bilaterally for 1 h, with a minimum...

  14. Visual working memory influences the performance in virtual image-guided surgical intervention.

    Science.gov (United States)

    Hedman, L; Klingberg, T; Enochsson, L; Kjellin, A; Felländer-Tsai, L

    2007-11-01

    This study addresses for the first time the relationship between working memory and performance measures in image-guided instrument navigation with the Minimally Invasive Surgical Trainer-Virtual Reality (MIST-VR) and the GI Mentor II (a simulator for gastroendoscopy). In light of recent research on simulator training, it is now time to ask why, in a search for mechanisms, rather than to show repeatedly that conventional simulation-training curricula have an effect. The participants in this study were 28 Swedish medical students taking their course in basic surgery. Visual and verbal working memory span scores were assessed by a validated computer program (RoboMemo) and correlated with visual-spatial ability (MRT-A test), total flow experience (flow scale), mental strain (Borg scale), and performance scores in manipulation and diathermy (MD) using Procedicus MIST-VR and GI Mentor II (exercises 1 and 3). Significant Pearson's r correlations were obtained between visual working memory span scores for visual data link (a RoboMemo exercise) and movement economy (r = -0.417), and between working memory span scores in rotating data link (another RoboMemo exercise) and total time (r = -0.467). These findings suggest that visual working memory in surgical novices may be important for performance in virtual simulator training with two well-known and validated simulators.

  15. Visual performance of single vision and multifocal contact lenses in non-presbyopic myopic eyes.

    Science.gov (United States)

    Fedtke, Cathleen; Bakaraju, Ravi C; Ehrmann, Klaus; Chung, Jiyoon; Thomas, Varghese; Holden, Brien A

    2016-02-01

    To assess visual performance of single vision and multifocal soft contact lenses. At baseline, forty-four myopic participants (aged 18-35 years) were fitted bilaterally with a control lens (AirOptix Aqua). At the four follow-up visits, a total of 16 study lenses (5 single vision, 11 multifocal lenses) were fitted contralaterally. After 1h of lens wear, participants rated (scale 1-10) vision clarity (distance, intermediate and near), magnitude of ghosting at distance, comfort during head movement, and overall comfort. Distance high contrast visual acuity (HCVA), central refraction and higher order aberrations, and contact lens centration were measured. For single vision lenses, vision ratings were not significantly different to the control (p>0.005). The control outperformed Acuvue Oasys, Clariti Monthly and Night and Day in HCVA (mean VA: -0.10 ± 0.07 logMAR). The Night and Day lens showed the greatest differences compared to the control, i.e., C[4, 0] was more positive. For multifocal lenses, the majority of vision ratings (84%) were better with the control; multifocal lenses showed the greatest differences for M, C[3, -1] and C[4, 0] at distance and near, and were inferiorly de-centered. Overall, single vision lenses had a small impact on visual performance, whereas lenses featuring multifocality decreased visual performance, in particular when power variations across the optic zone were large and/or the lens was significantly de-centered. Copyright © 2015. Published by Elsevier Ltd.

  16. Effects of loss of visual feedback on performance of two swimming strokes.

    Science.gov (United States)

    Cicciarella, C F

    1982-12-01

    20 subjects, aged 11 to 21 yr., skilled in competitive swimming in both the crawl and the breaststroke, performed a total of 8 timed swimming trials of 25 yd. in both strokes both with and without blindfolds to test the hypothesis that the loss in performance which would occur with loss of visual feedback is related to the complexity of the motor skill being performed. After correction for differences in the speed of each stroke, the loss in speed (performance decrement) in the more complex stroke (crawl) was significantly greater than the decrement in the less complex (breast) stroke.

  17. Effects of age and auditory and visual dual tasks on closed-road driving performance.

    Science.gov (United States)

    Chaparro, Alex; Wood, Joanne M; Carberry, Trent

    2005-08-01

    This study investigated how driving performance of young and old participants is affected by visual and auditory secondary tasks on a closed driving course. Twenty-eight participants comprising two age groups (younger, mean age = 27.3 years; older, mean age = 69.2 years) drove around a 5.1-km closed-road circuit under both single and dual task conditions. Measures of driving performance included detection and identification of road signs, detection and avoidance of large low-contrast road hazards, gap judgment, lane keeping, and time to complete the course. The dual task required participants to verbally report the sums of pairs of single-digit numbers presented through either a computer speaker (auditorily) or a dashboard-mounted monitor (visually) while driving. Participants also completed a vision and cognitive screening battery, including LogMAR visual acuity, Pelli-Robson letter contrast sensitivity, the Trails test, and the Digit Symbol Substitution (DSS) test. Drivers reported significantly fewer signs, hit more road hazards, misjudged more gaps, and increased their time to complete the course under the dual task (visual and auditory) conditions compared with the single task condition. The older participants also reported significantly fewer road signs and drove significantly more slowly than the younger participants, and this was exacerbated for the visual dual task condition. The results of the regression analysis revealed that cognitive aging (measured by the DSS and Trails test) rather than chronologic age was a better predictor of the declines seen in driving performance under dual task conditions. An overall z score was calculated, which took into account both driving and the secondary task (summing) performance under the two dual task conditions. Performance was significantly worse for the auditory dual task compared with the visual dual task, and the older participants performed significantly worse than the young subjects. These findings demonstrate

  18. A Computational Model of Active Vision for Visual Search in Human-Computer Interaction

    Science.gov (United States)

    2010-08-01

    This paper describes a computational model of visual search for HCI that integrates a contemporary understanding of visual… The evaluation included layouts that grouped items semantically (such as by associating jewelry with cloth) and random layouts to which visual structure was added.

  19. Hydrocortisone Counteracts Adverse Stress Effects on Dual-Task Performance by Improving Visual Sensory Processes.

    Science.gov (United States)

    Weckesser, Lisa J; Alexander, Nina C; Kirschbaum, Clemens; Mennigen, Eva; Miller, Robert

    2016-11-01

    The impact of acute stress on executive processes is commonly attributed to glucocorticoid-induced disruptions of the pFC. However, the occipital cortex seems to express a higher density of glucocorticoid receptors. Consequently, acute stress effects on executive processes could as well be mediated by glucocorticoid (e.g., cortisol)-induced alterations of visual sensory processes. To investigate this alternative route of stress action by demarcating the effects of acute stress and cortisol on executive from those on visual sensory processes, 40 healthy young men completed a standardized stress induction (i.e., the Trier Social Stress Test) and control protocol in two consecutive sessions. In addition, they received either a placebo or hydrocortisone (0.12-mg/kg bodyweight) pill and processed a dual and a partial report task to assess their executive and visual sensory processing abilities, respectively. Hydrocortisone administration improved both partial report and dual-task performance as indicated by increased response accuracies and/or decreased RTs. Intriguingly, the hydrocortisone-induced increase in dual-task performance was completely mediated by its impact on partial report performance (i.e., visual sensory processes). Moreover, RT measures in both tasks shared approximately 26% of variance, which was only in part attributable to hydrocortisone administration (ΔR(2) = 8%). By contrast, acute stress selectively impaired dual-task performance (i.e., executive processes), presumably through an alternative route of action. In summary, the present results suggest that cortisol secretion (as mimicked by hydrocortisone administration) may counteract adverse residual stress effects on executive processes by improving visual sensory processes (e.g., the maintenance and amplification of task-relevant sensory information).

  20. Optic ataxia and the dorsal visual stream re-visited: Impairment in bimanual haptic matching performed without vision.

    Science.gov (United States)

    Jackson, Stephen R; Condon, Laura A; Newport, Roger W; Pears, Sally; Husain, Masud; Bajaj, Nin; O'Donoghue, Michael

    2018-01-01

    The 'two visual systems' account proposed by Milner and Goodale (1992) argued that visual perception and the visual control of action depend upon functionally distinct and anatomically separable brain systems: a ventral stream of visual processing that mediates visual perception (object identification and recognition) and a dorsal stream of visual processing mediating visually guided action. Compelling evidence for this proposal was provided by the neuropsychological studies of brain injured patients, in particular the contrasting pattern of impaired and preserved visual processing abilities of the visual object agnostic patient [DF] and optic ataxic patients who it was argued presented with impaired dorsal stream function. Optic ataxia [OA] has thus become a cornerstone of this 'two visual system' account (Pisella et al., 2009). In the current study we re-examine this assumption by investigating how several individuals presenting with OA performed on a bimanual haptic matching task performed without vision, when the bar to be matched was presented haptically or visually. We demonstrate that, unlike neurologically healthy controls who perform the task with high levels of accuracy, all of the optic ataxic patients were unable to perform the task. We interpret this finding as further evidence that the key difficulty experienced by optic ataxic patients across a range of behavioural tasks may be an inability to simultaneously and directly compare two spatial representations so as to compute the difference between them. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Biologically Inspired Model for Visual Cognition Achieving Unsupervised Episodic and Semantic Feature Learning.

    Science.gov (United States)

    Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei

    2016-10-01

    Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and these models provide new solutions for visual recognition tasks. In this paper, based on the recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. From the point of view of principles, the main contributions are that the framework can achieve unsupervised learning of episodic features (including key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher level cognition of an object. From the performance point of view, the advantages of the framework are as follows: 1) learning episodic features without supervision-for a class of objects without prior knowledge, the key components, their spatial relations and cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features-within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming the general knowledge of a class of objects-the general knowledge of a class of objects can be formed, mainly including the key components, their spatial relations and average semantic values, which is a concise description of the class; and 4) achieving higher level cognition and dynamic updating-for a test image, the model can achieve classification and subclass semantic descriptions. The test samples with high confidence are then selected to dynamically update the whole model. Experiments are conducted on face images, and a good performance is achieved in each layer of the DNN and the semantic description learning process. Furthermore, the model can be generalized to recognition tasks of other objects with learning ability.

  2. Online Education in the Visual and Performing Arts: Strategies for Increasing Learning and Reducing Costs

    Directory of Open Access Journals (Sweden)

    James Wohlpart, Ph.D.

    2006-07-01

    The appropriate use of technology to enhance learning and reduce costs has become a focal point in the discussion of online learning. Significantly, the use of robust teaching and learning platforms, along with videoconferencing and other technological tools, allows for a wide variety of course redesigns that range from the incorporation of online materials into traditional courses to teaching courses fully online. The purpose of this paper is to discuss the use of technology in general education introductory arts and arts appreciation courses, with a particular focus on increasing learning and reducing costs. We describe the successful redesign of a required general education course entitled Understanding the Visual and Performing Arts into a fully online course. Several unique characteristics of the course such as the use of an alternative staffing model, of computer graded practice tests, and of computer graded short essays were particularly effective in the redesign and could be duplicated in other courses even outside the arts. The paper concludes with a discussion of the improved learning that has occurred since the redesign was completed.

  3. Visual acuity, contrast sensitivity, and range performance with compressed motion video

    Science.gov (United States)

    Bijl, Piet; de Vries, Sjoerd C.

    2010-10-01

    Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation discrimination (TOD) test method and contained triangle test patterns of different sizes and contrasts in four possible orientations. In a perception experiment, observers judged the orientation of the triangles in order to determine VA and CS thresholds at the 75% correct level. Three camera velocities (0, 1.0, and 2.0 deg/s, or 0, 4.1, and 8.1 pixels/frame) and four compression rates (no compression, 4 Mb/s, 2 Mb/s, and 1 Mb/s) were used. VA is shown to be rather robust to any combination of motion and compression. CS, however, dramatically decreases when motion is combined with high compression ratios. The measured thresholds were fed into the TOD target acquisition model to predict the effect of motion and compression on acquisition ranges for tactical military vehicles. The effect of compression on static performance is limited but strong with motion video. The data suggest that with the MPEG2 algorithm, the emphasis is on the preservation of image detail at the cost of contrast loss.

  4. Design, characterization and visual performance of a new multizone contact lens

    CERN Document Server

    Rodriguez-Vallejo, Manuel; Monsoriu, Juan A; Furlan, Walter D

    2016-01-01

    Objectives: To analyze the whole process involved in the production of a new bifocal Multizone Contact Lens (MCL) for presbyopia. Methods: The optical quality of a new MCL was evaluated by ray-tracing software in a model eye with different pupil diameters, with the lens centered and decentered. A stock of low-addition (+1.5 D) MCLs for presbyopia was ordered for manufacturing. Power profiles were measured with a contact lens power mapper, processed with custom software, and compared with the theoretical design. Nine lenses from the stock were fitted to presbyopic subjects and visual performance was evaluated with new apps for iPad Retina. Results: Numerical simulations showed that the through-focus curve provided by the MCL has an extended depth of focus. The optical quality was not dependent on pupil size and only decreased for a decentered lens with a pupil diameter of 4.5 mm. The manufactured MCL showed a smoothed power profile with less-defined zones. The bias between experimental and theoretical zone s...

  5. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov Chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems to be a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that a further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
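
    Illustrative sketch (hypothetical transition probabilities, not taken from the paper): in the Markov Chain (MC) approach mentioned above, pavement condition is discretised into states and propagated forward with a transition matrix estimated from visual-inspection surveys.

```python
import numpy as np

# Assumed condition states (best to worst) and a made-up yearly transition matrix;
# each row gives the probability of staying in a state or deteriorating further.
states = ["excellent", "good", "fair", "poor"]
P = np.array([
    [0.80, 0.20, 0.00, 0.00],
    [0.00, 0.75, 0.25, 0.00],
    [0.00, 0.00, 0.70, 0.30],
    [0.00, 0.00, 0.00, 1.00],   # "poor" is absorbing until maintenance intervenes
])

def predict_distribution(initial, years):
    """Return the probability distribution over condition states after a number of years."""
    dist = np.asarray(initial, dtype=float)
    return dist @ np.linalg.matrix_power(P, years)

five_year = predict_distribution([1.0, 0.0, 0.0, 0.0], years=5)
print(dict(zip(states, five_year.round(3))))
```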

  6. Comparative Visual Analysis of Structure-Performance Relations in Complex Bulk-Heterojunction Morphologies

    KAUST Repository

    Aboulhassan, A.

    2017-07-04

    The structure of Bulk-Heterojunction (BHJ) materials, the main component of organic photovoltaic solar cells, is very complex, and the relationship between structure and performance is still largely an open question. Overall, there is a wide spectrum of fabrication configurations resulting in different BHJ morphologies and correspondingly different performances. Current state-of-the-art methods for assessing the performance of BHJ morphologies are either based on global quantification of morphological features or simply on visual inspection of the morphology based on experimental imaging. This makes finding optimal BHJ structures very challenging. Moreover, finding the optimal fabrication parameters to get an optimal structure is still an open question. In this paper, we propose a visual analysis framework to help answer these questions through comparative visualization and parameter space exploration for local morphology features. With our approach, we enable scientists to explore multivariate correlations between local features and performance indicators of BHJ morphologies. Our framework is built on shape-based clustering of local cubical regions of the morphology that we call patches. This enables correlating the features of clusters with intuition-based performance indicators computed from geometrical and topological features of charge paths.

  7. Evaluating survival model performance: a graphical approach.

    Science.gov (United States)

    Mandel, M; Galai, N; Simchen, E

    2005-06-30

    In the last decade, many statistics have been suggested to evaluate the performance of survival models. These statistics evaluate the overall performance of a model ignoring possible variability in performance over time. Using an extension of measures used in binary regression, we propose a graphical method to depict the performance of a survival model over time. The method provides estimates of performance at specific time points and can be used as an informal test for detecting time varying effects of covariates in the Cox model framework. The method is illustrated on real and simulated data using Cox proportional hazard model and rank statistics. Copyright 2005 John Wiley & Sons, Ltd.
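
    Illustrative sketch (simulated data and invented names, not the paper's method): the general idea of depicting survival-model performance over time can be approximated by computing a discrimination measure at several specific time points, so that performance can be plotted as a curve rather than summarised by a single number.

```python
import numpy as np

def auc_at_time(time, event, risk_score, t):
    """Cumulative/dynamic AUC at horizon t: cases have an observed event by t, controls survive past t."""
    cases = (time <= t) & (event == 1)
    controls = time > t
    if cases.sum() == 0 or controls.sum() == 0:
        return float("nan")
    diff = risk_score[cases][:, None] - risk_score[controls][None, :]
    # probability that a random case is scored as higher risk than a random control
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

rng = np.random.default_rng(1)
n = 300
risk = rng.normal(size=n)                          # e.g. a Cox model's linear predictor
time = rng.exponential(scale=np.exp(-risk))        # higher risk gives shorter survival times
event = (rng.random(n) < 0.8).astype(int)          # some observations are censored

for t in (0.25, 0.5, 1.0, 2.0):
    print(f"AUC at t = {t:4.2f}: {auc_at_time(time, event, risk, t):.3f}")
```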

  8. Visual performance for trip hazard detection when using incandescent and led miner cap lamps.

    Science.gov (United States)

    Sammarco, John J; Gallagher, Sean; Reyes, Miguel

    2010-04-01

    Accident data for 2003-2007 indicate that slip, trip, and falls (STFs) are the second leading accident class (17.8%, n=2,441) of lost-time injuries in underground mining. Proper lighting plays a critical role in enabling miners to detect STF hazards in this environment. Often, the only lighting available to the miner is from a cap lamp worn on the miner's helmet. The focus of this research was to determine if the spectral content of light from light-emitting diode (LED) cap lamps enabled visual performance improvements for the detection of tripping hazards as compared to incandescent cap lamps that are traditionally used in underground mining. A secondary objective was to determine the effects of aging on visual performance. The visual performance of 30 subjects was quantified by measuring each subject's speed and accuracy in detecting objects positioned on the floor both in the near field, at 1.83 meters, and far field, at 3.66 meters. Near field objects were positioned at 0 degrees and +/-20 degrees off axis, while far field objects were positioned at 0 degrees and +/-10 degrees off axis. Three age groups were designated: group A consisted of subjects 18 to 25 years old, group B consisted of subjects 40 to 50 years old, and group C consisted of subjects 51 years and older. Results of the visual performance comparison for a commercially available LED, a prototype LED, and an incandescent cap lamp indicate that the location of objects on the floor, the type of cap lamp used, and subject age all had significant influences on the time required to identify potential trip hazards. The LED-based cap lamps enabled detection times that were an average of 0.96 seconds faster compared to the incandescent cap lamp. Use of the LED cap lamps resulted in average detection times that were about 13.6% faster than those recorded for the incandescent cap lamp. The visual performance differences between the commercially available LED and prototype LED cap lamp were not statistically

  9. The influence of assistive technology devices on the performance of activities by visually impaired

    Directory of Open Access Journals (Sweden)

    Suzana Rabello

    2014-04-01

    Objective: To establish the influence of assistive technology devices (ATDs) on the performance of activities by visually impaired schoolchildren in the resource room. Methods: A qualitative study that comprised observation and an educational intervention in the resource room. The study population comprised six visually impaired schoolchildren aged 12 to 14 years. The participants were subjected to an eye examination, prescribed ATDs comprising optical and non-optical devices, and provided an orientation on the use of computers. The participants were assessed based on eye/object distance, font size, and time to read a computer screen and printed text. Results: The ophthalmological conditions included corneal opacity, retinochoroiditis, retinopathy of prematurity, aniridia, and congenital cataracts. Far visual acuity varied from 20/200 to 20/800 and near visual acuity from 0.8 to 6 M. Telescopes, spherical lenses, and support magnifying glasses were prescribed. Three out of five participants with low vision after the intervention could decrease the font size on the computer screen, and most participants (83.3%) reduced their reading time at the second observation session. Relative to the printed text, all the participants with low vision were able to read text written in smaller font sizes and reduced their reading time at the second observation session. Conclusion: Reading skills improved after the use of ATDs, which allowed the participants to perform their school tasks on an equal footing with their classmates.

  10. A Model-Driven Visualization Tool for Use with Model-Based Systems Engineering Projects

    Science.gov (United States)

    Trase, Kathryn; Fink, Eric

    2014-01-01

    Model-Based Systems Engineering (MBSE) promotes increased consistency between a system's design and its design documentation through the use of an object-oriented system model. The creation of this system model facilitates data presentation by providing a mechanism from which information can be extracted by automated manipulation of model content. Existing MBSE tools enable model creation, but are often too complex for the unfamiliar model viewer to easily use. These tools do not yet provide many opportunities for easing into the development and use of a system model when system design documentation already exists. This study creates a Systems Modeling Language (SysML) Document Traceability Framework (SDTF) for integrating design documentation with a system model, and develops an Interactive Visualization Engine for SysML Tools (InVEST), that exports consistent, clear, and concise views of SysML model data. These exported views are each meaningful to a variety of project stakeholders with differing subjects of concern and depth of technical involvement. InVEST allows a model user to generate multiple views and reports from a MBSE model, including wiki pages and interactive visualizations of data. System data can also be filtered to present only the information relevant to the particular stakeholder, resulting in a view that is both consistent with the larger system model and other model views. Viewing the relationships between system artifacts and documentation, and filtering through data to see specialized views improves the value of the system as a whole, as data becomes information

  11. An exploratory factor analysis of visual performance in a large population.

    Science.gov (United States)

    Bosten, J M; Goodbourn, P T; Bargary, G; Verhallen, R J; Lawrance-Owen, A J; Hogg, R E; Mollon, J D

    2017-03-16

    A factor analysis was performed on 25 visual and auditory performance measures from 1060 participants. The results revealed evidence both for a factor relating to general perceptual performance, and for eight independent factors that relate to particular perceptual skills. In an unrotated PCA, the general factor for perceptual performance accounted for 19.9% of the total variance in the 25 performance measures. Following varimax rotation, 8 consistent factors were identified, which appear to relate to (1) sensitivity to medium and high spatial frequencies, (2) auditory perceptual ability (3) oculomotor speed, (4) oculomotor control, (5) contrast sensitivity at low spatial frequencies, (6) stereo acuity, (7) letter recognition, and (8) flicker sensitivity. The results of a hierarchical cluster analysis were consistent with our rotated factor solution. We also report correlations between the eight performance factors and other (non-performance) measures of perception, demographic and anatomical measures, and questionnaire items probing other psychological variables. Copyright © 2017 Elsevier Ltd. All rights reserved.
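
    Illustrative sketch (random stand-in data, not the authors' analysis): the style of analysis described above amounts to a principal components step on 25 standardised performance measures followed by a varimax rotation of an 8-factor loading matrix.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Plain varimax rotation of a (variables x factors) loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
        )
        R = u @ vt
        if s.sum() < var * (1 + tol):
            break
        var = s.sum()
    return loadings @ R

rng = np.random.default_rng(0)
X = rng.normal(size=(1060, 25))                    # stand-in for the 25 performance measures
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=8).fit(Z)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
rotated = varimax(loadings)
print(f"variance explained by the first (general) component: {pca.explained_variance_ratio_[0]:.1%}")
print("rotated loading matrix shape:", rotated.shape)
```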

  12. Integrated Visualization Environment for Science Mission Modeling Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed work will provide NASA with an integrated visualization environment providing greater insight and a more intuitive representation of large technical...

  13. A comparison of methods used to evaluate mobility performance in the visually impaired.

    Science.gov (United States)

    Warrian, Kevin J; Katz, L Jay; Myers, Jonathan S; Moster, Marlene R; Pro, Michael J; Wizov, Sheryl S; Spaeth, George L

    2015-01-01

    To compare three different approaches to measuring mobility performance when evaluating the visually impaired. 488 participants, including 192 glaucoma, 112 age-related macular degeneration, 91 diabetic retinopathy and 93 healthy volunteers, completed the Assessment of Disability Related to Vision (ADREV) mobility course. The performance of participants on the mobility course was evaluated by noting errors made and time required for completion. Errors noted and time taken were compared using multivariate logistic regression to determine which measurement better differentiated patients with visual disease from healthy volunteers. Multivariate logistic regression was also used to evaluate the combined metric of ADREV errors divided by time to determine its ability to discriminate participants with visual disease from healthy volunteers. Errors noted and time taken while ambulating through the standardised mobility course shared a weak but statistically significant association (Pearson's r=0.36, p<0.05). After controlling for demographic and medical comorbidities, logistic regression analysis revealed that errors noted were better at discriminating individuals with visual disease from healthy volunteers (OR 2.8-4.9, 95% CI 1.5 to 10.3) compared with the time taken for mobility course completion (OR 1.1, 95% CI 1.0 to 1.2). These findings were consistent across all comparisons between healthy volunteers and participants with each type of visual impairment. Finally, the combined metric of ADREV errors divided by time was far more predictive of visual disease compared with either time taken or errors noted during mobility testing (OR 11.0-17.7, 95% CI 3.6 to 77.1). A validated scoring system based on errors is more effective when assessing visual disability during mobility testing than recording the time taken for course completion. The combined metric of ADREV errors noted divided by time taken was most predictive of all the methods used to evaluate visual disability
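
    Illustrative sketch (simulated data and invented effect sizes, not the study's dataset): a multivariate logistic regression of the kind reported above can be fitted to predict visual disease from mobility-course errors, completion time, and age, with odds ratios obtained as exponentiated coefficients.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 488
errors = rng.poisson(3, n).astype(float)
minutes = rng.normal(6.0, 1.5, n)
age = rng.normal(65.0, 10.0, n)

# Simulated outcome: more errors (and older age) raise the odds of visual disease.
logit = -3.0 + 0.9 * errors + 0.05 * (age - 65.0)
disease = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([errors, minutes, age]))
fit = sm.Logit(disease, X).fit(disp=False)
odds_ratios = np.exp(fit.params)
ci = np.exp(fit.conf_int())
for name, or_, (lo, hi) in zip(["const", "errors", "minutes", "age"], odds_ratios, ci):
    print(f"{name:8s} OR = {or_:6.2f}   95% CI ({lo:.2f}, {hi:.2f})")
```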

  14. The float model: visualizing personal reflection in healthcare.

    Science.gov (United States)

    Aukes, Leo C; Cohen-Schotanus, Janke; Zwierstra, Rein P; Slaets, Joris Pj

    2009-05-01

    Healthcare students and practitioners need to be able to critically assess themselves and their actions in order to learn from their experiences and improve their care of patients. Students' behaviours can be directly observed and faculty can provide direct feedback on it, when necessary. But 'reflection', a mechanism for assessing one's self, is less visible and often remains an abstract notion that is difficult to understand, use, and assess. We designed an educational model to help healthcare educators and learners visualize reflection. We posit that it can provide a greater understanding of what reflection is, how it works and how to facilitate its development and use by individuals. As a metaphor we used the angler's (fisherman's) float, which to function properly must stand balanced and steady in the water. Likewise, healthcare practitioners try to maintain an upright balance to be able to learn and work effectively. The visible component of the float, the portion above the water, is the 'behaviour'. The hidden, "mental" components of the float are under water: expert thinking (a combination of 'clinical reasoning' and 'scientific thinking'), 'personal reflection', and 'unconscious thoughts'. Each of these mental components plays a role in maintaining balance in learning and working, varying with the circumstances and context. And of course, without water a float has no meaning. In the float model, the water symbolizes the organisational and cultural context in which each practitioner must learn to function. We propose that the float model can be used to reveal the interplay among clinicians' mental processes, which occur unseen "underneath the water" but subtly influence the appropriateness of the behaviour witnessed at the surface. We believe the model can help prevent errors in understanding practitioners' behaviours and their causes, such as when they blur scientific thinking and personal reflection, take reflection as a goal in and of itself, and deny

  15. Visual cognition: a new look at the two-visual systems model.

    Science.gov (United States)

    Jeannerod, M; Jacob, P

    2005-01-01

    In this paper, we argue that no valid comparison between visual representations can arise unless provision is made for three critical properties: their direction of fit, their direction of causation and the level of their conceptual content. The conceptual content in turn is a function of the level of processing. Representations arising from earlier stages of processing of visual input have very little or no conceptual content. Higher order representations get their conceptual content from the connections between visual cognition and other parts of the human cognitive system. The two other critical properties of visual representations are their mind/world direction of fit and their mind/world direction of causation. The output of the semantic processing of visual input has a full mind-to-world direction of fit and a full world-to-mind direction of causation: it visually registers the way the world is and is caused by what it represents. The output of the pragmatic processing yields information for the benefit of intentions, which clearly have a world-to-mind direction of fit and a mind-to-world direction of causation. An intention is both the representation of a goal and a cause of the transformation of a goal into a fact. These properties segregate representations specialized for perception from those specialized for action. Perception implies comparison between simultaneously represented and analyzed objects: hence, object perception presupposes the representation of spatial relationships among objects in a coordinate system independent from the perceiver. Spatial relationships carry cues for attributing meaning to an object, so that their processing is actually part of semantic processing of visual information. These considerations lead to a re-evaluation of the role of the two classical pathways of the human visual system: the ventral and the dorsal cortical pathways. The parietal lobe, which has been identified with the dorsal pathway, cannot be considered as

  16. Timing-dependent LTP and LTD in mouse primary visual cortex following different visual deprivation models

    Science.gov (United States)

    Chen, Xia; Fu, Junhong; Cheng, Wenbo; Song, Desheng; Qu, Xiaolei; Yang, Zhuo; Zhao, Kanxing

    2017-01-01

    Visual deprivation during the critical period induces long-lasting changes in cortical circuitry by adaptively modifying neuro-transmission and synaptic connectivity at synapses. Spike timing-dependent plasticity (STDP) is considered a strong candidate for experience-dependent changes. However, the visual deprivation forms that affect timing-dependent long-term potentiation (tLTP) and long-term depression (tLTD) remain unclear. Here, we demonstrated the temporal window changes of tLTP and tLTD, elicited by coincident pre- and post-synaptic firing, following different modes of 6-day visual deprivation. Markedly broader temporal windows were found for robust tLTP and tLTD in the V1M of the deprived visual cortex in mice after 6-day MD and DE. The underlying mechanism for the changes seen with visual deprivation in juvenile mice using 6 days of dark exposure or monocular lid suture involves an increased fraction of NR2B-containing NMDARs and the consequent prolongation of NMDAR-mediated response duration. Moreover, a decrease in NR2A protein expression at the synapse is attributable to the reduction of the NR2A/2B ratio in the deprived cortex. PMID:28520739
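
    Illustrative sketch (not the study's model): a generic pair-based STDP rule in which the window time constants (tau_plus, tau_minus) are simply widened to mimic, qualitatively, the broader tLTP/tLTD temporal windows reported after visual deprivation.

```python
import numpy as np

def stdp_weight_change(dt_ms, a_plus=0.010, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: dt_ms = t_post - t_pre; positive dt gives tLTP, negative dt gives tLTD."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    ltp = a_plus * np.exp(-dt_ms / tau_plus) * (dt_ms > 0)
    ltd = -a_minus * np.exp(dt_ms / tau_minus) * (dt_ms < 0)
    return ltp + ltd

dts = np.array([-50.0, -20.0, -5.0, 5.0, 20.0, 50.0])
print("control-like windows:", stdp_weight_change(dts).round(4))
print("broadened windows   :", stdp_weight_change(dts, tau_plus=40.0, tau_minus=40.0).round(4))
```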

  17. [The Performance Analysis for Lighting Sources in Highway Tunnel Based on Visual Function].

    Science.gov (United States)

    Yang, Yong; Han, Wen-yuan; Yan, Ming; Jiang, Hai-feng; Zhu, Li-wei

    2015-10-01

    Under mesopic conditions, the spectral luminous efficiency function is described by a family of curves whose peak wavelength and magnitude are affected by the light spectrum, the background luminance, and other factors. The effect of a light source on visibility therefore cannot be characterized by a single optical parameter. In this experiment, the reaction time of visual cognition was used as the evaluation index, and visual cognition was tested with the visual function method under different speeds and luminous environments. The light sources included a high-pressure sodium lamp, an electrodeless fluorescent lamp, and white LEDs with three color temperatures (ranging from 1958 to 5537 K). The background luminance values, between 1 and 5 cd/m2, correspond to the basic section of highway tunnel lighting and to general outdoor lighting, and all lie within the mesopic range. The results show that, for the same speed and luminance, the reaction time of visual cognition for high-color-temperature sources is shorter than for low-color-temperature sources, and the reaction time for targets at high speed is shorter than at low speed. At the end moment, however, the visual angle of the target in the observer's visual field was larger at low speed than at high speed. Based on the MOVE model, the equivalent luminance of human mesopic vision was calculated for the different emission spectra and background luminances produced by the test light sources. Compared with the photopic result, the coefficient of variation (CV) of the reaction-time curve corresponding to the mesopic equivalent luminance is smaller. Under mesopic conditions, the discrepancy between the equivalent luminance of different light sources and the photopic value is one of the main reasons for causing the…

  18. Effects of angular gain transformations between movement and visual feedback on coordination performance in unimanual circling

    Directory of Open Access Journals (Sweden)

    Martina Rieger

    2014-03-01

    Tool actions are characterized by a transformation (of spatio-temporal and/or force-related characteristics) between movements and their resulting consequences in the environment. This transformation has to be taken into account when planning and executing movements, and its existence may affect performance. In the present study we investigated how angular gain transformations between movement and visual feedback during circling movements affect coordination performance. Participants coordinated the visual feedback (feedback dot) with a continuously circling stimulus (stimulus dot) on a computer screen in order to produce mirror-symmetric trajectories of the two. The movement angle was multiplied by a gain factor (0.5 to 2; 9 levels) before it was presented on the screen. Thus, the angular gain transformations changed the spatio-temporal relationship between the movement and its feedback in visual space, and resulted in a non-constant mapping of movement to feedback positions. Coordination performance was best with gain = 1. With high gains the feedback dot led the stimulus dot, with small gains it lagged behind. Anchoring (reduced movement variability) occurred when the two trajectories were close to each other. Awareness of the transformation depended on the deviation of the gain from 1. In conclusion, both the size of an angular gain transformation and its mere presence influence performance in a situation in which the mapping of movement positions to visual feedback positions is not constant. When designing machines or tools that involve transformations between movements and their external consequences, one should be aware that the mere presence of angular gains may result in performance decrements and that there can be flaws in the representation of the transformation.

  19. A mouse model of visual perceptual learning reveals alterations in neuronal coding and dendritic spine density in the visual cortex

    Directory of Open Access Journals (Sweden)

    Yan Wang

    2016-03-01

    Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and properties of VPL on spatial vision in C57BL/6J mice using a two-alternative, forced-choice visual water task. Briefly, the mice underwent prolonged training near the individual threshold of contrast or spatial frequency (SF) for pattern discrimination or visual detection for 35 consecutive days. Following training, the contrast-threshold trained mice showed an 87% improvement in contrast sensitivity (CS) and a 55% gain in visual acuity (VA). Similarly, the SF-threshold trained mice exhibited comparable and long-lasting improvements in VA and significant gains in CS over a wide range of SFs. Furthermore, learning largely transferred across eyes and stimulus orientations. Interestingly, learning could transfer from a pattern discrimination task to a visual detection task, but not vice versa. We validated that this VPL fully restored VA in adult amblyopic mice and old mice. Taken together, these data indicate that mice, as a species, exhibit reliable VPL. Intrinsic signal optical imaging revealed that mice with perceptual training had higher cut-off SFs in primary visual cortex (V1) than those without perceptual training. Moreover, perceptual training induced an increase in the dendritic spine density in layer 2/3 pyramidal neurons of V1. These results indicated functional and structural alterations in V1 during VPL. Overall, our VPL mouse model will provide a platform for investigating the neurobiological basis of VPL.

  20. The development of hand-centred visual representations in the primate brain: a computer modelling study using natural visual scenes.

    Directory of Open Access Journals (Sweden)

    Juan Manuel Galeazzi

    2015-12-01

    Neurons that respond to visual targets in a hand-centred frame of reference have been found within various areas of the primate brain. We investigate how hand-centred visual representations may develop in a neural network model of the primate visual system called VisNet, when the model is trained on images of the hand seen against natural visual scenes. The simulations show how such neurons may develop through a biologically plausible process of unsupervised competitive learning and self-organisation. In an advance on our previous work, the visual scenes consisted of multiple targets presented simultaneously with respect to the hand. Three experiments are presented. First, VisNet was trained with computerized images consisting of a realistic image of a hand and a variety of natural objects, presented in different textured backgrounds during training. The network was then tested with just one textured object near the hand in order to verify whether the output cells were capable of building hand-centred representations with a single localised receptive field. We explain the underlying principles of the statistical decoupling that allows the output cells of the network to develop single localised receptive fields even when the network is trained with multiple objects. In a second simulation we examined how some of the cells with hand-centred receptive fields decreased their shape selectivity and started responding to a localised region of hand-centred space as the number of objects presented in overlapping locations during training increased. Lastly, we explored the same learning principles by training the network with natural visual scenes collected by volunteers. These results provide an important step in showing how single, localised, hand-centred receptive fields could emerge under more ecologically realistic visual training conditions.
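
    Illustrative sketch (not the VisNet implementation; data and parameters invented): the unsupervised competitive-learning principle invoked above can be demonstrated with winner-take-all units whose weights move toward the inputs they win, so that each unit self-organises onto one cluster of inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three artificial input "clusters" standing in for recurring visual configurations.
inputs = np.vstack([rng.normal(loc, 0.1, size=(200, 2))
                    for loc in ([0.0, 0.0], [1.0, 1.0], [0.0, 1.0])])
weights = rng.normal(0.5, 0.2, size=(3, 2))        # three competing output units

learning_rate = 0.05
for epoch in range(20):
    rng.shuffle(inputs)                            # shuffle rows in place each epoch
    for x in inputs:
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        weights[winner] += learning_rate * (x - weights[winner])   # move the winner toward the input

print("learned unit weights (roughly one per cluster):")
print(weights.round(2))
```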

  1. Stress Induction and Visual Working Memory Performance: The Effects of Emotional and Non-Emotional Stimuli

    Directory of Open Access Journals (Sweden)

    Zahra Khayyer

    2017-05-01

    Background: Some studies have shown working memory impairment following stressful situations. Researchers have also found that working memory performance depends on many different factors, such as the emotional load of stimuli and gender. Objectives: The present study aimed to determine the effects of stress induction on visual working memory (VWM) performance among female and male university students. Methods: This quasi-experimental research employed a posttest-only control group design (within-group study). A total of 62 university students (32 males and 30 females; mean age 23.73) were randomly selected and allocated to experimental and control groups. Stress was induced using the cold pressor test (CPT), and an n-back task with emotional and non-emotional pictures was then implemented to evaluate visual working memory function (the number of true items, reaction times, and the number of wrong items). 100 pictures with different valences were selected from the International Affective Picture System (IAPS). Results: Stress impaired the different visual working memory functions (P < 0.002 for true scores, P < 0.001 for reaction time, and P < 0.002 for wrong items). Conclusions: In general, stress significantly decreased VWM performance. Females were more strongly affected by stress than males, and VWM performance was better for emotional stimuli than for non-emotional stimuli.
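
    Illustrative sketch (hypothetical helper, invented data): an n-back task of the kind described above can be scored in terms of the number of true items, the number of wrong items, and mean reaction time, as follows.

```python
import numpy as np

def score_nback(stimuli, responses, reaction_times_ms, n=2):
    """Score an n-back block: a trial is a target when it matches the stimulus n trials earlier."""
    stimuli = list(stimuli)
    is_target = [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]
    correct = [resp == tgt for resp, tgt in zip(responses, is_target)]
    rts_correct = [rt for rt, ok in zip(reaction_times_ms, correct) if ok]
    return {
        "true_items": int(sum(correct)),
        "wrong_items": int(len(correct) - sum(correct)),
        "mean_rt_ms": float(np.mean(rts_correct)) if rts_correct else float("nan"),
    }

stimuli   = ["A", "B", "A", "C", "A", "C", "C"]
responses = [False, False, True, False, True, False, True]   # participant's "match" key presses
rts_ms    = [610, 580, 650, 600, 720, 590, 640]
print(score_nback(stimuli, responses, rts_ms, n=2))
```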

  2. Visual Confidence.

    Science.gov (United States)

    Mamassian, Pascal

    2016-10-14

    Visual confidence refers to an observer's ability to judge the accuracy of her perceptual decisions. Even though confidence judgments have been recorded since the early days of psychophysics, only recently have they been recognized as essential for a deeper understanding of visual perception. The reluctance to study visual confidence may have come in part from the difficulty of obtaining convincing experimental evidence in favor of metacognitive abilities rather than just perceptual sensitivity. Some effort has thus been dedicated to offering different experimental paradigms to study visual confidence in humans and nonhuman animals. To understand the origins of confidence judgments, investigators have developed two competing frameworks. The approach based on signal detection theory is popular but fails to account for response times. In contrast, the approach based on accumulation-of-evidence models naturally includes the dynamics of perceptual decisions. These models can explain a range of results, including the apparently paradoxical dissociation between performance and confidence that is sometimes observed.
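
    Illustrative sketch (not from the review; arbitrary parameters): a minimal accumulation-of-evidence (drift-diffusion) simulation shows how a single mechanism can yield a choice, a response time, and a confidence proxy that decreases with decision time, the dynamic feature that distinguishes this family of models from static signal detection accounts.

```python
import numpy as np

def ddm_trial(drift=0.2, bound=1.0, dt=1e-3, noise=1.0, rng=None):
    """Accumulate noisy evidence until a bound is reached; return choice, RT and a confidence proxy."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = 1 if x > 0 else 0            # 1 = the boundary favoured by a positive drift
    confidence = 1.0 / (1.0 + t)          # quicker decisions are held with higher confidence
    return choice, t, confidence

rng = np.random.default_rng(2)
trials = [ddm_trial(rng=rng) for _ in range(500)]
choices, rts, conf = map(np.array, zip(*trials))
print(f"accuracy = {choices.mean():.2f}, mean RT = {rts.mean():.2f} s, mean confidence = {conf.mean():.2f}")
```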

  3. Verification of Compartmental Epidemiological Models using Metamorphic Testing, Model Checking and Visual Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Ramanathan, Arvind [ORNL; Steed, Chad A [ORNL; Pullum, Laura L [ORNL

    2012-01-01

    Compartmental models in epidemiology are widely used as a means to model disease spread mechanisms and understand how one can best control the disease in case an outbreak of a widespread epidemic occurs. However, a significant challenge within the community is in the development of approaches that can be used to rigorously verify and validate these models. In this paper, we present an approach to rigorously examine and verify the behavioral properties of compartmental epidemiological models under several common modeling scenarios including birth/death rates and multi-host/pathogen species. Using metamorphic testing, a novel visualization tool and model checking, we build a workflow that provides insights into the functionality of compartmental epidemiological models. Our initial results indicate that metamorphic testing can be used to verify the implementation of these models and provide insights into special conditions where these mathematical models may fail. The visualization front-end allows the end-user to scan through a variety of parameters commonly used in these models to elucidate the conditions under which an epidemic can occur. Further, specifying these models using a process algebra allows one to automatically construct behavioral properties that can be rigorously verified using model checking. Taken together, our approach allows for detecting implementation errors as well as handling conditions under which compartmental epidemiological models may fail to provide insights into disease spread dynamics.
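
    As a concrete illustration of the style of behavioural property such a workflow can check, the sketch below runs a minimal SIR compartmental model and one example metamorphic relation (raising the transmission rate, all else equal, should not lower the epidemic peak). It is an assumed toy example, not the authors' test suite, model checker, or visualization front-end.

```python
# Minimal SIR model plus one example metamorphic relation (illustrative only).
def simulate_sir(beta, gamma, s0=0.99, i0=0.01, days=200, dt=0.1):
    """Forward-Euler integration of the SIR equations; returns the infection peak."""
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

def metamorphic_peak_monotonic(beta_low, beta_high, gamma=0.1):
    """Metamorphic relation: a higher transmission rate must not lower the peak."""
    assert beta_high > beta_low
    return simulate_sir(beta_high, gamma) >= simulate_sir(beta_low, gamma)

print(metamorphic_peak_monotonic(0.2, 0.4))  # expected: True
```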

  4. The Role of Visual Feedback and Creative Exploration for the Improvement of Timing Accuracy in Performing Musical Ornaments

    NARCIS (Netherlands)

    Timmers, R.; Sadakata, M.; Desain, P.W.M.

    2012-01-01

    In developing a visual feedback system for a creative activity such as music performance, the objective is not just to reinforce one particular manner of performing. Instead, a desirable characteristic might be that the visual feedback enhances flexibility and originality, in addition to

  5. The effect of context and audio-visual modality on emotions elicited by a musical performance.

    Science.gov (United States)

    Coutinho, Eduardo; Scherer, Klaus R

    2017-07-01

    In this work, we compared emotions induced by the same performance of Schubert Lieder during a live concert and in a laboratory viewing/listening setting to determine the extent to which laboratory research on affective reactions to music approximates real listening conditions in dedicated performances. We measured emotions experienced by volunteer members of an audience that attended a Lieder recital in a church (Context 1) and emotional reactions to an audio-video-recording of the same performance in a university lecture hall (Context 2). Three groups of participants were exposed to three presentation versions in Context 2: (1) an audio-visual recording, (2) an audio-only recording, and (3) a video-only recording. Participants achieved statistically higher levels of emotional convergence in the live performance than in the laboratory context, and the experience of particular emotions was determined by complex interactions between auditory and visual cues in the performance. This study demonstrates the contribution of the performance setting and the performers' appearance and nonverbal expression to emotion induction by music, encouraging further systematic research into the factors involved.

  6. Adapting models of visual aesthetics for personalized content creation

    DEFF Research Database (Denmark)

    Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian

    2012-01-01

    This paper introduces a search-based approach to personalized content generation with respect to visual aesthetics. The approach is based on a two-step adaptation procedure where (1) the evaluation function that characterizes the content is adjusted to match the visual aesthetics of users and (2) the content itself is optimized based on the personalized evaluation function. To test the efficacy of the approach we design fitness functions based on universal properties of visual perception, inspired by psychological and neurobiological research. Using these visual properties we generate aesthetically pleasing 2D game spaceships via neuroevolutionary constrained optimization and evaluate the impact of the designed visual properties on the generated spaceships. The offline generated spaceships are used as the initial population of an interactive evolution experiment in which players are asked to choose...
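
    A minimal sketch of the two-step adaptation idea described above: first the weights of a feature-based evaluation function are nudged toward the features of designs a (simulated) user prefers, then new content is optimised against that personalised function. The feature extractor, the preference simulation, and the hill-climbing optimiser are all stand-ins; the paper itself uses perceptually motivated fitness functions and neuroevolutionary constrained optimisation.

```python
# Two-step personalisation sketch: (1) adapt evaluation weights, (2) optimise content.
import numpy as np

rng = np.random.default_rng(9)
n_features = 4                                    # stand-ins for aesthetic properties

def features(content):
    return np.tanh(content)                       # toy feature extractor

def score(content, weights):
    return float(weights @ features(content))     # personalised evaluation function

# Step 1: nudge weights toward the features of the preferred candidate in each pair.
weights = np.ones(n_features) / n_features
for _ in range(200):
    a, b = rng.normal(size=(2, n_features))       # two candidate designs
    preferred = a if features(a)[0] > features(b)[0] else b   # simulated user choice
    weights += 0.01 * (features(preferred) - weights)
weights /= np.abs(weights).sum()

# Step 2: hill-climb a new design under the personalised evaluation function.
design = rng.normal(size=n_features)
for _ in range(500):
    candidate = design + rng.normal(scale=0.1, size=n_features)
    if score(candidate, weights) > score(design, weights):
        design = candidate

print(np.round(weights, 2), round(score(design, weights), 3))
```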

  7. Visual performance of four simultaneous-image multifocal contact lenses under dim and glare conditions.

    Science.gov (United States)

    García-Lázaro, Santiago; Ferrer-Blasco, Teresa; Madrid-Costa, David; Albarrán-Diego, César; Montés-Micó, Robert

    2015-01-01

    To assess and compare the effects of four simultaneous-image multifocal contact lenses (SIMCLs) and of distance-vision-only contact lenses on visual performance in early presbyopes under dim conditions, including the effects of induced glare. In this double-masked crossover study design, 28 presbyopic subjects aged 40 to 46 years were included. All participants were fitted with the four different SIMCLs (Air Optix Aqua Multifocal [AOAM; Alcon], PureVision Multifocal [PM; Bausch & Lomb], Acuvue Oasys for Presbyopia [AOP; Johnson & Johnson Vision], and Biofinity Multifocal [BM; CooperVision]) and with monofocal contact lenses (Air Optix Aqua, Alcon). After 1 month of daily contact lens wear, each subject's binocular distance visual acuity (BDVA) and binocular distance contrast sensitivity (BDCS) were measured using the Functional Visual Analyzer (Stereo Optical Co., Inc.) under mesopic conditions (3 cd/m²), both with no glare and under two levels of induced glare: 1.0 lux (glare 1) and 28 lux (glare 2). Among the SIMCLs, in terms of BDVA, AOAM and PM outperformed BM and AOP. All contact lenses performed best with no glare, followed by glare 1, with the worst results obtained under glare 2. Binocular distance contrast sensitivity revealed statistically significant differences at 12 cycles per degree (cpd). Among the SIMCLs, post hoc multiple comparison testing revealed that AOAM and PM provided the best BDCS at the three luminance levels. For both BDVA and BDCS at 12 cpd, monofocal contact lenses outperformed all SIMCLs under all lighting conditions. Air Optix Aqua Multifocal and PM provided better visual performance than BM and AOP for distance vision with low addition and under dim conditions, but all SIMCLs provided worse performance than monofocal contact lenses.

  8. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Science.gov (United States)

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity of solving equations involving trigonometric functions. All inliers found are then used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against the state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
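
    The hypothesis-generation and refinement loop described above follows the usual RANSAC pattern. The sketch below shows that pattern on a deliberately simple motion model (a 2-D translation between matched points) so the structure is visible; the paper's closed-form single-track bicycle-model solver and reprojection-error refinement are not reproduced.

```python
# Generic RANSAC skeleton: minimal-sample hypothesis, inlier scoring, final refinement.
import numpy as np

rng = np.random.default_rng(42)
true_shift = np.array([2.0, -1.0])
pts_a = rng.random((100, 2)) * 10
pts_b = pts_a + true_shift + rng.normal(0, 0.05, (100, 2))
pts_b[:20] = rng.random((20, 2)) * 10              # 20 gross outlier matches

def ransac_translation(a, b, iters=200, tol=0.2):
    best_inliers = np.zeros(len(a), dtype=bool)
    for _ in range(iters):
        j = rng.integers(len(a))                   # minimal sample: one correspondence
        shift = b[j] - a[j]                        # hypothesis generator
        residuals = np.linalg.norm(b - (a + shift), axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refinement: re-estimate using all inliers (least squares = mean shift here)
    refined = (b[best_inliers] - a[best_inliers]).mean(axis=0)
    return refined, best_inliers

shift, inliers = ransac_translation(pts_a, pts_b)
print(np.round(shift, 3), int(inliers.sum()), "inliers")
```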

  9. SonifEye: Sonification of Visual Information Using Physical Modeling Sound Synthesis.

    Science.gov (United States)

    Roodaki, Hessam; Navab, Navid; Eslami, Abouzar; Stapleton, Christopher; Navab, Nassir

    2017-11-01

    Sonic interaction as a technique for conveying information has advantages over conventional visual augmented reality methods, especially when augmenting the visual field with extra information brings distraction. Sonification of knowledge extracted by applying computational methods to sensory data is a well-established concept. However, some aspects of sonic interaction design, such as aesthetics, the cognitive effort required for perceiving information, and avoiding alarm fatigue, are not well studied in the literature. In this work, we present a sonification scheme based on physical modeling sound synthesis that targets focus-demanding tasks requiring extreme precision. The proposed mapping techniques are designed to require minimal training for users to adapt to and minimal mental effort to interpret the conveyed information. Two experiments are conducted to assess the feasibility of the proposed method and compare it against visual augmented reality in high-precision tasks. The observed quantitative results suggest that utilizing sound patches generated by physical modeling achieves the desired goal of improving the user experience and general task performance with minimal training.

  10. Effects of Visual Communication Tool and Separable Status Display on Team Performance and Subjective Workload in Air Battle Management

    National Research Council Canada - National Science Library

    Schwartz, Daniel; Knott, Benjamin A; Galster, Scott M

    2008-01-01

    ... ambient cabin noise while performing several visual and manual tasks. The purpose of this study is to compare team performance and subjective workload on a simulated AWACS scenario, for two conditions of communication...

  11. Learning what to look in chest X-rays with a recurrent visual attention model

    OpenAIRE

    Ypsilantis, Petros-Pavlos; Montana, Giovanni

    2017-01-01

    X-rays are commonly performed imaging tests that use small amounts of radiation to produce pictures of the organs, tissues, and bones of the body. X-rays of the chest are used to detect abnormalities or diseases of the airways, blood vessels, bones, heart, and lungs. In this work we present a stochastic attention-based model that is capable of learning what regions within a chest X-ray scan should be visually explored in order to conclude that the scan contains a specific radiological abnorma...

  12. Short-Term Visual Performance of Novel Extended Depth-of-Focus Contact Lenses.

    Science.gov (United States)

    Tilia, Daniel; Bakaraju, Ravi C; Chung, Jiyoon; Sha, Jennifer; Delaney, Shona; Munro, Anna; Thomas, Varghese; Ehrmann, Klaus; Holden, Brien A

    2016-04-01

    To compare the objective and subjective visual performance of a novel contact lens which extends depth of focus by deliberate manipulation of higher-order spherical aberrations and a commercially available zonal-refractive multifocal lens. A prospective, cross-over, randomized, single-masked, short-term clinical trial comprising 41 presbyopes (age 45 to 70 years) wearing novel Extended Depth of Focus lenses (EDOF) and ACUVUE OASYS for Presbyopia (AOP). Each design was assessed on different days with a minimum overnight wash-out. Objective measures comprised high-contrast visual acuity (HCVA, logMAR) at 6 m, 70 cm, 50 cm, and 40 cm; low-contrast visual acuity (LCVA, logMAR) and contrast sensitivity (log units) at 6 m; and stereopsis (seconds of arc) at 40 cm. HCVA at 70 cm, 50 cm, and 40 cm were measured as "comfortable acuity" rather than conventional resolution acuity. Subjective performance was assessed on a 1-10 numeric rating scale for clarity of vision and ghosting at distance, intermediate and near, overall vision satisfaction, ocular comfort, and lens purchase. Statistical analysis included repeated measures ANOVA and paired t tests. HCVA, clarity of vision, and ghosting with EDOF were significantly better than with AOP. EDOF lenses provide better intermediate and near vision performance in presbyopic participants without compromising distance vision.

  13. Performance analysis and optimization of an advanced pharmaceutical wastewater treatment plant through a visual basic software tool (PWWT.VB).

    Science.gov (United States)

    Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha

    2016-05-01

    A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data which helps in optimization and shows the software performance visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model, developed for a systematically integrated forward osmosis-nanofiltration process for removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values have been observed to corroborate well with the extensive experimental investigations which were found to be consistent under varying operating conditions like operating pressure, operating flow rate, and draw solute concentration. Low values of the relative error (RE = 0.09) and high values of the Willmott d-index (d = 0.981) reflected a high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for system design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater.
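
    For reference, the two goodness-of-fit measures quoted above can be computed as follows, using their common textbook definitions (a mean relative error and Willmott's index of agreement). The exact variants and the data used by the authors may differ; the observed/predicted values below are hypothetical.

```python
# Common definitions of the relative error and Willmott d-index (illustrative data).
import numpy as np

def willmott_d(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    o_mean = observed.mean()
    sse = np.sum((observed - predicted) ** 2)
    potential = np.sum((np.abs(predicted - o_mean) + np.abs(observed - o_mean)) ** 2)
    return 1.0 - sse / potential

def mean_relative_error(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return np.mean(np.abs(observed - predicted) / np.abs(observed))

obs = [10.2, 11.5, 9.8, 12.1, 10.9]    # hypothetical measured values
pred = [10.0, 11.8, 9.9, 11.7, 11.1]   # hypothetical model predictions
print(round(willmott_d(obs, pred), 3), round(mean_relative_error(obs, pred), 3))
```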

  14. Modeling and visualizing uncertainty in digital thematic maps

    Science.gov (United States)

    Prasad, M. S. Ganesh; Arora, M. K.; Sajith, V. K.

    2006-12-01

    Spatial data in the form of thematic maps produced from remote sensing images are widely used in many application areas such as hydrology, geology, disaster management, and forestry. These maps inherently contain uncertainties due to various reasons. The presence of uncertainty in thematic maps degrades the quality of maps and subsequently affects the decisions based on these data. The traditional way of quantifying quality is to compute the overall accuracy of the map, which, however, does not depict the spatial distribution of quality across the whole map. It would be more expedient to use pixel-wise uncertainty as a quality indicator of a thematic map. This can be achieved through a number of mathematical tools based on well known theories of probability, geo-statistics, fuzzy sets and rough sets. Information theory and theory of evidence may also be adopted in this context. Nevertheless, there are several challenges involved in characterizing and providing uncertainty information to the users through these theories. The aim of this paper is to apprise the users of remote sensing about the uncertainties present in the thematic maps and to suggest ways to adequately deal with these uncertainties through proper modeling and visualization. Quantification and proper representation of uncertainty to the users may lead to an increase in their confidence in using remote sensing derived products.
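
    One widely used way to obtain the pixel-wise uncertainty the authors advocate is to take the normalised Shannon entropy of each pixel's class-membership probabilities from a soft classification. The sketch below shows only that option on a tiny synthetic probability map; the other frameworks the paper discusses (geostatistics, fuzzy sets, rough sets, evidence theory) are not illustrated here.

```python
# Pixel-wise uncertainty as normalised Shannon entropy of class probabilities.
import numpy as np

rng = np.random.default_rng(7)
# hypothetical 4x4 map with 3 classes: one probability vector per pixel
probs = rng.dirichlet(alpha=[2.0, 1.0, 1.0], size=(4, 4))   # shape (4, 4, 3)

def pixelwise_entropy(p, eps=1e-12):
    h = -np.sum(p * np.log(p + eps), axis=-1)     # Shannon entropy per pixel
    return h / np.log(p.shape[-1])                # normalise by the maximum entropy

uncertainty = pixelwise_entropy(probs)   # 0 = certain, 1 = maximally uncertain
print(np.round(uncertainty, 2))
```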

  15. Serial recall of colors: Two models of memory for serial order applied to continuous visual stimuli.

    Science.gov (United States)

    Peteranderl, Sonja; Oberauer, Klaus

    2018-01-01

    This study investigated the effects of serial position and temporal distinctiveness on serial recall of simple visual stimuli. Participants observed lists of five colors presented at varying, unpredictably ordered interitem intervals, and their task was to reproduce the colors in their order of presentation by selecting colors on a continuous-response scale. To control for the possibility of verbal labeling, articulatory suppression was required in one of two experimental sessions. The predictions were derived through simulation from two computational models of serial recall: SIMPLE represents the class of temporal-distinctiveness models, whereas SOB-CS represents event-based models. According to temporal-distinctiveness models, items that are temporally isolated within a list are recalled more accurately than items that are temporally crowded. In contrast, event-based models assume that the time intervals between items do not affect recall performance per se, although free time following an item can improve memory for that item because of extended time for encoding. The experimental and the simulated data were fit to an interference measurement model to measure the tendency to confuse items with other items nearby on the list (the locality constraint) in people as well as in the models. The continuous-reproduction performance showed a pronounced primacy effect with no recency, as well as some evidence for transpositions obeying the locality constraint. Though not entirely conclusive, this evidence favors event-based models over a role for temporal distinctiveness. There was also a strong detrimental effect of articulatory suppression, suggesting that verbal codes can be used to support serial-order memory of simple visual stimuli.
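
    The temporal-distinctiveness idea that SIMPLE represents can be sketched compactly: items live on a logarithmically compressed temporal dimension, and an item's retrievability is its self-similarity relative to its summed similarity to all list items, so temporally isolated items stand out. The sketch below is that core computation only, with an arbitrary similarity gradient; it is not the full SIMPLE or SOB-CS implementation fitted in the study.

```python
# Core SIMPLE-style computation: distinctiveness on a log-compressed time axis.
import numpy as np

def simple_retrievability(times_since_presentation, c=8.0):
    """times_since_presentation: seconds elapsed at recall for each list item."""
    log_t = np.log(np.asarray(times_since_presentation, float))
    # similarity between items i and j decays with their distance in log-time
    dist = np.abs(log_t[:, None] - log_t[None, :])
    similarity = np.exp(-c * dist)
    return similarity.diagonal() / similarity.sum(axis=1)

# middle item temporally isolated (long gaps around item 3) vs. evenly spaced list
isolated = simple_retrievability([14.0, 12.0, 8.0, 4.0, 2.0])
even = simple_retrievability([10.0, 8.0, 6.0, 4.0, 2.0])
print(np.round(isolated, 2))   # item 3 gains from isolation
print(np.round(even, 2))
```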

  16. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  17. Short Duration Bioastronautics Investigation 1904: Human Factors Assessment of Vibration Effects on Visual Performance during Launch

    Science.gov (United States)

    Thompson, Shelby; Holden, Kritina; Ebert, Douglas; Root, Phillip; Adelstein, Bernard; Jones, Jeffery

    2009-01-01

    The primary objective of the Short Duration Bioastronautics Investigation (SDBI) 1904 was to determine visual performance limits during Shuttle operational vibration and g-loads, specifically through the determination of minimal usable font sizes using Orion-type display formats. Currently there is little to no data available to quantify human visual performance under the extreme g- and vibration conditions of launch. Existing data on shuttle vibration magnitude and frequency is incomplete and does not address human visual performance. There have been anecdotal reports of performance decrements from shuttle crews, but no structured data have been collected. Previous work by NASA on the effects of vibration and linear g-loads on human performance was conducted during the Gemini era, but these experiments were performed using displays and controls that are dramatically different than current concepts being considered by the Constellation Program. Recently, three investigations of visual performance under vibration have been completed at NASA Ames Research Center: the first examining whole-body vibration, the second employing whole-body vibration coupled with a sustained g-load, and a third examining the effects of peak versus extended duration vibration. However, all of these studies were conducted using only a single x-axis direction (eyeballs in/out). Estimates of thrust oscillations from the Constellation Ares-I first stage are driving the need for realistic human performance requirements. SDBI 1904 was an opportunity to address the need for requirements by conducting a highly focused and applied evaluation in a relevant spaceflight environment. The SDBI was a companion effort to Detailed Test Objective (DTO) 695, which measured shuttle seat accelerations (vibration) during ascent. Data from the SDBI will serve an important role in interpreting the DTO vibration data. Both SDBI 1904 and DTO 695 were low impact with respect to flight resources, and combined, they

  18. The effect of context and audio-visual modality on emotions elicited by a musical performance

    Science.gov (United States)

    Coutinho, Eduardo; Scherer, Klaus R.

    2016-01-01

    In this work, we compared emotions induced by the same performance of Schubert Lieder during a live concert and in a laboratory viewing/listening setting to determine the extent to which laboratory research on affective reactions to music approximates real listening conditions in dedicated performances. We measured emotions experienced by volunteer members of an audience that attended a Lieder recital in a church (Context 1) and emotional reactions to an audio-video-recording of the same performance in a university lecture hall (Context 2). Three groups of participants were exposed to three presentation versions in Context 2: (1) an audio-visual recording, (2) an audio-only recording, and (3) a video-only recording. Participants achieved statistically higher levels of emotional convergence in the live performance than in the laboratory context, and the experience of particular emotions was determined by complex interactions between auditory and visual cues in the performance. This study demonstrates the contribution of the performance setting and the performers’ appearance and nonverbal expression to emotion induction by music, encouraging further systematic research into the factors involved. PMID:28781419

  19. Visual tracking speed is related to basketball-specific measures of performance in NBA players.

    Science.gov (United States)

    Mangine, Gerald T; Hoffman, Jay R; Wells, Adam J; Gonzalez, Adam M; Rogowski, Joseph P; Townsend, Jeremy R; Jajtner, Adam R; Beyer, Kyle S; Bohner, Jonathan D; Pruna, Gabriel J; Fragala, Maren S; Stout, Jeffrey R

    2014-09-01

    The purpose of this study was to determine the relationship between visual tracking speed (VTS) and reaction time (RT) on basketball-specific measures of performance. Twelve professional basketball players were tested before the 2012-13 season. Visual tracking speed was obtained from 1 core session (20 trials) of the multiple object tracking test, whereas RT was measured by fixed- and variable-region choice reaction tests, using a light-based testing device. Performance in VTS and RT was compared with basketball-specific measures of performance (assists [AST]; turnovers [TO]; assist-to-turnover ratio [AST/TO]; steals [STL]) during the regular basketball season. All performance measures were reported per 100 minutes played. Performance differences between backcourt (guards; n = 5) and frontcourt (forward/centers; n = 7) positions were also examined. Relationships were most likely present between VTS and AST (r = 0.78), as well as other basketball-specific performance measures. Backcourt players were most likely to outperform frontcourt players in AST and very likely to do so for VTS, TO, and AST/TO. In conclusion, VTS seems to be related to a basketball player's ability to see and respond to various stimuli on the basketball court that results in more positive plays as reflected by greater number of AST and STL and lower turnovers.

  20. Modeling of Ship Propulsion Performance

    DEFF Research Database (Denmark)

    Pedersen, Benjamin Pjedsted; Larsen, Jan

    2009-01-01

    Full scale measurements of the propulsion power, ship speed, wind speed and direction, sea and air temperature, from four different loading conditions have been used to train a neural network for prediction of propulsion power. The network was able to predict the propulsion power with an accuracy between 0.8-2.8%, which is about the same accuracy as for the measurements. The methods developed are intended to support the performance monitoring system SeaTrend® developed by FORCE Technology (FORCE, 2008).
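
    The sketch below shows the general shape of such a model: a small feed-forward network regressing propulsion power on speed, wind, and temperature inputs. The data are synthetic, and the feature set, network size, and preprocessing are assumptions for illustration, not the authors' setup.

```python
# Toy propulsion-power regression with a small feed-forward network (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 2000
speed = rng.uniform(8, 20, n)        # knots
wind = rng.uniform(0, 15, n)         # m/s
sea_temp = rng.uniform(5, 25, n)     # deg C
air_temp = rng.uniform(0, 30, n)     # deg C
# synthetic "true" power: roughly cubic in speed plus a wind penalty and noise
power = 0.9 * speed**3 + 40 * wind + rng.normal(0, 150, n)

X = np.column_stack([speed, wind, sea_temp, air_temp])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=2000, random_state=0))
model.fit(X[:1500], power[:1500])                 # train on the first 1500 samples
pred = model.predict(X[1500:])                    # evaluate on held-out samples
rel_err = np.mean(np.abs(pred - power[1500:]) / power[1500:])
print(f"mean relative error on held-out data: {rel_err:.1%}")
```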

  1. Crowded task performance in visually impaired children: magnifier versus large print.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke; Verezen, Cornelis A; Cillessen, Antonius H N; van Rens, Ger; Cox, Ralf F A

    2013-07-01

    This study compares the influence of two different types of magnification (magnifier versus large print) on crowded near vision task performance. Fifty-eight visually impaired children aged 4-8 years participated. Participants were divided into two groups, matched on age and near visual acuity (NVA): [1] the magnifier group (4-6 year olds [n = 13] and 7-8 year olds [n = 19]), and [2] the large print group (4-6 year olds [n = 12] and 7-8 year olds [n = 14]). At baseline, single and crowded Landolt C acuity were measured at 40 cm without magnification. Crowded near vision was measured again with magnification. A 90 mm diameter dome magnifier was chosen to avoid measuring the confounding effect of navigational skills. The magnifier provided 1.7× magnification and the large print provided 1.8× magnification. Performance measures: [1] NVA without magnification at 40 cm, [2] near vision with magnification, and [3] response time. Working distance was monitored. There was no difference in performance between the two types of magnification for the 4-6 year olds and the 7-8 year olds (p's = .291 and .246, respectively). Average NVA in the 4-6 year old group was 0.95 logMAR without and 0.42 logMAR with magnification. Both types of magnification improved performance for children with a range of visual acuities on a crowded near vision task. Visually impaired children with stronger crowding effects showed larger improvements when working with magnification.

  2. METAPHOR (version 1): Users guide. [performability modeling

    Science.gov (United States)

    Furchtgott, D. G.

    1979-01-01

    General information concerning METAPHOR, an interactive software package to facilitate performability modeling and evaluation, is presented. Example systems are studied and their performabilities are calculated. Each available METAPHOR command and array generator is described. Complete METAPHOR sessions are included.

  3. Assembly line performance and modeling

    Science.gov (United States)

    Rane, Arun B.; Sunnapwar, Vivek K.

    2017-03-01

    The automobile sector forms the backbone of the manufacturing sector. The vehicle assembly line is an important section of an automobile plant, where repetitive tasks are performed one after another at different workstations. In this thesis, a methodology is proposed to reduce cycle time and time loss due to important factors like equipment failure, shortage of inventory, absenteeism, set-up, material handling, rejection, and fatigue, in order to improve output within given cost constraints. Various relationships between these factors, the corresponding cost, and output are established by a scientific approach. This methodology is validated in three different vehicle assembly plants. The proposed methodology may help practitioners optimize the assembly line using lean techniques.

  4. A Neural Network Model of the Visual Short-Term Memory

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Hansen, Lars Kai

    2009-01-01

    In this paper a neural network model of Visual Short-Term Memory (VSTM) is presented. The model links closely with Bundesen’s (1990) well-established mathematical theory of visual attention. We evaluate the model’s ability to fit experimental data from a classical whole and partial report study...

  5. Subtle alterations in memory systems and normal visual attention in the GAERS model of absence epilepsy.

    Science.gov (United States)

    Marques-Carneiro, J E; Faure, J-B; Barbelivien, A; Nehlig, A; Cassel, J-C

    2016-03-01

    Even if considered benign, absence epilepsy may alter memory and attention, sometimes subtly. Very little is known about behavior and cognitive functions in the Genetic Absence Epilepsy Rats from Strasbourg (GAERS) model of absence epilepsy. We focused on different memory systems and sustained visual attention, using Non Epileptic Controls (NECs) and Wistars as controls. A battery of cognitive/behavioral tests was used. The functionality of reference, working, and procedural memory was assessed in the Morris water maze (MWM), 8-arm radial maze, T-maze and/or double-H maze. Sustained visual attention was evaluated in the 5-choice serial reaction time task. In the MWM, GAERS showed delayed learning and less efficient working memory. In the 8-arm radial maze and T-maze tests, working memory performance was normal in GAERS, although most GAERS preferred an egocentric strategy (based on proprioceptive/kinesthetic information) to solve the task, but could efficiently shift to an allocentric strategy (based on spatial cues) after protocol alteration. Procedural memory and visual attention were mostly unimpaired. Absence epilepsy has been associated with some learning problems in children. In GAERS, the differences in water maze performance (slower learning of the reference memory task and weak impairment of working memory) and in radial arm maze strategies suggest that cognitive alterations may be subtle, task-specific, and that normal performance can be a matter of strategy adaptation. Altogether, these results strengthen the "face validity" of the GAERS model: in humans with absence epilepsy, cognitive alterations are not easily detectable, which is compatible with subtle deficits.

  6. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  7. Spherical model provides visual aid for cubic crystal study

    Science.gov (United States)

    Bacigalupi, R. J.; Spakowski, A. E.

    1965-01-01

    Transparent sphere of polymethylmethacrylate with major zones and poles of cubic crystals is used to make crystallographic visualizations and to interpret Laue X ray diffraction of single cubic crystals.

  8. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot.

    Science.gov (United States)

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M

    2014-01-01

    Advancement in brain computer interfaces (BCI) technology allows people to actively interact in the world through surrogates. Controlling real humanoid robots using BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while controlling a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between footsteps sound and actual humanoid's walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid actions may improve motor decisions of the BCI's user and help in the feeling of control over it. Our results shed light on the possibility to increase robot's control through the combination of multisensory feedback to a BCI user.

  9. Impaired Driving Performance as Evidence of a Magnocellular Deficit in Dyslexia and Visual Stress.

    Science.gov (United States)

    Fisher, Carri; Chekaluk, Eugene; Irwin, Julia

    2015-11-01

    High comorbidity and an overlap in symptomology have been demonstrated between dyslexia and visual stress. Several researchers have hypothesized an underlying or causal influence that may account for this relationship. The magnocellular theory of dyslexia proposes that a deficit in visuo-temporal processing can explain symptomology for both disorders. If the magnocellular theory holds true, individuals who experience symptomology for these disorders should show impairment on a visuo-temporal task, such as driving. Eighteen male participants formed the sample for this study. Self-report measures assessed dyslexia and visual stress symptomology as well as participant IQ. Participants completed a drive simulation in which errors in response to road signs were measured. Bivariate correlations revealed significant associations between scores on measures of dyslexia and visual stress. Results also demonstrated that self-reported symptomology predicts magnocellular impairment as measured by performance on a driving task. Results from this study suggest that a magnocellular deficit offers a likely explanation for individuals who report high symptomology across both conditions. While conclusions about the impact of these disorders on driving performance should not be derived from this research alone, this study provides a platform for the development of future research, utilizing a clinical population and on-road driving assessment techniques.

  10. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    Directory of Open Access Journals (Sweden)

    Emmanuele eTidoni

    2014-06-01

    Full Text Available Advancement in brain computer interfaces (BCI technology allows people to actively interact in the world through surrogates. Controlling real humanoid robots using BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while controlling a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between footsteps sound and actual humanoid’s walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid actions may improve motor decisions of the BCI’s user and help in the feeling of control over it. Our results shed light on the possibility to increase robot’s control through the combination of multisensory feedback to a BCI user.

  11. visCOS: An R-package to evaluate model performance of hydrological models

    Science.gov (United States)

    Klotz, Daniel; Herrnegger, Mathew; Wesemann, Johannes; Schulz, Karsten

    2016-04-01

    The evaluation of model performance is a central part of (hydrological) modelling. Much attention has been given to the development of evaluation criteria and diagnostic frameworks. (Klemeš, 1986; Gupta et al., 2008; among many others). Nevertheless, many applications exist for which objective functions do not yet provide satisfying summaries. Thus, the necessity to visualize results arises in order to explore a wider range of model capacities, be it strengths or deficiencies. Visualizations are usually devised for specific projects and these efforts are often not distributed to a broader community (e.g. via open source software packages). Hence, the opportunity to explicitly discuss a state-of-the-art presentation technique is often missed. We therefore present a comprehensive R-package for evaluating model performance by visualizing and exploring different aspects of hydrological time-series. The presented package comprises a set of useful plots and visualization methods, which complement existing packages, such as hydroGOF (Zambrano-Bigiarini et al., 2012). It is derived from practical applications of the hydrological models COSERO and COSEROreg (Kling et al., 2014). visCOS, providing an interface in R, represents an easy-to-use software package for visualizing and assessing model performance and can be implemented in the process of model calibration or model development. The package provides functions to load hydrological data into R, clean the data, process, visualize, explore and finally save the results in a consistent way. Together with an interactive zoom function of the time series, an online calculation of the objective functions for variable time-windows is included. Common hydrological objective functions, such as the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can also be evaluated and visualized in different ways for defined sub-periods like hydrological years or seasonal sections. Many hydrologists use long-term water-balances as a
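
    For orientation, the two objective functions named above have the following standard definitions (the Nash-Sutcliffe Efficiency and the 2009 form of the Kling-Gupta Efficiency). The sketch is written in Python purely for illustration, whereas visCOS itself is an R package and may compute variants of these measures; the discharge values are hypothetical.

```python
# Standard NSE and KGE definitions on a tiny hypothetical discharge series.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]            # linear correlation
    alpha = sim.std() / obs.std()              # variability ratio
    beta = sim.mean() / obs.mean()             # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [2.1, 3.4, 5.0, 4.2, 3.1, 2.5]   # hypothetical observed discharge
sim = [2.0, 3.0, 5.5, 4.0, 3.3, 2.2]   # hypothetical simulated discharge
print(round(nse(obs, sim), 3), round(kge(obs, sim), 3))
```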

  12. Testing predictive performance of binary choice models

    NARCIS (Netherlands)

    A.C.D. Donkers (Bas); B. Melenberg (Bertrand)

    2002-01-01

    Binary choice models occur frequently in economic modeling. A measure of the predictive performance of binary choice models that is often reported is the hit rate of a model. This paper develops a test for the outperformance of a predictor for binary outcomes over a naive prediction
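
    The hit rate discussed above is simply the fraction of observations a model classifies correctly; the natural naive benchmark is to always predict the majority outcome. The sketch below computes both on hypothetical data to show the quantity being compared; the formal outperformance test developed in the paper is not reproduced.

```python
# Hit rate of a binary choice model vs. a naive majority-class prediction.
import numpy as np

rng = np.random.default_rng(5)
y = rng.integers(0, 2, 500)                                # observed binary outcomes
p_model = np.clip(y * 0.6 + rng.random(500) * 0.5, 0, 1)   # hypothetical fitted probabilities

model_pred = (p_model >= 0.5).astype(int)                  # classify at the 0.5 cutoff
naive_pred = np.full_like(y, int(y.mean() >= 0.5))         # always predict the majority class

hit_rate_model = (model_pred == y).mean()
hit_rate_naive = (naive_pred == y).mean()
print(f"model hit rate: {hit_rate_model:.2f}, naive hit rate: {hit_rate_naive:.2f}")
```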

  13. Retrospective cues based on object features improve visual working memory performance in older adults.

    Science.gov (United States)

    Gilchrist, Amanda L; Duarte, Audrey; Verhaeghen, Paul

    2016-01-01

    Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were presented either with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an uninformative, neutral cue. Although older adults were less accurate overall, both age groups benefited from the presentation of an informative, feature-based cue relative to a neutral cue. Surprisingly, we also observed differences in the effectiveness of shape versus color cues and their effects upon post-cue memory load. These results suggest that older adults can use top-down attention to remove irrelevant items from visual working memory, provided that task-relevant features function as cues.

  14. Cactus and Visapult: A case study of ultra-high performance distributed visualization using connectionless protocols

    Energy Technology Data Exchange (ETDEWEB)

    Shalf, John; Bethel, E. Wes

    2002-05-07

    This past decade has seen rapid growth in the size, resolution, and complexity of Grand Challenge simulation codes. Many such problems still require interactive visualization tools to make sense of multi-terabyte data stores. Visapult is a parallel volume rendering tool that employs distributed components, latency tolerant algorithms, and high performance network I/O for effective remote visualization of massive datasets. In this paper we discuss using connectionless protocols to accelerate Visapult network I/O and interfacing Visapult to the Cactus General Relativity code to enable scalable remote monitoring and steering capabilities. With these modifications, network utilization has moved from 25 percent of line-rate using tuned multi-streamed TCP to sustaining 88 percent of line rate using the new UDP-based transport protocol.
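
    As background to the transport change described above, the sketch below shows the basic connectionless (UDP) datagram pattern with Python's standard socket module: no handshake, no retransmission, each datagram is fire-and-forget. Visapult's actual custom UDP-based protocol (framing, loss tolerance, pacing) is far more elaborate and is not reproduced; the local address and payload size are assumptions.

```python
# Minimal connectionless (UDP) send/receive on localhost, illustrative only.
import socket

HOST, PORT = "127.0.0.1", 50007   # assumed free local test endpoint

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind((HOST, PORT))
receiver.settimeout(1.0)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"\x00" * 1400          # one datagram-sized chunk of (stand-in) framebuffer data
for seq in range(10):
    # no handshake, no retransmission: each datagram is fire-and-forget
    sender.sendto(seq.to_bytes(4, "big") + payload, (HOST, PORT))

received = 0
try:
    while True:
        data, _ = receiver.recvfrom(2048)
        received += 1
except socket.timeout:
    pass
print(f"received {received} of 10 datagrams")

sender.close()
receiver.close()
```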

  15. Visual attention and emotional memory: recall of aversive pictures is partially mediated by concurrent task performance.

    Science.gov (United States)

    Pottage, Claire L; Schaefer, Alexandre

    2012-02-01

    The emotional enhancement of memory is often thought to be determined by attention. However, recent evidence using divided attention paradigms suggests that attention does not play a significant role in the formation of memories for aversive pictures. We report a study that investigated this question using a paradigm in which participants had to encode lists of randomly intermixed negative and neutral pictures under conditions of full attention and divided attention, followed by a free recall test. Attention was divided by a highly demanding concurrent task tapping visual processing resources. Results showed that the advantage in recall for aversive pictures was still present in the divided-attention condition. However, mediation analyses also revealed that concurrent task performance significantly mediated the emotional enhancement of memory under divided attention. This finding suggests that visual attentional processes play a significant role in the formation of emotional memories.

  16. Visual methodologies and participatory action research: Performing women's community-based health promotion in post-Katrina New Orleans.

    Science.gov (United States)

    Lykes, M Brinton; Scheib, Holly

    2016-01-01

    Recovery from disaster and displacement involves multiple challenges including accompanying survivors, documenting effects, and rethreading community. This paper demonstrates how African-American and Latina community health promoters and white university-based researchers engaged visual methodologies and participatory action research (photoPAR) as resources in cross-community praxis in the wake of Hurricane Katrina and the flooding of New Orleans. Visual techniques, including but not limited to photonarratives, facilitated the health promoters': (1) care for themselves and each other as survivors of and responders to the post-disaster context; (2) critical interrogation of New Orleans' entrenched pre- and post-Katrina structural racism as contributing to the racialised effects of and responses to Katrina; and (3) meaning-making and performances of women's community-based, cross-community health promotion within this post-disaster context. This feminist antiracist participatory action research project demonstrates how visual methodologies contributed to the co-researchers' cross-community self- and other caring, critical bifocality, and collaborative construction of a contextually and culturally responsive model for women's community-based health promotion post 'unnatural disaster'. Selected limitations as well as the potential for future cross-community antiracist feminist photoPAR in post-disaster contexts are discussed.

  17. Developmental changes in reading do not alter the development of visual processing skills: An application of explanatory item response models in grades K-2

    Directory of Open Access Journals (Sweden)

    Kristi L Santi

    2015-02-01

    Full Text Available Visual processing has been widely studied in regard to its impact on students’ ability to read. A less researched area is the role of reading in the development of visual processing skills. A cohort-sequential, accelerated-longitudinal design was utilized with 932 kindergarten, first, and second grade students to examine the impact of reading acquisition on the processing of various types of visual discrimination and visual motor test items. Students were assessed four times per year on a variety of reading measures and reading precursors, and on two popular measures of visual processing, over a three-year period. Explanatory item response models were used to examine the roles of person and item characteristics on changes in visual processing abilities and changes in item difficulties over time. Results showed different developmental patterns for five types of visual processing test items, but most importantly failed to show consistent effects of learning to read on changes in item difficulty. Thus, the present study failed to find support for the hypothesis that learning to read alters performance on measures of visual processing. Rather, visual processing and reading ability improved together over time with no evidence to suggest cross-domain influences from reading to visual processing. Results are discussed in the context of developmental theories of visual processing and brain-based research on the role of visual skills in learning to read.

  18. Task-Difficulty Homeostasis in Car Following Models: Experimental Validation Using Self-Paced Visual Occlusion

    National Research Council Canada - National Science Library

    Pekkanen, Jami; Lappi, Otto; Itkonen, Teemu H; Summala, Heikki

    2017-01-01

    ...) model, by dynamically changing driving parameters as a function of driver capability. We examined assumptions of these models experimentally using a self-paced visual occlusion paradigm in a simulated car following task...

  19. Visinets: a web-based pathway modeling and dynamic visualization tool

    National Research Council Canada - National Science Library

    Spychala, Jozef; Spychala, Pawel; Gomez, Shawn; Weinreb, Gabriel E

    2015-01-01

    In this report we describe a novel graphically oriented method for pathway modeling and a software package that allows for both modeling and visualization of biological networks in a user-friendly format...

  20. Visinets: A Web-Based Pathway Modeling and Dynamic Visualization Tool: e0123773

    National Research Council Canada - National Science Library

    Jozef Spychala; Pawel Spychala; Shawn Gomez; Gabriel E Weinreb

    2015-01-01

      In this report we describe a novel graphically oriented method for pathway modeling and a software package that allows for both modeling and visualization of biological networks in a user-friendly format...

  1. Model Interpretation of Topological Spatial Analysis for the Visually Impaired (Blind) Implemented in Google Maps

    Directory of Open Access Journals (Sweden)

    Marcelo Franco Porto

    2013-06-01

    Full Text Available Technological innovations promote the availability of geographic information on the Internet through Web GIS such as Google Earth and Google Maps. These systems contribute to the teaching and diffusion of geographical knowledge that instigates the recognition of the space we live in, leading to the creation of a spatial identity. In these products available on the Web, the interpretation and analysis of spatial information gives priority to one of the human senses: vision. Because this information is transmitted visually (images and vectors), a portion of the population is excluded from part of this knowledge, since categories of analysis of geographic data such as borders, territory, and space can only be understood by people who can see. This paper deals with the development of a model of interpretation of topological spatial analysis based on the synthesis of voice and sounds that can be used by the visually impaired (blind). The implementation of a prototype in Google Maps and the usability tests performed are also examined. For the development work it was necessary to define the model of topological spatial analysis, focusing on computational implementation, which allows users to interpret the spatial relationships of regions (countries, states, and municipalities), recognizing their limits, neighborhoods, and extension, beyond their own spatial relationships. With this goal in mind, several interface and usability guidelines were drawn up to be used by the visually impaired (blind). We conducted a detailed study of the Google Maps API (Application Programming Interface), which was the environment selected for prototype development, and studied the information available for the users of that system. The prototype was developed based on the synthesis of voice and sounds that implements the proposed model in the C# language and in the .NET environment. To measure the efficiency and effectiveness of the prototype, usability

  2. Digital representations of the real world how to capture, model, and render visual reality

    CERN Document Server

    Magnor, Marcus A; Sorkine-Hornung, Olga; Theobalt, Christian

    2015-01-01

    Create Genuine Visual Realism in Computer Graphics Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality explains how to portray visual worlds with a high degree of realism using the latest video acquisition technology, computer graphics methods, and computer vision algorithms. It explores the integration of new capture modalities, reconstruction approaches, and visual perception into the computer graphics pipeline. Understand the Entire Pipeline from Acquisition, Reconstruction, and Modeling to Realistic Rendering and Applications. The book covers sensors fo

  3. Modelling auditory attention: Insights from the Theory of Visual Attention (TVA)

    DEFF Research Database (Denmark)

    Roberts, K. L.; Andersen, Tobias; Kyllingsbæk, Søren

    We report initial progress towards creating an auditory analogue of a mathematical model of visual attention: the ‘Theory of Visual Attention’ (TVA; Bundesen, 1990). TVA is one of the best established models of visual attention. It assumes that visual stimuli are initially processed in parallel... Fitting the model to the data produces the following parameters: the minimum amount of information required for target identification (t0); the rate at which information is encoded, assuming an exponential function (v); the relative attentional weight to targets versus distractors (α); and the capacity of VSTM (K). TVA has...
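
    To make the listed parameters concrete, the toy sketch below treats each display item as racing to be encoded with an exponential rate v_i obtained by dividing a total capacity C according to attentional weights, with encoding only possible after t0 and at most K items fitting in VSTM. All numerical values are invented for illustration, and this is neither the auditory extension under development nor the authors' fitting procedure.

```python
# Toy TVA-style exponential race into a K-slot VSTM (all parameter values invented).
import numpy as np

rng = np.random.default_rng(11)

C = 40.0                  # total processing capacity (items/second)
t0 = 0.02                 # minimum effective exposure (seconds)
K = 4                     # VSTM capacity (items)
exposure = 0.1            # exposure duration (seconds)
weights = np.array([2.0, 2.0, 1.0, 1.0, 1.0, 1.0])   # targets weighted over distractors

rates = C * weights / weights.sum()                   # encoding rate v_i for each item

# closed-form probability each item finishes encoding in time (ignores the K limit)
p_encoded = 1.0 - np.exp(-rates * max(exposure - t0, 0.0))
print(np.round(p_encoded, 2))

# simulate the race into a K-slot VSTM: the fastest finishers occupy the slots
finish_times = t0 + rng.exponential(1.0 / rates, size=(5000, len(rates)))
first_k = np.argsort(finish_times, axis=1)[:, :K]      # first K finishers per trial
in_time = finish_times < exposure                      # must also finish before offset
hits = np.zeros(len(rates))
for trial_in_time, order in zip(in_time, first_k):
    for idx in order:
        if trial_in_time[idx]:
            hits[idx] += 1
print(np.round(hits / 5000, 2))    # simulated report probabilities per item
```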

  4. Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery

    Science.gov (United States)

    Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.

    2017-05-01

    In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced H I astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy to use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.

  5. The impact of instructions on aircraft visual inspection performance : a first look at the overall results.

    Energy Technology Data Exchange (ETDEWEB)

    Drury, Colin G. (State University of New York at Buffalo, Buffalo, NY); Spencer, Floyd Wayne; Wenner, Caren A.

    2003-07-01

    The purpose of this study was to investigate the impact of instructions on aircraft visual inspection performance and strategy. Forty-two inspectors from industry were asked to perform inspections of six areas of a Boeing 737. Six different instruction versions were developed for each inspection task, varying in the number and type of directed inspections. The amount of time spent inspecting, the number of calls made, and the number of the feedback calls detected all varied widely across the inspectors. However, inspectors who used instructions with a higher number of directed inspections referred to the instructions more often during and after the task, and found a higher percentage of a selected set of feedback cracks than inspectors using other instruction versions. This suggests that specific instructions can help overall inspection performance, not just performance on the defects specified. Further, instructions were shown to change the way an inspector approaches a task.

  6. Visual search performance of patients with vision impairment: Effect of JPEG image enhancement

    Science.gov (United States)

    Luo, Gang; Satgunam, PremNandhini; Peli, Eli

    2012-01-01

    Purpose: To measure natural image search performance in patients with central vision impairment, and to evaluate the performance effect of a JPEG-based image enhancement technique using the visual search task. Method: 150 JPEG images were presented on a touch screen monitor in either an enhanced or original version to 19 patients (visual acuity 0.4 to 1.2 logMAR, 6/15 to 6/90, 20/50 to 20/300) and 7 normally sighted controls (visual acuity −0.12 to 0.1 logMAR, 6/4.5 to 6/7.5, 20/15 to 20/25). Each image fell into one of three categories: faces, indoors, and collections. The enhancement was realized by moderately boosting a mid-range spatial frequency band in the discrete cosine transform (DCT) coefficients of the image luminance component. Participants pointed to an object in a picture that matched a given target displayed at the upper-left corner of the monitor. Search performance was quantified by the percentage of correct responses, the median search time of correct responses, and an “integrated performance” measure: the area under the curve of cumulative correct response rate over search time. Results: Patients were able to perform the search tasks but their performance was substantially worse than the controls. Search performances for the 3 image categories were significantly different (p≤0.001) for all the participants, with searching for faces being the most difficult. When search time and correct response were analyzed separately, the effect of enhancement led to an increase in one measure but a decrease in another for many patients. Using the integrated performance, it was found that search performance declined with decrease in acuity (p=0.005). An improvement with enhancement was found mainly for the patients whose acuity ranged from 0.4 to 0.8 logMAR (6/15 to 6/38, 20/50 to 20/125). Enhancement conferred a small but significant improvement in integrated performance for indoor and collection images (p=0.025) in the patients. Conclusion: Search performance
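
    The enhancement described in the abstract boosts a mid-range spatial-frequency band of the luminance DCT coefficients. The block-DCT sketch below illustrates that kind of operation on a synthetic luminance image; the band limits, gain, and 8x8 block size are assumptions rather than the study's actual parameters, and SciPy's scipy.fft.dctn/idctn are assumed to be available.

```python
# Block-DCT mid-frequency boost on a stand-in luminance image (illustrative parameters).
import numpy as np
from scipy.fft import dctn, idctn

def boost_mid_frequencies(luma, gain=1.5, band=(2, 5), block=8):
    h, w = luma.shape
    out = luma.astype(float).copy()
    # mask of mid-band coefficients inside a block, indexed by u + v
    u, v = np.meshgrid(np.arange(block), np.arange(block), indexing="ij")
    mid = (u + v >= band[0]) & (u + v <= band[1])
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = dctn(out[y:y + block, x:x + block], norm="ortho")
            coeffs[mid] *= gain                          # boost the mid-frequency band
            out[y:y + block, x:x + block] = idctn(coeffs, norm="ortho")
    return np.clip(out, 0, 255)

rng = np.random.default_rng(2)
test_image = rng.integers(0, 256, (64, 64)).astype(float)   # stand-in luminance channel
enhanced = boost_mid_frequencies(test_image)
print(float(np.abs(enhanced - test_image).mean()))
```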

  7. Summary of photovoltaic system performance models

    Energy Technology Data Exchange (ETDEWEB)

    Smith, J. H.; Reiter, L. J.

    1984-01-15

    The purpose of this study is to provide a detailed overview of photovoltaics (PV) performance modeling capabilities that have been developed during recent years for analyzing PV system and component design and policy issues. A set of 10 performance models has been selected that spans a representative range of capabilities, from generalized first-order calculations to highly specialized electrical network simulations. A set of performance modeling topics and characteristics is defined and used to examine some of the major issues associated with photovoltaic performance modeling. Next, each of the models is described in the context of these topics and characteristics to assess its purpose, approach, and level of detail. Then each of the issues is discussed in terms of the range of model capabilities available and summarized in tabular form for quick reference. Finally, the models are grouped into categories to illustrate their purposes and perspectives.

  8. A Neuro-Oncology Workstation for Structuring, Modeling, and Visualizing Patient Records.

    Science.gov (United States)

    Hsu, William; Arnold, Corey W; Taira, Ricky K

    2010-11-01

    The patient medical record contains a wealth of information consisting of prior observations, interpretations, and interventions that need to be interpreted and applied towards decisions regarding current patient care. Given the time constraints and the large (and often extraneous) amount of data available, clinicians are tasked with the challenge of performing a comprehensive review of how a disease progresses in individual patients. To facilitate this process, we demonstrate a neuro-oncology workstation that assists in structuring and visualizing medical data to promote an evidence-based approach for understanding a patient's record. The workstation consists of three components: 1) a structuring tool that incorporates natural language processing to assist with the extraction of problems, findings, and attributes for structuring observations, events, and inferences stated within medical reports; 2) a data modeling tool that provides a comprehensive and consistent representation of concepts for the disease-specific domain; and 3) a visual workbench for visualizing, navigating, and querying the structured data to enable retrieval of relevant portions of the patient record. We discuss this workstation in the context of reviewing cases of glioblastoma multiforme patients.

  9. Language, visuality, and the body. On the return of discourse in contemporary performance

    Directory of Open Access Journals (Sweden)

    Vangelis Athanassopoulos

    2013-12-01

    Full Text Available This article deals with the return of discourse in experimental performance-based artistic practices. By putting this return in a historical perspective, we wish to address the questions it raises about the relation between language, image, and the body, resituating the avant-garde heritage in a contemporary context where intermediality and transdisciplinarity tend to become the norm rather than the exception. The discussion of the status and function of discourse in this context calls on the field of theatre and its ambivalent role in modern aesthetics, both as a specifically determined artistic discipline, and as a blending of heterogeneous elements, which defy the assigned limitations of creative practice. The confrontation of Antonin Artaud's writings with Michael Fried's conception of theatricality aims to bring to the fore the cultural transformations and historical paradoxes which inform the shift from theatre to performance as an experimental field situated “between” the arts and embracing a wide range of practices, from visual arts to music and dance. The case of lecture-performance enables us to call attention to the internal contradictions of the “educational” interpretation of such experimental practices and their autonomization inside the limits of a specific artistic genre. The main argument is that, despite the plurality of its origins and its claims to intermediality and transdisciplinarity, lecture-performance as a genre is attracted by or gravitates around the extended field of the visual arts. By focusing on the work of Jérôme Bel, Noé Soulier, Giuseppe Chico, Barbara Matijevic, and Carole Douillard, we stress some of the ways in which contemporary discursive strategies serve to displace visual spectacle toward a conception of the body as the limit of signification.

  10. The relationship between visual function and performance in rifle shooting for athletes with vision impairment.

    Science.gov (United States)

    Myint, Joy; Latham, Keziah; Mann, David; Gomersall, Phil; Wilkins, Arnold J; Allen, Peter M

    2016-01-01

    Paralympic sports provide opportunities for those who have an impairment that might otherwise be a barrier to participation in regular sporting competition. Rifle shooting represents an ideal sport for persons with vision impairment (VI) because the direction of the rifle can be guided by auditory information when vision is impaired. However, it is unknown whether those with some remaining vision when shooting with auditory guidance would be at an advantage when compared with those with no vision at all. If this were the case then it would be necessary for those with and without remaining vision to compete in separate classes of competition. The associations between shooting performance and 3 measures of visual function thought important for shooting were assessed for 10 elite VI shooters currently classified as VI. A conventional audiogram was also obtained. The sample size, though small, included the majority of European VI shooters competing at this level. The relationships between visual functions and performance confirmed that individuals with residual vision had no advantage over those without vision when auditory guidance was available. Auditory function was within normal limits for age, and showed no relationship with performance. The findings suggest that rifle-shooting athletes with VI are able to use auditory information to overcome their impairment and optimise performance. Paralympic competition should be structured in a way that ensures that all shooters who qualify to compete in VI shooting participate within the same class irrespective of their level of VI.

  11. Stereoscopic Segmentation Cues Improve Visual Timing Performance in Spatiotemporally Cluttered Environments

    Directory of Open Access Journals (Sweden)

    Daniel Talbot

    2017-03-01

    Full Text Available Recently, Cass and Van der Burg demonstrated that temporal order judgment (TOJ) precision could be profoundly impaired by the mere presence of dynamic visual clutter elsewhere in the visual field. This study examines whether presenting target and distractor objects in different depth planes might ameliorate this remote temporal camouflage (RTC) effect. TOJ thresholds were measured under static and dynamic (flickering) distractor conditions. In Experiment 1, targets were presented at zero, crossed, or uncrossed disparity, with distractors fixed at zero disparity. Thresholds were significantly elevated under dynamic compared with static contextual conditions, replicating the RTC effect. Crossed but not uncrossed disparity targets improved performance in dynamic distractor contexts, which otherwise produce substantial RTC. In Experiment 2, the assignment of disparity was reversed: targets fixed at zero disparity; distractors crossed, uncrossed, or zero. Under these conditions, thresholds improved significantly in the nonzero distractor disparity conditions. These results indicate that presenting target and distractor objects in different planes can significantly improve TOJ performance in dynamic conditions. In Experiment 3, targets were each presented with a different sign of disparity (e.g., one crossed and the other uncrossed), with no resulting performance benefits. Results suggest that disparity can be used to alleviate the performance-diminishing effects of RTC, but only if both targets constitute a single and unique disparity-defined surface.

  12. Visual function and performance with blue-light blocking filters in age-related macular degeneration.

    Science.gov (United States)

    Kiser, Ava K; Deschler, Emily K; Dagnelie, Gislin

    2008-08-01

    Some dispute has occurred over the use of blue-light-attenuating intraocular lenses in age-related macular degeneration (AMD), as they may reduce scotopic vision. This study aimed to determine if a blue blocking filter would affect performance during eye-hand coordination and mobility tasks in scotopic illumination, psychophysically measured scotopic sensitivity or colour discrimination in AMD patients. Scotopic measures performed with and without a blue-attenuating filter included a mobility obstacle course, manipulation of cylindrical blocks and a psychophysical dark-adapted full-field flash test. A navy and blue sock colour sorting task evaluated photopic colour discrimination. Subjects were 22 bilateral pseudophakes with early AMD and visual acuity >6/24. On average with the filter, there was a 13% increase in time during the block test. The differences in time and number of bumps with versus without the filter were not significant for the mobility course. Performance with and without the filter was well correlated for the blocks (r = 0.70), flash test (r = 0.83) and mobility (r = 0.66), and the regression slopes were not significantly different from unity. 77% of subjects misidentified at least one navy sock as black with the filter compared with 9% without, with a significant increase in such misidentifications with the filter. The difference in scotopic visual function or performance with versus without a blue-blocking filter most likely does not produce a clinically significant effect or risk; however, detection of navy colour may be impaired.

  13. Classification model and analysis on students' performance ...

    African Journals Online (AJOL)

    The purpose of this paper is to propose a classification model for classifying students' performance in Sijil Pelajaran ... along with the examination data. This research shows that first semester results can be used to identify students' performance. Keywords: educational data mining; classification model; feature selection ...

  14. Translation from UML to SPN Model: A Performance Modeling Framework

    OpenAIRE

    Khan, Razib Hayat; Heegaard, Poul E.

    2010-01-01

    International audience; This work focuses on delineating a performance modeling framework for a communication system that proposes a translation process from high-level UML notation to a Stochastic Petri Net (SPN) model and solves the model for relevant performance metrics. The framework utilizes UML collaborations, activity diagrams and deployment diagrams to generate a performance model for a communication system. The system dynamics will be captured by UML collaboration and a...

  15. Hidden Markov Model Based Visual Perception Filtering in Robotic Soccer

    Directory of Open Access Journals (Sweden)

    Can Kavaklioglu

    2009-02-01

    Full Text Available Autonomous robots can initiate their mission plans only after gathering sufficient information about the environment. Therefore, reliable perception information plays a major role in the overall success of an autonomous robot. The Hidden Markov Model based post-perception filtering module proposed in this paper aims to identify and remove spurious perception information in a given perception sequence using the generic meta-pose definition. This method allows uncertainty to be represented in more abstract terms compared to the common physical representations. Our experiments with the four-legged AIBO robot indicated that the proposed module improved perception and localization performance significantly.

  16. Hidden Markov Model Based Visual Perception Filtering in Robotic Soccer

    Directory of Open Access Journals (Sweden)

    Can Kavaklioglu

    2009-12-01

    Full Text Available Autonomous robots can initiate their mission plans only after gathering sufficient information about the environment. Therefore, reliable perception information plays a major role in the overall success of an autonomous robot. The Hidden Markov Model based post-perception filtering module proposed in this paper aims to identify and remove spurious perception information in a given perception sequence using the generic meta-pose definition. This method allows uncertainty to be represented in more abstract terms compared to the common physical representations. Our experiments with the four-legged AIBO robot indicated that the proposed module improved perception and localization performance significantly.

  17. On Parsing Visual Sequences with the Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Harte Naomi

    2009-01-01

    Full Text Available Hidden Markov Models have been employed in many vision applications to model and identify events of interest. Their use is common in applications where HMMs are used to classify previously divided segments of video as one of a set of events being modelled. HMMs can also simultaneously segment and classify events within a continuous video, without the need for a separate first step to identify the start and end of the events. This is significantly less common. This paper is an exploration of the development of HMM frameworks for such complete event recognition. A review of how HMMs have been applied to both event classification and recognition is presented. The discussion evolves in parallel with an example of a real application in psychology for illustration. The complete videos depict sessions where candidates perform a number of different exercises under the instruction of a psychologist. The goal is to isolate portions of video containing just one of these exercises. The exercise involves rotating the head of a kneeling subject to the left, back to centre, to the right, to the centre, and repeating a number of times. By designing a HMM system to automatically isolate portions of video containing this exercise, issues such as the strategy of choice of event to be modelled, feature design and selection, as well as training and testing are reviewed. Thus this paper shows how HMMs can be more extensively applied in the domain of event recognition in video.
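
    As an illustration of simultaneous segmentation and classification with an HMM of the kind discussed above, the sketch below decodes per-frame states from a toy feature sequence and groups them into segments. It uses the third-party hmmlearn package and synthetic features as assumptions; the paper's own framework, features, and training strategy differ.

    ```python
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    # Toy per-frame features (e.g., head-rotation angle and velocity), 500 frames:
    # background activity, then the exercise, then background again.
    features = np.vstack([rng.normal(0, 1, (200, 2)),
                          rng.normal(3, 1, (150, 2)),
                          rng.normal(0, 1, (150, 2))])

    model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50,
                        random_state=0)
    model.fit(features)                  # unsupervised training on the sequence
    states = model.predict(features)     # Viterbi decoding: one state per frame

    # Turn the per-frame state labels into contiguous (start, end) segments.
    boundaries = np.flatnonzero(np.diff(states)) + 1
    for seg in np.split(np.arange(len(states)), boundaries):
        print(f"frames {seg[0]:3d}-{seg[-1]:3d}: state {states[seg[0]]}")
    ```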

  18. Handwriting performance in the absence of visual control in writer's cramp patients: Initial observations

    Directory of Open Access Journals (Sweden)

    Losch Florian

    2006-04-01

    Full Text Available Abstract Background The present study was aimed at investigating the writing parameters of writer's cramp patients and control subjects during handwriting of a test sentence in the absence of visual control. Methods Eight right-handed patients with writer's cramp and eight healthy volunteers as age-matched control subjects participated in the study. The experimental task consisted of writing a test sentence fifty times on a pressure-sensitive digital board. The subjects did not have visual control over their handwriting. The writing performance was stored on a PC and analyzed off-line. Results During handwriting, all patients developed a typical dystonic limb posture and reported an increase in muscular tension over the course of the experimental session. The patients were significantly slower than the controls, with lower mean vertical pressure of the pen tip on the paper, and they could not reach the final letter of the sentence in the given time window. No other differences in handwriting parameters were found between the two groups. Conclusion Our findings indicate that, when writing in the absence of visual feedback, writer's cramp patients are slower and do not reach the final letter of the test sentence. Their level of automatization, however, is not impaired, and their handwriting parameters are similar to those of the controls except for an even lower vertical pressure of the pen tip on the paper, which is probably due to a changed strategy under such experimental conditions.

  19. Navon's classical paradigm concerning local and global processing relates systematically to visual object classification performance.

    Science.gov (United States)

    Gerlach, Christian; Poirel, Nicolas

    2018-01-10

    Forty years ago David Navon tried to tackle a central problem in psychology concerning the time course of perceptual processing: Do we first see the details (local level) followed by the overall layout (global level), or is it the other way around? He did this by developing a now classical paradigm involving the presentation of compound stimuli: large letters composed of smaller letters. Despite the usefulness of this paradigm, it remains uncertain whether effects found with compound stimuli relate directly to visual object recognition. This uncertainty arises because compound stimuli are not actual objects but rather formations of elements, and because the elements that form the global shape of compound stimuli are not features of the global shape but rather objects in their own right. To examine the relationship between performance on Navon's paradigm and visual object processing, we derived two indexes from Navon's paradigm that reflect different aspects of the relationship between global and local processing. We find that individual differences on these indexes can explain a considerable amount of variance in two standard object classification paradigms, object decision and superordinate categorization, suggesting that Navon's paradigm does relate to visual object processing.

  20. Visual aesthetic of Petta Puang theater group performance in South Sulawesi

    Directory of Open Access Journals (Sweden)

    Andi Baetal Mukadas

    2017-06-01

    Full Text Available This study aims to describe the visual aesthetics contained in the performances of the Petta Puang theater group in South Sulawesi, in connection with the network of symbolic meaning inherent in them. The method used is a descriptive method with a symbolic-interpretive approach, with data collected through direct observation, interviews, and documentation. The results show that the performances of the Petta Puang theater group carry a visual aesthetic that characterizes the main character «Petta Puang» in every appearance: the jas tutup, the songkok guru (songkok to Bone), and the lipa’ sabbe. Some of these visual elements have symbolic meanings directly related to the socio-cultural values of the people of South Sulawesi. The lipa’ sabbe (silk sarong) is a Bugis sarong whose fine texture represents the tenderness and social politeness of the Bugis people, while its vertical and horizontal lines are markers of the human relationship with God and of human relationships in the social system. The jas tutup originally consists of two colors, black and white. These two colors are neutral colors that convey an impression of depth and sanctity and serve as a model for the values of the Bugis people.

  1. East China Sea Storm Surge Modeling and Visualization System: The Typhoon Soulik Case

    Directory of Open Access Journals (Sweden)

    Zengan Deng

    2014-01-01

    Full Text Available The East China Sea (ECS) Storm Surge Modeling System (ESSMS) is developed based on the Regional Ocean Modeling System (ROMS). A case simulation is performed for Typhoon Soulik, which made landfall on the coast of Fujian Province, China, at 6 p.m. on July 13, 2013. Modeling results show that the maximum tide level occurred at 6 p.m., coinciding with the landfall of Soulik. This coincidence may lead to a significant storm surge and water level rise in the coastal region. The water level variation induced by the high winds of Soulik ranges from −0.1 to 0.15 m. Water level generally increases near the landfall location, in particular on the left-hand side of the typhoon track. It is calculated that a 0.15 m water level rise in this region can increase the submerged area by ~0.2 km2, which could be catastrophic for the coastal environment and local residents. Additionally, a Globe Visualization System (GVS) is implemented on the basis of World Wind to better provide users with typhoon/storm surge information. The main functions of GVS include data indexing, browsing, analyzing, and visualization. GVS is capable of facilitating the precaution and mitigation of typhoon/storm surge in the ECS in combination with ESSMS.

  2. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculations agree with those of the commercial software ABAQUS (Version 6.4-4). This report outlines the verification methodology, the code input, and the calculation results.

  3. DBSolve Optimum: a software package for kinetic modeling which allows dynamic visualization of simulation results

    Directory of Open Access Journals (Sweden)

    Gizzatkulov Nail M

    2010-08-01

    Full Text Available Abstract Background Systems biology research and applications require the creation, validation, and extensive use of mathematical models, as well as the visualization of simulation results for end-users. Our goal is to develop a novel method for visualizing simulation results and to implement it in a simulation software package equipped with sophisticated mathematical and computational techniques for model development, verification and parameter fitting. Results We present the mathematical simulation workbench DBSolve Optimum, a significantly improved and extended successor of the well-known simulation software DBSolve5. A concept of “dynamic visualization” of simulation results has been developed and implemented in DBSolve Optimum. Within this framework, graphical objects representing metabolite concentrations and reactions change their volume and shape in accordance with the simulation results. This technique is applied to visualize both the kinetic response of the model and the dependence of its steady state on a parameter. The use of dynamic visualization is illustrated with a kinetic model of the Krebs cycle. Conclusion DBSolve Optimum is a user-friendly simulation software package that simplifies the construction, verification, analysis and visualization of kinetic models. The dynamic visualization tool implemented in the software allows the user to animate simulation results and thereby present them in a more comprehensible form. DBSolve Optimum and its built-in dynamic visualization module are free for both academic and commercial use. It can be downloaded directly from http://www.insysbio.ru.
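
    The "dynamic visualization" concept can be sketched outside DBSolve Optimum itself: integrate a toy kinetic model and let each metabolite's plotted node size track its simulated concentration. The three-species chain, rate constants, and Matplotlib-based animation below are illustrative assumptions, not DBSolve's implementation.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    import matplotlib.pyplot as plt

    def rates(t, y, k1=1.0, k2=0.5):
        a, b, c = y                      # A -> B -> C with mass-action kinetics
        return [-k1 * a, k1 * a - k2 * b, k2 * b]

    sol = solve_ivp(rates, (0.0, 10.0), [1.0, 0.0, 0.0], dense_output=True)

    fig, ax = plt.subplots()
    positions = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
    for t in np.linspace(0.0, 10.0, 6):             # a few animation "frames"
        conc = sol.sol(t)
        ax.clear()
        for (name, (x, y)), c in zip(positions.items(), conc):
            ax.scatter(x, y, s=2000 * c + 50)       # node area tracks concentration
            ax.annotate(f"{name}: {c:.2f}", (x, y + 0.15), ha="center")
        ax.set_xlim(-0.5, 2.5)
        ax.set_ylim(-0.5, 0.5)
        ax.set_title(f"t = {t:.1f}")
        plt.pause(0.3)                              # crude animation step
    plt.show()
    ```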

  4. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. First, we briefly review different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model, using an example taken from a management study.
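
    A minimal sketch of the kind of quantitative performance measures discussed above is given below for a logistic regression model, using hold-out validation with discrimination (ROC AUC) and calibration (Brier score) metrics. The scikit-learn calls and synthetic data are assumptions for illustration; the paper's own measures and data are not reproduced.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, roc_auc_score, brier_score_loss

    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]

    # Hold-out validation: overall accuracy, discrimination and calibration.
    print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
    print("ROC AUC :", roc_auc_score(y_te, proba))        # discrimination
    print("Brier   :", brier_score_loss(y_te, proba))     # calibration
    ```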

  5. A New Conceptual Model for Business Ecosystem Visualization and Analysis

    Directory of Open Access Journals (Sweden)

    Luiz Felipe Hupsel Vaz

    2013-01-01

    Full Text Available This study aims to map the effects of network externalities and superstar software in order to visualize and analyze industry ecosystems. The output is produced by gathering sales data from a tracking website, associating each sale with a single consumer, and using network visualization software. The result is a graph that shows the strategic positioning of publishers and platforms, serving as a strategic tool for both academics and professionals. The approach is scalable to other industries and can be used to support analyses of mergers, acquisitions and alliances.

  6. Performance of visually guided tasks using simulated prosthetic vision and saliency-based cues

    Science.gov (United States)

    Parikh, N.; Itti, L.; Humayun, M.; Weiland, J.

    2013-04-01

    Objective. The objective of this paper is to evaluate the benefits provided by a saliency-based cueing algorithm to normally sighted volunteers performing mobility and search tasks using simulated prosthetic vision. Approach. Human subjects performed mobility and search tasks using simulated prosthetic vision. A saliency algorithm based on primate vision was used to detect regions of interest (ROI) in an image. Subjects were cued to look toward the directions of these ROI using visual cues superimposed on the simulated prosthetic vision. Mobility tasks required the subjects to navigate through a corridor, avoid obstacles and locate a target at the end of the course. Two search task experiments involved finding objects on a tabletop under different conditions. Subjects were required to perform tasks with and without any help from cues. Results. Head movements, time to task completion and number of errors were all significantly reduced in search tasks when subjects used the cueing algorithm. For the mobility task, head movements and number of contacts with objects were significantly reduced when subjects used cues, whereas time was significantly reduced when no cues were used. The most significant benefit from cues appears to be in search tasks and when navigating unfamiliar environments. Significance. The results from the study show that visually impaired people and retinal prosthesis implantees may benefit from computer vision algorithms that detect important objects in their environment, particularly when they are in a new environment.

  7. Enhancing reading performance through action video games: the role of visual attention span.

    Science.gov (United States)

    Antzaka, A; Lallier, M; Meyer, S; Diard, J; Carreiras, M; Valdois, S

    2017-11-06

    Recent studies reported that Action Video Game (AVG) training improves not only certain attentional components, but also reading fluency in children with dyslexia. We aimed to investigate the shared attentional components of AVG playing and reading by studying whether the Visual Attention (VA) span, a component of visual attention that has previously been linked to both reading development and dyslexia, is improved in frequent players of AVGs. Thirty-six fluent adult readers of French, matched on chronological age and text reading proficiency, formed two groups: frequent AVG players and non-players. Participants performed behavioural tasks measuring the VA span and a challenging reading task (reading of briefly presented pseudo-words). AVG players performed better on both tasks, and performance on the two tasks was correlated. These results further support the transfer of the attentional benefits of playing AVGs to reading, and indicate that the VA span could be a core component mediating this transfer. The correlation between VA span and pseudo-word reading also supports the involvement of the VA span even in adult reading. Future studies could combine VA span training with defining features of AVGs, in order to build a new generation of remediation software.

  8. An Investigation into how Character’s Visual Appearance Affects Gamer Performance

    OpenAIRE

    Leppänen, Janne

    2017-01-01

    The purpose of this thesis was to investigate how a gamer perceives the playable character’s visual appearance in an FPS (first-person shooter) computer game. The main focus was on how this affects immersion and, especially, gamer performance while playing the game. The goal was also to explain the concept of enclothed cognition and its use in the thesis to support the research. A series of tests was set up for multiple test subjects with the purpose of proving that the playable...

  9. Modelling fast forms of visual neural plasticity using a modified second-order motion energy model.

    Science.gov (United States)

    Pavan, Andrea; Contillo, Adriano; Mather, George

    2014-12-01

    The Adelson-Bergen motion energy sensor is well established as the leading model of low-level visual motion sensing in human vision. However, the standard model cannot predict adaptation effects in motion perception. A previous paper (Pavan et al., Journal of Vision, 10:1-17, 2013) presented an extension to the model which uses a first-order RC gain-control circuit (leaky integrator) to implement adaptation effects that can span many seconds, and showed that the extended model's output is consistent with psychophysical data on the classic motion after-effect. Recent psychophysical research has reported adaptation over much shorter time periods, spanning just a few hundred milliseconds. The present paper further extends the sensor model to implement rapid adaptation by adding a second-order RC circuit, which causes the sensor to require a finite amount of time to react to a sudden change in stimulation. The output of the new sensor accounts accurately for psychophysical data on rapid forms of facilitation (rapid visual motion priming, rVMP) and suppression (rapid motion after-effect, rMAE). Changes in natural scene content occur over multiple time scales, and multi-stage leaky integrators of the kind proposed here offer a computational scheme for modelling adaptation over multiple time scales.
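
    A schematic sketch of the two adaptation stages is shown below: a slow first-order leaky integrator that drives divisive gain control (adaptation over seconds), followed by a second-order RC cascade that makes the sensor take a finite time to respond to sudden changes. The time constants, the divisive form of the gain control, and the step input are illustrative assumptions, not the authors' fitted model.

    ```python
    import numpy as np

    def leaky_integrator(x, dt, tau):
        """First-order RC low-pass filter: y' = (x - y) / tau."""
        y = np.zeros_like(x)
        for i in range(1, len(x)):
            y[i] = y[i - 1] + dt * (x[i] - y[i - 1]) / tau
        return y

    dt = 0.001                                           # 1 ms time steps
    t = np.arange(0.0, 3.0, dt)
    energy = np.where((t > 0.5) & (t < 2.0), 1.0, 0.0)   # step in motion energy

    slow_state = leaky_integrator(energy, dt, tau=1.0)   # slow adaptation state
    gain = 1.0 / (1.0 + slow_state)                      # divisive gain control
    rapid = leaky_integrator(leaky_integrator(energy * gain, dt, tau=0.05),
                             dt, tau=0.05)               # second-order RC cascade

    print(f"response just after onset : {rapid[int(0.6 / dt)]:.3f}")
    print(f"response after adaptation : {rapid[int(1.9 / dt)]:.3f}")  # reduced by gain
    ```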

  10. A Probabilistic Palimpsest Model of Visual Short-term Memory

    Science.gov (United States)

    Matthey, Loic; Bays, Paul M.; Dayan, Peter

    2015-01-01

    Working memory plays a key role in cognition, and yet its mechanisms remain much debated. Human performance on memory tasks is severely limited; however, the two major classes of theory explaining the limits leave open questions about key issues such as how multiple simultaneously-represented items can be distinguished. We propose a palimpsest model, with the occurrent activity of a single population of neurons coding for several multi-featured items. Using a probabilistic approach to storage and recall, we show how this model can account for many qualitative aspects of existing experimental data. In our account, the underlying nature of a memory item depends entirely on the characteristics of the population representation, and we provide analytical and numerical insights into critical issues such as multiplicity and binding. We consider representations in which information about individual feature values is partially separate from the information about binding that creates single items out of multiple features. An appropriate balance between these two types of information is required to capture fully the different types of error seen in human experimental data. Our model provides the first principled account of misbinding errors. We also suggest a specific set of stimuli designed to elucidate the representations that subjects actually employ. PMID:25611204

  11. Computerized evaluation of deambulatory pattern before and after visual rehabilitation treatment performed with biofeedback in visually impaired patients suffering from macular degeneration

    Directory of Open Access Journals (Sweden)

    Fernanda Pacella

    2016-09-01

    Full Text Available Aims: The aim of this study was twofold. The primary endpoint was to evaluate the efficacy of visual rehabilitation in visually impaired patients with macular degeneration (AMD). The secondary endpoint was to assess the effect of the rehabilitation treatment on the ambulatory pattern using a computerized evaluation of walking, focusing on the spatio-temporal parameters that are affected in patients with visual impairment. Methods: 10 patients with AMD (6 males and 4 females) were enrolled, and 15 eyes were examined at the Department of Sense Organs, Faculty of Medicine and Dentistry, Sapienza University of Rome, Italy. Visual rehabilitation was carried out with an MP1 microperimeter using biofeedback examination: patients were asked to move their eyes in coordination with audible feedback that alerted them when they were properly fixating the previously selected target. All patients underwent 10 sessions per eye, each lasting 15 minutes, once per week. Best corrected visual acuity (BCVA) was assessed at distance with the ETDRS optotype (in logMAR) and at near (25 cm) with a +4 D addition over the BCVA. For each eye, the print size (PB) at a distance of 25 cm was measured, and fixation stability over 30 seconds was examined with the microperimeter. Gait analysis was performed with the ELITE system (BTS SpA, Milan, Italy). Results: At the end of the rehabilitation treatment with biofeedback, a marked improvement in BCVA was found. BCVA before the rehabilitation treatment was 12 ETDRS letters (0.86 logMAR); at the end of the visual rehabilitation it was 16 letters (0.78 logMAR). Near visual acuity showed a decrease in print size (PB) and a statistically significant improvement in fixation stability. Analysis of the spatial and temporal parameters of the gait cycle, aimed at assessing the global aspects of gait (speed, rhythm, symmetry, fluidity, dynamic balance), showed no significant changes

  12. Visual reconciliation of alternative similarity spaces in climate modeling

    Science.gov (United States)

    J Poco; A Dasgupta; Y Wei; William Hargrove; C.R. Schwalm; D.N. Huntzinger; R Cook; E Bertini; C.T. Silva

    2015-01-01

    Visual data analysis often requires grouping of data objects based on their similarity. In many application domains researchers use algorithms and techniques like clustering and multidimensional scaling to extract groupings from data. While extracting these groups using a single similarity criterion is relatively straightforward, comparing alternative criteria poses...

  13. Testing a Conceptual Change Model Framework for Visual Data

    Science.gov (United States)

    Finson, Kevin D.; Pedersen, Jon E.

    2015-01-01

    An emergent data analysis technique was employed to test the veracity of a conceptual framework constructed around visual data use and instruction in science classrooms. The framework incorporated all five key components Vosniadou (2007a, 2007b) described as existing in a learner's schema: framework theory, presuppositions, conceptual domains,…

  14. Relevance Theory as model for analysing visual and multimodal communication

    NARCIS (Netherlands)

    Forceville, C.; Machin, D.

    2014-01-01

    Elaborating on my earlier work (Forceville 1996: chapter 5, 2005, 2009; see also Yus 2008), I will here sketch how discussions of visual and multimodal discourse can be embedded in a more general theory of communication and cognition: Sperber and Wilson’s Relevance Theory/RT (Sperber and Wilson

  15. Adaptive hybrid likelihood model for visual tracking based on Gaussian particle filter

    Science.gov (United States)

    Wang, Yong; Tan, Yihua; Tian, Jinwen

    2010-07-01

    We present a new scheme based on multiple-cue integration for visual tracking within a Gaussian particle filter framework. The proposed method integrates the color, shape, and texture cues of an object to construct a hybrid likelihood model. During the measurement step, the likelihood model can be switched adaptively according to environmental changes, which improves the object representation to deal with the complex disturbances, such as appearance changes, partial occlusions, and significant clutter. Moreover, the confidence weights of the cues are adjusted online through the estimation using a particle filter, which ensures the tracking accuracy and reliability. Experiments are conducted on several real video sequences, and the results demonstrate that the proposed method can effectively track objects in complex scenarios. Compared with previous similar approaches through some quantitative and qualitative evaluations, the proposed method performs better in terms of tracking robustness and precision.
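
    The general scheme of a hybrid, multiple-cue likelihood with online confidence weighting can be sketched as follows. The toy matching distances, the Gaussian per-cue likelihoods, and the max/mean confidence heuristic are illustrative assumptions; the authors' Gaussian particle filter and cue models are more elaborate.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_particles = 200
    particles = rng.normal(loc=[50.0, 50.0], scale=5.0, size=(n_particles, 2))

    def cue_likelihood(distances, sigma):
        """Gaussian likelihood of a cue-specific matching distance."""
        return np.exp(-0.5 * (distances / sigma) ** 2)

    # Toy per-particle matching distances for three cues (smaller = better match).
    target = np.array([52.0, 48.0])
    d = np.linalg.norm(particles - target, axis=1)
    cue_dists = {"colour": d * 0.8, "shape": d * 1.2,
                 "texture": d + rng.normal(0, 2, n_particles)}
    sigmas = {"colour": 4.0, "shape": 6.0, "texture": 8.0}

    # Adaptive confidence weights: cues whose likelihoods are more peaked get more say.
    likelihoods = {c: cue_likelihood(cue_dists[c], sigmas[c]) for c in cue_dists}
    conf = {c: likelihoods[c].max() / (likelihoods[c].mean() + 1e-9) for c in likelihoods}
    total_conf = sum(conf.values())
    conf = {c: v / total_conf for c, v in conf.items()}

    # Hybrid likelihood = confidence-weighted geometric combination of the cues.
    weights = np.ones(n_particles)
    for c in likelihoods:
        weights *= likelihoods[c] ** conf[c]
    weights /= weights.sum()

    estimate = (particles * weights[:, None]).sum(axis=0)   # weighted mean state
    print("estimated target position:", estimate)
    ```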

  16. Photovoltaic performance models - A report card

    Science.gov (United States)

    Smith, J. H.; Reiter, L. R.

    1985-01-01

    Models for the analysis of photovoltaic (PV) systems' designs, implementation policies, and economic performance have proliferated while keeping pace with rapid changes in basic PV technology and the extensive empirical data compiled for such systems' performance. Attention is presently given to the results of a comparative assessment of ten well-documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of their subsystem, as well as system, elements. The models fall into three categories in light of their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat-plate collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.

  17. Spatial Visualization Ability and Impact of Drafting Models: A Quasi Experimental Study

    Science.gov (United States)

    Katsioloudis, Petros J.; Jovanovic, Vukica

    2014-01-01

    A quasi experimental study was done to determine significant positive effects among three different types of visual models and to identify whether any individual type or combination contributed towards a positive increase of spatial visualization ability for students in engineering technology courses. In particular, the study compared the use of…

  18. How Spatial Abilities and Dynamic Visualizations Interplay When Learning Functional Anatomy with 3D Anatomical Models

    Science.gov (United States)

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material…

  19. [Analytical model of readaptation of the human visual system after light exposure].

    Science.gov (United States)

    Naumov, N D

    2003-01-01

    The process of readaptation of the human visual system is considered as the behavior of a follow-up system, with the brightness of the background being the control signal. The times of recovery of visual acuity calculated by the model are compared with the experimental data.

  20. The Effects of Solid Modeling and Visualization on Technical Problem Solving

    Science.gov (United States)

    Koch, Douglas

    2011-01-01

    The purpose of this study was to determine whether or not the use of solid modeling software increases participants' success in solving a specified technical problem and how visualization affects their ability to solve a technical problem. Specifically, the study sought to determine if (a) students' visualization skills affect their problem…

  1. Blur, eye movements and performance on a driving visual recognition slide test.

    Science.gov (United States)

    Lee, Samantha Sze-Yee; Wood, Joanne M; Black, Alexander A

    2015-09-01

    Optical blur and ageing are known to affect driving performance, but their effects on drivers' eye movements are poorly understood. This study examined the effects of optical blur and age on eye movement patterns and on performance on the DriveSafe slide recognition test, which is purported to predict fitness to drive. Twenty young (27.1 ± 4.6 years) and 20 older (73.3 ± 5.7 years) visually normal drivers performed the DriveSafe under two visual conditions: best-corrected vision and with +2.00 DS blur. The DriveSafe is a Visual Recognition Slide Test that consists of brief presentations of static, real-world driving scenes containing different road users (pedestrians, bicycles and vehicles). Participants reported the types, relative positions and direction of travel of the road users in each image; the score was the number of correctly reported items (maximum score of 128). Eye movements were recorded while participants performed the DriveSafe test using a Tobii TX300 eye tracking system. There was a significant main effect of blur on DriveSafe scores (best-corrected: 114.9 vs blur: 93.2). For eye movement patterns, blur significantly reduced the number of fixations on road users (best-corrected: 5.1 vs blur: 4.5). An effect of age on eye movements was also found, where older drivers made smaller saccades than the young drivers (6.7° vs 7.4°; p < 0.001). Blur reduced DriveSafe scores for both age groups, and this effect was greater for the young drivers. The decrease in the number of fixations and in fixation duration on road users, as well as the reduction in saccade amplitudes under the blurred condition, highlight the difficulty experienced in performing the task in the presence of optical blur, which suggests that uncorrected refractive errors may have a detrimental impact on aspects of driving performance. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  2. Differences in Performance of ADHD Children on a Visual and Auditory Continuous Performance Test according to IQ.

    Science.gov (United States)

    Park, Min-Hyeon; Kweon, Yong Sil; Lee, Soo Jung; Park, E-Jin; Lee, Chul; Lee, Chang-Uk

    2011-09-01

    Continuous performance tests (CPTs) are frequently used in clinical practice to assess the attentiveness of ADHD children. Although most CPTs do not categorize T scores by intelligence, there is great diversity of opinion regarding the interrelation between intelligence and CPT performance. This study aimed to determine whether ADHD children with superior IQs would perform better than ADHD children with average IQs. Additionally, we aimed to examine the need for CPTs to categorize scores according to IQ. Participants were 326 outpatients, aged 5-15 years, diagnosed with ADHD. All participants completed the Wechsler Intelligence Scale for Children-Revised and a CPT. After excluding those who met the exclusion criteria, 266 patients remained for our analysis. The "Highly Intelligent Group" (HIG), patients with IQs of 120 and above, performed superiorly to the "Normally Intelligent Group" (NIG), patients with IQs between 70 and 120, with regard to omission and commission errors on the visual-auditory CPT, even after controlling for age and gender. The HIG had higher ratios of subjects with T scores based on IQ, as well as on age and gender. Moreover, clinicians need to pay attention to the effect of IQ when interpreting CPT scores; that is, a "normal" score does not rule out a diagnosis of ADHD.

  3. Visual Recognition Memory Test Performance was Improved in Older Adults by Extending Encoding Time and Repeating Test Trials

    National Research Council Canada - National Science Library

    Theppitak, Chalermsiri; Lai, Viet; Izumi, Hiroyuki; Higuchi, Yoshiyuki; Kumudini, Ganga; Movahed, Mehrnoosh; Kumashiro, Masaharu; Fujiki, Nobuhiro

    2014-01-01

    Objectives: The aim of this study was to investigate whether the combination of extension of the encoding time and repetition of a test trial would improve the visual recognition memory performance in older adults. Methods...

  4. A Replication of “Motor and Visual Codes Interact to Facilitate Visuospatial Memory Performance (2007; Experiment 1)”

    OpenAIRE

    Vanessa P. Rowe; Sébastien Lagacé; Katherine Guérard

    2015-01-01

    The present study is a replication of Chum, Bekkering, Dodd, and Pratt (2007). Motor and visual codes interact to facilitate visuospatial memory performance. Psychonomic Bulletin & Review, 14, 1189-1193.

  5. A Replication of “Motor and Visual Codes Interact to Facilitate Visuospatial Memory Performance (2007; Experiment 1)”

    Directory of Open Access Journals (Sweden)

    Vanessa P. Rowe

    2015-02-01

    Full Text Available The present study is a replication of Chum, Bekkering, Dodd, and Pratt (2007). Motor and visual codes interact to facilitate visuospatial memory performance. Psychonomic Bulletin & Review, 14, 1189-1193.

  6. Optical quality and visual performance with customised soft contact lenses for keratoconus.

    Science.gov (United States)

    Jinabhai, Amit; O'Donnell, Clare; Tromans, Cindy; Radhakrishnan, Hema

    2014-09-01

    This study investigated how aberration-controlling, customised soft contact lenses corrected higher-order ocular aberrations and visual performance in keratoconic patients compared to other forms of refractive correction (spectacles and rigid gas-permeable lenses). Twenty-two patients (16 rigid gas-permeable contact lens wearers and six spectacle wearers) were fitted with standard toric soft lenses and customised lenses (designed to correct 3rd-order coma aberrations). In the rigid gas-permeable lens-wearing patients, ocular aberrations were measured without lenses, with the patient's habitual lenses and with the study lenses (Hartmann-Shack aberrometry). In the spectacle-wearing patients, ocular aberrations were measured both with and without the study lenses. LogMAR visual acuity (high-contrast and low-contrast) was evaluated with the patient wearing their habitual correction (of either spectacles or rigid gas-permeable contact lenses) and with the study lenses. In the contact lens wearers, the habitual rigid gas-permeable lenses and customised lenses provided significant reductions in 3rd-order coma root-mean-square (RMS) error, 3rd-order RMS and higher-order RMS error (p ≤ 0.004). In the spectacle wearers, the standard toric lenses and customised lenses significantly reduced 3rd-order RMS and higher-order RMS errors (p ≤ 0.005). The spectacle wearers showed no significant differences in visual performance measured between their habitual spectacles and the study lenses. However, in the contact lens wearers, the habitual rigid gas-permeable lenses and standard toric lenses provided significantly better high-contrast acuities compared to the customised lenses (p ≤ 0.006). The customised lenses provided substantial reductions in ocular aberrations in these keratoconic patients; however, the poor visual performances achieved with these lenses are most likely to be due to small, on-eye lens decentrations. © 2014 The Authors Ophthalmic & Physiological

  7. Adapting the Theory of Visual Attention (TVA) to model auditory attention

    DEFF Research Database (Denmark)

    Roberts, Katherine L.; Andersen, Tobias; Kyllingsbæk, Søren

    Mathematical and computational models have provided useful insights into normal and impaired visual attention, but less progress has been made in modelling auditory attention. We are developing a Theory of Auditory Attention (TAA), based on an influential visual model, the Theory of Visual...... the auditory data, producing good estimates of the rate at which information is encoded (C), the minimum exposure duration required for processing to begin (t0), and the relative attentional weight to targets versus distractors (α). Future work will address the issue of target-distractor confusion, and extend...

  8. Impact of low vision care on reading performance in children with multiple disabilities and visual impairment

    Directory of Open Access Journals (Sweden)

    Krishna Kumar Ramani

    2014-01-01

    Full Text Available Background: There is a lack of evidence in the literature showing that low vision care enhances reading performance in children with Multiple Disabilities and Visual Impairment (MDVI). Aim: To evaluate the effectiveness of low vision care intervention on the reading performance of children with MDVI. Materials and Methods: Three subjects diagnosed with cerebral palsy and visual impairment, studying in a special school, were recruited for the study. All of them underwent a detailed eye examination and low vision care evaluation at a tertiary eye care hospital. A single-subject multiple-baseline study design was adopted, and the study period was 16 weeks. The reading performance (reading speed, reading accuracy, reading fluency) was evaluated during the baseline phase and the intervention phase. The median of each reading parameter for each week was noted. The trend of the reading performance was represented graphically in both phases. Results: Reading speed increased by 37 words per minute, 37 letters per minute and 5 letters per minute for subjects 1, 2 and 3, respectively, after the intervention. Reading accuracy was 84%, 91% and 86.4% at the end of the baseline period and 98.7%, 98.4% and 99% at the end of 16 weeks for subjects 1, 2 and 3, respectively. Average reading fluency scores were 8.3, 7.1 and 5.5 in the baseline period and 10.2, 10.2 and 8.7 in the intervention period. Conclusion: This study shows evidence of noticeable improvement in the reading performance of children with MDVI using a novel study design.

  9. Visualizing Network Traffic to Understand the Performance of Massively Parallel Simulations

    KAUST Repository

    Landge, A. G.

    2012-12-01

    The performance of massively parallel applications is often heavily impacted by the cost of communication among compute nodes. However, determining how to best use the network is a formidable task, made challenging by the ever increasing size and complexity of modern supercomputers. This paper applies visualization techniques to aid parallel application developers in understanding the network activity by enabling a detailed exploration of the flow of packets through the hardware interconnect. In order to visualize this large and complex data, we employ two linked views of the hardware network. The first is a 2D view, that represents the network structure as one of several simplified planar projections. This view is designed to allow a user to easily identify trends and patterns in the network traffic. The second is a 3D view that augments the 2D view by preserving the physical network topology and providing a context that is familiar to the application developers. Using the massively parallel multi-physics code pF3D as a case study, we demonstrate that our tool provides valuable insight that we use to explain and optimize pF3D's performance on an IBM Blue Gene/P system. © 1995-2012 IEEE.

  10. Image Processing Strategies Based on a Visual Saliency Model for Object Recognition Under Simulated Prosthetic Vision.

    Science.gov (United States)

    Wang, Jing; Li, Heng; Fu, Weizhen; Chen, Yao; Li, Liming; Lyu, Qing; Han, Tingting; Chai, Xinyu

    2016-01-01

    Retinal prostheses have the potential to restore partial vision. Object recognition in scenes of daily life is one of the essential tasks for implant wearers. Because wearers are still limited by the low-resolution visual percepts provided by retinal prostheses, it is important to investigate and apply image processing methods that convey more useful visual information to them. We proposed two image processing strategies based on Itti's visual saliency map, region of interest (ROI) extraction, and image segmentation. Itti's saliency model generated a saliency map from the original image, in which salient regions were grouped into an ROI by fuzzy c-means clustering. GrabCut then generated a proto-object from the ROI-labeled image, which was recombined with the background and enhanced in two ways: 8-4 separated pixelization (8-4 SP) and background edge extraction (BEE). Results showed that both 8-4 SP and BEE had significantly higher recognition accuracy than direct pixelization (DP). Each saliency-based image processing strategy was subject to the performance of image segmentation. Under good and perfect segmentation conditions, BEE and 8-4 SP obtained noticeably higher recognition accuracy than DP, and under bad segmentation conditions, only BEE boosted performance. The application of saliency-based image processing strategies was verified to be beneficial to object recognition in daily scenes under simulated prosthetic vision. It is hoped that they will help the development of the image processing module for future retinal prostheses, and thus provide more benefit for patients. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
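
    The shape of the pipeline (saliency map, ROI grouping, GrabCut proto-object) can be sketched with OpenCV, substituting the spectral-residual saliency detector and Otsu thresholding for Itti's model and the fuzzy c-means clustering used in the paper. The synthetic test image is an assumption, and opencv-contrib-python is required for the saliency module.

    ```python
    import cv2
    import numpy as np

    # Synthetic stand-in scene: a bright object on a dark background.
    img = np.full((240, 320, 3), 40, np.uint8)
    cv2.circle(img, (160, 120), 50, (0, 200, 255), -1)

    # 1. Saliency map (spectral residual, standing in for Itti's model).
    ok, sal = cv2.saliency.StaticSaliencySpectralResidual_create().computeSaliency(img)
    sal = (sal * 255).astype(np.uint8)

    # 2. Group salient pixels into a region of interest (standing in for fuzzy c-means).
    _, roi = cv2.threshold(sal, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    # 3. GrabCut: ROI pixels start as "probably foreground", the rest as background;
    #    GrabCut refines this into a proto-object mask.
    mask = np.where(roi > 0, cv2.GC_PR_FGD, cv2.GC_BGD).astype(np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    proto = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)

    # The proto-object mask would then feed 8-4 separated pixelization or
    # background edge extraction before presentation through the simulated implant.
    cv2.imwrite("proto_object.png", proto)
    ```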

  11. EVALUATING A SEGMENTATION-RESISTANT CAPTCHA INSPIRED BY THE HUMAN VISUAL SYSTEM MODEL

    Directory of Open Access Journals (Sweden)

    Imran Moez Khan

    2011-10-01

    Full Text Available Visual CAPTCHAs are widely used these days on the Internet as a means of distinguishing between humans and computers. They help protect servers from being flooded by requests from malicious scripts. However, they are not very secure. Numerous image processing algorithms are able to discern the characters used in the CAPTCHAs. It has been suggested that CAPTCHAs can be made more secure if they are distorted in ways that make segmentation difficult. However, out of all the reviewed distortions present in current CAPTCHAs, there are none that allow for a high level of segmentation difficulty. Furthermore, CAPTCHAs also need to be used by humans, who may not find certain distortions tolerable. Thus, the problem of selecting a good distortion becomes a tradeoff between user acceptability and computer solvability. It is hypothesized in this paper that rather than use low-level image distortions, optical distortions based on the Gestalt laws of perception that govern human visual system (HVS) models should be applied. These distortions would ensure widespread user acceptability (as they are based on the internal workings of the HVS) and be very difficult for computers to solve (as HVS perception models have been difficult to implement in computers). This paper aims to explore the feasibility of employing Gestalt-inspired distortion in CAPTCHAs by first implementing a CAPTCHA cracker and then evaluating the performance of some manually generated Gestalt CAPTCHAs against some existing CAPTCHAs.

  12. High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB

    Science.gov (United States)

    Asaad, Wael F.; Santhanam, Navaneethan; McClellan, Steven

    2013-01-01

    Behavioral, psychological, and physiological experiments often require the ability to present sensory stimuli, monitor and record subjects' responses, interface with a wide range of devices, and precisely control the timing of events within a behavioral task. Here, we describe our recent progress developing an accessible and full-featured software system for controlling such studies using the MATLAB environment. Compared with earlier reports on this software, key new features have been implemented to allow the presentation of more complex visual stimuli, increase temporal precision, and enhance user interaction. These features greatly improve the performance of the system and broaden its applicability to a wider range of possible experiments. This report describes these new features and improvements, current limitations, and quantifies the performance of the system in a real-world experimental setting. PMID:23034363

  13. Visual Assessment on Coastal Cruise Tourism: A Preliminary Planning Using Importance Performance Analysis

    Science.gov (United States)

    Trisutomo, S.

    2017-07-01

    Importance-Performance Analysis (IPA) has been widely applied in many cases. In this research, IPA was applied to measure perceptions of coastal tourism objects and their potential to be developed for coastal cruise tourism in Makassar. Three objects, i.e. the Akkarena recreational site, the Losari public space at the waterfront, and the Paotere traditional Phinisi ship port, were selected and assessed visually from the water by a group of purposively selected resource persons. The importance and performance of 10 attributes of each site were scored using a Likert scale from 1 to 5. Data were processed with SPSS 21, resulting in a Cartesian graph in which the scores were divided into four quadrants: Quadrant I, concentrate here; Quadrant II, keep up the good work; Quadrant III, low priority; and Quadrant IV, possible overkill. The attributes in each quadrant could be considered the platform for preliminary planning of a coastal cruise tour in Makassar.
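
    The quadrant assignment step of IPA can be sketched in a few lines: attribute means are split by the grand means of importance and performance into the four quadrants named above. The attribute names and Likert scores below are made-up examples, not the study's data.

    ```python
    import numpy as np

    # Rows = attributes, columns = respondents (Likert 1-5 scores).
    attributes = ["accessibility", "scenery", "cleanliness", "facilities"]
    importance = np.array([[5, 4, 5], [5, 5, 4], [4, 4, 5], [3, 4, 3]], float)
    performance = np.array([[3, 2, 3], [4, 5, 4], [2, 3, 2], [4, 4, 5]], float)

    imp_mean = importance.mean(axis=1)
    perf_mean = performance.mean(axis=1)
    imp_cut, perf_cut = imp_mean.mean(), perf_mean.mean()   # the quadrant cross-hairs

    def quadrant(i, p):
        if i >= imp_cut and p < perf_cut:
            return "I (concentrate here)"
        if i >= imp_cut and p >= perf_cut:
            return "II (keep up the good work)"
        if i < imp_cut and p < perf_cut:
            return "III (low priority)"
        return "IV (possible overkill)"

    for name, i, p in zip(attributes, imp_mean, perf_mean):
        print(f"{name:13s} importance={i:.2f} performance={p:.2f} -> Quadrant {quadrant(i, p)}")
    ```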

  14. Translation from UML to Markov Model: A Performance Modeling Framework

    Science.gov (United States)

    Khan, Razib Hayat; Heegaard, Poul E.

    Performance engineering focuses on the quantitative investigation of the behavior of a system during the early phases of the system development life cycle. Bearing this in mind, we delineate a performance modeling framework for communication system applications that proposes a translation process from high-level UML notation to a Continuous Time Markov Chain (CTMC) model and solves the model for relevant performance metrics. The framework uses UML collaborations, activity diagrams, and deployment diagrams to generate a performance model for a communication system. The system dynamics are captured by UML collaboration and activity diagrams as reusable specification building blocks, while the deployment diagram highlights the components of the system. The collaboration and activity diagrams show how reusable building blocks, in the form of collaborations, compose the service components through input and output pins, highlighting the behavior of the components; a mapping between the collaborations and the system components identified by the deployment diagram is then delineated. Moreover, the UML models are annotated with performance-related quality of service (QoS) information, which is necessary for solving the performance model for the relevant performance metrics through our proposed framework. The applicability of our proposed performance modeling framework to performance evaluation is demonstrated in the context of modeling a communication system.
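
    The final step the framework describes, solving the generated CTMC for performance metrics, can be illustrated with a minimal sketch. The 3-state generator matrix below is an arbitrary example, not one derived from UML diagrams.

```python
# Minimal sketch of solving a Continuous Time Markov Chain for its steady-state
# distribution (the generator matrix is an arbitrary 3-state example).
import numpy as np

def ctmc_steady_state(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1 for the generator matrix Q."""
    n = Q.shape[0]
    # Replace one balance equation with the normalization constraint.
    A = np.vstack([Q.T[:-1, :], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    Q = np.array([[-0.5, 0.3, 0.2],
                  [0.4, -0.6, 0.2],
                  [0.1, 0.4, -0.5]])
    pi = ctmc_steady_state(Q)
    print("steady-state probabilities:", pi)
    # A performance metric such as utilization is then a weighted sum over pi.
```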

  15. Next Generation, 4-D Distributed Modeling and Visualization of Battlefield

    Science.gov (United States)

    2006-07-14

    ... is to acquire both 3D geometry and visual appearance of a scene over time, and thus essentially record a 4D movie which could be viewed afterwards ...

  16. The role of visual and spatial working memory in forming mental models derived from survey and route descriptions.

    Science.gov (United States)

    Meneghetti, Chiara; Labate, Enia; Pazzaglia, Francesca; Hamilton, Colin; Gyselinck, Valérie

    2017-05-01

    This study examines the involvement of spatial and visual working memory (WM) in the construction of flexible spatial models derived from survey and route descriptions. Sixty young adults listened to environment descriptions, 30 from a survey perspective and the other 30 from a route perspective, while they performed spatial (spatial tapping [ST]) and visual (dynamic visual noise [DVN]) secondary tasks - believed to overload the spatial and visual WM components, respectively - or no secondary task (control, C). Their mental representations of the environment were tested by free recall and a verification test with both route and survey statements. Results showed that, for both recall tasks, accuracy was worse in the ST condition than in the C or DVN conditions. In the verification test, both ST and DVN reduced accuracy more for sentences testing spatial relations from the opposite perspective than from the same perspective as the one learnt; only ST had a stronger interference effect than the C condition for sentences from the opposite perspective. Overall, these findings indicate that both visual and spatial WM, and especially the latter, are involved in the construction of perspective-flexible spatial models. © 2016 The British Psychological Society.

  17. Attentional and visual demands for sprint performance in non-fatigued and fatigued conditions : reliability of a repeated sprint test

    NARCIS (Netherlands)

    Reininga, Inge H. F.; Lemmink, Koen A. P. M.; Diercks, Ron L.; Buizer, Arina T.; Stevens, Martin

    2010-01-01

    Background: Physical performance measures are widely used to assess physical function, providing information about physiological and biomechanical aspects of motor performance. However, they do not provide insight into the attentional and visual demands for motor performance. A figure-of-eight sprint

  18. Performance of the Community Earth System Model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, Patrick H [ORNL; Craig, Anthony [National Center for Atmospheric Research (NCAR); Dennis, John [National Center for Atmospheric Research (NCAR); Mirin, Arthur A. [Lawrence Livermore National Laboratory (LLNL); Taylor, Mark [Sandia National Laboratories (SNL); Vertenstein, Mariana [National Center for Atmospheric Research (NCAR)

    2011-01-01

    The Community Earth System Model (CESM), released in June 2010, incorporates new physical process and new numerical algorithm options, significantly enhancing simulation capabilities over its predecessor, the June 2004 release of the Community Climate System Model. CESM also includes enhanced performance tuning options and performance portability capabilities. This paper describes the performance engineering aspects of the CESM and reports performance and performance scaling on both the Cray XT5 and the IBM BG/P for four representative production simulations, varying both problem size and enabled physical processes. The paper also describes preliminary performance results for high resolution simulations using over 200,000 processor cores, indicating the promise of ongoing work in numerical algorithms and where further work is required.
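
    To illustrate how scaling results such as these are typically summarized, the sketch below computes speedup and parallel efficiency from simulation rates at increasing core counts. The numbers are invented and do not correspond to the reported CESM benchmarks.

```python
# Illustrative strong-scaling efficiency calculation (made-up numbers, not CESM data).
def scaling_efficiency(core_counts, sim_years_per_day):
    """Compute speedup and parallel efficiency relative to the smallest run."""
    base_cores, base_rate = core_counts[0], sim_years_per_day[0]
    rows = []
    for cores, rate in zip(core_counts, sim_years_per_day):
        speedup = rate / base_rate
        efficiency = speedup / (cores / base_cores)
        rows.append((cores, rate, speedup, efficiency))
    return rows

if __name__ == "__main__":
    for cores, rate, speedup, eff in scaling_efficiency(
            [1024, 2048, 4096, 8192], [2.0, 3.8, 7.0, 12.1]):
        print(f"{cores:>6} cores: {rate:5.1f} sim-yrs/day, "
              f"speedup {speedup:4.2f}, efficiency {eff:5.1%}")
```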

  19. Solid-state lighting for the International Space Station: Tests of visual performance and melatonin regulation

    Science.gov (United States)

    Brainard, George C.; Coyle, William; Ayers, Melissa; Kemp, John; Warfield, Benjamin; Maida, James; Bowen, Charles; Bernecker, Craig; Lockley, Steven W.; Hanifin, John P.

    2013-11-01

    The International Space Station (ISS) uses General Luminaire Assemblies (GLAs) that house fluorescent lamps for illuminating the astronauts' working and living environments. Solid-state light emitting diodes (LEDs) are attractive candidates for replacing the GLAs on the ISS. The advantages of LEDs over conventional fluorescent light sources include lower up-mass, power consumption and heat generation, as well as fewer toxic materials, greater resistance to damage and long lamp life. A prototype Solid-State Lighting Assembly (SSLA) was developed and successfully installed on the ISS. The broad aim of the ongoing work is to test light emitted by prototype SSLAs for supporting astronaut vision and assessing neuroendocrine, circadian, neurobehavioral and sleep effects. Three completed ground-based studies are presented here including experiments on visual performance, color discrimination, and acute plasma melatonin suppression in cohorts of healthy, human subjects under different SSLA light exposure conditions within a high-fidelity replica of the ISS Crew Quarters (CQ). All visual tests were done under indirect daylight at 201 lx, fluorescent room light at 531 lx and 4870 K SSLA light in the CQ at 1266 lx. Visual performance was assessed with numerical verification tests (NVT). NVT data show that there are no significant differences in score (F=0.73, p=0.48) or time (F=0.14, p=0.87) for subjects performing five contrast tests (10%-100%). Color discrimination was assessed with Farnsworth-Munsell 100 Hue tests (FM-100). The FM-100 data showed no significant differences (F=0.01, p=0.99) in color discrimination for indirect daylight, fluorescent room light and 4870 K SSLA light in the CQ. Plasma melatonin suppression data show that there are significant differences (F=29.61, psleep-wake patterns. These studies will help determine if SSLA lighting can be used both to support astronaut vision and serve as an in-flight countermeasure for circadian desynchrony, sleep

  20. Mirror visual feedback-induced performance improvement and the influence of hand dominance

    Directory of Open Access Journals (Sweden)

    Viola eRjosk

    2016-01-01

    Full Text Available Mirror Visual Feedback (MVF) is a promising technique in clinical settings that can be used to augment performance of an untrained limb. Several studies with healthy volunteers and patients using transcranial magnetic stimulation (TMS) or functional magnetic resonance imaging (fMRI) indicate that functional alterations within primary motor cortex (M1) might be one candidate mechanism that could explain MVF-induced changes in behavior. Until now, most studies have used MVF to improve performance of the non-dominant hand. The question remains whether the behavioural effect of MVF differs according to hand dominance. Here, we conducted a study with two groups of young, healthy right-handed volunteers who performed a complex ball-rotation task while receiving MVF of the dominant (n = 16, group 1, MVFDH) or non-dominant hand (n = 16, group 2, MVFNDH). We found no significant differences in baseline performance of the untrained hand between groups before MVF was applied. Furthermore, there was no significant difference in the amount of performance improvement between MVFDH and MVFNDH, indicating that the outcome of MVF seems not to be influenced by hand dominance. Thus, our findings might have important implications in neurorehabilitation, suggesting that patients suffering from unilateral motor impairments might benefit from MVF regardless of the dominance of the affected limb.

  1. Combined effects of attention and motivation on visual task performance: transient and sustained motivational effects.

    Science.gov (United States)

    Engelmann, Jan B; Damaraju, Eswar; Padmala, Srikanth; Pessoa, Luiz

    2009-01-01

    We investigated how the brain integrates motivational and attentional signals by using a neuroimaging paradigm that provided separate estimates for transient cue- and target-related signals, in addition to sustained block-related responses. Participants performed a Posner-type task in which an endogenous cue predicted target location on 70% of trials, while motivation was manipulated by varying magnitude and valence of a cash incentive linked to task performance. Our findings revealed increased detection performance (d') as a function of incentive value. In parallel, brain signals revealed that increases in absolute incentive magnitude led to cue- and target-specific response modulations that were independent of sustained state effects across visual cortex, fronto-parietal regions, and subcortical regions. Interestingly, state-like effects of incentive were observed in several of these brain regions, too, suggesting that both transient and sustained fMRI signals may contribute to task performance. For both cue and block periods, the effects of administering incentives were correlated with individual trait measures of reward sensitivity. Taken together, our findings support the notion that motivation improves behavioral performance in a demanding attention task by enhancing evoked responses across a distributed set of anatomical sites, many of which have been previously implicated in attentional processing. However, the effect of motivation was not simply additive as the impact of absolute incentive was greater during invalid than valid trials in several brain regions, possibly because motivation had a larger effect on reorienting than orienting attentional mechanisms at these sites.

  2. Combined effects of attention and motivation on visual task performance: transient and sustained motivational effects

    Directory of Open Access Journals (Sweden)

    Jan B Engelmann

    2009-03-01

    Full Text Available We investigated how the brain integrates motivational and attentional signals by using a neuroimaging paradigm that provided separate estimates for transient cue- and target-related signals, in addition to sustained block-related responses. Participants performed a Posner-type task in which an endogenous cue predicted target location on 70% of trials, while motivation was manipulated by varying magnitude and valence of a cash incentive linked to task performance. Our findings revealed increased detection performance (d’) as a function of incentive value. In parallel, brain signals revealed that increases in absolute incentive magnitude led to cue- and target-specific response modulations that were independent of sustained state effects across visual cortex, fronto-parietal regions, and subcortical regions. Interestingly, state-like effects of incentive were observed in several of these brain regions, too, suggesting that both transient and sustained fMRI signals may contribute to task performance. For both cue and block periods, the effects of administering incentives were correlated with individual trait measures of reward sensitivity. Taken together, our findings support the notion that motivation improves behavioral performance in a demanding attention task by enhancing evoked responses across a distributed set of anatomical sites, many of which have been previously implicated in attentional processing. However, the effect of motivation was not simply additive as the impact of absolute incentive was greater during invalid than valid trials in several brain regions, possibly because motivation had a larger effect on reorienting than orienting attentional mechanisms at these sites.

  3. Realistic Avatar Eye and Head Animation Using a Neurobiological Model of Visual Attention

    National Research Council Canada - National Science Library

    Itti, L; Dhavale, N; Pighin, F

    2003-01-01

    We describe a neurobiological model of visual attention and eye/head movements in primates, and its application to the automatic animation of a realistic virtual human head watching an unconstrained...

  4. Introduction to Information Visualization (InfoVis) Techniques for Model-Based Systems Engineering

    Science.gov (United States)

    Sindiy, Oleg; Litomisky, Krystof; Davidoff, Scott; Dekens, Frank

    2013-01-01

    This paper presents insights that conform to numerous system modeling languages/representation standards. The insights are drawn from best practices of Information Visualization as applied to aerospace-based applications.

  5. The effects of circadian phase, time awake, and imposed sleep restriction on performing complex visual tasks: evidence from comparative visual search.

    Science.gov (United States)

    Pomplun, Marc; Silva, Edward J; Ronda, Joseph M; Cain, Sean W; Münch, Mirjam Y; Czeisler, Charles A; Duffy, Jeanne F

    2012-07-26

    Cognitive performance not only differs between individuals, but also varies within them, influenced by factors that include sleep-wakefulness and biological time of day (circadian phase). Previous studies have shown that both factors influence accuracy rather than the speed of performing a visual search task, which can be hazardous in safety-critical tasks such as air-traffic control or baggage screening. However, prior investigations used simple, brief search tasks requiring little use of working memory. In order to study the effects of circadian phase, time awake, and chronic sleep restriction on the more realistic scenario of longer tasks requiring the sustained interaction of visual working memory and attentional control, the present study employed two comparative visual search tasks. In these tasks, participants had to detect a mismatch between two otherwise identical object distributions, with one of the tasks (mirror task) requiring an additional mental image transformation. Time awake and circadian phase both had significant influences on the speed, but not the accuracy of task performance. Over the course of three weeks of chronic sleep restriction, speed but not accuracy of task performance was impacted. The results suggest measures for safer performance of important tasks and point out the importance of minimizing the impact of circadian phase and sleep-wake history in laboratory vision experiments.

  6. Short-term visual performance of soft multifocal contact lenses for presbyopia

    Directory of Open Access Journals (Sweden)

    Jennifer Sha

    2016-04-01

    Full Text Available ABSTRACT Purpose: To compare visual acuity (VA), contrast sensitivity, stereopsis, and subjective visual performance of Acuvue® Oasys® for Presbyopia (AOP), Air Optix® Aqua Multifocal (AOMF), and Air Optix® Aqua Single Vision (AOSV) lenses in patients with presbyopia. Methods: A single-blinded crossover trial was conducted. Twenty patients with mild presbyopia (add ≤+1.25 D) and 22 with moderate/severe presbyopia (add ≥+1.50 D) wore lenses bilaterally for 1 h, with a minimum overnight washout period between the use of each lens. Measurements included high- and low-contrast visual acuity (HCVA and LCVA, respectively) at a distance, contrast sensitivity (CS) at a distance, HCVA at intermediate (70 cm) and near (50 cm & 40 cm) distances, stereopsis, and subjective questionnaires regarding vision clarity, ghosting, overall vision satisfaction, and comfort. The test variables were compared among the lens types using repeated-measures ANOVA. Results: Distance variables (HCVA, LCVA, and CS) were significantly worse with multifocal lenses than with the AOSV lens (p≤0.008), except for the AOMF lens in the mild presbyopia group, in which no significant difference was observed (p>0.05). Multifocal lenses had significantly greater HCVA at 40 cm than the AOSV lens (p≤0.026). The AOMF lens had greater intermediate HCVA than the AOP lens (p<…) … presbyopia group (p≤0.03). Few significant differences in subjective variables were observed, with no significant difference in the overall vision satisfaction observed between lens types (p>0.05). The proportions of patients willing to buy AOSV, AOMF, and AOP lenses were 20%, 40%, and 50%, respectively, in the mild presbyopia group and 14%, 32%, and 23%, respectively, in the moderate/severe presbyopia group; however, these differences were not statistically significant (p≥0.159). Conclusions: Further development of multifocal lenses is required before significant advantages of multifocal lenses over single vision lenses are observed in patients with presbyopia.

  7. Short-term visual performance of soft multifocal contact lenses for presbyopia.

    Science.gov (United States)

    Sha, Jennifer; Bakaraju, Ravi C; Tilia, Daniel; Chung, Jiyoon; Delaney, Shona; Munro, Anna; Ehrmann, Klaus; Thomas, Varghese; Holden, Brien A

    2016-04-01

    To compare visual acuity (VA), contrast sensitivity, stereopsis, and subjective visual performance of Acuvue® Oasys® for Presbyopia (AOP), Air Optix® Aqua Multifocal (AOMF), and Air Optix® Aqua Single Vision (AOSV) lenses in patients with presbyopia. A single-blinded crossover trial was conducted. Twenty patients with mild presbyopia (add ≤+1.25 D) and 22 with moderate/severe presbyopia (add ≥+1.50 D) wore lenses bilaterally for 1 h, with a minimum overnight washout period between the use of each lens. Measurements included high- and low-contrast visual acuity (HCVA and LCVA, respectively) at a distance, contrast sensitivity (CS) at a distance, HCVA at intermediate (70 cm) and near (50 cm & 40 cm) distances, stereopsis, and subjective questionnaires regarding vision clarity, ghosting, overall vision satisfaction, and comfort. The test variables were compared among the lens types using repeated-measures ANOVA. Distance variables (HCVA, LCVA, and CS) were significantly worse with multifocal lens than with AOSV lens (p≤0.008), except for AOMF lens in the mild presbyopia group in which no significant difference was observed (p>0.05). Multifocal lenses had significantly greater HCVA at 40 cm than AOSV lens (p≤0.026). AOMF lens had greater intermediate HCVA than AOP lens (p<…) … presbyopia group (p≤0.03). Few significant differences in subjective variables were observed, with no significant difference in the overall vision satisfaction observed between lens types (p>0.05). The proportions of patients willing to buy AOSV, AOMF, and AOP lenses were 20%, 40%, and 50%, respectively, in the mild presbyopia group and 14%, 32%, and 23%, respectively, in the moderate/severe presbyopia group; however, these differences were not statistically significant (p≥0.159). Further development of multifocal lenses is required before significant advantages of multifocal lenses over single vision lens are observed in patients with presbyopia.

  8. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models

    Science.gov (United States)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.

    2015-12-01

    A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25~5km horizontal grid spacings. The main advantage of the CRM is that it can allow explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products due to large data volume (~10TB) and complexity of CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes (1) A SCL data model enables various CRM simulation outputs in NetCDF, including the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) model, to be accessed and processed by Hadoop, (2) A parallel NetCDF-to-CSV converter supports NU-WRF and GCE model outputs, (3) A technique visualizes Hadoop-resident data with IDL, (4) A technique subsets Hadoop-resident data, compliant to the SCL data model, with HIVE or Impala via HUE's Web interface, (5) A prototype enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a
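
    As a rough illustration of the NetCDF-to-CSV flattening step mentioned in item (2), the sketch below converts selected variables of a single NetCDF file to CSV using the netCDF4 Python library. The file and variable names are hypothetical, and the actual SCL converter is parallel and Hadoop-oriented rather than serial.

```python
# Hedged sketch of a serial NetCDF-to-CSV flattening step, conceptually similar
# to (but much simpler than) the parallel converter described for the SCL.
# File and variable names are hypothetical.
import csv
from netCDF4 import Dataset

def netcdf_to_csv(nc_path, var_names, out_path):
    """Flatten selected variables of a NetCDF file into one CSV row per cell."""
    with Dataset(nc_path) as ds:
        arrays = {name: ds.variables[name][:] for name in var_names}
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["flat_index", *var_names])
            n_cells = int(min(a.size for a in arrays.values()))
            for i in range(n_cells):
                writer.writerow([i, *(a.flat[i] for a in arrays.values())])

if __name__ == "__main__":
    # Hypothetical usage: flatten two fields from one model output file.
    netcdf_to_csv("wrfout_example.nc", ["T2", "RAINNC"], "wrfout_example.csv")
```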

  9. Performance of hedging strategies in interval models

    NARCIS (Netherlands)

    Roorda, Berend; Engwerda, Jacob; Schumacher, J.M.

    2005-01-01

    For a proper assessment of risks associated with the trading of derivatives, the performance of hedging strategies should be evaluated not only in the context of the idealized model that has served as the basis of strategy development, but also in the context of other models. In this paper we

  10. Functional magnetic resonance imaging of the visual cortex performed in children under sedation to assist in presurgical planning.

    Science.gov (United States)

    Li, Weier; Wait, Scott D; Ogg, Robert J; Scoggins, Matt A; Zou, Ping; Wheless, James; Boop, Frederick A

    2013-05-01

    Advances in brain imaging have allowed for more sophisticated mapping of crucial neural structures. Functional MRI (fMRI) measures local changes in blood oxygenation associated with changes in neural activity and is useful in mapping cortical activation. Applications of this imaging modality have generally been restricted to cooperative patients; however, fMRI has proven successful in localizing the motor cortex for neurosurgical planning in uncooperative children under sedation. The authors demonstrate that the use of fMRI to localize the visual cortex in sedated children can be safely and effectively performed, allowing for more accurate presurgical planning to spare visual structures. Between 2007 and 2009, 11 children (age range 1-11 years) underwent fMRI for neurosurgical planning while under sedation. Blood oxygen level-dependent fMRI was performed to detect visual cortex activation during stimulation through closed eyelids. Visual stimulation was presented in block design with periods of flashing light alternated with darkness. Functional MRI was successful in identifying visual cortex in each of the 11 children tested. There were no complications with propofol sedation or the fMRI. All children suffered from epilepsy, 5 had brain tumors, and 1 had tuberous sclerosis. After fMRI was performed, 6 patients underwent surgery. Frameless stereotactic guidance was synchronized with fMRI data to design an approach to spare visual structures during resection. There were no cases where a false negative led to unexpected visual field deficits or other side effects of surgery. In 2 cases, the fMRI results demonstrated that the tracts were already disrupted: in one case from a prior tumor operation and in another from dysplasia. Functional MRI for evaluation of visual pathways can be safely and reproducibly performed in young or uncooperative children under light sedation. Identification of primary visual cortex aids in presurgical planning to avoid vision loss in

  11. Development of a Model Specification for Performance Monitoring Systems for Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Haves, Philip; Hitchcock, Robert J.; Gillespie, Kenneth L.; Brook, Martha; Shockman, Christine; Deringer, Joseph J.; Kinney, Kristopher L.

    2006-08-01

    The paper describes the development of a model specification for performance monitoring systems for commercial buildings. The specification focuses on four key aspects of performance monitoring: (1) performance metrics; (2) measurement system requirements; (3) data acquisition and archiving; and (4) data visualization and reporting. The aim is to assist building owners in specifying the extensions to their control systems that are required to provide building operators with the information needed to operate their buildings more efficiently and to provide automated diagnostic tools with the information required to detect and diagnose faults and problems that degrade energy performance. The paper reviews the potential benefits of performance monitoring, describes the specification guide and discusses briefly the ways in which it could be implemented. A prototype advanced visualization tool is also described, along with its application to performance monitoring. The paper concludes with a description of the ways in which the specification and the visualization tool are being disseminated and deployed.
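
    As one illustration of a performance metric of the kind such a monitoring system might report, the sketch below flags days whose metered energy use exceeds a simple trailing-average baseline. The data, baseline rule, and threshold are hypothetical and are not drawn from the model specification.

```python
# Illustrative performance-monitoring metric: daily energy use compared against
# a simple trailing baseline. Data, threshold, and baseline rule are hypothetical.
from statistics import mean

def flag_high_energy_days(daily_kwh, baseline_days=7, threshold=1.2):
    """Flag any day whose use exceeds threshold x the trailing-week average."""
    flags = []
    for i, kwh in enumerate(daily_kwh):
        if i < baseline_days:
            continue
        baseline = mean(daily_kwh[i - baseline_days:i])
        if kwh > threshold * baseline:
            flags.append((i, kwh, round(baseline, 1)))
    return flags

if __name__ == "__main__":
    meter = [410, 395, 402, 408, 399, 405, 412, 401, 398, 530, 404, 400]
    for day, kwh, base in flag_high_energy_days(meter):
        print(f"day {day}: {kwh} kWh vs. 7-day baseline {base} kWh")
```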

  12. Sparse representation, modeling and learning in visual recognition theory, algorithms and applications

    CERN Document Server

    Cheng, Hong

    2015-01-01

    This unique text/reference presents a comprehensive review of the state of the art in sparse representations, modeling and learning. The book examines both the theoretical foundations and details of algorithm implementation, highlighting the practical application of compressed sensing research in visual recognition and computer vision. Topics and features: provides a thorough introduction to the fundamentals of sparse representation, modeling and learning, and the application of these techniques in visual recognition; describes sparse recovery approaches, robust and efficient sparse represen
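
    A minimal sparse-recovery example of the kind surveyed in the book, using Orthogonal Matching Pursuit from scikit-learn on a random toy dictionary (the data and parameters are illustrative, not taken from the text):

```python
# Toy sparse-coding example: synthesize a signal from a few dictionary atoms,
# then recover its sparse code with Orthogonal Matching Pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_features, n_atoms, n_nonzero = 64, 128, 5

# Random overcomplete dictionary with unit-norm atoms.
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Synthesize a signal from a few atoms, then recover the sparse code.
true_support = rng.choice(n_atoms, size=n_nonzero, replace=False)
x_true = np.zeros(n_atoms)
x_true[true_support] = rng.standard_normal(n_nonzero)
y = D @ x_true

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
omp.fit(D, y)
print("recovered support:", np.flatnonzero(omp.coef_))
print("true support:     ", np.sort(true_support))
```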

  13. Virtual phacoemulsification surgical simulation using visual guidance and performance parameters as a feasible proficiency assessment tool.

    Science.gov (United States)

    Lam, Chee Kiang; Sundaraj, Kenneth; Sulaiman, Mohd Nazri; Qamarruddin, Fazilawati A

    2016-06-14

    Computer based surgical training is believed to be capable of providing a controlled virtual environment for medical professionals to conduct standardized training or new experimental procedures on virtual human body parts, which are generated and visualised three-dimensionally on a digital display unit. The main objective of this study was to conduct virtual phacoemulsification cataract surgery to compare performance by users with different proficiency on a virtual reality platform equipped with a visual guidance system and a set of performance parameters. Ten experienced ophthalmologists and six medical residents were invited to perform the virtual surgery of the four main phacoemulsification cataract surgery procedures - 1) corneal incision (CI), 2) capsulorhexis (C), 3) phacoemulsification (P), and 4) intraocular lens implantation (IOL). Each participant was required to perform the complete phacoemulsification cataract surgery using the simulator for three consecutive trials (a standardized 30-min session). The performance of the participants during the three trials was supported using a visual guidance system and evaluated by referring to a set of parameters that was implemented in the performance evaluation system of the simulator. Subjects with greater experience obtained significantly higher scores in all four main procedures - CI1 (ρ = 0.038), CI2 (ρ = 0.041), C1 (ρ = 0.032), P2 (ρ = 0.035) and IOL1 (ρ = 0.011). It was also found that experience improved the completion times in all modules - CI4 (ρ = 0.026), C4 (ρ = 0.018), P6 (ρ = 0.028) and IOL4 (ρ = 0.029). Positive correlation was observed between experience and anti-tremor - C2 (ρ = 0.026), P3 (ρ = 0.015), P4 (ρ = 0.042) and IOL2 (ρ = 0.048) and similarly with anti-rupture - CI3 (ρ = 0.013), C3 (ρ = 0.027), P5 (ρ = 0.021) and IOL3 (ρ = 0.041). No significant difference was observed between the groups with regards to
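
    Purely as an illustration of relating operator experience to a simulator performance score, the sketch below computes a rank correlation on invented data; the simulator's actual scoring rules and statistical tests are not reproduced here.

```python
# Hypothetical sketch: rank correlation between operator experience and a
# composite simulator score. All numbers are invented.
from scipy.stats import spearmanr

# Years of ophthalmic experience and corresponding composite simulator scores.
experience_years = [0, 1, 1, 2, 3, 5, 8, 10, 12, 15]
composite_scores = [41, 48, 52, 55, 63, 70, 74, 78, 83, 86]

rho, p_value = spearmanr(experience_years, composite_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```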

  14. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti
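
    A toy example of equation-based performance modeling in this spirit is the M/M/1 queue; the sketch below computes utilization, mean population, and mean response time from assumed arrival and service rates. It is not an example drawn from the book.

```python
# Toy analytical performance model: M/M/1 queue metrics from arrival and
# service rates (assumed values, for illustration only).
def mm1_metrics(arrival_rate, service_rate):
    """Return utilization, mean number in system, and mean response time."""
    if arrival_rate >= service_rate:
        raise ValueError("system is unstable: arrival rate must be below service rate")
    rho = arrival_rate / service_rate          # utilization
    n = rho / (1.0 - rho)                      # mean number in system
    r = 1.0 / (service_rate - arrival_rate)    # mean response time
    return rho, n, r

if __name__ == "__main__":
    for lam in (2.0, 5.0, 8.0):
        rho, n, r = mm1_metrics(arrival_rate=lam, service_rate=10.0)
        print(f"lambda={lam:4.1f}: utilization={rho:.2f}, N={n:5.2f}, R={r:5.3f}s")
```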

  15. Biofilm carrier migration model describes reactor performance.

    Science.gov (United States)

    Boltz, Joshua P; Johnson, Bruce R; Takács, Imre; Daigger, Glen T; Morgenroth, Eberhard; Brockmann, Doris; Kovács, Róbert; Calhoun, Jason M; Choubert, Jean-Marc; Derlon, Nicolas

    2017-06-01

    The accuracy of a biofilm reactor model depends on the extent to which physical system conditions (particularly bulk-liquid hydrodynamics and their influence on biofilm dynamics) deviate from the ideal conditions upon which the model is based. It follows that an improved capacity to model a biofilm reactor does not necessarily rely on an improved biofilm model, but does rely on an improved mathematical description of the biofilm reactor and its components. Existing biofilm reactor models typically include a one-dimensional biofilm model, a process (biokinetic and stoichiometric) model, and a continuous flow stirred tank reactor (CFSTR) mass balance that [when organizing CFSTRs in series] creates a pseudo two-dimensional (2-D) model of bulk-liquid hydrodynamics approaching plug flow. In such a biofilm reactor model, the user-defined biofilm area is specified for each CFSTR; thereby, Xcarrier does not exit the boundaries of the CFSTR to which it is assigned or exchange across boundaries with other CFSTRs in the series. The error introduced by this pseudo 2-D biofilm reactor modeling approach may adversely affect model results and limit model-user capacity to accurately calibrate a model. This paper presents a new sub-model that describes the migration of Xcarrier and associated biofilms, and evaluates the impact that Xcarrier migration and axial dispersion has on simulated system performance. Relevance of the new biofilm reactor model to engineering situations is discussed by applying it to known biofilm reactor types and operational conditions.
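
    To make the tanks-in-series idea concrete, the sketch below integrates substrate concentrations in a few CFSTRs in series with first-order removal and a symmetric exchange flow between neighbors, loosely analogous to the carrier-migration/axial-dispersion effect discussed. All coefficients and units are invented and this is not the paper's sub-model.

```python
# Simplified sketch of N CFSTRs in series with first-order substrate removal
# and a symmetric back-mixing (exchange) flow between adjacent tanks.
# Coefficients and units are illustrative only.
import numpy as np

def cfstr_series(n_tanks=4, q=100.0, v=25.0, k=0.8, s_in=200.0,
                 q_exchange=20.0, dt=0.001, t_end=5.0):
    """Explicit-Euler integration of substrate in CFSTRs in series."""
    s = np.zeros(n_tanks)
    for _ in range(int(t_end / dt)):
        ds = np.zeros(n_tanks)
        for i in range(n_tanks):
            inflow = s_in if i == 0 else s[i - 1]
            # Advective through-flow plus first-order removal in tank i.
            ds[i] += (q / v) * (inflow - s[i]) - k * s[i]
        for i in range(n_tanks - 1):
            # Symmetric exchange between neighbors approximates back-mixing
            # (and, by analogy, migration between model compartments).
            ex = (q_exchange / v) * (s[i + 1] - s[i])
            ds[i] += ex
            ds[i + 1] -= ex
        s += dt * ds
    return s

if __name__ == "__main__":
    print("tank concentrations (inlet to outlet):", np.round(cfstr_series(), 2))
```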

  16. Rural–Urban Disparity in Students’ Academic Performance in Visual Arts Education

    Directory of Open Access Journals (Sweden)

    Nana Afia Amponsaa Opoku-Asare

    2015-12-01

    Full Text Available Rural–urban disparity in economic and social development in Ghana has led to disparities in educational resources and variations in students’ achievement in different parts of the country. Nonetheless, senior high schools (SHSs) in rural and urban areas follow the same curriculum, and their students write the same West Africa Senior Secondary Certificate Examination (WASSCE), which qualifies them to access higher e