WorldWideScience

Sample records for computer driven visual

  1. Security and policy driven computing

    CERN Document Server

    Liu, Lei

    2010-01-01

    Security and Policy Driven Computing covers recent advances in security, storage, parallelization, and computing, as well as applications. The author incorporates a wealth of analysis, including studies on intrusion detection and key management, computer storage policy, and transactional management. The book first describes multiple variables and index structure derivation for high-dimensional data distribution and applies numeric methods to proposed search methods. It also focuses on discovering relations, logic, and knowledge for policy management. To manage performance, the text discusses con...

  2. Query-Driven Visualization and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ruebel, Oliver; Bethel, E. Wes; Prabhat, Mr.; Wu, Kesheng

    2012-11-01

    This report focuses on an approach to high-performance visualization and analysis, termed query-driven visualization and analysis (QDV). QDV aims to reduce the amount of data that needs to be processed by the visualization, analysis, and rendering pipelines. The goal of the data-reduction process is to separate out data that is "scientifically interesting" and to focus visualization, analysis, and rendering on that interesting subset. The premise is that for any given visualization or analysis task, the data subset of interest is much smaller than the larger, complete data set. This strategy, extracting smaller data subsets of interest and focusing the visualization processing on these subsets, is complementary to the approach of increasing the capacity of the visualization, analysis, and rendering pipelines through parallelism. This report discusses the fundamental concepts in QDV and their relationship to different stages in the visualization and analysis pipelines, and presents QDV's application to problems in diverse areas, ranging from forensic cybersecurity to high-energy physics.
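    The core QDV idea, selecting a small "interesting" subset before any expensive analysis or rendering, can be sketched as a simple predicate filter. The data set and thresholds below are hypothetical, not taken from the report:

```python
import numpy as np

# Hypothetical particle data set: columns are energy and velocity.
rng = np.random.default_rng(0)
data = np.column_stack((rng.uniform(0, 100, 10_000),   # energy
                        rng.uniform(0, 10, 10_000)))   # velocity

# Query-driven reduction: keep only the "scientifically interesting"
# records (here, high energy AND high velocity) before the analysis
# and rendering pipelines ever touch the data.
mask = (data[:, 0] > 90) & (data[:, 1] > 8)
subset = data[mask]

print(f"{len(subset)} of {len(data)} records selected")
```

    The selected subset is typically a small fraction of the full data, which is what makes the approach complementary to simply parallelizing the pipeline.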

  3. Visualizing the Computational Intelligence Field

    NARCIS (Netherlands)

    L. Waltman (Ludo); J.H. van den Berg (Jan); U. Kaymak (Uzay); N.J.P. van Eck (Nees Jan)

    2006-01-01

    In this paper, we visualize the structure and the evolution of the computational intelligence (CI) field. Based on our visualizations, we analyze the way in which the CI field is divided into several subfields. The visualizations provide insight into the characteristics of each subfield...

  4. Visualization in scientific computing

    National Research Council Canada - National Science Library

    Nielson, Gregory M; Shriver, Bruce D; Rosenblum, Lawrence J

    1990-01-01

    The purpose of this text is to provide a reference source to scientists, engineers, and students who are new to scientific visualization or who are interested in expanding their knowledge in this subject...

  5. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussion on research involving the use of data and digital images as a means of understanding, analyzing, and visualizing phenomena and experiments. The emphasis is not only on graphically representing data to enhance its visual analysis, but also on the imaging systems that contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompasses multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, and Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented at the International Conference on Advanced Computational Engineering and Experimenting (ACE-X) conferences: 2010 (Paris), 2011 (Algarve), 2012 (Istanbul), and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  6. Consistent data-driven computational mechanics

    Science.gov (United States)

    González, D.; Chinesta, F.; Cueto, E.

    2018-05-01

    We present a novel method, within the realm of data-driven computational mechanics, to obtain reliable and thermodynamically sound simulations from experimental data. We thus avoid the need to fit any phenomenological model in the construction of the simulation model. Such techniques open unprecedented possibilities in the framework of data-driven application systems and, particularly, in the paradigm of Industry 4.0.

  7. Consumer Driven Computer Game Design

    OpenAIRE

    Trappey, Charles

    2005-01-01

    The Critical Incident Technique (CIT) is widely used to study customer satisfaction and dissatisfaction in the service industry. CIT provides questionnaire respondents with an open format to describe, in their own words, incidents that create lasting impressions. The purpose of this research is to develop a methodology for computer game design with the goal of creating games that increase the consumer's satisfaction through play. Too often game designers, either with or without inte...

  8. Model-Driven Study of Visual Memory

    National Research Council Canada - National Science Library

    Sekuler, Robert

    2004-01-01

    .... We synthesized concepts, insights, and methods from memory research, and from vision research, working within a coherent, quantitative framework for understanding episodic visual recognition memory...

  9. Large Field Visualization with Demand-Driven Calculation

    Science.gov (United States)

    Moran, Patrick J.; Henze, Chris

    1999-01-01

    We present a system designed for the interactive definition and visualization of fields derived from large data sets: the Demand-Driven Visualizer (DDV). The system allows the user to write arbitrary expressions to define new fields, and then apply a variety of visualization techniques to the result. Expressions can include differential operators and numerous other built-in functions, all of which are evaluated at specific field locations completely on demand. The payoff of following a demand-driven design philosophy throughout becomes particularly evident when working with large time-series data, where the costs of eager evaluation alternatives can be prohibitive.
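    The demand-driven idea, evaluating a derived-field expression only at the locations actually probed rather than eagerly over the whole grid, can be sketched as follows. The class and field names are hypothetical illustrations, not the DDV API:

```python
import math

# Minimal sketch of demand-driven field evaluation: a derived field is
# a closure evaluated only where a visualization technique samples it.
class DerivedField:
    def __init__(self, expr):
        self.expr = expr          # callable: (x, y) -> field value
        self.evaluations = 0      # bookkeeping to demonstrate laziness

    def __call__(self, x, y):
        self.evaluations += 1
        return self.expr(x, y)

# An "arbitrary expression" defining a new field from coordinates.
magnitude = DerivedField(lambda x, y: math.hypot(x, y))

# Only the probed locations are ever computed -- two evaluations here,
# not a full-grid (eager) pass over the entire data set.
samples = [magnitude(x, 0.0) for x in (3.0, 4.0)]
print(samples, magnitude.evaluations)
```

    For large time-series data, this is the difference between touching a handful of samples per frame and materializing every derived value eagerly.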

  10. Dictionary learning in visual computing

    CERN Document Server

    Zhang, Qiang

    2015-01-01

    The last few years have witnessed fast development of dictionary learning approaches for a set of visual computing tasks, largely due to their use in developing new techniques based on sparse representation. Compared with conventional techniques employing manually defined dictionaries, such as the Fourier transform and the wavelet transform, dictionary learning aims at obtaining a dictionary adaptively from the data so as to support optimal sparse representation of the data. In contrast to conventional clustering algorithms like K-means, where a data point is associated with only one cluster c...

  11. New tools to aid in scientific computing and visualization

    International Nuclear Information System (INIS)

    Wallace, M.G.; Christian-Frear, T.L.

    1992-01-01

    In this paper, two computer programs are described which aid in the pre- and post-processing of computer-generated data. CoMeT (Computational Mechanics Toolkit) is a customizable, interactive, graphical, menu-driven program that provides the analyst with a consistent, user-friendly interface to analysis codes. Trans Vol (Transparent Volume Visualization) is a specialized tool for the scientific three-dimensional visualization of complex solids by the technique of volume rendering. Both tools are described in basic detail, along with an application example concerning the simulation of contaminant migration from an underground nuclear repository.

  12. The Computational Anatomy of Visual Neglect.

    Science.gov (United States)

    Parr, Thomas; Friston, Karl J

    2018-02-01

    Visual neglect is a debilitating neuropsychological phenomenon that has many clinical implications and, in cognitive neuroscience, offers an important lesion-deficit model. In this article, we describe a computational model of visual neglect based upon active inference. Our objective is to establish a computational and neurophysiological process theory that can be used to disambiguate among the various causes of this important syndrome; namely, a computational neuropsychology of visual neglect. We introduce a Bayes-optimal model based upon Markov decision processes that reproduces the visual searches induced by the line cancellation task (used to characterize visual neglect at the bedside). We then consider 3 distinct ways in which the model could be lesioned to reproduce neuropsychological (visual search) deficits. Crucially, these 3 levels of pathology map nicely onto the neuroanatomy of saccadic eye movements and the systems implicated in visual neglect. © The Author 2017. Published by Oxford University Press.

  13. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.

  14. Visualization Tools for Teaching Computer Security

    Science.gov (United States)

    Yuan, Xiaohong; Vega, Percy; Qadah, Yaseen; Archer, Ricky; Yu, Huiming; Xu, Jinsheng

    2010-01-01

    Using animated visualization tools has been an important teaching approach in computer science education. We have developed three visualization and animation tools that demonstrate various information security concepts and actively engage learners. The information security concepts illustrated include: packet sniffer and related computer network…

  15. Data-Driven Healthcare: Challenges and Opportunities for Interactive Visualization.

    Science.gov (United States)

    Gotz, David; Borland, David

    2016-01-01

    The healthcare industry's widespread digitization efforts are reshaping one of the largest sectors of the world's economy. This transformation is enabling systems that promise to use ever-improving data-driven evidence to help doctors make more precise diagnoses, institutions identify at-risk patients for intervention, clinicians develop more personalized treatment plans, and researchers better understand medical outcomes within complex patient populations. Given the scale and complexity of the data required to achieve these goals, advanced data visualization tools have the potential to play a critical role. This article reviews a number of visualization challenges unique to the healthcare discipline.

  16. Visualizing a silicon quantum computer

    International Nuclear Information System (INIS)

    Sanders, Barry C; Hollenberg, Lloyd C L; Edmundson, Darran; Edmundson, Andrew

    2008-01-01

    Quantum computation is a fast-growing, multi-disciplinary research field. The purpose of a quantum computer is to execute quantum algorithms that efficiently solve computational problems intractable within the existing paradigm of 'classical' computing built on bits and Boolean gates. While collaboration between computer scientists, physicists, chemists, engineers, mathematicians and others is essential to the project's success, traditional disciplinary boundaries can hinder progress and make communicating the aims of quantum computing and future technologies difficult. We have developed a four-minute animation as a tool for representing, understanding and communicating a silicon-based solid-state quantum computer to a variety of audiences, either as a stand-alone animation to be used by expert presenters or embedded into a longer movie as short animated sequences. The paper includes a generally applicable recipe for successful scientific animation production.

  17. Visualizing a silicon quantum computer

    Science.gov (United States)

    Sanders, Barry C.; Hollenberg, Lloyd C. L.; Edmundson, Darran; Edmundson, Andrew

    2008-12-01

    Quantum computation is a fast-growing, multi-disciplinary research field. The purpose of a quantum computer is to execute quantum algorithms that efficiently solve computational problems intractable within the existing paradigm of 'classical' computing built on bits and Boolean gates. While collaboration between computer scientists, physicists, chemists, engineers, mathematicians and others is essential to the project's success, traditional disciplinary boundaries can hinder progress and make communicating the aims of quantum computing and future technologies difficult. We have developed a four-minute animation as a tool for representing, understanding and communicating a silicon-based solid-state quantum computer to a variety of audiences, either as a stand-alone animation to be used by expert presenters or embedded into a longer movie as short animated sequences. The paper includes a generally applicable recipe for successful scientific animation production.

  18. Visualizing a silicon quantum computer

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, Barry C [Institute for Quantum Information Science, University of Calgary, Calgary, Alberta T2N 1N4 (Canada); Hollenberg, Lloyd C L [ARC Centre of Excellence for Quantum Computer Technology, School of Physics, University of Melbourne, Victoria 3010 (Australia); Edmundson, Darran; Edmundson, Andrew [EDM Studio Inc., Level 2, 850 16 Avenue SW, Calgary, Alberta T2R 0S9 (Canada)], E-mail: bsanders@qis.ucalgary.ca, E-mail: lloydch@unimelb.edu.au, E-mail: darran@edmstudio.com

    2008-12-15

    Quantum computation is a fast-growing, multi-disciplinary research field. The purpose of a quantum computer is to execute quantum algorithms that efficiently solve computational problems intractable within the existing paradigm of 'classical' computing built on bits and Boolean gates. While collaboration between computer scientists, physicists, chemists, engineers, mathematicians and others is essential to the project's success, traditional disciplinary boundaries can hinder progress and make communicating the aims of quantum computing and future technologies difficult. We have developed a four-minute animation as a tool for representing, understanding and communicating a silicon-based solid-state quantum computer to a variety of audiences, either as a stand-alone animation to be used by expert presenters or embedded into a longer movie as short animated sequences. The paper includes a generally applicable recipe for successful scientific animation production.

  19. Visual implementation of computer communication

    OpenAIRE

    Gunnarsson, Tobias; Johansson, Hans

    2010-01-01

    Communication is a fundamental part of life, and during the 20th century several new means of communication were developed: from the first telegraph, which made it possible to send messages over long distances, to radio communication and the telephone. In the last decades, computer-to-computer communication at high speed has become increasingly important, and so has the need for understanding computer communication. Since data communication today works at speeds that are so high...

  20. Science-Driven Computing: NERSC's Plan for 2006-2010

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Kramer, William T.C.; Bailey, David H.; Banda, Michael J.; Bethel, E. Wes; Craw, James M.; Fortney, William J.; Hules, John A.; Meyer, Nancy L.; Meza, Juan C.; Ng, Esmond G.; Rippe, Lynn E.; Saphir, William C.; Verdier, Francesca; Walter, Howard A.; Yelick, Katherine A.

    2005-05-16

    NERSC has developed a five-year strategic plan focusing on three components: Science-Driven Systems, Science-Driven Services, and Science-Driven Analytics. (1) Science-Driven Systems: Balanced introduction of the best new technologies for complete computational systems (computing, storage, networking, visualization, and analysis), coupled with the activities necessary to engage vendors in addressing the DOE computational science requirements in their future roadmaps. (2) Science-Driven Services: The entire range of support activities, from high-quality operations and user services to direct scientific support, that enable a broad range of scientists to effectively use NERSC systems in their research. NERSC will concentrate on resources needed to realize the promise of the new highly scalable architectures for scientific discovery in multidisciplinary computational science projects. (3) Science-Driven Analytics: The architectural and systems enhancements and services required to integrate NERSC's powerful computational and storage resources to provide scientists with new tools to effectively manipulate, visualize, and analyze the huge data sets derived from simulations and experiments.

  1. SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics.

    Science.gov (United States)

    Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis

    2015-09-01

    Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most "useful" or "interesting". The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics.
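    The deviation-based utility metric can be sketched as a distance between the normalized aggregate distribution of the studied subset and that of the full data. The data, column names, and choice of Euclidean distance below are illustrative assumptions; the SeeDB paper discusses several candidate distance metrics:

```python
import numpy as np

def deviation_utility(target_counts, reference_counts):
    """Deviation-based utility: distance between the normalized
    aggregate distribution of a target subset and that of the
    reference (full) data. Euclidean distance is one simple choice."""
    t = np.asarray(target_counts, dtype=float)
    r = np.asarray(reference_counts, dtype=float)
    t /= t.sum()
    r /= r.sum()
    return float(np.linalg.norm(t - r))

# Hypothetical aggregate (e.g. sales by region) for two candidate views.
reference = [33, 33, 34]   # distribution over the whole data set
skewed = [10, 30, 60]      # subset whose distribution deviates strongly
flat = [32, 34, 34]        # subset that mirrors the reference

# The skewed view scores higher, so it would be recommended first.
print(deviation_utility(skewed, reference) >
      deviation_utility(flat, reference))
```

    A view whose distribution matches the overall data scores near zero and is pruned; large deviations mark the "interesting" visualizations worth surfacing.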

  2. Computational foundations of the visual number sense.

    Science.gov (United States)

    Stoianov, Ivilin Peev; Zorzi, Marco

    2017-01-01

    We provide an emergentist perspective on the computational mechanism underlying numerosity perception, its development, and the role of inhibition, based on our deep neural network model. We argue that the influence of continuous visual properties does not challenge the notion of number sense, but reveals limit conditions for the computation that yields invariance in numerosity perception. Alternative accounts should be formalized in a computational model.

  3. Visualization of unsteady computational fluid dynamics

    Science.gov (United States)

    Haimes, Robert

    1994-11-01

    A brief summary is presented of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamics (CFD) results. This environment requires a supercomputer; massively parallel processors (MPPs) and clusters of workstations acting as a single MPP (by working concurrently on the same task) provide the required computational bandwidth for CFD calculations of transient problems. The cluster of reduced-instruction-set computer (RISC) workstations is a recent advent based on the low cost and high performance that workstation vendors provide. With the proper software, such a cluster can act as a multiple-instruction/multiple-data (MIMD) machine. A new set of software tools is being designed specifically to address the visualization of 3D unsteady CFD results in these environments. Three user manuals for the parallel version of Visual3, pV3, revision 1.00, make up the bulk of this report.

  4. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H, E-mail: tpeterka@mcs.anl.go [Sandia National Laboratories, California, Livermore, CA 94551 (United States)

    2009-07-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using techniques similar to those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  5. Parallel visualization on leadership computing resources

    International Nuclear Information System (INIS)

    Peterka, T; Ross, R B; Shen, H-W; Ma, K-L; Kendall, W; Yu, H

    2009-01-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using techniques similar to those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  6. Eye structure, activity rhythms and visually-driven behavior are tuned to visual niche in ants

    Directory of Open Access Journals (Sweden)

    Ayse eYilmaz

    2014-06-01

    Insects have evolved physiological adaptations and behavioural strategies that allow them to cope with a broad spectrum of environmental challenges and contribute to their evolutionary success. Visual performance plays a key role in this success. Correlates between life style and eye organization have been reported in various insect species. Yet, if and how visual ecology translates effectively into different visual discrimination and learning capabilities has been less explored. Here we report results from optical and behavioural analyses performed in two sympatric ant species, Formica cunicularia and Camponotus aethiops. We show that the former are diurnal while the latter are cathemeral. Accordingly, F. cunicularia workers present compound eyes with higher resolution, while C. aethiops workers exhibit eyes with lower resolution but higher sensitivity. The discrimination and learning of visual stimuli differs significantly between these species in controlled dual-choice experiments: discrimination learning of small-field visual stimuli is achieved by F. cunicularia but not by C. aethiops, while both species master the discrimination of large-field visual stimuli. Our work thus provides a paradigmatic example of how the timing of foraging activities and the visual environment match the organization of compound eyes and visually-driven behaviour. This correspondence underlines the relevance of an ecological/evolutionary framework for analyses in behavioural neuroscience.

  7. Integrated Optoelectronic Networks for Application-Driven Multicore Computing

    Science.gov (United States)

    2017-05-08

    Report AFRL-AFOSR-VA-TR-2017-0102, Integrated Optoelectronic Networks for Application-Driven Multicore Computing, Sudeep Pasricha, Colorado State University (grant FA9550-13-1-0110). Surviving abstract fragment: "...and supportive materials with innovative architectural designs that integrate these components according to system-wide application needs."

  8. Biomedical Visual Computing: Case Studies and Challenges

    KAUST Repository

    Johnson, Christopher

    2012-01-01

    Advances in computational geometric modeling, imaging, and simulation let researchers build and test models of increasing complexity, generating unprecedented amounts of data. As recent research in biomedical applications illustrates, visualization will be critical in making this vast amount of data usable; it's also fundamental to understanding models of complex phenomena. © 2012 IEEE.

  9. Biomedical Visual Computing: Case Studies and Challenges

    KAUST Repository

    Johnson, Christopher

    2012-01-01

    Advances in computational geometric modeling, imaging, and simulation let researchers build and test models of increasing complexity, generating unprecedented amounts of data. As recent research in biomedical applications illustrates, visualization will be critical in making this vast amount of data usable; it's also fundamental to understanding models of complex phenomena. © 2012 IEEE.

  10. Qubus ancilla-driven quantum computation

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Katherine Louise [School of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70808, United States and School of Physics and Astronomy, University of Leeds, LS2 9JT (United Kingdom); De, Suvabrata; Kendon, Viv [School of Physics and Astronomy, University of Leeds, LS2 9JT (United Kingdom); Munro, Bill [National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan and NTT Basic Research Laboratories, 3-1, Morinosato Wakamiya Atsugi-shi, Kanagawa 243-0198 (Japan)

    2014-12-04

    Hybrid matter-optical systems offer a robust, scalable path to quantum computation. Such systems have an ancilla which acts as a bus connecting the qubits. We demonstrate how using a continuous variable qubus as the ancilla provides savings in the total number of operations required when computing with many qubits.

  11. Data-Driven Visualization and Group Analysis of Multichannel EEG Coherence with Functional Units

    NARCIS (Netherlands)

    Caat, Michael ten; Maurits, Natasha M.; Roerdink, Jos B.T.M.

    2008-01-01

    A typical data-driven visualization of electroencephalography (EEG) coherence is a graph layout, with vertices representing electrodes and edges representing significant coherences between electrode signals. A drawback of this layout is its visual clutter for multichannel EEG. To reduce clutter, ...
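    The graph layout this record describes, electrodes as vertices and significant coherences as edges, can be sketched as a threshold over a coherence matrix. The matrix and threshold below are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical pairwise coherence matrix for 4 EEG electrodes
# (symmetric, values in [0, 1], diagonal is self-coherence).
coh = np.array([[1.0, 0.8, 0.2, 0.1],
                [0.8, 1.0, 0.7, 0.3],
                [0.2, 0.7, 1.0, 0.9],
                [0.1, 0.3, 0.9, 1.0]])

# Edges of the visualization graph: electrode pairs whose coherence
# exceeds a significance threshold (0.6 here, chosen for illustration).
threshold = 0.6
edges = [(i, j) for i in range(len(coh)) for j in range(i + 1, len(coh))
         if coh[i, j] > threshold]
print(edges)
```

    With many channels, the number of such edges grows quickly, which is exactly the clutter problem the functional-unit grouping in this work is meant to reduce.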

  12. Computational and Experimental Approaches to Visual Aesthetics

    Science.gov (United States)

    Brachmann, Anselm; Redies, Christoph

    2017-01-01

    Aesthetics has been the subject of long-standing debates by philosophers and psychologists alike. In psychology, it is generally agreed that aesthetic experience results from an interaction between perception, cognition, and emotion. By experimental means, this triad has been studied in the field of experimental aesthetics, which aims to gain a better understanding of how aesthetic experience relates to fundamental principles of human visual perception and brain processes. Recently, researchers in computer vision have also gained interest in the topic, giving rise to the field of computational aesthetics. With computing hardware and methodology developing at a high pace, the modeling of perceptually relevant aspects of aesthetic stimuli has huge potential. In this review, we present an overview of recent developments in computational aesthetics and how they relate to experimental studies. In the first part, we cover topics such as the prediction of ratings, style and artist identification, as well as computational methods in art history, such as the detection of influences among artists or forgeries. We also describe currently used computational algorithms, such as classifiers and deep neural networks. In the second part, we summarize results from the field of experimental aesthetics and cover several isolated image properties that are believed to have an effect on the aesthetic appeal of visual stimuli. Their relation to each other and to findings from computational aesthetics are discussed. Moreover, we compare the strategies in the two fields of research and suggest that both fields would greatly profit from a joint research effort. We hope to encourage researchers from both disciplines to work more closely together in order to understand visual aesthetics from an integrated point of view. PMID:29184491

  13. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis, containing hands-on problem sets that can be worked out in MS Excel or ArcGIS, as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  14. Computer-based visual communication in aphasia.

    Science.gov (United States)

    Steele, R D; Weinrich, M; Wertz, R T; Kleczewska, M K; Carlson, G S

    1989-01-01

    The authors describe their recently developed Computer-aided VIsual Communication (C-VIC) system, and report results of single-subject experimental designs probing its use with five chronic, severely impaired aphasic individuals. Studies replicate earlier results obtained with a non-computerized system, demonstrate patient competence with the computer implementation, extend the system's utility, and identify promising areas of application. Results of the single-subject experimental designs clarify patients' learning, generalization, and retention patterns, and highlight areas of performance difficulties. Future directions for the project are indicated.

  15. Attention and visual memory in visualization and computer graphics.

    Science.gov (United States)

    Healey, Christopher G; Enns, James T

    2012-07-01

    A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. An important consideration during visualization design is the role of human visual perception. How we "see" details in an image can directly impact a viewer's efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. We also discuss how challenges in visualization are motivating research in psychophysics.

  16. Allen Brain Atlas-Driven Visualizations: a web-based gene expression energy visualization tool.

    Science.gov (United States)

    Zaldivar, Andrew; Krichmar, Jeffrey L

    2014-01-01

    The Allen Brain Atlas-Driven Visualizations (ABADV) is a publicly accessible web-based tool created to retrieve and visualize expression energy data from the Allen Brain Atlas (ABA) across multiple genes and brain structures. Though the ABA offers their own search engine and software for researchers to view their growing collection of online public data sets, including extensive gene expression and neuroanatomical data from human and mouse brain, many of their tools limit the amount of genes and brain structures researchers can view at once. To complement their work, ABADV generates multiple pie charts, bar charts and heat maps of expression energy values for any given set of genes and brain structures. Such a suite of free and easy-to-understand visualizations allows for easy comparison of gene expression across multiple brain areas. In addition, each visualization links back to the ABA so researchers may view a summary of the experimental detail. ABADV is currently supported on modern web browsers and is compatible with expression energy data from the Allen Mouse Brain Atlas in situ hybridization data. By creating this web application, researchers can immediately obtain and survey large amounts of expression energy data from the ABA, which they can then use to supplement their work or perform meta-analysis. In the future, we hope to enable ABADV across multiple data resources.

  17. Allen Brain Atlas-Driven Visualizations: A Web-Based Gene Expression Energy Visualization Tool

    Directory of Open Access Journals (Sweden)

    Andrew Zaldivar

    2014-05-01

    The Allen Brain Atlas-Driven Visualizations (ABADV) is a publicly accessible web-based tool created to retrieve and visualize expression energy data from the Allen Brain Atlas (ABA) across multiple genes and brain structures. Though the ABA offers their own search engine and software for researchers to view their growing collection of online public data sets, including extensive gene expression and neuroanatomical data from human and mouse brain, many of their tools limit the amount of genes and brain structures researchers can view at once. To complement their work, ABADV generates multiple pie charts, bar charts and heat maps of expression energy values for any given set of genes and brain structures. Such a suite of free and easy-to-understand visualizations allows for easy comparison of gene expression across multiple brain areas. In addition, each visualization links back to the ABA so researchers may view a summary of the experimental detail. ABADV is currently supported on modern web browsers and is compatible with expression energy data from the Allen Mouse Brain Atlas in situ hybridization data. By creating this web application, researchers can immediately obtain and survey large amounts of expression energy data from the ABA, which they can then use to supplement their work or perform meta-analysis. In the future, we hope to enable ABADV across multiple data resources.

  18. Computer simulation of driven Alfven waves

    International Nuclear Information System (INIS)

    Geary, J.L. Jr.

    1986-01-01

    The first particle simulation study of shear Alfven wave resonance heating is presented. Particle simulation codes self-consistently follow the time evolution of the individual and collective aspects of particle dynamics as well as wave dynamics in a fully nonlinear fashion. Alfven wave heating is a possible means of increasing the temperature of magnetized plasmas. A new particle simulation model was developed for this application that incorporates Darwin's formulation of the electromagnetic fields with a guiding center approximation for electron motion perpendicular to the ambient magnetic field. The implementation of this model and the examination of its theoretical and computational properties are presented. With this model, several cases of Alfven wave heating are examined in both uniform and nonuniform simulation systems in a two-dimensional slab. For the inhomogeneous case studies, the kinetic Alfven wave develops in the vicinity of the shear Alfven resonance region.

  19. Computer assisted visualization of digital mammography images

    International Nuclear Information System (INIS)

    Funke, M.; Breiter, N.; Grabbe, E.; Netsch, T.; Biehl, M.; Peitgen, H.O.

    1999-01-01

    Purpose: In a clinical study, the feasibility of using a mammography workstation for the display and interpretation of digital mammography images was evaluated, and the results were compared with the corresponding laser film hard copies. Materials and Methods: Digital phosphor plate radiographs of the entire breast were obtained in 30 patients using a direct magnification mammography system. The images were displayed for interpretation on the computer monitor of a dedicated mammography workstation and also presented as laser film hard copies on a film view box for comparison. The images were evaluated with respect to image handling, image quality, and the visualization of relevant structures by 3 readers. Results: Handling and contrast of the monitor-displayed images were found to be superior to the film hard copies. Image noise was found in some cases but did not compromise the interpretation of the monitor images. The visualization of relevant structures was equal with both modalities. Altogether, image interpretation with the mammography workstation was considered to be easy, quick, and confident. Conclusions: Computer-assisted visualization and interpretation of digital mammography images using a dedicated workstation can be performed with sufficiently high diagnostic accuracy. (orig.) [de]

  20. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
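
    The core idea of classifying a location by the modality of its ensemble distribution can be sketched with a simple histogram peak count. This is a toy stand-in, not the authors' classifier (their confidence metrics are omitted), and the sample ensembles below are invented for illustration:

```python
import numpy as np

def classify_modality(samples, bins=9, min_frac=0.1):
    """Label one location's ensemble distribution by counting significant
    histogram peaks: a peak is a bin strictly higher than both neighbours
    that also holds at least `min_frac` of the samples (noise guard)."""
    counts, _ = np.histogram(samples, bins=bins)
    threshold = min_frac * len(samples)
    peaks = 0
    for i, c in enumerate(counts):
        left = counts[i - 1] if i > 0 else -1
        right = counts[i + 1] if i < len(counts) - 1 else -1
        if c > left and c > right and c >= threshold:
            peaks += 1
    return "multimodal" if peaks > 1 else "unimodal"

# Deterministic toy ensembles: values 0..8 with fixed multiplicities.
unimodal_ensemble = np.repeat(np.arange(9), [5, 10, 20, 40, 60, 40, 20, 10, 5])
bimodal_ensemble = np.repeat(np.arange(9), [5, 40, 60, 40, 5, 40, 60, 40, 5])
```

    A "multimodal" result at a high-variance location signals exactly the divergent ensemble trends that a mean-and-variance summary would hide.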

  1. Data-driven in computational plasticity

    Science.gov (United States)

    Ibáñez, R.; Abisset-Chavanne, E.; Cueto, E.; Chinesta, F.

    2018-05-01

    Computational mechanics is taking on enormous importance in industry nowadays. On one hand, numerical simulations can be seen as a tool that allows industry to perform fewer experiments, reducing costs. On the other hand, the physical processes to be simulated are becoming more complex, requiring new constitutive relationships to capture such behaviors. Therefore, when a new material is to be classified, an open question remains: which constitutive equation should be calibrated. In the present work, model order reduction techniques are exploited to identify the plastic behavior of a material, opening an alternative route to traditional calibration methods. Indeed, the main objective is to provide a plastic yield function such that the mismatch between experiments and simulations is minimized. Therefore, once the experimental results and the parameterization of the plastic yield function are provided, finding the optimal plastic yield function can be seen either as a traditional optimization problem or as an interpolation problem. It is important to highlight that the dimensionality of the problem equals the number of dimensions in the parameterization of the yield function. Thus, the use of sparse interpolation techniques seems almost compulsory.
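
    In the simplest possible setting the calibration problem reduces to least squares. The sketch below fits a one-parameter von Mises yield surface to hypothetical plane-stress yield points; the paper generalizes this to many shape parameters, which is where sparse interpolation becomes necessary. All numbers here are invented for illustration:

```python
import numpy as np

# Hypothetical experimental yield points (sigma1, sigma2), in MPa.
points = np.array([[200.0, 0.0], [0.0, 205.0], [198.0, 198.0], [-202.0, 0.0]])

def equivalent_stress(s1, s2):
    """Plane-stress von Mises equivalent stress."""
    return np.sqrt(s1**2 - s1 * s2 + s2**2)

eq = equivalent_stress(points[:, 0], points[:, 1])
# One-parameter calibration: the least-squares yield stress is the mean
# equivalent stress, and the residual measures experiment/model mismatch.
sigma_y = eq.mean()
mismatch = np.abs(eq - sigma_y).max()
```

    With a richer parameterization of the yield surface, the same mismatch-minimization becomes a search over a higher-dimensional coefficient space.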

  2. Activity-Driven Computing Infrastructure - Pervasive Computing in Healthcare

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Olesen, Anders Konring

    In many work settings, and especially in healthcare, work is distributed among many cooperating actors, who are constantly moving around and are frequently interrupted. In line with other researchers, we use the term pervasive computing to describe a computing infrastructure that supports work...

  3. Specialized Computer Systems for Environment Visualization

    Science.gov (United States)

    Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.

    2018-06-01

    The need for real-time image generation of landscapes arises in various fields, as part of tasks solved by virtual and augmented reality systems as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing and graphically visualizing geographic data. Algorithmic and hardware-software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path-tracing algorithm with a two-level hierarchy of bounding volumes that finds intersections with axis-aligned bounding boxes. The proposed algorithm eliminates branching and is hence better suited to implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used to solve the problem of qualitative visualization of reliefs and landscapes. The algorithm is implemented on parallel systems: MPI clusters and Compute Unified Device Architecture networks. Results show that the implementation on MPI clusters is more efficient than on Graphics Processing Unit/graphics processing clusters and allows real-time synthesis. The organization and algorithms of a parallel GPU system for 3D pseudo-stereo image/video synthesis are proposed. After analyzing the feasibility of realizing each stage on a parallel GPU architecture, 3D pseudo-stereo synthesis is performed. An experimental prototype of a specialized hardware-software system for 3D pseudo-stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo-stereo imaging to the architecture of GPU systems is efficient, accelerating the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame without additional optimization procedures. The acceleration averages 11 and 54 times on the test GPUs.
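
    The branch-elimination point can be illustrated with the standard slab test for ray/AABB intersection, written here in Python as a generic textbook sketch rather than the authors' GPU code. Precomputing reciprocal ray directions lets min/max replace per-axis sign branches:

```python
def ray_aabb_intersect(origin, inv_dir, box_min, box_max):
    """Slab test: clip the ray's parameter interval against the three
    axis-aligned slabs; the ray hits the box iff the interval survives.
    inv_dir holds precomputed 1/d per axis, so no per-axis branching
    on the direction sign is needed (min/max do the work)."""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv
        t2 = (hi - o) * inv
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

# Unit box; a ray along the main diagonal hits, a ray offset in y misses.
hit = ray_aabb_intersect((-2.0, -2.0, -2.0), (1.0, 1.0, 1.0),
                         (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
miss = ray_aabb_intersect((-2.0, 5.0, -2.0), (1.0, 1.0, 1.0),
                          (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
```

    On a GPU each thread evaluates the same min/max sequence regardless of the ray, which is why this formulation avoids warp divergence.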

  4. Visual and Computational Modelling of Minority Games

    Directory of Open Access Journals (Sweden)

    Robertas Damaševičius

    2017-02-01

    The paper analyses the Minority Game and focuses on analysis and computational modelling of several variants (variable payoff, coalition-based, and ternary voting) of the Minority Game using the UAREI (User-Action-Rule-Entities-Interface) model. UAREI is a model for formal specification of software gamification, and the UAREI visual modelling language is used for graphical representation of game mechanics. The UAREI model also provides an embedded executable modelling framework to evaluate how the rules of the game will work for the players in practice. We demonstrate the flexibility of the UAREI model by modelling different variants of Minority Game rules for game design.
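
    For readers unfamiliar with the underlying game, a minimal Minority Game round-loop looks like the following. This is a deliberately stripped-down sketch (one fixed strategy per agent, no virtual strategy scores, payoffs, or coalitions), not the UAREI executable model:

```python
import random

def minority_game(n_agents=101, n_rounds=200, memory=3, seed=1):
    """Toy Minority Game: an odd number of agents repeatedly choose side
    0 or 1, and those on the minority side score a point. Each agent owns
    one fixed random strategy mapping the recent outcome history to a
    choice (the usual setup gives each agent several strategies)."""
    rng = random.Random(seed)
    n_histories = 2 ** memory
    strategies = [[rng.randrange(2) for _ in range(n_histories)]
                  for _ in range(n_agents)]
    scores = [0] * n_agents
    history = 0  # last `memory` outcomes packed into an int
    for _ in range(n_rounds):
        choices = [s[history] for s in strategies]
        ones = sum(choices)
        minority = 1 if ones * 2 < n_agents else 0  # odd n_agents: no tie
        for i, c in enumerate(choices):
            if c == minority:
                scores[i] += 1
        history = ((history << 1) | minority) % n_histories
    return scores

scores = minority_game()
```

    The variants the paper models (variable payoff, coalitions, ternary voting) change the choice set and the scoring rule inside this loop.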

  5. A Computer-Based Visual Analog Scale,

    Science.gov (United States)

    1992-06-01

    ...keys on the computer keyboard or other input device. The initial position of the arrow is always in the center of the scale to prevent biasing the...

  6. Interactive volume exploration of petascale microscopy data streams using a visualization-driven virtual memory approach

    KAUST Repository

    Hadwiger, Markus

    2012-12-01

    This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience. © 1995-2012 IEEE.
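
    The cache-miss mechanism described above can be sketched as an LRU block cache whose misses trigger on-demand block construction. This is a schematic Python analogue, not the paper's implementation; `build_block` stands in for assembling a 3D block from 2D microscope tiles:

```python
from collections import OrderedDict

class VirtualVolumeCache:
    """Sketch of a visualization-driven virtual memory scheme: 3D blocks
    are built on demand the first time the renderer touches them (a cache
    miss), and least-recently-used blocks are evicted over budget."""

    def __init__(self, build_block, capacity=4):
        self.build_block = build_block  # out-of-core construction callback
        self.capacity = capacity
        self.cache = OrderedDict()      # block_id -> block, in LRU order
        self.misses = 0

    def fetch(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # mark as recently used
            return self.cache[block_id]
        self.misses += 1                       # propagate miss out-of-core
        block = self.build_block(block_id)
        self.cache[block_id] = block
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict LRU block
        return block
```

    In the paper's design the "fetch" calls are issued only for blocks the ray-caster actually visits, which is what restricts computation to the visible data.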

  7. Interactive volume exploration of petascale microscopy data streams using a visualization-driven virtual memory approach

    KAUST Repository

    Hadwiger, Markus; Beyer, Johanna; Jeong, Wonki; Pfister, Hanspeter

    2012-01-01

    This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience. © 1995-2012 IEEE.

  8. A computational theory of visual receptive fields.

    Science.gov (United States)

    Lindeberg, Tony

    2013-12-01

    A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory for what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance properties under scale changes, affine image deformations, and Galilean transformations of space-time as occur for real-world image data as well as specific requirements of (ii) temporal causality implying that the future cannot be accessed and (iii) a time-recursive updating mechanism of a limited temporal buffer of the past as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity regarding spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, regarding feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a both theoretically well-founded and general framework for expressing visual operations. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision. Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative
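
    The Gaussian-derivative profiles the theory singles out are straightforward to generate. Below is a minimal sketch of the 1D spatial kernels only (2D receptive fields follow as separable products; the spatio-temporal and chromatic families are omitted):

```python
import numpy as np

def gaussian_derivative_kernel(sigma, order, radius=None):
    """Sampled 1D Gaussian (order 0) or its first/second derivative,
    the kind of idealized receptive-field profile predicted by
    scale-space theory. `sigma` sets the spatial scale."""
    if radius is None:
        radius = int(3 * sigma)  # truncate at ~3 standard deviations
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    if order == 0:
        return g
    if order == 1:
        return -x / sigma**2 * g        # odd, edge-detector-like profile
    if order == 2:
        return (x**2 - sigma**2) / sigma**4 * g  # even, bar-detector-like
    raise ValueError("order must be 0, 1, or 2")
```

    Convolving an image row with these kernels at several values of `sigma` gives the multi-scale derivative responses used for feature detection.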

  9. Feature-based memory-driven attentional capture: Visual working memory content affects visual attention.

    NARCIS (Netherlands)

    Olivers, C.N.L.; Meijer, F.; Theeuwes, J.

    2006-01-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly

  10. Locative media and data-driven computing experiments

    Directory of Open Access Journals (Sweden)

    Sung-Yueh Perng

    2016-06-01

    Over the past two decades urban social life has undergone a rapid and pervasive geocoding, becoming mediated, augmented and anticipated by location-sensitive technologies and services that generate and utilise big, personal, locative data. The production of these data has prompted the development of exploratory data-driven computing experiments that seek to find ways to extract value and insight from them. These projects often start from the data, rather than from a question or theory, and try to imagine and identify their potential utility. In this paper, we explore the desires and mechanics of data-driven computing experiments. We demonstrate how both locative media data and computing experiments are ‘staged’ to create new values and computing techniques, which in turn are used to try and derive possible futures that are ridden with unintended consequences. We argue that using computing experiments to imagine potential urban futures produces effects that often have little to do with creating new urban practices. Instead, these experiments promote Big Data science and the prospect that data produced for one purpose can be recast for another and act as alternative mechanisms of envisioning urban futures.

  11. Computer-graphic visualization of dynamics

    International Nuclear Information System (INIS)

    Stewart, H.B.

    1986-01-01

    As engineered systems become increasingly sophisticated and complex, questions of efficiency, reliability, and safety demand the application of more powerful methods of analysis. One indication of this is the accelerating trend away from purely static or quasi-steady system modeling toward models that include essentially dynamic behavior. It is here that the qualitative ideas of nonlinear dynamics, dealing as they do with the most typical behavior in real dynamical systems, can be expected to play an increasingly prominent role. As part of a continuing investigation of the most important low-order differential equations, an interactive computer graphics environment has been created for the study of systems in three-dimensional phase space. This environment makes available the basic control of both numerical simulation and graphic visualization by a specially designed menu system. A key ingredient in this environment is the possibility of graphic communication not only from machine to man, but also from man to machine. Thus to specify the starting point for a numerical integration, for example, the user points to a location in phase space on the screen of the graphics terminal (using crosshairs or a mouse and cursor), bypassing the necessity to give numerical values of the phase-space coordinates. By devising a flexible computer interface which implements conceptual approaches to phase-space analysis of dynamical systems, significant advances in understanding of prototypical differential equations have been achieved
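
    The numerical core of such an environment is just an integrator seeded from a user-picked phase-space point. A sketch using fixed-step RK4 on the Lorenz system, a prototypical three-dimensional flow (the abstract does not name a specific system; Lorenz is chosen here for illustration):

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system, a standard example of a
    low-order differential equation in three-dimensional phase space."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

def integrate(rhs, start, dt=0.01, steps=2000):
    """Fixed-step RK4 from a starting point (in the article's setting,
    picked graphically with crosshairs); the returned array of
    phase-space points is what a graphics front end would draw."""
    traj = np.empty((steps + 1, 3))
    s = np.asarray(start, dtype=float)
    traj[0] = s
    for i in range(steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        traj[i + 1] = s
    return traj

traj = integrate(lorenz, (1.0, 1.0, 1.0))
```

    The man-to-machine direction described in the abstract amounts to mapping a screen coordinate back to `start` before calling `integrate`.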

  12. Feature-Based Memory-Driven Attentional Capture: Visual Working Memory Content Affects Visual Attention

    Science.gov (United States)

    Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan

    2006-01-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…

  13. Visualizing Infrared (IR) Spectroscopy with Computer Animation

    Science.gov (United States)

    Abrams, Charles B.; Fine, Leonard W.

    1996-01-01

    IR Tutor, an interactive, animated infrared (IR) spectroscopy tutorial has been developed for Macintosh and IBM-compatible computers. Using unique color animation, complicated vibrational modes can be introduced to beginning students. Rules governing the appearance of IR absorption bands become obvious because the vibrational modes can be visualized. Each peak in the IR spectrum is highlighted, and the animation of the corresponding normal mode can be shown. Students can study each spectrum stepwise, or click on any individual peak to see its assignment. Important regions of each spectrum can be expanded and spectra can be overlaid for comparison. An introduction to the theory of IR spectroscopy is included, making the program a complete instructional package. Our own success in using this software for teaching and research in both academic and industrial environments will be described. IR Tutor consists of three sections: (1) The 'Introduction' is a review of basic principles of spectroscopy. (2) 'Theory' begins with the classical model of a simple diatomic molecule and is expanded to include larger molecules by introducing normal modes and group frequencies. (3) 'Interpretation' is the heart of the tutorial. Thirteen IR spectra are analyzed in detail, covering the most important functional groups. This section features color animation of each normal mode, full interactivity, overlay of related spectra, and expansion of important regions. This section can also be used as a reference.
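
    The classical diatomic model that opens the 'Theory' section reduces to one formula: the harmonic wavenumber is nu_bar = sqrt(k/mu) / (2*pi*c), with k the bond force constant and mu the reduced mass. A quick numerical check with carbon monoxide (k of roughly 1902 N/m) reproduces its harmonic stretch near 2170 cm^-1; the specific constants below are standard values, not taken from the tutorial:

```python
import math

C_CM_PER_S = 2.99792458e10   # speed of light, cm/s
AMU_KG = 1.66053906660e-27   # atomic mass unit, kg

def harmonic_wavenumber(k, m1_amu, m2_amu):
    """Harmonic vibration wavenumber (cm^-1) of a diatomic oscillator
    with force constant k (N/m) and atomic masses in amu."""
    mu = m1_amu * m2_amu / (m1_amu + m2_amu) * AMU_KG  # reduced mass, kg
    return math.sqrt(k / mu) / (2 * math.pi * C_CM_PER_S)

# Carbon monoxide: C-12 and O-16 with k ~ 1902 N/m.
co_stretch = harmonic_wavenumber(1902.0, 12.0, 15.995)
```

    The same relation explains the group-frequency idea: heavier atoms or weaker bonds shift the band to lower wavenumbers.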

  14. Mixed Initiative Visual Analytics Using Task-Driven Recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Cook, Kristin A.; Cramer, Nicholas O.; Israel, David; Wolverton, Michael J.; Bruce, Joseph R.; Burtner, Edwin R.; Endert, Alexander

    2015-12-07

    Visual data analysis is composed of a collection of cognitive actions and tasks to decompose, internalize, and recombine data to produce knowledge and insight. Visual analytic tools provide interactive visual interfaces to data to support tasks involved in discovery and sensemaking, including forming hypotheses, asking questions, and evaluating and organizing evidence. Myriad analytic models can be incorporated into visual analytic systems, at the cost of increasing complexity in the analytic discourse between user and system. Techniques exist to increase the usability of interacting with such analytic models, such as inferring data models from user interactions to steer the underlying models of the system via semantic interaction, shielding users from having to do so explicitly. Such approaches are often also referred to as mixed-initiative systems. Researchers studying the sensemaking process have called for development of tools that facilitate analytic sensemaking through a combination of human and automated activities. However, design guidelines do not exist for mixed-initiative visual analytic systems to support iterative sensemaking. In this paper, we present a candidate set of design guidelines and introduce the Active Data Environment (ADE) prototype, a spatial workspace supporting the analytic process via task recommendations invoked by inferences on user interactions within the workspace. ADE recommends data and relationships based on a task model, enabling users to co-reason with the system about their data in a single, spatial workspace. This paper provides an illustrative use case, a technical description of ADE, and a discussion of the strengths and limitations of the approach.

  15. Image Visual Realism: From Human Perception to Machine Computation.

    Science.gov (United States)

    Fan, Shaojing; Ng, Tian-Tsong; Koenig, Bryan L; Herberg, Jonathan S; Jiang, Ming; Shen, Zhiqi; Zhao, Qi

    2017-08-30

    Visual realism is defined as the extent to which an image appears to people as a photo rather than computer generated. Assessing visual realism is important in applications like computer graphics rendering and photo retouching. However, current realism evaluation approaches use either labor-intensive human judgments or automated algorithms largely dependent on comparing renderings to reference images. We develop a reference-free computational framework for visual realism prediction to overcome these constraints. First, we construct a benchmark dataset of 2520 images with comprehensive human annotated attributes. From statistical modeling on this data, we identify image attributes most relevant for visual realism. We propose both empirically-based (guided by our statistical modeling of human data) and CNN-learned features to predict visual realism of images. Our framework has the following advantages: (1) it creates an interpretable and concise empirical model that characterizes human perception of visual realism; (2) it links computational features to latent factors of human image perception.

  16. Data-driven security analysis, visualization and dashboards

    CERN Document Server

    Jacobs, Jay

    2014-01-01

    Uncover hidden patterns of data and respond with countermeasures. Security professionals need all the tools at their disposal to increase their visibility in order to prevent security breaches and attacks. This careful guide explores two of the most powerful: data analysis and visualization. You'll soon understand how to harness and wield data, from collection and storage to management and analysis as well as visualization and presentation. Using a hands-on approach with real-world examples, this book shows you how to gather feedback, measure the effectiveness of your security methods, and ma...

  17. Temporal stability of visual search-driven biometrics

    Science.gov (United States)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2015-03-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, temporally stable personalized fingerprint of perceptual organization.
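
    As a toy illustration of the "fingerprint" idea, a first-order Markov chain over discretized gaze regions can stand in for the full HMM: each viewer gets a smoothed transition matrix, and an unlabeled scanpath is attributed to whichever viewer's matrix assigns it the higher likelihood. The scanpaths and region labels below are invented:

```python
import numpy as np

def transition_matrix(states, n_states, alpha=1.0):
    """First-order transition model of a gaze sequence over screen
    regions; additive smoothing keeps unseen transitions nonzero.
    A first-order chain is a simplification of the paper's HMM."""
    counts = np.full((n_states, n_states), alpha)
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(states, T):
    """Log-probability of a scanpath under a transition model."""
    return sum(np.log(T[a, b]) for a, b in zip(states[:-1], states[1:]))

# Hypothetical discretized scanpaths for two viewers (regions 0-3).
viewer_a = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
viewer_b = [0, 2, 3, 0, 2, 3, 0, 2, 3, 0, 2, 3]
Ta = transition_matrix(viewer_a, 4)
Tb = transition_matrix(viewer_b, 4)

probe = [0, 1, 0, 1, 0, 1]  # new, unlabeled sequence
identified = "A" if log_likelihood(probe, Ta) > log_likelihood(probe, Tb) else "B"
```

    Cross-validating such models across sessions recorded weeks apart is what the temporal-stability question amounts to.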

  18. Temporal Stability of Visual Search-Driven Biometrics

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hong-Jun [ORNL]; Carmichael, Tandy [Tennessee Technological University]; Tourassi, Georgia [ORNL]

    2015-01-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, fairly stable personalized fingerprint of perceptual organization.

  19. Computer systems and methods for visualizing data

    Science.gov (United States)

    Stolte, Chris; Hanrahan, Patrick

    2013-01-29

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
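    A minimal sketch of the query-and-populate step might look as follows; the dataset, the level names, and the `query` helper are hypothetical illustrations, not the patent's actual implementation.

```python
from collections import defaultdict

def query(dataset, level1, level2, measure):
    """Aggregate the measure for each (level1, level2) pair of the
    dimension hierarchy -- the data that populates the two plot components."""
    agg = defaultdict(float)
    for row in dataset:
        agg[(row[level1], row[level2])] += row[measure]
    return dict(agg)

# 'year' and 'quarter' are two levels of a time-dimension hierarchy;
# 'amount' is the measure.
sales = [
    {"year": 2012, "quarter": "Q1", "amount": 10.0},
    {"year": 2012, "quarter": "Q2", "amount": 7.5},
    {"year": 2013, "quarter": "Q1", "amount": 12.0},
    {"year": 2012, "quarter": "Q1", "amount": 3.0},
]
# Outer panes of the plot = years (first component); marks = quarters (second).
plot_data = query(sales, "year", "quarter", "amount")
```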

  20. Visual problems in young adults due to computer use.

    Science.gov (United States)

    Moschos, M M; Chatziralli, I P; Siasou, G; Papazisis, L

    2012-04-01

    Computer use can cause visual problems. The purpose of our study was to evaluate visual problems due to computer use in young adults. Participants in our study were 87 adults, 48 male and 39 female, with a mean age of 31.3 years (SD 7.6). All the participants completed a questionnaire regarding visual problems detected after computer use. The mean daily use of computers was 3.2 hours (SD 2.7). 65.5 % of the participants complained of dry eye, mainly after more than 2.5 hours of computer use. 32 persons (36.8 %) had a foreign body sensation in their eyes, while 15 participants (17.2 %) complained of blurred vision, which caused difficulties in driving, after 3.25 hours of continuous computer use. 10.3 % of the participants sought medical advice for their problem. There was a statistically significant correlation between the frequency of visual problems and the duration of computer use (p = 0.021). 79.3 % of the participants use artificial tears during or after long use of computers, so as not to feel any ocular discomfort. The main symptom after computer use in young adults was dry eye. All visual problems were associated with the duration of computer use. Artificial tears play an important role in the treatment of ocular discomfort after computer use. © Georg Thieme Verlag KG Stuttgart · New York.

  1. Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization.

    Science.gov (United States)

    Marai, G Elisabeta

    2018-01-01

    Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, which is an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process and the abstraction stage (and its evaluation) of existing, higher-level visualization design models. In a departure from existing visualization design models, the resulting model: assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements into activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates the user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the visualization design nested model. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help remove a number of pitfalls that have been identified multiple times in the visualization design literature.

  2. IPython interactive computing and visualization cookbook

    CERN Document Server

    Rossant, Cyrille

    2014-01-01

    Intended for anyone interested in numerical computing and data science: students, researchers, teachers, engineers, analysts, hobbyists... Basic knowledge of Python/NumPy is recommended. Some skills in mathematics will help you understand the theory behind the computational methods.

  3. Information visualization courses for students with a computer science background.

    Science.gov (United States)

    Kerren, Andreas

    2013-01-01

    Linnaeus University offers two master's courses in information visualization for computer science students with programming experience. This article briefly describes the syllabi, exercises, and practices developed for these courses.

  4. Graphical Visualization on Computational Simulation Using Shared Memory

    International Nuclear Information System (INIS)

    Lima, A B; Correa, Eberth

    2014-01-01

    The Shared Memory technique is a powerful tool for parallelizing computer codes. In particular, it can be used to visualize the results "on the fly" without stopping the running simulation. In this presentation we discuss and show how to use the technique combined with a visualization code using OpenGL.
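    A minimal, thread-based sketch of the idea: the simulation writes into a shared buffer while a viewer snapshots it concurrently. Names and the toy "solver step" are illustrative; a real implementation would hand each snapshot to OpenGL as a texture.

```python
import threading
import time
import numpy as np

field = np.zeros(8)            # buffer shared between simulation and viewer
done = threading.Event()

def simulate():
    """Simulation thread: deposits results into the shared buffer as it runs."""
    for i in range(field.size):
        field[i] = i * i       # stand-in for one solver step
        time.sleep(0.001)
    done.set()

def render_frame():
    """Viewer side: snapshot the buffer 'on the fly', without pausing the run
    (an OpenGL loop would upload this copy each frame)."""
    return field.copy()

t = threading.Thread(target=simulate)
t.start()
while not done.is_set():
    frame = render_frame()     # rendering proceeds while the simulation runs
t.join()
final = render_frame()
```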

  5. Computationally efficient clustering of audio-visual meeting data

    NARCIS (Netherlands)

    Hung, H.; Friedland, G.; Yeo, C.; Shao, L.; Shan, C.; Luo, J.; Etoh, M.

    2010-01-01

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones.

  6. Visualization of Minkowski operations by computer graphics techniques

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Blaauwgeers, G.S.M.; Serra, J; Soille, P

    1994-01-01

    We consider the problem of visualizing 3D objects defined as a Minkowski addition or subtraction of elementary objects. It is shown that such visualizations can be obtained by using techniques from computer graphics such as ray tracing and Constructive Solid Geometry. Applications of the method are

  7. Towards The Deep Model : Understanding Visual Recognition Through Computational Models

    OpenAIRE

    Wang, Panqu

    2017-01-01

    Understanding how visual recognition is achieved in the human brain is one of the most fundamental questions in vision research. In this thesis I seek to tackle this problem from a neurocomputational modeling perspective. More specifically, I build machine learning-based models to simulate and explain cognitive phenomena related to human visual recognition, and I improve computational models using brain-inspired principles to excel at computer vision tasks.I first describe how a neurocomputat...

  8. Computer codes and methods for simulating accelerator driven systems

    International Nuclear Information System (INIS)

    Sartori, E.; Byung Chan Na

    2003-01-01

    A large set of computer codes and associated data libraries have been developed by nuclear research and industry over the past half century. A large number of them are in the public domain and can be obtained under agreed conditions from different Information Centres. The areas covered comprise: basic nuclear data and models, reactor spectra and cell calculations, static and dynamic reactor analysis, criticality, radiation shielding, dosimetry and material damage, fuel behaviour, safety and hazard analysis, heat conduction and fluid flow in reactor systems, spent fuel and waste management (handling, transportation, and storage), economics of fuel cycles, impact on the environment of nuclear activities etc. These codes and models have been developed mostly for critical systems used for research or power generation and other technological applications. Many of them have not been designed for accelerator driven systems (ADS), but with competent use, they can be used for studying such systems or can form the basis for adapting existing methods to the specific needs of ADSs. The present paper describes the types of methods, codes and associated data available and their role in the applications. It provides Web addresses for facilitating searches for such tools. Some indications are given on the effect of inappropriate or 'blind' use of existing tools on ADS. Reference is made to available experimental data that can be used for validating the use of these methods. Finally, some international activities linked to the different computational aspects are described briefly. (author)

  9. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  10. Visual ergonomics and computer work--is it all about computer glasses?

    Science.gov (United States)

    Jonsson, Christina

    2012-01-01

    The Swedish Provisions on Work with Display Screen Equipment and the EU Directive on the minimum safety and health requirements for work with display screen equipment cover several important visual ergonomics aspects. But a review of cases and questions to the Swedish Work Environment Authority clearly shows that most attention is given to the demands for eyesight tests and special computer glasses. Other important visual ergonomics factors are at risk of being neglected. Today computers are used everywhere, both at work and at home. Computers can be laptops, PDAs, tablet computers, smart phones, etc. The demands on eyesight tests and computer glasses still apply but the visual demands and the visual ergonomics conditions are quite different compared to the use of a stationary computer. Based on this review, we raise the question of whether the demand on the employer to provide the employees with computer glasses is outdated.

  11. Computational Model of a Biomass Driven Absorption Refrigeration System

    Directory of Open Access Journals (Sweden)

    Munyeowaji Mbikan

    2017-02-01

    The impact of vapour compression refrigeration is the main push for scientists to find an alternative sustainable technology. Vapour absorption is an ideal technology which makes use of waste heat or renewable heat, such as biomass, to drive absorption chillers from medium to large applications. In this paper, the aim was to investigate the feasibility of a biomass driven aqua-ammonia absorption system. An estimation of the solid biomass fuel quantity required to provide heat for the operation of a vapour absorption refrigeration cycle (VARC) is presented; the quantity of biomass required depends on the fuel density and the efficiency of the combustion and heat transfer systems. A single-stage aqua-ammonia refrigeration system analysis routine was developed to evaluate the system performance and ascertain the rate of energy transfer required to operate the system, and hence, the biomass quantity needed. In conclusion, this study demonstrated the results of the performance of a computational model of an aqua-ammonia system under a range of parameters. The model showed good agreement with published experimental data.
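    The fuel-quantity estimate reduces to a chain of efficiencies: the generator heat is the cooling load divided by the COP, and the fuel heat is the generator heat divided by the combustion and heat-transfer efficiencies. A sketch, in which every numerical value (COP, heating value, efficiencies) is an illustrative assumption rather than a result from the paper:

```python
def biomass_feed_rate(q_cooling_kw, cop, lhv_mj_per_kg, eta_comb, eta_ht):
    """Solid-fuel mass flow (kg/h) needed to drive the generator of an
    aqua-ammonia chiller delivering q_cooling_kw of cooling."""
    q_gen_kw = q_cooling_kw / cop                      # heat the generator must receive
    q_fuel_kw = q_gen_kw / (eta_comb * eta_ht)         # heat the fuel must release
    return q_fuel_kw * 3600.0 / (lhv_mj_per_kg * 1e3)  # kW -> kJ/h -> kg/h

# 10 kW of cooling, COP 0.6, wood pellets ~17 MJ/kg,
# 80% combustion efficiency x 70% heat-transfer efficiency (all assumed).
rate = biomass_feed_rate(10.0, 0.6, 17.0, 0.80, 0.70)
```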

  12. Productivity associated with visual status of computer users.

    Science.gov (United States)

    Daum, Kent M; Clore, Katherine A; Simms, Suzanne S; Vesely, Jon W; Wilczek, Dawn D; Spittle, Brian M; Good, Greg W

    2004-01-01

    The aim of this project is to examine the potential connection between the astigmatic refractive corrections of subjects using computers and their productivity and comfort. We hypothesize that improving the visual status of subjects using computers results in greater productivity, as well as improved visual comfort. Inclusion criteria required subjects 19 to 30 years of age with complete vision examinations before being enrolled. Using a double-masked, placebo-controlled, randomized design, subjects completed three experimental tasks calculated to assess the effects of refractive error on productivity (time to completion and the number of errors) at a computer. The tasks resembled those commonly undertaken by computer users and involved visual search tasks of: (1) counties and populations; (2) nonsense word search; and (3) a modified text-editing task. Estimates of productivity for time to completion varied from a minimum of 2.5% upwards to 28.7% with 2 D cylinder miscorrection. Assuming a conservative estimate of an overall 2.5% increase in productivity with appropriate astigmatic refractive correction, our data suggest a favorable cost-benefit ratio of at least 2.3 for the visual correction of an employee (total cost 268 dollars) with a salary of 25,000 dollars per year. We conclude that astigmatic refractive error affected both productivity and visual comfort under the conditions of this experiment. These data also suggest a favorable cost-benefit ratio for employers who provide computer-specific eyewear to their employees.
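    The quoted cost-benefit ratio follows from simple arithmetic on the stated figures:

```python
salary = 25_000.0   # annual salary (USD), as stated in the abstract
gain = 0.025        # conservative 2.5% productivity increase
cost = 268.0        # total cost of the employee's vision correction (USD)

annual_benefit = salary * gain   # value of the recovered productivity
ratio = annual_benefit / cost    # ~2.3, the reported cost-benefit ratio
```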

  13. Visual basic application in computer hardware control and data ...

    African Journals Online (AJOL)

    A ULN2003A relay driver, which contains 7 separate Darlington pairs with common emitters, and three modified Transistor-Transistor-Logic circuits (i.e. 74LS365) were used to interface an analog-to-digital converter and the parallel port of the computer. The seven light emitting diodes were driven by the ULN2003A with ...

  14. Visual Hemispheric Specialization: A Computational Theory.

    Science.gov (United States)

    1985-10-31

    The magnitude of difficulty of this problem becomes evident if you look at a digitized representation of a picture, with numbers representing the intensity... The brain is not a digital computer; it does not pass discrete symbols back and forth. Rather, we assume that modules produce patterns of activity...

  15. Efficient Feature-Driven Visualization of Large-Scale Scientific Data

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Aidong

    2012-12-12

    Very large, complex scientific data acquired in many research areas creates critical challenges for scientists to understand, analyze, and organize their data. The objective of this project is to expand the feature extraction and analysis capabilities to develop powerful and accurate visualization tools that can assist domain scientists with their requirements in multiple phases of scientific discovery. We have recently developed several feature-driven visualization methods for extracting different data characteristics of volumetric datasets. Our results verify the hypothesis in the proposal and will be used to develop additional prototype systems.

  16. Neural computation of visual imaging based on Kronecker product in the primary visual cortex

    Directory of Open Access Journals (Sweden)

    Guozheng Yao

    2010-03-01

    Background What kind of neural computation is actually performed by the primary visual cortex and how is this represented mathematically at the system level? It is an important problem in visual information processing, but has not been well answered. In this paper, according to our understanding of retinal organization and the parallel multi-channel topographical mapping between the retina and primary visual cortex V1, we divide an image into an orthogonal and orderly array of image primitives (or patches), in which each patch will evoke activities of simple cells in V1. From the viewpoint of information processing, this activated process essentially involves optimal detection and optimal matching of the receptive fields of simple cells with features contained in image patches. For the reconstruction of the visual image in the visual cortex V1 based on the principle of minimum mean squares error, it is natural to use the inner product expression in neural computation, which is then transformed into matrix form. Results The inner product is carried out by using the Kronecker product between patches and the functional architecture (or functional columns) in localized and oriented neural computing. Compared with the Fourier Transform, the mathematical description of the Kronecker product is simple and intuitive, so the algorithm is more suitable for neural computation in visual cortex V1. Results of computer simulation based on two-dimensional Gabor pyramid wavelets show that the theoretical analysis and the proposed model are reasonable. Conclusions Our results are: 1. The neural computation of the retinal image in cortex V1 can be expressed as a Kronecker product operation and its matrix form; this algorithm is implemented by the inner product between retinal image primitives and the primary visual cortex's columns. It has simple, efficient and robust features, making it a neural algorithm that could be completed by biological vision. 2. It is more suitable
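    The central identity, that separable filtering of a patch can be written as one Kronecker-structured operation (vec(B X A^T) = (A kron B) vec(X), with column-major vec), can be checked numerically. The patch and the Gabor-like filter factors below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))   # an image patch (retinal primitive)
B = rng.standard_normal((3, 4))   # vertical filter factor
A = rng.standard_normal((3, 4))   # horizontal filter factor

# Separable filtering of the patch, written two equivalent ways:
direct = B @ X @ A.T              # filter the rows, then the columns

# The same inner products as a single Kronecker-structured operation,
# using the identity vec(B X A^T) = (A kron B) vec(X) with Fortran-order vec.
via_kron = (np.kron(A, B) @ X.flatten(order="F")).reshape((3, 3), order="F")
```

The two results agree element-wise, which is the sense in which the patch-column inner products of the model reduce to one Kronecker product in matrix form.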

  17. Computationally Efficient Clustering of Audio-Visual Meeting Data

    Science.gov (United States)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.
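    A toy stand-in for the "who spoke and when" step: attribute each frame to the loudest personal microphone, or to silence when every channel is quiet. The chapter's actual diarization is considerably more sophisticated; the names, data, and threshold here are illustrative.

```python
import numpy as np

def who_spoke_when(mic_energy, threshold):
    """Toy diarization: each frame goes to the microphone with the most
    energy, or to silence (-1) when all channels are below threshold."""
    winners = np.argmax(mic_energy, axis=0)
    silent = mic_energy.max(axis=0) < threshold
    winners[silent] = -1
    return winners

# Rows = per-participant microphone energies, columns = frames.
energy = np.array([[0.9, 0.8, 0.0, 0.0, 0.1],
                   [0.1, 0.2, 0.0, 0.7, 0.9]])
timeline = who_spoke_when(energy, threshold=0.3)  # [0, 0, -1, 1, 1]
```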

  18. Ontology-driven data integration and visualization for exploring regional geologic time and paleontological information

    Science.gov (United States)

    Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo

    2018-06-01

    Initiatives of open data promote the online publication and sharing of large amounts of geologic data. How to retrieve information and discover knowledge from the big data is an ongoing challenge. In this paper, we developed an ontology-driven data integration and visualization pilot system for exploring information of regional geologic time, paleontology, and fundamental geology. The pilot system (http://www2.cs.uidaho.edu/%7Emax/gts/)

  19. Visualization-based decision support for value-driven system design

    Science.gov (United States)

    Tibor, Elliott

    In the past 50 years, the military, communication, and transportation systems that permeate our world have grown exponentially in size and complexity. The development and production of these systems have seen ballooning costs and increased risk. This is particularly critical for the aerospace industry. The inability to deal with growing system complexity is a crippling force in the advancement of engineered systems. Value-Driven Design represents a paradigm shift in the field of design engineering that has potential to help counteract this trend. The philosophy of Value-Driven Design places the desires of the stakeholder at the forefront of the design process to capture true preferences and reveal system alternatives that were never previously thought possible. Modern aerospace engineering design problems are large, complex, and involve multiple levels of decision-making. To find the best design, the decision-maker is often required to analyze hundreds or thousands of combinations of design variables and attributes. Visualization can be used to support these decisions, by communicating large amounts of data in a meaningful way. Understanding the design space, the subsystem relationships, and the design uncertainties is vital to the advancement of Value-Driven Design as an accepted process for the development of more effective, efficient, robust, and elegant aerospace systems. This research investigates the use of multi-dimensional data visualization tools to support decision-making under uncertainty during the Value-Driven Design process. A satellite design system comprising a satellite, ground station, and launch vehicle is used to demonstrate the effectiveness of new visualization methods to aid in decision support during complex aerospace system design. These methods are used to facilitate the exploration of the feasible design space by representing the value impact of system attribute changes and comparing the results of multi-objective optimization formulations

  20. Three-dimensional computer visualization of forensic pathology data.

    Science.gov (United States)

    March, Jack; Schofield, Damian; Evison, Martin; Woodford, Noel

    2004-03-01

    Despite a decade of use in US courtrooms, it is only recently that forensic computer animations have become an increasingly important form of communication in legal spheres within the United Kingdom. Aims Research at the University of Nottingham has been influential in the critical investigation of forensic computer graphics reconstruction methodologies and techniques and in raising the profile of this novel form of data visualization within the United Kingdom. The case study presented demonstrates research undertaken by Aims Research and the Department of Forensic Pathology at the University of Sheffield, which aims to apply, evaluate, and develop novel 3-dimensional computer graphics (CG) visualization and virtual reality (VR) techniques in the presentation and investigation of forensic information concerning the human body. The inclusion of such visualizations within other CG or VR environments may ultimately provide the potential for alternative exploratory directions, processes, and results within forensic pathology investigations.

  1. From humans to computers cognition through visual perception

    CERN Document Server

    Alexandrov, Viktor Vasilievitch

    1991-01-01

    This book considers computer vision to be an integral part of the artificial intelligence system. The core of the book is an analysis of possible approaches to the creation of artificial vision systems, which simulate human visual perception. Much attention is paid to the latest achievements in visual psychology and physiology, the description of the functional and structural organization of the human perception mechanism, the peculiarities of artistic perception and the expression of reality. Computer vision models based on these data are investigated. They include the processes of external d

  2. An Application of Multivariate Statistical Analysis for Query-Driven Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Gosink, Luke J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Garth, Christoph [Univ. of California, Davis, CA (United States); Anderson, John C. [Univ. of California, Davis, CA (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Joy, Kenneth I. [Univ. of California, Davis, CA (United States)

    2011-03-01

    Driven by the ability to generate ever-larger, increasingly complex data, there is an urgent need in the scientific community for scalable analysis methods that can rapidly identify salient trends in scientific data. Query-Driven Visualization (QDV) strategies are among the small subset of techniques that can address both large and highly complex datasets. This paper extends the utility of QDV strategies with a statistics-based framework that integrates non-parametric distribution estimation techniques with a new segmentation strategy to visually identify statistically significant trends and features within the solution space of a query. In this framework, query distribution estimates help users to interactively explore their query's solution and visually identify the regions where the combined behavior of constrained variables is most important, statistically, to their inquiry. Our new segmentation strategy extends the distribution estimation analysis by visually conveying the individual importance of each variable to these regions of high statistical significance. We demonstrate the analysis benefits these two strategies provide and show how they may be used to facilitate the refinement of constraints over variables expressed in a user's query. We apply our method to datasets from two different scientific domains to demonstrate its broad applicability.
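    The distribution-estimation idea can be sketched with a small numpy-only kernel density estimate over synthetic "query hit" values; the peak of the estimate marks the region most important to the query. The data, bandwidth, and helper name are illustrative; the paper's non-parametric machinery is richer.

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth):
    """Non-parametric estimate of the distribution of a queried variable,
    the kind of summary QDV uses to flag statistically important regions."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    kernel = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return kernel.mean(axis=1) / bandwidth

rng = np.random.default_rng(1)
# Values of one constrained variable inside a query's solution set (synthetic).
hits = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 100)])
grid = np.linspace(0.0, 1.0, 101)
density = gaussian_kde(hits, grid, bandwidth=0.05)
peak = grid[np.argmax(density)]   # region most important to the inquiry
```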

  3. Computing and Visualizing Reachable Volumes for Maneuvering Satellites

    International Nuclear Information System (INIS)

    Jiang, M.; de Vries, W.H.; Pertica, A.J.; Olivier, S.S.

    2011-01-01

    Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust-vectors, thrust magnitudes and time of burn. At any given instance, the distribution of the 'point-cloud' of the virtual particles defines the RV. For short orbital time-scales, the temporal evolution of the point-cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight and understanding into this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the computed volume of probability density distribution, including volume slicing, convex hull and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we will present some of the results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.
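    The Monte Carlo sampling step (randomized thrust vector, thrust magnitude, and time of burn; the resulting point-cloud defines the RV) can be sketched under a deliberately crude, field-free linear model. All bounds and the propagation below are illustrative assumptions, not the paper's orbital dynamics.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Randomize thrust direction (isotropic), delta-v magnitude, and burn epoch.
v = rng.standard_normal((n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # unit thrust vectors
dv = rng.uniform(0.0, 0.1, n)[:, None]          # delta-v, km/s (assumed bound)
t_burn = rng.uniform(0.0, 600.0, n)[:, None]    # burn time within a 10-min window

horizon = 600.0                                 # evaluate the cloud at t = 600 s
coast = np.clip(horizon - t_burn, 0.0, None)    # time spent on the new course
cloud = v * dv * coast                          # displacement from the nominal path
radius = np.linalg.norm(cloud, axis=1)          # extent of the sampled RV
```

At any instant the distribution of `cloud` is the point-cloud whose envelope approximates the reachable volume; under these bounds no sample can stray farther than 0.1 km/s x 600 s = 60 km.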

  4. Computing and Visualizing Reachable Volumes for Maneuvering Satellites

    Science.gov (United States)

    Jiang, M.; de Vries, W.; Pertica, A.; Olivier, S.

    2011-09-01

    Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust-vectors, thrust magnitudes and time of burn. At any given instance, the distribution of the "point-cloud" of the virtual particles defines the RV. For short orbital time-scales, the temporal evolution of the point-cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight and understanding into this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the computed volume of probability density distribution, including volume slicing, convex hull and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we will present some of the results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.

  5. Three-Dimensional Computer Visualization of Forensic Pathology Data

    OpenAIRE

    March, Jack; Schofield, Damian; Evison, Martin; Woodford, Noel

    2004-01-01

    Despite a decade of use in US courtrooms, it is only recently that forensic computer animations have become an increasingly important form of communication in legal spheres within the United Kingdom. Aims Research at the University of Nottingham has been influential in the critical investigation of forensic computer graphics reconstruction methodologies and techniques and in raising the profile of this novel form of data visualization within the United Kingdom. The case study presented demons...

  6. VISUALIZATION METHODS OF VORTICAL FLOWS IN COMPUTATIONAL FLUID DYNAMICS AND THEIR APPLICATIONS

    Directory of Open Access Journals (Sweden)

    K. N. Volkov

    2014-05-01

    The paper deals with conceptions and methods for the visual representation of numerical research results in problems of fluid and gas mechanics. The three-dimensional nature of the unsteady flow being simulated creates significant difficulties for the visual representation of results. It complicates control and understanding of numerical data, and the exchange and processing of obtained information about the flow field. Approaches to the visualization of vortical flows using gradients of primary and secondary scalar and vector fields are discussed. An overview of visualization techniques for vortical flows using different definitions of the vortex and its identification criteria is given. Visualization examples for some solutions of gas dynamics problems related to calculations of jets and cavity flows are presented. Ideas on the vortical structure of the free non-isothermal jet and the formation of coherent vortex structures in the mixing layer are developed. Analysis of formation patterns for spatial flows inside large-scale vortical structures within the enclosed space of the cubic lid-driven cavity is performed. The singular points of the vortex flow in a cubic lid-driven cavity are found based on the results of numerical simulation; their type and location are identified depending on the Reynolds number. Calculations are performed with fine meshes and modern approaches to the simulation of vortical flows (direct numerical simulation and large-eddy simulation). The graphical-programming paradigm and the COVISE virtual environment are used for the visual representation of computational results. The application that implements the visualization of the problem is represented as a network whose nodes are modules, each designed to solve a case-specific problem. Interaction between modules is carried out through input and output ports (data receipt and data transfer), giving the possibility to use various input and output devices.
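    One of the vortex-identification criteria surveyed in such work, the Q-criterion, is easy to state and compute: Q = 0.5*(||Omega||^2 - ||S||^2), positive where rotation dominates strain. A 2-D sketch (the solid-body-rotation test field is illustrative):

```python
import numpy as np

def q_criterion(u, v, dx):
    """Q-criterion on a 2-D velocity field sampled on a uniform grid:
    Q = 0.5*(||Omega||^2 - ||S||^2), with S the strain-rate tensor and
    Omega the rotation tensor; Q > 0 flags candidate vortex cores."""
    du_dy, du_dx = np.gradient(u, dx)   # axis 0 = y, axis 1 = x
    dv_dy, dv_dx = np.gradient(v, dx)
    s11, s22 = du_dx, dv_dy
    s12 = 0.5 * (du_dy + dv_dx)
    w12 = 0.5 * (du_dy - dv_dx)
    strain_sq = s11**2 + s22**2 + 2 * s12**2   # ||S||^2 (Frobenius)
    rot_sq = 2 * w12**2                        # ||Omega||^2
    return 0.5 * (rot_sq - strain_sq)

# Solid-body rotation (a pure vortex): u = -y, v = x, so Q > 0 everywhere.
dx = 0.1
y, x = np.mgrid[-1:1:dx, -1:1:dx]
q = q_criterion(-y, x, dx)
```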

  7. FAST: framework for heterogeneous medical image computing and visualization.

    Science.gov (United States)

    Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-11-01

    Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphics processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the Insight Toolkit (ITK) and the Visualization Toolkit (VTK) and show that the presented framework is faster, with up to 20 times speedup on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.

  8. Visual Reasoning in Computational Environment: A Case of Graph Sketching

    Science.gov (United States)

    Leung, Allen; Chan, King Wah

    2004-01-01

    This paper reports the case of a form six (grade 12) Hong Kong student's exploration of graph sketching in a computational environment. In particular, the student summarized his discovery in the form of two empirical laws. The student was interviewed, and the interview data were used to map out a possible path of his visual reasoning. Critical…

  9. Urban camouflage assessment through visual search and computational saliency

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.

    2013-01-01

    We present a new method to derive a multiscale urban camouflage pattern from a given set of background image samples. We applied this method to design a camouflage pattern for a given (semi-arid) urban environment. We performed a human visual search experiment and a computational evaluation study to

  10. Tactile Radar: experimenting a computer game with visually disabled.

    Science.gov (United States)

    Kastrup, Virgínia; Cassinelli, Alvaro; Quérette, Paulo; Bergstrom, Niklas; Sampaio, Eliana

    2017-09-18

    Visually disabled people increasingly use computers in everyday life, thanks to novel assistive technologies better tailored to their cognitive functioning. Like sighted people, many are interested in computer games: video games and audio games. Tactile games are now beginning to emerge. The Tactile Radar is a device through which a visually disabled person is able to detect distal obstacles. In this study, it is connected to a computer running a tactile game in which the player finds and collects randomly arranged coins in a virtual room. The study was conducted with nine congenitally blind people of both sexes, aged 20-64 years. Complementary first-person and third-person methods were used: the debriefing interview and a quasi-experimental design. The results indicate that the Tactile Radar is suitable for the creation of computer games specifically tailored to visually disabled people. Furthermore, the device seems capable of eliciting a powerful immersive experience. Methodologically, this research contributes to the consolidation and development of complementary first-person and third-person methods, which are particularly useful in research with disabled people, including users' evaluation of the Tactile Radar's effectiveness in a virtual reality context. Implications for rehabilitation: Despite the growing interest in virtual games for visually disabled people, they still face barriers to accessing such games. Through the development of assistive technologies such as the Tactile Radar, applied in virtual games, we can create new opportunities for leisure, socialization and education for visually disabled people. The results of our study indicate that the Tactile Radar is well adapted to the creation of video games for visually disabled people, providing playful interaction for the players.

  11. A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.

    Science.gov (United States)

    Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei

    2014-09-19

    Visually-induced near-infrared spectroscopy (NIRS) responses were utilized to design a brain computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering and resting segments. The flickering segments had a fixed duration of 3 s, whereas the resting segments were chosen randomly within 15-20 s to create mutual independence among the different flickering sequences. Six subjects were recruited in this study and were requested to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli, and the flickering sequences of distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by the user's gazed target can be discerned from non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
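    The averaging step that separates the gazed target's time-locked response from independent activity can be sketched as follows; the sample rate, response shape, and noise model here are illustrative assumptions, not the authors' data:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 10                      # sample rate in Hz (assumed for this sketch)
epoch_len = 3 * fs           # 3 s flickering segment
n_epochs = 12

# Build onset times: each flickering segment is preceded by a resting
# segment of random length in 15-20 s, as in the paper's design.
rests = rng.integers(15 * fs, 20 * fs, size=n_epochs)
onsets = []
t = 0
for rest in rests:
    t += rest                # resting segment
    onsets.append(t)
    t += epoch_len           # flickering segment
onsets = np.asarray(onsets)

# Synthetic NIRS channel: a fixed response, time-locked to each onset,
# buried in noise that stands in for non-gazed-stimulus activity.
template = np.sin(np.linspace(0.0, np.pi, epoch_len))
signal = rng.normal(0.0, 1.0, t)
for t0 in onsets:
    signal[t0:t0 + epoch_len] += template

# Averaging epochs time-locked to the onsets reinforces the gazed
# response, while independent activity averages toward zero.
epochs = np.stack([signal[t0:t0 + epoch_len] for t0 in onsets])
avg = epochs.mean(axis=0)

corr_single = np.corrcoef(epochs[0], template)[0, 1]
corr_avg = np.corrcoef(avg, template)[0, 1]
print(corr_single, corr_avg)
```

    Because the other stimuli's flickering sequences are mutually independent, their contributions behave like the noise term here and shrink under averaging, which is why a simple average suffices to discern the gazed target.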

  12. Visual Analysis of Cloud Computing Performance Using Behavioral Lines.

    Science.gov (United States)

    Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu

    2016-02-29

    Cloud computing is an essential technology for Big Data analytics and services. A cloud computing system is often comprised of a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues, but profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, and network usage, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visual analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method that portrays the behavior of each compute node over time. When a large number of these behavioral lines are visualized together, distinct patterns often appear, suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data, or a selected subset, at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visual approach is effective in identifying trends and anomalies in the systems.
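    A minimal sketch of the similarity-measure idea: compare per-node multivariate profile series so that an anomalously behaving node stands apart. The metrics, sizes, and distance measure are assumptions for illustration, not the paper's actual design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical profile data: 4 compute nodes x 100 samples x 3 metrics
# (say CPU load, memory usage, network usage), regularly sampled.
n_nodes, n_samples, n_metrics = 4, 100, 3
profiles = rng.normal(0.0, 1.0, (n_nodes, n_samples, n_metrics))
profiles[3] += 2.5           # node 3 behaves anomalously on all metrics

# Normalise each feature across nodes, then measure pairwise behavioural
# similarity as a scaled Euclidean distance between node profiles.
flat = profiles.reshape(n_nodes, -1)
flat = (flat - flat.mean(axis=0)) / flat.std(axis=0)
dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
dist /= np.sqrt(flat.shape[1])

# The anomalous node's behavioural line sits far from all the others.
avg_dist = dist.sum(axis=1) / (n_nodes - 1)
print(avg_dist.argmax())
```

    In a layout driven by such distances, the normal nodes cluster while node 3's line separates, which is the kind of pattern the linked views help an operator spot.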

  13. Computer and visual display terminals (VDT) vision syndrome (CVDTS).

    Science.gov (United States)

    Parihar, J K S; Jain, Vaibhav Kumar; Chaturvedi, Piyush; Kaushik, Jaya; Jain, Gunjan; Parihar, Ashwini K S

    2016-07-01

    Computers and visual display terminals have become an essential part of the modern lifestyle. The use of these devices has simplified our lives both in household work and in offices. However, prolonged use of these devices is not without complications. Computer and visual display terminal syndrome is a constellation of ocular and extraocular symptoms associated with prolonged use of visual display terminals. The syndrome is gaining importance in this modern era because of the widespread use of such technologies in day-to-day life. It is associated with asthenopic symptoms, visual blurring, dry eyes, musculoskeletal symptoms such as neck pain, back pain, and shoulder pain, carpal tunnel syndrome, psychosocial factors, venous thromboembolism, shoulder tendonitis, and elbow epicondylitis. Proper identification of symptoms and causative factors is necessary for accurate diagnosis and management. This article focuses on the various aspects of the syndrome described in the previous literature. Further research is needed for a better understanding of its complex pathophysiology and management.

  14. Study Of Visual Disorders In Egyptian Computer Operators

    International Nuclear Information System (INIS)

    Al-Awadi, M.Y.; Awad Allah, H.; Hegazy, M. T.; Naguib, N.; Akmal, M.

    2012-01-01

    The aim of the study was to evaluate the probable effects of exposure to electromagnetic waves radiated from visual display terminals on some visual functions. 300 computer operators working in different institutes were selected randomly. They were asked to fill in a pre-tested questionnaire (written in Arabic) after their verbal consent was obtained. Among them, one hundred fifty operators exposed to visual display terminals were selected for the clinical study (group I). The control group included one hundred fifty age-matched participants working in fields without exposure to visual display terminals (group II). None of the chosen individuals suffered from any apparent health problems or diseases that could affect their visual condition. All exposed candidates were using LCD-type VDTs of size 15", 17", or larger. Data entry and analysis were done using SPSS version 17.0, applying appropriate statistical methods. The results showed, among the 150 exposed subjects, a highly significant occurrence of dryness and a highly significant association between the occurrence of asthenopia and background variables (working hours using computers). Of the exposed subjects, 92% complained of tired eyes and eye strain, 37.33% of dry or sore eyes, 68% of headache, 68% of blurred distant vision, 45.33% of asthenopia, and 89.33% of neck, shoulder and back aches. Meanwhile, in the control group, 18% complained of tired eyes, 21.33% of dry eyes, and 12.67% of neck, shoulder and back aches. It could be concluded that the prevalence of computer vision syndrome was quite high among computer operators.

  15. Fusion in computer vision understanding complex visual content

    CERN Document Server

    Ionescu, Bogdan; Piatrik, Tomas

    2014-01-01

    This book presents a thorough overview of fusion in computer vision, from an interdisciplinary and multi-application viewpoint, describing successful approaches, evaluated in the context of international benchmarks that model realistic use cases. Features: examines late fusion approaches for concept recognition in images and videos; describes the interpretation of visual content by incorporating models of the human visual system with content understanding methods; investigates the fusion of multi-modal features of different semantic levels, as well as results of semantic concept detections, fo

  16. Development of real-time visualization system for Computational Fluid Dynamics on parallel computers

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    1998-03-01

    A real-time visualization system for computational fluid dynamics was developed on a network connecting a parallel computing server and a client terminal. Using the system, a user at a client terminal can visualize the results of a CFD (Computational Fluid Dynamics) simulation while the computation is actually running on the server. Through a GUI (Graphical User Interface) on the client terminal, the user is also able to change parameters of the analysis and visualization in real time during the calculation. The system carries out both the CFD simulation and the generation of pixel image data on the parallel computer and compresses the data; the amount of data sent from the parallel computer to the client is therefore so much smaller than without compression that images appear swiftly and comfortably. Parallelization of image data generation is based on the owner-computes rule. The GUI on the client is built as a Java applet, so real-time visualization is possible on a client PC provided only that a Web browser is installed on it. (author)
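    The compress-then-send idea behind the small server-to-client traffic can be sketched as follows; zlib stands in here for whatever codec the actual system used, and the frame contents are illustrative:

```python
import zlib
import numpy as np

# Server side: a rendered pixel image of the flow field (synthesized here),
# compressed before being sent over the network to the client.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:380, 160:480] = 200          # a mostly uniform rendered region

raw = frame.tobytes()
compressed = zlib.compress(raw, level=6)

# Client side: decompress and recover the identical pixel image.
restored = np.frombuffer(zlib.decompress(compressed), dtype=np.uint8)
restored = restored.reshape(frame.shape)

assert (restored == frame).all()
print(len(compressed), "of", len(raw), "bytes sent")
```

    Rendered CFD frames typically contain large coherent regions, so lossless compression of the pixel data already shrinks the transfer dramatically compared with sending raw pixels.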

  17. Data driven model generation based on computational intelligence

    Science.gov (United States)

    Gemmar, Peter; Gronz, Oliver; Faust, Christophe; Casper, Markus

    2010-05-01

    The simulation of discharges at a local gauge and the modeling of large-scale river catchments are central to estimation and decision tasks in hydrological research and in practical applications like flood prediction and water resource management. However, modeling such processes with analytical or conceptual approaches is made difficult by both the complexity of the process relations and the heterogeneity of the processes. It has been shown many times that unknown or assumed process relations can in principle be described by computational methods, and that system models can be derived automatically from observed behavior or measured process data. This study describes the development of hydrological process models using computational methods, including fuzzy logic and artificial neural networks (ANN), in a comprehensive and automated manner. Methods: We consider a closed concept for the data-driven development of hydrological models based on measured (experimental) data. The concept is centered on a fuzzy system using rules of Takagi-Sugeno-Kang type, which formulate the input-output relation in a generic structure like

    R_i: IF q(t) = low AND ... THEN q(t+Δt) = a_i0 + a_i1 q(t) + a_i2 p(t-Δt_i1) + a_i3 p(t+Δt_i2) + ...

    The rule's premise part (IF) describes process states in terms of available process information, e.g. the actual outlet q(t) is low, where low is one of several fuzzy sets defined over the variable q(t). The rule's conclusion (THEN) estimates the expected outlet q(t+Δt) by a linear function over selected system variables, e.g. the actual outlet q(t) and previous and/or forecasted precipitation p(t ± Δt_ik). In the case of river catchment modeling we use head gauges and tributary and upriver gauges in the conclusion part as well. In addition, we consider temperature and temporal (season) information in the premise part. By creating a set of rules R = {R_i | i = 1,...,N}, the space of process states can be covered as concisely as necessary. Model adaptation is achieved by finding an optimal set A = (a_ij) of conclusion
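    The Takagi-Sugeno-Kang rule scheme above amounts to a membership-weighted average of linear rule conclusions. A minimal sketch, with membership functions and coefficients made up for illustration and not calibrated to any catchment:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function for a fuzzy set centred at c."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk_predict(q_t, p_t, rules):
    """First-order Takagi-Sugeno-Kang inference (sketch).

    Each rule is (centre, width, coeffs): the premise grades how well
    the actual outlet q(t) matches a fuzzy state ("low", "high", ...),
    and the conclusion is the linear form a0 + a1*q(t) + a2*p(t)."""
    weights = np.array([gauss(q_t, c, s) for c, s, _ in rules])
    outputs = np.array([a[0] + a[1] * q_t + a[2] * p_t for _, _, a in rules])
    return float(np.sum(weights * outputs) / np.sum(weights))

# Two hypothetical rules ("q is low", "q is high").
rules = [
    (0.0, 1.0, (0.1, 0.90, 0.3)),   # low discharge: strong rainfall response
    (5.0, 1.0, (1.0, 0.95, 0.1)),   # high discharge: weaker relative response
]

print(tsk_predict(0.0, 2.0, rules))
```

    With a low current discharge the "low" rule dominates the weighted average; as q(t) grows, inference blends smoothly into the "high" rule's linear model, which is how the rule set covers the space of process states.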

  18. Entanglement-fidelity relations for inaccurate ancilla-driven quantum computation

    International Nuclear Information System (INIS)

    Morimae, Tomoyuki; Kahn, Jonas

    2010-01-01

    It was shown by T. Morimae [Phys. Rev. A 81, 060307(R) (2010)] that the gate fidelity of an inaccurate one-way quantum computation is upper bounded by a decreasing function of the amount of entanglement in the register. This means that strong entanglement causes low gate fidelity in one-way quantum computation with inaccurate measurements. In this paper, we derive similar entanglement-fidelity relations for inaccurate ancilla-driven quantum computation. These relations again imply that strong entanglement in the register causes low gate fidelity in ancilla-driven quantum computation if the measurements on the ancilla are inaccurate.

  19. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition.

    Directory of Open Access Journals (Sweden)

    Na Shu

    Full Text Available Humans can easily understand other people's actions through their visual systems, while computers cannot. Therefore, a new bio-inspired computational model aiming at automatic action recognition is proposed in this paper. The model focuses on the dynamic properties of neurons and neural networks in the primary visual cortex (V1) and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatiotemporal correlative Gabor filters is used to model the dynamic properties of the classical receptive fields of V1 simple cells tuned to different speeds and orientations, for the detection of spatiotemporal information in video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field, caused by lateral connections of spiking neuron networks in V1, we propose a surround suppression operator to further process the spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent human action, we consider a characteristic of the neural code: a mean motion map based on analysis of the spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with state-of-the-art approaches demonstrate the superior performance of the proposed model.
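    A spatiotemporal Gabor filter of the kind described, tuned to one orientation and speed, can be sketched as a drifting cosine carrier under a Gaussian envelope. The kernel size, wavelength, and separable envelope here are assumptions for illustration, not the authors' parameters:

```python
import numpy as np

def gabor3d(size, wavelength, orientation, speed, sigma):
    """Space-time Gabor kernel tuned to one orientation and speed (sketch).

    The grating carrier drifts along `orientation` at `speed`, so the
    filter responds best to stimuli moving with that velocity."""
    r = np.arange(size) - size // 2
    x, y, t = np.meshgrid(r, r, r, indexing="ij")
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    envelope = np.exp(-(x**2 + y**2 + t**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * (xr - speed * t) / wavelength)
    return envelope * carrier

kernel = gabor3d(size=15, wavelength=6.0, orientation=0.0, speed=1.0, sigma=3.0)
print(kernel.shape)  # → (15, 15, 15)
```

    Correlating such a kernel against a video volume yields a strong response for motion matching its preferred direction and speed, and a weak one for motion in the opposite direction, which is the speed and orientation tuning the model exploits.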

  20. The Primary Visual Cortex Is Differentially Modulated by Stimulus-Driven and Top-Down Attention

    Science.gov (United States)

    Bekisz, Marek; Bogdan, Wojciech; Ghazaryan, Anaida; Waleszczyk, Wioletta J.; Kublik, Ewa; Wróbel, Andrzej

    2016-01-01

    Selective attention can be focused either volitionally, by top-down signals derived from task demands, or automatically, by bottom-up signals from salient stimuli. Because the brain mechanisms that underlie these two attention processes are poorly understood, we recorded local field potentials (LFPs) from primary visual cortical areas of cats as they performed stimulus-driven and anticipatory discrimination tasks. Consistent with our previous observations, in both tasks, we found enhanced beta activity, which we have postulated may serve as an attention carrier. We characterized the functional organization of task-related beta activity by (i) cortical responses (EPs) evoked by electrical stimulation of the optic chiasm and (ii) intracortical LFP correlations. During the anticipatory task, peripheral stimulation that was preceded by high-amplitude beta oscillations evoked large-amplitude EPs compared with EPs that followed low-amplitude beta. In contrast, during the stimulus-driven task, cortical EPs preceded by high-amplitude beta oscillations were, on average, smaller than those preceded by low-amplitude beta. Analysis of the correlations between the different recording sites revealed that beta activation maps were heterogeneous during the bottom-up task and homogeneous for the top-down task. We conclude that bottom-up attention activates cortical visual areas in a mosaic-like pattern, whereas top-down attentional modulation results in spatially homogeneous excitation. PMID:26730705

  1. Executive control of stimulus-driven and goal-directed attention in visual working memory.

    Science.gov (United States)

    Hu, Yanmei; Allen, Richard J; Baddeley, Alan D; Hitch, Graham J

    2016-10-01

    We examined the role of executive control in stimulus-driven and goal-directed attention in visual working memory using probed recall of a series of objects, a task that allows study of the dynamics of storage through analysis of serial position data. Experiment 1 examined whether executive control underlies goal-directed prioritization of certain items within the sequence. Instructing participants to prioritize either the first or final item resulted in improved recall for these items, and an increase in concurrent task difficulty reduced or abolished these gains, consistent with their dependence on executive control. Experiment 2 examined whether executive control is also involved in the disruption caused by a post-series visual distractor (suffix). A demanding concurrent task disrupted memory for all items except the most recent, whereas a suffix disrupted only the most recent items. There was no interaction when concurrent load and suffix were combined, suggesting that deploying selective attention to ignore the distractor did not draw upon executive resources. A final experiment replicated the independent interfering effects of suffix and concurrent load while ruling out possible artifacts. We discuss the results in terms of a domain-general episodic buffer in which information is retained in a transient, limited capacity privileged state, influenced by both stimulus-driven and goal-directed processes. The privileged state contains the most recent environmental input together with goal-relevant representations being actively maintained using executive resources.

  2. Research for the design of visual fatigue based on the computer visual communication

    Science.gov (United States)

    Deng, Hu-Bin; Ding, Bao-min

    2013-03-01

    With the rapid development of computer networks, network communication plays an increasingly important role in social, economic, and political life. Computer network communication, through modern media and by way of visual communication, affects the public emotionally and spiritually as well as in their careers and other aspects of life. Its rapid growth, however, has also brought problems: when conveying a message to the public, a design that lacks a sufficiently refined form of expression not only conveys the wrong message but also causes physical and psychological fatigue in the audience. This is known as visual fatigue. In order to reduce this fatigue when people obtain useful information while using a computer, and to let the audience obtain the most useful information in a short time, this article gives a detailed account of its causes, proposes effective solutions, explains them through specific examples, and discusses the development prospects of visual communication in future computer design applications.

  3. Towards deterministic optical quantum computation with coherently driven atomic ensembles

    International Nuclear Information System (INIS)

    Petrosyan, David

    2005-01-01

    Scalable and efficient quantum computation with photonic qubits requires (i) deterministic sources of single photons, (ii) giant nonlinearities capable of entangling pairs of photons, and (iii) reliable single-photon detectors. In addition, an optical quantum computer would need a robust reversible photon storage device. Here we discuss several related techniques, based on the coherent manipulation of atomic ensembles in the regime of electromagnetically induced transparency, that are capable of implementing all of the above prerequisites for deterministic optical quantum computation with single photons

  4. Visualization and computer graphics on isotropically emissive volumetric displays.

    Science.gov (United States)

    Mora, Benjamin; Maciejewski, Ross; Chen, Min; Ebert, David S

    2009-01-01

    The availability of commodity volumetric displays provides ordinary users with a new means of visualizing 3D data. Many of these displays are in the class of isotropically emissive light devices, which are designed to directly illuminate voxels in a 3D frame buffer, producing X-ray-like visualizations. While this technology can offer intuitive insight into a 3D object, the visualizations are perceptually different from what a computer graphics or visualization system would render on a 2D screen. This paper formalizes rendering on isotropically emissive displays and introduces a novel technique that emulates traditional rendering effects on isotropically emissive volumetric displays, delivering results that are much closer to what is traditionally rendered on regular 2D screens. Such a technique can significantly broaden the capability and usage of isotropically emissive volumetric displays. Our method takes a 3D dataset or object as the input, creates an intermediate light field, and outputs a special 3D volume dataset called a lumi-volume. This lumi-volume encodes approximated rendering effects in a form suitable for display with accumulative integrals along unobtrusive rays. When a lumi-volume is fed directly into an isotropically emissive volumetric display, it creates a 3D visualization with surface shading effects that are familiar to the users. The key to this technique is an algorithm for creating a 3D lumi-volume from a 4D light field. In this paper, we discuss a number of technical issues, including transparency effects due to the dimension reduction and sampling rates for light fields and lumi-volumes. We show the effectiveness and usability of this technique with a selection of experimental results captured from an isotropically emissive volumetric display, and we demonstrate its potential capability and scalability with computer-simulated high-resolution results.
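    The accumulative integral along rays that an isotropically emissive display realizes can be sketched as an emission-only (X-ray-like) projection. The volume contents, size, and axis-aligned rays are illustrative assumptions, not the paper's data:

```python
import numpy as np

def xray_project(lumi_volume, axis=2):
    """Accumulate voxel emission along parallel rays: no absorption and
    no occlusion, matching an isotropically emissive display."""
    return lumi_volume.sum(axis=axis)

# Hypothetical 32^3 lumi-volume containing a bright centred ball.
n = 32
r = np.arange(n) - n / 2 + 0.5
x, y, z = np.meshgrid(r, r, r, indexing="ij")
volume = (np.sqrt(x**2 + y**2 + z**2) < 8).astype(float)

image = xray_project(volume)       # X-ray-like view along the z-axis
print(image.shape)                 # → (32, 32)
```

    Rays through the centre of the ball accumulate the most emission, producing the characteristic X-ray look with no surface shading; the paper's lumi-volume is precisely a volume pre-baked so that this same accumulative integral reproduces shaded 2D-style rendering effects.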

  5. Ancilla-driven quantum computation for qudits and continuous variables

    Science.gov (United States)

    Proctor, Timothy; Giulian, Melissa; Korolkova, Natalia; Andersson, Erika; Kendon, Viv

    2017-05-01

    Although qubits are the leading candidate for the basic elements in a quantum computer, there are also a range of reasons to consider using higher-dimensional qudits or quantum continuous variables (QCVs). In this paper, we use a general "quantum variable" formalism to propose a method of quantum computation in which ancillas are used to mediate gates on a well-isolated "quantum memory" register and which may be applied to the setting of qubits, qudits (for d >2 ), or QCVs. More specifically, we present a model in which universal quantum computation may be implemented on a register using only repeated applications of a single fixed two-body ancilla-register interaction gate, ancillas prepared in a single state, and local measurements of these ancillas. In order to maintain determinism in the computation, adaptive measurements via a classical feed forward of measurement outcomes are used, with the method similar to that in measurement-based quantum computation (MBQC). We show that our model has the same hybrid quantum-classical processing advantages as MBQC, including the power to implement any Clifford circuit in essentially one layer of quantum computation. In some physical settings, high-quality measurements of the ancillas may be highly challenging or not possible, and hence we also present a globally unitary model which replaces the need for measurements of the ancillas with the requirement for ancillas to be prepared in states from a fixed orthonormal basis. Finally, we discuss settings in which these models may be of practical interest.

  6. On line and on paper: Visual representations, visual culture, and computer graphics in design engineering

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, K.

    1991-01-01

    The research presented examines the visual communication practices of engineers and the impact of the implementation of computer graphics on their visual culture. The study is based on participant observation of day-to-day practices in two contemporary industrial settings among engineers engaged in the actual process of designing new pieces of technology. In addition, over thirty interviews were conducted at other industrial sites to confirm that the findings were not an isolated phenomenon. The data show that there is no 'one best way' to use a computer graphics system; rather, use is site-specific, and firms and individuals engage in mixed paper and electronic practices, as well as differential use of electronic options, to get the job done. This research illustrates that rigid models which assume a linear theory of innovation, projecting a straightforward process from idea, to drawing, to prototype, to production, are seriously misguided.

  7. A computational approach for fluid queues driven by truncated birth-death processes.

    NARCIS (Netherlands)

    Lenin, R.B.; Parthasarathy, P.R.

    2000-01-01

    In this paper, we analyze fluid queues driven by truncated birth-death processes with general birth and death rates. We compute the equilibrium distribution of the content of the fluid buffer by providing efficient numerical procedures to compute the eigenvalues and the eigenvectors of the

  8. A computer graphics system for visualizing spacecraft in orbit

    Science.gov (United States)

    Eyles, Don E.

    1989-01-01

    To carry out unanticipated operations with resources already in space is part of the rationale for a permanently manned space station in Earth orbit. The astronauts aboard a space station will require an on-board, spatial display tool to assist the planning and rehearsal of upcoming operations. Such a tool can also help astronauts to monitor and control such operations as they occur, especially in cases where first-hand visibility is not possible. A computer graphics visualization system designed for such an application and currently implemented as part of a ground-based simulation is described. The visualization system presents to the user the spatial information available in the spacecraft's computers by drawing a dynamic picture containing the planet Earth, the Sun, a star field, and up to two spacecraft. The point of view within the picture can be controlled by the user to obtain a number of specific visualization functions. The elements of the display, the methods used to control the display's point of view, and some of the ways in which the system can be used are described.

  9. Scientific visualization in computational aerodynamics at NASA Ames Research Center

    Science.gov (United States)

    Bancroft, Gordon V.; Plessel, Todd; Merritt, Fergus; Walatka, Pamela P.; Watson, Val

    1989-01-01

    The visualization methods used in computational fluid dynamics research at the NASA-Ames Numerical Aerodynamic Simulation facility are examined, including postprocessing, tracking, and steering methods. The visualization requirements of the facility's three-dimensional graphical workstation are outlined, and the types of hardware and software used to meet these requirements are discussed. The main features of the facility's current and next-generation workstations are listed. Emphasis is given to postprocessing techniques, such as dynamic interactive viewing on the workstation and recording and playback on videodisk, tape, and 16-mm film. Postprocessing software packages are described, including a three-dimensional plotter, a surface modeler, a graphical animation system, a flow-analysis software toolkit, and a real-time interactive particle tracer.

  10. Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE.

    Science.gov (United States)

    Demelo, Jonathan; Parsons, Paul; Sedig, Kamran

    2017-02-02

    Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with the large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface: (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multistage triaging support to help mitigate the information overload problem. We developed a Web-based tool, the Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts.

  11. Ontology-Driven Discovery of Scientific Computational Entities

    Science.gov (United States)

    Brazier, Pearl W.

    2010-01-01

    Many geoscientists use modern computational resources, such as software applications, Web services, scientific workflows and datasets that are readily available on the Internet, to support their research and many common tasks. These resources are often shared via human contact and sometimes stored in data portals; however, they are not necessarily…

  12. Visual Benefits in Apparent Motion Displays: Automatically Driven Spatial and Temporal Anticipation Are Partially Dissociated.

    Directory of Open Access Journals (Sweden)

    Merle-Marie Ahrens

    Full Text Available Many behaviourally relevant sensory events such as motion stimuli and speech have an intrinsic spatio-temporal structure. This will engage intentional and most likely unintentional (automatic prediction mechanisms enhancing the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes that are automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli either moved towards or away from peripheral target positions (valid vs. invalid spatial motion cueing at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing. Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of upcoming target position (by design and task-irrelevant (by instruction, and by creating instead endogenous (orthogonal expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralisation of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams. This contrasts with previous findings that instead suggest an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need for isolating intentional from unintentional processes for better understanding the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with predictable spatio-temporal structure such as motion and speech.

  13. A visual interface to computer programs for linkage analysis.

    Science.gov (United States)

    Chapman, C J

    1990-06-01

    This paper describes a visual approach to the input of information about human families into computer databases, making use of the GEM graphic interface on the Atari ST. Similar approaches could be used on the Apple Macintosh or on the IBM PC AT (to which it has been transferred). For occasional users of pedigree analysis programs, this approach has considerable advantages in ease of use and accessibility. An example of such use might be the analysis of risk in families with Huntington disease using linked RFLPs. However, graphic interfaces do make much greater demands on the programmers of these systems.

  14. Target-nontarget similarity decreases search efficiency and increases stimulus-driven control in visual search.

    Science.gov (United States)

    Barras, Caroline; Kerzel, Dirk

    2017-10-01

    Some points of criticism against the idea that attentional selection is controlled by bottom-up processing were dispelled by the attentional window account. The attentional window account claims that saliency computations during visual search are only performed for stimuli inside the attentional window. Therefore, a small attentional window may avoid attentional capture by salient distractors because it is likely that the salient distractor is located outside the window. In contrast, a large attentional window increases the chances of attentional capture by a salient distractor. Large and small attentional windows have been associated with efficient (parallel) and inefficient (serial) search, respectively. We compared the effect of a salient color singleton on visual search for a shape singleton during efficient and inefficient search. To vary search efficiency, the nontarget shapes were either similar or dissimilar with respect to the shape singleton. We found that interference from the color singleton was larger with inefficient than efficient search, which contradicts the attentional window account. While inconsistent with the attentional window account, our results are predicted by computational models of visual search. Because of target-nontarget similarity, the target was less salient with inefficient than efficient search. Consequently, the relative saliency of the color distractor was higher with inefficient than with efficient search. Accordingly, stronger attentional capture resulted. Overall, the present results show that bottom-up control by stimulus saliency is stronger when search is difficult, which is inconsistent with the attentional window account.

  15. Community-driven computational biology with Debian Linux.

    Science.gov (United States)

    Möller, Steffen; Krabbenhöft, Hajo Nils; Tille, Andreas; Paleino, David; Williams, Alan; Wolstencroft, Katy; Goble, Carole; Holland, Richard; Belhachemi, Dominique; Plessy, Charles

    2010-12-21

    The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers.

  16. Advanced Computational Models for Accelerator-Driven Systems

    International Nuclear Information System (INIS)

    Talamo, A.; Ravetto, P.; Gudowski, W.

    2012-01-01

    In the nuclear engineering scientific community, Accelerator Driven Systems (ADSs) have been proposed and investigated for the transmutation of nuclear waste, especially plutonium and minor actinides. These fuels have a quite low effective delayed neutron fraction relative to uranium fuel; therefore, the subcriticality of the core offers a unique safety feature with respect to critical reactors. The intrinsic safety of ADS allows the elimination of the operational control rods, hence the reactivity excess during burnup can be managed by the intensity of the proton beam, fuel shuffling, and possibly by burnable poisons. However, the intrinsic safety of a subcritical system does not guarantee that ADSs are immune from severe accidents (core melting), since the decay heat of an ADS is very similar to that of a critical system. Normally, ADSs operate with an effective multiplication factor between 0.98 and 0.92, which means that the spallation neutron source contributes little to the neutron population. In addition, for 1 GeV incident protons and a lead-bismuth target, about 50% of the spallation neutrons have energies below 1 MeV and only 15% have energies above 3 MeV. In the light of these remarks, the transmutation performances of ADS are very close to those of critical reactors.
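
The claim that the spallation source contributes little to the neutron population follows from the standard source-multiplication formula for a subcritical core, M = 1/(1 − k_eff): each external neutron starts a fission chain of expected length M, so the source itself supplies only a fraction (1 − k_eff) of all neutrons. A quick check over the quoted k_eff range:

```python
# Source multiplication in a subcritical core: each external (spallation)
# neutron starts a fission chain of expected length M = 1 / (1 - k_eff),
# so the external source supplies only a fraction (1 - k_eff) of all neutrons.

def source_multiplication(k_eff: float) -> float:
    assert 0.0 < k_eff < 1.0, "formula applies to subcritical systems only"
    return 1.0 / (1.0 - k_eff)

for k in (0.92, 0.95, 0.98):
    m = source_multiplication(k)
    print(f"k_eff={k:.2f}: M={m:5.1f}, source fraction={1 - k:.0%}")
```

At k_eff = 0.98 the source accounts for only 2% of the neutron population, consistent with the remark above.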

  17. Scidac-Data: Enabling Data Driven Modeling of Exascale Computing

    Science.gov (United States)

    Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; Tsaris, Aristeidis; Norman, Andrew; Lyon, Adam; Ross, Robert

    2017-10-01

    The SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.
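
The caching behaviors such simulations model can be prototyped, in miniature, by replaying a request stream through an LRU cache and measuring the hit rate. The workload below is synthetic and purely illustrative; it is not drawn from the SciDAC-Data distributions.

```python
from collections import OrderedDict

# Minimal sketch of the kind of cache model such simulations build on:
# replay a stream of dataset requests through an LRU cache and report
# the fraction of requests served from cache.

def lru_hit_rate(requests, capacity):
    cache = OrderedDict()
    hits = 0
    for item in requests:
        if item in cache:
            hits += 1
            cache.move_to_end(item)  # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[item] = True
    return hits / len(requests)

# A skewed synthetic workload: a few "hot" files dominate, as in analysis campaigns.
workload = ["hot1", "hot2", "hot1", "cold1", "hot1", "hot2", "cold2", "hot1"]
print(lru_hit_rate(workload, capacity=2))  # 0.25
```

Sweeping `capacity` over such a replay is the simplest way to see how cache size trades off against hit rate for a given access pattern.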

  18. Computer simulation of transport driven current in tokamaks

    International Nuclear Information System (INIS)

    Nunan, W.J.; Dawson, J.M.

    1993-01-01

    Plasma transport phenomena can drive large currents parallel to an externally applied magnetic field. The Bootstrap Current Theory accounts for the effect of banana diffusion on toroidal current, but the effect is not confined to that transport regime. The authors' 2 1/2-D, electromagnetic, particle simulations have demonstrated that Maxwellian plasmas in static toroidal and vertical fields spontaneously develop significant toroidal current, even in the absence of the "seed current" which the Bootstrap Theory requires. Other simulations, in both toroidal and straight cylindrical geometries, and without any externally imposed electric field, show that if the plasma column is centrally fueled, and if the particle diffusion coefficient exceeds the magnetic diffusion coefficient (as is true in most tokamaks), then the toroidal current grows steadily. The simulations indicate that such fueling, coupled with central heating due to fusion reactions, may drive all of the tokamak's toroidal current. The Bootstrap and dynamo mechanisms do not drive toroidal current where the poloidal magnetic field is zero. The simulations, as well as initial theoretical work, indicate that in tokamak plasmas, various processes naturally transport current from the outer regions of the plasma to the magnetic axis. The mechanisms which cause this effective electron viscosity include conventional binary collisions, wave emission and reabsorption, and also convection associated with E × B vortex motion. The simulations also exhibit preferential loss of particles carrying current opposing the bulk plasma current. This preferential loss generates current even at the magnetic axis. If these self-seeding mechanisms function in experiments as they do in the simulations, then transport-driven current would eliminate the need for any external current drive in tokamaks, except simple ohmic heating for initial generation of the plasma.

  19. Natural Inspired Intelligent Visual Computing and Its Application to Viticulture.

    Science.gov (United States)

    Ang, Li Minn; Seng, Kah Phooi; Ge, Feng Lu

    2017-05-23

    This paper presents an investigation of nature-inspired intelligent computing and its corresponding application towards visual information processing systems for viticulture. The paper has three contributions: (1) a review of visual information processing applications for viticulture; (2) the development of nature-inspired computing algorithms based on artificial immune system (AIS) techniques for grape berry detection; and (3) the application of the developed algorithms towards real-world grape berry images captured in natural conditions from vineyards in Australia. The AIS algorithms in (2) were developed based on a nature-inspired clonal selection algorithm (CSA) which is able to detect the arcs in the berry images with precision, based on a fitness model. The arcs detected are then extended to perform the multiple-arc and ring detection processing for the berry detection application. The performance of the developed algorithms was compared with traditional image processing algorithms such as the circular Hough transform (CHT) and other well-known circle detection methods. The proposed AIS approach gave an F-score of 0.71, compared with F-scores of 0.28 and 0.30 for the CHT and a parameter-free circle detection technique (RPCD), respectively.
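
The F-scores quoted above combine precision and recall as their harmonic mean, F = 2PR/(P + R). A minimal sketch of the metric; the precision/recall inputs here are hypothetical, not values from the paper:

```python
# F-score as used to compare the detectors: the harmonic mean of
# precision (P) and recall (R), F = 2*P*R / (P + R).

def f_score(precision: float, recall: float) -> float:
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical detector with P = 0.75 and R = 0.67:
print(round(f_score(0.75, 0.67), 2))  # 0.71
```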

  20. To be selected or not to be selected : a modeling and behavioral study of the mechanisms underlying stimulus-driven and top-down visual attention

    NARCIS (Netherlands)

    van der Voort van der Kleij, Gwendid T.

    2007-01-01

    This thesis investigates the mechanisms of stimulus-driven visual attention (global saliency), the mechanisms of top-down visual attention, and the interaction between these mechanisms, in visual search. Following the outline of an existing model of top-down visual attention, namely the Closed-Loop…

  1. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    Science.gov (United States)

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  2. Exogenously-driven perceptual alternation of a bistable image: From the perspective of the visual change detection process.

    Science.gov (United States)

    Urakawa, Tomokazu; Aragaki, Tomoya; Araki, Osamu

    2017-07-13

    Based on the predictive coding framework, the present behavioral study focused on the automatic visual change detection process, which yields a concomitant prediction error, as one of the visual processes relevant to the exogenously-driven perceptual alternation of a bistable image. According to this perspective, we speculated that the automatic visual change detection process with an enhanced prediction error is relevant to the greater induction of exogenously-driven perceptual alternation and attempted to test this hypothesis. A modified version of the oddball paradigm was used based on previous electroencephalographic studies on visual change detection, in which the deviant and standard defined by the bar's orientation were symmetrically presented around a continuously presented Necker cube (a bistable image). By manipulating inter-stimulus intervals and the number of standard repetitions, we set three experimental blocks: HM, IM, and LM blocks, in which the strength of the prediction error to the deviant relative to the standard was expected to gradually decrease in that order. The results obtained showed that the deviant significantly increased perceptual alternation of the Necker cube over that by the standard from before to after the presentation of the deviant. Furthermore, the differential proportion of the deviant relative to the standard significantly decreased from the HM block to the IM and LM blocks. These results are consistent with our hypothesis, supporting the involvement of the automatic visual change detection process in the induction of exogenously-driven perceptual alternation. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. A computational approach for a fluid queue driven by a truncated birth-death process

    NARCIS (Netherlands)

    Lenin, R.B.; Parthasarathy, P.R.

    1999-01-01

    In this paper, we consider a fluid queue driven by a truncated birth-death process with general birth and death rates. We find the equilibrium distribution of the content of the fluid buffer by computing the eigenvalues and eigenvectors of an associated real tridiagonal matrix. We provide efficient…
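
The spectral step described, computing the eigenvalues and eigenvectors of a real tridiagonal matrix built from the birth and death rates, can be sketched as follows. The rates below are illustrative, not taken from the paper:

```python
import numpy as np

# Sketch of the spectral step: the generator of a truncated birth-death
# process is a real tridiagonal matrix; its eigenvalues/eigenvectors drive
# the equilibrium buffer-content distribution. Rates below are illustrative.

def bd_generator(birth, death):
    """Tridiagonal generator Q for states 0..n, given birth/death rates."""
    n = len(birth) + 1
    q = np.zeros((n, n))
    for i, b in enumerate(birth):
        q[i, i + 1] = b          # birth transition i -> i+1
        q[i + 1, i] = death[i]   # death transition i+1 -> i
    np.fill_diagonal(q, -q.sum(axis=1))  # rows of a generator sum to zero
    return q

q = bd_generator(birth=[1.0, 1.0], death=[1.0, 1.0])
eigvals = np.sort(np.linalg.eigvals(q).real)
print(eigvals)  # one zero eigenvalue (equilibrium), the rest negative
```

For larger truncations a symmetrizing similarity transform plus a dedicated tridiagonal eigensolver is the usual efficiency improvement over a dense eigendecomposition.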

  4. Trident: scalable compute archives: workflows, visualization, and analysis

    Science.gov (United States)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Kotulla, Ralf; Henschel, Robert; Harbeck, Daniel

    2016-08-01

    The Astronomy scientific community has embraced Big Data processing challenges, e.g. associated with time-domain astronomy, and come up with a variety of novel and efficient data processing solutions. However, data processing is only a small part of the Big Data challenge. Efficient knowledge discovery and scientific advancement in the Big Data era requires new and equally efficient tools: modern user interfaces for searching, identifying and viewing data online without direct access to the data; tracking of data provenance; searching, plotting and analyzing metadata; interactive visual analysis, especially of (time-dependent) image data; and the ability to execute pipelines on supercomputing and cloud resources with minimal user overhead or expertise even to novice computing users. The Trident project at Indiana University offers a comprehensive web and cloud-based microservice software suite that enables the straightforward deployment of highly customized Scalable Compute Archive (SCA) systems, including extensive visualization and analysis capabilities, with a minimal amount of additional coding. Trident seamlessly scales up or down in terms of data volumes and computational needs, and allows feature sets within a web user interface to be quickly adapted to meet individual project requirements. Domain experts only have to provide code or business logic about handling/visualizing their domain's data products and about executing their pipelines and application work flows. Trident's microservices architecture is made up of light-weight services connected by a REST API and/or a message bus; web interface elements are built using NodeJS, AngularJS, and HighCharts JavaScript libraries among others, while backend services are written in NodeJS, PHP/Zend, and Python. The software suite currently consists of (1) a simple work flow execution framework to integrate, deploy, and execute pipelines and applications and (2) a progress service to monitor work flows and sub…

  5. VASCo: computation and visualization of annotated protein surface contacts

    Directory of Open Access Journals (Sweden)

    Thallinger Gerhard G

    2009-01-01

    Full Text Available Background: Structural data from crystallographic analyses contain a vast amount of information on protein-protein contacts. Knowledge on protein-protein interactions is essential for understanding many processes in living cells. The methods to investigate these interactions range from genetics to biophysics, crystallography, bioinformatics and computer modeling. Also crystal contact information can be useful to understand biologically relevant protein oligomerisation as they rely in principle on the same physico-chemical interaction forces. Visualization of crystal and biological contact data including different surface properties can help to analyse protein-protein interactions. Results: VASCo is a program package for the calculation of protein surface properties and the visualization of annotated surfaces. Special emphasis is laid on protein-protein interactions, which are calculated based on surface point distances. The same approach is used to compare surfaces of two aligned molecules. Molecular properties such as electrostatic potential or hydrophobicity are mapped onto these surface points. Molecular surfaces and the corresponding properties are calculated using well established programs integrated into the package, as well as using custom developed programs. The modular package can easily be extended to include new properties for annotation. The output of the program is most conveniently displayed in PyMOL using a custom-made plug-in. Conclusion: VASCo supplements other available protein contact visualisation tools and provides additional information on biological interactions as well as on crystal contacts. The tool provides a unique feature to compare surfaces of two aligned molecules based on point distances and thereby facilitates the visualization and analysis of surface differences.

  6. Contingency Analysis Post-Processing With Advanced Computing and Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu; Glaesemann, Kurt; Fitzhenry, Erin

    2017-07-01

    Contingency analysis is a critical function widely used in energy management systems to assess the impact of power system component failures. Its outputs are important for power system operation for improved situational awareness, power system planning studies, and power market operations. With the increased complexity of power system modeling and simulation caused by increased energy production and demand, the penetration of renewable energy and fast deployment of smart grid devices, and the trend of operating grids closer to their capacity for better efficiency, more and more contingencies must be executed and analyzed quickly in order to ensure grid reliability and accuracy for the power market. Currently, many researchers have proposed different techniques to accelerate the computational speed of contingency analysis, but not much work has been published on how to post-process the large amount of contingency outputs quickly. This paper proposes a parallel post-processing function that can analyze contingency analysis outputs faster and display them in a web-based visualization tool to help power engineers improve their work efficiency by fast information digestion. Case studies using an ESCA-60 bus system and a WECC planning system are presented to demonstrate the functionality of the parallel post-processing technique and the web-based visualization tool.
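
The parallel post-processing idea can be illustrated by scanning contingency outputs concurrently for limit violations. Everything below is synthetic: the contingency names, the per-unit voltage limits, and the result records are illustrative assumptions, not outputs of the paper's tool.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of parallel post-processing: check each contingency's bus-voltage
# results concurrently and flag out-of-limit values. Records are synthetic.

LOW, HIGH = 0.95, 1.05  # illustrative per-unit voltage limits

def find_violations(record):
    name, voltages = record
    bad = [v for v in voltages if not LOW <= v <= HIGH]
    return name, bad

contingencies = [
    ("line_1_out", [1.00, 0.98, 1.02]),
    ("line_2_out", [0.93, 1.01, 1.06]),
    ("xfmr_3_out", [0.96, 1.04, 1.00]),
]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(find_violations, contingencies))

flagged = {name: bad for name, bad in results.items() if bad}
print(flagged)  # only contingencies with out-of-limit voltages remain
```

In a real deployment the per-record check would parse solver output files and the flagged summary would feed the web-based visualization, but the map-then-filter structure is the same.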

  7. Cardiac-driven Pulsatile Motion of Intracranial Cerebrospinal Fluid Visualized Based on a Correlation Mapping Technique.

    Science.gov (United States)

    Yatsushiro, Satoshi; Sunohara, Saeko; Hayashi, Naokazu; Hirayama, Akihiro; Matsumae, Mitsunori; Atsumi, Hideki; Kuroda, Kagayaki

    2018-04-10

    A correlation mapping technique delineating delay time and maximum correlation for characterizing pulsatile cerebrospinal fluid (CSF) propagation was proposed. After proofing its technical concept, this technique was applied to healthy volunteers and idiopathic normal pressure hydrocephalus (iNPH) patients. A time-resolved three-dimensional phase-contrast (3D-PC) acquisition sampled the cardiac-driven CSF velocity at 32 temporal points per cardiac period at each spatial location using retrospective cardiac gating. The proposed technique visualized distributions of propagation delay and correlation coefficient of the PC-based CSF velocity waveform with reference to a waveform at a particular point in the CSF space. The delay time was obtained as the amount of time-shift giving the maximum correlation for the velocity waveform at an arbitrary location with that at the reference location. The validity and accuracy of the technique were confirmed in a flow phantom equipped with a cardiovascular pump. The technique was then applied to evaluate the intracranial CSF motions in young, healthy (N = 13) and elderly, healthy (N = 13) volunteers and iNPH patients (N = 13). The phantom study demonstrated that the root mean square error of the delay time was 2.27%, which was less than the temporal resolution of the PC measurement used in this study (3.13% of a cardiac cycle). The human studies showed a significant difference in the correlation coefficient between the young, healthy group and the other two groups, and a significant difference in correlation coefficients in intracranial CSF space among all groups. The result suggests that the CSF space compliance of iNPH patients was lower than that of healthy volunteers. The correlation mapping technique allowed us to visualize pulsatile CSF velocity wave propagations as still images. The technique may help to classify diseases related to CSF dynamics, such as iNPH.
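
The delay-time estimate at the heart of the correlation mapping, the time-shift giving the maximum correlation of a local waveform against a reference waveform, can be sketched as follows. The waveforms below are synthetic, not patient data:

```python
import numpy as np

# Sketch of the delay-time estimate behind the correlation mapping: shift a
# local velocity waveform against the reference waveform and keep the shift
# with the maximum Pearson correlation. Synthetic periodic waveforms below.

def delay_and_max_corr(reference, local):
    n = len(reference)
    best_shift, best_r = 0, -np.inf
    for shift in range(n):
        r = np.corrcoef(reference, np.roll(local, -shift))[0, 1]
        if r > best_r:
            best_shift, best_r = shift, r
    return best_shift, best_r

t = np.arange(32)                 # 32 temporal points per cardiac period
ref = np.sin(2 * np.pi * t / 32)  # reference CSF velocity waveform
loc = np.roll(ref, 4)             # same waveform delayed by 4 samples
shift, r = delay_and_max_corr(ref, loc)
print(shift, round(r, 3))  # recovers the 4-sample delay
```

Repeating this at every voxel against one reference location yields exactly the two maps the technique visualizes: delay time (best shift) and maximum correlation (best r).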

  8. Experience-driven formation of parts-based representations in a model of layered visual memory

    Directory of Open Access Journals (Sweden)

    Jenia Jitsev

    2009-09-01

    Full Text Available Growing neuropsychological and neurophysiological evidence suggests that the visual cortex uses parts-based representations to encode, store and retrieve relevant objects. In such a scheme, objects are represented as a set of spatially distributed local features, or parts, arranged in stereotypical fashion. To encode the local appearance and to represent the relations between the constituent parts, there has to be an appropriate memory structure formed by previous experience with visual objects. Here, we propose a model of how a hierarchical memory structure supporting efficient storage and rapid recall of parts-based representations can be established by an experience-driven process of self-organization. The process is based on the collaboration of slow bidirectional synaptic plasticity and homeostatic unit activity regulation, both running on top of fast activity dynamics with winner-take-all character modulated by an oscillatory rhythm. These neural mechanisms lay down the basis for cooperation and competition between the distributed units and their synaptic connections. Choosing human face recognition as a test task, we show that, under the condition of open-ended, unsupervised incremental learning, the system is able to form memory traces for individual faces in a parts-based fashion. On a lower memory layer the synaptic structure is developed to represent local facial features and their interrelations, while the identities of different persons are captured explicitly on a higher layer. An additional property of the resulting representations is the sparseness of both the activity during recall and the synaptic patterns comprising the memory traces.

  9. Computer-Based Tutoring of Visual Concepts: From Novice to Experts.

    Science.gov (United States)

    Sharples, Mike

    1991-01-01

    Description of ways in which computers might be used to teach visual concepts discusses hypermedia systems; describes computer-generated tutorials; explains the use of computers to create learning aids such as concept maps, feature spaces, and structural models; and gives examples of visual concept teaching in medical education. (10 references)…

  10. A Gaze-Driven Evolutionary Algorithm to Study Aesthetic Evaluation of Visual Symmetry

    Directory of Open Access Journals (Sweden)

    Alexis D. J. Makin

    2016-03-01

    Full Text Available Empirical work has shown that people like visual symmetry. We used a gaze-driven evolutionary algorithm technique to answer three questions about symmetry preference. First, do people automatically evaluate symmetry without explicit instruction? Second, is perfect symmetry the best stimulus, or do people prefer a degree of imperfection? Third, does initial preference for symmetry diminish after familiarity sets in? Stimuli were generated as phenotypes from an algorithmic genotype, with genes for symmetry (coded as deviation from a symmetrical template, deviation–symmetry, DS gene and orientation (0° to 90°, orientation, ORI gene. An eye tracker identified phenotypes that were good at attracting and retaining the gaze of the observer. Resulting fitness scores determined the genotypes that passed to the next generation. We recorded changes to the distribution of DS and ORI genes over 20 generations. When participants looked for symmetry, there was an increase in high-symmetry genes. When participants looked for the patterns they preferred, there was a smaller increase in symmetry, indicating that people tolerated some imperfection. Conversely, there was no increase in symmetry during free viewing, and no effect of familiarity or orientation. This work demonstrates the viability of the evolutionary algorithm approach as a quantitative measure of aesthetic preference.
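
The gaze-driven loop can be sketched with a toy fitness function standing in for the eye tracker. Everything below is an illustrative assumption (population size, mutation step, the stand-in "dwell time" fitness), not the study's actual procedure, which used measured gaze:

```python
import random

# Toy sketch of the gaze-driven evolutionary loop: each genotype carries a
# DS gene (deviation from a symmetrical template) and an ORI gene
# (orientation, 0-90 degrees). A stand-in fitness rewarding low deviation
# plays the role of the eye tracker's dwell-time measurement.

random.seed(0)

def fitness(genome):
    ds, ori = genome
    return 1.0 - ds  # longer simulated dwell on more symmetrical patterns

def evolve(pop, generations=20, mut=0.05):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]  # "gaze-selected" survivors
        children = [
            (min(1.0, max(0.0, ds + random.uniform(-mut, mut))),
             (ori + random.uniform(-5, 5)) % 90)
            for ds, ori in parents
        ]
        pop = parents + children
    return pop

def mean_ds(pop):
    return sum(g[0] for g in pop) / len(pop)

start = [(random.random(), random.uniform(0, 90)) for _ in range(20)]
end = evolve(list(start))
print(mean_ds(start) > mean_ds(end))  # DS genes drift toward symmetry
```

Tracking the DS and ORI gene distributions across generations, as the study did, is then just bookkeeping over `pop` at each iteration.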

  11. Emotion-prints: interaction-driven emotion visualization on multi-touch interfaces

    Science.gov (United States)

    Cernea, Daniel; Weber, Christopher; Ebert, Achim; Kerren, Andreas

    2015-01-01

    Emotions are one of the unique aspects of human nature, and sadly at the same time one of the elements that our technological world is failing to capture and consider due to their subtlety and inherent complexity. But with the current dawn of new technologies that enable the interpretation of emotional states based on techniques involving facial expressions, speech and intonation, electrodermal response (EDS) and brain-computer interfaces (BCIs), we are finally able to access real-time user emotions in various system interfaces. In this paper we introduce emotion-prints, an approach for visualizing user emotional valence and arousal in the context of multi-touch systems. Our goal is to offer a standardized technique for representing user affective states in the moment when and at the location where the interaction occurs in order to increase affective self-awareness, support awareness in collaborative and competitive scenarios, and offer a framework for aiding the evaluation of touch applications through emotion visualization. We show that emotion-prints are not only independent of the shape of the graphical objects on the touch display, but also that they can be applied regardless of the acquisition technique used for detecting and interpreting user emotions. Moreover, our representation can encode any affective information that can be decomposed or reduced to Russell's two-dimensional space of valence and arousal. Our approach is supported by a BCI-based user study and a follow-up discussion of advantages and limitations.

  12. Foundations of computer vision computational geometry, visual image structures and object shape detection

    CERN Document Server

    Peters, James F

    2017-01-01

    This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, the classification of chapter problems as easily solved or challenging, and its extensive glossary of key words, examples and connections with the fabric of C…

  13. Visualizing risks in cancer communication: A systematic review of computer-supported visual aids.

    Science.gov (United States)

    Stellamanns, Jan; Ruetters, Dana; Dahal, Keshav; Schillmoeller, Zita; Huebner, Jutta

    2017-08-01

    Health websites are becoming important sources of cancer information. Lay users, patients and carers seek support for critical decisions, but they are prone to common biases when quantitative information is presented. Graphical representations of risk data can facilitate comprehension, and interactive visualizations are popular. This review summarizes the evidence on computer-supported graphs that present risk data and their effects on various measures. The systematic literature search was conducted in several databases, including MEDLINE, EMBASE and CINAHL. Only studies with a controlled design were included. Relevant publications were carefully selected and critically appraised by two reviewers. Thirteen studies were included: ten evaluated static graphs and three dynamic formats. Most decision scenarios were hypothetical. Static graphs could improve accuracy, comprehension and behavioural intention, but the results were heterogeneous and inconsistent across studies. Dynamic formats were not superior and sometimes even impaired performance compared to static formats. Static graphs show promising but inconsistent results, while research on dynamic visualizations is scarce and must be interpreted cautiously due to methodological limitations. Well-designed and context-specific static graphs can support web-based cancer risk communication in particular populations. The application of dynamic formats cannot be recommended and needs further research. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Graphics and Visualization: Breaking New Frontiers (Introduction to the Special Theme Section on "Computer Graphics and Visualization")

    OpenAIRE

    O'Sullivan, Carol; Scopigno, Roberto

    2001-01-01

    From the early graphical applications such as flight simulators, to today's stunning special effects in movies, computer graphics have had a significant impact upon the way computers have been used to represent and visualize the world. There are many big problems left to be solved, some of which are reflected in the following pages of this issue.

  15. Instrumentation in Support of Interactive Visualization, Computation and Simulation

    National Research Council Canada - National Science Library

    Wegman, Edward

    1997-01-01

    ... and related spatial and volumetric visualization problems. By virtual environments, we mean an immersive visual and audio technology such that the experimenter has little or no awareness of the real environment...

  16. Introduction of computing in physics learning: visual programming

    International Nuclear Information System (INIS)

    Kim, Cheung Seop

    1999-12-01

    This book introduces physics and programming: the foundations of Visual Basic, its grammar, visual programming, the solution of equations, matrix calculations, the solution of simultaneous equations, differentiation, differential equations, simultaneous and second-order differential equations, integration, and the solution of partial differential equations. It also covers the BASIC language, Visual Basic terminology, the use of methods, graphical methods, the step-by-step method, the false-position method, Gauss elimination, the difference method and Euler's method.
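
    Methods like those listed translate readily between languages. As a minimal illustration (in Python rather than the book's Visual Basic), the explicit Euler method for an initial-value problem dy/dt = f(t, y) can be written as:

```python
def euler(f, y0, t0, t1, n):
    """Explicit Euler method for dy/dt = f(t, y): take n equal steps
    of size h = (t1 - t0)/n starting from y(t0) = y0 and return the
    approximation of y(t1)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)   # y_{k+1} = y_k + h * f(t_k, y_k)
        t += h
    return y
```

    For example, integrating dy/dt = y from y(0) = 1 approximates e at t = 1, with error shrinking linearly in the step size.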

  17. Ground-based PIV and numerical flow visualization results from the Surface Tension Driven Convection Experiment

    Science.gov (United States)

    Pline, Alexander D.; Werner, Mark P.; Hsieh, Kwang-Chung

    1991-01-01

    The Surface Tension Driven Convection Experiment (STDCE) is a Space Transportation System flight experiment to study both transient and steady thermocapillary fluid flows aboard the United States Microgravity Laboratory-1 (USML-1) Spacelab mission planned for June 1992. One component of the data collected during the experiment is a video record of the flow field. This qualitative data is quantified using an all-electric, two-dimensional Particle Image Velocimetry (PIV) technique called Particle Displacement Tracking (PDT), which uses a simple space-domain particle-tracking algorithm. Results obtained with the ground-based STDCE hardware, operated in a radiant-flux heating mode, and the PDT system are compared to numerical solutions obtained by solving the axisymmetric Navier-Stokes equations with a deformable free surface. The PDT technique succeeds in producing, from the raw video data, a velocity vector field and corresponding stream function that satisfactorily represent the physical flow. A numerical program is used to compute the velocity field and corresponding stream function under identical conditions. Both the PDT and numerical results were compared to a streak photograph, used as a benchmark, with good correlation.
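
    The core of a space-domain particle-tracking step can be sketched as nearest-neighbour matching of particle centroids between two consecutive frames. This is a generic illustration under simplifying assumptions, not the PDT implementation:

```python
import math

def track_particles(frame_a, frame_b, max_disp):
    """Pair each particle centroid in frame_a with its closest unclaimed
    centroid in frame_b, rejecting matches farther than max_disp.
    Returns a list of (position, displacement) pairs, from which a
    velocity vector field follows by dividing by the frame interval."""
    matches, claimed = [], set()
    for ax, ay in frame_a:
        best, best_d = None, max_disp
        for j, (bx, by) in enumerate(frame_b):
            d = math.hypot(bx - ax, by - ay)
            if j not in claimed and d <= best_d:
                best, best_d = j, d
        if best is not None:
            claimed.add(best)
            bx, by = frame_b[best]
            matches.append(((ax, ay), (bx - ax, by - ay)))
    return matches
```

    Real PIV/PDT systems add sub-pixel centroid estimation and outlier rejection on top of a matching step of this kind.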

  18. 38 CFR 4.76a - Computation of average concentric contraction of visual fields.

    Science.gov (United States)

    2010-07-01

    ... concentric contraction of visual fields. 4.76a Section 4.76a Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS SCHEDULE FOR RATING DISABILITIES Disability Ratings The Organs of Special Sense § 4.76a Computation of average concentric contraction of visual fields. Table III—Normal Visual...

  19. Towards computer-based perception by modeling visual perception: a probabilistic theory

    NARCIS (Netherlands)

    Ciftcioglu, O.; Bittermann, M.; Sariyildiz, S.

    2006-01-01

    Studies on computer-based perception by vision modelling are described. Visual perception is mathematically modelled, with the model receiving and interpreting visual data from the environment. Perception is defined in probabilistic terms so that it can be quantified in the same way. Human visual

  20. 3D computer visualization and animation of CANDU reactor core

    International Nuclear Information System (INIS)

    Qian, T.; Echlin, M.; Tonner, P.; Sur, B.

    1999-01-01

    Three-dimensional (3D) computer visualization and animation models of typical CANDU reactor cores (Darlington, Point Lepreau) have been developed using world-wide-web (WWW) browser based tools: JavaScript, hyper-text-markup language (HTML) and virtual reality modeling language (VRML). The 3D models provide three-dimensional views of internal control and monitoring structures in the reactor core, such as fuel channels, flux detectors, liquid zone controllers, zone boundaries, shutoff rods, poison injection tubes, ion chambers. Animations have been developed based on real in-core flux detector responses and rod position data from reactor shutdown. The animations show flux changing inside the reactor core with the drop of shutoff rods and/or the injection of liquid poison. The 3D models also provide hypertext links to documents giving specifications and historical data for particular components. Data in HTML format (or other format such as PDF, etc.) can be shown in text, tables, plots, drawings, etc., and further links to other sources of data can also be embedded. This paper summarizes the use of these WWW browser based tools, and describes the resulting 3D reactor core static and dynamic models. Potential applications of the models are discussed. (author)

  1. Image communication scheme based on dynamic visual cryptography and computer generated holography

    Science.gov (United States)

    Palevicius, Paulius; Ragulskis, Minvydas

    2015-01-01

    Computer generated holograms are often exploited to implement optical encryption schemes. This paper proposes the integration of dynamic visual cryptography (an optical technique based on the interplay of visual cryptography and time-averaging geometric moiré) with the Gerchberg-Saxton algorithm. A stochastic moiré grating is used to embed the secret into a single cover image. The secret can be visually decoded by the naked eye only if the amplitude of harmonic oscillations corresponds to a precisely preselected value. The proposed visual image encryption scheme is based on computer generated holography, optical time-averaging moiré and the principles of dynamic visual cryptography. Dynamic visual cryptography is used both for the initial encryption of the secret image and for the final decryption. Phase data of the encrypted image are computed using the Gerchberg-Saxton algorithm. The optical image is decrypted using the computationally reconstructed field of amplitudes.
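
    The Gerchberg-Saxton algorithm is an iterative phase-retrieval scheme that alternates between the source and far-field planes, imposing the known amplitude in each while keeping the computed phase. A textbook-style sketch (using an FFT to model propagation; not the paper's implementation):

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iters=200):
    """Find a source-plane phase such that propagation (modelled here by
    a 2D FFT) reproduces target_amp. Starts from a seeded random phase
    and alternately enforces the two amplitude constraints; the residual
    error is non-increasing over iterations."""
    rng = np.random.default_rng(0)
    phase = rng.uniform(0, 2 * np.pi, source_amp.shape)
    for _ in range(iters):
        far = np.fft.fft2(source_amp * np.exp(1j * phase))
        far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
        near = np.fft.ifft2(far)
        phase = np.angle(near)                         # keep phase, impose source amplitude
    return phase
```

    The retrieved phase is the quantity a computer-generated hologram encodes; the decrypted image is recovered from the reconstructed amplitude field.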

  2. The Application of Visual Basic Computer Programming Language to Simulate Numerical Iterations

    Directory of Open Access Journals (Sweden)

    Abdulkadir Baba HASSAN

    2006-06-01

    Full Text Available This paper examines the application of the Visual Basic programming language to the simulation of numerical iterations: the merits of Visual Basic as a programming language and the difficulties faced when solving numerical iterations analytically. The paper advocates the use of computer programming methods for executing numerical iterations, and develops a reliable Visual Basic program for a selection of iteration problems.
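
    As an example of the kind of iteration such a program simulates (sketched here in Python rather than Visual Basic), the Newton-Raphson root-finding iteration is:

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration for f(x) = 0 with derivative df.
    Repeats x <- x - f(x)/df(x) until the step is below tol;
    returns (root, iterations taken)."""
    x = x0
    for k in range(1, max_iter + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, k
    raise RuntimeError("did not converge in max_iter steps")
```

    Near a simple root the convergence is quadratic: solving x² − 2 = 0 from x₀ = 1 reaches √2 to machine precision in a handful of iterations.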

  3. United we sense, divided we fail: context-driven perception of ambiguous visual stimuli.

    NARCIS (Netherlands)

    Klink, P.C.; van Wezel, R.J.A.; van Ee, R.

    2012-01-01

    Ambiguous visual stimuli provide the brain with sensory information that contains conflicting evidence for multiple mutually exclusive interpretations. Two distinct aspects of the phenomenological experience associated with viewing ambiguous visual stimuli are the apparent stability of perception

  4. United we sense, divided we fail: context-driven perception of ambiguous visual stimuli

    NARCIS (Netherlands)

    Klink, P. C; van Wezel, Richard Jack Anton; van Ee, R.

    2012-01-01

    Ambiguous visual stimuli provide the brain with sensory information that contains conflicting evidence for multiple mutually exclusive interpretations. Two distinct aspects of the phenomenological experience associated with viewing ambiguous visual stimuli are the apparent stability of perception

  5. ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.

    Science.gov (United States)

    Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y

    2008-08-12

    New systems biology studies require researchers to understand how the interplay among myriad biomolecular entities is orchestrated to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform, driven by database management systems, that performs bi-directional data processing-to-visualization with declarative querying capabilities is needed. We developed ProteoLens as a Java-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both Data Definition Language (DDL) and Data Manipulation Language (DML) may be specified. The robust query languages embedded directly within the visualization software help users bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network data in the standard Graph Modeling Language (GML) format, which enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens de-couples complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms and descriptions; 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges
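
    An association rule in this sense is essentially a mapping from node or edge IDs to attribute values, and the second phase applies those mappings to annotate the network. A plain-dictionary sketch of that second phase (hypothetical names; ProteoLens itself reads the rules from Oracle/PostgreSQL tables):

```python
def apply_association_rules(node_ids, rules):
    """Annotate each node with every attribute table in rules.
    rules maps attribute name -> {node_id: value}; nodes absent from a
    table get None for that attribute. A toy stand-in for the
    database-backed annotation phase described above."""
    return {
        node_id: {attr: table.get(node_id) for attr, table in rules.items()}
        for node_id in node_ids
    }
```

    In the real tool the attribute tables come from SQL views, so the same rule can be re-applied as the underlying data changes.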

  6. A visualization environment for supercomputing-based applications in computational mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.

    1993-06-01

    In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed. A brief discussion of the software environment is included. The paper concludes by summarizing certain observations which we have made regarding the implementation of such visualization environments.

  7. A hybrid source-driven method to compute fast neutron fluence in reactor pressure vessel - 017

    International Nuclear Information System (INIS)

    Ren-Tai, Chiang

    2010-01-01

    A hybrid source-driven method is developed to compute fast neutron fluence with neutron energy greater than 1 MeV in nuclear reactor pressure vessel (RPV). The method determines neutron flux by solving a steady-state neutron transport equation with hybrid neutron sources composed of peripheral fixed fission neutron sources and interior chain-reacted fission neutron sources. The relative rod-by-rod power distribution of the peripheral assemblies in a nuclear reactor obtained from reactor core depletion calculations and subsequent rod-by-rod power reconstruction is employed as the relative rod-by-rod fixed fission neutron source distribution. All fissionable nuclides other than U-238 (such as U-234, U-235, U-236, Pu-239 etc) are replaced with U-238 to avoid counting the fission contribution twice and to preserve fast neutron attenuation for heavy nuclides in the peripheral assemblies. An example is provided to show the feasibility of the method. Since the interior fuels only have a marginal impact on RPV fluence results due to rapid attenuation of interior fast fission neutrons, a generic set or one of several generic sets of interior fuels can be used as the driver and only the neutron sources in the peripheral assemblies will be changed in subsequent hybrid source-driven fluence calculations. Consequently, this hybrid source-driven method can simplify and reduce cost for fast neutron fluence computations. This newly developed hybrid source-driven method should be a useful and simplified tool for computing fast neutron fluence at selected locations of interest in RPV of contemporary nuclear power reactors. (authors)

  8. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Maynard, Robert [Kitware, Inc., Clifton Park, NY (United States)

    2017-10-27

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from predominant DOE projects for visualization on accelerators and combining their respective features into a new visualization toolkit called VTK-m.

  9. Event Based Simulator for Parallel Computing over the Wide Area Network for Real Time Visualization

    Science.gov (United States)

    Sundararajan, Elankovan; Harwood, Aaron; Kotagiri, Ramamohanarao; Satria Prabuwono, Anton

    As the computational requirements of applications in computational science continue to grow tremendously, the use of computational resources distributed across the Wide Area Network (WAN) becomes advantageous. However, not all applications can be executed over the WAN due to communication overhead that can drastically slow down the computation. In this paper, we introduce an event-based simulator to investigate the performance of parallel algorithms executed over the WAN. The event-based simulator, known as SIMPAR (SIMulator for PARallel computation), simulates the actual computations and communications involved in parallel computation over the WAN using time stamps. Visualization of real-time applications requires a steady stream of processed data; hence, SIMPAR may prove a valuable tool for investigating which types of applications and computing resources can provide an uninterrupted flow of processed data for real-time visualization. The results obtained from the simulation show concurrence with the expected performance under the L-BSP model.
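
    The essence of such an event-based simulation can be sketched with a priority queue of worker-free times: each task runs on the earliest-available worker, and a fixed communication delay is added before its result becomes usable. This toy model is illustrative only, not SIMPAR itself:

```python
import heapq

def simulate(task_costs, n_workers, net_delay):
    """Event-driven sketch of parallel execution over a WAN: greedily
    schedule each task on the earliest-free worker, then charge a fixed
    network delay before its result arrives. Returns the time the last
    result becomes available (the simulated makespan)."""
    workers = [0.0] * n_workers          # next-free timestamp per worker
    heapq.heapify(workers)
    finish = 0.0
    for cost in task_costs:
        start = heapq.heappop(workers)   # earliest available worker
        done = start + cost
        heapq.heappush(workers, done)
        finish = max(finish, done + net_delay)
    return finish
```

    Varying net_delay in such a model shows directly when communication overhead, rather than compute time, dominates the makespan.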

  10. Interactive Computer Visualization in the Introductory Chemistry Curriculum

    Science.gov (United States)

    Bragin, Victoria M.

    1996-08-01

    Increasingly, chemistry instructors, especially in two-year colleges, find themselves teaching classes where there is great disparity in the academic preparation of the students and where even those students with good mathematics and basic science backgrounds have poor English language and communication skills. This project explores the use of technological innovations to facilitate learning in introductory chemistry courses by those with a poor academic background, while also challenging those prepared to master the curriculum. An additional objective is to improve the communication skills of all students. Material is presented visually and in as engaging a fashion as possible, students are provided ready access to relevant information about the course content in ways that are adapted to their individual learning styles, and collaborative learning is encouraged, especially among those who work and live at a distance from campus. The chief tactics employed are: Development of software that can be customized to meet the varying needs of individual students, courses, and instructors. Use of simulations that, while not replacing laboratory bench experiments, allow students to practice important laboratory techniques and observe the physical behavior of chemical systems. Use of software that allows students to explore the molecular basis of chemical phenomena. Use of software that allows students to display and analyze data in ways that facilitate drawing general conclusions about the quantitative relationships between observable properties. Use of the computer as a communications device. The ability to customize software is important in adapting to different learning styles and in encouraging students to learn by discovery. For example, TitrationLab was developed so that the material may merely be presented empirically or in ways in which the principles of equilibrium are demonstrated. At the advanced level, automatically generated titration curves are used to

  11. Deep Hierarchies in the Primate Visual Cortex: What Can We Learn for Computer Vision?

    OpenAIRE

    Kruger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodriguez-Sanchez, Antonio J.; Wiskott, Laurenz

    2013-01-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition or vision-based navigation and manipulation. This article reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer ...

  12. Neutron visual sensing techniques making good use of computer science

    International Nuclear Information System (INIS)

    Kureta, Masatoshi

    2009-01-01

    Neutron visual sensing is a nondestructive visualization and image-sensing technique. In this article, some advanced neutron visual sensing techniques are introduced, including the most up-to-date high-speed neutron radiography, neutron 3D CT, high-speed scanning neutron 3D/4D CT and multi-beam neutron 4D CT techniques, together with some fundamental application results. Oil flow in a car engine was visualized by high-speed neutron radiography to clarify previously unobserved phenomena. 4D visualization of painted sand in an hourglass is reported as a demonstration of the high-speed scanning neutron 4D CT technique. These techniques are being developed to elucidate unknown phenomena and to measure void fraction, velocity and other quantities at high speed or in 3D/4D for many industrial applications. (author)

  13. A Qualitative Study of Students' Computational Thinking Skills in a Data-Driven Computing Class

    Science.gov (United States)

    Yuen, Timothy T.; Robbins, Kay A.

    2014-01-01

    Critical thinking, problem solving, the use of tools, and the ability to consume and analyze information are important skills for the 21st century workforce. This article presents a qualitative case study that follows five undergraduate biology majors in a computer science course (CS0). This CS0 course teaches programming within a data-driven…

  14. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    Science.gov (United States)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
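
    The user-defined linear sequential-loop described above amounts to folding each frame through an ordered list of filter functions, which users can reorder or repeat. A minimal sketch (hypothetical filter names; not the μAVS2 API):

```python
def run_pipeline(frame, filters):
    """Apply image-processing filters in a user-defined sequential order.
    Each filter is a plain function frame -> frame, so the chain is just
    a list that users can reorder, repeat, or extend."""
    for filt in filters:
        frame = filt(frame)
    return frame

def invert(frame):
    """Example filter: invert an 8-bit grayscale frame (list of rows)."""
    return [[255 - px for px in row] for row in frame]

def threshold(level):
    """Example parameterised filter factory: binarise at the given level."""
    def filt(frame):
        return [[255 if px >= level else 0 for px in row] for row in frame]
    return filt
```

    Because each stage has the same frame-to-frame signature, the same machinery serves different prosthesis systems by swapping the filter list.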

  15. Modern data-driven decision support systems: the role of computing with words and computational linguistics

    Science.gov (United States)

    Kacprzyk, Janusz; Zadrożny, Sławomir

    2010-05-01

    We present how the conceptually and numerically simple concept of a fuzzy linguistic database summary can be a very powerful tool for gaining much insight into the very essence of data. The use of linguistic summaries provides tools for the verbalisation of data analysis (mining) results which, in addition to the more commonly used visualisation, e.g. via a graphical user interface, can contribute to an increased human consistency and ease of use, notably for supporting decision makers via the data-driven decision support system paradigm. Two new relevant aspects of the analysis are also outlined which were first initiated by the authors. First, following Kacprzyk and Zadrożny, it is further considered how linguistic data summarisation is closely related to some types of solutions used in natural language generation (NLG). This can make it possible to use more and more effective and efficient tools and techniques developed in NLG. Second, similar remarks are given on relations to systemic functional linguistics. Moreover, following Kacprzyk and Zadrożny, comments are given on an extremely relevant aspect of scalability of linguistic summarisation of data, using a new concept of a conceptual scalability.
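
    A fuzzy linguistic summary such as "most records have high values" is typically scored, following Yager's approach, by applying a fuzzy quantifier to the average membership degree of the records. A minimal sketch with an illustrative membership function for "most" (the function's breakpoints are assumptions, not the authors' parameters):

```python
def most(p):
    """Zadeh-style fuzzy quantifier 'most': 0 below 0.3, 1 above 0.8,
    linear in between. The breakpoints 0.3 and 0.8 are illustrative."""
    return min(1.0, max(0.0, (p - 0.3) / 0.5))

def summary_truth(records, membership):
    """Truth degree of the summary 'most records satisfy the predicate':
    the quantifier applied to the mean membership degree, where
    membership(record) returns a degree in [0, 1]."""
    if not records:
        return 0.0
    mean = sum(membership(r) for r in records) / len(records)
    return most(mean)
```

    Ranking candidate summaries by such truth degrees is what lets a system verbalise data-mining results instead of (or alongside) plotting them.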

  16. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between the reconstructed object and the projection. Such a model strongly influences projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technique for simulating forward and back projections. This model has low computational complexity and relatively high spatial resolution; however, few parallel implementations of it preserve a matched projection/backprojection pair. This study introduces a fast, parallelizable algorithm that improves the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphics processing unit) platform and achieves satisfactory computational efficiency with no approximation. The runtimes for the projection and backprojection operations with our model are approximately 4.5 s and 10.5 s per loop, respectively, for an image size of 256×256×256 and 360 projections of size 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using unmatched projection/backprojection models in a parallel computation. Imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)
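
    In one dimension, the distance-driven idea reduces to mapping pixel boundaries and detector-cell boundaries onto a common axis and weighting each (pixel, cell) pair by the overlap of their intervals. A simplified sketch of that kernel (the paper's model is 3D and GPU-parallel):

```python
def dd_overlaps(pixel_edges, det_edges):
    """1D distance-driven weights: pixel_edges and det_edges are sorted
    boundary positions already projected onto a common axis. Returns
    {(pixel_index, cell_index): overlap_length}; these weights drive both
    forward projection and backprojection, keeping the pair matched."""
    weights = {}
    for i in range(len(pixel_edges) - 1):
        for j in range(len(det_edges) - 1):
            lo = max(pixel_edges[i], det_edges[j])
            hi = min(pixel_edges[i + 1], det_edges[j + 1])
            if hi > lo:                     # intervals actually overlap
                weights[(i, j)] = hi - lo
    return weights
```

    Because the same weights serve both operations, a reconstruction built on them avoids the artifacts that unmatched projector/backprojector pairs can introduce.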

  17. Big Data: An Opportunity for Collaboration with Computer Scientists on Data-Driven Science

    Science.gov (United States)

    Baru, C.

    2014-12-01

    Big data technologies are evolving rapidly, driven by the need to manage ever-increasing amounts of historical data; process relentless streams of human- and machine-generated data; and integrate data of heterogeneous structure from extremely heterogeneous sources of information. Big data is inherently an application-driven problem: developing the right technologies requires an understanding of the application domain. An intriguing aspect of this phenomenon, though, is that the availability of the data itself enables new applications not previously conceived of! In this talk, we will discuss how the big data phenomenon creates an imperative for collaboration among domain scientists (in this case, geoscientists) and computer scientists. Domain scientists provide the application requirements as well as insights about the data involved, while computer scientists help assess whether problems can be solved with currently available technologies or require adaptation of existing technologies and/or development of new ones. The synergy can create vibrant collaborations, potentially leading to new science insights as well as the development of new data technologies and systems. The area of interface between the geosciences and computer science, also referred to as geoinformatics, is, we believe, a fertile area for interdisciplinary research.

  18. Computer-generated video fly-through: an aid to visual impact assessment for windfarms

    International Nuclear Information System (INIS)

    Neilson, G.; Leeming, T.; Hall, S.

    1998-01-01

    Computer-generated video fly-through provides a new method of assessing the visual impact of wind farms. With a PC, software and a digital terrain model of the wind farm, it is possible to produce videos ranging from wireframe to realistically shaded models. Using computer-generated video fly-through, visually sensitive corridors can be explored fully, wind turbine rotors can be seen in motion, critical viewpoints can be identified for photomontages, and the context of the wind farm can be better appreciated. This paper describes the techniques of computer-generated video fly-through and examines its various applications in the visual impact assessment of wind farms. (Author)

  19. Verification of Scientific Simulations via Hypothesis-Driven Comparative and Quantitative Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, James P [ORNL; Heitmann, Katrin [ORNL; Petersen, Mark R [ORNL; Woodring, Jonathan [Los Alamos National Laboratory (LANL); Williams, Sean [Los Alamos National Laboratory (LANL); Fasel, Patricia [Los Alamos National Laboratory (LANL); Ahrens, Christine [Los Alamos National Laboratory (LANL); Hsu, Chung-Hsing [ORNL; Geveci, Berk [ORNL

    2010-11-01

    This article presents a visualization-assisted process that verifies scientific-simulation codes. Code verification is necessary because scientists require accurate predictions to interpret data confidently. This verification process integrates iterative hypothesis verification with comparative, feature, and quantitative visualization. Following this process can help identify differences in cosmological and oceanographic simulations.

  20. Spontaneous and visually-driven high-frequency oscillations in the occipital cortex: Intracranial recording in epileptic patients

    Science.gov (United States)

    Nagasawa, Tetsuro; Juhász, Csaba; Rothermel, Robert; Hoechstetter, Karsten; Sood, Sandeep; Asano, Eishi

    2011-01-01

    High-frequency oscillations (HFOs) at ≥80 Hz of nonepileptic nature spontaneously emerge from the human cerebral cortex. In 10 patients with extra-occipital lobe epilepsy, we compared the spectral-spatial characteristics of HFOs spontaneously arising from the nonepileptic occipital cortex with those of HFOs driven by a visual task, as well as epileptogenic HFOs arising from the extra-occipital seizure focus. We identified spontaneous HFOs at ≥80 Hz with a mean duration of 330 msec intermittently emerging from the occipital cortex during interictal slow-wave sleep. The spectral frequency band of spontaneous occipital HFOs was similar to that of visually-driven HFOs. Spontaneous occipital HFOs were spatially sparse and confined to smaller areas, whereas visually-driven HFOs involved larger areas including more rostral sites. Neither the spectral frequency band nor the amplitude of spontaneous occipital HFOs differed significantly from those of epileptogenic HFOs. Spontaneous occipital HFOs were strongly locked to the phase of delta activity, but the strength of delta-phase coupling decayed from 1 to 3 Hz. Conversely, epileptogenic extra-occipital HFOs were locked to the phase of delta activity about equally in the range from 1 to 3 Hz. The occipital cortex spontaneously generates physiological HFOs which may stand out on electrocorticography traces as prominently as pathological HFOs arising from elsewhere; this observation should be taken into consideration during presurgical evaluation. Coupling of spontaneous delta and HFOs may increase the understanding of the significance of delta oscillations during slow-wave sleep. Further studies are warranted to determine whether delta-phase coupling distinguishes physiological from pathological HFOs or simply differs across anatomical locations. PMID:21432945

  1. Interactive visualization of Earth and Space Science computations

    Science.gov (United States)

    Hibbard, William L.; Paul, Brian E.; Santek, David A.; Dyer, Charles R.; Battaiola, Andre L.; Voidrot-Martinez, Marie-Francoise

    1994-01-01

    Computers have become essential tools for scientists simulating and observing nature. Simulations are formulated as mathematical models but are implemented as computer algorithms to simulate complex events. Observations are also analyzed and understood in terms of mathematical models, but the number of these observations usually dictates that we automate analyses with computer algorithms. In spite of their essential role, computers are also barriers to scientific understanding. Unlike hand calculations, automated computations are invisible and, because of the enormous numbers of individual operations in automated computations, the relation between an algorithm's input and output is often not intuitive. This problem is illustrated by the behavior of meteorologists responsible for forecasting weather. Even in this age of computers, many meteorologists manually plot weather observations on maps, then draw isolines of temperature, pressure, and other fields by hand (special pads of maps are printed for just this purpose). Similarly, radiologists use computers to collect medical data but are notoriously reluctant to apply image-processing algorithms to that data. To these scientists with life-and-death responsibilities, computer algorithms are black boxes that increase rather than reduce risk. The barrier between scientists and their computations can be bridged by techniques that make the internal workings of algorithms visible and that allow scientists to experiment with their computations. Here we describe two interactive systems developed at the University of Wisconsin-Madison Space Science and Engineering Center (SSEC) that provide these capabilities to Earth and space scientists.

  2. Computer systems and methods for the query and visualization of multidimensional databases

    Science.gov (United States)

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2015-03-03

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes multiple operand names, each operand corresponding to one or more fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first operands with the columns shelf and to associate one or more second operands with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first operands, and each pane has a y-axis defined based on data for the one or more second operands.
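
    The shelf-to-pane mapping the abstract describes can be sketched roughly as follows. The function and field names are illustrative assumptions, not the patented implementation: operands on the columns shelf define the x-axis keys, operands on the rows shelf define the y-axis keys, and their cross product enumerates the panes of the visual table.

```python
# Hypothetical model of the described GUI logic: crossing the operands on
# the columns shelf and rows shelf partitions the data into panes.
from itertools import product

def build_panes(columns_shelf, rows_shelf, data):
    """Return {(x_key, y_key): matching rows} for every pane of the table."""
    col_values = [sorted({row[op] for row in data}) for op in columns_shelf]
    row_values = [sorted({row[op] for row in data}) for op in rows_shelf]
    panes = {}
    for x_key in product(*col_values):
        for y_key in product(*row_values):
            panes[(x_key, y_key)] = [
                row for row in data
                if all(row[op] == v for op, v in zip(columns_shelf, x_key))
                and all(row[op] == v for op, v in zip(rows_shelf, y_key))
            ]
    return panes

data = [
    {"region": "East", "year": 2014, "sales": 10},
    {"region": "West", "year": 2014, "sales": 7},
    {"region": "East", "year": 2015, "sales": 12},
]
# "year" on the columns shelf, "region" on the rows shelf -> 2x2 = 4 panes.
panes = build_panes(["year"], ["region"], data)
```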

  3. Computer systems and methods for the query and visualization of multidimensional databases

    Science.gov (United States)

    Stolte, Chris [Palo Alto, CA; Tang, Diane L [Palo Alto, CA; Hanrahan, Patrick [Portola Valley, CA

    2011-02-01

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  4. An Interactive Platform to Visualize Data-Driven Clinical Pathways for the Management of Multiple Chronic Conditions.

    Science.gov (United States)

    Zhang, Yiye; Padman, Rema

    2017-01-01

    Patients with multiple chronic conditions (MCC) pose an increasingly complex health management challenge worldwide, particularly due to the significant gap in our understanding of how to provide coordinated care. Drawing on our prior research on learning data-driven clinical pathways from actual practice data, this paper describes a prototype interactive platform for visualizing the pathways of MCC to support shared decision making. Created using a Python web framework, a JavaScript library, and our clinical pathway learning algorithm, the visualization platform allows clinicians and patients to learn the dominant patterns of co-progression of multiple clinical events from their own data, and to interactively explore and interpret the pathways. We demonstrate the functionalities of the platform using a cluster of 36 patients, identified from a dataset of 1,084 patients, who are diagnosed with at least chronic kidney disease, hypertension, and diabetes. Future evaluation studies will explore the use of this platform to better understand and manage MCC.

  5. A novel role for visual perspective cues in the neural computation of depth.

    Science.gov (United States)

    Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C

    2015-01-01

    As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.

  6. Efficacy of brain-computer interface-driven neuromuscular electrical stimulation for chronic paresis after stroke.

    Science.gov (United States)

    Mukaino, Masahiko; Ono, Takashi; Shindo, Keiichiro; Fujiwara, Toshiyuki; Ota, Tetsuo; Kimura, Akio; Liu, Meigen; Ushiba, Junichi

    2014-04-01

    Brain computer interface technology is of great interest to researchers as a potential therapeutic measure for people with severe neurological disorders. The aim of this study was to examine the efficacy of brain computer interface, by comparing conventional neuromuscular electrical stimulation and brain computer interface-driven neuromuscular electrical stimulation, using an A-B-A-B withdrawal single-subject design. A 38-year-old male with severe hemiplegia due to a putaminal haemorrhage participated in this study. The design involved 2 epochs. In epoch A, the patient attempted to open his fingers during the application of neuromuscular electrical stimulation, irrespective of his actual brain activity. In epoch B, neuromuscular electrical stimulation was applied only when a significant motor-related cortical potential was observed in the electroencephalogram. The subject initially showed diffuse functional magnetic resonance imaging activation and small electroencephalogram responses while attempting finger movement. Epoch A was associated with few neurological or clinical signs of improvement. Epoch B, with a brain computer interface, was associated with marked lateralization of electroencephalogram (EEG) and blood oxygenation level dependent responses. Voluntary electromyogram (EMG) activity, with significant EEG-EMG coherence, was also prompted. Clinical improvement in upper-extremity function and muscle tone was observed. These results indicate that self-directed training with a brain computer interface may induce activity-dependent cortical plasticity and promote functional recovery. This preliminary clinical investigation encourages further research using a controlled design.

  7. A result-driven minimum blocking method for PageRank parallel computing

    Science.gov (United States)

    Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan

    2017-01-01

    Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine, and the subsequent calculation is complicated. To tackle these problems, we propose a minimum blocking method, driven by the needs of the result, to accomplish a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements necessary for the result matrix. In return, the subsequent calculation becomes simpler and I/O transmission costs are reduced. We run experiments on several matrices of different sizes and sparsity. The results show that the proposed method has better computational efficiency than traditional blocking methods.
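
    For reference, the underlying PageRank iteration that such blocking schemes accelerate can be sketched as below. This is a plain power iteration on an edge list; the paper's result-driven minimum-blocking scheme itself is not reproduced, and the parameter values are the usual textbook defaults, not the paper's.

```python
# Minimal PageRank power iteration on an edge list (illustrative sketch).
import numpy as np

def pagerank(links, n, damping=0.85, tol=1e-10):
    """links: list of (src, dst) edges over nodes 0..n-1; returns ranks."""
    out_degree = np.zeros(n)
    for s, _ in links:
        out_degree[s] += 1
    rank = np.full(n, 1.0 / n)
    while True:
        new = np.full(n, (1.0 - damping) / n)
        # Dangling nodes redistribute their rank uniformly.
        new += damping * rank[out_degree == 0].sum() / n
        for s, d in links:
            new[d] += damping * rank[s] / out_degree[s]
        if np.abs(new - rank).sum() < tol:
            return new
        rank = new

# A 3-node cycle: by symmetry every page ends up with rank 1/3.
r = pagerank([(0, 1), (1, 2), (2, 0)], 3)
```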

  8. Computer-Assisted Visual Search/Decision Aids as a Training Tool for Mammography

    National Research Council Canada - National Science Library

    Nodine, Calvin

    2000-01-01

    The primary goal of the project is to develop a computer-assisted visual search (CAVS) mammography training tool that will improve the perceptual and cognitive skills of trainees leading to mammographic expertise...

  9. Computer-Assisted Visual Search/Decision Aids as a Training Tool for Mammography

    National Research Council Canada - National Science Library

    Nodine, Calvin

    1999-01-01

    The primary goal of the project is to develop a computer-assisted visual search (CAVS) mammography training tool that will improve the perceptual and cognitive skills of trainees leading to mammographic expertise...

  10. Computer-Assisted Visual Search/Decision Aids as a Training Tool for Mammography

    National Research Council Canada - National Science Library

    Nodine, Calvin

    1998-01-01

    The primary goal of the project is to develop a computer-assisted visual search (CAVS) mammography training tool that will improve the perceptual and cognitive skills of trainees leading to mammographic expertise...

  11. Driven-Walking for Visually Impaired/Blind People through WiMAX

    Directory of Open Access Journals (Sweden)

    Panagiotis GIOANNIS

    2010-03-01

    It is known that people who are blind/visually impaired find it difficult to move, especially in unknown places. Usually the only help they have is a walking stick (white cane), a guide dog, and sometimes special warning sounds or road signals at specific positions. Material and Method: In this paper we explore how to build an appropriate navigation system for blind people. Results: Building on the powerful properties of the mobile WiMAX standard, we propose a navigation application that can translate a digital visual environment for blind/visually impaired users through a plethora of combinations such as voice, brain or tongue signals. Conclusions: We believe that such an idea will be a starting point for a plethora of applications that will eliminate the walking disabilities of blind/visually impaired people.

  12. Computing with Connections in Visual Recognition of Origami Objects.

    Science.gov (United States)

    Sabbah, Daniel

    1985-01-01

    Summarizes an initial foray into tackling artificial intelligence problems using a connectionist approach. The task chosen is visual recognition of Origami objects, and the questions answered are how to construct a connectionist network to represent and recognize projected Origami line drawings and what advantages such an approach would have. (30…

  13. Use Patterns of Visual Cues in Computer-Mediated Communication

    Science.gov (United States)

    Bolliger, Doris U.

    2009-01-01

    Communication in the virtual environment can be challenging for participants because it lacks physical presence and nonverbal elements. Participants may have difficulties expressing their intentions and emotions in a primarily text-based course. Therefore, the use of visual communication elements such as pictographic and typographic marks can be…

  14. Learning Disabilities and the Auditory and Visual Matching Computer Program

    Science.gov (United States)

    Tormanen, Minna R. K.; Takala, Marjatta; Sajaniemi, Nina

    2008-01-01

    This study examined whether audiovisual computer training without linguistic material had a remedial effect on different learning disabilities, like dyslexia and ADD (Attention Deficit Disorder). This study applied a pre-test-intervention-post-test design with students (N = 62) between the ages of 7 and 19. The computer training lasted eight weeks…

  15. The visual simulators for architecture and computer organization learning

    OpenAIRE

    Nikolić Boško; Grbanović Nenad; Đorđević Jovan

    2009-01-01

    The paper proposes a method for effective distance learning of architecture and computer organization. The proposed method is based on a software system that can be applied to any course in this field. Within this system, students can observe simulations of existing computer systems. The system also provides for the creation and simulation of switch systems.

  16. The Role of Visualization in Computer Science Education

    Science.gov (United States)

    Fouh, Eric; Akbar, Monika; Shaffer, Clifford A.

    2012-01-01

    Computer science core instruction attempts to provide a detailed understanding of dynamic processes such as the working of an algorithm or the flow of information between computing entities. Such dynamic processes are not well explained by static media such as text and images, and are difficult to convey in lecture. The authors survey the history…

  17. Heat-driven liquid metal cooling device for the thermal management of a computer chip

    Energy Technology Data Exchange (ETDEWEB)

    Ma Kunquan; Liu Jing [Cryogenic Laboratory, PO Box 2711, Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Beijing 100080 (China)

    2007-08-07

    The tremendous heat generated in a computer chip or very large scale integrated circuit raises many challenging issues to be solved. Recently, liquid metal with a low melting point was established as the most conductive coolant for efficiently cooling the computer chip. Here, by making full use of the double merits of the liquid metal, i.e. superior heat transfer performance and electromagnetically drivable ability, we demonstrate for the first time the liquid-cooling concept for the thermal management of a computer chip using waste heat to power the thermoelectric generator (TEG) and thus the flow of the liquid metal. Such a device consumes no external net energy, which warrants it a self-supporting and completely silent liquid-cooling module. Experiments on devices driven by one or two stage TEGs indicate that a dramatic temperature drop on the simulating chip has been realized without the aid of any fans. The higher the heat load, the larger will be the temperature decrease caused by the cooling device. Further, the two TEGs will generate a larger current if a copper plate is sandwiched between them to enhance heat dissipation there. This new method is expected to be significant in future thermal management of a desk or notebook computer, where both efficient cooling and extremely low energy consumption are of major concern.
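
    The self-powered principle above can be illustrated with a back-of-the-envelope Seebeck-effect estimate of the voltage a TEG harvests from the chip's waste heat to drive the liquid-metal pump. All material numbers below are generic assumptions, not values from the paper.

```python
# Open-circuit voltage of a thermoelectric generator stack: V = n * S * dT,
# where n is the number of thermocouples, S the Seebeck coefficient per
# couple, and dT the temperature difference across the module.
def teg_voltage(seebeck_v_per_k, couples, delta_t):
    """Return the open-circuit TEG voltage in volts."""
    return couples * seebeck_v_per_k * delta_t

# e.g. 127 bismuth-telluride couples at ~200 uV/K with 30 K across the module
# (hypothetical numbers for illustration).
v = teg_voltage(200e-6, 127, 30.0)
```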

  18. Heat-driven liquid metal cooling device for the thermal management of a computer chip

    International Nuclear Information System (INIS)

    Ma Kunquan; Liu Jing

    2007-01-01

    The tremendous heat generated in a computer chip or very large scale integrated circuit raises many challenging issues to be solved. Recently, liquid metal with a low melting point was established as the most conductive coolant for efficiently cooling the computer chip. Here, by making full use of the double merits of the liquid metal, i.e. superior heat transfer performance and electromagnetically drivable ability, we demonstrate for the first time the liquid-cooling concept for the thermal management of a computer chip using waste heat to power the thermoelectric generator (TEG) and thus the flow of the liquid metal. Such a device consumes no external net energy, which warrants it a self-supporting and completely silent liquid-cooling module. Experiments on devices driven by one or two stage TEGs indicate that a dramatic temperature drop on the simulating chip has been realized without the aid of any fans. The higher the heat load, the larger will be the temperature decrease caused by the cooling device. Further, the two TEGs will generate a larger current if a copper plate is sandwiched between them to enhance heat dissipation there. This new method is expected to be significant in future thermal management of a desk or notebook computer, where both efficient cooling and extremely low energy consumption are of major concern

  19. Network Traffic Analysis With Query Driven VisualizationSC 2005HPC Analytics Results

    Energy Technology Data Exchange (ETDEWEB)

    Stockinger, Kurt; Wu, Kesheng; Campbell, Scott; Lau, Stephen; Fisk, Mike; Gavrilov, Eugene; Kent, Alex; Davis, Christopher E.; Olinger,Rick; Young, Rob; Prewett, Jim; Weber, Paul; Caudell, Thomas P.; Bethel,E. Wes; Smith, Steve

    2005-09-01

    Our analytics challenge is to identify, characterize, and visualize anomalous subsets of large collections of network connection data. We use a combination of HPC resources, advanced algorithms, and visualization techniques. To effectively and efficiently identify the salient portions of the data, we rely on a multi-stage workflow that includes data acquisition, summarization (feature extraction), novelty detection, and classification. Once these subsets of interest have been identified and automatically characterized, we use a state-of-the-art-high-dimensional query system to extract data subsets for interactive visualization. Our approach is equally useful for other large-data analysis problems where it is more practical to identify interesting subsets of the data for visualization than to render all data elements. By reducing the size of the rendering workload, we enable highly interactive and useful visualizations. As a result of this work we were able to analyze six months worth of data interactively with response times two orders of magnitude shorter than with conventional methods.

  20. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    Science.gov (United States)

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  1. Knowledge-driven computational modeling in Alzheimer's disease research: Current state and future trends.

    Science.gov (United States)

    Geerts, Hugo; Hofmann-Apitius, Martin; Anastasio, Thomas J

    2017-11-01

    Neurodegenerative diseases such as Alzheimer's disease (AD) follow a slowly progressing dysfunctional trajectory, with a large presymptomatic component and many comorbidities. Using preclinical models and large-scale omics studies ranging from genetics to imaging, a large number of processes that might be involved in AD pathology at different stages and levels have been identified. The sheer number of putative hypotheses makes it almost impossible to estimate their contribution to the clinical outcome and to develop a comprehensive view on the pathological processes driving the clinical phenotype. Traditionally, bioinformatics approaches have provided correlations and associations between processes and phenotypes. Focusing on causality, a new breed of advanced and more quantitative modeling approaches that use formalized domain expertise offer new opportunities to integrate these different modalities and outline possible paths toward new therapeutic interventions. This article reviews three different computational approaches and their possible complementarities. Process algebras, implemented using declarative programming languages such as Maude, facilitate simulation and analysis of complicated biological processes on a comprehensive but coarse-grained level. A model-driven Integration of Data and Knowledge, based on the OpenBEL platform and using reverse causative reasoning and network jump analysis, can generate mechanistic knowledge and a new, mechanism-based taxonomy of disease. Finally, Quantitative Systems Pharmacology is based on formalized implementation of domain expertise in a more fine-grained, mechanism-driven, quantitative, and predictive humanized computer model. We propose a strategy to combine the strengths of these individual approaches for developing powerful modeling methodologies that can provide actionable knowledge for rational development of preventive and therapeutic interventions. Development of these computational approaches is likely to …

  2. Visualization of hierarchically structured information for human-computer interaction

    Energy Technology Data Exchange (ETDEWEB)

    Cheon, Suh Hyun; Lee, J. K.; Choi, I. K.; Kye, S. C.; Lee, N. K. [Dongguk University, Seoul (Korea)

    2001-11-01

    Visualization techniques can be used to support operators' information navigation tasks, especially in systems comprising an enormous volume of information, such as the operating information display system and computerized operating procedure system in the advanced control room of nuclear power plants. By offering an easily understood view of hierarchically structured information, these techniques can reduce the operator's supplementary navigation task load. As a result, operators can pay more attention to primary tasks and ultimately improve cognitive task performance. In this report, an interface was designed and implemented using a hyperbolic visualization technique, which is expected to be applied as a means of optimizing operators' information navigation tasks. 15 refs., 19 figs., 32 tabs. (Author)

  3. Visualization and simulation of density driven convection in porous media using magnetic resonance imaging

    Science.gov (United States)

    Montague, James A.; Pinder, George F.; Gonyea, Jay V.; Hipko, Scott; Watts, Richard

    2018-05-01

    Magnetic resonance imaging is used to observe solute transport in a 40 cm long, 26 cm diameter sand column that contained a central core of low permeability silica surrounded by higher permeability well-sorted sand. Low concentrations (2.9 g/L) of Magnevist, a gadolinium based contrast agent, produce density driven convection within the column when it starts in an unstable state. The unstable state, for this experiment, exists when higher density contrast agent is present above the lower density water. We implement a numerical model in OpenFOAM to reproduce the observed fluid flow and transport from a density difference of 0.3%. The experimental results demonstrate the usefulness of magnetic resonance imaging in observing three-dimensional gravity-driven convective-dispersive transport behaviors in medium scale experiments.

  4. Ocean Modeling and Visualization on Massively Parallel Computer

    Science.gov (United States)

    Chao, Yi; Li, P. Peggy; Wang, Ping; Katz, Daniel S.; Cheng, Benny N.

    1997-01-01

    Climate modeling is one of the grand challenges of computational science, and ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change.

  5. Computer-aided visualization and analysis system for sequence evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Chee, Mark S.; Wang, Chunwei; Jevons, Luis C.; Bernhart, Derek H.; Lipshutz, Robert J.

    2004-05-11

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.
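
    The intensity-based base determination the abstract describes might be sketched as follows. The data layout, threshold, and function names are illustrative assumptions, not the patented method: for each position, call the base whose probe fluoresces most strongly, and flag ambiguous positions.

```python
# Hypothetical base-calling sketch: the brightest probe wins, with an 'N'
# call when it does not clearly beat the runner-up.
BASES = "ACGT"

def call_bases(intensities, min_ratio=1.5):
    """intensities: per-position dicts of probe fluorescence keyed by base.

    Returns a sequence string, with 'N' where the brightest probe does not
    exceed the runner-up by min_ratio (an ambiguous call).
    """
    seq = []
    for site in intensities:
        ranked = sorted(BASES, key=lambda b: site[b], reverse=True)
        best, second = site[ranked[0]], site[ranked[1]]
        seq.append(ranked[0] if second == 0 or best / second >= min_ratio else "N")
    return "".join(seq)

reads = [
    {"A": 950, "C": 40, "G": 55, "T": 30},   # clear A
    {"A": 60, "C": 820, "G": 70, "T": 45},   # clear C
    {"A": 400, "C": 390, "G": 50, "T": 60},  # ambiguous -> N
]
called = call_bases(reads)
```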

  6. Resisting the Lure of Technology-Driven Design: Pedagogical Approaches to Visual Communication

    Science.gov (United States)

    Northcut, Kathryn M.; Brumberger, Eva R.

    2010-01-01

    Technical communicators are expected to work extensively with visual texts in workplaces. Fortunately, most academic curricula include courses in which the skills necessary for such tasks are introduced and sometimes developed in depth. We identify a tension between a focus on technological skill vs. a focus on principles and theory, arguing that…

  7. The use of computer graphics in the visual analysis of the proposed Sunshine Ski Area expansion

    Science.gov (United States)

    Mark Angelo

    1979-01-01

    This paper describes the use of computer graphics in designing part of the Sunshine Ski Area in Banff National Park. The program used was capable of generating perspective landscape drawings from a number of different viewpoints. This allowed managers to predict, and subsequently reduce, the adverse visual impacts of ski-run development. Computer graphics have proven,...

  8. Peripheral visual feedback: a powerful means of supporting effective attention allocation in event-driven, data-rich environments.

    Science.gov (United States)

    Nikolic, M I; Sarter, N B

    2001-01-01

    Breakdowns in human-automation coordination in data-rich, event-driven domains such as aviation can be explained in part by a mismatch between the high degree of autonomy yet low observability of modern technology. To some extent, the latter is the result of an increasing reliance in feedback design on foveal vision--an approach that fails to support pilots in tracking system-induced changes and events in parallel with performing concurrent flight-related tasks. One possible solution to the problem is the distribution of tasks and information across sensory modalities and processing channels. A simulator study is presented that compared the effectiveness of current foveal feedback and two implementations of peripheral visual feedback for keeping pilots informed about uncommanded changes in the status of an automated cockpit system. Both peripheral visual displays resulted in higher detection rates and faster response times, without interfering with the performance of concurrent visual tasks any more than does currently available automation feedback. Potential applications include improved display designs that support effective attention allocation in a variety of complex dynamic environments, such as aviation, process control, and medicine.

  9. Parallel Computer System for 3D Visualization Stereo on GPU

    Science.gov (United States)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on Graphics Processing Units (GPU) for 3D stereo image synthesis. The development is based on the authors' modified ray tracing method for fast search of tracing-ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. A generalized procedure for 3D stereo image synthesis on the Graphics Processing Unit/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The average acceleration achieved in multi-thread implementation on the test GPU and CPU is about 7.5 and 1.6 times, respectively. Studying the influence of the size and configuration of the computational Compute Unified Device Architecture (CUDA) network on computational speed shows the importance of their correct selection. The obtained experimental estimates can be significantly improved by new GPUs with a larger number of processing cores and multiprocessors, as well as an optimized configuration of the CUDA computing network.

  10. Multi-scale data visualization for computational astrophysics and climate dynamics at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Ahern, Sean; Daniel, Jamison R; Gao, Jinzhu; Ostrouchov, George; Toedte, Ross J; Wang, Chaoli

    2006-01-01

    Computational astrophysics and climate dynamics are two principal application foci at the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL). We identify a dataset frontier that is shared by several SciDAC computational science domains and present an exploration of traditional production visualization techniques enhanced with new enabling research technologies such as advanced parallel occlusion culling and high resolution small multiples statistical analysis. In collaboration with our research partners, these techniques will allow the visual exploration of a new generation of peta-scale datasets that cross this data frontier along all axes

  11. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  12. Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture

    OpenAIRE

    Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei

    2016-01-01

    Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: Whenever one feature-dimension is selected for entry into VWM, the others are also extracted. Currently most studies revealing OBE probed an 'irrelevant-change distracting effect', where changes of irrelevant-features dramatically affected the performance of the target feature. However, the existence of irrelevant-feature change may affect participants' processing manner, leading to a false-positive result.

  13. Computational assessment of visual search strategies in volumetric medical images.

    Science.gov (United States)

    Wen, Gezheng; Aizenman, Avigael; Drew, Trafton; Wolfe, Jeremy M; Haygood, Tamara Miner; Markey, Mia K

    2016-01-01

    When searching through volumetric images [e.g., computed tomography (CT)], radiologists appear to use two different search strategies: "drilling" (restrict eye movements to a small region of the image while quickly scrolling through slices), or "scanning" (search over large areas at a given depth before moving on to the next slice). To computationally identify the type of image information that is used in these two strategies, 23 naïve observers were instructed with either "drilling" or "scanning" when searching for target T's in 20 volumes of faux lung CTs. We computed saliency maps using both classical two-dimensional (2-D) saliency, and a three-dimensional (3-D) dynamic saliency that captures the characteristics of scrolling through slices. Comparing observers' gaze distributions with the saliency maps showed that search strategy alters the type of saliency that attracts fixations. Drillers' fixations aligned better with dynamic saliency and scanners with 2-D saliency. The computed saliency was greater for detected targets than for missed targets. Similar results were observed in data from 19 radiologists who searched five stacks of clinical chest CTs for lung nodules. Dynamic saliency may be superior to the 2-D saliency for detecting targets embedded in volumetric images, and thus "drilling" may be more efficient than "scanning."
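As a rough illustration of what a classical 2-D saliency computation involves (not the saliency model used in this study), here is a minimal center-surround sketch in Python; the image, filter scales, and box-filter choice are arbitrary illustrative assumptions:

```python
import numpy as np

def box_blur(img, k):
    """Mean over a (2k+1)x(2k+1) window, computed with an integral image."""
    n = 2 * k + 1
    padded = np.pad(img, k, mode="edge")
    c = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]) / (n * n)

def saliency(img, k_center=1, k_surround=4):
    """Center-surround contrast: |fine-scale mean - coarse-scale mean|."""
    return np.abs(box_blur(img, k_center) - box_blur(img, k_surround))

# A lone bright pixel stands out; a uniform field has no saliency.
frame = np.zeros((32, 32))
frame[16, 16] = 1.0
sal = saliency(frame)
```

Comparing such a map against observers' fixation locations, slice by slice, is the general shape of the analysis; the dynamic 3-D saliency used for "drilling" would additionally incorporate change across slices.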

  14. Data-flow oriented visual programming libraries for scientific computing

    NARCIS (Netherlands)

    Maubach, J.M.L.; Drenth, W.D.; Sloot, P.M.A.

    2002-01-01

    The growing release of scientific computational software does not seem to aid the implementation of complex numerical algorithms. Released libraries lack a common standard interface with regard to for instance finite element, difference or volume discretizations. And, libraries written in standard

  15. Computed Tomography-Enhanced Anatomy Course Using Enterprise Visualization

    Science.gov (United States)

    May, Hila; Cohen, Haim; Medlej, Bahaa; Kornreich, Liora; Peled, Nathan; Hershkovitz, Israel

    2013-01-01

    Rapid changes in medical knowledge are forcing continuous adaptation of the basic science courses in medical schools. This article discusses a three-year experience developing a new Computed Tomography (CT)-based anatomy curriculum at the Sackler School of Medicine, Tel Aviv University, including describing the motivations and reasoning for the…

  16. Computer-aided visualization of database structural relationships

    International Nuclear Information System (INIS)

    Cahn, D.F.

    1980-04-01

    Interactive computer graphic displays can be extremely useful in augmenting understandability of data structures. In complexly interrelated domains such as bibliographic thesauri and energy information systems, node and link displays represent one such tool. This paper presents examples of data structure representations found useful in these domains and discusses some of their generalizable components. 2 figures
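The node-and-link displays described here can be prototyped with very little machinery. As an illustration (the thesaurus terms are hypothetical), this Python sketch emits a Graphviz DOT description of broader/narrower term relations, which any DOT renderer can turn into a node-link diagram:

```python
def to_dot(edges, name="thesaurus"):
    """Render a set of (broader, narrower) term pairs as a Graphviz DOT digraph."""
    lines = [f'digraph {name} {{']
    for parent, child in sorted(edges):
        lines.append(f'  "{parent}" -> "{child}";')
    lines.append("}")
    return "\n".join(lines)

# Hypothetical energy-thesaurus fragment.
edges = {("energy", "solar energy"),
         ("energy", "wind energy"),
         ("solar energy", "photovoltaics")}
print(to_dot(edges))
```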

  17. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunst, Holger [Dresden Univ. of Technology (Germany)

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  18. Live-cell visualization of gasdermin D-driven pyroptotic cell death.

    Science.gov (United States)

    Rathkey, Joseph K; Benson, Bryan L; Chirieleison, Steven M; Yang, Jie; Xiao, Tsan S; Dubyak, George R; Huang, Alex Y; Abbott, Derek W

    2017-09-01

    Pyroptosis is a form of cell death important in defenses against pathogens that can also result in a potent and sometimes pathological inflammatory response. During pyroptosis, GSDMD (gasdermin D), the pore-forming effector protein, is cleaved, forms oligomers, and inserts into the membranes of the cell, resulting in rapid cell death. However, the potent cell death induction caused by GSDMD has complicated our ability to understand the biology of this protein. Studies aimed at visualizing GSDMD have relied on expression of GSDMD fragments in epithelial cell lines that naturally lack GSDMD expression and also lack the proteases necessary to cleave GSDMD. In this work, we performed mutagenesis and molecular modeling to strategically place tags and fluorescent proteins within GSDMD that support native pyroptosis and facilitate live-cell imaging of pyroptotic cell death. Here, we demonstrate that these fusion proteins are cleaved by caspases-1 and -11 at Asp-276. Mutations that disrupted the predicted p30-p20 autoinhibitory interface resulted in GSDMD aggregation, supporting the oligomerizing activity of these mutations. Furthermore, we show that these novel GSDMD fusions execute inflammasome-dependent pyroptotic cell death in response to multiple stimuli and allow for visualization of the morphological changes associated with pyroptotic cell death in real time. This work therefore provides new tools that not only expand the molecular understanding of pyroptosis but also enable its direct visualization. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  19. Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture.

    Science.gov (United States)

    Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei

    2016-03-09

    Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: Whenever one feature-dimension is selected for entry into VWM, the others are also extracted. Currently most studies revealing OBE probed an 'irrelevant-change distracting effect', where changes of irrelevant-features dramatically affected the performance of the target feature. However, the existence of irrelevant-feature change may affect participants' processing manner, leading to a false-positive result. The current study conducted a strict examination of OBE in VWM, by probing whether irrelevant-features guided the deployment of attention in visual search. The participants memorized an object's colour yet ignored shape and concurrently performed a visual-search task. They searched for a target line among distractor lines, each embedded within a different object. One object in the search display could match the shape, colour, or both dimensions of the memory item, but this object never contained the target line. Relative to a neutral baseline, where there was no match between the memory and search displays, search time was significantly prolonged in all match conditions, regardless of whether the memory item was displayed for 100 or 1000 ms. These results suggest that task-irrelevant shape was extracted into VWM, supporting OBE in VWM.

  20. The nature of the (visualization) game: Challenges and opportunities from computational geophysics

    Science.gov (United States)

    Kellogg, L. H.

    2016-12-01

    As the geosciences enters the era of big data, modeling and visualization become increasingly vital tools for discovery, understanding, education, and communication. Here, we focus on modeling and visualization of the structure and dynamics of the Earth's surface and interior. The past decade has seen accelerated data acquisition, including higher resolution imaging and modeling of Earth's deep interior, complex models of geodynamics, and high resolution topographic imaging of the changing surface, with an associated acceleration of computational modeling through better scientific software, increased computing capability, and the use of innovative methods of scientific visualization. The role of modeling is to describe a system, answer scientific questions, and test hypotheses; the term "model" encompasses mathematical models, computational models, physical models, conceptual models, statistical models, and visual models of a structure or process. These different uses of the term require thoughtful communication to avoid confusion. Scientific visualization is integral to every aspect of modeling. Not merely a means of communicating results, the best uses of visualization enable scientists to interact with their data, revealing the characteristics of the data and models to enable better interpretation and inform the direction of future investigation. Innovative immersive technologies like virtual reality, augmented reality, and remote collaboration techniques, are being adapted more widely and are a magnet for students. Time-varying or transient phenomena are especially challenging to model and to visualize; researchers and students may need to investigate the role of initial conditions in driving phenomena, while nonlinearities in the governing equations of many Earth systems make the computations and resulting visualization especially challenging. 
Training students how to use, design, build, and interpret scientific modeling and visualization tools prepares them

  1. Teaching ocean wave forecasting using computer-generated visualization and animation—Part 1: sea forecasting

    Science.gov (United States)

    Whitford, Dennis J.

    2002-05-01

    Ocean waves are the most recognized phenomena in oceanography. Unfortunately, undergraduate study of ocean wave dynamics and forecasting involves mathematics and physics and therefore can pose difficulties with some students because of the subject's interrelated dependence on time and space. Verbal descriptions and two-dimensional illustrations are often insufficient for student comprehension. Computer-generated visualization and animation offer a visually intuitive and pedagogically sound medium to present geoscience, yet there are very few oceanographic examples. A two-part article series is offered to explain ocean wave forecasting using computer-generated visualization and animation. This paper, Part 1, addresses forecasting of sea wave conditions and serves as the basis for the more difficult topic of swell wave forecasting addressed in Part 2. Computer-aided visualization and animation, accompanied by oral explanation, are a welcome pedagogical supplement to more traditional methods of instruction. In this article, several MATLAB ® software programs have been written to visualize and animate development and comparison of wave spectra, wave interference, and forecasting of sea conditions. These programs also set the stage for the more advanced and difficult animation topics in Part 2. The programs are user-friendly, interactive, easy to modify, and developed as instructional tools. By using these software programs, teachers can enhance their instruction of these topics with colorful visualizations and animation without requiring an extensive background in computer programming.
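The article's instructional programs are written in MATLAB; as a rough Python analogue of one such computation, the sketch below evaluates the classical Pierson-Moskowitz spectrum for a fully developed sea and integrates it to estimate significant wave height (the spectral form and constants follow the standard textbook formulation, not the article's own code):

```python
import numpy as np

G = 9.81                      # gravity (m/s^2)
ALPHA, BETA = 8.1e-3, 0.74    # Pierson-Moskowitz constants

def pm_spectrum(omega, wind_speed):
    """Pierson-Moskowitz frequency spectrum S(omega) for a fully developed sea;
    wind_speed (m/s) is referenced to 19.5 m above the surface."""
    w0 = G / wind_speed
    return ALPHA * G**2 / omega**5 * np.exp(-BETA * (w0 / omega) ** 4)

def significant_wave_height(wind_speed):
    """Hs = 4*sqrt(m0), with m0 the zeroth moment of the spectrum."""
    omega = np.linspace(0.05, 5.0, 20000)
    s = pm_spectrum(omega, wind_speed)
    m0 = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(omega))  # trapezoid rule
    return 4.0 * np.sqrt(m0)
```

Plotting `pm_spectrum` for several wind speeds reproduces the familiar family of sea-state spectra that such courseware animates.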

  2. Automated Quantitative Computed Tomography Versus Visual Computed Tomography Scoring in Idiopathic Pulmonary Fibrosis: Validation Against Pulmonary Function.

    Science.gov (United States)

    Jacob, Joseph; Bartholmai, Brian J; Rajagopalan, Srinivasan; Kokosi, Maria; Nair, Arjun; Karwoski, Ronald; Raghunath, Sushravya M; Walsh, Simon L F; Wells, Athol U; Hansell, David M

    2016-09-01

    The aim of the study was to determine whether a novel computed tomography (CT) postprocessing software technique (CALIPER) is superior to visual CT scoring as judged by functional correlations in idiopathic pulmonary fibrosis (IPF). A total of 283 consecutive patients with IPF had CT parenchymal patterns evaluated quantitatively with CALIPER and by visual scoring. These 2 techniques were evaluated against: forced expiratory volume in 1 second (FEV1), forced vital capacity (FVC), diffusing capacity for carbon monoxide (DLco), carbon monoxide transfer coefficient (Kco), and a composite physiological index (CPI), with regard to extent of interstitial lung disease (ILD), extent of emphysema, and pulmonary vascular abnormalities. CALIPER-derived estimates of ILD extent demonstrated stronger univariate correlations than visual scores for most pulmonary function tests (PFTs): (FEV1: CALIPER R=0.29, visual R=0.18; FVC: CALIPER R=0.41, visual R=0.27; DLco: CALIPER R=0.31, visual R=0.35; CPI: CALIPER R=0.48, visual R=0.44). Correlations between CT measures of emphysema extent and PFTs were weak and did not differ significantly between CALIPER and visual scoring. Intriguingly, the pulmonary vessel volume provided similar correlations to total ILD extent scored by CALIPER for FVC, DLco, and CPI (FVC: R=0.45; DLco: R=0.34; CPI: R=0.53). CALIPER was superior to visual scoring as validated by functional correlations with PFTs. The pulmonary vessel volume, a novel CALIPER CT parameter with no visual scoring equivalent, has the potential to be a CT feature in the assessment of patients with IPF and requires further exploration.

  3. HIGH-FIDELITY SIMULATION-DRIVEN MODEL DEVELOPMENT FOR COARSE-GRAINED COMPUTATIONAL FLUID DYNAMICS

    Energy Technology Data Exchange (ETDEWEB)

    Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.

    2016-06-01

    Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For a full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convective problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide us with high fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for simulation of long transient scenarios in nuclear accidents despite extraordinary advances in high performance scientific computing over the past decades. The major issue is the inability to parallelize the transient computation, which makes the number of time steps required in high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high fidelity simulation-driven approach to model the sub-grid scale (SGS) effect in Coarse Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of a deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and an isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as in containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation proves to achieve a significant improvement to the prediction of the steady-state temperature distribution through the fluid layer.
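The idea of a data-driven correction to a coarse-grained solution can be sketched in a few lines. The Python example below is purely schematic (a 1-D profile and a linear least-squares surrogate, not the paper's statistical model): it regresses the coarse-solution error against features available on the coarse grid, then applies the learned correction:

```python
import numpy as np

# Hypothetical 1-D stand-in: a "high-fidelity" reference temperature profile
# and a coarse-grained solution that misses a nonlinear effect.
x = np.linspace(0.0, 1.0, 200)
t_coarse = np.sin(np.pi * x)
t_fine = t_coarse + 0.2 * t_coarse**2          # reference with an unresolved term

# Surrogate: regress the coarse-grid error on coarse-grid features.
features = np.column_stack([np.ones_like(x), t_coarse, t_coarse**2])
coef, *_ = np.linalg.lstsq(features, t_fine - t_coarse, rcond=None)

# Apply the learned correction to the coarse temperature solution.
t_corrected = t_coarse + features @ coef
```

In the actual CG-CFD setting, the reference data would come from DNS/LES and the surrogate would be far richer, but the train-then-correct structure is the same.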

  4. Desiderata for computable representations of electronic health records-driven phenotype algorithms.

    Science.gov (United States)

    Mo, Huan; Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Jiang, Guoqian; Kiefer, Richard; Zhu, Qian; Xu, Jie; Montague, Enid; Carrell, David S; Lingren, Todd; Mentch, Frank D; Ni, Yizhao; Wehbe, Firas H; Peissig, Peggy L; Tromp, Gerard; Larson, Eric B; Chute, Christopher G; Pathak, Jyotishman; Denny, Joshua C; Speltz, Peter; Kho, Abel N; Jarvik, Gail P; Bejan, Cosmin A; Williams, Marc S; Borthwick, Kenneth; Kitchner, Terrie E; Roden, Dan M; Harris, Paul A

    2015-11-01

    Electronic health records (EHRs) are increasingly used for clinical and translational research through the creation of phenotype algorithms. Currently, phenotype algorithms are most commonly represented as noncomputable descriptive documents and knowledge artifacts that detail the protocols for querying diagnoses, symptoms, procedures, medications, and/or text-driven medical concepts, and are primarily meant for human comprehension. We present desiderata for developing a computable phenotype representation model (PheRM). A team of clinicians and informaticians reviewed common features for multisite phenotype algorithms published in PheKB.org and existing phenotype representation platforms. We also evaluated well-known diagnostic criteria and clinical decision-making guidelines to encompass a broader category of algorithms. We propose 10 desired characteristics for a flexible, computable PheRM: (1) structure clinical data into queryable forms; (2) recommend use of a common data model, but also support customization for the variability and availability of EHR data among sites; (3) support both human-readable and computable representations of phenotype algorithms; (4) implement set operations and relational algebra for modeling phenotype algorithms; (5) represent phenotype criteria with structured rules; (6) support defining temporal relations between events; (7) use standardized terminologies and ontologies, and facilitate reuse of value sets; (8) define representations for text searching and natural language processing; (9) provide interfaces for external software algorithms; and (10) maintain backward compatibility. A computable PheRM is needed for true phenotype portability and reliability across different EHR products and healthcare systems. These desiderata are a guide to inform the establishment and evolution of EHR phenotype algorithm authoring platforms and languages. © The Author 2015. Published by Oxford University Press on behalf of the American Medical
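Desideratum (4), modeling phenotype algorithms with set operations and relational algebra, can be illustrated directly. In this hypothetical Python sketch, patient-ID sets stand in for the results of EHR queries:

```python
# Hypothetical patient-ID sets derived from EHR queries (illustration only).
has_t2dm_code = {"p1", "p2", "p3", "p5"}
has_metformin_rx = {"p2", "p3", "p4"}
has_t1dm_code = {"p5"}

# Phenotype algorithm as set operations:
# case = diagnosis code AND medication, excluding type 1 diabetes.
cases = (has_t2dm_code & has_metformin_rx) - has_t1dm_code
print(sorted(cases))   # ['p2', 'p3']
```

A computable PheRM would express the same logic in a portable, structured form rather than in ad hoc code, which is exactly what the desiderata argue for.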

  5. Deep Learning in Visual Computing and Signal Processing

    OpenAIRE

    Xie, Danfeng; Zhang, Lei; Bai, Li

    2017-01-01

    Deep learning is a subfield of machine learning, which aims to learn a hierarchy of features from input data. Nowadays, researchers have intensively investigated deep learning algorithms for solving challenging problems in many areas such as image classification, speech recognition, signal processing, and natural language processing. In this study, we not only review typical deep learning algorithms in computer vision and signal processing but also provide detailed information on how to apply...

  6. Computing Science and Statistics: Volume 24. Graphics and Visualization

    Science.gov (United States)

    1993-03-20

    [OCR fragments only] Density estimation techniques, Mike West, Institute of Statistics & Decision Sciences, Duke University, Durham NC 27708, USA; the ratio-of-uniforms method (Statistics and Computing).

  7. Cross-Dataset Analysis and Visualization Driven by Expressive Web Services

    Science.gov (United States)

    Alexandru Dumitru, Mircea; Catalin Merticariu, Vlad

    2015-04-01

    The deluge of data hitting us every day from satellite and airborne sensors is changing the workflow of environmental data analysts and modelers. Web geo-services now play a fundamental role: rather than requiring the data to be downloaded and stored in advance, they interact in real time with GIS applications. Due to the very large amount of data that is curated and made available by web services, it is crucial to deploy smart solutions for optimizing network bandwidth, reducing duplication of data and moving the processing closer to the data. In this context we have created a visualization application for analysis and cross-comparison of aerosol optical thickness datasets. The application aims to help researchers identify and visualize discrepancies between datasets coming from various sources, having different spatial and time resolutions. It also acts as a proof of concept for integration of OGC Web Services under a user-friendly interface that provides beautiful visualizations of the explored data. The tool was built on top of the World Wind engine, a Java-based virtual globe built by NASA and the open source community. For data retrieval and processing we exploited the OGC Web Coverage Service potential, the most exciting aspect being its processing extension, the OGC Web Coverage Processing Service (WCPS) standard. A WCPS-compliant service allows a client to execute a processing query on any coverage offered by the server. By exploiting a full grammar, several different kinds of information can be retrieved from one or more datasets together: scalar condensers, cross-sectional profiles, comparison maps and plots, etc. This combination of technology made the application versatile and portable. As the processing is done on the server side, we ensured that a minimal amount of data is transferred and that the processing is done on a fully capable server, leaving the client hardware resources to be used for rendering the visualization.
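As an illustration of the WCPS query grammar the abstract refers to, the sketch below builds a query string asking a server for the difference of two coverages over a bounding box, so only the derived map (not the full datasets) crosses the network. The coverage and axis names are hypothetical, and the actual HTTP request to a WCPS endpoint is omitted:

```python
# Hypothetical coverage and axis names for two aerosol optical thickness products.
coverage_a, coverage_b = "AOT_MODIS", "AOT_MERIS"
bbox = {"Lat": ("30", "60"), "Long": ("-10", "40")}

# Build the axis-subset clause, e.g. Lat(30:60),Long(-10:40).
subset = ",".join(f"{axis}({lo}:{hi})" for axis, (lo, hi) in bbox.items())

# Assemble a WCPS query in the style of the OGC standard's grammar.
query = (
    f"for a in ({coverage_a}), b in ({coverage_b}) "
    f'return encode(a[{subset}] - b[{subset}], "image/tiff")'
)
print(query)
```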

  8. User Driven Data Mining, Visualization and Decision Making for NOAA Observing System and Data Investments

    Science.gov (United States)

    Austin, M.

    2016-12-01

    The National Oceanic and Atmospheric Administration (NOAA) observing system enterprise represents a $2.4B annual investment. Earth observations from these systems are foundational to NOAA's mission to describe, understand, and predict the Earth's environment. NOAA's decision makers are charged with managing this complex portfolio of observing systems to serve the national interest effectively and efficiently. The Technology Planning & Integration for Observation (TPIO) Office currently maintains an observing system portfolio for NOAA's validated user observation requirements, observing capabilities, and resulting data products and services. TPIO performs data analytics to provide NOAA leadership business case recommendations for making sound budgetary decisions. Over the last year, TPIO has moved from massive spreadsheets to intuitive dashboards that enable Federal agencies as well as the general public the ability to explore user observation requirements and environmental observing systems that monitor and predict changes in the environment. This change has led to an organizational data management shift to analytics and visualizations by allowing analysts more time to focus on understanding the data, discovering insights, and effectively communicating the information to decision makers. Moving forward, the next step is to facilitate a cultural change toward self-serve data sharing across NOAA, other Federal agencies, and the public using intuitive data visualizations that answer relevant business questions for users of NOAA's Observing System Enterprise. Users and producers of environmental data will become aware of the need for enhancing communication to simplify information exchange to achieve multipurpose goals across a variety of disciplines. NOAA cannot achieve its goal of producing environmental intelligence without data that can be shared by multiple user communities. 
This presentation will describe where we are on this journey and will provide examples of

  9. Arena3D: visualizing time-driven phenotypic differences in biological systems

    Directory of Open Access Journals (Sweden)

    Secrier Maria

    2012-03-01

    Full Text Available Abstract Background Elucidating the genotype-phenotype connection is one of the big challenges of modern molecular biology. To fully understand this connection, it is necessary to consider the underlying networks and the time factor. In this context of data deluge and heterogeneous information, visualization plays an essential role in interpreting complex and dynamic topologies. Thus, software that is able to bring the network, phenotypic and temporal information together is needed. Arena3D has been previously introduced as a tool that facilitates link discovery between processes. It uses a layered display to separate different levels of information while emphasizing the connections between them. We present novel developments of the tool for the visualization and analysis of dynamic genotype-phenotype landscapes. Results Version 2.0 introduces novel features that allow handling time course data in a phenotypic context. Gene expression levels or other measures can be loaded and visualized at different time points and phenotypic comparison is facilitated through clustering and correlation display or highlighting of impacting changes through time. Similarity scoring allows the identification of global patterns in dynamic heterogeneous data. In this paper we demonstrate the utility of the tool on two distinct biological problems of different scales. First, we analyze a medium scale dataset that looks at perturbation effects of the pluripotency regulator Nanog in murine embryonic stem cells. Dynamic cluster analysis suggests alternative indirect links between Nanog and other proteins in the core stem cell network. Moreover, recurrent correlations from the epigenetic to the translational level are identified. Second, we investigate a large scale dataset consisting of genome-wide knockdown screens for human genes essential in the mitotic process. Here, a potential new role for the gene lsm14a in cytokinesis is suggested. We also show how phenotypic

  10. Explicet: graphical user interface software for metadata-driven management, analysis and visualization of microbiome data.

    Science.gov (United States)

    Robertson, Charles E; Harris, J Kirk; Wagner, Brandie D; Granger, David; Browne, Kathy; Tatem, Beth; Feazel, Leah M; Park, Kristin; Pace, Norman R; Frank, Daniel N

    2013-12-01

    Studies of the human microbiome, and microbial community ecology in general, have blossomed of late and are now a burgeoning source of exciting research findings. Along with the advent of next-generation sequencing platforms, which have dramatically increased the scope of microbiome-related projects, several high-performance sequence analysis pipelines (e.g. QIIME, MOTHUR, VAMPS) are now available to investigators for microbiome analysis. The subject of our manuscript, the graphical user interface-based Explicet software package, fills a previously unmet need for a robust, yet intuitive means of integrating the outputs of the software pipelines with user-specified metadata and then visualizing the combined data.

  11. Detecting Distributed Scans Using High-Performance Query-Driven Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Stockinger, Kurt; Bethel, E. Wes; Campbell, Scott; Dart, Eli; Wu, Kesheng

    2006-09-01

    Modern forensic analytics applications, like network traffic analysis, perform high-performance hypothesis testing, knowledge discovery and data mining on very large datasets. One essential strategy to reduce the time required for these operations is to select only the most relevant data records for a given computation. In this paper, we present a set of parallel algorithms that demonstrate how an efficient selection mechanism -- bitmap indexing -- significantly speeds up a common analysis task, namely, computing conditional histograms on very large datasets. We present a thorough study of the performance characteristics of the parallel conditional histogram algorithms. As a case study, we compute conditional histograms for detecting distributed scans hidden in a dataset consisting of approximately 2.5 billion network connection records. We show that these conditional histograms can be computed on an interactive timescale (i.e., in seconds). We also show how to progressively modify the selection criteria to narrow the analysis and find the sources of the distributed scans.
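The core idea, bitmap-index selection feeding a conditional histogram, can be sketched compactly. The Python below is a toy, single-process stand-in for the paper's parallel bitmap-index approach, with hypothetical column names and randomly generated records:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000   # toy record count; the paper works at billions of records

# Hypothetical connection-record columns.
dest_port = rng.choice([22, 80, 443], size=n)
src_subnet = rng.choice(["a", "b"], size=n)

# "Bitmaps": one boolean vector per distinct value of each indexed column,
# so a conditional count is a bitwise AND plus a popcount, not a full scan.
port_bitmaps = {p: dest_port == p for p in (22, 80, 443)}
subnet_bitmaps = {s: src_subnet == s for s in ("a", "b")}

# Conditional histogram: counts of dest_port restricted to subnet "a".
cond_hist = {p: int(np.count_nonzero(bm & subnet_bitmaps["a"]))
             for p, bm in port_bitmaps.items()}
print(cond_hist)
```

Narrowing the analysis, as the abstract describes, amounts to AND-ing in more bitmaps before counting.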

  12. A Model-Driven Visualization Tool for Use with Model-Based Systems Engineering Projects

    Science.gov (United States)

    Trase, Kathryn; Fink, Eric

    2014-01-01

    Model-Based Systems Engineering (MBSE) promotes increased consistency between a system's design and its design documentation through the use of an object-oriented system model. The creation of this system model facilitates data presentation by providing a mechanism from which information can be extracted by automated manipulation of model content. Existing MBSE tools enable model creation, but are often too complex for the unfamiliar model viewer to easily use. These tools do not yet provide many opportunities for easing into the development and use of a system model when system design documentation already exists. This study creates a Systems Modeling Language (SysML) Document Traceability Framework (SDTF) for integrating design documentation with a system model, and develops an Interactive Visualization Engine for SysML Tools (InVEST), that exports consistent, clear, and concise views of SysML model data. These exported views are each meaningful to a variety of project stakeholders with differing subjects of concern and depth of technical involvement. InVEST allows a model user to generate multiple views and reports from a MBSE model, including wiki pages and interactive visualizations of data. System data can also be filtered to present only the information relevant to the particular stakeholder, resulting in a view that is both consistent with the larger system model and other model views. Viewing the relationships between system artifacts and documentation, and filtering through data to see specialized views improves the value of the system as a whole, as data becomes information

  13. Direct Visualization of Barrier Crossing Dynamics in a Driven Optical Matter System.

    Science.gov (United States)

    Figliozzi, Patrick; Peterson, Curtis W; Rice, Stuart A; Scherer, Norbert F

    2018-04-25

    A major impediment to a more complete understanding of barrier crossing and other single-molecule processes is the inability to directly visualize the trajectories and dynamics of atoms and molecules in reactions. Rather, the kinetics are inferred from ensemble measurements or from the position of a transducer (e.g., an AFM cantilever) as a surrogate variable. Direct visualization is highly desirable. Here, we achieve the direct measurement of barrier crossing trajectories by using optical microscopy to observe position and orientation changes of pairs of Ag nanoparticles, i.e., passing events, in an optical ring trap. A two-step mechanism similar to a bimolecular exchange reaction or the Michaelis-Menten scheme is revealed by analysis that combines detailed knowledge of each trajectory, a statistically significant number of repetitions of the passing events, and the driving force dependence of the process. We find that while the total event rate increases with driving force, this increase is due to an increase in the rate of encounters. There is no driving force dependence of the rate of barrier crossing because the key motion for the process involves a random (thermal) radial fluctuation of one particle allowing the other to pass. This simple experiment can readily be extended to study more complex barrier crossing processes by replacing the spherical metal nanoparticles with anisotropic ones or by creating more intricate optical trapping potentials.

  14. Effect of yoga on self-rated visual discomfort in computer users.

    Science.gov (United States)

    Telles, Shirley; Naveen, K V; Dash, Manoj; Deginal, Rajendra; Manjunath, N K

    2006-12-03

    'Dry eye' appears to be the main contributor to the symptoms of computer vision syndrome. Regular breaks and the use of artificial tears or certain eye drops are some of the options to reduce visual discomfort. A combination of yoga practices has been shown to reduce visual strain in persons with progressive myopia. The present randomized controlled trial was planned to evaluate the effect of a combination of yoga practices on self-rated symptoms of visual discomfort in professional computer users in Bangalore. Two hundred and ninety-one professional computer users were randomly assigned to two groups, yoga (YG, n = 146) and wait list control (WL, n = 145). Both groups were assessed at baseline and after sixty days for self-rated visual discomfort using a standard questionnaire. During these 60 days the YG group practiced an hour of yoga daily for five days a week, and the WL group spent an hour daily on their usual recreational activities for the same period. At 60 days there were 62 participants remaining in the YG group and 55 in the WL group. While the scores for visual discomfort of both groups were comparable at baseline, after 60 days the YG group showed a significantly decreased score, whereas the WL group showed significantly increased scores. The results suggest that yoga practice appeared to reduce visual discomfort, while the group with no yoga intervention (WL) showed an increase in discomfort at the end of sixty days.

  15. Effect of yoga on self-rated visual discomfort in computer users

    Directory of Open Access Journals (Sweden)

    Deginal Rajendra

    2006-12-01

    Background: 'Dry eye' appears to be the main contributor to the symptoms of computer vision syndrome. Regular breaks and the use of artificial tears or certain eye drops are some of the options to reduce visual discomfort. A combination of yoga practices has been shown to reduce visual strain in persons with progressive myopia. The present randomized controlled trial was planned to evaluate the effect of a combination of yoga practices on self-rated symptoms of visual discomfort in professional computer users in Bangalore. Methods: Two hundred and ninety-one professional computer users were randomly assigned to two groups, yoga (YG, n = 146) and wait list control (WL, n = 145). Both groups were assessed at baseline and after sixty days for self-rated visual discomfort using a standard questionnaire. During these 60 days the YG group practiced an hour of yoga daily for five days a week, and the WL group spent an hour daily on their usual recreational activities for the same period. At 60 days there were 62 participants remaining in the YG group and 55 in the WL group. Results: While the scores for visual discomfort of both groups were comparable at baseline, after 60 days the YG group showed a significantly decreased score, whereas the WL group showed significantly increased scores. Conclusion: The results suggest that yoga practice appeared to reduce visual discomfort, while the group with no yoga intervention (WL) showed an increase in discomfort at the end of sixty days.

  16. Visual Soccer Analytics: Understanding the Characteristics of Collective Team Movement Based on Feature-Driven Analysis and Abstraction

    Directory of Open Access Journals (Sweden)

    Manuel Stein

    2015-10-01

    With recent advances in sensor technologies, large amounts of movement data have become available in many application areas. A novel, promising application is the data-driven analysis of team sport. Specifically, soccer matches comprise rich, multivariate movement data at high temporal and geospatial resolution. Capturing and analyzing complex movement patterns and interdependencies between the players with respect to various characteristics is challenging. So far, soccer experts manually post-analyze game situations and identify certain patterns based on their experience. We propose a visual analysis system for interactive identification of soccer patterns and situations of interest to the analyst. Our approach builds on a preliminary system, which is enhanced by semantic features defined together with a soccer domain expert. The system includes a range of useful visualizations to show the ranking of features over time and plots the change of game play situations, both helping the analyst to interpret complex game situations. A novel workflow improves the analysis process with a learning stage that takes user feedback into account. We evaluate our approach by analyzing real-world soccer matches, illustrate several use cases, and collect additional expert feedback. The resulting findings are discussed with subject matter experts.

  17. Teaching ocean wave forecasting using computer-generated visualization and animation—Part 2: swell forecasting

    Science.gov (United States)

    Whitford, Dennis J.

    2002-05-01

    This paper, the second of a two-part series, introduces undergraduate students to ocean wave forecasting using interactive computer-generated visualization and animation. Verbal descriptions and two-dimensional illustrations are often insufficient for student comprehension. Fortunately, the introduction of computers in the geosciences provides a tool for addressing this problem. Computer-generated visualization and animation, accompanied by oral explanation, have been shown to be a pedagogical improvement over more traditional methods of instruction. Cartographic science and other disciplines using geographical information systems have been especially aggressive in pioneering the use of visualization and animation, whereas oceanography has not. This paper focuses on the teaching of ocean swell wave forecasting, often considered a difficult oceanographic topic due to the mathematics and physics required, as well as its interdependence on time and space. Several MATLAB® software programs are described and offered to visualize and animate group speed, frequency dispersion, angular dispersion, propagation, and wave height forecasting of deep water ocean swell waves. Teachers may use these interactive visualizations and animations without requiring an extensive background in computer programming.
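The deep-water relations underlying the swell forecasts described above are compact enough to sketch directly. The following Python functions are illustrative only (not the paper's MATLAB programs): they compute phase speed, group speed, and swell travel time from the standard deep-water dispersion relation, in which the group speed is half the phase speed:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def phase_speed(period_s):
    """Deep-water phase speed: c = g * T / (2 * pi), in m/s."""
    return G * period_s / (2 * math.pi)

def group_speed(period_s):
    """In deep water, wave energy travels at half the phase speed."""
    return phase_speed(period_s) / 2

def travel_time_hours(distance_km, period_s):
    """Hours for swell energy (moving at group speed) to cross a distance."""
    return distance_km * 1000 / group_speed(period_s) / 3600
```

Because group speed grows with period, long-period swell outruns short-period swell: this frequency dispersion is exactly what the animations described above help students see. For example, 10-second swell crosses 1000 km in roughly a day and a half.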

  18. The pedagogical toolbox: computer-generated visual displays, classroom demonstration, and lecture.

    Science.gov (United States)

    Bockoven, Jerry

    2004-06-01

    This analogue study compared the effectiveness of computer-generated visual displays, classroom demonstration, and traditional lecture as methods of instruction used to teach neuronal structure and processes. A total of 116 undergraduate students were randomly assigned to 1 of 3 classrooms in which they experienced the same content but different teaching approaches, presented by 3 different student-instructors. Participants then completed a survey of their subjective reactions and a measure of factual information designed to evaluate objective learning outcomes. Participants repeated this factual measure 5 weeks later. Results call into question the use of classroom demonstration methods as well as the trend towards devaluing traditional lecture in favor of computer-generated visual displays.

  19. FAST - A multiprocessed environment for visualization of computational fluid dynamics

    International Nuclear Information System (INIS)

    Bancroft, G.V.; Merritt, F.J.; Plessel, T.C.; Kelaita, P.G.; Mccabe, R.K.

    1991-01-01

    The paper presents the Flow Analysis Software Toolset (FAST) to be used for fluid-mechanics analysis. The design criteria for FAST are outlined, including minimization of the data path in the computational fluid-dynamics (CFD) process, a consistent user interface, an extensible software architecture, modularization, and the isolation of three-dimensional tasks from the application programmer. Each separate process communicates through the FAST Hub, while modules such as FAST Central, NAS file input, CFD calculator, surface extractor and renderer, titler, tracer, and isolev work together to generate the scene. An interprocess communication package is discussed that makes it possible for FAST to operate as a modular environment in which resources can be shared among different machines as well as on a single host. 20 refs

  20. Brain-computer interface based on generation of visual images.

    Directory of Open Access Journals (Sweden)

    Pavel Bobrov

    This paper examines the task of recognizing EEG patterns that correspond to performing three mental tasks: relaxation and imagining two types of pictures, faces and houses. The experiments were performed using two EEG headsets: BrainProducts ActiCap and Emotiv EPOC. The Emotiv headset is becoming widely used in consumer BCI applications, allowing large-scale EEG experiments to be conducted in the future. Since classification accuracy significantly exceeded the level of random classification during the first three days of the experiment with the EPOC headset, a control experiment was performed on the fourth day using ActiCap. The control experiment showed that utilization of high-quality research equipment can enhance classification accuracy (up to 68% in some subjects) and that the accuracy is independent of the presence of EEG artifacts related to blinking and eye movement. This study also shows that a computationally inexpensive Bayesian classifier based on covariance matrix analysis yields classification accuracy in this problem similar to that of a more sophisticated Multi-class Common Spatial Patterns (MCSP) classifier.
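A covariance-based Bayesian classifier of the kind mentioned above can be sketched as follows: model each mental state by the average spatial covariance of its training epochs, then assign a new epoch to the class under whose zero-mean Gaussian model it is most likely. This is an illustrative reconstruction of the general technique, not the authors' implementation:

```python
import numpy as np

def trial_cov(X):
    """Spatial covariance of one EEG epoch (channels x samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    return X @ X.T / X.shape[1]

def fit(trials_by_class):
    """Average the epoch covariances of each class into one model matrix."""
    return {label: np.mean([trial_cov(X) for X in epochs], axis=0)
            for label, epochs in trials_by_class.items()}

def log_likelihood(S, C):
    """Per-sample Gaussian log-likelihood (up to a constant) of an epoch
    with sample covariance S under a zero-mean model with covariance C."""
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + np.trace(np.linalg.solve(C, S)))

def predict(model, X):
    """Assign the epoch to the class with the highest log-likelihood."""
    S = trial_cov(X)
    return max(model, key=lambda label: log_likelihood(S, model[label]))
```

The appeal noted in the abstract is computational: training reduces to averaging covariance matrices and classification to a determinant and a linear solve per class, with no iterative optimization.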

  1. A Visualization Review of Cloud Computing Algorithms in the Last Decade

    Directory of Open Access Journals (Sweden)

    Junhu Ruan

    2016-10-01

    Cloud computing has competitive advantages, such as on-demand self-service, rapid computing, cost reduction, and almost unlimited storage, that have attracted extensive attention from both academia and industry in recent years. Some review works have been reported to summarize extant studies related to cloud computing, but few analyze these studies based on their citations. Co-citation analysis can give scholars strong support for identifying the intellectual bases and leading edges of a specific field. In addition, advanced algorithms, which directly affect the availability, efficiency, and security of cloud computing, are the key to conducting computing across various clouds. Motivated by these observations, we conduct a specific visualization review of the studies related to cloud computing algorithms using one mainstream co-citation analysis tool, CiteSpace. The visualization results identify the most influential studies, journals, countries, institutions, and authors on cloud computing algorithms and reveal the intellectual bases and focuses of cloud computing algorithms in the literature, providing guidance for interested researchers to make further studies on cloud computing algorithms.
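Co-citation analysis rests on a simple count: two works are co-cited whenever they appear together in the same paper's reference list, and frequently co-cited pairs mark the intellectual base of a field. A minimal counting sketch follows (tools like CiteSpace layer clustering, burst detection, and visualization on top of counts like these):

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(reference_lists):
    """Count how often each pair of works is cited together.

    reference_lists: an iterable of reference lists, one per citing paper.
    Returns a Counter mapping sorted (work_a, work_b) pairs to the number
    of papers that cite both."""
    counts = Counter()
    for refs in reference_lists:
        # Deduplicate and sort so each pair has one canonical key.
        for a, b in combinations(sorted(set(refs)), 2):
            counts[(a, b)] += 1
    return counts
```

For example, with three citing papers whose reference lists are ["A", "B", "C"], ["A", "B"], and ["B", "C"], the pairs (A, B) and (B, C) are each co-cited twice, while (A, C) is co-cited once.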

  2. Visual and psychological stress during computer work in healthy, young females-physiological responses.

    Science.gov (United States)

    Mork, Randi; Falkenberg, Helle K; Fostervold, Knut Inge; Thorud, Hanne Mari S

    2018-05-30

    Among computer workers, visual complaints and neck pain are highly prevalent. This study explores how simulated occupational stressors during computer work, like glare and psychosocial stress, affect physiological responses in young females with normal vision. The study was a within-subject laboratory experiment with a counterbalanced, repeated-measures design. Forty-three females performed four 10-min computer-work sessions with different stress exposures: (1) minimal stress; (2) visual stress (direct glare); (3) psychological stress; and (4) combined visual and psychological stress. Muscle activity and muscle blood flow in trapezius, muscle blood flow in orbicularis oculi, heart rate, blood pressure, blink rate, and postural angles were continuously recorded. Immediately after each computer-work session, fixation disparity was measured and a questionnaire regarding perceived workstation lighting and stress was completed. Exposure to direct glare resulted in increased trapezius muscle blood flow, increased blink rate, and forward bending of the head. Psychological stress induced a transient increase in trapezius muscle activity and a more forward-bent posture. Bending forward towards the computer screen was correlated with higher productivity (reading speed), indicating a concentration or stress response. Forward-bent posture was also associated with changes in fixation disparity. Furthermore, during computer work per se, trapezius muscle activity and blood flow, orbicularis oculi muscle blood flow, and heart rate were increased compared to rest. Exposure to glare and psychological stress during computer work were shown to influence the trapezius muscle, posture, and blink rate in young, healthy females with normal binocular vision, but in different ways. Accordingly, both visual and psychological factors must be taken into account when optimizing computer workstations to reduce physiological responses that may cause excessive eyestrain and musculoskeletal load.

  3. Computational modeling of z-pinch-driven hohlraum experiments on Z

    International Nuclear Information System (INIS)

    Vesey, R.A.; Porter, J.L. Jr.; Cuneo, M.E.

    1999-01-01

    The high-yield inertial confinement fusion concept based on a double-ended z-pinch driven hohlraum tolerates the degree of spatial inhomogeneity present in z-pinch plasma radiation sources by utilizing a relatively large hohlraum wall surface to provide spatial smoothing of the radiation delivered to the fusion capsule. The z-pinch radiation sources are separated from the capsule by radial spoke arrays. Key physics issues for this concept are the behavior of the spoke array (its effect on z-pinch performance and x-ray transmission) and the uniformity of the radiation flux incident on the surface of the capsule. Experiments are underway on the Z accelerator at Sandia National Laboratories to gain understanding of these issues in a single-sided drive geometry. These experiments seek to measure the radiation coupling among the z-pinch, source hohlraum, and secondary hohlraum, as well as the uniformity of the radiation flux striking a foam witness ball diagnostic positioned in the secondary hohlraum. This paper presents the results of computational modeling of various aspects of these experiments.

  4. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    Science.gov (United States)

    Parke, F. I.

    1981-01-01

    Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of fluid-flow parameters (pressure, temperature, and velocity vector) at many points in the fluid. Visualization of the spatial variation in these parameters is important for comprehending and checking the data generated, for identifying the regions of interest in the flow, and for effectively communicating information about the flow to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied, and the results are presented.

  5. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    International Nuclear Information System (INIS)

    Bancroft, G.; Plessel, T.; Merritt, F.; Watson, V.

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques (post-processing, tracking, and steering) are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers. 7 refs

  6. The Use of Robotics to Promote Computing to Pre-College Students with Visual Impairments

    Science.gov (United States)

    Ludi, Stephanie; Reichlmayr, Tom

    2011-01-01

    This article describes an outreach program to broaden participation in computing to include more students with visual impairments. The precollege workshops target students in grades 7-12 and engage students with robotics programming. The use of robotics at the precollege level has become popular in part due to the availability of Lego Mindstorm…

  7. Multivariate Gradient Analysis for Evaluating and Visualizing a Learning System Platform for Computer Programming

    Science.gov (United States)

    Mather, Richard

    2015-01-01

    This paper explores the application of canonical gradient analysis to evaluate and visualize student performance and acceptance of a learning system platform. The subject of evaluation is a first year BSc module for computer programming. This uses "Ceebot," an animated and immersive game-like development environment. Multivariate…

  8. A Computer Supported Teamwork Project for People with a Visual Impairment.

    Science.gov (United States)

    Hale, Greg

    2000-01-01

    Discussion of the use of computer supported teamwork (CSTW) in team-based organizations focuses on problems that visually impaired people have reading graphical user interface software via screen reader software. Describes a project that successfully used email for CSTW, and suggests issues needing further research. (LRW)

  9. APPLICATION OF COMPUTER-AIDED TOMOGRAPHY TO VISUALIZE AND QUANTIFY BIOGENIC STRUCTURES IN MARINE SEDIMENTS

    Science.gov (United States)

    We used computer-aided tomography (CT) for 3D visualization and 2D analysis of marine sediment cores from 3 stations (at 10, 75 and 118 m depths) with different environmental impact. Biogenic structures such as tubes and burrows were quantified and compared among st...

  10. Understanding and Improving Blind Students' Access to Visual Information in Computer Science Education

    Science.gov (United States)

    Baker, Catherine M.

    2017-01-01

    Teaching people with disabilities tech skills empowers them to create solutions to problems they encounter and prepares them for careers. However, computer science is typically taught in a highly visual manner which can present barriers for people who are blind. The goal of this dissertation is to understand and decrease those barriers. The first…

  11. The Uses of Literacy in Studying Computer Games: Comparing Students' Oral and Visual Representations of Games

    Science.gov (United States)

    Pelletier, Caroline

    2005-01-01

    This paper compares the oral and visual representations which 12 to 13-year-old students produced in studying computer games as part of an English and Media course. It presents the arguments for studying multimodal texts as part of a literacy curriculum and then provides an overview of the games course devised by teachers and researchers. The…

  12. Computer-Aided design of belt and pulley systems using Visual Basic

    African Journals Online (AJOL)

    A Visual Basic Code "DriveCad" was developed for analysis and design of flat and V-belt drives. The Code was used to solve design problems and the results compared favorably with data generated by manual computations, with variation of less than 1.6 %. DriveCad was used to generate scaled 2-dimensional drawings ...

  13. A novel brain-computer interface based on the rapid serial visual presentation paradigm.

    Science.gov (United States)

    Acqualagna, Laura; Treder, Matthias Sebastian; Schreuder, Martijn; Blankertz, Benjamin

    2010-01-01

    Most present-day visual brain-computer interfaces (BCIs) suffer from the fact that they rely on eye movements, are slow-paced, or feature a small vocabulary. As a potential remedy, we explored a novel BCI paradigm consisting of a central rapid serial visual presentation (RSVP) of the stimuli. It has a large vocabulary and realizes a BCI system based on covert non-spatial selective visual attention. In an offline study, eight participants were presented sequences of rapid bursts of symbols. Two different speeds and two different color conditions were investigated. Robust early visual and P300 components were elicited time-locked to the presentation of the target. Offline classification revealed a mean accuracy of up to 90% for selecting the correct symbol out of 30 possibilities. The results suggest that RSVP-BCI is a promising new paradigm, also for patients with oculomotor impairments.
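The offline classification step described above, training a linear classifier on stimulus-locked epoch features and choosing the symbol whose presentations score highest, can be sketched as follows. This is an illustrative reconstruction using a shrinkage-stabilized linear discriminant, not the study's exact pipeline:

```python
import numpy as np

def train_discriminant(target_epochs, nontarget_epochs):
    """Crude linear discriminant: weight the class-mean difference by the
    inverse pooled covariance, shrunk toward identity for stability.

    Epochs are rows of feature vectors (e.g., downsampled ERP amplitudes)."""
    X = np.vstack([target_epochs, nontarget_epochs])
    mean_diff = target_epochs.mean(axis=0) - nontarget_epochs.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 0.1 * np.eye(X.shape[1])
    return np.linalg.solve(cov, mean_diff)

def select_symbol(w, epochs_by_symbol):
    """Pick the symbol whose stimulus-locked epochs score highest on
    average; averaging over repeated bursts suppresses single-trial noise."""
    return max(epochs_by_symbol,
               key=lambda s: np.mean(epochs_by_symbol[s] @ w))
```

Averaging the classifier score over repeated presentations is what lets accuracies like the reported 90% emerge from individually noisy single-trial ERPs.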

  14. Computational Modelling of the Neural Representation of Object Shape in the Primate Ventral Visual System

    Directory of Open Access Journals (Sweden)

    Akihiro Eguchi

    2015-08-01

    Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, but provides an essential foundation from which the brain is subsequently able to recognise the whole object.

  15. Using scattering theory to compute invariant manifolds and numerical results for the laser-driven Hénon-Heiles system.

    Science.gov (United States)

    Blazevski, Daniel; Franklin, Jennifer

    2012-12-01

    Scattering theory is a convenient way to describe systems that are subject to time-dependent perturbations which are localized in time. Using scattering theory, one can compute time-dependent invariant objects for the perturbed system knowing the invariant objects of the unperturbed system. In this paper, we use scattering theory to give numerical computations of invariant manifolds appearing in laser-driven reactions. In this setting, invariant manifolds separate regions of phase space that lead to different outcomes of the reaction and can be used to compute reaction rates.

  16. Lack of the sodium-driven chloride bicarbonate exchanger NCBE impairs visual function in the mouse retina.

    Directory of Open Access Journals (Sweden)

    Gerrit Hilgen

    Regulation of ion and pH homeostasis is essential for normal neuronal function. The sodium-driven chloride bicarbonate exchanger NCBE (Slc4a10), a member of the SLC4 family of bicarbonate transporters, uses the transmembrane gradient of sodium to drive cellular net uptake of bicarbonate and to extrude chloride, thereby modulating both intracellular pH (pHi) and chloride concentration ([Cl-]i) in neurons. Here we show that NCBE is strongly expressed in the retina. As GABA-A receptors conduct both chloride and bicarbonate, we hypothesized that NCBE may be relevant for GABAergic transmission in the retina. Importantly, we found a differential expression of NCBE in bipolar cells: whereas NCBE was expressed on ON and OFF bipolar cell axon terminals, it localized to dendrites only in OFF bipolar cells. On these compartments, NCBE colocalized with the main neuronal chloride extruder KCC2, which renders GABA hyperpolarizing. NCBE was also expressed in starburst amacrine cells, but was absent from neurons known to depolarize in response to GABA, such as horizontal cells. Mice lacking NCBE showed decreased visual acuity and contrast sensitivity in behavioral experiments and smaller b-wave amplitudes and longer latencies in electroretinograms. Ganglion cells from NCBE-deficient mice also showed altered temporal response properties. In summary, our data suggest that NCBE may serve to maintain intracellular chloride and bicarbonate concentrations in retinal neurons. Consequently, lack of NCBE in the retina may result in changes in pHi regulation and chloride-dependent inhibition, leading to altered signal transmission and impaired visual function.

  17. Center for computation and visualization of geometric structures. [Annual], Progress report

    Energy Technology Data Exchange (ETDEWEB)

    1993-02-12

    The mission of the Center is to establish a unified environment promoting research, education, and software and tool development. The work is centered on computing, interpreted in a broad sense to include the relevant theory, development of algorithms, and actual implementation. The research aspects of the Center are focused on geometry; correspondingly, the computational aspects are focused on three- (and higher-) dimensional visualization. The educational aspects are likewise centered on computing and focused on geometry. A broader term than education is 'communication', which encompasses the challenge of explaining to the world current research in mathematics, and specifically in geometry.

  18. A study of visual and musculoskeletal health disorders among computer professionals in NCR Delhi

    Directory of Open Access Journals (Sweden)

    Talwar Richa

    2009-01-01

    Objective: To study the prevalence of health disorders among computer professionals and their association with working environment conditions. Study design: Cross-sectional. Materials and Methods: A sample of 200 computer professionals from Delhi and the NCR, including software developers, call centre workers, and data entry workers. Results: The prevalence of visual problems in the study group was 76% (152/200), and musculoskeletal problems were reported by 76.5% (153/200). It was found that visual complaints increased gradually with the number of hours spent working on computers daily, and the same relation held for musculoskeletal problems as well. Visual problems were less frequent in persons using antiglare screens and in those with adequate lighting in the room. Musculoskeletal problems were significantly less frequent among those using cushioned chairs and soft keypads. Conclusion: A significant proportion of the computer professionals were found to have health problems, indicating that the occupational health of people working in the computer field needs to be emphasized as a field of concern in occupational health.

  19. Control of a visual keyboard using an electrocorticographic brain-computer interface.

    Science.gov (United States)

    Krusienski, Dean J; Shih, Jerry J

    2011-05-01

    Brain-computer interfaces (BCIs) are devices that enable severely disabled people to communicate and interact with their environments using their brain waves. Most studies investigating BCI in humans have used scalp EEG as the source of electrical signals and focused on motor control of prostheses or computer cursors on a screen. The authors hypothesize that the use of brain signals obtained directly from the cortical surface will more effectively control a communication/spelling task compared to scalp EEG. A total of 6 patients with medically intractable epilepsy were tested for the ability to control a visual keyboard using electrocorticographic (ECoG) signals. ECoG data collected during a P300 visual task paradigm were preprocessed and used to train a linear classifier to subsequently predict the intended target letters. The classifier was able to predict the intended target character at or near 100% accuracy using fewer than 15 stimulation sequences in 5 of the 6 people tested. ECoG data from electrodes outside the language cortex contributed to the classifier and enabled participants to write words on a visual keyboard. This is a novel finding because previous invasive BCI research in humans used signals exclusively from the motor cortex to control a computer cursor or prosthetic device. These results demonstrate that ECoG signals from electrodes both overlying and outside the language cortex can reliably control a visual keyboard to generate language output without voice or limb movements.

  20. Community-driven development for computational biology at Sprints, Hackathons and Codefests.

    Science.gov (United States)

    Möller, Steffen; Afgan, Enis; Banck, Michael; Bonnal, Raoul J P; Booth, Timothy; Chilton, John; Cock, Peter J A; Gumbel, Markus; Harris, Nomi; Holland, Richard; Kalaš, Matúš; Kaján, László; Kibukawa, Eri; Powel, David R; Prins, Pjotr; Quinn, Jacqueline; Sallou, Olivier; Strozzi, Francesco; Seemann, Torsten; Sloggett, Clare; Soiland-Reyes, Stian; Spooner, William; Steinbiss, Sascha; Tille, Andreas; Travis, Anthony J; Guimera, Roman; Katayama, Toshiaki; Chapman, Brad A

    2014-01-01

    Computational biology comprises a wide range of technologies and approaches. Multiple technologies can be combined to create more powerful workflows if the individuals contributing the data or providing tools for its interpretation can find mutual understanding and consensus. Much conversation and joint investigation are required in order to identify and implement the best approaches. Traditionally, scientific conferences feature talks presenting novel technologies or insights, followed up by informal discussions during coffee breaks. In multi-institution collaborations, in order to reach agreement on implementation details or to transfer deeper insights in a technology and practical skills, a representative of one group typically visits the other. However, this does not scale well when the number of technologies or research groups is large. Conferences have responded to this issue by introducing Birds-of-a-Feather (BoF) sessions, which offer an opportunity for individuals with common interests to intensify their interaction. However, parallel BoF sessions often make it hard for participants to join multiple BoFs and find common ground between the different technologies, and BoFs are generally too short to allow time for participants to program together. This report summarises our experience with computational biology Codefests, Hackathons and Sprints, which are interactive developer meetings. They are structured to reduce the limitations of traditional scientific meetings described above by strengthening the interaction among peers and letting the participants determine the schedule and topics. These meetings are commonly run as loosely scheduled "unconferences" (self-organized identification of participants and topics for meetings) over at least two days, with early introductory talks to welcome and organize contributors, followed by intensive collaborative coding sessions. We summarise some prominent achievements of those meetings and describe differences in how

  1. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    Science.gov (United States)

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven tube current modulation (TCM) strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements, prescribing more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8% but also yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose or, equivalently, to provide a similar level of performance at reduced dose.
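The maxi-min objective described above can be illustrated with a toy sketch (all quantities below are hypothetical stand-ins, not the paper's detectability or reconstruction models): per-location detectability is taken to be a concave function of per-view fluence weights, and gradient ascent on the worst-case location pushes fluence toward it under a fixed dose budget.

```python
import numpy as np

# Toy maxi-min fluence optimization (illustrative models only).
rng = np.random.default_rng(0)
K, V = 5, 8                          # image locations, projection views
A = rng.uniform(0.5, 2.0, (K, V))    # assumed sensitivity of each location to view fluence

def detectability(w):
    """Hypothetical per-location d': concave (saturating) in fluence."""
    return np.sqrt(A @ w)

def min_dprime(w):
    return detectability(w).min()

# Gradient ascent on the worst-case location, renormalized to a unit
# total-fluence (dose) budget after each step.
w = np.full(V, 1.0 / V)
for _ in range(2000):
    worst = int(np.argmin(detectability(w)))
    grad = A[worst] / (2.0 * np.sqrt(A[worst] @ w))
    w = np.clip(w + 0.01 * grad, 1e-6, None)
    w /= w.sum()

uniform = np.full(V, 1.0 / V)
print(min_dprime(uniform), min_dprime(w))
```

Under these assumptions the optimized profile raises the worst-case d′ relative to a uniform fluence allocation; the actual method alternates such an update with a regularization update.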

  2. Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing

    Science.gov (United States)

    Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.

    2014-12-01

    After the Great East Japan Earthquake of March 11, 2011, intelligent visualization of seismic information has become important for understanding earthquake phenomena. At the same time, the quantity of seismic data has grown enormous with the progress of high-accuracy observation networks, and many parameters (e.g., positional information, origin time, magnitude) must be handled to display seismic information efficiently. High-speed processing of data and image information is therefore necessary for handling such large volumes of seismic data. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for data processing and calculation in various fields of study, a trend known as GPGPU (General-Purpose computing on GPUs). Over the last few years the performance of GPUs has improved rapidly, providing a high-performance computing environment at lower cost than before. Moreover, GPUs have an inherent advantage for visualizing processed data, because they were originally designed for graphics processing: in GPU computing, the processed data are already stored in video memory, so drawing information can be written directly to the VRAM on the video card by combining CUDA with a graphics API. In this study, we employ CUDA together with OpenGL and/or DirectX to realize a full-GPU implementation. This method makes it possible to write drawing information to the VRAM without PCIe bus data transfer, enabling high-speed processing of seismic data. The present study examines GPU computing-based high-speed visualization and its feasibility for a high-speed visualization system for hypocenter data.

  3. Optically Driven Spin Based Quantum Dots for Quantum Computing - Research Area 6 Physics 6.3.2

    Science.gov (United States)

    2015-12-15

    This program conducted experimental and theoretical research aimed at developing an optically driven quantum dot quantum computer, where the qubit is the spin of an electron trapped in a self-assembled InAs quantum dot. Optical manipulation using the trion state... In this reporting period, we discovered that the nuclear spin quieting first observed in 2008 is present in vertically coupled quantum dots but

  4. Addition of visual noise boosts evoked potential-based brain-computer interface.

    Science.gov (United States)

    Xie, Jun; Xu, Guanghua; Wang, Jing; Zhang, Sicong; Zhang, Feng; Li, Yeping; Han, Chengcheng; Li, Lili

    2014-05-14

    Although noise has a proven beneficial role in brain function, there have been no attempts to exploit the stochastic resonance effect in neural engineering applications, especially in research on brain-computer interfaces (BCIs). In our study, a steady-state motion visual evoked potential (SSMVEP)-based BCI with periodic visual stimulation plus moderate spatiotemporal noise achieved better offline and online performance due to the enhancement of periodic components in brain responses, accompanied by suppression of high harmonics. Offline results exhibited a bell-shaped, resonance-like dependence on noise intensity, and online performance improvements of 7-36% were achieved when identical visual noise was adopted for different stimulation frequencies. Using neural encoding modeling, these phenomena can be explained as noise-induced input-output synchronization in human sensory systems, which commonly possess a low-pass property. Our work demonstrates that noise can boost BCIs in addressing human needs.
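The bell-shaped, resonance-like behavior reported above can be reproduced in a minimal stochastic-resonance toy (a hard threshold detector, not the SSMVEP pipeline; all parameters illustrative): a subthreshold periodic signal becomes detectable at the stimulus frequency only for intermediate noise levels.

```python
import numpy as np

# Minimal stochastic-resonance sketch: a 10 Hz signal with amplitude 0.4,
# below a detection threshold of 1.0, passed through a hard threshold.
rng = np.random.default_rng(42)
fs, f0, n = 1000, 10.0, 20000            # sample rate (Hz), signal freq, samples
t = np.arange(n) / fs
signal = 0.4 * np.sin(2 * np.pi * f0 * t)

def output_power_at_f0(noise_std):
    """Power of the thresholded output at the stimulus frequency."""
    noisy = signal + rng.normal(0.0, noise_std, n)
    out = (noisy > 1.0).astype(float)    # hard threshold detector
    spectrum = np.fft.rfft(out - out.mean())
    k = int(round(f0 * n / fs))          # FFT bin of the 10 Hz component
    return np.abs(spectrum[k]) ** 2

levels = [0.05, 0.5, 5.0]                # too little, moderate, too much noise
powers = [output_power_at_f0(s) for s in levels]
print(powers)  # the moderate noise level yields the largest 10 Hz power
```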

  5. Visual cortex activation recorded by dynamic emission computed tomography of inhaled xenon 133

    DEFF Research Database (Denmark)

    Henriksen, L; Paulson, O B; Lassen, N A

    1981-01-01

    Regional cerebral blood flow (CBF) was studied tomographically with 133Xe administered by inhalation over a 1-min period at a concentration of 10 mCi/l. A fast rotating ("dynamic") single-photon emission computed tomograph with four detector heads was used, an instrument that has been found to be well suited for detecting focal ischemia. In the present study its ability to detect focal hyperemia was investigated in 13 normal subjects studied during rest and during visual stimulation. A flickering light "seen" with eyes open and closed increased blood flow in the visual cortex by 35% and 22% respectively. Looking at different pictures displayed on a screen raised regional CBF by 26%. The most complex task, reading and copying a text, increased blood flow by 45%. Averaging the different tasks resulted in a mean regional CBF increase in the visual cortex of 35%. The result is comparable...

  6. Visual perception can account for the close relation between numerosity processing and computational fluency.

    Science.gov (United States)

    Zhou, Xinlin; Wei, Wei; Zhang, Yiyun; Cui, Jiaxin; Chen, Chuansheng

    2015-01-01

    Studies have shown that numerosity processing (e.g., comparison of the numbers of dots in two dot arrays) is significantly correlated with arithmetic performance. Researchers have attributed this association to the fact that both tasks share magnitude processing. The current investigation tested an alternative hypothesis, which states that visual perceptual ability (as measured by a figure-matching task) can account for the close relation between numerosity processing and arithmetic performance (computational fluency). Four hundred and twenty-four third- to fifth-grade children (220 boys and 204 girls, 8.0-11.0 years old; 120 third graders, 146 fourth graders, and 158 fifth graders) were recruited from two schools (one urban and one suburban) in Beijing, China. Six classes were randomly selected from each school, and all students in each selected class participated in the study. All children were given a series of cognitive and mathematical tests, including numerosity comparison, figure matching, forward verbal working memory, visual tracing, non-verbal matrices reasoning, mental rotation, choice reaction time, arithmetic tests, and a curriculum-based mathematical achievement test. Results showed that figure-matching ability had higher correlations with numerosity processing and computational fluency than did the other cognitive factors (e.g., forward verbal working memory, visual tracing, non-verbal matrix reasoning, mental rotation, and choice reaction time). More importantly, hierarchical multiple regression showed that figure-matching ability accounted for the well-established association between numerosity processing and computational fluency. In support of the visual perception hypothesis, the results suggest that visual perceptual ability, rather than magnitude processing, may be the shared component of numerosity processing and arithmetic performance.
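The hierarchical-regression logic can be sketched on synthetic data (illustrative numbers only, not the study's data): when a shared visual-perception factor drives both measures, partialling out a figure-matching score shrinks the numerosity-fluency correlation.

```python
import numpy as np

# Synthetic illustration of the shared-factor account: a latent
# visual-perception ability drives figure matching, numerosity
# comparison, and computational fluency (all loadings assumed).
rng = np.random.default_rng(7)
n = 424                                   # same sample size as the study
visual = rng.normal(size=n)               # latent visual-perception ability
figure_match = visual + 0.5 * rng.normal(size=n)
numerosity = visual + 0.8 * rng.normal(size=n)
fluency = visual + 0.8 * rng.normal(size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def residualize(y, x):
    """Residuals of y after regressing out x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

zero_order = corr(numerosity, fluency)
partial = corr(residualize(numerosity, figure_match),
               residualize(fluency, figure_match))
print(zero_order, partial)  # the partial correlation is much smaller
```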

  7. Simultaneous modeling of visual saliency and value computation improves predictions of economic choice.

    Science.gov (United States)

    Towal, R Blythe; Mormann, Milica; Koch, Christof

    2013-10-01

    Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift-diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions.
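The additive model class described above can be sketched as a race of drift-diffusion accumulators, one per alternative, whose drifts combine saliency and value in the one-third/two-thirds weighting the authors report (parameters here are illustrative, not fitted).

```python
import numpy as np

# Race of drift-diffusion processes with additive saliency + value drift.
rng = np.random.default_rng(1)

def ddm_race(saliency, value, w_sal=1/3, w_val=2/3,
             threshold=1.0, noise=0.1, dt=0.001, max_steps=20000):
    """Return the index of the first accumulator to reach threshold."""
    drift = w_sal * np.asarray(saliency) + w_val * np.asarray(value)
    x = np.zeros(len(drift))
    for _ in range(max_steps):
        x += drift * dt + noise * np.sqrt(dt) * rng.normal(size=len(x))
        if (x >= threshold).any():
            return int(np.argmax(x))
    return int(np.argmax(x))

# Item 1 has the highest value; item 0 is more salient but worth less.
saliency = [0.9, 0.2, 0.2]
value = [0.2, 0.8, 0.3]
choices = [ddm_race(saliency, value) for _ in range(100)]
counts = np.bincount(choices, minlength=3)
print(counts)  # the value-weighted drift favors item 1 overall
```

With the two-thirds weight on value, the high-value item wins most races even though another item is more salient, mirroring the paper's conclusion that saliency has a smaller but significant influence.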

  8. Computer-assisted intraoperative visualization of dental implants. Augmented reality in medicine

    International Nuclear Information System (INIS)

    Ploder, O.; Wagner, A.; Enislidis, G.; Ewers, R.

    1995-01-01

    In this paper, a recently developed computer-based dental implant positioning system with an image-to-tissue interface is presented. On a computer monitor or in a head-up display, planned implant positions and the implant drill are graphically superimposed on the patient's anatomy. Electromagnetic 3D sensors track all skull and jaw movements; their signal feedback to the workstation induces permanent real-time updating of the virtual graphics' position. An experimental study and a clinical case demonstrate the concept of the augmented reality environment: the physician can see the operating field with superimposed virtual structures, such as dental implants and surgical instruments, without losing visual control of the operating field. The operation system therefore allows visualization of the CT-planned implant position and the incorporation of important anatomical structures. The presented method for the first time links preoperatively acquired radiologic data, the planned implant location, and intraoperative navigation assistance for orthotopic positioning of dental implants. (orig.) [de

  9. Perceptual category learning and visual processing: An exercise in computational cognitive neuroscience.

    Science.gov (United States)

    Cantwell, George; Riesenhuber, Maximilian; Roeder, Jessica L; Ashby, F Gregory

    2017-05-01

    The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN-namely, that it should be possible to interface different CCN models in a plug-and-play fashion-to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Psychological and Pedagogical Features of Teaching Students with Visual Deprivation in Training to Work on a Personal Computer

    Directory of Open Access Journals (Sweden)

    Sokolov V. V.

    2015-08-01

    This article discusses how children with profound sight impairment perceive visual information from a computer screen using synthesized speech and a Braille display. It describes research on the development of user skills in children with visual deprivation and illustrates the main differences between the perception of on-screen information by users of a visual interface and by users forced to rely on special software for non-visual access. The most significant results of the research are provided, together with a number of methodological recommendations on teaching children of this category to work on a personal computer without visual control. The article may be of interest to teachers of informatics who teach students with profound visual impairment, to parents of children in this category, and to scientists whose professional interests lie in the area of pedagogy of the blind.

  11. [Ocular and visual alterations in computer workers contact lens wearers: scoping review].

    Science.gov (United States)

    Tauste Francés, Ana; Ronda-Pérez, Elena; Seguí Crespo, María del Mar

    2014-01-01

    The high number of computer workers wearing contact lenses raises the question of whether the combination of these two risk factors for eye health may worsen Computer Vision Syndrome. The aim of this review is to synthesize the knowledge about ocular and visual alterations related to computer use in contact lens wearers. An international review of scientific papers (2003-2013) in Spanish and English was conducted using the scoping review method, in Medline (through PubMed) and in Scopus. The initial search yielded 114 references; after applying inclusion/exclusion criteria, six of them were included. All of them reveal that symptoms during computer use are more prevalent in contact lens wearers, with the prevalence of symptoms ranging from 16.9% to 95.0% in wearers and from 9.9% to 57.5% in non-wearers, and with wearers four times more likely to develop dry eye [OR: 4.07 (95% CI: 3.52 to 4.71)]. Computer workers suffer more ocular and visual disturbances if they are also contact lens users, but studies are few and inconclusive. Likewise, further research is needed on contact lens type and conditions of use, with regard to symptoms as well as tear quality and the ocular surface. Silicone hydrogel lenses are associated with greater comfort.

  12. Visual Fatigue Induced by Viewing a Tablet Computer with a High-resolution Display.

    Science.gov (United States)

    Kim, Dong Ju; Lim, Chi Yeon; Gu, Namyi; Park, Choul Yong

    2017-10-01

    In the present study, the visual discomfort induced by smart mobile devices was assessed in normal and healthy adults. Fifty-nine volunteers (age, 38.16 ± 10.23 years; male : female = 19 : 40) were exposed to tablet computer screen stimuli (iPad Air, Apple Inc.) for 1 hour. Participants watched a movie or played a computer game on the tablet computer. Visual fatigue and discomfort were assessed using an asthenopia questionnaire, tear film break-up time, and total ocular wavefront aberration before and after viewing smart mobile devices. Based on the questionnaire, viewing smart mobile devices for 1 hour significantly increased mean total asthenopia score from 19.59 ± 8.58 to 22.68 ± 9.39 (p < 0.001). Specifically, the scores for five items (tired eyes, sore/aching eyes, irritated eyes, watery eyes, and hot/burning eye) were significantly increased by viewing smart mobile devices. Tear film break-up time significantly decreased from 5.09 ± 1.52 seconds to 4.63 ± 1.34 seconds (p = 0.003). However, total ocular wavefront aberration was unchanged. Visual fatigue and discomfort were significantly induced by viewing smart mobile devices, even though the devices were equipped with state-of-the-art display technology. © 2017 The Korean Ophthalmological Society

  13. Invariant visual object and face recognition: neural and computational bases, and a model, VisNet

    Directory of Open Access Journals (Sweden)

    Edmund T eRolls

    2012-06-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short term memory trace, and/or it can use spatial continuity in Continuous Spatial Transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in for example spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The model has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.

  14. Visualizing water

    Science.gov (United States)

    Baart, F.; van Gils, A.; Hagenaars, G.; Donchyts, G.; Eisemann, E.; van Velzen, J. W.

    2016-12-01

    A compelling visualization is captivating, beautiful, and narrative. Here we show how melding the skills of computer graphics, art, statistics, and environmental modeling can generate innovative, attractive, and highly informative visualizations. We focus on visualizing forecasts and measurements of water (water level, waves, currents, density, and salinity). For computer graphics and the arts, water is an important topic because it occurs in many natural scenes. For environmental modeling and statistics, water is an important topic because it is essential for transport, a healthy environment, fruitful agriculture, and a safe living environment. The different disciplines take different approaches to visualizing water. In computer graphics, the focus is on rendering water as realistically as possible. This focus on realistic perception (versus the focus on physical balance pursued by environmental scientists) has resulted in fascinating renderings, as seen in recent games and movies. Visualization techniques for statistical results have benefited from advances in design and journalism, resulting in enthralling infographics. The field of environmental modeling has absorbed advances in contemporary cartography, as seen in the latest interactive data-driven maps. We systematically review the emerging types of water visualization. The examples we analyze range from dynamically animated forecasts, interactive paintings, and infographics to modern cartography and web-based photorealistic rendering. By characterizing the intended audience, the design choices, the scales (e.g., time, space), and the explorability, we provide a set of guidelines and genres. The unique contributions of the different fields show how innovations in the current state of the art of water visualization have benefited from interdisciplinary collaborations.

  15. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.

    Science.gov (United States)

    Rolls, Edmund T

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.
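The trace rule that VisNet uses for temporal continuity can be sketched as follows (illustrative parameters; the full model adds a competitive feature hierarchy and normalization across layers): a postsynaptic short-term memory trace lets successive transforms of the same object strengthen onto the same output neuron.

```python
import numpy as np

# Minimal trace-rule sketch: w_j += eta * ybar * x_j, where ybar is an
# exponentially decaying trace of the postsynaptic activation.
def trace_learning(inputs, w, eta=0.1, delta=0.8):
    """Hebbian update with a short-term memory trace of activity.

    inputs: sequence of input vectors (successive transforms of an object)
    w: initial weight vector
    """
    w = w.copy()
    ybar = 0.0                                  # postsynaptic trace
    for x in inputs:
        y = float(w @ x)                        # postsynaptic activation
        ybar = (1 - delta) * y + delta * ybar   # decaying memory trace
        w += eta * ybar * x                     # trace learning rule
        w /= np.linalg.norm(w)                  # keep weights bounded
    return w

rng = np.random.default_rng(3)
# Two "transforms" (e.g. shifted views) of the same object share feature 1;
# feature 3 belongs to neither view.
view_a = np.array([1.0, 1.0, 0.0, 0.0])
view_b = np.array([0.0, 1.0, 1.0, 0.0])
w0 = rng.uniform(0.1, 0.2, 4)
w = trace_learning([view_a, view_b] * 50, w0)
print(w)  # weights onto features of both views grow; the unused one decays
```

Because the trace carries activity from view A into the update for view B, features unique to each view become bound to the same neuron, which is the mechanism for the translation and view invariance described above.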

  16. Visualization of flaws within heavy section ultrasonic test blocks using high energy computed tomography

    International Nuclear Information System (INIS)

    House, M.B.; Ross, D.M.; Janucik, F.X.; Friedman, W.D.; Yancey, R.N.

    1996-05-01

    The feasibility of high energy computed tomography (9 MeV) to detect volumetric and planar discontinuities in large pressure vessel mock-up blocks was studied. The data supplied by the manufacturer of the test blocks on the intended flaw geometry were compared to manual, contact ultrasonic test and computed tomography test data. Subsequently, a visualization program was used to construct fully three-dimensional morphological information enabling interactive data analysis on the detected flaws. Density isosurfaces show the relative shape and location of the volumetric defects within the mock-up blocks. Such a technique may be used to qualify personnel or newly developed ultrasonic test methods without the associated high cost of destructive evaluation. Data is presented showing the capability of the volumetric data analysis program to overlay the computed tomography and destructive evaluation (serial metallography) data for a direct, three-dimensional comparison

  17. Using the computer-driven VR environment to promote experiences of natural world immersion

    Science.gov (United States)

    Frank, Lisa A.

    2013-03-01

    In December 2011, over 800 people experienced the exhibit :"der"//pattern for a virtual environment, created for the fully immersive CAVE™ at the University of Wisconsin-Madison. This exhibition took my nature-based photographic work and reinterpreted it for virtual reality (VR). Varied responses such as "It's like a moment of joy," "I had to see it twice," or "I'm still thinking about it weeks later" were common. Although an implied goal of my 2D artwork is to create a connection that makes viewers more aware of what it means to be a part of the natural world, these six VR environments opened up an unexpected area of inquiry that my 2D work has not. Even as the experience was mediated by machines, there was a softening at the interface between technology and human sensibility. Somehow, for some people, through the unlikely auspices of a computer-driven environment, the project spoke to a human essence that they connected with in a way that went beyond all expectations and felt completely out of my hands. Other interesting behaviors were noted: in some scenarios participants spoke of intense anxiety, acrophobia, claustrophobia, and even fear of death when the scene took them underground. These environments were believable enough to cause extreme responses and disorientation for some people; they were fun, pleasant, and wonder-filled for most; and they were liberating, poetic, and meditative for many others. The exhibition seemed to promote imaginative skills, creativity, emotional insight, and environmental sensitivity. It also revealed the CAVE™ to be a powerful tool that can encourage uniquely productive experiences. Quite by accident, I watched as these nature-based environments revealed and articulated an essential relationship between the human spirit and the physical world. The CAVE™ is certainly not a natural space, but there is clear potential to explore virtual environments as a path to better and deeper connections between people and nature.
We've long associated contact

  18. Visual analysis of inter-process communication for large-scale parallel computing.

    Science.gov (United States)

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimizing key sections of code. When moving to parallel computation, not only does the code execution need to be considered, but also the communication between processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views that statistically summarize the entire program execution, or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and demonstrate it on systems running with up to 16,384 processes.

  19. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    Science.gov (United States)

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
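A rough single-feature sketch of the spatial-saliency step described above (luminance only, with contrast taken against the frame mean; the full model additionally uses color, texture, depth, motion, and Gestalt-weighted fusion): each block's DCT coefficients form a feature vector, and saliency is the block's feature contrast against the rest of the frame.

```python
import numpy as np

# Per-block 2-D DCT features and frame-global feature contrast.
def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0] /= np.sqrt(2.0)
    return D

def block_saliency(img, block=8):
    D = dct_matrix(block)
    h, w = img.shape[0] // block, img.shape[1] // block
    feats = np.empty((h * w, block * block))
    for r in range(h):
        for c in range(w):
            patch = img[r*block:(r+1)*block, c*block:(c+1)*block]
            feats[r * w + c] = (D @ patch @ D.T).ravel()  # 2-D DCT-II
    # Contrast of each block's feature vector against the frame mean.
    contrast = np.linalg.norm(feats - feats.mean(axis=0), axis=1)
    return contrast.reshape(h, w)

# A flat frame with one bright patch should be most salient there.
img = np.full((32, 32), 0.2)
img[8:16, 16:24] = 1.0
sal = block_saliency(img)
r_max, c_max = np.unravel_index(sal.argmax(), sal.shape)
print(r_max, c_max)  # block coordinates of the bright patch
```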

  20. The Computable Catchment: An executable document for model-data software sharing, reproducibility and interactive visualization

    Science.gov (United States)

    Gil, Y.; Duffy, C.

    2015-12-01

    This paper proposes the concept of a "Computable Catchment", which is used to develop a collaborative platform for watershed modeling and data analysis. The object of the research is a sharable, executable document similar to a PDF, but one that includes documentation of the underlying theoretical concepts, interactive computational/numerical resources, linkage to essential data repositories, and the ability for interactive model-data visualization and analysis. The executable document for each catchment is stored in the cloud with automatic provisioning and a unique identifier, allowing collaborative model and data enhancements for historical hydroclimatic reconstruction and/or future land-use or climate-change scenarios to be easily reconstructed or extended. The Computable Catchment adopts metadata standards for naming all variables in the model and the data. The a priori or initial data are derived from national data sources for soils, hydrogeology, climate, and land cover available from the www.hydroterre.psu.edu data service (Leonard and Duffy, 2015). The executable document is based on Wolfram CDF (Computable Document Format), with an interactive open-source reader accessible on any modern computing platform. The CDF file and contents can be uploaded to a website or simply shared as a normal document while maintaining all interactive features of the model and data. The Computable Catchment concept represents one application of Geoscience Papers of the Future: an extensible document that combines theory, models, data, and analysis that are digitally shared, documented, and reused among research collaborators, students, educators, and decision makers.

  1. Applications of the pipeline environment for visual informatics and genomics computations

    Directory of Open Access Journals (Sweden)

    Genco Alex

    2011-07-01

    Background: Contemporary informatics and genomics research requires efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results: This paper reports on applications of the LONI Pipeline environment to address two informatics challenges: graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures; the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface; and the integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources, providing a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions: The LONI Pipeline environment (http://pipeline.loni.ucla.edu) provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. 

  2. The Relationship between Perceived Computer Competence and the Employment Outcomes of Transition-Aged Youths with Visual Impairments

    Science.gov (United States)

    Zhou, Li; Smith, Derrick W.; Parker, Amy T.; Griffin-Shirley, Nora

    2013-01-01

    Introduction: The study reported here explored the relationship between the self-perceived computer competence and employment outcomes of transition-aged youths with visual impairments. Methods: Data on 200 in-school youths and 190 out-of-school youths with a primary disability of visual impairment were retrieved from the database of the first…

  3. A Brain Computer Interface for Robust Wheelchair Control Application Based on Pseudorandom Code Modulated Visual Evoked Potential

    DEFF Research Database (Denmark)

    Mohebbi, Ali; Engelsholm, Signe K.D.; Puthusserypady, Sadasivan

    2015-01-01

    In this pilot study, a novel and minimalistic Brain Computer Interface (BCI) based wheelchair control application was developed. The system was based on pseudorandom code modulated Visual Evoked Potentials (c-VEPs). The visual stimuli in the scheme were generated based on the Gold code...
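Gold codes like those mentioned above are conventionally generated by XOR-ing a preferred pair of maximal-length LFSR sequences at all relative shifts. A minimal sketch of that construction follows; the tap positions [5, 2] and [5, 4, 3, 2] are a commonly cited preferred pair for length-31 codes, and the abstract does not specify which codes the study actually used.

```python
def lfsr_mseq(taps, n, length):
    """Binary m-sequence from an n-bit Fibonacci LFSR with 1-based tap positions."""
    state = [1] * n                      # any nonzero seed works
    out = []
    for _ in range(length):
        out.append(state[-1])            # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]           # feedback = XOR of tapped stages
        state = [fb] + state[:-1]        # shift right, insert feedback at front
    return out

def gold_codes(n=5):
    """The two m-sequences plus their XOR at every relative shift (2^n + 1 codes)."""
    length = 2 ** n - 1
    a = lfsr_mseq([5, 2], n, length)
    b = lfsr_mseq([5, 4, 3, 2], n, length)
    codes = [a, b]
    for shift in range(length):
        shifted = b[shift:] + b[:shift]
        codes.append([x ^ y for x, y in zip(a, shifted)])
    return codes

codes = gold_codes()
print(len(codes), len(codes[0]))  # 33 31
```

Each length-31 m-sequence in the family contains exactly 16 ones, and the family's bounded cross-correlation is what makes these codes attractive for tagging multiple stimulation targets.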

  4. The Study of Learners' Preference for Visual Complexity on Small Screens of Mobile Computers Using Neural Networks

    Science.gov (United States)

    Wang, Lan-Ting; Lee, Kun-Chou

    2014-01-01

    Vision plays an important role in educational technologies because it can produce and communicate quite important functions in teaching and learning. In this paper, learners' preference for visual complexity on small screens of mobile computers is studied using neural networks. The visual complexity in this study is divided into five…

  5. Computer aided system for segmentation and visualization of microcalcifications in digital mammograms

    International Nuclear Information System (INIS)

    Reljin, B.; Reljin, I.; Milosevic, Z.; Stojic, T.

    2009-01-01

    Two methods for segmentation and visualization of microcalcifications in digital or digitized mammograms are described. The first method is based on modern mathematical morphology, while the second uses a multifractal approach. In the first method, an appropriate combination of morphological operations yields high local contrast enhancement followed by significant suppression of background tissue, irrespective of its radiological density. Through an iterative procedure, this method strongly emphasizes only small bright details, the possible microcalcifications. In the multifractal approach, corresponding multifractal 'images' are created from the initial mammogram, from which a radiologist is free to change the level of segmentation. An appropriate user-friendly computer-aided visualization (CAV) system embedding the two methods has been realized. The interactive approach enables the physician to control the level and the quality of segmentation. The suggested methods were tested on mammograms from the MIAS database as a gold standard, and from clinical practice, using digitized films and digital images from a full-field digital mammography unit. (authors)
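Morphological enhancement of the kind described above (suppressing background tissue while emphasizing small bright details) is commonly implemented as a white top-hat transform: the image minus its grey-level opening. The sketch below illustrates that idea on a toy array; the paper's exact combination of morphological operations is not given, so this is an assumed stand-in.

```python
def grey_filter(img, k, pick):
    """Grey-level erosion (pick=min) or dilation (pick=max) with a
    (2k+1)x(2k+1) square structuring element, clipped at the borders."""
    h, w = len(img), len(img[0])
    return [[pick(img[a][b]
                  for a in range(max(0, i - k), min(h, i + k + 1))
                  for b in range(max(0, j - k), min(w, j + k + 1)))
             for j in range(w)] for i in range(h)]

def white_top_hat(img, k=1):
    """Image minus its opening: keeps bright details smaller than the
    structuring element, regardless of the local background level."""
    opened = grey_filter(grey_filter(img, k, min), k, max)
    return [[p - o for p, o in zip(prow, orow)]
            for prow, orow in zip(img, opened)]

# Toy "mammogram": background 10, one tiny bright spot (a candidate
# microcalcification) and one broad bright region (dense tissue).
img = [[10] * 9 for _ in range(9)]
img[2][2] = 200
for i in range(5, 9):
    for j in range(5, 9):
        img[i][j] = 100

top = white_top_hat(img)
print(top[2][2], top[6][6])  # 190 0 — the spot survives, the broad region is suppressed
```

The broad region is removed independently of how bright it is, which mirrors the abstract's claim that background suppression works irrespective of radiological density.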

  6. Visualization and analysis of flow patterns of human carotid bifurcation by computational fluid dynamics

    International Nuclear Information System (INIS)

    Xue Yunjing; Gao Peiyi; Lin Yan

    2007-01-01

    Objective: To investigate flow patterns at the carotid bifurcation in vivo by combining computational fluid dynamics (CFD) and MR angiography imaging. Methods: Seven subjects underwent contrast-enhanced MR angiography of the carotid artery on a Siemens 3.0 T MR scanner. Flow patterns of the carotid artery bifurcation were calculated and visualized by combining MR vascular imaging post-processing and CFD. Results: The flow patterns of the carotid bifurcations in the 7 subjects varied with the phases of the cardiac cycle. Turbulent flow and back flow occurred at the bifurcation and the proximal internal carotid artery (ICA) and external carotid artery (ECA); their occurrence and conformation varied with the phase of the cardiac cycle. The turbulent flow and back flow faded out quickly as blood flowed toward the distal ICA and ECA. Conclusion: CFD combined with MR angiography can be used to visualize the cyclical change of flow patterns at the carotid bifurcation across the phases of the cardiac cycle. (authors)

  7. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    Science.gov (United States)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic, computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low-performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigation and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift-free navigation is achieved with respect to the environment.

  8. Computational genetic neuroanatomy of the developing mouse brain: dimensionality reduction, visualization, and clustering

    Science.gov (United States)

    2013-01-01

    Background The structured organization of cells in the brain plays a key role in its functional efficiency. This delicate organization is the consequence of unique molecular identity of each cell gradually established by precise spatiotemporal gene expression control during development. Currently, studies on the molecular-structural association are beginning to reveal how the spatiotemporal gene expression patterns are related to cellular differentiation and structural development. Results In this article, we aim at a global, data-driven study of the relationship between gene expressions and neuroanatomy in the developing mouse brain. To enable visual explorations of the high-dimensional data, we map the in situ hybridization gene expression data to a two-dimensional space by preserving both the global and the local structures. Our results show that the developing brain anatomy is largely preserved in the reduced gene expression space. To provide a quantitative analysis, we cluster the reduced data into groups and measure the consistency with neuroanatomy at multiple levels. Our results show that the clusters in the low-dimensional space are more consistent with neuroanatomy than those in the original space. Conclusions Gene expression patterns and developing brain anatomy are closely related. Dimensionality reduction and visual exploration facilitate the study of this relationship. PMID:23845024
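The workflow summarized above (reduce high-dimensional expression data to two dimensions, cluster in the reduced space, then score consistency against anatomical labels) can be sketched with generic stand-ins. The data below are synthetic, and plain PCA preserves only global structure, whereas the paper's reduction method also preserves local structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for voxel-level gene expression: two "anatomical
# regions", each with its own 10-gene expression signature.
n, d = 100, 10
labels = np.repeat([0, 1], n // 2)
centers = rng.normal(0, 5, size=(2, d))
X = centers[labels] + rng.normal(0, 0.5, size=(n, d))

# Reduce to 2-D with PCA via SVD (global structure only).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Y = Xc @ Vt[:2].T

def kmeans2(pts, iters=50):
    """Minimal 2-cluster k-means; init with a point and its farthest neighbor."""
    cents = pts[[0, int(np.argmax(np.linalg.norm(pts - pts[0], axis=1)))]]
    for _ in range(iters):
        assign = ((pts[:, None] - cents[None]) ** 2).sum(-1).argmin(axis=1)
        cents = np.array([pts[assign == j].mean(axis=0) for j in range(2)])
    return assign

assign = kmeans2(Y)

# Consistency of clusters with the region labels: purity (label-swap aware).
purity = max(float((assign == labels).mean()), float((assign != labels).mean()))
print(purity >= 0.9)  # True for this well-separated synthetic example
```

With well-separated signatures the 2-D clusters recover the "regions" almost perfectly, which is the kind of quantitative consistency check the abstract describes.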

  9. Bringing Legacy Visualization Software to Modern Computing Devices via Application Streaming

    Science.gov (United States)

    Fisher, Ward

    2014-05-01

    Planning software compatibility across forthcoming generations of computing platforms is a problem commonly encountered in software engineering and development. While this problem can affect any class of software, data analysis and visualization programs are particularly vulnerable. This is due in part to their inherent dependency on specialized hardware and computing environments. A number of strategies and tools have been designed to aid software engineers with this task. While generally embraced by developers at 'traditional' software companies, these methodologies are often dismissed by the scientific software community as unwieldy, inefficient and unnecessary. As a result, many important and storied scientific software packages can struggle to adapt to a new computing environment; for example, one in which much work is carried out on sub-laptop devices (such as tablets and smartphones). Rewriting these packages for a new platform often requires significant investment in terms of development time and developer expertise. In many cases, porting older software to modern devices is neither practical nor possible. As a result, replacement software must be developed from scratch, wasting resources better spent on other projects. Enabled largely by the rapid rise and adoption of cloud computing platforms, 'Application Streaming' technologies allow legacy visualization and analysis software to be operated wholly from a client device (be it laptop, tablet or smartphone) while retaining full functionality and interactivity. It mitigates much of the developer effort required by other more traditional methods while simultaneously reducing the time it takes to bring the software to a new platform. This work will provide an overview of Application Streaming and how it compares against other technologies which allow scientific visualization software to be executed from a remote computer. We will discuss the functionality and limitations of existing application streaming

  10. Visual ergonomic aspects of glare on computer displays: glossy screens and angular dependence

    Science.gov (United States)

    Brunnström, Kjell; Andrén, Börje; Konstantinides, Zacharias; Nordström, Lukas

    2007-02-01

    Recently, flat-panel computer displays and notebook computers designed with a so-called glare panel, i.e. a highly glossy screen, have emerged on the market. The shiny look of the display appeals to customers, and there are arguments that contrast, colour saturation, etc. improve with a glare panel. LCD displays often suffer from angle-dependent picture quality; this has become even more pronounced with the introduction of prism light-guide plates into displays for notebook computers. The TCO label is the leading labelling system for computer displays. Currently about 50% of all computer displays on the market are certified according to the TCO requirements. The requirements are periodically updated to keep up with technical development and the latest research in, e.g., visual ergonomics. The gloss level of the screen and the angular dependence have recently been investigated in user studies. A study of the effect of highly glossy screens compared to matte screens has been performed. The results show a slight advantage for the glossy screen when no disturbing reflections are present; however, the difference was not statistically significant. When disturbing reflections are present, the advantage turns into a larger disadvantage, and this difference is statistically significant. Another study, of angular dependence, has also been performed. The results indicate a linear relationship between picture quality and the centre luminance of the screen.

  11. Visual interaction: models, systems, prototypes. The Pictorial Computing Laboratory at the University of Rome La Sapienza.

    Science.gov (United States)

    Bottoni, Paolo; Cinque, Luigi; De Marsico, Maria; Levialdi, Stefano; Panizzi, Emanuele

    2006-06-01

    This paper reports on the research activities performed by the Pictorial Computing Laboratory at the University of Rome, La Sapienza, during the last 5 years. Such work, essentially based on the study of human-computer interaction, spans from metamodels of interaction down to prototypes of interactive systems for both synchronous multimedia communication and group work, and annotation systems for web pages, also encompassing theoretical and practical issues of visual languages and environments, including pattern recognition algorithms. Some applications are also considered, such as e-learning and collaborative work.

  12. Computational Topology Counterexamples with 3D Visualization of Bézier Curves

    Directory of Open Access Journals (Sweden)

    J. Li

    2012-10-01

    Full Text Available For applications in computing, Bézier curves are pervasive and are defined by a piecewise linear curve L which is embedded in R3 and yields a smooth polynomial curve C embedded in R3. It is of interest to understand when L and C have the same embeddings. One class of counterexamples is shown for L being unknotted, while C is knotted. Another class of counterexamples is created where L is equilateral and simple, while C is self-intersecting. These counterexamples were discovered using curve visualization software and numerical algorithms that produce general procedures to create more examples.
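To make the distinction between L and C concrete: a Bézier curve C is evaluated from its control polygon L by de Casteljau's algorithm (repeated linear interpolation between consecutive control points). The 3D control points below are illustrative only; C interpolates only the first and last points of L, so the two curves generally differ elsewhere.

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve C at parameter t from its control polygon L."""
    pts = [p[:] for p in ctrl]
    while len(pts) > 1:
        # one round of linear interpolation between consecutive points
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A cubic control polygon L embedded in R3.
L = [[0.0, 0.0, 0.0], [1.0, 2.0, 0.0], [2.0, -1.0, 1.0], [3.0, 0.0, 0.0]]
print(de_casteljau(L, 0.0))  # [0.0, 0.0, 0.0] — C starts at the first control point
print(de_casteljau(L, 1.0))  # [3.0, 0.0, 0.0] — and ends at the last
print(de_casteljau(L, 0.5))  # an interior point of C, off the polygon L
```

Sampling C densely at many t values is exactly what curve visualization software does before testing properties such as knotting or self-intersection.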

  13. A computational model for knowledge-driven monitoring of nuclear power plant operators based on information theory

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2006-01-01

    To develop operator behavior models such as IDAC, quantitative models for the cognitive activities of nuclear power plant (NPP) operators in abnormal situations are essential. Among them, only a few quantitative models for monitoring and detection have been developed. In this paper, we propose a computational model for the knowledge-driven monitoring (also known as model-driven monitoring) of NPP operators in abnormal situations, based on information theory. The basic assumption of the proposed model is that the probability that an operator shifts his or her attention to an information source is proportional to the expected information from that source. A small experiment performed to evaluate the feasibility of the proposed model shows that its predictions correlate highly with the experimental results. Even though it has been argued that heuristics may play an important role in human reasoning, we believe that the proposed model can provide part of the mathematical basis for developing quantitative models of knowledge-driven monitoring of NPP operators, when NPP operators are assumed to behave very logically.
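The model's basic assumption can be sketched directly: score each information source by its Shannon entropy (the expected information from reading it) and normalize the scores into attention-shift probabilities. The indicator names and distributions below are hypothetical, and the paper's exact formulation of "expected information" may differ from plain entropy.

```python
from math import log2

def entropy(probs):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

def attention_probabilities(sources):
    """P(shift attention to source i) proportional to its expected information."""
    H = {name: entropy(dist) for name, dist in sources.items()}
    total = sum(H.values())
    return {name: h / total for name, h in H.items()}

# Hypothetical indicator readings: the operator is unsure what the pressure
# gauge will show (high entropy) but nearly certain about the status lamp.
sources = {
    "pressure_gauge": [0.25, 0.25, 0.25, 0.25],  # 2 bits of expected information
    "status_lamp":    [0.95, 0.05],              # ~0.29 bits
}
p = attention_probabilities(sources)
print(p["pressure_gauge"] > p["status_lamp"])  # True
```

Under this assumption the operator's gaze is drawn to the most uncertain (most informative) indicator, which is the behavior the proposed model predicts and tests experimentally.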

  14. OpenDx programs for visualization of computational fluid dynamics (CFD) simulations

    International Nuclear Information System (INIS)

    Silva, Marcelo Mariano da

    2008-01-01

    The search for high-performance, low-cost hardware and software solutions has always guided the developments performed at the IEN parallel computing laboratory. In this context, this dissertation describes the development of programs for the visualization of computational fluid dynamics (CFD) simulations using the open-source software OpenDx. The programs developed are useful for producing videos and images in two or three dimensions. They are interactive, easy to use and were designed to serve fluid dynamics researchers. A detailed description of how these programs were developed is given, together with complete instructions for their use. The use of OpenDx as a development tool is also introduced. Examples help the reader understand how the programs can be useful for many applications. (author)

  15. Future Directions in Computer Graphics and Visualization: From CG&A's Editorial Board

    Energy Technology Data Exchange (ETDEWEB)

    Encarnacao, L. M.; Chuang, Yung-Yu; Stork, Andre; Kasik, David; Rhyne, Theresa-Marie; Avila, Lisa; Kohlhammer, Jorn; LaViola, Joseph; Tory, Melanie; Dill, John; Domik, Gitta; Owen, G. Scott; Wong, Pak C.

    2015-01-01

    With many new members joining the CG&A editorial board over the past year, and with a renewed commitment to not only document the state of the art in computer graphics research and applications but to anticipate and where possible foster future areas of scientific discourse and industrial practice, we asked editorial and advisory council members about where they see their fields of expertise going. The answers compiled here aren’t meant to be all encompassing or deterministic when it comes to the opportunities computer graphics and interactive visualization hold for the future. Instead, we aim to accomplish two things: give a more in-depth introduction of members of the editorial board to the CG&A readership and encourage cross-disciplinary discourse toward approaching, complementing, or disputing the visions laid out in this compilation.

  16. The development of an oscilloscope visualization system for the hybrid computer E.A.I. 8900

    International Nuclear Information System (INIS)

    Djukanovic, Radojka

    1970-01-01

    This report was the first subject of a thesis submitted to the Faculté des Sciences in Paris on 30 June 1970 by Mrs. Radojka Djukanovic-Remsak, in order to obtain the grade of doctor-engineer. A visualization system was studied and developed whereby various figures could be displayed, by means of points and segments, on an oscilloscope screen without a memory. This system was realized using the analog and logic elements of an analog computer E.A.I. 8800 and a series of programs intended to be used in conjunction with the E.A.I. 8400 digital computer. The second subject, 'The evolution of multiprogramming', was dealt with in note CEA-N-1346. (author) [fr

  17. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    International Nuclear Information System (INIS)

    Brun, Rene; Carminati, Federico; Galli Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  18. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Brun, Rene; Carminati, Federico [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Galli Carminati, Giuliana (eds.) [Hopitaux Universitaire de Geneve, Chene-Bourg (Switzerland). Unite de la Psychiatrie du Developpement Mental

    2012-07-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  19. Detecting and visualizing internal 3D oleoresin in agarwood by means of micro-computed tomography

    International Nuclear Information System (INIS)

    Khairiah Yazid; Roslan Yahya; Mat Rosol Awang

    2012-01-01

    Detection and analysis of oleoresin is particularly significant since the commercial value of agarwood is related to the quantity of oleoresin present. A modern non-destructive technique can reach the interior region of the wood. Currently, tomographic image data in particular is most commonly visualized in three dimensions using volume rendering. The aim of this paper is to explore the potential of a high-resolution, non-destructive 3D visualization technique, X-ray micro-computed tomography, as an imaging tool to visualize the micro-structure of oleoresin in agarwood. Investigations involving a desktop X-ray micro-tomography system on a high-grade agarwood sample, performed at the Centre of Tomography in Nuclear Malaysia, demonstrate the applicability of the method. Prior to the experiments, a reference test was conducted to simulate the attenuation of oleoresin in agarwood. Based on the experimental results, micro-CT imaging with a voxel size of 7.0 μm is capable of detecting oleoresin and pores in agarwood. This imaging technique, although sophisticated, can be used for standards development, especially in the grading of agarwood for commercial activities. (author)

  20. Initial experience with visualizing hand and foot tendons by dual-energy computed tomography.

    Science.gov (United States)

    Deng, Kai; Sun, Cong; Liu, Cheng; Ma, Rui

    2009-01-01

    To assess the feasibility of visualizing hand and foot tendons by dual-energy computed tomography (CT). Twenty patients who suffered from hand or foot pain were scanned on a dual-source CT (Definition, Forchheim, Germany) in dual-energy mode at tube voltages of 140 and 80 kV and a ratio of 1:4 between the corresponding tube currents. The reconstructed images were postprocessed by volume rendering technique (VRT) and multiplanar reconstruction (MPR). All suspected lesions were confirmed by surgery or follow-up studies. Twelve patients (a total of 24 hands and feet) were found to be normal and the other eight patients (a total of nine hands and feet) were found abnormal. Dual-energy techniques are very useful in visualizing tendons of the hands and feet, such as the flexor pollicis longus tendon, flexor digitorum superficialis/profundus tendons, Achilles tendon, extensor hallucis longus tendon, and extensor digitorum longus tendon. They can depict the whole shape of the tendons and their fixation points clearly. The peroneus longus tendon in the sole of the foot was not displayed very well. The distal ends of the metacarpophalangeal joints with the extensor digitorum tendon and extensor pollicis longus tendon were poorly shown. Lesions of tendons such as the circuitry, thickening, and adherence were also shown clearly. Dual-energy CT offers a new method to visualize tendons of the hand and foot. It can clearly display both anatomical structures and pathologic changes of hand and foot tendons.

  1. Computational intelligence in multi-feature visual pattern recognition hand posture and face recognition using biologically inspired approaches

    CERN Document Server

    Pisharady, Pramod Kumar; Poh, Loh Ai

    2014-01-01

    This book presents a collection of computational intelligence algorithms that addresses issues in visual pattern recognition such as high computational complexity, abundance of pattern features, sensitivity to size and shape variations and poor performance against complex backgrounds. The book has 3 parts. Part 1 describes various research issues in the field with a survey of the related literature. Part 2 presents computational intelligence based algorithms for feature selection and classification. The algorithms are discriminative and fast. The main application area considered is hand posture recognition. The book also discusses utility of these algorithms in other visual as well as non-visual pattern recognition tasks including face recognition, general object recognition and cancer / tumor classification. Part 3 presents biologically inspired algorithms for feature extraction. The visual cortex model based features discussed have invariance with respect to appearance and size of the hand, and provide good...

  2. From the Web to the Grid and beyond computing paradigms driven by high-energy physics

    CERN Document Server

    Carminati, Federico; Galli-Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the ...

  3. Solvent-driven symmetry of self-assembled nanocrystal superlattices-A computational study

    KAUST Repository

    Kaushik, Ananth P.; Clancy, Paulette

    2012-01-01

    used solvents, toluene and hexane. System sizes in the 400,000-500,000-atom scale followed for nanoseconds are required for this computationally intensive study. The key questions addressed here concern the thermodynamic stability of the superlattice

  4. The power of an ontology-driven developmental toxicity database for data mining and computational modeling

    Science.gov (United States)

    Modeling of developmental toxicology presents a significant challenge to computational toxicology due to endpoint complexity and lack of data coverage. These challenges largely account for the relatively few modeling successes using the structure–activity relationship (SAR) parad...

  5. FaceWarehouse: a 3D facial expression database for visual computing.

    Science.gov (United States)

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.

  6. Computer vision syndrome-A common cause of unexplained visual symptoms in the modern era.

    Science.gov (United States)

    Munshi, Sunil; Varghese, Ashley; Dhar-Munshi, Sushma

    2017-07-01

    The aim of this study was to assess the evidence and available literature on the clinical, pathogenetic, prognostic and therapeutic aspects of Computer vision syndrome. Information was collected from Medline, Embase & the National Library of Medicine over the last 30 years up to March 2016. The bibliographies of relevant articles were searched for additional references. Patients with Computer vision syndrome present to a variety of different specialists, including General Practitioners, Neurologists, Stroke physicians and Ophthalmologists. While the condition is common, awareness is poor among the public and health professionals. Recognising this condition in the clinic or in emergency situations like the TIA clinic is crucial. The implications are potentially huge in view of the extensive and widespread use of computers and visual display units. Greater public awareness of Computer vision syndrome and education of health professionals are vital. Preventive strategies should form part of workplace ergonomics routinely. Prompt and correct recognition is important to allow management and avoid unnecessary treatments. © 2017 John Wiley & Sons Ltd.

  7. Electromagnetic Computation and Visualization of Transmission Particle Model and Its Simulation Based on GPU

    Directory of Open Access Journals (Sweden)

    Yingnian Wu

    2014-01-01

    Full Text Available Electromagnetic calculation plays an important role in both military and civil fields. Some methods and models proposed for calculating electromagnetic wave propagation over a large range place a heavy burden on CPU computation and also require a huge amount of memory. Using the GPU to accelerate computation and visualization can reduce this burden on the CPU. Based on the forward ray-tracing method, a transmission particle model (TPM) for calculating the electromagnetic field is presented that incorporates the particle method. The movement of a particle obeys the principles of electromagnetic wave propagation, so the particle distribution density in space reflects the electromagnetic field distribution. The algorithm, with particle transmission, movement, reflection, and diffraction, is described in detail. Since the particles in TPM are completely independent, the model is very well suited to parallel computing on the GPU. A deductive verification of TPM, with an electric dipole antenna as the transmission source, is conducted to show that the particle movement itself represents the variation of electromagnetic field intensity caused by diffusion. Finally, simulation comparisons are made against the forward and backward ray-tracing methods. The simulation results verify the effectiveness of the proposed method.
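The particle mechanics described above (independent particles whose spatial density stands in for field intensity, with specular reflection at obstacles) can be sketched in 2-D. This toy version uses an isotropic point source and a single reflecting wall, and omits diffraction; all geometry and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Particles leave a point source at the origin in random directions.
N, steps, dt = 5000, 60, 0.02
pos = np.zeros((N, 2))
ang = rng.uniform(0, 2 * np.pi, N)
vel = np.stack([np.cos(ang), np.sin(ang)], axis=1)

for _ in range(steps):
    pos += vel * dt
    hit = pos[:, 0] > 1.0          # specular reflection off a wall at x = 1
    pos[hit, 0] = 2.0 - pos[hit, 0]
    vel[hit, 0] *= -1

# Accumulate particle counts on a coarse grid: the density map plays the
# role of the electromagnetic field intensity in the TPM analogy.
H, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=20,
                         range=[[-1.5, 1.5], [-1.5, 1.5]])
density = H / N
print(density.sum())  # 1.0 — every particle is accounted for
```

Because each particle's update is independent of all the others, the loop body maps directly onto one GPU thread per particle, which is the parallelism the abstract highlights.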

  8. Effects of Static Visuals and Computer-Generated Animations in Facilitating Immediate and Delayed Achievement in the EFL Classroom

    Science.gov (United States)

    Lin, Huifen; Chen, Tsuiping; Dwyer, Francis M.

    2006-01-01

    The purpose of this experimental study was to compare the effects of using static visuals versus computer-generated animation to enhance learners' comprehension and retention of a content-based lesson in a computer-based learning environment for learning English as a foreign language (EFL). Fifty-eight students from two EFL reading sections were…

  9. Regressive Imagery in Creative Problem-Solving: Comparing Verbal Protocols of Expert and Novice Visual Artists and Computer Programmers

    Science.gov (United States)

    Kozbelt, Aaron; Dexter, Scott; Dolese, Melissa; Meredith, Daniel; Ostrofsky, Justin

    2015-01-01

    We applied computer-based text analyses of regressive imagery to verbal protocols of individuals engaged in creative problem-solving in two domains: visual art (23 experts, 23 novices) and computer programming (14 experts, 14 novices). Percentages of words involving primary process and secondary process thought, plus emotion-related words, were…

  10. Broad-Band Visually Evoked Potentials: Re(con)volution in Brain-Computer Interfacing.

    Directory of Open Access Journals (Sweden)

    Jordy Thielen

    Full Text Available Brain-Computer Interfaces (BCIs) allow users to control devices and communicate by using brain activity only. BCIs based on broad-band visual stimulation can outperform BCIs using other stimulation paradigms. Visual stimulation with pseudo-random bit-sequences evokes specific Broad-Band Visually Evoked Potentials (BBVEPs) that can be reliably used in BCI for high-speed communication in speller applications. In this study, we report a novel paradigm for a BBVEP-based BCI that utilizes a generative framework to predict responses to broad-band stimulation sequences. Specifically, we designed a BBVEP-based BCI using modulated Gold codes to mark cells in a visual speller and defined a linear generative model that decomposes full responses into overlapping single-flash responses. These single-flash responses are used to predict responses to novel stimulation sequences, which in turn serve as templates for classification. The linear generative model explains on average 50% and up to 66% of the variance of responses to both seen and unseen sequences. In an online experiment, 12 participants tested a 6 × 6 matrix speller BCI. On average, an online accuracy of 86% was reached with trial lengths of 3.21 seconds. This corresponds to an Information Transfer Rate of 48 bits per minute (approximately 9 symbols per minute). This study indicates the potential to model and predict responses to broad-band stimulation. These predicted responses are proven to be well-suited as templates for a BBVEP-based BCI, thereby enabling communication and control by brain activity only.
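A minimal sketch of such a linear generative model: the response to a pseudo-random bit sequence is modelled as a superposition of overlapping single-flash responses, i.e. a convolution of the stimulus with an unknown flash kernel. Stacking lagged copies of the sequence gives a design matrix, the kernel is estimated by least squares, and responses to novel sequences are then predicted as classification templates. The sequence length, kernel length, and kernel shape are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
L, K = 126, 15                      # sequence length, flash-response length
bits = rng.integers(0, 2, L)        # pseudo-random stimulation sequence

def design(seq, K):
    """Design matrix whose columns are lagged copies of the bit sequence,
    so that design(seq, K) @ kernel superimposes overlapping flash responses."""
    M = np.zeros((len(seq), K))
    for k in range(K):
        M[k:, k] = seq[: len(seq) - k]
    return M

true_kernel = np.exp(-np.arange(K) / 4.0)            # assumed flash response
M = design(bits, K)
response = M @ true_kernel + 0.1 * rng.standard_normal(L)  # simulated EEG

# Estimate the single-flash kernel by least squares.
kernel_hat, *_ = np.linalg.lstsq(M, response, rcond=None)

# Predicted template for a *novel* sequence, usable for classification.
novel = rng.integers(0, 2, L)
template = design(novel, K) @ kernel_hat
```

In a speller, the measured response would be compared (e.g. by correlation) against the predicted templates of all candidate codes.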

  11. Distributed dendritic processing facilitates object detection: a computational analysis on the visual system of the fly.

    Science.gov (United States)

    Hennig, Patrick; Möller, Ralf; Egelhaaf, Martin

    2008-08-28

    Detecting objects is an important task when moving through a natural environment. Flies, for example, may land on salient objects or may avoid collisions with them. The neuronal ensemble of Figure Detection cells (FD-cells) in the visual system of the fly is likely to be involved in controlling these behaviours, as these cells are more sensitive to objects than to extended background structures. Until now the computations in the presynaptic neuronal network of FD-cells and, in particular, the functional significance of the experimentally established distributed dendritic processing of excitatory and inhibitory inputs is not understood. We use model simulations to analyse the neuronal computations responsible for the preference of FD-cells for small objects. We employed a new modelling approach which allowed us to account for the spatial spread of electrical signals in the dendrites while avoiding detailed compartmental modelling. The models are based on available physiological and anatomical data. Three models were tested each implementing an inhibitory neural circuit, but differing by the spatial arrangement of the inhibitory interaction. Parameter optimisation with an evolutionary algorithm revealed that only distributed dendritic processing satisfies the constraints arising from electrophysiological experiments. In contrast to a direct dendro-dendritic inhibition of the FD-cell (Direct Distributed Inhibition model), an inhibition of its presynaptic retinotopic elements (Indirect Distributed Inhibition model) requires smaller changes in input resistance in the inhibited neurons during visual stimulation. Distributed dendritic inhibition of retinotopic elements as implemented in our Indirect Distributed Inhibition model is the most plausible wiring scheme for the neuronal circuit of FD-cells. This microcircuit is computationally similar to lateral inhibition between the retinotopic elements. Hence, distributed inhibition might be an alternative explanation of

  12. Computer simulation of a plasma focus device driven by a magnetic pulser

    Energy Technology Data Exchange (ETDEWEB)

    Georgescu, N; Zoita, V [Inst. of Physics and Technology of Radiation Devices, Bucharest (Romania); Larour, J [Ecole Polytechnique, Palaiseau (France). Lab. de Physique des Milieux Ionises

    1997-12-31

    A plasma focus device, driven by a magnetic pulse compression circuit, is simulated using a PSPICE program. The program is much simpler than existing ones, which analyse the circuit by directly solving a system of integro-differential equations. The pre-pulse voltage and the high-voltage rise times are obtained for a set of values of the bypass impedance (R or L); the optimum bypass impedance turns out to be an inductance. During the discharge period, the plasma load is treated as an LR impedance with each component time dependent, and a method is presented that makes it possible to introduce such time-varying impedances into a PSPICE program. Finally, a set of simulation results (plasma current and voltage, plasma magnetic energy, plasma sheath mechanical energy, pinch voltage) is shown. The results are in good agreement with classical experimental data. (author). 2 figs., 4 refs.

  13. Transportation dose analysis using an interactive menu-driven computer program

    International Nuclear Information System (INIS)

    Strenge, D.L.; Peloquin, R.A.

    1984-10-01

    An easy-to-use software package is described for performing radiological consequence analyses for transportation scenarios involving truck or rail transport of spent fuel, HLW and other radioactive waste forms. The consequence analysis is based on the unit radiological factors (person-rem/km) developed by the Transportation Technology Center (Sandia National Laboratories). These generic unit radiological factors are combined with user-supplied information describing transportation distances, routes and waste types to estimate the total exposure of the population. The software was developed for use in preparing the Environmental Assessment for the Monitored Retrievable Storage Program and is suitable for such analyses as siting waste repositories. The key feature of the software is the user-oriented, menu-driven interactive input mode, available as an alternative to formatted input. The interactive input option allows the user to supply all input data, edit the data and run the program. Output reports can be diverted to a high-speed printer.
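The underlying calculation is a simple product-and-sum: a unit radiological factor (person-rem per km) for each waste type and transport mode is multiplied by the route distance and summed over route segments. The factor values below are placeholders for illustration, not Sandia's actual numbers.

```python
# Hypothetical unit radiological factors in person-rem/km, keyed by
# (waste_type, transport_mode); real values come from the Transportation
# Technology Center tables.
UNIT_FACTORS = {
    ("spent_fuel", "truck"): 5.0e-5,
    ("spent_fuel", "rail"): 2.0e-5,
    ("hlw", "truck"): 3.0e-5,
}

def collective_dose(segments):
    """Sum person-rem over (waste_type, mode, distance_km) route segments."""
    return sum(UNIT_FACTORS[(w, m)] * km for w, m, km in segments)

route = [("spent_fuel", "truck", 1200.0), ("spent_fuel", "rail", 800.0)]
dose = collective_dose(route)   # collective population dose in person-rem
```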

  14. Computer driven optical keratometer and method of evaluating the shape of the cornea

    Science.gov (United States)

    Baroth, Edmund C. (Inventor); Mouneimme, Samih A. (Inventor)

    1994-01-01

    An apparatus and method for measuring the shape of the cornea utilize only one reticle to generate a pattern of rings projected onto the surface of a subject's eye. The reflected pattern is focused onto an imaging device such as a video camera, and a computer compares the reflected pattern with a reference pattern stored in the computer's memory. The differences between the reflected and stored patterns are used to calculate the deformation of the cornea, which may be useful for pre- and post-operative evaluation of the eye by surgeons.
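The comparison step reduces to subtracting the stored reference ring radii from the measured (reflected) radii and summarizing the deviation. The reference radii and the simulated "measurement" below are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Stored reference ring radii (mm) for an ideal cornea, and a simulated
# reflected pattern whose radii deviate slightly from the reference.
reference = np.linspace(1.0, 5.0, 9)
measured = reference * (1.0 + 0.02 * np.sin(np.arange(9)))

# Per-ring deviation between reflected and stored patterns (mm); its RMS
# is one simple scalar summary of corneal deformation.
deformation = measured - reference
rms_error = float(np.sqrt(np.mean(deformation ** 2)))
```

A real system would do this per meridian as well as per ring, giving a 2-D deformation map rather than a single profile.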

  15. High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination

    International Nuclear Information System (INIS)

    Bouchard, Kristofer E.

    2016-01-01

    A lack of coherent plans to analyze, manage, and understand data threatens the various opportunities offered by new neuro-technologies. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.

  16. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  17. NWChem Meeting on Science Driven Petascale Computing and Capability Development at EMSL

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.

    2007-02-19

    On January 25 and 26, 2007, an NWChem meeting was held that was attended by 65 scientists from 29 institutions, including 22 universities and 5 national laboratories. The goals of the meeting were to look at major scientific challenges that could be addressed by computational modeling in environmental molecular sciences, and to identify the associated capability development needs. In addition, insights were sought into petascale computing developments in computational chemistry. During the meeting, common themes were identified that will drive the need for the development of new or improved capabilities in NWChem. Crucial areas of development that the developers will be focusing on are (1) modeling of dynamics and kinetics in chemical transformations, (2) modeling of chemistry at interfaces and in the condensed phase, and (3) spanning longer time scales in biological processes modeled with molecular dynamics. Various computational chemistry methodologies were discussed during the meeting, which will provide the basis for capability developments in the near- or long-term future of NWChem.

  18. Understanding and Improving Blind Students' Access to Visual Information in Computer Science Education

    Science.gov (United States)

    Baker, Catherine M.

    Teaching people with disabilities tech skills empowers them to create solutions to problems they encounter and prepares them for careers. However, computer science is typically taught in a highly visual manner which can present barriers for people who are blind. The goal of this dissertation is to understand and decrease those barriers. The first projects I present looked at the barriers that blind students face. I first present the results of my survey and interviews with blind students with degrees in computer science or related fields. This work highlighted the many barriers that these blind students faced. I then followed up on one of the barriers mentioned, access to technology, by doing a preliminary accessibility evaluation of six popular integrated development environments (IDEs) and code editors. I found that half were unusable and all had some inaccessible portions. As access to visual information is a barrier in computer science education, I present three projects I have done to decrease this barrier. The first project is Tactile Graphics with a Voice (TGV). This project investigated an alternative to Braille labels for those who do not know Braille and showed that TGV was a potential alternative. The next project was StructJumper, which created a modified abstract syntax tree that blind programmers could use to navigate through code with their screen reader. The evaluation showed that users could navigate more quickly and easily determine the relationships of lines of code when they were using StructJumper compared to when they were not. Finally, I present a tool for dynamic graphs (the type with nodes and edges) which had two different modes for handling focus changes when moving between graphs. I found that the modes support different approaches for exploring the graphs and therefore preferences are mixed based on the user's preferred approach. However, both modes had similar accuracy in completing the tasks. These projects are a first step towards

  19. A synthetic visual plane algorithm for visibility computation in consideration of accuracy and efficiency

    Science.gov (United States)

    Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang

    2017-12-01

    Visibility computation is of great interest to location optimization, environmental planning, ecology, and tourism, and many algorithms have been developed for it. In this paper, we propose a novel method of visibility computation, called the synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, a synthesis of the line-of-sight information of all nearer points, to determine the visibility of a point, which makes it an accurate visibility method. We discretized the horizon to gain good efficiency. After discretization, the accuracy and efficiency of SVP depend on the scale of discretization (i.e., zone width): the method is more accurate at smaller zone widths, but this requires a longer operating time, so users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid cell. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid cell, while continuing to perform as fast as or faster than R2. Although SVP performs worse than the reference plane and depth map algorithms with respect to efficiency, it is superior to both in accuracy.
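The horizon idea is easiest to see in one dimension: walking outward from the viewpoint along a terrain profile, a cell is visible exactly when its elevation angle exceeds the running maximum angle of all nearer cells. SVP generalises this by maintaining such a horizon per discretised azimuthal zone; the terrain profile below is an arbitrary example, not data from the paper.

```python
import numpy as np

def visible_along_profile(heights, viewer_height):
    """Visibility of each cell along a terrain profile, seen from cell 0.

    A cell at distance d is visible iff its elevation angle (here the
    slope (h - viewer_height) / d) exceeds the running-maximum 'horizon'
    built from all nearer cells.
    """
    out = [True]                  # the viewer's own cell
    horizon = -np.inf             # steepest elevation angle seen so far
    for d in range(1, len(heights)):
        angle = (heights[d] - viewer_height) / d
        out.append(angle > horizon)
        horizon = max(horizon, angle)
    return np.array(out)

profile = np.array([10.0, 12.0, 11.0, 17.0, 13.0, 22.0])
vis = visible_along_profile(profile, viewer_height=profile[0])
```

Here the peak of height 12 at distance 1 hides the lower cell behind it, while the higher peaks at distances 3 and 5 rise back above the horizon.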

  20. (Covert) attention and visual speller design in an ERP-based brain-computer interface

    Directory of Open Access Journals (Sweden)

    Treder Matthias S

    2010-05-01

    Full Text Available Abstract Background In a visual oddball paradigm, attention to an event usually modulates the event-related potential (ERP). An ERP-based brain-computer interface (BCI) exploits this neural mechanism for communication. Hitherto, it was unclear to what extent the accuracy of such a BCI requires eye movements (overt attention) or whether it is also feasible for targets in the visual periphery (covert attention). Also unclear was how the visual design of the BCI can be improved to meet peculiarities of peripheral vision such as low spatial acuity and crowding. Method Healthy participants (N = 13) performed a copy-spelling task wherein they had to count target intensifications. EEG and eye movements were recorded concurrently. First, (covert) attention was investigated by way of a target fixation condition and a central fixation condition. In the latter, participants had to fixate a dot in the center of the screen and allocate their attention to a target in the visual periphery. Second, the effect of visual speller layout was investigated by comparing the symbol Matrix to an ERP-based Hex-o-Spell, a two-level speller consisting of six discs arranged on an invisible hexagon. Results We assessed counting errors, ERP amplitudes, and offline classification performance. There is an advantage (i.e., fewer errors, larger ERP amplitude modulation, better classification) of overt attention over covert attention, and there is also an advantage of the Hex-o-Spell over the Matrix. Using overt attention, P1, N1, P2, N2, and P3 components are enhanced by attention. Using covert attention, only N2 and P3 are enhanced for both spellers, and N1 and P2 are modulated when using the Hex-o-Spell but not when using the Matrix. Consequently, classifiers rely mainly on early evoked potentials in overt attention and on later cognitive components in covert attention.
Conclusions Both overt and covert attention can be used to drive an ERP-based BCI, but performance is markedly lower

  1. DCA++: A case for science driven application development for leadership computing platforms

    Energy Technology Data Exchange (ETDEWEB)

    Summers, Michael S; Alvarez, Gonzalo; Meredith, Jeremy; Maier, Thomas A [Computer Science and Mathematics Division, Oak Ridge National Laboratory, P. O. Box 2008, Mail Stop 6164, Oak Ridge, TN 37831 (United States); Schulthess, Thomas C, E-mail: schulthess@cscs.c [Swiss National Supercomputer Center and Institute for Theoretical Physics, ETH Zurich, CSCS MAN E 133, Galeria 2, CH-9628 Manno (Switzerland)

    2009-07-01

    The DCA++ code was one of the early science applications that ran on Jaguar at the National Center for Computational Sciences, and the first application code to sustain a petaflop/s under production conditions on a general-purpose supercomputer. The code implements a quantum cluster method with a Quantum Monte Carlo kernel to solve the 2D Hubbard model for high-temperature superconductivity. It is implemented in C++, making heavy use of the generic programming model. In this paper, we discuss how this code was developed, reaching scalability and high efficiency on the world's fastest supercomputer in only a few years. We show how the use of generic concepts combined with systematic refactoring of codes is a better strategy for computational sciences than a comprehensive upfront design.

  2. DCA++: A case for science driven application development for leadership computing platforms

    International Nuclear Information System (INIS)

    Summers, Michael S; Alvarez, Gonzalo; Meredith, Jeremy; Maier, Thomas A; Schulthess, Thomas C

    2009-01-01

    The DCA++ code was one of the early science applications that ran on Jaguar at the National Center for Computational Sciences, and the first application code to sustain a petaflop/s under production conditions on a general-purpose supercomputer. The code implements a quantum cluster method with a Quantum Monte Carlo kernel to solve the 2D Hubbard model for high-temperature superconductivity. It is implemented in C++, making heavy use of the generic programming model. In this paper, we discuss how this code was developed, reaching scalability and high efficiency on the world's fastest supercomputer in only a few years. We show how the use of generic concepts combined with systematic refactoring of codes is a better strategy for computational sciences than a comprehensive upfront design.

  3. Brain Computer Interface for Micro-controller Driven Robot Based on Emotiv Sensors

    OpenAIRE

    Parth Gargava; Krishna Asawa

    2017-01-01

    A Brain Computer Interface (BCI) is developed to navigate a micro-controller based robot using Emotiv sensors. The BCI system has a pipeline of 5 stages: signal acquisition, pre-processing, feature extraction, classification and CUDA interfacing. It is intended to serve as a prototype enabling physical movement for neurological patients who are unable to control or operate their muscular movements. All stages of the pipeline are designed to process bodily actions like eye blinks to command naviga...

  4. Reducing usage of the computational resources by event driven approach to model predictive control

    Science.gov (United States)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time, optimal control of dynamic systems while also considering the constraints to which these systems may be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.
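The general event-driven idea can be sketched simply: rather than re-solving the (expensive) MPC optimisation at every sampling instant, the previous control input is held and the optimiser is invoked only when the measured state has drifted from the state at the last solve by more than a threshold. The scalar plant, disturbance, threshold, and the "solver" (a proportional law standing in for a full MPC solve) below are all illustrative assumptions, not the paper's model.

```python
import math

def solve_mpc(x):
    """Placeholder for the costly MPC optimisation step."""
    return -0.6 * x

def simulate(x0, steps=50, threshold=0.1):
    x, x_last = x0, x0
    u = solve_mpc(x0)
    calls = 1
    for k in range(steps):
        x = 0.8 * x + u + 0.05 * math.sin(0.5 * k)   # toy plant + disturbance
        if abs(x - x_last) > threshold:              # event condition
            u = solve_mpc(x)                         # re-solve only on events
            x_last = x
            calls += 1
    return x, calls

x_final, calls = simulate(1.0)   # far fewer solver calls than simulation steps
```

The saving comes from `calls` staying well below the number of sampling instants while the state remains regulated near zero.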

  5. Data driven computing by the morphing fast Fourier transform ensemble Kalman filter in epidemic spread simulations

    Science.gov (United States)

    Mandel, Jan; Beezley, Jonathan D.; Cobb, Loren; Krishnamurthy, Ashok

    2010-01-01

    The FFT EnKF data assimilation method is proposed and applied to a stochastic cell simulation of an epidemic, based on the S-I-R spread model. The FFT EnKF combines spatial statistics and ensemble filtering methodologies into a localized and computationally inexpensive version of EnKF with a very small ensemble, and it is further combined with the morphing EnKF to assimilate changes in the position of the epidemic. PMID:21031155
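A toy sketch of the FFT EnKF idea: with a very small ensemble, the forecast covariance of a spatial field is approximated as diagonal in Fourier space (an approximate-stationarity assumption), which regularises the filter and makes the analysis step cheap. The field, observation setup, and sizes below are illustrative assumptions, and the morphing step is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_ens = 64, 5                                   # grid size, ensemble size
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
truth = np.sin(3.0 * x)

ens = truth + 0.5 * rng.standard_normal((n_ens, n))   # forecast ensemble
obs = truth + 0.1 * rng.standard_normal(n)            # noisy full-field obs
r_var = 0.1 ** 2

# Diagonal spectral covariance: per-mode variance of the ensemble's FFT.
# (White observation noise of variance r_var has spectral variance n*r_var
# with NumPy's unnormalized FFT convention.)
E = np.fft.fft(ens, axis=1)
p_hat = np.var(E, axis=0)                  # forecast variance per Fourier mode
gain = p_hat / (p_hat + n * r_var)         # per-mode scalar Kalman gain

# Analysis step: nudge every member toward the observation, mode by mode,
# then transform back to physical space.
Y = np.fft.fft(obs)
ens_a = np.real(np.fft.ifft(E + gain * (Y - E), axis=1))
```

Because each Fourier mode is updated independently with a scalar gain, the analysis costs only a pair of FFTs instead of a dense covariance solve.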

  6. A flashing driven moderator cooling system for CANDU reactors: Experimental and computational results

    International Nuclear Information System (INIS)

    Khartabil, H.F.

    2000-01-01

    A flashing-driven passive moderator cooling system is being developed at AECL for CANDU reactors. Preliminary simulations and experiments showed that the concept was feasible at normal operating power. However, flow instabilities were observed at low powers under conditions of variable and constant calandria inlet temperatures. This finding contradicted code predictions that suggested the loop should be stable at all powers if the calandria inlet temperature was constant. This paper discusses a series of separate-effects tests that were used to identify the sources of low-power instabilities in the experiments, and it explores methods to avoid them. It concludes that low-power instabilities can be avoided, thereby eliminating the discrepancy between the experimental and code results. Two factors were found to be important for loop stability: (1) oscillations in the calandria outlet temperature, and (2) flashing superheat requirements, and the presence of nucleation sites. By addressing these factors, we could make the loop operate in a stable manner over the whole power range and we could obtain good agreement between the experimental and code results. (author)

  7. The development of hand-centred visual representations in the primate brain: a computer modelling study using natural visual scenes.

    Directory of Open Access Journals (Sweden)

    Juan Manuel Galeazzi

    2015-12-01

    Full Text Available Neurons that respond to visual targets in a hand-centred frame of reference have been found within various areas of the primate brain. We investigate how hand-centred visual representations may develop in a neural network model of the primate visual system called VisNet, when the model is trained on images of the hand seen against natural visual scenes. The simulations show how such neurons may develop through a biologically plausible process of unsupervised competitive learning and self-organisation. In an advance on our previous work, the visual scenes consisted of multiple targets presented simultaneously with respect to the hand. Three experiments are presented. First, VisNet was trained with computerized images consisting of a realistic image of a hand and a variety of natural objects, presented against different textured backgrounds during training. The network was then tested with just one textured object near the hand in order to verify whether the output cells were capable of building hand-centred representations with a single localised receptive field. We explain the underlying principles of the statistical decoupling that allows the output cells of the network to develop single localised receptive fields even when the network is trained with multiple objects. In a second simulation we examined how some of the cells with hand-centred receptive fields decreased their shape selectivity and started responding to a localised region of hand-centred space as the number of objects presented in overlapping locations during training increased. Lastly, we explored the same learning principles by training the network on natural visual scenes collected by volunteers. These results provide an important step in showing how single, localised, hand-centred receptive fields could emerge under more ecologically realistic visual training conditions.

  8. Steady-state natural circulation analysis with computational fluid dynamic codes of a liquid metal-cooled accelerator driven system

    International Nuclear Information System (INIS)

    Abanades, A.; Pena, A.

    2009-01-01

    A new and innovative type of nuclear installation is under research in the nuclear community for its potential application to nuclear waste management and, above all, for its capability to enhance the sustainability of nuclear energy as a component of a future nuclear fuel cycle with improved efficiency in terms of primary uranium ore consumption and radioactive waste generation. Such installations are called accelerator driven systems (ADS) and are the result of a profitable symbiosis between accelerator technology, high-energy physics and reactor technology. Many ADS concepts are based on heavy liquid metal (HLM) coolants because of their neutronic and thermo-physical properties. Moreover, such coolants permit operation in free-circulation mode, one of the main aims of passive systems. In this paper, this operating regime is analysed for a proposed ADS design using computational fluid dynamics (CFD)

  9. Evaluation of a subject-specific, torque-driven computer simulation model of one-handed tennis backhand groundstrokes.

    Science.gov (United States)

    Kentel, Behzat B; King, Mark A; Mitchell, Sean R

    2011-11-01

    A torque-driven, subject-specific 3-D computer simulation model of the impact phase of one-handed tennis backhand strokes was evaluated by comparing performance and simulation results. Backhand strokes of an elite subject were recorded on an artificial tennis court. Over the 50-ms period after impact, good agreement was found with an overall RMS difference of 3.3° between matching simulation and performance in terms of joint and racket angles. Consistent with previous experimental research, the evaluation process showed that grip tightness and ball impact location are important factors that affect postimpact racket and arm kinematics. Associated with these factors, the model can be used for a better understanding of the eccentric contraction of the wrist extensors during one-handed backhand ground strokes, a hypothesized mechanism of tennis elbow.

  10. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

    Full Text Available For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and to enhance BCI performance, we designed a visual-auditory speller. We investigate the possibility to enhance stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. Besides, shorter latencies, lower amplitudes, and a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and should inspire future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent of each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It brings new insight into true multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms.

  11. Embolic intracranial arterial occlusion visualized by non-enhanced computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Tomita, Masaaki; Minematsu, Kazuo; Choki, Junichiro; Yamaguchi, Takenori [National Cardiovascular Center, Suita, Osaka (Japan)

    1984-12-01

    A 77-year-old woman with a history of valvular heart disease, atrial fibrillation and a massive infarction in the right cerebral hemisphere developed contralateral infarction due to occlusion of the internal carotid artery. A string-like structure with higher density than normal brain was demonstrated on non-enhanced computed tomography performed in the acute stage. This abnormal structure, seen in the left hemisphere, was thought to be consistent with the middle cerebral artery trunk of the affected side. Seventeen days after onset, the abnormal structure was no longer visualized on non-enhanced CT. These findings suggested that the abnormal structure with increased density was compatible with a thromboembolus or intraluminal clot formed in the distal part of the occluded internal carotid artery. The importance of this finding as a diagnostic sign of cerebral arterial occlusion was discussed.

  12. Steady State Visual Evoked Potential Based Brain-Computer Interface for Cognitive Assessment

    DEFF Research Database (Denmark)

    Westergren, Nicolai; Bendtsen, Rasmus L.; Kjær, Troels W.

    2016-01-01

    decline is important. Cognitive decline may be detected using fully automated computerized assessment. Such systems will provide inexpensive and widely available screenings of cognitive ability. The aim of this pilot study is to develop a real-time steady state visual evoked potential (SSVEP) based brain-computer interface (BCI) for neurological cognitive assessment. It is intended for use by patients who suffer from diseases impairing their motor skills but are still able to control their gaze. Results are based on 11 healthy test subjects. The system achieved an average accuracy of 100% ± 0%. The test subjects achieved an information transfer rate (ITR) of 14.64 ± 7.63 bits/min and a subject test performance of 47.22% ± 34.10%. This study suggests that BCI may be applicable in practice as a computerized cognitive assessment tool; however, many improvements to the system are still required

  13. Embolic intracranial arterial occlusion visualized by non-enhanced computed tomography

    International Nuclear Information System (INIS)

    Tomita, Masaaki; Minematsu, Kazuo; Choki, Junichiro; Yamaguchi, Takenori

    1984-01-01

    A 77-year-old woman with a history of valvular heart disease, atrial fibrillation and a massive infarction in the right cerebral hemisphere developed contralateral infarction due to occlusion of the internal carotid artery. A string-like structure with higher density than normal brain was demonstrated on non-enhanced computed tomography performed in the acute stage. This abnormal structure, seen in the left hemisphere, was thought to be consistent with the middle cerebral artery trunk of the affected side. Seventeen days after onset, the abnormal structure was no longer visualized on non-enhanced CT. These findings suggested that the abnormal structure with increased density was compatible with a thromboembolus or intraluminal clot formed in the distal part of the occluded internal carotid artery. The importance of this finding as a diagnostic sign of cerebral arterial occlusion was discussed. (author)

  14. Tophaceous Gout in an Anorectic Patient Visualized by Dual Energy Computed Tomography (DECT)

    DEFF Research Database (Denmark)

    Christensen, Heidi Dahl; Sheta, Hussam; Birger Morillon, Melanie

    2016-01-01

    BACKGROUND Gout is characterized by deposition of uric acid crystals (monosodium urate) in tissues and fluids. This can cause acute inflammatory arthritis. The 2015 ACR/EULAR criteria for the diagnosis of gout include dual energy computed tomography (DECT)-demonstrated monosodium urate crystals...... known to have anorexia nervosa. During our clinical examination, we detected plenty of tophi on both hands, but no swollen joints. The diagnosis of gout was made by visualizing crystals in a biopsy from a tophus. The first line of treatment was allopurinol, the second line was rasburicase...... and soft tissue. CONCLUSIONS DECT is an imaging modality useful to assess urate crystal deposits at diagnosis of gout and could be considered during treatment evaluation. Lack of adherence to treatment should be considered when P-urate values vary significantly and when DECT scans over years persistently...

  15. Image-based computational quantification and visualization of genetic alterations and tumour heterogeneity.

    Science.gov (United States)

    Zhong, Qing; Rüschoff, Jan H; Guo, Tiannan; Gabrani, Maria; Schüffler, Peter J; Rechsteiner, Markus; Liu, Yansheng; Fuchs, Thomas J; Rupp, Niels J; Fankhauser, Christian; Buhmann, Joachim M; Perner, Sven; Poyet, Cédric; Blattner, Miriam; Soldini, Davide; Moch, Holger; Rubin, Mark A; Noske, Aurelia; Rüschoff, Josef; Haffner, Michael C; Jochum, Wolfram; Wild, Peter J

    2016-04-07

    Recent large-scale genome analyses of human tissue samples have uncovered a high degree of genetic alterations and tumour heterogeneity in most tumour entities, independent of morphological phenotypes and histopathological characteristics. Assessment of genetic copy-number variation (CNV) and tumour heterogeneity by fluorescence in situ hybridization (ISH) provides additional tissue morphology at single-cell resolution, but it is labour intensive with limited throughput and high inter-observer variability. We present an integrative method combining bright-field dual-colour chromogenic and silver ISH assays with an image-based computational workflow (ISHProfiler), for accurate detection of molecular signals, high-throughput evaluation of CNV, expressive visualization of multi-level heterogeneity (cellular, inter- and intra-tumour heterogeneity), and objective quantification of heterogeneous genetic deletions (PTEN) and amplifications (19q12, HER2) in diverse human tumours (prostate, endometrial, ovarian and gastric), using various tissue sizes and different scanners, with unprecedented throughput and reproducibility.

  16. Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.

    Science.gov (United States)

    Sanchez, Yerly; Pinzon, David; Zheng, Bin

    2017-10-01

    To examine reaction time when human subjects process information presented in the visual channel, under both direct vision and a virtual rehabilitation environment, while walking. Visual stimuli comprised eight math problems displayed in the peripheral vision of seven healthy human subjects, in a virtual rehabilitation training setting (computer-assisted rehabilitation environment, CAREN) and in a direct vision environment. Subjects were required to verbally report the results of these math calculations within a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct vision and virtual rehabilitation environments. Performance outcomes measured for both conditions included reaction time, reading time, answering time and verbal answer score. A significant difference between the conditions was found only for reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment, and their reaction time was faster in the direct vision environment. This reaction time delay should be kept in mind when designing skill-training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients undertaking a rehabilitation program in a virtual training environment. Implications for rehabilitation: eye tracking is a reliable tool that can be employed in rehabilitation virtual environments; reaction time changes between direct vision and virtual environments.

  17. The study of infrared target recognition at sea background based on visual attention computational model

    Science.gov (United States)

    Wang, Deng-wei; Zhang, Tian-xu; Shi, Wen-jun; Wei, Long-sheng; Wang, Xiao-ping; Ao, Guo-qing

    2009-07-01

    Infrared images against a sea background are notorious for their low signal-to-noise ratio, which makes target recognition with traditional methods very difficult. In this paper, we present a novel target recognition method based on the integration of a visual attention computational model with a conventional approach (selective filtering and segmentation). The two distinct image-processing techniques are combined so as to exploit the strengths of both. The visual attention algorithm searches for salient regions automatically, represents them by a set of winner points, and marks the salient regions as circles centered at these winner points. This provides a priori knowledge for the filtering and segmentation process. Around each winner point we construct a rectangular region to facilitate filtering and segmentation; a labeling operation is then applied selectively as required. Using the labeled information, we obtain from the final segmentation result the position of the region of interest, label the centroid on the corresponding original image, and thus localize the target. The processing time depends on the salient regions rather than on the size of the image, so it is greatly reduced. The method is applied to the recognition of several kinds of real infrared images, and the experimental results demonstrate the effectiveness of the proposed algorithm.
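
The winner-point and rectangular-region steps described above can be sketched as a toy in Python; the saliency map here is hand-built, whereas the paper derives it from a visual attention computational model:

```python
def winner_point(saliency):
    """Return the (row, col) of the maximum response in a 2-D saliency map."""
    best = (0, 0)
    for r, row in enumerate(saliency):
        for c, v in enumerate(row):
            if v > saliency[best[0]][best[1]]:
                best = (r, c)
    return best

def crop_around(image, center, half):
    """Clip a rectangle of half-width `half` around the winner point,
    staying inside the image bounds; this crop then feeds filtering
    and segmentation."""
    r0 = max(center[0] - half, 0)
    c0 = max(center[1] - half, 0)
    return [row[c0:center[1] + half + 1]
            for row in image[r0:center[0] + half + 1]]

# Hand-built saliency map: the bright target sits at (1, 1).
sal = [[0, 1, 0],
       [2, 9, 3],
       [0, 4, 0]]
print(winner_point(sal))            # (1, 1)
print(crop_around(sal, (1, 1), 1))  # here the crop covers the whole 3x3 map
```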

  18. Micro-computed tomography visualization of the vestigial alimentary canal in adult oestrid flies.

    Science.gov (United States)

    Martín-Vega, D; Garbout, A; Ahmed, F; Ferrer, L M; Lucientes, J; Colwell, D D; Hall, M J R

    2018-02-16

    Oestrid flies (Diptera: Oestridae) do not feed during the adult stage as they acquire all necessary nutrients during the parasitic larval stage. The adult mouthparts and digestive tract are therefore frequently vestigial; however, morphological data on the alimentary canal in adult oestrid flies are scarce and a proper visualization of this organ system within the adult body is lacking. The present work visualizes the morphology of the alimentary canal in adults of two oestrid species, Oestrus ovis L. and Hypoderma lineatum (de Villiers), with the use of non-invasive micro-computed tomography (micro-CT) and compares it with the highly developed alimentary canal of the blow fly Calliphora vicina Robineau-Desvoidy (Diptera: Calliphoridae). Both O. ovis and H. lineatum adults showed significant reductions of the cardia and the diameter of the digestive tract, an absence of the helicoidal portion of the midgut typical of other cyclorrhaphous flies, and a lack of crop and salivary glands. Given the current interest in the alimentary canal in adult dipterans in biomedical and developmental biology studies, further understanding of the morphology and development of this organ system in adult oestrids may provide valuable new insights in several areas of research. © 2018 The Royal Entomological Society.

  19. Visualization of haemophilic arthropathy in F8(-/-) rats by ultrasonography and micro-computed tomography

    DEFF Research Database (Denmark)

    Christensen, K R; Roepstorff, K; Petersen, M

    2017-01-01

    opportunities. Recently, an F8(-/-) rat model of HA was developed. The size of the rat allows for convenient, high-resolution imaging of the joints, which could enable in vivo studies of HA development. AIM: To determine whether HA in the F8(-/-) rat can be visualized using ultrasonography (US) and micro......-computed tomography (μCT). METHODS: Sixty F8(-/-) and 20 wild-type rats were subjected to one or two induced knee bleeds. F8(-/-) rats were treated with either recombinant human FVIII (rhFVIII) or vehicle before the induction of knee bleeds. Haemophilic arthropathy was visualized using in vivo US and ex vivo μCT......, and the observations correlated with histological evaluation. RESULTS: US and μCT detected pathologies in the knee related to HA. There was a strong correlation between disease severity determined by μCT and histopathology. rhFVIII treatment reduced the pathology identified with both imaging techniques. CONCLUSION: US...

  20. An independent brain-computer interface using covert non-spatial visual selective attention

    Science.gov (United States)

    Zhang, Dan; Maye, Alexander; Gao, Xiaorong; Hong, Bo; Engel, Andreas K.; Gao, Shangkai

    2010-02-01

    In this paper, a novel independent brain-computer interface (BCI) system based on covert non-spatial visual selective attention of two superimposed illusory surfaces is described. Perception of two superimposed surfaces was induced by two sets of dots with different colors rotating in opposite directions. The surfaces flickered at different frequencies and elicited distinguishable steady-state visual evoked potentials (SSVEPs) over parietal and occipital areas of the brain. By selectively attending to one of the two surfaces, the SSVEP amplitude at the corresponding frequency was enhanced. An online BCI system utilizing the attentional modulation of SSVEP was implemented and a 3-day online training program with healthy subjects was carried out. The study was conducted with Chinese subjects at Tsinghua University, and German subjects at University Medical Center Hamburg-Eppendorf (UKE) using identical stimulation software and equivalent technical setup. A general improvement of control accuracy with training was observed in 8 out of 18 subjects. An averaged online classification accuracy of 72.6 ± 16.1% was achieved on the last training day. The system renders SSVEP-based BCI paradigms possible for paralyzed patients with substantial head or ocular motor impairments by employing covert attention shifts instead of changing gaze direction.
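
The attentional readout in such SSVEP systems compares response amplitude at the two flicker frequencies. A hedged single-bin DFT sketch (the sampling rate, frequencies and synthetic signal are illustrative, not the study's stimuli):

```python
import math

def dft_amplitude(signal, fs, freq):
    """Amplitude of the single DFT component at `freq` (Hz)
    for a signal sampled at `fs` (Hz)."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * k / fs) for k, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * k / fs) for k, x in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

fs = 250.0
t = [k / fs for k in range(500)]  # 2 s of synthetic "EEG"
# Attended surface flickers at 10 Hz (strong), unattended at 12 Hz (weak):
sig = [1.0 * math.sin(2 * math.pi * 10 * ti) + 0.3 * math.sin(2 * math.pi * 12 * ti)
       for ti in t]

a10 = dft_amplitude(sig, fs, 10.0)
a12 = dft_amplitude(sig, fs, 12.0)
print(a10 > a12)  # True: the classifier would pick the 10 Hz surface
```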

  1. Visualizing stressful aspects of repetitive motion tasks and opportunities for ergonomic improvements using computer vision.

    Science.gov (United States)

    Greene, Runyu L; Azari, David P; Hu, Yu Hen; Radwin, Robert G

    2017-11-01

    Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and the techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used to evaluate repetitive motion tasks for hand activity level (HAL) using conventional 2D videos. The approach was made practical by relaxing the need for high precision and by adopting a semi-automatic approach to measuring the spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors using this computer vision approach is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track; the hand location and its associated kinematics are then measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed and duty cycle components of tasks that are part of the threshold limit value for hand activity, for the purpose of identifying patterns of exposure associated with specific job factors, as well as for suggesting task improvements. The localized variables are plotted as a heat map superimposed over the video and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors contribute most to HAL and readily identify those work elements in the task that contribute most to increased injury risk. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements. Copyright © 2017. Published by Elsevier Ltd.
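
The speed and duty-cycle components derived from frame-by-frame hand tracking can be sketched as follows (the track, frame rate and threshold are illustrative; the published method's exact kinematic definitions may differ):

```python
import math

def speed_series(xs, ys, fps):
    """Per-frame hand speed (position units per second) from tracked
    (x, y) positions at a given frame rate."""
    return [math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i]) * fps
            for i in range(len(xs) - 1)]

def duty_cycle(speeds, threshold):
    """Fraction of frames in which the hand moves faster than the
    threshold -- a simple proxy for exertion time."""
    return sum(s > threshold for s in speeds) / len(speeds)

# Illustrative track at 30 fps: the hand moves, pauses, then moves again.
xs = [0.0, 1.0, 2.0, 2.0, 2.0, 3.0]
ys = [0.0] * 6
speeds = speed_series(xs, ys, 30)
print(speeds)                   # [30.0, 30.0, 0.0, 0.0, 30.0]
print(duty_cycle(speeds, 1.0))  # 0.6
```

Mapping each frame's speed back to its image location is what turns these per-frame values into the heat map overlay the paper describes.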

  2. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    Science.gov (United States)

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.

  3. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files

    International Nuclear Information System (INIS)

    Randolph Schwarz; Leland L. Carter; Alysia Schwarz

    2005-01-01

    Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry.

  4. A manifold learning approach to data-driven computational materials and processes

    Science.gov (United States)

    Ibañez, Ruben; Abisset-Chavanne, Emmanuelle; Aguado, Jose Vicente; Gonzalez, David; Cueto, Elias; Duval, Jean Louis; Chinesta, Francisco

    2017-10-01

    Standard simulation in classical mechanics is based on the use of two very different types of equations. The first, of axiomatic character, is related to balance laws (momentum, mass, energy, …), whereas the second consists of models that scientists have extracted from collected natural or synthetic data. In this work we propose a new method able to link data directly to computers in order to perform numerical simulations. These simulations employ universal laws while minimizing the need for explicit, often phenomenological, models. They are based on manifold learning methodologies.

  5. Automatic analysis of digitized TV-images by a computer-driven optical microscope

    International Nuclear Information System (INIS)

    Rosa, G.; Di Bartolomeo, A.; Grella, G.; Romano, G.

    1997-01-01

    New methods of image analysis and three-dimensional pattern recognition were developed in order to perform the automatic scan of nuclear emulsion pellicles. An optical microscope, with a motorized stage, was equipped with a CCD camera and an image digitizer, and interfaced to a personal computer. Selected software routines inspired the design of a dedicated hardware processor. Fast operation, high efficiency and accuracy were achieved. First applications to high-energy physics experiments are reported. Further improvements are in progress, based on a high-resolution fast CCD camera and on programmable digital signal processors. Applications to other research fields are envisaged. (orig.)

  6. SPATIOTEMPORAL VISUALIZATION OF TIME-SERIES SATELLITE-DERIVED CO2 FLUX DATA USING VOLUME RENDERING AND GPU-BASED INTERPOLATION ON A CLOUD-DRIVEN DIGITAL EARTH

    Directory of Open Access Journals (Sweden)

    S. Wu

    2017-10-01

    Full Text Available The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.

  7. The Museum Wearable: Real-Time Sensor-Driven Understanding of Visitors' Interests for Personalized Visually-Augmented Museum Experiences.

    Science.gov (United States)

    Sparacino, Flavia

    This paper describes the museum wearable: a wearable computer that orchestrates an audiovisual narration as a function of the visitors' interests gathered from their physical path in the museum and length of stops. The wearable consists of a lightweight and small computer that people carry inside a shoulder pack. It offers an audiovisual…

  8. The history of visual magic in computers how beautiful images are made in CAD, 3D, VR and AR

    CERN Document Server

    Peddie, Jon

    2013-01-01

    If you have ever looked at a fantastic adventure or science fiction movie, or an amazingly complex and rich computer game, or a TV commercial where cars or gas pumps or biscuits behaved like people and wondered, "How do they do that?", then you've experienced the magic of 3D worlds generated by a computer. 3D in computers began as a way to represent automotive designs and illustrate the construction of molecules. 3D graphics use evolved to visualizations of simulated data and artistic representations of imaginary worlds. In order to overcome the processing limitations of the computer, graph

  9. Transition Manifolds of Complex Metastable Systems: Theory and Data-Driven Computation of Effective Dynamics.

    Science.gov (United States)

    Bittracher, Andreas; Koltai, Péter; Klus, Stefan; Banisch, Ralf; Dellnitz, Michael; Schütte, Christof

    2018-01-01

    We consider complex dynamical systems showing metastable behavior, but no local separation of fast and slow time scales. The article raises the question of whether such systems exhibit a low-dimensional manifold supporting their effective dynamics. To answer this question, we aim at finding nonlinear coordinates, called reaction coordinates, such that the projection of the dynamics onto these coordinates preserves the dominant time scales of the dynamics. We show that, based on a specific reducibility property, the existence of good low-dimensional reaction coordinates preserving the dominant time scales is guaranteed. Based on this theoretical framework, we develop and test a novel numerical approach for computing good reaction coordinates. The proposed algorithmic approach is fully local and thus not prone to the curse of dimension with respect to the state space of the dynamics. Hence, it is a promising method for data-based model reduction of complex dynamical systems such as molecular dynamics.

  10. Menu-driven cloud computing and resource sharing for R and Bioconductor.

    Science.gov (United States)

    Bolouri, Hamid; Dulepet, Rajiv; Angerman, Michael

    2011-08-15

    We report CRdata.org, a cloud-based, free, open-source web server for running analyses and sharing data and R scripts with others. In addition to using the free, public service, CRdata users can launch their own private Amazon Elastic Computing Cloud (EC2) nodes and store private data and scripts on Amazon's Simple Storage Service (S3) with user-controlled access rights. All CRdata services are provided via point-and-click menus. CRdata is open-source and free under the permissive MIT License (opensource.org/licenses/mit-license.php). The source code is in Ruby (ruby-lang.org/en/) and available at: github.com/seerdata/crdata. hbolouri@fhcrc.org.

  11. Brain Computer Interface for Micro-controller Driven Robot Based on Emotiv Sensors

    Directory of Open Access Journals (Sweden)

    Parth Gargava

    2017-08-01

    Full Text Available A Brain Computer Interface (BCI) is developed to navigate a micro-controller based robot using Emotiv sensors. The BCI system has a pipeline of five stages: signal acquisition, pre-processing, feature extraction, classification and CUDA interfacing. It is intended to serve as a prototype for restoring physical movement to neurological patients who are unable to control their muscular movements. All stages of the pipeline are designed to process bodily actions, such as eye blinks, into navigation commands for the robot. The prototype uses feature-learning and classification-centric techniques based on a support vector machine. The suggested pipeline ensures successful navigation of the robot in four directions in real time, with an accuracy of 93 percent.

  12. A document-driven method for certifying scientific computing software for use in nuclear safety analysis

    International Nuclear Information System (INIS)

    Smith, W. Spencer; Koothoor, Mimitha

    2016-01-01

    This paper presents a documentation and development method to facilitate the certification of scientific computing software used in the safety analysis of nuclear facilities. To study the problems faced during quality assurance and certification activities, a case study was performed on legacy software used for thermal analysis of a fuel pin in a nuclear reactor. Although no errors were uncovered in the code, 27 issues of incompleteness and inconsistency were found with the documentation. This work proposes that software documentation follow a rational process, which includes a software requirements specification following a template that is reusable, maintainable, and understandable. To develop the design and implementation, this paper suggests literate programming as an alternative to traditional structured programming. Literate programming allows for documenting of numerical algorithms and code together in what is termed the literate programmer's manual. This manual is developed with explicit traceability to the software requirements specification. The traceability between the theory, numerical algorithms, and implementation facilitates achieving completeness and consistency, as well as simplifies the process of verification and the associated certification.

  13. Computational modeling of direct-drive fusion pellets and KrF-driven foil experiments

    International Nuclear Information System (INIS)

    Gardner, J.H.; Schmitt, A.J.; Dahlburg, J.P.; Pawley, C.J.; Bodner, S.E.; Obenschain, S.P.; Serlin, V.; Aglitskiy, Y.

    1998-01-01

    FAST is a radiation transport hydrodynamics code that simulates laser-matter interactions of relevance to direct-drive laser fusion target design. FAST solves the Euler equations of compressible flow using the Flux-Corrected Transport finite volume method. The advection algorithm provides accurate computation of flows ranging from nearly incompressible vortical flows to those that are highly compressible and dominated by strong pressure and density gradients. In this paper we describe the numerical techniques and physics packages. FAST has also been benchmarked against Nike laser facility experiments in which linearly perturbed, low-adiabat planar plastic targets are ablatively accelerated to velocities approaching 10⁷ cm/s. Over a range of perturbation wavelengths, the code results agree with the measured Rayleigh–Taylor growth from the linear through the deeply nonlinear regimes. FAST has been applied to two-dimensional spherical simulation design to provide surface finish and laser bandwidth tolerances for a promising new direct-drive pellet that uses a foam ablator.

  14. Computer-extended series for a source/sink driven gas centrifuge

    International Nuclear Information System (INIS)

    Berger, M.H.

    1987-01-01

    We have reformulated the general problem of internal flow in a modern high-speed gas centrifuge with sources and sinks in such a way as to obtain new, simple, rigorous closed-form analytical solutions. Both symmetric and antisymmetric drives lead to an ordinary differential equation in place of the usual inhomogeneous Onsager partial differential equation. Owing to the difficulties of exactly solving this sixth-order, inhomogeneous, variable-coefficient ordinary differential equation, we appeal to the power of perturbation theory and techniques. Two extreme parameter regimes are identified, the so-called semi-long bowl approximation and a new short bowl approximation. Only the former class of problems is treated here. The long bowl solution for axial drive is the correct leading-order term, just as for pure thermal drive. New O(1) results are derived for radial, drag and heat drives in two dimensions. Regular asymptotic, even-ordered power series expansions for the flow field are then carried out on the computer to O(epsilon⁴) using MACSYMA. These approximations are valid for values of epsilon near unity. In the spirit of Van Dyke, one can in theory carry out this expansion process to apparently arbitrary order for arbitrary but finite decay length ratio. Curiously, the flows induced by axial and radial forces are proportional for asymptotically large source scale heights, chi*. Corresponding isotope separation integral parameters will be given in a companion paper. (author)

  15. Increase in computed tomography in Australia driven mainly by practice change: A decomposition analysis.

    Science.gov (United States)

    Wright, Cameron M; Bulsara, Max K; Norman, Richard; Moorin, Rachael E

    2017-07-01

    Publicly funded computed tomography (CT) procedure descriptions in Australia often specify the body site rather than the indication for use. This study aimed to evaluate the relative contribution of demographic versus non-demographic factors in driving the increase in CT services in Australia. A decomposition analysis was conducted to assess the proportion of additional CT attributable to changing population structure, CT use on a per capita basis (CPC, a proxy for change in practice) and/or cost of CT. Aggregated Medicare usage and billing data were obtained for selected years between 1993/4 and 2012/3. The number of billed CT scans rose from 33 per annum per 1000 of population in 1993/94 (total 572,925) to 112 per 1000 by 2012/13 (total 2,540,546). The respective cost to Medicare rose from $145.7 million to $790.7 million. Change in CPC was the most important factor, accounting for 88% of the change in CT services and 65% of the change in cost over the study period. While this study cannot conclude whether the increase is appropriate, it does represent a shift in how CT is used relative to when many CT services were listed for public funding. This 'scope shift' poses questions as to the need for and frequency of retrospective/ongoing review of publicly funded services, as medical advances and other demand- or supply-side factors change the way health services are used. Copyright © 2017 Elsevier B.V. All rights reserved.
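
The split between population growth and practice change can be illustrated with a simple two-factor decomposition (a Laspeyres-style attribution; the paper's decomposition method may differ, and the populations below are back-calculated from the abstract's totals and per-capita rates):

```python
def decompose(pop0, cpc0, pop1, cpc1):
    """Split the change in total scans (total = population * CPC) into a
    population-growth part and a per-capita (practice change) part.
    Sequential Laspeyres-style attribution; other orderings give slightly
    different shares."""
    pop_effect = (pop1 - pop0) * cpc0   # more people at the old scan rate
    cpc_effect = pop1 * (cpc1 - cpc0)   # new population at the changed rate
    return pop_effect, cpc_effect

# Populations implied by the abstract's totals and rates (scans per person):
pop0 = 572_925 / 0.033      # 1993/94
pop1 = 2_540_546 / 0.112    # 2012/13

pop_eff, cpc_eff = decompose(pop0, 0.033, pop1, 0.112)
share_practice = cpc_eff / (pop_eff + cpc_eff)
print(round(share_practice, 2))  # ~0.91 with this ordering; the paper reports 88%
```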

  16. A document-driven method for certifying scientific computing software for use in nuclear safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Smith, W. Spencer; Koothoor, Mimitha [Computing and Software Department, McMaster University, Hamilton (Canada)

    2016-04-15

    This paper presents a documentation and development method to facilitate the certification of scientific computing software used in the safety analysis of nuclear facilities. To study the problems faced during quality assurance and certification activities, a case study was performed on legacy software used for thermal analysis of a fuel pin in a nuclear reactor. Although no errors were uncovered in the code, 27 issues of incompleteness and inconsistency were found with the documentation. This work proposes that software documentation follow a rational process, which includes a software requirements specification following a template that is reusable, maintainable, and understandable. To develop the design and implementation, this paper suggests literate programming as an alternative to traditional structured programming. Literate programming allows for documenting of numerical algorithms and code together in what is termed the literate programmer's manual. This manual is developed with explicit traceability to the software requirements specification. The traceability between the theory, numerical algorithms, and implementation facilitates achieving completeness and consistency, as well as simplifies the process of verification and the associated certification.

  17. Solvent-driven symmetry of self-assembled nanocrystal superlattices-A computational study

    KAUST Repository

    Kaushik, Ananth P.

    2012-10-29

    The preference of experimentally realistic, 4-nm facetted nanocrystals (NCs), emulating Pb chalcogenide quantum dots, to spontaneously choose a crystal habit for NC superlattices (Face-Centered Cubic (FCC) vs. Body-Centered Cubic (BCC)) is investigated using molecular simulation approaches. Molecular dynamics simulations, using united-atom force fields, are conducted to simulate systems comprised of cube-octahedral-shaped NCs covered by alkyl ligands, in the absence and presence of the experimentally used solvents toluene and hexane. System sizes on the 400,000-500,000-atom scale, followed for nanoseconds, are required for this computationally intensive study. The key questions addressed here concern the thermodynamic stability of the superlattice and its preference of symmetry, as we vary the ligand length of the chains, from 9 to 24 CH2 groups, and the choice of solvent. We find that hexane and toluene are "good" solvents for the NCs, which penetrate the ligand corona all the way to the NC surfaces. We determine the free energy difference between FCC and BCC NC superlattice symmetries to determine the system's preference for either geometry, as the ratio of the length of the ligand to the diameter of the NC is varied. We explain these preferences in terms of the different mechanisms in play, whose relative strength determines the overall choice of geometry. © 2012 Wiley Periodicals, Inc.

  18. Data-Driven Approaches for Computation in Intelligent Biomedical Devices: A Case Study of EEG Monitoring for Chronic Seizure Detection

    Directory of Open Access Journals (Sweden)

    Naveen Verma

    2011-04-01

    Full Text Available Intelligent biomedical devices are systems able to detect specific physiological processes in patients so that particular responses can be generated. This closed-loop capability can have enormous clinical value when we consider the unprecedented modalities that are beginning to emerge for sensing and stimulating patient physiology. Both delivering therapy (e.g., deep-brain stimulation, vagus nerve stimulation, etc.) and treating impairments (e.g., neural prostheses) require computational devices that can make clinically relevant inferences, especially using minimally-intrusive patient signals. The key to such devices is algorithms that are based on data-driven signal modeling as well as hardware structures that are specialized to them. This paper discusses the primary application-domain challenges that must be overcome and analyzes the most promising methods for this that are emerging. We then look at how these methods are being incorporated in ultra-low-energy computational platforms and systems. The case study for this is a seizure-detection SoC that includes instrumentation and computation blocks in support of a system that exploits patient-specific modeling to achieve accurate performance for chronic detection. The SoC samples each EEG channel at a rate of 600 Hz and performs processing to derive signal features on every two-second epoch, consuming 9 μJ/epoch/channel. Signal feature extraction reduces the data rate by a factor of over 40×, permitting wireless communication from the patient’s head while reducing the total power on the head by 14×.
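
    The reported data-rate reduction can be sanity-checked with simple arithmetic. In the sketch below, the 600 Hz sampling rate and two-second epoch come from the abstract, while the 16-bit sample width and the 7-feature, 64-bit-per-feature epoch vector are illustrative assumptions, not figures from the paper.

```python
# Back-of-envelope check of the >40x data-rate reduction reported for the
# seizure-detection SoC. Sample width and feature-vector size are assumed.

SAMPLE_RATE_HZ = 600       # per-channel EEG sampling rate (from the abstract)
EPOCH_S = 2                # epoch length in seconds (from the abstract)
BITS_PER_SAMPLE = 16       # assumed ADC resolution
FEATURE_BITS = 7 * 64      # assumed: 7 features x 64 bits per epoch

raw_bits_per_epoch = SAMPLE_RATE_HZ * EPOCH_S * BITS_PER_SAMPLE  # 19200
reduction = raw_bits_per_epoch / FEATURE_BITS

print(f"raw: {raw_bits_per_epoch} bits/epoch; reduction: {reduction:.1f}x")
```

    Under these assumptions the raw epoch is 19,200 bits against a 448-bit feature vector, i.e. a reduction of roughly 43×, consistent with the "over 40×" figure in the record.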

  19. SciDAC-Data, A Project to Enabling Data Driven Modeling of Exascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mubarak, M.; Ding, P.; Aliaga, L.; Tsaris, A.; Norman, A.; Lyon, A.; Ross, R.

    2016-10-10

    The SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab Data Center on the organization, movement, and consumption of High Energy Physics data. The project will analyze the analysis patterns and data organization used by the NOvA, MicroBooNE, MINERvA, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership class exascale computing facilities. We will address the use of the SciDAC-Data distributions acquired from Fermilab Data Center’s analysis workflows and corresponding to around 71,000 HEP jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in HPC environments. In particular, we describe in detail how the Sequential Access via Metadata (SAM) data handling system in combination with the dCache/Enstore based data archive facilities has been analyzed to develop radically different models of HEP data analysis. We present how the simulation may be used to analyze the impact of design choices in archive facilities.

  20. Evaluation of the Effectiveness of a Tablet Computer Application (App) in Helping Students with Visual Impairments Solve Mathematics Problems

    Science.gov (United States)

    Beal, Carole R.; Rosenblum, L. Penny

    2018-01-01

    Introduction: The authors examined a tablet computer application (iPad app) for its effectiveness in helping students studying prealgebra to solve mathematical word problems. Methods: Forty-three visually impaired students (that is, those who are blind or have low vision) completed eight alternating mathematics units presented using their…

  1. The Relationship between Computer and Internet Use and Performance on Standardized Tests by Secondary School Students with Visual Impairments

    Science.gov (United States)

    Zhou, Li; Griffin-Shirley, Nora; Kelley, Pat; Banda, Devender R.; Lan, William Y.; Parker, Amy T.; Smith, Derrick W.

    2012-01-01

    Introduction: The study presented here explored the relationship between computer and Internet use and the performance on standardized tests by secondary school students with visual impairments. Methods: With data retrieved from the first three waves (2001-05) of the National Longitudinal Transition Study-2, the correlational study focused on…

  2. Burnup calculations for KIPT accelerator driven subcritical facility using Monte Carlo computer codes-MCB and MCNPX

    International Nuclear Information System (INIS)

    Gohar, Y.; Zhong, Z.; Talamo, A.

    2009-01-01

    Argonne National Laboratory (ANL) of USA and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukraine nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is ∼375 kW including the fission power of ∼260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during the operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the introduced reactivity from adding fresh fuel assemblies. The recent developments of the Monte Carlo computer codes, the high speed capability of the computer processors, and the parallel computation techniques made it possible to perform three-dimensional detailed burnup simulations. A full detailed three-dimensional geometrical model is used for the burnup simulations with continuous energy nuclear data libraries for the transport calculations and 63-multigroup or one-group cross-section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the electrons and the

  3. A Sensory-Driven Trade-Off between Coordinated Motion in Social Prey and a Predator's Visual Confusion.

    Directory of Open Access Journals (Sweden)

    Bertrand H Lemasson

    2016-02-01

    Full Text Available Social animals are capable of enhancing their awareness by paying attention to their neighbors, and prey found in groups can also confuse their predators. Both sides of these sensory benefits have long been appreciated, yet less is known of how the perception of events from the perspectives of both prey and predator can interact to influence their encounters. Here we examined how a visual sensory mechanism impacts the collective motion of prey and, subsequently, how their resulting movements influenced predator confusion and capture ability. We presented virtual prey to human players in a targeting game and measured the speed and accuracy with which participants caught designated prey. As prey paid more attention to neighbor movements their collective coordination increased, yet increases in prey coordination were positively associated with increases in the speed and accuracy of attacks. However, while attack speed was unaffected by the initial state of the prey, accuracy dropped significantly if the prey were already organized at the start of the attack, rather than in the process of self-organizing. By repeating attack scenarios and masking the targeted prey's neighbors we were able to visually isolate them and conclusively demonstrate how visual confusion impacted capture ability. Delays in capture caused by decreased coordination amongst the prey depended upon the collective motion of neighboring prey, while it was primarily the motion of the targets themselves that determined capture accuracy. Interestingly, while a complete loss of coordination in the prey (e.g., a flash expansion caused the greatest delay in capture, such behavior had little effect on capture accuracy. Lastly, while increases in collective coordination in prey enhanced personal risk, traveling in coordinated groups was still better than appearing alone. These findings demonstrate a trade-off between the sensory mechanisms that can enhance the collective properties that

  4. Designing Serious Computer Games for People With Moderate and Advanced Dementia: Interdisciplinary Theory-Driven Pilot Study

    Science.gov (United States)

    Gross, Daniel; Abikhzer, Judith

    2017-01-01

    Background The field of serious games for people with dementia (PwD) is mostly driven by game-design principles typically applied to games created by and for younger individuals. Little has been done developing serious games to help PwD maintain cognition and to support functionality. Objectives We aimed to create a theory-based serious game for PwD, with input from a multi-disciplinary team familiar with aging, dementia, and gaming theory, as well as direct input from end users (the iterative process). Targeting enhanced self-efficacy in daily activities, the goal was to generate a game that is acceptable, accessible and engaging for PwD. Methods The theory-driven game development was based on the following learning theories: learning in context, errorless learning, building on capacities, and acknowledging biological changes—all with the aim to boost self-efficacy. The iterative participatory process was used for game screen development with input of 34 PwD and 14 healthy community dwelling older adults, aged over 65 years. Development of game screens was informed by the bio-psychological aging related disabilities (ie, motor, visual, and perception) as well as remaining neuropsychological capacities (ie, implicit memory) of PwD. At the conclusion of the iterative development process, a prototype game with 39 screens was used for a pilot study with 24 PwD and 14 healthy community dwelling older adults. The game was played twice weekly for 10 weeks. Results Quantitative analysis showed that the average speed of successful screen completion was significantly longer for PwD compared with healthy older adults. Both PwD and controls showed an equivalent linear increase in the speed for task completion with practice by the third session (P…). PwD found the game engaging and fun. Healthy older adults found the game too easy. Increase in self-reported self-efficacy was documented with PwD only. Conclusions Our study demonstrated that PwD’s speed improved with practice at the same rate

  5. Effects of Computer-Based Visual Representation on Mathematics Learning and Cognitive Load

    Science.gov (United States)

    Yung, Hsin I.; Paas, Fred

    2015-01-01

    Visual representation has been recognized as a powerful learning tool in many learning domains. Based on the assumption that visual representations can support deeper understanding, we examined the effects of visual representations on learning performance and cognitive load in the domain of mathematics. An experimental condition with visual…

  6. Computer visualization for enhanced operator performance for advanced nuclear power plants

    International Nuclear Information System (INIS)

    Simon, B.H.; Raghavan, R.

    1993-01-01

    The operators of nuclear power plants are presented with an often uncoordinated and arbitrary array of displays and controls. Information is presented in different formats and on physically dissimilar instruments. In an accident situation, an operator must be very alert to quickly diagnose and respond to the state of the plant as represented by the control room displays. Improvements in display technology and increased automation have helped reduce operator burden; however, too much automation may lead to operator apathy and decreased efficiency. A proposed approach to the human-system interface uses modern graphics technology and advances in computational power to provide a visualization or "virtual reality" framework for the operator. This virtual reality comprises a simulated perception of another existence, complete with three-dimensional structures, backgrounds, and objects. By placing the operator in an environment that presents an integrated, graphical, and dynamic view of the plant, his attention is directly engaged. Through computer simulation, the operator can view plant equipment, read local displays, and manipulate controls as if he were in the local area. This process not only keeps an operator involved in plant operation and testing procedures, but also reduces personnel exposure. In addition, operator stress is reduced because, with realistic views of plant areas and equipment, the status of the plant can be accurately grasped without interpreting a large number of displays. Since a single operator can quickly "visit" many different plant areas without physically moving from the control room, these techniques are useful in reducing labor requirements for surveillance and maintenance activities. This concept requires a plant dynamic model continuously updated via real-time process monitoring. This model interacts with a three-dimensional, solid-model architectural configuration of the physical plant

  7. Computer animations of color markings reveal the function of visual threat signals in Neolamprologus pulcher.

    Science.gov (United States)

    Balzarini, Valentina; Taborsky, Michael; Villa, Fabienne; Frommen, Joachim G

    2017-02-01

    Visual signals, including changes in coloration and color patterns, are frequently used by animals to convey information. During contests, body coloration and its changes can be used to assess an opponent's state or motivation. Communication of aggressive propensity is particularly important in group-living animals with a stable dominance hierarchy, as the outcome of aggressive interactions determines the social rank of group members. Neolamprologus pulcher is a cooperatively breeding cichlid showing frequent within-group aggression. Both sexes exhibit two vertical black stripes on the operculum that vary naturally in shape and darkness. During frontal threat displays these patterns are actively exposed to the opponent, suggesting a signaling function. To investigate the role of operculum stripes during contests we manipulated their darkness in computer animated pictures of the fish. We recorded the responses in behavior and stripe darkness of test subjects to which these animated pictures were presented. Individuals with initially darker stripes were more aggressive against the animations and showed more operculum threat displays. Operculum stripes of test subjects became darker after exposure to an animation exhibiting a pale operculum than after exposure to a dark operculum animation, highlighting the role of the darkness of this color pattern in opponent assessment. We conclude that (i) the black stripes on the operculum of N. pulcher are a reliable signal of aggression and dominance, (ii) these markings play an important role in opponent assessment, and (iii) 2D computer animations are well suited to elicit biologically meaningful short-term aggressive responses in this widely used model system of social evolution.

  8. Semiquantitative visual approach to scoring lung cancer treatment response using computed tomography: a pilot study.

    Science.gov (United States)

    Gottlieb, Ronald H; Kumar, Prasanna; Loud, Peter; Klippenstein, Donald; Raczyk, Cheryl; Tan, Wei; Lu, Jenny; Ramnath, Nithya

    2009-01-01

    Our objective was to compare a newly developed semiquantitative visual scoring (SVS) method with the current standard, the Response Evaluation Criteria in Solid Tumors (RECIST) method, in the categorization of treatment response and reader agreement for patients with metastatic lung cancer followed by computed tomography. The 18 subjects (5 women and 13 men; mean age, 62.8 years) were from an institutional review board-approved phase 2 study that evaluated a second-line chemotherapy regimen for metastatic (stages III and IV) non-small cell lung cancer. Four radiologists, blinded to the patient outcome and each other's reads, evaluated the change in the patients' tumor burden from the baseline to the first restaging computed tomographic scan using either the RECIST or the SVS method. We compared the numbers of patients placed into the partial response, the stable disease (SD), and the progressive disease (PD) categories (Fisher exact test) and observer agreement (kappa statistic). Requiring the concordance of 3 of the 4 readers resulted in the RECIST placing 17 (100%) of 17 patients in the SD category compared with the SVS placing 9 (60%) of 15 patients in the partial response, 5 (33%) of the 15 patients in the SD, and 1 (6.7%) of the 15 patients in the PD categories (P < 0.0001). Interobserver agreement was higher among the readers using the SVS method (kappa, 0.54; P < 0.0001) compared with that of the readers using the RECIST method (kappa, -0.01; P = 0.5378). Using the SVS method, the readers more finely discriminated between the patient response categories with superior agreement compared with the RECIST method, which could potentially result in large differences in early treatment decisions for advanced lung cancer.

  9. Computer-enhanced visual learning method: a paradigm to teach and document surgical skills.

    Science.gov (United States)

    Maizels, Max; Mickelson, Jennie; Yerkes, Elizabeth; Maizels, Evelyn; Stork, Rachel; Young, Christine; Corcoran, Julia; Holl, Jane; Kaplan, William E

    2009-09-01

    Changes in health care are stimulating residency training programs to develop new methods for teaching surgical skills. We developed Computer-Enhanced Visual Learning (CEVL) as an innovative Internet-based learning and assessment tool. The CEVL method uses the educational procedures of deliberate practice and performance to teach and learn surgery in a stylized manner. CEVL is a learning and assessment tool that can provide students and educators with quantitative feedback on learning a specific surgical procedure. The method yields quantitative data on improvement in surgical skills. Herein, we qualitatively describe the method and show how program directors (PDs) may implement this technique in their residencies. CEVL allows an operation to be broken down into teachable components. The process relies on feedback and remediation to improve performance, with a focus on learning that is applicable to the next case being performed. CEVL has been shown to be effective for teaching pediatric orchiopexy and is being adapted to additional adult and pediatric procedures and to office examination skills. The CEVL method is available to other residency training programs.

  10. X-ray micro computed tomography for the visualization of an atherosclerotic human coronary artery

    Science.gov (United States)

    Matviykiv, Sofiya; Buscema, Marzia; Deyhle, Hans; Pfohl, Thomas; Zumbuehl, Andreas; Saxer, Till; Müller, Bert

    2017-06-01

    Atherosclerosis refers to narrowing or blocking of blood vessels that can lead to a heart attack, chest pain or stroke. Constricted segments of diseased arteries exhibit considerably increased wall shear stress, compared to the healthy ones. One possibility for improving a patient's treatment is the application of nano-therapeutic approaches based on shear-stress-sensitive nano-containers. In order to tailor the chemical composition and subsequent physical properties of such liposomes, one has to know precisely the morphology of critically stenosed arteries at micrometre resolution. It is often obtained by means of histology, which has the drawback of offering only two-dimensional information. Additionally, it requires the artery to be decalcified before sectioning, which might lead to deformations within the tissue. Micro computed tomography (μCT) enables the three-dimensional (3D) visualization of soft and hard tissues at micrometre level. μCT allows lumen segmentation that is crucial for subsequent flow simulation analysis. In this communication, tomographic images of a human coronary artery before and after decalcification are qualitatively and quantitatively compared. We analyse the cross section of the diseased human coronary artery before and after decalcification, and calculate the lumen area of both samples.

  11. Quantifying the visual appearance of sunscreens applied to the skin using indirect computer image colorimetry.

    Science.gov (United States)

    Richer, Vincent; Kharazmi, Pegah; Lee, Tim K; Kalia, Sunil; Lui, Harvey

    2018-03-01

    There is no accepted method to objectively assess the visual appearance of sunscreens on the skin. We present a method for sunscreen application, digital photography, and computer analysis to quantify the appearance of the skin after sunscreen application. Four sunscreen lotions were applied randomly at densities of 0.5, 1.0, 1.5, and 2.0 mg/cm² to areas of the back of 29 subjects. Each application site had a matched contralateral control area. High-resolution standardized photographs including a color card were taken after sunscreen application. After color balance correction, CIE L*a*b* color values were extracted from paired sites. Differences in skin appearance attributed to sunscreen were represented by ΔE, which in turn was calculated from the linear Euclidean distance within the L*a*b* color space between the paired sites. Sunscreen visibility as measured by median ΔE varied across different products and application densities and ranged between 1.2 and 12.1. The visibility of sunscreens varied according to product SPF, composition (organic vs inorganic), presence of tint, and baseline b* of skin (P…). Indirect computer image colorimetry represents a potential method to objectively quantify the visibility of sunscreen on the skin. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
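
    The ΔE described here is the Euclidean distance between paired sites in CIE L*a*b* space (the CIE76 color difference). A minimal sketch follows; the function name and the sample color values are illustrative, not measurements from the study.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIE L*a*b* space."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Hypothetical paired readings: sunscreen-treated site vs. matched control.
treated = (68.0, 8.5, 18.0)   # (L*, a*, b*)
control = (62.0, 10.0, 14.0)

delta_e = delta_e_cie76(treated, control)
print(f"delta-E = {delta_e:.2f}")  # sqrt(6^2 + 1.5^2 + 4^2) ~= 7.37
```

    A ΔE of this size would fall in the middle of the 1.2-12.1 range of median values reported above.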

  12. Isolated unilateral absence of the right pulmonary artery in two cats visualized by computed tomography angiography

    Directory of Open Access Journals (Sweden)

    Tyler JM Jordan

    2016-10-01

    Full Text Available Case series summary Two cats were evaluated for progressive exercise intolerance, dyspnea and unilateral infiltrate of the left lung. Computed tomography angiography (CTA revealed absence of the right pulmonary artery in both cats with systemic arterial collateral vessels perfusing the right segmental pulmonary arteries. In one case, the collateral vessels arose from the esophageal artery, while in the other case they derived off the right costocervical trunk. One cat was diagnosed with pulmonary hypertension and was euthanized owing to progressive respiratory distress despite medical management with sildenafil, pimobendan, clopidogrel and furosemide. The other cat, without echocardiographic evidence of pulmonary hypertension, was successfully managed with furosemide and enalapril for more than 4 years. Relevance and novel information CTA allowed visualization of a rare congenital heart malformation, unilateral absence of the right pulmonary artery, in two cats and accurately characterized the source of collateral blood supply to the affected lung. Severe pulmonary hypertension may be a negative prognostic factor in cats with this condition as medical therapy in the cat without evidence of pulmonary hypertension resolved clinical signs, while the cat with severe pulmonary hypertension died from the disease.

  13. Visual perception affected by motivation and alertness controlled by a noninvasive brain-computer interface.

    Science.gov (United States)

    Maksimenko, Vladimir A; Runnova, Anastasia E; Zhuravlev, Maksim O; Makarov, Vladimir V; Nedayvozov, Vladimir; Grubov, Vadim V; Pchelintceva, Svetlana V; Hramov, Alexander E; Pisarchik, Alexander N

    2017-01-01

    The influence of motivation and alertness on brain activity associated with visual perception was studied experimentally using the Necker cube, whose ambiguity was controlled by the contrast of its edges. The wavelet analysis of recorded multichannel electroencephalograms (EEG) allowed us to distinguish two different scenarios while the brain processed the ambiguous stimulus. The first scenario is characterized by a particular destruction of alpha rhythm (8-12 Hz) with a simultaneous increase in beta-wave activity (20-30 Hz), whereas in the second scenario, the beta rhythm is not well pronounced while the alpha-wave energy remains unchanged. The experiments were carried out with a group of financially motivated subjects and another group of unpaid volunteers. It was found that the first scenario occurred mainly in the motivated group. This can be explained by the increased alertness of the motivated subjects. The prevalence of the first scenario was also observed in a group of subjects to whom images with higher ambiguity were presented. We believe that the revealed scenarios can occur not only during the perception of bistable images, but also in other perceptual tasks requiring decision making. The obtained results may have important applications for monitoring and controlling human alertness in situations which need substantial attention. On the basis of the obtained results, we built a brain-computer interface to estimate and control the degree of alertness in real time.
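
    The alpha (8-12 Hz) and beta (20-30 Hz) band activity contrasted above can be illustrated with a simple band-power computation. The study used wavelet analysis; the sketch below substitutes a plain FFT periodogram on a synthetic trace, so the function, signal, and parameter values are all illustrative stand-ins.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power in [f_lo, f_hi] Hz via an FFT periodogram
    (a simplified stand-in for the wavelet analysis used in the study)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

# Synthetic 1 s EEG-like trace: strong 10 Hz (alpha) plus weak 25 Hz (beta).
fs = 250
t = np.arange(fs) / fs
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)

alpha = band_power(eeg, fs, 8, 12)   # dominated by the 10 Hz component
beta = band_power(eeg, fs, 20, 30)   # dominated by the 25 Hz component
print(f"alpha power {alpha:.2f} vs beta power {beta:.2f}")
```

    A drop in alpha power with a simultaneous rise in beta power, tracked epoch by epoch, is the signature the first scenario above describes.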

  14. A Fuzzy Integral Ensemble Method in Visual P300 Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Francesco Cavrini

    2016-01-01

    Full Text Available We evaluate the applicability of classifier combination using fuzzy measures and integrals to Brain-Computer Interfaces (BCI) based on electroencephalography. In particular, we present an ensemble method that can be applied to a variety of systems and evaluate it in the context of a visual P300-based BCI. Offline analysis of data from 5 subjects suggests that the proposed classification strategy is suitable for BCI. Indeed, the achieved performance is significantly greater than the average of the base classifiers and, broadly speaking, similar to that of the best one. Thus, the proposed methodology allows building systems that can be used by different subjects without the need for a preliminary configuration phase in which the best classifier for each user has to be identified. Moreover, the ensemble is often capable of detecting uncertain situations and turning them from misclassifications into abstentions, thereby improving the level of safety in BCI for environmental or device control.
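
    Fuzzy-integral ensembles of this kind are commonly realized with the discrete Choquet integral, which weights coalitions of classifiers by a fuzzy measure rather than weighting classifiers individually. The sketch below shows the standard discrete Choquet form; the classifier names, confidence values, and measure values are hypothetical, not taken from the paper.

```python
def choquet(confidences, measure):
    """Discrete Choquet integral of per-classifier confidences with respect
    to a fuzzy measure defined on frozensets of classifier names."""
    items = sorted(confidences.items(), key=lambda kv: kv[1])  # ascending
    total, prev = 0.0, 0.0
    for i, (_, h) in enumerate(items):
        # Coalition of classifiers whose confidence is at least h.
        coalition = frozenset(name for name, _ in items[i:])
        total += (h - prev) * measure[coalition]
        prev = h
    return total

# Hypothetical fuzzy measure over three base classifiers (monotone, g(full)=1).
g = {
    frozenset(): 0.0,
    frozenset({"lda"}): 0.35, frozenset({"svm"}): 0.40, frozenset({"knn"}): 0.30,
    frozenset({"lda", "svm"}): 0.80, frozenset({"lda", "knn"}): 0.60,
    frozenset({"svm", "knn"}): 0.70,
    frozenset({"lda", "svm", "knn"}): 1.0,
}

# Hypothetical confidences that the current epoch contains a P300 response.
score = choquet({"lda": 0.9, "svm": 0.7, "knn": 0.4}, g)
print(f"ensemble P300 score: {score:.2f}")  # 0.4*1.0 + 0.3*0.8 + 0.2*0.35 = 0.71
```

    Because the measure is defined on coalitions, the ensemble can express interactions between classifiers (redundancy or synergy) that a weighted average cannot; thresholding the score with an abstention band is one way to obtain the "uncertain situation" behavior the abstract describes.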

  15. Visual perception affected by motivation and alertness controlled by a noninvasive brain-computer interface.

    Directory of Open Access Journals (Sweden)

    Vladimir A Maksimenko

    Full Text Available The influence of motivation and alertness on brain activity associated with visual perception was studied experimentally using the Necker cube, whose ambiguity was controlled by the contrast of its edges. The wavelet analysis of recorded multichannel electroencephalograms (EEG) allowed us to distinguish two different scenarios while the brain processed the ambiguous stimulus. The first scenario is characterized by a particular destruction of alpha rhythm (8-12 Hz) with a simultaneous increase in beta-wave activity (20-30 Hz), whereas in the second scenario, the beta rhythm is not well pronounced while the alpha-wave energy remains unchanged. The experiments were carried out with a group of financially motivated subjects and another group of unpaid volunteers. It was found that the first scenario occurred mainly in the motivated group. This can be explained by the increased alertness of the motivated subjects. The prevalence of the first scenario was also observed in a group of subjects to whom images with higher ambiguity were presented. We believe that the revealed scenarios can occur not only during the perception of bistable images, but also in other perceptual tasks requiring decision making. The obtained results may have important applications for monitoring and controlling human alertness in situations which need substantial attention. On the basis of the obtained results, we built a brain-computer interface to estimate and control the degree of alertness in real time.

  16. Computational modeling of electrically-driven deposition of ionized polydisperse particulate powder mixtures in advanced manufacturing processes

    Science.gov (United States)

    Zohdi, T. I.

    2017-07-01

    A key part of emerging advanced additive manufacturing methods is the deposition of specialized particulate mixtures of materials on substrates. For example, in many cases these materials are polydisperse powder mixtures whereby one set of particles is chosen with the objective to electrically, thermally or mechanically functionalize the overall mixture material and another set of finer-scale particles serves as an interstitial filler/binder. Often, achieving controllable, precise, deposition is difficult or impossible using mechanical means alone. It is for this reason that electromagnetically-driven methods are being pursued in industry, whereby the particles are ionized and an electromagnetic field is used to guide them into place. The goal of this work is to develop a model and simulation framework to investigate the behavior of a deposition as a function of an applied electric field. The approach develops a modular discrete-element type method for the simulation of the particle dynamics, which provides researchers with a framework to construct computational tools for this growing industry.
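
    A discrete-element treatment of a single ionized particle guided toward a substrate by a uniform electric field, with Stokes drag from the surrounding medium, can be sketched as below. The charge, mass, drag coefficient, field strength, and time step are illustrative assumptions, not values from the paper, and a full simulation would add many particles plus contact and near-field interactions.

```python
import numpy as np

q = 1.0e-15       # particle charge [C] (assumed)
m = 1.0e-12       # particle mass [kg] (assumed)
c_drag = 1.0e-9   # Stokes drag coefficient [kg/s] (assumed)
E_field = np.array([0.0, 0.0, -1.0e4])  # uniform field toward substrate [V/m]

dt, steps = 1.0e-5, 15000
x = np.array([0.0, 0.0, 1.0e-3])  # start 1 mm above the substrate plane z=0
v = np.zeros(3)

for _ in range(steps):
    # Semi-implicit Euler step: Coulomb force plus Stokes drag.
    f = q * E_field - c_drag * v
    v = v + (f / m) * dt
    x = x + v * dt
    if x[2] <= 0.0:   # particle has landed on the substrate
        break

print(f"final height above substrate: {x[2]:.2e} m")
```

    With these parameters the particle relaxes quickly to its terminal velocity q|E|/c_drag and reaches the substrate within the simulated window; sweeping the field strength in such a loop is one way to study deposition behavior as a function of the applied field, as the abstract proposes.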

  17. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sewell, Christopher [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Meredith, Jeremy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  18. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pugmire, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rogers, David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States)

    2017-10-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  19. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D.; Sewell, Christopher (LANL); Childs, Hank (U of Oregon); Ma, Kwan-Liu (UC Davis); Geveci, Berk (Kitware); Meredith, Jeremy (ORNL)

    2016-05-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  20. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pugmire, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rogers, David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States)

    2017-05-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  1. How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation.

    Science.gov (United States)

    Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul

    2016-02-01

    The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
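
    The simple view-matching strategy discussed in this record can be sketched with a rotational image difference function: roll a low-resolution panoramic view in azimuth and take the best-matching shift as the heading estimate. This is a generic illustration of the technique, not the authors' code:

```python
import numpy as np

def best_heading(stored, current):
    """Rotational image difference function: return the azimuthal shift
    (in pixels) of the current panoramic view that best matches a
    stored view, as in simple view-matching models of visual navigation."""
    n = stored.shape[1]
    rmsd = [np.sqrt(np.mean((np.roll(current, -s, axis=1) - stored) ** 2))
            for s in range(n)]
    return int(np.argmin(rmsd))

rng = np.random.default_rng(0)
view = rng.random((4, 36))            # a 4 x 36 pixel low-resolution panorama
rotated = np.roll(view, 5, axis=1)    # the agent has turned by 5 pixels
shift = best_heading(view, rotated)   # recovers the 5-pixel rotation
```

    Treating separate azimuthal sectors of the panorama as independent sensors, as the record suggests, would amount to running this matcher on column slices and combining the directional estimates.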

  2. The effect of computer-aided detection markers on visual search and reader performance during concurrent reading of CT colonography

    International Nuclear Information System (INIS)

    Helbren, Emma; Taylor, Stuart A.; Fanshawe, Thomas R.; Mallett, Susan; Phillips, Peter; Boone, Darren; Gale, Alastair; Altman, Douglas G.; Manning, David; Halligan, Steve

    2015-01-01

    We aimed to identify the effect of computer-aided detection (CAD) on visual search and performance in CT Colonography (CTC) of inexperienced and experienced readers. Fifteen endoluminal CTC examinations were recorded, each with one polyp, and two videos were generated, one with and one without a CAD mark. Forty-two readers (17 experienced, 25 inexperienced) interpreted the videos during infrared visual search recording. CAD markers and polyps were treated as regions of interest in data processing. This multi-reader, multi-case study was analysed using multilevel modelling. CAD drew readers' attention to polyps faster, accelerating identification times: median 'time to first pursuit' was 0.48 s (IQR 0.27 to 0.87 s) with CAD, versus 0.58 s (IQR 0.35 to 1.06 s) without. For inexperienced readers, CAD also held visual attention for longer. All visual search metrics used to assess visual gaze behaviour demonstrated statistically significant differences when "with" and "without" CAD were compared. A significant increase in the number of correct polyp identifications across all readers was seen with CAD (74 % without CAD, 87 % with CAD; p < 0.001). CAD significantly alters visual search and polyp identification in readers viewing three-dimensional endoluminal CTC. For polyp and CAD marker pursuit times, CAD generally exerted a larger effect on inexperienced readers. (orig.)
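
    The 'time to first pursuit' metric used in this record reduces to a simple computation over gaze samples: the first timestamp at which gaze enters a region of interest. A minimal sketch (hypothetical data and ROI coordinates):

```python
import numpy as np

def time_to_first_pursuit(gaze_xy, times, roi):
    """First timestamp at which gaze enters a rectangular region of
    interest (x0, y0, x1, y1) -- e.g. a polyp or CAD marker -- or None
    if the ROI is never fixated. gaze_xy: (N, 2); times: (N,)."""
    x0, y0, x1, y1 = roi
    inside = ((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) &
              (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1))
    hits = np.flatnonzero(inside)
    return float(times[hits[0]]) if hits.size else None

times = np.arange(0.0, 2.0, 0.25)          # hypothetical 4 Hz sample clock
gaze = np.array([[10, 10], [20, 15], [48, 52], [55, 50],
                 [60, 60], [70, 70], [80, 80], [90, 90]], float)
t_first = time_to_first_pursuit(gaze, times, (45, 45, 65, 65))
```

    Real eye-tracking pipelines would first classify samples into fixations and pursuits, but the ROI-entry logic is the same.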

  3. The effect of computer-aided detection markers on visual search and reader performance during concurrent reading of CT colonography

    Energy Technology Data Exchange (ETDEWEB)

    Helbren, Emma; Taylor, Stuart A. [University College London, Centre for Medical Imaging, London (United Kingdom); Fanshawe, Thomas R.; Mallett, Susan [University of Oxford, Nuffield Department of Primary Care Health Sciences, Oxford (United Kingdom); Phillips, Peter [University of Cumbria, Health and Medical Sciences Group, Lancaster (United Kingdom); Boone, Darren [Colchester Hospital University NHS Foundation Trust and Anglia University, Colchester (United Kingdom); Gale, Alastair [Loughborough University, Applied Vision Research Centre, Loughborough (United Kingdom); Altman, Douglas G. [University of Oxford, Centre for Statistics in Medicine, Oxford (United Kingdom); Manning, David [Lancaster University, Lancaster Medical School, Faculty of Health and Medicine, Lancaster (United Kingdom); Halligan, Steve [University College London, Centre for Medical Imaging, London (United Kingdom); University College Hospital, Gastrointestinal Radiology, University College London, Centre for Medical Imaging, Podium Level 2, London, NW1 2BU (United Kingdom)

    2015-06-01

    We aimed to identify the effect of computer-aided detection (CAD) on visual search and performance in CT Colonography (CTC) of inexperienced and experienced readers. Fifteen endoluminal CTC examinations were recorded, each with one polyp, and two videos were generated, one with and one without a CAD mark. Forty-two readers (17 experienced, 25 inexperienced) interpreted the videos during infrared visual search recording. CAD markers and polyps were treated as regions of interest in data processing. This multi-reader, multi-case study was analysed using multilevel modelling. CAD drew readers' attention to polyps faster, accelerating identification times: median 'time to first pursuit' was 0.48 s (IQR 0.27 to 0.87 s) with CAD, versus 0.58 s (IQR 0.35 to 1.06 s) without. For inexperienced readers, CAD also held visual attention for longer. All visual search metrics used to assess visual gaze behaviour demonstrated statistically significant differences when "with" and "without" CAD were compared. A significant increase in the number of correct polyp identifications across all readers was seen with CAD (74 % without CAD, 87 % with CAD; p < 0.001). CAD significantly alters visual search and polyp identification in readers viewing three-dimensional endoluminal CTC. For polyp and CAD marker pursuit times, CAD generally exerted a larger effect on inexperienced readers. (orig.)

  4. Clinical Correlates of Computationally Derived Visual Field Defect Archetypes in Patients from a Glaucoma Clinic.

    Science.gov (United States)

    Cai, Sophie; Elze, Tobias; Bex, Peter J; Wiggs, Janey L; Pasquale, Louis R; Shen, Lucy Q

    2017-04-01

    To assess the clinical validity of visual field (VF) archetypal analysis, a previously developed machine learning method for decomposing any Humphrey VF (24-2) into a weighted sum of clinically recognizable VF loss patterns. For each of 16 previously identified VF loss patterns ("archetypes," denoted AT1 through AT16), we screened 30,995 reliable VFs to select 10-20 representative patients whose VFs had the highest decomposition coefficients for each archetype. VF global indices and patient ocular and demographic features were extracted retrospectively. Based on resemblances between VF archetypes and clinically observed VF patterns, hypotheses were generated for associations between certain VF archetypes and clinical features, such as an association between AT6 (central island, representing severe VF loss) and large cup-to-disk ratio (CDR). Distributions of the selected clinical features were compared between representative eyes of certain archetypes and all other eyes using the two-tailed t-test or Fisher exact test. 243 eyes from 243 patients were included, representative of AT1 through AT16. CDR was more often ≥ 0.7 among eyes representative of AT6 (central island; p = 0.002), AT10 (inferior arcuate defect; p = 0.048), AT14 (superior paracentral defect; p = 0.016), and AT16 (inferior paracentral defect; p = 0.016) than other eyes. CDR was more often 6D (p = 0.069). Shared clinical features between computationally derived VF archetypes and clinically observed VF patterns support the clinical validity of VF archetypal analysis.
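
    Decomposing a visual field into a nonnegative weighted sum of archetype patterns, as this record describes, is a constrained least-squares problem. A minimal stand-in (projected gradient descent on synthetic data; the archetypes and weights below are invented, not the published ones):

```python
import numpy as np

def decompose(vf, archetypes, iters=3000):
    """Express a visual field (flattened 24-2 grid) as a nonnegative
    weighted sum of archetype patterns via projected gradient descent
    on the least-squares objective 0.5 * ||w A - vf||^2."""
    A = archetypes                          # (K, P): K archetypes, P test points
    step = 1.0 / np.linalg.norm(A @ A.T, 2) # 1 / Lipschitz constant
    w = np.zeros(A.shape[0])
    for _ in range(iters):
        grad = A @ (w @ A - vf)             # gradient with respect to w
        w = np.maximum(w - step * grad, 0.0)  # project onto w >= 0
    return w

rng = np.random.default_rng(3)
archetypes = rng.random((4, 54))            # 4 hypothetical archetypes, 54 points
w_true = np.array([0.6, 0.0, 0.3, 0.1])
vf = w_true @ archetypes                    # noise-free synthetic field
w_hat = decompose(vf, archetypes)
```

    With real fields the decomposition coefficients would not be recovered exactly, but the largest coefficient identifies the dominant archetype, which is how the representative eyes in the study were selected.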

  5. Computational Methods for Tracking, Quantitative Assessment, and Visualization of C. elegans Locomotory Behavior.

    Directory of Open Access Journals (Sweden)

    Kyle Moy

    The nematode Caenorhabditis elegans provides a unique opportunity to interrogate the neural basis of behavior at single neuron resolution. In C. elegans, neural circuits that control behaviors can be formulated based on its complete neural connection map, and easily assessed by applying advanced genetic tools that allow for modulation in the activity of specific neurons. Importantly, C. elegans exhibits several elaborate behaviors that can be empirically quantified and analyzed, thus providing a means to assess the contribution of specific neural circuits to behavioral output. In particular, locomotory behavior can be recorded and analyzed with computational and mathematical tools. Here, we describe a robust single worm-tracking system, which is based on the open-source Python programming language, and an analysis system, which implements path-related algorithms. Our tracking system was designed to accommodate worms that explore a large area with frequent turns and reversals at high speeds. As a proof of principle, we used our tracker to record the movements of wild-type animals that were freshly removed from abundant bacterial food, and determined how wild-type animals change locomotory behavior over a long period of time. Consistent with previous findings, we observed that wild-type animals show a transition from area-restricted local search to global search over time. Intriguingly, we found that wild-type animals initially exhibit short, random movements interrupted by infrequent long trajectories. This movement pattern often coincides with local/global search behavior, and visually resembles Lévy flight search, a search behavior conserved across species. Our mathematical analysis showed that while most of the animals exhibited Brownian walks, approximately 20% of the animals exhibited Lévy flights, indicating that C. elegans can use Lévy flights for efficient food search. In summary, our tracker and analysis software will help analyze the…
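
    The Brownian-versus-Lévy distinction in this record is usually drawn by comparing how well an exponential versus a power-law distribution fits the observed step lengths. A crude sketch using Clauset-style maximum-likelihood estimators (a simplification of the published analyses, which also test goodness of fit):

```python
import numpy as np

def classify_walk(step_lengths, xmin=1.0):
    """Fit a power-law and a shifted-exponential tail to step lengths
    by maximum likelihood and return whichever fits better: a crude
    Levy-flight versus Brownian-walk classifier."""
    x = step_lengths[step_lengths >= xmin]
    n = len(x)
    logs = np.log(x / xmin)
    alpha = 1.0 + n / logs.sum()                  # power-law exponent MLE
    ll_pl = n * np.log((alpha - 1.0) / xmin) - alpha * logs.sum()
    lam = 1.0 / np.mean(x - xmin)                 # shifted-exponential MLE
    ll_exp = n * np.log(lam) - lam * np.sum(x - xmin)
    return "levy" if ll_pl > ll_exp else "brownian"

rng = np.random.default_rng(4)
u = rng.random(5000)
levy_steps = (1.0 - u) ** -1.0                    # power law, alpha = 2
brownian_steps = 1.0 + rng.exponential(1.0, 5000)  # light-tailed steps
```

    Heavy-tailed (Lévy-like) step lengths make the power-law likelihood dominate, while light-tailed steps favor the exponential model.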

  6. Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering

    International Nuclear Information System (INIS)

    Guenther, P.; Holland-Cunz, S.; Waag, K.L.

    2006-01-01

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from this. A newly developed and modified raycasting-based powerful 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to the easy handling and high-quality visualization with enormous gain of information, the presented system is now an established part of routine surgical planning. (orig.)

  7. [Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering].

    Science.gov (United States)

    Günther, P; Tröger, J; Holland-Cunz, S; Waag, K L; Schenk, J P

    2006-08-01

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from this. A newly developed and modified raycasting-based powerful 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to the easy handling and high-quality visualization with enormous gain of information, the presented system is now an established part of routine surgical planning.

  8. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to read in Computer Aided Design (CAD) files

    International Nuclear Information System (INIS)

    Schwarz, Randy A.; Carter, Leeland L.

    2004-01-01

    Monte Carlo N-Particle Transport Code (MCNP) (Reference 1) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle (References 2 to 11) is recognized internationally as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant enhanced the capabilities of the MCNP Visual Editor to allow it to read in a 2D Computer Aided Design (CAD) file, allowing the user to modify and view the 2D CAD file and then electronically generate a valid MCNP input geometry with a user specified axial extent.

  9. Designing Serious Computer Games for People With Moderate and Advanced Dementia: Interdisciplinary Theory-Driven Pilot Study.

    Science.gov (United States)

    Tziraki, Chariklia; Berenbaum, Rakel; Gross, Daniel; Abikhzer, Judith; Ben-David, Boaz M

    2017-07-31

    The field of serious games for people with dementia (PwD) is mostly driven by game-design principles typically applied to games created by and for younger individuals. Little has been done developing serious games to help PwD maintain cognition and to support functionality. We aimed to create a theory-based serious game for PwD, with input from a multi-disciplinary team familiar with aging, dementia, and gaming theory, as well as direct input from end users (the iterative process). Targeting enhanced self-efficacy in daily activities, the goal was to generate a game that is acceptable, accessible and engaging for PwD. The theory-driven game development was based on the following learning theories: learning in context, errorless learning, building on capacities, and acknowledging biological changes, all with the aim of boosting self-efficacy. The iterative participatory process was used for game screen development with input of 34 PwD and 14 healthy community dwelling older adults, aged over 65 years. Development of game screens was informed by the bio-psychological aging related disabilities (ie, motor, visual, and perception) as well as remaining neuropsychological capacities (ie, implicit memory) of PwD. At the conclusion of the iterative development process, a prototype game with 39 screens was used for a pilot study with 24 PwD and 14 healthy community dwelling older adults. The game was played twice weekly for 10 weeks. Quantitative analysis showed that the average time for successful screen completion was significantly longer for PwD compared with healthy older adults. Both PwD and controls showed an equivalent linear increase in the speed for task completion with practice by the third session (P…). PwD found the game engaging and fun. Healthy older adults found the game too easy. Increase in self-reported self-efficacy was documented with PwD only. Our study demonstrated that PwD's speed improved with practice at the same rate as healthy older adults. This implies that when tasks…

  10. A clinically driven variant prioritization framework outperforms purely computational approaches for the diagnostic analysis of singleton WES data.

    Science.gov (United States)

    Stark, Zornitza; Dashnow, Harriet; Lunke, Sebastian; Tan, Tiong Y; Yeung, Alison; Sadedin, Simon; Thorne, Natalie; Macciocca, Ivan; Gaff, Clara; Oshlack, Alicia; White, Susan M; James, Paul A

    2017-11-01

    Rapid identification of clinically significant variants is key to the successful application of next generation sequencing technologies in clinical practice. The Melbourne Genomics Health Alliance (MGHA) variant prioritization framework employs a gene prioritization index based on clinician-generated a priori gene lists, and a variant prioritization index (VPI) based on rarity, conservation and protein effect. We used data from 80 patients who underwent singleton whole exome sequencing (WES) to test the ability of the framework to rank causative variants highly, and compared it against the performance of other gene and variant prioritization tools. Causative variants were identified in 59 of the patients. Using the MGHA prioritization framework the average rank of the causative variant was 2.24, with 76% ranked as the top priority variant, and 90% ranked within the top five. Using clinician-generated gene lists resulted in ranking causative variants an average of 8.2 positions higher than prioritization based on variant properties alone. This clinically driven prioritization approach significantly outperformed purely computational tools, placing a greater proportion of causative variants top or in the top 5 (permutation P-value=0.001). Clinicians included 40 of the 49 WES diagnoses in their a priori list of differential diagnoses (81%). The lists generated by PhenoTips and Phenomizer contained 14 (29%) and 18 (37%) of these diagnoses respectively. These results highlight the benefits of clinically led variant prioritization in increasing the efficiency of singleton WES data analysis and have important implications for developing models for the funding and delivery of genomic services.
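
    The two-tier prioritization this record describes (clinician-generated gene lists dominate; the variant prioritization index breaks ties) can be sketched as a simple lexicographic sort. All identifiers and field names below are hypothetical, not the MGHA schema:

```python
def rank_variants(variants):
    """Rank variants by (gene priority, variant priority index):
    membership and weight on the a-priori clinical gene list dominate,
    and the rarity/conservation/protein-effect score breaks ties."""
    return sorted(variants,
                  key=lambda v: (v["gene_priority"], v["vpi"]),
                  reverse=True)

candidates = [
    {"id": "chr1:g.100A>T", "gene_priority": 0, "vpi": 0.9},  # not on list
    {"id": "chr2:g.200C>G", "gene_priority": 2, "vpi": 0.4},
    {"id": "chr2:g.250G>A", "gene_priority": 2, "vpi": 0.7},
]
ranked = rank_variants(candidates)
```

    Note how the off-list variant ranks last despite its high variant score: this is the behavior that lets clinically led prioritization outperform purely computational scoring in the study.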

  11. Thunderstorms in my computer : The effect of visual dynamics and sound in a 3D environment

    NARCIS (Netherlands)

    Houtkamp, J.; Schuurink, E.L.; Toet, A.

    2008-01-01

    We assessed the effects of the addition of dynamic visual elements and sounds to a levee patroller training game on the appraisal of the environment and weather conditions, the engagement of the users and their performance. Results show that the combination of visual dynamics and sounds best conveys

  12. Computers, Automation, and the Employment of Persons Who Are Blind or Visually Impaired.

    Science.gov (United States)

    Mather, J.

    1994-01-01

    This article discusses the impact of technology on the formation of skills and the career advancement of persons who are blind or visually impaired. It concludes that dependence on technology (computerization and automation) and the mechanistic aspects of jobs may trap blind and visually impaired workers in occupations with narrow career paths…

  13. Flow velocity-driven differentiation of human mesenchymal stromal cells in silk fibroin scaffolds: A combined experimental and computational approach.

    Directory of Open Access Journals (Sweden)

    Jolanda Rita Vetsch

    Mechanical loading plays a major role in bone remodeling and fracture healing. Mimicking the concept of mechanical loading of bone has been widely studied in bone tissue engineering by perfusion cultures. Nevertheless, there is still debate regarding the in-vitro mechanical stimulation regime. This study aims at investigating the effect of two different flow rates (vlow = 0.001 m/s and vhigh = 0.061 m/s) on the growth of mineralized tissue produced by human mesenchymal stromal cells cultured on 3-D silk fibroin scaffolds. The flow rates applied were chosen to mimic the mechanical environment during early fracture healing or during bone remodeling, respectively. Scaffolds cultured under static conditions served as a control. Time-lapsed micro-computed tomography showed that mineralized extracellular matrix formation was completely inhibited at vlow compared to vhigh and the static group. Biochemical assays and histology confirmed these results and showed enhanced osteogenic differentiation at vhigh whereas the amount of DNA was increased at vlow. The biological response at vlow might correspond to the early stage of fracture healing, where cell proliferation and matrix production is prominent. Visual mapping of shear stresses, simulated by computational fluid dynamics, to 3-D micro-computed tomography data revealed that shear stresses up to 0.39 mPa induced a higher DNA amount and shear stresses between 0.55 mPa and 24 mPa induced osteogenic differentiation. This study demonstrates the feasibility to drive cell behavior of human mesenchymal stromal cells by the flow velocity applied, in agreement with mechanical loading mimicking early fracture healing (vlow) or bone remodeling (vhigh). These results can be used in the future to tightly control the behavior of human mesenchymal stromal cells towards proliferation or differentiation. Additionally, the combination of experiment and simulation presented is a strong tool to link biological responses to…

  14. Orientation-modulated attention effect on visual evoked potential: Application for PIN system using brain-computer interface.

    Science.gov (United States)

    Wilaiprasitporn, Theerawit; Yagi, Tohru

    2015-01-01

    This research demonstrates the orientation-modulated attention effect on visual evoked potential. We combined this finding with our previous findings about the motion-modulated attention effect and used the result to develop novel visual stimuli for a personal identification number (PIN) application based on a brain-computer interface (BCI) framework. An electroencephalography amplifier with a single electrode channel was sufficient for our application. A computationally inexpensive algorithm and small datasets were used in processing. Seven healthy volunteers participated in experiments to measure offline performance. Mean accuracy was 83.3% at 13.9 bits/min. Encouraged by these results, we plan to continue developing the BCI-based personal identification application toward real-time systems.
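
    A single-channel SSVEP classifier of the kind this record relies on can be computationally trivial: pick the candidate tagging frequency with the greatest spectral power in the epoch. A generic sketch (synthetic EEG, fundamental frequency only; real systems often also score harmonics):

```python
import numpy as np

def detect_ssvep(signal, fs, candidate_freqs):
    """Return the candidate stimulus frequency whose spectral power is
    largest in a single-channel EEG epoch -- a minimal SSVEP-BCI
    classifier using only the fundamental frequency."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(powers))]

fs = 250                                  # Hz, hypothetical sampling rate
t = np.arange(fs * 2) / fs                # 2-second epoch
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(len(t))
picked = detect_ssvep(eeg, fs, [8.0, 10.0, 12.0, 15.0])
```

    A 2-second epoch gives 0.5 Hz frequency resolution, so the candidate frequencies above fall exactly on FFT bins; in practice, epoch length trades accuracy against the bit rate figures quoted in the record.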

  15. Creating the computer player: an engaging and collaborative approach to introduce computational thinking by combining ‘unplugged’ activities with visual programming

    Directory of Open Access Journals (Sweden)

    Anna Gardeli

    2017-11-01

    Ongoing research is being conducted on appropriate course design, practices and teacher interventions for improving the efficiency of computer science and programming courses in K-12 education. The trend is towards a more constructivist problem-based learning approach. Computational thinking, which refers to formulating and solving problems in a form that can be efficiently processed by a computer, raises an important educational challenge. Our research aims to explore possible ways of enriching computer science teaching with a focus on development of computational thinking. We have prepared and evaluated a learning intervention for introducing computer programming to children between 10 and 14 years old; this involves students working in groups to program the behavior of the computer player of a well-known game. The programming process is split into two parts. First, students design a high-level version of their algorithm during an ‘unplugged’ pen & paper phase, and then they encode their solution as an executable program in a visual programming environment. Encouraging evaluation results have been achieved regarding the educational and motivational value of the proposed approach.

  16. A computer-assisted test for the electrophysiological and psychophysical measurement of dynamic visual function based on motion contrast.

    Science.gov (United States)

    Wist, E R; Ehrenstein, W H; Schrauf, M; Schraus, M

    1998-03-13

    A new test is described that allows for electrophysiological and psychophysical measurement of visual function based on motion contrast. In a computer-generated random-dot display, completely camouflaged Landolt rings become visible only when dots within the target area are moved briefly while those of the background remain stationary. Thus, detection of contours and the location of the gap in the ring rely on motion contrast (form-from-motion) instead of luminance contrast. A standard version of this test has been used to assess visual performance in relation to age, in screening professional groups (truck drivers) and in clinical groups (glaucoma patients). Aside from this standard version, the computer program easily allows for various modifications. These include the option of a synchronizing trigger signal to allow for recording of time-locked motion-onset visual-evoked responses, the reversal of target and background motion, and the displacement of random-dot targets across stationary backgrounds. In all instances, task difficulty is manipulated by changing the percentage of moving dots within the target (or background). The present test offers a short, convenient method to probe dynamic visual functions relying on suprathreshold motion-contrast stimuli and complements other routine tests of form, contrast, depth, and color vision.
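
    The motion-contrast principle in this record is easy to demonstrate: generate two random-dot frames in which only dots inside a target region move, so the target is invisible in either frame alone. A toy sketch with a square target region (the region, dot count, and jitter amplitude are invented; the actual test uses Landolt rings):

```python
import numpy as np

def motion_contrast_frames(n_dots=500, size=100, seed=0):
    """Two frames of a random-dot display in which only dots inside a
    central square are jittered between frames; the square is defined
    purely by motion contrast, never by luminance contrast."""
    rng = np.random.default_rng(seed)
    dots = rng.uniform(0, size, (n_dots, 2))
    inside = np.all((dots > 40) & (dots < 60), axis=1)  # hypothetical target
    frame2 = dots.copy()
    frame2[inside] += rng.uniform(-2, 2, (inside.sum(), 2))  # move target dots only
    return dots, frame2, inside

f1, f2, mask = motion_contrast_frames()
```

    Task difficulty, as the record notes, would be manipulated by jittering only a percentage of the target (or background) dots rather than all of them.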

  17. Data management, code deployment, and scientific visualization to enhance scientific discovery in fusion research through advanced computing

    International Nuclear Information System (INIS)

    Schissel, D.P.; Finkelstein, A.; Foster, I.T.; Fredian, T.W.; Greenwald, M.J.; Hansen, C.D.; Johnson, C.R.; Keahey, K.; Klasky, S.A.; Li, K.; McCune, D.C.; Peng, Q.; Stevens, R.; Thompson, M.R.

    2002-01-01

    The long-term vision of the Fusion Collaboratory described in this paper is to transform fusion research and accelerate scientific understanding and innovation so as to revolutionize the design of a fusion energy source. The Collaboratory will create and deploy collaborative software tools that will enable more efficient utilization of existing experimental facilities and more effective integration of experiment, theory, and modeling. The computer science research necessary to create the Collaboratory is centered on three activities: security, remote and distributed computing, and scientific visualization. It is anticipated that the presently envisioned Fusion Collaboratory software tools will require 3 years to complete.

  18. simEye: computer-based simulation of visual perception under various eye defects using Zernike polynomials

    OpenAIRE

    Fink, Wolfgang; Micol, Daniel

    2006-01-01

    We describe a computer eye model that allows for aspheric surfaces and a three-dimensional computer-based ray-tracing technique to simulate optical properties of the human eye and visual perception under various eye defects. Eye surfaces, such as the cornea, eye lens, and retina, are modeled or approximated by a set of Zernike polynomials that are fitted to input data for the respective surfaces. A ray-tracing procedure propagates light rays using Snell’s law of refraction from an input objec...
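
    The refraction step at each surface in a ray tracer of the kind this record describes follows the vector form of Snell's law. A self-contained sketch (not the simEye implementation):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract a unit ray direction d at a surface with unit normal n
    (pointing toward the incident side) between media with refractive
    indices n1 -> n2, via the vector form of Snell's law. Returns None
    on total internal reflection."""
    cos_i = -np.dot(d, n)
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0:
        return None                       # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

n = np.array([0.0, 0.0, 1.0])             # surface normal
d = np.array([0.0, 0.0, -1.0])            # ray at normal incidence
d45 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)
out = refract(d45, n, 1.0, 1.336)         # air into aqueous-like medium
```

    In the full model, the surface normals would come from the Zernike-polynomial fits to the cornea and lens surfaces rather than a flat plane.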

  19. Is the preference of natural versus man-made scenes driven by bottom-up processing of the visual features of nature?

    Directory of Open Access Journals (Sweden)

    Omid Kardan

    2015-04-01

    Previous research has shown that viewing images of nature scenes can have a beneficial effect on memory, attention and mood. In this study we aimed to determine whether the preference of natural versus man-made scenes is driven by bottom-up processing of the low-level visual features of nature. We used participants' ratings of perceived naturalness as well as aesthetic preference for 307 images with varied natural and urban content. We then quantified ten low-level image features for each image (a combination of spatial and color properties). These features were used to predict aesthetic preference in the images, as well as to decompose perceived naturalness into its predictable (modelled by the low-level visual features) and non-modelled aspects. Interactions of these separate aspects of naturalness with the time it took to make a preference judgment showed that naturalness based on low-level features related more to preference when the judgment was faster (bottom-up). On the other hand, perceived naturalness that was not modelled by low-level features was related more to preference when the judgment was slower. A quadratic discriminant classification analysis showed how relevant each aspect of naturalness (modelled and non-modelled) was to predicting preference ratings, as well as the image features on their own. Finally, we compared the effect of color-related and structure-related modelled naturalness, and the remaining unmodelled naturalness, in predicting aesthetic preference. In summary, bottom-up (color and spatial) properties of natural images captured by our features and the non-modelled naturalness are important to aesthetic judgments of natural and man-made scenes, with each predicting unique variance.
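
    Predicting preference ratings from a feature matrix, as in this record, is at its simplest an ordinary least-squares fit. A synthetic sketch (the study used richer models; the feature values and weights here are simulated, not the published data):

```python
import numpy as np

def fit_preference(features, ratings):
    """Ordinary least squares: predict aesthetic-preference ratings from
    low-level image features, with an intercept column prepended.
    Returns the coefficient vector (intercept first)."""
    X = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    return coef

rng = np.random.default_rng(2)
feats = rng.random((307, 10))          # 10 low-level features per image
true_w = rng.standard_normal(10)
ratings = 3.0 + feats @ true_w + 0.01 * rng.standard_normal(307)
coef = fit_preference(feats, ratings)
```

    The fitted coefficients indicate which spatial and color properties push preference up or down, which is the kind of evidence the record uses to argue for bottom-up processing.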

  20. Recent Advances in Immersive Visualization of Ocean Data: Virtual Reality Through the Web on Your Laptop Computer

    Science.gov (United States)

    Hermann, A. J.; Moore, C.; Soreide, N. N.

    2002-12-01

    Ocean circulation is irrefutably three dimensional, and powerful new measurement technologies and numerical models promise to expand our three-dimensional knowledge of the dynamics further each year. Yet, most ocean data and model output is still viewed using two-dimensional maps. Immersive visualization techniques allow the investigator to view their data as a three dimensional world of surfaces and vectors which evolves through time. The experience is not unlike holding a part of the ocean basin in one's hand, turning and examining it from different angles. While immersive, three dimensional visualization has been possible for at least a decade, the technology was until recently inaccessible (both physically and financially) for most researchers. It is not yet fully appreciated by practicing oceanographers how new, inexpensive computing hardware and software (e.g. graphics cards and controllers designed for the huge PC gaming market) can be employed for immersive, three dimensional, color visualization of their increasingly huge datasets and model output. In fact, the latest developments allow immersive visualization through web servers, giving scientists the ability to "fly through" three-dimensional data stored half a world away. Here we explore what additional insight is gained through immersive visualization, describe how scientists of very modest means can easily avail themselves of the latest technology, and demonstrate its implementation on a web server for Pacific Ocean model output.

  1. Stimulus specificity of a steady-state visual-evoked potential-based brain-computer interface

    Science.gov (United States)

    Ng, Kian B.; Bradley, Andrew P.; Cunnington, Ross

    2012-06-01

    The mechanisms of neural excitation and inhibition when given a visual stimulus are well studied. It has been established that changing stimulus specificity such as luminance contrast or spatial frequency can alter the neuronal activity and thus modulate the visual-evoked response. In this paper, we study the effect that stimulus specificity has on the classification performance of a steady-state visual-evoked potential-based brain-computer interface (SSVEP-BCI). For example, we investigate how closely two visual stimuli can be placed before they compete for neural representation in the cortex and thus influence BCI classification accuracy. We characterize stimulus specificity using the four stimulus parameters commonly encountered in SSVEP-BCI design: temporal frequency, spatial size, number of simultaneously displayed stimuli and their spatial proximity. By varying these quantities and measuring the SSVEP-BCI classification accuracy, we are able to determine the parameters that provide optimal performance. Our results show that superior SSVEP-BCI accuracy is attained when stimuli are placed spatially more than 5° apart, with a size that subtends at least 2° of visual angle, and when using a tagging frequency between the high-alpha and beta bands. These findings may assist in deciding the stimulus parameters for optimal SSVEP-BCI design.
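
A minimal frequency-tagging decoder of the kind such SSVEP-BCIs build on can be sketched as follows; this is an illustrative spectral-peak baseline, not the classifier evaluated in the paper (practical systems often use canonical correlation analysis and include harmonics):

```python
import numpy as np

def classify_ssvep(signal, fs, candidate_freqs):
    """Pick the tagging frequency whose spectral power is largest."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(powers))]

# Synthetic 2 s recording at 250 Hz dominated by a 12 Hz response
fs = 250
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 12 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
```

With a 2 s window the frequency resolution is 0.5 Hz, so each candidate tagging frequency falls on an exact FFT bin.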

  2. Comparison of onboard low-field magnetic resonance imaging versus onboard computed tomography for anatomy visualization in radiotherapy.

    Science.gov (United States)

    Noel, Camille E; Parikh, Parag J; Spencer, Christopher R; Green, Olga L; Hu, Yanle; Mutic, Sasa; Olsen, Jeffrey R

    2015-01-01

    Onboard magnetic resonance imaging (OB-MRI) for daily localization and adaptive radiotherapy has been under development by several groups. However, no clinical studies have evaluated whether OB-MRI improves visualization of the target and organs at risk (OARs) compared to standard onboard computed tomography (OB-CT). This study compared visualization of patient anatomy on images acquired on the MRI-⁶⁰Co ViewRay system to those acquired with OB-CT. Fourteen patients enrolled on a protocol approved by the Institutional Review Board (IRB) and undergoing image-guided radiotherapy for cancer in the thorax (n = 2), pelvis (n = 6), abdomen (n = 3) or head and neck (n = 3) were imaged with OB-MRI and OB-CT. For each of the 14 patients, the OB-MRI and OB-CT datasets were displayed side-by-side and independently reviewed by three radiation oncologists. Each physician was asked to evaluate which dataset offered better visualization of the target and OARs. A quantitative contouring study was performed on two abdominal patients to assess if OB-MRI could offer improved inter-observer segmentation agreement for adaptive planning. In total 221 OARs and 10 targets were compared for visualization on OB-MRI and OB-CT by each of the three physicians. The majority of physicians (two or more) evaluated visualization on MRI as better for 71% of structures, worse for 10% of structures, and equivalent for 14% of structures; 5% of structures were not visible on either. Physicians agreed unanimously for 74% and in majority for > 99% of structures. Targets were better visualized on MRI in 4/10 cases, and never on OB-CT. Low-field MRI provides better anatomic visualization of many radiotherapy targets and most OARs as compared to OB-CT. Further studies with OB-MRI should be pursued.

  3. Stability and economy analysis based on computational fluid dynamics and field testing of hybrid-driven underwater glider with the water quality sensor in Danjiangkou Reservoir

    Directory of Open Access Journals (Sweden)

    Chao Li

    2015-12-01

    Full Text Available The hybrid-driven underwater glider is a new kind of unmanned platform for water quality monitoring. It has advantages such as high controllability and maneuverability, low cost, easy operation, and the ability to carry multiple sensors. This article develops a hybrid-driven underwater glider, PETRELII, and integrates a water quality monitoring sensor. Considering stability and economy, an optimal layout scheme is selected from four candidates by simulation using the computational fluid dynamics method. Trials were carried out in Danjiangkou Reservoir—important headwaters of the Middle Route of the South-to-North Water Diversion Project. In the trials, a monitoring strategy with polygonal mixed-motion was adopted to make full use of the advantages of the unmanned platform. The measuring data, including temperature, dissolved oxygen, conductivity, pH, turbidity, chlorophyll, and ammonia nitrogen, were obtained. These data validate the practicability of the theoretical layout obtained using the computational fluid dynamics method and the practical performance of PETRELII with the sensor.

  4. Centrifuge in space fluid flow visualization experiment

    Science.gov (United States)

    Arnold, William A.; Wilcox, William R.; Regel, Liya L.; Dunbar, Bonnie J.

    1993-01-01

    A prototype flow visualization system is constructed to examine buoyancy driven flows during centrifugation in space. An axial density gradient is formed by imposing a thermal gradient between the two ends of the test cell. Numerical computations for this geometry showed that the Prandtl number plays a limited part in determining the flow.

  5. Combining computational analyses and interactive visualization for document exploration and sensemaking in jigsaw.

    Science.gov (United States)

    Görg, Carsten; Liu, Zhicheng; Kihm, Jaeyeon; Choo, Jaegul; Park, Haesun; Stasko, John

    2013-10-01

    Investigators across many disciplines and organizations must sift through large collections of text documents to understand and piece together information. Whether they are fighting crime, curing diseases, deciding what car to buy, or researching a new field, inevitably investigators will encounter text documents. Taking a visual analytics approach, we integrate multiple text analysis algorithms with a suite of interactive visualizations to provide a flexible and powerful environment that allows analysts to explore collections of documents while sensemaking. Our particular focus is on the process of integrating automated analyses with interactive visualizations in a smooth and fluid manner. We illustrate this integration through two example scenarios: an academic researcher examining InfoVis and VAST conference papers and a consumer exploring car reviews while pondering a purchase decision. Finally, we provide lessons learned toward the design and implementation of visual analytics systems for document exploration and understanding.

  6. The role of visualization in learning from computer-based images

    Science.gov (United States)

    Piburn, Michael D.; Reynolds, Stephen J.; McAuliffe, Carla; Leedy, Debra E.; Birk, James P.; Johnson, Julia K.

    2005-05-01

    Among the sciences, the practice of geology is especially visual. To assess the role of spatial ability in learning geology, we designed an experiment using: (1) web-based versions of spatial visualization tests, (2) a geospatial test, and (3) multimedia instructional modules built around QuickTime Virtual Reality movies. Students in control and experimental sections were administered measures of spatial orientation and visualization, as well as a content-based geospatial examination. All subjects improved significantly in their scores on spatial visualization and the geospatial examination. There was no change in their scores on spatial orientation. A three-way analysis of variance, with the geospatial examination as the dependent variable, revealed significant main effects favoring the experimental group and a significant interaction between treatment and gender. These results demonstrate that spatial ability can be improved through instruction, that learning of geological content will improve as a result, and that differences in performance between the genders can be eliminated.

  7. A malaria diagnostic tool based on computer vision screening and visualization of Plasmodium falciparum candidate areas in digitized blood smears.

    Directory of Open Access Journals (Sweden)

    Nina Linder

    Full Text Available INTRODUCTION: Microscopy is the gold standard for diagnosis of malaria; however, manual evaluation of blood films is highly dependent on skilled personnel in a time-consuming, error-prone and repetitive process. In this study we propose a method using computer vision detection and visualization of only the diagnostically most relevant sample regions in digitized blood smears. METHODS: Giemsa-stained thin blood films with P. falciparum ring-stage trophozoites (n = 27) and uninfected controls (n = 20) were digitally scanned with an oil immersion objective (0.1 µm/pixel) to capture approximately 50,000 erythrocytes per sample. Parasite candidate regions were identified based on color and object size, followed by extraction of image features (local binary patterns, local contrast and scale-invariant feature transform descriptors) used as input to a support vector machine classifier. The classifier was trained on digital slides from ten patients and validated on six samples. RESULTS: The diagnostic accuracy was tested on 31 samples (19 infected and 12 controls). From each digitized area of a blood smear, a panel with the 128 most probable parasite candidate regions was generated. Two expert microscopists were asked to visually inspect the panel on a tablet computer and to judge whether the patient was infected with P. falciparum. Using the diagnostic tool, the method achieved a diagnostic sensitivity and specificity of 95% and 100% for one reader and 90% and 100% for the other. Parasitemia was separately calculated by the automated system and the correlation coefficient between manual and automated parasitemia counts was 0.97. CONCLUSION: We developed a decision support system for detecting malaria parasites using a computer vision algorithm combined with visualization of sample areas with the highest probability of malaria infection. The system provides a novel method for blood smear screening with a significantly reduced need for
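
As a sketch of the classification stage, a minimal hinge-loss linear SVM on toy two-class data; the paper's actual features (local binary patterns, local contrast, SIFT) and its SVM implementation are replaced here by illustrative stand-ins:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=500):
    """Minimal linear SVM trained by subgradient descent on the
    regularized hinge loss; labels y are in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1            # margin violators
        w -= lr * (lam * w - (y[viol] @ X[viol]) / n)
        b -= lr * (-y[viol].sum() / n)
    return w, b

# Toy "candidate region" feature vectors for two separable classes
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(2, 0.3, (40, 2)), rng.normal(-2, 0.3, (40, 2))])
y = np.array([1] * 40 + [-1] * 40)
w, b = train_linear_svm(X, y)
```

Prediction is then `np.sign(X @ w + b)`; in the paper's pipeline the candidate regions with the highest decision values populate the panel shown to the microscopist.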

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  9. User-driven sampling strategies in image exploitation

    Science.gov (United States)

    Harvey, Neal; Porter, Reid

    2013-12-01

    Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.

  10. A Computational Analysis of the Function of Three Inhibitory Cell Types in Contextual Visual Processing

    Directory of Open Access Journals (Sweden)

    Jung H. Lee

    2017-04-01

    Full Text Available Most cortical inhibitory cell types exclusively express one of three genes: parvalbumin, somatostatin and 5HT3a. We conjecture that these three inhibitory neuron types possess distinct roles in visual contextual processing based on two observations. First, they have distinctive synaptic sources and targets over different spatial extents and from different areas. Second, the visual responses of cortical neurons are affected not only by local cues, but also by visual context. We use modeling to relate structural information to function in primary visual cortex (V1) of the mouse, and investigate their role in contextual visual processing. Our findings are three-fold. First, the inhibition mediated by parvalbumin-positive (PV) cells mediates local processing and could underlie their role in boundary detection. Second, the inhibition mediated by somatostatin-positive (SST) cells facilitates longer-range spatial competition among receptive fields. Third, non-specific top-down modulation to interneurons expressing vasoactive intestinal polypeptide (VIP), a subclass of 5HT3a neurons, can selectively enhance V1 responses.

  11. A single photon emission computed tomograph based on a limited number of detectors for fluid flow visualization

    International Nuclear Information System (INIS)

    Legoupil, S.

    1999-01-01

    We present in this work a method for fluid flow visualization in a system using radioactive tracers. The method is based on single photon emission computed tomography techniques applied to a limited number of discrete detectors. We propose a method for estimating the transport matrix of photons associated with the acquisition system, based on modeling the profiles acquired for a set of point sources located in the imaged volume. Monte Carlo simulations make it possible to separate scattered photons from those directly collected by the system. The influence of the tracer energy is discussed. The reconstruction method is based on the maximum-likelihood expectation-maximization algorithm. An experimental device based on 36 detectors was realised for the visualization of water circulation in a vessel. Video monitoring allows the dyed-water tracer to be visualized. Dye and radioactive tracers are injected simultaneously into a water flow circulating in the vessel. Reconstructed and video images are compared. Quantitative and qualitative analyses show that fluid flow visualization is feasible with a limited number of detectors. This method can be applied to systems involving fluid circulation. (author)
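
The maximum-likelihood expectation-maximization (MLEM) reconstruction step can be sketched in a few lines; the `mlem` helper and the toy problem dimensions are illustrative, not the author's implementation:

```python
import numpy as np

def mlem(A, y, iters=50):
    """MLEM reconstruction: x <- x * A^T(y / Ax) / A^T 1.
    A is the (nonnegative) transport matrix mapping voxel
    activities x to expected detector counts y."""
    x = np.ones(A.shape[1])                   # flat nonnegative start
    sens = np.maximum(A.sum(axis=0), 1e-12)   # per-voxel sensitivity
    for _ in range(iters):
        proj = np.maximum(A @ x, 1e-12)       # forward projection
        x *= (A.T @ (y / proj)) / sens        # multiplicative update
    return x
```

The multiplicative update keeps the estimate nonnegative at every iteration, which is one reason MLEM suits emission tomography with a small number of detectors.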

  12. A Real-Time Magnetoencephalography Brain-Computer Interface Using Interactive 3D Visualization and the Hadoop Ecosystem

    Directory of Open Access Journals (Sweden)

    Wilbert A. McClay

    2015-09-01

    Full Text Available Ecumenically, the fastest growing segment of Big Data is human biology-related data, and annual data creation is on the order of zettabytes. The implications are global across industries, of which the treatment of brain-related illnesses and trauma could see the most significant and immediate effects. The next generation of health care IT and sensory devices is acquiring and storing massive amounts of patient-related data. An innovative Brain-Computer Interface (BCI) for interactive 3D visualization is presented utilizing the Hadoop Ecosystem for data analysis and storage. The BCI is an implementation of Bayesian factor analysis algorithms that can distinguish distinct thought actions using magnetoencephalographic (MEG) brain signals. We have collected data on five subjects, yielding 90% positive performance in MEG mid- and post-movement activity. We describe a driver that substitutes the actions of the BCI for mouse button presses for real-time use in visual simulations. This process has been added into a flight visualization demonstration. By thinking left or right, the user experiences the aircraft turning in the chosen direction. The driver components of the BCI can be compiled into any software and substitute a user’s intent for specific keyboard strikes or mouse button presses. The BCI’s data analytics of a subject’s MEG brainwaves and flight visualization performance are stored and analyzed using the Hadoop Ecosystem as a quick-retrieval data warehouse.

  13. Brain circuits underlying visual stability across eye movements - converging evidence for a neuro-computational model of area LIP

    Directory of Open Access Journals (Sweden)

    Arnold eZiesche

    2014-03-01

    Full Text Available The understanding of the subjective experience of a visually stable world despite the occurrence of an observer's eye movements has been the focus of extensive research for over 20 years. These studies have revealed fundamental mechanisms such as anticipatory receptive field shifts and the saccadic suppression of stimulus displacements, yet there currently exists no single explanatory framework for these observations. We show that a previously presented neuro-computational model of peri-saccadic mislocalization accounts for the phenomenon of predictive remapping and for the observation of saccadic suppression of displacement (SSD). This converging evidence allows us to identify the potential ingredients of perceptual stability that generalize beyond different data sets in a formal physiology-based model. In particular we propose that predictive remapping stabilizes the visual world across saccades by introducing a feedback loop and, as an emergent result, small displacements of stimuli are not noticed by the visual system. The model provides a link from neural dynamics to neural mechanism and finally to behavior, and thus offers a testable comprehensive framework of visual stability.

  14. The advanced role of computational mechanics and visualization in science and technology: analysis of the Germanwings Flight 9525 crash

    International Nuclear Information System (INIS)

    Chen, Goong; Wang, Yi-Ching; Gu, Cong; Perronnet, Alain; Yao, Pengfei; Bin-Mohsin, Bandar; Hajaiej, Hichem; Scully, Marlan O

    2017-01-01

    Computational mathematics, physics and engineering form a major constituent of modern computational science, which now stands on an equal footing with the established branches of theoretical and experimental sciences. Computational mechanics solves problems in science and engineering based upon mathematical modeling and computing, bypassing the need for expensive and time-consuming laboratory setups and experimental measurements. Furthermore, it allows the numerical simulations of large scale systems, such as the formation of galaxies that could not be done in any earth bound laboratories. This article is written as part of the 21st Century Frontiers Series to illustrate some state-of-the-art computational science. We emphasize how to do numerical modeling and visualization in the study of a contemporary event, the pulverizing crash of the Germanwings Flight 9525 on March 24, 2015, as a showcase. Such numerical modeling and the ensuing simulation of aircraft crashes into land or mountain are complex tasks as they involve both theoretical study and supercomputing of a complex physical system. The most tragic type of crash involves ‘pulverization’ such as the one suffered by this Germanwings flight. Here, we show pulverizing airliner crashes by visualization through video animations from supercomputer applications of the numerical modeling tool LS-DYNA. A sound validation process is challenging but essential for any sophisticated calculations. We achieve this by validation against the experimental data from a crash test done in 1993 of an F4 Phantom II fighter jet into a wall. We have developed a method by hybridizing two primary methods: finite element analysis and smoothed particle hydrodynamics. This hybrid method also enhances visualization by showing a ‘debris cloud’. Based on our supercomputer simulations and the visualization, we point out that prior works on this topic based on ‘hollow interior’ modeling can be quite problematic and, thus, not

  15. The advanced role of computational mechanics and visualization in science and technology: analysis of the Germanwings Flight 9525 crash

    Science.gov (United States)

    Chen, Goong; Wang, Yi-Ching; Perronnet, Alain; Gu, Cong; Yao, Pengfei; Bin-Mohsin, Bandar; Hajaiej, Hichem; Scully, Marlan O.

    2017-03-01

    Computational mathematics, physics and engineering form a major constituent of modern computational science, which now stands on an equal footing with the established branches of theoretical and experimental sciences. Computational mechanics solves problems in science and engineering based upon mathematical modeling and computing, bypassing the need for expensive and time-consuming laboratory setups and experimental measurements. Furthermore, it allows the numerical simulations of large scale systems, such as the formation of galaxies that could not be done in any earth bound laboratories. This article is written as part of the 21st Century Frontiers Series to illustrate some state-of-the-art computational science. We emphasize how to do numerical modeling and visualization in the study of a contemporary event, the pulverizing crash of the Germanwings Flight 9525 on March 24, 2015, as a showcase. Such numerical modeling and the ensuing simulation of aircraft crashes into land or mountain are complex tasks as they involve both theoretical study and supercomputing of a complex physical system. The most tragic type of crash involves ‘pulverization’ such as the one suffered by this Germanwings flight. Here, we show pulverizing airliner crashes by visualization through video animations from supercomputer applications of the numerical modeling tool LS-DYNA. A sound validation process is challenging but essential for any sophisticated calculations. We achieve this by validation against the experimental data from a crash test done in 1993 of an F4 Phantom II fighter jet into a wall. We have developed a method by hybridizing two primary methods: finite element analysis and smoothed particle hydrodynamics. This hybrid method also enhances visualization by showing a ‘debris cloud’. Based on our supercomputer simulations and the visualization, we point out that prior works on this topic based on ‘hollow interior’ modeling can be quite problematic and, thus, not

  16. The Effect of Visual Cueing and Control Design on Children's Reading Achievement of Audio E-Books with Tablet Computers

    Science.gov (United States)

    Wang, Pei-Yu; Huang, Chung-Kai

    2015-01-01

    This study aims to explore the impact of learner grade, visual cueing, and control design on children's reading achievement of audio e-books with tablet computers. This research was a three-way factorial design where the first factor was learner grade (grade four and six), the second factor was e-book visual cueing (word-based, line-based, and…

  17. A Visualization Review of Cloud Computing Algorithms in the Last Decade

    OpenAIRE

    Junhu Ruan; Felix T. S. Chan; Fangwei Zhu; Xuping Wang; Jing Yang

    2016-01-01

    Cloud computing has competitive advantages—such as on-demand self-service, rapid computing, cost reduction, and almost unlimited storage—that have attracted extensive attention from both academia and industry in recent years. Some review works have been reported to summarize extant studies related to cloud computing, but few analyze these studies based on the citations. Co-citation analysis can provide scholars a strong support to identify the intellectual bases and leading edges of a specifi...

  18. A computational model of fMRI activity in the intraparietal sulcus that supports visual working memory.

    Science.gov (United States)

    Domijan, Dražen

    2011-12-01

    A computational model was developed to explain a pattern of results of fMRI activation in the intraparietal sulcus (IPS) supporting visual working memory for multiobject scenes. The model is based on the hypothesis that dendrites of excitatory neurons are major computational elements in the cortical circuit. Dendrites enable formation of a competitive queue that exhibits a gradient of activity values for nodes encoding different objects, and this pattern is stored in working memory. In the model, brain imaging data are interpreted as a consequence of blood flow arising from dendritic processing. Computer simulations showed that the model successfully simulates data showing the involvement of inferior IPS in object individuation and spatial grouping through representation of objects' locations in space, along with the involvement of superior IPS in object identification through representation of a set of objects' features. The model exhibits a capacity limit due to the limited dynamic range for nodes and the operation of lateral inhibition among them. The capacity limit is fixed in the inferior IPS regardless of the objects' complexity, due to the normalization of lateral inhibition, and variable in the superior IPS, due to the different encoding demands for simple and complex shapes. Systematic variation in the strength of self-excitation enables an understanding of the individual differences in working memory capacity. The model offers several testable predictions regarding the neural basis of visual working memory.
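
The competitive-queue idea, nodes holding a gradient of activity shaped by self-excitation and lateral inhibition, can be sketched with toy rate dynamics (all parameter values are illustrative, not taken from the model in the paper):

```python
import numpy as np

def competitive_queue(inputs, alpha=0.5, beta=0.2, steps=2000, dt=0.05):
    """Toy competitive-queue dynamics: each node excites itself (alpha)
    and inhibits every other node (beta); drive saturates in [0, 1],
    giving a limited dynamic range."""
    x = np.zeros(len(inputs))
    for _ in range(steps):
        drive = inputs + alpha * x - beta * (x.sum() - x)
        x += dt * (np.clip(drive, 0.0, 1.0) - x)   # leaky rate update
    return x

# Four memoranda of decreasing strength -> graded activity profile
x = competitive_queue(np.array([0.5, 0.4, 0.3, 0.2]))
```

Stronger inputs settle at higher steady-state activity, producing the activity gradient over stored objects; with the weakest input, lateral inhibition drives activity to zero, a toy analogue of the capacity limit.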

  19. Continued use of an interactive computer game-based visual perception learning system in children with developmental delay.

    Science.gov (United States)

    Lin, Hsien-Cheng; Chiu, Yu-Hsien; Chen, Yenming J; Wuang, Yee-Pay; Chen, Chiu-Ping; Wang, Chih-Chung; Huang, Chien-Ling; Wu, Tang-Meng; Ho, Wen-Hsien

    2017-11-01

    This study developed an interactive computer game-based visual perception learning system for special education children with developmental delay. To investigate whether perceived interactivity affects continued use of the system, this study developed a theoretical model of the process in which learners decide whether to continue using an interactive computer game-based visual perception learning system. The technology acceptance model, which considers perceived ease of use, perceived usefulness, and perceived playfulness, was extended by integrating perceived interaction (i.e., learner-instructor interaction and learner-system interaction) and then analyzing the effects of these perceptions on satisfaction and continued use. Data were collected from 150 participants (rehabilitation therapists, medical paraprofessionals, and parents of children with developmental delay) recruited from a single medical center in Taiwan. Structural equation modeling and partial-least-squares techniques were used to evaluate relationships within the model. The modeling results indicated that both perceived ease of use and perceived usefulness were positively associated with both learner-instructor interaction and learner-system interaction. However, perceived playfulness only had a positive association with learner-system interaction and not with learner-instructor interaction. Moreover, satisfaction was positively affected by perceived ease of use, perceived usefulness, and perceived playfulness. Thus, satisfaction positively affects continued use of the system. The data obtained by this study can be applied by researchers, designers of computer game-based learning systems, special education workers, and medical professionals. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. On the Efficacy of a Computer-Based Program to Teach Visual Braille Reading

    Science.gov (United States)

    Scheithauer, Mindy C.; Tiger, Jeffrey H.; Miller, Sarah J.

    2013-01-01

    Scheithauer and Tiger (2012) created an efficient computerized program that taught 4 sighted college students to select text letters when presented with visual depictions of braille alphabetic characters and resulted in the emergence of some braille reading. The current study extended these results to a larger sample (n = 81) and compared the…

  1. Graphic Design for the Computer Age; Visual Communication for all Media.

    Science.gov (United States)

    Hamilton, Edward A.

    Because of the rapid pace of today's world, graphic designs which communicate at a glance are needed in all information areas. The essays in this book deal with various aspects of graphic design. These brief essays, each illustrated with graphics, concern the following topics: a short history of visual communication, information design, the merits…

  2. Helping students revise disruptive experientially supported ideas about thermodynamics: Computer visualizations and tactile models

    Science.gov (United States)

    Clark, Douglas; Jorde, Doris

    2004-01-01

    This study analyzes the impact of an integrated sensory model within a thermal equilibrium visualization. We hypothesized that this intervention would not only help students revise their disruptive experientially supported ideas about why objects feel hot or cold, but also increase their understanding of thermal equilibrium. The analysis synthesizes test data and interviews to measure the impact of this strategy. Results show that students in the experimental tactile group significantly outperform their control group counterparts on posttests and delayed posttests, not only on tactile explanations, but also on thermal equilibrium explanations. Interview transcripts of experimental and control group students corroborate these findings. Discussion addresses improving the tactile model as well as application of the strategy to other science topics. The discussion also considers possible incorporation of actual kinetic or thermal haptic feedback to reinforce the current audio and visual feedback of the visualization. This research builds on the conceptual change literature about the nature and role of students' experientially supported ideas as well as our understanding of curriculum and visualization design to support students in learning about thermodynamics, a science topic on which students perform poorly as shown by the National Assessment of Educational Progress (NAEP) and Third International Mathematics and Science Study (TIMSS) studies.

  3. Supporting Undergraduate Computer Architecture Students Using a Visual MIPS64 CPU Simulator

    Science.gov (United States)

    Patti, D.; Spadaccini, A.; Palesi, M.; Fazzino, F.; Catania, V.

    2012-01-01

    The topics of computer architecture are always taught using an Assembly dialect as an example. The most commonly used textbooks in this field use the MIPS64 Instruction Set Architecture (ISA) to help students in learning the fundamentals of computer architecture because of its orthogonality and its suitability for real-world applications. This…

  4. Visual Cluster Analysis for Computing Tasks at Workflow Management System of the ATLAS Experiment

    CERN Document Server

    Grigoryeva, Maria; The ATLAS collaboration

    2018-01-01

    Hundreds of petabytes of experimental data in high energy and nuclear physics (HENP) have already been obtained by unique scientific facilities such as LHC, RHIC and KEK. As the accelerators are modernized (with increased energy and luminosity), data volumes are growing rapidly and have reached the exabyte scale, which also increases the number of analysis and data processing tasks competing continuously for computational resources. This growing number of processing tasks is met by raising the capacity of the computing environment through the involvement of high-performance computing resources, forming a heterogeneous distributed computing environment (hundreds of distributed computing centers). In addition, errors occur while executing tasks for data analysis and processing, caused by software and hardware failures. With a distributed model of data processing and analysis, the optimization of data management and workload systems becomes a fundamental task, and the ...

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs run by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  6. Single-trial detection of visual evoked potentials by common spatial patterns and wavelet filtering for brain-computer interface.

    Science.gov (United States)

    Tu, Yiheng; Huang, Gan; Hung, Yeung Sam; Hu, Li; Hu, Yong; Zhang, Zhiguo

    2013-01-01

    Event-related potentials (ERPs) are widely used in brain-computer interface (BCI) systems as input signals conveying a subject's intention. A fast and reliable single-trial ERP detection method can be used to develop a BCI system with both high speed and high accuracy. However, most single-trial ERP detection methods are developed for offline EEG analysis, and thus have a high computational complexity and require manual operation. They are therefore not applicable to practical BCI systems, which require a low-complexity and automatic ERP detection method. This work presents a joint spatial-time-frequency filter that combines common spatial patterns (CSP) and wavelet filtering (WF) to improve the signal-to-noise ratio (SNR) of visual evoked potentials (VEPs), which can lead to a single-trial ERP-based BCI.
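
    As an illustrative sketch (not the authors' implementation), the CSP stage of such a spatial filter can be computed by whitening the composite covariance of the two classes and diagonalising one class in the whitened space. The function name, the synthetic two-channel epochs, and the use of NumPy are assumptions for this example:

```python
import numpy as np

def csp_filters(X1, X2):
    """Common spatial patterns for two classes of EEG epochs.

    X1, X2: arrays of shape (trials, channels, samples).
    Returns spatial filters (channels x channels), ordered so the first
    row maximises the variance ratio of class 1 over class 2.
    """
    def mean_cov(X):
        return np.mean([np.cov(epoch) for epoch in X], axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Whiten the composite covariance, then diagonalise class 1 there.
    evals, evecs = np.linalg.eigh(C1 + C2)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T   # whitening matrix
    d, B = np.linalg.eigh(P @ C1 @ P.T)
    order = np.argsort(d)[::-1]                    # largest ratio first
    return B[:, order].T @ P

rng = np.random.default_rng(0)
# Synthetic epochs: class 1 varies mostly on channel 0, class 2 on channel 1.
X1 = rng.normal(size=(20, 2, 200)) * np.array([3.0, 1.0])[None, :, None]
X2 = rng.normal(size=(20, 2, 200)) * np.array([1.0, 3.0])[None, :, None]
W = csp_filters(X1, X2)
# The first filter should weight channel 0 (class 1's high-variance channel).
print(abs(W[0, 0]) > abs(W[0, 1]))  # -> True
```

    Projecting epochs through the first and last rows of `W` yields components whose variance discriminates the two classes, which is the role CSP plays before the wavelet-filtering stage described in the abstract.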

  7. A computational exploration of complementary learning mechanisms in the primate ventral visual pathway.

    Science.gov (United States)

    Spoerer, Courtney J; Eguchi, Akihiro; Stringer, Simon M

    2016-02-01

    In order to develop transformation invariant representations of objects, the visual system must make use of constraints placed upon object transformation by the environment. For example, objects transform continuously from one point to another in both space and time. These two constraints have been exploited separately in order to develop translation and view invariance in a hierarchical multilayer model of the primate ventral visual pathway in the form of continuous transformation learning and temporal trace learning. We show for the first time that these two learning rules can work cooperatively in the model. Using these two learning rules together can support the development of invariance in cells and help maintain object selectivity when stimuli are presented over a large number of locations or when trained separately over a large number of viewing angles. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Ubiquitous Computing: Using everyday object as ambient visualization tools for persuasive design

    OpenAIRE

    Cahier, Jenny; Gullberg, Eric

    2008-01-01

    In order for companies to survive and advance in today’s competitive society, a massive amount of personal information from citizens is gathered. This thesis investigates how these digital footprints can be obtained and visualized to create awareness about personal actions and encourage change in behavior. In order to decide which data would be interesting and accessible, a map of possible application fields was generated and one single field was chosen for further study. The result is a bus...

  9. Visual tracking for multi-modality computer-assisted image guidance

    Science.gov (United States)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

    With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.

  10. On the relationship between optical variability, visual saliency, and eye fixations: a computational approach.

    Science.gov (United States)

    Garcia-Diaz, Antón; Leborán, Víctor; Fdez-Vidal, Xosé R; Pardo, Xosé M

    2012-06-12

    A hierarchical definition of optical variability is proposed that links physical magnitudes to visual saliency and yields a more reductionist interpretation than previous approaches. This definition is shown to be grounded on the classical efficient coding hypothesis. Moreover, we propose that a major goal of contextual adaptation mechanisms is to ensure the invariance of the behavior that the contribution of an image point to optical variability elicits in the visual system. This hypothesis and the necessary assumptions are tested through the comparison with human fixations and state-of-the-art approaches to saliency in three open access eye-tracking datasets, including one devoted to images with faces, as well as in a novel experiment using hyperspectral representations of surface reflectance. The results on faces yield a significant reduction of the potential strength of semantic influences compared to previous works. The results on hyperspectral images support the assumptions to estimate optical variability. The proposed approach also explains quantitative results related to a visual illusion observed for images of corners, which does not involve eye movements.

  11. Neural and Computational Mechanisms of Action Processing: Interaction between Visual and Motor Representations.

    Science.gov (United States)

    Giese, Martin A; Rizzolatti, Giacomo

    2015-10-07

    Action recognition has received enormous interest in the field of neuroscience over the last two decades. In spite of this interest, the knowledge in terms of fundamental neural mechanisms that provide constraints for underlying computations remains rather limited. This fact stands in contrast with a wide variety of speculative theories about how action recognition might work. This review focuses on new fundamental electrophysiological results in monkeys, which provide constraints for the detailed underlying computations. In addition, we review models for action recognition and processing that have concrete mathematical implementations, as opposed to conceptual models. We think that only such implemented models can be meaningfully linked quantitatively to physiological data and have a potential to narrow down the many possible computational explanations for action recognition. In addition, only concrete implementations allow judging whether postulated computational concepts have a feasible implementation in terms of realistic neural circuits. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Education in interactive media: a survey on the potentials of computers for visual literacy

    OpenAIRE

    Güleryüz, Hakan

    1996-01-01

    Ankara : Bilkent University, Department of Graphic Design and Institute of Fine Arts, 1996. Thesis (Master's) -- Bilkent University, 1996. Includes bibliographical references leaves 89-94. This study aims at investigating the potentials of multimedia and computers in design. For this purpose, a general survey on the historical development of computers for their use in education and possibilities related to the use of technology in education is conducted. Based on this survey, the dep...

  13. Identifying shared genetic structure patterns among Pacific Northwest forest taxa: insights from use of visualization tools and computer simulations.

    Directory of Open Access Journals (Sweden)

    Mark P Miller

    2010-10-01

    Full Text Available Identifying causal relationships in phylogeographic and landscape genetic investigations is notoriously difficult, but can be facilitated by use of multispecies comparisons. We used data visualizations to identify common spatial patterns within single lineages of four taxa inhabiting Pacific Northwest forests (northern spotted owl: Strix occidentalis caurina; red tree vole: Arborimus longicaudus; southern torrent salamander: Rhyacotriton variegatus; and western white pine: Pinus monticola). Visualizations suggested that, despite occupying the same geographical region and habitats, species responded differently to prevailing historical processes. S. o. caurina and P. monticola demonstrated directional patterns of spatial genetic structure where genetic distances and diversity were greater in southern versus northern locales. A. longicaudus and R. variegatus displayed opposite patterns where genetic distances were greater in northern versus southern regions. Statistical analyses of directional patterns subsequently confirmed observations from visualizations. Based upon regional climatological history, we hypothesized that observed latitudinal patterns may have been produced by range expansions. Subsequent computer simulations confirmed that directional patterns can be produced by expansion events. We discuss phylogeographic hypotheses regarding historical processes that may have produced observed patterns. Inferential methods used here may become increasingly powerful as detailed simulations of organisms and historical scenarios become plausible. We further suggest that inter-specific comparisons of historical patterns take place prior to drawing conclusions regarding effects of current anthropogenic change within landscapes.

  14. Learning representation hierarchies by sharing visual features: a computational investigation of Persian character recognition with unsupervised deep learning.

    Science.gov (United States)

    Sadeghi, Zahra; Testolin, Alberto

    2017-08-01

    In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: A generative model that captures the statistical structure of the letters distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.

  15. Recommended practice for the design of a computer driven Alarm Display Facility for central control rooms of nuclear power generating stations

    International Nuclear Information System (INIS)

    Ben-Yaacov, G.

    1984-01-01

    This paper's objective is to explain the process by which design can prevent human errors in nuclear plant operation. Human factor engineering principles, data, and methods used in the design of computer driven alarm display facilities are discussed. A ''generic'', advanced Alarm Display Facility is described. It considers operator capabilities and limitations in decision-making processes, response dynamics, and human memory limitations. Highlighted are considerations of human factor criteria in the designing and layout of alarm displays. Alarm data sources are described, and their use within the Alarm Display Facility are illustrated

  16. [A wireless smart home system based on brain-computer interface of steady state visual evoked potential].

    Science.gov (United States)

    Zhao, Li; Xing, Xiao; Guo, Xuhong; Liu, Zehua; He, Yang

    2014-10-01

    Brain-computer interface (BCI) systems achieve communication and control between humans and computers or other electronic equipment using electroencephalogram (EEG) signals. This paper describes the working theory of a wireless smart home system based on BCI technology. We acquired the steady-state visual evoked potential (SSVEP) using a single-chip microcomputer and visual stimulation composed of LED lamps to stimulate the subjects' eyes. Then, using a power-spectrum transformation built on the LabVIEW platform, we processed the EEG signals under different stimulation frequencies in real time so as to translate them into different instructions. Those instructions were received by wireless transceiver equipment to control household appliances and achieve intelligent control of the specified devices. The experimental results showed that the correct rate for the 10 subjects reached 100%, and the average control time for a single device was 4 seconds; thus this design fully achieves the original purpose of a smart home system.
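
    The frequency-to-command step of such a system can be sketched as follows. The record describes a power-spectrum transformation built in LabVIEW; this hypothetical Python stand-in instead scores each candidate LED flicker frequency with the Goertzel algorithm and issues the command whose frequency carries the most power. The frequencies and the `freq_to_command` mapping are invented for the example:

```python
import math

def goertzel_power(samples, fs, freq):
    """Power of a single frequency bin, via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * freq / fs)            # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def classify_ssvep(samples, fs, freq_to_command):
    """Issue the command whose stimulation frequency has the most power."""
    best = max(freq_to_command, key=lambda f: goertzel_power(samples, fs, f))
    return freq_to_command[best]

fs = 256
t = [i / fs for i in range(fs * 2)]                       # 2 s of "EEG"
signal = [math.sin(2 * math.pi * 13.0 * x) for x in t]    # 13 Hz SSVEP
commands = {7.0: "lamp", 9.0: "fan", 11.0: "tv", 13.0: "door"}
print(classify_ssvep(signal, fs, commands))  # -> door
```

    In a real system the winning command would then be sent over the wireless link to the target appliance; a decision threshold on the winning power would be needed to reject epochs where the subject is not attending to any stimulus.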

  17. Visual computed tomographic scoring of emphysema and its correlation with its diagnostic electrocardiographic sign: the frontal P vector.

    Science.gov (United States)

    Chhabra, Lovely; Sareen, Pooja; Gandagule, Amit; Spodick, David H

    2012-03-01

    Verticalization of the frontal P vector in patients older than 45 years is virtually diagnostic of pulmonary emphysema (sensitivity, 96%; specificity, 87%). We investigated the correlation of the P vector and the computed tomographic visual score of emphysema (VSE) in patients with an established diagnosis of chronic obstructive pulmonary disease/emphysema. High-resolution computed tomographic scans of 26 patients with emphysema (age >45 years) were reviewed to assess the type and extent of emphysema using subjective visual scoring. Electrocardiograms were independently reviewed to determine the frontal P vector. The P vector and VSE were compared for statistical correlation. Both the P vector and VSE were also directly compared with the forced expiratory volume in 1 second. The VSE and the orientation of the P vector (ÂP) had an overall significant positive correlation (r = +0.68; P = .0001) in all patients, but the correlation was very strong in patients with predominant lower-lobe emphysema (r = +0.88; P = .0004). Forced expiratory volume in 1 second and ÂP had an almost linear inverse correlation in predominant lower-lobe emphysema (r = -0.92). A vertical ÂP together with predominant lower-lobe emphysema reflects severe obstructive lung dysfunction. Copyright © 2012 Elsevier Inc. All rights reserved.
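
    The reported r values are Pearson product-moment correlations. A minimal sketch of the computation, using invented (VSE, ÂP) pairs rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical (VSE score, ÂP in degrees) pairs, not the study's data:
vse = [1, 2, 2, 3, 4, 4, 5]
ap  = [40, 55, 50, 60, 70, 75, 85]
print(round(pearson_r(vse, ap), 2))  # -> 0.99
```

    A strongly positive r, as here, corresponds to the study's finding that a more vertical P vector accompanies a higher emphysema score.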

  18. Simultaneous detection of P300 and steady-state visually evoked potentials for hybrid brain-computer interface.

    Science.gov (United States)

    Combaz, Adrien; Van Hulle, Marc M

    2015-01-01

    We study the feasibility of a hybrid Brain-Computer Interface (BCI) combining simultaneous visual oddball and Steady-State Visually Evoked Potential (SSVEP) paradigms, where both types of stimuli are superimposed on a computer screen. Potentially, such a combination could result in a system being able to operate faster than a purely P300-based BCI and encode more targets than a purely SSVEP-based BCI. We analyse the interactions between the brain responses of the two paradigms, and assess the possibility to detect simultaneously the brain activity evoked by both paradigms, in a series of 3 experiments where EEG data are analysed offline. Despite differences in the shape of the P300 response between pure oddball and hybrid condition, we observe that the classification accuracy of this P300 response is not affected by the SSVEP stimulation. We do not observe either any effect of the oddball stimulation on the power of the SSVEP response in the frequency of stimulation. Finally results from the last experiment show the possibility of detecting both types of brain responses simultaneously and suggest not only the feasibility of such hybrid BCI but also a gain over pure oddball- and pure SSVEP-based BCIs in terms of communication rate.

  19. A 3-D Approach for Teaching and Learning about Surface Water Systems through Computational Thinking, Data Visualization and Physical Models

    Science.gov (United States)

    Caplan, B.; Morrison, A.; Moore, J. C.; Berkowitz, A. R.

    2017-12-01

    Understanding water is central to understanding environmental challenges. Scientists use `big data' and computational models to develop knowledge about the structure and function of complex systems, and to make predictions about changes in climate, weather, hydrology, and ecology. Large environmental systems-related data sets and simulation models are difficult for high school teachers and students to access and make sense of. Comp Hydro, a collaboration across four states and multiple school districts, integrates computational thinking and data-related science practices into water systems instruction to enhance development of scientific model-based reasoning, through curriculum, assessment and teacher professional development. Comp Hydro addresses the need for 1) teaching materials for using data and physical models of hydrological phenomena, 2) building teachers' and students' comfort or familiarity with data analysis and modeling, and 3) infusing the computational knowledge and practices necessary to model and visualize hydrologic processes into instruction. Comp Hydro teams in Baltimore, MD and Fort Collins, CO are integrating teaching about surface water systems into high school courses focusing on flooding (MD) and surface water reservoirs (CO). This interactive session will highlight the successes and challenges of our physical and simulation models in helping teachers and students develop proficiency with computational thinking about surface water. We also will share insights from comparing teacher-led vs. project-led development of curriculum and our simulations.

  20. Standard anatomical and visual space for the mouse retina: computational reconstruction and transformation of flattened retinae with the Retistruct package.

    Directory of Open Access Journals (Sweden)

    David C Sterratt

    Full Text Available The concept of topographic mapping is central to the understanding of the visual system at many levels, from the developmental to the computational. It is important to be able to relate different coordinate systems, e.g. maps of the visual field and maps of the retina. Retinal maps are frequently based on flat-mount preparations. These use dissection and relaxing cuts to render the quasi-spherical retina into a 2D preparation. The variable nature of relaxing cuts and associated tears limits quantitative cross-animal comparisons. We present an algorithm, "Retistruct," that reconstructs retinal flat-mounts by mapping them into a standard, spherical retinal space. This is achieved by: stitching the marked-up cuts of the flat-mount outline; dividing the stitched outline into a mesh whose vertices then are mapped onto a curtailed sphere; and finally moving the vertices so as to minimise a physically-inspired deformation energy function. Our validation studies indicate that the algorithm can estimate the position of a point on the intact adult retina to within 8° of arc (3.6% of nasotemporal axis. The coordinates in reconstructed retinae can be transformed to visuotopic coordinates. Retistruct is used to investigate the organisation of the adult mouse visual system. We orient the retina relative to the nictitating membrane and compare this to eye muscle insertions. To align the retinotopic and visuotopic coordinate systems in the mouse, we utilised the geometry of binocular vision. In standard retinal space, the composite decussation line for the uncrossed retinal projection is located 64° away from the retinal pole. Projecting anatomically defined uncrossed retinal projections into visual space gives binocular congruence if the optical axis of the mouse eye is oriented at 64° azimuth and 22° elevation, in concordance with previous results. Moreover, using these coordinates, the dorsoventral boundary for S-opsin expressing cones closely matches
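
    Retistruct's full reconstruction stitches the marked-up cuts and minimises a deformation energy over a mesh; as a much simpler sketch of the final mapping idea alone, a flat-mount point can be sent onto the spherical eye by treating its distance from the optic disc as preserved arc length. This azimuthal-equidistant assumption is an illustration, not the package's actual fit, and the radius value is invented:

```python
import math

def flatmount_to_sphere(r_mm, theta, eye_radius_mm=1.7):
    """Map a flat-mount point (polar distance r_mm from the optic disc,
    angle theta) onto a spherical eye, assuming dissection preserved
    arc length along the retina (azimuthal-equidistant projection).
    """
    phi = r_mm / eye_radius_mm          # arc length -> polar angle (rad)
    x = eye_radius_mm * math.sin(phi) * math.cos(theta)
    y = eye_radius_mm * math.sin(phi) * math.sin(theta)
    z = -eye_radius_mm * math.cos(phi)  # pole at the back of the eye
    return x, y, z

# A point ~45 degrees of arc from the optic disc, along theta = 0:
x, y, z = flatmount_to_sphere(1.335, 0.0)
```

    Retistruct refines an initial mapping like this by moving mesh vertices to minimise its deformation energy, which is what absorbs the distortion introduced by the relaxing cuts and tears.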

  1. Computational Modeling of Cephalad Fluid Shift for Application to Microgravity-Induced Visual Impairment

    Science.gov (United States)

    Nelson, Emily S.; Best, Lauren M.; Myers, Jerry G.; Mulugeta, Lealem

    2013-01-01

    An improved understanding of spaceflight-induced ocular pathology, including the loss of visual acuity, globe flattening, optic disk edema and distension of the optic nerve and optic nerve sheath, is of keen interest to space medicine. Cephalad fluid shift causes a profoundly altered distribution of fluid within the compartments of the head and body, and may indirectly generate phenomena that are biomechanically relevant to visual function, such as choroidal engorgement, compromised drainage of blood and cerebrospinal fluid (CSF), and altered translaminar pressure gradient posterior to the eye. The experimental body of evidence with respect to the consequences of fluid shift has not yet been able to provide a definitive picture of the sequence of events. On earth, elevated intracranial pressure (ICP) is associated with idiopathic intracranial hypertension (IIH), which can produce ocular pathologies that look similar to those seen in some astronauts returning from long-duration flight. However, the clinically observable features of the Visual Impairment and Intracranial Pressure (VIIP) syndrome in space and IIH on earth are not entirely consistent. Moreover, there are at present no experimental measurements of ICP in microgravity. By its very nature, physiological measurements in spaceflight are sparse, and the space environment does not lend itself to well-controlled experiments. In the absence of such data, numerical modeling can play a role in the investigation of biomechanical causal pathways that are suspected of involvement in VIIP. In this work, we describe the conceptual framework for modeling the altered compartmental fluid distribution that represents an equilibrium fluid distribution resulting from the loss of hydrostatic pressure gradient.

  2. Symmetry structure in discrete models of biochemical systems: natural subsystems and the weak control hierarchy in a new model of computation driven by interactions.

    Science.gov (United States)

    Nehaniv, Chrystopher L; Rhodes, John; Egri-Nagy, Attila; Dini, Paolo; Morris, Eric Rothstein; Horváth, Gábor; Karimi, Fariba; Schreckling, Daniel; Schilstra, Maria J

    2015-07-28

    Interaction computing is inspired by the observation that cell metabolic/regulatory systems construct order dynamically, through constrained interactions between their components and based on a wide range of possible inputs and environmental conditions. The goals of this work are to (i) identify and understand mathematically the natural subsystems and hierarchical relations in natural systems enabling this and (ii) use the resulting insights to define a new model of computation based on interactions that is useful for both biology and computation. The dynamical characteristics of the cellular pathways studied in systems biology relate, mathematically, to the computational characteristics of automata derived from them, and their internal symmetry structures to computational power. Finite discrete automata models of biological systems such as the lac operon, the Krebs cycle and p53-mdm2 genetic regulation constructed from systems biology models have canonically associated algebraic structures (their transformation semigroups). These contain permutation groups (local substructures exhibiting symmetry) that correspond to 'pools of reversibility'. These natural subsystems are related to one another in a hierarchical manner by the notion of 'weak control'. We present natural subsystems arising from several biological examples and their weak control hierarchies in detail. Finite simple non-Abelian groups are found in biological examples and can be harnessed to realize finitary universal computation. This allows ensembles of cells to achieve any desired finitary computational transformation, depending on external inputs, via suitably constrained interactions. Based on this, interaction machines that grow and change their structure recursively are introduced and applied, providing a natural model of computation driven by interactions.

  3. Computers, visualization, and history how new technology will transform our understanding of the past

    CERN Document Server

    Staley, David J

    2015-01-01

    This visionary and thoroughly accessible book examines how digital environments and virtual reality have altered the ways historians think and communicate ideas and how the new language of visualization transforms our understanding of the past. Drawing on familiar graphic models--maps, flow charts, museum displays, films--the author shows how images can often convey ideas and information more efficiently and accurately than words. With emerging digital technology, these images will become more sophisticated, manipulable, and multidimensional, and provide historians with new tools and environme

  4. Computing and visualizing time-varying merge trees for high-dimensional data

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
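
    The merge-tree idea (components of level sets appearing at extrema and merging at saddles) can be illustrated in one dimension with a union-find sweep over thresholds: each local maximum births a superlevel-set component, and components merge where the sweep reaches a saddle. This is a generic sketch, not the paper's implementation:

```python
def merge_tree_1d(values):
    """Merge events of superlevel sets of a 1-D scalar field.

    Sweeps thresholds from high to low and returns a list of
    (threshold, surviving_peak_index, dying_peak_index) merge events.
    """
    parent = {}

    def find(i):                        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    peak = {}                           # component root -> highest point
    merges = []
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    for i in order:
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        parent[i] = i
        peak[i] = i
        if not roots:
            continue                    # i is a new local maximum
        roots = sorted(roots, key=lambda r: values[peak[r]], reverse=True)
        main = roots[0]
        parent[i] = main                # regular point joins the higher side
        for other in roots[1:]:         # saddle: two components merge here
            merges.append((values[i], peak[main], peak[other]))
            parent[other] = main
    return merges

field = [0, 3, 1, 5, 2, 6, 0]           # peaks at indices 1, 3 and 5
print(merge_tree_1d(field))             # -> [(2, 5, 3), (1, 5, 1)]
```

    Tracking features over time, as in the paper, then amounts to matching subtrees of such structures between consecutive time steps; in higher dimensions the neighbour set `(i - 1, i + 1)` becomes the mesh or point-cloud adjacency.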

  5. Fully Online Multicommand Brain-Computer Interface with Visual Neurofeedback Using SSVEP Paradigm

    Directory of Open Access Journals (Sweden)

    Pablo Martinez

    2007-01-01

    Full Text Available We propose a new multistage procedure for a real-time brain-machine/computer interface (BCI. The developed system allows a BCI user to navigate a small car (or any other object on the computer screen in real time, in any of the four directions, and to stop it if necessary. Extensive experiments with five young healthy subjects confirmed the high performance of the proposed online BCI system. The modular structure, high speed, and the optimal frequency band characteristics of the BCI platform are features which allow an extension to a substantially higher number of commands in the near future.

  6. Implementation of a General Real-Time Visual Anomaly Detection System Via Soft Computing

    Science.gov (United States)

    Dominguez, Jesus A.; Klinko, Steve; Ferrell, Bob; Steinrock, Todd (Technical Monitor)

    2001-01-01

    The intelligent visual system detects anomalies or defects in real time under normal lighting operating conditions. The application is basically a learning machine that integrates fuzzy logic (FL), artificial neural network (ANN), and genetic algorithm (GA) schemes to process the image, run the learning process, and finally detect the anomalies or defects. The system acquires the image, performs segmentation to separate the object being tested from the background, preprocesses the image using fuzzy reasoning, performs the final segmentation using fuzzy reasoning techniques to retrieve regions with potential anomalies or defects, and finally retrieves them using a learning model built via ANN and GA techniques. FL provides a powerful framework for knowledge representation and overcomes uncertainty and vagueness typically found in image analysis. ANN provides learning capabilities, and GA leads to robust learning results. An application prototype currently runs on a regular PC under Windows NT, and preliminary work has been performed to build an embedded version with multiple image processors. The application prototype is being tested at the Kennedy Space Center (KSC), Florida, to visually detect anomalies along slide basket cables utilized by the astronauts to evacuate the NASA Shuttle launch pad in an emergency. The potential applications of this anomaly detection system in an open environment are quite wide. Another current, potentially viable application at NASA is in detecting anomalies of the NASA Space Shuttle Orbiter's radiator panels.

  7. Visual Perspectives within Educational Computer Games: Effects on Presence and Flow within Virtual Immersive Learning Environments

    Science.gov (United States)

    Scoresby, Jon; Shelton, Brett E.

    2011-01-01

    The mis-categorizing of cognitive states involved in learning within virtual environments has complicated instructional technology research. Further, most educational computer game research does not account for how learning activity is influenced by factors of game content and differences in viewing perspectives. This study is a qualitative…

  8. Stereoscopic Vascular Models of the Head and Neck: A Computed Tomography Angiography Visualization

    Science.gov (United States)

    Cui, Dongmei; Lynch, James C.; Smith, Andrew D.; Wilson, Timothy D.; Lehman, Michael N.

    2016-01-01

    Computer-assisted 3D models are used in some medical and allied health science schools; however, they are often limited to online use and 2D flat screen-based imaging. Few schools take advantage of 3D stereoscopic learning tools in anatomy education and clinically relevant anatomical variations when teaching anatomy. A new approach to teaching…

  9. Using visualizations to support collaboration and coordination during computer-supported collaborative learning

    NARCIS (Netherlands)

    Janssen, J.J.H.M.

    2008-01-01

    This thesis addresses the topic of computer-supported collaborative learning (CSCL in short). In a CSCL-environment, students work in small groups on complex and challenging tasks. Although the teacher guides this process at a distance, students have to regulate and monitor their own learning

  10. Computational fluid dynamics for propulsion technology: Geometric grid visualization in CFD-based propulsion technology research

    Science.gov (United States)

    Ziebarth, John P.; Meyer, Doug

    1992-01-01

    The coordination of necessary resources, facilities, and special personnel to provide technical integration activities in the area of computational fluid dynamics applied to propulsion technology is examined. This involves the coordination of CFD activities between government, industry, and universities. Current geometry modeling, grid generation, and graphical methods are established for use in the analysis of CFD design methodologies.

  11. Educational Impact of Digital Visualization Tools on Digital Character Production Computer Science Courses

    Science.gov (United States)

    van Langeveld, Mark Christensen

    2009-01-01

    Digital character production courses have traditionally been taught in art departments. The digital character production course at the University of Utah is centered between art and engineering, drawing uniformly from both disciplines. Its design has evolved to include a synergy of computer science, functional art and human anatomy. It gives students an…

  12. DIGGING DEEPER INTO DEEP DATA: MOLECULAR DOCKING AS A HYPOTHESIS-DRIVEN BIOPHYSICAL INTERROGATION SYSTEM IN COMPUTATIONAL TOXICOLOGY.

    Science.gov (United States)

    Developing and evaluating predictive strategies to elucidate the mode of biological activity of environmental chemicals is a major objective of the concerted efforts of the US-EPA's computational toxicology program.

  13. USL NASA/RECON project presentations at the 1985 ACM Computer Science Conference: Abstracts and visuals

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Chum, Frank Y.; Gallagher, Suzy; Granier, Martin; Hall, Philip P.; Moreau, Dennis R.; Triantafyllopoulos, Spiros

    1985-01-01

    This Working Paper Series entry represents the abstracts and visuals associated with presentations delivered by six USL NASA/RECON research team members at the above named conference. The presentations highlight various aspects of NASA contract activities pursued by the participants as they relate to individual research projects. The titles of the six presentations are as follows: (1) The Specification and Design of a Distributed Workstation; (2) An Innovative, Multidisciplinary Educational Program in Interactive Information Storage and Retrieval; (3) Critical Comparative Analysis of the Major Commercial IS and R Systems; (4) Design Criteria for a PC-Based Common User Interface to Remote Information Systems; (5) The Design of an Object-Oriented Graphics Interface; and (6) Knowledge-Based Information Retrieval: Techniques and Applications.

  14. Vortex filament method as a tool for computational visualization of quantum turbulence

    Science.gov (United States)

    Hänninen, Risto; Baggaley, Andrew W.

    2014-01-01

    The vortex filament model has become a standard and powerful tool to visualize the motion of quantized vortices in helium superfluids. In this article, we present an overview of the method and highlight its impact in aiding our understanding of quantum turbulence, particularly superfluid helium. We present an analysis of the structure and arrangement of quantized vortices. Our results are in agreement with previous studies showing that under certain conditions, vortices form coherent bundles, which allows for classical vortex stretching, giving quantum turbulence a classical nature. We also offer an explanation for the differences between the observed properties of counterflow and pure superflow turbulence in a pipe. Finally, we suggest a mechanism for the generation of coherent structures in the presence of normal fluid shear. PMID:24704873
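At the core of the vortex filament model is the Biot-Savart velocity induced by the discretized filament segments. A minimal sketch, checked against the analytic result for a vortex ring (the circulation quantum value for superfluid 4He is taken from the literature; discretization and geometry are illustrative):

```python
import numpy as np

GAMMA = 9.97e-8  # circulation quantum in superfluid 4He, m^2/s

def segment_velocity(p, a, b, gamma=GAMMA):
    """Velocity induced at point p by a straight vortex segment a -> b,
    from the Biot-Savart law for a finite filament element."""
    r1, r2 = p - a, p - b
    n1, n2 = np.linalg.norm(r1), np.linalg.norm(r2)
    denom = n1 * n2 * (n1 * n2 + np.dot(r1, r2))
    return gamma / (4 * np.pi) * (n1 + n2) / denom * np.cross(r1, r2)

def filament_velocity(p, nodes, gamma=GAMMA):
    """Sum the contributions of all segments of a closed filament."""
    v = np.zeros(3)
    for a, b in zip(nodes, np.roll(nodes, -1, axis=0)):
        v += segment_velocity(p, a, b, gamma)
    return v

# velocity at the centre of a vortex ring discretized into 64 segments;
# the analytic value there is GAMMA / (2 * R)
R = 1e-4
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ring = np.c_[np.cos(theta), np.sin(theta), np.zeros_like(theta)] * R
v = filament_velocity(np.zeros(3), ring)
```

Production codes add desingularization near the evaluation point and tree or GPU acceleration; none of that is shown here.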

  15. (C)overt attention and visual speller design in an ERP-based brain-computer interface.

    Science.gov (United States)

    Treder, Matthias S; Blankertz, Benjamin

    2010-05-28

    In a visual oddball paradigm, attention to an event usually modulates the event-related potential (ERP). An ERP-based brain-computer interface (BCI) exploits this neural mechanism for communication. Hitherto, it was unclear to what extent the accuracy of such a BCI requires eye movements (overt attention) or whether it is also feasible for targets in the visual periphery (covert attention). Also unclear was how the visual design of the BCI can be improved to meet peculiarities of peripheral vision such as low spatial acuity and crowding. Healthy participants (N = 13) performed a copy-spelling task wherein they had to count target intensifications. EEG and eye movements were recorded concurrently. First, (c)overt attention was investigated by way of a target fixation condition and a central fixation condition. In the latter, participants had to fixate a dot in the center of the screen and allocate their attention to a target in the visual periphery. Second, the effect of visual speller layout was investigated by comparing the symbol Matrix to an ERP-based Hex-o-Spell, a two-level speller consisting of six discs arranged on an invisible hexagon. We assessed counting errors, ERP amplitudes, and offline classification performance. There is an advantage (i.e., fewer errors, larger ERP amplitude modulation, better classification) of overt attention over covert attention, and there is also an advantage of the Hex-o-Spell over the Matrix. Using overt attention, P1, N1, P2, N2, and P3 components are enhanced by attention. Using covert attention, only N2 and P3 are enhanced for both spellers, and N1 and P2 are modulated when using the Hex-o-Spell but not when using the Matrix. Consequently, classifiers rely mainly on early evoked potentials in overt attention and on later cognitive components in covert attention. Both overt and covert attention can be used to drive an ERP-based BCI, but performance is markedly lower for covert attention. The Hex-o-Spell outperforms the Matrix.
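Offline classification in ERP spellers is commonly done with Fisher/linear discriminant analysis on ERP amplitude features. A hedged sketch on synthetic epochs (the feature window indices and effect size are invented; the study's own classifier may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def lda_fit(X, y):
    """Fisher discriminant: w = Sigma^-1 (mu1 - mu0), with the bias set
    midway between the projected class means (pooled covariance,
    equal priors, small ridge term for numerical stability)."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(0), X1.mean(0)
    S = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(S + 1e-6 * np.eye(S.shape[0]), mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2
    return w, b

# synthetic "epochs": target trials carry a P300-like positive bump
n, d = 200, 16
X = rng.normal(size=(2 * n, d))
y = np.r_[np.zeros(n, int), np.ones(n, int)]
X[y == 1, 6:10] += 1.5          # hypothetical P300 window samples

w, b = lda_fit(X, y)
pred = (X @ w + b > 0).astype(int)
acc = (pred == y).mean()
```

In practice one would cross-validate rather than score on the training set as done here.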

  16. Mobile computation offloading architecture for mobile augmented reality, case study: Visualization of cetacean skeleton

    OpenAIRE

    Belen G. Rodriguez-Santana; Amilcar Meneses Viveros; Blanca Esther Carvajal-Gamez; Diana Carolina Trejo-Osorio

    2016-01-01

    Augmented Reality applications can serve as teaching tools in different contexts of use. Augmented reality applications on mobile devices can help to provide tourist information on cities or to give information on visits to museums. For example, during visits to museums of natural history, applications of augmented reality on mobile devices can be used by some visitors to interact with the skeleton of a whale. However, rendering heavy models can be computationally infeasible on device...

  17. The Analysis of Visual Motion: From Computational Theory to Neuronal Mechanisms.

    Science.gov (United States)

    1986-12-01

    Massachusetts Institute of Technology, Artificial Intelligence Laboratory and Center for Biological Information Processing, A.I. Memo No. 919; contract N00014-85-C-0038.

  18. Usability Evaluation of Notebook Computers and Cellular Telephones Among Users with Visual and Upper Extremity Disabilities

    OpenAIRE

    Mooney, Aaron Michael

    2002-01-01

    Information appliances such as notebook computers and cellular telephones are becoming integral to the lives of many. These devices facilitate a variety of communication tasks, and are used for employment, education, and entertainment. Those with disabilities, however, have limited access to these devices, due in part to product designs that do not consider their special needs. A usability evaluation can help identify the needs and difficulties those with disabilities have when using a pro...

  19. Effectiveness of the use of question-driven levels of inquiry based instruction (QD-LOIBI) assisted visual multimedia supported teaching material on enhancing scientific explanation ability senior high school students

    Science.gov (United States)

    Suhandi, A.; Muslim; Samsudin, A.; Hermita, N.; Supriyatman

    2018-05-01

    In this study, the effectiveness of Question-Driven Levels of Inquiry Based Instruction (QD-LOIBI) assisted by visual-multimedia-supported teaching materials in enhancing senior high school students' scientific explanation ability was studied. QD-LOIBI was designed following the five levels of inquiry proposed by Wenning. The visual multimedia used in the teaching materials included images (photos), virtual simulations, and videos of phenomena. The QD-LOIBI-assisted teaching materials supported by visual multimedia were tried out on senior high school students at one high school in one district in West Java. A quasi-experimental method with one experimental group (n = 31) and one control group (n = 32) was used. The experimental group was given QD-LOIBI-assisted teaching material supported by visual multimedia, whereas the control group was given QD-LOIBI-assisted teaching materials not supported by visual multimedia. Data on scientific explanation ability in both groups were collected by a scientific explanation ability test in essay form concerning the kinetic theory of gases. The results showed that the number of students whose category and quality of scientific explanation improved was greater in the experimental class than in the control class. These results indicate that the use of multimedia-supported instructional materials developed for implementation of QD-LOIBI can improve students' ability to provide explanations supported by scientific evidence gained from practicum activities and applicable concepts, laws, principles or theories.

  20. MO-E-18C-04: Advanced Computer Simulation and Visualization Tools for Enhanced Understanding of Core Medical Physics Concepts

    International Nuclear Information System (INIS)

    Naqvi, S

    2014-01-01

    Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which, therefore, gradually decay away from memory. For example, few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system, requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools are developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics, e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays trajectories of a collection of electrons under an arbitrary space- and time-dependent Lorentz force using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics. The simulations serve as

  1. MO-E-18C-04: Advanced Computer Simulation and Visualization Tools for Enhanced Understanding of Core Medical Physics Concepts

    Energy Technology Data Exchange (ETDEWEB)

    Naqvi, S [Saint Agnes Cancer Institute, Department of Radiation Oncology, Baltimore, MD (United States)

    2014-06-15

    Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which, therefore, gradually decay away from memory. For example, few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system, requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools are developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics, e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays trajectories of a collection of electrons under an arbitrary space- and time-dependent Lorentz force using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics. The simulations serve as
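The "labelled interactions" idea described in the record above can be illustrated with a toy transport loop. This is a heavily simplified sketch, not the author's code: the coefficients, the 1D slab geometry, and the forward-only treatment of Compton scatter are all our assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical, energy-independent interaction data for a water-like slab
MU = 0.2            # total attenuation coefficient, 1/cm
P_COMPTON = 0.8     # probability an interaction is Compton scatter
SLAB = 10.0         # slab thickness, cm

def track_photon():
    """Follow one photon through the slab, labelling every interaction,
    in the spirit of the 'labelled interactions' teaching code.
    Simplification: Compton events keep the photon moving forward."""
    x, labels = 0.0, []
    while True:
        x += rng.exponential(1.0 / MU)      # free path to next interaction
        if x > SLAB:
            labels.append("crosser")        # photon left the slab
            return labels
        if rng.random() < P_COMPTON:
            labels.append("compton")        # scatter: history continues
        else:
            labels.append("photoelectric")  # absorption: history ends
            return labels

histories = [track_photon() for _ in range(5000)]
frac_crossers = np.mean([h[-1] == "crosser" for h in histories])
```

With forward-only scatter the effective absorption coefficient is MU * (1 - P_COMPTON) = 0.04/cm, so roughly exp(-0.4) of photons should be tagged "crosser"; the labels let a student filter histories by interaction type exactly as the abstract describes.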

  2. Computer-controlled impalement of cells in retinal wholemounts visualized by infrared CCD imaging on an inverted microscope.

    Science.gov (United States)

    Reitsamer, H; Groiss, H P; Franz, M; Pflug, R

    2000-01-31

    We present a computer-guided microelectrode positioning system that is routinely used in our laboratory for intracellular electrophysiology and functional staining of retinal neurons. Wholemount preparations of isolated retina are kept in a superfusion chamber on the stage of an inverted microscope. Cells and layers of the retina are visualized by Nomarski interference contrast using infrared light in combination with a CCD camera system. After five-point calibration has been performed the electrode can be guided to any point inside the calibrated volume without moving the retina. Electrode deviations from target cells can be corrected by the software further improving the precision of this system. The good visibility of cells avoids prelabeling with fluorescent dyes and makes it possible to work under completely dark adapted conditions.
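The five-point calibration described above amounts to fitting an affine map between stage (image) coordinates and manipulator coordinates. A least-squares sketch under our own assumptions (the matrix values, point count, and function names are illustrative, not taken from the paper):

```python
import numpy as np

def fit_affine(stage_pts, drive_pts):
    """Least-squares affine map drive = A @ stage + t from calibration
    point pairs; 4 non-coplanar points determine it, a 5th adds
    redundancy that averages out measurement error."""
    X = np.hstack([stage_pts, np.ones((len(stage_pts), 1))])  # homogeneous
    M, *_ = np.linalg.lstsq(X, drive_pts, rcond=None)         # (4, 3)
    return M

def to_drive(M, p):
    """Map a stage-space target point into manipulator coordinates."""
    return np.append(p, 1.0) @ M

rng = np.random.default_rng(1)
A_true = np.array([[1.0, 0.02, 0.0],
                   [-0.01, 0.98, 0.03],
                   [0.0, 0.0, 1.05]])
t_true = np.array([10.0, -5.0, 2.0])
stage = rng.uniform(0, 100, size=(5, 3))   # five calibration points
drive = stage @ A_true.T + t_true          # their manipulator readings

M = fit_affine(stage, drive)
target = np.array([25.0, 40.0, 12.0])      # a cell seen on the camera
pred = to_drive(M, target)
```

The software-side deviation correction mentioned in the abstract would then be a small residual update to M from observed electrode/target offsets.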

  3. A Study for Visual Realism of Designed Pictures on Computer Screens by Investigation and Brain-Wave Analyses.

    Science.gov (United States)

    Wang, Lan-Ting; Lee, Kun-Chou

    2016-08-01

    In this article, the visual realism of designed pictures on computer screens is studied by investigation and brain-wave analyses. The practical electroencephalogram (EEG) measurement is always time-varying and fluctuating so that conventional statistical techniques are not adequate for analyses. This study proposes a new scheme based on "fingerprinting" to analyze the EEG. Fingerprinting is a technique of probabilistic pattern recognition used in electrical engineering, much like the identification of human fingerprints in a criminal investigation. The goal of this study was to assess whether subjective preference for pictures could be manifested physiologically by EEG fingerprinting analyses. The most important advantage of the fingerprinting technique is that it does not require accurate measurement. Instead, it uses probabilistic classification. Participants' preference for pictures can be assessed using fingerprinting analyses of physiological EEG measurements. © The Author(s) 2016.

  4. Distributed computing strategies for processing of FT-ICR MS imaging datasets for continuous mode data visualization

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco; Kilic, Mehmet; Heeren, Ronald M.

    2015-03-01

    High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data (“big data”) that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high spatial resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous mode “Mosaic Datacube” approach allows high mass resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing as compared to feature-based processing. We describe the use of distributed computing for processing of FT-ICR MS imaging datasets with generation of continuous mode Mosaic Datacubes for high mass resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.

  5. A pilot trial of the iPad tablet computer as a portable device for visual acuity testing.

    Science.gov (United States)

    Zhang, Zhao-tian; Zhang, Shao-chong; Huang, Xiong-gao; Liang, Ling-yi

    2013-01-01

    We evaluated the accuracy of an app for the iPad tablet computer (Eye Chart Pro) as a portable method of visual acuity (VA) testing. A total of 120 consecutive patients (240 eyes) underwent visual acuity testing with an iPad 2 and a conventional light-box chart. The logMAR VA results from the iPad were significantly higher than those from the light-box chart; agreement between the iPad chart and the light-box chart had 95% limits of agreement of -0.14 to 0.19. Two groups of patients were defined: Group 1 comprised 182 eyes with VA better than 0.1 according to the light-box VA test. The median logMAR VA was 0.54 by the iPad and 0.52 by the light-box chart; there was no significant difference between them (P = 0.69). Group 2 comprised 58 eyes with VA equal to or worse than 0.1 according to the light-box VA test. The median logMAR VA was 1.26 by the iPad and 1.10 by the light box; the result from the iPad was significantly lower. The iPad is reliable for VA testing only when the Snellen VA is better than 0.1 (20/200).
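The 95% limits of agreement quoted above come from the standard Bland-Altman calculation on paired differences. A sketch on invented paired logMAR readings (these numbers are ours, not the study's data):

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two paired
    measurement methods: mean difference +/- 1.96 SD of the differences."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical paired logMAR readings (iPad vs light-box), for illustration
ipad = [0.52, 0.60, 0.30, 1.00, 0.22, 0.70]
box  = [0.50, 0.52, 0.32, 0.90, 0.20, 0.66]
bias, lo, hi = limits_of_agreement(ipad, box)
```

A positive bias indicates the tablet reads systematically higher (worse) logMAR, as the study found.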

  6. A combined brain-computer interface based on P300 potentials and motion-onset visual evoked potentials.

    Science.gov (United States)

    Jin, Jing; Allison, Brendan Z; Wang, Xingyu; Neuper, Christa

    2012-04-15

    Brain-computer interfaces (BCIs) allow users to communicate via brain activity alone. Many BCIs rely on the P300 and other event-related potentials (ERPs) that are elicited when target stimuli flash. Although there has been considerable research exploring ways to improve P300 BCIs, surprisingly little work has focused on new ways to change visual stimuli to elicit more recognizable ERPs. In this paper, we introduce a "combined" BCI based on P300 potentials and motion-onset visual evoked potentials (M-VEPs) and compare it with BCIs based on each simple approach (P300 and M-VEP). Offline data suggested that performance would be best in the combined paradigm. Online tests with adaptive BCIs confirmed that our combined approach is practical in an online BCI, and yielded better performance than the other two approaches (P<0.05) without annoying or overburdening the subject. The highest mean classification accuracy (96%) and practical bit rate (26.7 bit/s) were obtained from the combined condition. Copyright © 2012 Elsevier B.V. All rights reserved.
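Bit rates like the one reported above are typically derived from the Wolpaw information-transfer-rate formula. A hedged sketch (this gives bits per selection; converting to bits per unit time additionally needs the selection duration, which the abstract does not give, and the paper's exact "practical bit rate" definition may differ):

```python
import math

def bits_per_selection(n, p):
    """Wolpaw information transfer per selection for N equiprobable
    targets and classification accuracy P (standard BCI measure)."""
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# e.g. a hypothetical 36-target speller at the reported 96% accuracy
b = bits_per_selection(36, 0.96)
```

Note the formula assumes errors are spread evenly over the remaining N-1 targets, which is a simplification for real spellers.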

  7. A Computer-Based Sustained Visual Attention Test for Pre-School Children: Design, Development and Psychometric Properties

    Directory of Open Access Journals (Sweden)

    Roohollah Zahedian Nasb

    2016-06-01

    Background: Sustained visual attention is a prerequisite for learning and memory. Early evaluation of attention in childhood is essential for children's later school and career success. The aim of this study was to design, develop, and investigate the psychometric properties (content, face, and convergent validity; test-retest and internal-consistency reliability) of a computer-based sustained visual attention test (SuVAT) for healthy preschool children aged 4-6. Methods: This study was carried out in two stages. In the first stage, the computer-based SuVAT was developed in two versions, original and parallel. Test-retest and internal-consistency reliability were then examined using intra-class correlation and Cronbach's alpha coefficients, respectively; face validity was assessed by gathering the opinions of 10 preschool children; content validity was evaluated using the CVI and CVR methods; and convergent validity of the SuVAT with the CPT was assessed using Pearson correlation. Results: The developed test showed good content and face validity and excellent test-retest reliability. In addition, assessment of internal consistency indicated high internal consistency of the test (Cronbach's alpha = 0.869). The SuVAT and the CPT demonstrated a positive correlation upon convergent validity testing. Conclusion: The SuVAT, with good reliability and validity, could be used as an acceptable sustained attention assessment in preschool children.
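The internal-consistency figure reported above (Cronbach's alpha = 0.869) follows from the standard item-variance formula, sketched here on invented item scores (the matrix below is illustrative, not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    X = np.asarray(scores, float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()   # per-item sample variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# hypothetical scores of five children on four test items
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 4, 4, 5],
          [1, 2, 1, 2],
          [3, 3, 4, 3]]
alpha = cronbach_alpha(scores)
```

When items covary strongly, the total-score variance dominates the summed item variances and alpha approaches 1.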

  8. 3D Nondestructive Visualization and Evaluation of TRISO Particles Distribution in HTGR Fuel Pebbles Using Cone-Beam Computed Tomography

    Directory of Open Access Journals (Sweden)

    Gongyi Yu

    2017-01-01

    A nonuniform distribution of tristructural isotropic (TRISO) particles within a high-temperature gas-cooled reactor (HTGR) pebble may lead to excessive thermal gradients and nonuniform thermal expansion during operation. If the particles are closely clustered, local hotspots may form, leading to excessive stresses on particle layers and an increased probability of particle failure. Although X-ray digital radiography (DR) is currently used to evaluate the TRISO distributions in pebbles, X-ray DR projection images are two-dimensional in nature, which would potentially miss some details for 3D evaluation. This paper proposes a method of 3D visualization and evaluation of the TRISO distribution in HTGR pebbles using cone-beam computed tomography (CBCT): first, a pebble is scanned on our high-resolution CBCT, and 2D cross-sectional images are reconstructed; secondly, all cross-sectional images are restructured to form the 3D model of the pebble; then, volume rendering is applied to segment and display the TRISO particles in 3D for visualization and distribution evaluation. For method validation, several pebbles were scanned and the 3D distributions of the TRISO particles within the pebbles were produced. Experimental results show that the proposed method provides more 3D information than DR, which will facilitate pebble fabrication research and production quality control.
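Once particle positions are segmented from the volume, a minimal way to quantify the uniformity or clustering the evaluation above targets is nearest-neighbour spacing between particle centroids. A sketch on synthetic coordinates (the tolerance, counts, and coordinates are our assumptions, not pebble data):

```python
import numpy as np

def nn_spacing(centroids):
    """Nearest-neighbour distance for each particle centroid: small
    minima flag clusters (potential hotspots), large maxima flag
    sparse regions."""
    c = np.asarray(centroids, float)
    d = np.linalg.norm(c[:, None] - c[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)     # ignore self-distances
    return d.min(axis=1)

rng = np.random.default_rng(7)
particles = rng.uniform(0, 1, size=(500, 3))   # hypothetical TRISO centroids
nn = nn_spacing(particles)
clustered = (nn < 0.02).sum()      # particles closer than a tolerance
```

The pairwise-matrix approach is O(n^2) in memory; for the tens of thousands of particles in a real pebble a spatial index (k-d tree) would replace it.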

  9. Automatic system for quantification and visualization of lung aeration on chest computed tomography images: the Lung Image System Analysis - LISA

    Energy Technology Data Exchange (ETDEWEB)

    Felix, John Hebert da Silva; Cortez, Paulo Cesar, E-mail: jhsfelix@gmail.co [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil). Dept. de Engenharia de Teleinformatica; Holanda, Marcelo Alcantara [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil). Hospital Universitario Walter Cantidio. Dept. de Medicina Clinica

    2010-12-15

    High Resolution Computed Tomography (HRCT) is the exam of choice for the diagnostic evaluation of lung parenchyma diseases. There is increasing interest in computational systems able to automatically analyze the radiological densities of the lungs in CT images. The main objective of this study is to present a system for the automatic quantification and visualization of lung aeration in HRCT images of different degrees of aeration, called Lung Image System Analysis (LISA). The secondary objective is to compare LISA to the Osiris system and to a specific lung segmentation algorithm (ALS) with respect to the accuracy of lung segmentation. The LISA system automatically extracts the following image attributes: lung perimeter, cross-sectional area, volume, the radiological density histograms, the mean lung density (MLD) in Hounsfield units (HU), the relative area of the lungs with voxels with density values lower than -950 HU (RA950) and the 15th percentile of the least dense voxels (PERC15). Furthermore, LISA has a colored mask algorithm that applies pseudo-colors to the lung parenchyma according to the pre-defined radiological density chosen by the system user. The lung segmentations of 102 images of 8 healthy volunteers and 141 images of 11 patients with Chronic Obstructive Pulmonary Disease (COPD) were compared for accuracy and concordance among the three methods. LISA was more effective at lung segmentation than the other two methods. LISA's color mask tool improves the spatial visualization of the degrees of lung aeration, and the various attributes of the image that can be extracted may help physicians and researchers to better assess lung aeration both quantitatively and qualitatively. LISA may have important clinical and research applications in the assessment of global and regional lung aeration and therefore deserves further development and validation studies. (author)

  10. Automatic system for quantification and visualization of lung aeration on chest computed tomography images: the Lung Image System Analysis - LISA

    International Nuclear Information System (INIS)

    Felix, John Hebert da Silva; Cortez, Paulo Cesar; Holanda, Marcelo Alcantara

    2010-01-01

    High Resolution Computed Tomography (HRCT) is the exam of choice for the diagnostic evaluation of lung parenchyma diseases. There is increasing interest in computational systems able to automatically analyze the radiological densities of the lungs in CT images. The main objective of this study is to present a system for the automatic quantification and visualization of lung aeration in HRCT images of different degrees of aeration, called Lung Image System Analysis (LISA). The secondary objective is to compare LISA to the Osiris system and to a specific lung segmentation algorithm (ALS) with respect to the accuracy of lung segmentation. The LISA system automatically extracts the following image attributes: lung perimeter, cross-sectional area, volume, the radiological density histograms, the mean lung density (MLD) in Hounsfield units (HU), the relative area of the lungs with voxels with density values lower than -950 HU (RA950) and the 15th percentile of the least dense voxels (PERC15). Furthermore, LISA has a colored mask algorithm that applies pseudo-colors to the lung parenchyma according to the pre-defined radiological density chosen by the system user. The lung segmentations of 102 images of 8 healthy volunteers and 141 images of 11 patients with Chronic Obstructive Pulmonary Disease (COPD) were compared for accuracy and concordance among the three methods. LISA was more effective at lung segmentation than the other two methods. LISA's color mask tool improves the spatial visualization of the degrees of lung aeration, and the various attributes of the image that can be extracted may help physicians and researchers to better assess lung aeration both quantitatively and qualitatively. LISA may have important clinical and research applications in the assessment of global and regional lung aeration and therefore deserves further development and validation studies. (author)
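The densitometry indices named in the record above (MLD, RA950, PERC15) reduce to simple operations on the HU values of the segmented lung voxels. A sketch on synthetic HU samples (the distribution parameters are invented; LISA's actual pipeline also includes segmentation and per-slice attributes not shown here):

```python
import numpy as np

def aeration_metrics(hu):
    """Standard lung densitometry indices over segmented lung voxels:
    mean lung density (MLD), relative area below -950 HU (RA950, %),
    and the 15th percentile of the density histogram (PERC15)."""
    hu = np.asarray(hu, float)
    mld = hu.mean()
    ra950 = (hu < -950).mean() * 100.0   # percent of voxels
    perc15 = np.percentile(hu, 15)
    return mld, ra950, perc15

# hypothetical HU values for segmented lung voxels of one exam
rng = np.random.default_rng(3)
hu = rng.normal(-860, 60, size=100_000)
mld, ra950, perc15 = aeration_metrics(hu)
```

In emphysema the histogram shifts toward -1000 HU, so RA950 rises and PERC15 falls, which is why these two indices are the usual quantitative markers.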

  11. High-frequency combination coding-based steady-state visual evoked potential for brain computer interface

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Feng; Zhang, Xin; Xie, Jun; Li, Yeping; Han, Chengcheng; Lili, Li; Wang, Jing [School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Xu, Guang-Hua [School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an 710054 (China)

    2015-03-10

    This study presents a new steady-state visual evoked potential (SSVEP) paradigm for brain-computer interface (BCI) systems. The goal of this study is to increase the number of targets using fewer high stimulation frequencies, while diminishing subject fatigue and reducing the risk of photosensitive epileptic seizures. The new paradigm is High-Frequency Combination Coding-Based Steady-State Visual Evoked Potential (HFCC-SSVEP). Firstly, we studied the high-frequency (beyond 25 Hz) SSVEP response, with the paradigm presented on an LED. The SNR (signal-to-noise ratio) of the high-frequency (beyond 40 Hz) response is very low and cannot be distinguished by traditional analysis methods. Secondly, we investigated the HFCC-SSVEP response (beyond 25 Hz) for three frequencies (25 Hz, 33.33 Hz, and 40 Hz); HFCC-SSVEP produces n^n targets from n high stimulation frequencies through frequency combination coding. Further, an improved Hilbert-Huang transform (IHHT)-based variable-frequency EEG feature extraction method and a local spectrum extreme target identification algorithm are adopted to extract time-frequency features of the proposed HFCC-SSVEP response. Linear prediction and fixed sifting (iterating 10 times) are used to overcome the shortcomings of end effects and the stopping criterion, and generalized zero-crossing (GZC) is used to compute the instantaneous frequency of the SSVEP response signals. The improved HHT-based feature extraction method for the proposed SSVEP paradigm increases recognition efficiency, so as to improve the ITR and increase the stability of the BCI system. Moreover, SSVEPs evoked by high-frequency stimuli (beyond 25 Hz) minimally fatigue the subject and prevent safety hazards linked to photo-induced epileptic seizures, ensuring system efficiency without harm. This study tests three subjects in order to verify the feasibility of the proposed method.

  12. High-frequency combination coding-based steady-state visual evoked potential for brain computer interface

    International Nuclear Information System (INIS)

    Zhang, Feng; Zhang, Xin; Xie, Jun; Li, Yeping; Han, Chengcheng; Lili, Li; Wang, Jing; Xu, Guang-Hua

    2015-01-01

    This study presents a new steady-state visual evoked potential (SSVEP) paradigm for brain-computer interface (BCI) systems. The goal of this study is to increase the number of targets while using fewer high stimulation frequencies, diminishing subject fatigue and reducing the risk of photosensitive epileptic seizures. The new paradigm is High-Frequency Combination Coding-based High-Frequency Steady-State Visual Evoked Potential (HFCC-SSVEP). Firstly, we studied the high-frequency (beyond 25 Hz) SSVEP response, with the stimulus presented on an LED. The signal-to-noise ratio (SNR) of the response beyond 40 Hz is very low and cannot be distinguished by traditional analysis methods. Secondly, we investigated the HFCC-SSVEP response (beyond 25 Hz) for 3 frequencies (25 Hz, 33.33 Hz, and 40 Hz); HFCC-SSVEP produces n^n targets from n high stimulation frequencies through frequency combination coding. Further, an improved Hilbert-Huang transform (IHHT)-based variable-frequency EEG feature extraction method and a local spectrum extreme target identification algorithm are adopted to extract the time-frequency features of the proposed HFCC-SSVEP response. Linear prediction and fixed sifting (iterating 10 times) are used to overcome the end effect and the lack of a stopping criterion, and generalized zero-crossing (GZC) is used to compute the instantaneous frequency of the SSVEP response signals. The improved HHT-based feature extraction method for the proposed SSVEP paradigm increases recognition efficiency, improving the information transfer rate (ITR) and the stability of the BCI system. What is more, SSVEPs evoked by high-frequency stimuli (beyond 25 Hz) minimally fatigue the subject and prevent the safety hazards linked to photo-induced epileptic seizures, ensuring that the system is both efficient and harmless. This study tests three subjects in order to verify the feasibility of the proposed method.

  13. Computed microtomography visualization and quantification of mouse ischemic brain lesion by nonionic radio contrast agents.

    Science.gov (United States)

    Dobrivojević, Marina; Bohaček, Ivan; Erjavec, Igor; Gorup, Dunja; Gajović, Srećko

    2013-02-01

    To explore the possibility of brain imaging by microcomputed tomography (microCT) using x-ray contrasting methods to visualize mouse brain ischemic lesions after middle cerebral artery occlusion (MCAO). Isolated brains were immersed in ionic or nonionic radio contrast agent (RCA) for 5 days and subsequently scanned using a microCT scanner. To verify whether ex-vivo microCT brain images can be used to characterize ischemic lesions, they were compared to Nissl-stained serial histological sections of the same brains. To verify whether brains immersed in RCA may be used afterwards for other methods, subsequent immunofluorescent labeling with anti-NeuN was performed. Nonionic RCA showed better gray-to-white matter contrast in the brain and was therefore selected for further studies. MicroCT measurements of ischemic lesion size and cerebral edema correlated significantly with the values determined by Nissl staining (ischemic lesion size: P=0.0005; cerebral edema: P=0.0002). Brain immersion in nonionic RCA did not affect subsequent immunofluorescent analysis and NeuN immunoreactivity. The microCT method proved suitable for delineating the ischemic lesion from the non-infarcted tissue and for quantifying lesion volume and cerebral edema.
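    The agreement between microCT and Nissl measurements rests on Pearson correlation; a plain-Python version of that statistic (illustrative only, not the authors' analysis script):

    ```python
    import math

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length samples."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Perfectly proportional measurements correlate at r = 1.0:
    print(pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))
    ```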

  14. Computer mapping and visualization of facilities for planning of D and D operations

    International Nuclear Information System (INIS)

    Wuller, C.E.; Gelb, G.H.; Cramond, R.; Cracraft, J.S.

    1995-01-01

    The lack of as-built drawings for many old nuclear facilities impedes planning for decontamination and decommissioning. Traditional manual walkdowns subject workers to lengthy exposure to radiological and other hazards. The authors have applied close-range photogrammetry, 3D solid modeling, computer graphics, database management, and virtual reality technologies to create geometrically accurate 3D computer models of the interiors of facilities. The required input to the process is a set of photographs that can be acquired in a brief time. They fit 3D primitive shapes to objects of interest in the photos and, at the same time, record attributes such as material type and link patches of texture from the source photos to facets of modeled objects. When they render the model as either static images or at video rates for a walk-through simulation, the phototextures are warped onto the objects, giving a photo-realistic impression. The authors have exported the data to commercial CAD, cost estimating, robotic simulation, and plant design applications. Results from several projects at old nuclear facilities are discussed

  15. More than one way to see it: Individual heuristics in avian visual computation.

    Science.gov (United States)

    Ravignani, Andrea; Westphal-Fitch, Gesche; Aust, Ulrike; Schlumpp, Martin M; Fitch, W Tecumseh

    2015-10-01

    Comparative pattern learning experiments investigate how different species find regularities in sensory input, providing insights into cognitive processing in humans and other animals. Past research has focused either on one species' ability to process pattern classes or different species' performance in recognizing the same pattern, with little attention to individual and species-specific heuristics and decision strategies. We trained and tested two bird species, pigeons (Columba livia) and kea (Nestor notabilis, a parrot species), on visual patterns using touch-screen technology. Patterns were composed of several abstract elements and had varying degrees of structural complexity. We developed a model selection paradigm, based on regular expressions, that allowed us to reconstruct the specific decision strategies and cognitive heuristics adopted by a given individual in our task. Individual birds showed considerable differences in the number, type and heterogeneity of heuristic strategies adopted. Birds' choices also exhibited consistent species-level differences. Kea adopted effective heuristic strategies, based on matching learned bigrams to stimulus edges. Individual pigeons, in contrast, adopted an idiosyncratic mix of strategies that included local transition probabilities and global string similarity. Although performance was above chance and quite high for kea, no individual of either species provided clear evidence of learning exactly the rule used to generate the training stimuli. Our results show that similar behavioral outcomes can be achieved using dramatically different strategies and highlight the dangers of combining multiple individuals in a group analysis. These findings, and our general approach, have implications for the design of future pattern learning experiments, and the interpretation of comparative cognition research more generally. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
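    The kea's edge-bigram heuristic can be made concrete: accept a stimulus when it shares a leading or trailing bigram with a learned training pattern. The function below is a hypothetical reconstruction of that decision rule for illustration, not the paper's fitted regular-expression model:

    ```python
    def edge_bigrams(s):
        """Leading and trailing bigrams of a pattern string."""
        return s[:2], s[-2:]

    def edge_bigram_match(stimulus, learned):
        """Kea-like heuristic: accept the stimulus if it shares a leading
        or a trailing bigram with a learned training pattern."""
        sf, sl = edge_bigrams(stimulus)
        lf, ll = edge_bigrams(learned)
        return sf == lf or sl == ll

    print(edge_bigram_match("ABAB", "ABBA"))  # True: both start with "AB"
    ```

    A pigeon-style strategy would instead score local transition probabilities or global string similarity, which is why the two species can reach similar accuracy by very different routes.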

  16. Visualization of biomedical image data and irradiation planning using a parallel computing system

    International Nuclear Information System (INIS)

    Lehrig, R.

    1991-01-01

    The contribution explains the development of a novel, low-cost workstation for the processing of biomedical tomographic data sequences. The workstation was to allow both graphical display of the data and implementation of modelling software for irradiation planning, especially for calculation of dose distributions on the basis of the measured tomogram data. The system developed according to these criteria is a parallel computing system which performs secondary, two-dimensional image reconstructions irrespective of the imaging direction of the original tomographic scans. Three-dimensional image reconstructions can be generated from any direction of view, with random selection of sections of the scanned object. (orig./MM) With 69 figs., 2 tabs [de

  17. Human Factors Principles in Design of Computer-Mediated Visualization for Robot Missions

    Energy Technology Data Exchange (ETDEWEB)

    David I Gertman; David J Bruemmer

    2008-12-01

    With increased use of robots as a resource in missions supporting countermine, improvised explosive devices (IEDs), and chemical, biological, radiological nuclear and conventional explosives (CBRNE), fully understanding the best means by which to complement the human operator’s underlying perceptual and cognitive processes could not be more important. Consistent with control and display integration practices in many other high technology computer-supported applications, current robotic design practices rely highly upon static guidelines and design heuristics that reflect the expertise and experience of the individual designer. In order to use what we know about human factors (HF) to drive human robot interaction (HRI) design, this paper reviews underlying human perception and cognition principles and shows how they were applied to a threat detection domain.

  18. Visualizing the phenomena of wave interference, phase-shifting and polarization by interactive computer simulations

    Science.gov (United States)

    Rivera-Ortega, Uriel; Dirckx, Joris

    2015-09-01

    In this manuscript a computer-based simulation is proposed for teaching the concepts of interference of light (under the scheme of a Michelson interferometer), phase-shifting and polarization states. The user can change parameters of the interfering waves, such as their amplitude and phase difference, in order to graphically represent the polarization state of a simulated travelling wave. Regarding the interference simulation, the user is able to change the wavelength and type of the interfering waves by selecting combinations of planar and Gaussian profiles, as well as the optical path difference by translating or tilting one of the two mirrors in the interferometer setup, all via a graphical user interface (GUI) designed in MATLAB. A theoretical introduction and simulation results for each phenomenon are shown. Owing to these characteristics, the GUI can be a very good non-formal learning resource.
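    The two-beam interference such a GUI displays follows the standard textbook relation I = I1 + I2 + 2*sqrt(I1*I2)*cos(dphi), with the phase difference set by the optical path difference (OPD) as dphi = 2*pi*OPD/lambda. A minimal numeric sketch of those relations (not the MATLAB GUI code itself):

    ```python
    import math

    def phase_difference(opd, wavelength):
        """Phase difference produced by an optical path difference."""
        return 2 * math.pi * opd / wavelength

    def interference_intensity(i1, i2, dphi):
        """Two-beam interference: I = I1 + I2 + 2*sqrt(I1*I2)*cos(dphi)."""
        return i1 + i2 + 2 * math.sqrt(i1 * i2) * math.cos(dphi)

    # Equal beams: constructive interference gives 4x the single-beam
    # intensity, destructive interference cancels completely.
    print(interference_intensity(1.0, 1.0, 0.0))      # 4.0
    print(interference_intensity(1.0, 1.0, math.pi))  # ~0.0
    ```

    Translating a mirror changes the OPD, and hence dphi, which is exactly what sweeps the fringes in the simulated interferogram.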

  19. Utilizing General Purpose Graphics Processing Units to Improve Performance of Computer Modelling and Visualization

    Science.gov (United States)

    Monk, J.; Zhu, Y.; Koons, P. O.; Segee, B. E.

    2009-12-01

    With the introduction of the G8X series of cards by nVidia, an architecture called CUDA was released; virtually all subsequent video cards have had CUDA support. With this new architecture nVidia provided extensions for C/C++ that create an Application Programming Interface (API) allowing code to be executed on the GPU. Since then the concept of GPGPU (general-purpose graphics processing unit) computing has been growing: the GPU is very good at linear algebra and at running computations in parallel, so that power can be put to use in other applications. This is highly appealing in the area of geodynamic modeling, as multiple parallel solutions of the same differential equations at different points in space lead to a large speedup in simulation speed. Another benefit of CUDA is a programmatic method of transferring large amounts of data between the computer's main memory and the dedicated GPU memory located on the video card. In addition to being able to compute and render on the video card, the CUDA framework allows for a large speedup in situations, such as a tiled display wall, where the rendered pixels are to be displayed in a different location than where they are rendered. A CUDA extension for VirtualGL was developed allowing for faster readback at high resolutions. This paper examines several aspects of rendering OpenGL graphics on large displays using VirtualGL and VNC. It demonstrates how performance can be significantly improved when rendering on a tiled monitor wall. We present a CUDA-enhanced version of VirtualGL as well as the advantages of having multiple VNC servers, and discuss restrictions caused by readback and blitting rates and how they are affected by different sizes of the virtual displays being rendered.

  20. A New Generation of Brain-Computer Interfaces Driven by Discovery of Latent EEG-fMRI Linkages Using Tensor Decomposition

    Directory of Open Access Journals (Sweden)

    Gopikrishna Deshpande

    2017-06-01

    Full Text Available A Brain-Computer Interface (BCI) is a setup permitting the control of external devices by decoding brain activity. Electroencephalography (EEG) has been extensively used for decoding brain activity since it is non-invasive, cheap, portable, and has high temporal resolution to allow real-time operation. Due to its poor spatial specificity, BCIs based on EEG can require extensive training and multiple trials to decode brain activity (consequently slowing down the operation of the BCI). On the other hand, BCIs based on functional magnetic resonance imaging (fMRI) are more accurate owing to its superior spatial resolution and sensitivity to underlying neuronal processes which are functionally localized. However, due to its relatively low temporal resolution, high cost, and lack of portability, fMRI is unlikely to be used for routine BCI. We propose a new approach for transferring the capabilities of fMRI to EEG, which includes simultaneous EEG/fMRI sessions for finding a mapping from EEG to fMRI, followed by a BCI run from only EEG data, but driven by fMRI-like features obtained from the mapping identified previously. Our novel data-driven method is likely to discover latent linkages between electrical and hemodynamic signatures of neural activity hitherto unexplored using model-driven methods, and is likely to serve as a template for a novel multi-modal strategy wherein cross-modal EEG-fMRI interactions are exploited for the operation of a unimodal EEG system, leading to a new generation of EEG-based BCIs.

  1. A New Generation of Brain-Computer Interfaces Driven by Discovery of Latent EEG-fMRI Linkages Using Tensor Decomposition.

    Science.gov (United States)

    Deshpande, Gopikrishna; Rangaprakash, D; Oeding, Luke; Cichocki, Andrzej; Hu, Xiaoping P

    2017-01-01

    A Brain-Computer Interface (BCI) is a setup permitting the control of external devices by decoding brain activity. Electroencephalography (EEG) has been extensively used for decoding brain activity since it is non-invasive, cheap, portable, and has high temporal resolution to allow real-time operation. Due to its poor spatial specificity, BCIs based on EEG can require extensive training and multiple trials to decode brain activity (consequently slowing down the operation of the BCI). On the other hand, BCIs based on functional magnetic resonance imaging (fMRI) are more accurate owing to its superior spatial resolution and sensitivity to underlying neuronal processes which are functionally localized. However, due to its relatively low temporal resolution, high cost, and lack of portability, fMRI is unlikely to be used for routine BCI. We propose a new approach for transferring the capabilities of fMRI to EEG, which includes simultaneous EEG/fMRI sessions for finding a mapping from EEG to fMRI, followed by a BCI run from only EEG data, but driven by fMRI-like features obtained from the mapping identified previously. Our novel data-driven method is likely to discover latent linkages between electrical and hemodynamic signatures of neural activity hitherto unexplored using model-driven methods, and is likely to serve as a template for a novel multi-modal strategy wherein cross-modal EEG-fMRI interactions are exploited for the operation of a unimodal EEG system, leading to a new generation of EEG-based BCIs.
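    The EEG-to-fMRI transfer stage can be caricatured as learning a map from simultaneous recordings and then applying it to EEG alone during BCI operation. The ridge-regression stand-in below illustrates only that two-stage structure; the paper's actual method relies on tensor decomposition, which this sketch does not reproduce:

    ```python
    import numpy as np

    def fit_linear_map(eeg, fmri, lam=1e-3):
        """Ridge-regularised least squares W such that fmri ~ eeg @ W.

        eeg: (trials, eeg_features); fmri: (trials, fmri_features).
        """
        d = eeg.shape[1]
        return np.linalg.solve(eeg.T @ eeg + lam * np.eye(d), eeg.T @ fmri)

    def fmri_like_features(eeg, w):
        """Apply the learned map to EEG-only data during the BCI run."""
        return eeg @ w

    # Synthetic check: if the relation really is linear, the map is recovered.
    rng = np.random.default_rng(0)
    true_w = rng.standard_normal((8, 3))
    eeg = rng.standard_normal((200, 8))
    w = fit_linear_map(eeg, eeg @ true_w, lam=1e-6)
    print(np.allclose(w, true_w, atol=1e-3))  # True
    ```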

  2. Risk factors for computer visual syndrome (CVS) among operators of two call centers in São Paulo, Brazil.

    Science.gov (United States)

    Sa, Eduardo Costa; Ferreira Junior, Mario; Rocha, Lys Esther

    2012-01-01

    The aims of this study were to investigate work conditions, to estimate the prevalence and to describe risk factors associated with Computer Vision Syndrome among the operators of two call centers in São Paulo (n = 476). The methods include a quantitative cross-sectional observational study and an ergonomic work analysis, using work observation, interviews and questionnaires. The case definition was the presence of one or more specific ocular symptoms answered as always, often or sometimes. The multiple logistic regression models were created using the stepwise forward likelihood method, retaining the variables with significance levels below 5% (p < 0.05); the most frequently reported symptom was related to vision (43.5%). The prevalence of Computer Vision Syndrome was 54.6%. Associations verified were: being female (OR 2.6, 95% CI 1.6 to 4.1), lack of recognition at work (OR 1.4, 95% CI 1.1 to 1.8), organization of work in the call center (OR 1.4, 95% CI 1.1 to 1.7) and high demand at work (OR 1.1, 95% CI 1.0 to 1.3). The organizational and psychosocial factors at work should be included in prevention programs of visual syndrome among call center operators.
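    The odds ratios above come from logistic regression, but the basic quantity can be illustrated from a 2x2 exposure-outcome table. The counts below are hypothetical, and the Woolf log-OR confidence interval shown is a standard approximation, not the paper's exact model:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and approximate 95% CI for a 2x2 table:
            exposed:   a cases, b non-cases
            unexposed: c cases, d non-cases
        """
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi

    print(odds_ratio_ci(50, 50, 25, 75))  # OR = 3.0 with its 95% CI
    ```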

  3. MINERVE flood warning and management project. What is computed, what is required and what is visualized?

    Science.gov (United States)

    Garcia Hernandez, J.; Boillat, J.-L.; Schleiss, A.

    2010-09-01

    During the last decades several flood events caused important inundations in the Upper Rhone River basin in Switzerland. As a response to such disasters, the MINERVE project aims to improve security by reducing damages in this basin. The main goal of this project is to predict floods in advance in order to obtain better flow control during flood peaks, taking advantage of the multireservoir system of the existing hydropower schemes. The MINERVE system evaluates the hydro-meteorological situation on the watershed and provides hydrological forecasts with a horizon of three to five days. It exploits flow measurements, data from reservoirs and hydropower plants as well as deterministic (COSMO-7 and COSMO-2) and ensemble (COSMO-LEPS) meteorological forecasts from MeteoSwiss. The hydrological model is based on a semi-distributed concept, dividing the watershed into 239 sub-catchments, themselves decomposed into elevation bands in order to describe the temperature-driven processes related to snow and glacier melt. The model is completed by rivers and hydraulic works such as water intakes, reservoirs, turbines and pumps. Once the hydrological forecasts are calculated, a report provides the warning level at selected control points over time, supporting decision-making for preventive actions. A Notice, Alert or Alarm is then activated depending on the discharge thresholds defined by the Valais Canton. Preventive operation scenarios are then generated based on observed discharge at control points, meteorological forecasts from MeteoSwiss, hydrological forecasts from MINERVE and retention possibilities in the reservoirs. An update of the situation is done every time new data or new forecasts are provided, keeping the last observations and last forecasts in the warning report. The forecasts can also be used for the evaluation of priority decisions concerning the management of hydropower plants for security purposes. Considering future inflows and reservoir levels
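    The Notice/Alert/Alarm logic at a control point reduces to comparing the forecast discharge with increasing thresholds. A sketch with hypothetical threshold values (the real thresholds are defined by the Valais Canton per control point):

    ```python
    def warning_level(discharge, notice, alert, alarm):
        """Return the warning level for a forecast discharge (m^3/s),
        given increasing Notice < Alert < Alarm thresholds."""
        if discharge >= alarm:
            return "Alarm"
        if discharge >= alert:
            return "Alert"
        if discharge >= notice:
            return "Notice"
        return "No warning"

    # Hypothetical thresholds for one control point:
    print(warning_level(480.0, notice=400.0, alert=500.0, alarm=600.0))  # Notice
    ```

    Re-running this check whenever new observations or forecasts arrive is what keeps the warning report current between model updates.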

  4. Visual cognition

    Energy Technology Data Exchange (ETDEWEB)

    Pinker, S.

    1985-01-01

    This book consists of essays covering issues in visual cognition, presenting experimental techniques from cognitive psychology, methods of modeling cognitive processes on computers from artificial intelligence, and methods of studying brain organization from neuropsychology. Topics considered include: parts of recognition; visual routines; upward direction; mental rotation and discrimination of left and right turns in maps; individual differences in mental imagery; computational analysis and the neurological basis of mental imagery; componential analysis.

  5. Computer aided vertebral visualization and analysis: a methodology using the sand rat, a small animal model of disc degeneration

    Directory of Open Access Journals (Sweden)

    Hanley Edward N

    2003-03-01

    Full Text Available Abstract Background The purpose of this study is to present an automated system that analyzes digitized x-ray images of small animal spines, identifying the effects of disc degeneration. The age-related disc and spine degeneration that occurs in the sand rat (Psammomys obesus) has previously been documented radiologically; selected representative radiographs with age-related changes were used here to develop computer-assisted vertebral visualization/analysis techniques. Techniques presented here have the potential to produce quantitative algorithms that create more accurate and informative measurements in a time-efficient manner. Methods Signal and image processing techniques were applied to digitized spine x-ray images: the spine was segmented, and its orientation and curvature determined. The image was segmented based on orientation changes of the spine; edge detection was performed to define vertebral boundaries. Once vertebrae were identified, a number of measures were introduced and calculated to retrieve information on the vertebral separation/orientation and sclerosis. Results A method is described which produces computer-generated quantitative measurements of vertebrae and disc spaces. Six sand rat spine radiographs illustrate applications of this technique. Results showed that this method can successfully automate calculation and analysis of vertebral length, vertebral spacing, and vertebral angle, and can score sclerosis. The techniques also provide quantitative means to explore the relation between age and vertebral shape. Conclusions This method provides a computationally efficient system to analyze spinal changes during aging. The techniques can be used to automate the quantitative processing of vertebral radiographic images and may be applicable to human and other animal radiologic models of the aging/degenerating spine.
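    Vertebral boundary detection along the segmented spine can be illustrated with a one-dimensional first-difference edge detector: walk an intensity profile along the spine axis and flag large jumps. This is a toy stand-in for the paper's image-processing pipeline, not its actual algorithm:

    ```python
    def edge_positions(profile, thresh):
        """Indices where the intensity profile jumps by more than thresh,
        i.e. candidate vertebra/disc-space boundaries along the spine axis."""
        return [i for i in range(len(profile) - 1)
                if abs(profile[i + 1] - profile[i]) > thresh]

    # A bright vertebra (intensity 10) flanked by disc spaces (intensity 0):
    print(edge_positions([0, 0, 10, 10, 10, 0, 0], thresh=5))  # [1, 4]
    ```

    Once such boundary pairs are found, vertebral length and spacing follow directly as distances between consecutive edges.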

  6. Fluorescent x-ray computed tomography to visualize specific material distribution

    Science.gov (United States)

    Takeda, Tohoru; Yuasa, Tetsuya; Hoshino, Atsunori; Akiba, Masahiro; Uchida, Akira; Kazama, Masahiro; Hyodo, Kazuyuki; Dilmanian, F. Avraham; Akatsuka, Takao; Itai, Yuji

    1997-10-01

    Fluorescent x-ray computed tomography (FXCT) is being developed to detect non-radioactive contrast materials in living specimens. The FXCT system consists of a silicon channel-cut monochromator, an x-ray slit and a collimator for detection, a scanning table for the target organ, and an x-ray detector for fluorescent and transmission x-rays. To reduce the Compton scattering overlapping the K(alpha) line, the incident monochromatic x-ray energy was set at 37 keV. At 37 keV, Monte Carlo simulation showed almost complete separation between Compton scattering and the K(alpha) line; actual experiments revealed a small contamination of Compton scattering on the K(alpha) line. A clear FXCT image of a phantom was obtained. Using this system the minimal detectable dose of iodine was 30 ng in a volume of 1 mm3, and a linear relationship was demonstrated between photon counts of fluorescent x-rays and the concentration of iodine contrast material. The use of high incident x-ray energy allows an increase in the signal-to-noise ratio by reducing the Compton scattering on the K(alpha) line.

  7. An alternating direction algorithm for two-phase flow visualization using gamma computed tomography.

    Science.gov (United States)

    Xue, Qian; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi

    2012-12-01

    In order to build high-speed imaging systems with low cost and low radiation leakage, the number of radioactive sources and detectors in the multiphase flow computed tomography (CT) system has to be limited. Moreover, systematic and random errors are inevitable in practical applications. The limited and corrupted measurement data have made the tomographic inversion process the most critical part of multiphase flow CT. Although various iterative reconstruction algorithms have been developed based on least squares minimization, the imaging quality is still inadequate for the reconstruction of relatively complicated bubble flow. This paper extends an alternating direction method (ADM), originally proposed in compressed sensing, to image two-phase flow using a low-energy γ-CT system. An l1-norm-based regularization technique is utilized to treat the ill-posedness of the inverse problem, and the image reconstruction model is reformulated into one having partially separable objective functions, whereafter a dual-based ADM is adopted to solve the resulting problem. The feasibility is demonstrated in prototype experiments. Comparisons between the ADM and conventional iterative algorithms show that the former obviously improves the spatial resolution in reasonable time.
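    The paper solves an l1-regularised reconstruction with a dual alternating direction method; a much simpler proximal-gradient (ISTA) sketch conveys the role of the l1 term and its soft-thresholding proximal operator. This is a stand-in for illustration, not the authors' ADM:

    ```python
    import numpy as np

    def soft_threshold(x, t):
        """Proximal operator of t * ||x||_1."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, b, lam=0.1, step=None, iters=500):
        """Minimise 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient."""
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
        return x

    # With A = I the minimiser is soft_threshold(b, lam): small coefficients
    # (noise) vanish, large ones shrink by lam. This sparsity-promoting
    # behaviour is what suppresses artefacts with few sources and detectors.
    print(ista(np.eye(4), np.array([3.0, 0.05, -2.0, 0.0]), lam=0.1, step=1.0))
    ```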

  8. Follow 1.1 - a program for visualization of Thermal-Hydraulic computer simulations. User's manual

    International Nuclear Information System (INIS)

    Hyvarinen, J.

    1990-04-01

    FOLLOW is a computer program designed to function as an analyst's aid when performing large thermal-hydraulic and related safety calculations using the well-known simulation codes RELAP5, MELCOR, SMABRE and TRAB. The code is a by-product of the effort to improve the analysis capabilities of the Finnish Centre for Radiation and Nuclear Safety (STUK). FOLLOW's most important application is as an on-line 'window' into the progress of the simulation calculation. Thermal-hydraulic analyses related to nuclear safety routinely require very long calculation times; FOLLOW provides the possibility to follow the course of the simulation and thus make observations on the results already during the simulation. FOLLOW's various outputs have been designed to mimic those available at a nuclear power plant operator's console, so FOLLOW can also be used much like a nuclear power plant simulator. This manual describes the usage, features and input requirements of FOLLOW version 1.1, including a sample problem input and various outputs. (orig.)

  9. Data-driven storytelling

    CERN Document Server

    Hurter, Christophe; Diakopoulos, Nicholas ed.; Carpendale, Sheelagh

    2018-01-01

    This book is an accessible introduction to data-driven storytelling, resulting from discussions between data visualization researchers and data journalists. This book will be the first to define the topic, present compelling examples and existing resources, as well as identify challenges and new opportunities for research.

  10. Computational fluid dynamics simulation of wind-driven inter-unit dispersion around multi-storey buildings: Upstream building effect

    DEFF Research Database (Denmark)

    Ai, Zhengtao; Mak, C.M.; Dai, Y.W.

    2017-01-01

    of such changed airflow patterns on inter-unit dispersion characteristics around a multi-storey building due to wind effect. The computational fluid dynamics (CFD) method in the framework of Reynolds-averaged Navier-Stokes modelling was employed to predict the coupled outdoor and indoor airflow field, and the tracer...... gas technique was used to simulate the dispersion of infectious agents between units. Based on the predicted concentration field, a mass-conservation-based parameter, namely the re-entry ratio, was used to evaluate quantitatively the inter-unit dispersion possibilities and thus assess risks along

  11. Visual vs Fully Automatic Histogram-Based Assessment of Idiopathic Pulmonary Fibrosis (IPF) Progression Using Sequential Multidetector Computed Tomography (MDCT)

    Science.gov (United States)

    Colombi, Davide; Dinkel, Julien; Weinheimer, Oliver; Obermayer, Berenike; Buzan, Teodora; Nabers, Diana; Bauer, Claudia; Oltmanns, Ute; Palmowski, Karin; Herth, Felix; Kauczor, Hans Ulrich; Sverzellati, Nicola

    2015-01-01

    Objectives To describe changes over time in the extent of idiopathic pulmonary fibrosis (IPF) at multidetector computed tomography (MDCT) assessed by semi-quantitative visual scores (VSs) and fully automatic histogram-based quantitative evaluation, and to test the relationship between these two methods of quantification. Methods Forty IPF patients (median age: 70 y, interquartile: 62-75 years; M:F, 33:7) that underwent 2 MDCT at different time points with a median interval of 13 months (interquartile: 10-17 months) were retrospectively evaluated. In-house software YACTA automatically quantified the lung density histogram (10th-90th percentile in 5th percentile steps). Longitudinal changes in VSs and in the percentiles of the attenuation histogram were obtained in 20 untreated patients and 20 patients treated with pirfenidone. Pearson correlation analysis was used to test the relationship between VSs and selected percentiles. Results In follow-up MDCT, the visual overall extent of parenchymal abnormalities (OE) increased in median by 5%/year (interquartile: 0%/y; +11%/y). A substantial difference was found between treated and untreated patients in HU changes of the 40th and of the 80th percentiles of the density histogram. Correlation analysis between VSs and selected percentiles showed the highest correlation between the changes (Δ) in OE and Δ 40th percentile (r=0.69; p<0.05). Conclusions Histogram analysis can complement visual scoring at one-year follow-up of IPF patients, whether treated or untreated: Δ 40th percentile might reflect the change in overall extent of lung abnormalities, notably of ground-glass pattern; furthermore Δ 80th percentile might reveal the course of reticular opacities. PMID:26110421
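    The fully automatic evaluation reduces each scan to percentiles of the lung attenuation histogram; computing the 10th-90th percentiles in 5-point steps is a one-liner. This is illustrative only; YACTA's lung masking and calibration steps are omitted:

    ```python
    import numpy as np

    def attenuation_percentiles(hu_values, lo=10, hi=90, step=5):
        """Selected percentiles of a lung density histogram (HU values)."""
        qs = list(range(lo, hi + 1, step))
        return dict(zip(qs, np.percentile(hu_values, qs)))

    hu = np.arange(-1000, 1)   # toy voxel values, -1000..0 HU
    p = attenuation_percentiles(hu)
    print(p[40], p[80])        # the two percentiles tracked in the study
    ```

    Longitudinal change is then just the difference of a given percentile between baseline and follow-up scans (the Δ 40th and Δ 80th percentiles above).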

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase the availability of more sites, such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4-times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  13. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores-neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
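    Divisive normalization, the computation this study probes, divides each unit's driven response by pooled activity across the population; a canonical textbook form is R_i = d_i^n / (sigma^n + sum_j d_j^n). A minimal sketch of that formula (parameter values are illustrative, not the study's fitted model):

    ```python
    def divisive_normalization(drives, sigma=1.0, n=2.0):
        """Canonical normalization: R_i = d_i**n / (sigma**n + sum_j d_j**n)."""
        pool = sigma ** n + sum(d ** n for d in drives)
        return [d ** n / pool for d in drives]

    # Adding a second stimulus suppresses the response to the first,
    # the signature the study found in perception but not in memory:
    alone = divisive_normalization([1.0])[0]
    paired = divisive_normalization([1.0, 1.0])[0]
    print(alone, paired)  # 0.5 vs ~0.33
    ```

    The study's claim is that this mutual-suppression signature appears between concurrent percepts but not between items held in working memory.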

  14. On the future of 3-D visualization in non-medical industrial x-ray computed tomography

    International Nuclear Information System (INIS)

    Wells, J.M.

    2004-01-01

    The purpose of imaging is to capture and record the details of an object for both current and future analysis in a transportable and archival format. Developing an understanding of the relationships among the features of interest revealed in the image is ultimately essential for the beneficial use of that knowledge. Modern advanced imaging methods used in both medical and industrial applications are predominantly digital, and are increasingly moving from a 2-D to a 3-D modality to allow significantly improved detail resolution and clarity of volumetric visualization. Conventional digital radiography (DR), for example, compresses an entire object volume onto a 2-D planar image, with a consequent lack of spatial resolution and considerable loss of small-volume feature resolution. Computed tomography (CT) overcomes both of these limitations, providing the highly desirable capability of precise 3-D detection, localization and characterization of multiple features throughout the object volume. CT has the further capability to reconstruct virtual 3-D solid object images with arbitrary and reversible planar sectioning and variable transparency, to clearly visualize features of different densities in situ within an otherwise opaque object. While tomographic imaging is utilized in various medical CT, MRI, PET, EBCT and 3-D ultrasound modalities, only X-ray CT imaging is briefly discussed here, as it presents comparably high-quality images and is quite similar and synergistic with industrial XCT. Medical CT procedures started in the late 1970s (originally known as CAT scans) and have progressed to the extent of being experienced and accepted by much of the general population. Non-medical CT (or industrial XCT) technology has historically followed in the shadow of medical CT but remains considerably less pervasive today. There are, however, several increasingly important equipment and application distinctions. These will

  15. Analysis of User Interaction with a Brain-Computer Interface Based on Steady-State Visually Evoked Potentials: Case Study of a Game.

    Science.gov (United States)

    Leite, Harlei Miguel de Arruda; de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares

    2018-01-01

    This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named "Get Coins," through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user.

  16. Barriers to the Use of Computer Assistive Technology among Students with Visual Impairment in Ghana: The Case of Akropong School for the Blind

    Science.gov (United States)

    Ampratwum, Joseph; Offei, Yaw Nyadu; Ntoaduro, Afua

    2016-01-01

    The study aimed at exploring barriers to the use of computer assistive technology among students with visual impairment at Akropong School for the Blind. A case study design was adopted and the purposive sampling technique used to select 35 participants for the study. The researchers gathered qualitative data using an in-depth interview guide to…

  17. Use and Effectiveness of a Video- and Text-Driven Web-Based Computer-Tailored Intervention: Randomized Controlled Trial.

    Science.gov (United States)

    Walthouwer, Michel Jean Louis; Oenema, Anke; Lechner, Lilian; de Vries, Hein

    2015-09-25

    Many Web-based computer-tailored interventions are characterized by high dropout rates, which limit their potential impact. This study had 4 aims: (1) examining if the use of a Web-based computer-tailored obesity prevention intervention can be increased by using videos as the delivery format, (2) examining if the delivery of intervention content via participants' preferred delivery format can increase intervention use, (3) examining if intervention effects are moderated by intervention use and matching or mismatching intervention delivery format preference, and (4) identifying which sociodemographic factors and intervention appreciation variables predict intervention use. Data were used from a randomized controlled study into the efficacy of a video and text version of a Web-based computer-tailored obesity prevention intervention consisting of a baseline measurement and a 6-month follow-up measurement. The intervention consisted of 6 weekly sessions and could be used for 3 months. ANCOVAs were conducted to assess differences in use between the video and text version and between participants allocated to a matching and mismatching intervention delivery format. Potential moderation by intervention use and matching/mismatching delivery format on self-reported body mass index (BMI), physical activity, and energy intake was examined using regression analyses with interaction terms. Finally, regression analysis was performed to assess determinants of intervention use. In total, 1419 participants completed the baseline questionnaire (follow-up response=71.53%, 1015/1419). Intervention use declined rapidly over time; the first 2 intervention sessions were completed by approximately half of the participants and only 10.9% (104/956) of the study population completed all 6 sessions of the intervention. There were no significant differences in use between the video and text version. Intervention use was significantly higher among participants who were allocated to an

  18. Visually impaired researchers get their hands on quantum chemistry: application to a computational study on the isomerization of a sterol

    Science.gov (United States)

    Lounnas, Valère; Wedler, Henry B.; Newman, Timothy; Schaftenaar, Gijs; Harrison, Jason G.; Nepomuceno, Gabriella; Pemberton, Ryan; Tantillo, Dean J.; Vriend, Gert

    2014-11-01

    In molecular sciences, articles tend to revolve around 2D representations of 3D molecules, and sighted scientists often resort to 3D virtual reality software to study these molecules in detail. Blind and visually impaired (BVI) molecular scientists have access to a series of audio devices that can help them read the text in articles and work with computers. Reading articles published in this journal, though, is nearly impossible for them because they need to generate mental 3D images of molecules, but the article-reading software cannot do that for them. We have previously designed AsteriX, a web server that fully automatically decomposes articles, detects 2D plots of low molecular weight molecules, removes meta data and annotations from these plots, and converts them into 3D atomic coordinates. AsteriX-BVI goes one step further and converts the 3D representation into a 3D printable, haptic-enhanced format that includes Braille annotations. These Braille-annotated physical 3D models allow BVI scientists to generate a complete mental model of the molecule. AsteriX-BVI uses Molden to convert the meta data of quantum chemistry experiments into BVI friendly formats so that the entire line of scientific information that sighted people take for granted—from published articles, via printed results of computational chemistry experiments, to 3D models—is now available to BVI scientists too. The possibilities offered by AsteriX-BVI are illustrated by a project on the isomerization of a sterol, executed by the blind co-author of this article (HBW).

  19. Enhancing Assisted Living Technology with Extended Visual Memory

    Directory of Open Access Journals (Sweden)

    Joo-Hwee Lim

    2011-05-01

    Full Text Available Human vision and memory are powerful cognitive faculties by which we understand the world. However, they are imperfect and, further, subject to deterioration with age. We propose a cognitive-inspired computational model, Extended Visual Memory (EVM), within the Computer-Aided Vision (CAV) framework, to assist humans in vision-related tasks. We exploit wearable sensors such as cameras, GPS and ambient computing facilities to complement a user's vision and memory functions by answering four types of queries central to visual activities, namely, Retrieval, Understanding, Navigation and Search. Learning of EVM relies on both frequency-based and attention-driven mechanisms to store view-based visual fragments (VF), which are abstracted into high-level visual schemas (VS), both in the visual long-term memory. During inference, the visual short-term memory plays a key role in visual similarity computation between input (or its schematic representation) and VF, exemplified from VS when necessary. We present an assisted living scenario, termed EViMAL (Extended Visual Memory for Assisted Living), targeted at mild dementia patients, to provide novel functions such as hazard warning, visual reminders, object look-up and event review. We envisage EVM as having potential benefits in alleviating memory loss, improving recall precision and enhancing memory capacity through external support.

  20. Examining sensory ability, feature matching and assessment-based adaptation for a brain-computer interface using the steady-state visually evoked potential.

    Science.gov (United States)

    Brumberg, Jonathan S; Nguyen, Anh; Pitt, Kevin M; Lorenz, Sean D

    2018-01-31

    We investigated how overt visual attention and oculomotor control influence successful use of a visual feedback brain-computer interface (BCI) for accessing augmentative and alternative communication (AAC) devices in a heterogeneous population of individuals with profound neuromotor impairments. BCIs are often tested within a single patient population, limiting generalization of results. This study focuses on examining individual sensory abilities with an eye toward possible interface adaptations to improve device performance. Five individuals with a range of neuromotor disorders participated in a four-choice BCI control task involving the steady-state visually evoked potential. The BCI graphical interface was designed to simulate a commercial AAC device to examine whether an integrated device could be used successfully by individuals with neuromotor impairment. All participants were able to interact with the BCI, and the highest performance was found for participants able to employ an overt visual attention strategy. For participants with visual deficits due to impaired oculomotor control, effective performance increased after accounting for mismatches between the graphical layout and participant visual capabilities. As BCIs are translated from research environments to clinical applications, the assessment of BCI-related skills will help facilitate proper device selection and provide individuals who use BCI the greatest likelihood of immediate and long-term communicative success. Overall, our results indicate that adaptations can be an effective strategy to reduce barriers and increase access to BCI technology. These efforts should be directed by comprehensive assessments for matching individuals to the most appropriate device to support their complex communication needs. Implications for Rehabilitation: Brain-computer interfaces using the steady-state visually evoked potential can be integrated with an augmentative and alternative communication device to provide access

  1. Effect of instructive visual stimuli on neurofeedback training for motor imagery-based brain-computer interface.

    Science.gov (United States)

    Kondo, Toshiyuki; Saeki, Midori; Hayashi, Yoshikatsu; Nakayashiki, Kosei; Takata, Yohei

    2015-10-01

    Event-related desynchronization (ERD) of the electroencephalogram (EEG) from the motor cortex is associated with execution, observation, and mental imagery of motor tasks. Generation of ERD by motor imagery (MI) has been widely used for brain-computer interfaces (BCIs) linked to neuroprosthetics and other motor assistance devices. Control of MI-based BCIs can be acquired by neurofeedback training to reliably induce MI-associated ERD. To develop more effective training conditions, we investigated the effect of static and dynamic visual representations of target movements (a picture of forearms or a video clip of hand grasping movements) during the BCI neurofeedback training. After 4 consecutive training days, the group that performed MI while viewing the video showed significant improvement in generating MI-associated ERD compared with the group that viewed the static image. This result suggests that passively observing the target movement during MI would improve the associated mental imagery and enhance MI-based BCIs skills. Copyright © 2014 Elsevier B.V. All rights reserved.
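    The ERD measure used as the neurofeedback training target is conventionally reported as a percentage change in band power relative to a pre-task baseline (Pfurtscheller's classic definition); the sketch below shows that arithmetic with invented example values.

```python
def erd_percent(task_power, baseline_power):
    """Event-related (de)synchronization as percent change from baseline.
    Negative values indicate desynchronization (a power decrease)."""
    return (task_power - baseline_power) / baseline_power * 100.0

# Hypothetical example: mu-band power dropping from 12 to 9 (arbitrary
# units) during motor imagery corresponds to a 25% desynchronization.
print(erd_percent(9.0, 12.0))  # -25.0
```

    Stronger (more negative) ERD over the motor cortex is what the video-observation group learned to induce more reliably.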

  2. Sequence detection analysis based on canonical correlation for steady-state visual evoked potential brain computer interfaces.

    Science.gov (United States)

    Cao, Lei; Ju, Zhengyu; Li, Jie; Jian, Rongjun; Jiang, Changjun

    2015-09-30

    Steady-state visual evoked potential (SSVEP) has been widely applied to develop brain computer interface (BCI) systems. The essence of SSVEP recognition is to identify the frequency component of the target stimulus focused on by a subject that is significantly present in the EEG spectrum. In this paper, a novel statistical approach based on sequence detection (SD) is proposed for improving the performance of SSVEP recognition. This method uses canonical correlation analysis (CCA) coefficients to observe the SSVEP signal sequence, and then a threshold strategy is utilized for SSVEP recognition. Results showed that longer time windows achieved higher classification accuracy for most subjects, and the average time cost per trial was lower than the predefined recognition time, implying that our approach could improve the speed of a BCI system in contrast to other methods. Comparison with existing method(s): The experimental accuracy of the SD approach was better than that of a widely used CCA-based method and two newly proposed algorithms, the least absolute shrinkage and selection operator (LASSO) recognition model and the multivariate synchronization index (MSI) method. Furthermore, the information transfer rate (ITR) obtained by the SD approach was higher than those of the other three methods for most participants. These conclusions demonstrate that our proposed method is promising for a high-speed online BCI. Copyright © 2015 Elsevier B.V. All rights reserved.
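    The CCA step this method builds on can be sketched as follows. Correlating the multichannel EEG against sine/cosine references at each candidate frequency (and its harmonics) is the standard CCA-SSVEP recipe; all function names, parameters, and the simulated data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the columns of X and Y,
    computed via QR orthonormalization and an SVD."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_classify(eeg, freqs, fs, harmonics=2):
    """Pick the candidate stimulus frequency whose sine/cosine reference
    set correlates best with the EEG (array of samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        ref = np.column_stack([fn(2 * np.pi * h * f * t)
                               for h in range(1, harmonics + 1)
                               for fn in (np.sin, np.cos)])
        scores.append(max_canonical_corr(eeg, ref))
    return freqs[int(np.argmax(scores))]
```

    The paper's sequence-detection layer then thresholds these CCA coefficients across successive windows rather than deciding from a single score.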

  3. Three-dimensional computational fluid dynamics analysis of buoyancy-driven natural ventilation and entropy generation in a prismatic greenhouse

    Directory of Open Access Journals (Sweden)

    Aich Walid

    2018-01-01

    Full Text Available A computational analysis of the natural ventilation process and entropy generation in a 3-D prismatic greenhouse was performed using CFD. The aim of the study is to investigate how buoyancy forces influence air-flow and temperature patterns inside the greenhouse, which has a lower-level opening in its right heated façade and an upper-level opening near the roof top in the opposite cooled façade. The bottom and all other walls are assumed to be perfect thermal insulators. The Rayleigh number is the main parameter, varied from 10³ to 10⁶, and the Prandtl number is fixed at Pr = 0.71. Results are reported in terms of particle trajectories, iso-surfaces of temperature, mean Nusselt number, and entropy generation. It has been found that the flow structure is sensitive to the value of the Rayleigh number and that heat transfer increases with increasing this parameter. It has also been noticed that using asymmetric opening positions improves the natural ventilation and facilitates the occurrence of buoyancy-induced upward cross air-flow (low-level supply and upper-level extraction) inside the greenhouse.
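    The two dimensionless groups the study varies and fixes follow their standard definitions for buoyancy-driven convection; in the notation below (standard symbols, not taken from the record), L is the characteristic length, T_h and T_c the hot and cold façade temperatures, ν the kinematic viscosity, α the thermal diffusivity and β the thermal expansion coefficient:

```latex
\mathrm{Ra} = \frac{g\,\beta\,(T_h - T_c)\,L^{3}}{\nu\,\alpha},
\qquad
\mathrm{Pr} = \frac{\nu}{\alpha}
```

    Pr = 0.71 corresponds to air near ambient conditions, which is why it is held fixed while Ra is swept.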

  4. A Steady-State Visual Evoked Potential Brain-Computer Interface System Evaluation as an In-Vehicle Warning Device

    Science.gov (United States)

    Riyahi, Pouria

    This thesis is part of current research at the Center for Intelligence Systems Research (CISR) at The George Washington University on developing new in-vehicle warning systems via Brain-Computer Interfaces (BCIs). The purpose of conducting this research is to help close the current gap between BCI and in-vehicle safety studies. It is based on the premise that accurate and timely monitoring of the human (driver) brain's signals in response to external stimuli could significantly aid in the detection of driver intentions and the development of effective warning systems. The thesis starts by introducing the concept of BCI and its development history, and provides a literature review on the nature of brain signals. The current advancement and increasing demand for commercial and non-medical BCI products are described. In addition, recent research attempts in transportation safety to study drivers' behavior or responses through brain signals are reviewed. Safety studies focused on employing a reliable and practical BCI system as an in-vehicle assistive device are also introduced. A major focus of this thesis research has been the evaluation and development of signal processing algorithms that can effectively filter and process brain signals when the human subject is exposed to visual LED (Light Emitting Diode) stimuli at different frequencies. The stimulated brain generates a voltage potential, referred to as the Steady-State Visual Evoked Potential (SSVEP). Therefore, a newly modified analysis algorithm for detecting these brain visual signals is proposed. The algorithms are designed to reach a satisfactory accuracy rate without preliminary training, eliminating the need for lengthy training of human subjects. Another important concern is the ability of the algorithms to find the correlation of brain signals with external visual stimuli in real-time. The developed analysis models are based on algorithms which are capable of generating results

  5. X-ray phase-contrast computed tomography visualizes the microstructure and degradation profile of implanted biodegradable scaffolds after spinal cord injury

    Energy Technology Data Exchange (ETDEWEB)

    Takashima, Kenta, E-mail: takashima-k@med.tohoku.ac.jp [Tohoku University Graduate School of Medicine, Sendai (Japan); University of Tokyo, Tokyo (Japan); Hoshino, Masato; Uesugi, Kentaro; Yagi, Naoto [SPring-8, Hyogo (Japan); Matsuda, Shojiro [Gunze Limited, Shiga (Japan); Nakahira, Atsushi [Osaka Prefecture University, Osaka (Japan); Osumi, Noriko; Kohzuki, Masahiro [Tohoku University Graduate School of Medicine, Sendai (Japan); Onodera, Hiroshi [University of Tokyo, Tokyo (Japan)

    2015-01-01

    X-ray phase-contrast computed tomography imaging based on the Talbot grating interferometer is described, and the way it can visualize the polyglycolic acid scaffold, including its microfibres, after implantation into the injured spinal cord is shown. Tissue engineering strategies for spinal cord repair are a primary focus of translational medicine after spinal cord injury (SCI). Many tissue engineering strategies employ three-dimensional scaffolds, which are made of biodegradable materials and have microstructure incorporated with viable cells and bioactive molecules to promote new tissue generation and functional recovery after SCI. It is therefore important to develop an imaging system that visualizes both the microstructure of three-dimensional scaffolds and their degradation process after SCI. Here, X-ray phase-contrast computed tomography imaging based on the Talbot grating interferometer is described and it is shown how it can visualize the polyglycolic acid scaffold, including its microfibres, after implantation into the injured spinal cord. Furthermore, X-ray phase-contrast computed tomography images revealed that degradation occurred from the end to the centre of the braided scaffold in the 28 days after implantation into the injured spinal cord. The present report provides the first demonstration of an imaging technique that visualizes both the microstructure and degradation of biodegradable scaffolds in SCI research. X-ray phase-contrast imaging based on the Talbot grating interferometer is a versatile technique that can be used for a broad range of preclinical applications in tissue engineering strategies.

  6. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    Directory of Open Access Journals (Sweden)

    Sebastian McBride

    Full Text Available Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.

  7. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    Science.gov (United States)

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
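    Several of the identified requirements (convergence of top-down and bottom-up information, a saccade threshold, inhibition of return, and a 'priority map') can be caricatured in a few lines. This is a loose sketch of those ideas only; the array shapes, multiplicative combination rule and threshold value are assumptions for illustration, not the authors' model.

```python
import numpy as np

def next_saccade(bottom_up, top_down, inhibition, threshold=0.5):
    """Combine bottom-up saliency with top-down task relevance into a
    priority map, suppress locations under inhibition of return, and
    trigger a saccade to the peak only if it exceeds a threshold.
    All inputs are same-shaped 2-D arrays; returns (row, col) or None."""
    priority = bottom_up * top_down * (1.0 - inhibition)
    peak = np.unravel_index(np.argmax(priority), priority.shape)
    if priority[peak] < threshold:
        return None  # requirement 5: sub-threshold, no saccade elicited
    return (int(peak[0]), int(peak[1]))

bu = np.array([[0.1, 0.9], [0.2, 0.1]])  # bottom-up saliency
td = np.ones_like(bu)                    # uniform task relevance
ior = np.zeros_like(bu)                  # nothing inhibited yet
print(next_saccade(bu, td, ior))  # (0, 1)
```

    Inhibiting the previously fixated location (requirement 2) and recomputing the map is what prevents the system from saccading to the same peak repeatedly.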

  8. Using 3D in Visualization

    DEFF Research Database (Denmark)

    Wood, Jo; Kirschenbauer, Sabine; Döllner, Jürgen

    2005-01-01

    to display 3D imagery. The extra cartographic degree of freedom offered by using 3D is explored and offered as a motivation for employing 3D in visualization. The use of VR and the construction of virtual environments exploit navigational and behavioral realism, but become most useful when combined ... with abstracted representations embedded in a 3D space. The interactions between the development of geovisualization, the technology used to implement it and the theory surrounding cartographic representation are explored. The dominance of computing technologies, driven particularly by the gaming industry...

  9. Evaluation of visual and computer-based CT analysis for the identification of functional patterns of obstruction and restriction in hypersensitivity pneumonitis.

    Science.gov (United States)

    Jacob, Joseph; Bartholmai, Brian J; Brun, Anne Laure; Egashira, Ryoko; Rajagopalan, Srinivasan; Karwoski, Ronald; Kouranos, Vasileios; Kokosi, Maria; Hansell, David M; Wells, Athol U

    2017-11-01

    To determine whether computer-based quantification (CALIPER software) is superior to visual computed tomography (CT) scoring in the identification of CT patterns indicative of restrictive and obstructive functional indices in hypersensitivity pneumonitis (HP). A total of 135 consecutive HP patients had CT parenchymal patterns evaluated quantitatively by both visual scoring and CALIPER. Results were evaluated against: forced vital capacity (FVC), total lung capacity (TLC), diffusing capacity for carbon monoxide (DLCO) and a composite physiological index (CPI) to identify which CT scoring method better correlated with functional indices. CALIPER-derived scores of total interstitial lung disease extent correlated more strongly than visual scores: FVC (CALIPER R = 0.73, visual R = 0.51); DLCO (CALIPER R = 0.61, visual R = 0.48); and CPI (CALIPER R = 0.70, visual R = 0.55). The CT variable that correlated most strongly with restrictive functional indices was CALIPER pulmonary vessel volume (PVV): FVC R = 0.75, DLCO R = 0.68 and CPI R = 0.76. Ground-glass opacity quantified by CALIPER alone demonstrated strong associations with restrictive functional indices: CALIPER FVC R = 0.65; DLCO R = 0.59; CPI R = 0.64; and visual = not significant. Decreased attenuation lung quantified by CALIPER was a better morphological measure of obstructive lung disease than equivalent visual scores as judged by relationships with TLC (CALIPER R = 0.63 and visual R = 0.12). All results were maintained on multivariate analysis. CALIPER improved on visual scoring in HP as judged by restrictive and obstructive functional correlations. Decreased attenuation regions of the lung quantified by CALIPER demonstrated better linkages to obstructive lung physiology than visually quantified CT scores. A novel CALIPER variable, the PVV, demonstrated the strongest linkages with restrictive functional indices and could represent a new
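    The R values reported throughout are ordinary correlation coefficients between a CT-derived score and a functional index. As a generic illustration only (the data values below are invented, not from the study), Pearson's R can be computed as:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical CT disease-extent scores vs. FVC (% predicted): greater
# extent tracking lower FVC yields a strong negative correlation.
print(round(pearson_r([10, 25, 40, 55], [95, 80, 62, 50]), 2))  # -1.0
```

    A more negative R between disease extent and FVC (or a more positive one with CPI) is what "correlated more strongly" means in the comparison above.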

  10. Teach Yourself VISUALLY iPad

    CERN Document Server

    Watson, Lonzell

    2010-01-01

    An ideal, visual guide for the image-driven iPad. Whether your interests veer towards movies, games, books, or music—the iPad is the computing device for dazzling graphics, crisp and clear audio, and effortless portability. If ever there existed a device that demanded a reading companion for the visual learner, it's the iPad—and this resource is perfectly suited for the visual audience. Veteran VISUAL author Lonzell Watson walks you through all the features unique to the iPad and shows you how to download books, apps, music, and video content, as well as send photos and e-mails. Plus, you'll d

  11. When and why might a Computer Aided Detection (CAD) system interfere with visual search? An eye-tracking study

    Science.gov (United States)

    Drew, Trafton; Cunningham, Corbin; Wolfe, Jeremy

    2012-01-01

    Rationale and Objectives: Computer Aided Detection (CAD) systems are intended to improve performance. This study investigates how CAD might actually interfere with a visual search task. This is a laboratory study with implications for clinical use of CAD. Methods: 47 naïve observers in two studies were asked to search for a target embedded in 1/f^2.4 noise while we monitored their eye movements. For some observers, a CAD system marked 75% of targets and 10% of distractors, while other observers completed the study without CAD. In Experiment 1, the CAD system's primary function was to tell observers where the target might be. In Experiment 2, CAD provided information about target identity. Results: In Experiment 1, there was a significant enhancement of observer sensitivity in the presence of CAD (t(22)=4.74, pCAD system were missed more frequently than equivalent targets in No CAD blocks of the experiment (t(22)=7.02, pCAD, but also no significant cost on sensitivity to unmarked targets (t(22)=0.6, p=n.s.). Finally, in both experiments, CAD produced reliable changes in eye movements: CAD observers examined a lower total percentage of the search area than the No CAD observers (Ex 1: t(48)=3.05, pCAD signals do not combine with observers' unaided performance in a straightforward manner. CAD can engender a sense of certainty that can lead to incomplete search and elevated chances of missing unmarked stimuli. PMID:22958720

  12. TH-CD-206-12: Image-Based Motion Estimation for Plaque Visualization in Coronary Computed Tomography Angiography

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, X; Sisniega, A; Zbijewski, W; Stayman, J [Johns Hopkins University, Baltimore, MD (United States); Contijoch, F; McVeigh, E [University of California, San Diego, San Diego, CA (United States)

    2016-06-15

    Purpose: Visualization and quantification of coronary artery calcification and atherosclerotic plaque benefit from elimination of coronary artery motion (CAM) artifacts. This work applies a rigid linear motion model to a Volume of Interest (VoI) for motion estimation and compensation of image degradation in Coronary Computed Tomography Angiography (CCTA). Methods: In both simulation and testbench experiments, translational CAM was generated by displacement of the imaging object (i.e. a simulated coronary artery and an explanted human heart) by ∼8 mm, approximating the motion of a main coronary branch. Rotation was assumed to be negligible. A motion-degraded region containing a calcification was selected as the VoI. Local residual motion was assumed to be rigid and linear over the acquisition window, simulating motion observed during diastasis. The (negative) magnitude of the image gradient of the reconstructed VoI was chosen as the motion estimation objective and was minimized with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Results: Reconstruction incorporating the estimated CAM yielded significant recovery of fine calcification structures as well as reduced motion artifacts within the selected local region. The compensated reconstruction was further evaluated using two image similarity metrics, the structural similarity index (SSIM) and Root Mean Square Error (RMSE). At the calcification site, the compensated data achieved a 3% increase in SSIM and a 91.2% decrease in RMSE in comparison with the uncompensated reconstruction. Conclusion: Results demonstrate the feasibility of our image-based motion estimation method exploiting a local rigid linear model for CAM compensation. The method shows promising preliminary results for the application of such estimation in CCTA. Further work will involve motion estimation of complex motion-corrupted patient data acquired from a clinical CT scanner.
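    The objective the method minimizes, the negative magnitude of the image gradient over the VoI, rewards sharp reconstructions (motion blur smears edges and shrinks gradients). The sketch below shows only that cost function on a toy 2-D image; the CMA-ES search over rigid-motion parameters and the reconstruction loop are not reproduced, and the "smearing" used in the demo is an invented stand-in for motion degradation.

```python
import numpy as np

def sharpness_cost(voi):
    """Negative mean gradient magnitude of a region of interest.
    Sharper (better motion-compensated) images give lower values; in the
    abstract this objective is minimized with CMA-ES over the motion
    model's parameters."""
    gy, gx = np.gradient(voi.astype(float))
    return -np.mean(np.hypot(gx, gy))

# A crudely "smeared" copy of a random binary image scores worse
# (higher cost) than the sharp original.
rng = np.random.default_rng(1)
img = (rng.random((32, 32)) > 0.5).astype(float)
smeared = (img + np.roll(img, 1, axis=1)) / 2.0
print(sharpness_cost(img) < sharpness_cost(smeared))  # True
```

    In the full method, each candidate motion estimate changes the reconstructed VoI itself, and the optimizer keeps the candidate whose reconstruction minimizes this cost.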

  13. An automated and fast approach to detect single-trial visual evoked potentials with application to brain-computer interface.

    Science.gov (United States)

    Tu, Yiheng; Hung, Yeung Sam; Hu, Li; Huang, Gan; Hu, Yong; Zhang, Zhiguo

    2014-12-01

    This study aims (1) to develop an automated and fast approach for detecting visual evoked potentials (VEPs) in single trials and (2) to apply the single-trial VEP detection approach in designing a real-time and high-performance brain-computer interface (BCI) system. The single-trial VEP detection approach uses common spatial pattern (CSP) as a spatial filter and wavelet filtering (WF) as a temporal-spectral filter to jointly enhance the signal-to-noise ratio (SNR) of single-trial VEPs. The performance of the joint spatial-temporal-spectral filtering approach was assessed in a four-command VEP-based BCI system. The offline classification accuracy of the BCI system was significantly improved from 67.6±12.5% (raw data) to 97.3±2.1% (data filtered by CSP and WF). The proposed approach was successfully implemented in an online BCI system, where subjects could make 20 decisions in one minute with a classification accuracy of 90%. The proposed single-trial detection approach is able to obtain robust and reliable VEP waveforms in an automatic and fast way and is applicable in VEP-based online BCI systems. This approach provides a real-time and automated solution for single-trial detection of evoked potentials or event-related potentials (EPs/ERPs) in various paradigms, which could benefit many applications such as BCI and intraoperative monitoring. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
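    The CSP spatial filter mentioned above is commonly computed as a generalized eigendecomposition of the two class-conditional channel covariance matrices. The sketch below shows that textbook formulation under assumed array shapes; it is not the paper's code, and real EEG pipelines add regularization and band-pass filtering first.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a, trials_b):
        """Common spatial patterns for two classes of multichannel trials.
        trials_* have shape (n_trials, n_channels, n_samples). Returns a
        (n_channels, n_channels) matrix whose first rows maximize class-A
        variance and whose last rows maximize class-B variance."""
        def mean_cov(trials):
            # Trace-normalized covariance, averaged over trials.
            covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
            return np.mean(covs, axis=0)

        ca, cb = mean_cov(trials_a), mean_cov(trials_b)
        # Generalized symmetric eigenproblem Ca w = lambda (Ca + Cb) w;
        # eigh returns ascending eigenvalues, so flip to descending.
        vals, vecs = eigh(ca, ca + cb)
        order = np.argsort(vals)[::-1]
        return vecs[:, order].T  # rows are spatial filters
    ```

    Projecting single trials through the first and last few filter rows, then applying the temporal-spectral (wavelet) filter, yields the enhanced single-trial VEPs that feed the classifier.
    
    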

  14. Internal structures of scaffold-free 3D cell cultures visualized by synchrotron radiation-based micro-computed tomography

    Science.gov (United States)

    Saldamli, Belma; Herzen, Julia; Beckmann, Felix; Tübel, Jutta; Schauwecker, Johannes; Burgkart, Rainer; Jürgens, Philipp; Zeilhofer, Hans-Florian; Sader, Robert; Müller, Bert

    2008-08-01

    Recently, the importance of the third dimension in cell biology has come to be better understood, resulting in a re-orientation towards three-dimensional (3D) cultivation; adequate tools for the morphological characterization of such cultures, however, have yet to be established. Synchrotron radiation-based micro computed tomography (SRμCT) allows visualizing such biological systems non-destructively with almost isotropic micrometer resolution. We have applied SRμCT for studying the internal morphology of human osteoblast-derived, scaffold-free 3D cultures, termed histoids. Primary human osteoblasts, isolated from femoral neck spongy bone, were grown as a 2D culture in non-mineralizing osteogenic medium until a rather thick, multi-cellular membrane was formed. This delicate system was intentionally released to randomly fold itself. The folded cell cultures were grown to histoids of cubic milli- or centimeter size in various combinations of mineralizing and non-mineralizing osteogenic medium for a total period of at least 56 weeks. The SRμCT measurements were performed in absorption contrast mode at the beamlines BW 2 and W 2 (HASYLAB at DESY, Hamburg, Germany), operated by the GKSS-Research Center. To investigate the entire volume of interest, several scans were performed under identical conditions and registered to obtain one single dataset for each sample. The histoids grown under different conditions exhibit similar external morphology of globular or ovoid shape. The SRμCT examination revealed distinctly different morphological structures inside the histoids, yielding details that permit identification and selection of the most promising slices for subsequent histological characterization.

  15. An Evaluation of a Computer-Based Training on the Visual Analysis of Single-Subject Data

    Science.gov (United States)

    Snyder, Katie

    2013-01-01

    Visual analysis is the primary method of analyzing data in single-subject methodology, which is the predominant research method used in the fields of applied behavior analysis and special education. Previous research on the reliability of visual analysis suggests that judges often disagree about what constitutes an intervention effect. Considering…

  16. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture

    Science.gov (United States)

    Trivedi, Chintan A.; Bollmann, Johann H.

    2013-01-01

    Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim-triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback. PMID:23675322
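    The Fourier analysis of swim bouts mentioned above amounts to estimating the dominant tail-beat frequency of each bout from its tail-angle trace. The following is a minimal sketch of that idea, with an assumed function name and a made-up sampling rate; the actual study's kinematic pipeline is more involved.

    ```python
    import numpy as np

    def dominant_frequency(tail_angle, fs):
        """Estimate the dominant tail-beat frequency (Hz) of one swim bout
        from its tail-angle trace via the FFT power spectrum."""
        x = np.asarray(tail_angle, dtype=float)
        x = x - x.mean()                      # remove the DC offset
        power = np.abs(np.fft.rfft(x)) ** 2   # one-sided power spectrum
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return freqs[np.argmax(power[1:]) + 1]  # skip the residual DC bin
    ```

    Comparing this dominant frequency across first and subsequent bouts is one way to test whether all bouts share the same elementary motor pattern.
    
    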

  17. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture

    Directory of Open Access Journals (Sweden)

    Chintan A Trivedi

    2013-05-01

    Full Text Available Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed towards the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim-triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback.

  18. Visualizing Matrix Multiplication

    Science.gov (United States)

    Daugulis, Peteris; Sondore, Anita

    2018-01-01

    Efficient visualizations of computational algorithms are important tools for students, educators, and researchers. In this article, we point out an innovative visualization technique for matrix multiplication. This method differs from the standard, formal approach by using block matrices to make computations more visual. We find this method a…
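    The block-matrix idea described above — computing each tile of the product as a sum of products of corresponding tiles — can be made concrete with a short sketch. The function name and block size are illustrative, not from the article:

    ```python
    import numpy as np

    def block_multiply(A, B, block=2):
        """Multiply A @ B tile-by-tile: each (block x block) tile of the
        product is the sum of products of corresponding tiles, mirroring
        the scalar rule c_ij = sum_k a_ik * b_kj at the sub-matrix level."""
        C = np.zeros((A.shape[0], B.shape[1]))
        for i in range(0, A.shape[0], block):
            for j in range(0, B.shape[1], block):
                for k in range(0, A.shape[1], block):
                    C[i:i+block, j:j+block] += (
                        A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
                    )
        return C
    ```

    Stepping through the three loops and printing the tiles involved at each step is one way to make the computation visual for students.
    
    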

  19. The effects of a Korean computer-based cognitive rehabilitation program on cognitive function and visual perception ability of patients with acute stroke

    OpenAIRE

    Park, Jin-Hyuck; Park, Ji-Hyuk

    2015-01-01

    [Purpose] The purpose of this study is to investigate the effects of a Korean computer-based cognitive rehabilitation program (CBCR) on the cognitive function and visual perception ability of patients with acute stroke. [Subjects] The subjects were 30 patients with acute stroke. [Methods] The subjects were randomly assigned to either the experimental group (EG) or the control group (CG). The EG subjects received CBCR with the CoTras program. The CG subjects received conventional cognitive reh...

  20. Audio-visual perception of 3D cinematography: an fMRI study using condition-based and computation-based analyses.

    Directory of Open Access Journals (Sweden)

    Akitoshi Ogawa

    Full Text Available The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard "condition-based" designs, as well as "computational" methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sound (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie, both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-source signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life
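    At its simplest, the computation-based analysis described above regresses each voxel's time series against a time-varying stimulus feature (such as disparity or sound complexity) and keeps the feature's beta weight. The sketch below shows that ordinary-least-squares core under assumed names; real fMRI analyses also convolve the regressor with a hemodynamic response function and include nuisance regressors.

    ```python
    import numpy as np

    def feature_beta(voxel_ts, feature_ts):
        """OLS fit of a voxel time series against a time-varying stimulus
        feature plus an intercept; returns the feature's beta weight."""
        X = np.column_stack([np.ones_like(feature_ts), feature_ts])
        betas, *_ = np.linalg.lstsq(X, voxel_ts, rcond=None)
        return betas[1]
    ```

    Mapping this beta (or its t-statistic) across voxels yields the co-variation maps the abstract refers to.
    
    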