WorldWideScience

Sample records for time visualization system

  1. Wide-area, real-time monitoring and visualization system

    Science.gov (United States)

    Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.

    2013-03-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for the metrics being monitored by the monitor computer are stored in a database, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  2. Real-Time Visualization System for Deep-Sea Surveying

    Directory of Open Access Journals (Sweden)

    Yujie Li

    2014-01-01

    Full Text Available Remote robotic exploration holds vast potential for gaining knowledge about extreme environments that are difficult for humans to access. In the last two decades, various underwater devices have been developed for detecting mines and mine-like objects in the deep-sea environment. However, recent equipment suffers from problems such as poor accuracy in mineral object detection, lack of real-time processing, and low resolution of underwater video frames. Consequently, underwater object recognition is a difficult task, because the captured video frames are seriously distorted by the physical properties of the medium. In this paper, we consider the use of modern image processing methods to determine mineral locations and to recognize minerals with little computational complexity. We first analyze recent underwater imaging models and propose a novel underwater optical imaging model, which is much closer to the light propagation model in the underwater environment. In our imaging system, we remove electrical noise by the dual-tree complex wavelet transform. We then correct the nonuniform illumination of artificial lights by a fast guided trilateral bilateral filter and recover the image color through automatic color equalization. Finally, a shape-based mineral recognition algorithm is proposed for underwater object detection. These methods are designed for real-time execution on limited-memory platforms. In our experience, this pipeline is suitable for detecting underwater objects in practice. Initial results are presented, and experiments demonstrate the effectiveness of the proposed real-time visualization system.
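
    As a rough illustration of the kind of pipeline described above, the sketch below chains denoising, edge-preserving illumination smoothing, a gray-world color balance and Hu-moment shape matching with OpenCV. It is not the authors' implementation: the dual-tree complex wavelet transform, the fast guided trilateral bilateral filter and automatic color equalization are replaced by simpler stand-ins, and OpenCV 4.x is assumed.

      # Sketch of an enhancement-and-detection pipeline with simplified stand-ins
      # for the paper's dual-tree wavelet denoising, guided trilateral filtering
      # and automatic color equalization (assumes OpenCV 4.x).
      import cv2
      import numpy as np

      def enhance(frame):
          # 1. Denoise (stand-in for dual-tree complex wavelet denoising).
          den = cv2.fastNlMeansDenoisingColored(frame, None, 5, 5, 7, 21)
          # 2. Edge-preserving smoothing of non-uniform artificial lighting
          #    (stand-in for the fast guided trilateral bilateral filter).
          lit = cv2.bilateralFilter(den, d=9, sigmaColor=75, sigmaSpace=75)
          # 3. Gray-world color balance (stand-in for automatic color equalization).
          b, g, r = cv2.split(lit.astype(np.float32))
          mean = (b.mean() + g.mean() + r.mean()) / 3.0
          balanced = [np.clip(c * mean / max(c.mean(), 1e-6), 0, 255) for c in (b, g, r)]
          return cv2.merge(balanced).astype(np.uint8)

      def detect_by_shape(frame, template_contour, max_dissimilarity=0.2):
          # Shape-based detection: threshold, extract contours and keep those that
          # match a reference contour (taken from an image of the target object).
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          return [c for c in contours
                  if cv2.matchShapes(c, template_contour, cv2.CONTOURS_MATCH_I1, 0.0)
                  < max_dissimilarity]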

  3. Parallel real-time visualization system for large-scale simulation. Application to WSPEEDI

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Kitabata, Hideyuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    2000-01-01

    The real-time visualization system PATRAS (PArallel TRAcking Steering system) has been developed on parallel computing servers. The system performs almost all of the visualization tasks on a parallel computing server and uses an image data compression technique for efficient communication between the server and the client terminal. The system therefore realizes high-performance concurrent visualization in an Internet computing environment. The experience of applying PATRAS to WSPEEDI (Worldwide version of System for Prediction of Environmental Emergency Dose Information) is reported. The application of PATRAS to WSPEEDI enables users to understand the behaviours of radioactive tracers from different release points easily and quickly. (author)

  4. Grasp: Tracing, visualizing and measuring the behavior of real-time systems

    NARCIS (Netherlands)

    Holenderski, M.J.; Heuvel, van den M.M.H.P.; Bril, R.J.; Lukkien, J.J.; Lipari, G.; Cucinotta, T.

    2010-01-01

    Understanding and validating the timing behavior of real-time systems is not trivial. Many real-time operating systems and their development environments do not provide tracing support, and provide only limited visualization, measurements and analysis tools. This paper presents Grasp, a tool for

  5. A web-mapping system for real-time visualization of the global terrain

    Science.gov (United States)

    Zhang, Liqiang; Yang, Chongjun; Liu, Donglin; Ren, Yingchao; Rui, Xiaoping

    2005-04-01

    In this paper, we present a web-based 3D global terrain visualization application that provides powerful transmission and visualization of global multiresolution data sets across networks. A client/server architecture is put forward. The paper also reports on the relevant research work, such as efficient data compression methods that reduce the physical size of these data sets and accelerate network delivery, streaming transmission for progressively downloading data, and real-time multiresolution terrain surface visualization with high visual quality based on M-band wavelet transforms and a hierarchical triangulation technique. Finally, an experiment is performed using data at different levels of detail to verify that the system works appropriately.

  6. Technical note: real-time web-based wireless visual guidance system for radiotherapy.

    Science.gov (United States)

    Lee, Danny; Kim, Siyong; Palta, Jatinder R; Kim, Taeho

    2017-06-01

    To describe a Web-based wireless visual guidance system that mitigates issues associated with hard-wired, audio-visual-aided patient-interactive motion management systems, which are cumbersome to use in routine clinical practice. The Web-based wireless visual display duplicates an existing visual display of a respiratory-motion management system for visual guidance. The visual display of the existing system is sent to legacy Web clients over a private wireless network, thereby allowing a wireless setting for real-time visual guidance. In this study, an active breathing coordinator (ABC) trace was used as the input for the visual display, which was captured and transmitted to Web clients. Virtual reality goggles require two images (left- and right-eye views) for visual display. We investigated the performance of Web-based wireless visual guidance by quantifying (1) the network latency of visual displays between an ABC computer display and Web clients on a laptop, an iPad mini 2 and an iPhone 6, and (2) the frame rate of the visual display on the Web clients in frames per second (fps). The network latency of the visual display between the ABC computer and the Web clients was about 100 ms, and the frame rate was 14.0 fps (laptop), 9.2 fps (iPad mini 2) and 11.2 fps (iPhone 6). In addition, the visual display for virtual reality goggles was successfully shown on the iPhone 6 with about 100 ms latency at 11.2 fps. High network security was maintained by utilizing the private network configuration. This study demonstrated that Web-based wireless visual guidance can be a promising technique for clinical motion management systems that require real-time visual display of their outputs. Based on the results of this study, our approach has the potential to reduce the clutter associated with wired systems, reduce space requirements, and extend the use of medical devices from static usage to interactive and dynamic usage in a radiotherapy treatment vault.

  7. Visual time series analysis

    DEFF Research Database (Denmark)

    Fischer, Paul; Hilbert, Astrid

    2012-01-01

    We introduce a platform which supplies an easy-to-handle, interactive, extendable, and fast analysis tool for time series analysis. In contrast to other software suites like Maple, Matlab, or R, which use a command-line-like interface and where the user has to memorize/look up the appropriate...... commands, our application is select-and-click-driven. It allows the user to derive many different sequences of deviations for a given time series and to visualize them in different ways in order to judge their expressive power and to reuse the procedure found. For many transformations or model fits, the user may...... choose between manual and automated parameter selection. The user can define new transformations and add them to the system. The application contains efficient implementations of advanced and recent techniques for time series analysis including techniques related to extreme value analysis and filtering...

  8. Advanced Visualization System for Monitoring the ATLAS TDAQ Network in real-time

    CERN Document Server

    Batraneanu, S M; The ATLAS collaboration; Martin, B; Savu, D O; Stancu, S N; Leahu, L

    2012-01-01

    The trigger and data acquisition (TDAQ) system of the ATLAS experiment at CERN comprises approximately 2500 servers interconnected by three separate Ethernet networks, totaling 250 switches. Due to its real-time nature, there are additional requirements in comparison to conventional networks in terms of speed and performance. A comprehensive monitoring framework has been developed for expert use. However, non-experts may experience difficulties in using it and interpreting its data. Moreover, specific performance issues, such as single-component saturation or unbalanced workload, need to be spotted with ease, in real time, and understood in the context of the full system view. We addressed these issues by developing an innovative visualization system where users benefit from the advantages of 3D graphics to visualize the large monitoring parameter space associated with our system. This has been done by developing a hierarchical model of the complete system onto which we overlaid geographical, logical and real...

  9. Development of real time visual evaluation system for sodium transient thermohydraulic experiments

    International Nuclear Information System (INIS)

    Tanigawa, Shingo

    1990-01-01

    A real-time visual evaluation system, the Liquid Metal Visual Evaluation System (LIVES), has been developed for the Plant Dynamics Test Loop facility at the O-arai Engineering Center. This facility is designed to provide sodium transient thermohydraulic experimental data not only in a fuel subassembly but also in a plant-wide system simulating abnormal or accident conditions in liquid metal fast breeder reactors. Since liquid sodium is opaque, measurements to obtain experimental data are mainly conducted with numerous thermocouples installed at various locations in the test sections and the facility. The transient thermohydraulic phenomena result from complicated interactions among global- and local-scale three-dimensional phenomena, and short- and long-time-scale phenomena. It is therefore difficult to grasp the thermohydraulic behavior intuitively and to observe both the temperature distribution and the flow condition accurately from digital data or various types of analog data alone when evaluating the experimental results. To conduct sodium transient experiments effectively and to make it possible to observe thermohydraulic phenomena exactly, a real-time visualization technique for transient thermohydraulics has been developed using the latest engineering workstation. The system makes it possible to observe and compare the experimental and analytical results instantly while an experiment or analysis is in progress. The results are shown not only as time-trend curves but also as graphic animations. This paper gives an outline of the system and sample applications of the system. (author)

  10. Development of real-time visualization system for Computational Fluid Dynamics on parallel computers

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    1998-03-01

    A real-time visualization system for computational fluid dynamics, operating over a network connecting a parallel computing server and a client terminal, was developed. Using the system, a user at a client terminal can visualize the results of a CFD (Computational Fluid Dynamics) simulation running on the parallel computer while the computation is still in progress on the server. Using a GUI (Graphical User Interface) on the client terminal, the user is also able to change parameters of the analysis and visualization during the calculation. The system carries out both the CFD simulation and the generation of pixel image data on the parallel computer, and compresses the data before transmission. The amount of data sent from the parallel computer to the client is therefore much smaller than without compression, so the user sees images appear swiftly. Parallelization of the image data generation is based on the owner-computes rule. The GUI on the client is built as a Java applet, so real-time visualization is possible on any client PC on which a Web browser is installed. (author)
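
    A minimal sketch of the server-side idea, assuming a client is already listening on an illustrative host/port: generate pixel data on the compute server, compress it losslessly, and ship the much smaller payload to the client. The toy renderer below stands in for the actual CFD image generation.

      # Server-side sketch: render a frame, compress it, send the smaller payload.
      # The "render" is a toy stand-in and 127.0.0.1:9999 is a hypothetical client.
      import socket
      import struct
      import zlib
      import numpy as np

      def render_frame(step, shape=(480, 640)):
          # Toy stand-in for the real renderer: map a scalar field to 8-bit gray.
          y, x = np.mgrid[0:shape[0], 0:shape[1]]
          field = np.sin(x / 40.0 + step * 0.1) * np.cos(y / 40.0)
          return ((field + 1.0) * 127.5).astype(np.uint8)

      def send_frame(sock, frame):
          raw = frame.tobytes()
          packed = zlib.compress(raw, 6)          # lossless pixel compression
          header = struct.pack("!III", frame.shape[0], frame.shape[1], len(packed))
          sock.sendall(header + packed)
          return len(raw), len(packed)

      if __name__ == "__main__":
          with socket.create_connection(("127.0.0.1", 9999)) as sock:
              for step in range(100):
                  raw_len, sent_len = send_frame(sock, render_frame(step))
                  print(f"step {step}: {raw_len} B rendered -> {sent_len} B sent")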

  11. Visualization system on ITBL

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2004-01-01

    The visualization systems PATRAS/ITBL and AVS/ITBL, which are based on the visualization software PATRAS and AVS/Express respectively, have been developed on a global, heterogeneous computing environment, the Information Technology Based Laboratory (ITBL). PATRAS/ITBL allows for real-time visualization of the numerical results acquired from coupled multi-physics numerical simulations executed on different hosts situated in remote locations. AVS/ITBL allows for post-processing visualization. Scientific data located at remote sites may be selected and visualized in a web browser installed on a user terminal. The global structure and main functions of these systems are presented. (author)

  12. Real-time distortion correction for visual inspection systems based on FPGA

    Science.gov (United States)

    Liang, Danhua; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin

    2008-03-01

    Visual inspection is a new technology, based on computer vision research, that focuses on measuring an object's geometry and location. It can be widely used in online measurement and other real-time measurement processes. Because of the shortcomings of traditional visual inspection, a new visual detection mode, all-digital intelligent acquisition and transmission, is presented. The image processing, including filtering, image compression, binarization, edge detection and distortion correction, can be completed in a programmable device, an FPGA. Because a wide-angle lens is adopted in the system, the output images suffer from serious distortion. Limited by the computing speed of a general-purpose computer, software can only correct the distortion of static images, not of dynamic images. To meet the real-time requirement, we design a distortion correction system based on an FPGA. In this hardware distortion correction method, the spatial correction data are first calculated in software, then converted into hardware storage addresses and stored in a hardware look-up table, from which data are read out to correct the gray levels. The major benefit of using an FPGA is that the same circuit can be used for other circularly symmetric wide-angle lenses without modification.
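
    The look-up-table idea translates directly into a software sketch: the correction map is computed once offline and each frame is then corrected by a pure table-driven resampling step. The camera matrix and distortion coefficients below are illustrative values, not those of the paper's lens.

      # Software analogue of the hardware look-up table: compute the correction
      # mapping once, then correct every frame by a fast table-driven remap.
      # The camera matrix and distortion coefficients are illustrative values.
      import cv2
      import numpy as np

      h, w = 480, 640
      camera_matrix = np.array([[500.0, 0.0, w / 2],
                                [0.0, 500.0, h / 2],
                                [0.0, 0.0, 1.0]])
      dist_coeffs = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])  # strong barrel distortion

      # Precomputed once "in software"; the FPGA stores the equivalent table.
      map_x, map_y = cv2.initUndistortRectifyMap(
          camera_matrix, dist_coeffs, None, camera_matrix, (w, h), cv2.CV_32FC1)

      def correct(frame):
          # Per-frame correction is just a table look-up plus interpolation.
          return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)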

  13. The Visual System

    Medline Plus

    Full Text Available An NEI for Kids page on the visual system, featuring a video that explains how your eyes let you see the world around you. Did you know? On average, you blink about 15 to 20 times every minute - up to 28,800 times a day.

  14. Issues in visual support to real-time space system simulation solved in the Systems Engineering Simulator

    Science.gov (United States)

    Yuen, Vincent K.

    1989-01-01

    The Systems Engineering Simulator has addressed the major issues in providing visual data to its real-time man-in-the-loop simulations. Out-the-window views and CCTV views are provided by three scene systems to give the astronauts their real-world views. To expand the window coverage for the Space Station Freedom workstation, a rotating optics system is used to provide the widest field of view possible. To provide video signals to as many viewpoints as possible (windows and CCTVs) with a limited amount of hardware, a video distribution system has been developed that time-shares the video channels among viewpoints at the selection of the simulation users. These solutions have provided the visual simulation facility for real-time man-in-the-loop simulations for the NASA space program.

  15. Development of volume rendering module for real-time visualization system

    International Nuclear Information System (INIS)

    Otani, Takayuki; Muramatsu, Kazuhiro

    2000-03-01

    Volume rendering is a method for visualizing the distribution of physical quantities in three-dimensional space from any viewpoint by tracing rays through the volume onto an ordinary two-dimensional display. By producing translucent images, it provides interior as well as surface information. It is therefore regarded as a very useful and important means of analyzing the results of scientific computations, although it has the disadvantage of requiring a large amount of computing time. This report describes the algorithm and performance of the volume rendering software developed as an important functional module in the real-time visualization system PATRAS. This module can directly visualize computed results on a BFC grid. Moreover, parts of the software have already been sped up by a newly developed heuristic technique. The report also includes an investigation of speeding up the software by parallel processing. (author)
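
    A minimal emission-absorption ray-marching sketch on a regular grid, with orthographic rays along one axis, illustrating how translucent compositing exposes interior structure; the PATRAS module itself handles BFC grids and arbitrary viewpoints. The opacity scale and the synthetic density field are illustrative.

      # Emission-absorption ray marching along one axis of a regular grid
      # (front-to-back compositing); opacity scale and test volume are illustrative.
      import numpy as np

      def volume_render(volume, opacity_scale=0.05):
          """Composite a (nz, ny, nx) scalar volume into a 2D brightness image."""
          vol = (volume - volume.min()) / (np.ptp(volume) + 1e-12)  # normalize to [0, 1]
          color = np.zeros(vol.shape[1:])          # accumulated brightness per pixel
          transmittance = np.ones(vol.shape[1:])   # fraction of light not yet absorbed
          for z in range(vol.shape[0]):            # march all rays one slice at a time
              sample = vol[z]
              alpha = 1.0 - np.exp(-opacity_scale * sample)  # local opacity
              color += transmittance * alpha * sample        # emission weighted by visibility
              transmittance *= 1.0 - alpha
          return color

      # Example: a blurry sphere of "density" rendered to a translucent image.
      z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
      density = np.exp(-8.0 * (x**2 + y**2 + z**2))
      print(volume_render(density).shape)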

  16. Emerging Object Representations in the Visual System Predict Reaction Times for Categorization

    Science.gov (United States)

    Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.

    2015-01-01

    Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time-varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
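
    A minimal sketch of the distance-to-boundary analysis using entirely synthetic data: a linear decoder is fit to simulated activation patterns, its decision-function magnitude serves as the distance from the boundary, and that distance is correlated with simulated reaction times. The real analysis operates on MEG sensor patterns over time; everything below is an assumption-laden toy.

      # Distance-to-boundary sketch on synthetic data: a linear decoder's
      # decision-function magnitude stands in for "distance from the category
      # boundary" and is correlated with simulated reaction times.
      import numpy as np
      from scipy.stats import pearsonr
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      n_trials, n_sensors = 200, 50

      # Synthetic "activation patterns" for two object categories.
      labels = rng.integers(0, 2, n_trials)
      patterns = rng.normal(size=(n_trials, n_sensors)) + labels[:, None] * 0.8

      clf = LinearSVC().fit(patterns, labels)
      distance = np.abs(clf.decision_function(patterns))  # unsigned boundary distance

      # Simulated behaviour: near-boundary (harder) trials get slower responses.
      reaction_time = 0.6 - 0.05 * distance + rng.normal(scale=0.05, size=n_trials)

      r, p = pearsonr(distance, reaction_time)
      print(f"r = {r:.2f}, p = {p:.3g}")   # expect a clear negative correlation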

  17. A System for Acquisition, Processing and Visualization of Image Time Series from Multiple Camera Networks

    Directory of Open Access Journals (Sweden)

    Cemal Melih Tanis

    2018-06-01

    Full Text Available A system for multiple camera networks is proposed for continuous monitoring of ecosystems by processing image time series. The system is built around the Finnish Meteorological Institute Image PROcessing Toolbox (FMIPROT), which includes data acquisition, processing and visualization from multiple camera networks. The toolbox has a user-friendly graphical user interface (GUI) that requires only minimal computer knowledge and skills. Images from camera networks are acquired and handled automatically using common communication protocols, e.g., the File Transfer Protocol (FTP). Processing features include GUI-based selection of the region of interest (ROI), an automatic analysis chain, extraction of ROI-based indices such as the green fraction index (GF), red fraction index (RF), blue fraction index (BF), green-red vegetation index (GRVI), and green excess (GEI) index, as well as a custom index defined by a user-provided mathematical formula. Analysis results are visualized on interactive plots both in the GUI and in hypertext markup language (HTML) reports. Users can implement their own algorithms to extract information from digital image series for any purpose. The toolbox can also be run in non-GUI mode, which allows series of analyses to run on servers unattended and on a schedule. The system is demonstrated using an environmental camera network in Finland.
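
    The ROI-based indices named above have simple closed forms; the sketch below computes them from the mean red/green/blue values inside a rectangular region of interest. The formulas follow the commonly used definitions (green chromatic coordinate, excess green, and so on); FMIPROT's own implementation may differ in detail.

      # ROI-based greenness indices computed from mean red/green/blue values
      # (common definitions; FMIPROT's own implementation may differ in detail).
      import numpy as np

      def roi_indices(image, roi):
          """image: HxWx3 RGB array; roi: (row0, row1, col0, col1)."""
          r0, r1, c0, c1 = roi
          patch = image[r0:r1, c0:c1].astype(float)
          r, g, b = (patch[..., i].mean() for i in range(3))
          total = r + g + b
          return {
              "GF":   g / total,           # green fraction (green chromatic coordinate)
              "RF":   r / total,           # red fraction
              "BF":   b / total,           # blue fraction
              "GRVI": (g - r) / (g + r),   # green-red vegetation index
              "GEI":  2 * g - (r + b),     # green excess index
          }

      # Usage with a synthetic green-dominated 100x100 image.
      img = np.zeros((100, 100, 3), dtype=np.uint8)
      img[..., 0], img[..., 1], img[..., 2] = 60, 120, 40
      print(roi_indices(img, (10, 90, 10, 90)))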

  18. Implementation of a General Real-Time Visual Anomaly Detection System Via Soft Computing

    Science.gov (United States)

    Dominguez, Jesus A.; Klinko, Steve; Ferrell, Bob; Steinrock, Todd (Technical Monitor)

    2001-01-01

    The intelligent visual system detects anomalies or defects in real time under normal lighting operating conditions. The application is basically a learning machine that integrates fuzzy logic (FL), artificial neural network (ANN), and genetic algorithm (GA) schemes to process the image, run the learning process, and finally detect the anomalies or defects. The system acquires the image, performs segmentation to separate the object being tested from the background, preprocesses the image using fuzzy reasoning, performs the final segmentation using fuzzy reasoning techniques to retrieve regions with potential anomalies or defects, and finally retrieves them using a learning model built via ANN and GA techniques. FL provides a powerful framework for knowledge representation and overcomes the uncertainty and vagueness typically found in image analysis. ANN provides learning capabilities, and GA leads to robust learning results. An application prototype currently runs on a regular PC under Windows NT, and preliminary work has been performed to build an embedded version with multiple image processors. The application prototype is being tested at the Kennedy Space Center (KSC), Florida, to visually detect anomalies along the slide basket cables used by the astronauts to evacuate the NASA Shuttle launch pad in an emergency. The potential applications of this anomaly detection system in an open environment are quite wide. Another current, potentially viable application at NASA is in detecting anomalies on the NASA Space Shuttle Orbiter's radiator panels.

  19. Arena3D: visualizing time-driven phenotypic differences in biological systems

    Directory of Open Access Journals (Sweden)

    Secrier Maria

    2012-03-01

    Full Text Available Abstract Background Elucidating the genotype-phenotype connection is one of the big challenges of modern molecular biology. To fully understand this connection, it is necessary to consider the underlying networks and the time factor. In this context of data deluge and heterogeneous information, visualization plays an essential role in interpreting complex and dynamic topologies. Thus, software that is able to bring the network, phenotypic and temporal information together is needed. Arena3D has been previously introduced as a tool that facilitates link discovery between processes. It uses a layered display to separate different levels of information while emphasizing the connections between them. We present novel developments of the tool for the visualization and analysis of dynamic genotype-phenotype landscapes. Results Version 2.0 introduces novel features that allow handling time course data in a phenotypic context. Gene expression levels or other measures can be loaded and visualized at different time points and phenotypic comparison is facilitated through clustering and correlation display or highlighting of impacting changes through time. Similarity scoring allows the identification of global patterns in dynamic heterogeneous data. In this paper we demonstrate the utility of the tool on two distinct biological problems of different scales. First, we analyze a medium scale dataset that looks at perturbation effects of the pluripotency regulator Nanog in murine embryonic stem cells. Dynamic cluster analysis suggests alternative indirect links between Nanog and other proteins in the core stem cell network. Moreover, recurrent correlations from the epigenetic to the translational level are identified. Second, we investigate a large scale dataset consisting of genome-wide knockdown screens for human genes essential in the mitotic process. Here, a potential new role for the gene lsm14a in cytokinesis is suggested. We also show how phenotypic

  1. Development of real time abdominal compression force monitoring and visual biofeedback system

    Science.gov (United States)

    Kim, Tae-Ho; Kim, Siyong; Kim, Dong-Su; Kang, Seong-Hee; Cho, Min-Seok; Kim, Kyeong-Hyeon; Shin, Dong-Seok; Suh, Tae-Suk

    2018-03-01

    In this study, we developed and evaluated a system that can monitor abdominal compression force (ACF) in real time and provide a surrogate signal, even under abdominal compression. The system can also provide visual biofeedback (VBF). The real-time ACF monitoring system consists of an abdominal compression device, an ACF monitoring unit and a control system including an in-house ACF management program. We anticipated that ACF variation information caused by respiratory abdominal motion could be used as a respiratory surrogate signal. Four volunteers participated in a test to obtain correlation coefficients between ACF variation and tidal volume. A simulation study with another group of six volunteers was performed to evaluate the feasibility of the proposed system. In the simulation, we investigated the reproducibility of the compression setup and proposed a further enhanced shallow breathing (ESB) technique using VBF, intentionally reducing the amplitude of the breathing range under abdominal compression. The correlation coefficient between the ACF variation caused by respiratory abdominal motion and the tidal volume signal was evaluated for each volunteer, and R² values ranged from 0.79 to 0.84. The ACF variation was similar to a respiratory pattern, and slight variations of the ACF ranges were observed among sessions. An average ACF control rate (i.e. compliance) of about 73-77% over five trials was observed in all volunteer subjects except one (64%) when there was no VBF. The targeted ACF range was intentionally reduced to achieve ESB for the VBF simulation. With VBF, in spite of the reduced target range, the overall ACF control rate improved by about 20% in all volunteers except one (4%), demonstrating the effectiveness of VBF. The developed monitoring system could help reduce the inter-fraction ACF setup error and the intra-fraction ACF variation. With the capability of providing a real-time surrogate signal and VBF under compression, it could

  2. The feasibility of an infrared system for real-time visualization and mapping of ultrasound fields

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, Adam; Nunn, John, E-mail: adam.shaw@npl.co.u [National Physical Laboratory, Teddington, Middlesex, TW11 0LW (United Kingdom)

    2010-06-07

    In treatment planning for ultrasound therapy, it is desirable to know the 3D structure of the ultrasound field. However, mapping an ultrasound field in 3D is very slow, with even a single planar raster scan taking typically several hours. Additionally, hydrophones that are used for field mapping are expensive and can be damaged in some therapy fields. So there is value in rapid methods which enable visualization and mapping of the ultrasound field in about 1 min. In this note we explore the feasibility of mapping the intensity distribution by measuring the temperature distribution produced in a thin sheet of absorbing material. A 0.2 mm thick acetate sheet forms a window in the wall of a water tank containing the transducer. The window is oriented at 45° to the beam axis, and the distance from the transducer to the window can be varied. The temperature distribution is measured with an infrared camera; thermal images of the inclined plane could be viewed in real time or images could be captured for later analysis and 3D field reconstruction. We conclude that infrared thermography can be used to gain qualitative information about ultrasound fields. Thermal images are easily visualized with good spatial and thermal resolutions (0.044 mm and 0.05 °C in our system). The focus and field structure such as side lobes can be identified in real time from the direct video output. 3D maps and image planes at arbitrary orientations to the beam axis can be obtained and reconstructed within a few minutes. In this note we are primarily interested in the technique for characterization of high intensity focused ultrasound (HIFU) fields, but other applications such as physiotherapy fields are also possible. (note)

  3. The feasibility of an infrared system for real-time visualization and mapping of ultrasound fields

    International Nuclear Information System (INIS)

    Shaw, Adam; Nunn, John

    2010-01-01

    In treatment planning for ultrasound therapy, it is desirable to know the 3D structure of the ultrasound field. However, mapping an ultrasound field in 3D is very slow, with even a single planar raster scan taking typically several hours. Additionally, hydrophones that are used for field mapping are expensive and can be damaged in some therapy fields. So there is value in rapid methods which enable visualization and mapping of the ultrasound field in about 1 min. In this note we explore the feasibility of mapping the intensity distribution by measuring the temperature distribution produced in a thin sheet of absorbing material. A 0.2 mm thick acetate sheet forms a window in the wall of a water tank containing the transducer. The window is oriented at 45° to the beam axis, and the distance from the transducer to the window can be varied. The temperature distribution is measured with an infrared camera; thermal images of the inclined plane could be viewed in real time or images could be captured for later analysis and 3D field reconstruction. We conclude that infrared thermography can be used to gain qualitative information about ultrasound fields. Thermal images are easily visualized with good spatial and thermal resolutions (0.044 mm and 0.05 °C in our system). The focus and field structure such as side lobes can be identified in real time from the direct video output. 3D maps and image planes at arbitrary orientations to the beam axis can be obtained and reconstructed within a few minutes. In this note we are primarily interested in the technique for characterization of high intensity focused ultrasound (HIFU) fields, but other applications such as physiotherapy fields are also possible. (note)

  4. Earth History databases and visualization - the TimeScale Creator system

    Science.gov (United States)

    Ogg, James; Lugowski, Adam; Gradstein, Felix

    2010-05-01

    The "TimeScale Creator" team (www.tscreator.org) and the Subcommission on Stratigraphic Information (stratigraphy.science.purdue.edu) of the International Commission on Stratigraphy (www.stratigraphy.org) has worked with numerous geoscientists and geological surveys to prepare reference datasets for global and regional stratigraphy. All events are currently calibrated to Geologic Time Scale 2004 (Gradstein et al., 2004, Cambridge Univ. Press) and Concise Geologic Time Scale (Ogg et al., 2008, Cambridge Univ. Press); but the array of intercalibrations enable dynamic adjustment to future numerical age scales and interpolation methods. The main "global" database contains over 25,000 events/zones from paleontology, geomagnetics, sea-level and sequence stratigraphy, igneous provinces, bolide impacts, plus several stable isotope curves and image sets. Several regional datasets are provided in conjunction with geological surveys, with numerical ages interpolated using a similar flexible inter-calibration procedure. For example, a joint program with Geoscience Australia has compiled an extensive Australian regional biostratigraphy and a full array of basin lithologic columns with each formation linked to public lexicons of all Proterozoic through Phanerozoic basins - nearly 500 columns of over 9,000 data lines plus hot-curser links to oil-gas reference wells. Other datapacks include New Zealand biostratigraphy and basin transects (ca. 200 columns), Russian biostratigraphy, British Isles regional stratigraphy, Gulf of Mexico biostratigraphy and lithostratigraphy, high-resolution Neogene stable isotope curves and ice-core data, human cultural episodes, and Circum-Arctic stratigraphy sets. The growing library of datasets is designed for viewing and chart-making in the free "TimeScale Creator" JAVA package. This visualization system produces a screen display of the user-selected time-span and the selected columns of geologic time scale information. The user can change the

  5. Customizable Time-Oriented Visualizations

    DEFF Research Database (Denmark)

    Kuhail, Mohammad Amin; Pantazos, Kostas; Lauesen, Søren

    2012-01-01

    Most commercial visualization tools support an easy and quick creation of conventional time-oriented visualizations such as line charts, but customization is limited. In contrast, some academic visualization tools and programming languages support the creation of some customizable time......-oriented visualizations but it is time consuming and hard. To combine efficiency, the effort required to develop a visualization, and customizability, the ability to tailor a visualization, we developed time-oriented building blocks that address the specifics of time (e.g. linear vs. cyclic or point-based vs. interval......-based) and consist of inner customizable parts (e.g. ticks). A combination of the time-oriented and other primitive graphical building blocks allowed the creation of several customizable advanced time-oriented visualizations. The appearance and behavior of the blocks are specified using spreadsheet-like formulas. We...

  7. Visualization of particle trajectories in time-varying electromagnetic fields by CAVE-type virtual reality system

    International Nuclear Information System (INIS)

    Ohno, Nobuaki; Ohtani, Hiroaki; Horiuchi, Ritoku; Matsuoka, Daisuke

    2012-01-01

    The particle kinetic effects play an important role in breaking the frozen-in condition and exciting collisionless magnetic reconnection in high-temperature plasmas. Because these effects originate from complex thermal motion near the reconnection point, it is very important to examine particle trajectories using scientific visualization techniques, especially in the presence of plasma instability. We developed an interactive visualization environment for particle trajectories in time-varying electromagnetic fields in a CAVE-type virtual reality system, based on VFIVE, interactive visualization software for the CAVE system. From the analysis of ion trajectories using the particle simulation data, it was found that the time-varying electromagnetic fields around the reconnection region accelerate ions toward the downstream region. (author)

  8. Identification of real-time diagnostic measures of visual distraction with an automatic eye-tracking system.

    Science.gov (United States)

    Zhang, Harry; Smith, Matthew R H; Witt, Gerald J

    2006-01-01

    This study was conducted to identify eye glance measures that are diagnostic of visual distraction. Visual distraction degrades performance, but real-time diagnostic measures have not been identified. In a driving simulator, 14 participants responded to a lead vehicle braking at -2 or -2.7 m/s² periodically while reading a varying number of words (6-15 words every 13 s) on peripheral displays (with diagonal eccentricities of 24 degrees, 43 degrees, and 75 degrees). As the number of words and display eccentricity increased, total glance duration and reaction time increased and driving performance suffered. Correlation coefficients between several glance measures and reaction time or performance variables were reliably high, indicating that these glance measures are diagnostic of visual distraction. It is predicted that for every 25% increase in total glance duration, reaction time is increased by 0.39 s and standard deviation of lane position is increased by 0.06 m. Potential applications of this research include assessing visual distraction in real time, delivering advisories to distracted drivers to reorient their attention to driving, and using distraction information to adapt forward collision and lane departure warning systems to enhance system effectiveness.

  9. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    Science.gov (United States)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
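
    A minimal sketch of a user-ordered, linear sequential-loop filter pipeline over camera frames, in the spirit of the description above; it is not the μAVS2 code. The camera index and the particular OpenCV filters are illustrative choices, and the final transmission step is only indicated in a comment.

      # User-ordered sequential-loop filter pipeline over camera frames
      # (illustrative filters and camera index; not the actual uAVS2 code).
      import cv2

      def to_gray(frame):
          return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

      def blur(frame):
          return cv2.GaussianBlur(frame, (5, 5), 0)

      def edges(frame):
          return cv2.Canny(frame, 50, 150)

      # The user chooses the filters and their order; each frame passes through
      # them one after another, keeping memory use at roughly one frame per stage.
      pipeline = [to_gray, blur, edges]

      cap = cv2.VideoCapture(0)        # USB camera index 0 (an assumption)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          for f in pipeline:           # linear sequential-loop execution
              frame = f(frame)
          # A prosthesis system would receive the processed frame here
          # (e.g. over TCP/IP); the sketch simply displays it instead.
          cv2.imshow("processed", frame)
          if cv2.waitKey(1) & 0xFF == ord("q"):
              break
      cap.release()
      cv2.destroyAllWindows()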

  10. On a new visualization tool for quantum systems and on a time-optimal control problem for quantum gates

    International Nuclear Information System (INIS)

    Garon, Ariane

    2014-01-01

    Since the foundations of quantum physics have been laid, our knowledge of it never ceased to grow and this field of science naturally split into diverse specialized branches. The first part of this thesis focuses on a problem which concerns all branches of quantum physics, which is the visualization of quantum systems. The non-intuitive aspect of quantum physics justifies a shared desire to visualize quantum systems. In the present work, we develop a method to visualize any operators in these systems, including in particular state operators (density matrices), Hamiltonians and propagators. The method, referred to as DROPS (Discrete Representation of spin OPeratorS), is based on a generalization of Wigner representations, presented in this document. The resulting visualization of an operator A is called its DROPS representation or visualization. We demonstrate its intuitive character by illustrating a series of concepts in nuclear magnetic resonance (NMR) spectroscopy for systems consisting of two spin-1/2 particles. The second part of this thesis is concerned with a problem of optimal control which finds applications in the fields of NMR spectroscopy, medical imagery and quantum computing, to cite a few. The problem of creating a propagator in the shortest amount of time is considered, and the results are extended to solve the closely related problem of creating rotations in the smallest amount of time. The approach used here differs from the previous results on the subject by solving the problem using the Pontryagin's maximum principle and by its detailed consideration of singular controls and trajectories.

  11. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    Science.gov (United States)

    Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; Del Giudice, Paolo

    2015-10-01

    Neuromorphic chips embody computational principles operating in the nervous system, into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
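
    A minimal software sketch of the attractor idea using a Hopfield-style abstraction rather than the spiking VLSI network: Hebbian (unsupervised) learning of binary patterns shapes a synaptic matrix, and a noisy stimulus then relaxes to the nearest stored attractor (retrieval). Network size, pattern count and noise level are illustrative.

      # Hopfield-style abstraction of attractor dynamics: Hebbian learning of
      # binary patterns, then relaxation of a noisy stimulus to a stored attractor.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 100
      patterns = rng.choice([-1, 1], size=(3, n))     # three "visual stimuli"

      # Unsupervised (Hebbian) learning shapes the synaptic matrix.
      W = sum(np.outer(p, p) for p in patterns) / n
      np.fill_diagonal(W, 0.0)

      def relax(state, steps=20):
          # Synchronous relaxation toward an attractor (memory retrieval).
          for _ in range(steps):
              state = np.sign(W @ state)
              state[state == 0] = 1
          return state

      noisy = patterns[0].copy()
      flipped = rng.choice(n, size=20, replace=False)  # corrupt 20% of the stimulus
      noisy[flipped] *= -1
      recovered = relax(noisy)
      print("overlap with stored pattern:", (recovered == patterns[0]).mean())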

  12. Real-time visual communication to aid disaster recovery in a multi-segment hybrid wireless networking system

    Science.gov (United States)

    Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos

    2012-06-01

    When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks, an airborne platform with video camera balloons, and a Digital Video Broadcasting- Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, towards a more in-depth understanding in supporting high-quality visual communications in such a demanding context.

  13. Real-time visualization of magnetic flux densities for transcranial magnetic stimulation on commodity and fully immersive VR systems

    Science.gov (United States)

    Kalivarapu, Vijay K.; Serrate, Ciro; Hadimani, Ravi L.

    2017-05-01

    Transcranial Magnetic Stimulation (TMS) is a non-invasive procedure that uses time-varying short pulses of magnetic fields to stimulate nerve cells in the brain. In this method, a magnetic field generator ("TMS coil") produces small electric fields in the region of the brain via electromagnetic induction. This technique can be used to excite or inhibit firing of neurons, which can then be used for treatment of various neurological disorders such as Parkinson's disease, stroke, migraine, and depression. It is however challenging to focus the induced electric field from TMS coils onto smaller regions of the brain. Since electric and magnetic fields are governed by the laws of electromagnetism, it is possible to numerically simulate and visualize these fields to accurately determine the site of maximum stimulation and also to develop TMS coils that can focus the fields on the targeted regions. However, current software to compute and visualize these fields is not real-time and works for only one position/orientation of the TMS coil, severely limiting its usage. This paper describes the development of an application that computes magnetic flux densities (h-fields) and visualizes their distribution for different TMS coil positions/orientations in real time using GPU shaders. The application is developed for desktop, commodity VR (HTC Vive), and fully immersive VR CAVE™ systems, for use by researchers, scientists, and medical professionals to quickly and effectively view the distribution of h-fields from MRI brain scans.
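
    A minimal sketch of the underlying field computation, assuming an idealized single circular loop evaluated with the discretized Biot-Savart law; the coil radius, current, segment count and evaluation points are illustrative values. A real-time viewer such as the one described would evaluate this on the GPU (e.g. in a shader) for every coil pose.

      # Biot-Savart sketch for an idealized single circular coil; radius, current,
      # segment count and evaluation points are illustrative values.
      import numpy as np

      MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

      def coil_b_field(points, radius=0.045, current=5000.0, segments=360):
          """Flux density B (tesla) at Nx3 'points' from a loop in the z=0 plane."""
          theta = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
          seg_pos = np.stack([radius * np.cos(theta),
                              radius * np.sin(theta),
                              np.zeros_like(theta)], axis=1)       # segment centres
          d_theta = 2.0 * np.pi / segments
          dl = np.stack([-radius * np.sin(theta),
                         radius * np.cos(theta),
                         np.zeros_like(theta)], axis=1) * d_theta  # segment vectors
          r = points[:, None, :] - seg_pos[None, :, :]             # N x S x 3
          r_norm = np.linalg.norm(r, axis=2, keepdims=True)
          dB = MU0 * current / (4.0 * np.pi) * np.cross(dl[None], r) / r_norm**3
          return dB.sum(axis=1)

      # Field on the coil axis, 1 cm to 5 cm away (roughly scalp-to-cortex depths).
      z = np.linspace(0.01, 0.05, 5)
      pts = np.stack([np.zeros_like(z), np.zeros_like(z), z], axis=1)
      print(coil_b_field(pts)[:, 2])   # axial component in tesla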

  14. VAiRoma: A Visual Analytics System for Making Sense of Places, Times, and Events in Roman History.

    Science.gov (United States)

    Cho, Isaac; Dou, Wenwen; Wang, Derek Xiaoyu; Sauda, Eric; Ribarsky, William

    2016-01-01

    Learning and gaining knowledge of Roman history is an area of interest for students and citizens at large. This is an example of a subject with great sweep (with many interrelated sub-topics over, in this case, a 3,000-year history) that is hard to grasp by any individual and, in its full detail, is not available as a coherent story. In this paper, we propose a visual analytics approach to construct a data-driven view of Roman history based on a large collection of Wikipedia articles. Extracting and enabling the discovery of useful knowledge on events, places, times, and their connections from large amounts of textual data has always been a challenging task. To this aim, we introduce VAiRoma, a visual analytics system that couples state-of-the-art text analysis methods with an intuitive visual interface to help users make sense of events, places, times, and more importantly, the relationships between them. VAiRoma goes beyond textual content exploration, as it permits users to compare, make connections, and externalize the findings all within the visual interface. As a result, VAiRoma allows users to learn and create new knowledge regarding Roman history in an informed way. We evaluated VAiRoma with 16 participants through a user study, with the task being to learn about Roman piazzas through finding relevant articles and new relationships. Our study results showed that the VAiRoma system enables the participants to find more relevant articles and connections compared to Web searches and literature search conducted in a Roman library. Subjective feedback on VAiRoma was also very positive. In addition, we ran two case studies that demonstrate how VAiRoma can be used for deeper analysis, permitting the rapid discovery and analysis of a small number of key documents even when the original collection contains hundreds of thousands of documents.

  19. A zero-footprint 3D visualization system utilizing mobile display technology for timely evaluation of stroke patients

    Science.gov (United States)

    Park, Young Woo; Guo, Bing; Mogensen, Monique; Wang, Kevin; Law, Meng; Liu, Brent

    2010-03-01

    When a patient is admitted to the emergency room with a suspected stroke, time is of the utmost importance. The infarcted brain area suffers irreparable damage as soon as three hours after the onset of stroke symptoms. A CT scan is one of the standard first-line imaging investigations and is crucial to identify and properly triage stroke cases. The limited availability of an expert radiologist in the emergency environment to diagnose the stroke patient in a timely manner only adds to the challenges within the clinical workflow. Therefore, a truly zero-footprint web-based system with powerful advanced visualization tools for volumetric imaging, including 2D, MIP/MPR and 3D display, can greatly facilitate this dynamic clinical workflow for stroke patients. Together with mobile technology, the proper visualization tools can be delivered at the point of decision anywhere and anytime. We will present a small pilot project to evaluate the use of mobile devices such as iPhones in evaluating stroke patients. The results of the evaluation as well as any challenges in setting up the system will also be discussed.

  20. Irreversible data compression concepts with polynomial fitting in time-order of particle trajectory for visualization of huge particle system

    International Nuclear Information System (INIS)

    Ohtani, H; Ito, A M; Hagita, K; Kato, T; Saitoh, T; Takeda, T

    2013-01-01

    We propose in this paper a data compression scheme for large-scale particle simulations, which has favorable prospects for scientific visualization of particle systems. Our data compression concepts deal directly with the particle orbit data obtained by simulation and have the following features: (i) through control over the compression scheme, the difference between the simulation variables and the values reconstructed from the compressed data for visualization is kept smaller than a given constant; (ii) the particles in the simulation are regarded as independent, and the time-series data for each particle are compressed with a time step chosen independently for that particle; (iii) a particle trajectory is approximated by a polynomial function based on the characteristic motion of the particle, and is reconstructed as a continuous curve by interpolating the function values between sample points. We name this concept "TOKI (Time-Order Kinetic Irreversible compression)". In this paper, we present an example implementation of a data-compression scheme with the above features. Several application results are shown for plasma and galaxy formation simulation data
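
    A minimal sketch of the per-particle idea under stated assumptions: each coordinate's time series is fitted with a low-order polynomial, only the coefficients and the time window are kept, and a window is split whenever the reconstruction error exceeds the bound. The degree, tolerance and splitting strategy are illustrative, not those of TOKI.

      # Per-particle polynomial compression: keep only coefficients per time
      # window, splitting a window whenever the error bound is exceeded.
      # Degree, tolerance and the splitting strategy are illustrative.
      import numpy as np

      def compress_trajectory(t, x, degree=3, tol=1e-3):
          """Fit x(t); if max reconstruction error > tol, split the window in two."""
          coeffs = np.polyfit(t, x, degree)
          if np.max(np.abs(np.polyval(coeffs, t) - x)) <= tol:
              return [(t[0], t[-1], coeffs)]
          mid = len(t) // 2
          return (compress_trajectory(t[:mid + 1], x[:mid + 1], degree, tol) +
                  compress_trajectory(t[mid:], x[mid:], degree, tol))

      def reconstruct(segments, t_query):
          out = np.empty_like(t_query)
          for t0, t1, coeffs in segments:
              mask = (t_query >= t0) & (t_query <= t1)
              out[mask] = np.polyval(coeffs, t_query[mask])
          return out

      # One particle coordinate over 1000 steps, compressed and reconstructed.
      t = np.linspace(0.0, 10.0, 1000)
      x = np.cos(1.3 * t) + 0.1 * t
      segments = compress_trajectory(t, x)
      error = np.max(np.abs(reconstruct(segments, t) - x))
      print(len(segments), "segments, max reconstruction error", error)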

  2. Visual management support system

    Science.gov (United States)

    Lee Anderson; Jerry Mosier; Geoffrey Chandler

    1979-01-01

    The Visual Management Support System (VMSS) is an extension of an existing computer program called VIEWIT, which has been extensively used by the U. S. Forest Service. The capabilities of this program lie in the rapid manipulation of large amounts of data, specifically operating as a tool to overlay or merge one set of data with another. VMSS was conceived to...

  6. Student Real-Time Visualization System in Classroom Using RFID Based on UTAUT Model

    Science.gov (United States)

    Raja Yusof, Raja Jamilah; Qazi, Atika; Inayat, Irum

    2017-01-01

    Purpose: The purpose of this paper is to monitor in-class activities and the performance of the students. Design/methodology/approach: A pilot study was conducted to evaluate the proposed system using a questionnaire with 132 participants (teachers and non-teachers) in a presentation style to record the participant's perception about performance…

  7. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    OpenAIRE

    Massimiliano Giulioni; Federico Corradi; Vittorio Dante; Paolo del Giudice

    2015-01-01

    Neuromorphic chips embody computational principles operating in the nervous system, into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a 'basin' of attraction compri...

  8. Analysis of Time and Space Invariance of BOLD Responses in the Rat Visual System

    DEFF Research Database (Denmark)

    Bailey, Christopher; Sanganahalli, Basavaraju G; Herman, Peter

    2012-01-01

    Neuroimaging studies of functional magnetic resonance imaging (fMRI) and electrophysiology provide the linkage between neural activity and the blood oxygenation level-dependent (BOLD) response. Here, BOLD responses to light flashes were imaged at 11.7T and compared with neural recordings from...... for general linear modeling (GLM) of BOLD responses. Light flashes induced high magnitude neural/BOLD responses reproducibly from both regions. However, neural/BOLD responses from SC and V1 were markedly different. SC signals followed the boxcar shape of the stimulation paradigm at all flash rates, whereas V1...... signals were characterized by onset/offset transients that exhibited different flash rate dependencies. We find that IRF(SC) is generally time-invariant across wider flash rate range compared with IRF(V1), whereas IRF(SC) and IRF(V1) are both space invariant. These results illustrate the importance...

  9. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussions on research involving the use of data and digital images as an understanding approach for analysis and visualization of phenomena and experiments. The emphasis is put not only on graphically representing data as a way of increasing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented in the International Conference on Advanced Computational Engineering and Experimenting -ACE-X conferences 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  10. GVS - GENERAL VISUALIZATION SYSTEM

    Science.gov (United States)

    Keith, S. R.

    1994-01-01

    The primary purpose of GVS (General Visualization System) is to support scientific visualization of data output by the panel method PMARC_12 (inventory number ARC-13362) on the Silicon Graphics Iris computer. GVS allows the user to view PMARC geometries and wakes as wire frames or as light shaded objects. Additionally, geometries can be color shaded according to phenomena such as pressure coefficient or velocity. Screen objects can be interactively translated and/or rotated to permit easy viewing. Keyframe animation is also available for studying unsteady cases. The purpose of scientific visualization is to allow the investigator to gain insight into the phenomena they are examining, therefore GVS emphasizes analysis, not artistic quality. GVS uses existing IRIX 4.0 image processing tools to allow for conversion of SGI RGB files to other formats. GVS is a self-contained program which contains all the necessary interfaces to control interaction with PMARC data. This includes 1) the GVS Tool Box, which supports color histogram analysis, lighting control, rendering control, animation, and positioning, 2) GVS on-line help, which allows the user to access control elements and get information about each control simultaneously, and 3) a limited set of basic GVS data conversion filters, which allows for the display of data requiring simpler data formats. Specialized controls for handling PMARC data include animation and wakes, and visualization of off-body scan volumes. GVS is written in C-language for use on SGI Iris series computers running IRIX. It requires 28Mb of RAM for execution. Two separate hardcopy documents are available for GVS. The basic document price for ARC-13361 includes only the GVS User's Manual, which outlines major features of the program and provides a tutorial on using GVS with PMARC_12 data. Programmers interested in modifying GVS for use with data in formats other than PMARC_12 format may purchase a copy of the draft GVS 3.1 Software Maintenance

  11. The Visual System

    Medline Plus

    Full Text Available ... to blinding eye diseases, visual disorders, mechanisms of visual function, preservation of sight, and the special health problems and requirements of the blind.” ...

  12. The Visual System

    Medline Plus

    Full Text Available ... National Eye Institute’s mission is to “conduct and support research, training, health information dissemination, and other programs with respect to blinding eye diseases, visual disorders, mechanisms of visual function, preservation of sight, ...

  13. The Visual System

    Medline Plus

    Full Text Available ... NIH), the National Eye Institute’s mission is to “conduct and support research, training, health information dissemination, and other programs with respect to blinding eye diseases, visual disorders, mechanisms of visual function, preservation of ...

  14. VisIO: enabling interactive visualization of ultra-scale, time-series data via high-bandwidth distributed I/O systems

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Christopher J [Los Alamos National Laboratory; Ahrens, James P [Los Alamos National Laboratory; Wang, Jun [UCF

    2010-10-15

    Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network often are a significant impediment to interactive visualization of time-varying data, as they are not configured or provisioned to provide necessary I/O read rates. In this paper, we propose a new I/O library for visualization applications: VisIO. Visualization applications commonly use N-to-N reads within their parallel enabled readers, which provides an incentive for a shared-nothing approach to I/O, similar to other data-intensive approaches such as Hadoop. However, unlike other data-intensive applications, visualization requires: (1) interactive performance for large data volumes, (2) compatibility with MPI and POSIX file system semantics for compatibility with existing infrastructure, and (3) use of existing file formats and their stipulated data partitioning rules. VisIO provides a mechanism for using a non-POSIX distributed file system to provide linear scaling of I/O bandwidth. In addition, we introduce a novel scheduling algorithm that helps to co-locate visualization processes on nodes with the requested data. Testing using VisIO integrated into ParaView was conducted using the Hadoop Distributed File System (HDFS) on TACC's Longhorn cluster. A representative dataset, VPIC, across 128 nodes showed a 64.4% read performance improvement compared to the provided Lustre installation. Also tested was a dataset representing a global ocean salinity simulation that showed a 51.4% improvement in read performance over Lustre when using our VisIO system. VisIO provides powerful high-performance I/O services to visualization applications, allowing for interactive performance with ultra-scale, time-series data.
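
    The co-location idea above, scheduling each parallel reader on a node that already holds its data block, can be illustrated with a toy scheduler. This is a hypothetical sketch, not VisIO's actual algorithm or the HDFS API; the block/replica map and slot counts are invented inputs of the kind a distributed file system's metadata service could supply.

        from collections import defaultdict

        def schedule_readers(block_locations, node_slots):
            """block_locations: {block_id: [node, ...]} replica placement;
            node_slots: {node: free_reader_slots}. Assumes total capacity is sufficient."""
            assignment = {}
            load = defaultdict(int)
            for block, replicas in block_locations.items():
                # Prefer a replica-holding node with a free slot (local read).
                local = [n for n in replicas if load[n] < node_slots.get(n, 0)]
                if local:
                    chosen = min(local, key=lambda n: load[n])
                else:
                    # Remote read: fall back to the least-loaded node with capacity.
                    candidates = [n for n in node_slots if load[n] < node_slots[n]]
                    chosen = min(candidates, key=lambda n: load[n])
                assignment[block] = chosen
                load[chosen] += 1
            return assignment

        # Example: three blocks, two replicas each, two reader slots per node.
        print(schedule_readers(
            {"b0": ["nodeA", "nodeB"], "b1": ["nodeB", "nodeC"], "b2": ["nodeA", "nodeC"]},
            {"nodeA": 2, "nodeB": 2, "nodeC": 2}))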

  15. Peripheral visual response time and visual display layout

    Science.gov (United States)

    Haines, R. F.

    1974-01-01

    Experiments were performed on a group of 42 subjects in a study of their peripheral visual response time to visual signals under positive acceleration, during prolonged bedrest, at passive 70 deg headup body lift, under exposures to high air temperatures and high luminance levels, and under normal stress-free laboratory conditions. Diagrams are plotted for mean response times to white, red, yellow, green, and blue stimuli under different conditions.

  16. The Visual System

    Medline Plus

    Full Text Available ... with respect to blinding eye diseases, visual disorders, mechanisms of visual function, preservation of sight, and the ...

  17. The Visual System

    Medline Plus

    Full Text Available ... to blinding eye diseases, visual disorders, mechanisms of visual function, preservation of sight, and the special health problems and requirements of ...

  18. The Visual System

    Medline Plus

    Full Text Available ... programs with respect to blinding eye diseases, visual disorders, mechanisms of visual function, preservation of sight, and the special health ...

  19. The Visual System

    Medline Plus

    Full Text Available ... blinding eye diseases, visual disorders, mechanisms of visual function, preservation of sight, and the special health problems and requirements of the blind.” ...

  20. Plataforma de desarrollo para el control en tiempo real de estructuras cinemáticas con realimentación visual//Platform to develop real time visual servoing control in kinematics systems

    Directory of Open Access Journals (Sweden)

    René González-Rodríguez

    2012-09-01

    Full Text Available In this work we present a development platform for the real-time control of kinematic structures with visual feedback. A generic configuration has been designed that allows any visual-control variant to be implemented. For image processing, we propose a strategy that allows the use of different commercial tools, or of in-house algorithms, for image capture and feature extraction. The use of Real Time Workshop and Real Time Windows Target in the internal control loop makes it possible to implement real-time visual servoing algorithms. The paper ends with the results of a visual servoing scheme applied to an industrial manipulator. The proposed platform constitutes a development tool for industrial visual servoing applications and supports the teaching of mechatronics at the undergraduate and graduate levels. Keywords: visual servoing, real-time control, kinematic structures. Abstract: In this work we propose a platform to develop visual servoing control systems. The platform has a generic design with the possibility to implement direct or look-and-move visual servoing systems. For the image processing we present a generic design allowing the use of any image processing library, such as Matrox MIL, Intel IPP or OpenCV, or any algorithm for image capture and target feature extraction. The use of Real Time Workshop and Real Time Windows Target in the internal loop makes it easy to modify the control structure in SIMULINK. Key words: visual servoing, real time control, kinematics systems.

  1. The Visual System

    Medline Plus

    Full Text Available ... of visual function, preservation of sight, and the special health problems and requirements of the blind.” ...

  2. The Visual System

    Medline Plus

    Full Text Available ... visual function, preservation of sight, and the special health problems and requirements of the blind.” ...

  3. Space-Time Disarray and Visual Awareness

    Directory of Open Access Journals (Sweden)

    Jan Koenderink

    2012-04-01

    Full Text Available Local space-time scrambling of optical data leads to violent jerks and dislocations. On masking these, visual awareness of the scene becomes cohesive, with dislocations discounted as amodally occluding foreground. Such cohesive space-time of awareness is technically illusory, because ground truth is jumbled whereas awareness is coherent. Apparently the visual field is a construction rather than a (veridical) perception.

  4. Musician Map: visualizing music collaborations over time

    Science.gov (United States)

    Yim, Ji-Dong; Shaw, Chris D.; Bartram, Lyn

    2009-01-01

    In this paper we introduce Musician Map, a web-based interactive tool for visualizing relationships among popular musicians who have released recordings since 1950. Musician Map accepts search terms from the user, and in turn uses these terms to retrieve data from MusicBrainz.org and AudioScrobbler.net, and visualizes the results. Musician Map visualizes relationships of various kinds between music groups and individual musicians, such as band membership, musical collaborations, and linkage to other artists that are generally regarded as being similar in musical style. These relationships are plotted between artists using a new timeline-based visualization where a node in a traditional node-link diagram has been transformed into a Timeline-Node, which allows the visualization of an evolving entity over time, such as the membership in a band. This allows the user to pursue social trend queries such as "Do Hip-Hop artists collaborate differently than Rock artists".

  5. Development of a High Resolution, Real Time, Distribution-Level Metering System and Associated Visualization, Modeling, and Data Analysis Functions

    Energy Technology Data Exchange (ETDEWEB)

    Bank, J.; Hambrick, J.

    2013-05-01

    NREL is developing measurement devices and a supporting data collection network specifically targeted at electrical distribution systems to support research in this area. This paper describes the measurement network which is designed to apply real-time and high speed (sub-second) measurement principles to distribution systems that are already common for the transmission level in the form of phasor measurement units and related technologies.

  6. Visualizing Mobility of Public Transportation System.

    Science.gov (United States)

    Zeng, Wei; Fu, Chi-Wing; Arisona, Stefan Müller; Erath, Alexander; Qu, Huamin

    2014-12-01

    Public transportation systems (PTSs) play an important role in modern cities, providing shared/massive transportation services that are essential for the general public. However, due to their increasing complexity, designing effective methods to visualize and explore PTS is highly challenging. Most existing techniques employ network visualization methods and focus on showing the network topology across stops while ignoring various mobility-related factors such as riding time, transfer time, waiting time, and round-the-clock patterns. This work aims to visualize and explore passenger mobility in a PTS with a family of analytical tasks based on inputs from transportation researchers. After exploring different design alternatives, we come up with an integrated solution with three visualization modules: isochrone map view for geographical information, isotime flow map view for effective temporal information comparison and manipulation, and OD-pair journey view for detailed visual analysis of mobility factors along routes between specific origin-destination pairs. The isotime flow map linearizes a flow map into a parallel isoline representation, maximizing the visualization of mobility information along the horizontal time axis while presenting clear and smooth pathways from origin to destinations. Moreover, we devise several interactive visual query methods for users to easily explore the dynamics of PTS mobility over space and time. Lastly, we also construct a PTS mobility model from millions of real passenger trajectories, and evaluate our visualization techniques with assorted case studies with the transportation researchers.

  7. BioCichlid: central dogma-based 3D visualization system of time-course microarray data on a hierarchical biological network.

    Science.gov (United States)

    Ishiwata, Ryosuke R; Morioka, Masaki S; Ogishima, Soichi; Tanaka, Hiroshi

    2009-02-15

    BioCichlid is a 3D visualization system of time-course microarray data on molecular networks, aiming at interpretation of gene expression data by transcriptional relationships based on the central dogma with physical and genetic interactions. BioCichlid visualizes both physical (protein) and genetic (regulatory) network layers, and provides animation of time-course gene expression data on the genetic network layer. Transcriptional regulations are represented to bridge the physical network (transcription factors) and genetic network (regulated genes) layers, thus integrating promoter analysis into the pathway mapping. BioCichlid enhances the interpretation of microarray data and allows for revealing the underlying mechanisms causing differential gene expressions. BioCichlid is freely available and can be accessed at http://newton.tmd.ac.jp/. Source codes for both biocichlid server and client are also available.

  8. The GEANT4 Visualization System

    International Nuclear Information System (INIS)

    Allison, J

    2007-01-01

    The Geant4 Visualization System is a multi-driver graphics system designed to serve the Geant4 Simulation Toolkit. It is aimed at the visualization of Geant4 data, primarily detector descriptions and simulated particle trajectories and hits. It can handle a variety of graphical technologies simultaneously and interchangeably, allowing the user to choose the visual representation most appropriate to requirements. It conforms to the low-level Geant4 abstract graphical user interfaces and introduces new abstract classes from which the various drivers are derived and that can be straightforwardly extended, for example, by the addition of a new driver. It makes use of an extendable class library of models and filters for data representation and selection. The Geant4 Visualization System supports a rich set of interactive commands based on the Geant4 command system. It is included in the Geant4 code distribution and maintained and documented like other components of Geant4

  9. Scientific & Intelligence Exascale Visualization Analysis System

    Energy Technology Data Exchange (ETDEWEB)

    2017-07-14

    SIEVAS provides an immersive visualization framework for connecting multiple systems in real time for data science. SIEVAS provides the ability to connect multiple COTS and GOTS products in a seamless fashion for data fusion, data analysis, and viewing. It provides this capability by using a combination of microservices, real-time messaging, and a web-service-compliant back-end system.

  10. Real-Time Agent-Based Modeling Simulation with in-situ Visualization of Complex Biological Systems: A Case Study on Vocal Fold Inflammation and Healing.

    Science.gov (United States)

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2016-05-01

    We present an efficient and scalable scheme for implementing agent-based modeling (ABM) simulation with In Situ visualization of large complex systems on heterogeneous computing platforms. The scheme is designed to make optimal use of the resources available on a heterogeneous platform consisting of a multicore CPU and a GPU, resulting in minimal to no resource idle time. Furthermore, the scheme was implemented under a client-server paradigm that enables remote users to visualize and analyze simulation data as it is being generated at each time step of the model. Performance of a simulation case study of vocal fold inflammation and wound healing with 3.8 million agents shows 35× and 7× speedup in execution time over single-core and multi-core CPU respectively. Each iteration of the model took less than 200 ms to simulate, visualize and send the results to the client. This enables users to monitor the simulation in real-time and modify its course as needed.

  11. Using Visualization in Cockpit Decision Support Systems

    Energy Technology Data Exchange (ETDEWEB)

    Aragon, Cecilia R.

    2005-07-01

    In order to safely operate their aircraft, pilots must make rapid decisions based on integrating and processing large amounts of heterogeneous information. Visual displays are often the most efficient method of presenting safety-critical data to pilots in real time. However, care must be taken to ensure the pilot is provided with the appropriate amount of information to make effective decisions and not become cognitively overloaded. The results of two usability studies of a prototype airflow hazard visualization cockpit decision support system are summarized. The studies demonstrate that such a system significantly improves the performance of helicopter pilots landing under turbulent conditions. Based on these results, design principles and implications for cockpit decision support systems using visualization are presented.

  12. Visual pattern discovery in timed event data

    Science.gov (United States)

    Schaefer, Matthias; Wanner, Franz; Mansmann, Florian; Scheible, Christian; Stennett, Verity; Hasselrot, Anders T.; Keim, Daniel A.

    2011-01-01

    Business processes have tremendously changed the way large companies conduct their business: The integration of information systems into the workflows of their employees ensures a high service level and thus high customer satisfaction. One core aspect of business process engineering is the events that steer the workflows and trigger internal processes. Strict requirements on interval-scaled temporal patterns, which are common in time series, are thereby relaxed through the ordinal character of such events. It is this additional degree of freedom that opens unexplored possibilities for visualizing event data. In this paper, we present a flexible and novel system to find significant events, event clusters and event patterns. Each event is represented as a small rectangle, which is colored according to categorical, ordinal or interval-scaled metadata. Depending on the analysis task, different layout functions are used to highlight either the ordinal character of the data or temporal correlations. The system has built-in features for ordering customers or event groups according to the similarity of their event sequences, temporal gap alignment and stacking of co-occurring events. Two characteristically different case studies dealing with business process events and news articles demonstrate the capabilities of our system to explore event data.
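
    The event-as-colored-rectangle encoding described above is easy to prototype. The sketch below is illustrative only (toy data, a plain row-per-customer layout); it is not the authors' system, which additionally supports similarity ordering, gap alignment and stacking.

        import matplotlib.pyplot as plt
        from matplotlib.patches import Rectangle

        events = {  # customer -> ordered (time, event_type) pairs (invented data)
            "customer A": [(0, "order"), (2, "payment"), (5, "complaint"), (6, "refund")],
            "customer B": [(1, "order"), (3, "payment"), (8, "order"), (9, "payment")],
        }
        colors = {"order": "tab:blue", "payment": "tab:green",
                  "complaint": "tab:red", "refund": "tab:orange"}

        fig, ax = plt.subplots()
        for row, (customer, seq) in enumerate(events.items()):
            for t, kind in seq:
                # One small colored rectangle per event, placed on its customer's row.
                ax.add_patch(Rectangle((t, row), 0.8, 0.8, color=colors[kind]))
        ax.set_xlim(0, 10)
        ax.set_ylim(-0.5, len(events))
        ax.set_yticks([r + 0.4 for r in range(len(events))])
        ax.set_yticklabels(list(events))
        ax.set_xlabel("event order / time")
        plt.show()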

  13. The Timing of Visual Object Categorization

    Science.gov (United States)

    Mack, Michael L.; Palmeri, Thomas J.

    2011-01-01

    An object can be categorized at different levels of abstraction: as natural or man-made, animal or plant, bird or dog, or as a Northern Cardinal or Pyrrhuloxia. There has been growing interest in understanding how quickly categorizations at different levels are made and how the timing of those perceptual decisions changes with experience. We specifically contrast two perspectives on the timing of object categorization at different levels of abstraction. By one account, the relative timing implies a relative timing of stages of visual processing that are tied to particular levels of object categorization: Fast categorizations are fast because they precede other categorizations within the visual processing hierarchy. By another account, the relative timing reflects when perceptual features are available over time and the quality of perceptual evidence used to drive a perceptual decision process: Fast simply means fast, it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorizations is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction. PMID:21811480

  14. A virtual reality-based method of decreasing transmission time of visual feedback for a tele-operative robotic catheter operating system.

    Science.gov (United States)

    Guo, Jin; Guo, Shuxiang; Tamiya, Takashi; Hirata, Hideyuki; Ishihara, Hidenori

    2016-03-01

    An Internet-based tele-operative robotic catheter operating system was designed for vascular interventional surgery, to afford unskilled surgeons the opportunity to learn basic catheter/guidewire skills, while allowing experienced physicians to perform surgeries cooperatively. Remote surgical procedures, limited by variable transmission times for visual feedback, have been associated with deterioration in operability and vascular wall damage during surgery. At the patient's location, the catheter shape/position was detected in real time and converted into three-dimensional coordinates in a world coordinate system. At the operation location, the catheter shape was reconstructed in a virtual-reality environment, based on the coordinates received. The data volume reduction significantly reduced visual feedback transmission times. Remote transmission experiments, conducted over inter-country distances, demonstrated the improved performance of the proposed prototype. The maximum error for the catheter shape reconstruction was 0.93 mm and the transmission time was reduced considerably. The results were positive and demonstrate the feasibility of remote surgery using conventional network infrastructures. Copyright © 2015 John Wiley & Sons, Ltd.
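
    The data-volume reduction exploited above, sending the catheter's shape as a short list of 3D coordinates and reconstructing it at the operator's side, can be illustrated with a toy serialization. The binary format and point count are assumptions for the example, not the authors' protocol.

        import struct

        def pack_catheter_shape(points):
            """Serialize [(x, y, z), ...] control points (e.g. in millimetres) as float32."""
            payload = struct.pack("<I", len(points))
            for x, y, z in points:
                payload += struct.pack("<3f", x, y, z)
            return payload

        shape = [(0.0, 0.0, 0.0), (1.2, 0.4, 0.0), (2.5, 1.1, 0.3)]  # toy control points
        msg = pack_catheter_shape(shape)
        print(len(msg), "bytes vs ~", 640 * 480 * 3, "bytes for one raw VGA video frame")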

  15. The Visual System

    Medline Plus

    Full Text Available ... times every minute. That’s up to 28,800 times a day! ...

  16. The Visual System

    Medline Plus

    Full Text Available ... On average, you blink about 15 to 20 times every minute. That’s up to 28,800 times ...

  17. The Visual System

    Medline Plus

    Full Text Available ...

  18. The Visual System

    Medline Plus

    Full Text Available ...

  19. SUBSURFACE VISUAL ALARM SYSTEM ANALYSIS

    International Nuclear Information System (INIS)

    D.W. Markman

    2001-01-01

    The ''Subsurface Fire Hazard Analysis'' (CRWMS M&O 1998, page 61) and the document ''Title III Evaluation Report for the Surface and Subsurface Communication System'' (CRWMS M&O 1999a, pages 21 and 23) both indicate that the installed communication system is adequate to support Exploratory Studies Facility (ESF) activities, with the exception of the mine phone system for emergency notification purposes. They recommend the installation of a visual alarm system to supplement the page/party phone system. The purpose of this analysis is to identify data communication highway design approaches and provide justification for the selected or recommended alternatives for the data communication of the subsurface visual alarm system. This analysis is being prepared to document a basis for the design selection of the data communication method. It will briefly describe existing data, voice communication, and monitoring systems within the ESF, and look at how these may be revised or adapted to support the needed data highway of the subsurface visual alarm system. The existing PLC communication system installed in the subsurface provides data communication for the Alcove No. 5 ventilation fans, the south portal ventilation fans, the bulkhead doors, and the generator monitoring system. It is given that the data communication of the subsurface visual alarm system will be a digital system. It is also given that it is most feasible to take advantage of existing systems and equipment rather than consider an entirely new data communication system design and installation. The scope and primary objectives of this analysis are to: (1) briefly review and describe existing available data communication highways or systems within the ESF; (2) examine the technical characteristics of an existing system to disqualify a design alternative, which is paramount in minimizing the number and depth of system reviews; and (3) apply general engineering design practices or criteria such as relative cost, and degree

  20. Visual prediction: psychophysics and neurophysiology of compensation for time delays.

    Science.gov (United States)

    Nijhawan, Romi

    2008-04-01

    A necessary consequence of the nature of neural transmission systems is that as change in the physical state of a time-varying event takes place, delays produce error between the instantaneous registered state and the external state. Another source of delay is the transmission of internal motor commands to muscles and the inertia of the musculoskeletal system. How does the central nervous system compensate for these pervasive delays? Although it has been argued that delay compensation occurs late in the motor planning stages, even the earliest visual processes, such as phototransduction, contribute significantly to delays. I argue that compensation is not an exclusive property of the motor system, but rather, is a pervasive feature of the central nervous system (CNS) organization. Although the motor planning system may contain a highly flexible compensation mechanism, accounting not just for delays but also variability in delays (e.g., those resulting from variations in luminance contrast, internal body temperature, muscle fatigue, etc.), visual mechanisms also contribute to compensation. Previous suggestions of this notion of "visual prediction" led to a lively debate producing re-examination of previous arguments, new analyses, and review of the experiments presented here. Understanding visual prediction will inform our theories of sensory processes and visual perception, and will impact our notion of visual awareness.

  1. Visualizing complex (hydrological) systems with correlation matrices

    Science.gov (United States)

    Haas, J. C.

    2016-12-01

    When trying to understand or visualize the connections between different aspects of a complex system, this often requires deeper understanding to start with or - in the case of geo data - complicated GIS software. To our knowledge, correlation matrices have rarely been used in hydrology (e.g. Stoll et al., 2011; van Loon and Laaha, 2015), yet they do provide an interesting option for data visualization and analysis. We present a simple, Python-based way - using a river catchment as an example - to visualize correlations and similarities in an easy and colorful way. We apply existing and easy-to-use Python packages from various disciplines not necessarily linked to the Earth sciences and can thus quickly show how different aquifers work or react, and identify outliers, enabling this system to also be used for quality control of large datasets. Going beyond earlier work, we add a temporal and spatial element, enabling us to visualize how a system reacts to local phenomena such as a river, or changes over time, by visualizing the passing of time in an animated movie. References: van Loon, A.F., Laaha, G.: Hydrological drought severity explained by climate and catchment characteristics, Journal of Hydrology 526, 3-14, 2015, Drought processes, modeling, and mitigation. Stoll, S., Hendricks Franssen, H. J., Barthel, R., Kinzelbach, W.: What can we learn from long-term groundwater data to improve climate change impact studies?, Hydrology and Earth System Sciences 15(12), 3861-3875, 2011.
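
    A minimal sketch of the kind of correlation-matrix view the abstract describes, using only widely available Python packages (NumPy and Matplotlib). The well names and the synthetic level series are invented placeholders, not data from the study.

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(0)
        # Toy monthly groundwater levels for five wells over ten years.
        wells = [f"well_{i}" for i in range(5)]
        levels = rng.normal(size=(120, 5)).cumsum(axis=0)

        corr = np.corrcoef(levels, rowvar=False)   # pairwise Pearson correlations

        fig, ax = plt.subplots()
        im = ax.imshow(corr, vmin=-1, vmax=1, cmap="coolwarm")
        ax.set_xticks(range(len(wells)))
        ax.set_xticklabels(wells, rotation=45, ha="right")
        ax.set_yticks(range(len(wells)))
        ax.set_yticklabels(wells)
        fig.colorbar(im, label="correlation")
        plt.tight_layout()
        plt.show()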

  2. Visualizing Human Migration Through Space and Time

    Science.gov (United States)

    Zambotti, G.; Guan, W.; Gest, J.

    2015-07-01

    Human migration has been an important activity in human societies since antiquity. Since 1890, approximately three percent of the world's population has lived outside of their country of origin. As globalization intensifies in the modern era, human migration persists even as governments seek to more stringently regulate flows. Understanding this phenomenon, its causes, processes and impacts often starts from measuring and visualizing its spatiotemporal patterns. This study builds a generic online platform for users to interactively visualize human migration through space and time. This entails quickly ingesting human migration data in plain text or tabular format; matching the records with pre-established geographic features such as administrative polygons; symbolizing the migration flow by circular arcs of varying color and weight based on the flow attributes; connecting the centroids of the origin and destination polygons; and allowing the user to select either an origin or a destination feature to display all flows in or out of that feature through time. The method was first developed using ArcGIS Server for world-wide cross-country migration, and later applied to visualizing domestic migration patterns within China between provinces, and between states in the United States, all through multiple years. The technical challenges of this study include simplifying the shapes of features to enhance user interaction, rendering performance and application scalability; enabling the temporal renderers to provide time-based rendering of features and the flow among them; and developing a responsive web design (RWD) application to provide an optimal viewing experience. The platform is available online for the public to use, and the methodology is easily adoptable to visualizing any flow, not only human migration but also the flow of goods, capital, disease, ideology, etc., between multiple origins and destinations across space and time.

  3. Using Visualization in Cockpit Decision Support Systems

    Science.gov (United States)

    Aragon, Cecilia R.

    2005-01-01

    In order to safely operate their aircraft, pilots must make rapid decisions based on integrating and processing large amounts of heterogeneous information. Visual displays are often the most efficient method of presenting safety-critical data to pilots in real time. However, care must be taken to ensure the pilot is provided with the appropriate amount of information to make effective decisions and not become cognitively overloaded. The results of two usability studies of a prototype airflow hazard visualization cockpit decision support system are summarized. The studies demonstrate that such a system significantly improves the performance of helicopter pilots landing under turbulent conditions. Based on these results, design principles and implications for cockpit decision support systems using visualization are presented.

  4. Processing of visually presented clock times.

    Science.gov (United States)

    Goolkasian, P; Park, D C

    1980-11-01

    The encoding and representation of visually presented clock times was investigated in three experiments utilizing a comparative judgment task. Experiment 1 explored the effects of comparing times presented in different formats (clock face, digit, or word), and Experiment 2 examined angular distance effects created by varying positions of the hands on clock faces. In Experiment 3, encoding and processing differences between clock faces and digitally presented times were directly measured. Same/different reactions to digitally presented times were faster than to times presented on a clock face, and this format effect was found to be a result of differences in processing that occurred after encoding. Angular separation also had a limited effect on processing. The findings are interpreted within the framework of theories that refer to the importance of representational codes. The applicability to the data of Bank's semantic-coding theory, Paivio's dual-coding theory, and the levels-of-processing view of memory are discussed.

  5. Visualization system of swirl motion

    International Nuclear Information System (INIS)

    Nakayama, K.; Umeda, K.; Ichikawa, T.; Nagano, T.; Sakata, H.

    2004-01-01

    A system composed of an experimental device and numerical analysis is presented to visualize flow and identify swirling motion. The experiment is performed with a transparent material and PIV (Particle Image Velocimetry) instrumentation, by which the velocity vector field is obtained. This vector field is then analyzed numerically by 'swirling flow analysis', which estimates its velocity gradient tensor and the corresponding eigenvalue (swirling function). Since an instantaneous flow field in steady or unsteady states is captured by PIV, the flow field can be analyzed and the existence of vortices or swirling motions, and their locations, identified regardless of their size. In addition, the intensity of swirling is evaluated. The analysis enables swirling motion to emerge even when it is hidden in a uniform flow and the velocity field does not indicate any swirling. This visualization system can be applied to investigate conditions to control flow or to design flow. (authors)

  6. Real-time crowd density mapping using a novel sensory fusion model of infrared and visual systems

    OpenAIRE

    Yaseen, S; Al-Habaibeh, A; Su, D; Otham, F

    2013-01-01

    Crowd dynamics management has seen significant attention in recent years in research and industry in an attempt to improve the safety level and management of large-scale events and of large public places such as stadiums, theatres, railway stations, subways and other places where a high flow of people at high densities is expected. Failure to detect the crowd behaviour at the right time could lead to unnecessary injuries and fatalities. Over the past decades there have been many incidents o...

  7. Unfolding Visual Lexical Decision in Time

    Science.gov (United States)

    Barca, Laura; Pezzulo, Giovanni

    2012-01-01

    Visual lexical decision is a classical paradigm in psycholinguistics, and numerous studies have assessed the so-called “lexicality effect" (i.e., better performance with lexical than non-lexical stimuli). Far less is known about the dynamics of choice, because many studies measured overall reaction times, which are not informative about underlying processes. To unfold visual lexical decision in (over) time, we measured participants' hand movements toward one of two item alternatives by recording the streaming x,y coordinates of the computer mouse. Participants categorized four kinds of stimuli as “lexical" or “non-lexical:" high and low frequency words, pseudowords, and letter strings. Spatial attraction toward the opposite category was present for low frequency words and pseudowords. Increasing the ambiguity of the stimuli led to greater movement complexity and trajectory attraction to competitors, whereas no such effect was present for high frequency words and letter strings. Results fit well with dynamic models of perceptual decision-making, which describe the process as a competition between alternatives guided by the continuous accumulation of evidence. More broadly, our results point to a key role of statistical decision theory in studying linguistic processing in terms of dynamic and non-modular mechanisms. PMID:22563419

  8. Specialized Computer Systems for Environment Visualization

    Science.gov (United States)

    Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.

    2018-06-01

    The need for real-time image generation of landscapes arises in various fields as part of tasks solved by virtual and augmented reality systems, as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing and graphically visualizing geographic data. Algorithmic, hardware, and software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path-tracing algorithm with a two-level hierarchy of bounding volumes that finds intersections with axis-aligned bounding boxes. The proposed algorithm eliminates branching and hence is more suitable for implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used to address the problem of high-quality visualization of reliefs and landscapes. The algorithm is implemented on parallel systems: clusters and Compute Unified Device Architecture networks. Results show that the implementation on MPI clusters is more efficient than on GPUs/graphics processing clusters and allows real-time synthesis. The organization and algorithms of a parallel GPU system for 3D pseudo-stereo image/video synthesis are proposed. After analyzing the feasibility of implementing each stage on a parallel GPU architecture, 3D pseudo-stereo synthesis is performed. An experimental prototype of a specialized hardware-software system for 3D pseudo-stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo-stereo imaging to the architecture of GPU systems is efficient, and that it accelerates the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame without performing optimization procedures. The acceleration is on average 11 and 54 times for the test GPUs.
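
    The ray versus axis-aligned-bounding-box test mentioned above is commonly done with the "slab" method, which needs no data-dependent branching in its core arithmetic and therefore maps well onto GPUs. The snippet below is a generic sketch of that test, not the paper's implementation.

        import numpy as np

        def ray_aabb_hit(origin, inv_dir, box_min, box_max):
            """origin, inv_dir (1/direction), box_min, box_max: length-3 arrays.
            Returns True if the ray hits the box in front of the origin."""
            t1 = (box_min - origin) * inv_dir
            t2 = (box_max - origin) * inv_dir
            t_near = np.max(np.minimum(t1, t2))   # latest entry across the three slabs
            t_far = np.min(np.maximum(t1, t2))    # earliest exit across the three slabs
            return (t_far >= t_near) and (t_far >= 0.0)

        origin = np.array([0.0, 0.0, -5.0])
        direction = np.array([0.1, 0.1, 1.0])     # nonzero components keep 1/d finite
        print(ray_aabb_hit(origin, 1.0 / direction,
                           np.array([-1.0, -1.0, -1.0]),
                           np.array([1.0, 1.0, 1.0])))   # True: the ray hits the unit box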

  9. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    Science.gov (United States)

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).

  10. Real-Time 3D Visualization

    Science.gov (United States)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end- to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  11. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This brochure describes a system dynamics (SD) simulation framework that supports an end-to-end analysis workflow optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  12. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This presentation describes a system dynamics (SD) simulation framework that supports an end-to-end analysis workflow optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  13. Public health nurse perceptions of Omaha System data visualization.

    Science.gov (United States)

    Lee, Seonah; Kim, Era; Monsen, Karen A

    2015-10-01

    Electronic health records (EHRs) provide many benefits related to the storage, deployment, and retrieval of large amounts of patient data. However, EHRs have not fully met the need to reuse data for decision making on follow-up care plans. Visualization offers new ways to present health data, especially in EHRs. Well-designed data visualization allows clinicians to communicate information efficiently and effectively, contributing to improved interpretation of clinical data and better patient care monitoring and decision making. Public health nurse (PHN) perceptions of Omaha System data visualization prototypes for use in EHRs have not been evaluated. To visualize PHN-generated Omaha System data and assess PHN perceptions regarding the visual validity, helpfulness, usefulness, and importance of the visualizations, including interactive functionality. Time-oriented visualization for problems and outcomes and Matrix visualization for problems and interventions were developed using PHN-generated Omaha System data to help PHNs consume data and plan care at the point of care. Eleven PHNs evaluated prototype visualizations. Overall PHNs response to visualizations was positive, and feedback for improvement was provided. This study demonstrated the potential for using visualization techniques within EHRs to summarize Omaha System patient data for clinicians. Further research is needed to improve and refine these visualizations and assess the potential to incorporate visualizations within clinical EHRs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Visualization System for Monitoring Data Management Systems

    Directory of Open Access Journals (Sweden)

    Emanuel Pinho

    2016-11-01

    Full Text Available Usually, a Big Data system has a monitoring system for performance evaluation and error prevention. There are some disadvantages in the way these tools display information and in their targeted approach to physical components. The main goal is to study visual and interactive mechanisms that allow the representation of monitoring data in grid computing environments, providing the end user with information that can contribute objectively to system analysis. This paper is an extension of the paper presented at (Pinho and Carvalho 2016); it presents the state of the art, describes the proposed solution, and reports the achieved goals.

  15. Visual working memory as visual attention sustained internally over time.

    Science.gov (United States)

    Chun, Marvin M

    2011-05-01

    Visual working memory and visual attention are intimately related, such that working memory encoding and maintenance reflects actively sustained attention to a limited number of visual objects and events important for ongoing cognition and action. Although attention is typically considered to operate over perceptual input, a recent taxonomy proposes to additionally consider how attention can be directed to internal perceptual representations in the absence of sensory input, as well as other internal memories, choices, and thoughts (Chun, Golomb, & Turk-Browne, 2011). Such internal attention enables prolonged binding of features into integrated objects, along with enhancement of relevant sensory mechanisms. These processes are all limited in capacity, although different types of working memory and attention, such as spatial vs. object processing, operate independently with separate capacity. Overall, the success of maintenance depends on the ability to inhibit both external (perceptual) and internal (cognitive) distraction. Working memory is the interface by which attentional mechanisms select and actively maintain relevant perceptual information from the external world as internal representations within the mind. Copyright © 2011. Published by Elsevier Ltd.

  16. Informatics solutions for Three-dimensional visualization in real time

    International Nuclear Information System (INIS)

    Guzman Montoto, Jose Ignacio

    2002-01-01

    The advances achieved in the development of hardware and in data-acquisition methods, such as tomographic scanners and image-analysis systems, have made it possible to obtain geometric models of biomedical elements that can be manipulated through three-dimensional (3D) visualization. Today this visualization spans from biological applications, including the analysis of structures and their functional relationships, to medical applications that include anatomical studies and the planning of, or training for, complex surgical operations. This work proposes computing solutions to satisfy visualization requirements in real time. The developed algorithms are contained in a graphics library that will facilitate the development of future work. The results obtained make it possible to address current problems of the three-dimensional representation of complex surfaces; realism is achieved in the images, and they have possible applications in bioinformatics and medicine.

  17. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns

    Science.gov (United States)

    Duarte, Fabiola; Lemus, Luis

    2017-01-01

    The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought for correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406

  18. Visual assistance system for cyclotron operation

    International Nuclear Information System (INIS)

    Okamura, Tetsuya; Tachikawa, Toshiki; Murakami, Tohru.

    1994-01-01

    A computer-based operation system for a cyclotron which assists operators has been developed. It is an operation-assistance system that relies on visual presentation of beam parameters to the operators. First, the mental model of operators during beam adjustment was analyzed, and it was presumed to be composed of five partial mental models: a beam behavior model, a feasible setting region model, a parameter sensitivity model, a parameter interrelation model and a status map model. Next, three visual interfaces were developed. The beam trajectory is rapidly calculated and graphically displayed whenever operators change parameters. Feasible setting regions (FSR) for parameters that satisfy the beam acceptance criteria of the cyclotron are indicated. The distribution of beam current values, which is the quantity used to evaluate the adjustment, is indicated as a search history. Finally, to evaluate the system's effectiveness, the search time required to reach the optimum conditions was measured, and the system's usability was evaluated by written questionnaires. The experiment showed a reduction of the search time by about 65%, and the questionnaire survey showed that operators rated the system's usability highly. (K.I.)

  19. Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex

    Science.gov (United States)

    Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    Summary The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272

  20. Integrative real-time geographic visualization of energy resources

    International Nuclear Information System (INIS)

    Sorokine, A.; Shankar, M.; Stovall, J.; Bhaduri, B.; King, T.; Fernandez, S.; Datar, N.; Omitaomu, O.

    2009-01-01

    'Full text:' Several models forecast that climatic changes will increase the frequency of disastrous events like droughts, hurricanes, and snow storms. Responding to these events, and also to power outages caused by system errors such as the 2003 North American blackout, requires an interconnect-wide real-time monitoring system for various energy resources. Such a system should be capable of providing situational awareness to its users in government and energy utilities by dynamically visualizing the status of the elements of the energy grid infrastructure and supply chain in geographic contexts. We demonstrate an approach that relies on Google Earth and similar standards-based platforms as client-side geographic viewers with a data-dependent server component. The users of the system can view status information in spatial and temporal contexts. These data can be integrated with a wide range of geographic sources including all standard Google Earth layers and a large number of energy and environmental data feeds. In addition, we show a real-time spatio-temporal data sharing capability across the users of the system, novel methods for visualizing dynamic network data, and fine-grained access to very large multi-resolution geographic datasets for faster delivery of the data. The system can be extended to integrate contingency analysis results and other grid models to assess recovery and repair scenarios in the case of major disruption. (author)
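
    For reference, the kind of payload a data-dependent server component would hand to a Google Earth client is a KML document. The sketch below generates a single placemark for one monitored asset; the asset name, coordinates and status field are illustrative, not part of the described system.

        def asset_placemark(name, lon, lat, status):
            """Return a minimal KML document with one status-carrying placemark."""
            return f"""<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <Placemark>
            <name>{name}</name>
            <description>status: {status}</description>
            <Point><coordinates>{lon},{lat},0</coordinates></Point>
          </Placemark>
        </kml>"""

        # Example: one hypothetical substation reported as online.
        print(asset_placemark("Substation 12", -84.31, 35.93, "online"))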

  1. ATTENTIONAL NETWORKS AND SELECTIVE VISUAL SYSTEM

    Directory of Open Access Journals (Sweden)

    ALEJANDRO CASTILLO MORENO

    2006-05-01

    Full Text Available In this paper we review the principal research and theories put forward to explain the functioning of the attention system. We begin by tracing how the concept of attention has evolved over time, from filter theories and resource-allocation theories to current theories in which attention is conceived as a control system. From this last point of view, we emphasize Posner's attentional networks theory, which proposes distinct but interrelated systems to explain different aspects of attention. Finally, we discuss experimental results that have been important in characterizing the selective attentional mechanisms of the human visual system, using the attentional spotlight model for this purpose.

  2. A proposed intracortical visual prosthesis image processing system.

    Science.gov (United States)

    Srivastava, N R; Troyk, P

    2005-01-01

    It has been a goal of neuroprosthesis researchers to develop a system that could provide artificial vision to the large population of individuals with blindness. Earlier researchers have demonstrated that electrically stimulating the visual cortex can evoke spatial visual percepts, i.e., phosphenes. The goal of a visual cortex prosthesis is to stimulate the visual cortex and generate visual perception in real time, thereby restoring vision. Even though the normal working of the visual system has not been completely understood, the existing knowledge has inspired research groups to develop strategies for visual cortex prostheses that can help blind patients in their daily activities. A major challenge in this work is the development of an image processing system for converting an electronic image, as captured by a camera, into a real-time data stream for stimulation of the implanted electrodes. This paper proposes a system that captures the image with a camera and uses dedicated real-time image processing hardware to deliver electrical pulses to intracortical electrodes. The system has to be flexible enough to adapt to individual patients and to various strategies of image reconstruction. Here we consider a preliminary architecture for this system.
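
    As a rough illustration of the kind of conversion such an image processing system performs, the sketch below downsamples a camera frame to a coarse electrode grid and maps local brightness to a stimulation amplitude per electrode. The grid size and amplitude range are assumptions for the example, not parameters from the proposed architecture.

      # Illustrative sketch only: reduce a camera frame to a coarse electrode grid
      # and map local brightness to a stimulation amplitude per electrode.
      # The 10x10 grid and 0-100 uA range are assumptions, not values from the paper.
      import numpy as np

      def frame_to_stimulation(frame, grid=(10, 10), max_amp_uA=100.0):
          h, w = frame.shape
          gh, gw = grid
          # Average brightness inside each grid cell (crop so the frame divides evenly).
          cropped = frame[: h - h % gh, : w - w % gw]
          cells = cropped.reshape(gh, cropped.shape[0] // gh, gw, cropped.shape[1] // gw)
          brightness = cells.mean(axis=(1, 3))
          # Linear map from brightness (0-255) to pulse amplitude.
          return brightness / 255.0 * max_amp_uA

      frame = np.random.randint(0, 256, size=(480, 640)).astype(float)  # stand-in camera frame
      amps = frame_to_stimulation(frame)
      print(amps.shape)  # (10, 10) amplitudes, one per electrode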

  3. Wearable Smart System for Visually Impaired People

    Directory of Open Access Journals (Sweden)

    Ali Jasim Ramadhan

    2018-03-01

    Full Text Available In this paper, we present a wearable smart system to help visually impaired persons (VIPs walk by themselves through the streets, navigate in public places, and seek assistance. The main components of the system are a microcontroller board, various sensors, cellular communication and GPS modules, and a solar panel. The system employs a set of sensors to track the path and alert the user of obstacles in front of them. The user is alerted by a sound emitted through a buzzer and by vibrations on the wrist, which is helpful when the user has hearing loss or is in a noisy environment. In addition, the system alerts people in the surroundings when the user stumbles over or requires assistance, and the alert, along with the system location, is sent as a phone message to registered mobile phones of family members and caregivers. In addition, the registered phones can be used to retrieve the system location whenever required and activate real-time tracking of the VIP. We tested the system prototype and verified its functionality and effectiveness. The proposed system has more features than other similar systems. We expect it to be a useful tool to improve the quality of life of VIPs.

  4. Wearable Smart System for Visually Impaired People.

    Science.gov (United States)

    Ramadhan, Ali Jasim

    2018-03-13

    In this paper, we present a wearable smart system to help visually impaired persons (VIPs) walk by themselves through the streets, navigate in public places, and seek assistance. The main components of the system are a microcontroller board, various sensors, cellular communication and GPS modules, and a solar panel. The system employs a set of sensors to track the path and alert the user of obstacles in front of them. The user is alerted by a sound emitted through a buzzer and by vibrations on the wrist, which is helpful when the user has hearing loss or is in a noisy environment. In addition, the system alerts people in the surroundings when the user stumbles over or requires assistance, and the alert, along with the system location, is sent as a phone message to registered mobile phones of family members and caregivers. In addition, the registered phones can be used to retrieve the system location whenever required and activate real-time tracking of the VIP. We tested the system prototype and verified its functionality and effectiveness. The proposed system has more features than other similar systems. We expect it to be a useful tool to improve the quality of life of VIPs.
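
    The alert flow described in this abstract (obstacle warning through buzzer and wrist vibration, plus an SMS with the GPS location when the user stumbles or requests help) can be sketched as a simple decision routine. The thresholds and the sensor/communication helpers below are hypothetical stand-ins, not the actual firmware.

      # Hedged sketch of the alert logic described in the abstract; thresholds,
      # sensor readings, and the send_sms/get_gps helpers are hypothetical stand-ins
      # for the actual microcontroller firmware.
      OBSTACLE_THRESHOLD_CM = 100
      CAREGIVER_NUMBERS = ["+10000000000"]  # placeholder

      def on_sensor_update(distance_cm, fall_detected, help_button_pressed,
                           buzzer, vibrator, send_sms, get_gps):
          if distance_cm < OBSTACLE_THRESHOLD_CM:
              buzzer.beep()        # audible alert
              vibrator.pulse()     # wrist vibration for noisy places / hearing loss
          if fall_detected or help_button_pressed:
              lat, lon = get_gps()
              for number in CAREGIVER_NUMBERS:
                  send_sms(number, f"Assistance needed at {lat:.5f},{lon:.5f}")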

  5. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new visual optimization scheme is introduced that modifies the rate-distortion slope. The novelty is that visual weighting is not applied by scaling the coefficients in the wavelet domain but is achieved through code-stream organization. The scheme retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution-progressive decoding, good robustness to bit-error spread, and compatibility with lossless compression. Performing better than other methods, it yields the shortest standard codestream and decompression time and supports Visual Progressive (VIP) coding.

  6. Real-time systems

    OpenAIRE

    Badr, Salah M.; Bruztman, Donald P.; Nelson, Michael L.; Byrnes, Ronald Benton

    1992-01-01

    This paper presents an introduction to the basic issues involved in real-time systems. Both real-time operating systems and real-time programming languages are explored. Concurrent programming and process synchronization and communication are also discussed. The real-time requirements of the Naval Postgraduate School Autonomous Underwater Vehicle (AUV) are then examined. Autonomous underwater vehicle (AUV), hard real-time system, real-time operating system, real-time programming language, real-time sy...

  7. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Science.gov (United States)

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  8. Advanced Visualization Software System for Nuclear Power Plant Inspection

    International Nuclear Information System (INIS)

    Kukic, I.; Jambresic, D.; Reskovic, S.

    2006-01-01

    Visualization techniques have been widely used in industrial environments to enhance process control. Traditional visualization techniques are based on control panels with switches and lights, and on 2D graphic representations of processes. However, modern visualization systems open up significant new opportunities for creating 3D virtual environments. These opportunities arise from the availability of high-end graphics capabilities in low-cost personal computers. In this paper we describe the implementation of process visualization software developed by INETEC. This software is used to visualize testing equipment, the components being tested, and the overall power plant inspection process. It improves the security of the process through its real-time visualization and collision-detection capabilities, and therefore greatly enhances inspection. (author)

  9. Dynamical analysis and visualization of tornadoes time series.

    Directory of Open Access Journals (Sweden)

    António M Lopes

    Full Text Available In this paper we analyze the behavior of tornado time-series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns.

  10. Dynamical analysis and visualization of tornadoes time series.

    Science.gov (United States)

    Lopes, António M; Tenreiro Machado, J A

    2015-01-01

    In this paper we analyze the behavior of tornado time-series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns.
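
    The spectral step described in these two records (events as Dirac impulses, Fourier amplitude spectrum, power-law approximation) can be reproduced in outline on synthetic data, fitting the exponent by linear regression in log-log space. The event counts and amplitudes below are invented for the example.

      # Sketch of the spectral analysis step described above, on synthetic data:
      # model events as Dirac impulses, take the Fourier amplitude spectrum, and
      # fit a power law |A(f)| ~ c * f^alpha by linear regression in log-log space.
      import numpy as np

      rng = np.random.default_rng(1)
      n_days = 64 * 365                          # ~64 years of daily bins
      signal = np.zeros(n_days)
      events = rng.choice(n_days, size=3000, replace=False)
      signal[events] = rng.pareto(2.0, size=3000) + 1.0   # impulse amplitude ~ event size

      spectrum = np.abs(np.fft.rfft(signal))
      freqs = np.fft.rfftfreq(n_days, d=1.0)     # cycles per day
      mask = freqs > 0                           # drop the DC component

      alpha, log_c = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
      print(f"fitted power-law exponent alpha = {alpha:.2f}")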

  11. Visual system manifestations of Alzheimer's disease.

    Science.gov (United States)

    Kusne, Yael; Wolf, Andrew B; Townley, Kate; Conway, Mandi; Peyman, Gholam A

    2017-12-01

    Alzheimer's disease (AD) is an increasingly common disease with massive personal and economic costs. While it has long been known that AD impacts the visual system, there has recently been an increased focus on understanding both pathophysiological mechanisms that may be shared between the eye and brain and how related biomarkers could be useful for AD diagnosis. Here, we review pertinent cellular and molecular mechanisms of AD pathophysiology, the presence of AD pathology in the visual system, associated functional changes, and potential development of diagnostic tools based on the visual system. Additionally, we discuss links between AD and visual disorders, including possible pathophysiological mechanisms and their relevance for improving our understanding of AD. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  12. Living Color Frame System: PC graphics tool for data visualization

    Science.gov (United States)

    Truong, Long V.

    1993-01-01

    Living Color Frame System (LCFS) is a personal computer software tool for generating real-time graphics applications. It is highly applicable for a wide range of data visualization in virtual environment applications. Engineers often use computer graphics to enhance the interpretation of data under observation. These graphics become more complicated when 'run time' animations are required, such as found in many typical modern artificial intelligence and expert systems. Living Color Frame System solves many of these real-time graphics problems.

  13. Five-dimensional ultrasound system for soft tissue visualization.

    Science.gov (United States)

    Deshmukh, Nishikant P; Caban, Jesus J; Taylor, Russell H; Hager, Gregory D; Boctor, Emad M

    2015-12-01

    A five-dimensional ultrasound (US) system is proposed as a real-time pipeline involving fusion of 3D B-mode data with the 3D ultrasound elastography (USE) data as well as visualization of these fused data and a real-time update capability over time for each consecutive scan. 3D B-mode data assist in visualizing the anatomy of the target organ, and 3D elastography data adds strain information. We investigate the feasibility of such a system and show that an end-to-end real-time system, from acquisition to visualization, can be developed. We present a system that consists of (a) a real-time 3D elastography algorithm based on a normalized cross-correlation (NCC) computation on a GPU; (b) real-time 3D B-mode acquisition and network transfer; (c) scan conversion of 3D elastography and B-mode volumes (if acquired by 4D wobbler probe); and (d) visualization software that fuses, visualizes, and updates 3D B-mode and 3D elastography data in real time. We achieved a speed improvement of 4.45-fold for the threaded version of the NCC-based 3D USE versus the non-threaded version. The maximum speed was 79 volumes/s for 3D scan conversion. In a phantom, we validated the dimensions of a 2.2-cm-diameter sphere scan-converted to B-mode volume. Also, we validated the 5D US system visualization transfer function and detected 1- and 2-cm spherical objects (phantom lesion). Finally, we applied the system to a phantom consisting of three lesions to delineate the lesions from the surrounding background regions of the phantom. A 5D US system is achievable with real-time performance. We can distinguish between hard and soft areas in a phantom using the transfer functions.
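
    The core of the elastography stage is normalized cross-correlation (NCC) between pre- and post-compression signals. The following 1-D sketch estimates the shift of a single window by maximizing NCC over candidate lags; it is an illustration of the principle, not the paper's GPU implementation.

      # 1-D illustration of the NCC step used in elastography (not the paper's GPU code):
      # find the axial shift of a small window between pre- and post-compression signals
      # by maximizing normalized cross-correlation over candidate lags.
      import numpy as np

      def ncc(a, b):
          a = a - a.mean()
          b = b - b.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return (a * b).sum() / denom if denom > 0 else 0.0

      def estimate_shift(pre, post, start, win, max_lag):
          ref = pre[start:start + win]
          scores = [ncc(ref, post[start + lag:start + lag + win])
                    for lag in range(-max_lag, max_lag + 1)]
          return np.argmax(scores) - max_lag   # lag with the highest NCC

      rng = np.random.default_rng(2)
      pre = rng.standard_normal(2000)
      post = np.roll(pre, 3) + 0.05 * rng.standard_normal(2000)   # simulated 3-sample shift
      print(estimate_shift(pre, post, start=500, win=64, max_lag=10))  # expect ~3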

  14. Guidelines to Visualize Vessels in a Geographic Information System

    OpenAIRE

    Rodighiero, Dario

    2010-01-01

    In information systems, data representation is of great importance. In fact, the visualization of information is the last point of contact between the user and the information system; this is the space where communication takes place. In real-time monitoring systems this step is especially important, for reasons related to timeliness and the transparency of relevant information. These factors are fundamental to vessel monitoring systems. This is the beginning where we ...

  15. Visualization framework for CAVE virtual reality systems

    OpenAIRE

    Kageyama, Akira; Tomiyama, Asako

    2016-01-01

    We have developed a software framework for scientific visualization in immersive-type, room-sized virtual reality (VR) systems, or Cave automatic virtual environment (CAVEs). This program, called Multiverse, allows users to select and invoke visualization programs without leaving CAVE’s VR space. Multiverse is a kind of immersive “desktop environment” for users, with a three-dimensional graphical user interface. For application developers, Multiverse is a software framework with useful class ...

  16. Visualization of the CMS python configuration system

    International Nuclear Information System (INIS)

    Erdmann, M; Fischer, R; Klimkovich, T; Mueller, G; Steggemann, J; Hegner, B; Hinzmann, A

    2010-01-01

    The job configuration system of the CMS experiment is based on the Python programming language. Software modules and their order of execution are both represented by Python objects. In order to investigate and verify configuration parameters and dependencies naturally appearing in modular software, CMS employs a graphical tool. This tool visualizes the configuration objects, their dependencies, and the information flow. Furthermore it can be used for documentation purposes. The underlying software concepts as well as the visualization are presented.

  17. Visualization of the CMS python configuration system

    Energy Technology Data Exchange (ETDEWEB)

    Erdmann, M; Fischer, R; Klimkovich, T; Mueller, G; Steggemann, J [RWTH Aachen University, Physikalisches Institut 3A, 52062 Aachen (Germany); Hegner, B [CERN, CH-1211 Geneva 23 (Switzerland); Hinzmann, A, E-mail: andreas.hinzmann@cern.c

    2010-04-01

    The job configuration system of the CMS experiment is based on the Python programming language. Software modules and their order of execution are both represented by Python objects. In order to investigate and verify configuration parameters and dependencies naturally appearing in modular software, CMS employs a graphical tool. This tool visualizes the configuration objects, their dependencies, and the information flow. Furthermore it can be used for documentation purposes. The underlying software concepts as well as the visualization are presented.
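
    The idea of visualizing configuration objects and their dependencies can be sketched generically: represent modules as Python objects with explicit dependencies and emit a Graphviz DOT description of the resulting graph. This is a simplified stand-in, not the CMS tool or its configuration API.

      # Generic sketch (not the CMS tool itself): represent configuration modules as
      # Python objects with explicit dependencies and emit a Graphviz DOT description
      # of the dependency graph for visualization.
      class Module:
          def __init__(self, name, depends_on=()):
              self.name = name
              self.depends_on = list(depends_on)

      source = Module("source")
      tracker = Module("trackerReco", depends_on=[source])
      calo = Module("caloReco", depends_on=[source])
      analysis = Module("analysis", depends_on=[tracker, calo])

      def to_dot(modules):
          lines = ["digraph config {"]
          for m in modules:
              for dep in m.depends_on:
                  lines.append(f'  "{dep.name}" -> "{m.name}";')
          lines.append("}")
          return "\n".join(lines)

      print(to_dot([source, tracker, calo, analysis]))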

  18. Opposite Distortions in Interval Timing Perception for Visual and Auditory Stimuli with Temporal Modulations.

    Science.gov (United States)

    Yuasa, Kenichi; Yotsumoto, Yuko

    2015-01-01

    When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remains unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval times of visually and aurally presented objects shared a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.

  19. Visual software system for memory interleaving simulation

    Directory of Open Access Journals (Sweden)

    Milenković Katarina

    2017-01-01

    Full Text Available This paper describes the visual software system for memory interleaving simulation (VSMIS), implemented for the course Computer Architecture and Organization 1 at the School of Electrical Engineering, University of Belgrade. The simulator enables students to expand their knowledge through practical work in the laboratory, as well as through independent work at home. VSMIS gives users the possibility to initialize parts of the system and to control simulation steps. The user can monitor the simulation through a graphical representation and navigate through the entire hierarchy of the system using simple navigation. During the simulation the user can observe and set the values of memory locations. At any time, the user can reset the simulation and observe it for different memory states; in addition, it is possible to save the current state of the simulation and continue its execution later. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. III44009]
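
    The concept the simulator teaches, memory interleaving, boils down to a simple address mapping: with low-order interleaving, consecutive word addresses are spread across banks so that sequential accesses can overlap. A minimal sketch of that mapping (not the VSMIS code) follows.

      # Sketch of the address mapping behind (low-order) memory interleaving, the
      # concept the simulator visualizes: consecutive word addresses are spread
      # across banks so sequential accesses can proceed in parallel.
      def interleave(word_address, n_banks):
          bank = word_address % n_banks          # which memory module services the access
          offset = word_address // n_banks       # location inside that module
          return bank, offset

      for addr in range(8):
          print(addr, interleave(addr, n_banks=4))
      # With 4 banks, addresses 0,1,2,3 map to banks 0,1,2,3 and 4,5,6,7 wrap around.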

  20. Real-time visualization of joint cavitation.

    Directory of Open Access Journals (Sweden)

    Gregory N Kawchuk

    Full Text Available Cracking sounds emitted from human synovial joints have been attributed historically to the sudden collapse of a cavitation bubble formed as articular surfaces are separated. Unfortunately, bubble collapse as the source of joint cracking is inconsistent with many physical phenomena that define the joint cracking phenomenon. Here we present direct evidence from real-time magnetic resonance imaging that the mechanism of joint cracking is related to cavity formation rather than bubble collapse. In this study, ten metacarpophalangeal joints were studied by inserting the finger of interest into a flexible tube tightened around a length of cable used to provide long-axis traction. Before and after traction, static 3D T1-weighted magnetic resonance images were acquired. During traction, rapid cine magnetic resonance images were obtained from the joint midline at a rate of 3.2 frames per second until the cracking event occurred. As traction forces increased, real-time cine magnetic resonance imaging demonstrated rapid cavity inception at the time of joint separation and sound production after which the resulting cavity remained visible. Our results offer direct experimental evidence that joint cracking is associated with cavity inception rather than collapse of a pre-existing bubble. These observations are consistent with tribonucleation, a known process where opposing surfaces resist separation until a critical point where they then separate rapidly creating sustained gas cavities. Observed previously in vitro, this is the first in-vivo macroscopic demonstration of tribonucleation and as such, provides a new theoretical framework to investigate health outcomes associated with joint cracking.

  1. Monitoring external beam radiotherapy using real-time beam visualization

    Energy Technology Data Exchange (ETDEWEB)

    Jenkins, Cesare H. [Department of Mechanical Engineering and Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Naczynski, Dominik J.; Yu, Shu-Jung S.; Xing, Lei, E-mail: lei@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California 94305 (United States)

    2015-01-15

    Purpose: To characterize the performance of a novel radiation therapy monitoring technique that utilizes a flexible scintillating film, common optical detectors, and image processing algorithms for real-time beam visualization (RT-BV). Methods: Scintillating films were formed by mixing Gd2O2S:Tb (GOS) with silicone and casting the mixture at room temperature. The films were placed in the path of therapeutic beams generated by medical linear accelerators (LINAC). The emitted light was subsequently captured using a CMOS digital camera. Image processing algorithms were used to extract the intensity, shape, and location of the radiation field at various beam energies, dose rates, and collimator locations. The measurement results were compared with known collimator settings to validate the performance of the imaging system. Results: The RT-BV system achieved a sufficient contrast-to-noise ratio to enable real-time monitoring of the LINAC beam at 20 fps with normal ambient lighting in the LINAC room. The RT-BV system successfully identified collimator movements with sub-millimeter resolution. Conclusions: The RT-BV system is capable of localizing radiation therapy beams with sub-millimeter precision and tracking beam movement at video-rate exposure.
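
    The image processing stage described in the Methods (extracting the intensity, shape, and location of the radiation field from the scintillation image) can be sketched as a threshold-and-measure routine; the percentile threshold and the synthetic test image below are assumptions, not the authors' algorithm.

      # Sketch of the kind of image processing described (not the authors' code):
      # segment the bright scintillation region, report its centroid and bounding
      # box, and compute a simple contrast-to-noise ratio against the background.
      import numpy as np

      def analyze_field(image):
          background = np.percentile(image, 20)
          threshold = background + 0.5 * (image.max() - background)
          mask = image > threshold
          ys, xs = np.nonzero(mask)
          centroid = (ys.mean(), xs.mean())
          bbox = (ys.min(), xs.min(), ys.max(), xs.max())
          cnr = (image[mask].mean() - image[~mask].mean()) / image[~mask].std()
          return centroid, bbox, cnr

      img = np.random.normal(100, 5, size=(200, 200))
      img[80:120, 90:140] += 60                       # synthetic beam field
      print(analyze_field(img))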

  2. Sunfall: a collaborative visual analytics system for astrophysics

    International Nuclear Information System (INIS)

    Aragon, C R; Bailey, S J; Poon, S; Runge, K; Thomas, R C

    2008-01-01

    Computational and experimental sciences produce and collect ever-larger and complex datasets, often in large-scale, multi-institution projects. The inability to gain insight into complex scientific phenomena using current software tools is a bottleneck facing virtually all endeavors of science. In this paper, we introduce Sunfall, a collaborative visual analytics system developed for the Nearby Supernova Factory, an international astrophysics experiment and the largest data volume supernova search currently in operation. Sunfall utilizes novel interactive visualization and analysis techniques to facilitate deeper scientific insight into complex, noisy, high-dimensional, high-volume, time-critical data. The system combines novel image processing algorithms, statistical analysis, and machine learning with highly interactive visual interfaces to enable collaborative, user-driven scientific exploration of supernova image and spectral data. Sunfall is currently in operation at the Nearby Supernova Factory; it is the first visual analytics system in production use at a major astrophysics project

  3. Sunfall: a collaborative visual analytics system for astrophysics

    Energy Technology Data Exchange (ETDEWEB)

    Aragon, Cecilia R.; Aragon, Cecilia R.; Bailey, Stephen J.; Poon, Sarah; Runge, Karl; Thomas, Rollin C.

    2008-07-07

    Computational and experimental sciences produce and collect ever-larger and complex datasets, often in large-scale, multi-institution projects. The inability to gain insight into complex scientific phenomena using current software tools is a bottleneck facing virtually all endeavors of science. In this paper, we introduce Sunfall, a collaborative visual analytics system developed for the Nearby Supernova Factory, an international astrophysics experiment and the largest data volume supernova search currently in operation. Sunfall utilizes novel interactive visualization and analysis techniques to facilitate deeper scientific insight into complex, noisy, high-dimensional, high-volume, time-critical data. The system combines novel image processing algorithms, statistical analysis, and machine learning with highly interactive visual interfaces to enable collaborative, user-driven scientific exploration of supernova image and spectral data. Sunfall is currently in operation at the Nearby Supernova Factory; it is the first visual analytics system in production use at a major astrophysics project.

  4. Sunfall: a collaborative visual analytics system for astrophysics

    Energy Technology Data Exchange (ETDEWEB)

    Aragon, C R; Bailey, S J; Poon, S; Runge, K; Thomas, R C [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)], E-mail: CRAragon@lbl.gov

    2008-07-15

    Computational and experimental sciences produce and collect ever-larger and complex datasets, often in large-scale, multi-institution projects. The inability to gain insight into complex scientific phenomena using current software tools is a bottleneck facing virtually all endeavors of science. In this paper, we introduce Sunfall, a collaborative visual analytics system developed for the Nearby Supernova Factory, an international astrophysics experiment and the largest data volume supernova search currently in operation. Sunfall utilizes novel interactive visualization and analysis techniques to facilitate deeper scientific insight into complex, noisy, high-dimensional, high-volume, time-critical data. The system combines novel image processing algorithms, statistical analysis, and machine learning with highly interactive visual interfaces to enable collaborative, user-driven scientific exploration of supernova image and spectral data. Sunfall is currently in operation at the Nearby Supernova Factory; it is the first visual analytics system in production use at a major astrophysics project.

  5. Visualization of thermal management system in space using neutron radiography

    International Nuclear Information System (INIS)

    Nakazawa, Takeshi

    1995-01-01

    Visualization by neutron radiography is effective for imaging liquids inside metals, and applications in a wide range of fields have been reported. In this paper, as an example of applying this technique, an experiment visualizing a two-phase fluid loop heat removal system intended for use in the space environment was carried out, and its results are reported. For future large-scale spacecraft and space stations, heat removal systems based on a two-phase fluid loop, which exploit the phase change of the heat transport medium, are regarded as promising. With such a system, good heat transfer performance is obtained, the heat transported per unit mass of medium increases, and the pumping power and the weight of the total system are reduced; the temperature can be controlled by the system pressure. The two-phase fluid loop used for the visualization experiment and the experimental results are described. In the experiment, using the real-time NRG system at the JRR-3M, the boiling and evaporation phenomena in the capillary heat transfer tubes were visualized. (K.I.)

  6. Development of the real time monitor system

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Katsumi [Research Organization for Information Science and Technology, Tokai, Ibaraki (Japan); Watanabe, Tadashi; Kaburaki, Hideo

    1996-10-01

    Large-scale simulation techniques are studied at the Center for Promotion of Computational Science and Engineering (CCSE) for computational science research in nuclear fields. Visualization and animation processing techniques are studied and developed for efficient understanding of simulation results. This report describes the real-time monitor system, in which on-going simulation results are transferred from a supercomputer or workstation to a graphics workstation, where they are visualized and recorded. The system is composed of the graphics workstation and video equipment connected to the network. The control shell programs are the job-execution shell for simulations on supercomputers, the file-transfer shell for the output files to be visualized, and the shell for starting the visualization tools. No special image processing techniques or hardware are necessary; the standard visualization tool AVS and ordinary UNIX commands are used, so the system can be implemented and applied in various computer environments. (author)

  7. Timing system for PLS

    International Nuclear Information System (INIS)

    Chang, S.S.; Kim, M.S.; Won, S.C.; Choi, S.J.

    1991-01-01

    The PLS timing system consists of a master oscillator, a repetition-rate pulse generator, a storage ring rf synchronizing system, and an rf driver and kicker trigger system composed of a fixed delay module and variable delay modules. All the timing modules are installed in VME crates, are controlled by 32-bit microprocessors, and communicate with the host computer via Ethernet. This paper describes the architectural design of this system as well as its performance requirements.

  8. A Visual Formalism for Interacting Systems

    Directory of Open Access Journals (Sweden)

    Paul C. Jorgensen

    2015-04-01

    Full Text Available Interacting systems are increasingly common. Many examples pervade our everyday lives: automobiles, aircraft, defense systems, telephone switching systems, financial systems, national governments, and so on. Closer to computer science, embedded systems and Systems of Systems are further examples of interacting systems. Common to all of these is that some "whole" is made up of constituent parts, and these parts interact with each other. By design, these interactions are intentional, but it is the unintended interactions that are problematic. The Systems of Systems literature uses the terms "constituent systems" and "constituents" to refer to systems that interact with each other. That practice is followed here. This paper presents a visual formalism, Swim Lane Event-Driven Petri Nets, that is proposed as a basis for Model-Based Testing (MBT of interacting systems. In the absence of available tools, this model can only support the offline form of Model-Based Testing.

  9. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu eYue

    2016-04-01

    Full Text Available Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants were shown either a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor. The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a

  10. Odors Bias Time Perception in Visual and Auditory Modalities.

    Science.gov (United States)

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants were shown either a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of

  11. Timing the impact of literacy on visual processing

    Science.gov (United States)

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas

    2014-01-01

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460

  12. Art, illusion and the visual system.

    Science.gov (United States)

    Livingstone, M S

    1988-01-01

    The verve of op art, the serenity of a pointillist painting and the 3-D puzzlement of an Escher print derive from the interplay of the art with the anatomy of the visual system. Color, shape and movement are each processed separately by different structures in the eye and brain and then are combined to produce the experience we call perception.

  13. Real time expert systems

    International Nuclear Information System (INIS)

    Asami, Tohru; Hashimoto, Kazuo; Yamamoto, Seiichi

    1992-01-01

    Recently, research on and practical use of expert systems suited to real-time processing have become prominent, aimed at applications such as plant control for nuclear reactors and traffic and communication control. This report presents the functional requirements for controlling objects that change dynamically within a limited time, and explains, with actual examples and from a theoretical standpoint, the technical differences between real-time expert systems developed to satisfy these requirements and conventional expert systems. Conventional expert systems are technically based on the problem-solving machinery originating in STRIPS. Real-time expert systems are applied to fields involving surveillance and control, to which conventional expert systems are hard to apply. The report explains the requirements for a real-time expert system, gives examples of real-time expert systems, and, as techniques for realizing real-time processing, describes the implementation of interrupt processing and distributed processing and the mechanism for maintaining the consistency of knowledge. (K.I.)

  14. The TRISTAN timing system

    International Nuclear Information System (INIS)

    Urakawa, Junji; Ishii, Kazuhiro; Kadokura, Eiichi; Kawamoto, Takashi; Kikuchi, Mitsuo; Kikutani, Eiji

    1990-01-01

    The TRISTAN accelerator complex comprises four accelerators: a 200 MeV electron linac for positron production, a 2.5 GeV linac, an 8 GeV accumulation ring (AR) and a 30 GeV main ring (MR). The TRISTAN timing system is divided into fast and slow timing systems. The fast timing system supplies timing signals (fast timing) for devices whose operation is synchronized with bunched beams from either the linac or the AR. These signals are also used in various beam monitors and beam feedback systems. The slow timing system generates trigger signals (slow timing) in order to achieve synchronization between the magnetic field and the rf accelerating voltage of the AR or MR. These triggers are also used for the automatic operation of machines. The TRISTAN timing system fulfills the following features with the required flexibility and extensibility while in the operation mode: (1) the linac gun trigger signals and the AR revolution clock are synchronized within ≅ 100 ps in timing accuracy, and a short pulse (≅ 1.5 ns) from the linac is injected and accumulated into an arbitrarily selected bucket of AR for a long time; (2) bucket matching between the AR and MR is achieved within ±6 ps in timing accuracy and a single bunched beam from the AR is injected into an arbitrarily selected bucket of the MR; (3) the slow timing system manages the operation mode of the AR and MR with both flexibility and extensibility; (4) the synchronization signals are transmitted through coaxial cables over a circumference of 3 km from the main control room. (orig.)

  15. Dependency visualization for complex system understanding

    Energy Technology Data Exchange (ETDEWEB)

    Smart, J. Allison Cory [Univ. of California, Davis, CA (United States)

    1994-09-01

    With the volume of software in production use dramatically increasing, the importance of software maintenance has become strikingly apparent. Techniques are now being sought and developed for reverse engineering and for design extraction and recovery. At present, numerous commercial products and research tools exist which are capable of visualizing a variety of programming languages and software constructs. The list of new tools and services continues to grow rapidly. Although the scope of the existing commercial and academic product set is quite broad, these tools still share a common underlying problem. The ability of each tool to visually organize object representations is increasingly impaired as the number of components and component dependencies within systems increases. Regardless of how objects are defined, complex "spaghetti" networks result in nearly all large system cases. While this problem is immediately apparent in modern systems analysis involving large software implementations, it is not new. As will be discussed in Chapter 2, related problems involving the theory of graphs were identified long ago. This important theoretical foundation provides a useful vehicle for representing and analyzing complex system structures. While the utility of directed-graph-based concepts in software tool design has been demonstrated in the literature, these tools still lack the capabilities necessary for large system comprehension. This foundation must therefore be expanded with new organizational and visualization constructs necessary to meet this challenge. This dissertation addresses this need by constructing a conceptual model and a set of methods for interactively exploring, organizing, and understanding the structure of complex software systems.

  16. Visualizing systems engineering data with Java

    International Nuclear Information System (INIS)

    Barter, R; Vinzant, A.

    1998-01-01

    Systems Engineers are required to deal with complex sets of data. To be useful, the data must be managed effectively, and presented in meaningful terms to a wide variety of information consumers. Two software patterns are presented as the basis for exploring the visualization of systems engineering data. The Model, View, Controller pattern defines an information management system architecture. The Entity, Relation, Attribute pattern defines the information model. MVC Views then form the basis for the user interface between the information consumer and the MVC Controller/Model combination. A Java tool set is described for exploring alternative views into the underlying complex data structures encountered in systems engineering

  17. Accelerator-timing system

    International Nuclear Information System (INIS)

    Timmer, E.; Heine, E.

    1985-01-01

    Along the NIKHEF accelerator in Amsterdam (Netherlands), a signal is needed at several places for the synchronisation of all devices with the acceleration process. In this report, the basic principles and arrangements of this timing system are described.

  18. How the visual brain encodes and keeps track of time.

    Science.gov (United States)

    Salvioni, Paolo; Murray, Micah M; Kalmbach, Lysiann; Bueti, Domenica

    2013-07-24

    Time is embedded in any sensory experience: the movements of a dance, the rhythm of a piece of music, the words of a speaker are all examples of temporally structured sensory events. In humans, if and how visual cortices perform temporal processing remains unclear. Here we show that both primary visual cortex (V1) and extrastriate area V5/MT are causally involved in encoding and keeping time in memory and that this involvement is independent from low-level visual processing. Most importantly we demonstrate that V1 and V5/MT come into play simultaneously and seem to be functionally linked during interval encoding, whereas they operate serially (V1 followed by V5/MT) and seem to be independent while maintaining temporal information in working memory. These data help to refine our knowledge of the functional properties of human visual cortex, highlighting the contribution and the temporal dynamics of V1 and V5/MT in the processing of the temporal aspects of visual information.

  19. Time-sharing visual and auditory tracking tasks

    Science.gov (United States)

    Tsang, Pamela S.; Vidulich, Michael A.

    1987-01-01

    An experiment is described which examined the benefits of distributing the input demands of two tracking tasks as a function of task integrality. Visual and auditory compensatory tracking tasks were utilized. Results indicate that presenting the two tracking signals in two input modalities did not improve time-sharing efficiency. This was attributed to the difficulty insensitivity phenomenon.

  20. Discrete-Time Systems

    Indian Academy of Sciences (India)

    We also describe discrete-time systems in terms of difference equations ... A more modern alternative, especially for larger systems, is to convert ... In other words, ... State-variable equations are also called state-space equations because the ...

  1. EmailTime: visual analytics and statistics for temporal email

    Science.gov (United States)

    Erfani Joorabchi, Minoo; Yim, Ji-Dong; Shaw, Christopher D.

    2011-01-01

    Although the discovery and analysis of communication patterns in large and complex email datasets are difficult tasks, they can be a valuable source of information. We present EmailTime, a visual analysis tool for email correspondence patterns over the course of time that interactively portrays personal and interpersonal networks using the correspondence in the email dataset. Our approach is to treat time as the primary variable of interest and plot emails along a timeline. EmailTime helps email dataset explorers interpret archived messages by providing zooming, panning, filtering, highlighting, and similar interactions. To support analysis, it also measures and visualizes histograms, graph centrality, and frequency on the communication graph that can be induced from the email collection. This paper describes EmailTime's capabilities, along with a large case study with the Enron email dataset exploring the behaviors of email users in different organizational positions from January 2000 to December 2001. We defined email behavior as a person's email activity level with respect to a set of measured metrics, e.g., numbers of emails sent and received, numbers of email addresses, etc. These metrics were calculated through EmailTime. Results showed specific patterns in the use of email within different organizational positions. We suggest that integrating both statistics and visualizations to display information about email datasets may simplify their evaluation.
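
    The graph measures mentioned above (histograms and centrality on the communication graph induced from the email collection) can be sketched with a small directed graph built from sender-recipient pairs. The toy records and the use of networkx are illustrative choices, not EmailTime's implementation.

      # Sketch of the graph measures mentioned above: build a directed communication
      # graph from (sender, recipient) pairs and compute degree centrality and a
      # per-sender message histogram. The toy records below are made up.
      from collections import Counter
      import networkx as nx

      emails = [
          ("alice@example.com", "bob@example.com"),
          ("alice@example.com", "carol@example.com"),
          ("bob@example.com", "alice@example.com"),
          ("carol@example.com", "alice@example.com"),
      ]

      g = nx.DiGraph()
      for sender, recipient in emails:
          # Accumulate a weight so repeated correspondence thickens the edge.
          w = g.get_edge_data(sender, recipient, {"weight": 0})["weight"]
          g.add_edge(sender, recipient, weight=w + 1)

      print(nx.degree_centrality(g))                 # who is most connected overall
      print(Counter(sender for sender, _ in emails)) # sent-mail histogram per person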

  2. Visualizations of Travel Time Performance Based on Vehicle Reidentification Data

    Energy Technology Data Exchange (ETDEWEB)

    Young, Stanley Ernest [National Renewable Energy Lab, 15013 Denver West Parkway, Golden, CO 80401; Sharifi, Elham [Center for Advanced Transportation Technology, University of Maryland, College Park, Technology Ventures Building, Suite 2200, 5000 College Avenue, College Park, MD 20742; Day, Christopher M. [Joint Transportation Research Program, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47906; Bullock, Darcy M. [Lyles School of Civil Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47906

    2017-01-01

    This paper provides a visual reference of the breadth of arterial performance phenomena based on travel time measures obtained from reidentification technology that has proliferated in the past 5 years. These graphical performance measures are presented through overlay charts and through statistical distributions in the form of cumulative frequency diagrams (CFDs). With overlays of vehicle travel times from multiple days, dominant traffic patterns over a 24-h period are reinforced and reveal the traffic behavior induced primarily by the operation of traffic control at signalized intersections. A cumulative distribution function in the statistical literature provides a method for comparing traffic patterns from various time frames or locations in a compact visual format that provides intuitive feedback on arterial performance. The CFD may be accumulated hourly, by peak periods, or by time periods specific to the signal timing plans that are in effect. Combined, overlay charts and CFDs provide visual tools with which to assess the quality and consistency of traffic movement for various periods throughout the day efficiently, without sacrificing detail, a loss that is a typical byproduct of numeric-based performance measures. These methods are particularly effective for comparing before-and-after median travel times, as well as changes in interquartile range, to assess travel time reliability.
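
    A cumulative frequency diagram is simply the empirical distribution of observed travel times: sort the observations and plot each against its cumulative fraction. The sketch below compares two synthetic before/after samples; the numbers are invented for illustration.

      # Minimal sketch of a cumulative frequency diagram (CFD) of travel times:
      # sort the observations and plot each value against its cumulative fraction.
      # The travel-time samples below are synthetic.
      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(3)
      before = rng.normal(210, 30, size=500)   # seconds, before retiming (synthetic)
      after = rng.normal(185, 20, size=500)    # seconds, after retiming (synthetic)

      for label, sample in [("before", before), ("after", after)]:
          x = np.sort(sample)
          y = np.arange(1, len(x) + 1) / len(x)     # cumulative fraction
          plt.plot(x, y, label=label)

      plt.xlabel("travel time (s)")
      plt.ylabel("cumulative frequency")
      plt.legend()
      plt.show()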

  3. FTSPlot: fast time series visualization for large datasets.

    Directory of Open Access Journals (Sweden)

    Michael Riss

    Full Text Available The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately to the increase in the volume of data. This paper presents FTSPlot, which is a visualization concept for large-scale time series datasets using techniques from the field of high performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of O(n x log(N; the visualization itself can be done with a complexity of O(1 and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free with < 20 ms ms. The current 64-bit implementation theoretically supports datasets with up to 2(64 bytes, on the x86_64 architecture currently up to 2(48 bytes are supported, and benchmarks have been conducted with 2(40 bytes/1 TiB or 1.3 x 10(11 double precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments.

  4. Wearable Smart System for Visually Impaired People

    OpenAIRE

    Ali Jasim Ramadhan

    2018-01-01

    In this paper, we present a wearable smart system to help visually impaired persons (VIPs) walk by themselves through the streets, navigate in public places, and seek assistance. The main components of the system are a microcontroller board, various sensors, cellular communication and GPS modules, and a solar panel. The system employs a set of sensors to track the path and alert the user of obstacles in front of them. The user is alerted by a sound emitted through a buzzer and by vibrations o...

  5. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under the six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were the video recordings of a face of a female Japanese speaking long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delays in sixteen untrained young subjects. Speech intelligibility under the audio-delay condition of less than 120 ms was significantly better than that under the audio-alone condition. On the other hand, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt lip-reading advantage, because visual and auditory information in speech seemed to be integrated on a syllabic time scale. Potential applications of this research include noisy workplace in which a worker must extract relevant speech from all the other competing noises.

  6. An approach for the development of visual configuration systems

    DEFF Research Database (Denmark)

    Hvam, Lars; Ladeby, Klaes Rohde

    2007-01-01

    How can a visual configuration system be developed to support the specification processes in companies that manufacture customer-tailored products? This article focuses on how visual configuration systems can be developed. The approach for developing visual configuration systems has been developed... by Centre for Product Modelling (CPM) at The Technical University of Denmark. The approach is based on experiences from a visualization project in co-operation between CPM and the global provider of power protection American Power Conversion (APC). The visual configuration system was developed in 2001... of the product in the visual configuration system.

  7. Visual perception system and method for a humanoid robot

    Science.gov (United States)

    Wells, James W. (Inventor); Mc Kay, Neil David (Inventor); Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.

  8. Development of a guiding system and visual feedback real-time controller for the high-speed self-align optical cable winding

    International Nuclear Information System (INIS)

    Lee, Chang Woo; Kang, Hyun Kyoo; Shin, Kee Hyun

    2008-01-01

    Recently, the demand for optical cable has been growing rapidly because of the increasing number of internet users and the high-speed data transmission they require. However, present optical cable winding systems have serious problems, such as pile-up and collapse of cables, usually near the flange of the bobbin, during winding. To reduce pile-up and collapse in cable winding systems, a new guiding system was developed for high-speed self-aligning cable winding. First, mathematical models for the winding process and for bobbin shape fault compensation were proposed; the winding mechanism was analyzed, and synchronization logic for the winding, traversing, and guiding motions was created. A prototype cable winding system was manufactured to validate the new guiding system and the suggested logic. Experimental results showed that the winding system with the developed guiding system outperformed the system without it in reducing pile-up and collapse in high-speed winding.

  9. Radiation Counting System Software Using Visual Basic

    International Nuclear Information System (INIS)

    Nanda Nagara; Didi Gayani

    2009-01-01

    A gamma radiation counting system has been created using an interface card paired with a personal computer (PC) and operated by a Visual Basic program. The program is operated through menu selections such as "Multi Counting", "Counting and Record" and "View Data". The interface card for data acquisition was built using AMD9513 components as a programmable counter and timer. The counting system was tested and used at the waste facility in PTNBR with good results. (author)

  10. Time reversal communication system

    Science.gov (United States)

    Candy, James V.; Meyer, Alan W.

    2008-12-02

    A system of transmitting a signal through a channel medium comprises digitizing the signal, time-reversing the digitized signal, and transmitting the signal through the channel medium. The channel medium may be air, earth, water, tissue, metal, and/or non-metal.

  11. Testing the accuracy of timing reports in visual timing tasks with a consumer-grade digital camera.

    Science.gov (United States)

    Smyth, Rachael E; Oram Cardy, Janis; Purcell, David

    2017-06-01

    This study tested the accuracy of a visual timing task using a readily available and relatively inexpensive consumer-grade digital camera. A visual inspection time task was recorded using short high-speed video clips, and the timing reported by the task's program was compared to the timing recorded in the video clips. Discrepancies between these two timing reports were investigated further and, based on the display refresh rate, a decision was made as to whether the discrepancy was large enough to affect the results reported by the task. In this particular study, the errors in timing were not large enough to impact the results of the study. The procedure presented in this article offers an alternative method for performing a timing test, which uses readily available hardware and can be used to test the timing of any software program on any operating system and display.

  12. Integration of multidisciplinary technologies for real time target visualization and verification for radiotherapy.

    Science.gov (United States)

    Chang, Wen-Chung; Chen, Chin-Sheng; Tai, Hung-Chi; Liu, Chia-Yuan; Chen, Yu-Jen

    2014-01-01

    The current practice of radiotherapy examines target coverage solely from the digitally reconstructed beam's eye view (BEV), in a way that is only indirectly accessible and not in real time. We aimed to visualize treatment targets in real time from each BEV. Image data of phantoms or patients from ultrasound (US) and computed tomography (CT) scans were captured to perform image registration. We integrated US, CT, US/CT image registration, robotic manipulation of US, a radiation treatment planning system, and a linear accelerator to constitute an innovative target visualization system. The algorithm segmented the target organ in CT images, transformed and reconstructed US images to match each orientation, and generated image registration in real-time mode with acceptable accuracy. This image transformation allowed physicians to visualize the CT image-reconstructed target via a US probe outside the BEV that was non-coplanar to the beam's plane. It allowed the physicians to remotely control the US probe, which was mounted on a robotic arm, to dynamically trace and monitor in real time the coverage of the target within the BEV during a simulated beam-on situation. This target visualization system may provide a direct, remotely accessible, and real-time way to visualize, verify, and ensure tumor targeting during radiotherapy.

  13. Dividing time: concurrent timing of auditory and visual events by young and elderly adults.

    Science.gov (United States)

    McAuley, J Devin; Miller, Jonathan P; Wang, Mo; Pang, Kevin C H

    2010-07-01

    This article examines age differences in individuals' ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory targets. Age-related impairments were associated with a decrease in working memory span; moreover, the relationship between working memory and timing performance was largest for visual targets in divided attention conditions.

  14. Alert system for students with visual disabilities at the UTM

    Directory of Open Access Journals (Sweden)

    Marely del Rosario Cruz Felipe

    2018-01-01

    Full Text Available In the movement of students with visual disabilities at the Technical University of Manabí (UTM), accidents have been reported when negotiating ramps and other obstacles, especially on rainy days. This article is part of an investigation into the development of an alert system for students with visual disabilities. The objective of the system is to alert students with visual disabilities to the different obstacles they encounter as they move through the university. To carry out the implementation of this system, the alert systems and technologies currently in use were analyzed, as a result of a study carried out at the national and international levels, and the tools and technologies used in the developed solution are described (definitions, technologies for the movement of people, software, programming languages, etc.), which allowed an efficient implementation of the proposed system in a short time by means of RFID (Radio Frequency Identification) technology. The above is reflected in the successful guidance of 32 students with visual disabilities as they move through the university, which has contributed to improving their quality of life.

  15. On the time required for identification of visual objects

    DEFF Research Database (Denmark)

    Petersen, Anders

    The starting point for this thesis is a review of Bundesen’s theory of visual attention. This theory has been widely accepted as an appropriate model for describing data from an important class of psychological experiments known as whole and partial report. Analysing data from this class...... of experiments with the help of the theory of visual attention – have proven to be an effective approach to examine cognitive parameters that are essential for a broad range of different patient groups. The theory of visual attention relies on a psychometric function that describes the ability to identify......, with the dataset that we collected, to directly analyse how confusability develops as a certain letter is exposed for increasingly longer time. An important scientific question is what shapes the psychometric function. It is conceivable that the function reflects both limitations and structure of the physical

  16. Plasticity in the Drosophila larval visual system

    Directory of Open Access Journals (Sweden)

    Abud J Farca-Luna

    2013-07-01

    Full Text Available The remarkable ability of the nervous system to modify its structure and function is mostly experience- and activity-modulated. The molecular basis of neuronal plasticity has been studied in higher behavioral processes, such as learning and memory formation. However, neuronal plasticity is not restricted to higher brain functions; it may be a basic feature of adaptation of all neural circuits. The fruit fly Drosophila melanogaster provides a powerful genetic model for gaining insight into the molecular basis of nervous system development and function. The nervous system of the larva is, in turn, an order of magnitude simpler than its adult counterpart, allowing the genetic assessment of a number of individual, genetically identifiable neurons. We review here recent progress on the genetic basis of neuronal plasticity in developing and functioning neural circuits, focusing on the simple visual system of the Drosophila larva.

  17. Visualizing time: how linguistic metaphors are incorporated into displaying instruments in the process of interpreting time-varying signals

    Science.gov (United States)

    Garcia-Belmonte, Germà

    2017-06-01

    Spatial visualization is a well-established topic of education research that has helped improve science and engineering students' skills in spatial relations. Connections have been established between visualization as a comprehension tool and instruction in several scientific fields. Learning about dynamic processes mainly relies upon static spatial representations or images. Visualization of time is inherently problematic because time can be conceptualized in terms of two opposite conceptual metaphors based on spatial relations, as inferred from conventional linguistic patterns. The situation is particularly demanding when time-varying signals are recorded using displaying electronic instruments and the image must be properly interpreted. This work deals with the interplay between linguistic metaphors, visual thinking and scientific instrument mediation in the process of interpreting time-varying signals displayed by electronic instruments. The analysis draws on a simplified version of a communication system as an example of practical signal recording and image visualization in a physics and engineering laboratory experience. Instrumentation delivers meaningful signal representations because it is designed to incorporate a specific and culturally favored time view. It is suggested that difficulties in interpreting time-varying signals are linked with the existing dual perception of conflicting time metaphors. The activation of a specific space-time conceptual mapping might allow for a proper signal interpretation. Instruments then play a central role as visualization mediators by yielding an image that matches specific perception abilities and practical purposes. Here I have identified two ways of understanding time, as used in the different trajectories in which students are situated. Interestingly, specific displaying instruments belonging to different cultural traditions incorporate contrasting time views. One of them sees time in terms of a dynamic metaphor

  18. New apparatus of single particle trap system for aerosol visualization

    Science.gov (United States)

    Higashi, Hidenori; Fujioka, Tomomi; Endo, Tetsuo; Kitayama, Chiho; Seto, Takafumi; Otani, Yoshio

    2014-08-01

    Control of the transport and deposition of charged aerosol particles is important in various manufacturing processes. Aerosol visualization is an effective method to directly observe the light scattering signal from a laser-irradiated single aerosol particle trapped in a visualization cell. A new single particle trap system triggered by a light scattering pulse signal was developed in this study, and the performance of the device was evaluated experimentally. The experimental setup consisted of an aerosol generator, a differential mobility analyzer (DMA), an optical particle counter (OPC) and the single particle trap system. Polystyrene latex (PSL) standard particles (0.5, 1.0 and 2.0 μm) were generated and classified according to charge by the DMA. Singly charged 0.5 and 1.0 μm particles and doubly charged 2.0 μm particles were used as test particles. The single particle trap system was composed of a light scattering signal detector and a visualization cell. When a particle passed through the detector, a trigger signal with a given delay time was sent to the solenoid valves upstream and downstream of the visualization cell to trap the particle in the cell. The motion of the particle in the visualization cell was monitored by a CCD camera, and the gravitational settling velocity and the electrostatic migration velocity were measured from the video image. The aerodynamic diameter obtained from the settling velocity was in good agreement with the Stokes diameter calculated from the electrostatic migration velocity for individual particles. It was also found that the aerodynamic diameter obtained from the settling velocity was a one-to-one function of the scattered light intensity of individual particles. The applicability of this system will be discussed.
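
    For readers who want to reproduce the comparison in the last part of the abstract, the standard aerosol relations below (not given in the record itself) connect the two measured velocities to the two diameters. Here rho_p is the particle density, rho_0 = 1000 kg/m^3 is unit density, eta is the gas viscosity, C_c is the Cunningham slip correction, n is the number of elementary charges e carried by the particle, and E is the electric field.

    ```latex
    % terminal settling velocity of a sphere of Stokes diameter d_p
    v_{ts} = \frac{\rho_p\, d_p^{2}\, g\, C_c(d_p)}{18\,\eta}
    % aerodynamic diameter: the unit-density sphere with the same settling velocity
    d_a = d_p \sqrt{\frac{\rho_p\, C_c(d_p)}{\rho_0\, C_c(d_a)}}
    % electrostatic migration velocity of a particle carrying n elementary charges
    v_E = \frac{n\, e\, E\, C_c(d_p)}{3\pi\,\eta\, d_p}
    ```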

  19. A novel visual pipework inspection system

    Science.gov (United States)

    Summan, Rahul; Jackson, William; Dobie, Gordon; MacLeod, Charles; Mineo, Carmelo; West, Graeme; Offin, Douglas; Bolton, Gary; Marshall, Stephen; Lille, Alexandre

    2018-04-01

    The interior visual inspection of pipelines in the nuclear industry is a safety-critical activity conducted during outages to ensure the continued safe and reliable operation of plant. Typically, the video output by a manually deployed probe is viewed by an operator looking to identify and localize surface defects such as corrosion, erosion and pitting. However, it is very challenging to estimate the nature and extent of defects by viewing a large structure through a relatively small field of view. This work describes a new visual inspection system employing photogrammetry using a fisheye camera and a structured light system to map the internal geometry of pipelines by generating a photorealistic, geometrically accurate surface model. The error of the system output was evaluated through comparison to a ground truth laser scan (ATOS GOM Triple Scan) of a nuclear grade split pipe sample (stainless steel 304L, 80 mm internal diameter) containing defects representative of the application - the error was found to be submillimeter across the sample.

  20. JKJ accelerator timing system

    International Nuclear Information System (INIS)

    Ohmori, C.; Mori, Y.; Yoshii, M.; Yamamoto, M.

    2001-01-01

    The JKJ (JAERI-KEK Joint Project) accelerator complex consists of a linear accelerator and 3 GeV and 50 GeV synchrotrons. To minimize beam loss during beam transfer from the 3 GeV synchrotron to the 50 GeV one, synchronization of the two rings' RF systems is very important. To reduce the background from high- and low-momentum neutrons, a neutron beam chopper will be employed. The 3 GeV RF will also be synchronized to the chopper timing when the beam goes to the neutron facility. The whole timing control system of these accelerators and the chopper will be described. (author)

  1. Dashboard visualizations: Supporting real-time throughput decision-making.

    Science.gov (United States)

    Franklin, Amy; Gantela, Swaroop; Shifarraw, Salsawit; Johnson, Todd R; Robinson, David J; King, Brent R; Mehta, Amit M; Maddow, Charles L; Hoot, Nathan R; Nguyen, Vickie; Rubio, Adriana; Zhang, Jiajie; Okafor, Nnaemeka G

    2017-07-01

    Providing timely and effective care in the emergency department (ED) requires the management of individual patients as well as the flow and demands of the entire department. Strategic changes to work processes, such as adding a flow coordination nurse or a physician in triage, have demonstrated improvements in throughput times. However, such global strategic changes do not address the real-time, often opportunistic workflow decisions of individual clinicians in the ED. We believe that real-time representation of the status of the entire emergency department and each patient within it through information visualizations will better support clinical decision-making in-the-moment and provide for rapid intervention to improve ED flow. This notion is based on previous work where we found that clinicians' workflow decisions were often based on an in-the-moment local perspective, rather than a global perspective. Here, we discuss the challenges of designing and implementing visualizations for ED through a discussion of the development of our prototype Throughput Dashboard and the potential it holds for supporting real-time decision-making. Copyright © 2017. Published by Elsevier Inc.

  2. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between a HRTF enhanced audio system (3D) and an...... with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.

  3. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between an HRTF enhanced audio system (3D) and an...... with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.

  4. Dividing time: Concurrent timing of auditory and visual events by young and elderly adults

    OpenAIRE

    McAuley, J. Devin; Miller, Jonathan P.; Wang, Mo; Pang, Kevin C. H.

    2010-01-01

    This article examines age differences in individuals’ ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory ta...

  5. The Web system of visualization and analysis equipped with reproducibility

    International Nuclear Information System (INIS)

    Ueshima, Yutaka; Saito, Kanji; Takeda, Yasuhiro; Nakai, Youichi; Hayashi, Sachiko

    2005-01-01

    In advanced photon experimental research, a real-time visualization and steering system is considered a desirable method of data analysis. This approach is valid only for analyses that are fixed in advance or for easily reproducible experiments. However, in research on unknown problems, such as advanced photon experiments, the observation data must be analyzable many times, because a profitable analysis is difficult to achieve on the first attempt. Consequently, output data should be filed so that they can be referred to and analyzed at any time. To support such research, the following automatic functions are needed: transporting data files from the data generator to data storage, analyzing data, tracking the history of data handling, and so on. The supporting system will be an integrated database system with several functional servers distributed on the network. (author)

  6. An Indoor Navigation System for the Visually Impaired

    Directory of Open Access Journals (Sweden)

    Luis A. Guerrero

    2012-06-01

    Full Text Available Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have been shown to be useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed taking into consideration usability as the quality requirement to be maximized. This solution enables one to identify the position of a person and to calculate the velocity and direction of his movements. Using this information, the system determines the user’s trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated using two experimental scenarios. Although the results are still not enough to provide strong conclusions, they indicate that the system is suitable to guide visually impaired people through an unknown built environment.

  7. An indoor navigation system for the visually impaired.

    Science.gov (United States)

    Guerrero, Luis A; Vasquez, Francisco; Ochoa, Sergio F

    2012-01-01

    Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have been shown to be useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed taking into consideration usability as the quality requirement to be maximized. This solution enables one to identify the position of a person and to calculate the velocity and direction of his movements. Using this information, the system determines the user's trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated using two experimental scenarios. Although the results are still not enough to provide strong conclusions, they indicate that the system is suitable to guide visually impaired people through an unknown built environment.

  8. Integration of multidisciplinary technologies for real time target visualization and verification for radiotherapy

    Directory of Open Access Journals (Sweden)

    Chang WC

    2014-06-01

    Full Text Available Wen-Chung Chang,1,* Chin-Sheng Chen,2,* Hung-Chi Tai,3 Chia-Yuan Liu,4,5 Yu-Jen Chen3 1Department of Electrical Engineering, National Taipei University of Technology, Taipei, Taiwan; 2Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei, Taiwan; 3Department of Radiation Oncology, Mackay Memorial Hospital, Taipei, Taiwan; 4Department of Internal Medicine, Mackay Memorial Hospital, Taipei, Taiwan; 5Department of Medicine, Mackay Medical College, New Taipei City, Taiwan  *These authors contributed equally to this work Abstract: The current practice of radiotherapy examines target coverage solely from the digitally reconstructed beam's eye view (BEV), in a way that is only indirectly accessible and not in real time. We aimed to visualize treatment targets in real time from each BEV. Image data of phantoms or patients from ultrasound (US) and computed tomography (CT) scans were captured to perform image registration. We integrated US, CT, US/CT image registration, robotic manipulation of US, a radiation treatment planning system, and a linear accelerator to constitute an innovative target visualization system. The algorithm segmented the target organ in CT images, transformed and reconstructed US images to match each orientation, and generated image registration in real-time mode with acceptable accuracy. This image transformation allowed physicians to visualize the CT image-reconstructed target via a US probe outside the BEV that was non-coplanar to the beam's plane. It allowed the physicians to remotely control the US probe, which was mounted on a robotic arm, to dynamically trace and monitor in real time the coverage of the target within the BEV during a simulated beam-on situation. This target visualization system may provide a direct, remotely accessible, and real-time way to visualize, verify, and ensure tumor targeting during radiotherapy. Keywords: ultrasound, computerized tomography

  9. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
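
    The adaptive execution module is the key idea here; the abstract does not give its implementation, so the following is only a hedged Python sketch of one plausible policy (class and method names, thresholds, and the quality gate are all assumptions, not the paper's actual interface): run the cheap optical-flow tracker while it is healthy and the frame budget is tight, and fall back to full visual-inertial odometry otherwise.

    ```python
    import time

    class AdaptiveTracker:
        """Illustrative selector between full visual-inertial odometry (VIO) and a
        cheaper optical-flow-only pose update (hypothetical names and thresholds)."""

        def __init__(self, vio, flow_tracker, budget_ms=33.0, min_inliers=80):
            self.vio = vio                  # accurate but expensive backend
            self.flow = flow_tracker        # fast, drift-prone frontend
            self.budget_ms = budget_ms      # per-frame time budget (e.g. 30 fps)
            self.min_inliers = min_inliers  # quality gate for staying on the fast path
            self.last_cost_ms = 0.0

        def process(self, frame, imu_batch):
            t0 = time.perf_counter()
            # Fall back to full VIO when the fast path is low quality or when there is
            # spare time in the frame budget; otherwise stay on the cheap flow update.
            use_full_vio = (self.flow.inlier_count() < self.min_inliers
                            or self.last_cost_ms < 0.5 * self.budget_ms)
            if use_full_vio:
                pose = self.vio.track(frame, imu_batch)
                self.flow.reset(frame, pose)          # re-seed the fast tracker
            else:
                pose = self.flow.track(frame)
            self.last_cost_ms = (time.perf_counter() - t0) * 1e3
            return pose
    ```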

  10. Visualization system on the earth simulator user's guide

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Sai, Kazunori

    2002-08-01

    A visualization system on the Earth Simulator has been developed. The system enables users to see a graphic representation of simulation results on a client terminal at the same time as they are being computed on the Earth Simulator. Moreover, the system makes it possible to change parameters of the calculation and its visualization in the middle of the calculation. The graphical user interface (GUI) of the system is constructed as a Java applet; consequently, the client only needs a web browser, so it is independent of the operating system. The system consists of a server function, a post-processing function and a client function. The server and post-processing functions work on the Earth Simulator, and the client function works on the client terminal. The server function is provided in a library-style format so that users can easily invoke real-time visualization functions from their own code. The post-processing function is also provided in a library-style format and moreover provides a load module. This report mainly describes the usage of the server and post-processing functions. (author)

  11. Intelligent Data Visualization for Cross-Checking Spacecraft System Diagnosis

    Science.gov (United States)

    Ong, James C.; Remolina, Emilio; Breeden, David; Stroozas, Brett A.; Mohammed, John L.

    2012-01-01

    Any reasoning system is fallible, so crew members and flight controllers must be able to cross-check automated diagnoses of spacecraft or habitat problems by considering alternate diagnoses and analyzing related evidence. Cross-checking improves diagnostic accuracy because people can apply information processing heuristics, pattern recognition techniques, and reasoning methods that the automated diagnostic system may not possess. Over time, cross-checking also enables crew members to become comfortable with how the diagnostic reasoning system performs, so the system can earn the crew's trust. We developed intelligent data visualization software that helps users cross-check automated diagnoses of system faults more effectively. The user interface displays scrollable arrays of timelines and time-series graphs, which are tightly integrated with an interactive, color-coded system schematic to show important spatial-temporal data patterns. Signal processing and rule-based diagnostic reasoning automatically identify alternate hypotheses and data patterns that support or rebut the original and alternate diagnoses. A color-coded matrix display summarizes the supporting or rebutting evidence for each diagnosis, and a drill-down capability enables crew members to quickly view graphs and timelines of the underlying data. This system demonstrates that modest amounts of diagnostic reasoning, combined with interactive, information-dense data visualizations, can accelerate system diagnosis and cross-checking.

  12. Transition Icons for Time-Series Visualization and Exploratory Analysis.

    Science.gov (United States)

    Nickerson, Paul V; Baharloo, Raheleh; Wanigatunga, Amal A; Manini, Todd M; Tighe, Patrick J; Rashidi, Parisa

    2018-03-01

    The modern healthcare landscape has seen the rapid emergence of techniques and devices that temporally monitor and record physiological signals. The prevalence of time-series data within the healthcare field necessitates the development of methods that can analyze the data in order to draw meaningful conclusions. Time-series behavior is notoriously difficult to intuitively understand due to its intrinsic high-dimensionality, which is compounded in the case of analyzing groups of time series collected from different patients. Our framework, which we call transition icons, renders common patterns in a visual format useful for understanding the shared behavior within groups of time series. Transition icons are adept at detecting and displaying subtle differences and similarities, e.g., between measurements taken from patients receiving different treatment strategies or stratified by demographics. We introduce various methods that collectively allow for exploratory analysis of groups of time series, while being free of distribution assumptions and including simple heuristics for parameter determination. Our technique extracts discrete transition patterns from symbolic aggregate approXimation representations, and compiles transition frequencies into a bag of patterns constructed for each group. These transition frequencies are normalized and aligned in icon form to intuitively display the underlying patterns. We demonstrate the transition icon technique for two time-series datasets: postoperative pain scores and hip-worn accelerometer activity counts. We believe transition icons can be an important tool for researchers approaching time-series data, as they give rich and intuitive information about collective time-series behaviors.
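
    As a concrete illustration of the extraction step, the sketch below (my own simplified reading, not the authors' code) converts one series to SAX symbols and tallies normalized first-order transition frequencies; the paper's transition icons aggregate such frequencies over a whole group of series before rendering.

    ```python
    import numpy as np
    from collections import Counter
    from itertools import product
    from scipy.stats import norm

    def sax_symbols(series, word_len=32, alphabet=4):
        """Piecewise-aggregate the series into word_len segments, then map each
        segment mean to a symbol using Gaussian breakpoints (standard SAX).
        Assumes len(series) >= word_len."""
        x = (series - series.mean()) / (series.std() + 1e-12)
        means = np.array([seg.mean() for seg in np.array_split(x, word_len)])
        # breakpoints that cut N(0, 1) into `alphabet` equiprobable regions
        cuts = norm.ppf(np.linspace(0, 1, alphabet + 1)[1:-1])
        return np.searchsorted(cuts, means)

    def transition_frequencies(series, alphabet=4, **kw):
        """Count symbol-to-symbol transitions and normalize them into the
        'bag of patterns' that a transition icon would display."""
        syms = sax_symbols(series, alphabet=alphabet, **kw)
        counts = Counter(zip(syms[:-1], syms[1:]))
        total = sum(counts.values()) or 1
        return {(a, b): counts.get((a, b), 0) / total
                for a, b in product(range(alphabet), repeat=2)}
    ```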

  13. The time slice system

    International Nuclear Information System (INIS)

    DeWitt, J.

    1990-01-01

    We have designed a fast readout system for silicon microstrip detectors which could be used at HERA, LHC, and SSC. The system consists of an analog amplifier-comparator chip (AACC) and a digital time slice chip (DTSC). The analog chip is designed in dielectric isolated bipolar technology for low noise and potential radiation hardness. The DTSC is built in CMOS for low power use and high circuit density. The main implementation aims are low power consumption and compactness. The architectural goal is automatic data reduction and ease of external interface. The pipelining of event information is done digitally in the DTSC. It has a 64 word deep level 1 buffer acting as a FIFO, and a 16 word deep level 2 buffer acting as a dequeue. The DTSC also includes an asynchronous bus interface. We are first building a scaled up (100 μm instead of 25 μm pitch) and slower (10 MHz instead of 60 MHz) version in 2 μm CMOS and plan to test the principle of operation of this system in the Leading Proton Spectrometer (LPS) of the ZEUS detector at HERA. Another very important development will be tested there: the radiation hardening of the chips. We have started a collaboration with a rad-hard foundry and with Los Alamos National Laboratories to test and evaluate rad-hard processes and the final rad-hard product. Initial data are very promising, because radiation resistance of up to many Mrad has been achieved. (orig.)

  14. Fuel visual inspection system of the RTMIII

    International Nuclear Information System (INIS)

    Delfin L, A.; Castaneda J, G.; Mazon R, R.; Aguilar H, F.

    2007-01-01

    The International Atomic Energy Agency (IAEA), through the RLA/04/18 project, Management of Irradiated Fuel in Research Reactors, recommended, among other things, that the participating countries (Brazil, Argentina, Chile, Peru and Mexico) develop tools to assure the integrity of the nuclear fuels used in research reactors. At the TRIGA Mark III reactor (RTMIII) of the ININ, a visual inspection system was designed and built that uses a high-radiation camera and image digitization. The project considers the safety of the personnel carrying out the visual inspection activities; the tool is therefore submerged in the RTMIII pool, held at one end from the upper part of the aluminium liner of the reactor, as shown in drawing No. 1. The main unit of the system is the camera, a Hydro-Technologie (HYTEC) VSLT 410N, designed to work underwater and/or in high-risk environments. The camera has a motorized stainless-steel pan unit that can be rotated without limit in both directions at variable speed by means of a control lever on the control unit. Attached to this pan unit is the camera head, which is contained in a motorized stainless-steel tilt unit that can be rotated up to 370 degrees in both directions. The operating conditions of the camera are: temperature 0 to 50 C, dose rate ≤ 50 rad/h, operating depth ≤ 30 m, humidity (control unit) ≤ 80%. Connected to the control unit is an external plug-and-play TV-USB Aver Media device that decodes the video signal sent by the control unit and transmits it to the computer, where the image is captured as a picture or video and later analyzed with ad hoc software; in our case we use the Quantikov Image Analyzer program for Windows 98 by Dr. Lucio C. M. Pinto from Brazil, who participates in the RLA/04

  15. Just-in-time Data Analytics and Visualization of Climate Simulations using the Bellerophon Framework

    Science.gov (United States)

    Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.

    2015-12-01

    Climate model simulations are used to understand the evolution and variability of earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected. However, most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near real-time data visualization analytics capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible! The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software looks to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.
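
    The co-scheduling workflow described above can be pictured with a small watcher loop like the one below. This is only an illustrative sketch, not Bellerophon's actual mechanism: the directory layout, the *.nc file pattern, and the sbatch submission are assumptions, but it shows how newly completed output files can trigger separate visualization jobs without touching the running simulation.

    ```python
    import glob
    import os
    import subprocess
    import time

    def watch_and_visualize(output_dir, plot_script, poll_s=60):
        """Poll the simulation output directory and launch a co-scheduled
        visualization job for each newly completed file (illustrative only)."""
        seen = set()
        while True:
            for path in sorted(glob.glob(os.path.join(output_dir, "*.nc"))):
                if path in seen:
                    continue
                # crude completeness check: file size stable across two reads
                size_before = os.path.getsize(path)
                time.sleep(2)
                if os.path.getsize(path) != size_before:
                    continue
                seen.add(path)
                # submit the rendering as its own batch job so the running
                # simulation is unaffected (SLURM's sbatch is an assumption here)
                subprocess.Popen(["sbatch", plot_script, path])
            time.sleep(poll_s)
    ```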

  16. New classification system-based visual outcome in Eales' disease

    Directory of Open Access Journals (Sweden)

    Saxena Sandeep

    2007-01-01

    Full Text Available Purpose: A retrospective tertiary care center-based study was undertaken to evaluate the visual outcome in Eales' disease, based on a new classification system, for the first time. Materials and Methods: One hundred and fifty-nine consecutive cases of Eales' disease were included. All the eyes were staged according to the new classification: Stage 1: periphlebitis of small (1a) and large (1b) caliber vessels with superficial retinal hemorrhages; Stage 2a: capillary non-perfusion, 2b: neovascularization elsewhere/of the disc; Stage 3a: fibrovascular proliferation, 3b: vitreous hemorrhage; Stage 4a: traction/combined rhegmatogenous retinal detachment and 4b: rubeosis iridis, neovascular glaucoma, complicated cataract and optic atrophy. Visual acuity was graded as: Grade I 20/20 or better; Grade II 20/30 to 20/40; Grade III 20/60 to 20/120 and Grade IV 20/200 or worse. All the cases were managed by medical therapy, photocoagulation and/or vitreoretinal surgery. Visual acuity was converted into a decimal scale, denoting 20/20=1 and 20/800=0.01. Paired t-test / Wilcoxon signed-rank tests were used for statistical analysis. Results: Vitreous hemorrhage was the commonest presenting feature (49.32%). Cases with Stages 1 to 3 and 4a and 4b achieved final visual acuity ranging from 20/15 to 20/40; 20/80 to 20/400 and 20/200 to 20/400, respectively. Statistically significant improvement in visual acuities was observed in all the stages of the disease except Stages 1a and 4b. Conclusion: Significant improvement in visual acuities was observed in the majority of stages of Eales' disease following treatment. This study adds further to the little available evidence of treatment effects in the literature and may have an effect on patient care and health policy in Eales' disease.

  17. Sensory system plasticity in a visually specialized, nocturnal spider.

    Science.gov (United States)

    Stafstrom, Jay A; Michalik, Peter; Hebets, Eileen A

    2017-04-21

    The interplay between an animal's environmental niche and its behavior can influence the evolutionary form and function of its sensory systems. While intraspecific variation in sensory systems has been documented across distant taxa, fewer studies have investigated how changes in behavior might relate to plasticity in sensory systems across developmental time. To investigate the relationships among behavior, peripheral sensory structures, and central processing regions in the brain, we take advantage of a dramatic within-species shift of behavior in a nocturnal, net-casting spider (Deinopis spinosa), where males cease visually-mediated foraging upon maturation. We compared eye diameters and brain region volumes across sex and life stage, the latter through micro-computed X-ray tomography. We show that mature males possess altered peripheral visual morphology when compared to their juvenile counterparts, as well as juvenile and mature females. Matching peripheral sensory structure modifications, we uncovered differences in relative investment in both lower-order and higher-order processing regions in the brain responsible for visual processing. Our study provides evidence for sensory system plasticity when individuals dramatically change behavior across life stages, uncovering new avenues of inquiry focusing on altered reliance of specific sensory information when entering a new behavioral niche.

  18. Advanced 3D Sensing and Visualization System for Unattended Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.J.; Little, C.Q.; Nelson, C.L.

    1999-01-01

    The purpose of this project was to create a reliable, 3D sensing and visualization system for unattended monitoring. The system provides benefits for several of Sandia's initiatives including nonproliferation, treaty verification, national security and critical infrastructure surety. The robust qualities of the system make it suitable for both interior and exterior monitoring applications. The 3D sensing system combines two existing sensor technologies in a new way to continuously maintain accurate 3D models of both static and dynamic components of monitored areas (e.g., portions of buildings, roads, and secured perimeters in addition to real-time estimates of the shape, location, and motion of humans and moving objects). A key strength of this system is the ability to monitor simultaneous activities on a continuous basis, such as several humans working independently within a controlled workspace, while also detecting unauthorized entry into the workspace. Data from the sensing system is used to identify activities or conditions that can signify potential surety (safety, security, and reliability) threats. The system could alert a security operator of potential threats or could be used to cue other detection, inspection or warning systems. An interactive, Web-based, 3D visualization capability was also developed using the Virtual Reality Modeling Language (VRML). The interface allows remote, interactive inspection of a monitored area (via the Internet or Satellite Links) using a 3D computer model of the area that is rendered from actual sensor data.

  19. Ontology-driven data integration and visualization for exploring regional geologic time and paleontological information

    Science.gov (United States)

    Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo

    2018-06-01

    Initiatives of open data promote the online publication and sharing of large amounts of geologic data. How to retrieve information and discover knowledge from the big data is an ongoing challenge. In this paper, we developed an ontology-driven data integration and visualization pilot system for exploring information of regional geologic time, paleontology, and fundamental geology. The pilot system (http://www2.cs.uidaho.edu/%7Emax/gts/)

  20. Evaluating the Cognitive Aspects of User Interaction with 2D Visual Tagging Systems

    Directory of Open Access Journals (Sweden)

    Samuel Olugbenga King

    2008-04-01

    Full Text Available There has been significant interest in the development and deployment of visual tagging applications in recent times. But user perceptions about the purpose and function of visual tagging systems have not received much attention. This paper presents a user experience study that investigates the cognitive models that novice users have about interacting with visual tagging applications. The results of the study show that although most users are unfamiliar with visual tagging technologies, they could accurately predict the purpose and mode of retrieval of data stored in visual tags. The study concludes with suggestions on how to improve the recognition, ease of recall and design of visual tags.

  1. LOFT data acquisition and visual display system (DAVDS) presentation program

    International Nuclear Information System (INIS)

    Bullock, M.G.; Miyasaki, F.S.

    1976-03-01

    The Data Acquisition and Visual Display System (DAVDS) at the Loss-of-Fluid Test Facility (LOFT) has a 742 data channel recording capability, of which 576 channels are recorded digitally. The purpose of this computer program is to graphically present the data acquired and/or processed by the LOFT DAVDS. The program takes specially created plot data buffers of up to 1024 words and generates time-history plots on the system electrostatic printer-plotter. The data can be extracted from two system input devices: magnetic disk or digital magnetic tape. Versatility has been designed into the program by providing the user with three methods of scaling plots: automatic, control record, and manual. The time required to produce a plot on the system electrostatic printer-plotter varies from 30 to 90 seconds, depending on the options selected. The basic computer and program details are described.

  2. Visual and auditory reaction time for air traffic controllers using quantitative electroencephalograph (QEEG) data.

    Science.gov (United States)

    Abbass, Hussein A; Tang, Jiangjun; Ellejmi, Mohamed; Kirby, Stephen

    2014-12-01

    The use of quantitative electroencephalograph (QEEG) data in the analysis of air traffic controllers' performance can reveal, with high temporal resolution, the mental responses associated with different task demands. To understand the relationship between visual and auditory correct responses, reaction time, and the corresponding brain areas and functions, air traffic controllers were given an integrated visual and auditory continuous reaction task. Strong correlations were found between correct responses to the visual target and the theta band in the frontal lobe, the total power in the medial of the parietal lobe and the theta-to-beta ratio in the left side of the occipital lobe. Incorrect visual responses triggered activations in additional bands, including the alpha band in the medial of the frontal and parietal lobes, and the Sensorimotor Rhythm in the medial of the parietal lobe. Controllers' responses to visual cues were found to be more accurate but slower than their corresponding performance on auditory cues. These results suggest that controllers are more susceptible to overload when more visual cues are used in the air traffic control system, and more error prone as more auditory cues are used. Therefore, workload studies should be carried out to assess the usefulness of additional cues and their interactions with the air traffic control environment.

  3. Computer systems and methods for visualizing data

    Science.gov (United States)

    Stolte, Chris; Hanrahan, Patrick

    2013-01-29

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
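
    To make the patent's description more concrete, here is a small pandas sketch (my own illustration with hypothetical names, not the patented implementation): two levels of a dimension hierarchy are mapped to the rows and columns of a plot, and the dataset is queried so that each cell holds the aggregated measure used to populate the corresponding pane.

    ```python
    import pandas as pd

    def build_visual_plot(df, measure, dimension_levels, agg="sum"):
        """Map the first hierarchy level to plot rows and the second to plot
        columns, then query the dataset for the aggregated measure per cell."""
        level_rows, level_cols = dimension_levels[:2]
        return (df.groupby([level_rows, level_cols])[measure]
                  .agg(agg)
                  .unstack(level_cols))      # each cell feeds one pane of the plot

    # usage sketch with a toy sales dataset (year > quarter hierarchy)
    sales = pd.DataFrame({
        "year":    [2021, 2021, 2022, 2022],
        "quarter": ["Q1", "Q2", "Q1", "Q2"],
        "revenue": [10.0, 12.5, 11.0, 14.0],
    })
    print(build_visual_plot(sales, "revenue", ["year", "quarter"]))
    ```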

  4. Visualization-by-Sketching: An Artist's Interface for Creating Multivariate Time-Varying Data Visualizations.

    Science.gov (United States)

    Schroeder, David; Keefe, Daniel F

    2016-01-01

    We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: First, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training. Second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work.

  5. Timing system observations

    International Nuclear Information System (INIS)

    Winans, J.

    1994-01-01

    The purpose of this document is to augment Synchronized Time Stamp Support, authored by Jim Kowalkowski. This document provides additional documentation to clarify and explain software involved in timing operations of the accelerator.

  6. A survey of visualization systems for network security.

    Science.gov (United States)

    Shiravi, Hadi; Shiravi, Ali; Ghorbani, Ali A

    2012-08-01

    Security Visualization is a very young term. It expresses the idea that common visualization techniques have been designed for use cases that are not supportive of security-related data, demanding novel techniques fine tuned for the purpose of thorough analysis. Significant amount of work has been published in this area, but little work has been done to study this emerging visualization discipline. We offer a comprehensive review of network security visualization and provide a taxonomy in the form of five use-case classes encompassing nearly all recent works in this area. We outline the incorporated visualization techniques and data sources and provide an informative table to display our findings. From the analysis of these systems, we examine issues and concerns regarding network security visualization and provide guidelines and directions for future researchers and visual system developers.

  7. Effect of Colour of Object on Simple Visual Reaction Time in Normal Subjects

    Directory of Open Access Journals (Sweden)

    Sunita B. Kalyanshetti

    2014-01-01

    Full Text Available The measure of simple reaction time has been used to evaluate the processing speed of the CNS and the co-ordination between the sensory and motor systems. As reaction time is influenced by different factors, the impact of the colour of objects in modulating reaction time was investigated in this study. 200 healthy volunteers (100 female and 100 male) of the 18-25 yr age group were included as subjects. The subjects were presented with two visual stimuli, viz. red and green light, using an electronic response analyzer. A paired 't' test comparing visual reaction time for red and green colour in males shows p < 0.05, whereas in females it shows p < 0.001. It was observed that the response latency for red colour was less than that for green colour, which can be explained on the basis of the trichromatic theory.
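
    For readers unfamiliar with the test used, the following is a minimal sketch with made-up reaction-time values (not the study's data) showing how a paired comparison of red versus green responses would be computed in Python.

    ```python
    import numpy as np
    from scipy import stats

    # hypothetical reaction times in seconds for the same five subjects
    rt_red = np.array([0.212, 0.198, 0.225, 0.201, 0.219])
    rt_green = np.array([0.224, 0.210, 0.233, 0.215, 0.228])

    # paired t-test: each subject serves as their own control
    t_stat, p_value = stats.ttest_rel(rt_red, rt_green)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    ```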

  8. Visual application of the American Board of Orthodontics Grading System.

    Science.gov (United States)

    Scott, Steven A; Freer, Terry J

    2005-05-01

    Assessment of treatment outcomes has traditionally been accomplished using the subjective opinion of experienced clinicians. Reduced subjectivity in the assessment of orthodontic treatment can be achieved with the use of an occlusal index. To implement an index for quality assurance purposes is time-consuming and subject to the inherent error of the index. Quality assessment of orthodontic treatment on a routine basis has been difficult to implement in private practice. To investigate whether a clinician can accurately apply the American Board of Orthodontics Objective Grading System by direct visual inspection instead of measuring individual traits. A random sample of 30 cases was selected, including pretreatment and post-treatment upper and lower study casts and panoramic radiographs. The cases were examined and scored with the standardized measuring gauge according to the protocol provided by the American Board of Orthodontics (ABO). The records were re-examined 6 weeks later and the individual traits scored by visual inspection (VI). There were no significant differences between the pre- and post-treatment ABO gauge and VI scores. This study suggests that occlusal traits defined by the ABO Objective Grading System can be accurately assessed by visual inspection. The VI score provides a simple and convenient method for critical evaluation of treatment outcome by a clinician.

  9. Real-time visualization and quantification of retrograde cardioplegia delivery using near infrared fluorescent imaging.

    Science.gov (United States)

    Rangaraj, Aravind T; Ghanta, Ravi K; Umakanthan, Ramanan; Soltesz, Edward G; Laurence, Rita G; Fox, John; Cohn, Lawrence H; Bolman, R M; Frangioni, John V; Chen, Frederick Y

    2008-01-01

    Homogeneous delivery of cardioplegia is essential for myocardial protection during cardiac surgery. Presently, there exist no established methods to quantitatively assess cardioplegia distribution intraoperatively and determine when retrograde cardioplegia is required. In this study, we evaluate the feasibility of near infrared (NIR) imaging for real-time visualization of cardioplegia distribution in a porcine model. A portable, intraoperative, real-time NIR imaging system was utilized. NIR fluorescent cardioplegia solution was developed by incorporating indocyanine green (ICG) into crystalloid cardioplegia solution. Real-time NIR imaging was performed while the fluorescent cardioplegia solution was infused via the retrograde route in five ex vivo normal porcine hearts and in five ex vivo porcine hearts status post left anterior descending (LAD) coronary artery ligation. Horizontal cross-sections of the hearts were obtained at proximal, middle, and distal LAD levels. Videodensitometry was performed to quantify distribution of fluorophore content. The progressive distribution of cardioplegia was clearly visualized with NIR imaging. Complete visualization of retrograde distribution occurred within 4 minutes of infusion. Videodensitometry revealed retrograde cardioplegia, primarily distributed to the left ventricle (LV) and anterior septum. In hearts with LAD ligation, antegrade cardioplegia did not distribute to the anterior LV. This deficiency was compensated for with retrograde cardioplegia supplementation. Incorporation of ICG into cardioplegia allows real-time visualization of cardioplegia delivery via NIR imaging. This technology may prove useful in guiding intraoperative decisions pertaining to when retrograde cardioplegia is mandated.

  10. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan); Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2013-11-15

In research and development across various fields of nuclear energy, visualization of calculated data is especially useful for understanding simulation results in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) typically transfer calculated data files from the supercomputer to their local PCs for visualization. In recent years, as the size of calculated data has grown with improvements in supercomputer performance, both a reduction of visualization processing time and efficient use of the JAEA network have become necessary. As a solution, we introduced a remote visualization system which can utilize parallel processors on the supercomputer and reduce network usage by transferring only the data of intermediate visualization stages. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured and the influence of network speed is evaluated by varying the drawing mode, the size of the visualization data and the number of processors. Based on this study, a guideline is provided to show how the remote visualization system can be used effectively. An upgrade policy for the next system is also presented. (author)

  11. Real Time Systems

    DEFF Research Database (Denmark)

    Christensen, Knud Smed

    2000-01-01

Describes fundamentals of parallel programming and a kernel for that. Describes methods for modelling and checking parallel problems. Real time problems.

  12. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, studying this temporal network behavior requires a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool, a visual analytics system, for investigating the temporal behavior and optimizing the communication performance of a supercomputer that uses a Dragonfly network. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively supports visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can not only help improve the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics
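
    In the spirit of the correlation analysis described above, the following is a minimal sketch of aligning and correlating multivariate network counters collected at different hierarchy levels; the file name, counter names, and sampling rates are assumptions rather than part of the original system.

```python
# A minimal sketch (not the paper's tool): correlating multivariate time-series
# counters collected at different levels of a Dragonfly-style hierarchy.
import numpy as np
import pandas as pd

# Assumed input: one row per sample, columns = counters such as
# "router3.link_busy", "group0.avg_stall", "app.msg_rate", indexed by timestamp.
df = pd.read_csv("network_counters.csv", index_col="timestamp", parse_dates=True)

# Coarse-grain to 1-second bins so counters sampled at slightly different
# rates become comparable before correlation.
coarse = df.resample("1s").mean()

# Pearson correlation between every pair of counters; strong off-diagonal
# values hint at congestion propagating between hierarchy levels.
corr = coarse.corr()

# Report the counters most strongly correlated with the application message rate.
if "app.msg_rate" in corr:
    print(corr["app.msg_rate"].sort_values(ascending=False).head(10))
```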

  13. Advanced Visual and Instruction Systems for Maintenance Support (AVIS-MS)

    National Research Council Canada - National Science Library

    Badler, Norman I; Allbeck, Jan M

    2006-01-01

    .... Moreover, the realities of real-world maintenance may not permit the hardware indulgences and rigid controls of laboratory settings for visualization and training systems, and at the same time...

  14. Visualizing uncertainties in a storm surge ensemble data assimilation and forecasting system

    KAUST Repository

    Hollt, Thomas; Altaf, Muhammad; Mandli, Kyle T.; Hadwiger, Markus; Dawson, Clint N.; Hoteit, Ibrahim

    2015-01-01

    allows the user to browse through the simulation ensembles in real time, view specific parameter settings or simulation models and move between different spatial and temporal regions without delay. In addition, our system provides advanced visualizations

  15. Adaptive Kalman filtering for real-time mapping of the visual field

    Science.gov (United States)

    Ward, B. Douglas; Janik, John; Mazaheri, Yousef; Ma, Yan; DeYoe, Edgar A.

    2013-01-01

    This paper demonstrates the feasibility of real-time mapping of the visual field for clinical applications. Specifically, three aspects of this problem were considered: (1) experimental design, (2) statistical analysis, and (3) display of results. Proper experimental design is essential to achieving a successful outcome, particularly for real-time applications. A random-block experimental design was shown to have less sensitivity to measurement noise, as well as greater robustness to error in modeling of the hemodynamic impulse response function (IRF) and greater flexibility than common alternatives. In addition, random encoding of the visual field allows for the detection of voxels that are responsive to multiple, not necessarily contiguous, regions of the visual field. Due to its recursive nature, the Kalman filter is ideally suited for real-time statistical analysis of visual field mapping data. An important feature of the Kalman filter is that it can be used for nonstationary time series analysis. The capability of the Kalman filter to adapt, in real time, to abrupt changes in the baseline arising from subject motion inside the scanner and other external system disturbances is important for the success of clinical applications. The clinician needs real-time information to evaluate the success or failure of the imaging run and to decide whether to extend, modify, or terminate the run. Accordingly, the analytical software provides real-time displays of (1) brain activation maps for each stimulus segment, (2) voxel-wise spatial tuning profiles, (3) time plots of the variability of response parameters, and (4) time plots of activated volume. PMID:22100663
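
    As a rough illustration of the recursive estimation described above, the following is a minimal sketch of a Kalman-filter update with a random-walk state model that tracks a drifting baseline plus a stimulus response; the design matrix, noise levels, and simulated data are illustrative assumptions, not the authors' exact formulation.

```python
# A minimal sketch of a recursive Kalman-filter update for voxel-wise
# activation estimates, with a random-walk state model so the baseline
# can wander between scans.
import numpy as np

rng = np.random.default_rng(0)
n_params = 3                      # e.g. baseline, drift, stimulus response
x_est = np.zeros(n_params)        # current state estimate
P = np.eye(n_params) * 10.0       # state covariance
Q = np.eye(n_params) * 1e-3       # process noise: lets the baseline wander
R = 1.0                           # measurement noise variance

def kalman_step(y, h, x_est, P):
    """One recursive update with scalar measurement y and regressor row h."""
    # Predict (random-walk state model)
    P = P + Q
    # Update
    innovation = y - h @ x_est
    S = h @ P @ h + R             # innovation variance (scalar)
    K = P @ h / S                 # Kalman gain
    x_est = x_est + K * innovation
    P = P - np.outer(K, h) @ P
    return x_est, P

# Simulated scan: true response amplitude 2.0 on top of a drifting baseline.
for k in range(200):
    h = np.array([1.0, k / 200.0, float(k % 20 < 10)])   # [baseline, drift, stimulus]
    y = h @ np.array([5.0, 1.5, 2.0]) + rng.normal(0, 1.0)
    x_est, P = kalman_step(y, h, x_est, P)

print("estimated [baseline, drift, response]:", np.round(x_est, 2))
```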

  16. Distributed Visualization

    Data.gov (United States)

    National Aeronautics and Space Administration — Distributed Visualization allows anyone, anywhere, to see any simulation, at any time. Development focuses on algorithms, software, data formats, data systems and...

  17. Night-Time Vehicle Detection Algorithm Based on Visual Saliency and Deep Learning

    Directory of Open Access Journals (Sweden)

    Yingfeng Cai

    2016-01-01

Night vision systems are receiving more and more attention in the field of automotive active safety. In this area, a number of researchers have proposed far-infrared sensor based night-time vehicle detection algorithms. However, existing algorithms perform poorly on indicators such as detection rate and processing time. To solve this problem, we propose a far-infrared image vehicle detection algorithm based on visual saliency and deep learning. Firstly, most of the non-vehicle pixels are removed by visual saliency computation. Then, vehicle candidates are generated using prior information such as camera parameters and vehicle size. Finally, a classifier trained with deep belief networks is applied to verify the candidates generated in the last step. The proposed algorithm was tested on around 6000 images and achieves a detection rate of 92.3% at a processing rate of 25 Hz, which is better than existing methods.
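
    As a rough sketch of the saliency stage only, the following implements a generic spectral-residual saliency map as a stand-in for the saliency computation mentioned above; the input image, kernel sizes, and threshold are assumptions.

```python
# Spectral-residual saliency used as a generic stand-in for the paper's
# saliency stage; salient pixels become vehicle-candidate regions.
import cv2
import numpy as np

def spectral_residual_saliency(gray):
    """Return a saliency map in [0, 1] for a single-channel image."""
    img = cv2.resize(gray, (128, 128)).astype(np.float32)
    f = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # Spectral residual = log amplitude minus its local average.
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    saliency = np.abs(np.fft.ifft2(np.exp(residual) * np.exp(1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency, (9, 9), 2.5)
    saliency = cv2.resize(saliency, (gray.shape[1], gray.shape[0]))
    return cv2.normalize(saliency, None, 0, 1, cv2.NORM_MINMAX)

frame = cv2.imread("fir_frame.png", cv2.IMREAD_GRAYSCALE)   # assumed test image
sal = spectral_residual_saliency(frame)
# Keep only the most salient pixels as candidate regions for the classifier.
candidates = (sal > 0.5).astype(np.uint8) * 255
cv2.imwrite("candidates.png", candidates)
```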

  18. Time-varying spatial data integration and visualization: 4 Dimensions Environmental Observations Platform (4-DEOS)

    Science.gov (United States)

    Paciello, Rossana; Coviello, Irina; Filizzola, Carolina; Genzano, Nicola; Lisi, Mariano; Mazzeo, Giuseppe; Pergola, Nicola; Sileo, Giancanio; Tramutoli, Valerio

    2014-05-01

In environmental studies, the integration of heterogeneous and time-varying data is a very common requirement for investigating, and possibly visualizing, correlations among the physical parameters underlying the dynamics of complex phenomena. Datasets used in such applications often have different spatial and temporal resolutions, and in some cases the superimposition of asynchronous layers is required. Traditionally, the platforms used for spatio-temporal visual data analysis allow spatial data to be overlaid while managing time with a 'snapshot' data model, in which each stack of layers is labelled with a different time. However, this kind of architecture incorporates neither temporal indexing nor the third spatial dimension, which is usually given as an independent additional layer. Conversely, the full representation of a generic environmental parameter P(x,y,z,t) in the 4D space-time domain makes it possible to handle asynchronous datasets as well as less traditional data products (e.g., vertical sections, point time series, etc.). In this paper we present the 4 Dimensions Environmental Observation Platform (4-DEOS), a system based on a Client-Broker-Server web services architecture. This platform is a new open-source solution for both timely access and easy integration and visualization of heterogeneous (maps, vertical profiles or sections, point time series, etc.) asynchronous geospatial products. The innovative aspect of the 4-DEOS system is that users can analyze data/products individually while moving through time, with the possibility of pausing the display of some data/products and focusing on other parameters to better study their temporal evolution. The platform offers two distinct display modes, for a time interval or for a single instant, and users can choose to visualize data/products in two ways: i) showing each parameter in a dedicated window, or ii) visualizing all parameters overlapped in a single window. A sliding time bar allows
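
    To make the 4D representation concrete, here is a minimal sketch (not the 4-DEOS implementation) of holding P(x, y, z, t) on a regular grid and extracting a horizontal map, a vertical section, and a point time series; the grid sizes and indices are illustrative assumptions.

```python
# A 4D gridded parameter and the "less traditional" products mentioned above.
import numpy as np

nt, nz, ny, nx = 24, 10, 50, 60                 # hours, levels, lat, lon
P = np.random.default_rng(1).normal(size=(nt, nz, ny, nx))

# Horizontal map: one time step, one vertical level.
map_t3_z0 = P[3, 0, :, :]

# Vertical section: fix a y index, keep all levels and x, at one time step.
vertical_section = P[3, :, 25, :]               # shape (nz, nx)

# Point time series: fix x, y, z and keep the whole time axis.
point_series = P[:, 0, 25, 30]                  # shape (nt,)

print(map_t3_z0.shape, vertical_section.shape, point_series.shape)
```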

  19. Visualization system for grid environment in the nuclear field

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Matsumoto, Nobuko; Idomura, Yasuhiro; Tani, Masayuki

    2006-01-01

An innovative scientific visualization system is needed to visualize, in an integrated way, the large amounts of data that are generated at distributed remote locations as a result of large-scale numerical simulations in a grid environment. One important function of such a visualization system is parallel visualization, which uses multiple CPUs of a supercomputer to visualize the data. The other is distributed visualization, which executes visualization processes on a local client computer and on remote computers. We have developed a toolkit, called the Parallel Support Toolkit (PST), that provides these functions in cooperation with the commercial visualization software AVS/Express. PST can execute visualization processes with three kinds of parallelism (data parallelism, task parallelism and pipeline parallelism) using local and remote computers. We have evaluated PST for large amounts of data generated by a nuclear fusion simulation. Here, two supercomputers installed at JAEA, an Altix3700Bx2 and a Prism, are used. From the evaluation, it can be seen that PST has the potential to efficiently visualize large amounts of data in a grid environment. (author)
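
    To illustrate the pipeline-parallelism idea in isolation (read, filter, and render running as concurrent stages), here is a minimal sketch using Python multiprocessing queues; it is a generic illustration, not PST itself, and the stage functions and data are toy placeholders.

```python
# Three visualization stages connected by queues, each in its own process.
import multiprocessing as mp

import numpy as np

def reader(q_out, n_frames=5):
    for i in range(n_frames):
        q_out.put(np.full((100, 100), float(i)))   # pretend to load a dataset
    q_out.put(None)                                 # end-of-stream marker

def filter_stage(q_in, q_out):
    while (data := q_in.get()) is not None:
        q_out.put(data * 2.0)                       # pretend filtering step
    q_out.put(None)

def renderer(q_in):
    while (data := q_in.get()) is not None:
        print("rendered frame, mean =", data.mean())  # pretend rendering step

if __name__ == "__main__":
    q1, q2 = mp.Queue(maxsize=2), mp.Queue(maxsize=2)
    stages = [mp.Process(target=reader, args=(q1,)),
              mp.Process(target=filter_stage, args=(q1, q2)),
              mp.Process(target=renderer, args=(q2,))]
    for p in stages:
        p.start()
    for p in stages:
        p.join()
```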

  20. Development of an automatic visual grading system for grafting seedlings

    Directory of Open Access Journals (Sweden)

    Subo Tian

    2017-01-01

In this study, a visual grading system for a vegetable grafting machine was developed, and its key technologies are described. First, a comparison was made between images acquired under blue background light and under natural light, and blue background light was chosen as the lighting source. The Visual C++ platform with the open-source computer vision library (OpenCV) was used for the image processing. Subsequently, the maximum-frequency count of 0-valued pixels was used to extract the scion and rootstock stem diameter measurements. Finally, the developed integrated visual grading system was tested with 100 scion and rootstock seedlings. The results showed that the grading success rate reached 98%. This shows that the selection and grading of scions and rootstocks could be fully automated with the developed visual grading system. Hence, this technology would be greatly helpful for improving grading accuracy and efficiency.
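
    The following is a minimal sketch of one plausible reading of the diameter measurement described above: threshold the seedling image, count the 0-valued (dark) pixels per row, and take the most frequent count as the stem diameter in pixels. The file name, Otsu thresholding, and pixel calibration are assumptions rather than the paper's exact procedure.

```python
# Stem diameter from the most frequent per-row count of dark pixels.
import cv2
import numpy as np

img = cv2.imread("seedling_blue_background.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# In the binarized image the stem is assumed to appear as 0-valued pixels.
zero_counts = np.count_nonzero(binary == 0, axis=1)   # dark pixels per row
zero_counts = zero_counts[zero_counts > 0]            # ignore empty rows

# Mode of the per-row counts = most frequent stem width in pixels.
values, freqs = np.unique(zero_counts, return_counts=True)
diameter_px = int(values[np.argmax(freqs)])

pixel_size_mm = 0.05                                  # assumed calibration
print(f"estimated stem diameter: {diameter_px * pixel_size_mm:.2f} mm")
```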

  1. New data visualization of the LHC Era Monitoring (Lemon) system

    International Nuclear Information System (INIS)

    Ivan, Fedorko; Veronique, Lefebure; Daniel, Lenkes; Omar, Pera Mira

    2012-01-01

In the last few years, new requirements have been received for the visualization of monitoring data: advanced graphics, flexibility in configuration and decoupling of the presentation layer from the monitoring repository. Lemonweb is the data visualization component of the LHC Era Monitoring (Lemon) system. Lemonweb consists of two subcomponents: a data collector and a web visualization interface. The data collector is a daemon, implemented in Python, responsible for gathering data from the central monitoring repository and storing it in time-series data structures. Data is stored on disk in Round Robin Database (RRD) files: one file per monitored entity, with a set of entity-related metrics. Entities may be grouped into a hierarchical structure, called "clusters", which supports mathematical operations over entities and clusters (e.g. cluster A + cluster B / cluster C - entity XY). Using the configuration information, a cluster definition is evaluated in the collector engine and, at runtime, a sequence of data selects is built to optimize access to the central monitoring repository. In this article, an overview of the design and architecture as well as highlights of some implemented features will be presented.
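
    As a rough illustration of the cluster-arithmetic idea (not the Lemonweb code), the sketch below stores one metric time series per entity as a NumPy array and evaluates an expression over clusters element-wise; the entity names and values are assumptions.

```python
# Per-entity metric time series and element-wise cluster arithmetic.
import numpy as np

# One sampled metric (e.g. CPU load) per entity, aligned on the same timestamps.
series = {
    "entityA1": np.array([1.0, 2.0, 3.0, 4.0]),
    "entityA2": np.array([0.5, 1.5, 2.5, 3.5]),
    "entityB1": np.array([2.0, 2.0, 2.0, 2.0]),
    "entityXY": np.array([0.1, 0.1, 0.1, 0.1]),
}
clusters = {"clusterA": ["entityA1", "entityA2"], "clusterB": ["entityB1"]}

def cluster_sum(name):
    """A cluster's value is taken here as the sum over its member entities."""
    return np.sum([series[e] for e in clusters[name]], axis=0)

# Example expression in the spirit of "cluster A + cluster B - entity XY".
result = cluster_sum("clusterA") + cluster_sum("clusterB") - series["entityXY"]
print(result)
```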

  2. Visualizing Time Projection Chamber Data for Education and Outreach

    Science.gov (United States)

    Crosby, Jacob

    2017-09-01

The widespread availability of portable computers in the form of smartphones provides a unique opportunity to introduce scientific concepts to a broad audience, for the purpose of education, or for the purpose of sharing exciting developments and research. Unity, a free game development platform, has been used to develop a program to visualize 3-D events from a Time Projection Chamber (TPC). The program can be presented as a Virtual Reality (VR) application on a smartphone, which can serve as a standalone demonstration for interested individuals, or as a resource for educators. An interactive experience to watch nuclear events unfold demonstrates the principles of particle detection with a TPC, as well as providing information about the particles present. Different kinds of reactions can be showcased. The current state of tools within this program for outreach and educational purposes will be highlighted and presented in this poster, along with key design concerns and optimizations necessary for running an interactive VR app. The events highlighted in this program are from the SπRIT TPC, but the program can be applied to other 3-D detectors. This work is supported by the U.S. Department of Energy under Grant Nos. DE-SC0014530, DE-NA0002923 and US NSF under Grant No. PHY-1565546.

  3. Time manages interference in visual short-term memory.

    Science.gov (United States)

    Smith, Amy V; McKeown, Denis; Bunce, David

    2017-09-01

Emerging evidence suggests that age-related declines in memory may reflect a failure in pattern separation, a process that is believed to reduce the encoding overlap between similar stimulus representations during memory encoding. Indeed, behavioural pattern separation may be indexed by a visual continuous recognition task in which items are presented in sequence and observers report for each whether it is novel, previously viewed (old), or whether it shares features with a previously viewed item (similar). In comparison to young adults, older adults show decreased pattern separation when the number of items between "old" and "similar" items is increased. Yet the mechanisms of forgetting underpinning this type of recognition task are yet to be explored in a cognitively homogeneous group, with careful control over the parameters of the task, including elapsing time (a critical variable in models of forgetting). By extending the inter-item intervals, number of intervening items and overall decay interval, we observed in a young adult sample (N = 35, M age = 19.56 years) that the critical factor governing performance was the inter-item interval. We argue that tasks using behavioural continuous recognition to index pattern separation in immediate memory will benefit from generous inter-item spacing, offering protection from inter-item interference.

  4. Visualizing time-varying harmonics using filter banks

    NARCIS (Netherlands)

    Duque, C.A.; Da Silveira, P.M.; Ribeiro, P.F.

    2011-01-01

    Although it is well known that Fourier analysis is in reality only accurately applicable to steady state waveforms, it is a widely used tool to study and monitor time-varying signals, such as are commonplace in electrical power systems. The disadvantages of Fourier analysis, such as frequency

  5. Visualizing astrophysical N-body systems

    International Nuclear Information System (INIS)

    Dubinski, John

    2008-01-01

    I begin with a brief history of N-body simulation and visualization and then go on to describe various methods for creating images and animations of modern simulations in cosmology and galactic dynamics. These techniques are incorporated into a specialized particle visualization software library called MYRIAD that is designed to render images within large parallel N-body simulations as they run. I present several case studies that explore the application of these methods to animations in star clusters, interacting galaxies and cosmological structure formation.

  6. Real-time coloured visualization of phase flows by the schlieren method

    Science.gov (United States)

    Arbuzov, V. A.; Dubnistchev, Yu. N.

    1991-04-01

A coloured real-time visualizer of optical inhomogeneities comprising a bichromatic schlieren system, a video camera and a colour monitor has been developed. The schlieren system performs a Foucault-Hilbert transformation and is provided with an amplitude spatial-frequency filter or a quadrant Foucault knife edge. Two colour-coded complementary Toepler-grams are obtained in the exit plane of this schlieren system. Their summed image is then recorded by the video camera and displayed on the screen of the colour monitor. A schlieren photograph of internal gravity waves, generated by the motion of a cylindrical body in a reservoir filled with stratified liquid, is presented.

  7. Autonomous docking control of visual-servo type underwater vehicle system aiming at underwater automatic charging

    International Nuclear Information System (INIS)

    Yanou, Akira; Ohnishi, Shota; Ishiyama, Shintaro; Minami, Mamoru

    2015-01-01

A visual-servo type remotely operated vehicle (ROV) system with a binocular wide-angle lens was developed to survey submarine resources, remove radioactive contamination from mud in dam lakes, and so on. This paper describes experiments on regulator performance and underwater docking of the robot system, utilizing a Genetic Algorithm (GA) for real-time recognition of the robot's relative position and posture through a 3D marker. The visual servoing performance has been verified as follows: (1) The stability of the proposed regulator system has been evaluated by exerting an abrupt disturbance force while the ROV is controlled by visual servoing. (2) The proposed system can track a time-variant desired target position along the x-axis (the front-back direction of the robot). (3) Underwater docking can be completed by switching between visual servoing and docking modes based on an error threshold, and by giving a time-varying desired target position and orientation to the controller as a desired pose. (author)

  8. The LCLS Timing Event System

    Energy Technology Data Exchange (ETDEWEB)

    Dusatko, John; Allison, S.; Browne, M.; Krejcik, P.; /SLAC

    2012-07-23

    The Linac Coherent Light Source requires precision timing trigger signals for various accelerator diagnostics and controls at SLAC-NAL. A new timing system has been developed that meets these requirements. This system is based on COTS hardware with a mixture of custom-designed units. An added challenge has been the requirement that the LCLS Timing System must co-exist and 'know' about the existing SLC Timing System. This paper describes the architecture, construction and performance of the LCLS timing event system.

  9. Thoracic ROM measurement system with visual bio-feedback: system design and biofeedback evaluation.

    Science.gov (United States)

    Ando, Takeshi; Kawamura, Kazuya; Fujitani, Junko; Koike, Tomokazu; Fujimoto, Masashi; Fujie, Masakatsu G

    2011-01-01

Patients with diseases such as chronic obstructive pulmonary disease (COPD) need to improve their thorax mobility. Thoracic ROM is one of the simplest and most useful indexes for evaluating respiratory function. In this paper, we propose a prototype of a simple thoracic ROM measurement system with real-time visual biofeedback for the chest expansion test. In this system, the thoracic ROM is measured using a wire-type linear encoder whose wire is wrapped around the thorax. First, the repeatability and reliability of the measured thoracic ROM were confirmed, as a first report on the developed prototype. Second, we analyzed the effect of the biofeedback system on respiratory function. The results of the experiment showed that the real-time visual biofeedback of the thoracic ROM made it easier to maintain a large and stable thoracic ROM during deep breathing.

  10. Multiscale Poincaré plots for visualizing the structure of heartbeat time series.

    Science.gov (United States)

    Henriques, Teresa S; Mariani, Sara; Burykin, Anton; Rodrigues, Filipa; Silva, Tiago F; Goldberger, Ary L

    2016-02-09

    Poincaré delay maps are widely used in the analysis of cardiac interbeat interval (RR) dynamics. To facilitate visualization of the structure of these time series, we introduce multiscale Poincaré (MSP) plots. Starting with the original RR time series, the method employs a coarse-graining procedure to create a family of time series, each of which represents the system's dynamics in a different time scale. Next, the Poincaré plots are constructed for the original and the coarse-grained time series. Finally, as an optional adjunct, color can be added to each point to represent its normalized frequency. We illustrate the MSP method on simulated Gaussian white and 1/f noise time series. The MSP plots of 1/f noise time series reveal relative conservation of the phase space area over multiple time scales, while those of white noise show a marked reduction in area. We also show how MSP plots can be used to illustrate the loss of complexity when heartbeat time series from healthy subjects are compared with those from patients with chronic (congestive) heart failure syndrome or with atrial fibrillation. This generalized multiscale approach to Poincaré plots may be useful in visualizing other types of time series.
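
    As a rough illustration of the coarse-graining step and the resulting plots, the sketch below builds Poincaré plots of a simulated RR series at a few time scales; the simulated data, chosen scales, and figure layout are assumptions.

```python
# Multiscale Poincaré construction: coarse-grain an RR interval series at
# several scales and plot RR(n+1) against RR(n) for each scale.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
rr = 0.8 + 0.05 * rng.standard_normal(3000)      # simulated RR intervals (s)

def coarse_grain(x, scale):
    """Non-overlapping window averages, as in multiscale entropy analysis."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

scales = [1, 5, 10]
fig, axes = plt.subplots(1, len(scales), figsize=(12, 4), sharex=True, sharey=True)
for ax, s in zip(axes, scales):
    y = coarse_grain(rr, s)
    ax.plot(y[:-1], y[1:], ".", markersize=2, alpha=0.5)
    ax.set_title(f"scale {s}")
    ax.set_xlabel("RR(n) [s]")
axes[0].set_ylabel("RR(n+1) [s]")
plt.tight_layout()
plt.savefig("msp_plots.png")
```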

  11. A reference web architecture and patterns for real-time visual analytics on large streaming data

    Science.gov (United States)

    Kandogan, Eser; Soroker, Danny; Rohall, Steven; Bak, Peter; van Ham, Frank; Lu, Jie; Ship, Harold-Jeffrey; Wang, Chun-Fu; Lai, Jennifer

    2013-12-01

    Monitoring and analysis of streaming data, such as social media, sensors, and news feeds, has become increasingly important for business and government. The volume and velocity of incoming data are key challenges. To effectively support monitoring and analysis, statistical and visual analytics techniques need to be seamlessly integrated; analytic techniques for a variety of data types (e.g., text, numerical) and scope (e.g., incremental, rolling-window, global) must be properly accommodated; interaction, collaboration, and coordination among several visualizations must be supported in an efficient manner; and the system should support the use of different analytics techniques in a pluggable manner. Especially in web-based environments, these requirements pose restrictions on the basic visual analytics architecture for streaming data. In this paper we report on our experience of building a reference web architecture for real-time visual analytics of streaming data, identify and discuss architectural patterns that address these challenges, and report on applying the reference architecture for real-time Twitter monitoring and analysis.
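
    To make the notion of analytic scope concrete, here is a minimal sketch of incremental, rolling-window, and global aggregation over a numeric stream; the stream, window size, and mean statistic are illustrative assumptions rather than part of the reference architecture.

```python
# Three analytic scopes over the same stream: incremental, rolling-window, global.
from collections import deque

import numpy as np

class IncrementalMean:
    """Updates in O(1) per item and never revisits old data."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n
        return self.mean

class RollingMean:
    """Keeps only the last `window` items."""
    def __init__(self, window=50):
        self.buf = deque(maxlen=window)
    def update(self, x):
        self.buf.append(x)
        return sum(self.buf) / len(self.buf)

stream = np.random.default_rng(7).normal(size=500)
inc, roll = IncrementalMean(), RollingMean(window=50)
for x in stream:
    last_inc = inc.update(x)
    last_roll = roll.update(x)

global_mean = stream.mean()                      # global scope: full recompute
print(f"incremental={last_inc:.3f} rolling={last_roll:.3f} global={global_mean:.3f}")
```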

  12. Adding a visualization feature to web search engines: it's time.

    Science.gov (United States)

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  13. Batch management based monitoring system: design, implement, and visualization

    International Nuclear Information System (INIS)

    Kan Bowen; Shi Jingyan

    2012-01-01

Torque, an efficient open-source resource management system based on PBS (Portable Batch System), was originally developed by NASA's Ames Research Center and designed to satisfy the computing requirements of heterogeneous networks. With the development of distributed computing, Torque has been widely used in high performance computing clusters. However, because of the lack of a well designed monitoring system, it is difficult to monitor, record, and control, leading to low stability, reliability and manageability. To overcome those problems, this paper designs and implements an adaptive lightweight monitoring system for Torque, covering five aspects. 1) A lightweight circulating filtration logging system is developed to obtain the real-time running status of Torque; 2) A uniform interface is provided for administrators to define monitoring commands, which can query the management resources of Torque; 3) A storage strategy is designed to make monitoring information persistent; 4) A uniform interface is provided for users to customize alarms, which can report exceptions and errors to users via email and SMS in real time; 5) HTML5 technology is applied to the customizable, real-time visualization of job status in Torque. (authors)
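
    As a rough sketch of the status-collection idea only (not the paper's logging system), the following polls `qstat` and logs a count of jobs per state; it assumes a Torque installation whose plain `qstat` output lists one job per line with the job state in the fifth column, so the parsing may need adjustment for a particular site.

```python
# Periodically sample Torque job states via qstat and append them to a log.
import subprocess
import time
from collections import Counter

def sample_job_states():
    out = subprocess.run(["qstat"], capture_output=True, text=True, check=False)
    states = Counter()
    for line in out.stdout.splitlines():
        fields = line.split()
        if len(fields) >= 6 and not line.startswith(("Job", "-")):
            states[fields[4]] += 1          # assumed: 5th column is the state
    return states

if __name__ == "__main__":
    with open("torque_states.log", "a") as log:
        for _ in range(3):                  # a few samples; run indefinitely in practice
            counts = sample_job_states()
            log.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {dict(counts)}\n")
            time.sleep(10)
```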

  14. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    Directory of Open Access Journals (Sweden)

    Richard Chiou

    2010-06-01

This paper discusses a real-time e-Lab learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-Dimensional scheme for viewing the robotic laboratory has been introduced in addition to the remote controlling of the robots. The uniqueness of the project lies in making this process Internet-based, with the robot operated remotely and visualized in 3D. This 3D system approach provides the students with a more realistic feel of the 3D robotic laboratory even though they are working remotely. As a result, the 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback from most of the students. This type of research has introduced a new level of realism and visual communication to online laboratory learning in a remote classroom.

  15. Experimental consideration for realizing image based visual servo control system

    International Nuclear Information System (INIS)

    Ishikawa, N.; Suzuki, K.; Fujii, Y.; Usui, H.

    1995-01-01

In this study, we consider the experimental aspects of an image-based visual servo control system. The items considered are the following: 1) inertial parameter estimation, 2) focal point estimation, 3) controller performance for the system with delay. From the experimental results of visual control, it is found that the system is very sensitive to the controller gain because of the computational delay of vision. In order to establish satisfactory delay compensation, more investigation of controller design is required. (author)

  16. Cherenkov Video Imaging Allows for the First Visualization of Radiation Therapy in Real Time

    International Nuclear Information System (INIS)

    Jarvis, Lesley A.; Zhang, Rongxiao; Gladstone, David J.; Jiang, Shudong; Hitchcock, Whitney; Friedman, Oscar D.; Glaser, Adam K.; Jermyn, Michael; Pogue, Brian W.

    2014-01-01

    Purpose: To determine whether Cherenkov light imaging can visualize radiation therapy in real time during breast radiation therapy. Methods and Materials: An intensified charge-coupled device (CCD) camera was synchronized to the 3.25-μs radiation pulses of the clinical linear accelerator with the intensifier set × 100. Cherenkov images were acquired continuously (2.8 frames/s) during fractionated whole breast irradiation with each frame an accumulation of 100 radiation pulses (approximately 5 monitor units). Results: The first patient images ever created are used to illustrate that Cherenkov emission can be visualized as a video during conditions typical for breast radiation therapy, even with complex treatment plans, mixed energies, and modulated treatment fields. Images were generated correlating to the superficial dose received by the patient and potentially the location of the resulting skin reactions. Major blood vessels are visible in the image, providing the potential to use these as biological landmarks for improved geometric accuracy. The potential for this system to detect radiation therapy misadministrations, which can result from hardware malfunction or patient positioning setup errors during individual fractions, is shown. Conclusions: Cherenkoscopy is a unique method for visualizing surface dose resulting in real-time quality control. We propose that this system could detect radiation therapy errors in everyday clinical practice at a time when these errors can be corrected to result in improved safety and quality of radiation therapy

  17. Concurrent systems and time synchronization

    Science.gov (United States)

    Burgin, Mark; Grathoff, Annette

    2018-05-01

    In the majority of scientific fields, system dynamics is described assuming existence of unique time for the whole system. However, it is established theoretically, for example, in relativity theory or in the system theory of time, and validated experimentally that there are different times and time scales in a variety of real systems - physical, chemical, biological, social, etc. In spite of this, there are no wide-ranging scientific approaches to exploration of such systems. Therefore, the goal of this paper is to study systems with this property. We call them concurrent systems because processes in them can go, events can happen and actions can be performed in different time scales. The problem of time synchronization is specifically explored.

  18. Robust Real-Time Tracking for Visual Surveillance

    Directory of Open Access Journals (Sweden)

    Aguilera Josep

    2007-01-01

This paper describes a real-time multi-camera surveillance system that can be applied to a range of application domains. This integrated system is designed to observe crowded scenes and has mechanisms to improve tracking of objects that are in close proximity. The four component modules described in this paper are (i) motion detection using a layered background model, (ii) object tracking based on local appearance, (iii) hierarchical object recognition, and (iv) fused multisensor object tracking using multiple features and geometric constraints. This integrated approach to complex scene tracking is validated against a number of representative real-world scenarios to show that robust, real-time analysis can be performed.
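
    As a rough sketch of the motion-detection stage only, the code below uses OpenCV's MOG2 background subtractor as a generic stand-in for the layered background model described above; the video path, thresholds, and blob-size cut-off are assumptions.

```python
# Foreground detection with MOG2 and simple bounding boxes around moving blobs.
import cv2

cap = cv2.VideoCapture("surveillance.avi")          # assumed input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                   # 0 bg, 127 shadow, 255 fg
    mask = cv2.medianBlur(mask, 5)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 300:                 # ignore small blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```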

  19. Prototyping real-time systems

    OpenAIRE

    Clynch, Gary

    1994-01-01

    The traditional software development paradigm, the waterfall life cycle model, is defective when used for developing real-time systems. This thesis puts forward an executable prototyping approach for the development of real-time systems. A prototyping system is proposed which uses ESML (Extended Systems Modelling Language) as a prototype specification language. The prototyping system advocates the translation of non-executable ESML specifications into executable LOOPN (Language of Object ...

  20. Design of Interactive Visualizations of Movies in Space and Time

    OpenAIRE

    Jorge, Ana Nunes

    2017-01-01

    Considered an important art form, a source of entertainment and a powerful method for educating, movies have the great power to affect us perceptually, cognitively and emotionally. By integrating various symbol systems like image, audio, and text over time, they are very rich. Moreover, technological advances are making a large amount of movies and related information available over the years, and these media are increasingly being created, shared and accessed from different pl...

  1. Visualization of time-of-flight neutron diffraction data

    International Nuclear Information System (INIS)

    Mikkelson, D.J.; Price, D.L.; Worlton, T.G.

    1995-01-01

The glass, liquids and amorphous materials diffractometer (GLAD) is a new instrument at the intense pulsed neutron source (IPNS) at Argonne National Laboratory. The GLAD currently has 218 linear position sensitive detectors arranged in five banks. Raw data collected from the instrument are typically split into 1000-1500 angular groups, each of which contains approximately 2000 time channels. In order to obtain a meaningful overview of such a large amount of data, an interactive system to view the data has been designed. The system was implemented in C using the graphical kernel system (GKS) for portability. The system treats data from each bank of detectors as a three-dimensional data set with detector number, position along detector and time of flight as the three coordinate axes. The software then slices the data parallel to any of the coordinate planes and displays the slices as images. This approach has helped with the detailed analysis of detector electronics, verification of instrument calibration and resolution determination. In addition, it has helped to identify low-level background signals and provided insight into the overall operation of the instrument. ((orig.))
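
    To illustrate the slicing idea, here is a minimal sketch that treats one detector bank as a 3D array (detector, position, time-of-flight channel) and displays cuts parallel to two coordinate planes; the array sizes and synthetic counts are assumptions.

```python
# Slice a (detector, position, TOF) data cube parallel to coordinate planes.
import numpy as np
import matplotlib.pyplot as plt

n_det, n_pos, n_tof = 44, 64, 2000
rng = np.random.default_rng(3)
bank = rng.poisson(lam=2.0, size=(n_det, n_pos, n_tof)).astype(float)

# Slice parallel to the (position, time-of-flight) plane: one detector.
det_slice = bank[10, :, :]
# Slice parallel to the (detector, position) plane: one time channel.
tof_slice = bank[:, :, 500]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.imshow(det_slice, aspect="auto", origin="lower")
ax1.set(title="detector 10", xlabel="TOF channel", ylabel="position")
ax2.imshow(tof_slice, aspect="auto", origin="lower")
ax2.set(title="TOF channel 500", xlabel="position", ylabel="detector")
plt.tight_layout()
plt.savefig("glad_slices.png")
```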

  2. Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing

    Science.gov (United States)

    Ou, Meiying; Li, Shihua; Wang, Chaoli

    2013-12-01

This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling, and camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interaction. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.

  3. Time course of affective bias in visual attention: convergent evidence from steady-state visual evoked potentials and behavioral data.

    Science.gov (United States)

    Hindi Attar, Catherine; Andersen, Søren K; Müller, Matthias M

    2010-12-01

    Selective attention to a primary task can be biased by the occurrence of emotional distractors that involuntary attract attention due to their intrinsic stimulus significance. What is largely unknown is the time course and magnitude of competitive interactions between a to-be-attended foreground task and emotional distractors. We used pleasant, unpleasant and neutral pictures from the International Affective Picture System (IAPS) that were either presented in intact or phase-scrambled form. Pictures were superimposed by a flickering display of moving random dots, which constituted the primary task and enabled us to record steady-state visual evoked potentials (SSVEPs) as a continuous measure of attentional resource allocation directed to the task. Subjects were required to attend to the dots and to detect short intervals of coherent motion while ignoring the background pictures. We found that pleasant and unpleasant relative to neutral pictures more strongly influenced task-related processing as reflected in a significant decrease in SSVEP amplitudes and target detection rates, both covering a time window of several hundred milliseconds. Strikingly, the effect of semantic relative to phase-scrambled pictures on task-related activity was much larger, emerged earlier and lasted longer in time compared to the specific effect of emotion. The observed differences in size and duration of time courses of semantic and emotional picture processing strengthen the assumption of separate functional mechanisms for both processes rather than a general boosting of neural activity in favor of emotional stimulus processing. Copyright © 2010 Elsevier Inc. All rights reserved.

  4. An experiment of a 3D real-time robust visual odometry for intelligent vehicles

    OpenAIRE

    Rodriguez Florez , Sergio Alberto; Fremont , Vincent; Bonnifait , Philippe

    2009-01-01

Vision systems are nowadays very promising for many on-board vehicles perception functionalities, like obstacles detection/recognition and ego-localization. In this paper, we present a 3D visual odometric method that uses a stereo-vision system to estimate the 3D ego-motion of a vehicle in outdoor road conditions. In order to run in real-time, the studied technique is sparse meaning that it makes use of feature points that are tracked during several frames. A robust sc...

  5. The visual system of diurnal raptors: updated review.

    Science.gov (United States)

    González-Martín-Moro, J; Hernández-Verdejo, J L; Clement-Corral, A

    2017-05-01

Diurnal birds of prey (raptors) are considered the group of animals with the highest visual acuity (VA). The purpose of this work is to review the information recently published about the visual system of this group of animals. A bibliographic search was performed in PubMed. The algorithm used was (raptor OR falcon OR kestrel OR hawk OR eagle) AND (vision OR «visual acuity» OR eye OR macula OR retina OR fovea OR «nictitating membrane» OR «chromatic vision» OR ultraviolet). The search was restricted to the «Title» and «Abstract» fields, and to non-human species, without time restriction. The proposed algorithm located 97 articles. Birds of prey are endowed with the highest VA in the animal kingdom. However, most of the works study one individual or a small group of individuals, and the methodology is heterogeneous. The most studied bird is the Peregrine falcon (Falco peregrinus), with an estimated VA of 140 cycles/degree. Some eagles are endowed with similar VA. The tubular shape of the eye, the large pupil, and a high density of photoreceptors make this extraordinary VA possible. In some species, histology and optic coherence tomography demonstrate the presence of 2 foveas. The nasal fovea (deep fovea) has higher VA. Nevertheless, the exact function of each fovea is unknown. The vitreous contained in the deep fovea could behave as a third lens, adding some magnification to the optic system. Copyright © 2017 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.

  6. Real time drift measurement for colloidal probe atomic force microscope: a visual sensing approach

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yuliang, E-mail: wangyuliang@buaa.edu.cn; Bi, Shusheng [Robotics Institute, School of Mechanical Engineering and Automation, Beihang University, Beijing 100191 (China); Wang, Huimin [Department of Materials Science and Engineering, The Ohio State University, 2041 College Rd., Columbus, OH 43210 (United States)

    2014-05-15

Drift has long been an issue in atomic force microscope (AFM) systems and limits their ability to make measurements over long time periods. In this study, a new method is proposed to directly measure and compensate for the drift between AFM cantilevers and sample surfaces in AFM systems. This was achieved by simultaneously measuring the z positions of beads at the end of an AFM colloidal probe and on the sample surface through an off-focus image-processing-based visual sensing method. The working principle and system configuration are presented. Experiments were conducted to validate the real-time drift measurement and compensation. The implication of the proposed method for regular AFM measurements is discussed. We believe that this technique provides a practical and efficient approach for AFM experiments requiring measurements over long time periods.

  7. Dependable Real-Time Systems

    Science.gov (United States)

    1991-09-30

Grant or Contract Title: Dependable Real-Time Systems. Grant or Contract Number: N00014-85-K-0398. Reporting Period: 1 Oct 87 - 30 Sep 91. Summary of Accomplishments: ... in developing a sound approach to scheduling tasks in complex real-time systems, (2) developed a real-time operating system kernel, a preliminary

  8. Official Union Time Tracking System

    Data.gov (United States)

    Social Security Administration — Official Union Time Tracking System captures the reporting and accounting of the representational activity for all American Federation of Government Employees (AFGE)...

  9. Towards a visual modeling approach to designing microelectromechanical system transducers

    Science.gov (United States)

    Dewey, Allen; Srinivasan, Vijay; Icoz, Evrim

    1999-12-01

In this paper, we address initial design capture and system conceptualization of microelectromechanical system transducers based on visual modeling and design. Visual modeling frames the task of generating hardware description language (analog and digital) component models in a manner similar to the task of generating software programming language applications. A structured topological design strategy is employed, whereby microelectromechanical foundry cell libraries are utilized to facilitate the design process of exploring candidate cells (topologies), varying key aspects of the transduction for each topology, and determining which topology best satisfies design requirements. Coupled-energy microelectromechanical system characterizations at a circuit level of abstraction are presented that are based on branch constitutive relations and an overall system of simultaneous differential and algebraic equations. The resulting design methodology is called visual integrated-microelectromechanical VHDL-AMS interactive design (VHDL-AMS is a hardware description language for analog and mixed-signal design).

  10. Sextant: Visualizing time-evolving linked geospatial data

    NARCIS (Netherlands)

    C. Nikolaou (Charalampos); K. Dogani (Kallirroi); K. Bereta (Konstantina); G. Garbis (George); M. Karpathiotakis (Manos); K. Kyzirakos (Konstantinos); M. Koubarakis (Manolis)

    2015-01-01

    textabstractThe linked open data cloud is constantly evolving as datasets get continuously updated with newer versions. As a result, representing, querying, and visualizing the temporal dimension of linked data is crucial. This is especially important for geospatial datasets that form the backbone

  11. Real-time decreased sensitivity to an audio-visual illusion during goal-directed reaching.

    Directory of Open Access Journals (Sweden)

    Luc Tremblay

In humans, sensory afferences are combined and integrated by the central nervous system (Ernst MO, Bülthoff HH (2004) Trends Cogn. Sci. 8: 162-169) and appear to provide a holistic representation of the environment. Empirical studies have repeatedly shown that vision dominates the other senses, especially for tasks with spatial demands. In contrast, it has also been observed that sound can strongly alter the perception of visual events. For example, when presented with 2 flashes and 1 beep in a very brief period of time, humans often report seeing 1 flash (i.e. fusion illusion, Andersen TS, Tiippana K, Sams M (2004) Brain Res. Cogn. Brain Res. 21: 301-308). However, it is not known how an unfolding movement modulates the contribution of vision to perception. Here, we used the audio-visual illusion to demonstrate that goal-directed movements can alter visual information processing in real-time. Specifically, the fusion illusion was linearly reduced as a function of limb velocity. These results suggest that cue combination and integration can be modulated in real-time by goal-directed behaviors; perhaps through sensory gating (Chapman CE, Beauchamp E (2006) J. Neurophysiol. 96: 1664-1675) and/or altered sensory noise (Ernst MO, Bülthoff HH (2004) Trends Cogn. Sci. 8: 162-169) during limb movements.

  12. Heliborne time domain electromagnetic system

    International Nuclear Information System (INIS)

    Bhattacharya, S.

    2009-01-01

The Atomic Minerals Directorate (AMD) is using heliborne and ground time domain electromagnetic (TDEM) systems for the exploration of deep-seated unconformity-type uranium deposits. Uranium has been explored in various parts of the world, such as the Athabasca basin, using time domain electromagnetic systems. AMD has identified some areas in India where such deposits are available. Apart from uranium exploration, TDEM systems are used for the exploration of deep-seated minerals like diamonds. Bhabha Atomic Research Centre (BARC) is involved in the indigenous design of the heliborne time domain system, since this system is useful for DAE and also has scope for wide application. In this paper we discuss the principle of time domain electromagnetic systems, their capabilities, and the development and problems of such systems for the exploration of various other minerals. (author)

  13. Methods and means for building a system of visual images forming in gis of critical important objects protection

    Directory of Open Access Journals (Sweden)

    Mykhailo Vasiukhin

    2013-12-01

Requirements for the visualization of dynamic scenes in security systems have been increasing in recent years. This calls for the development of methods and tools for visualizing dynamic scenes for monitoring and managing human-operator security systems. The paper presents a cartographic data model that serves as a basis for building real-time map data, and methods for the real-time visualization of moving objects in the air.

  14. Visualization of Time-Series Sensor Data to Inform the Design of Just-In-Time Adaptive Stress Interventions.

    Science.gov (United States)

    Sharmin, Moushumi; Raij, Andrew; Epstien, David; Nahum-Shani, Inbal; Beck, J Gayle; Vhaduri, Sudip; Preston, Kenzie; Kumar, Santosh

    2015-09-01

    We investigate needs, challenges, and opportunities in visualizing time-series sensor data on stress to inform the design of just-in-time adaptive interventions (JITAIs). We identify seven key challenges: massive volume and variety of data, complexity in identifying stressors, scalability of space, multifaceted relationship between stress and time, a need for representation at multiple granularities, interperson variability, and limited understanding of JITAI design requirements due to its novelty. We propose four new visualizations based on one million minutes of sensor data (n=70). We evaluate our visualizations with stress researchers (n=6) to gain first insights into its usability and usefulness in JITAI design. Our results indicate that spatio-temporal visualizations help identify and explain between- and within-person variability in stress patterns and contextual visualizations enable decisions regarding the timing, content, and modality of intervention. Interestingly, a granular representation is considered informative but noise-prone; an abstract representation is the preferred starting point for designing JITAIs.

  15. Making Time for Nature: Visual Exposure to Natural Environments Lengthens Subjective Time Perception and Reduces Impulsivity.

    Directory of Open Access Journals (Sweden)

    Meredith S Berry

Impulsivity in delay discounting is associated with maladaptive behaviors such as overeating and drug and alcohol abuse. Researchers have recently noted that delay discounting, even when measured by a brief laboratory task, may be the best predictor of human health related behaviors (e.g., exercise) currently available. Identifying techniques to decrease impulsivity in delay discounting, therefore, could help improve decision-making on a global scale. Visual exposure to natural environments is one recent approach shown to decrease impulsive decision-making in a delay discounting task, although the mechanism driving this result is currently unknown. The present experiment was thus designed to evaluate not only whether visual exposure to natural (mountains, lakes) relative to built (buildings, cities) environments resulted in less impulsivity, but also whether this exposure influenced time perception. Participants were randomly assigned to either a natural environment condition or a built environment condition. Participants viewed photographs of either natural scenes or built scenes before and during a delay discounting task in which they made choices about receiving immediate or delayed hypothetical monetary outcomes. Participants also completed an interval bisection task in which natural or built stimuli were judged as relatively longer or shorter presentation durations. Following the delay discounting and interval bisection tasks, additional measures of time perception were administered, including how many minutes participants thought had passed during the session and a scale measurement of whether time "flew" or "dragged" during the session. Participants exposed to natural as opposed to built scenes were less impulsive and also reported longer subjective session times, although no differences across groups were revealed with the interval bisection task. These results are the first to suggest that decreased impulsivity from exposure to natural as

  16. Large-scale visualization system for grid environment

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

The Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has been conducting R and D on distributed computing (grid computing) environments: Seamless Thinking Aid (STA), Information Technology Based Laboratory (ITBL) and Atomic Energy Grid InfraStructure (AEGIS). Through this R and D, we have developed visualization technology suitable for distributed computing environments. As one of the visualization tools, we have developed the Parallel Support Toolkit (PST), which can execute the visualization process in parallel on a computer. We are now improving PST so that it can be executed simultaneously on multiple heterogeneous computers using the Seamless Thinking Aid Message Passing Interface (STAMPI). STAMPI, which we developed in this R and D, is an MPI library executable on a heterogeneous computing environment. The improvement realizes the visualization of extremely large-scale data and enables more efficient visualization processes in a distributed computing environment. (author)
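
    As a generic illustration of data-parallel visualization with MPI (not PST or STAMPI themselves), the sketch below scatters slabs of a volume to MPI ranks with mpi4py, has each rank compute a partial image, and composites the results on rank 0; the data volume and the maximum-intensity "render" step are toy placeholders. It would be run with something like `mpiexec -n 4 python script.py`.

```python
# Data-parallel rendering: scatter slabs, render locally, composite on rank 0.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Rank 0 owns the full volume and scatters equal slabs to all ranks.
n_slices = 8 * size
volume = np.arange(n_slices * 64 * 64, dtype=np.float64).reshape(n_slices, 64, 64) \
    if rank == 0 else None
local = np.empty((n_slices // size, 64, 64), dtype=np.float64)
comm.Scatter(volume, local, root=0)

# Each rank "renders" its slab (here: a maximum-intensity projection).
local_image = local.max(axis=0)

# Gather partial images and composite them on rank 0.
images = comm.gather(local_image, root=0)
if rank == 0:
    composite = np.maximum.reduce(images)
    print("composite image shape:", composite.shape, "max:", composite.max())
```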

  17. Integrating and Visualizing Tropical Cyclone Data Using the Real Time Mission Monitor

    Science.gov (United States)

    Goodman, H. Michael; Blakeslee, Richard; Conover, Helen; Hall, John; He, Yubin; Regner, Kathryn

    2009-01-01

    The Real Time Mission Monitor (RTMM) is a visualization and information system that fuses multiple Earth science data sources, to enable real time decision-making for airborne and ground validation experiments. Developed at the NASA Marshall Space Flight Center, RTMM is a situational awareness, decision-support system that integrates satellite imagery, radar, surface and airborne instrument data sets, model output parameters, lightning location observations, aircraft navigation data, soundings, and other applicable Earth science data sets. The integration and delivery of this information is made possible using data acquisition systems, network communication links, network server resources, and visualizations through the Google Earth virtual globe application. RTMM is extremely valuable for optimizing individual Earth science airborne field experiments. Flight planners, scientists, and managers appreciate the contributions that RTMM makes to their flight projects. A broad spectrum of interdisciplinary scientists used RTMM during field campaigns including the hurricane-focused 2006 NASA African Monsoon Multidisciplinary Analyses (NAMMA), 2007 NOAA-NASA Aerosonde Hurricane Noel flight, 2007 Tropical Composition, Cloud, and Climate Coupling (TC4), plus a soil moisture (SMAP-VEX) and two arctic research experiments (ARCTAS) in 2008. Improving and evolving RTMM is a continuous process. RTMM recently integrated the Waypoint Planning Tool, a Java-based application that enables aircraft mission scientists to easily develop a pre-mission flight plan through an interactive point-and-click interface. Individual flight legs are automatically calculated "on the fly". The resultant flight plan is then immediately posted to the Google Earth-based RTMM for interested scientists to view the planned flight track and subsequently compare it to the actual real time flight progress. We are planning additional capabilities to RTMM including collaborations with the Jet Propulsion

  18. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential

  19. Interactive visual steering--rapid visual prototyping of a common rail injection system.

    Science.gov (United States)

    Matković, Kresimir; Gracanin, Denis; Jelović, Mario; Hauser, Helwig

    2008-01-01

    Interactive steering with visualization has been a common goal of the visualization research community for twenty years, but it is rarely ever realized in practice. In this paper we describe a successful realization of a tightly coupled steering loop, integrating new simulation technology and interactive visual analysis in a prototyping environment for automotive industry system design. Due to increasing pressure on car manufacturers to meet new emission regulations, to improve efficiency, and to reduce noise, both simulation and visualization are pushed to their limits. Automotive system components, such as the powertrain system or the injection system, have an increasing number of parameters, and new design approaches are required. It is no longer possible to optimize such a system solely based on experience or forward optimization. By coupling interactive visualization with the simulation back-end (computational steering), it is now possible to quickly prototype a new system, starting from a non-optimized initial prototype and the corresponding simulation model. The prototyping continues through the refinement of the simulation model, of the simulation parameters and through trial-and-error attempts to an optimized solution. The ability to see early results from a multidimensional simulation space--thousands of simulations are run for a multidimensional variety of input parameters--and to quickly go back into the simulation and request more runs in particular parameter regions of interest significantly improves the prototyping process and provides a deeper understanding of the system behavior. The excellent results which we achieved for the common rail injection system strongly suggest that our approach has great potential to be generalized to other, similar scenarios.
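
    The core idea of such a steering loop (run a batch, inspect the results, request further runs in a region of interest) can be sketched in a few lines of Python; the two-parameter objective and the automatic zoom-in rule below are hypothetical stand-ins for the real injection-system simulation and for the analyst's interactive selection:

```python
import numpy as np

def simulate(params):
    """Stand-in for the simulation back-end: returns a scalar quality measure."""
    x, y = params
    return -(x - 0.3) ** 2 - (y - 0.7) ** 2   # hypothetical objective

def steering_loop(bounds, rounds=3, grid=5):
    """Coarse-to-fine steering: batch of runs, pick a region, request finer runs there."""
    (x_lo, x_hi), (y_lo, y_hi) = bounds
    for _ in range(rounds):
        xs = np.linspace(x_lo, x_hi, grid)
        ys = np.linspace(y_lo, y_hi, grid)
        runs = [((x, y), simulate((x, y))) for x in xs for y in ys]
        # In the real setting the analyst chooses the region interactively from the
        # visualization; here we simply zoom in around the best run so far.
        (bx, by), best = max(runs, key=lambda r: r[1])
        dx, dy = (x_hi - x_lo) / 4, (y_hi - y_lo) / 4
        x_lo, x_hi, y_lo, y_hi = bx - dx, bx + dx, by - dy, by + dy
    return (bx, by), best

print(steering_loop(((0.0, 1.0), (0.0, 1.0))))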

  20. Real-time feedback on nonverbal clinical communication. Theoretical framework and clinician acceptance of ambient visual design.

    Science.gov (United States)

    Hartzler, A L; Patel, R A; Czerwinski, M; Pratt, W; Roseway, A; Chandrasekaran, N; Back, A

    2014-01-01

    This article is part of the focus theme of Methods of Information in Medicine on "Pervasive Intelligent Technologies for Health". Effective nonverbal communication between patients and clinicians fosters both the delivery of empathic patient-centered care and positive patient outcomes. Although nonverbal skill training is a recognized need, few efforts to enhance patient-clinician communication provide visual feedback on nonverbal aspects of the clinical encounter. We describe a novel approach that uses social signal processing technology (SSP) to capture nonverbal cues in real time and to display ambient visual feedback on control and affiliation--two primary, yet distinct dimensions of interpersonal nonverbal communication. To examine the design and clinician acceptance of ambient visual feedback on nonverbal communication, we 1) formulated a model of relational communication to ground SSP and 2) conducted a formative user study using mixed methods to explore the design of visual feedback. Based on a model of relational communication, we reviewed interpersonal communication research to map nonverbal cues to signals of affiliation and control evidenced in patient-clinician interaction. Corresponding with our formulation of this theoretical framework, we designed ambient real-time visualizations that reflect variations of affiliation and control. To explore clinicians' acceptance of this visual feedback, we conducted a lab study using the Wizard-of-Oz technique to simulate system use with 16 healthcare professionals. We followed up with seven of those participants through interviews to iterate on the design with a revised visualization that addressed emergent design considerations. Ambient visual feedback on nonverbal communication provides a theoretically grounded and acceptable way to provide clinicians with awareness of their nonverbal communication style. We provide implications for the design of such visual feedback that encourages empathic patient

  1. The Two Visual Systems Hypothesis: new challenges and insights from visual form agnosic patient DF

    Directory of Open Access Journals (Sweden)

    Robert Leslie Whitwell

    2014-12-01

    Full Text Available Patient DF, who developed visual form agnosia following carbon monoxide poisoning, is still able to use vision to adjust the configuration of her grasping hand to the geometry of a goal object. This striking dissociation between perception and action in DF provided a key piece of evidence for the formulation of Goodale and Milner’s Two Visual Systems Hypothesis (TVSH). According to the TVSH, the ventral stream plays a critical role in constructing our visual percepts, whereas the dorsal stream mediates the visual control of action, such as visually guided grasping. In this review, we discuss recent studies of DF that provide new insights into the functional organization of the dorsal and ventral streams. We confirm recent evidence that DF has dorsal as well as ventral brain damage – and that her dorsal-stream lesions and surrounding atrophy have increased in size since her first published brain scan. We argue that the damage to DF’s dorsal stream explains her deficits in directing actions at targets in the periphery. We then focus on DF’s ability to accurately adjust her in-flight hand aperture to changes in the width of goal objects (grip scaling) whose dimensions she cannot explicitly report. An examination of several studies of DF’s grip scaling under natural conditions reveals a modest though significant deficit. Importantly, however, she continues to show a robust dissociation between form vision for perception and form vision for action. We also review recent studies that explore the role of online visual feedback and terminal haptic feedback in the programming and control of her grasping. These studies make it clear that DF is no more reliant on visual or haptic feedback than are neurologically-intact individuals. In short, we argue that her ability to grasp objects depends on visual feedforward processing carried out by visuomotor networks in her dorsal stream that function in much the same way as they do in neurologically

  2. Real-time Position Based Population Data Analysis and Visualization Using Heatmap for Hazard Emergency Response

    Science.gov (United States)

    Ding, R.; He, T.

    2017-12-01

    With the increased popularity of mobile applications and services, there has been a growing demand for more advanced mobile technologies that utilize real-time Location Based Services (LBS) data to support natural hazard response efforts. Compared to traditional sources like the census bureau that often can only provide historical and static data, an LBS service can provide more current data to drive a real-time natural hazard response system to more accurately process and assess issues such as population density in areas impacted by a hazard. However, manually preparing or preprocessing the data to suit the needs of the particular application would be time-consuming. This research aims to implement a population heatmap visual analytics system based on real-time data for natural disaster emergency management. The system comprises a three-layered architecture, including data collection, data processing, and visual analysis layers. Real-time, location-based data meeting certain aggregation conditions are collected from multiple sources across the Internet, then processed and stored in a cloud-based data store. Parallel computing is utilized to provide fast and accurate access to the pre-processed population data based on criteria such as the disaster event and to generate a location-based population heatmap as well as other types of visual digital outputs using auxiliary analysis tools. At present, a prototype system has been developed that geographically covers the entire region of China and combines the population heatmap with data from the Earthquake Catalogs database. Preliminary results indicate that the generation of dynamic population density heatmaps based on the prototype system has effectively supported rapid earthquake emergency rescue and evacuation efforts as well as helping responders and decision makers to evaluate and assess earthquake damage. Correlation analyses that were conducted revealed that the aggregation and movement of people
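
    A minimal sketch of the heat-map step in such a pipeline (assuming location reports arrive as latitude/longitude pairs; the grid size, bounds and variable names below are hypothetical, not taken from the prototype described above) could bin the reports into a density grid like this:

```python
import numpy as np

def population_heatmap(lats, lons, bounds, cells=(200, 200)):
    """Bin location reports into a 2D density grid (rows = latitude bins).

    bounds = (lat_min, lat_max, lon_min, lon_max); each report counts as one person.
    """
    lat_min, lat_max, lon_min, lon_max = bounds
    grid, _, _ = np.histogram2d(
        lats, lons,
        bins=cells,
        range=[[lat_min, lat_max], [lon_min, lon_max]],
    )
    return grid  # hand to any renderer (e.g. a web-map layer) as heat intensities

# Hypothetical usage: density of reports around an epicentre region.
rng = np.random.default_rng(0)
grid = population_heatmap(rng.uniform(30, 32, 10000), rng.uniform(103, 105, 10000),
                          (30.0, 32.0, 103.0, 105.0))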

  3. Visualizing Time-Varying Distribution Data in EOS Application

    Science.gov (United States)

    Shen, Han-Wei

    2004-01-01

    In this research, we have developed several novel visualization methods for spatial probability density function data. Our focus has been on 2D spatial datasets, where each pixel is a random variable, and has multiple samples which are the results of experiments on that random variable. We developed novel clustering algorithms as a means to reduce the information contained in these datasets; and investigated different ways of interpreting and clustering the data.

  4. DESIGN OF A VISUAL INTERFACE FOR ANN BASED SYSTEMS

    Directory of Open Access Journals (Sweden)

    Ramazan BAYINDIR

    2008-01-01

    Full Text Available In parallel with technological developments, artificial intelligence methods have been used alongside conventional control techniques for the control of many systems. The increasing number of artificial intelligence applications has created a need for education in this area. In this paper, computer-based artificial neural network (ANN) software is presented for learning and understanding artificial neural networks. By means of the developed software, the training of an artificial neural network on the inputs provided, and a subsequent test run, can be performed while changing components such as the iteration number, momentum factor, learning ratio, and efficiency function of the artificial neural network. As a result of the study, a visual education set has been obtained that can easily be adapted to real-time applications.
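
    The role of the tunables mentioned above (iteration number, momentum factor, learning ratio) can be illustrated with a small, self-contained training loop. This is only a generic sketch, not the software described in the paper; the network size, random seed and XOR toy data are arbitrary choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, y, hidden=4, iterations=5000, learning_rate=0.5, momentum=0.9, seed=0):
    """Train a one-hidden-layer network; the three tunables mirror those in the text."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    V1, V2 = np.zeros_like(W1), np.zeros_like(W2)   # momentum buffers
    for _ in range(iterations):
        h = sigmoid(X @ W1)                 # forward pass
        out = sigmoid(h @ W2)
        d_out = (out - y) * out * (1 - out)            # MSE gradient at the output
        d_h = (d_out @ W2.T) * h * (1 - h)             # back-propagated to hidden layer
        V2 = momentum * V2 - learning_rate * (h.T @ d_out) / len(X)
        V1 = momentum * V1 - learning_rate * (X.T @ d_h) / len(X)
        W2 += V2
        W1 += V1
    return W1, W2

# XOR as a toy training set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_mlp(X, y)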

  5. Shared visual attention and memory systems in the Drosophila brain.

    Directory of Open Access Journals (Sweden)

    Bruno van Swinderen

    Full Text Available BACKGROUND: Selective attention and memory seem to be related in human experience. This appears to be the case as well in simple model organisms such as the fly Drosophila melanogaster. Mutations affecting olfactory and visual memory formation in Drosophila, such as in dunce and rutabaga, also affect short-term visual processes relevant to selective attention. In particular, increased optomotor responsiveness appears to be predictive of visual attention defects in these mutants. METHODOLOGY/PRINCIPAL FINDINGS: To further explore the possible overlap between memory and visual attention systems in the fly brain, we screened a panel of 36 olfactory long term memory (LTM) mutants for visual attention-like defects using an optomotor maze paradigm. Three of these mutants yielded high dunce-like optomotor responsiveness. We characterized these three strains by examining their visual distraction in the maze, their visual learning capabilities, and their brain activity responses to visual novelty. We found that one of these mutants, D0067, was almost completely identical to dunce(1) for all measures, while another, D0264, was more like wild type. Exploiting the fact that the LTM mutants are also Gal4 enhancer traps, we explored the sufficiency for the cells subserved by these elements to rescue dunce attention defects and found overlap at the level of the mushroom bodies. Finally, we demonstrate that control of synaptic function in these Gal4 expressing cells specifically modulates a 20-30 Hz local field potential associated with attention-like effects in the fly brain. CONCLUSIONS/SIGNIFICANCE: Our study uncovers genetic and neuroanatomical systems in the fly brain affecting both visual attention and odor memory phenotypes. A common component to these systems appears to be the mushroom bodies, brain structures which have been traditionally associated with odor learning but which we propose might be also involved in generating oscillatory brain activity

  6. TimeBench: a data model and software library for visual analytics of time-oriented data.

    Science.gov (United States)

    Rind, Alexander; Lammarsch, Tim; Aigner, Wolfgang; Alsallakh, Bilal; Miksch, Silvia

    2013-12-01

    Time-oriented data play an essential role in many Visual Analytics scenarios such as extracting medical insights from collections of electronic health records or identifying emerging problems and vulnerabilities in network traffic. However, many software libraries for Visual Analytics treat time as a flat numerical data type and insufficiently tackle the complexity of the time domain such as calendar granularities and intervals. Therefore, developers of advanced Visual Analytics designs need to implement temporal foundations in their application code over and over again. We present TimeBench, a software library that provides foundational data structures and algorithms for time-oriented data in Visual Analytics. Its expressiveness and developer accessibility have been evaluated through application examples demonstrating a variety of challenges with time-oriented data and long-term developer studies conducted in the scope of research and student projects.
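
    To see why intervals and calendar granularities need more support than a flat numeric timestamp, consider a small sketch in plain Python (this is not the TimeBench API, which is a Java library; the month granularity and example dates are chosen arbitrarily):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Interval:
    """A time interval, as opposed to a single flat numeric timestamp."""
    start: datetime
    end: datetime

    def overlaps(self, other: "Interval") -> bool:
        return self.start < other.end and other.start < self.end

def snap_to_month(t: datetime) -> Interval:
    """Map an instant to the calendar-granularity interval (month) containing it."""
    start = t.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    end = (start.replace(year=start.year + 1, month=1)
           if start.month == 12 else start.replace(month=start.month + 1))
    return Interval(start, end)

a = snap_to_month(datetime(2013, 12, 15, 9, 30))
b = Interval(datetime(2013, 12, 28), datetime(2014, 1, 5))
print(a.overlaps(b))  # True: the two intervals overlap in late December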

  7. Anatomy and physiology of the afferent visual system.

    Science.gov (United States)

    Prasad, Sashank; Galetta, Steven L

    2011-01-01

    The efficient organization of the human afferent visual system meets enormous computational challenges. Once visual information is received by the eye, the signal is relayed by the retina, optic nerve, chiasm, tracts, lateral geniculate nucleus, and optic radiations to the striate cortex and extrastriate association cortices for final visual processing. At each stage, the functional organization of these circuits is derived from their anatomical and structural relationships. In the retina, photoreceptors convert photons of light to an electrochemical signal that is relayed to retinal ganglion cells. Ganglion cell axons course through the optic nerve, and their partial decussation in the chiasm brings together corresponding inputs from each eye. Some inputs follow pathways to mediate pupil light reflexes and circadian rhythms. However, the majority of inputs arrive at the lateral geniculate nucleus, which relays visual information via second-order neurons that course through the optic radiations to arrive in striate cortex. Feedback mechanisms from higher cortical areas shape the neuronal responses in early visual areas, supporting coherent visual perception. Detailed knowledge of the anatomy of the afferent visual system, in combination with skilled examination, allows precise localization of neuropathological processes and guides effective diagnosis and management of neuro-ophthalmic disorders. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Defining the cortical visual systems: "what", "where", and "how"

    Science.gov (United States)

    Creem, S. H.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)

    2001-01-01

    The visual system historically has been defined as consisting of at least two broad subsystems subserving object and spatial vision. These visual processing streams have been organized both structurally as two distinct pathways in the brain, and functionally for the types of tasks that they mediate. The classic definition by Ungerleider and Mishkin labeled a ventral "what" stream to process object information and a dorsal "where" stream to process spatial information. More recently, Goodale and Milner redefined the two visual systems with a focus on the different ways in which visual information is transformed for different goals. They relabeled the dorsal stream as a "how" system for transforming visual information using an egocentric frame of reference in preparation for direct action. This paper reviews recent research from psychophysics, neurophysiology, neuropsychology and neuroimaging to define the roles of the ventral and dorsal visual processing streams. We discuss a possible solution that allows for both "where" and "how" systems that are functionally and structurally organized within the posterior parietal lobe.

  9. Visual color matching system based on RGB LED light source

    Science.gov (United States)

    Sun, Lei; Huang, Qingmei; Feng, Chen; Li, Wei; Wang, Chaofeng

    2018-01-01

    In order to study the properties and performance of LEDs as RGB primary color light sources for color mixture in visual psychophysical experiments, and to find out the differences between LED light sources and traditional light sources, a visual color matching experiment system based on LED light sources as RGB primary colors has been built. By simulating the traditional experiment of metameric color matching in the CIE 1931 RGB color system, it can be used for visual color matching experiments to obtain a set of spectral tristimulus values which we often call color-matching functions (CMFs). This system consists of three parts: a monochromatic light part using a blazed grating, a light mixing part where the summation of 3 LED illuminations is to be visually matched with a monochromatic illumination, and a visual observation part. The three narrow band LEDs used have dominant wavelengths of 640 nm (red), 522 nm (green) and 458 nm (blue) respectively and their intensities can be controlled independently. After the calibration of wavelength and luminance of the LED sources with a spectrophotometer, a series of visual color matching experiments have been carried out by 5 observers. The results are compared with those from the CIE 1931 RGB color system, and have been used to compute an average locus for the spectral colors in the color triangle, with white at the center. It has been shown that the use of LEDs is feasible and has the advantages of easy control, good stability and low cost.
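
    The matching step itself reduces to simple linear algebra: once the tristimulus values of the three LED primaries are known from calibration, the drive levels that match a target stimulus are the solution of a 3x3 linear system. The numbers in the sketch below are hypothetical placeholders, not the calibrated values from this paper:

```python
import numpy as np

# Columns: tristimulus values of the red, green and blue LED primaries at unit drive
# (hypothetical numbers for illustration; real values come from calibration).
P = np.array([
    [0.62, 0.18, 0.02],   # X of red, green, blue LED
    [0.30, 0.65, 0.08],   # Y
    [0.01, 0.07, 0.85],   # Z
])

target = np.array([0.35, 0.40, 0.30])   # tristimulus of the monochromatic field

weights = np.linalg.solve(P, target)    # LED drive levels that match the target
print(weights)
# A negative weight means the target lies outside the triangle spanned by the LEDs:
# in the visual experiment that primary has to be added to the test side instead,
# which is what produces the negative lobes of the colour-matching functions.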

  10. Entrainment to a real time fractal visual stimulus modulates fractal gait dynamics.

    Science.gov (United States)

    Rhea, Christopher K; Kiefer, Adam W; D'Andrea, Susan E; Warren, William H; Aaron, Roy K

    2014-08-01

    Fractal patterns characterize healthy biological systems and are considered to reflect the ability of the system to adapt to varying environmental conditions. Previous research has shown that fractal patterns in gait are altered following natural aging or disease, and this has potential negative consequences for gait adaptability that can lead to increased risk of injury. However, the flexibility of a healthy neurological system to exhibit different fractal patterns in gait has yet to be explored, and this is a necessary step toward understanding human locomotor control. Fifteen participants walked for 15 min on a treadmill, either in the absence of a visual stimulus or while they attempted to couple the timing of their gait with a visual metronome that exhibited a persistent fractal pattern (contained long-range correlations) or a random pattern (contained no long-range correlations). The stride-to-stride intervals of the participants were recorded via analog foot pressure switches and submitted to detrended fluctuation analysis (DFA) to determine if the fractal patterns during the visual metronome conditions differed from the baseline (no metronome) condition. DFA α in the baseline condition was 0.77±0.09. The fractal patterns in the stride-to-stride intervals were significantly altered when walking to the fractal metronome (DFA α=0.87±0.06) and to the random metronome (DFA α=0.61±0.10) (both p<.05 when compared to the baseline condition), indicating that a global change in gait dynamics was observed. A variety of strategies were identified at the local level with a cross-correlation analysis, indicating that local behavior did not account for the consistent global changes. Collectively, the results show that gait dynamics can be shifted in a prescribed manner using a visual stimulus and the shift appears to be a global phenomenon. Copyright © 2014 Elsevier B.V. All rights reserved.
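
    For readers unfamiliar with the DFA α values quoted above, the exponent can be estimated with a short routine. The following is a generic textbook-style implementation in Python (window sizes and the white-noise test signal are arbitrary), not the authors' analysis code:

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis exponent of a 1-D series (e.g. stride intervals)."""
    y = np.cumsum(x - np.mean(x))            # integrated, mean-removed profile
    fluct = []
    for n in scales:
        n_win = len(y) // n
        f2 = []
        for w in range(n_win):
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    # alpha is the slope of log F(n) against log n
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

# White noise should give alpha near 0.5; persistent (1/f-like) series closer to 1.0.
print(dfa_alpha(np.random.default_rng(0).normal(size=1024)))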

  11. Real-time visualization of perforin nanopore assembly

    Science.gov (United States)

    Leung, Carl; Hodel, Adrian W.; Brennan, Amelia J.; Lukoyanova, Natalya; Tran, Sharon; House, Colin M.; Kondos, Stephanie C.; Whisstock, James C.; Dunstone, Michelle A.; Trapani, Joseph A.; Voskoboinik, Ilia; Saibil, Helen R.; Hoogenboom, Bart W.

    2017-05-01

    Perforin is a key protein of the vertebrate immune system. Secreted by cytotoxic lymphocytes as soluble monomers, perforin can self-assemble into oligomeric pores of 10-20 nm inner diameter in the membranes of virus-infected and cancerous cells. These large pores facilitate the entry of pro-apoptotic granzymes, thereby rapidly killing the target cell. To elucidate the pathways of perforin pore assembly, we carried out real-time atomic force microscopy and electron microscopy studies. Our experiments reveal that the pore assembly proceeds via a membrane-bound prepore intermediate state, typically consisting of up to approximately eight loosely but irreversibly assembled monomeric subunits. These short oligomers convert to more closely packed membrane nanopore assemblies, which can subsequently recruit additional prepore oligomers to grow the pore size.

  12. Development of the Macro Command Editing Executive System for Factory Workers-Oriented Programless Visual Inspection System

    Science.gov (United States)

    Anezaki, Takashi; Wakitani, Kouichi; Nakamura, Masatoshi; Kubo, Hiroyasu

    Because visual inspection systems are difficult to tune, they create many problems for the kaizen process. This results in increased development costs and time to assure that the inspection systems function properly. In order to improve inspection system development, we designed an easy-tuning system called a “Program-less” visual inspection system. An ROI macro command set was built, consisting of eight kinds of shape recognition macro commands together with decision, operation, and control commands. Furthermore, the macro command editing executive system was developed so that it is operated through the GUI alone, without editing source programs. The validity of the ROI macro command was demonstrated by its application at 488 places.

  13. Effects of Real-Time Visual Feedback on Pre-Service Teachers' Singing

    Science.gov (United States)

    Leong, S.; Cheng, L.

    2014-01-01

    This pilot study focuses on the use of real-time visual feedback technology (VFT) in vocal training. The empirical research has two aims: to ascertain the effectiveness of the real-time visual feedback software "Sing & See" in the vocal training of pre-service music teachers and the teachers' perspective on their experience with…

  14. Timing Is Everything: One Teacher's Exploration of the Best Time to Use Visual Media in a Science Unit

    Science.gov (United States)

    Drury, Debra

    2006-01-01

    Kids today are growing up with televisions, movies, videos and DVDs, so it's logical to assume that this type of media could be motivating and used to great effect in the classroom. But at what point should film and other visual media be used? Are there times in the inquiry process when showing a film or incorporating other visual media is more…

  15. The Advanced LIGO timing system

    International Nuclear Information System (INIS)

    Bartos, Imre; Factourovich, Maxim; Marka, Szabolcs; Marka, Zsuzsa; Raics, Zoltan; Bork, Rolf; Heefner, Jay; Schwinberg, Paul; Sigg, Daniel

    2010-01-01

    Gravitational wave detection using a network of detectors relies upon the precise time stamping of gravitational wave signals. The relative arrival times between detectors are crucial, e.g. in recovering the source direction, an essential step in using gravitational waves for multi-messenger astronomy. Due to the large size of gravitational wave detectors, timing at different parts of a given detector also needs to be highly synchronized. In general, the requirement toward the precision of timing is determined such that, upon detection, the deduced (astro-) physical results should not be limited by the precision of timing. The Advanced LIGO optical timing distribution system is designed to provide UTC-synchronized timing information for the Advanced LIGO detectors that satisfies the above criterion. The Advanced LIGO timing system has a modular structure, enabling quick and easy adaptation to the detector frame as well as possible changes or additions of components. It also includes a self-diagnostics system that enables the remote monitoring of the status of timing. After the description of the Advanced LIGO timing system, several tests are presented that demonstrate its precision and robustness.

  16. Psychophysical research progress of interocular suppression in amblyopic visual system

    OpenAIRE

    Jing-Jing Li; Yi Huang

    2016-01-01

    Some recent animal experiments and psychophysical studies indicate that patients with amblyopia have a structurally intact binocular visual system that is rendered functionally monocular due to suppression, and interocular suppression is a key mechanism in visual deficits experienced by patients with amblyopia. The aim of this review is to provide an overview of recent psychophysical findings that have investigated the important role of interocular suppression in amblyopia, the measurement an...

  17. COMBINING INDEPENDENT VISUALIZATION AND TRACKING SYSTEMS FOR AUGMENTED REALITY

    Directory of Open Access Journals (Sweden)

    P. Hübner

    2018-05-01

    Full Text Available The basic requirement for the successful deployment of a mobile augmented reality application is a reliable tracking system with high accuracy. Recently, a helmet-based inside-out tracking system which meets this demand has been proposed for self-localization in buildings. To realize an augmented reality application based on this tracking system, a display has to be added for visualization purposes. Therefore, the relative pose of this visualization platform with respect to the helmet has to be tracked. In the case of hand-held visualization platforms like smartphones or tablets, this can be achieved by means of image-based tracking methods like marker-based or model-based tracking. In this paper, we present two marker-based methods for tracking the relative pose between the helmet-based tracking system and a tablet-based visualization system. Both methods were implemented and comparatively evaluated in terms of tracking accuracy. Our results show that mobile inside-out tracking systems without integrated displays can easily be supplemented with a hand-held tablet as visualization device for augmented reality purposes.
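
    The pose chaining that such a setup relies on can be written down compactly with homogeneous transforms. The sketch below is a generic illustration with hypothetical numeric poses, not the authors' implementation:

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# T_world_helmet: from the helmet's inside-out tracking system.
# T_helmet_tablet: from marker-based tracking of the tablet relative to the helmet.
# (Both use placeholder identity rotations and made-up offsets here.)
T_world_helmet = pose(np.eye(3), [1.0, 2.0, 1.6])
T_helmet_tablet = pose(np.eye(3), [0.0, 0.4, -0.1])

# Chaining the two gives the tablet pose in world coordinates, which is what an
# augmented-reality renderer on the tablet ultimately needs.
T_world_tablet = T_world_helmet @ T_helmet_tablet

# Conversely, if both devices were tracked independently in the world frame, the
# relative pose would be recovered by inverting the helmet pose.
T_rel = np.linalg.inv(T_world_helmet) @ T_world_tablet
print(T_rel)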

  18. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    OpenAIRE

    Richard Chiou; Yongjin (james) Kwon; Tzu-Liang (bill) Tseng; Robin Kizirian; Yueh-Ting Yang

    2010-01-01

    This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote c...

  19. A Visual Environment for Real-Time Image Processing in Hardware (VERTIPH)

    Directory of Open Access Journals (Sweden)

    Johnston CT

    2006-01-01

    Full Text Available Real-time video processing is an image-processing application that is ideally suited to implementation on FPGAs. We discuss the strengths and weaknesses of a number of existing languages and hardware compilers that have been developed for specifying image processing algorithms on FPGAs. We propose VERTIPH, a new multiple-view visual language that avoids the weaknesses we identify. A VERTIPH design incorporates three different views, each tailored to a different aspect of the image processing system under development; an overall architectural view, a computational view, and a resource and scheduling view.

  20. Stereoscopy and the Human Visual System

    Science.gov (United States)

    Banks, Martin S.; Read, Jenny C. A.; Allison, Robert S.; Watt, Simon J.

    2012-01-01

    Stereoscopic displays have become important for many applications, including operation of remote devices, medical imaging, surgery, scientific visualization, and computer-assisted design. But the most significant and exciting development is the incorporation of stereo technology into entertainment: specifically, cinema, television, and video games. In these applications for stereo, three-dimensional (3D) imagery should create a faithful impression of the 3D structure of the scene being portrayed. In addition, the viewer should be comfortable and not leave the experience with eye fatigue or a headache. Finally, the presentation of the stereo images should not create temporal artifacts like flicker or motion judder. This paper reviews current research on stereo human vision and how it informs us about how best to create and present stereo 3D imagery. The paper is divided into four parts: (1) getting the geometry right, (2) depth cue interactions in stereo 3D media, (3) focusing and fixating on stereo images, and (4) how temporal presentation protocols affect flicker, motion artifacts, and depth distortion. PMID:23144596

  1. The visual and remote analyzing software for a Linux-based radiation information acquisition system

    International Nuclear Information System (INIS)

    Fan Zhaoyang; Zhang Li; Chen Zhiqiang

    2003-01-01

    A visual and remote analyzing software package for radiation information, which has the merit of universality and credibility, is developed based on the Linux operating system and the TCP/IP network protocol. The software is applied to the visual debugging and real-time monitoring of the high-speed radiation information acquisition system, and safe, direct and timely control can be assured. The paper expounds the design approach of the software, which provides a reference for other software with the same purpose in similar systems
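
    A minimal sketch of the client side of such a TCP/IP monitoring link is shown below; the host name, port and fixed-size record layout are hypothetical assumptions, since the paper does not specify its wire format:

```python
import socket
import struct

# Hypothetical layout: the acquisition system streams 4-byte unsigned counts.
HOST, PORT, RECORD = "daq-server", 5000, struct.Struct("<I")

def monitor(n_records=1000):
    """Connect to the acquisition host and read count records for live display."""
    with socket.create_connection((HOST, PORT)) as sock:
        buf = b""
        for _ in range(n_records):
            while len(buf) < RECORD.size:
                chunk = sock.recv(4096)
                if not chunk:           # connection closed by the acquisition side
                    return
                buf += chunk
            (count,), buf = RECORD.unpack(buf[:RECORD.size]), buf[RECORD.size:]
            print(count)                # a real client would update a plot instead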

  2. The effects of acute bout of cycling on auditory & visual reaction times.

    Science.gov (United States)

    Ashnagar, Zinat; Shadmehr, Azadeh; Jalaei, Shohreh

    2015-04-01

    The purpose of this study was to investigate the effects of an acute bout of cycling exercise on auditory choice reaction time, visual choice reaction time, auditory complex choice reaction time and visual complex choice reaction time. 29 subjects were randomly divided into experimental and control groups. The subjects of the experimental group carried out a single bout of submaximal cycling exercise. The auditory choice reaction time, visual choice reaction time, auditory complex choice reaction time and visual complex choice reaction times were measured before and after the exercise session. The reaction time tests were taken from the subjects by using Speed Anticipation and Reaction Tester (SART) software. In the control group, the reaction time tests were performed by the subjects with an interval of 30 min. In the experimental group, the percentage changes of mean auditory choice and complex choice reaction time values were significantly decreased in comparison with the control group (P < 0.05). Although the percentage changes of mean visual choice and complex choice reaction times were decreased after the exercise, the changes were not significant (P > 0.05). An acute bout of cycling exercise improved the speed of auditory and visual reaction times in healthy young females. However, these positive changes were significantly observed only in the auditory reaction time tests in comparison with the control group. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Visualization and natural control systems for microscopy

    DEFF Research Database (Denmark)

    Taylor, Russell M.; Borland, David; Brooks, Frederick P.

    2005-01-01

    This chapter presents these microscope systems, along with brief descriptions of the science experiments driving the development of each system. Beginning with a discussion of the philosophy that has driven the Nanoscale Science Research Group (NSRG) and the methods used, the chapter describes th...

  4. Visual Cues for an Adaptive Expert System.

    Science.gov (United States)

    Miller, Helen B.

    NCR (National Cash Register) Corporation is pursuing opportunities to make their point of sale (POS) terminals easy to use and easy to learn. To approach the goal of making the technology invisible to the user, NCR has developed an adaptive expert prototype system for a department store POS operation. The structure for the adaptive system, the…

  5. Quantifying the Time Course of Visual Object Processing Using ERPs: It's Time to Up the Game

    Science.gov (United States)

    Rousselet, Guillaume A.; Pernet, Cyril R.

    2011-01-01

    Hundreds of studies have investigated the early ERPs to faces and objects using scalp and intracranial recordings. The vast majority of these studies have used uncontrolled stimuli, inappropriate designs, peak measurements, poor figures, and poor inferential and descriptive group statistics. These problems, together with a tendency to discuss any effect p < 0.05 as demonstrating that condition A > condition B, limit what can be concluded from this literature. Here we describe the main limitations of face and object ERP research and suggest alternative strategies to move forward. The problems plague intracranial and surface ERP studies, but also studies using more advanced techniques – e.g., source space analyses and measurements of network dynamics, as well as many behavioral, fMRI, TMS, and LFP studies. In essence, it is time to stop amassing binary results and start using single-trial analyses to build models of visual perception. PMID:21779262

  6. Reading impairment in schizophrenia: dysconnectivity within the visual system.

    Science.gov (United States)

    Vinckier, Fabien; Cohen, Laurent; Oppenheim, Catherine; Salvador, Alexandre; Picard, Hernan; Amado, Isabelle; Krebs, Marie-Odile; Gaillard, Raphaël

    2014-01-01

    Patients with schizophrenia suffer from perceptual visual deficits. It remains unclear whether those deficits result from an isolated impairment of a localized brain process or from a more diffuse long-range dysconnectivity within the visual system. We aimed to explore, with a reading paradigm, the functioning of both ventral and dorsal visual pathways and their interaction in schizophrenia. Patients with schizophrenia and control subjects were studied using event-related functional MRI (fMRI) while reading words that were progressively degraded through word rotation or letter spacing. Reading intact or minimally degraded single words involves mainly the ventral visual pathway. Conversely, reading in non-optimal conditions involves both the ventral and the dorsal pathway. The reading paradigm thus allowed us to study the functioning of both pathways and their interaction. Behaviourally, patients with schizophrenia were selectively impaired at reading highly degraded words. While fMRI activation level was not different between patients and controls, functional connectivity between the ventral and dorsal visual pathways increased with word degradation in control subjects, but not in patients. Moreover, there was a negative correlation between the patients' behavioural sensitivity to stimulus degradation and dorso-ventral connectivity. This study suggests that perceptual visual deficits in schizophrenia could be related to dysconnectivity between dorsal and ventral visual pathways. © 2013 Published by Elsevier Ltd.

  7. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

    Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show that, we adopted an auditory paradigm developed by Sussman, E., Ritter, W., and Vaughan, H. G. Jr. (1998). Predictability of stimulus deviance and the mismatch negativity. NeuroReport, 9, 4167-4170, Sussman, E., and Gumenyuk, V. (2005). Organization of sequential sounds in auditory memory. NeuroReport, 16, 1519-1523 to the visual domain by presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli being of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparing the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in human visual sensory system, revealed that visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity being present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity. 2010 Elsevier B.V. All rights reserved.
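
    The two stimulus schedules contrasted above are easy to make concrete; the short sketch below (generic Python with an arbitrary seed and block count, not the authors' stimulus code) generates a fixed SSSSD sequence and a randomized sequence with the same 20% deviant probability:

```python
import random

def fixed_sequence(n_blocks):
    """SSSSD repeated: the deviant position is fully predictable (large-scale regularity)."""
    return "SSSSD" * n_blocks

def randomized_sequence(n_stimuli, p_deviant=0.2, seed=0):
    """Same 20% deviant probability overall, but with no sequential regularity."""
    rng = random.Random(seed)
    return "".join("D" if rng.random() < p_deviant else "S" for _ in range(n_stimuli))

print(fixed_sequence(3))          # SSSSDSSSSDSSSSD
print(randomized_sequence(15))    # ~20% deviants, positions vary with the seed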

  8. Visual outcome after part time patching therapy in amblyopic children

    International Nuclear Information System (INIS)

    Jalis, D. M.; Butt, I. A.; Waqas, M.

    2006-01-01

    Records of patients attending the pediatric ophthalmology clinic run by the NGO Vision and Literacy International (Canada) in Rawalpindi were examined. Patients who completed a follow-up period of at least 6 months were included. A total of 115 patients were found to be affected by amblyopia. Only fifty-two of the patients fulfilled the inclusion criteria. All the patients had best-corrected visual acuity checked by methods appropriate to age. Cycloplegic refraction was performed in every case. Fundus and slit-lamp examination was done in every case to rule out organic pathology of the retina and optic nerve. Glasses were prescribed where required and a patching schedule prescribed. Patients were examined every six weeks for at least six months. On every visit vision was recorded and cycloplegic refraction repeated where indicated. Out of a total of 2393 patients, 115 were found amblyopic. Of these, 63 patients (54.78%) were lost to follow-up. The remaining 52 (46%) completed at least six months of treatment. Of these, twenty-three were females and twenty-nine were males. Ages ranged from 0-16 years. Thirteen patients had strabismus and the rest had anisometropic amblyopia. Hypermetropia was present in 44 patients (84.6%), 6 had no significant refractive error (11.53%), and two patients were myopes (3.8%). Most of the patients exhibited a varying degree of improvement; 10 did not improve at all (19.2%) and 42 showed improved visual acuity (80.76%). A shift towards the right was observed. (author)

  9. Binding across space and time in visual working memory.

    Science.gov (United States)

    Karlsen, Paul Johan; Allen, Richard J; Baddeley, Alan D; Hitch, Graham J

    2010-04-01

    Recent studies of visual short-term memory have suggested that the binding of features such as color and shape into remembered objects is relatively automatic. A series of seven experiments broadened this investigation by comparing the immediate retention of colored shapes with performance when color and shape were separated either spatially or temporally, with participants required actively to form the bound object. Attentional load was manipulated with a demanding concurrent task, and retention in working memory was then tested using a single recognition probe. Both spatial and temporal separation of features tended to impair performance, as did the concurrent task. There was, however, no evidence for greater attentional disruption of performance as a result of either spatial or temporal separation of features. Implications for the process of binding in visual working memory are discussed, and an interpretation is offered in terms of the episodic buffer component of working memory, which is assumed to be a passive store capable of holding bound objects, but not of performing the binding.

  10. Audio-visual speech timing sensitivity is enhanced in cluttered conditions.

    Directory of Open Access Journals (Sweden)

    Warrick Roseboom

    2011-04-01

    Full Text Available Events encoded in separate sensory modalities, such as audition and vision, can seem to be synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech, and for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and to seem synchronous as a consequence. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.

  11. Interactive Web-based Visualization of Atomic Position-time Series Data

    Science.gov (United States)

    Thapa, S.; Karki, B. B.

    2017-12-01

    Extracting and interpreting the information contained in large sets of time-varying three dimensional positional data for the constituent atoms of simulated material is a challenging task. We have recently implemented a web-based visualization system to analyze the position-time series data extracted from the local or remote hosts. It involves a pre-processing step for data reduction, which involves skipping uninteresting parts of the data uniformly (at full atomic configuration level) or non-uniformly (at atomic species level or individual atom level). An atomic configuration snapshot is rendered using the ball-stick representation and can be animated by rendering successive configurations. The entire atomic dynamics can be captured as trajectories by rendering the atomic positions at all time steps together as points. The trajectories can be manipulated at both species and atomic levels so that we can focus on one or more trajectories of interest, and can also be superimposed with the instantaneous atomic structure. The implementation was done using WebGL and Three.js for graphical rendering, HTML5 and Javascript for GUI, and Elasticsearch and JSON for data storage and retrieval within the Grails Framework. We have applied our visualization system to the simulation datasets for proton-bearing forsterite (Mg2SiO4) - an abundant mineral of Earth's upper mantle. Visualization reveals that protons (hydrogen ions) incorporated as interstitials are much more mobile than protons substituting the host Mg and Si cation sites. The proton diffusion appears to be anisotropic with high mobility along the x-direction, showing limited discrete jumps in the other two directions.
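
    The uniform and non-uniform data-reduction step described above amounts to simple array slicing once the position-time series is loaded. A small NumPy sketch follows, with made-up array sizes and atom indices standing in for the real forsterite trajectory:

```python
import numpy as np

# positions: array of shape (n_steps, n_atoms, 3) parsed from the simulation output
# (random data stands in for a real MD trajectory here).
positions = np.random.default_rng(0).random((10000, 300, 3))

# Uniform reduction: keep every k-th full configuration for animation.
every_k = positions[::50]                      # shape (200, 300, 3)

# Non-uniform reduction: keep a finer time resolution only for atoms of interest,
# e.g. the mobile interstitial protons, identified here by hypothetical indices.
proton_idx = np.array([12, 47, 205])
proton_tracks = positions[::5][:, proton_idx]  # shape (2000, 3, 3)

# A single atom's trajectory, rendered as a point cloud of its positions over time.
one_trajectory = positions[:, proton_idx[0], :]   # shape (10000, 3)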

  12. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as "real-time systems" because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  13. A concurrent visualization system for large-scale unsteady simulations. Parallel vector performance on an NEC SX-4

    International Nuclear Information System (INIS)

    Takei, Toshifumi; Doi, Shun; Matsumoto, Hideki; Muramatsu, Kazuhiro

    2000-01-01

    We have developed a concurrent visualization system RVSLIB (Real-time Visual Simulation Library). This paper shows the effectiveness of the system when it is applied to large-scale unsteady simulations, for which the conventional post-processing approach may no longer work, on high-performance parallel vector supercomputers. The system performs almost all of the visualization tasks on a computation server and uses compressed visualized image data for efficient communication between the server and the user terminal. We have introduced several techniques, including vectorization and parallelization, into the system to minimize the computational costs of the visualization tools. The performance of RVSLIB was evaluated by using an actual CFD code on an NEC SX-4. The computational time increase due to the concurrent visualization was at most 3% for a smaller (1.6 million) grid and less than 1% for a larger (6.2 million) one. (author)

  14. Time-resolved influences of functional DAT1 and COMT variants on visual perception and post-processing.

    Directory of Open Access Journals (Sweden)

    Stephan Bender

    Full Text Available BACKGROUND: Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of visual processing in a contingent negative variation (CNV) task. METHODS: 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed. RESULTS: Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500-1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced. CONCLUSIONS: Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems.

  15. Wearing weighted backpack dilates subjective visual duration: The role of functional linkage between weight experience and visual timing

    Directory of Open Access Journals (Sweden)

    Lina eJia

    2015-09-01

    Full Text Available Bodily state plays a critical role in our perception. In the present study, we asked whether and how the bodily experience of weight influences time perception. Participants judged durations of a picture (a backpack or a trolley bag) presented on the screen, while wearing backpacks of different weights or no backpack. The results showed that the subjective duration of the backpack picture was dilated when participants wore a medium-weighted backpack relative to an empty backpack or no backpack, regardless of the identity (e.g., color) of the visual backpack. However, the duration dilation was not manifested for the picture of the trolley bag. These findings suggest that weight experience modulates visual duration estimation through the linkage between the worn backpack and the to-be-estimated visual target. The congruent action affordance between the worn backpack and visual inputs plays a critical role in the functional linkage between inner experience and time perception. We interpreted our findings within the framework of embodied time perception.

  16. VISUAL: a software package for plotting data in the RADHEAT-V4 code system

    International Nuclear Information System (INIS)

    Sasaki, Toshihiko; Yamano, Naoki

    1984-03-01

    In this report, the features, capabilities and constitution of the VISUAL software package are presented. One feature is that VISUAL provides a versatile graphic display tool for plotting a wide variety of data from the RADHEAT-V4 code system. Another is that it enables a user to easily handle execution data in the Conversational Management Mode, named "CMM". The program adopts an adjustable dimension system to increase its flexibility. VISUAL generates two-dimensional drawings, contour line maps and three-dimensional drawings on TSS (Time Sharing System) digital graphic equipment, an NLP (Nihongo Laser Printer) or COM (Computer Output Microfilm). Calculated and experimental data in a DATA-POOL can easily be displayed using these functions. The purpose of this report is to provide sufficient information to enable a user to use VISUAL profitably. (author)

  17. Unification of three linear models for the transient visual system

    NARCIS (Netherlands)

    Brinker, den A.C.

    1989-01-01

    Three different linear filters are considered as a model describing the experimentally determined triphasic impulse responses of discs. These impulse responses are associated with the transient visual system. Each model reveals a different feature of the system. Unification of the models is

  18. COALA-System for Visual Representation of Cryptography Algorithms

    Science.gov (United States)

    Stanisavljevic, Zarko; Stanisavljevic, Jelena; Vuletic, Pavle; Jovanovic, Zoran

    2014-01-01

    Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition typically through visual representation and simulation of complex algorithms and mechanisms or hardware systems that are often not available to the educational institutions. This paper…

  19. The People's Time Sharing System

    NARCIS (Netherlands)

    Tanenbaum, A.S.; Benson, W.H.

    1973-01-01

    A set of programs running under a multiprogramming batch operating system on the CDC 6600 which provide remote users with a time sharing service is described. The basis for the system is the ability of a user program to create job control statements during execution, thereby tricking the operating

  20. An Artificial Flexible Visual Memory System Based on an UV-Motivated Memristor.

    Science.gov (United States)

    Chen, Shuai; Lou, Zheng; Chen, Di; Shen, Guozhen

    2018-02-01

    For the mimicry of human visual memory, a prominent challenge is how to detect and store image information by electronic devices, which demands a multifunctional integration to sense light like eyes and to memorize image information like the brain by transforming optical signals to electrical signals that can be recognized by electronic devices. Although current image sensors can perceive simple images in real time, the image information fades away when the external image stimuli are removed. The deficiency between the state-of-the-art image sensors and visual memory system inspires the logical integration of image sensors and memory devices to realize the sensing and memory process toward light information for the bionic design of human visual memory. Hence, a facile architecture is designed to construct an artificial flexible visual memory system by employing a UV-motivated memristor. The visual memory arrays can realize the detection and memory process of UV light distribution with a patterned image for long-term retention, and the stored image information can be reset by a negative voltage sweep and reprogrammed to the same or another image distribution, which proves its effective reusability. These results provide new opportunities for the mimicry of human visual memory and enable the flexible visual memory device to be applied in future wearable electronics, electronic eyes, multifunctional robotics, and auxiliary equipment for the visually handicapped. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. The time course of the influence of colour terms on visual processing

    OpenAIRE

    Forder, Lewis

    2016-01-01

    This thesis explores whether colour terms (e.g., “red”, “blue”, “purple”, etc.) influence visual processing of colour, and if so, the time course of any effect. Broadly, this issue relates to debate concerning whether language affects the way we perceive the world (i.e., the theory of linguistic relativity). Three of the experiments conducted used the event-related potential method (ERP), taking electrophysiological measurements of visual processing and visual cognition in human participants....

  2. VisTool: A user interface and visualization development system

    DEFF Research Database (Denmark)

    Xu, Shangjin

    … However, in Software Engineering, software engineers who develop user interfaces do not follow it. In many cases, it is desirable to use graphical presentations, because a graphical presentation gives a better overview than text forms, and can improve task efficiency and user satisfaction. … However, it is more difficult to follow the classical usability approach for graphical presentation development. These difficulties result from the fact that designers cannot implement user interface with interactions and real data. We developed VisTool – a user interface and visualization development system – to simplify user interface development. VisTool allows user interface development without real programming. With VisTool a designer assembles visual objects (e.g. textboxes, ellipse, etc.) to visualize database contents. In VisTool, visual properties (e.g. color, position, etc.) can be formulas …

  3. Visualization tool. 3DAVS and polarization-type VR system

    International Nuclear Information System (INIS)

    Takeda, Yasuhiro; Ueshima, Yutaka

    2003-01-01

    In the visualization of simulation data across advanced research fields, the material used in reports and presentations of research results has largely remained at the stage of still pictures or 2-dimensional animations, in spite of the recent abundance of visualization software. With recent progress in the computational environment, however, ever more complicated phenomena can be computed, so the results increasingly need to be presented in a comprehensible and intelligible form. This inevitably calls for animation rather than still pictures, and for 3-dimensional (virtual reality) display rather than 2-dimensional display. In this report, two visualization tools, 3DAVS and the Polarization-Type VR system, are described as methods of data expression after visualization processing. (author)

  4. Visual gravitational motion and the vestibular system in humans

    Directory of Open Access Journals (Sweden)

    Francesco eLacquaniti

    2013-12-01

    Full Text Available The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity.

  5. Visual gravitational motion and the vestibular system in humans.

    Science.gov (United States)

    Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka

    2013-12-26

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity.

  6. Experience-independent development of the hamster circadian visual system.

    Directory of Open Access Journals (Sweden)

    August Kampf-Lassin

    2011-04-01

    Full Text Available Experience-dependent functional plasticity is a hallmark of the primary visual system, but it is not known if analogous mechanisms govern development of the circadian visual system. Here we investigated molecular, anatomical, and behavioral consequences of complete monocular light deprivation during extended intervals of postnatal development in Syrian hamsters. Hamsters were raised in constant darkness and opaque contact lenses were applied shortly after eye opening and prior to the introduction of a light-dark cycle. In adulthood, previously-occluded eyes were challenged with visual stimuli. Whereas image-formation and motion-detection were markedly impaired by monocular occlusion, neither entrainment to a light-dark cycle, nor phase-resetting responses to shifts in the light-dark cycle were affected by prior monocular deprivation. Cholera toxin-b subunit fluorescent tract-tracing revealed that in monocularly-deprived hamsters the density of fibers projecting from the retina to the suprachiasmatic nucleus (SCN) was comparable regardless of whether such fibers originated from occluded or exposed eyes. In addition, long-term monocular deprivation did not attenuate light-induced c-Fos expression in the SCN. Thus, in contrast to the thalamocortical projections of the primary visual system, retinohypothalamic projections terminating in the SCN develop into normal adult patterns and mediate circadian responses to light largely independent of light experience during development. The data identify a categorical difference in the requirement for light input during postnatal development between circadian and non-circadian visual systems.

  7. Real-time visual effects for game programming

    CERN Document Server

    Kim, Chang-Hun; Kim, Soo-Kyun; Kang, Shin-Jin

    2015-01-01

    This book introduces the latest visual effects (VFX) techniques that can be applied to game programming. The usefulness of the physicality-based VFX techniques, such as water, fire, smoke, and wind, has been proven through active involvement and utilization in movies and images. However, they have yet to be extensively applied in the game industry, due to the high technical barriers. Readers of this book can learn not only the theories about the latest VFX techniques, but also the methodology of game programming, step by step. The practical VFX processing techniques introduced in this book will provide very helpful information to game programmers. Due to the lack of instructional books about VFX-related game programming, the demand for knowledge regarding these high-tech VFXs might be very high.

  8. Moderate perinatal thyroid hormone insufficiency alters visual system function in adult rats.

    Science.gov (United States)

    Boyes, William K; Degn, Laura; George, Barbara Jane; Gilbert, Mary E

    2018-04-21

    Thyroid hormone (TH) is critical for many aspects of neurodevelopment and can be disrupted by a variety of environmental contaminants. Sensory systems, including audition and vision, are vulnerable to TH insufficiencies, but little data are available on visual system development at less than severe levels of TH deprivation. The goal of the current experiments was to explore dose-response relations between graded levels of TH insufficiency during development and the visual function of adult offspring. Pregnant Long Evans rats received 0 or 3 ppm (Experiment 1), or 0, 1, 2, or 3 ppm (Experiment 2) of propylthiouracil (PTU), an inhibitor of thyroid hormone synthesis, in drinking water from gestation day (GD) 6 to postnatal day (PN) 21. Treatment with PTU caused dose-related reductions of serum T4, with recovery on termination of exposure, and euthyroidism by the time of visual function testing. Tests of retinal (electroretinograms; ERGs) and visual cortex (visual evoked potentials; VEPs) function were conducted in adult offspring. Dark-adapted ERG a-waves, reflecting rod photoreceptors, were increased in amplitude by PTU. Light-adapted green flicker ERGs, reflecting M-cone photoreceptors, were reduced by PTU exposure. UV-flicker ERGs, reflecting S-cones, were not altered. Pattern-elicited VEPs were significantly reduced by 2 and 3 ppm PTU across a range of stimulus contrast values. The slope of VEP amplitude-log contrast functions was reduced by PTU, suggesting impaired visual contrast gain. Visual contrast gain primarily reflects function of visual cortex, and is responsible for adjusting sensitivity of perceptual mechanisms in response to changing visual scenes. The results indicate that moderate levels of pre- and post-natal TH insufficiency led to alterations in visual function of adult rats, including both retinal and visual cortex sites of dysfunction. Copyright © 2018. Published by Elsevier B.V.

  9. Real-time detection and discrimination of visual perception using electrocorticographic signals

    Science.gov (United States)

    Kapeller, C.; Ogawa, H.; Schalk, G.; Kunii, N.; Coon, W. G.; Scharinger, J.; Guger, C.; Kamada, K.

    2018-06-01

    Objective. Several neuroimaging studies have demonstrated that the ventral temporal cortex contains specialized regions that process visual stimuli. This study investigated the spatial and temporal dynamics of electrocorticographic (ECoG) responses to different types and colors of visual stimulation that were presented to four human participants, and demonstrated a real-time decoder that detects and discriminates responses to untrained natural images. Approach. ECoG signals from the participants were recorded while they were shown colored and greyscale versions of seven types of visual stimuli (images of faces, objects, bodies, line drawings, digits, and kanji and hiragana characters), resulting in 14 classes for discrimination (experiment I). Additionally, a real-time system asynchronously classified ECoG responses to faces, kanji and black screens presented via a monitor (experiment II), or to natural scenes (i.e. the face of an experimenter, natural images of faces and kanji, and a mirror) (experiment III). Outcome measures in all experiments included the discrimination performance across types based on broadband γ activity. Main results. Experiment I demonstrated an offline classification accuracy of 72.9% when discriminating among the seven types (without color separation). Further discrimination of grey versus colored images reached an accuracy of 67.1%. Discriminating all colors and types (14 classes) yielded an accuracy of 52.1%. In experiment II and III, the real-time decoder correctly detected 73.7% responses to face, kanji and black computer stimuli and 74.8% responses to presented natural scenes. Significance. Seven different types and their color information (either grey or color) could be detected and discriminated using broadband γ activity. Discrimination performance maximized for combined spatial-temporal information. The discrimination of stimulus color information provided the first ECoG-based evidence for color-related population
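
    The record above does not include implementation details. As a rough, generic illustration of the kind of pipeline such studies describe (broadband γ log-power features feeding a linear classifier), the following Python sketch is offered; the sampling rate, band edges, epoch length and use of LDA are assumptions, not the authors' decoder.

      # Generic sketch: broadband gamma (70-170 Hz) log-power features + LDA classifier.
      # Not the decoder from this study; sampling rate, band edges and epoching are assumed.
      import numpy as np
      from scipy.signal import butter, filtfilt
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      FS = 1200  # Hz, assumed ECoG sampling rate

      def gamma_log_power(epochs):
          """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels) features."""
          b, a = butter(4, [70 / (FS / 2), 170 / (FS / 2)], btype="band")
          filtered = filtfilt(b, a, epochs, axis=-1)
          return np.log(np.mean(filtered ** 2, axis=-1) + 1e-12)

      # Toy data standing in for epoched ECoG responses to the stimulus classes.
      rng = np.random.default_rng(0)
      X = gamma_log_power(rng.standard_normal((140, 64, FS)))   # 140 trials, 64 channels, 1 s
      y = rng.integers(0, 14, size=140)                         # 14 stimulus classes

      clf = LinearDiscriminantAnalysis().fit(X[:100], y[:100])
      print("held-out accuracy:", clf.score(X[100:], y[100:]))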

  10. Watch the lights. A visual communication system.

    Science.gov (United States)

    Rahtz, S K

    1989-01-01

    The trend for hospitals to market their emergency care services results in a greater demand on radiology departments, states Ms. Rahtz. Radiology must provide efficient service to both departments, even when it is difficult to predict patient flow in the emergency care center. Improved communication is the key, and a light system installed at Morton Plant Hospital is one alternative for solving the problem.

  11. Visuals and Visualisation of Human Body Systems

    Science.gov (United States)

    Mathai, Sindhu; Ramadas, Jayashree

    2009-01-01

    This paper explores the role of diagrams and text in middle school students' understanding and visualisation of human body systems. We develop a common framework based on structure and function to assess students' responses across diagram and verbal modes. Visualisation is defined in terms of understanding transformations on structure and relating…

  12. Online Voting System Based on Image Steganography and Visual Cryptography

    Directory of Open Access Journals (Sweden)

    Biju Issac

    2017-01-01

    Full Text Available This paper discusses the implementation of an online voting system based on image steganography and visual cryptography. The system was implemented in Java EE on a web-based interface, with a MySQL database server and the Glassfish application server as the backend. After considering the requirements of an online voting system, current technologies for electronic voting schemes in the published literature were examined. Next, the cryptographic and steganography techniques best suited to the requirements of the voting system were chosen, and the software was implemented. We have incorporated in our system techniques such as a password-hash-based scheme, visual cryptography, F5 image steganography and a threshold decryption cryptosystem. The analysis, design and implementation phases of the software development of the voting system are discussed in detail. We also used a questionnaire survey and performed user acceptance testing of the system.

  13. Iterative development of visual control systems in a research vivarium.

    Science.gov (United States)

    Bassuk, James A; Washington, Ida M

    2014-01-01

    The goal of this study was to test the hypothesis that reintroduction of Continuous Performance Improvement (CPI) methodology, a lean approach to management at Seattle Children's (Hospital, Research Institute, Foundation), would facilitate engagement of vivarium employees in the development and sustainment of a daily management system and a work-in-process board. Such engagement was implemented through reintroduction of aspects of the Toyota Production System. Iterations of a Work-In-Process Board were generated using Shewhart's Plan-Do-Check-Act process improvement cycle. Specific attention was given to the importance of detecting and preventing errors through assessment of the following 5 levels of quality: Level 1, customer inspects; Level 2, company inspects; Level 3, work unit inspects; Level 4, self-inspection; Level 5, mistake proofing. A functioning iteration of a Mouse Cage Work-In-Process Board was eventually established using electronic data entry, an improvement that increased the quality level from 1 to 3 while reducing wasteful steps, handoffs and queues. A visual workplace was realized via a daily management system that included a Work-In-Process Board, a problem solving board and two Heijunka boards. One Heijunka board tracked cage changing as a function of a biological kanban, which was validated via ammonia levels. A 17% reduction in cage changing frequency provided vivarium staff with additional time to support Institute researchers in their mutual goal of advancing cures for pediatric diseases. Cage washing metrics demonstrated an improvement in the flow continuum in which a traditional batch and queue push system was replaced with a supermarket-type pull system. Staff engagement during the improvement process was challenging and is discussed. The collective data indicate that the hypothesis was found to be true. The reintroduction of CPI into daily work in the vivarium is consistent with the 4P Model of the Toyota Way and selected Principles

  14. Iterative development of visual control systems in a research vivarium.

    Directory of Open Access Journals (Sweden)

    James A Bassuk

    Full Text Available The goal of this study was to test the hypothesis that reintroduction of Continuous Performance Improvement (CPI) methodology, a lean approach to management at Seattle Children's (Hospital, Research Institute, Foundation), would facilitate engagement of vivarium employees in the development and sustainment of a daily management system and a work-in-process board. Such engagement was implemented through reintroduction of aspects of the Toyota Production System. Iterations of a Work-In-Process Board were generated using Shewhart's Plan-Do-Check-Act process improvement cycle. Specific attention was given to the importance of detecting and preventing errors through assessment of the following 5 levels of quality: Level 1, customer inspects; Level 2, company inspects; Level 3, work unit inspects; Level 4, self-inspection; Level 5, mistake proofing. A functioning iteration of a Mouse Cage Work-In-Process Board was eventually established using electronic data entry, an improvement that increased the quality level from 1 to 3 while reducing wasteful steps, handoffs and queues. A visual workplace was realized via a daily management system that included a Work-In-Process Board, a problem solving board and two Heijunka boards. One Heijunka board tracked cage changing as a function of a biological kanban, which was validated via ammonia levels. A 17% reduction in cage changing frequency provided vivarium staff with additional time to support Institute researchers in their mutual goal of advancing cures for pediatric diseases. Cage washing metrics demonstrated an improvement in the flow continuum in which a traditional batch and queue push system was replaced with a supermarket-type pull system. Staff engagement during the improvement process was challenging and is discussed. The collective data indicate that the hypothesis was found to be true. The reintroduction of CPI into daily work in the vivarium is consistent with the 4P Model of the Toyota Way and

  15. Main injector synchronous timing system

    International Nuclear Information System (INIS)

    Blokland, W.; Steimel, J.

    1998-01-01

    The Synchronous Timing System is designed to provide sub-nanosecond timing to instrumentation during the acceleration of particles in the Main Injector. Increased energy of the beam particles leads to a small but significant increase in speed, reducing the time it takes to complete a full turn of the ring by 61 nanoseconds (or more than 3 rf buckets). In contrast, the reference signal, used to trigger instrumentation and transmitted over a cable, has a constant group delay. This difference leads to a phase slip during the ramp and prevents instrumentation such as dampers from properly operating without additional measures. The Synchronous Timing System corrects for this phase slip as well as signal propagation time changes due to temperature variations. A module at the LLRF system uses a 1.2 Gbit/s G-Link chip to transmit the rf clock and digital data (e.g. the current frequency) over a single mode fiber around the ring. Fiber optic couplers at service buildings split off part of this signal for a local module which reconstructs a synchronous beam reference signal. This paper describes the background, design and expected performance of the Synchronous Timing System. copyright 1998 American Institute of Physics
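
    As a quick plausibility check of the figures quoted above (a worked example only; the ~53 MHz Main Injector rf frequency is an assumption, not stated in the record):

      # Worked check of the phase-slip figure quoted above.
      # The ~53 MHz rf frequency is an assumption; only the 61 ns slip comes from the record.
      rf_frequency_hz = 53e6
      bucket_length_s = 1.0 / rf_frequency_hz      # one rf bucket ~ 18.9 ns
      phase_slip_s = 61e-9                         # reduction in revolution time during the ramp

      print(f"rf bucket length: {bucket_length_s * 1e9:.1f} ns")
      print(f"phase slip: {phase_slip_s / bucket_length_s:.1f} rf buckets")  # ~3.2 -> 'more than 3'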

  16. Main injector synchronous timing system

    International Nuclear Information System (INIS)

    Blokland, Willem; Steimel, James

    1998-01-01

    The Synchronous Timing System is designed to provide sub-nanosecond timing to instrumentation during the acceleration of particles in the Main Injector. Increased energy of the beam particles leads to a small but significant increase in speed, reducing the time it takes to complete a full turn of the ring by 61 nanoseconds (or more than 3 rf buckets). In contrast, the reference signal, used to trigger instrumentation and transmitted over a cable, has a constant group delay. This difference leads to a phase slip during the ramp and prevents instrumentation such as dampers from properly operating without additional measures. The Synchronous Timing System corrects for this phase slip as well as signal propagation time changes due to temperature variations. A module at the LLRF system uses a 1.2 Gbit/s G-Link chip to transmit the rf clock and digital data (e.g. the current frequency) over a single mode fiber around the ring. Fiber optic couplers at service buildings split off part of this signal for a local module which reconstructs a synchronous beam reference signal. This paper describes the background, design and expected performance of the Synchronous Timing System

  17. Satellite Imagery Assisted Road-Based Visual Navigation System

    Science.gov (United States)

    Volkova, A.; Gibbens, P. W.

    2016-06-01

    There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. An autonomous feature-based visual system presented in this work offers a novel integral approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features from Google Earth* imagery to build a feature database. The same algorithm then detects features in the on-board camera's video stream. On one level this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level it correlates them with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes of the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery and another provider can be used
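
    The record does not state which feature detector is used. As a minimal, hedged illustration of matching features between a geo-referenced satellite tile and an on-board camera frame, the sketch below uses ORB with brute-force Hamming matching in OpenCV; the detector choice and file names are placeholders, not the paper's implementation.

      # Minimal sketch of matching features between a satellite tile and a camera frame.
      # ORB + brute-force Hamming matching is an illustrative choice, not the paper's detector;
      # file names are placeholders.
      import cv2

      satellite = cv2.imread("satellite_tile.png", cv2.IMREAD_GRAYSCALE)
      frame = cv2.imread("onboard_frame.png", cv2.IMREAD_GRAYSCALE)

      orb = cv2.ORB_create(nfeatures=2000)
      kp_sat, des_sat = orb.detectAndCompute(satellite, None)
      kp_cam, des_cam = orb.detectAndCompute(frame, None)

      matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
      matches = sorted(matcher.match(des_cam, des_sat), key=lambda m: m.distance)

      # Good matches tie pixels in the camera frame to geo-referenced pixels in the tile,
      # which is the information a localization filter (e.g. a SLAM back end) would consume.
      print(f"{len(matches)} candidate correspondences; best distance {matches[0].distance:.0f}")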

  18. The time course of working memory effects on visual attention differs depending on memory type

    NARCIS (Netherlands)

    Dombrowe, I.; Olivers, C.N.L.; Donk, M.

    2010-01-01

    Previous work has generated inconsistent results with regard to what extent working memory (WM) content guides visual attention. Some studies found effects of easy to verbalize stimuli, whereas others only found an influence of visual memory content. To resolve this, we compared the time courses of

  19. Television Viewing at Home: Age Trends in Visual Attention and Time with TV.

    Science.gov (United States)

    Anderson, Daniel R.; And Others

    1986-01-01

    Describes age trends in television viewing time and visual attention of children and adults videotaped in their homes for 10-day periods. Shows that the increase in visual attention to television during the preschool years is consistent with the theory that television program comprehensibility is a major determinant of attention in young children.…

  20. The effects of action video game experience on the time course of inhibition of return and the efficiency of visual search.

    Science.gov (United States)

    Castel, Alan D; Pratt, Jay; Drummond, Emily

    2005-06-01

    The ability to efficiently search the visual environment is a critical function of the visual system, and recent research has shown that experience playing action video games can influence visual selective attention. The present research examined the similarities and differences between video game players (VGPs) and non-video game players (NVGPs) in terms of the ability to inhibit attention from returning to previously attended locations, and the efficiency of visual search in easy and more demanding search environments. Both groups were equally good at inhibiting the return of attention to previously cued locations, although VGPs displayed overall faster reaction times to detect targets. VGPs also showed overall faster response time for easy and difficult visual search tasks compared to NVGPs, largely attributed to faster stimulus-response mapping. The findings suggest that relative to NVGPs, VGPs rely on similar types of visual processing strategies but possess faster stimulus-response mappings in visual attention tasks.

  1. Real-time markerless tracking for augmented reality: the virtual visual servoing framework.

    Science.gov (United States)

    Comport, Andrew I; Marchand, Eric; Pressigout, Muriel; Chaumette, François

    2006-01-01

    Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and little latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. Here, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
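
    The abstract attributes robustness to an M-estimator embedded in the control law via iteratively reweighted least squares (IRLS). The sketch below illustrates the IRLS mechanism on a generic linear problem with Huber weights; it is not the paper's pose-estimation code, and the Huber threshold is an assumed value.

      # Generic iteratively reweighted least squares (IRLS) with Huber weights.
      # Illustrates the M-estimator mechanism mentioned above on a linear system A x = b;
      # in visual servoing, A would be the stacked interaction matrix and b the feature errors.
      import numpy as np

      def irls_huber(A, b, delta=1.0, iters=20):
          x = np.linalg.lstsq(A, b, rcond=None)[0]          # ordinary least-squares start
          for _ in range(iters):
              r = b - A @ x                                  # residuals
              w = np.where(np.abs(r) <= delta, 1.0,
                           delta / np.maximum(np.abs(r), 1e-12))  # Huber weights
              W = np.diag(w)
              x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)  # weighted normal equations
          return x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((100, 4))
      x_true = np.array([1.0, -2.0, 0.5, 3.0])
      b = A @ x_true + 0.01 * rng.standard_normal(100)
      b[:5] += 20.0                                          # gross outliers (e.g. mistracked edges)
      print(irls_huber(A, b))                                # close to x_true despite outliers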

  2. Measurement of Visual Reaction Times Using Hand-held Mobile Devices

    Science.gov (United States)

    Mulligan, Jeffrey B.; Arsintescu, Lucia; Flynn-Evans, Erin

    2015-01-01

    Modern mobile devices provide a convenient platform for collecting research data in the field. But, because the working of these devices is often cloaked behind multiple layers of proprietary system software, it can be difficult to assess the accuracy of the data they produce, particularly in the case of timing. We have been collecting data in a simple visual reaction time experiment, as part of a fatigue testing protocol known as the Psychomotor Vigilance Test (PVT). In this protocol, subjects run a 5-minute block consisting of a sequence of trials in which a visual stimulus appears after an unpredictable variable delay. The subject is required to tap the screen as soon as possible after the appearance of the stimulus. In order to validate the reaction times reported by our program, we had subjects perform the task while a high-speed video camera recorded both the display screen, and a side view of the finger (observed in a mirror). Simple image-processing methods were applied to determine the frames in which the stimulus appeared and disappeared, and in which the finger made and broke contact with the screen. The results demonstrate a systematic delay between the initial contact by the finger and the detection of the touch by the software, having a value of 80 ± 20 milliseconds.
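
    A minimal sketch of the kind of frame analysis described above follows: the stimulus-onset and finger-contact frames are located by thresholding mean intensity in two regions of interest of the recorded video. The ROI coordinates, thresholds and 240 fps frame rate are assumptions, not the study's actual values.

      # Minimal sketch: locate the stimulus-onset frame and the finger-contact frame in a
      # high-speed video by thresholding mean intensity in two regions of interest (ROIs).
      # ROI coordinates, thresholds and the 240 fps frame rate are assumptions.
      import cv2
      import numpy as np

      FPS = 240.0
      STIM_ROI = (slice(100, 200), slice(300, 400))    # rows, cols covering the stimulus area
      FINGER_ROI = (slice(400, 500), slice(300, 400))  # rows, cols covering the screen surface

      cap = cv2.VideoCapture("pvt_trial.mp4")          # placeholder file name
      stim_mean, finger_mean = [], []
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(float)
          stim_mean.append(gray[STIM_ROI].mean())
          finger_mean.append(gray[FINGER_ROI].mean())
      cap.release()

      stim_mean, finger_mean = np.array(stim_mean), np.array(finger_mean)
      stim_on = int(np.argmax(stim_mean > stim_mean[:50].mean() + 20))   # first bright frame
      touch = int(np.argmax(np.abs(np.diff(finger_mean)) > 15)) + 1      # first large change
      print(f"video-derived reaction time: {(touch - stim_on) / FPS * 1000:.1f} ms")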

  3. Automatic Visualization of Software Requirements: Reactive Systems

    International Nuclear Information System (INIS)

    Castello, R.; Mili, R.; Tollis, I.G.; Winter, V.

    1999-01-01

    In this paper we present an approach that facilitates the validation of high consequence system requirements. This approach consists of automatically generating a graphical representation from an informal document. Our choice of a graphical notation is statecharts. We proceed in two steps: we first extract a hierarchical decomposition tree from a textual description, then we draw a graph that models the statechart in a hierarchical fashion. The resulting drawing is an effective requirements assessment tool that allows the end user to easily pinpoint inconsistencies and incompleteness

  4. Versatile timing system for MFTF

    International Nuclear Information System (INIS)

    Lau, N.H.C.

    1981-01-01

    This system consists of the Master Timing Transmitter and the Local Timing Receivers. The Master Timing Transmitter, located in the control room, initiates timing messages, abort messages and precise delay messages. A sync message is sent when one of the other three is not being sent. The Local Timing Receiver, located in the equipment area, decodes the incoming messages and generates 6 MHz, 3 MHz and 1 MHz continuous clocks. A 250 kHz sync clock is derived from the sync messages, to which all pulse outputs are synchronized. The Local Timing Receiver also provides two ON-OFF delay counters of 64 bits each, and one OFF delay counter of 32 bits. Detection of abort messages and an out-of-sync signal will automatically disable all outputs

  5. The development of a visual system for the detection of obstructions for visually impaired people

    International Nuclear Information System (INIS)

    Okayasu, Mitsuhiro

    2009-01-01

    In this paper, the author presents a new visual system that can aid visually impaired people in walking. The system provides object information (that is, shape and location) through the sense of touch. This visual system depends on three different components: (i) an infrared camera sensor that detects the obstruction, (ii) a control system that measures the distance between the obstruction and the sensor, and (iii) a tooling apparatus with small pins (φ1 mm) used in forming a three-dimensional shape of the obstruction. The pins, arranged on a 6 × 6 matrix, move longitudinally between the retracted and extended positions based on the distance data. The pin extends individually, while the pin tip reflects the object's outer surface. The length of the pin from the base surface is proportional to the distance of the sensor from the obstruction. An ultrasonic actuator, controlled at a 15 Hz frame rate, is the driving force for the pin movement. The tactile image of the 3D shape can provide information about the obstruction
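
    A small sketch of the mapping described above (pin extension proportional to measured distance over a 6 × 6 grid) follows; the sensing range and maximum pin travel are assumed values for illustration only.

      # Sketch of the mapping described above: each pin in the 6 x 6 matrix extends from the
      # base surface in proportion to the measured distance of the corresponding part of the
      # obstruction. The sensing range and maximum pin travel below are assumptions.
      import numpy as np

      MIN_RANGE_M, MAX_RANGE_M = 0.3, 3.0   # assumed working range of the infrared sensor
      MAX_EXTENSION_MM = 10.0               # assumed pin travel from the base surface

      def pin_extensions(distance_grid_m):
          """distance_grid_m: 6x6 array of distances -> 6x6 array of pin extensions in mm."""
          d = np.clip(distance_grid_m, MIN_RANGE_M, MAX_RANGE_M)
          # Extension proportional to distance, as described in the record, so the relief of
          # the pin tips mirrors the object's outer surface.
          return MAX_EXTENSION_MM * (d - MIN_RANGE_M) / (MAX_RANGE_M - MIN_RANGE_M)

      demo = np.full((6, 6), 3.0)
      demo[2:4, 2:4] = 0.8                  # an obstruction 0.8 m ahead in the centre of view
      print(np.round(pin_extensions(demo), 1))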

  6. Theory of Visual Attention (TVA) applied to mice in the 5-choice serial reaction time task

    DEFF Research Database (Denmark)

    Fitzpatrick, C. M.; Caballero-Puntiverio, M.; Gether, U.

    2017-01-01

    Rationale: The 5-choice serial reaction time task (5-CSRTT) is widely used to measure rodent attentional functions. In humans, many attention studies in healthy and clinical populations have used testing based on Bundesen's Theory of Visual Attention (TVA) to estimate visual processing speeds on an individual level. Performance data were modelled using a new three-parameter version of TVA to obtain estimates of visual processing speeds, visual thresholds and motor response baselines in each mouse. Results: The parameter estimates for each animal were reliable across sessions, showing that the data were stable enough to support analysis. Scopolamine HBr dose-dependently reduced 5-CSRTT attentional performance while also increasing reward collection latency at the highest dose. Upon TVA modelling, scopolamine HBr significantly reduced visual processing speed at both doses, while having less pronounced effects on visual thresholds.

  7. Altered visual information processing systems in bipolar disorder: evidence from visual MMN and P3

    Directory of Open Access Journals (Sweden)

    Toshihiko eMaekawa

    2013-07-01

    Full Text Available Objective: Mismatch negativity (MMN) and P3 are unique ERP components that provide objective indices of human cognitive functions such as short-term memory and prediction. Bipolar disorder (BD) is an endogenous psychiatric disorder characterized by extreme shifts in mood, energy, and ability to function socially. BD patients usually show cognitive dysfunction, and the goal of this study was to assess their altered visual information processing via visual MMN (vMMN) and P3 using windmill pattern stimuli. Methods: Twenty patients with BD and 20 healthy controls matched for age, gender, and handedness participated in this study. Subjects were seated in front of a monitor and listened to a story via earphones. Two types of windmill patterns (standard and deviant) and white circle (target) stimuli were randomly presented on the monitor. All stimuli were presented in random order at 200-ms durations with an 800-ms inter-stimulus interval. Stimuli were presented at 80% (standard), 10% (deviant), and 10% (target) probabilities. The participants were instructed to attend to the story and press a button as soon as possible when the target stimuli were presented. Event-related potentials were recorded throughout the experiment using 128-channel EEG equipment. vMMN was obtained by subtracting standard from deviant stimuli responses, and P3 was evoked from the target stimulus. Results: Mean reaction times for target stimuli in the BD group were significantly higher than those in the control group. Additionally, mean vMMN amplitudes and peak P3 amplitudes were significantly lower in the BD group than in controls. Conclusions: Abnormal vMMN and P3 in patients indicate a deficit of visual information processing in bipolar disorder, which is consistent with their increased reaction time to visual target stimuli. Significance: Both bottom-up and top-down visual information processing are likely altered in BD.

  8. Strength of figure-ground activity in monkey primary visual cortex predicts saccadic reaction time in a delayed detection task.

    Science.gov (United States)

    Supèr, Hans; Lamme, Victor A F

    2007-06-01

    When and where are decisions made? In the visual system a saccade, which is a fast shift of gaze toward a target in the visual scene, is the behavioral outcome of a decision. Current neurophysiological data and reaction time models show that saccadic reaction times are determined by a build-up of activity in motor-related structures, such as the frontal eye fields. These structures depend on the sensory evidence of the stimulus. Here we use a delayed figure-ground detection task to show that late modulated activity in the visual cortex (V1) predicts saccadic reaction time. This predictive activity is part of the process of figure-ground segregation and is specific for the saccade target location. These observations indicate that sensory signals are directly involved in the decision of when and where to look.

  9. A Simple Network Architecture Accounts for Diverse Reward Time Responses in Primary Visual Cortex.

    Science.gov (United States)

    Huertas, Marco A; Hussain Shuler, Marshall G; Shouval, Harel Z

    2015-09-16

    Many actions performed by animals and humans depend on an ability to learn, estimate, and produce temporal intervals of behavioral relevance. Exemplifying such learning of cued expectancies is the observation of reward-timing activity in the primary visual cortex (V1) of rodents, wherein neural responses to visual cues come to predict the time of future reward as behaviorally experienced in the past. These reward-timing responses exhibit significant heterogeneity in at least three qualitatively distinct classes: sustained increase or sustained decrease in firing rate until the time of expected reward, and a class of cells that reach a peak in firing at the expected delay. We elaborate upon our existing model by including inhibitory and excitatory units while imposing simple connectivity rules to demonstrate what role these inhibitory elements and the simple architectures play in sculpting the response dynamics of the network. We find that simply adding inhibition is not sufficient for obtaining the different distinct response classes, and that a broad distribution of inhibitory projections is necessary for obtaining peak-type responses. Furthermore, although changes in connection strength that modulate the effects of inhibition onto excitatory units have a strong impact on the firing rate profile of these peaked responses, the network exhibits robustness in its overall ability to predict the expected time of reward. Finally, we demonstrate how the magnitude of expected reward can be encoded at the expected delay in the network and how peaked responses express this reward expectancy. Heterogeneity in single-neuron responses is a common feature of neuronal systems, although sometimes, in theoretical approaches, it is treated as a nuisance and seldom considered as conveying a different aspect of a signal. In this study, we focus on the heterogeneous responses in the primary visual cortex of rodents trained with a predictable delayed reward time. We describe under what
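
    As a toy illustration of the qualitative point above (that slower, network-mediated inhibition can turn a sustained cue response into a peaked one), the following two-unit firing-rate sketch is provided; it is not the authors' model, and all time constants and weights are arbitrary assumptions.

      # Toy firing-rate sketch (not the authors' model): an excitatory unit driven by a cue,
      # with and without a slower inhibitory unit fed by it. With inhibition the excitatory
      # response rises and then decays -- a 'peaked' profile; without it the response is sustained.
      # Time constants, weights and the simple Euler integration are illustrative assumptions.
      import numpy as np

      dt, T = 1.0, 2000                      # ms step, total duration
      tau_e, tau_i = 50.0, 400.0             # inhibition integrates more slowly
      w_ie = 1.5                             # strength of inhibition onto the excitatory unit

      def simulate(with_inhibition):
          e, i = 0.0, 0.0
          trace = []
          for t in range(T):
              cue = 1.0 if t < 1500 else 0.0                 # cue/drive present until 1.5 s
              inh = w_ie * i if with_inhibition else 0.0
              e += dt / tau_e * (-e + max(cue - inh, 0.0))   # rectified rate dynamics
              i += dt / tau_i * (-i + e)
              trace.append(e)
          return np.array(trace)

      sustained, peaked = simulate(False), simulate(True)
      print("peak times (ms):", int(np.argmax(sustained)), int(np.argmax(peaked)))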

  10. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach.

    Science.gov (United States)

    Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei

    2017-06-19

    One of the greatest challenges for fixed-wing unmanned aircraft vehicles (UAVs) is safe landing. Hereafter, an on-ground deployed visual approach is developed in this paper. This approach is well suited to landing within global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of the ground computing resource and feeds the aircraft's real-time localization back to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed to provide an extendable baseline and wide-angle field of view (FOV), in contrast to traditional fixed-baseline schemes. Furthermore, accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach.
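
    The accuracy argument for a long, extendable baseline can be illustrated with the textbook stereo relations below (a sketch only; the focal length, baselines and disparity noise are assumed numbers, not values from the paper).

      # Textbook stereo relations that motivate a long, extendable baseline:
      # depth Z = f * B / d, and first-order depth error dZ ~ Z^2 / (f * B) * dd.
      # The focal length, baselines and disparity noise below are illustrative assumptions.
      focal_px = 2000.0          # focal length in pixels
      disparity_noise_px = 0.5   # assumed matching uncertainty
      depth_m = 300.0            # aircraft distance of interest

      for baseline_m in (0.5, 5.0, 20.0):   # fixed rig vs. widely separated on-ground cameras
          disparity_px = focal_px * baseline_m / depth_m
          depth_err_m = depth_m ** 2 / (focal_px * baseline_m) * disparity_noise_px
          print(f"B = {baseline_m:5.1f} m  disparity = {disparity_px:6.1f} px  "
                f"depth error ~ {depth_err_m:6.2f} m")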

  11. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach

    Directory of Open Access Journals (Sweden)

    Weiwei Kong

    2017-06-01

    Full Text Available One of the greatest challenges for fixed-wing unmanned aircraft vehicles (UAVs) is safe landing. Hereafter, an on-ground deployed visual approach is developed in this paper. This approach is well suited to landing within global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of the ground computing resource and feeds the aircraft's real-time localization back to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed to provide an extendable baseline and wide-angle field of view (FOV), in contrast to traditional fixed-baseline schemes. Furthermore, accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach.

  12. Coupling Retinal Scanning Displays to the Human Visual System: Visual System Response and Engineering Considerations

    National Research Council Canada - National Science Library

    Turner, Stuart

    2002-01-01

    A retinal scanning display (RSD) is a visual display that presents an image to an observer via a modulated beam of light that is directed through the eye's pupil and rapidly scanned in a raster-like pattern across the retina...

  13. The role of the visual hardware system in rugby performance ...

    African Journals Online (AJOL)

    This study explores the importance of the 'hardware' factors of the visual system in the game of rugby. A group of professional and club rugby players were tested and the results compared. The results were also compared with the established norms for elite athletes. The findings indicate no significant difference in hardware ...

  14. A Dynamic Systems Theory Model of Visual Perception Development

    Science.gov (United States)

    Coté, Carol A.

    2015-01-01

    This article presents a model for understanding the development of visual perception from a dynamic systems theory perspective. It contrasts to a hierarchical or reductionist model that is often found in the occupational therapy literature. In this proposed model vision and ocular motor abilities are not foundational to perception, they are seen…

  15. Simulation and Formal Analysis of Visual Attention in Cognitive Systems

    NARCIS (Netherlands)

    Bosse, T.; Maanen, P.P. van; Treur, J.

    2007-01-01

    In this paper a simulation model for visual attention is discussed and formally analysed. The model is part of the design of a cognitive system which comprises an agent that supports a naval officer in its task to compile a tactical picture of the situation in the field. A case study is described in

  16. Simple Smartphone-Based Guiding System for Visually Impaired People.

    Science.gov (United States)

    Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying

    2017-06-13

    Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems for visually impaired people and achieving obstacle avoidance to enable visually impaired people to travel smoothly from a beginning point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region convolutional neural network algorithm or the you only look once algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The results of obstacle recognition in this study reached 60%, which is sufficient for assisting visually impaired people in realizing the types and locations of obstacles around them.
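
    A minimal sketch of the online-mode round trip described above (the phone posts a captured frame, the backend runs its detector and returns labeled obstacles) is shown below; the endpoint URL and the JSON response format are assumptions for illustration.

      # Minimal sketch of the client/backend round trip described above (online mode):
      # the phone-side client posts a captured frame, the backend runs its detector
      # (Faster R-CNN or YOLO in the paper) and returns obstacle labels and boxes.
      # The endpoint URL and response format here are assumptions for illustration.
      import requests

      BACKEND_URL = "http://backend.example.org/detect"   # placeholder endpoint

      def detect_obstacles(jpeg_path):
          with open(jpeg_path, "rb") as f:
              resp = requests.post(BACKEND_URL, files={"image": f}, timeout=5)
          resp.raise_for_status()
          # Assumed response: [{"label": "car", "box": [x1, y1, x2, y2], "score": 0.87}, ...]
          return resp.json()

      if __name__ == "__main__":
          for det in detect_obstacles("frame_0001.jpg"):
              print(f"{det['label']:>12s}  score={det['score']:.2f}  box={det['box']}")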

  17. Simple Smartphone-Based Guiding System for Visually Impaired People

    Directory of Open Access Journals (Sweden)

    Bor-Shing Lin

    2017-06-01

    Full Text Available Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems for visually impaired people and achieving obstacle avoidance to enable visually impaired people to travel smoothly from a beginning point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region convolutional neural network algorithm or the you only look once algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The results of obstacle recognition in this study reached 60%, which is sufficient for assisting visually impaired people in realizing the types and locations of obstacles around them.

  18. Visualizing project management: models and frameworks for mastering complex systems

    National Research Council Canada - National Science Library

    Forsberg, Kevin; Mooz, Hal; Cotterman, Howard

    2005-01-01

    ...- and beyond that on parameters such as return on investment, market acceptance, or sustainability. Anyone who has lived with the space program, or any other high-tech industrial product development, can immediately appreciate this acclaimed book. It addresses and "visualizes" the multidimensional interactions of project management and systems engineering i...

  19. Audio-Visual Perception System for a Humanoid Robotic Head

    Directory of Open Access Journals (Sweden)

    Raquel Viciana-Abad

    2014-05-01

    Full Text Available One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack in terms of evaluating the benefits of audio-visual attention mechanisms, compared to only audio or visual approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared via considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.
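
    As a minimal illustration of Bayesian fusion of audio and visual cues for speaker localization, the sketch below combines two likelihoods over candidate azimuth angles; the Gaussian sensor models and their parameters are assumptions and do not reproduce the paper's system.

      # Minimal sketch of Bayesian audio-visual fusion over candidate azimuth angles.
      # Gaussian likelihoods and their widths are illustrative assumptions; the paper's
      # sensor models are not reproduced here.
      import numpy as np

      azimuth_deg = np.arange(-90, 91)                      # candidate directions
      prior = np.full(azimuth_deg.shape, 1.0)
      prior /= prior.sum()                                  # flat prior over directions

      def gaussian_likelihood(measurement_deg, sigma_deg):
          return np.exp(-0.5 * ((azimuth_deg - measurement_deg) / sigma_deg) ** 2)

      audio_like = gaussian_likelihood(measurement_deg=25.0, sigma_deg=20.0)   # coarse, wide range
      visual_like = gaussian_likelihood(measurement_deg=18.0, sigma_deg=5.0)   # precise, narrow FOV

      posterior = prior * audio_like * visual_like          # Bayes rule (unnormalized)
      posterior /= posterior.sum()
      print("fused speaker direction estimate:", azimuth_deg[np.argmax(posterior)], "deg")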

  20. REAL TIME SYSTEM OPERATIONS 2006-2007

    Energy Technology Data Exchange (ETDEWEB)

    Eto, Joseph H.; Parashar, Manu; Lewis, Nancy Jo

    2008-08-15

    The Real Time System Operations (RTSO) 2006-2007 project focused on two parallel technical tasks: (1) Real-Time Applications of Phasors for Monitoring, Alarming and Control; and (2) Real-Time Voltage Security Assessment (RTVSA) Prototype Tool. The overall goal of the phasor applications project was to accelerate adoption and foster greater use of new, more accurate, time-synchronized phasor measurements by conducting research and prototyping applications on California ISO's phasor platform - Real-Time Dynamics Monitoring System (RTDMS) -- that provide previously unavailable information on the dynamic stability of the grid. Feasibility assessment studies were conducted on potential application of this technology for small-signal stability monitoring, validating/improving existing stability nomograms, conducting frequency response analysis, and obtaining real-time sensitivity information on key metrics to assess grid stress. Based on study findings, prototype applications for real-time visualization and alarming, small-signal stability monitoring, measurement based sensitivity analysis and frequency response assessment were developed, factory- and field-tested at the California ISO and at BPA. The goal of the RTVSA project was to provide California ISO with a prototype voltage security assessment tool that runs in real time within California ISO's new reliability and congestion management system. CERTS conducted a technical assessment of appropriate algorithms, developed a prototype incorporating state-of-the-art algorithms (such as the continuation power flow, direct method, boundary orbiting method, and hyperplanes) into a framework most suitable for an operations environment. Based on study findings, a functional specification was prepared, which the California ISO has since used to procure a production-quality tool that is now a part of a suite of advanced computational tools that is used by California ISO for reliability and congestion management.

  1. A solution for measuring accurate reaction time to visual stimuli realized with a programmable microcontroller.

    Science.gov (United States)

    Ohyanagi, Toshio; Sengoku, Yasuhito

    2010-02-01

    This article presents a new solution for measuring accurate reaction time (SMART) to visual stimuli. The SMART is a USB device realized with a Cypress Programmable System-on-Chip (PSoC) mixed-signal array programmable microcontroller. A brief overview of the hardware and firmware of the PSoC is provided, together with the results of three experiments. In Experiment 1, we investigated the timing accuracy of the SMART in measuring reaction time (RT) under different conditions of operating systems (OSs; Windows XP or Vista) and monitor displays (a CRT or an LCD). The results indicated that the timing error in measuring RT by the SMART was less than 2 msec, on average, under all combinations of OS and display and that the SMART was tolerant to jitter and noise. In Experiment 2, we tested the SMART with 8 participants. The results indicated that there was no significant difference among RTs obtained with the SMART under the different conditions of OS and display. In Experiment 3, we used Microsoft (MS) PowerPoint to present visual stimuli on the display. We found no significant difference in RTs obtained using MS DirectX technology versus using the PowerPoint file with the SMART. We are certain that the SMART is a simple and practical solution for measuring RTs accurately. Although there are some restrictions in using the SMART with RT paradigms, the SMART is capable of providing both researchers and health professionals working in clinical settings with new ways of using RT paradigms in their work.

  2. Visualizing Terrestrial and Aquatic Systems in 3D - in IEEE VisWeek 2014

    Science.gov (United States)

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  3. Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation

    Directory of Open Access Journals (Sweden)

    Kailun Yang

    2018-05-01

    Full Text Available Navigational assistance aims to help visually-impaired people to ambulate the environment safely and independently. This topic becomes challenging as it requires detecting a wide variety of scenes to provide higher level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we put forward seizing pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for the terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture, aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments prove the qualified accuracy over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectivity and versatility of the assistive framework.

  4. Hearing Shapes: Event-related Potentials Reveal the Time Course of Auditory-Visual Sensory Substitution.

    Science.gov (United States)

    Graulty, Christian; Papaioannou, Orestis; Bauer, Phoebe; Pitts, Michael A; Canseco-Gonzalez, Enriqueta

    2018-04-01

    In auditory-visual sensory substitution, visual information (e.g., shape) can be extracted through strictly auditory input (e.g., soundscapes). Previous studies have shown that image-to-sound conversions that follow simple rules [such as the Meijer algorithm; Meijer, P. B. L. An experimental system for auditory image representation. Transactions on Biomedical Engineering, 39, 111-121, 1992] are highly intuitive and rapidly learned by both blind and sighted individuals. A number of recent fMRI studies have begun to explore the neuroplastic changes that result from sensory substitution training. However, the time course of cross-sensory information transfer in sensory substitution is largely unexplored and may offer insights into the underlying neural mechanisms. In this study, we recorded ERPs to soundscapes before and after sighted participants were trained with the Meijer algorithm. We compared these posttraining versus pretraining ERP differences with those of a control group who received the same set of 80 auditory/visual stimuli but with arbitrary pairings during training. Our behavioral results confirmed the rapid acquisition of cross-sensory mappings, and the group trained with the Meijer algorithm was able to generalize their learning to novel soundscapes at impressive levels of accuracy. The ERP results revealed an early cross-sensory learning effect (150-210 msec) that was significantly enhanced in the algorithm-trained group compared with the control group as well as a later difference (420-480 msec) that was unique to the algorithm-trained group. These ERP modulations are consistent with previous fMRI results and provide additional insight into the time course of cross-sensory information transfer in sensory substitution.
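
    For readers unfamiliar with the Meijer algorithm, the sketch below conveys the basic image-to-sound idea (columns scanned over time, row position mapped to pitch, brightness to loudness); the frequency range, scan duration and image size are assumptions, and the real vOICe implementation includes refinements not shown here.

      # Rough sketch of the image-to-sound idea behind the Meijer algorithm: scan the image
      # left to right, map each pixel row to a sine frequency and its brightness to amplitude.
      # Frequency range, scan duration and image size are assumptions; the actual vOICe
      # implementation has many refinements not shown here.
      import numpy as np

      FS = 22050            # audio sample rate (Hz)
      SCAN_SECONDS = 1.0    # time to sweep the whole image
      ROWS, COLS = 64, 64

      def image_to_soundscape(image):
          """image: (ROWS, COLS) array in [0, 1], row 0 at the top -> mono waveform."""
          freqs = np.geomspace(500.0, 5000.0, ROWS)[::-1]       # top rows -> high pitch
          samples_per_col = int(FS * SCAN_SECONDS / COLS)
          t = np.arange(samples_per_col) / FS
          out = []
          for c in range(COLS):
              column = image[:, c][:, None]                     # brightness = amplitude
              tones = np.sin(2 * np.pi * freqs[:, None] * t)
              out.append((column * tones).sum(axis=0))
          wave = np.concatenate(out)
          return wave / (np.abs(wave).max() + 1e-9)

      soundscape = image_to_soundscape(np.random.rand(ROWS, COLS))
      print(soundscape.shape)   # ~22050 samples for a one-second scan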

  5. Psychophysical research progress of interocular suppression in amblyopic visual system

    Directory of Open Access Journals (Sweden)

    Jing-Jing Li

    2016-03-01

    Full Text Available Some recent animal experiments and psychophysical studies indicate that patients with amblyopia have a structurally intact binocular visual system that is rendered functionally monocular due to suppression, and interocular suppression is a key mechanism in visual deficits experienced by patients with amblyopia. The aim of this review is to provide an overview of recent psychophysical findings that have investigated the important role of interocular suppression in amblyopia, the measurement and modulation of suppression, and new dichoptic treatment intervention that directly target suppression.

  6. Visual system evolution and the nature of the ancestral snake.

    Science.gov (United States)

    Simões, B F; Sampaio, F L; Jared, C; Antoniazzi, M M; Loew, E R; Bowmaker, J K; Rodriguez, A; Hart, N S; Hunt, D M; Partridge, J C; Gower, D J

    2015-07-01

    The dominant hypothesis for the evolutionary origin of snakes from 'lizards' (non-snake squamates) is that stem snakes acquired many snake features while passing through a profound burrowing (fossorial) phase. To investigate this, we examined the visual pigments and their encoding opsin genes in a range of squamate reptiles, focusing on fossorial lizards and snakes. We sequenced opsin transcripts isolated from retinal cDNA and used microspectrophotometry to measure directly the spectral absorbance of the photoreceptor visual pigments in a subset of samples. In snakes, but not lizards, dedicated fossoriality (as in Scolecophidia and the alethinophidian Anilius scytale) corresponds with loss of all visual opsins other than RH1 (λmax 490-497 nm); all other snakes (including less dedicated burrowers) also have functional sws1 and lws opsin genes. In contrast, the retinas of all lizards sampled, even highly fossorial amphisbaenians with reduced eyes, express functional lws, sws1, sws2 and rh1 genes, and most also express rh2 (i.e. they express all five of the visual opsin genes present in the ancestral vertebrate). Our evidence of visual pigment complements suggests that the visual system of stem snakes was partly reduced, with two (RH2 and SWS2) of the ancestral vertebrate visual pigments being eliminated, but that this did not extend to the extreme additional loss of SWS1 and LWS that subsequently occurred (probably independently) in highly fossorial extant scolecophidians and A. scytale. We therefore consider it unlikely that the ancestral snake was as fossorial as extant scolecophidians, whether or not the latter are para- or monophyletic. © 2015 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2015 European Society For Evolutionary Biology.

  7. Visual prosthesis wireless energy transfer system optimal modeling.

    Science.gov (United States)

    Li, Xueping; Yang, Yuan; Gao, Yong

    2014-01-16

    A wireless energy transfer system is an effective way to solve the energy supply problem of visual prostheses, and theoretical modeling of the system is a prerequisite for optimal design of the energy transfer link. Starting from the ideal model of the wireless energy transfer system, and taking the visual prosthesis application conditions into account, the system model is optimized. In the optimized model, planar spiral coils serve as the coupling devices between the energy transmitter and receiver, the parasitic capacitance of the transfer coil is considered, and in particular the concept of biological capacitance is introduced to account for the influence of biological tissue on the energy transfer efficiency, which makes the optimized model more accurate for the actual application. Simulation data from the optimized model are compared with those of the previous ideal model; the results show that under high-frequency conditions the parasitic capacitance of the inductor and the biological capacitance considered in the optimized model can have a great impact on the wireless energy transfer system. A further comparison with experimental data verifies the validity and accuracy of the proposed model. The optimized model has theoretical guiding significance for further research on wireless energy transfer systems, and provides a more precise model reference for solving the power supply problem in clinical applications of visual prostheses.
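
    As a minimal illustration of why the added capacitances matter in such modeling, the sketch below uses textbook relations for the resonant frequency of a coil loaded by parasitic and tissue capacitance and for the maximum efficiency of two magnetically coupled resonators; all component values are assumed, not taken from the paper.

      # Sketch using textbook relations: extra shunt capacitance (parasitic + tissue) shifts the
      # coil's resonant frequency, and the maximum link efficiency of two coupled resonators is
      # eta_max = k^2*Q1*Q2 / (1 + sqrt(1 + k^2*Q1*Q2))^2.
      # All component values below are illustrative assumptions, not the paper's parameters.
      import math

      L = 10e-6            # coil inductance (H)
      C_tune = 100e-12     # nominal tuning capacitance (F)
      C_parasitic = 5e-12  # inter-winding parasitic capacitance (F)
      C_bio = 8e-12        # assumed extra capacitive loading from surrounding tissue (F)

      def f_res(L, C):
          return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

      print(f"ideal resonance:  {f_res(L, C_tune) / 1e6:.2f} MHz")
      print(f"loaded resonance: {f_res(L, C_tune + C_parasitic + C_bio) / 1e6:.2f} MHz")

      k, Q1, Q2 = 0.05, 150.0, 80.0              # coupling coefficient and coil quality factors
      x = (k ** 2) * Q1 * Q2
      eta_max = x / (1.0 + math.sqrt(1.0 + x)) ** 2
      print(f"maximum link efficiency: {eta_max * 100:.1f} %")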

  8. DEVELOPMENT OF A GEOGRAPHIC VISUALIZATION AND COMMUNICATIONS SYSTEMS (GVCS) FOR MONITORING REMOTE VEHICLES

    Energy Technology Data Exchange (ETDEWEB)

    COLEMAN, P.; DUNCAN, M.; DURFEE, R.C.; GOELTZ, R; HARRISON, G.; HODGSON, M.E.; KOOK, M.; MCCLAIN, S.

    1998-03-30

    The purpose of this project was to integrate a variety of geographic information system (GIS) capabilities and telecommunication technologies for potential use in geographic network and visualization applications. The specific technical goals of the project were to design, develop, and simulate the components of an audio/visual geographic communications system to aid future real-time monitoring, mapping, and management of transport vehicles. The system components of this feasibility study are collectively referred to as a Geographic Visualization and Communications System (GVCS). State-of-the-art techniques will be used and developed to allow both the vehicle operator and the network manager to monitor the location and surrounding environment of a transport vehicle during shipment.

  9. Three-dimensional reconstruction and visualization system for medical images

    International Nuclear Information System (INIS)

    Preston, D.F.; Batnitzky, S.; Kyo Rak Lee; Cook, P.N.; Cook, L.T.; Dwyer, S.J.

    1982-01-01

    A three-dimensional reconstruction and visualization system could be of significant advantage in medical applications such as neurosurgery and radiation treatment planning. The anatomic structures reconstructed from CT head scans could be used in a head stereotactic system to help plan the surgical procedure and the radiation treatment for a brain lesion. In addition, the three-dimensional reconstruction algorithm provides quantitative measures such as volume and surface area estimation of the anatomic features. This aspect of the three-dimensional reconstruction system may be used to monitor the progress or staging of a disease and the effects of patient treatment. Two cases are presented to illustrate the three-dimensional surface reconstruction and visualization system.
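
    To make the quantitative-measures claim concrete, here is a hedged sketch of how volume and surface area might be estimated from a segmented CT volume; the synthetic spherical "lesion", the voxel spacing, and the use of scikit-image's marching_cubes are illustrative assumptions, not the system described in the record.

      import numpy as np
      from skimage import measure

      # Synthetic segmented volume: a sphere of radius 20 voxels in a 64^3 grid
      # stands in for an anatomic structure segmented from CT slices.
      grid = np.indices((64, 64, 64)) - 32
      lesion = (np.sqrt((grid ** 2).sum(axis=0)) <= 20).astype(np.uint8)

      voxel_spacing = (1.0, 0.5, 0.5)          # slice thickness and pixel size in mm (assumed)

      # Volume estimate: count of segmented voxels times the volume of one voxel.
      volume_mm3 = lesion.sum() * np.prod(voxel_spacing)

      # Surface estimate: triangulate the boundary with marching cubes, sum triangle areas.
      verts, faces, _, _ = measure.marching_cubes(lesion, level=0.5, spacing=voxel_spacing)
      surface_mm2 = measure.mesh_surface_area(verts, faces)

      print(f"estimated volume: {volume_mm3:.0f} mm^3, surface area: {surface_mm2:.0f} mm^2")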

  10. 3D Model Visualization Enhancements in Real-Time Game Engines

    Science.gov (United States)

    Merlo, A.; Sánchez Belenguer, C.; Vendrell Vidal, E.; Fantini, F.; Aliperta, A.

    2013-02-01

    This paper describes two procedures used to disseminate tangible cultural heritage through real-time 3D simulations providing accurate-scientific representations. The main idea is to create simple geometries (with low-poly count) and apply two different texture maps to them: a normal map and a displacement map. There are two ways to achieve models that fit with normal or displacement maps: with the former (normal maps), the number of polygons in the reality-based model may be dramatically reduced by decimation algorithms and then normals may be calculated by rendering them to texture solutions (baking). With the latter, a LOD model is needed; its topology has to be quad-dominant for it to be converted to a good quality subdivision surface (with consistent tangency and curvature all over). The subdivision surface is constructed using methodologies for the construction of assets borrowed from character animation: these techniques have been recently implemented in many entertainment applications known as "retopology". The normal map is used as usual, in order to shade the surface of the model in a realistic way. The displacement map is used to finish, in real-time, the flat faces of the object, by adding the geometric detail missing in the low-poly models. The accuracy of the resulting geometry is progressively refined based on the distance from the viewing point, so the result is like a continuous level of detail, the only difference being that there is no need to create different 3D models for one and the same object. All geometric detail is calculated in real-time according to the displacement map. This approach can be used in Unity, a real-time 3D engine originally designed for developing computer games. It provides a powerful rendering engine, fully integrated with a complete set of intuitive tools and rapid workflows that allow users to easily create interactive 3D contents. With the release of Unity 4.0, new rendering features have been added, including Direct

  11. Preprint WebVRGIS Based Traffic Analysis and Visualization System

    OpenAIRE

    Li, Xiaoming; Lv, Zhihan; Wang, Weixi; Zhang, Baoyun; Hu, Jinxing; Yin, Ling; Feng, Shengzhong

    2015-01-01

    This is the preprint version of our paper in Advances in Engineering Software. With characteristics such as large scale, diverse predictability and timeliness, city traffic data falls within the definition of Big Data. A Virtual Reality GIS based traffic analysis and visualization system is proposed as a promising and inspiring approach to managing and developing traffic big data. In addition to the basic GIS interaction functions, the proposed system also includes some intellige...

  12. Design and implementation of visualization methods for the CHANGES Spatial Decision Support System

    Science.gov (United States)

    Cristal, Irina; van Westen, Cees; Bakker, Wim; Greiving, Stefan

    2014-05-01

    The CHANGES Spatial Decision Support System (SDSS) is a web-based system for risk assessment and the evaluation of optimal risk reduction alternatives at the local level, serving as a decision support tool in long-term natural risk management. The SDSS uses multidimensional information, integrating thematic, spatial, temporal and documentary data. Visualization therefore becomes vitally important for representing each dimension efficiently. This multidimensional character of the risk information required by the system, combined with the diversity of the end-users, calls for sophisticated visualization methods and tools. The key goal of the present work is to exploit the large amount of data efficiently in relation to the needs of the end-user, utilizing proper visualization techniques. Three main tasks have been accomplished for this purpose: categorization of the end-users, definition of the system's modules, and data definition. The graphical representation of the data and the visualization tools were designed to suit the data type and the purpose of the analysis. Depending on their category, end-users have access to different modules of the system and thus to the appropriate visualization environment. The technologies used for the development of the visualization component combine the latest and most innovative open source JavaScript frameworks, such as OpenLayers 2.13.1, ExtJS 4 and GeoExt 2. Moreover, the model-view-controller (MVC) pattern is used in order to ensure flexibility of the system at the implementation level. Using the above technologies, the visualization techniques implemented so far offer interactive map navigation, querying and comparison tools. The map comparison tools are of great importance within the SDSS and include the following: a swiping tool for comparing different data at the same location; raster subtraction for comparing the same phenomenon at different times; and linked views for comparison
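
    The "raster subtraction" comparison mentioned above can be illustrated with a small, hedged sketch; the NumPy arrays and the raster_difference helper are invented for the example and are not part of the CHANGES SDSS implementation (which performs such comparisons client-side with the JavaScript frameworks listed).

      import numpy as np

      def raster_difference(raster_t1, raster_t0):
          """Cell-wise change (t1 - t0) between two co-registered rasters; NaN cells stay NaN."""
          r0, r1 = np.asarray(raster_t0, float), np.asarray(raster_t1, float)
          if r0.shape != r1.shape:
              raise ValueError("rasters must be co-registered (same shape)")
          return r1 - r0                         # NaN in either input propagates automatically

      # Example: a 3x3 hazard-intensity grid before and after a mitigation measure.
      before = np.array([[0.2, 0.4, 0.9],
                         [0.1, 0.5, 0.8],
                         [0.0, 0.3, np.nan]])
      after = np.array([[0.2, 0.3, 0.5],
                        [0.1, 0.4, 0.6],
                        [0.0, 0.2, np.nan]])
      print(raster_difference(after, before))    # negative cells = reduced hazard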

  13. Soldier-worn augmented reality system for tactical icon visualization

    Science.gov (United States)

    Roberts, David; Menozzi, Alberico; Clipp, Brian; Russler, Patrick; Cook, James; Karl, Robert; Wenger, Eric; Church, William; Mauger, Jennifer; Volpe, Chris; Argenta, Chris; Wille, Mark; Snarski, Stephen; Sherrill, Todd; Lupo, Jasper; Hobson, Ross; Frahm, Jan-Michael; Heinly, Jared

    2012-06-01

    This paper describes the development and demonstration of a soldier-worn augmented reality system testbed that provides intuitive 'heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a robust soldier pose estimation capability with a helmet mounted see-through display to accurately overlay geo-registered iconography (i.e., navigation waypoints, blue forces, aircraft) on the soldier's view of reality. Applied Research Associates (ARA), in partnership with BAE Systems and the University of North Carolina - Chapel Hill (UNC-CH), has developed this testbed system in Phase 2 of the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program. The ULTRA-Vis testbed system functions in unprepared outdoor environments and is robust to numerous magnetic disturbances. We achieve accurate and robust pose estimation through fusion of inertial, magnetic, GPS, and computer vision data acquired from helmet kit sensors. Icons are rendered on a high-brightness, 40°×30° field of view see-through display. The system incorporates an information management engine to convert CoT (Cursor-on-Target) external data feeds into mil-standard icons for visualization. The user interface provides intuitive information display to support soldier navigation and situational awareness of mission-critical tactical information.
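
    For a rough sense of the sensor-fusion idea (without claiming anything about the ULTRA-Vis estimator itself), the following sketch fuses only heading: a gyro rate is integrated and gently corrected toward a magnetometer heading with a complementary filter; the fuse_heading function, the sample rates, and the disturbance example are all assumptions made for this illustration.

      import math

      def fuse_heading(gyro_rates, mag_headings, dt=0.01, alpha=0.98, yaw0=0.0):
          """Blend gyro integration (smooth but drifting) with magnetometer headings (absolute but noisy)."""
          yaw, out = yaw0, []
          for rate, mag in zip(gyro_rates, mag_headings):
              gyro_yaw = yaw + rate * dt                                            # propagate with the gyro
              err = math.atan2(math.sin(mag - gyro_yaw), math.cos(mag - gyro_yaw))  # shortest angular error
              yaw = gyro_yaw + (1.0 - alpha) * err                                  # small pull toward the magnetometer
              out.append(yaw)
          return out

      # Example: a steady 0.1 rad/s turn; one magnetometer sample is corrupted by a disturbance.
      rates = [0.1] * 200
      mags = [0.1 * 0.01 * (i + 1) for i in range(200)]
      mags[100] += 0.8                                  # magnetic disturbance on one sample
      print(f"final fused heading: {fuse_heading(rates, mags)[-1]:.3f} rad (true 0.200 rad)")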

  14. Time- and Space-Order Effects in Timed Discrimination of Brightness and Size of Paired Visual Stimuli

    Science.gov (United States)

    Patching, Geoffrey R.; Englund, Mats P.; Hellstrom, Ake

    2012-01-01

    Despite the importance of both response probability and response time for testing models of choice, there is a dearth of chronometric studies examining systematic asymmetries that occur over time- and space-orders in the method of paired comparisons. In this study, systematic asymmetries in discriminating the magnitude of paired visual stimuli are…

  15. Coinduction in concurrent timed systems

    Czech Academy of Sciences Publication Activity Database

    Komenda, Jan

    2010-01-01

    Vol. 264, No. 2 (2010), pp. 177-197. ISSN 1571-0661. Grant - others: EU Projekt(XE) EU.ICT.DISC 224498. Institutional research plan: CEZ:AV0Z10190503. Keywords: timed discrete-event systems * partial Mealy automata * functional stream calculus * synchronous composition. Subject RIV: BA - General Mathematics. http://www.sciencedirect.com/science/article/pii/S1571066110000794

  16. Recent development for the ITS code system: Parallel processing and visualization

    International Nuclear Information System (INIS)

    Fan, W.C.; Turner, C.D.; Halbleib, J.A. Sr.; Kensek, R.P.

    1996-01-01

    A brief overview is given for two software developments related to the ITS code system. These developments provide parallel processing and visualization capabilities and thus allow users to perform ITS calculations more efficiently. Timing results and a graphical example are presented to demonstrate these capabilities

  17. Testing geoscience data visualization systems for geological mapping and training

    Science.gov (United States)

    Head, J. W.; Huffman, J. N.; Forsberg, A. S.; Hurwitz, D. M.; Basilevsky, A. T.; Ivanov, M. A.; Dickson, J. L.; Senthil Kumar, P.

    2008-09-01

    We have used ADVISER (ADvanced VIsualization for Solar system Exploration) [1,2] as a tool for taking planetary geologists virtually "into the field" in the IVR Cave environment in support of several scientific themes, and have assessed its application to geological mapping of Venus. ADVISER aims to create a field experience by integrating multiple data sources and presenting them as a unified environment to the scientist. Additionally, we have developed a virtual field kit, tailored to supporting research tasks dictated by scientific and mapping themes. Technically, ADVISER renders high-resolution topographic and image datasets (8192x8192 samples) in stereo at interactive frame rates (25+ frames per second). The system is based on a state-of-the-art terrain rendering system and is highly interactive; for example, vertical exaggeration, lighting geometry, image contrast, and contour lines can be modified by the user in real time. High-resolution image data can be overlaid on the terrain and other data can be rendered in this context. A detailed description and case studies of ADVISER are available.

  18. Fermi Timing and Synchronization System

    International Nuclear Information System (INIS)

    Wilcox, R.; Staples, J.; Doolittle, L.; Byrd, J.; Ratti, A.; Kaertner, F.X.; Kim, J.; Chen, J.; Ilday, F.O.; Ludwig, F.; Winter, A.; Ferianis, M.; Danailov, M.; D'Auria, G.

    2006-01-01

    The Fermi FEL will depend critically on precise timing of its RF, laser and diagnostic subsystems. The timing subsystem to coordinate these functions will need to reliably maintain sub-100fs synchronicity between distant points up to 300m apart in the Fermi facility. The technology to do this is not commercially available, and has not been experimentally demonstrated in a working facility. Therefore, new technology must be developed to meet these needs. Two approaches have been researched by different groups working with the Fermi staff. At MIT, a pulse transmission scheme has been developed for synchronization of RF and laser devices. And at LBL, a CW transmission scheme has been developed for RF and laser synchronization. These respective schemes have advantages and disadvantages that will become better understood in coming years. This document presents the work done by both teams, and suggests a possible system design which integrates them both. The integrated system design provides an example of how choices can be made between the different approaches without significantly changing the basic infrastructure of the system. Overall system issues common to any synchronization scheme are also discussed

  19. Fermi Timing and Synchronization System

    Energy Technology Data Exchange (ETDEWEB)

    Wilcox, R.; Staples, J.; Doolittle, L.; Byrd, J.; Ratti, A.; Kaertner, F.X.; Kim, J.; Chen, J.; Ilday, F.O.; Ludwig, F.; Winter, A.; Ferianis, M.; Danailov, M.; D'Auria, G.

    2006-07-19

    The Fermi FEL will depend critically on precise timing of its RF, laser and diagnostic subsystems. The timing subsystem to coordinate these functions will need to reliably maintain sub-100fs synchronicity between distant points up to 300m apart in the Fermi facility. The technology to do this is not commercially available, and has not been experimentally demonstrated in a working facility. Therefore, new technology must be developed to meet these needs. Two approaches have been researched by different groups working with the Fermi staff. At MIT, a pulse transmission scheme has been developed for synchronization of RF and laser devices. And at LBL, a CW transmission scheme has been developed for RF and laser synchronization. These respective schemes have advantages and disadvantages that will become better understood in coming years. This document presents the work done by both teams, and suggests a possible system design which integrates them both. The integrated system design provides an example of how choices can be made between the different approaches without significantly changing the basic infrastructure of the system. Overall system issues common to any synchronization scheme are also discussed.

  20. Auditory and visual reaction time and peripheral field of vision in helmet users

    Directory of Open Access Journals (Sweden)

    Abbupillai Adhilakshmi

    2016-12-01

    Full Text Available Background: The incidence of fatal accidents is higher among two-wheeler drivers than among four-wheeler drivers. Because head injury is the main concern for recovery and prognosis, helmets are used for safety by moped, scooter and motorcycle riders. Although helmets are designed with a cushioning effect to prevent head injuries, there is evidence of an increased risk of neck injuries and of reduced peripheral vision and hearing in helmet users. Full-coverage helmets restrict the horizontal peripheral visual field by less than about 3 percent compared with riding without a helmet, and standard, company-patented, ergonomically designed helmets affect neither peripheral vision nor auditory reaction time. Objective: This pilot study aimed to evaluate the peripheral field of vision and the auditory and visual reaction times in hypertensive, diabetic and healthy males and females, in order to gain better insight into the protective characteristics of helmets in health and disease. Method: The pilot study was carried out on age-matched male subjects (one healthy, one hypertensive, one diabetic) and female subjects (one healthy, one hypertensive, one diabetic). The field of vision was assessed with Lister's perimeter, whereas auditory and visual reaction times were recorded with a response analyser. Result: No gender difference was noted in the peripheral field of vision, but mild differences were found in the auditory reaction time for high frequencies and in the visual reaction time for both red and green colours in healthy controls. The lateral and downward peripheral visual fields were reduced, whereas auditory and visual reaction times were increased, in both the hypertensive and the diabetic subjects of both sexes. Conclusion: Altered peripheral vision and auditory and visual reaction times in hypertensive and diabetic riders may make them vulnerable to accidents. Helmet use has proven to reduce extent of injury in motorcyclist and

  1. Visualizing uncertainties in a storm surge ensemble data assimilation and forecasting system

    KAUST Repository

    Hollt, Thomas

    2015-01-15

    We present a novel integrated visualization system that enables the interactive visual analysis of ensemble simulations and estimates of the sea surface height and other model variables that are used for storm surge prediction. Coastal inundation, caused by hurricanes and tropical storms, poses large risks for today's societies. High-fidelity numerical models of water levels driven by hurricane-force winds are required to predict these events, posing a challenging computational problem, and even though computational models continue to improve, uncertainties in storm surge forecasts are inevitable. Today, this uncertainty is often exposed to the user by running the simulation many times with different parameters or inputs following a Monte-Carlo framework in which uncertainties are represented as stochastic quantities. This results in multidimensional, multivariate and multivalued data, so-called ensemble data. While the resulting datasets are very comprehensive, they are also huge in size and thus hard to visualize and interpret. In this paper, we tackle this problem by means of an interactive and integrated visual analysis system. By harnessing the power of modern graphics processing units for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real time, view specific parameter settings or simulation models and move between different spatial and temporal regions without delay. In addition, our system provides advanced visualizations to highlight the uncertainty or show the complete distribution of the simulations at user-defined positions over the complete time series of the prediction. We highlight the benefits of our system by presenting its application in a real-world scenario using a simulation of Hurricane Ike.

  2. Real-time visualization of dynamic particle contact failures

    Energy Technology Data Exchange (ETDEWEB)

    Parab, Niranjan D.; Hudspeth, Matthew; Claus, Ben; Guo, Zherui; Sun, Tao; Fezzaa, Kamel; Chen, Weinong W.

    2017-01-01

    Granular materials are widely used to resist impact and blast. Under these dynamic loadings, the constituent particles in the granular system fracture. To study the fracture mechanisms in brittle particles under dynamic compressive loading, a high speed X-ray phase contrast imaging setup was synchronized with a Kolsky bar apparatus. Controlled compressive loading was applied on two contacting particles using the Kolsky bar apparatus and the fracture process was captured using the high speed X-ray imaging setup. Five different particle materials were investigated: soda-lime glass, polycrystalline silica (silicon dioxide), polycrystalline silicon, barium titanate glass, and yttrium stabilized zirconia. For both soda-lime glass and polycrystalline silica particles, one of the particles fragmented explosively, thus breaking into many small pieces. For silicon and barium titanate glass particles, a finite number of cracks were observed in one of the particles, causing it to fracture. For yttrium stabilized zirconia particles, a single meridional crack developed in one of the particles, breaking it into two parts.

  3. The loss of short-term visual representations over time: decay or temporal distinctiveness?

    Science.gov (United States)

    Mercer, Tom

    2014-12-01

    There has been much recent interest in the loss of visual short-term memories over the passage of time. According to decay theory, visual representations are gradually forgotten as time passes, reflecting a slow and steady distortion of the memory trace. However, this is controversial and decay effects can be explained in other ways. The present experiment aimed to reexamine the maintenance and loss of visual information over the short term. Decay and temporal distinctiveness models were tested using a delayed discrimination task, in which participants compared complex and novel objects over unfilled retention intervals of variable length. Experiment 1 found no significant change in the accuracy of visual memory from 2 to 6 s, but the gap separating trials reliably influenced task performance. Experiment 2 found evidence for information loss at a 10-s retention interval, but temporally separating trials restored the fidelity of visual memory, possibly because temporally isolated representations are distinct from older memory traces. In conclusion, visual representations lose accuracy at some point after 6 s, but only within temporally crowded contexts. These findings highlight the importance of temporal distinctiveness within visual short-term memory. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  4. Dense time discretization technique for verification of real time systems

    International Nuclear Information System (INIS)

    Makackas, Dalius; Miseviciene, Regina

    2016-01-01

    When verifying real-time systems, two different models are used to handle time: discrete-time and dense-time models. This paper proposes a novel verification technique that calculates discrete time intervals from dense time in order to create all the system states that can be reached from the initial system state. The technique is designed for real-time systems specified by a piece-linear aggregate approach. Key words: real-time system, dense time, verification, model checking, piece-linear aggregate
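
    As a generic illustration of "creating all the system states that can be reached from the initial state" once time has been discretized (not the authors' piece-linear aggregate technique), the sketch below enumerates reachable (location, clock) pairs of a toy guarded automaton by breadth-first search; the model, the delay increments, and the clock bound are invented for this example.

      from collections import deque

      def reachable_states(initial, delays, transitions, clock_bound=10):
          """Breadth-first enumeration of (location, clock) states under discrete delays."""
          seen, queue = {initial}, deque([initial])
          while queue:
              loc, clock = queue.popleft()
              successors = [(loc, min(clock + d, clock_bound)) for d in delays]      # let time elapse
              successors += [(dst, 0) for (src, guard, dst) in transitions           # fire enabled transitions
                             if src == loc and guard(clock)]                         # (clock resets to 0)
              for s in successors:
                  if s not in seen:
                      seen.add(s)
                      queue.append(s)
          return seen

      # Toy model: go 'idle' -> 'busy' once 2 time units have elapsed, and back after 3 more.
      transitions = [("idle", lambda c: c >= 2, "busy"),
                     ("busy", lambda c: c >= 3, "idle")]
      states = reachable_states(("idle", 0), delays=(1, 2), transitions=transitions)
      print(f"{len(states)} reachable states")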

  5. Real-time analysis, visualization, and steering of microtomography experiments at photon sources

    International Nuclear Information System (INIS)

    Laszeski, G. von; Insley, J.A.; Foster, I.; Bresnahan, J.; Kesselman, C.; Su, M.; Thiebaux, M.; Rivers, M.L.; Wang, S.; Tieman, B.; McNulty, I.

    2000-01-01

    A new generation of specialized scientific instruments called synchrotron light sources allows the imaging of materials at very fine scales. However, in contrast to a traditional microscope, interactive use has not previously been possible because of the large amounts of data generated and the considerable computation required to translate these data into a useful image. The authors describe a new software architecture that uses high-speed networks and supercomputers to enable quasi-real-time and hence interactive analysis of synchrotron light source data. This architecture uses technologies provided by the Globus computational grid toolkit to allow dynamic creation of a reconstruction pipeline that transfers data from a synchrotron source beamline to a preprocessing station, next to a parallel reconstruction system, and then to multiple visualization stations. Collaborative analysis tools allow multiple users to control data visualization. As a result, local and remote scientists can see and discuss preliminary results just minutes after data collection starts. The implications for more efficient use of this scarce resource and for more effective science appear tremendous.
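
    A schematic, much-simplified sketch of the staged pipeline idea (acquisition, preprocessing, reconstruction, visualization) is shown below using plain Python threads and queues; the real architecture distributes these stages across machines with Globus grid services, so the stage names and placeholder work functions here are purely illustrative.

      import queue
      import threading

      def stage(name, inbox, outbox, work):
          """Consume items, apply the (placeholder) work function, pass results downstream."""
          while True:
              item = inbox.get()
              if item is None:                    # sentinel: shut this stage down and tell the next one
                  if outbox is not None:
                      outbox.put(None)
                  return
              result = work(item)
              print(f"{name}: handled projection {result}")
              if outbox is not None:
                  outbox.put(result)

      raw, pre, recon = queue.Queue(), queue.Queue(), queue.Queue()
      threads = [
          threading.Thread(target=stage, args=("preprocess", raw, pre, lambda x: x)),
          threading.Thread(target=stage, args=("reconstruct", pre, recon, lambda x: x)),
          threading.Thread(target=stage, args=("visualize", recon, None, lambda x: x)),
      ]
      for t in threads:
          t.start()

      for projection_id in range(5):              # stand-in for frames streaming off the beamline
          raw.put(projection_id)
      raw.put(None)                               # shutdown propagates through the pipeline
      for t in threads:
          t.join()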

  6. A visual representation system for the scheduling and management of projects

    NARCIS (Netherlands)

    Pollalis, S.N.

    1992-01-01

    A VISUAL SCHEDULING AND MANAGEMENT SYSTEM (VSMS). This work proposes a new system for the visual representation of projects that displays the quantities of work, resources and cost. This new system, called Visual Scheduling and Management System, has a built-in hierarchical system to provide

  7. A real-time articulatory visual feedback approach with target presentation for second language pronunciation learning.

    Science.gov (United States)

    Suemitsu, Atsuo; Dang, Jianwu; Ito, Takayuki; Tiede, Mark

    2015-10-01

    Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual-feedback approach using an articulatory target presented in real-time to facilitate L2 pronunciation learning. This approach trains learners to adjust articulatory positions to match targets for a L2 vowel estimated from productions of vowels that overlap in both L1 and L2. Training of Japanese learners for the American English vowel /æ/ that included visual training improved its pronunciation regardless of whether audio training was also included. Articulatory visual feedback is shown to be an effective method for facilitating L2 pronunciation learning.

  8. Mobile real time radiography system

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, J.; Taggart, D.; Betts, S. [Los Alamos National Lab., NM (United States)] [and others]

    1997-11-01

    A 450-keV Mobile Real Time Radiography (RTR) System was delivered to Los Alamos National Laboratory (LANL) in January 1996. It was purchased to inspect containers of radioactive waste produced at LANL. Since its delivery it has been used to radiograph more than 600 drums of radioactive waste at various LANL sites. It has the capability of inspecting waste containers of various sizes, from <1-gal. buckets up to standard waste boxes (SWB, dimensions 54.5 in. x 71 in. x 37 in.). It has three independent x-ray acquisition formats. The primary system used is a 12-in. image intensifier, the second is a 36-in. linear diode array (LDA) and the last is an open system. It is fully self-contained with an on-board generator, HVAC, and a fire suppression system. It is on a 53-ft long x 8-ft wide x 14-ft high trailer that can be moved over any highway, requiring only an easily obtainable overweight permit because it weighs approximately 38 tons. It was built to conform to industry standards for a cabinet system, which does not require an exclusion zone. The fact that this unit is mobile has allowed us to operate where the waste is stored, rather than having to move the waste to a fixed facility.

  9. Space-Time Reference Systems

    CERN Document Server

    Soffel, Michael

    2013-01-01

    The high accuracy of modern astronomical spatial-temporal reference systems has made them considerably complex. This book offers a comprehensive overview of such systems. It begins with a discussion of ‘The Problem of Time’, including recent developments in the art of clock making (e.g., optical clocks) and various time scales. The authors address  the definitions and realization of spatial coordinates by reference to remote celestial objects such as quasars. After an extensive treatment of classical equinox-based coordinates, new paradigms for setting up a celestial reference system are introduced that no longer refer to the translational and rotational motion of the Earth. The role of relativity in the definition and realization of such systems is clarified. The topics presented in this book are complemented by exercises (with solutions). The authors offer a series of files, written in Maple, a standard computer algebra system, to help readers get a feel for the various models and orders of magnitude. ...

  10. Mobile real time radiography system

    International Nuclear Information System (INIS)

    Vigil, J.; Taggart, D.; Betts, S.

    1997-01-01

    A 450-keV Mobile Real Time Radiography (RTR) System was delivered to Los Alamos National Laboratory (LANL) in January 1996. It was purchased to inspect containers of radioactive waste produced at LANL. Since its delivery it has been used to radiograph more than 600 drums of radioactive waste at various LANL sites. It has the capability of inspecting waste containers of various sizes, from <1-gal. buckets up to standard waste boxes (SWB, dimensions 54.5 in. x 71 in. x 37 in.). It has three independent x-ray acquisition formats. The primary system used is a 12-in. image intensifier, the second is a 36-in. linear diode array (LDA) and the last is an open system. It is fully self-contained with an on-board generator, HVAC, and a fire suppression system. It is on a 53-ft long x 8-ft wide x 14-ft high trailer that can be moved over any highway, requiring only an easily obtainable overweight permit because it weighs ∼38 tons. It was built to conform to industry standards for a cabinet system, which does not require an exclusion zone. The fact that this unit is mobile has allowed us to operate where the waste is stored, rather than having to move the waste to a fixed facility.

  11. A real time monitoring system

    International Nuclear Information System (INIS)

    Fontanini, Horacio; Galdoz, Erwin

    1989-01-01

    A real-time monitoring system is described. It was initially developed as a man-machine interface between a basic-principles simulator of the Embalse Nuclear Power Plant and the operators. This simulator is under construction at the Bariloche Atomic Center's Process Control Division. Thanks to its design flexibility, the system can also be used in real plants. It is designed to run on a PC XT or AT personal computer with high-resolution graphics capabilities. Three interrelated programs compose the system: 1) a graphics editor, used to build static images that serve as reference frames on which dynamically updated data are shown; 2) a data acquisition and storage module, a memory-resident module that acquires and stores data in the background, without interfering with the operating system, via a serial port or an analog-to-digital converter attached to the personal computer; and 3) a display module, which shows the acquired data according to commands received from configuration files prepared by the operator. (Author)

  12. Traveling with blindness: A qualitative space-time approach to understanding visual impairment and urban mobility.

    Science.gov (United States)

    Wong, Sandy

    2018-01-01

    This paper draws from Hägerstrand's space-time framework to generate new insights on the everyday mobilities of individuals with visual impairments in the San Francisco Bay Area. While existing research on visual impairment and mobility emphasizes individual physical limitations resulting from vision loss or inaccessible public spaces, this article highlights and bridges both the behavioral and social processes that influence individual mobility. A qualitative analysis of sit-down and mobile interview data reveals that the space-time constraints of people with visual impairments are closely linked to their access to transportation, assistive technologies, and mobile devices. The findings deepen our understandings of the relationship between health and mobility, and present intervention opportunities for improving the quality of life for people with visual impairment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Design and implementation of visual inspection system handed in tokamak flexible in-vessel robot

    International Nuclear Information System (INIS)

    Wang, Hesheng; Xu, Lifei; Chen, Weidong

    2016-01-01

    The in-vessel viewing system (IVVS) is a fundamental tool among the remote handling systems for ITER, used to provide information on the status of the in-vessel components. The basic functional requirement of an in-vessel visual inspection system is to perform a fast intervention with adequate optical resolution. In this paper, we present the software and hardware solution designed and implemented for a tokamak in-vessel viewing system installed on the end-effector of a flexible in-vessel robot working under vacuum and high temperature. Our in-vessel viewing system consists of two parts: a binocular heterogeneous vision inspection tool and first-wall scene immersion based on augmented virtuality. The former, protected by a water-cooled shield, is designed to satisfy the basic functional requirement of the visual inspection system, offering a large field of view and high resolution for detection precision. The latter, achieved by overlaying first-wall tile images onto a virtual first-wall scene model in a 3D virtual reality simulation system, is designed for convenient, intuitive and realistic-looking visual inspection, instead of viewing the status of the first wall only through real-time monitoring or off-line image sequences. We present the modular division of the system, describe each module in detail, and go through some of the design choices made according to the requirements of the in-vessel visual inspection task.

  14. Design and implementation of visual inspection system handed in tokamak flexible in-vessel robot

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hesheng; Xu, Lifei [Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Chen, Weidong, E-mail: wdchen@sjtu.edu.cn [Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Key Laboratory of System Control and Information Processing, Ministry of Education of China (China)

    2016-05-15

    The in-vessel viewing system (IVVS) is a fundamental tool among the remote handling systems for ITER, used to provide information on the status of the in-vessel components. The basic functional requirement of an in-vessel visual inspection system is to perform a fast intervention with adequate optical resolution. In this paper, we present the software and hardware solution designed and implemented for a tokamak in-vessel viewing system installed on the end-effector of a flexible in-vessel robot working under vacuum and high temperature. Our in-vessel viewing system consists of two parts: a binocular heterogeneous vision inspection tool and first-wall scene immersion based on augmented virtuality. The former, protected by a water-cooled shield, is designed to satisfy the basic functional requirement of the visual inspection system, offering a large field of view and high resolution for detection precision. The latter, achieved by overlaying first-wall tile images onto a virtual first-wall scene model in a 3D virtual reality simulation system, is designed for convenient, intuitive and realistic-looking visual inspection, instead of viewing the status of the first wall only through real-time monitoring or off-line image sequences. We present the modular division of the system, describe each module in detail, and go through some of the design choices made according to the requirements of the in-vessel visual inspection task.

  15. A Novel Visual Data Mining Module for the Geographical Information System gvSIG

    Directory of Open Access Journals (Sweden)

    Romel Vázquez-Rodríguez

    2013-01-01

    Full Text Available The exploration of large GIS models containing spatio-temporal information is a challenge. In this paper we propose the integration of scientific visualization (ScVis) techniques into geographic information systems (GIS) as an alternative for the visual analysis of data. Providing GIS with such tools improves the analysis and understanding of datasets with very low spatial density and makes it possible to find correlations between variables in time and space. In this regard, we present a new visual data mining tool for the GIS gvSIG. This tool has been implemented as a gvSIG module and contains several ScVis techniques for multiparameter data, with a wide range of possibilities to explore the data interactively. The developed module is a powerful visual data mining and data visualization tool for obtaining knowledge from multiple datasets in time and space. A real case study with meteorological data from Villa Clara province (Cuba) is presented, where the implemented visualization techniques were used to analyze the available datasets. Although it was tested with meteorological data, the developed module is of general applicability in the sense that it can be used in multiple application fields related to the Earth Sciences.

  16. Macular Carotenoid Supplementation Improves Visual Performance, Sleep Quality, and Adverse Physical Symptoms in Those with High Screen Time Exposure.

    Science.gov (United States)

    Stringham, James M; Stringham, Nicole T; O'Brien, Kevin J

    2017-06-29

    The dramatic rise in the use of smartphones, tablets, and laptop computers over the past decade has raised concerns about potentially deleterious health effects of increased "screen time" (ST) and associated short-wavelength (blue) light exposure. We determined baseline associations and effects of 6 months' supplementation with the macular carotenoids (MC) lutein, zeaxanthin, and mesozeaxanthin on the blue-absorbing macular pigment (MP) and measures of sleep quality, visual performance, and physical indicators of excessive ST. Forty-eight healthy young adults with at least 6 h of daily near-field ST exposure participated in this placebo-controlled trial. Visual performance measures included contrast sensitivity, critical flicker fusion, disability glare, and photostress recovery. Physical indicators of excessive screen time and sleep quality were assessed via questionnaire. MP optical density (MPOD) was assessed via heterochromatic flicker photometry. At baseline, MPOD was correlated significantly with all visual performance measures (p < 0.05 for all). MC supplementation significantly improved outcomes including eye strain, eye fatigue, and all visual performance measures, versus placebo (p < 0.05 for all). Increased MPOD significantly improves visual performance and, in turn, improves several undesirable physical outcomes associated with excessive ST. The improvement in sleep quality was not directly related to increases in MPOD, and may be due to systemic reduction in oxidative stress and inflammation.

  17. Real-Time Strategy Video Game Experience and Visual Perceptual Learning.

    Science.gov (United States)

    Kim, Yong-Hwan; Kang, Dong-Wha; Kim, Dongho; Kim, Hye-Jin; Sasaki, Yuka; Watanabe, Takeo

    2015-07-22

    Visual perceptual learning (VPL) is defined as long-term improvement in performance on a visual-perception task after visual experiences or training. Early studies have found that VPL is highly specific for the trained feature and location, suggesting that VPL is associated with changes in the early visual cortex. However, the generality of visual skills enhancement attributable to action video-game experience suggests that VPL can result from improvement in higher cognitive skills. If so, experience in real-time strategy (RTS) video-game play, which may heavily involve cognitive skills, may also facilitate VPL. To test this hypothesis, we compared VPL between RTS video-game players (VGPs) and non-VGPs (NVGPs) and elucidated underlying structural and functional neural mechanisms. Healthy young human subjects underwent six training sessions on a texture discrimination task. Diffusion-tensor and functional magnetic resonance imaging were performed before and after training. VGPs performed better than NVGPs in the early phase of training. White-matter connectivity between the right external capsule and visual cortex and neuronal activity in the right inferior frontal gyrus (IFG) and anterior cingulate cortex (ACC) were greater in VGPs than NVGPs and were significantly correlated with RTS video-game experience. In both VGPs and NVGPs, there was task-related neuronal activity in the right IFG, ACC, and striatum, which was strengthened after training. These results indicate that RTS video-game experience, associated with changes in higher-order cognitive functions and connectivity between visual and cognitive areas, facilitates VPL in early phases of training. The results support the hypothesis that VPL can occur without involvement of only visual areas. Significance statement: Although early studies found that visual perceptual learning (VPL) is associated with involvement of the visual cortex, generality of visual skills enhancement by action video-game experience

  18. User-assisted video segmentation system for visual communication

    Science.gov (United States)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we split the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility in the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, and a point insertion process provides the feature points for the next frame's tracking.
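
    The three phases can be sketched with OpenCV as a stand-in (this is not the authors' system): automatic corner detection replaces the user-assisted selection and eigenvalue-based adjustment, pyramidal Lucas-Kanade tracking propagates the points, and a convex hull stands in for the contour-formation step; the video path 'video.avi' and all parameters are placeholders.

      import cv2
      import numpy as np

      cap = cv2.VideoCapture("video.avi")         # placeholder path
      ok, frame = cap.read()
      if not ok:
          raise SystemExit("could not read the input video")
      prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

      # Phase 1: pick strong feature points (the paper lets the user guide this step).
      points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01, minDistance=7)

      while True:
          ok, frame = cap.read()
          if not ok or points is None or len(points) == 0:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

          # Phase 2: track the feature points into the new frame.
          new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
          good = new_points[status.flatten() == 1]

          # Phase 3: form an object outline from the tracked points (convex hull here,
          # instead of the paper's contour formation plus point insertion).
          if len(good) >= 3:
              hull = cv2.convexHull(good.astype(np.float32))
              cv2.polylines(frame, [hull.astype(np.int32)], True, (0, 255, 0), 2)

          cv2.imshow("segmentation sketch", frame)
          if cv2.waitKey(30) & 0xFF == 27:        # Esc quits
              break
          prev_gray, points = gray, good.reshape(-1, 1, 2)

      cap.release()
      cv2.destroyAllWindows()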

  19. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses.

    Science.gov (United States)

    Molloy, Katharine; Griffiths, Timothy D; Chait, Maria; Lavie, Nilli

    2015-12-09

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying "inattentional deafness"--the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼ 100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 "awareness" response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory

  20. Real-time visual biofeedback during weight bearing improves therapy compliance in patients following lower extremity fractures.

    Science.gov (United States)

    Raaben, Marco; Holtslag, Herman R; Leenen, Luke P H; Augustine, Robin; Blokhuis, Taco J

    2018-01-01

    Individuals with lower extremity fractures are often instructed on how much weight to bear on the affected extremity. Previous studies have shown limited therapy compliance in weight bearing during rehabilitation. In this study we investigated the effect of real-time visual biofeedback on weight bearing in individuals with lower extremity fractures in two conditions: full weight bearing and touch-down weight bearing. Eleven participants with full weight bearing and 12 participants with touch-down weight bearing after lower extremity fractures were measured with an ambulatory biofeedback system. The participants first walked 15 m with the biofeedback system used only to register the weight bearing. The same protocol was then repeated with real-time visual feedback during weight bearing. The participants could thereby adapt their loading to the desired level and improve therapy compliance. In participants with full weight bearing, real-time visual biofeedback resulted in a significant increase in loading from 50.9±7.51% bodyweight (BW) without feedback to 63.2±6.74% BW with feedback (P=0.0016). In participants with touch-down weight bearing, the exerted lower extremity load decreased from 16.7±9.77 kg without feedback to 10.27±4.56 kg with feedback (P=0.0718). More importantly, the variance between individual steps significantly decreased after feedback (P=0.018). Ambulatory monitoring of weight bearing after lower extremity fractures showed that therapy compliance is low in both full and touch-down weight bearing. Real-time visual biofeedback resulted in significantly higher peak loads in full weight bearing and increased accuracy of individual steps in touch-down weight bearing. Real-time visual biofeedback therefore results in improved therapy compliance after lower extremity fractures. Copyright © 2017 Elsevier B.V. All rights reserved.
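
    The feedback logic implied by the study can be illustrated with a minimal, hypothetical sketch: each step's measured load is compared with the prescribed target and a cue is produced; the tolerance band, the target, and the example loads are invented and are not taken from the ambulatory system used in the study.

      def weight_bearing_feedback(measured, target, tolerance=0.10):
          """Return a simple cue for one step, given the measured load and target (same units)."""
          low, high = target * (1 - tolerance), target * (1 + tolerance)
          if measured < low:
              return "increase loading"
          if measured > high:
              return "reduce loading"
          return "on target"

      # Touch-down weight bearing example: 10 kg target, per-step loads (invented numbers).
      target_kg = 10.0
      step_loads = [16.7, 14.2, 11.0, 9.4, 10.3]   # loads converging once feedback is shown
      for i, load in enumerate(step_loads, 1):
          print(f"step {i}: {load:4.1f} kg -> {weight_bearing_feedback(load, target_kg)}")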

  1. Automated visual inspection system based on HAVNET architecture

    Science.gov (United States)

    Burkett, K.; Ozbayoglu, Murat A.; Dagli, Cihan H.

    1994-10-01

    In this study, the HAusdorff-Voronoi NETwork (HAVNET) developed at the UMR Smart Engineering Systems Lab is tested in the recognition of mounted circuit components commonly used in printed circuit board assembly systems. The automated visual inspection system used consists of a CCD camera, neural network based image processing software and a data acquisition card connected to a PC. The experiments are run in the Smart Engineering Systems Lab in the Engineering Management Dept. of the University of Missouri-Rolla. The performance analysis shows that the vision system is capable of recognizing different components under uncontrolled lighting conditions without being affected by rotation or scale differences. The results obtained are promising and the system can be used in real manufacturing environments. Currently the system is being customized for a specific manufacturing application.

  2. Relational time in anyonic systems

    Science.gov (United States)

    Nikolova, A.; Brennen, G. K.; Osborne, T. J.; Milburn, G. J.; Stace, T. M.

    2018-03-01

    In a seminal paper [Phys. Rev. D 27, 2885 (1983), 10.1103/PhysRevD.27.2885], Page and Wootters suggest that time evolution could be described solely in terms of correlations between systems and clocks, as a means of dealing with the "problem of time" stemming from vanishing Hamiltonian dynamics in many theories of quantum gravity. Their approach seeks to identify relational dynamics given a Hamiltonian constraint on the physical states. Here we present a "state-centric" reformulation of the Page and Wootters model better suited to cases where the Hamiltonian constraint is satisfied, such as anyons emerging in Chern-Simons theories. We describe relational time by encoding logical "clock" qubits into topologically protected anyonic degrees of freedom. The minimum temporal increment of such anyonic clocks is determined by the universality of the anyonic braid group, with nonuniversal models naturally exhibiting discrete time. We exemplify this approach by using SU(2)_2 anyons and discuss generalizations to other states and models.

  3. Designing visual displays and system models for safe reactor operations

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S.A.

    1995-12-31

    The material presented in this paper is based on two studies involving the design of visual displays and the user's prospective model of a system. The studies involve a methodology known as Neuro-Linguistic Programming and its use in expanding design choices from the operator's perspective image. The contents of this paper focus on the studies and how they are applicable to the safety of operating reactors.

  4. Designing visual displays and system models for safe reactor operations

    International Nuclear Information System (INIS)

    Brown-VanHoozer, S.A.

    1995-01-01

    The material presented in this paper is based on two studies involving the design of visual displays and the user's prospective model of a system. The studies involve a methodology known as Neuro-Linguistic Programming and its use in expanding design choices from the operator's perspective image. The contents of this paper focus on the studies and how they are applicable to the safety of operating reactors.

  5. Urban Space Explorer: A Visual Analytics System for Urban Planning.

    Science.gov (United States)

    Karduni, Alireza; Cho, Isaac; Wessel, Ginette; Ribarsky, William; Sauda, Eric; Dou, Wenwen

    2017-01-01

    Understanding people's behavior is fundamental to many planning professions (including transportation, community development, economic development, and urban design) that rely on data about frequently traveled routes, places, and social and cultural practices. Based on the results of a practitioner survey, the authors designed Urban Space Explorer, a visual analytics system that utilizes mobile social media to enable interactive exploration of public-space-related activity along spatial, temporal, and semantic dimensions.

  6. A Global System for Transportation Simulation and Visualization in Emergency Evacuation Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Wei [ORNL]; Liu, Cheng [ORNL]; Thomas, Neil [ORNL]; Bhaduri, Budhendra L [ORNL]; Han, Lee [University of Tennessee, Knoxville (UTK)]

    2015-01-01

    Simulation-based studies are frequently used in evacuation planning and decision-making processes. Given the complexity of transportation systems and limited data availability, most evacuation simulation models focus on particular geographic areas. With the continuing improvement of OpenStreetMap road networks and LandScan™ global population distribution data, we present WWEE, a uniform system for world-wide emergency evacuation simulations. WWEE uses a unified data structure for simulation inputs. It also integrates a super-node trip distribution model as the default simulation configuration to improve computational performance. Two levels of visualization tools are implemented for evacuation performance analysis: link-based macroscopic visualization and vehicle-based microscopic visualization. To handle the left-hand and right-hand traffic patterns of different countries, the authors propose a mirror technique that allows both scenarios to be experimented with without significantly changing the traffic simulation models. Ten cities in the US, Europe, the Middle East, and Asia are modeled for demonstration. While providing default traffic simulation models for fast and easy-to-use evacuation estimation and visualization, WWEE also retains the capability of interactive operation, allowing users to adopt customized traffic simulation models. For the first time, WWEE provides a unified platform for global evacuation researchers to estimate and visualize the performance of their transportation-system strategies under evacuation scenarios.
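
    A toy version of the mirror technique might look like the sketch below: reflect node longitudes about the network's central meridian so a right-hand-traffic simulator can be reused for left-hand-traffic countries, then reflect results back for visualization; the three-node network and the choice of reflection axis are invented for illustration.

      def mirror_network(nodes):
          """Reflect node coordinates (lon, lat) about the network's central meridian."""
          lons = [lon for lon, _ in nodes.values()]
          axis = (min(lons) + max(lons)) / 2.0
          return {nid: (2.0 * axis - lon, lat) for nid, (lon, lat) in nodes.items()}

      # Three invented intersections; real inputs would come from OpenStreetMap.
      nodes = {"A": (139.70, 35.68), "B": (139.76, 35.68), "C": (139.76, 35.62)}
      mirrored = mirror_network(nodes)             # run the right-hand-drive simulator on this copy
      restored = mirror_network(mirrored)          # reflecting twice recovers the original layout
      print(mirrored)
      print(restored)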

  7. A Visual-Aided Inertial Navigation and Mapping System

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-05-01

    Full Text Available State estimation is a fundamental necessity for any application involving autonomous robots. This paper describes a visual-aided inertial navigation and mapping system for application to autonomous robots. The system, which relies on Kalman filtering, is designed to fuse the measurements obtained from a monocular camera, an inertial measurement unit (IMU) and a position sensor (GPS). The estimated state consists of the full state of the vehicle: the position, orientation, their first derivatives and the parameter errors of the inertial sensors (i.e., the bias of gyroscopes and accelerometers). The system also provides the spatial locations of the visual features observed by the camera. The proposed scheme was designed by considering the limited resources commonly available in small mobile robots, while it is intended to be applied to cluttered environments in order to perform fully vision-based navigation in periods where the position sensor is not available. Moreover, the estimated map of visual features would be suitable for multiple tasks: (i) terrain analysis; (ii) three-dimensional (3D) scene reconstruction; (iii) localization, detection or perception of obstacles and generating trajectories to navigate around these obstacles; and (iv) autonomous exploration. In this work, simulations and experiments with real data are presented in order to validate and demonstrate the performance of the proposal.
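
    To give a flavor of the Kalman-filtering backbone (the actual system is a full visual-inertial filter estimating pose, velocity and sensor biases), here is a bare-bones one-dimensional constant-velocity filter that predicts at every step and corrects only when a position fix arrives; all noise values and the measurement schedule are assumptions made for this sketch.

      import numpy as np

      dt = 0.1
      F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity transition for [position, velocity]
      H = np.array([[1.0, 0.0]])                   # the position sensor observes position only
      Q = np.diag([1e-4, 1e-2])                    # process noise (assumed)
      R = np.array([[0.5]])                        # position measurement noise (assumed)

      x = np.array([[0.0], [0.0]])                 # state estimate
      P = np.eye(2)                                # estimate covariance

      rng = np.random.default_rng(0)
      true_pos, true_vel = 0.0, 1.0
      for step in range(100):
          true_pos += true_vel * dt

          # Predict (stands in for propagation driven by inertial measurements).
          x = F @ x
          P = F @ P @ F.T + Q

          # Correct only when a position fix arrives (every 10th step here).
          if step % 10 == 0:
              z = np.array([[true_pos + rng.normal(0.0, 0.7)]])
              y = z - H @ x                        # innovation
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
              x = x + K @ y
              P = (np.eye(2) - K @ H) @ P

      print(f"estimated position {x[0, 0]:.2f} (true {true_pos:.2f}), velocity {x[1, 0]:.2f} (true {true_vel:.2f})")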

  8. Candidate glutamatergic neurons in the visual system of Drosophila.

    Directory of Open Access Journals (Sweden)

    Shamprasad Varija Raghu

    Full Text Available The visual system of Drosophila contains approximately 60,000 neurons that are organized in parallel, retinotopically arranged columns. A large number of these neurons have been characterized in great anatomical detail. However, studies providing direct evidence for synaptic signaling and the neurotransmitter used by individual neurons are relatively sparse. Here we present a first layout of neurons in the Drosophila visual system that likely release glutamate as their major neurotransmitter. We identified 33 different types of neurons of the lamina, medulla, lobula and lobula plate. Based on the previous Golgi-staining analysis, the identified neurons are further classified into 16 major subgroups representing lamina monopolar (L), transmedullary (Tm), transmedullary Y (TmY), Y, medulla intrinsic (Mi, Mt, Pm, Dm, Mi Am), bushy T (T), translobula plate (Tlp), lobula intrinsic (Lcn, Lt, Li), lobula plate tangential (LPTCs) and lobula plate intrinsic (LPi) cell types. In addition, we found 11 cell types that were not described by the previous Golgi analysis. This classification of candidate glutamatergic neurons fosters the future neurogenetic dissection of information processing in circuits of the fly visual system.

  9. A computer graphics system for visualizing spacecraft in orbit

    Science.gov (United States)

    Eyles, Don E.

    1989-01-01

    To carry out unanticipated operations with resources already in space is part of the rationale for a permanently manned space station in Earth orbit. The astronauts aboard a space station will require an on-board, spatial display tool to assist the planning and rehearsal of upcoming operations. Such a tool can also help astronauts to monitor and control such operations as they occur, especially in cases where first-hand visibility is not possible. A computer graphics visualization system designed for such an application and currently implemented as part of a ground-based simulation is described. The visualization system presents to the user the spatial information available in the spacecraft's computers by drawing a dynamic picture containing the planet Earth, the Sun, a star field, and up to two spacecraft. The point of view within the picture can be controlled by the user to obtain a number of specific visualization functions. The elements of the display, the methods used to control the display's point of view, and some of the ways in which the system can be used are described.
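
    One conventional way to realize user control of the point of view is a "look-at" view matrix built from an eye position, a target and an up direction; the sketch below shows that standard construction (it is generic computer graphics, not code from the paper, and the example positions are invented).

      import numpy as np

      def look_at(eye, target, up=(0.0, 0.0, 1.0)):
          """Return a 4x4 world-to-camera matrix looking from `eye` toward `target`."""
          eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
          forward = target - eye
          forward /= np.linalg.norm(forward)
          right = np.cross(forward, up)
          right /= np.linalg.norm(right)
          true_up = np.cross(right, forward)

          view = np.eye(4)
          view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
          view[:3, 3] = -view[:3, :3] @ eye        # translate the world so the eye sits at the origin
          return view

      # Example: a viewpoint 10 km behind a spacecraft at the origin, "up" along the orbit normal.
      print(look_at(eye=(0.0, -10.0, 0.0), target=(0.0, 0.0, 0.0)))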

  10. Real-time analytics techniques to analyze and visualize streaming data

    CERN Document Server

    Ellis, Byron

    2014-01-01

    Construct a robust end-to-end solution for analyzing and visualizing streaming data Real-time analytics is the hottest topic in data analytics today. In Real-Time Analytics: Techniques to Analyze and Visualize Streaming Data, expert Byron Ellis teaches data analysts technologies to build an effective real-time analytics platform. This platform can then be used to make sense of the constantly changing data that is beginning to outpace traditional batch-based analysis platforms. The author is among a very few leading experts in the field. He has a prestigious background in research, development,

  11. Development of the updated system of city underground pipelines based on Visual Studio

    Science.gov (United States)

    Zhang, Jianxiong; Zhu, Yun; Li, Xiangdong

    2009-10-01

    Our city operates an integrated pipeline network management system built on ArcGIS Engine 9.1 as the underlying development platform, with Oracle9i as the database for storing data. In this system, ArcGIS SDE 9.1 is applied as the spatial data engine, and the synthetic management software was developed with the Visual Studio visualization development tools. Because the pipeline update function of the original system suffered from slow updates and occasional data loss, and to ensure that the underground pipeline data can be updated conveniently and frequently in real time while preserving its currency and integrity, we developed a new update module and added it to the system. The module provides powerful data update functions and supports data input, output and rapid bulk updates. The new module was developed with the Visual Studio visualization development tools and uses Access as the underlying database for data storage. Graphics can be edited in AutoCAD, and the database is updated through the link between the graphics and the system. Practice shows that the update module has good compatibility with the original system and that database updates are reliable and efficient.

  12. Designing and visualizing the water-energy-food nexus system

    Science.gov (United States)

    Endo, A.; Kumazawa, T.; Yamada, M.; Kato, T.

    2017-12-01

    The objective of this study is to design and visualize a water-energy-food nexus system to identify the interrelationships between water-energy-food (WEF) resources and to understand the subsequent complexity of WEF nexus systems holistically, taking an interdisciplinary approach. Object-oriented concepts and ontology engineering methods were applied according to the hypothesis that the chains of changes in linkages between water, energy, and food resources holistically affect the water-energy-food nexus system, including natural and social systems, both temporally and spatially. The water-energy-food nexus system that is developed is significant because it allows us to: 1) visualize linkages between water, energy, and food resources in social and natural systems; 2) identify tradeoffs between these resources; 3) find a way of using resources efficiently or enhancing the synergy between the utilization of different resources; and 4) aid scenario planning using economic tools. The paper also discusses future challenges for applying the developed water-energy-food nexus system in other areas.

  13. LongLine: Visual Analytics System for Large-scale Audit Logs

    Directory of Open Access Journals (Sweden)

    Seunghoon Yoo

    2018-03-01

    Full Text Available Audit logs are different from other software logs in that they record the most primitive events (i.e., system calls) in modern operating systems. Audit logs contain a detailed trace of an operating system, and thus have received great attention from security experts and system administrators. However, the complexity and size of audit logs, which increase in real time, have hindered analysts from understanding and analyzing them. In this paper, we present a novel visual analytics system, LongLine, which enables interactive visual analyses of large-scale audit logs. LongLine lowers the interpretation barrier of audit logs by employing human-understandable representations (e.g., file paths and commands) instead of abstract indicators of operating systems (e.g., file descriptors), as well as revealing the temporal patterns of the logs in a multi-scale fashion with meaningful granularity of time in mind (e.g., hourly, daily, and weekly). LongLine also streamlines comparative analysis between interesting subsets of logs, which is essential in detecting anomalous behaviors of systems. In addition, LongLine allows analysts to monitor the system state in a streaming fashion, keeping the latency between log creation and visualization less than one minute. Finally, we evaluate our system through a case study and a scenario analysis with security experts.
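
    As a minimal illustration of the multi-scale temporal aggregation idea (hourly and daily bucketing of events), the sketch below counts timestamped audit events per bucket. The event tuples and field contents are invented and do not reflect LongLine's actual data model.

        from collections import Counter
        from datetime import datetime

        # hypothetical audit events: (ISO timestamp, command) pairs
        events = [
            ("2018-03-01T10:15:02", "open /etc/passwd"),
            ("2018-03-01T10:47:51", "execve /usr/bin/ssh"),
            ("2018-03-02T09:03:10", "open /var/log/auth.log"),
        ]

        def bucket(events, fmt):
            """Count events per time bucket; fmt selects the granularity."""
            counts = Counter()
            for ts, _cmd in events:
                counts[datetime.fromisoformat(ts).strftime(fmt)] += 1
            return counts

        hourly = bucket(events, "%Y-%m-%d %H:00")   # hourly granularity
        daily = bucket(events, "%Y-%m-%d")          # daily granularity
        print(hourly, daily)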

  14. Visual Peoplemeter: A Vision-based Television Audience Measurement System

    Directory of Open Access Journals (Sweden)

    SKELIN, A. K.

    2014-11-01

    Full Text Available Visual peoplemeter is a vision-based measurement system that objectively evaluates attentive behavior for TV audience rating, thus offering a solution to some of the drawbacks of current manual-logging peoplemeters. In this paper, some limitations of current audience measurement systems are reviewed and a novel vision-based system aiming at passive metering of viewers is prototyped. The system uses a camera mounted on a television as its sensing modality and applies advanced computer vision algorithms to detect and track a person and to recognize attentional states. Feasibility of the system is evaluated on a secondary dataset. The results show that the proposed system can analyze a viewer's attentive behavior, therefore enabling passive estimates of relevant audience measurement categories.
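
    The paper's detection code is not reproduced here; as a rough sketch of the camera-based viewer-detection step, the snippet below counts roughly frontal faces seen by a TV-mounted camera using OpenCV's stock Haar cascade. The cascade file and thresholds are generic OpenCV defaults, not the authors' algorithms.

        import cv2  # assumes the opencv-python package is installed

        # stock frontal-face Haar cascade shipped with OpenCV
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def count_viewers(frame):
            """Return the number of roughly frontal faces in a BGR frame."""
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            return len(faces)

        cap = cv2.VideoCapture(0)          # TV-mounted camera
        ok, frame = cap.read()
        if ok:
            print("viewers facing the screen:", count_viewers(frame))
        cap.release()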

  15. Reading Time Allocation Strategies and Working Memory Using Rapid Serial Visual Presentation

    Science.gov (United States)

    Busler, Jessica N.; Lazarte, Alejandro A.

    2017-01-01

    Rapid serial visual presentation (RSVP) is a useful method for controlling the timing of text presentations and studying how readers' characteristics, such as working memory (WM) and reading strategies for time allocation, influence text recall. In the current study, a modified version of RSVP (Moving Window RSVP [MW-RSVP]) was used to induce…

  16. Effect of Nicotine on Audio and Visual Reaction Time in Dipping ...

    African Journals Online (AJOL)

    Nicotine absorbed into the blood is harmful, and because there are few studies in India on nicotine's influence on reaction time, especially in smokeless tobacco users, we studied this. Reaction time is a measure of sensorimotor integration in a person. We used a PC 1000 Hz reaction timer to record the audio and visual ...

  17. Dissociable influences of auditory object vs. spatial attention on visual system oscillatory activity.

    Directory of Open Access Journals (Sweden)

    Jyrki Ahveninen

    Full Text Available Given that both auditory and visual systems have anatomically separate object identification ("what") and spatial ("where") pathways, it is of interest whether attention-driven cross-sensory modulations occur separately within these feature domains. Here, we investigated how auditory "what" vs. "where" attention tasks modulate activity in visual pathways using cortically constrained source estimates of magnetoencephalographic (MEG) oscillatory activity. In the absence of visual stimuli or tasks, subjects were presented with a sequence of auditory-stimulus pairs and instructed to selectively attend to phonetic ("what") vs. spatial ("where") aspects of these sounds, or to listen passively. To investigate sustained modulatory effects, oscillatory power was estimated from time periods between sound-pair presentations. In comparison to attention to sound locations, phonetic auditory attention was associated with stronger alpha (7-13 Hz) power in several visual areas (primary visual cortex; lingual, fusiform, and inferior temporal gyri; lateral occipital cortex), as well as in higher-order visual/multisensory areas including lateral/medial parietal and retrosplenial cortices. Region-of-interest (ROI) analyses of dynamic changes, from which the sustained effects had been removed, suggested further power increases during Attend Phoneme vs. Location centered at the alpha range 400-600 ms after the onset of the second sound of each stimulus pair. These results suggest distinct modulations of visual system oscillatory activity during auditory attention to sound object identity ("what") vs. sound location ("where"). The alpha modulations could be interpreted to reflect enhanced crossmodal inhibition of feature-specific visual pathways and adjacent audiovisual association areas during "what" vs. "where" auditory attention.

  18. The modulation of simple reaction time by the spatial probability of a visual stimulus

    Directory of Open Access Journals (Sweden)

    Carreiro L.R.R.

    2003-01-01

    Full Text Available Simple reaction time (SRT) in response to visual stimuli can be influenced by many stimulus features. The speed and accuracy with which observers respond to a visual stimulus may be improved by prior knowledge about the stimulus location, which can be obtained by manipulating the spatial probability of the stimulus. However, when higher spatial probability is achieved by holding constant the stimulus location throughout successive trials, the resulting improvement in performance can also be due to local sensory facilitation caused by the recurrent spatial location of a visual target (position priming). The main objective of the present investigation was to quantitatively evaluate the modulation of SRT by the spatial probability structure of a visual stimulus. In two experiments the volunteers had to respond as quickly as possible to the visual target presented on a computer screen by pressing an optic key with the index finger of the dominant hand. Experiment 1 (N = 14) investigated how SRT changed as a function of both the different levels of spatial probability and the subject's explicit knowledge about the precise probability structure of visual stimulation. We found a gradual decrease in SRT with increasing spatial probability of a visual target regardless of the observer's previous knowledge concerning the spatial probability of the stimulus. Error rates, below 2%, were independent of the spatial probability structure of the visual stimulus, suggesting the absence of a speed-accuracy trade-off. Experiment 2 (N = 12) examined whether changes in SRT in response to a spatially recurrent visual target might be accounted for simply by sensory and temporally local facilitation. The findings indicated that the decrease in SRT brought about by a spatially recurrent target was associated with its spatial predictability, and could not be accounted for solely in terms of sensory priming.

  19. Towards a New Generation of Time-Series Visualization Tools in the ESA Heliophysics Science Archives

    Science.gov (United States)

    Perez, H.; Martinez, B.; Cook, J. P.; Herment, D.; Fernandez, M.; De Teodoro, P.; Arnaud, M.; Middleton, H. R.; Osuna, P.; Arviset, C.

    2017-12-01

    During the last decades a varied set of Heliophysics missions have allowed the scientific community to gain a better knowledge of the solar atmosphere and activity. The remote sensing images of missions such as SOHO have paved the ground for Helio-based spatial data visualization software such as JHelioViewer/Helioviewer. On the other hand, the huge amount of in-situ measurements provided by other missions such as Cluster provides a broad base for plot visualization software whose reach is still far from being fully exploited. The Heliophysics Science Archives within the ESAC Science Data Center (ESDC) already provide a first generation of tools for time-series visualization focusing on each mission's needs: visualization of quicklook plots, cross-calibration time series, pre-generated/on-demand multi-plot stacks (Cluster), basic plot zoom in/out options (Ulysses) and easy navigation through the plots in time (Ulysses, Cluster, ISS-Solaces). However, as needs evolve, scientists involved in new missions require, among other improvements, plotting of multi-variable data, interactive synchronization of heat-map stacks and selection of axis variables. The new Heliophysics archives (such as Solar Orbiter) and the evolution of existing ones (Cluster) intend to address these new challenges. This paper provides an overview of the different approaches to visualizing time series followed within the ESA Heliophysics Archives and their foreseen evolution.

  20. 4D Reconstruction and Visualization of Cultural Heritage: Analyzing Our Legacy Through Time

    Science.gov (United States)

    Rodríguez-Gonzálvez, P.; Muñoz-Nieto, A. L.; del Pozo, S.; Sanchez-Aparicio, L. J.; Gonzalez-Aguilera, D.; Micoli, L.; Gonizzi Barsanti, S.; Guidi, G.; Mills, J.; Fieber, K.; Haynes, I.; Hejmanowska, B.

    2017-02-01

    Temporal analyses and multi-temporal 3D reconstruction are fundamental for the preservation and maintenance of all forms of Cultural Heritage (CH) and are the basis for decisions related to interventions and promotion. Introducing the fourth dimension of time into three-dimensional geometric modelling of real data allows the creation of a multi-temporal representation of a site. In this way, scholars from various disciplines (surveyors, geologists, archaeologists, architects, philologists, etc.) are provided with a new set of tools and working methods to support the study of the evolution of heritage sites, both to develop hypotheses about the past and to model likely future developments. The capacity to "see" the dynamic evolution of CH assets across different spatial scales (e.g. building, site, city or territory) compressed in a diachronic model affords the possibility to better understand the present status of CH according to its history. However, there are numerous challenges in order to carry out 4D modelling and the requisite multi-data source integration. It is necessary to identify the specifications, needs and requirements of the CH community to understand the required levels of 4D model information. In this way, it is possible to determine the optimum material and technologies to be utilised at different CH scales, as well as the data management and visualization requirements. This manuscript aims to provide a comprehensive approach for CH time-varying representations, analysis and visualization across different working scales and environments: rural landscape, urban landscape and architectural scales. Within this aim, the different available metric data sources are systemized and evaluated in terms of their suitability.

  1. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  2. Subjective visual vertical assessment with mobile virtual reality system

    Directory of Open Access Journals (Sweden)

    Ingrida Ulozienė

    Full Text Available Background and objective: The subjective visual vertical (SVV) is a measure of a subject's perceived verticality, and a sensitive test of vestibular dysfunction. Despite this, and consequent upon technical and logistical limitations, SVV has not entered mainstream clinical practice. The aim of the study was to develop a mobile virtual-reality-based system for the SVV test, evaluate the suitability of different controllers and assess the system's usability in practical settings. Materials and methods: In this study, we describe a novel virtual-reality-based system that has been developed to test SVV using integrated software and hardware, and report normative values across a healthy population. Participants wore a mobile virtual reality headset in order to observe a 3D stimulus presented across separate conditions – static, dynamic and an immersive real-world ("boat in the sea") SVV test. The virtual reality environment was controlled by the tester using Bluetooth-connected controllers. Participants controlled the movement of a vertical arrow, using either a gesture-control armband or a general-purpose gamepad, to indicate perceived verticality. We aimed to compare two different methods of object control in the system, determine normative values and compare them with literature data, evaluate the developed system using the System Usability Scale questionnaire, and assess possible virtually induced dizziness using a subjective visual analog scale. Results: There were no statistically significant differences in SVV values during static, dynamic and virtual reality stimulus conditions obtained using the two different controllers, and the results are comparable to those previously reported in the literature using alternative methodologies. The SUS scores for the system were high, with a median of 82.5 for the Myo controller and of 95.0 for the Gamepad controller, representing a statistically significant difference between the two

  3. Event Based Simulator for Parallel Computing over the Wide Area Network for Real Time Visualization

    Science.gov (United States)

    Sundararajan, Elankovan; Harwood, Aaron; Kotagiri, Ramamohanarao; Satria Prabuwono, Anton

    As the computational requirements of applications in computational science continue to grow tremendously, the use of computational resources distributed across the Wide Area Network (WAN) becomes advantageous. However, not all applications can be executed over the WAN due to communication overhead that can drastically slow down the computation. In this paper, we introduce an event-based simulator to investigate the performance of parallel algorithms executed over the WAN. The event-based simulator, known as SIMPAR (SIMulator for PARallel computation), simulates the actual computations and communications involved in parallel computation over the WAN using time stamps. Visualization of real-time applications requires a steady stream of processed data. Hence, SIMPAR may prove to be a valuable tool to investigate the types of applications and computing resource requirements needed to provide an uninterrupted flow of processed data for real-time visualization purposes. The results obtained from the simulation show concurrence with the expected performance using the L-BSP model.
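
    SIMPAR itself is not reproduced here; the toy event-driven simulator below (illustrative names and delay model only) shows the time-stamped event-queue mechanism such a simulator rests on, charging each task a computation time and each result a WAN communication delay.

        import heapq

        def simulate(tasks, comp_time=1.0, wan_delay=0.5):
            """Toy time-stamped simulation of parallel tasks whose results cross a WAN."""
            clock, queue, finished = 0.0, [], []
            for tid in tasks:
                heapq.heappush(queue, (comp_time, "done", tid))      # computation finishes
            while queue:
                clock, kind, tid = heapq.heappop(queue)
                if kind == "done":
                    # the result still has to cross the WAN to the collector
                    heapq.heappush(queue, (clock + wan_delay, "recv", tid))
                else:
                    finished.append((tid, clock))
            return clock, finished

        makespan, log = simulate(range(4))
        print("simulated completion time:", makespan)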

  4. PAVA: Physiological and Anatomical Visual Analytics for Mapping of Tissue-Specific Concentration and Time-Course Data

    Science.gov (United States)

    We describe the development and implementation of a Physiological and Anatomical Visual Analytics tool (PAVA), a web browser-based application, used to visualize experimental/simulated chemical time-course data (dosimetry), epidemiological data and Physiologically-Annotated Data ...

  5. Developing a Data Visualization System for the Bank of America Chicago Marathon (Chicago, Illinois USA).

    Science.gov (United States)

    Hanken, Taylor; Young, Sam; Smilowitz, Karen; Chiampas, George; Waskowski, David

    2016-10-01

    As one of the largest marathons worldwide, the Bank of America Chicago Marathon (BACCM; Chicago, Illinois USA) accumulates high volumes of data. Race organizers and engaged agencies need the ability to access specific data in real-time. This report details a data visualization system designed for the Chicago Marathon and establishes key principles for event management data visualization. The data visualization system allows for efficient data communication among the organizing agencies of Chicago endurance events. Agencies can observe the progress of the race throughout the day and obtain needed information, such as the number and location of runners on the course and current weather conditions. Implementation of the system can reduce time-consuming, face-to-face interactions between involved agencies by having key data streams in one location, streamlining communications with the purpose of improving race logistics, as well as medical preparedness and response. Hanken T, Young S, Smilowitz K, Chiampas G, Waskowski D. Developing a data visualization system for the Bank of America Chicago Marathon (Chicago, Illinois USA). Prehosp Disaster Med. 2016;31(5):572-577.

  6. Research on the Visual Processing System of the Punch Press

    Directory of Open Access Journals (Sweden)

    Sun Xuan

    2016-01-01

    Full Text Available Most raw materials in small hardware processing are plate scraps, which are handled through the manual operation of an ordinary punch press, an approach with low production efficiency and high labor intensity. To improve the automation level of production, a visual processing system for a punch press manipulator was developed and designed based on the MFC tools of the Visual Studio software platform. Through image acquisition and image processing, the system obtains information about the plate to be processed, such as its shape, length, center of gravity and pose, and provides the relevant parameters for the feeding manipulator to grip the plate and place it in position on the punch table, as well as for automatic programming of the punching machine, thereby realizing automatic press feeding and processing.
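
    As a sketch of how the plate's centre of gravity and orientation can be recovered from a binary image (the paper's MFC-based implementation is not shown), the snippet below uses raw image moments; the synthetic image and the assumption of an already-segmented plate are illustrative only.

        import numpy as np

        def plate_pose(binary):
            """Centroid and principal-axis angle of a binary plate image."""
            ys, xs = np.nonzero(binary)          # pixel coordinates of the plate
            cx, cy = xs.mean(), ys.mean()        # centre of gravity
            # central second-order moments give the principal-axis orientation
            mu20 = ((xs - cx) ** 2).mean()
            mu02 = ((ys - cy) ** 2).mean()
            mu11 = ((xs - cx) * (ys - cy)).mean()
            angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
            return (cx, cy), angle

        img = np.zeros((60, 80), dtype=np.uint8)
        img[20:40, 10:50] = 1                     # synthetic rectangular plate
        center, theta = plate_pose(img)
        print(center, np.degrees(theta))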

  7. Traffic Visualization

    DEFF Research Database (Denmark)

    Picozzi, Matteo; Verdezoto, Nervo; Pouke, Matti

    2013-01-01

    In this paper, we present a space-time visualization to provide a city's decision-makers with the ability to analyse and uncover important "city events" in an understandable manner for city planning activities. An interactive Web mashup visualization is presented that integrates several visualization techniques to give a rapid overview of traffic data. We illustrate our approach as a case study for traffic visualization systems, using datasets from the city of Oulu, and it can be extended to other city planning activities. We also report the feedback of real users (traffic management employees, traffic police...

  8. Efficient Delivery and Visualization of Long Time-Series Datasets Using Das2 Tools

    Science.gov (United States)

    Piker, C.; Granroth, L.; Faden, J.; Kurth, W. S.

    2017-12-01

    For over 14 years the University of Iowa Radio and Plasma Wave Group has utilized a network transparent data streaming and visualization system for most daily data review and collaboration activities. This system, called Das2, was originally designed in support of the Cassini Radio and Plasma Wave Science (RPWS) investigation, but is now relied on for daily review and analysis of Voyager, Polar, Cluster, Mars Express, Juno and other mission results. In light of current efforts to promote automatic data distribution in space physics it seems prudent to provide an overview of our open source Das2 programs and interface definitions to the wider community and to recount lessons learned. This submission will provide an overview of interfaces that define the system, describe the relationship between the Das2 effort and Autoplot and will examine handling Cassini RPWS Wideband waveforms and dynamic spectra as examples of dealing with long time-series data sets. In addition, the advantages and limitations of the current Das2 tool set will be discussed, as well as lessons learned that are applicable to other data sharing initiatives. Finally, plans for future developments including improved catalogs to support 'no-software' data sources and redundant multi-server fail over, as well as new adapters for CSV (Comma Separated Values) and JSON (Javascript Object Notation) output to support Cassini closeout and the HAPI (Heliophysics Application Programming Interface) initiative are outlined.

  9. A Cooking Recipe Recommendation System with Visual Recognition of Food Ingredients

    Directory of Open Access Journals (Sweden)

    Keiji Yanai

    2014-04-01

    Full Text Available In this paper, we propose a cooking recipe recommendation system which runs on a consumer smartphone as an interactive mobile application. The proposed system employs real-time visual object recognition of food ingredients and recommends cooking recipes related to the recognized ingredients. Because of the visual recognition, by simply pointing the built-in camera of a smartphone at food ingredients, a user can instantly see related cooking recipes. The objective of the proposed system is to assist people in deciding on a cooking recipe at grocery stores or in the kitchen. In the current implementation, the system can recognize 30 kinds of food ingredients in 0.15 seconds, and it has achieved a recognition rate of 83.93% within the top six candidates. Through a user study, we confirmed the effectiveness of the proposed system.

  10. Time Error Analysis of SOE System Using Network Time Protocol

    International Nuclear Information System (INIS)

    Keum, Jong Yong; Park, Geun Ok; Park, Heui Youn

    2005-01-01

    To assess the accuracy of time in the fully digitalized SOE (Sequence of Events) system, we used a formal specification of the Network Time Protocol (NTP) Version 3, which is used to synchronize timekeeping among a set of distributed computers. By constructing a simple experimental environment and experimenting with internet time synchronization, we analyzed the time errors of the local clocks of the SOE system synchronized with a time server via computer networks.
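
    The clock-offset arithmetic at the core of NTP, which such an error analysis builds on, is shown below; the four timestamps follow the standard NTP exchange (client send, server receive, server send, client receive) and the example numbers are made up.

        def ntp_offset_delay(t1, t2, t3, t4):
            """Standard NTP estimates from one request/response exchange.

            t1: client transmit, t2: server receive,
            t3: server transmit, t4: client receive (seconds)."""
            offset = ((t2 - t1) + (t3 - t4)) / 2.0   # local clock error relative to the server
            delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
            return offset, delay

        # made-up timestamps: the client clock runs about 5 ms behind the server
        print(ntp_offset_delay(100.000, 100.015, 100.016, 100.021))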

  11. Preliminary study of visual perspective in mental time travel in schizophrenia.

    Science.gov (United States)

    Wang, Ya; Wang, Yi; Zhao, Qing; Cui, Ji-Fang; Hong, Xiao-Hong; Chan, Raymond Ck

    2017-10-01

    This study explored the specificity and visual perspective of mental time travel in schizophrenia. Fifteen patients with schizophrenia and 18 controls were recruited. Participants were asked to recall or imagine specific events according to cue words. Results showed that patients with schizophrenia generated fewer specific events than controls, and recalled events were more specific than imagined events. Patients with schizophrenia adopted the field perspective less and the observer perspective more than controls. These results suggested that patients with schizophrenia were impaired in mental time travel in terms of both specificity and visual perspective. Further studies are needed to identify the underlying mechanisms. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Distributed Systems for Problems in Robust Control and Visual Tracking

    National Research Council Canada - National Science Library

    Tannenbaum, Allen

    2000-01-01

    .... A key application is in controlled active vision, including visual tracking, the control of autonomous vehicles, motion planning, and the utilization of visual information in guidance and control...

  13. Interactive visualization system to analyze corrugated millimeter-waveguide component of ECH in nuclear fusion with FDTD simulation

    International Nuclear Information System (INIS)

    Kashima, N; Nakamura, H; Kubo, S; Tamura, Y; Ito, A M

    2014-01-01

    We have simulated the distribution of electromagnetic waves through a system composed of miter bends using Finite-Difference Time-Domain (FDTD) simulation. We have developed an interactive visualization system, using a new interactive GUI composed of a virtual reality system and an Android tablet, to analyze the FDTD simulation. The effect of grooves in the waveguide system has been investigated quantitatively with the visualization system. Comparing the waveguide system with and without grooves, the grooves have been confirmed to suppress the surface current at the metal surface. The surface current at complex shapes such as the miter bend has also been investigated.
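
    For readers unfamiliar with the FDTD method underlying the simulation, a minimal one-dimensional Yee update loop in free space and normalized units is sketched below; the actual waveguide/miter-bend model is three-dimensional and far more involved.

        import numpy as np

        def fdtd_1d(n_cells=200, n_steps=400, src=100):
            """Minimal 1D FDTD (Yee) loop with a soft Gaussian pulse source."""
            ez = np.zeros(n_cells)   # electric field
            hy = np.zeros(n_cells)   # magnetic field
            for t in range(n_steps):
                hy[:-1] += 0.5 * (ez[1:] - ez[:-1])          # update H from the curl of E
                ez[1:] += 0.5 * (hy[1:] - hy[:-1])           # update E from the curl of H
                ez[src] += np.exp(-((t - 30) / 10.0) ** 2)   # source at the grid centre
            return ez

        print(fdtd_1d()[:5])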

  14. Visual exploration of movement and event data with interactive time masks

    Directory of Open Access Journals (Sweden)

    Natalia Andrienko

    2017-03-01

    Full Text Available We introduce the concept of a time mask, which is a type of temporal filter suitable for selecting multiple disjoint time intervals in which some query conditions are fulfilled. Such a filter can be applied to time-referenced objects, such as events and trajectories, to select those objects or segments of trajectories that fall within one of the selected time intervals. The selected subsets of objects or segments are dynamically summarized in various ways, and the summaries are represented visually on maps and/or other displays to enable exploration. Time mask filtering can be especially helpful in the analysis of disparate data (e.g., event records, positions of moving objects, and time series of measurements), which may come from different sources. To detect relationships between such data, the analyst may set query conditions on the basis of one dataset and investigate the subsets of objects and values in the other datasets that co-occurred in time with these conditions. We describe the desired features of an interactive tool for time mask filtering and present a possible implementation of such a tool. Using the example of analysing two real-world data collections related to aviation and maritime traffic, we show how time masks can be used in combination with other types of filters and demonstrate the utility of time mask filtering. Keywords: Data visualization, Interactive visualization, Interaction technique
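
    A minimal sketch of the time-mask idea follows: a boolean mask is derived from a query condition on one time series, converted into disjoint intervals, and used to filter time-stamped events from another dataset. All data and thresholds are invented for illustration.

        import numpy as np

        def mask_to_intervals(times, mask):
            """Turn a boolean time mask into disjoint (start, end) intervals."""
            intervals, start = [], None
            for t, m in zip(times, mask):
                if m and start is None:
                    start = t
                elif not m and start is not None:
                    intervals.append((start, t))
                    start = None
            if start is not None:
                intervals.append((start, times[-1]))
            return intervals

        times = np.arange(0, 10, 1.0)                       # measurement timestamps
        speed = np.array([2, 8, 9, 3, 1, 7, 8, 9, 2, 1.0])  # e.g., vessel speed
        intervals = mask_to_intervals(times, speed > 5)     # query condition defines the mask

        events = [0.5, 1.5, 4.2, 6.7, 9.5]                  # time-stamped events from another source
        selected = [e for e in events if any(a <= e < b for a, b in intervals)]
        print(intervals, selected)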

  15. Road Vehicle Monitoring System Based on Intelligent Visual Internet of Things

    Directory of Open Access Journals (Sweden)

    Qingwu Li

    2015-01-01

    Full Text Available In recent years, with the rapid development of video surveillance infrastructure, more and more intelligent surveillance systems have employed computer vision and pattern recognition techniques. In this paper, we present a novel intelligent surveillance system used for the management of road vehicles based on Intelligent Visual Internet of Things (IVIoT. The system has the ability to extract the vehicle visual tags on the urban roads; in other words, it can label any vehicle by means of computer vision and therefore can easily recognize vehicles with visual tags. The nodes designed in the system can be installed not only on the urban roads for providing basic information but also on the mobile sensing vehicles for providing mobility support and improving sensing coverage. Visual tags mentioned in this paper consist of license plate number, vehicle color, and vehicle type and have several additional properties, such as passing spot and passing moment. Moreover, we present a fast and efficient image haze removal method to deal with haze weather condition. The experiment results show that the designed road vehicle monitoring system achieves an average real-time tracking accuracy of 85.80% under different conditions.

  16. Generalized Framework and Algorithms for Illustrative Visualization of Time-Varying Data on Unstructured Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Alexander S. Rattner; Donna Post Guillen; Alark Joshi

    2012-12-01

    Photo- and physically-realistic techniques are often insufficient for visualization of simulation results, especially for 3D and time-varying datasets. Substantial research efforts have been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. While these efforts have yielded valuable visualization results, a great deal of work has been duplicated across studies, as individual research groups often develop purpose-built platforms. Additionally, interoperability between illustrative visualization software is limited due to the specialized processing and rendering architectures employed in different studies. In this investigation, a generalized framework for illustrative visualization is proposed and implemented in marmotViz, a ParaView plugin, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Detailed descriptions of the region-of-interest identification and feature-tracking algorithms incorporated into this tool are provided. Additionally, implementations of multiple illustrative effect algorithms are presented to demonstrate the use and flexibility of this framework. By providing a framework and useful underlying functionality, the marmotViz tool can act as a springboard for future research in the field of illustrative visualization.

  17. Neuromorphic VLSI vision system for real-time texture segregation.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2008-10-01

    The visual system of the brain can perceive an external scene in real time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing a novel perception system from an engineering perspective. The aim of this study is to develop vision system hardware, inspired by the hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-of-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina in order to produce Gabor-like receptive fields that are tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of the complex cells. Using this system, the neural images of simple cells were computed in real time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system and the filtered images were combined to segregate areas of different texture orientation with the aid of the FPGA. The present system is also useful for investigating the functions of higher-order cells that can be obtained by combining the simple and complex cells.
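
    A rough software analogue of the orientation-selective filtering stage is sketched below: Gabor kernels at two orthogonal orientations are convolved with a synthetic texture and their rectified responses compared. The kernel parameters are arbitrary, and the code is not a model of the VLSI chips themselves.

        import numpy as np
        from scipy.ndimage import convolve

        def gabor_kernel(theta, size=15, wavelength=6.0, sigma=3.0):
            """Real-valued Gabor kernel tuned to orientation theta (radians)."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            yr = -x * np.sin(theta) + y * np.cos(theta)
            return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

        # synthetic texture: vertical stripes on the left, horizontal stripes on the right
        img = np.zeros((64, 64))
        img[:, :32] = np.sin(np.arange(32) * 1.0)[None, :]
        img[:, 32:] = np.sin(np.arange(64) * 1.0)[:, None]

        resp_v = np.abs(convolve(img, gabor_kernel(0.0)))        # energy for vertical stripes
        resp_h = np.abs(convolve(img, gabor_kernel(np.pi / 2)))  # energy for horizontal stripes
        segmentation = resp_v > resp_h                           # crude texture-boundary map
        print(segmentation.mean(axis=0)[28:36])                  # transition around column 32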

  18. Scientific Visualization & Modeling for Earth Systems Science Education

    Science.gov (United States)

    Chaudhury, S. Raj; Rodriguez, Waldo J.

    2003-01-01

    Providing research experiences for undergraduate students in Earth Systems Science (ESS) poses several challenges at smaller academic institutions that might lack dedicated resources for this area of study. This paper describes the development of an innovative model that involves students with majors in diverse scientific disciplines in authentic ESS research. In studying global climate change, experts typically use scientific visualization techniques applied to remote sensing data collected by satellites. In particular, many problems related to environmental phenomena can be quantitatively addressed by investigations based on datasets related to the scientific endeavours such as the Earth Radiation Budget Experiment (ERBE). Working with data products stored at NASA's Distributed Active Archive Centers, visualization software specifically designed for students and an advanced, immersive Virtual Reality (VR) environment, students engage in guided research projects during a structured 6-week summer program. Over the 5-year span, this program has afforded the opportunity for students majoring in biology, chemistry, mathematics, computer science, physics, engineering and science education to work collaboratively in teams on research projects that emphasize the use of scientific visualization in studying the environment. Recently, a hands-on component has been added through science student partnerships with school-teachers in data collection and reporting for the GLOBE Program (GLobal Observations to Benefit the Environment).

  19. Head Worn Display System for Equivalent Visual Operations

    Science.gov (United States)

    Cupero, Frank; Valimont, Brian; Wise, John; Best, Carl; DeMers, Bob

    2009-01-01

    Head-Worn Displays or so-called, near-to-eye displays have potentially significant advantages in terms of cost, overcoming cockpit space constraints, and for the display of spatially-integrated information. However, many technical issues need to be overcome before these technologies can be successfully introduced into commercial aircraft cockpits. The results of three activities are reported. First, the near-to-eye display design, technological, and human factors issues are described and a literature review is presented. Second, the results of a fixed-base piloted simulation, investigating the impact of near to eye displays on both operational and visual performance is reported. Straight-in approaches were flown in simulated visual and instrument conditions while using either a biocular or a monocular display placed on either the dominant or non-dominant eye. The pilot's flight performance, visual acuity, and ability to detect unsafe conditions on the runway were tested. The data generally supports a monocular design with minimal impact due to eye dominance. Finally, a method for head tracker system latency measurement is developed and used to compare two different devices.

  20. ARM-based visual processing system for prosthetic vision.

    Science.gov (United States)

    Matteucci, Paul B; Byrnes-Preston, Philip; Chen, Spencer C; Lovell, Nigel H; Suaning, Gregg J

    2011-01-01

    A growing number of prosthetic devices have been shown to provide visual perception to the profoundly blind through electrical neural stimulation. These first-generation devices offer promising outcomes to those affected by degenerative disorders such as retinitis pigmentosa. Although prosthetic approaches vary in their placement of the stimulating array (visual cortex, optic nerve, epi-retinal surface, sub-retinal surface, supra-choroidal space, etc.), most of the solutions incorporate an externally worn device to acquire and process video and to provide the implant with instructions on how to deliver electrical stimulation to the patient in order to elicit phosphenized vision. With the significant increase in availability and performance of low-power-consumption smart phone and personal device processors, the authors investigated the use of a commercially available ARM (Advanced RISC Machine) device as an externally worn processing unit for a prosthetic neural stimulator for the retina. A 400 MHz Samsung S3C2440A ARM920T single-board computer was programmed to extract 98 values from a 1.3 Megapixel OV9650 CMOS camera using impulse, regional averaging and Gaussian sampling algorithms. Power consumption and speed of video processing were compared to results reported for similar devices. The results show that by using code optimization, the system is capable of driving a 98-channel implantable device for the restoration of visual percepts to the blind.
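
    As a sketch of the regional-averaging sampling step (reducing a camera frame to a small number of stimulation values), the code below averages a grayscale frame over a 7 x 14 grid to produce 98 values; the grid layout and scaling are assumptions, not the device's actual mapping.

        import numpy as np

        def regional_average(frame, rows=7, cols=14):
            """Reduce a grayscale frame to rows*cols phosphene drive values in [0, 1]."""
            h, w = frame.shape
            values = np.empty(rows * cols)
            for r in range(rows):
                for c in range(cols):
                    block = frame[r * h // rows:(r + 1) * h // rows,
                                  c * w // cols:(c + 1) * w // cols]
                    values[r * cols + c] = block.mean() / 255.0
            return values

        frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in camera frame
        print(regional_average(frame).shape)                           # (98,)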

  1. A topological approach to migration and visualization of time-varying volume data

    International Nuclear Information System (INIS)

    Fujishiro, Issei; Otsuka, Rieko; Hamaoka, Aya; Takeshima, Yuriko; Takahashi, Shigeo

    2004-01-01

    Rapid advances in high-performance computing and measurement technologies have recently made it possible to produce a stupendous amount of time-varying volume datasets in various disciplines. However, few visual exploration tools exist that allow us to investigate the core of their complex behavior effectively. In this article, our previous approach to topological volume skeletonization is extended to capture the topological skeleton of a 4D volumetric field in terms of critical timing. A cyclic information drilldown scheme, termed T-map, is presented, in which a wide choice of information visualization techniques is deployed so that users are allowed to repeatedly squeeze partial spatiotemporal domains of interest until the size fits into the available computing storage space, prior to topologically-accentuated visualization of the pinpointed volumetric domains. A case study with datasets from atomic collision research is performed to illustrate the feasibility of the present method. (author)

  2. Information processing in the primate visual system - An integrated systems perspective

    Science.gov (United States)

    Van Essen, David C.; Anderson, Charles H.; Felleman, Daniel J.

    1992-01-01

    The primate visual system contains dozens of distinct areas in the cerebral cortex and several major subcortical structures. These subdivisions are extensively interconnected in a distributed hierarchical network that contains several intertwined processing streams. A number of strategies are used for efficient information processing within this hierarchy. These include linear and nonlinear filtering, passage through information bottlenecks, and coordinated use of multiple types of information. In addition, dynamic regulation of information flow within and between visual areas may provide the computational flexibility needed for the visual system to perform a broad spectrum of tasks accurately and at high resolution.

  4. Fuel visual inspection system of the RTMIII; Sistema de inspeccion visual de combustible del RTMIII

    Energy Technology Data Exchange (ETDEWEB)

    Delfin L, A.; Castaneda J, G.; Mazon R, R.; Aguilar H, F. [ININ, Km. 36.5 Carretera Mexico-Toluca, Ocoyoacac, 52245 Estado de Mexico (Mexico)]. e-mail: rmr@nuclear.inin.mx

    2007-07-01

    Through the RLA/04/18 project, Management of Irradiated Fuel in Research Reactors, the International Atomic Energy Agency (IAEA) recommended, among other things, that the participating countries (Brazil, Argentina, Chile, Peru and Mexico) develop tools to assure the integrity of the nuclear fuels used in their research reactors. For the TRIGA Mark III reactor (RTMIII) of the ININ, a visual inspection system was designed and built that uses a high-radiation camera and image digitization. The project takes into account the safety of the personnel carrying out the visual inspection activities, so the tool is submerged in the pool of the RTMIII, held at one end from the upper part of the aluminium liner of the reactor, as shown in drawing No. 1. The main unit of the system is the visual equipment, a Hydro-Technologie (HYTEC) VSLT 410N camera designed to work underwater and/or in high-risk locations. The camera has a motorized stainless-steel orientation unit that can rotate without limit in both directions, at variable speed, by means of a joystick on the control unit. Attached to this orientation unit is the camera head, which is contained in a motorized stainless-steel inclination unit that can rotate azimuthally up to 370 degrees in both directions. The operating conditions of the camera are: temperature, 0 to 50 C; dose rate, ≤ 50 rad/h; operating depth, ≤ 30 m; humidity (control unit), ≤ 80%. Connected to the control unit is an external plug-and-play TV-USB AVerMedia device that decodes the video signal sent by the control unit and transmits it to the computer, where the image is captured as a picture or video and later analyzed with ad hoc software; in our case we use the Quantikov Image Analyzer program for Windows 98 by Dr. Lucio C. M. Pinto from Brazil, who participates in the RLA

  5. Visualization of conduit-matrix conductivity differences in a karst aquifer using time-lapse electrical resistivity

    Science.gov (United States)

    Meyerhoff, Steven B.; Karaoulis, Marios; Fiebig, Florian; Maxwell, Reed M.; Revil, André; Martin, Jonathan B.; Graham, Wendy D.

    2012-12-01

    In the karstic upper Floridan aquifer, surface water flows into conduits of the groundwater system and may exchange with water in the aquifer matrix. This exchange has been hypothesized to occur based on differences in discharge at the Santa Fe River Sink-Rise system, north central Florida, but has yet to be visualized using any geophysical techniques. Using electrical resistivity tomography, we conducted a time-lapse study at two locations with mapped conduits connecting the Santa Fe River Sink to the Santa Fe River Rise to study changes of electrical conductivity during times of varying discharge over a six-week period. Our results show conductivity differences between matrix and conduit, changes in resistivity occurring through time at the locations of mapped karst conduits, and changes in electrical conductivity during rainfall infiltration. These observations provide insight into time scales and matrix-conduit conductivity differences, illustrating how surface water recharged to conduits may flow through the groundwater system in a karst aquifer.

  6. Time-dependent density functional theory/discrete reaction field spectra of open shell systems: The visual spectrum of [FeIII(PyPepS)2]- in aqueous solution.

    Science.gov (United States)

    van Duijnen, Piet Th; Greene, Shannon N; Richards, Nigel G J

    2007-07-28

    We report the calculated visible spectrum of [FeIII(PyPepS)2]- in aqueous solution. From all-classical molecular dynamics simulations of the solute and 200 water molecules with a polarizable force field, 25 solute/solvent configurations were chosen at random from a 50 ps production run and subjected to calculations using time-dependent density functional theory (TD-DFT) for the solute, combined with a solvation model in which the water molecules carry charges and polarizabilities. In each calculation the first 60 excited states were collected in order to span the experimental spectrum. Since the solute has a doublet ground state, several excitations are to states of the "three electrons in three orbitals" type, each of which gives rise to a manifold of a quartet and two doublet states that cannot properly be represented by single Slater determinants. We applied a tentative scheme to analyze this type of spin contamination in terms of Delta and Delta transitions between the same orbital pairs. Treating the associated states as pure single determinants obtained from restricted calculations, we construct configuration state functions (CSFs), i.e., eigenfunctions of the Hamiltonian, Sz and S2, for the two doublets and the quartet of each Delta,Delta pair, the necessary parameters coming from regular and spin-flip calculations. It appears that the lower final states remain where they were originally calculated, while the higher states move up by some tenths of an eV. In this case, filtering out these higher states gives a spectrum that compares very well with experiment, but we nevertheless suggest investigating a possible (re)formulation of TD-DFT in terms of CSFs rather than determinants.

  7. Using spatial manipulation to examine interactions between visual and auditory encoding of pitch and time

    Directory of Open Access Journals (Sweden)

    Neil M McLachlan

    2010-12-01

    Full Text Available Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians.

  8. JackIn Head: Immersive Visual Telepresence System with Omnidirectional Wearable Camera.

    Science.gov (United States)

    Kasahara, Shunichi; Nagai, Shohei; Rekimoto, Jun

    2017-03-01

    Sharing one's own immersive experience over the Internet is one of the ultimate goals of telepresence technology. In this paper, we present JackIn Head, a visual telepresence system featuring an omnidirectional wearable camera with image motion stabilization. Spherical omnidirectional video footage taken around the head of a local user is stabilized and then broadcast to others, allowing remote users to explore the immersive visual environment independently of the local user's head direction. We describe the system design of JackIn Head and report the evaluation results of real-time image stabilization and alleviation of cybersickness. Then, through an exploratory observation study, we investigate how individuals can remotely interact, communicate with, and assist each other with our system. We report our observation and analysis of inter-personal communication, demonstrating the effectiveness of our system in augmenting remote collaboration.

  9. Visual Servoing Tracking Control of a Ball and Plate System: Design, Implementation and Experimental Validation

    Directory of Open Access Journals (Sweden)

    Ming-Tzu Ho

    2013-07-01

    Full Text Available This paper presents the design, implementation and validation of real-time visual servoing tracking control for a ball and plate system. The position of the ball is measured with a machine vision system. The image processing algorithms of the machine vision system are pipelined and implemented on a field programmable gate array (FPGA device to meet real-time constraints. A detailed dynamic model of the system is derived for the simulation study. By neglecting the high-order coupling terms, the ball and plate system model is simplified into two decoupled ball and beam systems, and an approximate input-output feedback linearization approach is then used to design the controller for trajectory tracking. The designed control law is implemented on a digital signal processor (DSP. The validity of the performance of the developed control system is investigated through simulation and experimental studies. Experimental results show that the designed system functions well with reasonable agreement with simulations.
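
    A minimal simulation of the simplification described above (one decoupled ball-and-beam axis under an approximately feedback-linearized PD law) is sketched below; the physical constant, gains and saturation limit are illustrative assumptions, not the authors' controller.

        import numpy as np

        G = 9.81
        K = 5.0 / 7.0 * G   # solid rolling ball on a beam: x_ddot = K * sin(theta)

        def simulate(x_ref=0.1, dt=0.001, T=5.0, kp=8.0, kd=4.0):
            """PD tracking of one decoupled ball-and-beam axis via input-output linearization."""
            x, v = 0.0, 0.0
            for _ in range(int(T / dt)):
                a_des = -kp * (x - x_ref) - kd * v                # desired ball acceleration
                theta = np.arcsin(np.clip(a_des / K, -0.5, 0.5))  # invert x_ddot = K*sin(theta)
                a = K * np.sin(theta)                             # resulting ball acceleration
                v += a * dt
                x += v * dt
            return x

        print(simulate())   # the ball settles near the 0.1 m reference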

  10. RankExplorer: Visualization of Ranking Changes in Large Time Series Data.

    Science.gov (United States)

    Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin

    2012-12-01

    For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.
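
    A tiny sketch of the ranking computation underlying such a visualization is given below: item values at each time step are converted to ranks, and items are then grouped into coarse ranking categories. The data and the category boundary are invented.

        import numpy as np

        # hypothetical search volumes: rows = items, columns = time steps
        values = np.array([[120, 150,  90],
                           [200, 180, 300],
                           [ 50,  60, 400],
                           [ 80,  70,  20]])

        # rank 0 = highest value at each time step
        order = np.argsort(-values, axis=0)
        ranks = np.empty_like(order)
        np.put_along_axis(ranks, order, np.arange(values.shape[0])[:, None], axis=0)

        # coarse ranking categories per time step (top half vs. bottom half)
        top_half = ranks < values.shape[0] // 2
        print(ranks)
        print(top_half)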

  11. A system for visualization and automatic placement of the endoclamp balloon catheter

    Science.gov (United States)

    Furtado, Hugo; Stüdeli, Thomas; Sette, Mauro; Samset, Eigil; Gersak, Borut

    2010-02-01

    The European research network "Augmented Reality in Surgery" (ARIS*ER) developed a system that supports minimally invasive cardiac surgery based on augmented reality (AR) technology. The system supports the surgical team during aortic endoclamping where a balloon catheter has to be positioned and kept in place within the aorta. The presented system addresses the two biggest difficulties of the task: lack of visualization and difficulty in maneuvering the catheter. The system was developed using a user centered design methodology with medical doctors, engineers and human factor specialists equally involved in all the development steps. The system was implemented using the AR framework "Studierstube" developed at TU Graz and can be used to visualize in real-time the position of the balloon catheter inside the aorta. The spatial position of the catheter is measured by a magnetic tracking system and superimposed on a 3D model of the patient's thorax. The alignment is made with a rigid registration algorithm. Together with a user defined target, the spatial position data drives an actuator which adjusts the position of the catheter in the initial placement and corrects migrations during the surgery. Two user studies with a silicon phantom show promising results regarding usefulness of the system: the users perform the placement tasks faster and more accurately than with the current restricted visual support. Animal studies also provided a first indication that the system brings additional value in the real clinical setting. This work represents a major step towards safer and simpler minimally invasive cardiac surgery.

  12. Comparison of Sprint Reaction and Visual Reaction Times of Athletes in Different Branches

    Science.gov (United States)

    Akyüz, Murat; Uzaldi, Basar Basri; Akyüz, Öznur; Dogru, Yeliz

    2017-01-01

    The aims of this study are to analyse sprint reaction and visual reaction times of female athletes of different branches competing in professional leagues and to show the differences between them. 42 voluntary female athletes from various branches of professional leagues of Istanbul (volleyball, basketball, handball) were included in the…

  13. Long-Term Priming of Visual Search Prevails against the Passage of Time and Counteracting Instructions

    Science.gov (United States)

    Kruijne, Wouter; Meeter, Martijn

    2016-01-01

    Studies on "intertrial priming" have shown that in visual search experiments, the preceding trial automatically affects search performance: facilitating it when the target features repeat and giving rise to switch costs when they change--so-called (short-term) intertrial priming. These effects also occur at longer time scales: When 1 of…

  14. Signs over time: Statistical and visual analysis of a longitudinal signed network

    NARCIS (Netherlands)

    de Nooy, W.

    2008-01-01

    This paper presents the design and results of a statistical and visual analysis of a dynamic signed network. In addition to prevalent approaches to longitudinal networks, which analyze series of cross-sectional data, this paper focuses on network data measured in continuous time in order to explain

  15. Influences of Visual Attention and Reading Time on Children and Adults

    Science.gov (United States)

    Wei, Chun-Chun; Ma, Min-Yuan

    2017-01-01

    This study investigates the relationship between visual attention and reading time using a mobile electroencephalography device. The mobile electroencephalography device uses a single channel dry sensor, which easily measures participants' attention in the real-world reading environment. The results reveal that age significantly influences visual…

  16. Time Limits in Testing: An Analysis of Eye Movements and Visual Attention in Spatial Problem Solving

    Science.gov (United States)

    Roach, Victoria A.; Fraser, Graham M.; Kryklywy, James H.; Mitchell, Derek G. V.; Wilson, Timothy D.

    2017-01-01

    Individuals with an aptitude for interpreting spatial information (high mental rotation ability: HMRA) typically master anatomy with more ease, and more quickly, than those with low mental rotation ability (LMRA). This article explores how visual attention differs with time limits on spatial reasoning tests. Participants were assorted to two…

  17. Interactive visualization of gene regulatory networks with associated gene expression time series data

    NARCIS (Netherlands)

    Westenberg, M.A.; Hijum, van S.A.F.T.; Lulko, A.T.; Kuipers, O.P.; Roerdink, J.B.T.M.; Linsen, L.; Hagen, H.; Hamann, B.

    2008-01-01

    We present GENeVis, an application to visualize gene expression time series data in a gene regulatory network context. This is a network of regulator proteins that regulate the expression of their respective target genes. The networks are represented as graphs, in which the nodes represent genes,

  18. Real-time MEG neurofeedback training of posterior alpha activity modulates subsequent visual detection performance

    NARCIS (Netherlands)

    Okazaki, Y.O.; Horschig, J.; Luther, L.M.; Oostenveld, R.; Murakami, I.; Jensen, O.

    2015-01-01

    It has been demonstrated that alpha activity is lateralized when attention is directed to the left or right visual hemifield. We investigated whether real-time neurofeedback training of the alpha lateralization enhances participants' ability to modulate posterior alpha lateralization and causes

  19. Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping

    Science.gov (United States)

    McDougall, Sine; Tyrer, Victoria; Folkard, Simon

    2006-01-01

    Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…

  20. What Are the Shapes of Response Time Distributions in Visual Search?

    Science.gov (United States)

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays…

  1. Visual Attention During Brand Choice : The Impact of Time Pressure and Task Motivation

    NARCIS (Netherlands)

    Pieters, R.; Warlop, L.

    1998-01-01

    Measures derived from eye-movement data reveal that during brand choice consumers adapt to time pressure by accelerating the visual scanning sequence, by filtering information and by changing their scanning strategy. In addition, consumers with high task motivation filter brand information less and

  2. Visual outcomes in relation to time to treatment in neovascular age-related macular degeneration

    DEFF Research Database (Denmark)

    Rasmussen, Annette; Bloch, Sara Brandi; Fuchs, Josefine

    2015-01-01

    PURPOSE: To study the relation between the interval from diagnosis to initiation of intravitreal injection therapy and visual outcome in neovascular age-related macular degeneration (nAMD) and to report changes over time in fellow-eye status. METHODS: Retrospective chart review. The study included...

  3. A visual retrieval environment for hypermedia information system

    Energy Technology Data Exchange (ETDEWEB)

    Lucarella, D; Zanzi, A [ENEL s.p.a., Centro Ricerca di Automatica, Cologno Monzese, Milan (Italy)

    1995-03-01

    The authors present a graph-based object model that may be used as a uniform framework for direct manipulation of multimedia information. After an introduction motivating the need for abstraction and structuring mechanisms in hypermedia systems, the authors introduce the data model and the notion of perspective, a form of data abstraction that acts as a user interface to the system, providing control over the visibility of the objects and their properties. A perspective is defined to include an intention and an extension. The authors present a visual retrieval environment that effectively combines filtering, browsing, and navigation to provide an integrated view of the retrieval problem. Design and implementation issues are outlined for MORE (Multimedia Object Retrieval Environment), a prototype system relying on the proposed model. The focus is on the main user interface functionalities, and actual interaction sessions are presented including schema creation, information loading, and information retrieval.

  4. Improvement of visual debugging tool. Shortening the elapsed time for getting data and adding new functions to compare/combine a set of visualized data

    International Nuclear Information System (INIS)

    Matsuda, Katsuyuki; Takemiya, Hiroshi

    2001-03-01

    The visual debugging tool 'vdebug', designed for debugging scientific computing programs, has been improved in two respects: (1) the elapsed time required to obtain the data to visualize has been shortened; (2) new functions have been added that allow sets of visualized data originating from two or more different programs to be compared and/or combined. Regarding the elapsed time for obtaining data, the improved version of 'vdebug' is over a hundred times faster with dbx and pdbx on the SX-4 and over ten times faster with ndb on the SR2201. Regarding the new compare/combine functions, it was confirmed that the consistency between computational results obtained at each calculation step on two different computers, SP and ONYX, could be checked easily. In this report, we illustrate with an example how the tool 'vdebug' has been improved. (author)

  5. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    Science.gov (United States)

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time, which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features, has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Near Real-time Scientific Data Analysis and Visualization with the ArcGIS Platform

    Science.gov (United States)

    Shrestha, S. R.; Viswambharan, V.; Doshi, A.

    2017-12-01

    Scientific multidimensional data are generated from a variety of sources and platforms. These datasets are mostly produced by earth observation and/or modeling systems. Agencies like NASA, NOAA, USGS, and ESA produce large volumes of near real-time observation, forecast, and historical data that drive fundamental research and its applications in larger aspects of humanity from basic decision making to disaster response. A common big data challenge for organizations working with multidimensional scientific data and imagery collections is the time and resources required to manage and process such large volumes and varieties of data. The challenge of adopting data-driven real-time visualization and analysis, as well as the need to share these large datasets, workflows, and information products to wider and more diverse communities, brings an opportunity to use the ArcGIS platform to handle such demand. In recent years, a significant effort has been put into expanding the capabilities of ArcGIS to support multidimensional scientific data across the platform. New capabilities in ArcGIS to support scientific data management, processing, and analysis as well as creating information products from large volumes of data using the image server technology are becoming widely used in earth science and across other domains. We will discuss and share the challenges associated with big data by the geospatial science community and how we have addressed these challenges in the ArcGIS platform. We will share a few use cases, such as NOAA High-Resolution Rapid Refresh (HRRR) data, that demonstrate how we access large collections of near real-time data (that are stored on-premise or on the cloud), disseminate them dynamically, process and analyze them on-the-fly, and serve them to a variety of geospatial applications. We will also share how on-the-fly processing using raster function capabilities can be extended to create persisted data and information products using raster analytics.

  7. Systems, Shocks and Time Bombs

    Science.gov (United States)

    Winder, Nick

    The following sections are included: * Introduction * Modelling strategies * Are time-bomb phenomena important? * Heuristic approaches to time-bomb phenomena * Three rational approaches to TBP * Two irrational approaches * Conclusions * References

  8. Time Warp Operating System (TWOS)

    Science.gov (United States)

    Bellenot, Steven F.

    1993-01-01

    Designed to support parallel discrete-event simulation, TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual-time synchronization based on process rollback and message annihilation.
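
    The two mechanisms named in this record, process rollback and message annihilation, are the heart of Time Warp. The toy sketch below shows only the rollback side for a single logical process (anti-messages, output cancellation and global virtual time are omitted); the event structure and state are invented for illustration, and this is not TWOS code.

      # Toy sketch of optimistic execution with rollback: a logical process handles
      # events in arrival order, saves state snapshots, and rolls back when a
      # "straggler" message arrives with a timestamp in its past.
      import copy

      class LogicalProcess:
          def __init__(self):
              self.lvt = 0.0                 # local virtual time
              self.state = {"count": 0}
              self.processed = []            # (timestamp, payload) already handled
              self.snapshots = []            # (lvt, state) checkpoints for rollback

          def _execute(self, timestamp, payload):
              self.snapshots.append((self.lvt, copy.deepcopy(self.state)))
              self.lvt = timestamp
              self.state["count"] += payload
              self.processed.append((timestamp, payload))

          def receive(self, timestamp, payload):
              if timestamp >= self.lvt:              # normal optimistic execution
                  self._execute(timestamp, payload)
                  return
              # Straggler: roll back to the latest snapshot before the message,
              # then re-execute it together with every event that was undone.
              redo = [e for e in self.processed if e[0] >= timestamp]
              while self.snapshots and self.lvt >= timestamp:
                  self.lvt, self.state = self.snapshots.pop()
              self.processed = [e for e in self.processed if e[0] < timestamp]
              for ts, pl in sorted(redo + [(timestamp, payload)]):
                  self._execute(ts, pl)

      lp = LogicalProcess()
      for ts, pl in [(1.0, 1), (3.0, 1), (5.0, 1), (2.0, 10)]:   # 2.0 is a straggler
          lp.receive(ts, pl)
      print(lp.lvt, lp.state)   # events replayed in timestamp order after rollback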

  9. Lighting and Graphics Effects for Real-Time Visualization of the Universe

    OpenAIRE

    Ekelin, Jonna; Fernqvist, Lena

    2006-01-01

    This work has been performed at SCISS AB, a company situated in Norrköping and whose business lies in developing platforms for graphics visualization. SCISS's main software product, UniView, is a fully interactive system allowing the user to explore all parts of the observable universe, from rocks on the surface of a planet to galaxies and quasars in outer space. It is used mainly for astronomical and scientific presentation. The aim of this work has been to enhance the visual appearance of li...

  10. Servicescapes seen by visually impaired travellers : Time-geography approach to servicescape research

    OpenAIRE

    Raissova, Alma

    2017-01-01

    Knowledge gaps remain in the study of servicescapes, since existing research on servicescapes tends to ignore major advances in the understanding of space and time as social phenomena. One aspect that particularly requires further study is how emerging constraints influence customers’ interactions with organized service places. The time-geography approach was therefore applied to the current servicescape research to help to identify various constraints that blind and visually disabled persons...

  11. Real-Time Inverse Optimal Neural Control for Image Based Visual Servoing with Nonholonomic Mobile Robots

    Directory of Open Access Journals (Sweden)

    Carlos López-Franco

    2015-01-01

    Full Text Available We present an inverse optimal neural controller for a nonholonomic mobile robot with parameter uncertainties and unknown external disturbances. The neural controller is based on a discrete-time recurrent high order neural network (RHONN) trained with an extended Kalman filter. The reference velocities for the neural controller are obtained with a visual sensor. The effectiveness of the proposed approach is tested by simulations and real-time experiments.

  12. Visual System Involvement in Patients with Newly Diagnosed Parkinson Disease.

    Science.gov (United States)

    Arrigo, Alessandro; Calamuneri, Alessandro; Milardi, Demetrio; Mormina, Enricomaria; Rania, Laura; Postorino, Elisa; Marino, Silvia; Di Lorenzo, Giuseppe; Anastasi, Giuseppe Pio; Ghilardi, Maria Felice; Aragona, Pasquale; Quartarone, Angelo; Gaeta, Michele

    2017-12-01

    Purpose To assess intracranial visual system changes of newly diagnosed Parkinson disease in drug-naïve patients. Materials and Methods Twenty patients with newly diagnosed Parkinson disease and 20 age-matched control subjects were recruited. Magnetic resonance (MR) imaging (T1-weighted and diffusion-weighted imaging) was performed with a 3-T MR imager. White matter changes were assessed by exploring a white matter diffusion profile by means of diffusion-tensor imaging-based parameters and constrained spherical deconvolution-based connectivity analysis and by means of white matter voxel-based morphometry (VBM). Alterations in occipital gray matter were investigated by means of gray matter VBM. Morphologic analysis of the optic chiasm was based on manual measurement of regions of interest. Statistical testing included analysis of variance, t tests, and permutation tests. Results In the patients with Parkinson disease, significant alterations were found in optic radiation connectivity distribution, with decreased lateral geniculate nucleus V2 density (F = -8.28). Conclusion These findings indicate that visual system alterations are already present in newly diagnosed Parkinson disease and that the entire intracranial visual system can be involved. © RSNA, 2017 Online supplemental material is available for this article.

  13. Rock Visualization System. Technical description (RVS v.3.5)

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, P.; Elfstroem, M.; Markstroem, I. [FB Engineering, Goeteborg (Sweden)

    2004-03-01

    The Rock Visualization System (RVS) has been developed by SKB for use in visualizing geological and engineering data in 3D. The purpose of this report is to provide a technical description of RVS aimed at potential program users and interested parties as well as fulfilling the function of a more general RVS reference that can be cited when writing other technical reports. It is a description of RVS version 3.5. Updated versions of this report or addenda will be made available following further development of RVS and the release of subsequent versions of the program. The report covers the following main items: Technical description of the program with illustrations and examples; Limitations of the program and of functionality. For most RVS functions step-by-step tutorials are available describing how a particular function can be used to carry out a specific task. A complete set of updated tutorials is issued with each new version release of the RVS program. However, the tutorials do not cover all the possible uses of all the individual functions but rather give an overall view of their functionality. A detailed description of every RVS function and how it can be used is included in the RVS online Help system.

  14. Rock Visualization System. Technical description (RVS version 3.8)

    International Nuclear Information System (INIS)

    Curtis, P.; Elfstroem, M.; Markstroem, I.

    2007-06-01

    The Rock Visualization System (RVS) has been developed by SKB for use in visualizing geological and engineering data in 3D. The purpose of this report is to provide a technical description of RVS aimed at potential program users and interested parties as well as fulfilling the function of a more general RVS reference that can be cited when writing other technical reports. The report describes RVS version 4.0. Updated versions of this report or addenda will be made available following further development of RVS and the release of subsequent versions of the program. The report covers the following main items: Technical description of the program with illustrations and examples. Limitations of the program and of functionality. For most RVS functions step-by-step tutorials are available describing how a particular function can be used to carry out a specific task. A complete set of updated tutorials is issued with each new version release of the RVS program. However, the tutorials do not cover all the possible uses of all the individual functions but rather give an overall view of their functionality. A detailed description of every RVS function and how it can be used is included in the RVS online Help system

  15. Rock Visualization System. Technical description (RVS version 3.8)

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, P.; Elfstroem, M.; Markstroem, I. [FB Engineering, Goeteborg (Sweden)

    2005-04-01

    The Rock Visualization System (RVS) has been developed by SKB for use in visualizing geological and engineering data in 3D. The purpose of this report is to provide a technical description of RVS aimed at potential program users and interested parties as well as fulfilling the function of a more general RVS reference that can be cited when writing other technical reports. The report describes RVS version 3.8. Updated versions of this report or addenda will be made available following further development of RVS and the release of subsequent versions of the program. The report covers the following main items: Technical description of the program with illustrations and examples. Limitations of the program and of functionality. For most RVS functions step-by-step tutorials are available describing how a particular function can be used to carry out a specific task. A complete set of updated tutorials is issued with each new version release of the RVS program. However, the tutorials do not cover all the possible uses of all the individual functions but rather give an overall view of their functionality. A detailed description of every RVS function and how it can be used is included in the RVS online Help system.

  16. Mutational Analysis of Drosophila Basigin Function in the Visual System

    Science.gov (United States)

    Munro, Michelle; Akkam, Yazan; Curtin, Kathryn D.

    2009-01-01

    Drosophila basigin is a cell-surface glycoprotein of the Ig superfamily and a member of a protein family that includes mammalian EMMPRIN/CD147/basigin, neuroplastin, and embigin. Our previous work on Drosophila basigin has shown that it is required for normal photoreceptor cell structure and normal neuron-glia interaction in the fly visual system. Specifically, the photoreceptor neurons of mosaic animals that are mutant in the eye for basigin show altered cell structure with nuclei, mitochondria and rER misplaced and variable axon diameter compared to wild-type. In addition, glia cells in the optic lamina that contact photoreceptor axons are misplaced and show altered structure. All these defects are rescued by expression of either transgenic fly basigin or transgenic mouse basigin in the photoreceptors demonstrating that mouse basigin can functionally replace fly basigin. To determine what regions of the basigin protein are required for each of these functions, we have created mutant basigin transgenes coding for proteins that are altered in conserved residues, introduced these into the fly genome, and tested them for their ability to rescue both photoreceptor cell structure defects and neuron-glia interaction defects of basigin. The results suggest that the highly conserved transmembrane domain and the extracellular domains are crucial for basigin function in the visual system while the short intracellular tail may not play a role in these functions. PMID:19782733

  17. Rock Visualization System. Technical description (RVS version 3.8)

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, P.; Elfstroem, M.; Markstroem, I. [Golder Associates AB (Sweden)

    2007-06-15

    The Rock Visualization System (RVS) has been developed by SKB for use in visualizing geological and engineering data in 3D. The purpose of this report is to provide a technical description of RVS aimed at potential program users and interested parties as well as fulfilling the function of a more general RVS reference that can be cited when writing other technical reports. The report describes RVS version 4.0. Updated versions of this report or addenda will be made available following further development of RVS and the release of subsequent versions of the program. The report covers the following main items: Technical description of the program with illustrations and examples. Limitations of the program and of functionality. For most RVS functions step-by-step tutorials are available describing how a particular function can be used to carry out a specific task. A complete set of updated tutorials is issued with each new version release of the RVS program. However, the tutorials do not cover all the possible uses of all the individual functions but rather give an overall view of their functionality. A detailed description of every RVS function and how it can be used is included in the RVS online Help system.

  18. Rock Visualization System. Technical description (RVS v.3.5)

    International Nuclear Information System (INIS)

    Curtis, P.; Elfstroem, M.; Markstroem, I.

    2004-03-01

    The Rock Visualization System (RVS) has been developed by SKB for use in visualizing geological and engineering data in 3D. The purpose of this report is to provide a technical description of RVS aimed at potential program users and interested parties as well as fulfilling the function of a more general RVS reference that can be cited when writing other technical reports. It is a description of RVS version 3.5. Updated versions of this report or addenda will be made available following further development of RVS and the release of subsequent versions of the program. The report covers the following main items: Technical description of the program with illustrations and examples; Limitations of the program and of functionality. For most RVS functions step-by-step tutorials are available describing how a particular function can be used to carry out a specific task. A complete set of updated tutorials is issued with each new version release of the RVS program. However, the tutorials do not cover all the possible uses of all the individual functions but rather give an overall view of their functionality. A detailed description of every RVS function and how it can be used is included in the RVS online Help system.

  19. Rock Visualization System. Technical description (RVS version 3.8)

    International Nuclear Information System (INIS)

    Curtis, P.; Elfstroem, M.; Markstroem, I.

    2005-04-01

    The Rock Visualization System (RVS) has been developed by SKB for use in visualizing geological and engineering data in 3D. The purpose of this report is to provide a technical description of RVS aimed at potential program users and interested parties as well as fulfilling the function of a more general RVS reference that can be cited when writing other technical reports. The report describes RVS version 3.8. Updated versions of this report or addenda will be made available following further development of RVS and the release of subsequent versions of the program. The report covers the following main items: Technical description of the program with illustrations and examples. Limitations of the program and of functionality. For most RVS functions step-by-step tutorials are available describing how a particular function can be used to carry out a specific task. A complete set of updated tutorials is issued with each new version release of the RVS program. However, the tutorials do not cover all the possible uses of all the individual functions but rather give an overall view of their functionality. A detailed description of every RVS function and how it can be used is included in the RVS online Help system.

  20. Time-Sharing-Based Synchronization and Performance Evaluation of Color-Independent Visual-MIMO Communication.

    Science.gov (United States)

    Kwon, Tae-Ho; Kim, Jai-Eun; Kim, Ki-Doo

    2018-05-14

    In the field of communication, synchronization is always an important issue. The communication between a light-emitting diode (LED) array (LEA) and a camera is known as visual multiple-input multiple-output (MIMO), for which the data transmitter and receiver must be synchronized for seamless communication. In visual-MIMO, LEDs generally have a faster data rate than the camera. Hence, we propose an effective time-sharing-based synchronization technique with its color-independent characteristics providing the key to overcome this synchronization problem in visual-MIMO communication. We also evaluated the performance of our synchronization technique by varying the distance between the LEA and camera. A graphical analysis is also presented to compare the symbol error rate (SER) at different distances.
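
    The record does not give the time-sharing scheme itself; one plausible reading is that each LED symbol is held across several LED refresh periods so that the slower camera captures at least one clean frame per symbol. The sketch below computes such a repetition factor and a frame-to-symbol mapping under assumed LED and camera rates; the rates, safety margin and mapping are illustrative assumptions, not the authors' parameters.

      # Illustrative sketch of the rate-matching side of LED/camera synchronization
      # in visual-MIMO: hold each LED symbol for enough LED refresh periods that the
      # slower camera captures at least one full frame of it.  Rates and the margin
      # factor are assumptions for illustration, not values from the paper.
      import math

      led_refresh_hz = 240.0     # LED array update rate (assumed)
      camera_fps = 30.0          # camera frame rate (assumed)
      safety_margin = 2          # extra repeats to survive exposure straddling

      # Number of LED refresh periods each symbol must be held.
      repeats_per_symbol = math.ceil(led_refresh_hz / camera_fps) + safety_margin
      symbol_rate = led_refresh_hz / repeats_per_symbol
      print(f"hold each symbol for {repeats_per_symbol} LED frames "
            f"-> effective symbol rate {symbol_rate:.1f} sym/s")

      def camera_frame_to_symbol(frame_index, camera_fps=camera_fps,
                                 symbol_rate=symbol_rate):
          """Map a captured camera frame to the symbol index it should contain."""
          t = frame_index / camera_fps              # capture time of this frame
          return int(t * symbol_rate)

      print([camera_frame_to_symbol(i) for i in range(10)])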

  1. Role of time in symbiotic systems

    Energy Technology Data Exchange (ETDEWEB)

    Agrawala, A.K. [Univ. of Maryland, College Park, MD (United States)

    1996-12-31

    All systems have dynamics that reflect the changes in the system over time and, therefore, have to maintain a notion of time, either explicitly or implicitly. Traditionally, the notion of time in constructed systems has been implicitly specified at design time through rigid structures such as sampled-data systems which operate with a fixed time tick, feedback systems which are designed reflecting a fixed time scale for the dynamics of the system as well as the controller responses, etc. In biological systems, the sense of time is a key element but it is not rigidly structured, even though all such systems have a clear notion of time. We define the notion of time in systems in terms of temporal locality, time scale and time horizon. Temporal locality gives the notion of the accuracy with which the system knows about the current time. Time scale reflects the scale indicating the smallest and the largest granularity considered. It also reflects the reaction time. The time horizon indicates the time beyond which the system considers the future to be distant and may not take it into account in its actions. Note that the temporal locality, time scale and the time horizon may be different for different types of actions of a system, thereby permitting the system to use multiple notions of time concurrently. In multi-agent systems, each subsystem may have its own notion of time, but when interactions take place, coordination is necessary. Such coordination requires that the notions of time for different agents of the system be consistent. Clearly, the consistency requirement in this case does not mean exactly identical but implies that different agents can coordinate their actions which must take place in time. When the actions only require a determinate ordering the required coordination is much less severe than the case requiring actions to take place at the same time.

  2. Theory of Visual Attention (TVA) applied to mice in the 5-choice serial reaction time task.

    Science.gov (United States)

    Fitzpatrick, C M; Caballero-Puntiverio, M; Gether, U; Habekost, T; Bundesen, C; Vangkilde, S; Woldbye, D P D; Andreasen, J T; Petersen, A

    2017-03-01

    The 5-choice serial reaction time task (5-CSRTT) is widely used to measure rodent attentional functions. In humans, many attention studies in healthy and clinical populations have used testing based on Bundesen's Theory of Visual Attention (TVA) to estimate visual processing speeds and other parameters of attentional capacity. We aimed to bridge these research fields by modifying the 5-CSRTT's design and by mathematically modelling data to derive attentional parameters analogous to human TVA-based measures. C57BL/6 mice were tested in two 1-h sessions on consecutive days with a version of the 5-CSRTT where stimulus duration (SD) probe length was varied based on information from previous TVA studies. Thereafter, a scopolamine hydrobromide (HBr; 0.125 or 0.25 mg/kg) pharmacological challenge was undertaken, using a Latin square design. Mean score values were modelled using a new three-parameter version of TVA to obtain estimates of visual processing speeds, visual thresholds and motor response baselines in each mouse. The parameter estimates for each animal were reliable across sessions, showing that the data were stable enough to support analysis on an individual level. Scopolamine HBr dose-dependently reduced 5-CSRTT attentional performance while also increasing reward collection latency at the highest dose. Upon TVA modelling, scopolamine HBr significantly reduced visual processing speed at both doses, while having less pronounced effects on visual thresholds and motor response baselines. This study shows for the first time how 5-CSRTT performance in mice can be mathematically modelled to yield estimates of attentional capacity that are directly comparable to estimates from human studies.
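
    The exact three-parameter TVA model used for the mice is not given in this record; TVA-based analyses commonly describe performance as an exponential approach to an asymptote above a perceptual threshold, with the initial slope acting as the processing-speed parameter. The sketch below fits a generic curve of that shape to synthetic stimulus-duration data; the functional form, parameter ranges and data are assumptions for illustration only.

      # Sketch of fitting a TVA-style exponential model to accuracy as a function of
      # stimulus duration (SD).  A common form is
      #     p(t) = p0                                   for t <= t0
      #     p(t) = p0 + (1 - p0) * (1 - exp(-v*(t-t0))) for t >  t0
      # with t0 a visual threshold, v a processing-rate parameter and p0 a baseline.
      # The exact model used in the cited study is not given in the record, so this
      # is a generic illustration on synthetic data.
      import numpy as np
      from scipy.optimize import curve_fit

      def tva_curve(t, t0, v, p0):
          t = np.asarray(t, dtype=float)
          return np.where(t <= t0, p0, p0 + (1 - p0) * (1 - np.exp(-v * (t - t0))))

      # Synthetic session: stimulus durations (s) and observed proportion correct.
      sd = np.array([0.02, 0.04, 0.06, 0.10, 0.20, 0.40, 0.80, 1.00])
      rng = np.random.default_rng(2)
      p_obs = tva_curve(sd, 0.05, 6.0, 0.2) + rng.normal(0, 0.02, sd.size)

      params, _ = curve_fit(tva_curve, sd, p_obs, p0=[0.03, 5.0, 0.25],
                            bounds=([0.0, 0.1, 0.0], [0.5, 50.0, 0.9]))
      t0_hat, v_hat, p0_hat = params
      print(f"threshold t0={t0_hat*1000:.0f} ms, processing speed v={v_hat:.1f}/s, "
            f"baseline p0={p0_hat:.2f}")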

  3. Relationship between reaction time, fine motor control, and visual-spatial perception on vigilance and visual-motor tasks in 22q11.2 Deletion Syndrome.

    LENUS (Irish Health Repository)

    Howley, Sarah A

    2012-10-15

    22q11.2 Deletion Syndrome (22q11DS) is a common microdeletion disorder associated with mild to moderate intellectual disability and specific neurocognitive deficits, particularly in visual-motor and attentional abilities. Currently there is evidence that the visual-motor profile of 22q11DS is not entirely mediated by intellectual disability and that these individuals have specific deficits in visual-motor integration. However, the extent to which attentional deficits, such as vigilance, influence impairments on visual motor tasks in 22q11DS is unclear. This study examines visual-motor abilities and reaction time using a range of standardised tests in 35 children with 22q11DS, 26 age-matched typically developing (TD) sibling controls and 17 low-IQ community controls. Statistically significant deficits were observed in the 22q11DS group compared to both low-IQ and TD control groups on a timed fine motor control and accuracy task. The 22q11DS group performed significantly better than the low-IQ control group on an untimed drawing task and were equivalent to the TD control group on point accuracy and simple reaction time tests. Results suggest that visual motor deficits in 22q11DS are primarily attributable to deficits in psychomotor speed which becomes apparent when tasks are timed versus untimed. Moreover, the integration of visual and motor information may be intact and, indeed, represent a relative strength in 22q11DS when there are no time constraints imposed. While this may have significant implications for cognitive remediation strategies for children with 22q11DS, the relationship between reaction time, visual reasoning, cognitive complexity, fine motor speed and accuracy, and graphomotor ability on visual-motor tasks is still unclear.

  4. A Generalized Visual Aid System for Teleoperation Applied to Satellite Servicing

    Directory of Open Access Journals (Sweden)

    Guoliang Zhang

    2014-02-01

    Full Text Available This paper presents the latest results of a newly developed visual aid system for direct teleoperation. This method is extended to visual control to make an efficient teleoperation system by combining direct teleoperation and automatic control. On the one hand, an operator can conduct direct teleoperation with 3D graphic prediction simulation established by the VR technique. In order to remove inconsistencies between the virtual and real environments, a practical model-matching method is investigated. On the other hand, to realize real-time visual servoing control, a particular object recognition and pose estimation algorithm based on polygonal approximation is investigated to ensure a low computational cost for image processing. To avoid undesired forces involved in contact operation, 3D visual servoing incorporating a compliant control based on impedance control is developed. Finally, in a representative laboratory environment, a typical satellite servicing experiment is carried out based on this combined system. Experimental results demonstrate the feasibility and the effectiveness of the proposed method.
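
    The record attributes the low image-processing cost to recognition based on polygonal approximation. A common way to realize that idea is contour extraction followed by Douglas-Peucker simplification, as sketched below with OpenCV; the thresholds, the quadrilateral-target assumption and the synthetic test image are illustrative choices, not the parameters of the cited system.

      # Sketch of low-cost object recognition via polygonal approximation, in the
      # spirit of the approach named above (contour extraction + Douglas-Peucker
      # simplification with OpenCV 4.x).  Thresholds, the quadrilateral assumption
      # and the input image are illustrative, not the authors' parameters.
      import cv2
      import numpy as np

      def find_quad_targets(gray, min_area=500.0, eps_ratio=0.02):
          """Return the 4-corner polygons of candidate rectangular targets."""
          _, binary = cv2.threshold(gray, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          quads = []
          for cnt in contours:
              if cv2.contourArea(cnt) < min_area:
                  continue
              eps = eps_ratio * cv2.arcLength(cnt, True)
              poly = cv2.approxPolyDP(cnt, eps, True)
              if len(poly) == 4 and cv2.isContourConvex(poly):
                  quads.append(poly.reshape(4, 2))
          return quads

      # Synthetic test image with one bright rectangle.
      img = np.zeros((240, 320), dtype=np.uint8)
      cv2.rectangle(img, (60, 50), (220, 170), 255, thickness=-1)
      corners = find_quad_targets(img)
      print(corners)   # the four corner points would feed a pose estimator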

  5. Exciplex Fluorescence Systems for Two-Phase Visualization.

    Science.gov (United States)

    Kim, J.-U.; Golding, B.; Schock, H. J.; Nocera, D. G.; Keller, P.

    1996-03-01

    We report the development of diagnostic chemical systems for vapor-liquid visualization based on an exciplex (excited state complex) formed between dimethyl- or diethyl-substituted aniline and trimethyl-substituted naphthalenes. Quantum yields of individual monomers were measured and the exciplex emission spectra as well as fluorescence quenching mechanisms were analyzed. Quenching occurs by both static and dynamic mechanisms. Among the many formulations investigated in this study, an exciplex system consisting of 7% 1,4,6-trimethylnaphthalene (1,4,6-TMN) and 5% N,N-dimethylaniline (DMA) in 88% isooctane was found to be useful for the laser-induced fluorescence technique. The technique is expected to find application in observing mixture formation in diesel or spark ignition engines with spectrally well-separated fluorescence images obtained from the monomer and exciplex constituents dissolved in the gasoline fuel. *Supported by NSF MRSEC DMR-9400417 and the Center for Fundamental Materials Research.

  6. Fluorescent visualization of oxytocin in the hypothalamo-neurohypophysial system

    Directory of Open Access Journals (Sweden)

    Hirofumi eHashimoto

    2014-07-01

    Full Text Available Oxytocin (OXT) is well known for its ability to induce the milk ejection reflex and uterine contraction. It is also involved in several other behaviors, such as anti-nociception, anxiety, feeding, social recognition and stress responses. OXT is synthesized in the magnocellular neurosecretory cells (MNCs) in the hypothalamic paraventricular (PVN) and the supraoptic nuclei (SON) that terminate their axons in the posterior pituitary (PP). We generated transgenic rats that express an OXT-fluorescent protein fusion gene in order to visualize oxytocin in the hypothalamo-neurohypophysial system. In these transgenic rats, fluorescent proteins were observed in the MNCs and axon terminals in the PP. This transgenic rat is a new tool to study the physiological role of OXT in the hypothalamo-neurohypophysial system.

  7. Rapid and Parallel Adaptive Evolution of the Visual System of Neotropical Midas Cichlid Fishes.

    Science.gov (United States)

    Torres-Dowdall, Julián; Pierotti, Michele E R; Härer, Andreas; Karagic, Nidal; Woltering, Joost M; Henning, Frederico; Elmer, Kathryn R; Meyer, Axel

    2017-10-01

    Midas cichlid fish are a Central American species flock containing 13 described species that has been dated to only a few thousand years old, a historical timescale infrequently associated with speciation. Their radiation involved the colonization of several clear water crater lakes from two turbid great lakes. Therefore, Midas cichlids have been subjected to widely varying photic conditions during their radiation. Being a primary signal relay for information from the environment to the organism, the visual system is under continuing selective pressure and a prime organ system for accumulating adaptive changes during speciation, particularly in the case of dramatic shifts in photic conditions. Here, we characterize the full visual system of Midas cichlids at organismal and genetic levels, to determine what types of adaptive changes evolved within the short time span of their radiation. We show that Midas cichlids have a diverse visual system with unexpectedly high intra- and interspecific variation in color vision sensitivity and lens transmittance. Midas cichlid populations in the clear crater lakes have convergently evolved visual sensitivities shifted toward shorter wavelengths compared with the ancestral populations from the turbid great lakes. This divergence in sensitivity is driven by changes in chromophore usage, differential opsin expression, opsin coexpression, and to a lesser degree by opsin coding sequence variation. The visual system of Midas cichlids has the evolutionary capacity to rapidly integrate multiple adaptations to changing light environments. Our data may indicate that, in early stages of divergence, changes in opsin regulation could precede changes in opsin coding sequence evolution. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Development of analysis software for radiation time-series data with the use of visual studio 2005

    International Nuclear Information System (INIS)

    Hohara, Sin-ya; Horiguchi, Tetsuo; Ito, Shin

    2008-01-01

    Time-series analysis supplies a new perspective that conventional analysis methods, such as energy spectroscopy, have not provided. However, applying time-series analysis to radiation measurements requires considerable software and hardware development effort. By taking advantage of Visual Studio 2005, we developed analysis software, 'ListFileConverter', for the time-series radiation measurement system 'MPA-3'. The software is based on a graphical user interface (GUI) architecture that saves a large amount of operation time in the analysis and, moreover, provides easy access to the special file structure of MPA-3 data. In this paper, the detailed structure of ListFileConverter is fully explained, and experimental results for the counting capability of the MPA-3 hardware system and for neutron measurements with our UTR-KINKI reactor are also given. (author)
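
    The MPA-3 list-file format itself is not described in this record, so the sketch below skips the file parsing and shows only the generic conversion step a list-file converter performs: binning event timestamps into a count-rate time series. The timestamp array and bin width are synthetic assumptions for illustration.

      # Generic sketch of the time-series step behind a list-mode converter: take
      # event timestamps (one per detected pulse) and bin them into a count-rate
      # series.  The real MPA-3 list-file format is not described in the record, so
      # reading it is out of scope here; timestamps below are synthetic.
      import numpy as np

      def counts_per_bin(timestamps_s, bin_width_s=1.0):
          """Histogram event timestamps into fixed-width time bins."""
          timestamps_s = np.sort(np.asarray(timestamps_s, dtype=float))
          t_end = timestamps_s[-1] if timestamps_s.size else bin_width_s
          edges = np.arange(0.0, t_end + bin_width_s, bin_width_s)
          counts, _ = np.histogram(timestamps_s, bins=edges)
          return edges[:-1], counts

      # Synthetic Poisson-like event stream: ~100 counts/s for about 60 s.
      rng = np.random.default_rng(3)
      events = np.cumsum(rng.exponential(scale=1.0 / 100.0, size=6000))
      t, rate = counts_per_bin(events, bin_width_s=1.0)
      print(rate[:10])   # counts in the first ten 1-second bins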

  9. The effect of visual-motion time delays on pilot performance in a pursuit tracking task

    Science.gov (United States)

    Miller, G. K., Jr.; Riley, D. R.

    1976-01-01

    A study has been made to determine the effect of visual-motion time delays on pilot performance of a simulated pursuit tracking task. Three interrelated major effects have been identified: task difficulty, motion cues, and time delays. As task difficulty, as determined by airplane handling qualities or target frequency, increases, the amount of acceptable time delay decreases. However, when relatively complete motion cues are included in the simulation, the pilot can maintain his performance for considerably longer time delays. In addition, the number of degrees of freedom of motion employed is a significant factor.

  10. Visual plasticity : Blindsight bridges anatomy and function in the visual system

    NARCIS (Netherlands)

    Tamietto, M.; Morrone, M.C.

    2016-01-01

    Some people who are blind due to damage to their primary visual cortex, V1, can discriminate stimuli presented within their blind visual field. This residual function has been recently linked to a pathway that bypasses V1, and connects the thalamic lateral geniculate nucleus directly with the

  11. Adjustment to subtle time constraints and power law learning in rapid serial visual presentation

    Directory of Open Access Journals (Sweden)

    Jacqueline Chakyung Shin

    2015-11-01

    Full Text Available We investigated whether attention could be modulated through the implicit learning of temporal information in a rapid serial visual presentation (RSVP) task. Participants identified two target letters among numeral distractors. The stimulus-onset asynchrony immediately following the first target (SOA1) varied at three levels (70, 98, and 126 ms) randomly between trials or fixed within blocks of trials. Practice over three consecutive days resulted in a continuous improvement in the identification rate for both targets and attenuation of the attentional blink (AB), a decrement in target (T2) identification when presented 200-400 ms after another target (T1). Blocked SOA1s led to a faster rate of improvement in RSVP performance and more target order reversals relative to random SOA1s, suggesting that the implicit learning of SOA1 positively affected performance. The results also reveal power law learning curves for individual target identification as well as the reduction in the AB decrement. These learning curves reflect the spontaneous emergence of skill through subtle attentional modulations rather than general attentional distribution. Together, the results indicate that implicit temporal learning could improve high-level and rapid cognitive processing and highlight the sensitivity and adaptability of the attentional system to subtle constraints in stimulus timing.
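
    The power-law learning curves reported above follow the classic power law of practice; a generic form is err(n) = a * n^(-b) over practice blocks n. The sketch below fits such a curve to synthetic block-wise error rates; the dependent variable, the functional form and the data are assumptions, since the record does not give the study's exact fitting procedure.

      # Sketch of fitting a power-law learning curve of the kind reported above.
      # The error rate is modelled as err(n) = a * n**(-b) over practice blocks,
      # with synthetic data standing in for the real performance measures.
      import numpy as np
      from scipy.optimize import curve_fit

      def power_law(n, a, b):
          return a * np.power(n, -b)

      blocks = np.arange(1, 31)                          # 30 practice blocks
      rng = np.random.default_rng(4)
      err = power_law(blocks, 0.5, 0.4) + rng.normal(0, 0.01, blocks.size)

      (a_hat, b_hat), _ = curve_fit(power_law, blocks, err, p0=[0.5, 0.3])
      print(f"err(n) ~ {a_hat:.2f} * n^(-{b_hat:.2f})")
      # A straight line in log-log coordinates is the usual diagnostic:
      log_slope = np.polyfit(np.log(blocks), np.log(np.clip(err, 1e-6, None)), 1)[0]
      print("log-log slope:", round(log_slope, 2))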

  12. The Influence of Errors in Visualization Systems on the Level of Safety Threat in Air Traffic

    Directory of Open Access Journals (Sweden)

    Paweł Ferduła

    2018-01-01

    Full Text Available Air traffic management is carried out by air traffic controllers assisted by complex technical systems that provide them with visualization of the traffic situation. In practice, visualization systems errors sometimes occur. The purpose of this paper is to determine the impact of errors of different types on the safety of the air traffic. The assessment of the threat level is influenced by subjective factors and cannot be expressed precisely. Therefore, the fuzzy reasoning theory has been used. The developed fuzzy model has been used to obtain a tool for simulation of the impact of various factors on traffic safety assessment. The results obtained indicate that the most important determinants of safety are the time when the air traffic controller remains unaware of the breakdown and the total time he/she does not have full knowledge of the traffic situation. It has been found that the key role for the proper operation of the air traffic visualization system and the restoration of full situational awareness is played by self-diagnostic systems that can restore the system’s correct functioning without even the controller being aware of the error occurrence. Their role in ensuring safety might be even greater than redundancy which is commonly used.
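
    A minimal Mamdani-style sketch can make the fuzzy-reasoning idea concrete: map the time the controller remains unaware of a visualization breakdown onto a threat level through fuzzy rules. The membership shapes, the two rules and the 0-10 threat scale below are invented for illustration and are not the model developed in the paper.

      # Minimal Mamdani-style fuzzy sketch: short unawareness -> low threat,
      # long unawareness -> high threat (min implication, max aggregation,
      # centroid defuzzification).  All shapes and universes are assumptions.
      import numpy as np

      threat_axis = np.linspace(0.0, 10.0, 201)                     # threat 0..10
      threat_low = np.interp(threat_axis, [0.0, 5.0], [1.0, 0.0])   # "low" set
      threat_high = np.interp(threat_axis, [5.0, 10.0], [0.0, 1.0]) # "high" set

      def threat_level(unaware_time_s):
          """Infer a crisp threat value from the unawareness time in seconds."""
          short = np.interp(unaware_time_s, [0.0, 60.0], [1.0, 0.0])
          long_ = np.interp(unaware_time_s, [30.0, 120.0], [0.0, 1.0])
          aggregated = np.maximum(np.minimum(short, threat_low),
                                  np.minimum(long_, threat_high))
          return float((threat_axis * aggregated).sum() / aggregated.sum())

      for t in (5, 45, 90, 130):
          print(f"{t:>3} s unaware -> threat {threat_level(t):.2f}")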

  13. Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.

    Science.gov (United States)

    Sanchez, Yerly; Pinzon, David; Zheng, Bin

    2017-10-01

    To examine the reaction time when human subjects process information presented in the visual channel under both a direct vision and a virtual rehabilitation environment while walking. Visual stimuli included eight math problems displayed in the peripheral vision of seven healthy human subjects in a virtual rehabilitation training environment (computer-assisted rehabilitation environment (CAREN)) and a direct vision environment. Subjects were required to verbally report the results of these math calculations in a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct vision and virtual rehabilitation environments. Performance outcomes measured for both groups included reaction time, reading time, answering time and the verbal answer score. A significant difference between the groups was only found for the reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment. Participants' reaction time was faster in the direct vision environment. This reaction time delay should be kept in mind when designing skill training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients who are undertaking a rehabilitation program with a virtual training environment. Implications for rehabilitation: Eye tracking is a reliable tool that can be employed in rehabilitation virtual environments. Reaction time differs between the direct vision and virtual environments.


  14. Driver Drowsiness Warning System Using Visual Information for Both Diurnal and Nocturnal Illumination Conditions

    Directory of Open Access Journals (Sweden)

    Flores MarcoJavier

    2010-01-01

    Full Text Available Every year, traffic accidents due to human errors cause an increasing number of deaths and injuries globally. To help reduce the number of fatalities, this paper presents a new module for Advanced Driver Assistance Systems (ADAS) that deals with automatic driver drowsiness detection based on visual information and artificial intelligence. The aim of this system is to locate, track, and analyze both the driver's face and eyes to compute a drowsiness index; this real-time system works under varying light conditions (diurnal and nocturnal driving). Examples of different images of drivers taken in a real vehicle are shown to validate the algorithms used.
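
    The record describes computing a drowsiness index from the tracked face and eyes without specifying it; one widely used proxy is a PERCLOS-style measure, the fraction of recent frames in which the eyes are mostly closed. The sketch below computes such an index from a synthetic eye-openness signal; the signal, threshold, window and alarm level are assumptions, not the cited module's algorithm.

      # Illustrative PERCLOS-style drowsiness index: the fraction of frames in a
      # sliding window in which eye openness falls below a closure threshold.
      # Openness values, threshold and window are assumed for illustration.
      import numpy as np

      def drowsiness_index(eye_openness, fps=25, window_s=60.0, closed_below=0.2):
          """Return a PERCLOS-like index per frame (0 = alert, 1 = always closed)."""
          closed = (np.asarray(eye_openness) < closed_below).astype(float)
          win = int(window_s * fps)
          kernel = np.ones(win) / win
          # 'full' convolution trimmed to a causal sliding average over past frames.
          return np.convolve(closed, kernel)[:closed.size]

      # Synthetic 5-minute drive: openness drifts down as the driver gets drowsy.
      fps, minutes = 25, 5
      n = fps * 60 * minutes
      rng = np.random.default_rng(5)
      openness = np.clip(1.0 - np.linspace(0, 0.9, n) + rng.normal(0, 0.1, n), 0, 1)

      index = drowsiness_index(openness, fps=fps)
      alarm_frames = np.where(index > 0.3)[0]               # assumed alarm threshold
      print("first alarm at minute:",
            round(alarm_frames[0] / fps / 60, 1) if alarm_frames.size else None)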

  15. Visually Augmented Analysis of Socio-Technical Networks in Engineering Systems Design Research

    DEFF Research Database (Denmark)

    Storga, M.; Stankovic, T.; Cash, Philip

    2013-01-01

    In characterizing systems behaviour, complex-systems scientists use tools from a variety of disciplines, including nonlinear dynamics, information theory, computation theory, evolutionary biology and social network analysis, among others. All of these topics have been studied for some time, but only fairly recently has the study of networks in general become a major topic of research in complex engineering systems. The research reported in this paper discusses how the visually augmented analysis of complex socio-technical networks (networks of people and technology engaged in a product...

  16. Time course influences transfer of visual perceptual learning across spatial location.

    Science.gov (United States)

    Larcombe, S J; Kennard, C; Bridge, H

    2017-06-01

    Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Real-time dose calculation and visualization for the proton therapy of ocular tumours

    Energy Technology Data Exchange (ETDEWEB)

    Pfeiffer, Karsten [Medizinische Physik, Deutsches Krebsforschungszentrum, INF 280, D-69120 Heidelberg (Germany). E-mail: k.pfeiffer at dkfz.de; Bendl, Rolf [Medizinische Physik, Deutsches Krebsforschungszentrum, INF 280, D-69120 Heidelberg (Germany). E-mail: r.bendl at dkfz.de

    2001-03-01

    A new real-time dose calculation and visualization was developed as part of the new 3D treatment planning tool OCTOPUS for proton therapy of ocular tumours within a national research project together with the Hahn-Meitner Institut Berlin. The implementation resolves the common separation between parameter definition, dose calculation and evaluation and allows a direct examination of the expected dose distribution while adjusting the treatment parameters. The new tool allows the therapist to move the desired dose distribution under visual control in 3D to the appropriate place. The visualization of the resulting dose distribution as a 3D surface model, on any 2D slice or on the surface of specified ocular structures is done automatically when adapting parameters during the planning process. In addition, approximate dose volume histograms may be calculated with little extra time. The dose distribution is calculated and visualized in 200 ms with an accuracy of 6% for the 3D isodose surfaces and 8% for other objects. This paper discusses the advantages and limitations of this new approach. (author)
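
    The approximate dose-volume histograms mentioned above reduce, once a dose grid and a structure mask exist, to a simple cumulative count: for each dose level, the fraction of the structure's voxels receiving at least that dose. The sketch below shows that computation on a synthetic dose grid; it is not the OCTOPUS implementation, and the grid, mask and D95 readout are illustrative.

      # Sketch of a cumulative dose-volume histogram (DVH): for each dose level D,
      # the fraction of a structure's voxels receiving at least D.  The dose grid
      # and structure mask are synthetic; this is not the OCTOPUS implementation.
      import numpy as np

      def cumulative_dvh(dose, mask, n_bins=100):
          """Return (dose_levels, volume_fraction) for voxels inside `mask`."""
          d = dose[mask]
          levels = np.linspace(0.0, d.max(), n_bins)
          volume_fraction = np.array([(d >= level).mean() for level in levels])
          return levels, volume_fraction

      # Synthetic example: a spherical target inside a 3D dose grid.
      shape = (64, 64, 64)
      z, y, x = np.indices(shape)
      r2 = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2
      target = r2 < 10 ** 2
      dose = 60.0 * np.exp(-r2 / 400.0)

      levels, vol = cumulative_dvh(dose, target)
      d95 = levels[vol >= 0.95].max()   # highest dose covering >= 95% of the volume
      print(f"D95 for the target: {d95:.1f} Gy")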

  18. Video-game play induces plasticity in the visual system of adults with amblyopia.

    Directory of Open Access Journals (Sweden)

    Roger W Li

    2011-08-01

    Full Text Available UNLABELLED: Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically 20 adults with amblyopia (age 15-61 y; visual acuity: 20/25-20/480, with no manifest ocular disease or nystagmus were recruited and allocated into three intervention groups: action videogame group (n = 10, non-action videogame group (n = 3, and crossover control group (n = 7. Our experiments show that playing video games (both action and non-action games for a short period of time (40-80 h, 2 h/d using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%, positional acuity (16%, spatial attention (37%, and stereopsis (54%. Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy, we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7% and increased processing efficiency (33%. Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia

  19. Video-game play induces plasticity in the visual system of adults with amblyopia.

    Science.gov (United States)

    Li, Roger W; Ngo, Charlie; Nguyen, Jennie; Levi, Dennis M

    2011-08-01

    Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically 20 adults with amblyopia (age 15-61 y; visual acuity: 20/25-20/480, with no manifest ocular disease or nystagmus) were recruited and allocated into three intervention groups: action videogame group (n = 10), non-action videogame group (n = 3), and crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40-80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia, and perhaps other

  20. Video-Game Play Induces Plasticity in the Visual System of Adults with Amblyopia

    Science.gov (United States)

    Li, Roger W.; Ngo, Charlie; Nguyen, Jennie; Levi, Dennis M.

    2011-01-01

    Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically 20 adults with amblyopia (age 15–61 y; visual acuity: 20/25–20/480, with no manifest ocular disease or nystagmus) were recruited and allocated into three intervention groups: action videogame group (n = 10), non-action videogame group (n = 3), and crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40–80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia, and perhaps

  1. Optimization of visual evoked potential (VEP) recording systems.

    Science.gov (United States)

    Karanjia, Rustum; Brunet, Donald G; ten Hove, Martin W

    2009-01-01

    To explore the influence of environmental conditions on pattern visual evoked potential (VEP) recordings, fourteen subjects with no known ocular pathology were recruited for the study. In an attempt to optimize the recording conditions, VEP recordings were performed in both the seated and recumbent positions. Comparisons were made between recordings using either LCD or CRT displays and recordings obtained in silence or with quiet background music. Paired recordings (in which only one variable was changed) were analyzed for changes in P100 latency, RMS noise, and variability. Baseline RMS noise demonstrated a significant decrease in variability during the first 50 msec, accompanied by a 73% decrease in recording time for the recumbent position when compared to the seated position. Background music did not affect the amount of RMS noise during the first 50 msec of the recordings. This study demonstrates that the use of the recumbent position increases patient comfort and improves the signal-to-noise ratio. In contrast, the addition of background music to relax the patient did not improve the recording signal. Furthermore, the study illustrates the importance of avoiding low-contrast visual stimulation patterns obtained with LCD displays, as they lead to higher latencies resulting in false positive recordings. These findings are important when establishing or modifying a pattern VEP recording protocol.
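
    As an illustration of the kind of baseline-noise comparison described above, the sketch below computes RMS noise over the first 50 msec of the trial-averaged VEP for two recording conditions. The array names, sampling rate and placeholder data are assumptions made for the example, not details taken from the study.

        import numpy as np

        def rms_baseline_noise(epochs, sampling_rate_hz, window_ms=50.0):
            """RMS amplitude of the averaged waveform over the first `window_ms` milliseconds.

            epochs: array of shape (n_trials, n_samples), one VEP epoch per row.
            """
            average = epochs.mean(axis=0)                       # trial-averaged waveform
            n = int(round(window_ms / 1000.0 * sampling_rate_hz))
            baseline = average[:n]                              # first 50 ms of the average
            return float(np.sqrt(np.mean(baseline ** 2)))

        # Hypothetical usage: compare seated vs. recumbent recordings sampled at 1 kHz.
        rng = np.random.default_rng(0)
        seated = rng.normal(scale=2.0, size=(64, 512))          # placeholder data, microvolts
        recumbent = rng.normal(scale=1.0, size=(64, 512))
        print(rms_baseline_noise(seated, 1000), rms_baseline_noise(recumbent, 1000))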

  2. Formal Modeling and Analysis of Timed Systems

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Niebert, Peter

    This book constitutes the thoroughly refereed post-proceedings of the First International Workshop on Formal Modeling and Analysis of Timed Systems, FORMATS 2003, held in Marseille, France in September 2003. The 19 revised full papers presented together with an invited paper and the abstracts of two invited talks were carefully selected from 36 submissions during two rounds of reviewing and improvement. All current aspects of formal methods for modeling and analyzing timed systems are addressed; among the timed systems dealt with are timed automata, timed Petri nets, max-plus algebras, real-time systems, discrete time systems, timed languages, and real-time operating systems.

  3. Therapeutic Options for Controlling Fluids in the Visual System

    Science.gov (United States)

    Curry, Kristina M.; Wotring, Virginia E.

    2014-01-01

    Visual Impairment/Intracranial Pressure (VIIP) is a newly recognized risk at NASA. The VIIP project examines the effect of long-term exposure to microgravity on vision of crewmembers before and after they return to Earth. Diamox (acetazolamide) is a medication which is used to decrease intraocular pressure; however, it carries a 3% risk of kidney stones. Astronauts are at a higher risk of kidney stones during spaceflight, and the use of Diamox would only increase that risk; therefore, alternative therapies were investigated. Histamine 2 (H2) antagonist acid blockers such as cimetidine, ranitidine, famotidine and nizatidine are typically used to relieve the symptoms of gastroesophageal reflux disease (GERD). H2 receptors have been found in the human visual system, which has led to research on the use of H2 antagonist blockers to control fluid production in the human eye. Another potential therapeutic strategy is targeted at aquaporins, which are water channels that help maintain fluid homeostasis. Aquaporin antagonists are also known to affect intracranial pressure, which can in turn alter intraocular pressure. Studies on aquaporin antagonists suggest high potential for effective treatment. The primary objective of this investigation is to review existing research on alternative medications or therapies to significantly reduce intracranial and intraocular pressure. A literature review was conducted. Even though we do not yet have all the answers, a considerable amount of information was discovered and the findings were narrowed, which should allow more conclusive answers to be found in the near future.

  4. Memorized discrete systems and time-delay

    CERN Document Server

    Luo, Albert C J

    2017-01-01

    This book examines discrete dynamical systems with memory—nonlinear systems that exist extensively in biological organisms and financial and economic organizations, and time-delay systems that can be discretized into the memorized, discrete dynamical systems. The book further discusses stability and bifurcations of time-delay dynamical systems that can be investigated through memorized dynamical systems as well as bifurcations of memorized nonlinear dynamical systems, discretization methods of time-delay systems, and periodic motions to chaos in nonlinear time-delay systems. The book helps readers find analytical solutions of memorized discrete systems (MDS), change traditional perturbation analysis in time-delay systems, detect motion complexity and singularity in MDS, and determine stability, bifurcation, and chaos in any time-delay system.
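
    The link between time-delay systems and memorized discrete systems can be sketched with a simple forward-Euler discretization (an illustrative construction, not a formula quoted from the book):

        \dot{x}(t) = f\bigl(x(t),\, x(t-\tau)\bigr)
        \quad\longrightarrow\quad
        x_{k+1} = x_k + h\, f(x_k,\, x_{k-m}), \qquad t_k = k h,\ \ \tau = m h

    The discretized map depends on the m previously stored states x_{k-1}, ..., x_{k-m}; this stored history is exactly the memory that turns a time-delay system into a memorized discrete dynamical system.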

  5. Real Time Advanced Clustering System

    Directory of Open Access Journals (Sweden)

    Giuseppe Spampinato

    2017-05-01

    Full Text Available This paper describes a system to gather information from a stationary camera to identify moving objects. The proposed solution makes use only of motion vectors between adjacent frames, obtained from any algorithm. Starting from them, the system is able to retrieve clusters of moving objects in a scene acquired by an image sensor device. Since the whole system is based only on optical flow, it is simple and fast enough to be easily integrated directly in low-cost cameras. The experimental results show fast and robust performance of our method. The ANSI-C code has been tested on the ARM Cortex A15 CPU @2.32GHz, obtaining an impressive frame rate of about 3000 fps, excluding optical flow computation and I/O. Moreover, the system has been tested for different applications, cross traffic alert and video surveillance, in different conditions, indoor and outdoor, and with different lenses.
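
    A minimal sketch of the underlying idea, grouping per-block motion vectors into moving objects, is given below. It uses a simple connected-components grouping over a block grid; it is an illustration only, not the ANSI-C implementation described in the paper, and the threshold and block layout are assumptions.

        from collections import deque

        def cluster_motion_vectors(vectors, min_magnitude=1.0):
            """Group neighbouring moving blocks into clusters.

            vectors: dict mapping (row, col) block coordinates to (dx, dy) motion vectors.
            Returns a list of clusters, each a list of block coordinates.
            """
            moving = {b for b, (dx, dy) in vectors.items()
                      if (dx * dx + dy * dy) ** 0.5 >= min_magnitude}
            clusters, seen = [], set()
            for start in moving:
                if start in seen:
                    continue
                queue, cluster = deque([start]), []
                seen.add(start)
                while queue:                      # breadth-first flood fill over the block grid
                    r, c = queue.popleft()
                    cluster.append((r, c))
                    for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                        if nb in moving and nb not in seen:
                            seen.add(nb)
                            queue.append(nb)
                clusters.append(cluster)
            return clusters

        # Hypothetical usage with a few moving blocks from an optical-flow field.
        flow = {(0, 0): (2.0, 0.0), (0, 1): (2.1, 0.1), (5, 5): (0.0, 3.0), (9, 9): (0.1, 0.0)}
        print(cluster_motion_vectors(flow))   # two clusters; the (9, 9) block is below threshold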

  6. Real-time systems architectures

    International Nuclear Information System (INIS)

    Sendall, D.M.

    1986-01-01

    The aim of this paper is to explore some of the design issues in online data acquisition and monitoring systems for high-energy physics experiments. In particular it concentrates on the multi-processor aspects of the design of existing and planned experiments. The central problem to be solved by these systems is the readout and checking of the apparatus, and the recording and perhaps some processing of the data. (Auth.)

  7. Application and API for Real-time Visualization of Ground-motions and Tsunami

    Science.gov (United States)

    Aoi, S.; Kunugi, T.; Suzuki, W.; Kubo, T.; Nakamura, H.; Azuma, H.; Fujiwara, H.

    2015-12-01

    Due to recent progress in seismographs and communication environments, real-time and continuous ground-motion observation has become technically and economically feasible. K-NET and KiK-net, which are nationwide strong motion networks operated by NIED, cover all of Japan with about 1750 stations in total. More than half of the stations transmit the ground-motion indexes and/or waveform data every second. Traditionally, strong-motion data were recorded by event-triggering based instruments with non-continuous telephone lines that are connected only after an earthquake. Though the data from such networks mainly contribute to preparations for future earthquakes, the huge amount of real-time data from dense networks is expected to directly contribute to the mitigation of ongoing earthquake disasters through, e.g., automatically shutting down plants and helping decision-making for initial response. By generating distribution maps of these indexes and uploading them to the website, we implemented the real-time ground motion monitoring system, Kyoshin (strong-motion in Japanese) monitor. This web service (www.kyoshin.bosai.go.jp) started in 2008 and anyone can grasp the current ground motions of Japan. Though this service provides only ground-motion maps in GIF format, to take full advantage of real-time strong-motion data to mitigate ongoing disasters, digital data are important. We have developed a WebAPI to provide real-time data and related information such as ground motions (5 km-mesh) and arrival times estimated from EEW (earthquake early warning). All response data from this WebAPI are in JSON format and are easy to parse. We also developed a Kyoshin monitor application for smartphones, 'Kmoni view', using the API. In this application, ground motions estimated from EEW are overlapped on the map with the observed one-second-interval indexes. The application can play back previous earthquakes for demonstration or disaster drills. In mobile environments, data traffic and battery are
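
    The sketch below shows the general shape of polling such a JSON WebAPI from a client. The endpoint URL and the field names are placeholders chosen for illustration, since the record does not give the actual interface.

        import json
        import time
        import urllib.request

        API_URL = "https://example.invalid/kyoshin/realtime"   # placeholder, not the real endpoint

        def poll_ground_motion(interval_s=1.0, n_polls=3):
            """Fetch one-second-interval ground-motion records and print hypothetical fields."""
            for _ in range(n_polls):
                with urllib.request.urlopen(API_URL, timeout=5) as response:
                    record = json.loads(response.read().decode("utf-8"))
                # 'stations', 'code' and 'pga' are assumed field names used only for illustration.
                for station in record.get("stations", []):
                    print(station.get("code"), station.get("pga"))
                time.sleep(interval_s)

        # poll_ground_motion() would only run against a real endpoint; the URL above is a stand-in.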

  8. A Comprehensive Optimization Strategy for Real-time Spatial Feature Sharing and Visual Analytics in Cyberinfrastructure

    Science.gov (United States)

    Li, W.; Shao, H.

    2017-12-01

    For geospatial cyberinfrastructure enabled web services, the ability of rapidly transmitting and sharing spatial data over the Internet plays a critical role in meeting the demands of real-time change detection, response and decision-making. Especially for vector datasets, which serve as irreplaceable and concrete material in data-driven geospatial applications, their rich geometry and property information facilitates the development of interactive, efficient and intelligent data analysis and visualization applications. However, the big-data issues of vector datasets have hindered their wide adoption in web services. In this research, we propose a comprehensive optimization strategy to enhance the performance of vector data transmitting and processing. This strategy combines: 1) pre- and on-the-fly generalization, which automatically determines a proper simplification level through the introduction of an appropriate distance tolerance (ADT) to meet various visualization requirements, and at the same time speeds up simplification; 2) a progressive attribute transmission method to reduce data size and therefore the service response time; 3) compressed data transmission and dynamic adoption of a compression method to maximize the service efficiency under different computing and network environments. A cyberinfrastructure web portal was developed for implementing the proposed technologies. After applying our optimization strategies, substantial performance enhancement is achieved. We expect this work to widen the use of web services providing vector data to support real-time spatial feature sharing, visual analytics and decision-making.
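
    Line generalization with a distance tolerance, as used in step 1 of the strategy, can be illustrated with the classic Douglas-Peucker algorithm. How the appropriate distance tolerance (ADT) is chosen automatically is the paper's contribution and is not reproduced here; the tolerance below is picked arbitrarily for the example.

        def douglas_peucker(points, tolerance):
            """Simplify a polyline, keeping points farther than `tolerance` from the chord."""
            if len(points) < 3:
                return list(points)
            (x1, y1), (x2, y2) = points[0], points[-1]
            dx, dy = x2 - x1, y2 - y1
            length = (dx * dx + dy * dy) ** 0.5 or 1e-12
            # Perpendicular distance of every interior point to the segment between the end points.
            index, max_dist = 0, 0.0
            for i, (x, y) in enumerate(points[1:-1], start=1):
                dist = abs(dy * (x - x1) - dx * (y - y1)) / length
                if dist > max_dist:
                    index, max_dist = i, dist
            if max_dist <= tolerance:
                return [points[0], points[-1]]
            left = douglas_peucker(points[: index + 1], tolerance)
            right = douglas_peucker(points[index:], tolerance)
            return left[:-1] + right

        # Hypothetical coastline fragment simplified with a tolerance of 0.5 map units.
        coastline = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
        print(douglas_peucker(coastline, tolerance=0.5))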

  9. Case studies on design, simulation and visualization of control and measurement applications using REX control system

    Energy Technology Data Exchange (ETDEWEB)

    Ozana, Stepan, E-mail: stepan.ozana@vsb.cz; Pies, Martin, E-mail: martin.pies@vsb.cz; Docekal, Tomas, E-mail: docekalt@email.cz [VSB-Technical University of Ostrava, Faculty of Electrical Engineering and Computer Science, Department of Cybernetics and Biomedical Engineering, 17. listopadu 15/2172, Ostrava-Poruba, 700 30 (Czech Republic)

    2016-06-08

    REX Control System is a professional advanced tool for design and implementation of complex control systems that belongs to the softPLC category. It covers the entire process, starting from simulation of the functionality of the application before deployment, through implementation on a real-time target, to analysis, diagnostics and visualization. Basically it consists of two parts: the development tools and the runtime system. It is also compatible with the Simulink environment, and the way of implementation of a control algorithm is very similar. The control scheme is finally compiled (using the RexDraw utility) and uploaded into a chosen real-time target (using the RexView utility). There is a wide variety of hardware platforms and real-time operating systems supported by REX Control System, such as Windows Embedded, Linux, and Linux/Xenomai deployed on SBC, IPC, PAC, Raspberry Pi and others with many I/O interfaces. It is a modern system designed for both measurement and control applications, offering a lot of additional functions concerning data archiving, visualization based on HTML5, and communication standards. The paper sums up the possibilities of its use in the educational process, focused on control of case studies of physical models with classical and advanced control algorithms.

  10. Case studies on design, simulation and visualization of control and measurement applications using REX control system

    International Nuclear Information System (INIS)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-01-01

    REX Control System is a professional advanced tool for design and implementation of complex control systems that belongs to the softPLC category. It covers the entire process, starting from simulation of the functionality of the application before deployment, through implementation on a real-time target, to analysis, diagnostics and visualization. Basically it consists of two parts: the development tools and the runtime system. It is also compatible with the Simulink environment, and the way of implementation of a control algorithm is very similar. The control scheme is finally compiled (using the RexDraw utility) and uploaded into a chosen real-time target (using the RexView utility). There is a wide variety of hardware platforms and real-time operating systems supported by REX Control System, such as Windows Embedded, Linux, and Linux/Xenomai deployed on SBC, IPC, PAC, Raspberry Pi and others with many I/O interfaces. It is a modern system designed for both measurement and control applications, offering a lot of additional functions concerning data archiving, visualization based on HTML5, and communication standards. The paper sums up the possibilities of its use in the educational process, focused on control of case studies of physical models with classical and advanced control algorithms.

  11. Case studies on design, simulation and visualization of control and measurement applications using REX control system

    Science.gov (United States)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-06-01

    REX Control System is a professional advanced tool for design and implementation of complex control systems that belongs to the softPLC category. It covers the entire process, starting from simulation of the functionality of the application before deployment, through implementation on a real-time target, to analysis, diagnostics and visualization. Basically it consists of two parts: the development tools and the runtime system. It is also compatible with the Simulink environment, and the way of implementation of a control algorithm is very similar. The control scheme is finally compiled (using the RexDraw utility) and uploaded into a chosen real-time target (using the RexView utility). There is a wide variety of hardware platforms and real-time operating systems supported by REX Control System, such as Windows Embedded, Linux, and Linux/Xenomai deployed on SBC, IPC, PAC, Raspberry Pi and others with many I/O interfaces. It is a modern system designed for both measurement and control applications, offering a lot of additional functions concerning data archiving, visualization based on HTML5, and communication standards. The paper sums up the possibilities of its use in the educational process, focused on control of case studies of physical models with classical and advanced control algorithms.

  12. Clinical evaluation of semiautonomous smart wheelchair architecture (Drive-Safe System) with visually impaired individuals.

    Science.gov (United States)

    Sharma, Vinod; Simpson, Richard C; LoPresti, Edmund F; Schmeler, Mark

    2012-01-01

    Nonambulatory, visually impaired individuals mostly rely on caregivers for their day-to-day mobility needs. The Drive-Safe System (DSS) is a modular, semiautonomous smart wheelchair system aimed at providing independent mobility to people with visual and mobility impairments. In this project, clinical evaluation of the DSS was performed in a controlled laboratory setting with individuals who have visual impairment but no mobility impairment. Their performance using DSS was compared with their performance using a standard cane for navigation assistance. Participants rated their subjective appraisal of the DSS by using the National Aeronautics and Space Administration-Task Load Index inventory. DSS significantly reduced the number and severity of collisions compared with using a cane alone and without increasing the time required to complete the task. Users rated DSS favorably; they experienced less physical demand when using the DSS, but did not feel any difference in perceived effort, mental demand, and level of frustration when using the DSS alone or along with a cane in comparison with using a cane alone. These findings suggest that the DSS can be a safe, reliable, and easy-to-learn and operate independent mobility solution for visually impaired wheelchair users.

  13. A GIS-Enabled, Michigan-Specific, Hierarchical Groundwater Modeling and Visualization System

    Science.gov (United States)

    Liu, Q.; Li, S.; Mandle, R.; Simard, A.; Fisher, B.; Brown, E.; Ross, S.

    2005-12-01

    Efficient management of groundwater resources relies on a comprehensive database that represents the characteristics of the natural groundwater system as well as analysis and modeling tools to describe the impacts of decision alternatives. Many agencies in Michigan have spent several years compiling expensive and comprehensive surface water and groundwater inventories and other related spatial data that describe their respective areas of responsibility. However, most often this wealth of descriptive data has only been utilized for basic mapping purposes. The benefits from analyzing these data, using GIS analysis functions or externally developed analysis models or programs, have yet to be systematically realized. In this talk, we present a comprehensive software environment that allows Michigan groundwater resources managers and frontline professionals to make more effective use of the available data and improve their ability to manage and protect groundwater resources, address potential conflicts, design cleanup schemes, and prioritize investigation activities. In particular, we take advantage of the Interactive Ground Water (IGW) modeling system and convert it to a customized software environment specifically for analyzing, modeling, and visualizing the Michigan statewide groundwater database. The resulting Michigan IGW modeling system (IGW-M) is completely window-based, fully interactive, and seamlessly integrated with a GIS mapping engine. The system operates in real time (on the fly), providing dynamic, hierarchical mapping, modeling, spatial analysis, and visualization. Specifically, IGW-M allows water resources and environmental professionals in Michigan to: * Access and utilize the extensive data from the statewide groundwater database, interactively manipulate GIS objects, and display and query the associated data and attributes; * Analyze and model the statewide groundwater database, interactively convert GIS objects into numerical model features

  14. Moving attention - Evidence for time-invariant shifts of visual selective attention

    Science.gov (United States)

    Remington, R.; Pierce, L.

    1984-01-01

    Two experiments measured the time to shift spatial selective attention across the visual field to targets 2 or 10 deg from central fixation. A central arrow cued the most likely target location. The direction of attention was inferred from reaction times to expected, unexpected, and neutral locations. The development of a spatial attentional set with time was examined by presenting target probes at varying times after the cue. There were no effects of distance on the time course of the attentional set. Reaction times for far locations were slower than for near, but the effects of attention were evident by 150 msec in both cases. Spatial attention does not shift with a characteristic, fixed velocity. Rather, velocity is proportional to distance, resulting in a movement time that is invariant over the distances tested.
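
    The time-invariance claimed above follows directly if shift velocity scales with distance (a one-line illustration of the paper's argument, not a quoted equation):

        v = k\,d \quad\Longrightarrow\quad t_{\mathrm{move}} = \frac{d}{v} = \frac{1}{k}

    so the movement time is the same for targets at 2 deg and at 10 deg, even though the far targets are reached at a proportionally higher velocity.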

  15. Visual Data Exploration for Balance Quantification in Real-Time During Exergaming.

    Directory of Open Access Journals (Sweden)

    Venustiano Soancatl Aguilar

    Full Text Available Unintentional injuries are among the ten leading causes of death in older adults; falls cause 60% of these deaths. Despite their effectiveness to improve balance and reduce the risk of falls, balance training programs have several drawbacks in practice, such as lack of engaging elements, boring exercises, and the effort and cost of travelling, ultimately resulting in low adherence. Exergames, that is, digital games controlled by body movements, have been proposed as an alternative to improve balance. One of the main challenges for exergames is to automatically quantify balance during game-play in order to adapt the game difficulty according to the skills of the player. Here we perform a multidimensional exploratory data analysis, using visualization techniques, to find useful measures for quantifying balance in real-time. First, we visualize exergaming data, derived from 400 force plate recordings of 40 participants from 20 to 79 years and 10 trials per participant, as heat maps and violin plots to get quick insight into the nature of the data. Second, we extract known and new features from the data, such as instantaneous speed, measures of dispersion, turbulence measures derived from speed, and curvature values. Finally, we analyze and visualize these features using several visualizations such as a heat map, overlapping violin plots, a parallel coordinate plot, a projection of the two first principal components, and a scatter plot matrix. Our visualizations and findings suggest that heat maps and violin plots can provide quick insight and directions for further data exploration. The most promising measures to quantify balance in real-time are speed, curvature and a turbulence measure, because these measures show age-related changes in balance performance. The next step is to apply the present techniques to data of whole body movements as recorded by devices such as Kinect.
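
    The speed and curvature features mentioned above can be derived from a sampled centre-of-pressure trajectory as sketched below; the array shapes, sampling rate and synthetic sway path are assumptions made for the example.

        import numpy as np

        def speed_and_curvature(x, y, dt):
            """Instantaneous speed and curvature of a 2D trajectory sampled every `dt` seconds."""
            dx, dy = np.gradient(x, dt), np.gradient(y, dt)          # first derivatives
            ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)      # second derivatives
            speed = np.hypot(dx, dy)
            curvature = np.abs(dx * ddy - dy * ddx) / np.maximum(speed ** 3, 1e-12)
            return speed, curvature

        # Hypothetical force-plate recording at 100 Hz: a slightly noisy circular sway path.
        t = np.arange(0, 10, 0.01)
        x = np.cos(t) + 0.01 * np.random.randn(t.size)
        y = np.sin(t) + 0.01 * np.random.randn(t.size)
        speed, curvature = speed_and_curvature(x, y, dt=0.01)
        print(speed.mean(), curvature.mean())   # summary features for one trial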

  16. Learning Visual Representations for Perception-Action Systems

    DEFF Research Database (Denmark)

    Piater, Justus; Jodogne, Sebastien; Detry, Renaud

    2011-01-01

    We discuss vision as a sensory modality for systems that effect actions in response to perceptions. While the internal representations informed by vision may be arbitrarily complex, we argue that in many cases it is advantageous to link them rather directly to action via learned mappings. These arguments are illustrated by two examples of our own work. First, our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split ... and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. To these models, the method associates grasp experiences autonomously learned by trial and error. These experiences form a nonparametric representation of grasp success ...

  17. Evaluating System Parameters on a Dragonfly using Simulation and Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jain, Nikhil [Univ. of Illinois, Urbana-Champaign, IL (United States); Livnat, Yarden [Univ. of Utah, Salt Lake City, UT (United States); Pascucci, Valerio [Univ. of Utah, Salt Lake City, UT (United States); Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Univ. of Utah, Salt Lake City, UT (United States)

    2015-04-24

    The dragonfly topology is becoming a popular choice for building high-radix, low-diameter networks with high-bandwidth links. Even with a powerful network, preliminary experiments on Edison at NERSC have shown that for communication heavy applications, job interference and thus presumably job placement remains an important factor. In this paper, we explore the effects of job placement, job sizes, parallel workloads and network configurations on network throughput to better understand inter-job interference. We use a simulation tool called Damselfly to model the network behavior of Edison and study the impact of various system parameters on network throughput. Parallel workloads based on five representative communication patterns are used and the simulation studies on up to 131,072 cores are aided by a new visualization of the dragonfly network.

  18. Efficient data exchange: Integrating a vector GIS with an object-oriented, 3-D visualization system

    International Nuclear Information System (INIS)

    Kuiper, J.; Ayers, A.; Johnson, R.; Tolbert-Smith, M.

    1996-01-01

    A common problem encountered in Geographic Information System (GIS) modeling is the exchange of data between different software packages to best utilize the unique features of each package. This paper describes a project to integrate two systems through efficient data exchange. The first is a widely used GIS based on a relational data model. This system has a broad set of data input, processing, and output capabilities, but lacks three-dimensional (3-D) visualization and certain modeling functions. The second system is a specialized object-oriented package designed for 3-D visualization and modeling. Although this second system is useful for subsurface modeling and hazardous waste site characterization, it does not provide many of the capabilities of a complete GIS. The system-integration project resulted in an easy-to-use program to transfer information between the systems, making many of the more complex conversion issues transparent to the user. The strengths of both systems are accessible, allowing the scientist more time to focus on analysis. This paper details the capabilities of the two systems, explains the technical issues associated with data exchange and how they were solved, and outlines an example analysis project that used the integrated systems.

  19. System Architecture of Small Unmanned Aerial System for Flight Beyond Visual Line-of-Sight

    Science.gov (United States)

    2015-09-17

    Thesis, Air Force Institute of Technology (report number AFIT-ENV-MS-15-S-047). The work is declared a work of the U.S. Government and is not subject to copyright protection in the United States.

  20. On disturbed time continuity in schizophrenia: an elementary impairment in visual perception?

    Directory of Open Access Journals (Sweden)

    Anne eGiersch

    2013-05-01

    Full Text Available Schizophrenia is associated with a series of visual perception impairments, which might impact the patients' everyday life and be related to clinical symptoms. However, the heterogeneity of the visual disorders makes it a challenge to understand both the mechanisms and the consequences of these impairments, i.e. the way patients experience the outer world. Based on earlier psychiatry literature, we argue that issues regarding time might shed new light on the disorders observed in patients with schizophrenia. We will briefly review the mechanisms involved in the sense of time continuity and clinical evidence that they are impaired in patients with schizophrenia. We will then summarize a recent experimental approach regarding the coding of time-event structure, namely the ability to discriminate between simultaneous and asynchronous events. The use of an original method of analysis allowed us to distinguish between explicit and implicit judgements of synchrony. We showed that for SOAs below 20 ms neither patients nor controls fuse events in time. On the contrary, subjects distinguish events at an implicit level even when judging them as synchronous. In addition, the implicit responses of patients and controls differ qualitatively. It is as if controls always put more weight on the event that occurred last, whereas patients have difficulty following events in time at an implicit level. In patients, there is a clear dissociation between results at short and large asynchronies, which suggests selective mechanisms for the implicit coding of time-event structure. These results might explain the disruption of the sense of time continuity in patients. We argue that this line of research might also help us to better understand the mechanisms of the visual impairments in patients and how they see their environment.

  1. Techniques for Fault Detection and Visualization of Telemetry Dependence Relationships for Root Cause Fault Analysis in Complex Systems

    Science.gov (United States)

    Guy, Nathaniel

    This thesis explores new ways of looking at telemetry data, from a time-correlative perspective, in order to see patterns within the data that may suggest root causes of system faults. It was thought initially that visualizing an animated Pearson Correlation Coefficient (PCC) matrix for telemetry channels would be sufficient to give new understanding; however, testing showed that the high dimensionality and inability to easily look at change over time in this approach impeded understanding. Different correlative techniques, combined with the time curve visualization proposed by Bach et al. (2015), were adapted to visualize both raw telemetry and telemetry data correlations. Review revealed that these new techniques give insights into the data, and an intuitive grasp of data families, which show the effectiveness of this approach for enhancing system understanding and assisting with root cause analysis for complex aerospace systems.
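
    A time-correlative view of telemetry of the kind described above can be obtained by computing a Pearson correlation matrix over a sliding window, as in this sketch; the channel layout, window length and synthetic data are assumptions made for the example, not the thesis's actual pipeline.

        import numpy as np

        def windowed_correlations(telemetry, window, step):
            """Pearson correlation matrices, one per sliding window.

            telemetry: array of shape (n_channels, n_samples).
            Returns an array of shape (n_windows, n_channels, n_channels).
            """
            n_channels, n_samples = telemetry.shape
            matrices = []
            for start in range(0, n_samples - window + 1, step):
                segment = telemetry[:, start:start + window]
                matrices.append(np.corrcoef(segment))   # pairwise PCC for this window
            return np.array(matrices)

        # Hypothetical example: 4 channels, 1000 samples, 100-sample windows advancing by 50.
        rng = np.random.default_rng(1)
        data = rng.standard_normal((4, 1000))
        data[1] = 0.8 * data[0] + 0.2 * data[1]          # make two channels correlated
        corr = windowed_correlations(data, window=100, step=50)
        print(corr.shape, corr[0, 0, 1])                 # correlation of channels 0 and 1 in window 0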

  2. Real-Time Visualization and Manipulation of the Metastatic Trajectory of Breast Cancer Cells

    Science.gov (United States)

    2017-09-01

    Award number W81XWH-13-1-0173. The aim of this work was to engineer breast cancer cells to irreversibly alter the genome of nearby cells through exosomal transfer of Cre recombinase from the cancer cells to surrounding cells. Our goal was to use this study to activate green fluorescent protein in the host reporter cells in the

  3. Time-based forgetting in visual working memory reflects temporal distinctiveness, not decay

    OpenAIRE

    Souza Alessandra S.; Oberauer Klaus

    2015-01-01

    Is forgetting from working memory (WM) better explained by decay or interference? The answer to this question is the topic of an ongoing debate. Recently, a number of studies showed that performance in tests of visual WM declines with an increasing unfilled retention interval. This finding was interpreted as revealing decay. Alternatively, it can be explained by interference theories as an effect of temporal distinctiveness. According to decay theories, forgetting depends on the absolute time el...

  4. A Scientometric Visualization Analysis for Night-Time Light Remote Sensing Research from 1991 to 2016

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2017-08-01

    Full Text Available In this paper, we conducted a scientometric analysis based on the Night-Time Light (NTL) remote sensing related literature datasets retrieved from the Science Citation Index Expanded and Social Science Citation Index in the Web of Science core collection database. Using the methods of bibliometric and Social Network Analysis (SNA), we drew several conclusions: (1) NTL related studies have become a research hotspot, especially after 2011 when the second generation of NTL satellites, the Suomi National Polar-orbiting Partnership (S-NPP) Satellite with the Visible Infrared Imaging Radiometer Suite (VIIRS) sensor, was on board. In the same period, the open-access policy of the long historical dataset of the first generation satellite, the Defense Meteorological Satellite Program's Operational Linescan System (DMSP/OLS), started. (2) Most related studies are conducted by authors from the USA and China, and the USA takes the lead in the field. We identified the biggest research communities constructed by co-authorships and the related important authors and topics by SNA. (3) By the visualization and analysis of the topic evolution using the co-word and co-cited reference networks, we can clearly see that the research topics change from hardware oriented studies to more real-world applications, and from the first generation satellite, DMSP/OLS, to the second generation satellite, S-NPP. Although the Day Night Band (DNB) of the S-NPP exhibits higher spatial and radiometric resolution and better calibration conditions than the first generation DMSP/OLS, the longer historical datasets in DMSP/OLS are still important in long-term and large-scale human activity analysis. (4) In line with intuitive knowledge, the NTL remote sensing related studies display stronger connections (such as interpretive frame, context, and academic purpose) to the social sciences than the general remote sensing discipline. The citation trajectories are visualized based on the dual-maps, thus the

  5. Real-time deformation measurement using a transportable shearography system

    Science.gov (United States)

    Weijers, A. L.; van Brug, Hedser H.; Frankena, Hans J.

    1997-03-01

    A new system for deformation visualization has been developed: a real-time, phase-stepped shearing speckle interferometer. This system provides the possibility to measure quantitatively the deformations of diffusely reflecting objects in an industrial environment. The main characteristics of this interferometer are its speed of operation and its reduced sensitivity to external disturbances. Apart from its semiconductor laser source, the system has a shoe-box size and is mounted on a tripod for easy handling during inspection. This paper describes the shearing speckle interferometry set-up as developed at our laboratory and its potential for detecting defects.

  6. Sustained visual-spatial attention produces costs and benefits in response time and evoked neural activity.

    Science.gov (United States)

    Mangun, G R; Buck, L A

    1998-03-01

    This study investigated the simple reaction time (RT) and event-related potential (ERP) correlates of biasing attention towards a location in the visual field. RTs and ERPs were recorded to stimuli flashed randomly and with equal probability to the left and right visual hemifields in the three blocked, covert attention conditions: (i) attention divided equally to left and right hemifield locations; (ii) attention biased towards the left location; or (iii) attention biased towards the right location. Attention was biased towards left or right by instructions to the subjects, and responses were required to all stimuli. Relative to the divided attention condition, RTs were significantly faster for targets occurring where more attention was allocated (benefits), and slower to targets where less attention was allocated (costs). The early P1 (100-140 msec) component over the lateral occipital scalp regions showed attentional benefits. There were no amplitude modulations of the occipital N1 (125-180 msec) component with attention. Between 200 and 500 msec latency, a late positive deflection (LPD) showed both attentional costs and benefits. The behavioral findings show that when sufficiently induced to bias attention, human observers demonstrate RT benefits as well as costs. The corresponding P1 benefits suggest that the RT benefits of spatial attention may arise as the result of modulations of visual information processing in the extrastriate visual cortex.

  7. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    Science.gov (United States)

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented using GPU, exhibiting high computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was in the level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of details in virtual environments, without any hardware for head or eye tracking.

  8. 14 CFR 1221.108 - Establishment of the NASA Unified Visual Communications System.

    Science.gov (United States)

    2010-01-01

    § 1221.108 Establishment of the NASA Unified Visual Communications System (14 CFR, Aeronautics and Space, National Aeronautics and Space Administration). The NASA Graphics Coordinator will develop and issue changes and additions ...

  9. Advancements to Visualization Control System (VCS, part of UV-CDAT), a Visualization Package Designed for Climate Scientists

    Science.gov (United States)

    Lipsa, D.; Chaudhary, A.; Williams, D. N.; Doutriaux, C.; Jhaveri, S.

    2017-12-01

    Climate Data Analysis Tools (UV-CDAT, https://uvcdat.llnl.gov) is a data analysis and visualization software package developed at Lawrence Livermore National Laboratory and designed for climate scientists. Core components of UV-CDAT include: 1) the Community Data Management System (CDMS), which provides I/O support and a data model for climate data; 2) CDAT Utilities (GenUtil), which processes data using spatial and temporal averaging and statistical functions; and 3) the Visualization Control System (VCS) for interactive visualization of the data. VCS is a Python visualization package primarily built for climate scientists; however, because of its generality and breadth of functionality, it can be a useful tool for other scientific applications. VCS provides 1D, 2D and 3D visualization functions such as scatter plots and line graphs for 1D data; boxfill, meshfill, isofill and isoline for 2D scalar data; vector glyphs and streamlines for 2D vector data; and 3d_scalar and 3d_vector for 3D data. Specifically for climate data, our plotting routines include projections, Skew-T plots and Taylor diagrams. While VCS provided a user-friendly API, the previous implementation of VCS relied on a slow-performing vector graphics (Cairo) backend, which is suitable for smaller datasets and non-interactive graphics. The LLNL and Kitware team has added a new backend to VCS that uses the Visualization Toolkit (VTK) as its visualization backend. VTK is one of the most popular open source, multi-platform scientific visualization libraries, written in C++. Its use of OpenGL and a pipeline processing architecture results in a highly performant VCS library. Its multitude of available data formats and visualization algorithms results in easy adoption of new visualization methods and new data formats in VCS. In this presentation, we describe recent contributions to VCS that include new visualization plots, continuous integration testing using Conda and CircleCI, tutorials and examples using Jupyter notebooks as well as
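
    A minimal plotting sketch in the spirit of the VCS API is shown below. It assumes the standard cdms2/vcs entry points (vcs.init, createboxfill, Canvas.plot, Canvas.png); the NetCDF file name and the variable name are hypothetical, chosen purely for illustration.

        # Sketch only: assumes UV-CDAT's cdms2 and vcs packages are installed.
        import cdms2
        import vcs

        f = cdms2.open("example_climate_data.nc")        # hypothetical file name
        tas = f("tas")                                   # hypothetical variable (surface air temperature)

        canvas = vcs.init()                              # create a VCS canvas
        boxfill = canvas.createboxfill()                 # 2D scalar graphics method
        canvas.plot(tas, boxfill)                        # render the field (VTK backend)
        canvas.png("tas_boxfill")                        # save the plot as tas_boxfill.png
        f.close()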

  10. The influence of time on task on mind wandering and visual working memory.

    Science.gov (United States)

    Krimsky, Marissa; Forster, Daniel E; Llabre, Maria M; Jha, Amishi P

    2017-12-01

    Working memory relies on executive resources for successful task performance, with higher demands necessitating greater resource engagement. In addition to mnemonic demands, prior studies suggest that internal sources of distraction, such as mind wandering (i.e., having off-task thoughts) and greater time on task, may tax executive resources. Herein, the consequences of mnemonic demand, mind wandering, and time on task were investigated during a visual working memory task. Participants (N=143) completed a delayed-recognition visual working memory task, with mnemonic load for visual objects manipulated across trials (1 item=low load; 2 items=high load) and subjective mind wandering assessed intermittently throughout the experiment using a self-report Likert-type scale (1=on-task, 6=off-task). Task performance (correct/incorrect response) and self-reported mind wandering data were evaluated by hierarchical linear modeling to track trial-by-trial fluctuations. Performance declined with greater time on task, and the rate of decline was steeper for high vs low load trials. Self-reported mind wandering increased over time, and significantly varied as a function of both load and time on task. Participants reported greater mind wandering at the beginning of the experiment for low vs. high load trials; however, with greater time on task, more mind wandering was reported during high vs. low load trials. These results suggest that the availability of executive resources in support of working memory maintenance processes fluctuates in a demand-sensitive manner with time on task, and may be commandeered by mind wandering. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Visual tracking strategies for intelligent vehicle highway systems

    Science.gov (United States)

    Smith, Christopher E.; Papanikolopoulos, Nikolaos P.; Brandt, Scott A.; Richards, Charles

    1995-01-01

    The complexity and congestion of current transportation systems often produce traffic situations that jeopardize the safety of the people involved. These situations vary from maintaining a safe distance behind a leading vehicle to safely allowing a pedestrian to cross a busy street. Environmental sensing plays a critical role in virtually all of these situations. Of the sensors available, vision sensors provide information that is richer and more complete than other sensors, making them a logical choice for a multisensor transportation system. In this paper we present robust techniques for intelligent vehicle-highway applications where computer vision plays a crucial role. In particular, we demonstrate that the controlled active vision framework can be utilized to provide a visual sensing modality to a traffic advisory system in order to increase the overall safety margin in a variety of common traffic situations. We have selected two application examples, vehicle tracking and pedestrian tracking, to demonstrate that the framework can provide precisely the type of information required to effectively manage the given situation.

  12. Application Of Expert System Techniques To A Visual Tracker

    Science.gov (United States)

    Myler, Harley R.; Thompson, Wiley E.; Flachs, Gerald M.

    1985-04-01

    A structure for a visual tracking system is presented which relies on information developed from previous tracking scenarios stored in a knowledge base to enhance tracking performance. The system is comprised of a centroid tracker front end which supplies segmented image features to a data reduction algorithm which holds the reduced data in a temporary database relation. This relation is then classified via two separate modes, learn and track. Under learn mode, an external teacher-director operator provides identification and weighting cues for membership in a long-term storage relation within a knowledge base. Track mode operates autonomously from the learn mode, where the system determines feature validity by applying fuzzy set membership criteria to previously stored track information in the database. Results determined from the classification generate tracker directives which either enhance or permit current tracking to continue or cause the tracker to search for alternate targets based upon analysis of a global target tracking list. The classification algorithm is based on correlative analysis of the tracker's segmented output presentation after low-pass filtering derives lower order harmonics of the feature. The fuzzy set membership criteria are based on size, rotation, frame location, and past history of the feature. The first three factors are linear operations on the spectra, while the last is generated as a context relation in the knowledge base. The context relation interlinks data between features to facilitate tracker operation during feature occlusion or presence of countermeasures.

  13. Equipment of visualization environment of a large-scale structural analysis system. Visualization using AVS/Express of an ADVENTURE system

    International Nuclear Information System (INIS)

    Miyazaki, Mikiya

    2004-02-01

    Visualization of data is performed in many research fields, and many special software packages for specific purposes exist today, but such packages have interfaces to only a small number of solvers. In many simulations, data conversion for the visualization software is required between analysis and visualization for practical use. This report describes the setup of a data visualization environment in which AVS/Express was installed, in response to many requests from users of the large-scale structural analysis system that is provided as ITBL community software. This environment enables the ITBL visualization server to be used as a visualization device after computation on the ITBL computer. Moreover, substantial use within the community in the ITBL environment is expected by merging it into the ITBL/AVS environment in the future. (author)

  14. Effect of drivers' age and push button locations on visual time off road, steering wheel deviation and safety perception.

    Science.gov (United States)

    Dukic, T; Hanson, L; Falkmer, T

    2006-01-15

    The study examined the effects of manual control locations on two groups of randomly selected young and old drivers in relation to visual time off road, steering wheel deviation and safety perception. Measures of visual time off road, steering wheel deviations and safety perception were performed with young and old drivers during real traffic. The results showed an effect of both driver's age and button location on the dependent variables. Older drivers spent longer visual time off road when pushing the buttons and had larger steering wheel deviations. Moreover, the greater the eccentricity between the normal line of sight and the button locations, the longer the visual time off road and the larger the steering wheel deviations. No interaction effect between button location and age was found with regard to visual time off road. Button location had an effect on perceived safety: the further away from the normal line of sight the lower the rating.

  15. I can see what you are saying: Auditory labels reduce visual search times.

    Science.gov (United States)

    Cho, Kit W

    2016-10-01

    The present study explored the self-directed-speech effect, the finding that relative to silent reading of a label (e.g., DOG), saying it aloud reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the differences in the number of cues in self-directed speech (two) vs. silent reading (one) and tested whether self-articulation is required for the effect. The results showed that self-articulation is not required and that merely hearing the auditory label reduces visual search RTs relative to silent reading. This finding also rules out the number of cues confound. Experiment 2 examined whether hearing an auditory label activates more prototypical features of the label's referent and whether the auditory-label benefit is moderated by the target's imagery concordance (the degree to which the target picture matches the mental picture that is activated by a written label for the target). When the target imagery concordance was high, RTs following the presentation of a high prototypicality picture or auditory cue were comparable and shorter than RTs following a visual label or low prototypicality picture cue. However, when the target imagery concordance was low, RTs following an auditory cue were shorter than the comparable RTs following the picture cues and visual-label cue. The results suggest that an auditory label activates both prototypical and atypical features of a concept and can facilitate visual search RTs even when compared to picture primes. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. A 'Universal Time' system for ASDEX upgrade

    International Nuclear Information System (INIS)

    Raupp, Gerhard; Cole, R.; Behler, K.; Fitzek, M.; Heimann, P.; Lohs, A.; Lueddecke, K.; Neu, G.; Schacht, J.; Treutterer, W.; Zasche, D.; Zehetbauer, Th.; Zilker, M.

    2003-01-01

    For the new generation of intelligent controllers for plasma diagnostics, discharge control and long-pulse experiment control a new time system supporting steady state real-time operation has been devised. A central unit counts time at nanosecond resolution, covering more than the experiment lifetime. The broadcast time information serves local units to perform application functions such as current time readout, trigger generation and sample time measurement. Time is treated as a precisely measured quantity like other physical quantities. Tagging all detected events and sampled values with measured times as [value; time]-entities facilitates real-time data analysis, steady state protocolling and time-sorted archiving
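
    The [value; time]-entity idea can be sketched as a simple data structure in which every sample or event carries its measured nanosecond timestamp. This is an illustration of the concept only, not the actual ASDEX Upgrade implementation.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class TimedSample:
            """A measured quantity tagged with the centrally counted time in nanoseconds."""
            value: float
            time_ns: int

        def time_sorted(samples):
            """Archive helper: return samples ordered by their measured acquisition time."""
            return sorted(samples, key=lambda s: s.time_ns)

        # Hypothetical readings from two local units, merged and time-sorted for archiving.
        unit_a = [TimedSample(1.02, 1_000_000_500), TimedSample(1.05, 2_000_000_500)]
        unit_b = [TimedSample(0.98, 1_500_000_250)]
        print(time_sorted(unit_a + unit_b))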

  17. Introduction to flow visualization system in SPARC test facility

    International Nuclear Information System (INIS)

    Lee, Wooyoung; Song, Simon; Na, Young Su; Hong, Seong Wan

    2016-01-01

    Under a severe accident, the released hydrogen can accumulate and mix with steam and air, depending on containment conditions, generating a flammable mixture. A hydrogen explosion induced by an ignition source causes severe damage to structures or facilities. Hydrogen risk regarding mixing, distribution, and combustion has been identified by several expert groups and studied actively since the TMI accident. A large-scale thermal-hydraulic experimental facility is required to simulate the complex severe accident phenomena in the containment building. We have prepared the test facility, called the SPARC (Spray, Aerosol, Recombiner, Combustion), to resolve the international open issues regarding hydrogen risk. Gas mixing and stratification tests using helium instead of hydrogen, and estimation of the erosion of a helium stratification surface owing to a vertical jet flow, will be performed in SPARC. A measurement system is needed to observe the gas flow in a large-scale test facility such as SPARC. A PIV (particle image velocimetry) system has been installed to visualize the gas flow. We are preparing the SPARC test facility for estimating the thermal-hydraulic behavior of hydrogen in a closed containment building, and the PIV system for quantitative assessment of gas flow. In particular, we will perform gas mixing and stratification-surface erosion tests using helium as a replacement for hydrogen. These will be evaluated by measuring the 2D velocity field using the PIV system. The PIV system mainly consists of a camera, a laser and tracer particles. The expected maximum size of the FOV is 750 x 750 mm^2, limited by the focal length of the lens, and a high-power laser corresponding to 425 mJ/pulse at a 532 nm wavelength is required due to the large FOV
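
    PIV recovers the 2D velocity field by cross-correlating small interrogation windows between successive frames. The sketch below shows the core displacement estimate for one window pair; the window size and synthetic data are illustrative, and this is not the SPARC system's actual processing chain.

        import numpy as np

        def piv_displacement(window_a, window_b):
            """Integer-pixel displacement of the pattern in window_b relative to window_a."""
            fa = np.fft.fft2(window_a - window_a.mean())
            fb = np.fft.fft2(window_b - window_b.mean())
            corr = np.fft.ifft2(np.conj(fa) * fb).real      # circular cross-correlation
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # map wrapped indices to signed shifts
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

        # Hypothetical 32x32 interrogation windows: the particle pattern moves by (2, 3) pixels.
        rng = np.random.default_rng(3)
        a = rng.random((32, 32))
        b = np.roll(a, (2, 3), axis=(0, 1))
        print(piv_displacement(a, b))    # expected (2, 3)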

  18. Computing and visualizing time-varying merge trees for high-dimensional data

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
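
    The merge tree construction referred to above can be sketched with a union-find sweep over vertices sorted by function value; tracking features across time steps by subtree matching is the paper's contribution and is not reproduced here. The graph layout and field values below are assumptions for the example.

        def merge_tree_events(values, edges):
            """Record merge events while sweeping a scalar field from low to high values.

            values: list of scalar values, one per vertex.
            edges: list of (u, v) pairs describing the neighbourhood graph.
            Returns a list of (birth_vertex_of_absorbed_branch, merge_value) events.
            """
            order = sorted(range(len(values)), key=lambda v: values[v])
            parent, birth = {}, {}
            neighbours = {v: [] for v in range(len(values))}
            for u, v in edges:
                neighbours[u].append(v)
                neighbours[v].append(u)

            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]      # path compression
                    v = parent[v]
                return v

            events = []
            for v in order:                            # sweep vertices by increasing value
                parent[v] = v
                birth[v] = v                           # v provisionally starts a new branch
                for nb in neighbours[v]:
                    if nb in parent:                   # neighbour already swept
                        a, b = find(v), find(nb)
                        if a != b:                     # two branches meet at values[v]
                            keep, drop = (a, b) if values[birth[a]] <= values[birth[b]] else (b, a)
                            if birth[drop] != v:       # skip zero-persistence pairs created by v itself
                                events.append((birth[drop], values[v]))
                            parent[drop] = keep
            return events

        # Hypothetical 1D field with two minima that merge at the global maximum.
        field = [0.0, 2.0, 5.0, 1.0, 3.0]
        path_edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
        print(merge_tree_events(field, path_edges))    # the branch born at vertex 3 merges at 5.0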

  19. No counterpart of visual perceptual echoes in the auditory system.

    Directory of Open Access Journals (Sweden)

    Barkın İlhan

    Full Text Available It has been previously demonstrated by our group that a visual stimulus made of dynamically changing luminance evokes an echo or reverberation at ~10 Hz, lasting up to a second. In this study we aimed to reveal whether similar echoes also exist in the auditory modality. A dynamically changing auditory stimulus equivalent to the visual stimulus was designed and employed in two separate series of experiments, and the presence of reverberations was analyzed based on reverse correlations between stimulus sequences and EEG epochs. The first experiment directly compared visual and auditory stimuli: while previous findings of ~10 Hz visual echoes were verified, no similar echo was found in the auditory modality regardless of frequency. In the second experiment, we tested if auditory sequences would influence the visual echoes when they were congruent or incongruent with the visual sequences. However, the results in that case similarly did not reveal any auditory echoes, nor any change in the characteristics of visual echoes as a function of audio-visual congruence. The negative findings from these experiments suggest that brain oscillations do not equivalently affect early sensory processes in the visual and auditory modalities, and that alpha (8-13 Hz) oscillations play a special role in vision.
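
    Reverse correlation between a random luminance (or loudness) sequence and the EEG, as used above, amounts to cross-correlating the stimulus with each epoch and averaging over trials. The sketch below is a simplified illustration under assumed array shapes, sampling rate and synthetic data.

        import numpy as np

        def reverse_correlation(stimulus, eeg, max_lag):
            """Average cross-correlation between a stimulus sequence and EEG epochs.

            stimulus: array (n_trials, n_samples) of stimulus values.
            eeg: array (n_trials, n_samples) of EEG samples from one electrode.
            Returns the stimulus-locked impulse response for lags 0..max_lag-1.
            """
            n_trials, n_samples = stimulus.shape
            kernel = np.zeros(max_lag)
            for lag in range(max_lag):
                s = stimulus[:, : n_samples - lag]
                e = eeg[:, lag:]
                # correlate the stimulus with the EEG shifted by `lag` samples
                kernel[lag] = np.mean((s - s.mean()) * (e - e.mean()))
            return kernel

        # Hypothetical data at 160 Hz: a 10 Hz echo would appear as a damped oscillation
        # with a period of 16 samples in the returned kernel.
        rng = np.random.default_rng(2)
        stim = rng.standard_normal((50, 1000))
        eeg = np.roll(stim, 3, axis=1) + rng.standard_normal((50, 1000))
        print(reverse_correlation(stim, eeg, max_lag=32).round(2))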

  20. Selective visual scaling of time-scale processes facilitates broadband learning of isometric force frequency tracking.

    Science.gov (United States)

    King, Adam C; Newell, Karl M

    2015-10-01

    The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed gain condition or with selective enhancement in the visual feedback display of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with the error lowest in the intermediate scaling condition followed by the high scaling and fixed gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the time scales in the learning and transfer of isometric force output frequency structures, consistent with 1/f process models of the time scales of motor output variability.