WorldWideScience

Sample records for interactive 3d visualization

  1. Storytelling in Interactive 3D Geographic Visualization Systems

    Directory of Open Access Journals (Sweden)

    Matthias Thöny

    2018-03-01

    The objective of interactive geographic maps is to provide geographic information to a large audience in a captivating and intuitive way. Storytelling helps to create exciting experiences and to explain complex or otherwise hidden relationships of geospatial data. Furthermore, interactive 3D applications offer a wide range of attractive elements for advanced visual story creation and offer the possibility to convey the same story in many different ways. In this paper, we discuss and analyze storytelling techniques in 3D geographic visualizations so that authors and developers working with geospatial data can use these techniques to conceptualize their visualization and interaction design. Finally, we outline two examples which apply the given concepts.

  2. Interactive 3D Mars Visualization

    Science.gov (United States)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. This set currently includes tools for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.
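
    A point-to-point "ruler" of this kind essentially reduces to a great-circle distance computation on the planetary reference sphere. The sketch below is only an illustration of that calculation, not the mission software: the Mars radius constant, the point format, and the example coordinates are assumptions, and the real tool works against georeferenced terrain rather than a perfect sphere.

```typescript
// Illustrative sketch: great-circle (haversine) distance between two
// surface points, as a simple ruler tool might compute it.
// Assumption: a spherical Mars with mean radius ~3389.5 km.
const MARS_MEAN_RADIUS_M = 3_389_500;

interface SurfacePoint {
  latDeg: number; // planetocentric latitude, degrees
  lonDeg: number; // east longitude, degrees
}

const toRad = (deg: number): number => (deg * Math.PI) / 180;

function haversineDistance(a: SurfacePoint, b: SurfacePoint): number {
  const dLat = toRad(b.latDeg - a.latDeg);
  const dLon = toRad(b.lonDeg - a.lonDeg);
  const sinLat = Math.sin(dLat / 2);
  const sinLon = Math.sin(dLon / 2);
  const h =
    sinLat * sinLat +
    Math.cos(toRad(a.latDeg)) * Math.cos(toRad(b.latDeg)) * sinLon * sinLon;
  return 2 * MARS_MEAN_RADIUS_M * Math.asin(Math.sqrt(h)); // metres
}

// Example: distance between two hypothetical points of interest.
console.log(
  haversineDistance({ latDeg: -4.59, lonDeg: 137.44 },
                    { latDeg: -4.60, lonDeg: 137.45 })
);
```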

  3. Interactive 3D visualization for theoretical virtual observatories

    Science.gov (United States)

    Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-06-01

    Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  4. Interactive 3D Visualization for Theoretical Virtual Observatories

    Science.gov (United States)

    Dykes, Tim; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-04-01

    Virtual Observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of datasets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyze the current state of 3D visualization for big theoretical astronomical datasets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based datasets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  5. Interactive WebGL-based 3D visualizations for EAST experiment

    International Nuclear Information System (INIS)

    Xia, J.Y.; Xiao, B.J.; Li, Dan; Wang, K.R.

    2016-01-01

    Highlights: • Developing a user-friendly interface to visualize the EAST experimental data and the device is important to scientists and engineers. • The Web3D visualization system is based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. • The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. • The original CAD model was discretized into different layers with different levels of simplification to enable realistic rendering and improve performance. - Abstract: In recent years, EAST (Experimental Advanced Superconducting Tokamak) experimental data are being shared and analyzed by an increasing number of international collaborators. Developing a user-friendly interface to visualize the data, metadata and the relevant parts of the device is becoming more and more important to aid scientists and engineers. Compared with the previous virtual EAST system based on VRML/Java3D [1] (Li et al., 2014), a new technology is being adopted to create a 3D visualization system based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. It offers a highly interactive interface allowing scientists to roam inside the EAST device and view the complex 3-D structure of the machine. It includes technical details of the device and various diagnostic components, and provides visualization of diagnostic metadata with a direct link to each signal name and its stored data. To allow quick access to the device's 3D model, the original CAD model was discretized into different layers with different levels of simplification. It allows users to search for plasma videos in any experiment and analyze the video frame by frame. In this paper, we present the implementation details to enable realistic rendering and improve performance.
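
    Two implementation points in this record, plug-in-free rendering through the browser's native WebGL context and serving the CAD model at several pre-computed simplification levels, can be sketched roughly as below. This is not the EAST portal's code: the layer URLs, distance thresholds, and mesh format are assumptions made for illustration.

```typescript
// Rough sketch: detect native WebGL support (no plug-ins needed) and
// pick a simplification level of a pre-decimated CAD layer based on
// camera distance. URLs, thresholds and the mesh format are assumed.

function getWebGLContext(canvas: HTMLCanvasElement): WebGLRenderingContext {
  const gl = canvas.getContext("webgl");
  if (!gl) {
    throw new Error("WebGL is not supported by this browser");
  }
  return gl;
}

// Hypothetical levels: coarser meshes for more distant views.
const LOD_LEVELS = [
  { maxDistance: 5, url: "/models/tokamak_high.json" },
  { maxDistance: 20, url: "/models/tokamak_medium.json" },
  { maxDistance: Infinity, url: "/models/tokamak_low.json" },
];

async function loadLayerForDistance(cameraDistance: number): Promise<unknown> {
  const level = LOD_LEVELS.find((l) => cameraDistance <= l.maxDistance)!;
  const response = await fetch(level.url);
  return response.json(); // mesh data to be uploaded into WebGL buffers
}

const canvas = document.querySelector("canvas")!;
const gl = getWebGLContext(canvas);
gl.clearColor(0.05, 0.05, 0.08, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
loadLayerForDistance(12).then((mesh) => console.log("layer loaded", mesh));
```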

  6. Interactive WebGL-based 3D visualizations for EAST experiment

    Energy Technology Data Exchange (ETDEWEB)

    Xia, J.Y., E-mail: jyxia@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Xiao, B.J. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Li, Dan [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Wang, K.R. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China)

    2016-11-15

    Highlights: • Developing a user-friendly interface to visualize the EAST experimental data and the device is important to scientists and engineers. • The Web3D visualization system is based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. • The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. • The original CAD model was discretized into different layers with different levels of simplification to enable realistic rendering and improve performance. - Abstract: In recent years, EAST (Experimental Advanced Superconducting Tokamak) experimental data are being shared and analyzed by an increasing number of international collaborators. Developing a user-friendly interface to visualize the data, metadata and the relevant parts of the device is becoming more and more important to aid scientists and engineers. Compared with the previous virtual EAST system based on VRML/Java3D [1] (Li et al., 2014), a new technology is being adopted to create a 3D visualization system based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. It offers a highly interactive interface allowing scientists to roam inside the EAST device and view the complex 3-D structure of the machine. It includes technical details of the device and various diagnostic components, and provides visualization of diagnostic metadata with a direct link to each signal name and its stored data. To allow quick access to the device's 3D model, the original CAD model was discretized into different layers with different levels of simplification. It allows users to search for plasma videos in any experiment and analyze the video frame by frame. In this paper, we present the implementation details to enable realistic rendering and improve performance.

  7. Java 3D Interactive Visualization for Astrophysics

    Science.gov (United States)

    Chae, K.; Edirisinghe, D.; Lingerfelt, E. J.; Guidry, M. W.

    2003-05-01

    We are developing a series of interactive 3D visualization tools that employ the Java 3D API. We have applied this approach initially to a simple 3-dimensional galaxy collision model (restricted 3-body approximation), with quite satisfactory results. Running either as an applet under Web browser control, or as a Java standalone application, this program permits real-time zooming, panning, and 3-dimensional rotation of the galaxy collision simulation under user mouse and keyboard control. We shall also discuss applications of this technology to 3-dimensional visualization for other problems of astrophysical interest such as neutron star mergers and the time evolution of element/energy production networks in X-ray bursts. *Managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.
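
    In the restricted three-body approximation behind such galaxy-collision models, massless test particles move in the combined gravity of two point-mass galaxy cores. A minimal sketch of one leapfrog step for a single test particle is shown below; the gravitational constant, masses, softening length, and time step are arbitrary demo values, not those of the cited program.

```typescript
// Illustrative leapfrog integration of a massless test particle in the
// field of two point masses (restricted three-body approximation).
// G, masses, softening and time step are arbitrary demo values.

type Vec3 = [number, number, number];

const G = 1.0;          // gravitational constant in code units (assumed)
const SOFTENING = 0.05; // avoids singular forces at close encounters

function accelerationFrom(mass: number, source: Vec3, p: Vec3): Vec3 {
  const dx = source[0] - p[0], dy = source[1] - p[1], dz = source[2] - p[2];
  const r2 = dx * dx + dy * dy + dz * dz + SOFTENING * SOFTENING;
  const f = (G * mass) / Math.pow(r2, 1.5);
  return [f * dx, f * dy, f * dz];
}

function leapfrogStep(pos: Vec3, vel: Vec3,
                      cores: { m: number; pos: Vec3 }[], dt: number): void {
  const acc = (p: Vec3): Vec3 =>
    cores.reduce<Vec3>((a, c) => {
      const ai = accelerationFrom(c.m, c.pos, p);
      return [a[0] + ai[0], a[1] + ai[1], a[2] + ai[2]];
    }, [0, 0, 0]);

  const a0 = acc(pos);
  for (let i = 0; i < 3; i++) pos[i] += vel[i] * dt + 0.5 * a0[i] * dt * dt;
  const a1 = acc(pos);
  for (let i = 0; i < 3; i++) vel[i] += 0.5 * (a0[i] + a1[i]) * dt;
}

// One step of a test particle between two galaxy cores.
const p: Vec3 = [1, 0, 0];
const v: Vec3 = [0, 0.8, 0];
leapfrogStep(p, v, [{ m: 1, pos: [0, 0, 0] }, { m: 0.5, pos: [3, 0, 0] }], 0.01);
console.log(p, v);
```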

  8. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    Science.gov (United States)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control on an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing and human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer to deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities helping the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization, quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, the source code must have the following characteristics for enabling collaborative work: open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use in the case of 3-D astronomical data.

  9. Mental practice with interactive 3D visual aids enhances surgical performance.

    Science.gov (United States)

    Yiasemidou, Marina; Glassman, Daniel; Mushtaq, Faisal; Athanasiou, Christos; Mon-Williams, Mark; Jayne, David; Miskovic, Danilo

    2017-10-01

    Evidence suggests that Mental Practice (MP) could be used to finesse surgical skills. However, MP is cognitively demanding and may be dependent on the ability of individuals to produce mental images. In this study, we hypothesised that the provision of interactive 3D visual aids during MP could facilitate surgical skill performance. 20 surgical trainees were case-matched to one of three different preparation methods prior to performing a simulated Laparoscopic Cholecystectomy (LC). Two intervention groups underwent a 25-minute MP session; one with interactive 3D visual aids depicting the relevant surgical anatomy (3D&MP group, n = 5) and one without (MP-Only, n = 5). A control group (n = 10) watched a didactic video of a real LC. Scores relating to technical performance and safety were recorded by a surgical simulator. The Control group took longer to complete the procedure relative to the 3D&MP condition (p = .002). The number of movements was also statistically different across groups (p = .001), with the 3D&MP group making fewer movements relative to controls (p = .001). Likewise, the control group moved further in comparison to the 3D&MP condition and the MP-Only condition (p = .004). No reliable differences were observed for safety metrics. These data provide evidence for the potential value of MP in improving performance. Furthermore, they suggest that 3D interactive visual aids during MP could potentially enhance performance, beyond the benefits of MP alone. These findings pave the way for future RCTs on surgical preparation and performance.

  10. 3D MODELLING AND INTERACTIVE WEB-BASED VISUALIZATION OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    M. N. Koeva

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria – a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described.

  11. Interactive visualization and analysis of 3D medical images for neurosurgery

    International Nuclear Information System (INIS)

    Miyazawa, Tatsuo; Otsuki, Taisuke.

    1994-01-01

    We propose a method that makes it possible to interactively rotate and zoom a volume-rendered object and to interactively manipulate a function for transferring data values to color and opacity. The method ray-traces a Value-Intensity-Strength volume (VIS volume) instead of a color-opacity volume, and uses an adaptive refinement technique in generating images. The VIS volume tracing method can reduce the computational time of the re-calculation necessitated by changing the function for transferring data values to color and opacity by 20-90 percent, and can reduce the computational time of pre-processing by 20 percent. It can also reduce the required memory space by 40 percent. The combination of VIS volume tracing and the adaptive refinement method makes it possible to interactively visualize and analyze 3D medical image data. Once we can see a detailed image of the 3D objects and determine their orientation, we can easily manipulate the viewing and rendering parameters even while viewing rough, blurred images. The increase in the computation time for generating full-resolution images by using the adaptive refinement technique is approximately five to ten percent. Its effectiveness is evaluated by using the results of visualization for some 3D medical image data. (author)
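
    The central idea, keeping raw data values in the traced volume and applying the value-to-color/opacity transfer function only when samples are composited, so that editing the transfer function does not force recomputation of the volume, can be sketched as follows. The piecewise transfer function and the per-ray sample array are illustrative assumptions, not the authors' implementation.

```typescript
// Sketch: front-to-back compositing along one ray where the transfer
// function (value -> colour, opacity) is applied per sample, so the
// function can be edited interactively without rebuilding the volume.
// The sample source and the transfer-function shape are assumptions.

interface RGBA { r: number; g: number; b: number; a: number; }

// Hypothetical transfer function: low values are transparent blue,
// high values are increasingly opaque red.
function transferFunction(value: number): RGBA {
  const t = Math.min(1, Math.max(0, value));   // normalised data value
  return { r: t, g: 0.2, b: 1 - t, a: t * t }; // opacity grows with value
}

function compositeRay(samples: number[]): RGBA {
  let out: RGBA = { r: 0, g: 0, b: 0, a: 0 };
  for (const s of samples) {
    if (out.a > 0.99) break;                   // early ray termination
    const c = transferFunction(s);
    const w = (1 - out.a) * c.a;               // front-to-back blending
    out = { r: out.r + w * c.r, g: out.g + w * c.g, b: out.b + w * c.b,
            a: out.a + w };
  }
  return out;
}

console.log(compositeRay([0.1, 0.3, 0.8, 0.9, 0.2]));
```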

  12. Interactive Scientific Visualization in 3D Virtual Reality Model

    Directory of Open Access Journals (Sweden)

    Filip Popovski

    2016-11-01

    Scientific visualization in virtual reality technology is a graphical representation of a virtual environment in the form of images or animation that can be displayed with various devices, such as a Head Mounted Display (HMD) or monitors that can show a three-dimensional world. Real-time exploration is a desirable capability for scientific visualization and for virtual reality in which we are immersed, and it makes the research process easier. In this paper, the interactions between the user and objects in the virtual environment occur in real time, which gives a sense of reality to the user. The Quest3D VR software package is used, and the user's movement through the virtual environment, the impossibility of walking through solid objects, and methods for grabbing and displacing objects are programmed, making all of these interactions possible. Finally, these techniques were critically analysed on various computer systems, with excellent results.

  13. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    Science.gov (United States)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.

  14. Interactive client side data visualization with d3.js

    Science.gov (United States)

    Rodzianko, A.; Versteeg, R.; Johnson, D. V.; Soltanian, M. R.; Versteeg, O. J.; Girouard, M.

    2015-12-01

    Geoscience data associated with near surface research and operational sites is increasingly voluminous and heterogeneous (both in terms of providers and data types - e.g. geochemical, hydrological, geophysical, modeling data, of varying spatiotemporal characteristics). Such data allows scientists to investigate fundamental hydrological and geochemical processes relevant to agriculture, water resources and climate change. For scientists to easily share, model and interpret such data requires novel tools with capabilities for interactive data visualization. Under sponsorship of the US Department of Energy, Subsurface Insights is developing the Predictive Assimilative Framework (PAF): a cloud-based subsurface monitoring platform which can manage, process and visualize large heterogeneous datasets. Over the last year we transitioned our visualization method from a server side approach (in which images and animations were generated using Jfreechart and Visit) to a client side one that utilizes the D3 Javascript library. Datasets are retrieved using web service calls to the server, returned as JSON objects and visualized within the browser. Users can interactively explore primary and secondary datasets from various field locations. Our current capabilities include interactive data contouring and heterogeneous time series data visualization. While this approach is very powerful and not necessarily unique, special attention needs to be paid to latency and responsiveness issues, as well as to issues such as cross-browser code compatibility, so that users have an identical, fluid and frustration-free experience across different computational platforms. We gratefully acknowledge support from the US Department of Energy under SBIR Award DOE DE-SC0009732, the use of data from the Lawrence Berkeley National Laboratory (LBNL) Sustainable Systems SFA Rifle field site and collaboration with LBNL SFA scientists.
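
    A minimal version of the described pattern, fetching a JSON time series from a web service and rendering it client-side with D3, might look like the sketch below. The endpoint URL, the JSON field names, and the chart dimensions are hypothetical.

```typescript
// Minimal client-side D3 sketch: fetch a JSON time series from a web
// service and render it as an SVG line chart in the browser.
// The endpoint and field names (time, value) are hypothetical.
import * as d3 from "d3";

interface Sample { time: string; value: number; }

async function drawTimeSeries(): Promise<void> {
  const raw = await d3.json<Sample[]>("/api/timeseries?site=demo");
  if (!raw) return;
  const data = raw.map(d => ({ t: new Date(d.time), v: d.value }));

  const width = 640, height = 320, margin = 40;
  const x = d3.scaleTime()
    .domain(d3.extent(data, d => d.t) as [Date, Date])
    .range([margin, width - margin]);
  const y = d3.scaleLinear()
    .domain(d3.extent(data, d => d.v) as [number, number])
    .nice()
    .range([height - margin, margin]);

  const svg = d3.select("body").append("svg")
    .attr("width", width).attr("height", height);

  // Draw the series as a single path.
  svg.append("path")
    .datum(data)
    .attr("fill", "none")
    .attr("stroke", "steelblue")
    .attr("d", d3.line<{ t: Date; v: number }>()
      .x(d => x(d.t))
      .y(d => y(d.v)));

  // Axes.
  svg.append("g").attr("transform", `translate(0,${height - margin})`)
    .call(d3.axisBottom(x));
  svg.append("g").attr("transform", `translate(${margin},0)`)
    .call(d3.axisLeft(y));
}

drawTimeSeries();
```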

  15. Using 3D in Visualization

    DEFF Research Database (Denmark)

    Wood, Jo; Kirschenbauer, Sabine; Döllner, Jürgen

    2005-01-01

    to display 3D imagery. The extra cartographic degree of freedom offered by using 3D is explored and offered as a motivation for employing 3D in visualization. The use of VR and the construction of virtual environments exploit navigational and behavioral realism, but become most useful when combined...... with abstracted representations embedded in a 3D space. The interactions between development of geovisualization, the technology used to implement it and the theory surrounding cartographic representation are explored. The dominance of computing technologies, driven particularly by the gaming industry

  16. The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality?

    Science.gov (United States)

    Bach, Benjamin; Sicat, Ronell; Beyer, Johanna; Cordeil, Maxime; Pfister, Hanspeter

    2018-01-01

    We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers; however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that match human perceptual and interaction capabilities better to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.

  17. The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality?

    KAUST Repository

    Bach, Benjamin

    2017-08-29

    We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers; however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that match human perceptual and interaction capabilities better to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.

  18. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    Science.gov (United States)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education, learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software, Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  19. COMPARISON OF USER PERFORMANCE WITH INTERACTIVE AND STATIC 3D VISUALIZATION – PILOT STUDY

    Directory of Open Access Journals (Sweden)

    L. Herman

    2016-06-01

    Interactive 3D visualizations of spatial data are currently available and popular through various applications such as Google Earth, ArcScene, etc. Several scientific studies have focused on user performance with 3D visualization, but static perspective views are used as stimuli in most of the studies. The main objective of this paper is to try to identify potential differences in user performance with static perspective views and interactive visualizations. This research is an exploratory study. An experiment was designed as a between-subject study, and a customized testing tool based on open web technologies was used for the experiment. The testing set consists of an initial questionnaire, a training task and four experimental tasks. Selection of the highest point and determination of visibility from the top of a mountain were used as the experimental tasks. The speed and accuracy of each participant's performance on each task were recorded. The movement and actions in the virtual environment were also recorded within the interactive variant. The results show that participants dealt with the tasks faster when using the static visualization. The average error rate was also higher in the static variant. The findings from this pilot study will be used for further testing, especially for formulating hypotheses and designing subsequent experiments.

  20. TOUCH INTERACTION WITH 3D GEOGRAPHICAL VISUALIZATION ON WEB: SELECTED TECHNOLOGICAL AND USER ISSUES

    Directory of Open Access Journals (Sweden)

    L. Herman

    2016-10-01

    The use of both 3D visualization and devices with touch displays is increasing. In this paper, we focus on Web technologies for 3D visualization of spatial data and interaction with it via touch-screen gestures. In the first stage, we compared the support of touch interaction in selected JavaScript libraries on different hardware (desktop PCs with touch screens, tablets, and smartphones) and software platforms. Afterwards, we ran a simple empirical test (within-subject design, 6 participants, 2 simple tasks, an Acer LCD touch monitor, and digital terrain models as stimuli) focusing on the ability of users to solve simple spatial tasks via touch screens. An in-house web testing tool was developed and used, based on JavaScript, PHP, X3DOM, and the Hammer.js library. The correctness of answers, speed of users' performances, used gestures, and a simple gesture metric were recorded and analysed. Preliminary results revealed that the pan gesture is most frequently used by test participants and it is also supported by the majority of 3D libraries. Possible gesture metrics and future developments, including interpersonal differences, are discussed in the conclusion.
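
    The gesture-handling side of such a test tool can be sketched with Hammer.js roughly as below. The camera-state object, the sensitivity constants, and the element id are assumptions; how the state is pushed into the 3D scene (X3DOM or otherwise) is deliberately left out.

```typescript
// Sketch: map common touch gestures (pan, pinch) to 3D view changes
// with Hammer.js. The view state is only accumulated and logged here.
import Hammer from "hammerjs";

// Camera-like state that a 3D scene would consume.
const view = { yaw: 0, pitch: 0, zoom: 1 };
let yawAtPanStart = 0, pitchAtPanStart = 0, zoomAtPinchStart = 1;

const element = document.getElementById("scene")!; // assumed element id
const hammer = new Hammer(element);

// Pinch recognition is off by default in Hammer.js and must be enabled.
hammer.get("pinch").set({ enable: true });
hammer.get("pan").set({ direction: Hammer.DIRECTION_ALL });

hammer.on("panstart", () => {
  yawAtPanStart = view.yaw;
  pitchAtPanStart = view.pitch;
});
hammer.on("panmove", (ev) => {
  // deltaX/deltaY are cumulative since panstart; 0.5 deg/px is an
  // assumed sensitivity.
  view.yaw = yawAtPanStart + ev.deltaX * 0.5;
  view.pitch = pitchAtPanStart + ev.deltaY * 0.5;
  console.log("rotate", view.yaw.toFixed(1), view.pitch.toFixed(1));
});

hammer.on("pinchstart", () => { zoomAtPinchStart = view.zoom; });
hammer.on("pinchmove", (ev) => {
  // ev.scale is relative to the start of the pinch gesture.
  view.zoom = Math.min(10, Math.max(0.1, zoomAtPinchStart * ev.scale));
  console.log("zoom", view.zoom.toFixed(2));
});
```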

  1. MEVA – An Interactive Visualization Application for Validation of Multifaceted Meteorological Data with Multiple 3D Devices.

    Directory of Open Access Journals (Sweden)

    Carolin Helbig

    To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. Analyzing, comparing, and visualizing the resulting simulations becomes more and more challenging due to the growing amounts and multifaceted character of the data. Various data sources, numerous variables and multiple simulations lead to a complex database. Although a variety of software suited for the visualization of meteorological data exists, none of it fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography and other static data), support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Instead of attempting to develop yet another new visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines different existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable the domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA, a multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of different data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine. Our customized application includes solutions for the analysis of multirun data, specifically with respect to data uncertainty and

  2. Towards a gestural 3D interaction for tangible and three-dimensional GIS visualizations

    Science.gov (United States)

    Partsinevelos, Panagiotis; Agadakos, Ioannis; Pattakos, Nikolas; Maragakis, Michail

    2014-05-01

    The last decade has been characterized by a significant increase in spatially dependent applications that require storage, visualization, analysis and exploration of geographic information. GIS analysis of spatiotemporal geographic data is operated by highly trained personnel under an abundance of software and tools, lacking interoperability and friendly user interaction. Towards this end, new forms of querying and interaction are emerging, including gestural interfaces. Three-dimensional GIS representations refer to either tangible surfaces or projected representations. Making a 3D tangible geographic representation touch-sensitive may be a convenient solution, but such an approach raises the cost significantly and complicates the hardware and processing required to combine touch-sensitive material (for pinpointing points) with deformable material (for displaying elevations). In this study, a novel interaction scheme for a three-dimensional visualization of GIS data is proposed. While gesture user interfaces are not yet fully acceptable due to inconsistencies and complexity, a non-tangible GIS system where 3D visualizations are projected calls for interactions that are based on three-dimensional, non-contact and gestural procedures. Towards these objectives, we use the Microsoft Kinect II system, which includes a time-of-flight camera, allowing for robust, real-time depth map generation, along with the capturing and translation of a variety of predefined gestures from different simultaneous users. By incorporating these features into our system architecture, we attempt to create a natural way for users to operate on GIS data. Apart from the conventional pan and zoom features, the key functions addressed for the 3-D user interface are the ability to pinpoint particular points, lines and areas of interest, such as destinations, waypoints, landmarks, closed areas, etc. The first results shown concern a projected GIS representation where the user selects points

  3. 3D IBFV : Hardware-Accelerated 3D Flow Visualization

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique for 2D flow visualization in two main directions. First, we decompose the 3D flow visualization problem in a

  4. Novel 3D/VR interactive environment for MD simulations, visualization and analysis.

    Science.gov (United States)

    Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P

    2014-12-18

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.

  5. Data visualization with D3 and AngularJS

    CERN Document Server

    Körner, Christoph

    2015-01-01

    If you are a web developer with experience in AngularJS and want to implement interactive visualizations using D3.js, this book is for you. Knowledge of SVG or D3.js will give you an edge to get the most out of this book.

  6. Three-dimensional visualization of ensemble weather forecasts – Part 1: The visualization tool Met.3D (version 1.0)

    Directory of Open Access Journals (Sweden)

    M. Rautenhaus

    2015-07-01

    We present "Met.3D", a new open-source tool for the interactive three-dimensional (3-D) visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is applicable to further forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output – 3-D visualization, ensemble visualization and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2-D visualization methods commonly used in meteorology to 3-D visualization by combining both visualization types in a 3-D context. We address the issue of spatial perception in the 3-D view and present approaches to using the ensemble to allow the user to assess forecast uncertainty. Interactivity is key to our approach. Met.3D uses modern graphics technology to achieve interactive visualization on standard consumer hardware. The tool supports forecast data from the European Centre for Medium Range Weather Forecasts (ECMWF) and can operate directly on ECMWF hybrid sigma-pressure level grids. We describe the employed visualization algorithms, and analyse the impact of the ECMWF grid topology on computing 3-D ensemble statistical quantities. Our techniques are demonstrated with examples from the T-NAWDEX-Falcon 2012 (THORPEX – North Atlantic Waveguide and Downstream Impact Experiment) campaign.
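
    The per-grid-point ensemble statistics that such a tool visualizes reduce, at each grid point, to a reduction over the ensemble members. A generic CPU sketch with flat arrays is shown below; it is not Met.3D's implementation, which computes these quantities on the GPU over the ECMWF hybrid sigma-pressure grid.

```typescript
// Sketch: ensemble mean and standard deviation per grid point.
// members[i] is a flat field (same grid) for ensemble member i.
function ensembleStats(
  members: Float32Array[]
): { mean: Float32Array; std: Float32Array } {
  const nMembers = members.length;
  const nPoints = members[0].length;
  const mean = new Float32Array(nPoints);
  const std = new Float32Array(nPoints);

  for (let p = 0; p < nPoints; p++) {
    let sum = 0;
    for (let m = 0; m < nMembers; m++) sum += members[m][p];
    const mu = sum / nMembers;

    let sq = 0;
    for (let m = 0; m < nMembers; m++) {
      const d = members[m][p] - mu;
      sq += d * d;
    }
    mean[p] = mu;
    std[p] = Math.sqrt(sq / nMembers); // population std; use n-1 for sample std
  }
  return { mean, std };
}

// Example with 3 members on a 4-point grid.
console.log(ensembleStats([
  Float32Array.from([1, 2, 3, 4]),
  Float32Array.from([2, 2, 2, 2]),
  Float32Array.from([3, 2, 1, 0]),
]));
```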

  7. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D.

    Science.gov (United States)

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H

    2017-08-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with the concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.
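
    A 3D extension of a 2D force-directed network layout of the kind mentioned here can be sketched as a single relaxation step over node positions. The force constants and graph structure below are illustrative only and do not reproduce iCAVE's algorithms.

```typescript
// Sketch: one step of a naive 3D force-directed layout
// (repulsion between all node pairs, spring attraction along edges).
// Constants are arbitrary demo values.

type Point3 = { x: number; y: number; z: number };
interface Graph { positions: Point3[]; edges: [number, number][]; }

const REPULSION = 0.01, SPRING = 0.05, STEP = 0.1;

function layoutStep(g: Graph): void {
  const n = g.positions.length;
  const force: Point3[] = g.positions.map(() => ({ x: 0, y: 0, z: 0 }));

  // Pairwise repulsion ~ 1 / r^2.
  for (let i = 0; i < n; i++) {
    for (let j = i + 1; j < n; j++) {
      const a = g.positions[i], b = g.positions[j];
      const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
      const r2 = dx * dx + dy * dy + dz * dz + 1e-6;
      const f = REPULSION / r2;
      force[i].x += f * dx; force[i].y += f * dy; force[i].z += f * dz;
      force[j].x -= f * dx; force[j].y -= f * dy; force[j].z -= f * dz;
    }
  }
  // Spring attraction along edges.
  for (const [i, j] of g.edges) {
    const a = g.positions[i], b = g.positions[j];
    const dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
    force[i].x += SPRING * dx; force[i].y += SPRING * dy; force[i].z += SPRING * dz;
    force[j].x -= SPRING * dx; force[j].y -= SPRING * dy; force[j].z -= SPRING * dz;
  }
  // Move nodes a small step along the net force.
  for (let i = 0; i < n; i++) {
    g.positions[i].x += STEP * force[i].x;
    g.positions[i].y += STEP * force[i].y;
    g.positions[i].z += STEP * force[i].z;
  }
}

// Repeating layoutStep over a few hundred iterations relaxes the graph.
```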

  8. An overview of 3D software visualization.

    Science.gov (United States)

    Teyseyre, Alfredo R; Campo, Marcelo R

    2009-01-01

    Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. During many years, visualization in 2D space has been actively studied, but in the last decade, researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects like: visual representations, interaction issues, evaluation methods and development tools. We also perform a survey of some representative tools to support different tasks, i.e., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude identifying future research directions.

  9. 3D IBFV : hardware-accelerated 3D flow visualization

    NARCIS (Netherlands)

    Telea, A.C.; Wijk, van J.J.

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique presented by van Wijk (2001) for 2D flow visualization in two main directions. First, we decompose the 3D

  10. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    Science.gov (United States)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria - a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described.

  11. 3-D vision and figure-ground separation by visual cortex.

    Science.gov (United States)

    Grossberg, S

    1994-01-01

    A neural network theory of three-dimensional (3-D) vision, called FACADE theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a boundary contour system (BCS) and a feature contour system (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded object parts are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, Da Vinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analyzed. The BCS and FCS subsystems model aspects of how the two parvocellular cortical processing streams that join the lateral geniculate nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-DEpth, or FACADE, within area V4. Area V4 is suggested to support figure-ground separation and to interact with

  12. Virtual inspector: a flexible visualizer for dense 3D scanned models

    OpenAIRE

    Callieri, Marco; Ponchio, Federico; Cignoni, Paolo; Scopigno, Roberto

    2008-01-01

    The rapid evolution of automatic shape acquisition technologies will make huge amounts of sampled 3D data available in the near future. The Cultural Heritage (CH) domain is one of the ideal fields of application of 3D scanned data, while some issues in the use of those data are: how to visualize at interactive rates and full quality on commodity computers; how to improve visualization ease of use; how to support the integrated visualization of a virtual 3D artwork and the multimedia data which t...

  13. 3D Planetary Data Visualization with CesiumJS

    Science.gov (United States)

    Larsen, K. W.; DeWolfe, A. W.; Nguyen, D.; Sanchez, F.; Lindholm, D. M.

    2017-12-01

    Complex spacecraft orbits and multi-instrument observations can be challenging to visualize with traditional 2D plots. To facilitate the exploration of planetary science data, we have developed a set of web-based interactive 3D visualizations for the MAVEN and MMS missions using the free CesiumJS library. The Mars Atmospheric and Volatile Evolution (MAVEN) mission has been collecting data at Mars since September 2014. The MAVEN3D project allows playback of one day's orbit at a time, displaying the spacecraft's position and orientation. Selected science data sets can be overplotted on the orbit track, including vectors for magnetic field and ion flow velocities. We also provide an overlay of the M-GITM model on the planet itself. MAVEN3D is available at the MAVEN public website at: https://lasp.colorado.edu/maven/sdc/public/pages/maven3d/. The Magnetospheric MultiScale Mission (MMS) consists of one hundred instruments on four spacecraft flying in formation around Earth, investigating the interactions between the solar wind and Earth's magnetic field. While the highest temporal resolution data isn't received and processed until later, continuous daily observations of the particle and field environments are made available as soon as they are received. Traditional 'quick-look' static plots have long been the first interaction with data from a mission of this nature. Our new 3D Quicklook viewer allows data from all four spacecraft to be viewed in an interactive web application as soon as the data is ingested into the MMS Science Data Center, less than one day after collection, in order to better help identify scientifically interesting data.
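
    A minimal CesiumJS sketch of the underlying idea, an entity whose position is sampled in time so that the viewer's clock animates the orbit, is shown below. The container id, the ephemeris samples, and the styling are made-up assumptions (a standard Cesium asset/token setup is also assumed); this is not the MAVEN3D or MMS Quicklook source.

```typescript
// Minimal CesiumJS sketch: animate a spacecraft position from a few
// time-tagged samples. The samples here are invented; a real viewer
// would load an ephemeris from the mission's science data center.
import * as Cesium from "cesium";

const viewer = new Cesium.Viewer("cesiumContainer"); // assumed container id

// Position property interpolated between time-tagged samples.
const position = new Cesium.SampledPositionProperty();
const samples = [
  { time: "2017-12-01T00:00:00Z", lon: 0, lat: 0, altKm: 400 },
  { time: "2017-12-01T00:30:00Z", lon: 45, lat: 30, altKm: 2000 },
  { time: "2017-12-01T01:00:00Z", lon: 90, lat: 0, altKm: 400 },
];
for (const s of samples) {
  position.addSample(
    Cesium.JulianDate.fromIso8601(s.time),
    Cesium.Cartesian3.fromDegrees(s.lon, s.lat, s.altKm * 1000)
  );
}

viewer.entities.add({
  name: "spacecraft (demo)",
  position,
  point: { pixelSize: 10, color: Cesium.Color.ORANGE },
  path: { leadTime: 0, trailTime: 3600 }, // show the last hour of orbit track
});

// Start the clock at the first sample and animate.
viewer.clock.startTime = Cesium.JulianDate.fromIso8601(samples[0].time);
viewer.clock.currentTime = viewer.clock.startTime.clone();
viewer.clock.shouldAnimate = true;
```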

  14. 3D Visualization Development of SIUE Campus

    Science.gov (United States)

    Nellutla, Shravya

    Geographic Information Systems (GIS) has progressed from the traditional map-making to the modern technology where the information can be created, edited, managed and analyzed. Like other models, maps are simplified representations of the real world. Hence, visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS and its extensions for 3D modeling and visualization and use them to depict a real world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing the spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, web server, web applications and client server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and the free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online will be used to demonstrate the way of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is demonstrated.

  15. Visualizing planetary data by using 3D engines

    Science.gov (United States)

    Elgner, S.; Adeli, S.; Gwinner, K.; Preusker, F.; Kersten, E.; Matz, K.-D.; Roatsch, T.; Jaumann, R.; Oberst, J.

    2017-09-01

    We examined 3D gaming engines for their usefulness in visualizing large planetary image data sets. These tools allow us to include recent developments in the field of computer graphics in our scientific visualization systems and present data products interactively and in higher quality than before. We started to set up the first applications which will make use of virtual reality (VR) equipment.

  16. Art-Science-Technology collaboration through immersive, interactive 3D visualization

    Science.gov (United States)

    Kellogg, L. H.

    2014-12-01

    At the W. M. Keck Center for Active Visualization in Earth Sciences (KeckCAVES), a group of geoscientists and computer scientists collaborate to develop and use interactive, immersive 3D visualization technology to view, manipulate, and interpret data for scientific research. The visual impact of immersion in a CAVE environment can be extremely compelling, and from the outset KeckCAVES scientists have collaborated with artists to bring this technology to creative works, including theater and dance performance, installations, and gamification. The first full-fledged collaboration designed and produced a performance called "Collapse: Suddenly falling down", choreographed by Della Davidson, which investigated the human and cultural response to natural and man-made disasters. Scientific data (lidar scans of disaster sites, such as landslides and mine collapses) were fully integrated into the performance by the Sideshow Physical Theatre. This presentation will discuss both the technological and creative characteristics of, and lessons learned from, the collaboration. Many parallels between the artistic and scientific process emerged. We observed that both artists and scientists set out to investigate a topic, solve a problem, or answer a question. Refining that question or problem is an essential part of both the creative and scientific workflow. Both artists and scientists seek understanding (in this case understanding of natural disasters). Differences also emerged; the group noted that the scientists sought clarity (including but not limited to quantitative measurements) as a means to understanding, while the artists embraced ambiguity, also as a means to understanding. Subsequent art-science-technology collaborations have responded to evolving technology for visualization and include gamification as a means to explore data, and use of augmented reality for informal learning in museum settings.

  17. Geometric characterization and interactive 3D visualization of historical and cultural heritage in the province of Cáceres (Spain)

    Directory of Open Access Journals (Sweden)

    José Manuel Naranjo

    2018-01-01

    The three-dimensional (3D) visualization of historical and cultural heritage in the province of Cáceres is essential for tourism promotion. This study uses panoramic spherical photography and terrestrial laser scanning (TLS) for the geometric characterization and cataloguing of sites of cultural interest, according to the principles of the Charter of Krakow. The benefits of this project include improved knowledge dissemination of the cultural heritage of Cáceres in a society that demands state-of-the-art tourist information. In this sense, this study has three specific aims: to develop a highly reliable methodology for modeling heritage based on a combination of non-destructive geomatics methods; to design and develop software modules for interactive 3D visualization of models; and to promote knowledge of the historical and cultural heritage of Cáceres by creating a hypermedia atlas accessible via the Internet. Through this free-of-charge hypermedia atlas, the tourist accesses 3D photographic and interactive scenes, videos created from 3D point clouds obtained from laser scanning and 3D models available for downloading in ASCII format, and thus acquires a greater knowledge of the touristic attractions in the province of Cáceres.

  18. 3D Scientific Visualization with Blender

    Science.gov (United States)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender (an open source visualization suite widely used in the entertainment and gaming industries) for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.
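
    As a flavor of the scripted workflow such a book covers, the following is a minimal sketch using Blender's bundled Python API (bpy); the data points are invented for illustration, and the snippet is meant to be run inside Blender (e.g. with `blender --python script.py`), not as a standalone program.

```python
# Minimal sketch: build a point-cloud style mesh from (x, y, z) data inside Blender.
# Assumes Blender's bundled `bpy` module; the sample coordinates are illustrative only.
import bpy

# Hypothetical data points, e.g. particle positions from a simulation
points = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.2), (2.0, 1.5, 0.8), (3.0, 1.0, 1.6)]

mesh = bpy.data.meshes.new("sim_points")        # create an empty mesh datablock
mesh.from_pydata(points, [], [])                # vertices only: no edges or faces
mesh.update()

obj = bpy.data.objects.new("sim_points", mesh)  # wrap the mesh in an object
bpy.context.collection.objects.link(obj)        # link it into the current scene

# A camera and light would then be positioned and a frame rendered, e.g.:
# bpy.ops.render.render(write_still=True)
```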

  19. Synchronized 2D/3D optical mapping for interactive exploration and real-time visualization of multi-function neurological images.

    Science.gov (United States)

    Zhang, Qi; Alexander, Murray; Ryner, Lawrence

    2013-01-01

    Efficient software with the ability to display multiple neurological image datasets simultaneously with full real-time interactivity is critical for brain disease diagnosis and image-guided planning. In this paper, we describe the creation and function of a new comprehensive software platform that integrates novel algorithms and functions for multiple medical image visualization, processing, and manipulation. We implement an opacity-adjustment algorithm to build 2D lookup tables for multiple slice image display and fusion, which achieves better visual results than those obtained using VTK-based methods. We also develop a new real-time 2D and 3D data synchronization scheme for multi-function MR volume and slice image optical mapping and rendering simultaneously, using the same adjustment operation. All these methodologies are integrated into our software framework to provide users with an efficient tool for flexibly, intuitively, and rapidly exploring and analyzing the functional and anatomical MR neurological data. Finally, we validate our new techniques and software platform with visual analysis and task-specific user studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
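
    A minimal sketch of the general idea of lookup-table-based slice fusion is given below; it is not the authors' implementation, and the lookup-table shape, blending operator, and sample data are assumptions made for illustration (NumPy only).

```python
# Hedged sketch (not the authors' code): fusing two co-registered MR slices with
# per-modality RGBA lookup tables and a simple "over" compositing step.
import numpy as np

def make_lut(color, n=256, max_alpha=1.0):
    """Build an n-entry RGBA lookup table ramping towards `color`."""
    ramp = np.linspace(0.0, 1.0, n)[:, None]
    rgb = ramp * np.asarray(color, dtype=float)[None, :]
    alpha = ramp * max_alpha
    return np.hstack([rgb, alpha])                 # shape (n, 4)

def apply_lut(slice_img, lut):
    idx = np.clip(slice_img, 0, lut.shape[0] - 1).astype(int)
    return lut[idx]                                # shape (H, W, 4)

def fuse(anat_rgba, func_rgba):
    """Composite a functional overlay onto an anatomical slice ("over" operator)."""
    a = func_rgba[..., 3:4]
    rgb = func_rgba[..., :3] * a + anat_rgba[..., :3] * (1.0 - a)
    return np.clip(rgb, 0.0, 1.0)

# Illustrative 8-bit slices (random data stands in for real MR images)
anatomical = np.random.randint(0, 256, (64, 64))
functional = np.random.randint(0, 256, (64, 64))
fused = fuse(apply_lut(anatomical, make_lut((1.0, 1.0, 1.0))),
             apply_lut(functional, make_lut((1.0, 0.2, 0.2), max_alpha=0.6)))
```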

  20. 3D Scientific Visualization with Blender

    Science.gov (United States)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  1. How 3D immersive visualization is changing medical diagnostics

    Science.gov (United States)

    Koning, Anton H. J.

    2011-03-01

    Originally the only way to look inside the human body without opening it up was by means of two dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three dimensional leads to ambiguities in interpretation and problems of occlusion. Three dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, the images are still being viewed on 2D screens. However, in this way valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics as well as for (bio)medical research.

  2. Amazing Space: Explanations, Investigations, & 3D Visualizations

    Science.gov (United States)

    Summers, Frank

    2011-05-01

    The Amazing Space website is STScI's online resource for communicating Hubble discoveries and other astronomical wonders to students and teachers everywhere. Our team has developed a broad suite of materials, readings, activities, and visuals that are not only engaging and exciting, but also standards-based and fully supported so that they can be easily used within state and national curricula. These products include stunning imagery, grade-level readings, trading card games, online interactives, and scientific visualizations. We are currently exploring the potential use of stereo 3D in astronomy education.

  3. Ray-based approach to integrated 3D visual communication

    Science.gov (United States)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in the next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. Then, the following discussions are concentrated on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, which is an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including some efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for the compression of a tremendous amount of data, and a light ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  4. Techniques and architectures for 3D interaction

    NARCIS (Netherlands)

    De Haan, G.

    2009-01-01

    Spatial scientific datasets are all around us, and 3D visualization is a powerful tool to explore details and structures within them. When dealing with complex spatial structures, interactive Virtual Reality (VR) systems can potentially improve exploration over desktop-based systems. However, from

  5. Experiencing 3D interactions in virtual reality and augmented reality

    NARCIS (Netherlands)

    Martens, J.B.; Qi, W.; Aliakseyeu, D.; Kok, A.J.F.; Liere, van R.; Hoven, van den E.; Ijsselsteijn, W.; Kortuem, G.; Laerhoven, van K.; McClelland, I.; Perik, E.; Romero, N.; Ruyter, de B.

    2004-01-01

    We demonstrate basic 2D and 3D interactions in both a Virtual Reality (VR) system, called the Personal Space Station, and an Augmented Reality (AR) system, called the Visual Interaction Platform. Since both platforms use identical (optical) tracking hardware and software, and can run identical

  6. A new multimodal interactive way of subjective scoring of 3D video quality of experience

    Science.gov (United States)

    Kim, Taewan; Lee, Kwanghyun; Lee, Sanghoon; Bovik, Alan C.

    2014-03-01

    People who watch today's 3D visual programs, such as 3D cinema, 3D TV and 3D games, experience wide and dynamically varying ranges of 3D visual immersion and 3D quality of experience (QoE). It is necessary to be able to deploy reliable methodologies that measure each viewer's subjective experience. We propose a new methodology that we call Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ is composed of a device interaction process between the 3D display and a separate device (PC, tablet, etc.) used as an assessment tool, and a human interaction process between the subject(s) and the device. The scoring process is multimodal, using aural and tactile cues to help engage and focus the subject(s) on their tasks. Moreover, the wireless device interaction process makes it possible for multiple subjects to assess 3D QoE simultaneously in a large space such as a movie theater, and at different visual angles and distances.

  7. 3DProIN: Protein-Protein Interaction Networks and Structure Visualization.

    Science.gov (United States)

    Li, Hui; Liu, Chunmei

    2014-06-14

    3DProIN is a computational tool to visualize protein-protein interaction networks in both two-dimensional (2D) and three-dimensional (3D) views. It models protein-protein interactions in a graph and explores the biologically relevant features of the tertiary structures of each protein in the network. Properties such as color, shape and name of each node (protein) of the network can be edited in either 2D or 3D views. 3DProIN is implemented using the Java 3D and C programming languages. An internet crawling technique is also used to dynamically parse protein interactions retrieved from the Protein Data Bank (PDB). It is a Java applet component that is embedded in the web page, and it can be used on different platforms including Linux, Mac and Windows using web browsers such as Firefox, Internet Explorer, Chrome and Safari. It was also converted into a Mac app and submitted to the App Store as a free app. Mac users can also download the app from our website. 3DProIN is available for academic research at http://bicompute.appspot.com.
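
    The underlying graph model can be illustrated with a short sketch; 3DProIN itself is a Java applet, so the snippet below is only a hypothetical Python/networkx rendition of the idea, with made-up PDB-style identifiers.

```python
# Hedged sketch of the graph model: proteins become nodes, observed interactions become
# edges, and per-node display properties (color, shape, label) are node attributes.
import networkx as nx

interactions = [("1A2B", "3C4D"), ("1A2B", "5E6F"), ("3C4D", "7G8H")]  # hypothetical IDs

g = nx.Graph()
g.add_edges_from(interactions)

# Attach editable display properties to a node, as the tool allows per protein
nx.set_node_attributes(g, {"1A2B": {"color": "red", "label": "protein A"}})

print(g.number_of_nodes(), g.number_of_edges())   # -> 4 3
```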

  8. Virtual reality hardware for use in interactive 3D data fusion and visualization

    Science.gov (United States)

    Gourley, Christopher S.; Abidi, Mongi A.

    1997-09-01

    Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide giving a variable field-of-view currently set at 160 degrees. A silicon graphics Indigo2 MaxImpact with the impact channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for use in displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package which has built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture mapped model of the environment. A dataglove, magnetic tracker, and spaceball are to be used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.

  9. Interactive 3D visualization of structural changes in the brain of a person with corticobasal syndrome

    Directory of Open Access Journals (Sweden)

    Claudia Hänel

    2014-05-01

    Full Text Available The visualization of the progression of brain tissue loss, which occurs in neurodegenerative diseases like corticobasal syndrome (CBS), is an important prerequisite to understand the course and the causes of this neurodegenerative disorder. Common workflows for visual analysis are often based on single 2D sections since in 3D visualizations more internally situated structures may be occluded by structures near the surface. The reduction of dimensions from 3D to 2D allows for a holistic view of internal and external structures, but results in a loss of spatial information. Here, we present an application with two 3D visualization designs to resolve these challenges. First, in addition to the volume changes, the semi-transparent anatomy is displayed with an anatomical section and cortical areas for spatial orientation. Second, the principle of importance-driven volume rendering is adapted to give an unrestricted line-of-sight to relevant structures by means of a frustum-like cutout. To strengthen the benefits of the 3D visualization, we decided to provide the application not only for standard desktop environments but also for immersive virtual environments with stereoscopic viewing. This improves the depth perception in general and in particular for the second design. Thus, the application presented in this work allows for an easily comprehensible visual analysis of the extent of brain degeneration and the corresponding affected regions.
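
    A minimal sketch of the frustum-like cutout idea is shown below; it is not the authors' code, and the cone geometry and NumPy-based implementation are assumptions made for illustration. Voxel opacities inside a cone between the viewpoint and the structure of interest are suppressed so that the structure remains visible through the surrounding anatomy.

```python
# Hedged sketch of an importance-driven cutout: make voxels inside a view cone transparent.
import numpy as np

def cone_cutout(opacity, view_point, target, radius_at_target):
    """Return a copy of `opacity` with voxels inside the view cone set to zero."""
    zi, yi, xi = np.indices(opacity.shape)
    voxels = np.stack([zi, yi, xi], axis=-1).astype(float)
    axis = np.asarray(target, float) - np.asarray(view_point, float)
    length = np.linalg.norm(axis)
    axis /= length
    rel = voxels - np.asarray(view_point, float)
    t = rel @ axis                                                # distance along the view axis
    radial = np.linalg.norm(rel - t[..., None] * axis, axis=-1)   # distance from the axis
    allowed = radius_at_target * np.clip(t / length, 0.0, 1.0)    # cone widens towards the target
    inside = (t > 0) & (t < length) & (radial < allowed)
    out = opacity.copy()
    out[inside] = 0.0
    return out

# Illustrative use on a random opacity volume (voxel indices as coordinates)
vol = np.random.rand(32, 32, 32)
cut = cone_cutout(vol, view_point=(0, 16, 16), target=(24, 16, 16), radius_at_target=6.0)
```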

  10. EarthScape, a Multi-Purpose Interactive 3D Globe Viewer for Hybrid Data Visualization and Analysis

    Science.gov (United States)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

    The hybrid visualization and interaction tool EarthScape is presented here. The software is able to simultaneously display LiDAR point clouds, draped videos with moving footprint, volume scientific data (using volume rendering, isosurface and slice plane), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch screen devices such as tablets. The software is based on open source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video streams. When all these components are included, EarthScape will be a multi-purpose platform that will provide at the same time data analysis, hybrid visualization and complex interactions. The software is available on demand for free at france@exelisvis.com.

  11. EARTHSCAPE, A MULTI-PURPOSE INTERACTIVE 3D GLOBE VIEWER FOR HYBRID DATA VISUALIZATION AND ANALYSIS

    Directory of Open Access Journals (Sweden)

    A. Sarthou

    2015-08-01

    Full Text Available The hybrid visualization and interaction tool EarthScape is presented here. The software is able to simultaneously display LiDAR point clouds, draped videos with moving footprint, volume scientific data (using volume rendering, isosurface and slice plane), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch screen devices such as tablets. The software is based on open source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video streams. When all these components are included, EarthScape will be a multi-purpose platform that will provide at the same time data analysis, hybrid visualization and complex interactions. The software is available on demand for free at france@exelisvis.com.

  12. VISUAL3D - An EIT network on visualization of geomodels

    Science.gov (United States)

    Bauer, Tobias

    2017-04-01

    When it comes to the interpretation of data and the understanding of deep geological structures and bodies at different scales, modelling tools and modelling experience are vital for deep exploration. Geomodelling provides a platform for integration of different types of data, including new kinds of information (e.g., new improved measuring methods). EIT Raw Materials, initiated by the EIT (European Institute of Innovation and Technology) and funded by the European Commission, is the largest and strongest consortium in the raw materials sector worldwide. The VISUAL3D network of infrastructure is an initiative by EIT Raw Materials and aims at bringing together partners with 3D-4D-visualisation infrastructure and 3D-4D-modelling experience. The recently formed network collaboration interlinks hardware, software and expert knowledge in modelling visualization and output. A special focus will be on linking research, education and industry, integrating multi-disciplinary data, and visualizing the data in three and four dimensions. By aiding network collaborations we aim at improving the combination of geomodels with differing file formats and data characteristics. This will create an increased competency in modelling visualization and the ability to interchange and communicate models more easily. By combining knowledge and experience in geomodelling with expertise in Virtual Reality visualization, partners of EIT Raw Materials as well as external parties will have the possibility to visualize, analyze and validate their geomodels in immersive VR-environments. The current network combines partners from universities, research institutes, geological surveys and industry with a strong background in geological 3D-modelling and 3D visualization and comprises: Luleå University of Technology, Geological Survey of Finland, Geological Survey of Denmark and Greenland, TUBA Freiberg, Uppsala University, Geological Survey of France, RWTH Aachen, DMT, KGHM Cuprum, Boliden, Montan

  13. vrmlgen: An R Package for 3D Data Visualization on the Web

    Directory of Open Access Journals (Sweden)

    Enrico Glaab

    2010-10-01

    Full Text Available The 3-dimensional representation and inspection of complex data is a frequently used strategy in many data analysis domains. Existing data mining software often lacks functionality that would enable users to explore 3D data interactively, especially if one wishes to make dynamic graphical representations directly viewable on the web. In this paper we present vrmlgen, a software package for the statistical programming language R to create 3D data visualizations in web formats like the Virtual Reality Modeling Language (VRML) and LiveGraphics3D. vrmlgen can be used to generate 3D charts and bar plots, scatter plots with density estimation contour surfaces, and visualizations of height maps, 3D object models and parametric functions. For greater flexibility, the user can also access low-level plotting methods through a unified interface and freely group different function calls together to create new higher-level plotting methods. Additionally, we present a web tool allowing users to visualize 3D data online and test some of vrmlgen's features without the need to install any software on their computer.
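
    vrmlgen itself is an R package; purely to illustrate the kind of VRML97 output such a tool emits for a 3D scatter plot, here is a hypothetical Python sketch. The scene contents and file name are made up, and any VRML-capable viewer should be able to load the result.

```python
# Illustrative sketch (not vrmlgen's code): write a VRML97 scene with one sphere per point.
def scatter_to_vrml(points, radius=0.05, color=(1.0, 0.0, 0.0)):
    """Emit VRML97 text with one small sphere per data point."""
    r, g, b = color
    parts = ["#VRML V2.0 utf8\n"]
    for x, y, z in points:
        parts.append(
            "Transform {\n"
            "  translation %g %g %g\n"
            "  children Shape {\n"
            "    appearance Appearance { material Material { diffuseColor %g %g %g } }\n"
            "    geometry Sphere { radius %g }\n"
            "  }\n"
            "}\n" % (x, y, z, r, g, b, radius)
        )
    return "".join(parts)

# Write a tiny example scene to disk
with open("scatter.wrl", "w") as f:
    f.write(scatter_to_vrml([(0, 0, 0), (0.5, 0.2, 0.8), (1.0, 0.9, 0.4)]))
```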

  14. [3D-visualization by MRI for surgical planning of Wilms tumors].

    Science.gov (United States)

    Schenk, J P; Waag, K-L; Graf, N; Wunsch, R; Jourdan, C; Behnisch, W; Tröger, J; Günther, P

    2004-10-01

    To improve surgical planning of kidney tumors in childhood (Wilms tumor, mesoblastic nephroma) after radiologic verification of the presumptive diagnosis with interactive colored 3D-animation in MRI. In 7 children (1 boy, 6 girls) with a mean age of 3 years (1 month to 11 years), the MRI database (DICOM) was processed with a raycasting-based 3D-volume-rendering software (VG Studio Max 1.1/Volume Graphics). The abdominal MRI-sequences (coronal STIR, coronal T1 TSE, transverse T1/T2 TSE, sagittal T2 TSE, transverse and coronal T1 TSE post contrast) were obtained with a 0.5T unit in 4 - 6 mm slices. Additionally, a phase-contrast-MR-angiography was applied to delineate the large abdominal and retroperitoneal vessels. A notebook was used to demonstrate the 3D-visualization for surgical planning before surgery and during the surgical procedure. In all 7 cases, the surgical approach was influenced by interactive 3D-animation and the information found useful for surgical planning. Above all, the 3D-visualization demonstrates the mass effect of the Wilms tumor and its anatomical relationship to the renal hilum and to the rest of the kidney as well as the topographic relationship of the tumor to the critical vessels. One rupture of the tumor capsule occurred as a surgical complication. For the surgeon, the transformation of the anatomical situation from MRI to the surgical situs has become much easier. For surgical planning of Wilms tumors, the 3D-visualization with 3D-animation of the situs helps to transfer important information from the pediatric radiologist to the pediatric surgeon and optimizes the surgical preparation. A reduction of complications is to be expected.

  15. 3D-visualization by MRI for surgical planning of Wilms tumors

    International Nuclear Information System (INIS)

    Schenk, J.P.; Wunsch, R.; Jourdan, C.; Troeger, J.; Waag, K.-L.; Guenther, P.; Graf, N.; Behnisch, W.

    2004-01-01

    Purpose: To improve surgical planning of kidney tumors in childhood (Wilms tumor, mesoblastic nephroma) after radiologic verification of the presumptive diagnosis with interactive colored 3D-animation in MRI. Materials and Methods: In 7 children (1 boy, 6 girls) with a mean age of 3 years (1 month to 11 years), the MRI database (DICOM) was processed with a raycasting-based 3D-volume-rendering software (VG Studio Max 1.1/Volume Graphics). The abdominal MRI-sequences (coronal STIR, coronal T1 TSE, transverse T1/T2 TSE, sagittal T2 TSE, transverse and coronal T1 TSE post contrast) were obtained with a 0.5T unit in 4-6 mm slices. Additionally, phase-contrast-MR-angiography was applied to delineate the large abdominal and retroperitoneal vessels. A notebook was used to demonstrate the 3D-visualization for surgical planning before surgery and during the surgical procedure. Results: In all 7 cases, the surgical approach was influenced by interactive 3D-animation and the information found useful for surgical planning. Above all, the 3D-visualization demonstrates the mass effect of the Wilms tumor and its anatomical relationship to the renal hilum and to the rest of the kidney as well as the topographic relationship of the tumor to the critical vessels. One rupture of the tumor capsule occurred as a surgical complication. For the surgeon, the transformation of the anatomical situation from MRI to the surgical situs has become much easier. Conclusion: For surgical planning of Wilms tumors, the 3D-visualization with 3D-animation of the situs helps to transfer important information from the pediatric radiologist to the pediatric surgeon and optimizes the surgical preparation. A reduction of complications is to be expected. (orig.)

  16. 3D Stereoscopic Visualization of Fenestrated Stent Grafts

    International Nuclear Information System (INIS)

    Sun Zhonghua; Squelch, Andrew; Bartlett, Andrew; Cunningham, Kylie; Lawrence-Brown, Michael

    2009-01-01

    The purpose of this study was to present a technique of stereoscopic visualization in the evaluation of patients with abdominal aortic aneurysm treated with fenestrated stent grafts compared with conventional 2D visualizations. Two patients with abdominal aortic aneurysm undergoing fenestrated stent grafting were selected for inclusion in the study. Conventional 2D views including axial, multiplanar reformation, maximum-intensity projection, and volume rendering and 3D stereoscopic visualizations were assessed by two experienced reviewers independently with regard to the treatment outcomes of fenestrated repair. Interobserver agreement was assessed with Kendall's W statistic. Multiplanar reformation and maximum-intensity projection visualizations were scored the highest in the evaluation of parameters related to the fenestrated stent grafting, while 3D stereoscopic visualization was scored as valuable in the evaluation of appearance (any distortions) of the fenestrated stent. Volume rendering was found to play a limited role in the follow-up of fenestrated stent grafting. 3D stereoscopic visualization adds additional information that assists endovascular specialists to identify any distortions of the fenestrated stents when compared with 2D visualizations.

  17. EarthServer - 3D Visualization on the Web

    Science.gov (United States)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open Geospatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers without requiring the user to install plugins or addons. Additionally, we are able to run the earth data visualization client on a wide range of different platforms with very different soft- and hardware requirements such as smart phones (e.g. iOS, Android), different desktop systems etc. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies. Underlying the EarthServer web client

  18. Advanced 3D Sensing and Visualization System for Unattended Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.J.; Little, C.Q.; Nelson, C.L.

    1999-01-01

    The purpose of this project was to create a reliable, 3D sensing and visualization system for unattended monitoring. The system provides benefits for several of Sandia's initiatives including nonproliferation, treaty verification, national security and critical infrastructure surety. The robust qualities of the system make it suitable for both interior and exterior monitoring applications. The 3D sensing system combines two existing sensor technologies in a new way to continuously maintain accurate 3D models of both static and dynamic components of monitored areas (e.g., portions of buildings, roads, and secured perimeters in addition to real-time estimates of the shape, location, and motion of humans and moving objects). A key strength of this system is the ability to monitor simultaneous activities on a continuous basis, such as several humans working independently within a controlled workspace, while also detecting unauthorized entry into the workspace. Data from the sensing system is used to identify activities or conditions that can signify potential surety (safety, security, and reliability) threats. The system could alert a security operator of potential threats or could be used to cue other detection, inspection or warning systems. An interactive, Web-based, 3D visualization capability was also developed using the Virtual Reality Modeling Language (VRML). The interface allows remote, interactive inspection of a monitored area (via the Internet or satellite links) using a 3D computer model of the area that is rendered from actual sensor data.

  19. 3D rendering and interactive visualization technology in large industry CT

    International Nuclear Information System (INIS)

    Xiao Yongshun; Zhang Li; Chen Zhiqiang; Kang Kejun

    2002-01-01

    The author introduces the applications of interactive 3D rendering technology in large ICT. It summarizes and comments on the isosurface rendering and direct volume rendering methods used in ICT. The author emphasizes the technical analysis of the 3D rendering process for ICT volume data sets, and summarizes the difficulties of the inspection subsystem design in large ICT.

  20. 3D rendering and interactive visualization technology in large industry CT

    International Nuclear Information System (INIS)

    Xiao Yongshun; Zhang Li; Chen Zhiqiang; Kang Kejun

    2001-01-01

    This paper introduces the applications of interactive 3D rendering technology in large ICT. It summarizes and comments on the isosurface rendering and direct volume rendering methods used in ICT. The paper emphasizes the technical analysis of the 3D rendering process for ICT volume data sets, and summarizes the difficulties of the inspection subsystem design in large ICT.

  1. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Gollaz Morales, Jose Alejandro

    2012-09-01

    Measuring visual closeness of 3-D models is an important issue for different problems and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing normal difference as a parameter in a weighted metric in Metro’s sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.
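
    A hedged sketch of a weighted closeness measure in the spirit described above is given below; it is not the authors' exact metric, weighting, or search structure. Each sample point and normal on one model is matched by brute force to its nearest sample on the other, and positional distance is blended with normal difference.

```python
# Hedged sketch: blend nearest-neighbour distance with normal difference, report mean and max.
import numpy as np

def weighted_closeness(pts_a, nrm_a, pts_b, nrm_b, w=0.7):
    """Return (mean, max) of w*position_distance + (1-w)*normal_difference."""
    scores = []
    for p, n in zip(pts_a, nrm_a):
        d = np.linalg.norm(pts_b - p, axis=1)                   # distances to every sample on B
        j = int(np.argmin(d))                                   # nearest-neighbour index
        normal_diff = 1.0 - abs(float(np.dot(n, nrm_b[j])))     # 0 when normals are parallel
        scores.append(w * d[j] + (1.0 - w) * normal_diff)
    scores = np.asarray(scores)
    return scores.mean(), scores.max()

# Illustrative random samples standing in for points and unit normals of two meshes
rng = np.random.default_rng(0)
pa, pb = rng.random((100, 3)), rng.random((100, 3))
na = rng.normal(size=(100, 3)); na /= np.linalg.norm(na, axis=1, keepdims=True)
nb = rng.normal(size=(100, 3)); nb /= np.linalg.norm(nb, axis=1, keepdims=True)
print(weighted_closeness(pa, na, pb, nb))
```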

  2. Visualization of documents and concepts in neuroinformatics with the 3D-SE viewer

    Directory of Open Access Journals (Sweden)

    Antoine P Naud

    2007-11-01

    Full Text Available A new interactive visualization tool is proposed for mining text data from various fields of neuroscience. Applications to several text datasets are presented to demonstrate the capability of the proposed interactive tool to visualize complex relationships between pairs of lexical entities (with some semantic contents, such as terms, keywords, posters, or papers' abstracts). Implemented as a Java applet, this tool is based on the spherical embedding (SE) algorithm, which was designed for the visualization of bipartite graphs. Items such as words and documents are linked on the basis of occurrence relationships, which can be represented in a bipartite graph. These items are visualized by embedding the vertices of the bipartite graph on spheres in a three-dimensional (3-D) space. The main advantage of the proposed visualization tool is that 3-D layouts can convey more information than planar or linear displays of items or graphs. Different kinds of information extracted from texts, such as keywords, indexing terms, or topics are visualized, allowing interactive browsing of various fields of research featured by keywords, topics, or research teams. A typical use of the 3D-SE viewer is quick browsing of topics displayed on a sphere; selecting one or several item(s) then displays links to related terms on another sphere representing, e.g., documents or abstracts, and provides direct online access to the document source in a database, such as the Visiome Platform or the SfN Annual Meeting. Developed as a Java applet, it operates as a tool on top of existing resources.

  3. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    Science.gov (United States)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The transition of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to make this transition technically possible, but the interaction of the third dimension with humans is not yet clear. Previously, it was found that any increased load on the visual system can create visual fatigue, as with prolonged TV watching, computer work or video gaming. But watching S3D can cause a different kind of visual fatigue, since all S3D technologies create the illusion of the third dimension based on the characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content. This work shows the difference in the accumulation of visual fatigue and its assessment for the two types of content. In order to perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and gave their answers during the subjective evaluation. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained using eye-tracking were investigated with regard to their relation to visual fatigue.

  4. The GPlates Portal: Cloud-based interactive 3D and 4D visualization of global geological and geophysical data and models in a browser

    Science.gov (United States)

    Müller, Dietmar; Qin, Xiaodong; Sandwell, David; Dutkiewicz, Adriana; Williams, Simon; Flament, Nicolas; Maus, Stefan; Seton, Maria

    2017-04-01

    stimulate teaching and learning and novel avenues of inquiry. This technology offers many future opportunities for providing additional functionality, especially on-the-fly big data analytics. Müller, R.D., Qin, X., Sandwell, D.T., Dutkiewicz, A., Williams, S.E., Flament, N., Maus, S. and Seton, M., 2016, The GPlates Portal: Cloud-based interactive 3D visualization of global geophysical and geological data in a web browser, PLoS ONE 11(3): e0150883. doi:10.1371/journal.pone.0150883

  5. 3D Immersive Visualization: An Educational Tool in Geosciences

    Science.gov (United States)

    Pérez-Campos, N.; Cárdenas-Soto, M.; Juárez-Casas, M.; Castrejón-Pineda, R.

    2007-05-01

    3D immersive visualization is an innovative tool currently used in various disciplines, such as medicine, architecture, engineering, video games, etc. Recently, the Universidad Nacional Autónoma de México (UNAM) set up a visualization theater (Ixtli) with leading-edge technology, for academic and research purposes that require immersive 3D tools for a better understanding of the concepts involved. The Division of Engineering in Earth Sciences of the School of Engineering, UNAM, is running a project focused on visualization of geoscience data. Its objective is to incorporate educational material into geoscience courses in order to support and improve the teaching-learning process, especially in topics that are well known to be difficult for students. As part of the project, professors and students are trained in visualization techniques, then their data are adapted and visualized in Ixtli as part of a class or a seminar, where all the attendees can interact, not only with each other but also with the object under study. As part of our results, we present specific examples used in basic geophysics courses, such as interpreted seismic cubes, seismic-wave propagation models, and structural models from bathymetric, gravimetric and seismological data; as well as examples from ongoing applied projects, such as a modeled SH upward wave, the occurrence of an earthquake cluster in 1999 in the Popocatepetl volcano, and a risk atlas from Delegación Alvaro Obregón in Mexico City. All these examples, plus those to come, constitute a library for students and professors willing to explore another dimension of the teaching-learning process. Furthermore, this experience can be enhanced by rich discussions and interactions via videoconferences with other universities and researchers.

  6. Exploratory Climate Data Visualization and Analysis Using DV3D and UVCDAT

    Science.gov (United States)

    Maxwell, Thomas

    2012-01-01

    Earth system scientists are being inundated by an explosion of data generated by ever-increasing resolution in both global models and remote sensors. Advanced tools for accessing, analyzing, and visualizing very large and complex climate data are required to maintain rapid progress in Earth system research. To meet this need, NASA, in collaboration with the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) consortium, is developing exploratory climate data analysis and visualization tools which provide data analysis capabilities for the Earth System Grid (ESG). This paper describes DV3D, a UV-CDAT package that enables exploratory analysis of climate simulation and observation datasets. DV3D provides user-friendly interfaces for visualization and analysis of climate data at a level appropriate for scientists. It features workflow interfaces, interactive 4D data exploration, hyperwall and stereo visualization, automated provenance generation, and parallel task execution. DV3D's integration with CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of high performance climate data analysis operations. DV3D expands the scientists' toolbox by incorporating a suite of rich new exploratory visualization and analysis methods for addressing the complexity of climate datasets.

  7. Realistic terrain visualization based on 3D virtual world technology

    Science.gov (United States)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2010-11-01

    The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for the construction of a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  8. Mvox: Interactive 2-4D medical image and graphics visualization software

    DEFF Research Database (Denmark)

    Bro-Nielsen, Morten

    1996-01-01

    Mvox is a new tool for visualization, segmentation and manipulation of a wide range of 2-4D grey level and colour images, and 3D surface graphics, which has been developed at the Department of Mathematical Modelling, Technical University of Denmark. The principal idea behind the software has been to provide a flexible tool that is able to handle all the kinds of data that are typically used in a research environment for medical imaging and visualization. At the same time the software should be easy to use and have a consistent interface providing locally only the functions relevant to the context. This has been achieved by using Unix standards such as X/Motif/OpenGL and conforming to modern standards of interactive windowed programs.

  9. 3D Visualization for Planetary Missions

    Science.gov (United States)

    DeWolfe, A. W.; Larsen, K.; Brain, D.

    2018-04-01

    We have developed visualization tools for viewing planetary orbiters and science data in 3D for both Earth and Mars, using the Cesium Javascript library, allowing viewers to visualize the position and orientation of spacecraft and science data.

  10. Web based Interactive 3D Learning Objects for Learning Management Systems

    Directory of Open Access Journals (Sweden)

    Stefan Hesse

    2012-02-01

    Full Text Available In this paper, we present an approach to create and integrate interactive 3D learning objects of high quality for higher education into a learning management system. The use of these resources allows the visualization of topics such as electro-technical and physical processes in the interior of complex devices. This paper addresses the challenge of combining rich interactivity and adequate realism with 3D exercise material for distance e-learning.

  11. Intuitive Visualization of Transient Flow: Towards a Full 3D Tool

    Science.gov (United States)

    Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph

    2015-04-01

    Currently STRING can generate animations of single 2D cuts, either planar or curved surfaces, through 3D simulation domains. To provide a general tool for experts that also enables direct exploration and analysis of large 3D flow fields, the software needs to be extended to intuitive as well as interactive visualizations of entire 3D flow domains. The current research concerning this project, which is funded by the Federal Ministry for Economic Affairs and Energy (Germany), is presented.

  12. Interactive 4D Visualization of Sediment Transport Models

    Science.gov (United States)

    Butkiewicz, T.; Englert, C. M.

    2013-12-01

    Coastal sediment transport models simulate the effects that waves, currents, and tides have on near-shore bathymetry and features such as beaches and barrier islands. Understanding these dynamic processes is integral to the study of coastline stability, beach erosion, and environmental contamination. Furthermore, analyzing the results of these simulations is a critical task in the design, placement, and engineering of coastal structures such as seawalls, jetties, support pilings for wind turbines, etc. Despite the importance of these models, there is a lack of available visualization software that allows users to explore and perform analysis on these datasets in an intuitive and effective manner. Existing visualization interfaces for these datasets often present only one variable at a time, using two dimensional plan or cross-sectional views. These visual restrictions limit the ability to observe the contents in the proper overall context, both in spatial and multi-dimensional terms. To improve upon these limitations, we use 3D rendering and particle system based illustration techniques to show water column/flow data across all depths simultaneously. We can also encode multiple variables across different perceptual channels (color, texture, motion, etc.) to enrich surfaces with multi-dimensional information. Interactive tools are provided, which can be used to explore the dataset and find regions-of-interest for further investigation. Our visualization package provides an intuitive 4D (3D, time-varying) visualization of sediment transport model output. In addition, we are also integrating real world observations with the simulated data to support analysis of the impact from major sediment transport events. In particular, we have been focusing on the effects of Superstorm Sandy on the Redbird Artificial Reef Site, offshore of Delaware Bay. Based on our pre- and post-storm high-resolution sonar surveys, there has been significant scour and bedform migration around the

  13. Illustrative visualization of 3D city models

    Science.gov (United States)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  14. Use of Colour and Interactive Animation in Learning 3D Vectors

    Science.gov (United States)

    Iskander, Wejdan; Curtis, Sharon

    2005-01-01

    This study investigated the effects of two computer-implemented techniques (colour and interactive animation) on learning 3D vectors. The participants were 43 female Saudi Arabian high school students. They were pre-tested on 3D vectors using a paper questionnaire that consisted of calculation and visualization types of questions. The students…

  15. Interactive initialization of 2D/3D rigid registration

    Energy Technology Data Exchange (ETDEWEB)

    Gong, Ren Hui; Güler, Özgür [The Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC 20010 (United States); Kürklüoglu, Mustafa [Department of Cardiac Surgery, Children's National Medical Center, Washington, DC 20010 (United States); Lovejoy, John [Department of Orthopaedic Surgery and Sports Medicine, Children's National Medical Center, Washington, DC 20010 (United States); Yaniv, Ziv, E-mail: ZYaniv@childrensnational.org [The Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC 20010 and Departments of Pediatrics and Radiology, George Washington University, Washington, DC 20037 (United States)

    2013-12-15

    Purpose: Registration is one of the key technical components in an image-guided navigation system. A large number of 2D/3D registration algorithms have been previously proposed, but have not been able to transition into clinical practice. The authors identify the primary reason for the lack of adoption with the prerequisite for a sufficiently accurate initial transformation, mean target registration error of about 10 mm or less. In this paper, the authors present two interactive initialization approaches that provide the desired accuracy for x-ray/MR and x-ray/CT registration in the operating room setting. Methods: The authors have developed two interactive registration methods based on visual alignment of a preoperative image, MR, or CT to intraoperative x-rays. In the first approach, the operator uses a gesture based interface to align a volume rendering of the preoperative image to multiple x-rays. The second approach uses a tracked tool available as part of a navigation system. Preoperatively, a virtual replica of the tool is positioned next to the anatomical structures visible in the volumetric data. Intraoperatively, the physical tool is positioned in a similar manner and subsequently used to align a volume rendering to the x-ray images using an augmented reality (AR) approach. Both methods were assessed using three publicly available reference data sets for 2D/3D registration evaluation. Results: In the authors' experiments, the authors show that for x-ray/MR registration, the gesture based method resulted in a mean target registration error (mTRE) of 9.3 ± 5.0 mm with an average interaction time of 146.3 ± 73.0 s, and the AR-based method had mTREs of 7.2 ± 3.2 mm with interaction times of 44 ± 32 s. For x-ray/CT registration, the gesture based method resulted in a mTRE of 7.4 ± 5.0 mm with an average interaction time of 132.1 ± 66.4 s, and the AR-based method had mTREs of 8.3 ± 5.0 mm with interaction times of 58 ± 52 s. Conclusions: Based on

  16. Interactive initialization of 2D/3D rigid registration

    International Nuclear Information System (INIS)

    Gong, Ren Hui; Güler, Özgür; Kürklüoglu, Mustafa; Lovejoy, John; Yaniv, Ziv

    2013-01-01

    Purpose: Registration is one of the key technical components in an image-guided navigation system. A large number of 2D/3D registration algorithms have been previously proposed, but have not been able to transition into clinical practice. The authors identify the primary reason for the lack of adoption with the prerequisite for a sufficiently accurate initial transformation, mean target registration error of about 10 mm or less. In this paper, the authors present two interactive initialization approaches that provide the desired accuracy for x-ray/MR and x-ray/CT registration in the operating room setting. Methods: The authors have developed two interactive registration methods based on visual alignment of a preoperative image, MR, or CT to intraoperative x-rays. In the first approach, the operator uses a gesture based interface to align a volume rendering of the preoperative image to multiple x-rays. The second approach uses a tracked tool available as part of a navigation system. Preoperatively, a virtual replica of the tool is positioned next to the anatomical structures visible in the volumetric data. Intraoperatively, the physical tool is positioned in a similar manner and subsequently used to align a volume rendering to the x-ray images using an augmented reality (AR) approach. Both methods were assessed using three publicly available reference data sets for 2D/3D registration evaluation. Results: In the authors' experiments, the authors show that for x-ray/MR registration, the gesture based method resulted in a mean target registration error (mTRE) of 9.3 ± 5.0 mm with an average interaction time of 146.3 ± 73.0 s, and the AR-based method had mTREs of 7.2 ± 3.2 mm with interaction times of 44 ± 32 s. For x-ray/CT registration, the gesture based method resulted in a mTRE of 7.4 ± 5.0 mm with an average interaction time of 132.1 ± 66.4 s, and the AR-based method had mTREs of 8.3 ± 5.0 mm with interaction times of 58 ± 52 s. Conclusions: Based on the
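
    For readers unfamiliar with the error measure quoted in both records, the following sketch shows how a mean target registration error (mTRE) is typically computed; the rigid transforms and target points are illustrative and are not taken from the study.

```python
# Hedged sketch: mTRE is the mean displacement of target points mapped by the estimated
# rigid transform versus the ground-truth rigid transform.
import numpy as np

def mtre(targets, r_est, t_est, r_gt, t_gt):
    """targets: (N, 3) points; r_*: 3x3 rotations; t_*: 3-vectors. Units follow the inputs."""
    mapped_est = targets @ r_est.T + t_est
    mapped_gt = targets @ r_gt.T + t_gt
    return float(np.linalg.norm(mapped_est - mapped_gt, axis=1).mean())

# Illustrative example: identity ground truth vs. an estimate translated by 2 mm along x
pts = np.random.rand(50, 3) * 100.0                 # targets spread over a 100 mm cube
identity = np.eye(3)
print(mtre(pts, identity, np.array([2.0, 0.0, 0.0]), identity, np.zeros(3)))  # -> 2.0
```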

  17. 2-D and 3-D Visualization of Many-to-Many Relationships

    Directory of Open Access Journals (Sweden)

    SeungJin Lim

    2017-08-01

    Full Text Available With the unprecedented wave of Big Data, the importance of information visualization is catching greater momentum. Understanding the underlying relationships between constituent objects is becoming a common task in every branch of science, and visualization of such relationships is a critical part of data analysis. While the techniques for the visualization of binary relationships are widespread, visualization techniques for ternary or higher relationships are lacking. In this paper, we propose a 3-D visualization primitive which is suitable for such relationships. The design goals of the primitive are discussed, and the effectiveness of the proposed visual primitive with respect to information communication is demonstrated in a 3-D visualization environment.

  18. The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data

    Science.gov (United States)

    Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris

    2010-05-01

    Data integration is increasingly important as we strive to combine data from disparate sources and assemble better models of the complex processes operating at the Earth's surface and within its interior. These data are often large, multi-dimensional, and subject to differing conventions for data structures, file formats, coordinate spaces, and units of measure. When visualized, these data require differing, and sometimes conflicting, conventions for visual representations, dimensionality, symbology, and interaction. All of this makes the visualization of integrated Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data integration and visualization suite of applications and libraries being developed by the GEON project at the University of California, San Diego, USA. Funded by the NSF, the project is leveraging virtual globe technology from NASA's WorldWind to create interactive 3D visualization tools that combine and layer data from a wide variety of sources to create a holistic view of features at, above, and beneath the Earth's surface. The OEF architecture is open, cross-platform, modular, and based upon Java. The OEF's modular approach to software architecture yields an array of mix-and-match software components for assembling custom applications. Available modules support file format handling, web service communications, data management, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats used in the field. Each one imports data into a general-purpose common data model supporting multidimensional regular and irregular grids, topography, feature geometry, and more. Data within these data models may be manipulated, combined, reprojected, and visualized. The OEF's visualization features support a variety of conventional and new visualization techniques for looking at topography, tomography, point clouds, imagery, maps, and feature geometry. 3D data such as

  19. Development of an environment for 3D visualization of riser dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bernardes Junior, Joao Luiz; Martins, Clovis de Arruda [Universidade de Sao Paulo (USP), SP (Brazil). Escola Politecnica]. E-mails: joao.bernardes@poli.usp.br; cmartins@usp.br

    2006-07-01

    This paper describes the merging of Virtual Reality and Scientific Visualization techniques in the development of Riser View, a multi-platform 3D environment for real-time, interactive visualization of riser dynamics. Its features, architecture, unusual collision detection algorithm, and how it was customized for the project are discussed. Using OpenGL through VRK, the software is able to make use of the resources available in most modern graphics acceleration hardware to improve performance. IUP/LED allows for a native look-and-feel on MS-Windows or Linux platforms. The paper discusses conflicts that arise between scientific visualization and aspects such as realism and immersion, and how the visualization is prioritized. (author)

  20. Data visualization with D3.js cookbook

    CERN Document Server

    Zhu, Nick Qi

    2013-01-01

    Packed with practical recipes, this is a step-by-step guide to learning data visualization with D3 with the help of detailed illustrations and code samples. If you are a developer familiar with HTML, CSS, and JavaScript, and you wish to get the most out of D3, then this book is for you. This book can also serve as a desktop quick-reference guide for experienced data visualization developers.

  1. 3D Web Visualization of Environmental Information - Integration of Heterogeneous Data Sources when Providing Navigation and Interaction

    Science.gov (United States)

    Herman, L.; Řezník, T.

    2015-08-01

    3D information is essential for a number of applications used daily in various domains such as crisis management, energy management, urban planning, and cultural heritage, as well as pollution and noise mapping, etc. This paper is devoted to the issue of 3D modelling from the levels of buildings to cities. The theoretical sections comprise an analysis of cartographic principles for the 3D visualization of spatial data as well as a review of technologies and data formats used in the visualization of 3D models. Emphasis was placed on the verification of available web technologies; for example, the X3DOM library was chosen for the implementation of a proof-of-concept web application. The created web application displays a 3D model of the city district of Nový Lískovec in Brno, the Czech Republic. The developed 3D visualization shows a terrain model, 3D buildings, noise pollution, and other related information. Attention was paid to the areas important for handling heterogeneous input data, the design of interactive functionality, and navigation assistants. The advantages, limitations, and future development of the proposed concept are discussed in the conclusions.

  2. Visualizing the process of interaction in a 3D environment

    Science.gov (United States)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Krishnan, Kajoli; Mullick, Rakesh

    2007-03-01

    As the imaging modalities used in medicine transition to increasingly three-dimensional data, the question of how best to interact with and analyze this data becomes ever more pressing. Immersive virtual reality systems seem to hold promise in tackling this, but how individuals learn and interact in these environments is not fully understood. Here we will attempt to show some methods by which user interaction in a virtual reality environment can be visualized and how this can allow us to gain greater insight into the process of interaction/learning in these systems. Also explored is the possibility of using this method to improve understanding and management of ergonomic issues within an interface.

  3. 2D/3D Visual Tracker for Rover Mast

    Science.gov (United States)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field-of-view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between the consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window and the best correlation results indicate the appropriate match. The program could be a core for building application programs for systems
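    As a rough illustration of the re-pointing step described in this record, the sketch below computes mast pan and tilt angles that re-center a known 3D target after a rover pose change. It is not the JPL tracker; the frame conventions (x forward, y left, z up) and the function name are assumptions made only for illustration.

```python
# Hedged sketch (not the original program) of the mast re-pointing step:
# given the target's 3D position in the previous rover frame and the pose
# change estimated by visual odometry, compute pan/tilt angles that point
# the mast cameras back at the target.
import numpy as np

def repoint_mast(target_xyz, d_rotation, d_translation):
    """target_xyz: 3-vector in the old rover frame (x forward, y left, z up).
    d_rotation: 3x3 rotation of the rover between frames.
    d_translation: 3-vector rover translation between frames."""
    # Express the (static) target in the new rover frame.
    p = d_rotation.T @ (np.asarray(target_xyz) - np.asarray(d_translation))
    x, y, z = p
    pan = np.arctan2(y, x)                # azimuth to the target
    tilt = np.arctan2(z, np.hypot(x, y))  # elevation to the target
    return np.degrees(pan), np.degrees(tilt)
```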

  4. Computerized diagnostic data analysis and 3-D visualization

    International Nuclear Information System (INIS)

    Schuhmann, D.; Haubner, M.; Krapichler, C.; Englmeier, K.H.; Seemann, M.; Schoepf, U.J.; Gebicke, K.; Reiser, M.

    1998-01-01

    Purpose: To survey methods for 3D data visualization and image analysis which can be used for computer based diagnostics. Material and methods: The methods available are explained in short terms and links to the literature are presented. Methods which allow basic manipulation of 3D data are windowing, rotation and clipping. More complex methods for visualization of 3D data are multiplanar reformation, volume projections (MIP, semi-transparent projections) and surface projections. Methods for image analysis comprise local data transformation (e.g. filtering) and definition and application of complex models (e.g. deformable models). Results: Volume projections produce an impression of the 3D data set without reducing the data amount. This supports the interpretation of the 3D data set and saves time in comparison to any investigation which requires examination of all slice images. More advanced techniques for visualization, e.g. surface projections and hybrid rendering visualize anatomical information to a very detailed extent, but both techniques require the segmentation of the structures of interest. Image analysis methods can be used to extract these structures (e.g. an organ) from the image data. Discussion: At the present time volume projections are robust and fast enough to be used routinely. Surface projections can be used to visualize complex and presegmented anatomical features. (orig.) [de
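    Among the volume projection methods surveyed in this record, the maximum intensity projection (MIP) is the simplest to state: each output pixel is the brightest voxel along the chosen viewing axis. The following is a minimal numpy sketch for illustration only, not code from the cited work.

```python
# Minimal illustration of a maximum intensity projection (MIP): each output
# pixel is the brightest voxel along the chosen viewing axis.
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """volume: 3-D array of intensities (e.g., slices x rows x cols)."""
    return volume.max(axis=axis)

# Example: a synthetic 64^3 volume with a bright blob in the middle.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
vol = np.exp(-(x**2 + y**2 + z**2) / 0.1)
axial_mip = mip(vol, axis=0)   # 64 x 64 projection image
```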

  5. 3D-visualization by MRI for surgical planning of Wilms tumors; 3-D-Visualisierung in der MRT zur Operationsplanung von Wilms-Tumoren

    Energy Technology Data Exchange (ETDEWEB)

    Schenk, J.P.; Wunsch, R.; Jourdan, C.; Troeger, J. [Universitaetsklinik Heidelberg (Germany). Abteilung Paediatrische Radiologie; Waag, K.-L.; Guenther, P. [Universitaetsklinik Heidelberg (Germany). Abteilung Kinderchirurgie; Graf, N. [Universitaetsklinik Homburg (Germany). Abteilung Paediatrische Haematologie und Onkologie; Behnisch, W. [Universitaetsklinik Heidelberg (Germany). Abteilung Paediatrische Haematologie und Onkologie

    2004-10-01

    Purpose: To improve surgical planning of kidney tumors in childhood (Wilms tumor, mesoblastic nephroma) after radiologic verification of the presumptive diagnosis with interactive colored 3D-animation in MRI. Materials and Methods: In 7 children (1 boy, 6 girls) with a mean age of 3 years (1 month to 11 years), the MRI database (DICOM) was processed with a raycasting-based 3D-volume-rendering software (VG Studio Max 1.1/Volume Graphics). The abdominal MRI-sequences (coronal STIR, coronal T1 TSE, transverse T1/T2 TSE, sagittal T2 TSE, transverse and coronal T1 TSE post contrast) were obtained with a 0.5T unit in 4-6 mm slices. Additionally, phase-contrast-MR-angiography was applied to delineate the large abdominal and retroperitoneal vessels. A notebook was used to demonstrate the 3D-visualization for surgical planning before surgery and during the surgical procedure. Results: In all 7 cases, the surgical approach was influenced by interactive 3D-animation and the information found useful for surgical planning. Above all, the 3D-visualization demonstrates the mass effect of the Wilms tumor and its anatomical relationship to the renal hilum and to the rest of the kidney as well as the topographic relationship of the tumor to the critical vessels. One rupture of the tumor capsule occurred as a surgical complication. For the surgeon, the transformation of the anatomical situation from MRI to the surgical situs has become much easier. Conclusion: For surgical planning of Wilms tumors, the 3D-visualization with 3D-animation of the situs helps to transfer important information from the pediatric radiologist to the pediatric surgeon and optimizes the surgical preparation. A reduction of complications is to be expected. (orig.)

  6. FROMS3D: New Software for 3-D Visualization of Fracture Network System in Fractured Rock Masses

    Science.gov (United States)

    Noh, Y. H.; Um, J. G.; Choi, Y.

    2014-12-01

    New software (FROMS3D) is presented for visualizing fracture network systems in 3-D. The software consists of several modules that handle management of borehole and field fracture data, fracture network modelling, visualization of fracture geometry in 3-D, and calculation and visualization of intersections and equivalent pipes between fractures. Intel Parallel Studio XE 2013, Visual Studio.NET 2010 and the open source VTK library were utilized as development tools to efficiently implement the modules and the graphical user interface of the software. The results suggest that the developed software is effective in visualizing 3-D fracture network systems and can provide useful information for tackling engineering geological problems related to the strength, deformability and hydraulic behavior of fractured rock masses.
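    FROMS3D is built on the open source VTK library mentioned above. The sketch below is not FROMS3D code; it only shows the kind of VTK pipeline (Python bindings) such a tool rests on, rendering a single disc-shaped fracture in an interactive window. The disc parameters are placeholders.

```python
# Minimal VTK pipeline (Python bindings) rendering one disc-shaped "fracture".
# Illustration of the VTK approach only, not FROMS3D itself.
import vtk

disk = vtk.vtkDiskSource()
disk.SetInnerRadius(0.0)
disk.SetOuterRadius(1.0)
disk.SetCircumferentialResolution(64)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(disk.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)
actor.RotateX(30)                      # tilt the fracture plane

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
renderer.SetBackground(0.1, 0.1, 0.15)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()                     # rotate/zoom with the mouse
```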

  7. Diffractive optical element for creating visual 3D images.

    Science.gov (United States)

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-02

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists in the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce 3D to 3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, safely protected against counterfeit, and designed to protect banknotes, documents, ID cards, etc.

  8. 3D Visualization of Trees Based on a Sphere-Board Model

    Directory of Open Access Journals (Sweden)

    Jiangfeng She

    2018-01-01

    Full Text Available To achieve smooth interaction with tree systems, the billboard and crossed-plane techniques of image-based rendering (IBR) have been used for tree visualization for many years. However, both the billboard-based tree model (BBTM) and the crossed-plane tree model (CPTM) have several notable limitations; for example, they give an impression of slicing when viewed from the top side, and they produce an unimpressive stereoscopic effect and insufficient lighting effects. In this study, a sphere-board-based tree model (SBTM) is proposed to eliminate these defects and to improve the final visual effects. Compared with the BBTM or CPTM, the proposed SBTM uses one or more sphere-like 3D geometric surfaces covered with a virtual texture, which can present more details about the foliage than can 2D planes, to represent the 3D outline of a tree crown. However, the profile edge presented by a continuous surface is overly smooth and regular, and when used to delineate the outline of a tree crown, it makes the tree appear very unrealistic. To overcome this shortcoming and achieve a more natural final visual effect of the tree model, an additional process is applied to the edge of the surface profile. In addition, the SBTM can better support lighting effects because of its cubic geometrical features. Interactive visualization effects for a single tree and a grove are presented in a case study of Sabina chinensis. The results show that the SBTM can achieve a better compromise between realism and performance than can the BBTM or CPTM.

  9. Map Learning with a 3D Printed Interactive Small-Scale Model: Improvement of Space and Text Memorization in Visually Impaired Students

    Directory of Open Access Journals (Sweden)

    Stéphanie Giraud

    2017-06-01

    Full Text Available Special education teachers for visually impaired students rely on tools such as raised-line maps (RLMs to teach spatial knowledge. These tools do not fully and adequately meet the needs of the teachers because they are long to produce, expensive, and not versatile enough to provide rapid updating of the content. For instance, the same RLM can barely be used during different lessons. In addition, those maps do not provide any interactivity, which reduces students’ autonomy. With the emergence of 3D printing and low-cost microcontrollers, it is now easy to design affordable interactive small-scale models (SSMs which are adapted to the needs of special education teachers. However, no study has previously been conducted to evaluate non-visual learning using interactive SSMs. In collaboration with a specialized teacher, we designed a SSM and a RLM representing the evolution of the geography and history of a fictitious kingdom. The two conditions were compared in a study with 24 visually impaired students regarding the memorization of the spatial layout and historical contents. The study showed that the interactive SSM improved both space and text memorization as compared to the RLM with braille legend. In conclusion, we argue that affordable home-made interactive small scale models can improve learning for visually impaired students. Interestingly, they are adaptable to any teaching situation including students with specific needs.

  10. Map Learning with a 3D Printed Interactive Small-Scale Model: Improvement of Space and Text Memorization in Visually Impaired Students.

    Science.gov (United States)

    Giraud, Stéphanie; Brock, Anke M; Macé, Marc J-M; Jouffrais, Christophe

    2017-01-01

    Special education teachers for visually impaired students rely on tools such as raised-line maps (RLMs) to teach spatial knowledge. These tools do not fully and adequately meet the needs of the teachers because they are long to produce, expensive, and not versatile enough to provide rapid updating of the content. For instance, the same RLM can barely be used during different lessons. In addition, those maps do not provide any interactivity, which reduces students' autonomy. With the emergence of 3D printing and low-cost microcontrollers, it is now easy to design affordable interactive small-scale models (SSMs) which are adapted to the needs of special education teachers. However, no study has previously been conducted to evaluate non-visual learning using interactive SSMs. In collaboration with a specialized teacher, we designed a SSM and a RLM representing the evolution of the geography and history of a fictitious kingdom. The two conditions were compared in a study with 24 visually impaired students regarding the memorization of the spatial layout and historical contents. The study showed that the interactive SSM improved both space and text memorization as compared to the RLM with braille legend. In conclusion, we argue that affordable home-made interactive small scale models can improve learning for visually impaired students. Interestingly, they are adaptable to any teaching situation including students with specific needs.

  11. An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training

    Science.gov (United States)

    Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel

    2017-03-01

    Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. In this study we describe the design, implementation, and evaluation of an interactive and stereoscopic

  12. 3D Visualization of Global Ocean Circulation

    Science.gov (United States)

    Nelson, V. G.; Sharma, R.; Zhang, E.; Schmittner, A.; Jenny, B.

    2015-12-01

    Advanced 3D visualization techniques are seldom used to explore the dynamic behavior of ocean circulation. Streamlines are an effective method for visualization of flow, and they can be designed to clearly show the dynamic behavior of a fluidic system. We employ vector field editing and extraction software to examine the topology of velocity vector fields generated by a 3D global circulation model coupled to a one-layer atmosphere model simulating preindustrial and last glacial maximum (LGM) conditions. This results in a streamline-based visualization along multiple density isosurfaces on which we visualize points of vertical exchange and the distribution of properties such as temperature and biogeochemical tracers. Previous work involving this model examined the change in the energetics driving overturning circulation and mixing between simulations of LGM and preindustrial conditions. This visualization elucidates the relationship between locations of vertical exchange and mixing, as well as demonstrates the effects of circulation and mixing on the distribution of tracers such as carbon isotopes.

  13. 3D visualization of integrated ground penetrating radar data and EM-61 data to determine buried objects and their characteristics

    International Nuclear Information System (INIS)

    Kadioğlu, Selma; Daniels, Jeffrey J

    2008-01-01

    This paper is based on an interactive three-dimensional (3D) visualization of two-dimensional (2D) ground penetrating radar (GPR) data and their integration with electromagnetic induction (EMI) data acquired with an EM-61 in a 3D volume. This method was used to locate and identify near-surface buried old industrial remains by shape, depth and type (metallic/non-metallic) in a brownfield site. The aim of the study is to illustrate a new approach to integrating two data sets in a 3D image for monitoring and interpretation of buried remains, and this paper methodically indicates the appropriate amplitude-colour and opacity function constructions needed to highlight buried remains in a transparent 3D view. The results showed that the interactive interpretation of the integrated 3D visualization was done using generated transparent 3D sub-blocks of the GPR data set that highlighted individual anomalies in their true locations. Colour assignments and the formulation of opacity for the data sets were the keys to the integrated 3D visualization and interpretation. This new visualization provided an optimum visual comparison and interpretation of the complex data sets to identify and differentiate the metallic and non-metallic remains and to verify the interpretation at exact locations with depth. Therefore, the integrated 3D visualization of the two data sets allowed more successful identification of the buried remains.
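    The paper's key point is that the amplitude-colour and opacity functions control which reflectors remain visible in the transparent 3D view. Below is a hedged sketch of such transfer functions using VTK's Python bindings; the breakpoints and colours are illustrative, not the authors' values.

```python
# Illustrative (not the authors') opacity and colour transfer functions for a
# volume of normalized GPR amplitudes: low amplitudes become fully transparent
# so only strong reflectors (candidate buried objects) remain visible.
import vtk

opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0.0, 0.00)   # background amplitude -> invisible
opacity.AddPoint(0.4, 0.00)
opacity.AddPoint(0.6, 0.15)   # weak reflectors -> faint
opacity.AddPoint(1.0, 0.90)   # strong reflectors -> nearly opaque

colour = vtk.vtkColorTransferFunction()
colour.AddRGBPoint(0.0, 0.0, 0.0, 0.3)   # low amplitude: dark blue
colour.AddRGBPoint(0.6, 1.0, 1.0, 0.0)   # mid amplitude: yellow
colour.AddRGBPoint(1.0, 1.0, 0.0, 0.0)   # high amplitude: red

volume_property = vtk.vtkVolumeProperty()
volume_property.SetScalarOpacity(opacity)
volume_property.SetColor(colour)
volume_property.ShadeOff()
volume_property.SetInterpolationTypeToLinear()
# The property would then be attached to a vtkVolume whose mapper holds the
# gridded GPR/EM-61 amplitudes (e.g., a vtkSmartVolumeMapper on vtkImageData).
```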

  14. A semi-interactive panorama based 3D reconstruction framework for indoor scenes

    NARCIS (Netherlands)

    Dang, T.K.; Worring, M.; Bui, T.D.

    2011-01-01

    We present a semi-interactive method for 3D reconstruction specialized for indoor scenes which combines computer vision techniques with efficient interaction. We use panoramas, popularly used for visualization of indoor scenes, but clearly not able to show depth, for their great field of view, as

  15. Exploring direct 3D interaction for full horizontal parallax light field displays using leap motion controller.

    Science.gov (United States)

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-04-14

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work.

  16. S3D depth-axis interaction for video games: performance and engagement

    Science.gov (United States)

    Zerebecki, Chris; Stanfield, Brodie; Hogue, Andrew; Kapralos, Bill; Collins, Karen

    2013-03-01

    Game developers have yet to embrace and explore the interactive stereoscopic 3D medium. They typically view stereoscopy as a separate mode that can be disabled throughout the design process and rarely develop game mechanics that take advantage of the stereoscopic 3D medium. What if we designed games to be S3D-specific and viewed traditional 2D viewing as a separate mode that can be disabled? The design choices made throughout such a process may yield interesting and compelling results. Furthermore, we believe that interaction within a stereoscopic 3D environment is more important than the visual experience itself and therefore, further exploration is needed to take into account the interactive affordances presented by stereoscopic 3D displays. Stereoscopic 3D displays allow players to perceive objects at different depths, thus we hypothesize that designing a core mechanic to take advantage of this viewing paradigm will create compelling content. In this paper, we describe Z-Fighter a game that we have developed that requires the player to interact directly along the stereoscopic 3D depth axis. We also outline an experiment conducted to investigate the performance, perception, and enjoyment of this game in stereoscopic 3D vs. traditional 2D viewing.

  17. 3-D visualization of ensemble weather forecasts - Part 2: Forecasting warm conveyor belt situations for aircraft-based field campaigns

    Science.gov (United States)

    Rautenhaus, M.; Grams, C. M.; Schäfler, A.; Westermann, R.

    2015-02-01

    We present the application of interactive 3-D visualization of ensemble weather predictions to forecasting warm conveyor belt situations during aircraft-based atmospheric research campaigns. Motivated by forecast requirements of the T-NAWDEX-Falcon 2012 campaign, a method to predict 3-D probabilities of the spatial occurrence of warm conveyor belts has been developed. Probabilities are derived from Lagrangian particle trajectories computed on the forecast wind fields of the ECMWF ensemble prediction system. Integration of the method into the 3-D ensemble visualization tool Met.3D, introduced in the first part of this study, facilitates interactive visualization of WCB features and derived probabilities in the context of the ECMWF ensemble forecast. We investigate the sensitivity of the method with respect to trajectory seeding and forecast wind field resolution. Furthermore, we propose a visual analysis method to quantitatively analyse the contribution of ensemble members to a probability region and, thus, to assist the forecaster in interpreting the obtained probabilities. A case study, revisiting a forecast case from T-NAWDEX-Falcon, illustrates the practical application of Met.3D and demonstrates the use of 3-D and uncertainty visualization for weather forecasting and for planning flight routes in the medium forecast range (three to seven days before take-off).
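    Conceptually, the derived probability at a grid cell is the fraction of ensemble members whose trajectories mark that cell as part of a warm conveyor belt, and the visual analysis step asks which members contribute to a selected probability region. The sketch below illustrates both ideas in numpy; it is not Met.3D code, and the array layout is an assumption.

```python
# Hedged sketch of turning per-member WCB occurrence flags into a 3-D
# probability field and querying which members contribute to a region.
import numpy as np

def wcb_probability(occurrence: np.ndarray) -> np.ndarray:
    """occurrence: boolean array of shape (n_members, nz, ny, nx), True where
    a member's trajectories mark the cell as belonging to a WCB."""
    return occurrence.mean(axis=0)        # values in [0, 1]

def contributing_members(occurrence: np.ndarray, region: np.ndarray) -> np.ndarray:
    """region: boolean mask of shape (nz, ny, nx) selecting a probability
    feature. Returns, per member, whether it contributes to that region."""
    hits = occurrence & region[None, ...]
    return hits.reshape(occurrence.shape[0], -1).any(axis=1)
```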

  18. 3d visualization of atomistic simulations on every desktop

    Science.gov (United States)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-08-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.

  19. 3d visualization of atomistic simulations on every desktop

    International Nuclear Information System (INIS)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-01-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given
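    The anaglyphic stereo described in these two records combines two slightly displaced renderings into a single image viewed through coloured glasses. A minimal numpy sketch for red-cyan glasses follows; it is an illustration only, not part of AViz.

```python
# Minimal red-cyan anaglyph: take the red channel from the left-eye image and
# the green/blue channels from the right-eye image (both HxWx3, uint8).
import numpy as np

def anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]     # red from the left view
    out[..., 1:] = right[..., 1:]  # green and blue from the right view
    return out
```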

  20. Dissemination of 3D Visualizations of Complex Function Data for the NIST Digital Library of Mathematical Functions

    Directory of Open Access Journals (Sweden)

    Qiming Wang

    2007-03-01

    Full Text Available The National Institute of Standards and Technology (NIST is developing a digital library to replace the widely used National Bureau of Standards Handbook of Mathematical Functions published in 1964. The NIST Digital Library of Mathematical Functions (DLMF will include formulas, methods of computation, references, and links to software for over forty functions. It will be published both in hardcopy format and as a website featuring interactive navigation, a mathematical equation search, 2D graphics, and dynamic interactive 3D visualizations. This paper focuses on the development and accessibility of the 3D visualizations for the digital library. We examine the techniques needed to produce accurate computations of function data, and through a careful evaluation of several prototypes, we address the advantages and disadvantages of using various technologies, including the Virtual Reality Modeling Language (VRML, interactive embedded graphics, and video capture to render and disseminate the visualizations in an environment accessible to users on various platforms.

  1. Research on fine management and visualization of ancient architectures based on integration of 2D and 3D GIS technology

    International Nuclear Information System (INIS)

    Jun, Yan; Shaohua, Wang; Jiayuan, Li; Qingwu, Hu

    2014-01-01

    Ancient architecture data are characterized by huge volume, fine granularity, and high precision. To manage such data, a 3D fine management and visualization method for ancient architectures based on the integration of 2D and 3D GIS is proposed. Firstly, after analysing the various data types and characteristics of digital ancient architectures, the main problems and key technologies in 2D and 3D data management are discussed. Secondly, a data storage and indexing model for digital ancient architecture based on 2D and 3D GIS integration is designed, achieving integrated storage and management of 2D and 3D data. Then, through a data retrieval method based on space-time indexing and a hierarchical object model of ancient architecture, 2D and 3D interaction with fine-grained 3D models of ancient architectures is achieved. Finally, taking the fine-grained database of the Liangyi Temple on Wudang Mountain as an example, a prototype for the fine management and visualization of integrated 2D and 3D digital ancient buildings was built. Integrated management and visual analysis of a 10 GB fine-grained model of the ancient architecture was realized, providing a new implementation method for the storage, browsing, reconstruction, and architectural art research of ancient architecture models.

  2. Interactive Collaborative Visualization in the Geosciences

    Science.gov (United States)

    Bollig, E. F.; Kadlec, B. J.; Erlebacher, G.; Yuen, D. A.; Palchuk, Y. M.

    2004-12-01

    Datasets in the earth sciences continue growing in size due to higher experimental resolving power, and numerical simulations at higher resolutions. Over the last several years, an increasing number of scientists have turned to visualization to represent their vast datasets in a meaningful fashion. In most cases, datasets are downloaded and then visualized on a local workstation with 2D or 3D software packages. However, it becomes inconvenient to download datasets of several gigabytes unless network bandwidth is sufficiently high (10 Gbits/sec). We are investigating the use of Virtual Network Computing (VNC) to provide interactive three-dimensional visualization services to the user community. Specialized software [1,2] enables OpenGL-based visualization software to capitalize on the hardware capabilities of modern graphics cards and transfer session information to clients through the VNC protocol. The virtue of this software is that it does not require any changes to visualization software. Session information is displayed within java applets, enabling the use of a wide variety of clients, including hand-held devices. The VNC protocol makes collaboration and interaction between multiple users possible. We demonstrate the collaborative VNC system with the commercial 3D visualization system Amira (http://www.tgs.com) and the open source VTK (http://www.vtk.org) over a 100 Mbit network. We also present ongoing work for integrating VNC within the Naradabrokering Grid environment. [1] Stegmaier, S. and Magallon, M. and T. Ertl, "A Generic Solution for Hardware-Accelerated Remote Visualization," Joint Eurographics -- IEEE TCVG Symposium on Visualization, 2002. [2] VirtualGL--3D without boundaries http://virtualgl.sourceforge.net/installation.htm

  3. Enhancing Nuclear Training with 3D Visualization

    International Nuclear Information System (INIS)

    Gagnon, V.; Gagnon, B.

    2016-01-01

    Full text: While the nuclear power industry is trying to reinforce its safety and regain public support post-Fukushima, it is also faced with a very real challenge that affects its day-to-day activities: a rapidly aging workforce. Statistics show that close to 40% of the current nuclear power industry workforce will retire within the next five years. For newcomer countries, the challenge is even greater, having to develop a completely new workforce. The workforce replacement effort introduces nuclear newcomers of a new generation with different backgrounds and affinities. Major lifestyle differences between the two generations of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. To enhance existing training programmes or to support the establishment of new training programmes for newcomer countries, L-3 MAPPS has devised learning tools focused on the “Practice-by-Doing” principle. L-3 MAPPS has coupled 3D computer visualization with high-fidelity simulation to bring real-time, simulation-driven animated components and systems allowing immersive and participatory, individual or classroom learning. (author)

  4. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    Directory of Open Access Journals (Sweden)

    Vamsi Kiran Adhikarla

    2015-04-01

    Full Text Available This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work.

  5. A Real-Time Magnetoencephalography Brain-Computer Interface Using Interactive 3D Visualization and the Hadoop Ecosystem

    Directory of Open Access Journals (Sweden)

    Wilbert A. McClay

    2015-09-01

    Full Text Available Globally, the fastest growing segment of Big Data is human biology-related data, and annual data creation is on the order of zettabytes. The implications span industries worldwide, and the treatment of brain-related illnesses and trauma could see the most significant and immediate effects. The next generation of health care IT and sensory devices is acquiring and storing massive amounts of patient-related data. An innovative Brain-Computer Interface (BCI) for interactive 3D visualization is presented, utilizing the Hadoop Ecosystem for data analysis and storage. The BCI is an implementation of Bayesian factor analysis algorithms that can distinguish distinct thought actions using magnetoencephalographic (MEG) brain signals. We have collected data on five subjects, yielding 90% positive performance in MEG mid- and post-movement activity. We describe a driver that substitutes the actions of the BCI for mouse button presses for real-time use in visual simulations. This process has been added to a flight visualization demonstration. By thinking left or right, the user experiences the aircraft turning in the chosen direction. The driver components of the BCI can be compiled into any software and substitute a user’s intent for specific keyboard strikes or mouse button presses. The BCI’s data analytics of a subject’s MEG brainwaves and flight visualization performance are stored and analyzed using the Hadoop Ecosystem as a quick-retrieval data warehouse.

  6. Gesture Interaction Browser-Based 3D Molecular Viewer.

    Science.gov (United States)

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2016-01-01

    The paper presents an open source system that allows the user to interact with a 3D molecular viewer using associated hand gestures for rotating, scaling and panning the rendered model. The novelty of this approach is that the entire application is browser-based and doesn't require installation of third party plug-ins or additional software components in order to visualize the supported chemical file formats. This kind of solution is suitable for instruction of users in less IT oriented environments, like medicine or chemistry. For rendering various molecular geometries our team used GLmol (a molecular viewer written in JavaScript). The interaction with the 3D models is made with Leap Motion controller that allows real-time tracking of the user's hand gestures. The first results confirmed that the resulting application leads to a better way of understanding various types of translational bioinformatics related problems in both biomedical research and education.

  7. A Novel and Freely Available Interactive 3d Model of the Internal Carotid Artery.

    Science.gov (United States)

    Valera-Melé, Marc; Puigdellívol-Sánchez, Anna; Mavar-Haramija, Marija; Juanes-Méndez, Juan A; San-Román, Luis; de Notaris, Matteo; Prats-Galino, Alberto

    2018-03-05

    We describe a new and freely available 3D interactive model of the intracranial internal carotid artery (ICA) and the skull base that also allows its main segment classifications to be displayed and compared. High-resolution 3D human angiography (isometric voxel size 0.36 mm) and Computed Tomography angiography images were exported to Virtual Reality Modeling Language (VRML) format for processing in a 3D software platform and embedding in a 3D Portable Document Format (PDF) document that can be freely downloaded at http://diposit.ub.edu/dspace/handle/2445/112442 and runs under Acrobat Reader on Mac and Windows computers and Windows 10 tablets. The 3D-PDF allows for visualisation and interaction through JavaScript-based functions (including zoom, rotation, selective visualization and transparency of structures, or a predefined sequence view of the main segment classifications if desired). The ICA and its main branches and loops, the Gasserian ganglion, the petrolingual ligament and the proximal and distal dural rings within the skull base environment (anterior and posterior clinoid processes, sella turcica, ethmoid and sphenoid bones, orbital fossae) may be visualized from different perspectives. This interactive 3D-PDF provides virtual views of the ICA and becomes an innovative tool to improve the understanding of the neuroanatomy of the ICA and surrounding structures.

  8. Visualization of the lower cranial nerves by 3D-FIESTA

    International Nuclear Information System (INIS)

    Okumura, Yusuke; Suzuki, Masayuki; Takemura, Akihiro; Tsujii, Hideo; Kawahara, Kazuhiro; Matsuura, Yukihiro; Takada, Tadanori

    2005-01-01

    MR cisternography has been introduced for use in neuroradiology. This method is capable of visualizing tiny structures such as blood vessels and cranial nerves in the cerebrospinal fluid (CSF) space because of its superior contrast resolution. The cranial nerves and small vessels are shown as structures of low intensity surrounded by marked hyperintensity of the CSF. In the present study, we evaluated visualization of the lower cranial nerves (glossopharyngeal, vagus, and accessory) by the three-dimensional fast imaging employing steady-state acquisition (3D-FIESTA) sequence and multiplanar reformation (MPR) technique. The subjects were 8 men and 3 women, ranging in age from 21 to 76 years (average, 54 years). We examined the visualization of a total of 66 nerves in 11 subjects by 3D-FIESTA. The results were classified into four categories ranging from good visualization to non-visualization. In all cases, all glossopharyngeal and vagus nerves were identified to some extent, while accessory nerves were visualized either partially or entirely in only 16 cases. The total visualization rate was about 91%. In conclusion, 3D-FIESTA may be a useful method for visualization of the lower cranial nerves. (author)

  9. A STUDY ON USING 3D VISUALIZATION AND SIMULATION PROGRAM (OPTITEX 3D) ON LEATHER APPAREL

    Directory of Open Access Journals (Sweden)

    Ork Nilay

    2016-05-01

    Full Text Available Leather is a luxury garment material. Design, material, labor, fitting, and time costs strongly affect the production cost of consumer leather goods. 3D visualization and simulation programs, which are becoming popular in the textile industry, can be used to save material, labor, and time in leather apparel. However, these programs see very limited use in the leather industry because leather material databases are not as well developed as those for textiles. In this research, the material properties of leather and textile fabric were first determined using both textile and leather physical test methods, then interpreted and entered into the program. Detailed measurements of an experimental human body were taken with a 3D body scanner, and an avatar was designed according to these measurements. A prototype dress was then made using a Computer Aided Design (CAD) program to design the patterns. After pattern making, the OptiTex 3D visualization and simulation program was used to visualize and simulate the dresses. Additionally, the leather and cotton fabric dresses were sewn in real life, and the virtual and real-life dresses were compared and discussed. 3D virtual prototyping shows promising potential for future manufacturing technologies by evaluating the fit of garments simply and quickly, filling the gap between 3D pattern design and manufacturing, and providing virtual demonstrations to customers.

  10. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    Science.gov (United States)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: The gaze remains focused onto the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
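    The core SIRDS idea, which this paper accelerates on the GPU via a vertex program and the texture unit, can be sketched on the CPU in a few lines: repeat a random pattern along each row and shrink the repeat distance where the depth map is nearer. The sketch below is a simplified illustration, not the hardware-accelerated algorithm, and the parameter values are placeholders.

```python
# CPU sketch of a single-image random-dot stereogram (SIRDS): repeat a random
# pattern across each row, with a depth-dependent repeat distance so that
# nearer surface points produce a smaller horizontal disparity.
import numpy as np

def sirds(depth: np.ndarray, pattern_width: int = 90, max_shift: int = 30,
          seed: int = 0) -> np.ndarray:
    """depth: 2-D array in [0, 1], 1 = nearest. Returns a uint8 grey image."""
    rng = np.random.default_rng(seed)
    h, w = depth.shape
    img = rng.integers(0, 256, size=(h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(pattern_width, w):
            shift = pattern_width - int(max_shift * depth[y, x])
            img[y, x] = img[y, x - shift]
    return img
```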

  11. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    Science.gov (United States)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.

  12. Visualization of RELAP5-3D best estimate code

    International Nuclear Information System (INIS)

    Mesina, G.L.

    2004-01-01

    The Idaho National Engineering Laboratory has developed a number of nuclear plant analysis codes such as RELAP5-3D, SCDAP/RELAP5-3D, and FLUENT/RELAP5-3D that have multi-dimensional modeling capability. The output of these codes is very difficult to analyze without the aid of visualization tools. The RELAP5-3D Graphical User Interface (RGUI) displays these calculations on plant images, functional diagrams, graphs, and by other means. These representations of the data enhance the analysts' ability to recognize plant behavior visually and reduce the difficulty of analyzing complex three-dimensional models. This paper describes the Graphical User Interface system for the RELAP5-3D suite of Best Estimate codes. The uses of the Graphical User Interface are illustrated. Examples of user problems solved by use of this interface are given. (author)

  13. Visualization of cranial nerves by MR cisternography using 3D FASE. Comparison with 2D FSE

    Energy Technology Data Exchange (ETDEWEB)

    Asakura, Hirofumi; Nakano, Satoru; Togami, Taro [Kagawa Medical School, Miki (Japan)] (and others)

    2001-03-01

    MR cisternography using 3D FASE was compared with that of 2D FSE in regard to visualization of normal cranial nerves. In a phantom study, contrast-to-noise ratio (C/N) of fine structures was better in 3D FASE images than in 2D FSE. In clinical cases, visualization of trigeminal nerve, abducent nerve, and facial/vestibulo-cochlear nerve were evaluated. Each cranial nerve was visualized better in 3D FASE images than in 2D FSE, with a significant difference (p<0.05). (author)

  14. Visualization of cranial nerves by MR cisternography using 3D FASE. Comparison with 2D FSE

    International Nuclear Information System (INIS)

    Asakura, Hirofumi; Nakano, Satoru; Togami, Taro

    2001-01-01

    MR cisternography using 3D FASE was compared with that of 2D FSE in regard to visualization of normal cranial nerves. In a phantom study, contrast-to-noise ratio (C/N) of fine structures was better in 3D FASE images than in 2D FSE. In clinical cases, visualization of trigeminal nerve, abducent nerve, and facial/vestibulo-cochlear nerve were evaluated. Each cranial nerve was visualized better in 3D FASE images than in 2D FSE, with a significant difference (p<0.05). (author)

  15. Interactive and Stereoscopic Hybrid 3D Viewer of Radar Data with Gesture Recognition

    Science.gov (United States)

    Goenetxea, Jon; Moreno, Aitor; Unzueta, Luis; Galdós, Andoni; Segura, Álvaro

    This work presents an interactive and stereoscopic 3D viewer of weather information coming from a Doppler radar. The hybrid system shows a GIS model of the regional zone where the radar is located and the corresponding reconstructed 3D volume weather data. To enhance the immersiveness of the navigation, stereoscopic visualization has been added to the viewer, using a polarized glasses based system. The user can interact with the 3D virtual world using a Nintendo Wiimote for navigating through it and a Nintendo Wii Nunchuk for giving commands by means of hand gestures. We also present a dynamic gesture recognition procedure that measures the temporal advance of the performed gesture postures. Experimental results show how dynamic gestures are effectively recognized so that a more natural interaction and immersive navigation in the virtual world is achieved.

  16. Impacts of a CAREER Award on Advancing 3D Visualization in Geology Education

    Science.gov (United States)

    Billen, M. I.

    2011-12-01

    CAREER awards provide a unique opportunity to develop educational activities as an integrated part of one's research activities. This CAREER award focused on developing interactive 3D visualization tools to aid geology students in improving their 3D visualization skills. Not only is this a key skill for field geologists who need to visualize unseen subsurface structures, but it is also an important aspect of geodynamic research into the processes, such as faulting and viscous flow, that occur during subduction. Working with an undergraduate student researcher and using the KeckCAVES developed volume visualization code 3DVisualizer, we have developed interactive visualization laboratory exercises (e.g., Discovering the Rule of Vs) and a suite of mini-exercises using illustrative 3D geologic structures (e.g., syncline, thrust fault) that students can explore (e.g., rotate, slice, cut-away) to understand how exposure of these structures at the surface can provide insight into the subsurface structure. These exercises have been integrated into the structural geology curriculum and made available on the web through the KeckCAVES Education website as both data-and-code downloads and pre-made movies. One of the main challenges of implementing research and education activities through the award is that progress must be made on both throughout the award period. Therefore, while our original intent was to use subduction model output as the structures in the educational models, delays in the research results required that we develop these models using other simpler input data sets. These delays occurred because one of the other goals of the CAREER grant is to allow the faculty to take their research in a new direction, which may certainly lead to transformative science, but can also lead to more false-starts as the challenges of doing the new science are overcome. However, having created the infrastructure for the educational components, use of the model results in future

  17. Virtual reality and 3D animation in forensic visualization.

    Science.gov (United States)

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal media to accurately visualize crime or accident scenes to the viewers and in the courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace the traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing, to investigate the state-of-the-art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level-of-detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.

  18. A Topological Framework for Interactive Queries on 3D Models in the Web

    Science.gov (United States)

    Figueiredo, Mauro; Rodrigues, José I.; Silvestre, Ivo; Veiga-Pires, Cristina

    2014-01-01

    Several technologies exist to create 3D content for the web. With X3D, WebGL, and X3DOM, it is possible to visualize and interact with 3D models in a web browser. Frequently, three-dimensional objects are stored using the X3D file format for the web. However, there is no explicit topological information, which makes it difficult to design fast algorithms for applications that require adjacency and incidence data. This paper presents a new open source toolkit TopTri (Topological model for Triangle meshes) for Web3D servers that builds the topological model for triangular meshes of manifold or nonmanifold models. Web3D client applications using this toolkit make queries to the web server to get adjacent and incidence information of vertices, edges, and faces. This paper shows the application of the topological information to get minimal local points and iso-lines in a 3D mesh in a web browser. As an application, we present also the interactive identification of stalactites in a cave chamber in a 3D web browser. Several tests show that even for large triangular meshes with millions of triangles, the adjacency and incidence information is returned in real time making the presented toolkit appropriate for interactive Web3D applications. PMID:24977236

  19. A Topological Framework for Interactive Queries on 3D Models in the Web

    Directory of Open Access Journals (Sweden)

    Mauro Figueiredo

    2014-01-01

    Full Text Available Several technologies exist to create 3D content for the web. With X3D, WebGL, and X3DOM, it is possible to visualize and interact with 3D models in a web browser. Frequently, three-dimensional objects are stored using the X3D file format for the web. However, there is no explicit topological information, which makes it difficult to design fast algorithms for applications that require adjacency and incidence data. This paper presents a new open source toolkit TopTri (Topological model for Triangle meshes) for Web3D servers that builds the topological model for triangular meshes of manifold or nonmanifold models. Web3D client applications using this toolkit make queries to the web server to get adjacent and incidence information of vertices, edges, and faces. This paper shows the application of the topological information to get minimal local points and iso-lines in a 3D mesh in a web browser. As an application, we present also the interactive identification of stalactites in a cave chamber in a 3D web browser. Several tests show that even for large triangular meshes with millions of triangles, the adjacency and incidence information is returned in real time making the presented toolkit appropriate for interactive Web3D applications.
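    The value of a toolkit like TopTri is that adjacency and incidence, which the X3D file itself does not carry, are precomputed so they can be queried quickly. The sketch below is not TopTri; it only illustrates one such structure, an edge-to-face incidence table for a triangle mesh, in plain Python.

```python
# Hedged sketch (not TopTri) of the incidence information an X3D file lacks:
# map each undirected edge of a triangle mesh to the faces incident on it,
# so edge-sharing neighbours of a face can be looked up quickly.
from collections import defaultdict

def edge_to_faces(faces):
    """faces: sequence of (i, j, k) vertex-index triples."""
    incidence = defaultdict(list)
    for f, (i, j, k) in enumerate(faces):
        for a, b in ((i, j), (j, k), (k, i)):
            incidence[frozenset((a, b))].append(f)
    return incidence

def face_neighbours(faces, incidence, f):
    """Faces sharing an edge with face f."""
    i, j, k = faces[f]
    out = set()
    for a, b in ((i, j), (j, k), (k, i)):
        out.update(incidence[frozenset((a, b))])
    out.discard(f)
    return out
```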

  20. Exploring interaction with 3D volumetric displays

    Science.gov (United States)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  1. Visualizing 3D data obtained from microscopy on the Internet.

    Science.gov (United States)

    Pittet, J J; Henn, C; Engel, A; Heymann, J B

    1999-01-01

    The Internet is a powerful communication medium increasingly exploited by business and science alike, especially in structural biology and bioinformatics. Real-world objects that were traditionally presented as static two-dimensional images on the limited medium of paper can now be shown interactively in three dimensions. Many facets of this new capability have already been developed, particularly in the form of VRML (virtual reality modeling language), but there is a need to extend this capability for visualizing scientific data. Here we introduce a real-time isosurfacing node for VRML, based on the marching cubes approach, allowing interactive isosurfacing. A second node performs three-dimensional (3D) texture-based volume rendering for a variety of representations. The use of computers in the microscopic and structural biosciences is extensive, and many scientific file formats exist. To overcome the problem of accessing such data from VRML and other tools, we implemented extensions to SGI's IFL (image format library). IFL is a file format abstraction layer defining communication between a program and a data file. These technologies are developed in support of the BioImage project, aiming to establish a database prototype for multidimensional microscopic data with the ability to view the data within a 3D interactive environment. Copyright 1999 Academic Press.
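
    Real-time isosurfacing of this kind is most commonly implemented with the marching cubes family of algorithms. A minimal stand-alone sketch, using scikit-image rather than the authors' VRML/IFL implementation, extracts a triangle mesh from a synthetic density volume:

    ```python
    import numpy as np
    from skimage import measure

    # Synthetic 3D density volume: a soft sphere sampled on a 64^3 grid.
    x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    volume = np.exp(-(x**2 + y**2 + z**2) * 8)

    # Extract the isosurface at a chosen density level as a triangle mesh.
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(verts.shape, faces.shape)  # (N, 3) vertices and (M, 3) triangles
    ```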

  2. Interactive Classification of Construction Materials: Feedback Driven Framework for Annotation and Analysis of 3d Point Clouds

    Science.gov (United States)

    Hess, M. R.; Petrovic, V.; Kuester, F.

    2017-08-01

    Digital documentation of cultural heritage structures is increasingly more common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector or local surface geometry. Multiple case studies are presented here to demonstrate the flexibility and utility of the presented point cloud visualization framework to achieve classification objectives.
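
    A user-defined classification function of the kind described, deciding per point on observed color and laser intensity, can be prototyped as a simple rule set. The thresholds and material labels below are purely illustrative and are not the values used in the study:

    ```python
    import numpy as np

    def classify_points(rgb, intensity):
        """Assign a coarse material label per point from color and intensity.

        rgb:       (N, 3) array of colors in [0, 1]
        intensity: (N,) array of normalized laser return intensity in [0, 1]
        Returns an (N,) array of string labels.
        """
        labels = np.full(rgb.shape[0], "unclassified", dtype=object)
        brightness = rgb.mean(axis=1)
        redness = rgb[:, 0] - rgb[:, 1:].mean(axis=1)

        labels[(brightness > 0.6) & (intensity > 0.5)] = "plaster"  # bright, reflective
        labels[(redness > 0.15) & (brightness < 0.6)] = "brick"     # reddish, darker
        labels[(brightness < 0.3) & (intensity < 0.3)] = "timber"   # dark, absorbing
        return labels

    # Tiny example with three points.
    rgb = np.array([[0.8, 0.8, 0.75], [0.55, 0.3, 0.25], [0.2, 0.18, 0.15]])
    inten = np.array([0.7, 0.4, 0.2])
    print(classify_points(rgb, inten))
    ```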

  3. SlicerAstro : A 3-D interactive visual analytics tool for HI data

    NARCIS (Netherlands)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Fillion-Robin, J. C.; Yu, L.

    SKA precursors are capable of detecting hundreds of galaxies in HI in a single 12 h pointing. In deeper surveys one will more easily probe faint HI structures, typically located in the vicinity of galaxies, such as tails, filaments, and extraplanar gas. The importance of interactive visualization in

  4. Wearable Gaze Trackers: Mapping Visual Attention in 3D

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Stets, Jonathan Dyssel; Suurmets, Seidi

    2017-01-01

    gaze trackers allows respondents to move freely in any real world 3D environment, removing the previous restrictions. In this paper we propose a novel approach for processing visual attention of respondents using mobile wearable gaze trackers in a 3D environment. The pipeline consists of 3 steps...

  5. Characteristics of visual fatigue under the effect of 3D animation.

    Science.gov (United States)

    Chang, Yu-Shuo; Hsueh, Ya-Hsin; Tung, Kwong-Chung; Jhou, Fong-Yi; Lin, David Pei-Cheng

    2015-01-01

    Visual fatigue is commonly encountered in modern life. Clinical visual fatigue characteristics caused by 2-D and 3-D animations may be different, but have not been characterized in detail. This study tried to distinguish the differential effects on visual fatigue caused by 2-D and 3-D animations. A total of 23 volunteers were subjected to accommodation and vergence assessments, followed by a 40-min video game program designed to aggravate their asthenopic symptoms. The volunteers were then assessed for accommodation and vergence parameters again, directed to watch a 5-min 3-D video program, and then assessed again for the same parameters. The results support that, in some specific aspects, 3-D animations caused visual fatigue characteristics similar to those caused by 2-D animations. Furthermore, 3-D animations may lead to greater exhaustion of both the ciliary and extra-ocular muscles, and these differential effects were more evident under high demands for near-vision work. The current results indicate that a suitable set of indexes may be adopted in the design of 3-D displays or equipment.

  6. 3D visualization and stereographic techniques for medical research and education.

    Science.gov (United States)

    Rydmark, M; Kling-Petersen, T; Pascher, R; Philip, F

    2001-01-01

    While computers have been able to work with true 3D models for a long time, the same does not apply to users in general. Over the years, a number of 3D visualization techniques have been developed to enable a scientist or a student to see not only a flat representation of an object, but also an approximation of its Z-axis. In addition to the traditional flat image representation of a 3D object, at least four established methodologies exist: Stereo pairs. Using image analysis tools or 3D software, a set of images can be made, each representing the left and the right eye view of an object. Placed next to each other and viewed through a separator, the three-dimensionality of an object can be perceived. While this is usually done on still images, tests at Mednet have shown this to work with interactively animated models as well. However, this technique requires some training and experience. Pseudo3D, such as VRML or QuickTime VR, where the interactive manipulation of a 3D model lets the user achieve a sense of the model's true proportions. While this technique works reasonably well, it is not a "true" stereographic visualization technique. Red/Green separation, i.e. "the traditional 3D image", where a red and a green representation of a model are superimposed at an angle corresponding to the viewing angle of the eyes; by using a similar set of eyeglasses, a person can create a mental 3D image. The end result does produce a sense of 3D but the effect is difficult to maintain. Alternating left/right eye systems. These systems (typified by the StereoGraphics CrystalEyes system) let the computer display a "left eye" image followed by a "right eye" image while simultaneously triggering the eyepiece to alternately make one eye "blind". When run at 60 Hz or higher, the brain will fuse the left/right images together and the user will effectively see a 3D object. Depending on configurations, the alternating systems run at between 50 and 60 Hz, thereby creating a

  7. Teaching seismic methods using interactive 3D Earth globe

    Science.gov (United States)

    Weeraratne, D. S.; Rogers, D. B.

    2011-12-01

    Instructional techniques for the study of seismology are greatly enhanced by three-dimensional (3D) visualization. Seismic rays that pass through the Earth's interior are typically viewed in 2D slices of the interior. Here we present the use of a 3D Earth globe manufactured by Real World Globes. This globe displays a glossy, dry-erase, high-resolution rendering of topography and bathymetry from the Smith and Sandwell data archives at its surface for interactive measurements and hands-on marking of many seismic observations such as earthquake locations, source-receiver distances, surface wave propagation, great circle paths, ocean circulation patterns, airplane trajectories, etc. A new interactive feature (designed collaboratively with geoscientists) allows cut-away and disassembly of sections of the exterior shell, revealing a full cross section depicting the Earth's interior layers displayed to scale with a dry-erase work board. The interior panel spins to any azimuth and provides a depth measurement scale to allow exact measurements and marking of earthquake depths, true seismic ray path propagation, ray path bottoming depths, shadow zones, and diffraction patterns. A demo of this globe and example activities will be presented.

  8. Memory and visual search in naturalistic 2D and 3D environments.

    Science.gov (United States)

    Li, Chia-Ling; Aivar, M Pilar; Kit, Dmitry M; Tong, Matthew H; Hayhoe, Mary M

    2016-06-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.

  9. Visualization research of 3D radiation field based on Delaunay triangulation

    International Nuclear Information System (INIS)

    Xie Changji; Chen Yuqing; Li Shiting; Zhu Bo

    2011-01-01

    Based on the characteristics of the three-dimensional partition, the triangulation of discrete data sets is improved by the method of point-by-point insertion. The discrete radiation-field data obtained by theoretical calculation or actual measurement are restructured, and a continuous distribution of the radiation field is obtained. Finally, the 3D virtual scene of the nuclear facility is built with VR simulation techniques, and the visualization of the 3D radiation field is achieved by visualization mapping techniques. It is shown that the method combining VR and Delaunay triangulation can greatly improve the quality and efficiency of 3D radiation field visualization. (authors)
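
    A minimal sketch of the same pipeline, triangulating scattered radiation-field samples and interpolating a continuous field from them, can be written with SciPy; this is only an illustration of the approach, not the authors' point-by-point insertion code:

    ```python
    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.interpolate import LinearNDInterpolator

    # Scattered dose-rate samples (x, y, z in metres, dose in arbitrary units).
    rng = np.random.default_rng(0)
    points = rng.uniform(0, 10, size=(200, 3))
    dose = np.exp(-np.linalg.norm(points - np.array([5.0, 5.0, 1.0]), axis=1))

    # Triangulate the discrete samples and build a continuous interpolant.
    tri = Delaunay(points)                   # tetrahedralization of the samples
    field = LinearNDInterpolator(tri, dose)  # piecewise-linear dose field

    print(field([[5.0, 5.0, 2.0]]))          # dose estimate at a query position
    ```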

  10. Creating 3D visualizations of MRI data: A brief guide

    Science.gov (United States)

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), the production of several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though 3D ‘glass brain’ renderings can sometimes be difficult to interpret, they are useful in showing a more overall representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study’s findings. PMID:26594340
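
    One freely available route to such a 3D 'glass brain' figure in Python is nilearn's projection plot. This is an example of the general idea rather than the specific pipeline described in the paper, and the statistical-map file name below is hypothetical:

    ```python
    from nilearn import plotting

    # Path to a statistical map in NIfTI format (hypothetical file name).
    stat_map = "group_zstat.nii.gz"

    # A transparent "glass brain" projection showing cortical and subcortical
    # clusters at once; the threshold hides sub-threshold voxels.
    display = plotting.plot_glass_brain(
        stat_map,
        threshold=3.1,        # show |z| > 3.1
        display_mode="lyrz",  # left/right sagittal, coronal, and axial views
        colorbar=True,
    )
    display.savefig("glass_brain.png")
    plotting.show()
    ```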

  11. Discovering new methods of data fusion, visualization, and analysis in 3D immersive environments for hyperspectral and laser altimetry data

    Science.gov (United States)

    Moore, C. A.; Gertman, V.; Olsoy, P.; Mitchell, J.; Glenn, N. F.; Joshi, A.; Norpchen, D.; Shrestha, R.; Pernice, M.; Spaete, L.; Grover, S.; Whiting, E.; Lee, R.

    2011-12-01

    Immersive virtual reality environments such as the IQ-Station or CAVE (Cave Automated Virtual Environment) offer new and exciting ways to visualize and explore scientific data and are powerful research and educational tools. Combining remote sensing data from a range of sensor platforms in immersive 3D environments can enhance the spectral, textural, spatial, and temporal attributes of the data, which enables scientists to interact with and analyze the data in ways never before possible. Visualization and analysis of large remote sensing datasets in immersive environments requires software customization for integrating LiDAR point cloud data with hyperspectral raster imagery, the generation of quantitative tools for multidimensional analysis, and the development of methods to capture 3D visualizations for stereographic playback. This study uses hyperspectral and LiDAR data acquired over the China Hat geologic study area near Soda Springs, Idaho, USA. The data are fused into a 3D image cube for interactive data exploration and several methods of recording and playback are investigated that include: 1) creating and implementing a Virtual Reality User Interface (VRUI) patch configuration file to enable recording and playback of VRUI interactive sessions within the CAVE and 2) using the LiDAR and hyperspectral remote sensing data and GIS data to create an ArcScene 3D animated flyover, where left- and right-eye visuals are captured from two independent monitors for playback in a stereoscopic player. These visualizations can be used as outreach tools to demonstrate how integrated data and geotechnology techniques can help scientists see, explore, and more adequately comprehend scientific phenomena, both real and abstract.

  12. KENO3D visualization tool for KENO V.a geometry models

    International Nuclear Information System (INIS)

    Bowman, S.M.; Horwedel, J.E.

    1999-01-01

    The standardized computer analyses for licensing evaluations (SCALE) computer software system developed at Oak Ridge National Laboratory (ORNL) is widely used and accepted around the world for criticality safety analyses. SCALE includes the well-known KENO V.a three-dimensional Monte Carlo criticality computer code. Criticality safety analyses often require detailed modeling of complex geometries. Checking the accuracy of these models can be enhanced by effective visualization tools. To address this need, ORNL has recently developed a powerful state-of-the-art visualization tool called KENO3D that enables KENO V.a users to interactively display their three-dimensional geometry models. The interactive options include the following: (1) displaying shaded or wireframe images; (2) showing standard views, such as top view, side view, front view, and isometric three-dimensional view; (3) rotating the model; (4) zooming in on selected locations; (5) selecting parts of the model to display; (6) editing colors and displaying legends; (7) displaying properties of any unit in the model; (8) creating cutaway views; (9) removing units from the model; and (10) printing the image or saving it to common graphics formats.

  13. 3D Geo-Structures Visualization Education Project (3dgeostructuresvis.ucdavis.edu)

    Science.gov (United States)

    Billen, M. I.

    2014-12-01

    Students of field-based geology must master a suite of challenging skills, from recognizing rocks, to measuring orientations of features in the field, to finding oneself (and the outcrop) on a map and placing structural information on maps. Students must then synthesize this information to derive meaning from the observations and ultimately to determine the three-dimensional (3D) shape of the deformed structures and their kinematic history. Synthesizing this kind of information requires sophisticated visualization skills in order to extrapolate observations into the subsurface or missing (eroded) material. The good news is that students can learn 3D visualization skills through practice, and virtual tools can help provide some of that practice. Here I present a suite of learning modules focused on developing students' ability to imagine (visualize) complex 3D structures and their exposure through digital topographic surfaces. Using the software 3DVisualizer, developed by KeckCAVES (keckcaves.org), we have developed visualizations of common geologic structures (e.g., syncline, dipping fold) in which the rock is represented by originally flat-lying layers of sediment, each with a different color, which have been subsequently deformed. The exercises build up in complexity, first focusing on understanding the structure in 3D (penetrative understanding), and then moving to the exposure of the structure at a topographic surface. Individual layers can be rendered as a transparent feature to explore how the layer extends above and below the topographic surface (e.g., to follow an eroded fold limb across a valley). The exercises are provided using either movies of the visualization (which can also be used for examples during lectures), or the data and software can be downloaded to allow for more self-driven exploration and learning. These virtual field models and exercises can be used as "practice runs" before going into the field, as make-up assignments, as a field

  14. Scientific Visualization of Radio Astronomy Data using Gesture Interaction

    Science.gov (United States)

    Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.

    2015-09-01

    MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.

  15. 3D Visualization Types in Multimedia Applications for Science Learning: A Case Study for 8th Grade Students in Greece

    Science.gov (United States)

    Korakakis, G.; Pavlatou, E. A.; Palyvos, J. A.; Spyrellis, N.

    2009-01-01

    This research aims to determine whether the use of specific types of visualization (3D illustration, 3D animation, and interactive 3D animation) combined with narration and text, contributes to the learning process of 13- and 14- years-old students in science courses. The study was carried out with 212 8th grade students in Greece. This…

  16. Highly Realistic 3D Presentation Agents with Visual Attention Capability

    NARCIS (Netherlands)

    Hoekstra, A; Prendinger, H.; Bee, N.; Heylen, Dirk K.J.; Ishizuka, M.

    2007-01-01

    This research proposes 3D graphical agents in the role of virtual presenters with a new type of functionality – the capability to process and respond to visual attention of users communicated by their eye movements. Eye gaze is an excellent clue to users’ attention, visual interest, and visual

  17. On 3D Geo-visualization of a Mine Surface Plant and Mine Roadway

    Institute of Scientific and Technical Information of China (English)

    WANG Yunjia; FU Yongming; FU Erjiang

    2007-01-01

    Constructing the 3D virtual scene of a coal mine is an objective requirement for modernizing and processing information on coal mining production. It is also a key technology for establishing a "digital mine". By exploring current worldwide research, software and hardware tools and application demands, combined with the case study site (the Dazhuang mine of the Pingdingshan coal group), an approach for 3D geo-visualization of a mine surface plant and mine roadway is discussed in depth. In this study, a rapid modeling method for large-scale virtual scenes based on Arc/Info and SiteBuilder3D is studied, and automatic generation of a 3D scene from a 2D scene is realized. Such an automatic method for converting mine roadway systems from 2D to 3D is realized for the Dazhuang mine. Some relevant application questions are studied, including attribute queries, coordinate queries, distance measurement, collision detection, and the dynamic interaction between 2D and 3D virtual scenes within the virtual scene of a mine surface plant and mine roadway. A prototype system is designed and developed.

  18. Integrating 3D Visualization and GIS in Planning Education

    Science.gov (United States)

    Yin, Li

    2010-01-01

    Most GIS-related planning practices and education are currently limited to two-dimensional mapping and analysis although 3D GIS is a powerful tool to study the complex urban environment in its full spatial extent. This paper reviews current GIS and 3D visualization uses and development in planning practice and education. Current literature…

  19. DNA Encoding Training Using 3D Gesture Interaction.

    Science.gov (United States)

    Nicola, Stelian; Handrea, Flavia-Laura; Crişan-Vida, Mihaela; Stoicu-Tivadar, Lăcrămioara

    2017-01-01

    The work described in this paper summarizes the development process and presents the results of a human genetics training application, aimed mainly at medical and bioinformatics students, for studying the 20 amino acids encoded by combinations of 3 DNA nucleotides. Existing applications that use human gestures recognized by the Leap Motion sensor are employed for controlling molecules, for learning from the Mendeleev table, or for visualizing animated reactions of specific molecules with water. The novelty of the current application consists in using the Leap Motion sensor with newly created gestures for application control, and in a tag-based algorithm for each amino acid that depends on the type and position in 3D virtual space of the 4 DNA nucleotides. The team proposes a 3D application based on the Unity editor and the Leap Motion sensor in which the user is free to form different combinations of the 20 amino acids. The results confirm that this new way of studying medicine/biochemistry, using the Leap Motion sensor to handle amino acids, is suitable for students. The application is original and interactive, and users can create their own amino acid structures in a 3D-like environment, which they could not do with traditional pen and paper.

  20. Research on steady-state visual evoked potentials in 3D displays

    Science.gov (United States)

    Chien, Yu-Yi; Lee, Chia-Ying; Lin, Fang-Cheng; Huang, Yi-Pai; Ko, Li-Wei; Shieh, Han-Ping D.

    2015-05-01

    Brain-computer interfaces (BCIs) are intuitive systems for users to communicate with external electronic devices. Steady state visual evoked potential (SSVEP) is one of the common inputs for BCI systems due to its easy detection and high information transfer rates. An advanced interactive platform integrated with liquid crystal displays is leading a trend to provide an alternative option not only for the handicapped but also for the public to make our lives more convenient. Many SSVEP-based BCI systems have been studied in a 2D environment; however, there is little literature on SSVEP-based BCI systems using 3D stimuli. 3D displays have potential in SSVEP-based BCI systems because they can offer vivid images, good quality in presentation, various stimuli and more entertainment. The purpose of this study was to investigate the effect of two important 3D factors (disparity and crosstalk) on SSVEPs. Twelve participants participated in the experiment with a patterned retarder 3D display. The results show that there is a significant difference (p-value<0.05) between large and small disparity angles, and the signal-to-noise ratios (SNRs) of small disparity angles are higher than those of large disparity angles. 3D stimuli with smaller disparity and lower crosstalk are more suitable for applications, based on the results of 3D perception and SSVEP responses (SNR). Furthermore, we can infer the 3D perception of users from SSVEP responses and modify the proper disparity of 3D images automatically in the future.

  1. 3D Membrane Imaging and Porosity Visualization

    KAUST Repository

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D reconstruction additionally enabled the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with applications in ultrafiltration, supports for forward osmosis, etc., offering a complete view of the transport paths in the membrane.

  2. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    Directory of Open Access Journals (Sweden)

    Richard Chiou

    2010-06-01

    Full Text Available This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-dimensional scheme for viewing the robotic laboratory has been introduced in addition to the remote controlling of the robots. The uniqueness of the project lies in making this process Internet-based, with the robots operated remotely and visualized in 3D. This 3D system approach provides the students with a more realistic feel of the 3D robotic laboratory even though they are working remotely. As a result, the 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback from most of the students. This type of research has introduced a new level of realism and visual communications to online laboratory learning in a remote classroom.

  3. Webs on the Web (WOW): 3D visualization of ecological networks on the WWW for collaborative research and education

    Science.gov (United States)

    Yoon, Ilmi; Williams, Rich; Levine, Eli; Yoon, Sanghyuk; Dunne, Jennifer; Martinez, Neo

    2004-06-01

    This paper describes information technology being developed to improve the quality, sophistication, accessibility, and pedagogical simplicity of ecological network data, analysis, and visualization. We present designs for a WWW demonstration/prototype web site that provides database, analysis, and visualization tools for research and education related to food web research. Our early experience with a prototype 3D ecological network visualization guides our design of a more flexible architecture. 3D visualization algorithms include variable node and link sizes, placement according to node connectivity and trophic levels, and visualization of other node and link properties in food web data. The flexible architecture includes an XML application design, FoodWebML, and pipelining of computational components. Based on users' choices of data and visualization options, the WWW prototype site will connect to an XML database (Xindice) and return the visualization in VRML format for browsing and further interactions.
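
    The layout rules mentioned above, node placement by trophic level and node size by connectivity, can be prototyped in a few lines before being exported to VRML or FoodWebML. The toy food web and scaling constants below are purely illustrative:

    ```python
    import networkx as nx

    # A toy food web: edges point from resource to consumer.
    web = nx.DiGraph([
        ("algae", "zooplankton"), ("algae", "snail"),
        ("zooplankton", "minnow"), ("snail", "minnow"),
        ("minnow", "bass"),
    ])

    # Trophic level = longest chain from a basal species (no prey) to the node.
    levels = {}
    for n in nx.topological_sort(web):
        preys = list(web.predecessors(n))
        levels[n] = 0 if not preys else 1 + max(levels[p] for p in preys)

    # Height from trophic level, node size scaled by total connectivity.
    positions = {n: (i, levels[n], 0.0) for i, n in enumerate(web)}
    sizes = {n: 0.1 + 0.05 * web.degree(n) for n in web}
    print(levels)
    print(sizes)
    ```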

  4. 3D Stereo Visualization for Mobile Robot Tele-Guide

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2006-01-01

    The use of 3D stereoscopic visualization may provide a user with higher comprehension of remote environments in tele-operation when compared to 2D viewing. In particular, a higher perception of environment depth characteristics, spatial localization, remote ambient layout, as well as faster system...

  5. Interactive Visual Analysis within Dynamic Ocean Models

    Science.gov (United States)

    Butkiewicz, T.

    2012-12-01

    The many observation and simulation based ocean models available today can provide crucial insights for all fields of marine research and can serve as valuable references when planning data collection missions. However, the increasing size and complexity of these models makes leveraging their contents difficult for end users. Through a combination of data visualization techniques, interactive analysis tools, and new hardware technologies, the data within these models can be made more accessible to domain scientists. We present an interactive system that supports exploratory visual analysis within large-scale ocean flow models. The currents and eddies within the models are illustrated using effective, particle-based flow visualization techniques. Stereoscopic displays and rendering methods are employed to ensure that the user can correctly perceive the complex 3D structures of depth-dependent flow patterns. Interactive analysis tools are provided which allow the user to experiment through the introduction of their customizable virtual dye particles into the models to explore regions of interest. A multi-touch interface provides natural, efficient interaction, with custom multi-touch gestures simplifying the otherwise challenging tasks of navigating and positioning tools within a 3D environment. We demonstrate the potential applications of our visual analysis environment with two examples of real-world significance: Firstly, an example of using customized particles with physics-based behaviors to simulate pollutant release scenarios, including predicting the oil plume path for the 2010 Deepwater Horizon oil spill disaster. Secondly, an interactive tool for plotting and revising proposed autonomous underwater vehicle mission pathlines with respect to the surrounding flow patterns predicted by the model; as these survey vessels have extremely limited energy budgets, designing more efficient paths allows for greater survey areas.
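
    At its core, a virtual dye-release tool advects particles through the model's velocity field. The sketch below is a simple midpoint integrator in a synthetic 2D gyre, illustrating the idea only; it is not the system's physics-based particle model:

    ```python
    import numpy as np

    def advect(particles, velocity, dt=0.1, steps=100):
        """Advect virtual 'dye' particles through a steady 2D velocity field.

        particles: (N, 2) array of x, y positions
        velocity:  callable mapping an (N, 2) position array to (N, 2) velocities
        """
        path = [particles.copy()]
        for _ in range(steps):
            # Midpoint (RK2) integration step.
            k1 = velocity(particles)
            k2 = velocity(particles + 0.5 * dt * k1)
            particles = particles + dt * k2
            path.append(particles.copy())
        return np.stack(path)

    # A simple rotating gyre as a stand-in for model currents.
    def gyre(p):
        x, y = p[:, 0], p[:, 1]
        return np.column_stack([-y, x])

    release = np.random.default_rng(1).uniform(-0.5, 0.5, size=(50, 2))
    trajectories = advect(release, gyre)
    print(trajectories.shape)  # (steps + 1, 50, 2)
    ```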

  6. 3D Exploration of Meteorological Data: Facing the challenges of operational forecasters

    Science.gov (United States)

    Koutek, Michal; Debie, Frans; van der Neut, Ian

    2016-04-01

    In recent years the Royal Netherlands Meteorological Institute (KNMI) has been working on innovation in the field of meteorological data visualization. We are dealing with Numerical Weather Prediction (NWP) model data and observational data, i.e. satellite images, precipitation radar, and ground and air-borne measurements. These multidimensional, multivariate data are geo-referenced and can be combined in 3D space to provide more intuitive views of atmospheric phenomena. We developed the Weather3DeXplorer (W3DX), a visualization framework for processing and for interactive exploration and visualization using Virtual Reality (VR) technology. We have had considerable success with research studies of extreme weather situations. In this paper we will elaborate on what we have learned from applying interactive 3D visualization in the operational weather room. We will explain how important it is to control the degrees of freedom given to the users (forecasters and scientists) during interaction; 3D camera and 3D slicing-plane navigation appear to be rather difficult for users when not implemented properly. We will present a novel approach to operational 3D visualization user interfaces (UIs) that largely eliminates the obstacle of, and the time it usually takes for, setting up the visualization parameters and an appropriate camera view of a certain atmospheric phenomenon. We have found our inspiration in the way our operational forecasters work in the weather room. We decided to form a bridge between 2D visualization images and interactive 3D exploration. Our method combines web-based 2D UIs and a pre-rendered 3D visualization catalog for the latest NWP model runs with immediate entry into an interactive 3D session for the selected visualization setting. Finally, we would like to present the first user experiences with this approach.

  7. 3D for Graphic Designers

    CERN Document Server

    Connell, Ellery

    2011-01-01

    Helping graphic designers expand their 2D skills into the 3D space. The trend in graphic design is towards 3D, with the demand for motion graphics, animation, photorealism, and interactivity rapidly increasing. And with the meteoric rise of iPads, smartphones, and other interactive devices, the design landscape is changing faster than ever. 2D digital artists who need a quick and efficient way to join this brave new world will want 3D for Graphic Designers. Readers get hands-on basic training in working in the 3D space, including product design, industrial design and visualization, modeling, ani

  8. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    Science.gov (United States)

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot provide a satisfactory quality of experience for radiologists. This paper presents a medical system that can retrieve medical images from the picture archiving and communication system (PACS) on a mobile device over the wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network in the proposed application are also discussed. The results demonstrate that the proposed medical application can provide a smooth interactive experience over WLAN and 3G networks.
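
    The adaptive behaviour described, changing remote render parameters with network status, can be viewed as a policy that trades image resolution and compression quality against measured bandwidth. The thresholds and parameter names below are hypothetical and do not reproduce the paper's algorithm:

    ```python
    def choose_render_params(bandwidth_mbps, target_fps=15):
        """Pick remote-rendering parameters for an estimated bandwidth.

        Returns (image_scale, jpeg_quality): the fraction of full resolution
        to render at, and the JPEG quality to compress each rendered frame with.
        """
        # Rough per-frame byte budget for the measured bandwidth.
        budget = bandwidth_mbps * 1e6 / 8 / target_fps
        if budget > 200_000:      # plenty of headroom (e.g. good WLAN)
            return 1.0, 90
        elif budget > 80_000:     # moderate link
            return 0.75, 75
        elif budget > 30_000:     # constrained link (e.g. busy 3G)
            return 0.5, 60
        else:                     # very poor link: favour responsiveness
            return 0.33, 40

    for bw in (20.0, 6.0, 1.5, 0.4):
        print(bw, "Mbps ->", choose_render_params(bw))
    ```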

  9. Real-Time 3D Visualization

    Science.gov (United States)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  10. Enhancing Nuclear Newcomer Training with 3D Visualization Learning Tools

    International Nuclear Information System (INIS)

    Gagnon, V.

    2016-01-01

    Full text: While the nuclear power industry is trying to reinforce its safety and regain public support post-Fukushima, it is also faced with a very real challenge that affects its day-to-day activities: a rapidly aging workforce. Statistics show that close to 40% of the current nuclear power industry workforce will retire within the next five years. For newcomer countries, the challenge is even greater, having to develop a completely new workforce. The workforce replacement effort introduces nuclear newcomers of a new generation with different backgrounds and affinities. Major lifestyle differences between the two generations of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. To enhance existing training programmes or to support the establishment of new training programmes for newcomer countries, L-3 MAPPS has devised learning tools focused on the “Practice-by-Doing” principle. L-3 MAPPS has coupled 3D computer visualization with high-fidelity simulation to bring real-time, simulation-driven animated components and systems, allowing immersive and participatory, individual or classroom learning. (author)

  11. The effects of 3D interactive animated graphics on student learning and attitudes in computer-based instruction

    Science.gov (United States)

    Moon, Hye Sun

    Visuals are most extensively used as instructional tools in education to present spatially-based information. Recent computer technology allows the generation of 3D animated visuals to extend the presentation in computer-based instruction. Animated visuals in 3D representation not only possess motivational value that promotes positive attitudes toward instruction but also facilitate learning when the subject matter requires dynamic motion and 3D visual cues. In this study, three questions are explored: (1) how 3D graphics affect student learning and attitude, in comparison with 2D graphics; (2) how animated graphics affect student learning and attitude, in comparison with static graphics; and (3) whether the use of 3D graphics, when supported by interactive animation, is the most effective visual cue to improve learning and to develop positive attitudes. A total of 145 eighth-grade students participated in a 2 x 2 factorial design study. The subjects were randomly assigned to one of four computer-based instructions: 2D static; 2D animated; 3D static; and 3D animated. The results indicated that: (1) Students in the 3D graphic condition exhibited more positive attitudes toward instruction than those in the 2D graphic condition. No group differences were found between the posttest scores of the 3D and 2D graphic conditions. However, students in the 3D graphic condition took less time for information retrieval on the posttest than those in the 2D graphic condition. (2) Students in the animated graphic condition exhibited slightly more positive attitudes toward instruction than those in the static graphic condition. No group differences were found between the posttest scores of the animated and static graphic conditions. However, students in the animated graphic condition took less time for information retrieval on the posttest than those in the static graphic condition. (3) Students in the 3D animated graphic condition

  12. When the display matters: A multifaceted perspective on 3D geovisualizations

    Directory of Open Access Journals (Sweden)

    Juřík Vojtěch

    2017-04-01

    Full Text Available This study explores the influence of stereoscopic (real 3D) and monoscopic (pseudo 3D) visualization on the human ability to reckon altitude information in non-interactive and interactive 3D geovisualizations. A two-phased experiment was carried out to compare the performance of two groups of participants, one of them using the real 3D and the other the pseudo 3D visualization of geographical data. A homogeneous group of 61 psychology students, inexperienced in the processing of geographical data, was tested with respect to their efficiency at identifying altitudes of the displayed landscape. The first phase of the experiment was designed as non-interactive, where static 3D visual displays were presented; the second phase was designed as interactive, and the participants were allowed to explore the scene by adjusting the position of the virtual camera. The investigated variables included accuracy at altitude identification, time demands and the amount of motor activity performed by the participant during interaction with the geovisualization. The interface was created using a motion capture system, a Wii Remote Controller, widescreen projection and passive Dolby 3D technology (for real 3D vision). The real 3D visual display was shown to significantly increase the accuracy of landscape altitude identification in non-interactive tasks. As expected, in the interactive phase the differences in accuracy between groups flattened out due to the possibility of interaction, with no other statistically significant differences in completion times or motor activity. The increased number of omitted objects in the real 3D condition was further subjected to an exploratory analysis.

  13. A Study of Layout, Rendering, and Interaction Methods for Immersive Graph Visualization.

    Science.gov (United States)

    Kwon, Oh-Hyun; Muelder, Chris; Lee, Kyungwon; Ma, Kwan-Liu

    2016-07-01

    Information visualization has traditionally limited itself to 2D representations, primarily due to the prevalence of 2D displays and report formats. However, there has been a recent surge in popularity of consumer-grade 3D displays and immersive head-mounted displays (HMDs). The ubiquity of such displays enables the possibility of immersive, stereoscopic visualization environments. While techniques that utilize such immersive environments have been explored extensively for spatial and scientific visualizations, in contrast very little has been explored for information visualization. In this paper, we present our considerations of layout, rendering, and interaction methods for visualizing graphs in an immersive environment. We conducted a user study to evaluate our techniques compared to traditional 2D graph visualization. The results show that participants answered significantly faster with a fewer number of interactions using our techniques, especially for more difficult tasks. While the overall correctness rates are not significantly different, we found that participants gave significantly more correct answers using our techniques for larger graphs.
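
    Producing the 3D node positions needed for such an immersive graph view can be as simple as running a force-directed layout in three dimensions. A minimal sketch with NetworkX, standing in for (not reproducing) the authors' layout methods:

    ```python
    import networkx as nx

    # A moderately sized random graph standing in for an information network.
    G = nx.barabasi_albert_graph(n=200, m=2, seed=7)

    # Force-directed layout computed directly in three dimensions; the positions
    # can then be handed to a stereoscopic or HMD renderer.
    pos3d = nx.spring_layout(G, dim=3, seed=7)
    print(len(pos3d), pos3d[0])  # 200 nodes, each mapped to an (x, y, z) array
    ```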

  14. Tiny but complex - interactive 3D visualization of the interstitial acochlidian gastropod Pseudunela cornuta (Challis, 1970

    Directory of Open Access Journals (Sweden)

    Heß Martin

    2009-09-01

    in a mesopsammic gastropod, though functionally not yet fully understood. Such organ complexity as shown herein by interactive 3D visualization is not plesiomorphically maintained from a larger, benthic ancestor, but newly evolved within small marine hedylopsacean ancestors of P. cornuta. The common picture of general organ regression within mesopsammic acochlidians thus is valid for microhedylacean species only.

  15. Methodology for the Efficient Progressive Distribution and Visualization of 3D Building Objects

    Directory of Open Access Journals (Sweden)

    Bo Mao

    2016-10-01

    Full Text Available Three-dimensional (3D) city models have been applied in a variety of fields. One of the main problems in 3D city model utilization, however, is the large volume of data. In this paper, a method is proposed to generalize the 3D building objects in 3D city models at different levels of detail, and to combine multiple Levels of Detail (LODs) for progressive distribution and visualization of the city models. First, an extended structure for multiple LODs of building objects, the BuildingTree, is introduced that supports both single buildings and building groups; second, constructive solid geometry (CSG) representations of buildings are created and generalized. Finally, the BuildingTree is stored in the NoSQL database MongoDB for dynamic visualization requests. The experimental results indicate that the proposed progressive method can efficiently visualize 3D city models, especially for large areas.
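
    Progressive distribution ultimately comes down to choosing which LOD of each building object to send for the current view. The sketch below illustrates that selection with a hypothetical node structure and distance thresholds; it does not mirror the paper's BuildingTree or its MongoDB storage:

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class BuildingNode:
        """One building (or building group) held at several LODs.

        meshes[i] is the geometry payload for LOD i (0 = coarsest). Field names
        are hypothetical and do not mirror the paper's data structure.
        """
        centre: tuple                     # (x, y, z) of the node's bounding volume
        meshes: List[str]                 # e.g. identifiers of CSG/mesh payloads
        children: List["BuildingNode"] = field(default_factory=list)

    def select_lod(node: BuildingNode, camera: tuple,
                   thresholds=(500.0, 150.0, 40.0)) -> str:
        """Pick the mesh to stream for a node given the camera distance."""
        dist = sum((a - b) ** 2 for a, b in zip(node.centre, camera)) ** 0.5
        lod = sum(dist < t for t in thresholds)      # closer camera -> finer LOD
        return node.meshes[min(lod, len(node.meshes) - 1)]

    tower = BuildingNode(centre=(100.0, 50.0, 0.0),
                         meshes=["block", "facade", "detail"])
    print(select_lod(tower, camera=(2000.0, 0.0, 1.7)))  # ~1.9 km away -> "block"
    print(select_lod(tower, camera=(90.0, 45.0, 1.7)))   # ~11 m away   -> "detail"
    ```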

  16. Effects of 3D sound on visual scanning

    NARCIS (Netherlands)

    Veltman, J.A.; Bronkhorst, A.W.; Oving, A.B.

    2000-01-01

    An experiment was conducted in a flight simulator to explore the effectiveness of a 3D sound display as support to visual information from a head down display (HDD). Pilots had to perform two main tasks in separate conditions: intercepting and following a target jet. Performance was measured for

  17. Interaction for visualization

    CERN Document Server

    Tominski, Christian

    2015-01-01

    Visualization has become a valuable means for data exploration and analysis. Interactive visualization combines expressive graphical representations and effective user interaction. Although interaction is an important component of visualization approaches, much of the visualization literature tends to pay more attention to the graphical representation than to interaction. The goal of this work is to strengthen the interaction side of visualization. Based on a brief review of general aspects of interaction, we develop an interaction-oriented view on visualization. This view comprises five key as

  18. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective

    Science.gov (United States)

    Gillebert, Céline R.; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T.; Orban, Guy A.

    2015-01-01

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. SIGNIFICANCE STATEMENT Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial

  19. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective.

    Science.gov (United States)

    Gillebert, Céline R; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T; Orban, Guy A; Vandenberghe, Rik

    2015-09-16

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial temporal system. We applied

  20. A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data.

    Science.gov (United States)

    Venkat, A; Christensen, C; Gyulassy, A; Summa, B; Federer, F; Angelucci, A; Pascucci, V

    2016-08-01

    The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large-scale panoramic images. The platform is organized around a hierarchical cache-oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the ViSUS framework used in an interactive setting with the microscopy data.

  1. OmicsNet: a web-based tool for creation and visual analysis of biological networks in 3D space.

    Science.gov (United States)

    Zhou, Guangyan; Xia, Jianguo

    2018-06-07

    Biological networks play increasingly important roles in omics data integration and systems biology. Over the past decade, many excellent tools have been developed to support creation, analysis and visualization of biological networks. However, important limitations remain: most tools are standalone programs, the majority of them focus on protein-protein interaction (PPI) or metabolic networks, and visualizations often suffer from 'hairball' effects when networks become large. To help address these limitations, we developed OmicsNet - a novel web-based tool that allows users to easily create different types of molecular interaction networks and visually explore them in a three-dimensional (3D) space. Users can upload one or multiple lists of molecules of interest (genes/proteins, microRNAs, transcription factors or metabolites) to create and merge different types of biological networks. The 3D network visualization system was implemented using the powerful Web Graphics Library (WebGL) technology that works natively in most major browsers. OmicsNet supports force-directed layout, multi-layered perspective layout, as well as spherical layout to help visualize and navigate complex networks. A rich set of functions have been implemented to allow users to perform coloring, shading, topology analysis, and enrichment analysis. OmicsNet is freely available at http://www.omicsnet.ca.
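
    A spherical layout of the kind OmicsNet offers needs node positions distributed over a sphere. One standard way to do this, not necessarily the tool's own algorithm, is the Fibonacci lattice:

    ```python
    import numpy as np

    def fibonacci_sphere_layout(n, radius=1.0):
        """Place n nodes near-uniformly on the surface of a sphere."""
        i = np.arange(n)
        golden = (1 + 5 ** 0.5) / 2
        theta = 2 * np.pi * i / golden        # longitude angle per node
        z = 1 - 2 * (i + 0.5) / n             # evenly spaced heights in [-1, 1]
        r = np.sqrt(1 - z ** 2)
        return radius * np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

    coords = fibonacci_sphere_layout(500)
    print(coords.shape)   # (500, 3) positions ready for a WebGL scene
    ```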

  2. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    Energy Technology Data Exchange (ETDEWEB)

    Wong, S.T.C. [Univ. of California, San Francisco, CA (United States)

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  3. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    International Nuclear Information System (INIS)

    Wong, S.T.C.

    1997-01-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, and electronic and confocal microscopy. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now entering surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  4. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    Science.gov (United States)

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content based on color-plus-depth signals, a general framework for depth mapping to optimize visual comfort for S3D display is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
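
    The global step described above amounts to remapping the source depth range into a target range around the adjusted zero-disparity plane. The sketch below shows only that linear remapping step, not the paper's two-stage global/local optimization; the target range values are illustrative assumptions.

```python
# Sketch of a global depth remap for visual comfort: linearly map the depth
# channel of a color-plus-depth signal into a target "comfort" range.
import numpy as np

def remap_depth(depth, target_near, target_far):
    """Linearly remap a depth map to [target_near, target_far]."""
    d_min, d_max = float(depth.min()), float(depth.max())
    t = (depth - d_min) / max(d_max - d_min, 1e-9)   # normalize to [0, 1]
    return target_near + t * (target_far - target_near)

depth = np.random.rand(480, 640) * 255.0             # stand-in depth map
comfortable = remap_depth(depth, target_near=100.0, target_far=180.0)
print(comfortable.min(), comfortable.max())
```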

  5. Immersive Interaction, Manipulation and Analysis of Large 3D Datasets for Planetary and Earth Sciences

    Science.gov (United States)

    Pariser, O.; Calef, F.; Manning, E. M.; Ardulov, V.

    2017-12-01

    We will present implementation and study of several use-cases of utilizing Virtual Reality (VR) for immersive display, interaction and analysis of large and complex 3D datasets. These datasets have been acquired by the instruments across several Earth, Planetary and Solar Space Robotics Missions. First, we will describe the architecture of the common application framework that was developed to input data, interface with VR display devices and program input controllers in various computing environments. Tethered and portable VR technologies will be contrasted and advantages of each highlighted. We'll proceed to presenting experimental immersive analytics visual constructs that enable augmentation of 3D datasets with 2D ones such as images and statistical and abstract data. We will conclude by presenting comparative analysis with traditional visualization applications and share the feedback provided by our users: scientists and engineers.

  6. A Case Study in Astronomical 3-D Printing: The Mysterious Eta Carinae

    OpenAIRE

    Madura, Thomas I.

    2016-01-01

    3-D printing moves beyond interactive 3-D graphics and provides an excellent tool for both visual and tactile learners, since 3-D printing can now easily communicate complex geometries and full color information. Some limitations of interactive 3-D graphics are also alleviated by 3-D printable models, including issues of limited software support, portability, accessibility, and sustainability. We describe the motivations, methods, and results of our work on using 3-D printing (1) to visualize...

  7. A LOW-COST AND LIGHTWEIGHT 3D INTERACTIVE REAL ESTATE-PURPOSED INDOOR VIRTUAL REALITY APPLICATION

    Directory of Open Access Journals (Sweden)

    K. Ozacar

    2017-11-01

    Full Text Available Interactive 3D architectural indoor design has become more popular since it benefited from Virtual Reality (VR) technologies. VR brings computer-generated 3D content to real-life scale and enables users to observe immersive indoor environments so that they can directly modify them. This opportunity enables buyers to purchase a property off-the-plan more cheaply through virtual models. Instead of showing the property through 2D plans or renders, the visualized interior architecture of an on-sale unbuilt property is demonstrated beforehand so that investors have an impression as if they were in the physical building. However, current applications either use highly resource-consuming software, are non-interactive, or require specialists to create such environments. In this study, we have created a real estate-purposed, low-cost, high-quality, fully interactive VR application that provides a realistic interior architecture of the property by using free and lightweight software: Sweet Home 3D and Unity. A preliminary study showed that participants generally liked the proposed real estate-purposed VR application, and it satisfied the expectations of property buyers.

  8. a Low-Cost and Lightweight 3d Interactive Real Estate-Purposed Indoor Virtual Reality Application

    Science.gov (United States)

    Ozacar, K.; Ortakci, Y.; Kahraman, I.; Durgut, R.; Karas, I. R.

    2017-11-01

    Interactive 3D architectural indoor design has become more popular since it benefited from Virtual Reality (VR) technologies. VR brings computer-generated 3D content to real-life scale and enables users to observe immersive indoor environments so that they can directly modify them. This opportunity enables buyers to purchase a property off-the-plan more cheaply through virtual models. Instead of showing the property through 2D plans or renders, the visualized interior architecture of an on-sale unbuilt property is demonstrated beforehand so that investors have an impression as if they were in the physical building. However, current applications either use highly resource-consuming software, are non-interactive, or require specialists to create such environments. In this study, we have created a real estate-purposed, low-cost, high-quality, fully interactive VR application that provides a realistic interior architecture of the property by using free and lightweight software: Sweet Home 3D and Unity. A preliminary study showed that participants generally liked the proposed real estate-purposed VR application, and it satisfied the expectations of property buyers.

  9. On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments

    Science.gov (United States)

    Çöltekin, A.; Lokka, I.; Zahner, M.

    2016-06-01

    Whether and when should we show data in 3D is an on-going debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary/unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D as it allows `seeing' invisible phenomena, or designing and printing things that are used in e.g., surgeries, educational settings etc. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been sufficiently conducted in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with similar motivations as in Scivis community. Among many types of 3D visualizations, a popular one that is exploited both for visual analysis and visualization is the highly realistic (geo)virtual environments. Such environments may be engaging and memorable for the viewers because they offer highly immersive experiences. However, it is not yet well-established if we should opt to show the data in 3D; and if yes, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.

  10. 3-D interactive visualisation tools for Hi spectral line imaging

    NARCIS (Netherlands)

    van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.

    2016-01-01

    Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is

  11. A possible concept for an interactive 3D visualization system for training and planning of liver surgery

    DEFF Research Database (Denmark)

    Bro-Nielsen, Morten; Darvann, T.; Damgaard, K.

    1996-01-01

    A demonstration of a fully interactive (20 frames per second) 3D graphics display of the blood vessels supporting the biliary tree and bile duct, automatically segmented from CT data, is given. Emphasis is on speed of interaction, modularity and programmer friendliness of graphics programming...

  12. Interactive visualization of APT data at full fidelity

    International Nuclear Information System (INIS)

    Bryden, Aaron; Broderick, Scott; Suram, Santosh K.; Kaluskar, Kaustubh; LeSar, Richard; Rajan, Krishna

    2013-01-01

    Understanding the impact of noise and incomplete data is a critical need for using atom probe tomography effectively. Although many tools and techniques have been developed to address this problem, visualization of the raw data remains an important part of this process. In this paper, we present two contributions to the visualization of data acquired through atom probe tomography. First, we describe the application of a rendering technique, ray-cast spherical impostors, that enables the interactive rendering of large numbers (as large as 10 million plus) of pixel perfect, lit spheres representing individual atoms. This technique is made possible by the use of a consumer-level graphics processing unit (GPU), and it yields an order of magnitude improvement both in render quality and speed over techniques previously used to render spherical glyphs in this domain. Second, we present an interactive tool that allows the user to mask, filter, and colorize the data in real time to help them understand and visualize a precise subset and properties of the raw data. We demonstrate the effectiveness of our tool through benchmarks and an example that shows how the ability to interactively render large numbers of spheres, combined with the use of filters and masks, leads to improved understanding of the three-dimensional (3D) and incomplete nature of atom probe data. This improvement arises from the ability of lit spheres to more effectively show the 3D position and the local spatial distribution of individual atoms than what is possible with point or isosurface renderings. The techniques described in this paper serve to introduce new rendering and interaction techniques that have only recently become practical as well as new ways of interactively exploring the raw data. - Highlights: ► Application of spherical impostor rendering to atom probe data visualization. ► Presented an interactive tool for visualizing atom probe tomography data. ► Presented a comparison of
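
    The rendering technique named above replaces tessellated atom geometry with a per-pixel ray-sphere intersection. The sketch below shows only that intersection and shading math on the CPU with NumPy, as a stand-in for the fragment-shader computation the paper runs on the GPU; the scene values are arbitrary.

```python
# Sketch of the per-pixel test behind ray-cast spherical impostors: intersect
# the view ray with an analytic sphere and shade the hit point.
import numpy as np

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along a unit-length ray, or None."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None                      # ray misses the sphere
    t = -b - np.sqrt(disc)
    return t if t > 0.0 else None

origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, 1.0])    # unit view direction
center = np.array([0.1, 0.0, 5.0])
hit = ray_sphere(origin, direction, center, radius=0.5)
if hit is not None:
    point = origin + hit * direction
    normal = (point - center) / 0.5
    print("hit at", point, "lambert =", max(np.dot(normal, -direction), 0.0))
```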

  13. Visualization of Hyperconjugation and Subsequent Structural Distortions through 3D Printing of Crystal Structures.

    Science.gov (United States)

    Mithila, Farha J; Oyola-Reynoso, Stephanie; Thuo, Martin M; Atkinson, Manza Bj

    2016-01-01

    Structural distortions due to hyperconjugation in organic molecules, like norbornenes, are well captured through X-ray crystallographic data, but are sometimes difficult to visualize, especially for those who apply chemical knowledge but are not chemists. Crystal structures from the Cambridge database were downloaded and converted to .stl format. The structures were then printed at the desired scale using a 3D printer. Replicas of the crystal structures were accurately reproduced in scale and any resulting distortions were clearly visible from the macroscale models. Through-space interactions, or the effect of through-space hyperconjugation, were illustrated through loss of symmetry or distortions thereof. The norbornene structures exhibit distortions that cannot be observed with conventional ball-and-stick modelling kits. We show that 3D printed models derived from crystallographic data capture even subtle distortions in molecules. We translate such crystallographic data into scaled-up models through 3D printing.

  14. Analysis of User Requirements in Interactive 3D Video Systems

    Directory of Open Access Journals (Sweden)

    Haiyue Yuan

    2012-01-01

    Full Text Available The recent development of three dimensional (3D display technologies has resulted in a proliferation of 3D video production and broadcasting, attracting a lot of research into capture, compression and delivery of stereoscopic content. However, the predominant design practice of interactions with 3D video content has failed to address its differences and possibilities in comparison to the existing 2D video interactions. This paper presents a study of user requirements related to interaction with the stereoscopic 3D video. The study suggests that the change of view, zoom in/out, dynamic video browsing, and textual information are the most relevant interactions with stereoscopic 3D video. In addition, we identified a strong demand for object selection that resulted in a follow-up study of user preferences in 3D selection using virtual-hand and ray-casting metaphors. These results indicate that interaction modality affects users’ decision of object selection in terms of chosen location in 3D, while user attitudes do not have significant impact. Furthermore, the ray-casting-based interaction modality using Wiimote can outperform the volume-based interaction modality using mouse and keyboard for object positioning accuracy.

  15. Hybrid wide-angle viewing-endoscopic vitrectomy using a 3D visualization system

    Directory of Open Access Journals (Sweden)

    Kita M

    2018-02-01

    Full Text Available Mihori Kita, Yuki Mori, Sachiyo Hama Department of Ophthalmology, National Organization Kyoto Medical Center, Kyoto, Japan Purpose: To introduce a hybrid wide-angle viewing-endoscopic vitrectomy, which we have previously reported, using a recently developed 3D visualization system. Subjects and methods: We report a single center, retrospective, consecutive surgical case series of 113 eyes that underwent 25 G vitrectomy (rhegmatogenous retinal detachment or proliferative vitreoretinopathy, 49 eyes; epiretinal membrane, 18 eyes; proliferative diabetic retinopathy, 17 eyes; vitreous opacity or vitreous hemorrhage, 11 eyes; macular hole, 11 eyes; vitreomacular traction syndrome, 4 eyes; and luxation of intraocular lens, 3 eyes). Results: This system was successfully used to perform hybrid vitrectomy in difficult cases, such as proliferative vitreoretinopathy and proliferative diabetic retinopathy. Conclusion: Hybrid wide-angle viewing-endoscopic vitrectomy using a 3D visualization system appears to be a valuable and promising method for managing various types of vitreoretinal disease. Keywords: 25 G vitrectomy, endoscope, wide-angle viewing system, 3D visualization system, hybrid

  16. Development of 3D browsing and interactive web system

    Science.gov (United States)

    Shi, Xiaonan; Fu, Jian; Jin, Chaolin

    2017-09-01

    In the current market, users need to download specific software or plug-ins to browse 3D models, browsing systems may be unstable, and interaction with the 3D model is often not possible. To address these problems, this paper presents a solution in which the model is parsed on the server side for interactive browsing: the user only needs to enter the system URL and upload a 3D model file to start browsing. The server parses the 3D model in real time and keeps the interactive response fast. This follows a minimalist approach for the user and removes a barrier to 3D content development in the current market.

  17. Cyclin D3 interacts with vitamin D receptor and regulates its transcription activity

    International Nuclear Information System (INIS)

    Jian Yongzhi; Yan Jun; Wang Hanzhou; Chen Chen; Sun Maoyun; Jiang Jianhai; Lu Jieqiong; Yang Yanzhong; Gu Jianxin

    2005-01-01

    D-type cyclins are essential for the progression through the G1 phase of the cell cycle. Besides serving as cell cycle regulators, D-type cyclins were recently reported to have transcription regulation functions. Here, we report that cyclin D3 is a new interacting partner of vitamin D receptor (VDR), a member of the superfamily of nuclear receptors for steroid hormones, thyroid hormone, and the fat-soluble vitamins A and D. The interaction was confirmed with methods of yeast two-hybrid system, in vitro binding analysis and in vivo co-immunoprecipitation. Cyclin D3 interacted with VDR in a ligand-independent manner, but treatment of the ligand, 1,25-dihydroxyvitamin D3, strengthened the interaction. Confocal microscopy analysis showed that ligand-activated VDR led to an accumulation of cyclin D3 in the nuclear region. Cyclin D3 up-regulated transcriptional activity of VDR and this effect was counteracted by overexpression of CDK4 and CDK6. These findings provide us a new clue to understand the transcription regulation functions of D-type cyclins

  18. Indoor objects and outdoor urban scenes recognition by 3D visual primitives

    DEFF Research Database (Denmark)

    Fu, Junsheng; Kämäräinen, Joni-Kristian; Buch, Anders Glent

    2014-01-01

    , we propose an alternative appearance-driven approach which first extracts 2D primitives justified by Marr's primal sketch, which are "accumulated" over multiple views and the most stable ones are "promoted" to 3D visual primitives. The 3D promoted primitives represent both structure and appearance...

  19. Sub aquatic 3D visualization and temporal analysis utilizing ArcGIS online and 3D applications

    Science.gov (United States)

    We used 3D Visualization tools to illustrate some complex water quality data we’ve been collecting in the Great Lakes. These data include continuous tow data collected from our research vessel the Lake Explorer II, and continuous water quality data collected from an autono...

  20. Seamless 3D interaction for virtual tables, projection planes, and CAVEs

    Science.gov (United States)

    Encarnacao, L. M.; Bimber, Oliver; Schmalstieg, Dieter; Barton, Robert J., III

    2000-08-01

    The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. This device shares with other large-screen display technologies (such as data walls and surround-screen projection systems) the lack of human-centered unencumbered user interfaces and 3D interaction technologies. Such shortcomings present severe limitations to the application of virtual reality (VR) technology to time-critical applications as well as employment scenarios that involve heterogeneous groups of end-users without high levels of computer familiarity and expertise. Traditionally such employment scenarios are common in planning-related application areas such as mission rehearsal and command and control. For these applications, a high grade of flexibility with respect to the system requirements (display and I/O devices) as well as to the ability to seamlessly and intuitively switch between different interaction modalities and interaction techniques are sought. Conventional VR techniques may be insufficient to meet this challenge. This paper presents novel approaches for human-centered interfaces to Virtual Environments focusing on the Virtual Table visual input device. It introduces new paradigms for 3D interaction in virtual environments (VE) for a variety of application areas based on pen-and-clipboard, mirror-in-hand, and magic-lens metaphors, and introduces new concepts for combining VR and augmented reality (AR) techniques. It finally describes approaches toward hybrid and distributed multi-user interaction environments and concludes by hypothesizing on possible use cases for defense applications.

  1. Magnetic assembly of 3D cell clusters: visualizing the formation of an engineered tissue.

    Science.gov (United States)

    Ghosh, S; Kumar, S R P; Puri, I K; Elankumaran, S

    2016-02-01

    Contactless magnetic assembly of cells into 3D clusters has been proposed as a novel means for 3D tissue culture that eliminates the need for artificial scaffolds. However, thus far its efficacy has only been studied by comparing expression levels of generic proteins. Here, it has been evaluated by visualizing the evolution of cell clusters assembled by magnetic forces, to examine their resemblance to in vivo tissues. Cells were labeled with magnetic nanoparticles, then assembled into 3D clusters using magnetic force. Scanning electron microscopy was used to image intercellular interactions and morphological features of the clusters. When cells were held together by magnetic forces for a single day, they formed intercellular contacts through extracellular fibers. These kept the clusters intact once the magnetic forces were removed, thus serving the primary function of scaffolds. The cells self-organized into constructs consistent with the corresponding tissues in vivo. Epithelial cells formed sheets while fibroblasts formed spheroids and exhibited position-dependent morphological heterogeneity. Cells on the periphery of a cluster were flattened while those within were spheroidal, a well-known characteristic of connective tissues in vivo. Cells assembled by magnetic forces presented visual features representative of their in vivo states but largely absent in monolayers. This established the efficacy of contactless assembly as a means to fabricate in vitro tissue models. © 2016 John Wiley & Sons Ltd.

  2. A workflow for the 3D visualization of meteorological data

    Science.gov (United States)

    Helbig, Carolin; Rink, Karsten

    2014-05-01

    In the future, climate change will strongly influence our environment and living conditions. To predict possible changes, climate models that include basic and process conditions have been developed and big data sets are produced as a result of simulations. The combination of various variables of climate models with spatial data from different sources helps to identify correlations and to study key processes. For our case study we use results of the weather research and forecasting (WRF) model of two regions at different scales that include various landscapes in Northern Central Europe and Baden-Württemberg. We visualize these simulation results in combination with observation data and geographic data, such as river networks, to evaluate processes and analyze if the model represents the atmospheric system sufficiently. For this purpose, a continuous workflow that leads from the integration of heterogeneous raw data to visualization using open source software (e.g. OpenGeoSys Data Explorer, ParaView) is developed. These visualizations can be displayed on a desktop computer or in an interactive virtual reality environment. We established a concept that includes recommended 3D representations and a color scheme for the variables of the data based on existing guidelines and established traditions in the specific domain. To examine changes over time in observation and simulation data, we added the temporal dimension to the visualization. In a first step of the analysis, the visualizations are used to get an overview of the data and detect areas of interest such as regions of convection or wind turbulences. Then, subsets of data sets are extracted and the included variables can be examined in detail. An evaluation by experts from the domains of visualization and atmospheric sciences establish if they are self-explanatory and clearly arranged. These easy-to-understand visualizations of complex data sets are the basis for scientific communication. In addition, they have

  3. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    Science.gov (United States)

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-user of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation.
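
    The ERD measure used in studies like this one is the relative decrease of band power in a task interval compared to a baseline interval. The sketch below computes upper-alpha ERD on a synthetic single-channel signal with SciPy; the band limits, intervals, and filter settings are illustrative, not the study's exact pipeline.

```python
# Minimal sketch of event-related desynchronization (ERD) in the upper alpha
# band (10-12 Hz): band-pass the EEG, square to get power, then express the
# task-period power as a percentage change from a pre-cue baseline.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 256                                          # sampling rate in Hz
t = np.arange(0, 8, 1 / fs)
eeg = np.sin(2 * np.pi * 11 * t) * np.where(t < 4, 1.0, 0.5)   # alpha drops after cue
eeg += 0.3 * np.random.randn(t.size)

sos = butter(4, [10, 12], btype="bandpass", fs=fs, output="sos")
power = sosfiltfilt(sos, eeg) ** 2

baseline = power[(t >= 1) & (t < 3)].mean()       # reference interval before the cue
task = power[(t >= 4.5) & (t < 7.5)].mean()       # motor-imagery interval
erd_percent = (task - baseline) / baseline * 100  # negative values = desynchronization
print(f"ERD: {erd_percent:.1f}%")
```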

  4. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery

    Directory of Open Access Journals (Sweden)

    Teresa eSollfrank

    2015-08-01

    Full Text Available A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during motor imagery. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronisation (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices thereby potentially improving MI based BCI protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb motor imagery present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (2D vs. 3D). The largest upper alpha band power decrease was obtained during motor imagery after a 3-dimensional visualization. In total in 12 out of 20 tasks the end-user of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D visualization modality group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during MI. Realistic visual feedback, consistent with the participant's motor imagery, might be helpful for accomplishing successful motor imagery and the use of such feedback may assist in making BCI a more natural interface for motor imagery based BCI rehabilitation.

  5. Human tooth pulp anatomy visualization by 3D magnetic resonance microscopy

    International Nuclear Information System (INIS)

    Sustercic, Dusan; Sersa, Igor

    2012-01-01

    Precise assessment of dental pulp anatomy is of extreme importance for successful endodontic treatment. As standard radiographs of teeth provide very limited information on dental pulp anatomy, more capable methods are highly appreciated. One of these is 3D magnetic resonance (MR) microscopy, whose diagnostic capabilities for improved dental pulp anatomy assessment were evaluated in this study. Twenty extracted human teeth were scanned on a 2.35 T MRI system for MR microscopy using the 3D spin-echo method that enabled image acquisition with isotropic resolution of 100 μm. The 3D images were then post-processed with the ImageJ program (NIH) to obtain advanced volume rendered views of dental pulps. MR microscopy at 2.35 T provided accurate data on dental pulp anatomy in vitro. The data were presented as a sequence of thin 2D slices through the pulp in various orientations or as volume rendered 3D images reconstructed from arbitrary viewpoints. Sequential 2D images enabled only an approximate assessment of the pulp, while volume rendered 3D images were more precise in visualization of pulp anatomy and clearly showed pulp diverticles, number of pulp canals and root canal anastomosis. This in vitro study demonstrated that MR microscopy could provide very accurate 3D visualization of dental pulp anatomy. A possible future application of the method in vivo may be of great importance for endodontic treatment.

  6. 3D MODELLING AND VISUALIZATION BASED ON THE UNITY GAME ENGINE – ADVANTAGES AND CHALLENGES

    Directory of Open Access Journals (Sweden)

    I. Buyuksalih

    2017-11-01

    Full Text Available 3D city modelling is increasingly popular and is becoming a valuable tool in managing big cities. Urban and energy planning, landscape, noise-sewage modelling, underground mapping and navigation are among the applications/fields which depend on 3D modelling for their effective operation. Several research areas and implementation projects have been carried out to provide the most reliable 3D data format for sharing and functionality as well as a visualization and analysis platform. For instance, the BIMTAS company has recently completed a project to estimate potential solar energy on 3D buildings for the whole of Istanbul and is now focusing on 3D underground utility mapping for a pilot case study. The research and implementation standard in the 3D city model domain (the 3D data sharing and visualization schema) is based on the CityGML schema version 2.0. However, there are some limitations and issues in the implementation phase for large datasets. Most of the limitations were due to the visualization, database integration and analysis platform (the Unity3D game engine), as highlighted in this paper.

  7. Ideal Positions: 3D Sonography, Medical Visuality, Popular Culture.

    Science.gov (United States)

    Seiber, Tim

    2016-03-01

    As digital technologies are integrated into medical environments, they continue to transform the experience of contemporary health care. Importantly, medicine is increasingly visual. In the history of sonography, visibility has played an important role in accessing fetal bodies for diagnostic and entertainment purposes. With the advent of three-dimensional (3D) rendering, sonography presents the fetus visually as already a child. The aesthetics of this process and the resulting imagery, made possible in digital networks, discloses important changes in the relationship between technology and biology, reproductive health and political debates, and biotechnology and culture.

  8. KENO3D Visualization Tool for KENO V.a and KENO-VI Geometry Models

    International Nuclear Information System (INIS)

    Horwedel, J.E.; Bowman, S.M.

    2000-01-01

    Criticality safety analyses often require detailed modeling of complex geometries. Effective visualization tools can enhance checking the accuracy of these models. This report describes the KENO3D visualization tool developed at the Oak Ridge National Laboratory (ORNL) to provide visualization of KENO V.a and KENO-VI criticality safety models. The development of KENO3D is part of the current efforts to enhance the SCALE (Standardized Computer Analyses for Licensing Evaluations) computer software system

  9. FluoRender: An application of 2D image space methods for 3D and 4D confocal microscopy data visualization in neurobiology research

    KAUST Repository

    Wan, Yong; Otsuna, Hideo; Chien, Chi-Bin; Hansen, Charles

    2012-01-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data. © 2012 IEEE.
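
    The key point of the record above is that operations such as tone mapping run on the already-rendered 2D image rather than on the 3D volume. The sketch below applies a Reinhard-style tone-mapping pass to a stand-in rendered image; it is only an illustration of the idea, not FluoRender's implementation, and the operator and parameters are assumptions.

```python
# Sketch of a 2D image-space pass: once the volume has been rendered into an
# RGB image, tone mapping is a per-pixel operation on that image.
import numpy as np

def tone_map(image, gamma=1.0 / 2.2):
    """Reinhard tone mapping followed by gamma encoding on a float RGB image."""
    mapped = image / (1.0 + image)        # compress high intensities
    return np.clip(mapped, 0.0, 1.0) ** gamma

rendered = np.random.rand(512, 512, 3) * 4.0       # stand-in high-range render
display_ready = tone_map(rendered)
print(display_ready.min(), display_ready.max())
```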

  10. FluoRender: An application of 2D image space methods for 3D and 4D confocal microscopy data visualization in neurobiology research

    KAUST Repository

    Wan, Yong

    2012-02-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data. © 2012 IEEE.

  11. Development of a 3-D Nuclear Event Visualization Program Using Unity

    Science.gov (United States)

    Kuhn, Victoria

    2017-09-01

    Simulations have become increasingly important for science and there is an increasing emphasis on the visualization of simulations within a Virtual Reality (VR) environment. Our group is exploring this capability as a visualization tool not just for those curious about science, but also for educational purposes for K-12 students. Using data collected in 3-D by a Time Projection Chamber (TPC), we are able to visualize nuclear and cosmic events. The Unity game engine was used to recreate the TPC to visualize these events and construct a VR application. The methods used to create these simulations will be presented along with an example of a simulation. I will also present on the development and testing of this program, which I carried out this past summer at MSU as part of an REU program. We used data from the S πRIT TPC, but the software can be applied to other 3-D detectors. This work is supported by the U.S. Department of Energy under Grant Nos. DE-SC0014530, DE-NA0002923 and US NSF under Grant No. PHY-1565546.

  12. Design and implementation of a 3D ocean virtual reality and visualization engine

    Science.gov (United States)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean aim at high fidelity simulation of ocean environment, visualization of massive and multidimensional marine data, and imitation of marine lives. VV-Ocean is composed of five modules, i.e. memory management module, resources management module, scene management module, rendering process management module and interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, imitating and simulating marine lives intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce drifting and diffusion processes of oil spilling from sea bottom to surface. Environment factors such as ocean current and wind field have been considered in this simulation. On this platform oil spilling process can be abstracted as movements of abundant oil particles. The result shows that oil particles blend with water well and the platform meets the requirement for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
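
    The record above abstracts the oil spill as many particles advected by the current and wind fields. The sketch below shows one explicit Euler advection step of that kind; the velocity fields, the 3% wind factor and the time step are illustrative assumptions, not values from VV-Ocean.

```python
# Sketch of the particle abstraction for an oil-spill simulation: each particle
# is advected by the local current plus a small wind-drag term.
import numpy as np

def advect(particles, current_fn, wind_fn, dt=1.0, wind_factor=0.03):
    """One explicit Euler step for an (N, 3) array of particle positions."""
    return particles + dt * (current_fn(particles) + wind_factor * wind_fn(particles))

current = lambda p: np.tile([0.2, 0.05, 0.01], (len(p), 1))   # uniform current (m/s)
wind = lambda p: np.tile([5.0, 0.0, 0.0], (len(p), 1))        # surface wind (m/s)

particles = np.zeros((1000, 3))          # spill released at the origin
for _ in range(60):                      # one minute of simulated drift
    particles = advect(particles, current, wind)
print(particles.mean(axis=0))
```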

  13. Shape Perception in 3-D Scatterplots Using Constant Visual Angle Glyphs

    DEFF Research Database (Denmark)

    Stenholt, Rasmus; Madsen, Claus B.

    2012-01-01

    When viewing 3-D scatterplots in immersive virtual environments, one commonly encountered problem is the presence of clutter, which obscures the view of any structures of interest in the visualization. In order to solve this problem, we propose to render the 3-D glyphs such that they always cover...... to regular perspective glyphs, especially when a large amount of clutter is present. Furthermore, our evaluation revealed that perception of structures in 3-D scatterplots is significantly affected by the volumetric density of the glyphs in the plot....
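
    Constant visual angle means that a glyph's world-space radius grows linearly with its distance from the viewer, so it always covers roughly the same screen area. The small sketch below computes such radii for a chosen angular size; the angle value is an illustrative assumption.

```python
# Sketch of the constant-visual-angle idea: scale each glyph's world-space
# radius with its distance to the viewer so its on-screen size stays constant.
import numpy as np

def constant_angle_radii(glyph_positions, eye, angle_deg=0.5):
    """World-space radius per glyph for a fixed visual angle."""
    dist = np.linalg.norm(glyph_positions - eye, axis=1)
    return dist * np.tan(np.radians(angle_deg) / 2.0)

points = np.random.rand(5, 3) * 10.0
print(constant_angle_radii(points, eye=np.array([0.0, 0.0, -2.0])))
```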

  14. The interactive presentation of 3D information obtained from reconstructed datasets and 3D placement of single histological sections with the 3D portable document format

    NARCIS (Netherlands)

    de Boer, Bouke A.; Soufan, Alexandre T.; Hagoort, Jaco; Mohun, Timothy J.; van den Hoff, Maurice J. B.; Hasman, Arie; Voorbraak, Frans P. J. M.; Moorman, Antoon F. M.; Ruijter, Jan M.

    2011-01-01

    Interpretation of the results of anatomical and embryological studies relies heavily on proper visualization of complex morphogenetic processes and patterns of gene expression in a three-dimensional (3D) context. However, reconstruction of complete 3D datasets is time consuming and often researchers

  15. 3D geospatial visualizations: Animation and motion effects on spatial objects

    Science.gov (United States)

    Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos

    2018-02-01

    Digital Elevation Models (DEMs), in combination with high-quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (virtual globe) and an amazing navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (Javascript) also makes it possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). One step beyond, by employing WebGL frameworks (e.g. Cesium.js, three.js), animation and motion effects can be applied to 3D models. However, major GIS-based functionalities in combination with all the above-mentioned visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g. sea waves) as well as motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. To this end, we developed and made available to the research community an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models at user-specified locations, paths and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations, with the geospatial aspect of a virtual world.

  16. iview: an interactive WebGL visualizer for protein-ligand complex.

    Science.gov (United States)

    Li, Hongjian; Leung, Kwong-Sak; Nakane, Takanori; Wong, Man-Hon

    2014-02-25

    Visualization of protein-ligand complex plays an important role in elaborating protein-ligand interactions and aiding novel drug design. Most existing web visualizers either rely on slow software rendering, or lack virtual reality support. The vital feature of macromolecular surface construction is also unavailable. We have developed iview, an easy-to-use interactive WebGL visualizer of protein-ligand complex. It exploits hardware acceleration rather than software rendering. It features three special effects in virtual reality settings, namely anaglyph, parallax barrier and oculus rift, resulting in visually appealing identification of intermolecular interactions. It supports four surface representations including Van der Waals surface, solvent excluded surface, solvent accessible surface and molecular surface. Moreover, based on the feature-rich version of iview, we have also developed a neat and tailor-made version specifically for our istar web platform for protein-ligand docking purpose. This demonstrates the excellent portability of iview. Using innovative 3D techniques, we provide a user friendly visualizer that is not intended to compete with professional visualizers, but to enable easy accessibility and platform independence.

  17. MRI segmentation by active contours model, 3D reconstruction, and visualization

    Science.gov (United States)

    Lopez-Hernandez, Juan M.; Velasquez-Aguilar, J. Guadalupe

    2005-02-01

    Advances in 3D data modelling methods are becoming increasingly popular in the areas of biology, chemistry and medical applications. The Nuclear Magnetic Resonance Imaging (NMRI) technique has progressed at a spectacular rate over the past few years, and its uses have spread over many applications throughout the body in both anatomical and functional investigations. In this paper we present the application of Zernike polynomials to a 3D mesh model of the head, using contours acquired from cross-sectional slices by active contour model extraction, and we propose visualization with OpenGL 3D graphics of the 2D-3D (slice-surface) information as a diagnostic aid in medical applications.

  18. Thickness and clearance visualization based on distance field of 3D objects

    Directory of Open Access Journals (Sweden)

    Masatomo Inui

    2015-07-01

    Full Text Available This paper proposes a novel method for visualizing the thickness and clearance of 3D objects in a polyhedral representation. The proposed method uses the distance field of the objects in the visualization. A parallel algorithm is developed for constructing the distance field of polyhedral objects using the GPU. The distance between a voxel and the surface polygons of the model is computed many times in the distance field construction. Similar sets of polygons are usually selected as close polygons for close voxels. By using this spatial coherence, a parallel algorithm is designed to compute the distances between a cluster of close voxels and the polygons selected by the culling operation so that the fast shared memory mechanism of the GPU can be fully utilized. The thickness/clearance of the objects is visualized by distributing points on the visible surfaces of the objects and painting them with a unique color corresponding to the thickness/clearance values at those points. A modified ray casting method is developed for computing the thickness/clearance using the distance field of the objects. A system based on these algorithms can compute the distance field of complex objects within a few minutes for most cases. After the distance field construction, thickness/clearance visualization at a near interactive rate is achieved.
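
    The core quantity in the record above is a distance field: for every interior voxel, the distance to the nearest surface, from which local wall thickness can be estimated. The sketch below uses SciPy's Euclidean distance transform on a toy voxelized part as a CPU stand-in for the paper's GPU construction; the geometry and the thickness estimate are illustrative.

```python
# Sketch of the distance-field idea on a voxelized object: the Euclidean
# distance transform gives each interior voxel's distance to the surface.
import numpy as np
from scipy.ndimage import distance_transform_edt

# Voxelize a hollow box with 3-voxel-thick walls as a toy "part".
solid = np.zeros((40, 40, 40), dtype=bool)
solid[5:35, 5:35, 5:35] = True
solid[8:32, 8:32, 8:32] = False

dist_to_surface = distance_transform_edt(solid)       # 0 outside the material
thickness_estimate = 2.0 * dist_to_surface.max()      # rough wall-thickness bound
print("max interior distance:", dist_to_surface.max(),
      "approx. wall thickness:", thickness_estimate)
```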

  19. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    Energy Technology Data Exchange (ETDEWEB)

    Data Analysis and Visualization (IDAV) and the Department of Computer Science, University of California, Davis, One Shields Avenue, Davis CA 95616, USA; International Research Training Group "Visualization of Large and Unstructured Data Sets," University of Kaiserslautern, Germany; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Genomics Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley CA 94720, USA; Life Sciences Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley CA 94720, USA; Computer Science Division, University of California, Berkeley, CA, USA; Computer Science Department, University of California, Irvine, CA, USA; All authors are with the Berkeley Drosophila Transcription Network Project, Lawrence Berkeley National Laboratory; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Biggin, Mark D.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; Keranen, Soile V. E.; Eisen, Michael B.; Knowles, David W.; Malik, Jitendra; Hagen, Hans; Hamann, Bernd

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
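
    The analysis step described above groups cells by their expression vectors and then uses the cluster labels to drive the 3D visualization. The sketch below shows that clustering step on synthetic data with scikit-learn; the number of clusters and the data are illustrative assumptions, not the framework's actual pipeline.

```python
# Minimal sketch of the clustering step: each cell has a 3D position and a
# vector of gene expression values; k-means groups cells by expression, and
# the cluster label can then drive the color of each cell in a 3D view.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(500, 3))        # (x, y, z) per cell
expression = rng.normal(size=(500, 4))                # 4 genes per cell
expression[:200, 0] += 3.0                            # build in one expression pattern

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(expression)
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} cells, "
          f"mean position {positions[labels == k].mean(axis=0).round(1)}")
```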

  20. Transparent 3D Visualization of Archaeological Remains in Roman Site in Ankara-Turkey with Ground Penetrating Radar Method

    Science.gov (United States)

    Kadioglu, S.

    2009-04-01

    remains. Interactive interpretation was done by using sub-blocks of the transparent 3D volume. The opacity function coefficients were increased as deeper sub-blocks were visualized; in this way the amplitudes of the electromagnetic wave field were controlled by changing the opacity coefficients with depth. The transparent 3D visualization made it possible to identify the archaeological remains at their native locations and depths within a 3D volume. According to the visualization results, in the governorship agora the broken Roman street was identified at about 4 m depth under the remnants of the Ottoman, Seljuk and Byzantine periods, and a colonnaded portico was identified in the governorship garden. Excavations supported the 3D imaging results. In the Augustus temple, very complex remnant structures including cubbies were identified in front of the east wall of the temple. The remnant walls very near the surface continued to considerable depth in the 3D image. The transparent 3D visualization results agreed with the excavation results at the Augustus temple.
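
    The depth-dependent opacity control mentioned above can be pictured as a transfer function whose gain grows with depth, so deep reflections are not washed out by shallow ones. The sketch below applies such a function to a synthetic amplitude volume; the linear ramp and thresholds are illustrative assumptions, not values from the study.

```python
# Sketch of a depth-dependent opacity transfer function for transparent
# rendering of a GPR amplitude volume.
import numpy as np

def opacity(amplitude, depth_index, n_depths, base=0.02, gain=0.2):
    """Map normalized |amplitude| to opacity, boosted linearly with depth."""
    depth_boost = 1.0 + gain * (depth_index / max(n_depths - 1, 1))
    return np.clip(base + np.abs(amplitude) * 0.5 * depth_boost, 0.0, 1.0)

volume = np.random.randn(64, 128, 128) * 0.3           # depth x inline x crossline
alpha = np.stack([opacity(volume[z], z, volume.shape[0])
                  for z in range(volume.shape[0])])
print(alpha[0].mean(), alpha[-1].mean())                # deeper slices are more opaque
```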

  1. Novel Scientific Visualization Interfaces for Interactive Information Visualization and Sharing

    Science.gov (United States)

    Demir, I.; Krajewski, W. F.

    2012-12-01

    rainfall conditions are available in the IFIS. 2D and 3D interactive visualizations in the IFIS make the data more understandable to general public. Users are able to filter data sources for their communities and selected rivers. The data and information on IFIS is also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms including tablets and mobile devices. Multiple view modes in the IFIS accommodate different user types from general public to researchers and decision makers by providing different level of tools and details. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding condition along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize damage of floods.

  2. Immersive 3D Visualization of Astronomical Data

    Science.gov (United States)

    Schaaff, A.; Berthier, J.; Da Rocha, J.; Deparis, N.; Derriere, S.; Gaultier, P.; Houpin, R.; Normand, J.; Ocvirk, P.

    2015-09-01

    Immersive-3D visualization, or Virtual Reality in our study, was previously dedicated to specific uses (research, flight simulators, etc.). The required investment in infrastructure and its cost restricted it to large laboratories or companies. Lately we have seen the development of immersive-3D headsets intended for wide distribution, for example the Oculus Rift and the Sony Morpheus projects. The usual reaction is to say that these tools are primarily intended for games, since it is easy to imagine a player in a virtual environment and the added value over conventional 2D screens. Yet it is likely that there are many applications in the professional field if these tools become common. Introducing this technology into existing applications or new developments makes sense only if the interest is properly evaluated. The use in astronomy is clear for education: it is easy to imagine mobile, lightweight planetariums or the reproduction of poorly accessible environments (e.g., large instruments). In contrast, in the field of professional astronomy the use is probably less obvious, and it requires studies to determine the most appropriate applications and to assess the contributions compared to other display modes.

  3. NECTAR: Simulation and Visualization in a 3D Collaborative Environment

    NARCIS (Netherlands)

    Law, Y.W.; Chan, K.Y.

    For simulation and visualization in a 3D collaborative environment, an architecture called the Nanyang Experimental CollaboraTive ARchitecture (NECTAR) has been developed. The objective is to support multi-user collaboration in a virtual environment with an emphasis on cost-effectiveness and

  4. Expanding the Interaction Lexicon for 3D Graphics

    National Research Council Canada - National Science Library

    Pierce, Jeffrey S

    2001-01-01

    .... This research makes several contributions to 3D interaction and virtual reality. The Voodoo Dolls technique is a new technique for manipulating objects in immersive 3D environments in which users manipulate hand-held copies of objects...

  5. MSX-3D: a tool to validate 3D protein models using mass spectrometry.

    Science.gov (United States)

    Heymann, Michaël; Paramelle, David; Subra, Gilles; Forest, Eric; Martinez, Jean; Geourjon, Christophe; Deléage, Gilbert

    2008-12-01

    The technique of chemical cross-linking followed by mass spectrometry has proven to provide valuable information about protein structure and interactions between protein subunits. It is an effective and efficient way to experimentally investigate some aspects of a protein structure when NMR and X-ray crystallography data are lacking. We introduce MSX-3D, a tool specifically geared to validate protein models using mass spectrometry. In addition to classical peptide identifications, it allows an interactive 3D visualization of the distance constraints derived from a cross-linking experiment. Freely available at http://proteomics-pbil.ibcp.fr
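
    The check behind cross-link based model validation is whether the distance between cross-linked residues in the candidate 3D model is within the maximum span of the cross-linker. The sketch below shows that comparison on made-up coordinates; the residue numbers, coordinates and the 30 Å cutoff are illustrative values, not MSX-3D's defaults.

```python
# Sketch of a cross-link distance-constraint check: measure inter-residue
# distances in a candidate 3D model and compare them with the linker span.
import numpy as np

model_ca = {                                  # residue id -> C-alpha coordinates (Angstrom)
    12: np.array([10.2, 4.1, -3.0]),
    57: np.array([28.5, 9.7, 2.2]),
    88: np.array([15.0, 30.1, 11.4]),
}
crosslinks = [(12, 57), (12, 88)]             # residue pairs identified by mass spectrometry

def satisfied(pair, max_span=30.0):
    i, j = pair
    return np.linalg.norm(model_ca[i] - model_ca[j]) <= max_span

for pair in crosslinks:
    d = np.linalg.norm(model_ca[pair[0]] - model_ca[pair[1]])
    print(pair, f"{d:.1f} A", "OK" if satisfied(pair) else "violates constraint")
```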

  6. New generation of 3D desktop computer interfaces

    Science.gov (United States)

    Skerjanc, Robert; Pastoor, Siegmund

    1997-05-01

    Today's computer interfaces use 2-D displays showing windows, icons and menus and support mouse interactions for handling programs and data files. The interface metaphor is that of a writing desk with (partly) overlapping sheets of documents placed on its top. Recent advances in the development of 3-D display technology give the opportunity to take the interface concept a radical stage further by breaking the design limits of the desktop metaphor. The major advantage of the envisioned 'application space' is that it offers an additional, immediately perceptible dimension to clearly and constantly visualize the structure and current state of interrelations between documents, videos, application programs and networked systems. In this context, we describe the development of a visual operating system (VOS). Under VOS, applications appear as objects in 3-D space. Users can (graphically) connect selected objects to enable communication between the respective applications. VOS includes a general concept of visual and object oriented programming for tasks ranging from, e.g., low-level programming up to high-level application configuration. In order to enable practical operation in an office or at home for many hours, the system should be very comfortable to use. Since typical 3-D equipment used, e.g., in virtual-reality applications (head-mounted displays, data gloves) is rather cumbersome and straining, we suggest using off-head displays and contact-free interaction techniques. In this article, we introduce an autostereoscopic 3-D display and connected video based interaction techniques which allow viewpoint-dependent imaging (by head tracking) and visually controlled modification of data objects and links (by gaze tracking, e.g., to pick 3-D objects just by looking at them).

  7. VAP3D: a software for dosimetric analysis and visualization of phantons

    International Nuclear Information System (INIS)

    Lima, Lindeval Fernandes de; Lima, Fernando Roberto de Andrade

    2011-01-01

    The anthropomorphic models used in computational dosimetry of ionizing radiation, usually called voxel phantoms, are produced from CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) image stacks obtained by scanning patients or volunteers. These phantoms provide the geometry to be irradiated in computational exposure scenarios, using a Monte Carlo code, allowing the estimation of the energy deposited in each voxel of the virtual body. From the data collected in the simulation, it is possible to evaluate the average absorbed dose in the various radiosensitive organs and tissues cataloged by the International Commission on Radiological Protection (ICRP). A computational exposure model therefore consists primarily of the Monte Carlo code that simulates the transport, deposition and interaction of radiation, and the phantom being irradiated. The construction of voxel phantoms requires computational skills such as image format conversion, combination of 2D images for 3D image construction, quantization, resampling and image segmentation, among others. The computational dosimetry researcher rarely finds all these capabilities in a single piece of software, and this often slows the pace of research or leads to the sometimes inadequate use of alternative tools. This paper presents VAP3D (Visualization and Analysis of Phantoms), software developed with Qt/VTK in C++, in order to operationalize some of the tasks mentioned above. The current version is based on the DIP (Digital Imaging Processing) software and contains the File, Conversions and Tools menus, through which the user interacts with the software. (author)
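
    The dose evaluation described above reduces to aggregating per-voxel energy deposition by organ label: absorbed dose is total energy divided by total mass for the voxels belonging to each organ. The sketch below shows that aggregation on synthetic arrays; the organ ids, masses and energies are illustrative, not data from VAP3D or a real Monte Carlo run.

```python
# Sketch of organ-averaged absorbed dose from a voxel phantom: sum the energy
# deposited in the voxels of each organ and divide by the organ's mass.
import numpy as np

rng = np.random.default_rng(1)
organ_id = rng.integers(0, 3, size=(50, 50, 50))        # 0 = soft tissue, 1 = lung, 2 = bone
energy_j = rng.random((50, 50, 50)) * 1e-9              # energy deposited per voxel (J)
voxel_mass_kg = np.where(organ_id == 1, 2.6e-7, 1e-6)   # lighter voxels for lung

for organ, name in [(0, "soft tissue"), (1, "lung"), (2, "bone")]:
    mask = organ_id == organ
    dose_gray = energy_j[mask].sum() / voxel_mass_kg[mask].sum()
    print(f"{name}: {dose_gray:.3e} Gy")
```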

  8. 3D visualization of geo-scientific data for research and development purposes

    International Nuclear Information System (INIS)

    Mangeot, A.; Tabani, P.; Yven, B.; Dewonck, S.; Napier, B.; Watson, C.J.; Baker, G.R.; Shaw, R.P.

    2012-01-01

    Document available in extended abstract form only. In recent years, national geoscience organizations have increasingly delivered 3D model data as an output to the stakeholder community. Advances in both software and hardware have led to an increasing use of 3D depictions of geoscience data alongside standard 2D data formats such as maps and GIS data. By characterizing geoscience data in 3D, knowledge transfer between geo-scientists and stakeholders is improved, as the mindset and thought processes are communicated more effectively in a 3D model than in a 2D flat file format. 3D models allow the user to understand the conceptual basis of the 2D data and aid the decision-making process at local, regional and national scales. On 29 April 2009 a Memorandum of Understanding was signed between BGS and Andra in order to provide an improved mechanism for technical cooperation and collaboration in the Earth sciences. A specific agreement was signed on 1 December 2009 to evaluate the capacity of the 3D software GeoVisionary to represent the Underground Research Laboratory and its environment. GeoVisionary is the result of a collaboration between Virtalis and the British Geological Survey. Combining a powerful data engine with a virtual geological tool-kit enables geo-scientists to visualize, analyze and share large datasets seamlessly in an immersive, real-time environment. A typical GeoVisionary environment contains one or more of the following: 3D terrain files, aerial photography, bitmap overlays of specialized data, vector shapes and outlines, and 3D object models. The key benefits are: continuous streaming of geometry and photography in real time, visualization of 2D GIS data in immersive 3D stereo, diverse datasets in a single environment, the ability to 'fly' to any part of the data in seconds, infinite scalability, the ability to prepare and evaluate before fieldwork begins, enhanced team-working and increased efficiency of field operations, and clearer communication of results. Now, the 3D model has been

  9. Discovering hidden relationships between renal diseases and regulated genes through 3D network visualizations

    Directory of Open Access Journals (Sweden)

    Bhavnani Suresh K

    2010-11-01

    Full Text Available Abstract Background In a recent study, two-dimensional (2D) network layouts were used to visualize and quantitatively analyze the relationship between chronic renal diseases and regulated genes. The results revealed complex relationships between disease type, gene specificity, and gene regulation type, which led to important insights about the underlying biological pathways. Here we describe an attempt to extend our understanding of these complex relationships by reanalyzing the data using three-dimensional (3D) network layouts, displayed through 2D and 3D viewing methods. Findings The 3D network layout (displayed through the 3D viewing method) revealed that genes implicated in many diseases (non-specific genes) tended to be predominantly down-regulated, whereas genes regulated in a few diseases (disease-specific genes) tended to be up-regulated. This new global relationship was quantitatively validated through comparison to 1000 random permutations of networks of the same size and distribution. Our new finding appeared to be the result of using specific features of the 3D viewing method to analyze the 3D renal network. Conclusions The global relationship between gene regulation and gene specificity is the first clue from human studies that common mechanisms exist across several renal diseases, which suggests hypotheses for the underlying mechanisms. Furthermore, the study suggests hypotheses for why the 3D visualization helped to make salient a new regularity that was difficult to detect in 2D. Future research that tests these hypotheses should enable a more systematic understanding of when and how to use 3D network visualizations to reveal complex regularities in biological networks.
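    The permutation validation mentioned above can be sketched as follows. The synthetic node attributes, the choice of a Spearman correlation between gene specificity and mean regulation, and the count of 1000 shuffles are assumptions made for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_genes = 200
specificity = rng.integers(1, 10, n_genes)                     # diseases a gene is regulated in
regulation = -0.3 * specificity + rng.normal(0, 1, n_genes)    # synthetic mean regulation values

observed, _ = spearmanr(specificity, regulation)

# null distribution: shuffle the regulation values 1000 times and recompute the statistic
null = np.array([spearmanr(specificity, rng.permutation(regulation))[0] for _ in range(1000)])
p_value = np.mean(np.abs(null) >= abs(observed))
print(f"observed rho = {observed:.2f}, permutation p = {p_value:.3f}")
```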

  10. Visualization of protein interaction networks: problems and solutions

    Directory of Open Access Journals (Sweden)

    Agapito Giuseppe

    2013-01-01

    possibility to interact with external databases. Results Currently, many tools are available and it is not easy for users to choose one of them. Some tools offer sophisticated 2D and 3D network visualization and make many layout algorithms available, other tools are more data-oriented and support the integration of interaction data coming from different sources as well as data annotation. Finally, some specialized tools are dedicated to the analysis of pathways and cellular processes and are oriented toward systems biology studies, where the dynamic aspects of the processes being studied are central. Conclusion A current trend is the deployment of open, extensible visualization tools (e.g. Cytoscape) that may be incrementally enriched by the interactomics community with novel and more powerful functions for PIN analysis, through the development of plug-ins. On the other hand, another emerging trend regards the efficient and parallel implementation of the visualization engine, which may provide high interactivity and near real-time response times, as in NAViGaTOR. From a technological point of view, open-source, free and extensible tools like Cytoscape guarantee long-term sustainability due to the size of their developer and user communities, and provide great flexibility since new functions are continuously added by the developer community through new plug-ins; but the emerging parallel, often closed-source tools like NAViGaTOR can offer near real-time response times even in the analysis of very large PINs.

  11. Technical Note: Reliability of Suchey-Brooks and Buckberry-Chamberlain methods on 3D visualizations from CT and laser scans

    DEFF Research Database (Denmark)

    Villa, Chiara; Buckberry, Jo; Cattaneo, Cristina

    2013-01-01

    Previous studies have reported that the ageing method of Suchey-Brooks (pubic bone) and some of the features applied by Lovejoy et al. and Buckberry-Chamberlain (auricular surface) can be confidently performed on 3D visualizations from CT-scans. In this study, seven observers applied the Suchey-Brooks and the Buckberry-Chamberlain methods on 3D visualizations based on CT-scans and, for the first time, on 3D visualizations from laser scans. We examined how the bone features can be evaluated on 3D visualizations and whether the different modalities (direct observations of bones, 3D visualization from CT-scans and from laser scans) are comparable. … Inter-observer agreement was obtained in the evaluation of the pubic bone in all modalities. In 3D visualizations of the auricular surfaces, transverse organization and apical changes could be evaluated, although with high inter-observer variability; micro-, macroporosity and surface texture were very difficult to score…

  12. LandSIM3D: real-time 3D modelling of geographic data

    Directory of Open Access Journals (Sweden)

    Lambo Srl Lambo Srl

    2009-03-01

    Full Text Available LandSIM3D: real-time 3D modelling of geographic data. LandSIM3D makes it possible to model an existing landscape in 3D, geo-referenced, in only a few hours, offering powerful tools for landscape analysis and understanding. 3D projects can then be inserted into the existing landscape with ease and precision, and the project alternatives and their impact can be visualized and studied in their immediate surroundings. The complex future evolution of the landscape can also be simulated, and the landscape model can be manipulated interactively and shared more easily with colleagues. For these reasons, LandSIM3D differs from traditional 3D imagery solutions, which are normally reserved for computer graphics experts. For more information about LandSIM3D, go to www.landsim3d.com.

  13. USER–APPROPRIATE VIEWER FOR HIGH RESOLUTION INTERACTIVE ENGAGEMENT WITH 3D DIGITAL CULTURAL ARTEFACTS

    Directory of Open Access Journals (Sweden)

    D. Gillespie

    2013-07-01

    Full Text Available Three dimensional (3D) laser scanning is an important documentation technique for cultural heritage. This technology has been adopted from the engineering and aeronautical industry and is an invaluable tool for the documentation of objects within museum collections (La Pensée, 2008). The datasets created via close range laser scanning are extremely accurate and the created 3D dataset allows for a more detailed analysis in comparison to other documentation technologies such as photography. The dataset can be used for a range of different applications including: documentation; archiving; surface monitoring; replication; gallery interactives; educational sessions; conservation and visualization. However, the novel nature of a 3D dataset is presenting a rather unique challenge with respect to its sharing and dissemination. This is in part due to the need for specialised 3D software and a supported graphics card to display high resolution 3D models. This can be detrimental to one of the main goals of cultural institutions, which is to share knowledge and enable activities such as research, education and entertainment. This has limited the presentation of 3D models of cultural heritage objects to mainly either images or videos. Yet with recent developments in computer graphics, increased internet speed and emerging technologies such as Adobe's Stage 3D (Adobe, 2013) and WebGL (Khronos, 2013), it is now possible to share a dataset directly within a webpage. This allows website visitors to interact with the 3D dataset allowing them to explore every angle of the object, gaining an insight into its shape and nature. This can be very important considering that it is difficult to offer the same level of understanding of the object through the use of traditional mediums such as photographs and videos. Yet this presents a range of problems: this is a very novel experience and very few people have engaged with 3D objects outside of 3D software packages or games

  14. User-Appropriate Viewer for High Resolution Interactive Engagement with 3d Digital Cultural Artefacts

    Science.gov (United States)

    Gillespie, D.; La Pensée, A.; Cooper, M.

    2013-07-01

    Three dimensional (3D) laser scanning is an important documentation technique for cultural heritage. This technology has been adopted from the engineering and aeronautical industry and is an invaluable tool for the documentation of objects within museum collections (La Pensée, 2008). The datasets created via close range laser scanning are extremely accurate and the created 3D dataset allows for a more detailed analysis in comparison to other documentation technologies such as photography. The dataset can be used for a range of different applications including: documentation; archiving; surface monitoring; replication; gallery interactives; educational sessions; conservation and visualization. However, the novel nature of a 3D dataset is presenting a rather unique challenge with respect to its sharing and dissemination. This is in part due to the need for specialised 3D software and a supported graphics card to display high resolution 3D models. This can be detrimental to one of the main goals of cultural institutions, which is to share knowledge and enable activities such as research, education and entertainment. This has limited the presentation of 3D models of cultural heritage objects to mainly either images or videos. Yet with recent developments in computer graphics, increased internet speed and emerging technologies such as Adobe's Stage 3D (Adobe, 2013) and WebGL (Khronos, 2013), it is now possible to share a dataset directly within a webpage. This allows website visitors to interact with the 3D dataset allowing them to explore every angle of the object, gaining an insight into its shape and nature. This can be very important considering that it is difficult to offer the same level of understanding of the object through the use of traditional mediums such as photographs and videos. Yet this presents a range of problems: this is a very novel experience and very few people have engaged with 3D objects outside of 3D software packages or games. This paper

  15. Reconstruction of a 3D urban scene using VisualSfM

    Directory of Open Access Journals (Sweden)

    Laura Inzerillo

    2013-10-01

    Full Text Available Computer vision techniques today make it possible to build detailed 3D models quickly and automatically from photographic datasets. The academic community has shown growing interest in 3D reconstruction at the urban scale. Among the various tools available today, VisualSfM, developed by the University of Washington and Google, stands out. It is an open-source graphical interface built around algorithms dedicated to the Structure from Motion (SfM) technique. VisualSfM uses a feature extractor called SIFTGPU and a multicore Bundle Adjustment algorithm. A dense point cloud can also be obtained using the CMVS/PMVS2 algorithms. The aim of this study is to verify the metric accuracy of the reconstructions obtained through the integrated use of VisualSfM and CMVS/PMVS2. The approach was therefore tested on several sizeable datasets structured as curated photographic collections.
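    As a rough illustration of the kind of two-view reconstruction step that SfM pipelines such as VisualSfM automate (feature extraction, matching, relative pose recovery, triangulation), here is a sketch using OpenCV. The camera intrinsics and image file names are placeholders, and a real pipeline adds multi-view bundle adjustment and dense matching (CMVS/PMVS2).

```python
import cv2
import numpy as np

K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], float)   # assumed intrinsics
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)                # placeholder file names
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                        # SIFT features (VisualSfM uses SIFTGPU)
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# relative pose from the essential matrix, then triangulate a sparse point cloud
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T                # sparse 3D points in the first camera frame
print(cloud.shape)
```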

  16. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    Science.gov (United States)

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, and computer networks to help algorithm researchers test their ideas, demonstrate new findings, and teach algorithm design in the classroom. Within the broad applications of algorithm visualization, performance issues remain that deserve further research, e.g., system portability, collaboration capability, and animation effects in 3D environments. Using modern Java programming technologies, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.
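    One ingredient mentioned above, automatic camera positioning for tracking 3D geometric objects, is commonly approximated by framing the objects' bounding sphere. The Python sketch below shows the basic geometry only; it is not GeoBuilder's Java implementation, and the view direction and field of view are illustrative defaults.

```python
import numpy as np

def framing_camera(points, fov_deg=45.0, view_dir=(0.0, 0.0, -1.0)):
    """Place a camera so that all points fit inside the vertical field of view.

    Returns the look-at target (bounding-sphere centre) and the eye position,
    backed off along -view_dir far enough that the sphere fills the frustum.
    """
    pts = np.asarray(points, float)
    center = (pts.min(axis=0) + pts.max(axis=0)) / 2.0
    radius = np.linalg.norm(pts - center, axis=1).max()
    distance = radius / np.sin(np.radians(fov_deg) / 2.0)
    direction = np.asarray(view_dir, float)
    direction /= np.linalg.norm(direction)
    eye = center - direction * distance
    return center, eye

target, eye = framing_camera(np.random.rand(100, 3) * 10.0)
print(target, eye)
```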

  17. Eyes on the Earth 3D

    Science.gov (United States)

    Kulikov, anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

    Eyes on the Earth 3D software gives scientists, and the general public, a realtime, 3D interactive means of accurately viewing the real-time locations, speed, and values of recently collected data from several of NASA's Earth Observing Satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where the NASA fleet of these satellites is now, or where they will be up to a year in the future. The software also displays several Earth Science Data sets that have been collected on a daily basis. This application uses a third-party, 3D, realtime, interactive game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  18. Matlab script for 3D visualizing geodata on a rotating globe

    Czech Academy of Sciences Publication Activity Database

    Bezděk, Aleš; Sebera, Josef

    2013-01-01

    Roč. 56, July (2013), s. 127-130 ISSN 0098-3004 Institutional support: RVO:67985815 Keywords: 3D visualization * geoid height * elevation model Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 1.562, year: 2013

  19. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    Science.gov (United States)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output, conversion of ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitude measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of 3D city models. The performance and advantages of our graphical user interface (GUI) visualization system for LiDAR data are demonstrated.
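    The conversion described above, from ASCII point-cloud records to a "LiDAR image" whose pixel values are altitudes, can be sketched as a simple gridding step. The column order (x y z), the 1 m cell size and the keep-the-highest-return rule below are assumptions for illustration, and the sketch is in Python rather than the IDL used by the authors.

```python
import numpy as np

def lidar_to_height_image(xyz, cell=1.0):
    """Grid x, y, z points into a 2D raster whose pixel value is the highest z in each cell."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    col = ((x - x.min()) / cell).astype(int)
    row = ((y - y.min()) / cell).astype(int)
    img = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c, h in zip(row, col, z):
        if np.isnan(img[r, c]) or h > img[r, c]:    # keep the highest return per cell
            img[r, c] = h
    return img

points = np.loadtxt("tile.xyz")      # placeholder ASCII file with x y z columns
height = lidar_to_height_image(points)
```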

  20. Interactive visualization and analysis of multimodal datasets for surgical applications.

    Science.gov (United States)

    Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James

    2012-12-01

    Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.

  1. Interactive 3d Landscapes on Line

    Science.gov (United States)

    Fanini, B.; Calori, L.; Ferdani, D.; Pescarin, S.

    2011-09-01

    The paper describes challenges identified while developing browser-embedded 3D landscape rendering applications, our current approach and work-flow, and how recent developments in browser technologies could affect them. All the data, even when processed by optimization and decimation tools, result in very large databases that require paging, streaming and Level-of-Detail techniques to be implemented to allow remote, web-based, real-time use. Our approach has been to select an open-source, scene-graph-based visual simulation library with sufficient performance and flexibility and to adapt it to the web by providing a browser plug-in. Within the current Montegrotto VR Project, content produced with new pipelines has been integrated. The whole town of Montegrotto has been generated procedurally by CityEngine. We used this procedural approach, based on algorithms and procedures, because it is particularly suited to creating extensive and credible urban reconstructions. To create the archaeological sites we used optimized meshes acquired with laser scanning and photogrammetry techniques, whereas to realize the 3D reconstructions of the main historical buildings we adopted computer-graphics software such as Blender and 3ds Max. At the final stage, semi-automatic tools have been developed and used to prepare and cluster 3D models and scene-graph routes for web publishing. Vegetation generators have also been used to populate the virtual scene and enhance the realism perceived by the user during the navigation experience. After the description of the 3D modelling and optimization techniques, the paper focuses on and discusses its results and expectations.
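    The Level-of-Detail selection that such streaming viewers rely on is usually driven by the projected screen-space error of each mesh resolution. A minimal version of that test is sketched below; the tile names, per-level geometric errors and the pixel threshold are hypothetical values, not taken from the Montegrotto project.

```python
import math

def screen_space_error(geometric_error, distance, fov_deg, viewport_height_px):
    """Project a model-space geometric error (metres) to pixels on screen."""
    return geometric_error * viewport_height_px / (2.0 * distance * math.tan(math.radians(fov_deg) / 2.0))

def pick_lod(levels, distance, fov_deg=60.0, viewport_height_px=1080, max_error_px=4.0):
    """levels: list of (mesh_name, geometric_error) sorted from coarse to fine."""
    for name, err in levels:
        if screen_space_error(err, distance, fov_deg, viewport_height_px) <= max_error_px:
            return name          # coarsest level whose error is invisible at this distance
    return levels[-1][0]         # fall back to the finest level

lods = [("town_lod2", 8.0), ("town_lod1", 2.0), ("town_lod0", 0.5)]   # hypothetical tiles
print(pick_lod(lods, 5000.0), pick_lod(lods, 500.0), pick_lod(lods, 30.0))
```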

  2. INTERACTIVE 3D LANDSCAPES ON LINE

    Directory of Open Access Journals (Sweden)

    B. Fanini

    2012-09-01

    Full Text Available The paper describes challenges identified while developing browser-embedded 3D landscape rendering applications, our current approach and work-flow, and how recent developments in browser technologies could affect them. All the data, even when processed by optimization and decimation tools, result in very large databases that require paging, streaming and Level-of-Detail techniques to be implemented to allow remote, web-based, real-time use. Our approach has been to select an open-source, scene-graph-based visual simulation library with sufficient performance and flexibility and to adapt it to the web by providing a browser plug-in. Within the current Montegrotto VR Project, content produced with new pipelines has been integrated. The whole town of Montegrotto has been generated procedurally by CityEngine. We used this procedural approach, based on algorithms and procedures, because it is particularly suited to creating extensive and credible urban reconstructions. To create the archaeological sites we used optimized meshes acquired with laser scanning and photogrammetry techniques, whereas to realize the 3D reconstructions of the main historical buildings we adopted computer-graphics software such as Blender and 3ds Max. At the final stage, semi-automatic tools have been developed and used to prepare and cluster 3D models and scene-graph routes for web publishing. Vegetation generators have also been used to populate the virtual scene and enhance the realism perceived by the user during the navigation experience. After the description of the 3D modelling and optimization techniques, the paper focuses on and discusses its results and expectations.

  3. Quantification and visualization of alveolar bone resorption from 3D dental CT images

    International Nuclear Information System (INIS)

    Nagao, Jiro; Mori, Kensaku; Kitasaka, Takayuki; Suenaga, Yasuhito; Yamada, Shohzoh; Naitoh, Munetaka

    2007-01-01

    Purpose A computer aided diagnosis (CAD) system for quantifying and visualizing alveolar bone resorption caused by periodontitis was developed based on three-dimensional (3D) image processing of dental CT images. Methods The proposed system enables visualization and quantification of resorption of alveolar bone surrounding and between the roots of teeth. It has the following functions: (1) vertical measurement of the depth of resorption surrounding the tooth in 3D images, avoiding physical obstruction; (2) quantification of the amount of resorption in the furcation area; and (3) visualization of quantification results by pseudo-color maps, graphs, and motion pictures. The resorption measurement accuracy in the area surrounding the teeth was evaluated by comparison with dentists' assessments on five real patient CT images, giving an average absolute difference of 0.87 mm. An artificial image with mathematical ground truth was also used for measurement evaluation. Results The average absolute difference was 0.36 and 0.10 mm for the surrounding and furcation areas, respectively. The system provides an intuitive presentation of the measurement results. Conclusion Computer aided diagnosis of 3D dental CT scans is feasible and the technique is a promising new tool for the quantitative evaluation of periodontal bone loss. (orig.)

  4. IGLANCE: interactive free viewpoint for 3D TV

    NARCIS (Netherlands)

    Zinger, S.; Do, Q.L.; Ruijters, D.; With, de P.H.N.

    2010-01-01

    The iGLANCE project aims at making interactive free viewpoint selection possible in 3D TV broadcast media. This means that the viewer can select and interactively change the viewpoint of a stereoscopic streamed video. The interactivity is enabled by broadcasting a number of video streams from

  5. Interactive 3D segmentation using connected orthogonal contours

    NARCIS (Netherlands)

    de Bruin, P. W.; Dercksen, V. J.; Post, F. H.; Vossepoel, A. M.; Streekstra, G. J.; Vos, F. M.

    2005-01-01

    This paper describes a new method for interactive segmentation that is based on cross-sectional design and 3D modelling. The method represents a 3D model by a set of connected contours that are planar and orthogonal. Planar contours overlayed on image data are easily manipulated and linked contours

  6. STRING 3: An Advanced Groundwater Flow Visualization Tool

    Science.gov (United States)

    Schröder, Simon; Michel, Isabel; Biedert, Tim; Gräfe, Marius; Seidel, Torsten; König, Christoph

    2016-04-01

    neighboring faces is extracted. Similar algorithms help to find the 2D boundary of cuts through the 3D model. As interactivity plays a big role for an exploration tool, the speed of the drawing routines is also important. To achieve this, different pathlet rendering solutions have been developed and benchmarked. These provide a trade-off between the use of geometry and fragment shaders. We show that point sprite shaders have superior performance and visual quality over geometry-based approaches. Admittedly, the point-sprite-based approach has many non-trivial problems in joining the different parts of the pathlet geometry. This research is funded by the Federal Ministry for Economic Affairs and Energy (Germany). [1] T. Seidel, C. König, M. Schäfer, I. Ostermann, T. Biedert, D. Hietel (2014). Intuitive visualization of transient groundwater flow. Computers & Geosciences, Vol. 67, pp. 173-179. [2] I. Michel, S. Schröder, T. Seidel, C. König (2015). Intuitive Visualization of Transient Flow: Towards a Full 3D Tool. Geophysical Research Abstracts, Vol. 17, EGU2015-1670. [3] S. Schröder, I. Michel, T. Seidel, C.M. König (2015). STRING 3: Full 3D visualization of groundwater flow. In Proceedings of IAMG 2015 Freiberg, pp. 813-822.

  7. Integration of Notification with 3D Visualization of Rover Operations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — 3D visualization has proven effective at orienting remote ground controllers about robots operating on a planetary surface. Using such displays, controllers can...

  8. Arena3D: visualizing time-driven phenotypic differences in biological systems

    Directory of Open Access Journals (Sweden)

    Secrier Maria

    2012-03-01

    Full Text Available Abstract Background Elucidating the genotype-phenotype connection is one of the big challenges of modern molecular biology. To fully understand this connection, it is necessary to consider the underlying networks and the time factor. In this context of data deluge and heterogeneous information, visualization plays an essential role in interpreting complex and dynamic topologies. Thus, software that is able to bring the network, phenotypic and temporal information together is needed. Arena3D has been previously introduced as a tool that facilitates link discovery between processes. It uses a layered display to separate different levels of information while emphasizing the connections between them. We present novel developments of the tool for the visualization and analysis of dynamic genotype-phenotype landscapes. Results Version 2.0 introduces novel features that allow handling time course data in a phenotypic context. Gene expression levels or other measures can be loaded and visualized at different time points and phenotypic comparison is facilitated through clustering and correlation display or highlighting of impacting changes through time. Similarity scoring allows the identification of global patterns in dynamic heterogeneous data. In this paper we demonstrate the utility of the tool on two distinct biological problems of different scales. First, we analyze a medium scale dataset that looks at perturbation effects of the pluripotency regulator Nanog in murine embryonic stem cells. Dynamic cluster analysis suggests alternative indirect links between Nanog and other proteins in the core stem cell network. Moreover, recurrent correlations from the epigenetic to the translational level are identified. Second, we investigate a large scale dataset consisting of genome-wide knockdown screens for human genes essential in the mitotic process. Here, a potential new role for the gene lsm14a in cytokinesis is suggested. We also show how phenotypic

  9. Does 3D produce more symptoms of visually induced motion sickness?

    Science.gov (United States)

    Naqvi, Syed Ali Arsalan; Badruddin, Nasreen; Malik, Aamir Saeed; Hazabbah, Wan; Abdullah, Baharudin

    2013-01-01

    3D stereoscopy technology, with high-quality images and depth perception, provides entertainment to its viewers. However, the technology is not yet mature and may sometimes have adverse effects on viewers; some viewers have reported discomfort when watching videos with 3D technology. In this research we performed an experiment showing a movie to participants in 2D and 3D conditions. Subjective and objective data were recorded and compared in both conditions. Results from subjective reporting show that Visually Induced Motion Sickness (VIMS) is significantly higher in the 3D condition. For the objective measurement, ECG data were recorded to derive Heart Rate Variability (HRV), and the LF/HF ratio, an index of sympathetic nerve activity, was analyzed to follow changes in the participants' state over time. The average scores of nausea, disorientation and the total SSQ score show that there is a significant difference between the 3D condition and 2D. However, the LF/HF ratio did not show a significant difference throughout the experiment.
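    The LF/HF index used above is derived from the spectrum of the RR-interval series. A hedged sketch of one common way to compute it (evenly resampling the irregular RR series and integrating the standard 0.04-0.15 Hz and 0.15-0.4 Hz bands of a Welch spectrum) is shown below; it is not the authors' exact processing chain, and the toy RR series is synthetic.

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_intervals_s, fs=4.0):
    """LF/HF ratio from RR intervals (seconds), often used as a sympathetic-activity index."""
    t = np.cumsum(rr_intervals_s)                       # beat times
    grid = np.arange(t[0], t[-1], 1.0 / fs)             # resample to an even grid
    rr_even = np.interp(grid, t, rr_intervals_s)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf / hf

# toy RR series: ~0.8 s beats with slow and fast variability
rr = 0.8 + 0.05 * np.sin(np.arange(300) * 0.3) + 0.02 * np.random.randn(300)
print(lf_hf_ratio(rr))
```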

  10. Introduction of 3D Printing Technology in the Classroom for Visually Impaired Students

    Science.gov (United States)

    Jo, Wonjin; I, Jang Hee; Harianto, Rachel Ananda; So, Ji Hyun; Lee, Hyebin; Lee, Heon Ju; Moon, Myoung-Woon

    2016-01-01

    The authors investigate how 3D printing technology could be utilized for instructional materials that allow visually impaired students to have full access to high-quality instruction in history class. Researchers from the 3D Printing Group of the Korea Institute of Science and Technology (KIST) provided the Seoul National School for the Blind with…

  11. Development and application of visual support module for remote operator in 3D virtual environment

    International Nuclear Information System (INIS)

    Choi, Kyung Hyun; Cho, Soo Jeong; Yang, Kyung Boo; Bae, Chang Hyun

    2006-02-01

    In this research, a 3D graphic environment was developed for remote operation, including a visual support module. The real operation environment was built by employing an experimental robot, and an identical virtual model was developed as well. Well-designed virtual models can be used to derive the conditions necessary for developing the devices and processes. The integration of the 3D virtual models, the experimental operation environment, and the visual support module was used to evaluate the operation efficiency and accuracy obtained with different methods, namely the monitor image only and the monitor image with the visual support module

  12. Development and application of visual support module for remote operator in 3D virtual environment

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Kyung Hyun; Cho, Soo Jeong; Yang, Kyung Boo [Cheju Nat. Univ., Jeju (Korea, Republic of); Bae, Chang Hyun [Pusan Nat. Univ., Busan (Korea, Republic of)

    2006-02-15

    In this research, a 3D graphic environment was developed for remote operation, including a visual support module. The real operation environment was built by employing an experimental robot, and an identical virtual model was developed as well. Well-designed virtual models can be used to derive the conditions necessary for developing the devices and processes. The integration of the 3D virtual models, the experimental operation environment, and the visual support module was used to evaluate the operation efficiency and accuracy obtained with different methods, namely the monitor image only and the monitor image with the visual support module.

  13. 3D modeling and visualization software for complex geometries

    International Nuclear Information System (INIS)

    Guse, Guenter; Klotzbuecher, Michael; Mohr, Friedrich

    2011-01-01

    Reactor safety depends on reliable nondestructive testing of reactor components. For a 100% detection probability of flaws and the determination of their size using ultrasonic methods, the ultrasonic waves have to hit the flaws within specific incidence and squint angle ranges. For complex test geometries, such as testing nozzle welds from the outside of the component, these angular ranges can only be determined using elaborate mathematical calculations. The authors developed a 3D modeling and visualization software tool that allows ultrasonic measuring data to be integrated into and presented within the 3D geometry. The software package was verified using 1:1 test samples (examples: testing of the nozzle edge of the feedwater nozzle of a steam generator from the outside; testing of the reactor pressure vessel nozzle edge from the inside).

  14. Quantification and visualization of alveolar bone resorption from 3D dental CT images

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Jiro; Mori, Kensaku; Kitasaka, Takayuki; Suenaga, Yasuhito [Nagoya University, Graduate School of Information Science, Nagoya (Japan); Yamada, Shohzoh; Naitoh, Munetaka [Aichi-Gakuin University, School of Dentistry, Nagoya (Japan)

    2007-06-15

    Purpose A computer aided diagnosis (CAD) system for quantifying and visualizing alveolar bone resorption caused by periodontitis was developed based on three-dimensional (3D) image processing of dental CT images. Methods The proposed system enables visualization and quantification of resorption of alveolar bone surrounding and between the roots of teeth. It has the following functions: (1) vertical measurement of the depth of resorption surrounding the tooth in 3D images, avoiding physical obstruction; (2) quantification of the amount of resorption in the furcation area; and (3) visualization of quantification results by pseudo-color maps, graphs, and motion pictures. The resorption measurement accuracy in the area surrounding the teeth was evaluated by comparison with dentists' assessments on five real patient CT images, giving an average absolute difference of 0.87 mm. An artificial image with mathematical ground truth was also used for measurement evaluation. Results The average absolute difference was 0.36 and 0.10 mm for the surrounding and furcation areas, respectively. The system provides an intuitive presentation of the measurement results. Conclusion Computer aided diagnosis of 3D dental CT scans is feasible and the technique is a promising new tool for the quantitative evaluation of periodontal bone loss. (orig.)

  15. 3D Printing Meets Astrophysics: A New Way to Visualize and Communicate Science

    Science.gov (United States)

    Madura, Thomas Ignatius; Steffen, Wolfgang; Clementel, Nicola; Gull, Theodore R.

    2015-08-01

    3D printing has the potential to improve the astronomy community’s ability to visualize, understand, interpret, and communicate important scientific results. I summarize recent efforts to use 3D printing to understand in detail the 3D structure of a complex astrophysical system, the supermassive binary star Eta Carinae and its surrounding bipolar ‘Homunculus’ nebula. Using mapping observations of molecular hydrogen line emission obtained with the ESO Very Large Telescope, we obtained a full 3D model of the Homunculus, allowing us to 3D print, for the first time, a detailed replica of a nebula (Steffen et al. 2014, MNRAS, 442, 3316). I also present 3D prints of output from supercomputer simulations of the colliding stellar winds in the highly eccentric binary located near the center of the Homunculus (Madura et al. 2015, arXiv:1503.00716). These 3D prints, the first of their kind, reveal previously unknown ‘finger-like’ structures at orbital phases shortly after periastron (when the two stars are closest to each other) that protrude outward from the spiral wind-wind collision region. The results of both efforts have received significant media attention in recent months, including two NASA press releases (http://www.nasa.gov/content/goddard/astronomers-bring-the-third-dimension-to-a-doomed-stars-outburst/ and http://www.nasa.gov/content/goddard/nasa-observatories-take-an-unprecedented-look-into-superstar-eta-carinae/), demonstrating the potential of using 3D printing for astronomy outreach and education. Perhaps more importantly, 3D printing makes it possible to bring the wonders of astronomy to new, often neglected, audiences, i.e. the blind and visually impaired.

  16. The Investigation on Using Unity3D Game Engine in Urban Design Study

    Directory of Open Access Journals (Sweden)

    Aswin Indraprastha

    2009-05-01

    Full Text Available Developing a virtual 3D environment using a game engine is a strategy for incorporating various multimedia data into one platform. Game engines come pre-equipped with interaction and navigation tools that allow users to explore and engage with the game objects. However, most CAD and GIS applications are not equipped with 3D tools and navigation systems oriented toward the user experience. 3D game engines, in particular, provide standard 3D navigation tools as well as programmable views to create engaging navigation through the virtual environment. Using a game engine, it is possible to create other interactions such as object manipulation and non-playing character (NPC) interaction with the player and/or environment. We conducted an analysis of previous game engines and an experiment on an urban design project with the Unity3D game engine for visualization and interactivity. Finally, we present the advantages and limitations of using game technology as a visual representation tool for architecture and urban design studies.

  17. Revealing Social Values by 3D City Visualization in City Transformations

    Directory of Open Access Journals (Sweden)

    Tim Johansson

    2016-02-01

    Full Text Available Social sustainability is a widely used concept in urban planning research and practice. However, knowledge of the spatial distributions of social values and aspects of social sustainability is required. Visualization of these distributions is also highly valuable, but challenging, and rarely attempted in sparsely populated urban environments in rural areas. This article presents a method that highlights social values in spatial models through 3D visualization, describes the methodology to generate the models, and discusses potential applications. The models were created using survey, building, infrastructure and demographic data for Gällivare, Sweden, a small city facing major transformation due to mining subsidence. It provides an example of how 3D models of important social sustainability indices can be designed to display citizens’ attitudes regarding their financial status, the built environment, social inclusion and welfare services. The models helped identify spatial variations in perceptions of the built environment that correlate (inter alia) with closeness to certain locations, gender and distances to public buildings. Potential uses of the model for supporting efforts by practitioners, researchers and citizens to visualize and understand social values in similar urban environments are discussed, together with ethical issues (particularly regarding degrees of anonymity) concerning its wider use for inclusive planning.

  18. Interactive Processing and Visualization of Image Data for Biomedical and Life Science Applications

    Energy Technology Data Exchange (ETDEWEB)

    Staadt, Oliver G.; Natarajan, Vijay; Weber, Gunther H.; Wiley, David F.; Hamann, Bernd

    2007-02-01

    Background: Applications in biomedical science and life science produce large data sets using increasingly powerful imaging devices and computer simulations. It is becoming increasingly difficult for scientists to explore and analyze these data using traditional tools. Interactive data processing and visualization tools can help scientists overcome these limitations. Results: We show that new data processing tools and visualization systems can be used successfully in biomedical and life science applications. We present an adaptive high-resolution display system suitable for biomedical image data, algorithms for analyzing and visualizing protein surfaces and retinal optical coherence tomography data, and visualization tools for 3D gene expression data. Conclusion: We demonstrated that interactive processing and visualization methods and systems can support scientists in a variety of biomedical and life science application areas concerned with massive data analysis.

  19. Towards Online Visualization and Interactive Monitoring of Real-Time CFD Simulations on Commodity Hardware

    Directory of Open Access Journals (Sweden)

    Nils Koliha

    2015-09-01

    Full Text Available Real-time rendering in the realm of computational fluid dynamics (CFD) in particular, and scientific high performance computing (HPC) in general, is a comparably young field of research, as the complexity of most problems with practical relevance is too high for a real-time numerical simulation. However, recent advances in HPC and the development of very efficient numerical techniques allow running the first optimized numerical simulations in or near real time, which in turn requires integrated and optimized visualization techniques that do not affect performance. In this contribution, we present concepts, implementation details and several application examples of a minimally invasive, efficient visualization tool for the interactive monitoring of 2D and 3D turbulent flow simulations on commodity hardware. The numerical simulations are conducted with ELBE, an efficient lattice Boltzmann environment based on NVIDIA CUDA (Compute Unified Device Architecture), which provides optimized numerical kernels for 2D and 3D computational fluid dynamics with fluid-structure interactions and turbulence.
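    The simulation side can be caricatured with a minimal single-threaded D2Q9 lattice Boltzmann update in NumPy, interleaved with an occasional visualization sample so that the solver is barely disturbed. This is only a sketch of the idea, not the ELBE CUDA kernels; the grid size, relaxation time and periodic boundaries are illustrative assumptions.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

ny, nx, tau = 64, 256, 0.6
rho = np.ones((ny, nx))
ux = np.full((ny, nx), 0.05)          # uniform initial flow to the right
uy = np.zeros((ny, nx))
f = equilibrium(rho, ux, uy)

frames = []
for step in range(500):
    # streaming: shift each population along its lattice velocity (periodic domain)
    for i, (cx, cy) in enumerate(c):
        f[i] = np.roll(np.roll(f[i], cy, axis=0), cx, axis=1)
    # macroscopic moments and BGK collision
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f -= (f - equilibrium(rho, ux, uy)) / tau
    # "online visualization": sample the flow field at a low rate
    if step % 50 == 0:
        frames.append(np.sqrt(ux**2 + uy**2).copy())
```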

  20. 3D, parallel fluid-structure interaction code

    CSIR Research Space (South Africa)

    Oxtoby, Oliver F

    2011-01-01

    Full Text Available The authors describe the development of a 3D parallel Fluid–Structure–Interaction (FSI) solver and its application to benchmark problems. Fluid and solid domains are discretised using an edge-based finite-volume scheme for efficient parallel...

  1. High-Resolution Multibeam Sonar Survey and Interactive 3-D Exploration of the D-Day Wrecks off Normandy

    Science.gov (United States)

    Mayer, L. A.; Calder, B.; Schmidt, J. S.

    2003-12-01

    Historically, archaeological investigations use sidescan sonar and marine magnetometers as initial search tools. Targets are then examined through direct observation by divers, video, or photographs. Magnetometers can demonstrate the presence, absence, and relative susceptibility of ferrous objects but provide little indication of the nature of the target. Sidescan sonar can present a clear image of the overall nature of a target and its surrounding environment, but the sidescan image is often distorted and contains little information about the true 3-D shape of the object. Optical techniques allow precise identification of objects but suffer from very limited range, even in the best of situations. Modern high-resolution multibeam sonar offers an opportunity to cover a relatively large area from a safe distance above the target, while resolving the true three-dimensional (3-D) shape of the object with centimeter-level resolution. The combination of 3-D mapping and interactive 3-D visualization techniques provides a powerful new means to explore underwater artifacts. A clear demonstration of the applicability of high-resolution multibeam sonar to wreck and artifact investigations occurred when the Naval Historical Center (NHC), the Center for Coastal and Ocean Mapping (CCOM) at the University of New Hampshire, and Reson Inc. collaborated to explore the state of preservation and the impact on the surrounding environment of a series of wrecks located off the coast of Normandy, France, adjacent to the American landing sectors. The survey augmented previously collected magnetometer and high-resolution sidescan sonar data using a Reson 8125 high-resolution focused multibeam sonar with 240 0.5° (at nadir) beams distributed over a 120° swath. The team investigated 21 areas in water depths ranging from about 3 to 30 meters (m); some areas contained individual targets such as landing craft, barges, a destroyer, a troop carrier, etc., while others contained multiple smaller

  2. i3Drive, a 3D interactive driving simulator.

    Science.gov (United States)

    Ambroz, Miha; Prebil, Ivan

    2010-01-01

    i3Drive, a wheeled-vehicle simulator, can accurately simulate vehicles of various configurations with up to eight wheels in real time on a desktop PC. It presents the vehicle dynamics as an interactive animation in a virtual 3D environment. The application is fully GUI-controlled, giving users an easy overview of the simulation parameters and letting them adjust those parameters interactively. It models all relevant vehicle systems, including the mechanical models of the suspension, power train, and braking and steering systems. The simulation results generally correspond well with actual measurements, making the system useful for studying vehicle performance in various driving scenarios. i3Drive is thus a worthy complement to other, more complex tools for vehicle-dynamics simulation and analysis.

  3. [Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering].

    Science.gov (United States)

    Günther, P; Tröger, J; Holland-Cunz, S; Waag, K L; Schenk, J P

    2006-08-01

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from this. A newly developed and modified raycasting-based powerful 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to the easy handling and high-quality visualization with enormous gain of information, the presented system is now an established part of routine surgical planning.

  4. Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering

    International Nuclear Information System (INIS)

    Guenther, P.; Holland-Cunz, S.; Waag, K.L.

    2006-01-01

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from this. A newly developed and modified raycasting-based powerful 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to the easy handling and high-quality visualization with enormous gain of information, the presented system is now an established part of routine surgical planning. (orig.) [de

  5. 3D computer visualization and animation of CANDU reactor core

    International Nuclear Information System (INIS)

    Qian, T.; Echlin, M.; Tonner, P.; Sur, B.

    1999-01-01

    Three-dimensional (3D) computer visualization and animation models of typical CANDU reactor cores (Darlington, Point Lepreau) have been developed using world-wide-web (WWW) browser based tools: JavaScript, hyper-text-markup language (HTML) and virtual reality modeling language (VRML). The 3D models provide three-dimensional views of internal control and monitoring structures in the reactor core, such as fuel channels, flux detectors, liquid zone controllers, zone boundaries, shutoff rods, poison injection tubes, ion chambers. Animations have been developed based on real in-core flux detector responses and rod position data from reactor shutdown. The animations show flux changing inside the reactor core with the drop of shutoff rods and/or the injection of liquid poison. The 3D models also provide hypertext links to documents giving specifications and historical data for particular components. Data in HTML format (or other format such as PDF, etc.) can be shown in text, tables, plots, drawings, etc., and further links to other sources of data can also be embedded. This paper summarizes the use of these WWW browser based tools, and describes the resulting 3D reactor core static and dynamic models. Potential applications of the models are discussed. (author)

  6. Visualization of volumetric seismic data

    Science.gov (United States)

    Spickermann, Dela; Böttinger, Michael; Ashfaq Ahmed, Khawar; Gajewski, Dirk

    2015-04-01

    Mostly driven by demands of high quality subsurface imaging, highly specialized tools and methods have been developed to support the processing, visualization and interpretation of seismic data. 3D seismic data acquisition and 4D time-lapse seismic monitoring are well-established techniques in academia and industry, producing large amounts of data to be processed, visualized and interpreted. In this context, interactive 3D visualization methods have proved valuable for the analysis of 3D seismic data cubes, especially for sedimentary environments with continuous horizons. In crystalline and hard rock environments, where hydraulic stimulation techniques may be applied to produce geothermal energy, interpretation of the seismic data is a more challenging problem. Instead of continuous reflection horizons, the imaging targets are often steeply dipping faults, causing many diffractions. Without further preprocessing, these geological structures are often hidden behind the noise in the data. In this PICO presentation we will present a workflow consisting of data processing steps, which enhance the signal-to-noise ratio, followed by a visualization step based on the use of the commercially available general-purpose 3D visualization system Avizo. Specifically, we have used Avizo Earth, an extension to Avizo which supports the import of seismic data in SEG-Y format and offers easy access to state-of-the-art 3D visualization methods at interactive frame rates, even for large seismic data cubes. In seismic interpretation using visualization, interactivity is a key requirement for understanding complex 3D structures. In order to enable easy communication of the insights gained during the interactive visualization process, animations of the visualized data were created to support the spatial understanding of the data.
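    A heavily reduced version of such a workflow, noise suppression followed by slicing a data cube for display, can be sketched as below. The Gaussian smoothing stands in for whatever signal-enhancement steps the authors actually applied, and the synthetic cube replaces a real SEG-Y volume.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# synthetic stand-in for a seismic cube with shape (inline, crossline, time sample)
cube = np.random.randn(128, 128, 256)
cube[:, :, 100:104] += 3.0                      # a fake "reflector" buried in noise

# simple signal enhancement: mild spatial smoothing to raise the signal-to-noise ratio
smoothed = gaussian_filter(cube, sigma=(2, 2, 1))

# extract orthogonal slices that a 3D viewer (e.g. Avizo) would render interactively
inline_slice = smoothed[64, :, :]
crossline_slice = smoothed[:, 64, :]
time_slice = smoothed[:, :, 102]
print(inline_slice.shape, crossline_slice.shape, time_slice.shape)
```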

  7. Integrating 2D Mouse Emulation with 3D Manipulation for Visualizations on a Multi-Touch Table

    NARCIS (Netherlands)

    Vlaming, Luc; Collins, Christopher; Hancock, Mark; Nacenta, Miguel; Isenberg, Tobias; Carpendale, Sheelagh

    2010-01-01

    We present the Rizzo, a multi-touch virtual mouse that has been designed to provide the fine grained interaction for information visualization on a multi-touch table. Our solution enables touch interaction for existing mouse-based visualizations. Previously, this transition to a multi-touch

  8. A Case Study in Astronomical 3D Printing: The Mysterious η Carinae

    Science.gov (United States)

    Madura, Thomas I.

    2017-05-01

    Three-dimensional (3D) printing moves beyond interactive 3D graphics and provides an excellent tool for both visual and tactile learners, since 3D printing can now easily communicate complex geometries and full color information. Some limitations of interactive 3D graphics are also alleviated by 3D printable models, including issues of limited software support, portability, accessibility, and sustainability. We describe the motivations, methods, and results of our work on using 3D printing (1) to visualize and understand the η Car Homunculus nebula and central binary system and (2) for astronomy outreach and education, specifically, with visually impaired students. One new result we present is the ability to 3D print full-color models of η Car’s colliding stellar winds. We also demonstrate how 3D printing has helped us communicate our improved understanding of the detailed structure of η Car’s Homunculus nebula and central binary colliding stellar winds, and their links to each other. Attached to this article are full-color 3D printable files of both a red-blue Homunculus model and the η Car colliding stellar winds at orbital phase 1.045. 3D printing could prove to be vital to how astronomers reach out and share their work with each other, the public, and new audiences.

  9. Examining the Conceptual Understandings of Geoscience Concepts of Students with Visual Impairments: Implications of 3-D Printing

    Science.gov (United States)

    Koehler, Karen E.

    with fragments. Most of the participants in the study increased their scientific understanding of plate tectonics and other geoscience concepts and held more scientific understandings after instruction than before instruction. All students had misconceptions before the instructional period began, but there were fewer misconceptions after the instructional period. Students in the TG group not only had fewer misconceptions than the 3D group before instruction, but also after instruction. Many of the student misconceptions were similar to those held by students with typical vision; however, some were unique to students with visual impairments. One unique aspect of this study was the examination of student mental models, which had not previously been done with students with visual impairments but is more commonplace in research on students with typical vision. Student mental models were often descriptive rather than explanatory, often incorporating scientific language but not clearly showing that the student had a complete grasp of the concept. Consistent with prior research, the use of 3-D printed models instead of tactile graphics seemed to make little difference, either positively or negatively, on student conceptual understanding; however, the participants did interact with the 3-D printed models differently, sometimes gleaning additional information from them. This study also provides additional support for inquiry-based instruction as an effective means of science instruction for students with visual impairments.

  10. Analytical calculation of magnet interactions in 3D

    OpenAIRE

    Yonnet , Jean-Paul; Allag , Hicham

    2009-01-01

    A synthesis of all the analytical expressions of the interaction energy, force components and torque components is presented. It allows the analytical calculation of all the interactions when the magnetizations are in any direction. The 3D analytical expressions are difficult to obtain, but the torque and force expressions are very simple to use.

  11. Suitability of online 3D visualization technique in oil palm plantation management

    Science.gov (United States)

    Mat, Ruzinoor Che; Nordin, Norani; Zulkifli, Abdul Nasir; Yusof, Shahrul Azmi Mohd

    2016-08-01

    The oil palm industry has been the backbone of Malaysia's economic growth, and exports of this commodity increase almost every year. Therefore, many studies focus on how to help this industry increase its productivity. In order to increase productivity, the management of oil palm plantations needs to be improved and strengthened. One way of helping oil palm managers is to implement an online 3D visualization technique for oil palm plantations using game engine technology. The potential of this application is that it can help in fertilizer and irrigation management. For this reason, the aim of this paper is to investigate the issues in managing oil palm plantations from the viewpoint of oil palm managers through interviews. The results of these interviews will help identify the issues that could best be addressed by implementing an online 3D visualization technique for oil palm plantation management.

  12. Visualizing measurement for 3D smooth density distributions by means of linear programming

    International Nuclear Information System (INIS)

    Tayama, Norio; Yang, Xue-dong

    1994-01-01

    This paper is concerned with the theoretical possibility of a new visualizing measurement method based on an optimum 3D reconstruction from a few selected projections. A theory of optimum 3D reconstruction by linear programming is discussed, utilizing a few projections of a sampled 3D smooth-density-distribution model which satisfies the condition of the 3D sampling theorem. First, by use of the sampling theorem, it is shown that we can set up simultaneous linear equations corresponding to the case of parallel beams. These equations are then solved by means of a linear programming algorithm, yielding an optimum 3D density distribution image with minimum reconstruction error. The results of computer simulations with the algorithm are presented. (author)
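
    The abstract above describes forming simultaneous projection equations and solving them with linear programming. As a rough illustration of that idea (not the authors' algorithm; the projection matrix, grid size, and helper name below are assumptions made for the example), an L1-error formulation can be written with SciPy's linprog:

```python
# Minimal sketch: recover a sampled density vector x from a few parallel-beam
# projections p = A @ x by minimizing the L1 reconstruction error with an LP.
import numpy as np
from scipy.optimize import linprog

def reconstruct_l1(A, p):
    """A: (m, n) projection matrix, p: (m,) measured ray sums."""
    m, n = A.shape
    # Variables [x (n), t (m)]; minimize sum(t) subject to |A x - p| <= t and x >= 0.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)],
                     [-A, -np.eye(m)]])
    b_ub = np.concatenate([p, -p])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + m), method="highs")
    return res.x[:n]

# Toy 2x2 density grid observed along its rows, columns, and one diagonal ray.
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
print(reconstruct_l1(A, A @ x_true))   # recovers approximately [1, 2, 3, 4]
```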

  13. Review of Dactyl: an Interactive 3D Osteology App [iPad

    Directory of Open Access Journals (Sweden)

    Reviewed by Alison Atkin

    2015-01-01

    Full Text Available The study of human osteology has always relied on access to real skeletal remains from collections for teaching, learning, and reference. It has long been supplemented by representational and replica materials, some two-dimensional, such as illustrations and photographs, and others three-dimensional, such as casts and, more recently, 3D digital objects. All are important aids in the study of osteology; however, some of the most exciting advances have recently been made in interactive 3D digital osteology objects. In recent years with the adoption of 3D scanning technology in osteology, such as CT, MRI, laser, and structured light scanners, these 3D digital objects are no longer limited to computer-generated representations, and now include replicas of real human skeletal remains. These can be presented with photo-realistic colours, dynamic textures, and detailed features, and as such they represent a major shift forward in interactive 3D digital osteology. Dactyl, the focus of this review, is the newest addition to this area and is the first to incorporate all of the key advances into one resource: an app that uses tactile response to interact with high-quality replica 3D digital objects.

  14. Visualization tool for three-dimensional plasma velocity distributions (ISEE_3D) as a plug-in for SPEDAS

    Science.gov (United States)

    Keika, Kunihiro; Miyoshi, Yoshizumi; Machida, Shinobu; Ieda, Akimasa; Seki, Kanako; Hori, Tomoaki; Miyashita, Yukinaga; Shoji, Masafumi; Shinohara, Iku; Angelopoulos, Vassilis; Lewis, Jim W.; Flores, Aaron

    2017-12-01

    This paper introduces ISEE_3D, an interactive visualization tool for three-dimensional plasma velocity distribution functions, developed by the Institute for Space-Earth Environmental Research, Nagoya University, Japan. The tool provides a variety of methods to visualize the distribution function of space plasma: scatter, volume, and isosurface modes. The tool also has a wide range of functions, such as displaying magnetic field vectors and two-dimensional slices of distributions to facilitate extensive analysis. The coordinate transformation to the magnetic field coordinates is also implemented in the tool. The source codes of the tool are written as scripts of a widely used data analysis software language, Interactive Data Language, which has been widespread in the field of space physics and solar physics. The current version of the tool can be used for data files of the plasma distribution function from the Geotail satellite mission, which are publicly accessible through the Data Archives and Transmission System of the Institute of Space and Astronautical Science (ISAS)/Japan Aerospace Exploration Agency (JAXA). The tool is also available in the Space Physics Environment Data Analysis Software to visualize plasma data from the Magnetospheric Multiscale and the Time History of Events and Macroscale Interactions during Substorms missions. The tool is planned to be applied to data from other missions, such as Arase (ERG) and Van Allen Probes after replacing or adding data loading plug-ins. This visualization tool helps scientists understand the dynamics of space plasma better, particularly in the regions where the magnetohydrodynamic approximation is not valid, for example, the Earth's inner magnetosphere, magnetopause, bow shock, and plasma sheet.
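
    One of the functions mentioned above, transformation into magnetic-field-aligned coordinates, can be illustrated with a short generic sketch. This is not the tool's IDL source (which ships with SPEDAS); the basis construction, reference vector, and function name are assumptions written in Python for illustration:

```python
# Hedged sketch of a field-aligned coordinate transform: rotate particle velocity
# vectors so that the z' axis lies along the local magnetic field B.
import numpy as np

def field_aligned_basis(b, ref=np.array([1.0, 0.0, 0.0])):
    """Return a 3x3 rotation matrix whose rows are the field-aligned unit vectors.
    The reference vector must not be parallel to B (degenerate case ignored here)."""
    e_par = b / np.linalg.norm(b)            # z': along the magnetic field
    e_perp2 = np.cross(e_par, ref)           # y': perpendicular to B and the reference
    e_perp2 /= np.linalg.norm(e_perp2)
    e_perp1 = np.cross(e_perp2, e_par)       # x': completes the right-handed triad
    return np.vstack([e_perp1, e_perp2, e_par])

# Rotate measured velocities (N, 3) from the original frame into the B-aligned frame.
B = np.array([5.0, 2.0, -3.0])               # example field vector (nT)
v = np.random.randn(100, 3) * 400.0          # example plasma velocities (km/s)
v_fac = v @ field_aligned_basis(B).T
```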

  15. The 3d8-(3d74p + 3p53d9) transitions in Br X: A striking case of configuration interaction

    International Nuclear Information System (INIS)

    Kleef, T.A.M. van; Uylings, P.H.M.; Ryabtsev, A.N.; Podobedova, L.I.; Joshi, Y.N.

    1988-01-01

    The spectrum of nine times ionized bromine (Br X) was photographed in the 90-120 A wavelength region on a variety of grazing incidence spectrographs using an open spark and a triggered spark as light sources. The analysis of the 3d⁸-(3d⁷4p + 3p⁵3d⁹) transitions has resulted in establishing all 9 levels of the 3d⁸ configuration, all 12 levels of the 3p⁵3d⁹ configuration and 99 out of 110 levels of the 3d⁷4p configuration. The excitation probability of the 3p inner-shell electron increases with nuclear charge and in Br X is comparable with the excitation probability of the optical electrons, resulting in a very strong configuration interaction between the 3p⁵3d⁹ and 3d⁷4p configurations. Parametric calculations treating these configurations as one super configuration support the analysis. Two hundred and thirty two lines have been classified in this spectrum. (orig.)

  16. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    Science.gov (United States)

    Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.

    2015-08-01

    Although 3D models are useful for preserving information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition, at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario regarding the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and implementing specific software using Unity. The 3D models were augmented by adding responsive points of interest in relation to important symbols or features of the artefact. This allows highlighting single parts of the artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process for optimizing the 3D models, the implementation of the interactive scenario and the results of some tests that have been carried out in the lab.

  17. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    Directory of Open Access Journals (Sweden)

    S. Gonizzi Barsanti

    2015-08-01

    Full Text Available Although 3D models are useful for preserving information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition, at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario regarding the “path of the dead”, an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and implementing specific software using Unity. The 3D models were augmented by adding responsive points of interest in relation to important symbols or features of the artefact. This allows highlighting single parts of the artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process for optimizing the 3D models, the implementation of the interactive scenario and the results of some tests that have been carried out in the lab.

  18. An amalgamation of 3D city models in urban air quality modelling for improving visual impact analysis

    DEFF Research Database (Denmark)

    Ujang, U.; Anton, F.; Ariffin, A.

    2015-01-01

    is predominantly vehicular engines, the situation will become worse when pollutants are trapped between buildings, disperse inside the street canyon and move vertically to create a recirculation vortex. Studying and visualizing the recirculation zone in 3D is conceivable by using 3D city models ..., engineers and policy makers to design the street geometry (building height and width, green areas, pedestrian walks, road width, etc.) ...

  19. 3D visualization of numeric planetary data using JMARS

    Science.gov (United States)

    Dickenshied, S.; Christensen, P. R.; Anwar, S.; Carter, S.; Hagee, W.; Noss, D.

    2013-12-01

    JMARS (Java Mission-planning and Analysis for Remote Sensing) is a free geospatial application developed by the Mars Space Flight Facility at Arizona State University. Originally written as a mission planning tool for the THEMIS instrument on board the MARS Odyssey Spacecraft, it was released as an analysis tool to the general public in 2003. Since then it has expanded to be used for mission planning and scientific data analysis by additional NASA missions to Mars, the Moon, and Vesta, and it has come to be used by scientists, researchers and students of all ages from more than 40 countries around the world. The public version of JMARS now also includes remote sensing data for Mercury, Venus, Earth, the Moon, Mars, and a number of the moons of Jupiter and Saturn. Additional datasets for asteroids and other smaller bodies are being added as they become available and time permits. In addition to visualizing multiple datasets in context with one another, significant effort has been put into on-the-fly projection of georegistered data over surface topography. This functionality allows a user to easily create and modify 3D visualizations of any regional scene where elevation data is available in JMARS. This can be accomplished through the use of global topographic maps or regional numeric data such as HiRISE or HRSC DTMs. Users can also upload their own regional or global topographic dataset and use it as an elevation source for 3D rendering of their scene. The 3D Layer in JMARS allows the user to exaggerate the z-scale of any elevation source to emphasize the vertical variance throughout a scene. In addition, the user can rotate, tilt, and zoom the scene to any desired angle and then illuminate it with an artificial light source. This scene can be easily overlain with additional JMARS datasets such as maps, images, shapefiles, contour lines, or scale bars, and the scene can be easily saved as a graphic image for use in presentations or publications.
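
    The 3D Layer described above drapes georegistered maps over an elevation source with a user-chosen vertical exaggeration. A minimal, generic sketch of that idea (not JMARS code; the synthetic DEM, science map, and exaggeration factor are illustrative assumptions) might look like this:

```python
# Drape a data layer over a terrain surface with vertical exaggeration.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

x, y = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
elevation = 50 * np.exp(-((x - 5) ** 2 + (y - 5) ** 2) / 4)   # stand-in DEM (metres)
data = np.sin(x) * np.cos(y)                                   # stand-in science map to drape

z_exaggeration = 5.0                                           # emphasise vertical relief
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, elevation * z_exaggeration,
                facecolors=cm.viridis((data - data.min()) / np.ptp(data)),
                rstride=2, cstride=2, linewidth=0, antialiased=False)
ax.view_init(elev=35, azim=-60)                                # tilt and rotate the scene
plt.show()
```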

  20. 3D Visual Proxemics: Recognizing Human Interactions in 3D from a Single Image (Open Access)

    Science.gov (United States)

    2013-06-28

    accurate tracking and identity associations of people’s motions in videos. Proxemics is a subfield of anthropology that involves the study of people ... cinematography, where the shot composition and camera viewpoint are optimized for visual weight [1]. In cinema, a shot is either a long shot, a medium

  1. Configurable Input Devices for 3D Interaction using Optical Tracking

    NARCIS (Netherlands)

    A.J. van Rhijn (Arjen)

    2007-01-01

    Three-dimensional interaction with virtual objects is one of the aspects that needs to be addressed in order to increase the usability and usefulness of virtual reality. Human beings have difficulties understanding 3D spatial relationships and manipulating 3D user interfaces, which

  2. Configurable input devices for 3D interaction using optical tracking

    NARCIS (Netherlands)

    Rhijn, van A.J.

    2007-01-01

    Three-dimensional interaction with virtual objects is one of the aspects that needs to be addressed in order to increase the usability and usefulness of virtual reality. Human beings have difficulties understanding 3D spatial relationships and manipulating 3D user interfaces, which require the

  3. An Interactive 3D Virtual Anatomy Puzzle for Learning and Simulation - Initial Demonstration and Evaluation.

    Science.gov (United States)

    Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A

    2016-01-01

    To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the approach in which the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient specific data, and display that data to the end user using consumer level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation such as textbooks and slides are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms - a traditional 2D monitor, a 3D TV with active shutter glasses, and the Oculus Rift DK2 - as well as two different user interaction devices - a space mouse and traditional keyboard controls.

  4. Scalable Multi-Platform Distribution of Spatial 3d Contents

    Science.gov (United States)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high quality visualization of massive 3D geoinformation in a scalable, fast, and cost efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data throughout a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data such as triangle meshes together with textures delivered from server to client, which makes them strongly limited in terms of size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning of massive, virtual 3D city models on different platforms, namely web browsers, smartphones, or tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly as 3D rendering is encapsulated on the server side; and (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.

  5. Subjective evaluation of two stereoscopic imaging systems exploiting visual attention to improve 3D quality of experience

    Science.gov (United States)

    Hanhart, Philippe; Ebrahimi, Touradj

    2014-03-01

    Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracking system to measure real time gaze positions. The gaze points were used in conjunction with the disparity map to extract the disparity of the object-of-interest. Horizontal image translation was performed to bring the fixated object on the screen plane. The user preference between standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real time gaze determination. Depth quality is also improved, but the difference is not significant.
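
    The core operation shared by both proposed systems is reading the disparity at the gaze position and translating the two views horizontally so the fixated object lands on the screen plane. A simplified sketch of that step (sign conventions, the half-shift split, and the wrap-around border handling are assumptions, not the authors' implementation) is shown below:

```python
# Shift the left and right views so the disparity at the gazed-at object becomes ~zero.
import numpy as np

def retarget_to_screen_plane(left, right, disparity_map, gaze_xy):
    """left/right: (H, W, 3) images; disparity_map: (H, W) in pixels; gaze_xy: (x, y)."""
    gx, gy = gaze_xy
    d = disparity_map[gy, gx]            # disparity of the fixated object (pixels)
    shift = int(round(d / 2))
    # Translate each view by half the disparity in opposite directions
    # (wrap-around at the image borders is ignored in this sketch).
    left_shifted = np.roll(left, -shift, axis=1)
    right_shifted = np.roll(right, shift, axis=1)
    return left_shifted, right_shifted
```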

  6. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    Science.gov (United States)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interactions. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Involuntarily motivated selective attention through the affective mechanism can enhance the steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons. It offers high information transfer rates, users can learn to control the BCI system within a few minutes, and only a few electrodes are required to obtain brainwave signals reliable enough to capture users' intentions. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto) stereoscopic 3D may be able to use the affective BCI.
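
    A generic SSVEP detector underlying such a BCI compares EEG power at each candidate flicker frequency and selects the strongest response. The sketch below is a simple frequency-domain detector written purely as an illustration (the paper's actual signal processing is not specified here; the harmonic count and function name are assumptions):

```python
# Pick the attended stimulus by summing spectral power at each flicker frequency
# and its harmonics, then choosing the frequency with the largest score.
import numpy as np

def classify_ssvep(eeg, fs, stim_freqs, n_harmonics=2):
    """eeg: (n_samples,) occipital channel; fs: sampling rate in Hz."""
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    scores = []
    for f0 in stim_freqs:
        score = 0.0
        for h in range(1, n_harmonics + 1):
            idx = np.argmin(np.abs(freqs - h * f0))   # nearest FFT bin to the harmonic
            score += power[idx]
        scores.append(score)
    return stim_freqs[int(np.argmax(scores))]
```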

  7. 3D visualization reduces operating time when compared to high-definition 2D in laparoscopic liver resection: a case-matched study.

    Science.gov (United States)

    Velayutham, Vimalraj; Fuks, David; Nomi, Takeo; Kawaguchi, Yoshikuni; Gayet, Brice

    2016-01-01

    To evaluate the effect of three-dimensional (3D) visualization on operative performance during elective laparoscopic liver resection (LLR). Major limitations of conventional laparoscopy are lack of depth perception and tactile feedback. Introduction of robotic technology, which employs 3D imaging, has removed only one of these technical obstacles. Despite the significant advantages claimed, 3D systems have not been widely accepted. In this single institutional study, 20 patients undergoing LLR by high-definition 3D laparoscope between April 2014 and August 2014 were matched to a retrospective control group of patients who underwent LLR by two-dimensional (2D) laparoscope. The number of patients who underwent major liver resection was 5 (25%) in the 3D group and 10 (25%) in the 2D group. There was no significant difference in contralateral wedge resection or combined resections between the 3D and 2D groups. There was no difference in the proportion of patients undergoing previous abdominal surgery (70 vs. 77%, p = 0.523) or previous hepatectomy (20 vs. 27.5%, p = 0.75). The operative time was significantly shorter in the 3D group when compared to 2D (225 ± 109 vs. 284 ± 71 min, p = 0.03). There was no significant difference in blood loss in the 3D group when compared to 2D group (204 ± 226 in 3D vs. 252 ± 349 ml in 2D group, p = 0.291). The major complication rates were similar, 5% (1/20) and 7.5% (3/40), respectively, (p ≥ 0.99). 3D visualization may reduce the operating time compared to high-definition 2D. Further large studies, preferably prospective randomized control trials are required to confirm this.

  8. Evaluating the Cognitive Aspects of User Interaction with 2D Visual Tagging Systems

    Directory of Open Access Journals (Sweden)

    Samuel Olugbenga King

    2008-04-01

    Full Text Available There has been significant interest in the development and deployment of visual tagging applications in recent times. But user perceptions about the purpose and function of visual tagging systems have not received much attention. This paper presents a user experience study that investigates the cognitive models that novice users have about interacting with visual tagging applications. The results of the study show that although most users are unfamiliar with visual tagging technologies, they could accurately predict the purpose and mode of retrieval of data stored in visual tags. The study concludes with suggestions on how to improve the recognition, ease of recall and design of visual tags.

  9. Proteopedia: 3D Visualization and Annotation of Transcription Factor-DNA Readout Modes

    Science.gov (United States)

    Dantas Machado, Ana Carolina; Saleebyan, Skyler B.; Holmes, Bailey T.; Karelina, Maria; Tam, Julia; Kim, Sharon Y.; Kim, Keziah H.; Dror, Iris; Hodis, Eran; Martz, Eric; Compeau, Patricia A.; Rohs, Remo

    2012-01-01

    3D visualization assists in identifying diverse mechanisms of protein-DNA recognition that can be observed for transcription factors and other DNA binding proteins. We used Proteopedia to illustrate transcription factor-DNA readout modes with a focus on DNA shape, which can be a function of either nucleotide sequence (Hox proteins) or base pairing…

  10. Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

    Science.gov (United States)

    Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

    2013-02-01

    3D microscopy images contain astronomical amounts of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To solve these problems, many people crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level, e.g., the selected ROI strongly depends on the user and there is a loss in original image information. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides efficient and various automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images. Users can select the algorithm to be applied. Further, the image processing tool provides visualization of segmented volume data and can set the scale, transportation, etc. using a keyboard and mouse. However, the 3D objects, although visualized quickly, still need to be analyzed to obtain information useful to biologists. To analyze 3D microscopic images, we need quantitative data of the images. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object. This information can be used as a classification feature. A user can select the object to be analyzed. Our tool allows the selected object to be displayed in a new window, and hence, more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing the CPU and GPU processing times under matched specifications and configurations.
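
    The pipeline described above, automatic intensity thresholding followed by labelling and per-object measurement, can be sketched on the CPU as follows (the GPU implementation in the paper is not reproduced; Otsu's method stands in for "one of several automatic thresholds" and the helper name is an assumption):

```python
# Threshold a 3D microscopy stack, label connected cells, and measure their sizes.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_and_label(volume):
    """volume: 3D numpy array of intensities."""
    t = threshold_otsu(volume)                    # automatic intensity threshold
    binary = volume > t
    labels, n_objects = ndimage.label(binary)     # 3D connected-component labelling
    sizes = ndimage.sum(binary, labels, index=range(1, n_objects + 1))  # voxels per object
    return labels, n_objects, sizes
```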

  11. A 3D visualization of spatial relationship between geological structure and groundwater chemical profile around Iwate volcano, Japan: based on the ARCGIS 3D Analyst

    Science.gov (United States)

    Shibahara, A.; Ohwada, M.; Itoh, J.; Kazahaya, K.; Tsukamoto, H.; Takahashi, M.; Morikawa, N.; Takahashi, H.; Yasuhara, M.; Inamura, A.; Oyama, Y.

    2009-12-01

    We established a 3D geological and hydrological model around Iwate volcano to visualize the 3D relationships between subsurface structure and groundwater profiles. Iwate volcano is a typical polygenetic volcano located in NE Japan, and its body is composed of two stratovolcanoes which have experienced sector collapses several times. Because of this complex structure, groundwater flow around Iwate volcano is strongly restricted by the subsurface structure. For example, Kazahaya and Yasuhara (1999) clarified that shallow groundwater on the north and east flanks of Iwate volcano is recharged at the mountaintop, and that these flow systems are restricted to the north and east areas because of the structure of the younger volcanic body collapse. In addition, Ohwada et al. (2006) found that the shallow groundwater on the north and east flanks has relatively high concentrations of major chemical components and high 3He/4He ratios. In this study, we succeeded in visualizing the spatial relationship between subsurface structure and the chemical profiles of the shallow and deep groundwater systems using a 3D model on the GIS. In the study region, a number of geological and hydrological datasets, such as boring log data and groundwater chemical profiles, have been reported. All these paper-based data were digitized, converted to meshed data on the GIS, and plotted in three-dimensional space to visualize their spatial distribution. We also input a digital elevation model (DEM) around Iwate volcano issued by the Geographical Survey Institute of Japan, and digital geological maps issued by the Geological Survey of Japan, AIST. All 3D models are converted into VRML format and can be used as a versatile dataset on a personal computer.

  12. Educational Material for 3D Visualization of Spine Procedures: Methods for Creation and Dissemination.

    Science.gov (United States)

    Cramer, Justin; Quigley, Edward; Hutchins, Troy; Shah, Lubdha

    2017-06-01

    Spine anatomy can be difficult to master and is essential for performing spine procedures. We sought to utilize the rapidly expanding field of 3D technology to create freely available, interactive educational materials for spine procedures. Our secondary goal was to convey lessons learned about 3D modeling and printing. This project involved two parallel processes: the creation of 3D-printed physical models and interactive digital models. We segmented illustrative CT studies of the lumbar and cervical spine to create 3D models and then printed them using a consumer 3D printer and a professional 3D printing service. We also included downloadable versions of the models in an interactive eBook and platform-independent web viewer. We then provided these educational materials to residents with a pretest and posttest to assess efficacy. The "Spine Procedures in 3D" eBook has been downloaded 71 times as of October 5, 2016. All models used in the book are available for download and printing. Regarding test results, the mean exam score improved from 70 to 86%, with the most dramatic improvement seen in the least experienced trainees. Participants reported increased confidence in performing lumbar punctures after exposure to the material. We demonstrate the value of 3D models, both digital and printed, in learning spine procedures. Moreover, 3D printing and modeling is a rapidly expanding field with a large potential role for radiologists. We have detailed our process for creating and sharing 3D educational materials in the hopes of motivating and enabling similar projects.
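
    One common route from a segmented CT volume to a printable model, offered here only as a hedged illustration since the authors' exact toolchain is not detailed in the abstract, is surface extraction with marching cubes followed by STL export (the package choices and function name below are assumptions):

```python
# Extract a triangle mesh from a binary CT segmentation and save it as an STL file.
import numpy as np
from skimage import measure
from stl import mesh   # provided by the numpy-stl package

def volume_to_stl(segmentation, out_path, spacing=(1.0, 1.0, 1.0)):
    """segmentation: 3D binary array (e.g. thresholded spine CT); spacing: voxel size in mm."""
    verts, faces, _, _ = measure.marching_cubes(segmentation.astype(float),
                                                level=0.5, spacing=spacing)
    m = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    for i, f in enumerate(faces):
        m.vectors[i] = verts[f]       # copy the three vertices of each triangle
    m.save(out_path)
```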

  13. A multimodal virtual reality interface for 3D interaction with VTK

    NARCIS (Netherlands)

    Kok, A.J.F.; Liere, van R.

    2007-01-01

    The object-oriented visualization Toolkit (VTK) is widely used for scientific visualization. VTK is a visualization library that provides a large number of functions for presenting three-dimensional data. Interaction with the visualized data is controlled with two-dimensional input devices, such as

  14. Matching-index-of-refraction of transparent 3D printing models for flow visualization

    International Nuclear Information System (INIS)

    Song, Min Seop; Choi, Hae Yoon; Seong, Jee Hyun; Kim, Eung Soo

    2015-01-01

    Matching-index-of-refraction (MIR) has been used for obtaining high-quality flow visualization data for fundamental nuclear thermal-hydraulic research. By this method, distortions of the optical measurements such as PIV and LDV have been successfully minimized using various combinations of the model materials and the working fluids. This study investigated a novel 3D printing technology for manufacturing models and an oil-based working fluid for matching the refractive indices. Transparent test samples were fabricated by various rapid prototyping methods including selective laser sintering (SLS), stereolithography (SLA), and vacuum casting. As a result, the SLA direct 3D printing was evaluated to be the most suitable for flow visualization considering manufacturability, transparency, and refractive index. In order to match the refractive indices of the 3D printing models, a working fluid was developed based on the mixture of herb essential oils, which exhibit high refractive index, high transparency, high density, low viscosity, low toxicity, and low price. The refractive index and viscosity of the working fluid range from 1.453 to 1.555 and from 2.37 to 6.94 cP, respectively. In order to validate the MIR method, a simple test using a twisted prism made by the SLA technique and the oil mixture (anise and light mineral oil) was conducted. The experimental results show that the MIR can be successfully achieved at a refractive index of 1.51, and the proposed MIR method is expected to be widely used for flow visualization studies and CFD validation in nuclear thermal-hydraulic research
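
    Choosing the blend ratio of two oils so the fluid matches the printed model's refractive index can be estimated, to first order, with a volume-fraction mixing rule. The sketch below is only a starting-point calculation under that idealized assumption (the component indices shown are illustrative values within the reported 1.453-1.555 range, not the paper's measured data), and real mixtures deviate slightly:

```python
# Estimate the volume fraction of component A needed to hit a target refractive index,
# assuming an ideal (volume-fraction weighted) mixing rule.
def volume_fraction_for_target(n_target, n_a, n_b):
    return (n_target - n_b) / (n_a - n_b)

n_model = 1.51          # refractive index of the SLA-printed model
n_anise = 1.553         # illustrative value for anise oil
n_mineral = 1.467       # illustrative value for light mineral oil
phi = volume_fraction_for_target(n_model, n_anise, n_mineral)
print(f"Mix roughly {phi:.0%} anise oil with {1 - phi:.0%} light mineral oil")
```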

  15. Matching-index-of-refraction of transparent 3D printing models for flow visualization

    Energy Technology Data Exchange (ETDEWEB)

    Song, Min Seop; Choi, Hae Yoon; Seong, Jee Hyun; Kim, Eung Soo, E-mail: kes7741@snu.ac.kr

    2015-04-01

    Matching-index-of-refraction (MIR) has been used for obtaining high-quality flow visualization data for fundamental nuclear thermal-hydraulic research. By this method, distortions of the optical measurements such as PIV and LDV have been successfully minimized using various combinations of the model materials and the working fluids. This study investigated a novel 3D printing technology for manufacturing models and an oil-based working fluid for matching the refractive indices. Transparent test samples were fabricated by various rapid prototyping methods including selective laser sintering (SLS), stereolithography (SLA), and vacuum casting. As a result, the SLA direct 3D printing was evaluated to be the most suitable for flow visualization considering manufacturability, transparency, and refractive index. In order to match the refractive indices of the 3D printing models, a working fluid was developed based on the mixture of herb essential oils, which exhibit high refractive index, high transparency, high density, low viscosity, low toxicity, and low price. The refractive index and viscosity of the working fluid range from 1.453 to 1.555 and from 2.37 to 6.94 cP, respectively. In order to validate the MIR method, a simple test using a twisted prism made by the SLA technique and the oil mixture (anise and light mineral oil) was conducted. The experimental results show that the MIR can be successfully achieved at a refractive index of 1.51, and the proposed MIR method is expected to be widely used for flow visualization studies and CFD validation in nuclear thermal-hydraulic research.

  16. Novel interactive virtual showcase based on 3D multitouch technology

    Science.gov (United States)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

    A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch the virtual objects floating in the air from all four sides and interact with the virtual objects by touching the four surfaces of the virtual showcase. Unlike a traditional multitouch system, this system can not only realize multi-touch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing the multi-touch input that can be simultaneously captured from the four planes. Experimental results show the potential of the proposed system to be applied in the exhibition of historical relics and other precious goods.

  17. Cyclin D3 interacts with human activating transcription factor 5 and potentiates its transcription activity

    International Nuclear Information System (INIS)

    Liu Wenjin; Sun Maoyun; Jiang Jianhai; Shen Xiaoyun; Sun Qing; Liu Weicheng; Shen Hailian; Gu Jianxin

    2004-01-01

    The Cyclin D3 protein is a member of the D-type cyclins. Besides serving as cell cycle regulators, D-type cyclins have been reported to be able to interact with several transcription factors and modulate their transcriptional activations. Here we report that human activating transcription factor 5 (hATF5) is a new interacting partner of Cyclin D3. The interaction was confirmed by in vivo coimmunoprecipitation and in vitro binding analysis. Neither interaction between Cyclin D1 and hATF5 nor interaction between Cyclin D2 and hATF5 was observed. Confocal microscopy analysis showed that Cyclin D3 could colocalize with hATF5 in the nuclear region. Cyclin D3 could potentiate hATF5 transcriptional activity independently of its Cdk4 partner. But Cyclin D1 and Cyclin D2 had no effect on hATF5 transcriptional activity. These data provide a new clue to understand the new role of Cyclin D3 as a transcriptional regulator

  18. An Integrated Web-Based 3d Modeling and Visualization Platform to Support Sustainable Cities

    Science.gov (United States)

    Amirebrahimi, S.; Rajabifard, A.

    2012-07-01

    Sustainable development is seen as the key to preserving the sustainability of cities in the face of ongoing population growth and its negative impacts. This is complex and requires holistic and multidisciplinary decision making. A variety of stakeholders with different backgrounds also needs to be considered and involved. Numerous web-based modeling and visualization tools have been designed and developed to support this process. There have been some success stories; however, the majority failed to deliver a comprehensive platform supporting the different aspects of sustainable development. In this work, in the context of SDI and Land Administration, the CSDILA Platform - a 3D visualization and modeling platform - was proposed, which can be used to model and visualize different dimensions to facilitate the achievement of sustainability, in particular in an urban context. The methodology involved the design of a generic framework for the development of an analytical and visualization tool over the web. The CSDILA Platform was then implemented with a number of technologies based on the guidelines provided by the framework. The platform has a modular structure and uses a Service-Oriented Architecture (SOA). It is capable of managing spatial objects in a 4D data store and can flexibly incorporate a variety of developed models using the platform's API. Development scenarios can be modeled and tested using the analysis and modeling component of the platform, and the results are visualized in a seamless 3D environment. The platform was further tested using a number of scenarios and showed promising results and the potential to serve a wider need. In this paper, the design process of the generic framework, the implementation of the CSDILA Platform and the technologies used, as well as findings and future research directions, are presented and discussed.

  19. Simulating 3D deformation using connected polygons

    Science.gov (United States)

    Tarigan, J. T.; Jaya, I.; Hardi, S. M.; Zamzami, E. M.

    2018-03-01

    In modern 3D applications, interaction between the user and the virtual world is one of the important factors in increasing realism. This interaction can be visualized in many forms; one of them is object deformation. There are many ways to simulate object deformation in a virtual 3D world; each comes with a different level of realism and performance. Our objective is to present a new method to simulate object deformation by using a graph of connected polygons. In this solution, each object contains multiple levels of polygons at different levels of volume. The proposed solution focuses on performance while maintaining an acceptable level of realism. In this paper, we present the design and implementation of our solution and show that it is usable in performance-sensitive 3D applications such as games and virtual reality.
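
    One way to realize deformation over a graph of connected polygon vertices, offered as an illustrative sketch rather than the paper's exact algorithm, is to displace the touched vertex fully and propagate a decaying displacement to its graph neighbours breadth-first:

```python
# Propagate a displacement from the hit vertex through the mesh's connectivity graph,
# attenuating it by a falloff factor per graph-distance step.
import numpy as np
from collections import deque

def deform(vertices, adjacency, hit_index, displacement, falloff=0.5, max_depth=3):
    """vertices: (N, 3) array; adjacency: dict {vertex index: [neighbour indices]}."""
    deformed = vertices.copy()
    visited = {hit_index: 0}
    queue = deque([hit_index])
    while queue:                                   # breadth-first walk over the graph
        v = queue.popleft()
        depth = visited[v]
        deformed[v] += displacement * (falloff ** depth)
        if depth < max_depth:
            for nb in adjacency[v]:
                if nb not in visited:
                    visited[nb] = depth + 1
                    queue.append(nb)
    return deformed
```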

  20. COGNITIVE ASPECTS OF COLLABORATION IN 3D VIRTUAL ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    V. Juřík

    2016-06-01

    Full Text Available Human-computer interaction has entered the 3D era. The most important models representing spatial information — maps — are transferred into 3D versions regarding the specific content to be displayed. Virtual worlds (VW) become a promising area of interest because of the possibility to dynamically modify content and multi-user cooperation when solving tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also emphasized by the possibility to measure operators’ actions and complex strategies. Collaboration in 3D environments is the crucial issue in many areas where the visualizations are important for the group cooperation. Within the specific 3D user interface the operators' ability to manipulate the displayed content is explored regarding such phenomena as situation awareness, cognitive workload and human error. For this purpose, the VWs offer a great number of tools for measuring the operators’ responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators’ strategies to reach and interpret information regarding the specific type of visualization and different levels of immersion.

  1. Cognitive Aspects of Collaboration in 3d Virtual Environments

    Science.gov (United States)

    Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.

    2016-06-01

    Human-computer interaction has entered the 3D era. The most important models representing spatial information — maps — are transferred into 3D versions regarding the specific content to be displayed. Virtual worlds (VW) become a promising area of interest because of the possibility to dynamically modify content and multi-user cooperation when solving tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also emphasized by the possibility to measure operators' actions and complex strategies. Collaboration in 3D environments is the crucial issue in many areas where the visualizations are important for the group cooperation. Within the specific 3D user interface the operators' ability to manipulate the displayed content is explored regarding such phenomena as situation awareness, cognitive workload and human error. For this purpose, the VWs offer a great number of tools for measuring the operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies to reach and interpret information regarding the specific type of visualization and different levels of immersion.

  2. 3D gaze tracking system for NVidia 3D Vision®.

    Science.gov (United States)

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how users gaze in 3D directions in virtual space is currently an important research topic. In this paper, we report the study of developing a novel 3D gaze tracking system for Nvidia 3D Vision(®) to be used in desktop stereoscopic display. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
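
    A generic geometric way to estimate a 3D gaze point from binocular gaze data, shown here only as an illustration since the paper's optimized formulation is not reproduced, is to intersect the two eye rays approximately by taking the midpoint of their shortest connecting segment:

```python
# Estimate the 3D gaze point from two (possibly skew) gaze rays, one per eye.
import numpy as np

def gaze_point_3d(eye_l, dir_l, eye_r, dir_r):
    """eye_*: (3,) eye positions; dir_*: (3,) gaze directions."""
    d_l = dir_l / np.linalg.norm(dir_l)
    d_r = dir_r / np.linalg.norm(dir_r)
    w0 = eye_l - eye_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b                      # approaches 0 when the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p_l = eye_l + s * d_l                      # closest point on the left-eye ray
    p_r = eye_r + t * d_r                      # closest point on the right-eye ray
    return (p_l + p_r) / 2                     # midpoint of the shortest segment
```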

  3. How Spatial Abilities and Dynamic Visualizations Interplay When Learning Functional Anatomy with 3D Anatomical Models

    Science.gov (United States)

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material…

  4. Use of 2.5-D and 3-D technology to evaluate control room upgrades

    International Nuclear Information System (INIS)

    Hanes, L. F.; Naser, J.

    2006-01-01

    This paper describes an Electric Power Research Inst. (EPRI) study in which 2.5-D and 3-D visualization technology was applied to evaluate the design of a nuclear power plant control room upgrade. The study involved converting 3-D CAD files of a planned upgrade into a photo-realistic virtual model, and evaluating the value and usefulness of the model. Nuclear utility and EPRI evaluators viewed and interacted with the control room virtual model with both 2.5-D and 3-D representations. They identified how control room and similar virtual models may be used by utilities for design and evaluation purposes; assessed potential economic and other benefits; and identified limitations, potential problems, and other issues regarding use of visualization technology for this and similar applications. In addition, the Halden CREATE (Control Room Engineering Advanced Tool-kit Environment) Verification Tool was applied to evaluate features of the virtual model against US NRC NUREG 0700 Revision 2 human factors engineering guidelines (NUREG 0700) [1]. The study results are very favorable for applying 2.5-D visualization technology to support upgrading nuclear power plant control rooms and other plant facilities. Results, however, show that today's 3-D immersive viewing systems are difficult to justify based on cost, availability and value of information provided for this application. (authors)

  5. The GPlates Portal: Cloud-Based Interactive 3D Visualization of Global Geophysical and Geological Data in a Web Browser.

    Science.gov (United States)

    Müller, R Dietmar; Qin, Xiaodong; Sandwell, David T; Dutkiewicz, Adriana; Williams, Simon E; Flament, Nicolas; Maus, Stefan; Seton, Maria

    2016-01-01

    The pace of scientific discovery is being transformed by the availability of 'big data' and open access, open source software tools. These innovations open up new avenues for how scientists communicate and share data and ideas with each other and with the general public. Here, we describe our efforts to bring to life our studies of the Earth system, both at present day and through deep geological time. The GPlates Portal (portal.gplates.org) is a gateway to a series of virtual globes based on the Cesium Javascript library. The portal allows fast interactive visualization of global geophysical and geological data sets, draped over digital terrain models. The globes use WebGL for hardware-accelerated graphics and are cross-platform and cross-browser compatible with complete camera control. The globes include a visualization of a high-resolution global digital elevation model and the vertical gradient of the global gravity field, highlighting small-scale seafloor fabric such as abyssal hills, fracture zones and seamounts in unprecedented detail. The portal also features globes portraying seafloor geology and a global data set of marine magnetic anomaly identifications. The portal is specifically designed to visualize models of the Earth through geological time. These space-time globes include tectonic reconstructions of the Earth's gravity and magnetic fields, and several models of long-wavelength surface dynamic topography through time, including the interactive plotting of vertical motion histories at selected locations. The globes put the on-the-fly visualization of massive data sets at the fingertips of end-users to stimulate teaching and learning and novel avenues of inquiry.

  6. BioCichlid: central dogma-based 3D visualization system of time-course microarray data on a hierarchical biological network.

    Science.gov (United States)

    Ishiwata, Ryosuke R; Morioka, Masaki S; Ogishima, Soichi; Tanaka, Hiroshi

    2009-02-15

    BioCichlid is a 3D visualization system of time-course microarray data on molecular networks, aiming at interpretation of gene expression data by transcriptional relationships based on the central dogma with physical and genetic interactions. BioCichlid visualizes both physical (protein) and genetic (regulatory) network layers, and provides animation of time-course gene expression data on the genetic network layer. Transcriptional regulations are represented to bridge the physical network (transcription factors) and genetic network (regulated genes) layers, thus integrating promoter analysis into the pathway mapping. BioCichlid enhances the interpretation of microarray data and allows for revealing the underlying mechanisms causing differential gene expressions. BioCichlid is freely available and can be accessed at http://newton.tmd.ac.jp/. Source codes for both biocichlid server and client are also available.

  7. Facilitating role of 3D multimodal visualization and learning rehearsal in memory recall.

    Science.gov (United States)

    Do, Phuong T; Moreland, John R

    2014-04-01

    The present study investigated the influence of 3D multimodal visualization and learning rehearsal on memory recall. Participants (N = 175 college students ranging from 21 to 25 years) were assigned to different training conditions and rehearsal processes to learn a list of 14 terms associated with construction of a wood-frame house. They then completed a memory test determining their cognitive ability to free recall the definitions of the 14 studied terms immediately after training and rehearsal. The audiovisual modality training condition was associated with the highest accuracy, and the visual- and auditory-modality conditions with lower accuracy rates. The no-training condition indicated little learning acquisition. A statistically significant increase in performance accuracy for the audiovisual condition as a function of rehearsal suggested the relative importance of rehearsal strategies in 3D observational learning. Findings revealed the potential application of integrating virtual reality and cognitive sciences to enhance learning and teaching effectiveness.

  8. HI-VISUAL: A language supporting visual interaction in programming

    International Nuclear Information System (INIS)

    Monden, N.; Yoshino, Y.; Hirakawa, M.; Tanaka, M.; Ichikawa, T.

    1984-01-01

    This paper presents a language named HI-VISUAL which supports visual interaction in programming. Following a brief description of the language concept, the icon semantics and language primitives characterizing HI-VISUAL are extensively discussed. HI-VISUAL also shows system extendability, providing the possibility of organizing a high level application system as an integration of several existing subsystems, and will serve in developing systems in various fields of applications, supporting simple and efficient interactions between programmer and computer. In this paper, the authors have presented a language named HI-VISUAL. Following a brief description of the language concept, the icon semantics and language primitives characterizing HI-VISUAL were extensively discussed.

  9. Audio-visual perception of 3D cinematography: an fMRI study using condition-based and computation-based analyses.

    Directory of Open Access Journals (Sweden)

    Akitoshi Ogawa

    Full Text Available The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard "condition-based" designs, as well as "computational" methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life

  10. Audio-visual perception of 3D cinematography: an fMRI study using condition-based and computation-based analyses.

    Science.gov (United States)

    Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano

    2013-01-01

    The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard "condition-based" designs, as well as "computational" methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli.

  11. On the future of 3-D visualization in non-medical industrial x-ray computed tomography

    International Nuclear Information System (INIS)

    Wells, J.M.

    2004-01-01

    The purpose of imaging is to capture and record the details of an object for both current and future analysis in a transportable and archival format. Generally, the development and understanding of the relationships of the features of interest thus revealed in the image is ultimately essential for the beneficial utilization of that knowledge. Modern advanced imaging methods utilized in both medical and industrial applications are predominantly of a digital format, and increasingly moving from a 2-D to 3-D modality to allow for significantly improved detail resolution and clarity of volumetric visualization. Conventional digital radiography (DR), for example, compresses an entire object volume onto a 2-D planar image with consequent lack of spatial resolution and considerable loss of small volume feature resolution. Computed tomography (CT) overcomes both of these limitations, providing the highly desirable capability of precise 3-D detection, localization and characterization of multiple features throughout the subject object volume. CT has the further capability to reconstruct virtual 3-D solid object images with arbitrary and reversible planar sectioning and of variable transparency to clearly visualize features of different densities in situ within an otherwise opaque object. While tomographic imaging is utilized in various medical CT, MRI, PET, EBCT and 3-D Ultrasound modalities, only the X-ray CT imaging is briefly discussed here as it presents comparable high quality images and is quite similar and synergistic with industrial XCT. Medical CT procedures started in the late 1970's (originally known as CAT Scan) and have progressed to the extent of being experienced and accepted by much of the general population. Non-Medical CT (or Industrial XCT) technology has historically followed in the shadow of Medical CT but remains today considerably less pervasive. There are however increasingly several important equipment and application distinctions. These will

  12. Cytoscape tools for the web age: D3.js and Cytoscape.js exporters.

    Science.gov (United States)

    Ono, Keiichiro; Demchak, Barry; Ideker, Trey

    2014-01-01

    In this paper we present new data export modules for Cytoscape 3 that can generate network files for Cytoscape.js and D3.js. Cytoscape.js exporter is implemented as a core feature of Cytoscape 3, and D3.js exporter is available as a Cytoscape 3 app. These modules enable users to seamlessly export network and table data sets generated in Cytoscape to popular JavaScript library readable formats. In addition, we implemented template web applications for browser-based interactive network visualization that can be used as basis for complex data visualization applications for bioinformatics research. Example web applications created with these tools demonstrate how Cytoscape works in modern data visualization workflows built with traditional desktop tools and emerging web-based technologies. This interactivity enables researchers more flexibility than with static images, thereby greatly improving the quality of insights researchers can gain from them.

  13. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    Directory of Open Access Journals (Sweden)

    Jeff A Tracey

    Full Text Available Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  14. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation

    Science.gov (United States)

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  15. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    Science.gov (United States)

    Tracey, Jeff A; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R; Fisher, Robert N

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  16. 3DSEM: A 3D microscopy dataset

    Directory of Open Access Journals (Sweden)

    Ahmad P. Tafti

    2016-03-01

    Full Text Available The Scanning Electron Microscope (SEM as a 2D imaging instrument has been widely used in many scientific disciplines including biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide anatomic shape of micro-samples which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. Keywords: 3D microscopy dataset, 3D microscopy vision, 3D SEM surface reconstruction, Scanning Electron Microscope (SEM

  17. 3D movies for teaching seafloor bathymetry, plate tectonics, and ocean circulation in large undergraduate classes

    Science.gov (United States)

    Peterson, C. D.; Lisiecki, L. E.; Gebbie, G.; Hamann, B.; Kellogg, L. H.; Kreylos, O.; Kronenberger, M.; Spero, H. J.; Streletz, G. J.; Weber, C.

    2015-12-01

    Geologic problems and datasets are often 3D or 4D in nature, yet projected onto a 2D surface such as a piece of paper or a projection screen. Reducing the dimensionality of data forces the reader to "fill in" that collapsed dimension in their minds, creating a cognitive challenge for the reader, especially new learners. Scientists and students can visualize and manipulate 3D datasets using the virtual reality software developed for the immersive, real-time interactive 3D environment at the KeckCAVES at UC Davis. The 3DVisualizer software (Billen et al., 2008) can also operate on a desktop machine to produce interactive 3D maps of earthquake epicenter locations and 3D bathymetric maps of the seafloor. With 3D projections of seafloor bathymetry and ocean circulation proxy datasets in a virtual reality environment, we can create visualizations of carbon isotope (δ13C) records for academic research and to aid in demonstrating thermohaline circulation in the classroom. Additionally, 3D visualization of seafloor bathymetry allows students to see features of seafloor most people cannot observe first-hand. To enhance lessons on mid-ocean ridges and ocean basin genesis, we have created movies of seafloor bathymetry for a large-enrollment undergraduate-level class, Introduction to Oceanography. In the past four quarters, students have enjoyed watching 3D movies, and in the fall quarter (2015), we will assess how well 3D movies enhance learning. The class will be split into two groups, one who learns about the Mid-Atlantic Ridge from diagrams and lecture, and the other who learns with a supplemental 3D visualization. Both groups will be asked "what does the seafloor look like?" before and after the Mid-Atlantic Ridge lesson. Then the whole class will watch the 3D movie and respond to an additional question, "did the 3D visualization enhance your understanding of the Mid-Atlantic Ridge?" with the opportunity to further elaborate on the effectiveness of the visualization.

  18. FaceWarehouse: a 3D facial expression database for visual computing.

    Science.gov (United States)

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.

  19. 3D topology of orientation columns in visual cortex revealed by functional optical coherence tomography.

    Science.gov (United States)

    Nakamichi, Yu; Kalatsky, Valery A; Watanabe, Hideyuki; Sato, Takayuki; Rajagopalan, Uma Maheswari; Tanifuji, Manabu

    2018-04-01

    Orientation tuning is a canonical neuronal response property of six-layer visual cortex that is encoded in pinwheel structures with center orientation singularities. Optical imaging of intrinsic signals enables us to map these surface two-dimensional (2D) structures, whereas lack of appropriate techniques has not allowed us to visualize depth structures of orientation coding. In the present study, we performed functional optical coherence tomography (fOCT), a technique capable of acquiring a 3D map of the intrinsic signals, to study the topology of orientation coding inside the cat visual cortex. With this technique, for the first time, we visualized columnar assemblies in orientation coding that had been predicted from electrophysiological recordings. In addition, we found that the columnar structures were largely distorted around pinwheel centers: center singularities were not rigid straight lines running perpendicularly to the cortical surface but formed twisted string-like structures inside the cortex that turned and extended horizontally through the cortex. Looping singularities were observed with their respective termini accessing the same cortical surface via clockwise and counterclockwise orientation pinwheels. These results suggest that a 3D topology of orientation coding cannot be fully anticipated from 2D surface measurements. Moreover, the findings demonstrate the utility of fOCT as an in vivo mesoscale imaging method for mapping functional response properties of cortex in the depth axis. NEW & NOTEWORTHY We used functional optical coherence tomography (fOCT) to visualize three-dimensional structure of the orientation columns with millimeter range and micrometer spatial resolution. We validated vertically elongated columnar structure in iso-orientation domains. The columnar structure was distorted around pinwheel centers. An orientation singularity formed a string with tortuous trajectories inside the cortex and connected clockwise and counterclockwise

  20. Visualization of the variability of 3D statistical shape models by animation.

    Science.gov (United States)

    Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter

    2004-01-01

    Models of the 3D shape of anatomical objects and the knowledge about their statistical variability are of great benefit in many computer assisted medical applications like images analysis, therapy or surgery planning. Statistical model of shapes have successfully been applied to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points on two shapes. This remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.

  1. Model-Based Synthesis of Visual Speech Movements from 3D Video

    Directory of Open Access Journals (Sweden)

    Edge JamesD

    2009-01-01

    Full Text Available We describe a method for the synthesis of visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured using a 3D stereo face capture system and split up into phonetic units. A dynamic parameterisation of this data is constructed which maintains the relationship between lip shapes and velocities; within this parameterisation a model of how lips move is built and is used in the animation of visual speech movements from speech audio input. The mapping from audio parameters to lip movements is disambiguated by selecting only the most similar stored phonetic units to the target utterance during synthesis. By combining properties of model-based synthesis (e.g., HMMs, neural nets with unit selection we improve the quality of our speech synthesis.

  2. The GPlates Portal: Cloud-Based Interactive 3D Visualization of Global Geophysical and Geological Data in a Web Browser.

    Directory of Open Access Journals (Sweden)

    R Dietmar Müller

    Full Text Available The pace of scientific discovery is being transformed by the availability of 'big data' and open access, open source software tools. These innovations open up new avenues for how scientists communicate and share data and ideas with each other and with the general public. Here, we describe our efforts to bring to life our studies of the Earth system, both at present day and through deep geological time. The GPlates Portal (portal.gplates.org is a gateway to a series of virtual globes based on the Cesium Javascript library. The portal allows fast interactive visualization of global geophysical and geological data sets, draped over digital terrain models. The globes use WebGL for hardware-accelerated graphics and are cross-platform and cross-browser compatible with complete camera control. The globes include a visualization of a high-resolution global digital elevation model and the vertical gradient of the global gravity field, highlighting small-scale seafloor fabric such as abyssal hills, fracture zones and seamounts in unprecedented detail. The portal also features globes portraying seafloor geology and a global data set of marine magnetic anomaly identifications. The portal is specifically designed to visualize models of the Earth through geological time. These space-time globes include tectonic reconstructions of the Earth's gravity and magnetic fields, and several models of long-wavelength surface dynamic topography through time, including the interactive plotting of vertical motion histories at selected locations. The globes put the on-the-fly visualization of massive data sets at the fingertips of end-users to stimulate teaching and learning and novel avenues of inquiry.

  3. An experiment of a 3D real-time robust visual odometry for intelligent vehicles

    OpenAIRE

    Rodriguez Florez , Sergio Alberto; Fremont , Vincent; Bonnifait , Philippe

    2009-01-01

    International audience; Vision systems are nowadays very promising for many on-board vehicles perception functionalities, like obstacles detection/recognition and ego-localization. In this paper, we present a 3D visual odometric method that uses a stereo-vision system to estimate the 3D ego-motion of a vehicle in outdoor road conditions. In order to run in real-time, the studied technique is sparse meaning that it makes use of feature points that are tracked during several frames. A robust sc...

  4. Interactive exploratory visualization of 2D vector fields

    NARCIS (Netherlands)

    Isenberg, Tobias; Everts, Maarten H.; Grubert, Jens; Carpendale, Sheelagh

    In this paper we present several techniques to interactively explore representations of 2D vector fields. Through a set of simple hand postures used on large, touch-sensitive displays, our approach allows individuals to custom design glyphs (arrows, lines, etc.) that best reveal patterns of the

  5. Fast interactive exploration of 4D MRI flow data

    Science.gov (United States)

    Hennemuth, A.; Friman, O.; Schumann, C.; Bock, J.; Drexl, J.; Huellebrand, M.; Markl, M.; Peitgen, H.-O.

    2011-03-01

    1- or 2-directional MRI blood flow mapping sequences are an integral part of standard MR protocols for diagnosis and therapy control in heart diseases. Recent progress in rapid MRI has made it possible to acquire volumetric, 3-directional cine images in reasonable scan time. In addition to flow and velocity measurements relative to arbitrarily oriented image planes, the analysis of 3-dimensional trajectories enables the visualization of flow patterns, local features of flow trajectories or possible paths into specific regions. The anatomical and functional information allows for advanced hemodynamic analysis in different application areas like stroke risk assessment, congenital and acquired heart disease, aneurysms or abdominal collaterals and cranial blood flow. The complexity of the 4D MRI flow datasets and the flow related image analysis tasks makes the development of fast comprehensive data exploration software for advanced flow analysis a challenging task. Most existing tools address only individual aspects of the analysis pipeline such as pre-processing, quantification or visualization, or are difficult to use for clinicians. The goal of the presented work is to provide a software solution that supports the whole image analysis pipeline and enables data exploration with fast intuitive interaction and visualization methods. The implemented methods facilitate the segmentation and inspection of different vascular systems. Arbitrary 2- or 3-dimensional regions for quantitative analysis and particle tracing can be defined interactively. Synchronized views of animated 3D path lines, 2D velocity or flow overlays and flow curves offer a detailed insight into local hemodynamics. The application of the analysis pipeline is shown for 6 cases from clinical practice, illustrating the usefulness for different clinical questions. Initial user tests show that the software is intuitive to learn and even inexperienced users achieve good results within reasonable processing

  6. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    OpenAIRE

    Richard Chiou; Yongjin (james) Kwon; Tzu-Liang (bill) Tseng; Robin Kizirian; Yueh-Ting Yang

    2010-01-01

    This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3- Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote c...

  7. 3D Visualization of Urban Area Using Lidar Technology and CityGML

    Science.gov (United States)

    Popovic, Dragana; Govedarica, Miro; Jovanovic, Dusan; Radulovic, Aleksandra; Simeunovic, Vlado

    2017-12-01

    3D models of urban areas have found use in modern world such as navigation, cartography, urban planning visualization, construction, tourism and even in new applications of mobile navigations. With the advancement of technology there are much better solutions for mapping earth’s surface and spatial objects. 3D city model enables exploration, analysis, management tasks and presentation of a city. Urban areas consist of terrain surfaces, buildings, vegetation and other parts of city infrastructure such as city furniture. Nowadays there are a lot of different methods for collecting, processing and publishing 3D models of area of interest. LIDAR technology is one of the most effective methods for collecting data due the large amount data that can be obtained with high density and geometrical accuracy. CityGML is open standard data model for storing alphanumeric and geometry attributes of city. There are 5 levels of display (LoD0, LoD1, LoD2, LoD3, LoD4). In this study, main aim is to represent part of urban area of Novi Sad using LIDAR technology, for data collecting, and different methods for extraction of information’s using CityGML as a standard for 3D representation. By using series of programs, it is possible to process collected data, transform it to CityGML and store it in spatial database. Final product is CityGML 3D model which can display textures and colours in order to give a better insight of the cities. This paper shows results of the first three levels of display. They consist of digital terrain model and buildings with differentiated rooftops and differentiated boundary surfaces. Complete model gives us a realistic view of 3D objects.

  8. Interactive Visualization of Healthcare Data Using Tableau.

    Science.gov (United States)

    Ko, Inseok; Chang, Hyejung

    2017-10-01

    Big data analysis is receiving increasing attention in many industries, including healthcare. Visualization plays an important role not only in intuitively showing the results of data analysis but also in the whole process of collecting, cleaning, analyzing, and sharing data. This paper presents a procedure for the interactive visualization and analysis of healthcare data using Tableau as a business intelligence tool. Starting with installation of the Tableau Desktop Personal version 10.3, this paper describes the process of understanding and visualizing healthcare data using an example. The example data of colon cancer patients were obtained from health insurance claims in years 2012 and 2013, provided by the Health Insurance Review and Assessment Service. To explore the visualization of healthcare data using Tableau for beginners, this paper describes the creation of a simple view for the average length of stay of colon cancer patients. Since Tableau provides various visualizations and customizations, the level of analysis can be increased with small multiples, view filtering, mark cards, and Tableau charts. Tableau is a software that can help users explore and understand their data by creating interactive visualizations. The software has the advantages that it can be used in conjunction with almost any database, and it is easy to use by dragging and dropping to create an interactive visualization expressing the desired format.

  9. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, the reconstructed 3D models are enlarging in scale and increasing in complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitation, it is difficult to achieve real-time display and interaction with large scale 3D models for some common 3D display software, such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core view-dependent multi-resolution rendering scheme to realize the real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene and 3D camera poses can also be displayed. Furthermore, the memory consumption can be significantly decreased via internal and external memory exchange mechanism, so that it is possible to display a large scale reconstructed scene with over millions of 3D points or triangular meshes in a regular PC with only 4GB RAM.

  10. GEOSPATIAL DATA PROCESSING FOR 3D CITY MODEL GENERATION, MANAGEMENT AND VISUALIZATION

    Directory of Open Access Journals (Sweden)

    I. Toschi

    2017-05-01

    Full Text Available Recent developments of 3D technologies and tools have increased availability and relevance of 3D data (from 3D points to complete city models in the geospatial and geo-information domains. Nevertheless, the potential of 3D data is still underexploited and mainly confined to visualization purposes. Therefore, the major challenge today is to create automatic procedures that make best use of available technologies and data for the benefits and needs of public administrations (PA and national mapping agencies (NMA involved in “smart city” applications. The paper aims to demonstrate a step forward in this process by presenting the results of the SENECA project (Smart and SustaiNablE City from Above – http://seneca.fbk.eu. State-of-the-art processing solutions are investigated in order to (i efficiently exploit the photogrammetric workflow (aerial triangulation and dense image matching, (ii derive topologically and geometrically accurate 3D geo-objects (i.e. building models at various levels of detail and (iii link geometries with non-spatial information within a 3D geo-database management system accessible via web-based client. The developed methodology is tested on two case studies, i.e. the cities of Trento (Italy and Graz (Austria. Both spatial (i.e. nadir and oblique imagery and non-spatial (i.e. cadastral information and building energy consumptions data are collected and used as input for the project workflow, starting from 3D geometry capture and modelling in urban scenarios to geometry enrichment and management within a dedicated webGIS platform.

  11. Geospatial Data Processing for 3d City Model Generation, Management and Visualization

    Science.gov (United States)

    Toschi, I.; Nocerino, E.; Remondino, F.; Revolti, A.; Soria, G.; Piffer, S.

    2017-05-01

    Recent developments of 3D technologies and tools have increased availability and relevance of 3D data (from 3D points to complete city models) in the geospatial and geo-information domains. Nevertheless, the potential of 3D data is still underexploited and mainly confined to visualization purposes. Therefore, the major challenge today is to create automatic procedures that make best use of available technologies and data for the benefits and needs of public administrations (PA) and national mapping agencies (NMA) involved in "smart city" applications. The paper aims to demonstrate a step forward in this process by presenting the results of the SENECA project (Smart and SustaiNablE City from Above - http://seneca.fbk.eu). State-of-the-art processing solutions are investigated in order to (i) efficiently exploit the photogrammetric workflow (aerial triangulation and dense image matching), (ii) derive topologically and geometrically accurate 3D geo-objects (i.e. building models) at various levels of detail and (iii) link geometries with non-spatial information within a 3D geo-database management system accessible via web-based client. The developed methodology is tested on two case studies, i.e. the cities of Trento (Italy) and Graz (Austria). Both spatial (i.e. nadir and oblique imagery) and non-spatial (i.e. cadastral information and building energy consumptions) data are collected and used as input for the project workflow, starting from 3D geometry capture and modelling in urban scenarios to geometry enrichment and management within a dedicated webGIS platform.

  12. 3D interactive topology optimization on hand-held devices

    DEFF Research Database (Denmark)

    Nobel-Jørgensen, Morten; Aage, Niels; Christiansen, Asger Nyman

    2015-01-01

    This educational paper describes the implementation aspects, user interface design considerations and workflow potential of the recently published TopOpt 3D App. The app solves the standard minimum compliance problem in 3D and allows the user to change design settings interactively at any point...... in time during the optimization. Apart from its educational nature, the app may point towards future ways of performing industrial design. Instead of the usual geometrize, then model and optimize approach, the geometry now automatically adapts to the varying boundary and loading conditions. The app...

  13. multi-grids method development for the 3D modeling of the pellet-sheath mechanical interaction. Laboratory internship

    International Nuclear Information System (INIS)

    Leclerc, Willy

    2010-10-01

    In order to use a better approach to understand fuel behaviour by numerical simulation, Adaptive Mesh Refinement methods could become very useful. Today, the use of classical Finite Element methods do not enable a easy study of the different phenomenon undergone by the fuel. Important nodes number and calculus time are the two first reasons making the simulation hard to use and visualize. Moreover results reliability is difficult to ensure with a such kind of method. Adaptive Mesh Refinement methods advantages are important particularly if we need a quick convergence and a better mesh structure. The purpose of our study will be to push the method forward by explaining his qualities and putting into place 2D and 3D pellet/sheath interaction problem. (author) [fr

  14. Three-dimensional (3D) visualization of reflow porosity and modeling of deformation in Pb-free solder joints

    International Nuclear Information System (INIS)

    Dudek, M.A.; Hunter, L.; Kranz, S.; Williams, J.J.; Lau, S.H.; Chawla, N.

    2010-01-01

    The volume, size, and dispersion of porosity in solder joints are known to affect mechanical performance and reliability. Most of the techniques used to characterize the three-dimensional (3D) nature of these defects are destructive. With the enhancements in high resolution computed tomography (CT), the detection limits of intrinsic microstructures have been significantly improved. Furthermore, the 3D microstructure of the material can be used in finite element models to understand their effect on microscopic deformation. In this paper we describe a technique utilizing high resolution (< 1 μm) X-ray tomography for the three-dimensional (3D) visualization of pores in Sn-3.9Ag-0.7Cu/Cu joints. The characteristics of reflow porosity, including volume fraction and distribution, were investigated for two reflow profiles. The size and distribution of porosity size were visualized in 3D for four different solder joints. In addition, the 3D virtual microstructure was incorporated into a finite element model to quantify the effect of voids on the lap shear behavior of a solder joint. The presence, size, and location of voids significantly increased the severity of strain localization at the solder/copper interface.

  15. Human Lumbar Ligamentum Flavum Anatomy for Epidural Anesthesia: Reviewing a 3D MR-Based Interactive Model and Postmortem Samples.

    Science.gov (United States)

    Reina, Miguel A; Lirk, Philipp; Puigdellívol-Sánchez, Anna; Mavar, Marija; Prats-Galino, Alberto

    2016-03-01

    The ligamentum flavum (LF) forms the anatomic basis for the loss-of-resistance technique essential to the performance of epidural anesthesia. However, the LF presents considerable interindividual variability, including the possibility of midline gaps, which may influence the performance of epidural anesthesia. We devise a method to reconstruct the anatomy of the digitally LF based on magnetic resonance images to clarify the exact limits and edges of LF and its different thickness, depending on the area examined, while avoiding destructive methods, as well as the dissection processes. Anatomic cadaveric cross sections enabled us to visually check the definition of the edges along the entire LF and compare them using 3D image reconstruction methods. Reconstruction was performed in images obtained from 7 patients. Images from 1 patient were used as a basis for the 3D spinal anatomy tool. In parallel, axial cuts, 2 to 3 cm thick, were performed in lumbar spines of 4 frozen cadavers. This technique allowed us to identify the entire ligament and its exact limits, while avoiding alterations resulting from cutting processes or from preparation methods. The LF extended between the laminas of adjacent vertebrae at all vertebral levels of the patients examined, but midline gaps are regularly encountered. These anatomical variants were reproduced in a 3D portable document format. The major anatomical features of the LF were reproduced in the 3D model. Details of its structure and variations of thickness in successive sagittal and axial slides could be visualized. Gaps within LF previously studied in cadavers have been identified in our interactive 3D model, which may help to understand their nature, as well as possible implications for epidural techniques.

  16. 3D visualization of a resistivity data set - an example from a sludge disposal site

    International Nuclear Information System (INIS)

    Bernstone, C.; Dahlin, T.; Jonsson, P.

    1997-01-01

    A relatively large 2D inverted CVES resistivity data set from a waste pond area in southern Sweden was visualized as an animated 3D model using state-of-the-art techniques and tools. The presentation includes a description of the hardware and software used, outline of the case study and examples of scenes from the animation

  17. D3GB: An Interactive Genome Browser for R, Python, and WordPress.

    Science.gov (United States)

    Barrios, David; Prieto, Carlos

    2017-05-01

    Genome browsers are useful not only for showing final results but also for improving analysis protocols, testing data quality, and generating result drafts. Its integration in analysis pipelines allows the optimization of parameters, which leads to better results. New developments that facilitate the creation and utilization of genome browsers could contribute to improving analysis results and supporting the quick visualization of genomic data. D3 Genome Browser is an interactive genome browser that can be easily integrated in analysis protocols and shared on the Web. It is distributed as an R package, a Python module, and a WordPress plugin to facilitate its integration in pipelines and the utilization of platform capabilities. It is compatible with popular data formats such as GenBank, GFF, BED, FASTA, and VCF, and enables the exploration of genomic data with a Web browser.

  18. 3D composite image, 3D MRI, 3D SPECT, hydrocephalus

    International Nuclear Information System (INIS)

    Mito, T.; Shibata, I.; Sugo, N.; Takano, M.; Takahashi, H.

    2002-01-01

    The three-dimensional (3D)SPECT imaging technique we have studied and published for the past several years is an analytical tool that permits visual expression of the cerebral circulation profile in various cerebral diseases. The greatest drawback of SPECT is that the limitation on precision of spacial resolution makes intracranial localization impossible. In 3D SPECT imaging, intracranial volume and morphology may vary with the threshold established. To solve this problem, we have produced complimentarily combined SPECT and helical-CT 3D images by means of general-purpose visualization software for intracranial localization. In hydrocephalus, however, the key subject to be studied is the profile of cerebral circulation around the ventricles of the brain. This suggests that, for displaying the cerebral ventricles in three dimensions, CT is a difficult technique whereas MRI is more useful. For this reason, we attempted to establish the profile of cerebral circulation around the cerebral ventricles by the production of combined 3D images of SPECT and MRI. In patients who had shunt surgery for hydrocephalus, a difference between pre- and postoperative cerebral circulation profiles was assessed by a voxel distribution curve, 3D SPECT images, and combined 3D SPECT and MRI images. As the shunt system in this study, an Orbis-Sigma valve of the automatic cerebrospinal fluid volume adjustment type was used in place of the variable pressure type Medos valve currently in use, because this device requires frequent changes in pressure and a change in pressure may be detected after MRI procedure. The SPECT apparatus used was PRISM3000 of the three-detector type, and 123I-IMP was used as the radionuclide in a dose of 222 MBq. MRI data were collected with an MAGNEXa+2 with a magnetic flux density of 0.5 tesla under the following conditions: field echo; TR 50 msec; TE, 10 msec; flip, 30ueK; 1 NEX; FOV, 23 cm; 1-mm slices; and gapless. 3D images are produced on the workstation TITAN

  19. Comparative case study between D3 and highcharts on lustre data visualization

    Science.gov (United States)

    ElTayeby, Omar; John, Dwayne; Patel, Pragnesh; Simmerman, Scott

    2013-12-01

    One of the challenging tasks in visual analytics is to target clustered time-series data sets, since it is important for data analysts to discover patterns changing over time while keeping their focus on particular subsets. In order to leverage the humans ability to quickly visually perceive these patterns, multivariate features should be implemented according to the attributes available. However, a comparative case study has been done using JavaScript libraries to demonstrate the differences in capabilities of using them. A web-based application to monitor the Lustre file system for the systems administrators and the operation teams has been developed using D3 and Highcharts. Lustre file systems are responsible of managing Remote Procedure Calls (RPCs) which include input output (I/O) requests between clients and Object Storage Targets (OSTs). The objective of this application is to provide time-series visuals of these calls and storage patterns of users on Kraken, a University of Tennessee High Performance Computing (HPC) resource in Oak Ridge National Laboratory (ORNL).

  20. Savage Modeling and Analysis Language (SMAL): Metadata for Tactical Simulations and X3D Visualizations

    National Research Council Canada - National Science Library

    Rauch, Travis M

    2006-01-01

    Visualizing operations environments in three-dimensions is in keeping with the military's drive to increase the speed and accuracy with which warfighters make decisions in the command center and in the field. Three-dimensional (3D...

  1. The (un)usefulness of interactive exploration in building 3D- mental representations.

    NARCIS (Netherlands)

    Meijer, F.; van den Broek, Egon

    The generation of mental representations from visual images is crucial in 3-D object recognition. In two experiments, thirty-six participants were divided into a low, middle, and high visuospatial ability (VSA) group, which was determined by Vandenberg and Kuse's MRT-A test (1978 Perception and

  2. Developing a 3D Game Design Authoring Package to Assist Students' Visualization Process in Design Thinking

    Science.gov (United States)

    Kuo, Ming-Shiou; Chuang, Tsung-Yen

    2013-01-01

    The teaching of 3D digital game design requires the development of students' meta-skills, from story creativity to 3D model construction, and even the visualization process in design thinking. The characteristics a good game designer should possess have been identified as including redesign things, creativity thinking and the ability to…

  3. Generic Space Science Visualization in 2D/3D using SDDAS

    Science.gov (United States)

    Mukherjee, J.; Murphy, Z. B.; Gonzalez, C. A.; Muller, M.; Ybarra, S.

    2017-12-01

    The Southwest Data Display and Analysis System (SDDAS) is a flexible multi-mission / multi-instrument software system intended to support space physics data analysis, and has been in active development for over 20 years. For the Magnetospheric Multi-Scale (MMS), Juno, Cluster, and Mars Express missions, we have modified these generic tools for visualizing data in two and three dimensions. The SDDAS software is open source and makes use of various other open source packages, including VTK and Qwt. The software offers interactive plotting as well as a Python and Lua module to modify the data before plotting. In theory, by writing a Lua or Python module to read the data, any data could be used. Currently, the software can natively read data in IDFS, CEF, CDF, FITS, SEG-Y, ASCII, and XLS formats. We have integrated the software with other Python packages such as SPICE and SpacePy. Included with the visualization software is a database application and other utilities for managing data that can retrieve data from the Cluster Active Archive and Space Physics Data Facility at Goddard, as well as other local archives. Line plots, spectrograms, geographic, volume plots, strip charts, etc. are just some of the types of plots one can generate with SDDAS. Furthermore, due to the design, output is not limited to strictly visualization as SDDAS can also be used to generate stand-alone IDL or Python visualization code.. Lastly, SDDAS has been successfully used as a backend for several web based analysis systems as well.

  4. Principle and engineering implementation of 3D visual representation and indexing of medical diagnostic records (Conference Presentation)

    Science.gov (United States)

    Shi, Liehang; Sun, Jianyong; Yang, Yuanyuan; Ling, Tonghui; Wang, Mingqing; Zhang, Jianguo

    2017-03-01

    Purpose: Due to the generation of a large number of electronic imaging diagnostic records (IDR) year after year in a digital hospital, The IDR has become the main component of medical big data which brings huge values to healthcare services, professionals and administration. But a large volume of IDR presented in a hospital also brings new challenges to healthcare professionals and services as there may be too many IDRs for each patient so that it is difficult for a doctor to review all IDR of each patient in a limited appointed time slot. In this presentation, we presented an innovation method which uses an anatomical 3D structure object visually to represent and index historical medical status of each patient, which is called Visual Patient (VP) in this presentation, based on long term archived electronic IDR in a hospital, so that a doctor can quickly learn the historical medical status of the patient, quickly point and retrieve the IDR he or she interested in a limited appointed time slot. Method: The engineering implementation of VP was to build 3D Visual Representation and Index system called VP system (VPS) including components of natural language processing (NLP) for Chinese, Visual Index Creator (VIC), and 3D Visual Rendering Engine.There were three steps in this implementation: (1) an XML-based electronic anatomic structure of human body for each patient was created and used visually to index the all of abstract information of each IDR for each patient; (2)a number of specific designed IDR parsing processors were developed and used to extract various kinds of abstract information of IDRs retrieved from hospital information systems; (3) a 3D anatomic rendering object was introduced visually to represent and display the content of VIO for each patient. Results: The VPS was implemented in a simulated clinical environment including PACS/RIS to show VP instance to doctors. We setup two evaluation scenario in a hospital radiology department to evaluate whether

  5. LATIS3D The Gold Standard for Laser-Tissue-Interaction Modeling

    CERN Document Server

    London, R A; Gentile, N A; Kim, B M; Makarewicz, A M; Vincent, L; Yang, Y B

    2000-01-01

    The goal of this LDRD project has been to create LATIS3D--the world's premier computer program for laser-tissue interaction modeling. The development was based on recent experience with the 2D LATIS code and the ASCI code, KULL. With LATIS3D, important applications in laser medical therapy were researched including dynamical calculations of tissue emulsification and ablation, photothermal therapy, and photon transport for photodynamic therapy. This project also enhanced LLNL's core competency in laser-matter interactions and high-energy-density physics by pushing simulation codes into new parameter regimes and by attracting external expertise. This will benefit both existing LLNL programs such as ICF and SBSS and emerging programs in medical technology and other laser applications.

  6. LATIS3D: The Gold Standard for Laser-Tissue-Interaction Modeling

    International Nuclear Information System (INIS)

    London, R.A.; Makarewicz, A.M.; Kim, B.M.; Gentile, N.A.; Yang, Y.B.; Brlik, M.; Vincent, L.

    2000-01-01

    The goal of this LDRD project has been to create LATIS3D--the world's premier computer program for laser-tissue interaction modeling. The development was based on recent experience with the 2D LATIS code and the ASCI code, KULL. With LATIS3D, important applications in laser medical therapy were researched including dynamical calculations of tissue emulsification and ablation, photothermal therapy, and photon transport for photodynamic therapy. This project also enhanced LLNL's core competency in laser-matter interactions and high-energy-density physics by pushing simulation codes into new parameter regimes and by attracting external expertise. This will benefit both existing LLNL programs such as ICF and SBSS and emerging programs in medical technology and other laser applications

  7. Visualization of cranial nerves I-XII: value of 3D CISS and T2-weighted FSE sequences

    Energy Technology Data Exchange (ETDEWEB)

    Yousry, I.; Camelio, S.; Wiesmann, M.; Brueckmann, H.; Yousry, T.A. [Department of Neuroradiology, Klinikum Grosshadern, Ludwig-Maximilians University, Marchioninistrasse 15, D-81377 Munich (Germany); Schmid, U.D. [Neurosurgical Unit, Klinik im Park, 8000 Zurich (Switzerland); Horsfield, M.A. [Department of Medical Physics, University of Leicester, Leicester LE1 5WW (United Kingdom)

    2000-07-01

    The aim of this study was to evaluate the sensitivity of the three-dimensional constructive interference of steady state (3D CISS) sequence (slice thickness 0.7 mm) and that of the T2-weighted fast spin echo (T2-weighted FSE) sequence (slice thickness 3 mm) for the visualization of all cranial nerves in their cisternal course. Twenty healthy volunteers were examined using the T2-weighted FSE and the 3D CISS sequences. Three observers evaluated independently the cranial nerves NI-NXII in their cisternal course. The rates for successful visualization of each nerve for 3D CISS (and for T2-weighted FSE in parentheses) were as follows: NI, NII, NV, NVII, NVIII 40 of 40 (40 of 40), NIII 40 of 40 (18 of 40), NIV 19 of 40 (3 of 40), NVI 39 of 40 (5 of 40), NIX, X, XI 40 of 40 (29 of 40), and NXII 40 of 40 (4 of 40). Most of the cranial nerves can be reliably assessed when using the 3D CISS and the T2-weighted FSE sequences. Increasing the spatial resolution when using the 3D CISS sequence increases the reliability of the identification of the cranial nerves NIII-NXII. (orig.)

  8. Visualization of cranial nerves I-XII: value of 3D CISS and T2-weighted FSE sequences

    International Nuclear Information System (INIS)

    Yousry, I.; Camelio, S.; Wiesmann, M.; Brueckmann, H.; Yousry, T.A.; Schmid, U.D.; Horsfield, M.A.

    2000-01-01

    The aim of this study was to evaluate the sensitivity of the three-dimensional constructive interference of steady state (3D CISS) sequence (slice thickness 0.7 mm) and that of the T2-weighted fast spin echo (T2-weighted FSE) sequence (slice thickness 3 mm) for the visualization of all cranial nerves in their cisternal course. Twenty healthy volunteers were examined using the T2-weighted FSE and the 3D CISS sequences. Three observers evaluated independently the cranial nerves NI-NXII in their cisternal course. The rates for successful visualization of each nerve for 3D CISS (and for T2-weighted FSE in parentheses) were as follows: NI, NII, NV, NVII, NVIII 40 of 40 (40 of 40), NIII 40 of 40 (18 of 40), NIV 19 of 40 (3 of 40), NVI 39 of 40 (5 of 40), NIX, X, XI 40 of 40 (29 of 40), and NXII 40 of 40 (4 of 40). Most of the cranial nerves can be reliably assessed when using the 3D CISS and the T2-weighted FSE sequences. Increasing the spatial resolution when using the 3D CISS sequence increases the reliability of the identification of the cranial nerves NIII-NXII. (orig.)

  9. 3D printing meets computational astrophysics: deciphering the structure of η Carinae's inner colliding winds

    Science.gov (United States)

    Madura, T. I.; Clementel, N.; Gull, T. R.; Kruip, C. J. H.; Paardekooper, J.-P.

    2015-06-01

    We present the first 3D prints of output from a supercomputer simulation of a complex astrophysical system, the colliding stellar winds in the massive (≳120 M⊙), highly eccentric (e ˜ 0.9) binary star system η Carinae. We demonstrate the methodology used to incorporate 3D interactive figures into a PDF (Portable Document Format) journal publication and the benefits of using 3D visualization and 3D printing as tools to analyse data from multidimensional numerical simulations. Using a consumer-grade 3D printer (MakerBot Replicator 2X), we successfully printed 3D smoothed particle hydrodynamics simulations of η Carinae's inner (r ˜ 110 au) wind-wind collision interface at multiple orbital phases. The 3D prints and visualizations reveal important, previously unknown `finger-like' structures at orbital phases shortly after periastron (φ ˜ 1.045) that protrude radially outwards from the spiral wind-wind collision region. We speculate that these fingers are related to instabilities (e.g. thin-shell, Rayleigh-Taylor) that arise at the interface between the radiatively cooled layer of dense post-shock primary-star wind and the fast (3000 km s-1), adiabatic post-shock companion-star wind. The success of our work and easy identification of previously unrecognized physical features highlight the important role 3D printing and interactive graphics can play in the visualization and understanding of complex 3D time-dependent numerical simulations of astrophysical phenomena.

  10. Visualization of spatial-temporal data based on 3D virtual scene

    Science.gov (United States)

    Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang

    2009-10-01

    The main purpose of this paper is to realize the expression of the three-dimensional dynamic visualization of spatialtemporal data based on three-dimensional virtual scene, using three-dimensional visualization technology, and combining with GIS so that the people's abilities of cognizing time and space are enhanced and improved by designing dynamic symbol and interactive expression. Using particle systems, three-dimensional simulation, virtual reality and other visual means, we can simulate the situations produced by changing the spatial location and property information of geographical entities over time, then explore and analyze its movement and transformation rules by changing the interactive manner, and also replay history and forecast of future. In this paper, the main research object is the vehicle track and the typhoon path and spatial-temporal data, through three-dimensional dynamic simulation of its track, and realize its timely monitoring its trends and historical track replaying; according to visualization techniques of spatialtemporal data in Three-dimensional virtual scene, providing us with excellent spatial-temporal information cognitive instrument not only can add clarity to show spatial-temporal information of the changes and developments in the situation, but also be used for future development and changes in the prediction and deduction.

  11. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    Science.gov (United States)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  12. Participation and 3D Visualization Tools

    DEFF Research Database (Denmark)

    Mullins, Michael; Jensen, Mikkel Holm; Henriksen, Sune

    2004-01-01

    With a departure point in a workshop held at the VR Media Lab at Aalborg University , this paper deals with aspects of public participation and the use of 3D visualisation tools. The workshop grew from a desire to involve a broad collaboration between the many actors in the city through using new...... perceptions of architectural representation in urban design where 3D visualisation techniques are used. It is the authors? general finding that, while 3D visualisation media have the potential to increase understanding of virtual space for the lay public, as well as for professionals, the lay public require...

  13. Interactive visualization to advance earthquake simulation

    Science.gov (United States)

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists who are trained to interpret, the often limited, geological and geophysical data available from field observations. ?? Birkhaueser 2008.

  14. 3D Flow visualization in virtual reality

    Science.gov (United States)

    Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa

    2017-11-01

    By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can ``scroll'' forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.

  15. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning.

    Science.gov (United States)

    Gee, Carole T

    2013-11-01

    As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction.

  16. Interactive 3D audio: Enhancing awareness of details in immersive soundscapes?

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Schwartz, Stephen; Larsen, Jan

    2012-01-01

    Spatial audio and the possibility of interacting with the audio environment is thought to increase listeners' attention to details in a soundscape. This work examines if interactive 3D audio enhances listeners' ability to recall details in a soundscape. Nine different soundscapes were constructed...

  17. Dynamic accommodative response to different visual stimuli (2D vs 3D) while watching television and while playing Nintendo 3DS console.

    Science.gov (United States)

    Oliveira, Sílvia; Jorge, Jorge; González-Méijome, José M

    2012-09-01

    The aim of the present study was to compare the accommodative response to the same visual content presented in two dimensions (2D) and stereoscopically in three dimensions (3D) while participants were either watching a television (TV) or a Nintendo 3DS console. Twenty-two university students, with a mean age of 20.3 ± 2.0 years (mean ± S.D.), were recruited to participate in the TV experiment, and fifteen, with a mean age of 20.1 ± 1.5 years, took part in the Nintendo 3DS console study. The accommodative response was measured using a Grand Seiko WAM 5500 autorefractor. In the TV experiment, three conditions were used initially: the film was viewed in 2D mode (TV2D without glasses), the same sequence was watched in 2D whilst shutter-glasses were worn (TV2D with glasses) and the sequence was viewed in 3D mode (TV3D). Measurements were taken for 5 min in each condition, and these sections were sub-divided into ten 30-s segments to examine changes within the film. In addition, the accommodative response to three points of different disparity of one 3D frame was assessed for 30 s. In the Nintendo experiment, two conditions were employed - 2D viewing and stereoscopic 3D viewing. In the TV experiment, no statistically significant differences were found between the accommodative response with TV2D without glasses (-0.38 ± 0.32D, mean ± S.D.) and TV3D (-0.37 ± 0.34D). Also, no differences were found between the various segments of the film, or between the accommodative response to different points of one frame (p > 0.05). A significant difference (p = 0.015) was found, however, between the TV2D with (-0.32 ± 0.32D) and without glasses (-0.38 ± 0.32D). In the Nintendo experiment the accommodative responses obtained in modes 2D (-2.57 ± 0.30D) and 3D (-2.49 ± 0.28D) were significantly different (paired t-test p = 0.03). The need to use shutter-glasses may affect the accommodative response during the viewing of displays, and the accommodative response when playing

  18. Visualization of the 3D shape of the articular cartilage of the femoral head from MR images

    International Nuclear Information System (INIS)

    Kubota, Tetsuya; Sato, Yoshinobu; Nakanishi, Katsuyuki

    1999-01-01

    This paper describes methods for visualizing the three-dimensional (3D) cartilage thickness distribution from MR images. Cartilage thickness is one of the most important factors in joint diseases. Although the evaluation of cartilage thickness has received considerable attention from orthopedic surgeons and radiologists, evaluation is usually performed based on visual analysis or measurements obtained using calipers on original MR images. Our aim is to employ computerized quantification of MR images for the evaluation of the cartilage thickness of the femoral head. First, we extract an ROI and interpolate all ROI images by sinc interpolation. Next, we extract cartilage regions from MR images using a 3D multiscale sheet filter. Finally, we reconstruct 3D shapes by summing the extracted cartilage regions. We investigate partial volume effects in this method using synthesized images, and show results for in vitro and in vivo MR images. (author)

  19. Development of tactile floor plan for the blind and the visually impaired by 3D printing technique

    Directory of Open Access Journals (Sweden)

    Raša Urbas

    2016-07-01

    Full Text Available The aim of the research was to produce tactile floor plans for blind and visually impaired people for use in the museum. For the production of the tactile floor plans, the 3D printing technique was selected from among three different techniques. The 3D prints were made of white and colored ABS polymer materials. The development of the different elements of the tactile floor plans, as well as the problems encountered and the solutions found during 3D printing, are described in the paper.

  20. Identification of extracellular signal-regulated kinase 3 as a new interaction partner of cyclin D3

    International Nuclear Information System (INIS)

    Sun Maoyun; Wei Yuanyan; Yao Luyang; Xie Jianhui; Chen Xiaoning; Wang Hanzhou; Jiang Jianhai; Gu Jianxin

    2006-01-01

    Cyclin D3, like the cyclin D1 and D2 isoforms, is a crucial component of the core cell cycle machinery in mammalian cells. It also exhibits unique properties in many other physiological processes. In the present study, using yeast two-hybrid screening, we identified ERK3, an atypical mitogen-activated protein kinase (MAPK), as a cyclin D3 binding partner. GST pull-down assays showed that cyclin D3 interacts directly and specifically with ERK3 in vitro. The binding of cyclin D3 and ERK3 was further confirmed in vivo by co-immunoprecipitation assay and confocal microscopic analysis. Moreover, the carboxy-terminal extension of ERK3 was responsible for its association with intact cyclin D3. These findings further expand the distinct roles of cyclin D3 and suggest a potential activity of ERK3 in cell proliferation.

  1. An Innovative Direct-Interaction-Enabled Augmented-Reality 3D System

    Directory of Open Access Journals (Sweden)

    Sheng-Hsiung Chang

    2013-01-01

    Full Text Available Previous augmented-reality (AR applications have required users to observe the integration of real and virtual images on a display. This study proposes a novel concept regarding AR applications. By integrating AR techniques with marker identification, virtual-image output, imaging, and image-interaction processes, this study rendered virtual images that can interact with predefined markers in a real three-dimensional (3D environment.

  2. 3D visualization software to analyze topological outcomes of topoisomerase reactions

    Science.gov (United States)

    Darcy, I. K.; Scharein, R. G.; Stasiak, A.

    2008-01-01

    The action of various DNA topoisomerases frequently results in characteristic changes in DNA topology. Important information for understanding mechanistic details of action of these topoisomerases can be provided by investigating the knot types resulting from topoisomerase action on circular DNA forming a particular knot type. Depending on the topological bias of a given topoisomerase reaction, one observes different subsets of knotted products. To establish the character of topological bias, one needs to be aware of all possible topological outcomes of intersegmental passages occurring within a given knot type. However, it is not trivial to systematically enumerate topological outcomes of strand passage from a given knot type. We present here a 3D visualization software (TopoICE-X in KnotPlot) that incorporates topological analysis methods in order to visualize, for example, knots that can be obtained from a given knot by one intersegmental passage. The software has several other options for the topological analysis of mechanisms of action of various topoisomerases. PMID:18440983

  3. Development of 3D CAD system as a design tool for PEACER development

    International Nuclear Information System (INIS)

    Lee, H. W.; Jung, K. J.; Jung, S. H.; Hwang, I. S.

    2003-01-01

    In an effort to resolve generic concerns with current power reactors, PEACER[1] has been developed as a proliferation-resistant waste transmutation reactor based on a unique combination of technologies of a proven fast reactor and the heavy liquid metal coolant. In order to develop engineering design and visualize its performance, a three dimensional computer aided design (3D CAD) method has been devised. Based on conceptual design, system, structure and components of PEACER are defined. Using results from finite element stress analyzer, computational fluid dynamics tool, nuclear analysis tool, etc, 3D visualization is achieved on the geometric construct based on CATIA[3]. A 3D visualization environment is utilized not only to overcome the integration complexity but also to manipulate data flow such as meshing information used in analysis codes. The 3D CAD system in this paper includes an open language, Virtual Reality Modeling Language (VRML)[4,5], to deliver analyses results on 3D objects, interactively. Such a modeling environment is expected to improve the efficiency of designing the conceptual reactor, PEACER, reducing time and cost. Results of 3D design and system performance simulation will be presented

  4. Value of PET/CT 3D visualization of head and neck squamous cell carcinoma extended to mandible.

    Science.gov (United States)

    Lopez, R; Gantet, P; Julian, A; Hitzel, A; Herbault-Barres, B; Alshehri, S; Payoux, P

    2018-05-01

    To study an original 3D visualization of head and neck squamous cell carcinoma extending to the mandible by using [18F]-NaF PET/CT and [18F]-FDG PET/CT imaging, along with a new innovative FDG and NaF image analysis using dedicated software. The main interest of the 3D evaluation is to obtain a better visualization of bone extension in such cancers, which could also help avoid unsatisfactory surgical treatment later on. A prospective study was carried out from November 2016 to September 2017. Twenty patients with head and neck squamous cell carcinoma extending to the mandible (stage 4 in the UICC classification) underwent [18F]-NaF and [18F]-FDG PET/CT. We compared the delineation of 3D quantification obtained with [18F]-NaF and [18F]-FDG PET/CT. In order to carry out this comparison, a method of visualisation and quantification of PET images was developed. This new approach was based on a process of quantification of radioactive activity within the mandibular bone that objectively defined the significant limits of this activity on PET images and on a 3D visualization. Furthermore, the spatial limits obtained by analysis of the PET/CT 3D images were compared to those obtained by histopathological examination of mandibular resection, which confirmed intraosseous extension to the mandible. The [18F]-NaF PET/CT imaging confirmed the mandibular extension in 85% of cases, whereas this extension was not shown by [18F]-FDG PET/CT imaging. The [18F]-NaF PET/CT was significantly more accurate than [18F]-FDG PET/CT in 3D assessment of intraosseous extension of head and neck squamous cell carcinoma. This new 3D information shows the importance of this imaging approach for such cancers. All cases of mandibular extension suspected on [18F]-NaF PET/CT imaging were confirmed based on histopathological results as a reference. The [18F]-NaF PET/CT 3D visualization should be included in the pre-treatment workups of head and neck cancers. With the use of dedicated software which enables objective delineation of

  5. P1-1: The Effect of Convergence Training on Visual Discomfort in 3D TV Viewing

    Directory of Open Access Journals (Sweden)

    Hyun Min Jeon

    2012-10-01

    Full Text Available The present study investigated whether convergence training has an effect on reducing visual discomfort when viewing a stereoscopic TV. Participants were assigned to either a training group or a control group. In the training group, one of two different training procedures was provided: gradual or random change in the disparities of the bar stimulus used for convergence training. The training itself was very effective, in that the convergence fusional range improved after three repeated training sessions given at two-week intervals. In order to evaluate the effect of convergence training on visual discomfort, visual discomfort in 3D TV viewing was measured before and after the training sessions. The results showed a significant reduction in visual discomfort after training in only one of the training groups. These results suggest that repeated convergence training might be helpful in reducing visual discomfort. Further studies are needed to determine the most effective parameters for this type of training.

  6. Towards an Integrated Visualization Of Semantically Enriched 3D City Models: An Ontology of 3D Visualization Techniques

    OpenAIRE

    Métral, Claudine; Ghoula, Nizar; Falquet, Gilles

    2012-01-01

    3D city models - which represent in 3 dimensions the geometric elements of a city - are increasingly used for an intended wide range of applications. Such uses are made possible by using semantically enriched 3D city models and by presenting such enriched 3D city models in a way that allows decision-making processes to be carried out from the best choices among sets of objectives, and across issues and scales. In order to help in such a decision-making process we have defined a framework to f...

  7. Comparative evaluation of HD 2D/3D laparoscopic monitors and benchmarking to a theoretically ideal 3D pseudodisplay: even well-experienced laparoscopists perform better with 3D.

    Science.gov (United States)

    Wilhelm, D; Reiser, S; Kohn, N; Witte, M; Leiner, U; Mühlbach, L; Ruschin, D; Reiner, W; Feussner, H

    2014-08-01

    Though theoretically superior to standard 2D visualization, 3D video systems have not yet achieved a breakthrough in laparoscopy. The latest 3D monitors, including autostereoscopic displays and high-definition (HD) resolution, are designed to overcome the existing limitations. We performed a randomized study on 48 individuals with different experience levels in laparoscopy. Three different 3D displays (glasses-based 3D monitor, autostereoscopic display, and a mirror-based theoretically ideal 3D display) were compared to a 2D HD display by assessing multiple performance and mental workload parameters and rating the subjects during a laparoscopic suturing task. Electromagnetic tracking provided information on the instruments' pathlength, movement velocity, and economy. The usability, the perception of visual discomfort, and the quality of image transmission of each monitor were subjectively rated. Almost all performance parameters were superior with the conventional glasses-based 3D display compared to the 2D display and the autostereoscopic display, but were often significantly exceeded by the mirror-based 3D display. Subjects performed a task faster and with greater precision when visualization was achieved with the 3D and the mirror-based display. Instrument pathlength was shortened by improved depth perception. Workload parameters (NASA TLX) did not show significant differences. Test persons complained of impaired vision while using the autostereoscopic monitor. The 3D and 2D displays were rated user-friendly and applicable in daily work. Experienced and inexperienced laparoscopists profited equally from using a 3D display, with an improvement in task performance of about 20%. Novel 3D displays improve laparoscopic interventions as a result of faster performance and higher precision without causing a higher mental workload. Therefore, they have the potential to significantly impact the further development of minimally invasive surgery. However, as shown by the

  8. Hybrid 2-D and 3-D Immersive and Interactive User Interface for Scientific Data Visualization

    Science.gov (United States)

    2017-08-01

    pvOSPRay real-time rendering capability is a crucial component in our workflow. (Report by Simon Su and Luis Bravo, Vehicle Technology Directorate, ARL; approved for public release, distribution is unlimited.)

  9. Development of 3D CAD system as a design tool for PEACER development

    International Nuclear Information System (INIS)

    Jeong, Kwang Jin; Lee, Hyoung Won; Jeong, Seung Ho; Shin, Jong Gye; Hwang, Il Soon

    2003-01-01

    In an effort to resolve generic concerns with current power reactors, PEACER has been developed as a proliferation-resistant waste transmutation reactor based on a unique combination of technologies of a proven fast reactor and the heavy liquid metal coolant. In order to develop engineering design and visualize its performance, a three-dimensional computer aided design (3D CAD) method has been devised. Based on conceptual design, system, structure and components of PEACER are defined. Using results from finite element stress analyzer, computational fluid dynamics tool, nuclear analysis tool, etc, 3D visualization is achieved on the geometric construct based on CATIA. A 3D visualization environment is utilized not only to overcome the integration complexity but also to manipulate data flow such as meshing information used in analysis codes. The 3D CAD system in this paper includes an open language, Virtual Reality Modeling Language (VRML), to deliver analyses results on 3D objects, interactively. Such modeling environment is expected to improve the efficiency of designing the conceptual reactor, PEACER, reducing time and cost. Results of 3D design and stress analysis simulation will be presented as an example case. (author)

  10. Using Interactive Visualization to Analyze Solid Earth Data and Geodynamics Models

    Science.gov (United States)

    Kellogg, L. H.; Kreylos, O.; Billen, M. I.; Hamann, B.; Jadamec, M. A.; Rundle, J. B.; van Aalsburg, J.; Yikilmaz, M. B.

    2008-12-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. Major projects such as EarthScope and GeoEarthScope are producing the data needed to characterize the structure and kinematics of Earth's surface and interior at unprecedented resolution. At the same time, high-performance computing enables high-precision and fine-detail simulation of geodynamics processes, complementing the observational data. To facilitate interpretation and analysis of these datasets, to evaluate models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. VR has traditionally been used primarily as a presentation tool allowing active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for accelerated scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. Our approach to VR takes advantage of the specialized skills of geoscientists who are trained to interpret geological and geophysical data generated from field observations. Interactive tools allow the scientist to explore and interpret geodynamic models, tomographic models, and topographic observations, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulations or field observations. The use of VR technology enables us to improve our interpretation of crust and mantle structure and of geodynamical processes. Mapping tools based on computer visualization allow virtual "field studies" in inaccessible regions, and an interactive tool allows us to construct digital fault models for use in numerical models. Using the interactive tools on a high-end platform such as an immersive virtual reality

  11. High-resolution digital 3D models of Algar do Penico Chamber: limitations, challenges, and potential

    Directory of Open Access Journals (Sweden)

    Ivo Silvestre M.Sc.

    2015-01-01

    Full Text Available The study of karst and its geomorphological structures is important for understanding the relationships between hydrology and climate over geological time. In that context, we conducted a terrestrial laser-scan survey to map geomorphological structures in the karst cave of Algar do Penico in southern Portugal. The point cloud data set obtained was used to generate 3D meshes with different levels of detail, allowing the limitations of mapping capabilities to be explored. In addition to cave mapping, the study focuses on 3D-mesh analysis, including the development of two algorithms for determination of stalactite extremities and contour lines, and on the interactive visualization of 3D meshes on the Web. Data processing and analysis were performed using freely available open-source software. For interactive visualization, we adopted a framework based on Web standards X3D, WebGL, and X3DOM. This solution gives both the general public and researchers access to 3D models and to additional data produced from map tools analyses through a web browser, without the need for plug-ins.
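
    The record mentions two mesh-analysis algorithms, one of which determines contour lines on the chamber's 3D mesh. The sketch below is a generic, hedged illustration of how such contour lines can be derived by intersecting each triangle with horizontal planes; it is not the authors' algorithm, and the variable names are assumptions.

```python
# Generic sketch: contour lines of a triangle mesh as plane/triangle intersections.
import numpy as np

def contour_segments(vertices, faces, z_level):
    """vertices: (N, 3) float array; faces: (M, 3) int array.
    Returns line segments where triangles cross the plane z = z_level."""
    segments = []
    for tri in faces:
        pts = vertices[tri]                      # the triangle's three corner points
        d = pts[:, 2] - z_level                  # signed distance of each corner to the plane
        crossings = []
        for i in range(3):
            j = (i + 1) % 3
            if d[i] * d[j] < 0:                  # this edge straddles the plane
                t = d[i] / (d[i] - d[j])         # linear interpolation parameter
                crossings.append(pts[i] + t * (pts[j] - pts[i]))
        if len(crossings) == 2:
            segments.append((crossings[0], crossings[1]))
    return segments
```

    Repeating this for a series of z levels yields the contour set; segments that share endpoints can then be chained into closed contour polylines for display in a web viewer such as the X3DOM-based one described in the record.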

  12. Do-It-Yourself: 3D Models of Hydrogenic Orbitals through 3D Printing

    Science.gov (United States)

    Griffith, Kaitlyn M.; de Cataldo, Riccardo; Fogarty, Keir H.

    2016-01-01

    Introductory chemistry students often have difficulty visualizing the 3-dimensional shapes of the hydrogenic electron orbitals without the aid of physical 3D models. Unfortunately, commercially available models can be quite expensive. 3D printing offers a solution for producing models of hydrogenic orbitals. 3D printing technology is widely…
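
    As a hedged companion sketch (not taken from the article), a hydrogenic orbital model can be produced programmatically: evaluate the orbital density on a grid, extract an isosurface with marching cubes, and write a mesh that slicing software for 3D printers accepts. The choice of the 2p_z orbital, the grid extent, and the isovalue fraction below are arbitrary, and a recent scikit-image version is assumed; the vertices come out in voxel units and would need scaling to physical dimensions before printing.

```python
# Hedged sketch: build a printable isosurface of a hydrogenic 2p_z orbital.
import numpy as np
from skimage import measure

a0 = 1.0  # Bohr radius in arbitrary units
grid = np.linspace(-12, 12, 120)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)
psi = Z * np.exp(-r / (2 * a0))        # unnormalized 2p_z wavefunction: psi ~ z * exp(-r/2a0)
density = psi**2

# Extract a surface at a fraction of the peak density; the 0.1 factor is an arbitrary choice.
verts, faces, _, _ = measure.marching_cubes(density, level=0.1 * density.max())

# Write a minimal ASCII STL (vertices are in voxel units; rescale before printing).
with open("orbital_2pz.stl", "w") as f:
    f.write("solid orbital\n")
    for tri in faces:
        p0, p1, p2 = verts[tri]
        n = np.cross(p1 - p0, p2 - p0)
        n = n / (np.linalg.norm(n) + 1e-12)
        f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
        for p in (p0, p1, p2):
            f.write(f"      vertex {p[0]} {p[1]} {p[2]}\n")
        f.write("    endloop\n  endfacet\n")
    f.write("endsolid orbital\n")
```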

  13. 3D visualization of port simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Horsthemke, W. H.; Macal, C. M.; Nevins, M. R.

    1999-06-14

    Affordable and realistic three dimensional visualization technology can be applied to large scale constructive simulations such as the port simulation model, PORTSIM. These visualization tools enhance the experienced planner's ability to form mental models of how seaport operations will unfold when the simulation model is implemented and executed. They also offer unique opportunities to train new planners not only in the use of the simulation model but on the layout and design of seaports. Simulation visualization capabilities are enhanced by borrowing from work on interface design, camera control, and data presentation. Using selective fidelity, the designers of these visualization systems can reduce their time and efforts by concentrating on those features which yield the most value for their simulation. Offering the user various observational tools allows the freedom to simply watch or engage in the simulation without getting lost. Identifying the underlying infrastructure or cargo items with labels can provide useful information at the risk of some visual clutter. The PortVis visualization expands the PORTSIM user base which can benefit from the results provided by this capability, especially in strategic planning, mission rehearsal, and training. Strategic planners will immediately reap the benefits of seeing the impact of increased throughput visually without keeping track of statistical data. Mission rehearsal and training users will have an effective training tool to supplement their operational training exercises which are limited in number because of their high costs. Having another effective training modality in this visualization system allows more training to take place and more personnel to gain an understanding of seaport operations. This simulation and visualization training can be accomplished at lower cost than would be possible for the operational training exercises alone. The application of PORTSIM and PortVis will lead to more efficient

  14. Volumetric 3D Display System with Static Screen

    Science.gov (United States)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume, and physically places each 3D voxel in displayed 3D images at the true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to human visual systems to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° view without using any special eyewear. The volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created by using a recently available machining technique called laser subsurface engraving (LSE). The LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  15. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    International Nuclear Information System (INIS)

    Bancroft, G.; Plessel, T.; Merritt, F.; Watson, V.

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques applied, post-processing, tracking, and steering, are described, as well as the post-processing software packages used, PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis software Toolkit). Using post-processing methods a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers. 7 refs

  16. Thoracic cavity definition for 3D PET/CT analysis and visualization.

    Science.gov (United States)

    Cheirsilp, Ronnarit; Bascom, Rebecca; Allen, Thomas W; Higgins, William E

    2015-07-01

    X-ray computed tomography (CT) and positron emission tomography (PET) serve as the standard imaging modalities for lung-cancer management. CT gives anatomical details on diagnostic regions of interest (ROIs), while PET gives highly specific functional information. During the lung-cancer management process, a patient receives a co-registered whole-body PET/CT scan pair and a dedicated high-resolution chest CT scan. With these data, multimodal PET/CT ROI information can be gleaned to facilitate disease management. Effective image segmentation of the thoracic cavity, however, is needed to focus attention on the central chest. We present an automatic method for thoracic cavity segmentation from 3D CT scans. We then demonstrate how the method facilitates 3D ROI localization and visualization in patient multimodal imaging studies. Our segmentation method draws upon digital topological and morphological operations, active-contour analysis, and key organ landmarks. Using a large patient database, the method showed high agreement to ground-truth regions, with a mean coverage=99.2% and leakage=0.52%. Furthermore, it enabled extremely fast computation. For PET/CT lesion analysis, the segmentation method reduced ROI search space by 97.7% for a whole-body scan, or nearly 3 times greater than that achieved by a lung mask. Despite this reduction, we achieved 100% true-positive ROI detection, while also reducing the false-positive (FP) detection rate by >5 times over that achieved with a lung mask. Finally, the method greatly improved PET/CT visualization by eliminating false PET-avid obscurations arising from the heart, bones, and liver. In particular, PET MIP views and fused PET/CT renderings depicted unprecedented clarity of the lesions and neighboring anatomical structures truly relevant to lung-cancer assessment. Copyright © 2015 Elsevier Ltd. All rights reserved.
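
    The published segmentation pipeline combines topological and morphological operations, active-contour analysis, and organ landmarks; none of that is reproduced here. The sketch below is only a hedged, much-simplified illustration of the thresholding-plus-morphology step that a thoracic air mask typically starts from, using SciPy; the -400 HU threshold and the structuring-element size are assumptions.

```python
# Simplified sketch (not the published pipeline): rough thoracic air mask from a CT volume.
import numpy as np
from scipy import ndimage

def rough_lung_mask(ct_hu, air_threshold=-400):
    """ct_hu: 3D array of Hounsfield units. Returns a boolean lung-region mask."""
    air = ct_hu < air_threshold                  # air-like voxels (lungs plus background)
    labels, n = ndimage.label(air)
    # Discard components touching the volume border (background air around the patient).
    border_labels = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel()]))
    mask = air & ~np.isin(labels, border_labels)
    # Morphological closing fills vessels and small gaps inside the lungs.
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5, 5)))
    return ndimage.binary_fill_holes(mask)
```

    A rough mask like this would then be refined, for example with active contours and landmark constraints as the paper describes, before being used to restrict the PET/CT ROI search space.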

  17. CROSS DRIVE: A New Interactive and Immersive Approach for Exploring 3D Time-Dependent Mars Atmospheric Data in Distributed Teams

    Science.gov (United States)

    Gerndt, Andreas M.; Engelke, Wito; Giuranna, Marco; Vandaele, Ann C.; Neary, Lori; Aoki, Shohei; Kasaba, Yasumasa; Garcia, Arturo; Fernando, Terrence; Roberts, David; CROSS DRIVE Team

    2016-10-01

    Atmospheric phenomena of Mars can be highly dynamic and have daily and seasonal variations. Planetary-scale wavelike disturbances, for example, are frequently observed in Mars' polar winter atmosphere. Possible sources of the wave activity were suggested to be dynamical instabilities and quasi-stationary planetary waves, i.e. waves that arise predominantly via zonally asymmetric surface properties. For a comprehensive understanding of these phenomena, single layers of altitude have to be analyzed carefully and relations between different atmospheric quantities and interaction with the surface of Mars have to be considered. The CROSS DRIVE project tries to address the presentation of those data with a global view by means of virtual reality techniques. Complex orbiter data from spectrometer and observation data from Earth are combined with global circulation models and high-resolution terrain data and images available from Mars Express or MRO instruments. Scientists can interactively extract features from those dataset and can change visualization parameters in real-time in order to emphasize findings. Stereoscopic views allow for perception of the actual 3D behavior of Mars's atmosphere. A very important feature of the visualization system is the possibility to connect distributed workspaces together. This enables discussions between distributed working groups. The workspace can scale from virtual reality systems to expert desktop applications to web-based project portals. If multiple virtual environments are connected, the 3D position of each individual user is captured and used to depict the scientist as an avatar in the virtual world. The appearance of the avatar can also scale from simple annotations to complex avatars using tele-presence technology to reconstruct the users in 3D. Any change of the feature set (annotations, cutplanes, volume rendering, etc.) within the VR is immediately exchanged between all connected users. This allows that everybody is always

  18. Enabling Symmetric Collaboration in Public Spaces through 3D Mobile Interaction

    Directory of Open Access Journals (Sweden)

    Mayra Donaji Barrera Machuca

    2018-03-01

    Full Text Available Collaboration has been common in workplaces in various engineering settings and in our daily activities. However, how to effectively engage collaborators with collaborative tasks has long been an issue due to various situational and technical constraints. The research in this paper addresses the issue in a specific scenario, which is how to enable users to interact with public information from their own perspective. We describe a 3D mobile interaction technique that allows users to collaborate with other people by creating a symmetric and collaborative ambience. This in turn can increase their engagement with public displays. In order to better understand the benefits and limitations of this technique, we conducted a usability study with a total of 40 participants. The results indicate that the 3D mobile interaction technique promotes collaboration between users and also improves their engagement with the public displays.

  19. Starting research in interaction design with visuals for low-functioning children in the autistic spectrum: a protocol.

    Science.gov (United States)

    Parés, Narcís; Carreras, Anna; Durany, Jaume; Ferrer, Jaume; Freixa, Pere; Gómez, David; Kruglanski, Orit; Parés, Roc; Ribas, J Ignasi; Soler, Miquel; Sanjurjo, Alex

    2006-04-01

    On starting to think about interaction design for low-functioning persons in the autistic spectrum (PAS), especially children, one finds a number of questions that are difficult to answer: Can we typify the PAS user? Can we engage the user in interactive communication without generating frustrating or obsessive situations? What sort of visual stimuli can we provide? Will they prefer representational or abstract visual stimuli? Will they understand three-dimensional (3D) graphic representation? What sort of interfaces will they accept? Can we set ambitious goals such as education or therapy? Unfortunately, most of these questions have no answer yet. Hence, we decided to set an apparently simple goal: to design a "fun application," with no intention to reach the level of education or therapy. The goal was to be attained by giving the users a sense of agency--by providing first a sense of control in the interaction dialogue. Our approach to visual stimuli design has been based on the use of geometric, abstract, two-dimensional (2D), real-time computer graphics in a full-body, non-invasive, interactive space. The results obtained within the European-funded project MultiSensory Environment Design for an Interface between Autistic and Typical Expressiveness (MEDIATE) have been extremely encouraging.

  20. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning

    Science.gov (United States)

    Gee, Carole T.

    2013-01-01

    • Premise of the study: As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • Methods: MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • Results: If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • Conclusions: This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction. PMID:25202495

  1. 3D Room Visualization on Android-Based Mobile Device (with Philips™' Surround Sound Music Player)

    Directory of Open Access Journals (Sweden)

    Durio Etgar

    2012-12-01

    Full Text Available This project is intended as a demo application, so that anyone can experience a surround-audio room without having to be physically present in one. The main idea is to generate a 3D surround-sound room scene coupled with surround sound in a handier package, namely a "Virtual Listen Room". The Virtual Listen Room lays the foundation for an innovative visualization that will later be developed and released as a form of portable advertisement. The application was built for the Android environment. Android devices were chosen as the implementation target because they leave ample room for development and generally contain the essential components needed for this project, including a graphics processing unit (GPU). Graphics manipulation is done using the embedded programming interface OpenGL ES, which is available on virtually all Android devices. Furthermore, Android provides an accelerometer sensor, which is coupled with the scene to produce dynamic movement of the camera. The surround-sound effect is achieved with a decoder from Philips called the MPEG Surround Sound Decoder. In summary, the project yields an application with sensor-driven 3D room visualization coupled with Philips' Surround Sound Music Player. Several room properties can be manipulated: subwoofer location, room lighting, and the number of speakers in the room. The application works well, despite several performance problems that were encountered and later solved. [Keywords: Android; Visualization; OpenGL ES; 3D; Surround; Sensor]

  2. 3D visualization based customer experiences of nuclear plant control room

    International Nuclear Information System (INIS)

    Sun Tienlung; Chou Chinmei; Hung Tamin; Cheng Tsungchieh; Yang Chihwei; Yang Lichen

    2011-01-01

    This paper employs virtual reality (VR) technology to develop an interactive virtual nuclear plant control room in which the general public can easily walk into the 'red zone' and play with the control buttons. The VR-based approach allows deeper and richer customer experiences than the real nuclear plant control room could offer. When people know more about the strict process control procedures enforced in the nuclear plant control room, they will better appreciate the safety efforts made by the plant and become more comfortable with it. The virtual nuclear plant control room is built using the 3D game development tool Unity3D. The 3D scene is connected to a nuclear plant simulation system through Windows API programs. To evaluate the usability of the virtual control room, an experiment will be conducted to see how much 'immersion' users feel when they play with the virtual control room. (author)

  3. Measurable realistic image-based 3D mapping

    Science.gov (United States)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data is obtained for the construction of 3D visualized models. The 3D map not only provides the capabilities of 3D measurements and knowledge mining, but also provides the virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic implementation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is the limitation of detailed coverage, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information of the real world than 3D model-based maps. The image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive with users and also creates an interesting immersive circumstance. Actually, unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in terms of photos. The topographic and terrain attributes, such as shapes and heights, though, are omitted. This paper also discusses the potential for using a low cost land Mobile Mapping System (MMS) to implement realistic image 3D mapping, and evaluates the positioning accuracy that a measureable
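
    The statement that geometric relationships can be resolved from the stereo model refers to standard stereo geometry. As a hedged illustration only (not the paper's implementation), a rectified stereo pair with known focal length and baseline gives per-pixel depth via Z = f·B/d; the function and parameter names below are assumptions.

```python
# Illustrative sketch of the standard rectified-stereo depth relation Z = f*B/d.
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Per-pixel depth (metres) from a rectified disparity map (pixels)."""
    disparity = np.asarray(disparity_px, dtype=float)
    depth = np.full(disparity.shape, np.inf)   # zero disparity means "at infinity"
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

def backproject(u, v, depth, focal_px, cx, cy):
    """Recover a 3D point (camera frame) from pixel (u, v) and its depth."""
    x = (u - cx) * depth / focal_px
    y = (v - cy) * depth / focal_px
    return np.array([x, y, depth])
```

    A distance measurement between two points selected in the image-based 3D map then follows from back-projecting each pixel with its depth and taking the Euclidean distance between the recovered 3D points.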

  4. Visualization and correction of automated segmentation, tracking and lineaging from 5-D stem cell image sequences.

    Science.gov (United States)

    Wait, Eric; Winter, Mark; Bjornsson, Chris; Kokovay, Erzsebet; Wang, Yue; Goderie, Susan; Temple, Sally; Cohen, Andrew R

    2014-10-03

    Neural stem cells are motile and proliferative cells that undergo mitosis, dividing to produce daughter cells and ultimately generating differentiated neurons and glia. Understanding the mechanisms controlling neural stem cell proliferation and differentiation will play a key role in the emerging fields of regenerative medicine and cancer therapeutics. Stem cell studies in vitro from 2-D image data are well established. Visualizing and analyzing large three dimensional images of intact tissue is a challenging task. It becomes more difficult as the dimensionality of the image data increases to include time and additional fluorescence channels. There is a pressing need for 5-D image analysis and visualization tools to study cellular dynamics in the intact niche and to quantify the role that environmental factors play in determining cell fate. We present an application that integrates visualization and quantitative analysis of 5-D (x,y,z,t,channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach. An inference-based approach utilizing user-provided edits to automatically correct related mistakes executes interactively on the system CPU while the GPU handles 3-D visualization tasks. By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. We combine unsupervised image

  5. Study of 3D visualization of fast active reflector based on openGL and EPICS

    International Nuclear Information System (INIS)

    Luo Mingcheng; Wu Wenqing; Liu Jiajing; Tang Pengyi; Wang Jian

    2014-01-01

    The Active Reflector is one of the innovations of the Five-hundred-meter Aperture Spherical Telescope (FAST). Its performance will influence the performance of the whole telescope. To display the status of the ARS in real time, EPICS (Experimental Physics and Industrial Control System) is used to develop the control system of the ARS, and the virtual 3D technology OpenGL is used to visualize the status. Owing to the real-time performance of EPICS, the status visualization is also updated in real time for users, improving the efficiency of telescope observing. (authors)
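
    The record pairs an EPICS control system with an OpenGL status display. As a hedged illustration of the EPICS client side only (not the FAST software), the pyepics library can subscribe to process variables and hand each update to whatever renders the scene; the PV name and the update_display callback below are hypothetical.

```python
# Hedged sketch of an EPICS client using pyepics; the PV name is hypothetical.
import time
from epics import PV

def update_display(pvname=None, value=None, **kwargs):
    # In a real status display this would push the new actuator position
    # into the OpenGL scene graph; here we only print it.
    print(f"{pvname} -> {value}")

# Subscribe to a (hypothetical) actuator position PV; EPICS pushes changes asynchronously.
actuator_pv = PV("ARS:ACTUATOR001:POSITION", callback=update_display)

# Keep the client alive for a while so monitor callbacks keep arriving.
for _ in range(60):
    time.sleep(1.0)
```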

  6. Optical clearing and fluorescence deep-tissue imaging for 3D quantitative analysis of the brain tumor microenvironment

    NARCIS (Netherlands)

    Lagerweij, Tonny; Dusoswa, Sophie A.; Negrean, Adrian; Hendrikx, Esther M.L.; de Vries, Helga E.; Kole, Jeroen; Garcia-Vallejo, Juan J.; Mansvelder, Huibert D; Vandertop, W. Peter; Noske, David P.; Tannous, Bakhos A.; Musters, René J P; van Kooyk, Yvette; Wesseling, Pieter; Zhao, Xi Wen; Wurdinger, Thomas

    2017-01-01

    Background: Three-dimensional visualization of the brain vasculature and its interactions with surrounding cells may shed light on diseases where aberrant microvascular organization is involved, including glioblastoma (GBM). Intravital confocal imaging allows 3D visualization of microvascular

  7. 3D visualization of the initial Yersinia ruckeri infection route in rainbow trout (Oncorhynchus mykiss) by optical projection tomography

    DEFF Research Database (Denmark)

    Otani, Maki; Villumsen, Kasper Rømer; Kragelund Strøm, Helene

    2014-01-01

    , optical projection tomography (OPT), a novel three-dimensional (3D) bio-imaging technique, was applied. OPT not only enables the visualization of Y. ruckeri on mucosal surfaces but also the 3D spatial distribution in whole organs, without sectioning. Rainbow trout were infected by bath challenge exposure...

  8. The solvent at antigen-binding site regulated C3d-CR2 interactions through the C-terminal tail of C3d at different ion strengths: insights from molecular dynamics simulation.

    Science.gov (United States)

    Zhang, Yan; Guo, Jingjing; Li, Lanlan; Liu, Xuewei; Yao, Xiaojun; Liu, Huanxiang

    2016-10-01

    The interactions of complement receptor 2 (CR2) and the degradation fragment C3d of complement component C3 provide important links between the innate and adaptive immune systems. Due to the importance of the C3d-CR2 interaction in the design of vaccines and inhibitors, a number of studies have been performed to investigate it. Many studies have indicated that C3d-CR2 interactions are ionic strength-dependent. To investigate the molecular mechanism of the C3d-CR2 interaction and the origin of the effects of ionic strength, molecular dynamics simulations of the C3d-CR2 complex, together with energetic and structural analyses, were performed. Our results revealed that increased interactions between the charged protein and ions weaken C3d-CR2 association as the ionic strength increases. Moreover, ionic strength has similar effects on the antigen-binding site and the CR2-binding site. Meanwhile, Ala17 and Gln20 transform between the activated and non-activated states, mediated by His133 and Glu135, at different ionic strengths. Our results reveal that the effects of ionic strength on C3d-CR2 interactions originate from changes in water and ion occupancies and distributions. This study uncovers the origin of the effect of ionic strength on the C3d-CR2 interaction and deepens the understanding of the molecular mechanism of their interaction, which is valuable for the design of vaccines and small molecule inhibitors. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. 3D Nondestructive Visualization and Evaluation of TRISO Particles Distribution in HTGR Fuel Pebbles Using Cone-Beam Computed Tomography

    Directory of Open Access Journals (Sweden)

    Gongyi Yu

    2017-01-01

    Full Text Available A nonuniform distribution of tristructural isotropic (TRISO) particles within a high-temperature gas-cooled reactor (HTGR) pebble may lead to excessive thermal gradients and nonuniform thermal expansion during operation. If the particles are closely clustered, local hotspots may form, leading to excessive stresses on particle layers and an increased probability of particle failure. Although X-ray digital radiography (DR) is currently used to evaluate the TRISO distributions in pebbles, X-ray DR projection images are two-dimensional in nature, which would potentially miss some details needed for 3D evaluation. This paper proposes a method of 3D visualization and evaluation of the TRISO distribution in HTGR pebbles using cone-beam computed tomography (CBCT): first, a pebble is scanned on our high-resolution CBCT, and 2D cross-sectional images are reconstructed; secondly, all cross-sectional images are restructured to form the 3D model of the pebble; then, volume rendering is applied to segment and display the TRISO particles in 3D for visualization and distribution evaluation. For method validation, several pebbles were scanned and the 3D distributions of the TRISO particles within the pebbles were produced. Experimental results show that the proposed method provides more 3D information than DR, which will facilitate pebble fabrication research and production quality control.
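
    As a hedged illustration of the evaluation step (not the authors' code), once the reconstructed CBCT slices are stacked into a volume, high-density TRISO kernels can be thresholded, labelled as connected components, and their centroids used to check for clustering via nearest-neighbour distances; the threshold value and variable names are assumptions.

```python
# Hedged sketch: locate TRISO-like particles in a reconstructed CBCT volume
# and check how evenly they are spread via nearest-neighbour distances.
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def particle_centroids(volume, threshold):
    """Threshold the reconstructed volume and return one centroid per particle."""
    binary = volume > threshold                 # kernels are denser than the matrix
    labels, n = ndimage.label(binary)
    return np.array(ndimage.center_of_mass(binary, labels, range(1, n + 1)))

def nearest_neighbour_stats(centroids):
    """Mean and minimum spacing between particles; small minima hint at clustering."""
    tree = cKDTree(centroids)
    dists, _ = tree.query(centroids, k=2)       # k=2: each point plus its nearest neighbour
    nn = dists[:, 1]
    return nn.mean(), nn.min()
```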

  10. Advancements to Visualization Control System (VCS, part of UV-CDAT), a Visualization Package Designed for Climate Scientists

    Science.gov (United States)

    Lipsa, D.; Chaudhary, A.; Williams, D. N.; Doutriaux, C.; Jhaveri, S.

    2017-12-01

    Climate Data Analysis Tools (UV-CDAT, https://uvcdat.llnl.gov) is a data analysis and visualization software package developed at Lawrence Livermore National Laboratory and designed for climate scientists. Core components of UV-CDAT include: 1) the Community Data Management System (CDMS), which provides I/O support and a data model for climate data; 2) CDAT Utilities (GenUtil), which processes data using spatial and temporal averaging and statistical functions; and 3) the Visualization Control System (VCS) for interactive visualization of the data. VCS is a Python visualization package primarily built for climate scientists; however, because of its generality and breadth of functionality, it can be a useful tool for other scientific applications. VCS provides 1D, 2D and 3D visualization functions such as scatter plots and line graphs for 1D data, boxfill, meshfill, isofill and isoline for 2D scalar data, vector glyphs and streamlines for 2D vector data, and 3d_scalar and 3d_vector for 3D data. Specifically for climate data, our plotting routines include projections, Skew-T plots and Taylor diagrams. While VCS provided a user-friendly API, its previous implementation relied on a slow-performing vector graphics (Cairo) backend, which is suitable only for smaller datasets and non-interactive graphics. The LLNL and Kitware team has added a new backend to VCS that uses the Visualization Toolkit (VTK) as its visualization backend. VTK is one of the most popular open-source, multi-platform scientific visualization libraries written in C++. Its use of OpenGL and a pipeline processing architecture results in a high-performance VCS library. Its multitude of available data formats and visualization algorithms results in easy adoption of new visualization methods and new data formats in VCS. In this presentation, we describe recent contributions to VCS that include new visualization plots, continuous integration testing using Conda and CircleCI, tutorials and examples using Jupyter notebooks as well as
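
    A minimal hedged sketch of the VCS plotting API described in the record is shown below; the file name and variable name are placeholders, and the exact call pattern may differ between UV-CDAT releases.

```python
# Minimal VCS sketch (names are placeholders; API details may vary by release).
import cdms2   # Community Data Management System: I/O and data model
import vcs     # Visualization Control System: plotting

f = cdms2.open("clt.nc")          # hypothetical NetCDF file
clt = f("clt")                    # read a variable as a transient variable

canvas = vcs.init()               # create an interactive canvas
boxfill = vcs.createboxfill()     # 2D scalar method; isofill, isoline, vectors also exist
canvas.plot(clt, boxfill)         # render the field with the boxfill graphics method
canvas.png("clt_boxfill.png")     # save a snapshot of the canvas
```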

  11. Illustrating Mathematics using 3D Printers

    OpenAIRE

    Knill, Oliver; Slavkovsky, Elizabeth

    2013-01-01

    3D printing technology can help to visualize proofs in mathematics. In this document we aim to illustrate how 3D printing can help to visualize concepts and mathematical proofs. As educators in ancient Greece already knew, models help to bring mathematics closer to the public. The new 3D printing technology makes the realization of such tools more accessible than ever. This is an updated version of a paper included in the book Low-Cost 3D Printing for science, education and Sustainable Devel...

  12. A Three Pronged Approach for Improved Data Understanding: 3-D Visualization, Use of Gaming Techniques, and Intelligent Advisory Agents

    Science.gov (United States)

    2006-10-01

    The approach combines 3-D visualization, the use of gaming techniques, and intelligent advisory agents for improved data understanding; the fragments indexed for this record cite the CAVE Audio Visual Experience Automatic Virtual Environment (C. Cruz-Neira, D. J. Sandin, T. A. DeFanti, R. V. Kenyon and J. C. Hart, SIGGRAPH '92).

  13. Optical clearing and fluorescence deep-tissue imaging for 3D quantitative analysis of the brain tumor microenvironment

    NARCIS (Netherlands)

    Lagerweij, Tonny; Dusoswa, Sophie A.; Negrean, Adrian; Hendrikx, Esther M. L.; de Vries, Helga E.; Kole, Jeroen; Garcia-Vallejo, Juan J.; Mansvelder, Huibert D.; Vandertop, W. Peter; Noske, David P.; Tannous, Bakhos A.; Musters, René J. P.; van Kooyk, Yvette; Wesseling, Pieter; Zhao, Xi Wen; Wurdinger, Thomas

    2017-01-01

    Three-dimensional visualization of the brain vasculature and its interactions with surrounding cells may shed light on diseases where aberrant microvascular organization is involved, including glioblastoma (GBM). Intravital confocal imaging allows 3D visualization of microvascular structures and

  14. An Interactive Visualization of the Past using a Situated Simulation Approach

    DEFF Research Database (Denmark)

    Madsen, Jacob Boesen; Madsen, Claus B.

    2013-01-01

    This paper describes aspects of the development of an interactive installation for visualizing a 3D reconstruction of a historical church chapel in Kolding, Denmark. We focus on three aspects inherent to a mobile Augmented Reality development context: 1) a procedure for combating gyroscope drift...... on handheld devices, 2) achieving realistic lighting computation on a mobile platform at interactive frame rates, and 3) an approach to re-location within this application's situated location without position tracking. We present a solution to each of these three aspects. The development targets a specific...... application, but the presented solutions should be relevant to researchers and developers facing similar issues in other contexts. We furthermore present initial findings from everyday usage by visitors at the museum, and explore how these findings can be useful in connection with novel technology...
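
    The paper's own drift-correction procedure is not detailed in this record. As a generic, hedged illustration of how gyroscope drift is commonly fought on handheld devices, the sketch below blends integrated gyro rates with the accelerometer's gravity estimate in a complementary filter; the axis conventions, variable names, and blend factor are all assumptions and not the paper's method.

```python
# Generic complementary-filter sketch (not the paper's method) for pitch/roll drift correction.
import math

def complementary_update(pitch, roll, gyro_rates, accel, dt, alpha=0.98):
    """Blend integrated gyro rates (fast but drifting) with the accelerometer
    gravity direction (noisy but drift-free). Angles in radians, rates in rad/s.
    Axis conventions are device-dependent; the mapping below is illustrative."""
    gx, gy, _ = gyro_rates
    ax, ay, az = accel

    # 1) Propagate orientation with the gyroscope (this part drifts over time).
    pitch_gyro = pitch + gx * dt
    roll_gyro = roll + gy * dt

    # 2) Estimate absolute pitch/roll from the measured gravity vector.
    pitch_acc = math.atan2(ay, math.sqrt(ax * ax + az * az))
    roll_acc = math.atan2(-ax, az)

    # 3) Complementary blend: mostly gyro, gently pulled toward the accelerometer.
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
    roll = alpha * roll_gyro + (1 - alpha) * roll_acc
    return pitch, roll
```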

  15. Interactive Visualization of National Airspace Data in 4D (IV4D)

    Science.gov (United States)

    2010-08-01

    Keywords: system visualization, airspace visualization, air traffic visualization, air traffic management tools, airspace analysis tools. Only abstract fragments are indexed for this record; they mention keeping the last visualization display up while the user creates and starts the next one, and wringing out every possible iota of...

  16. Optimization Techniques for 3D Graphics Deployment on Mobile Devices

    Science.gov (United States)

    Koskela, Timo; Vatjus-Anttila, Jarkko

    2015-03-01

    3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and richer use context, but also performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.

  17. Data visualization with d3.js

    CERN Document Server

    Teller, Swizec

    2013-01-01

    This book is a mini tutorial with plenty of code examples and strategies to give you many options when building your own visualizations. This book is ideal for anyone interested in data visualization. Some rudimentary knowledge of JavaScript is required.

  18. 3D-Reconstructions and Virtual 4D-Visualization to Study Metamorphic Brain Development in the Sphinx Moth Manduca Sexta.

    Science.gov (United States)

    Huetteroth, Wolf; El Jundi, Basil; El Jundi, Sirri; Schachtner, Joachim

    2010-01-01

    During metamorphosis, the transition from the larva to the adult, the insect brain undergoes considerable remodeling: new neurons are integrated while larval neurons are remodeled or eliminated. One well acknowledged model to study metamorphic brain development is the sphinx moth Manduca sexta. To further understand mechanisms involved in the metamorphic transition of the brain we generated a 3D standard brain based on selected brain areas of adult females and 3D reconstructed the same areas during defined stages of pupal development. Selected brain areas include for example mushroom bodies, central complex, antennal- and optic lobes. With this approach we eventually want to quantify developmental changes in neuropilar architecture, but also quantify changes in the neuronal complement and monitor the development of selected neuronal populations. Furthermore, we used a modeling software (Cinema 4D) to create a virtual 4D brain, morphing through its developmental stages. Thus the didactical advantages of 3D visualization are expanded to better comprehend complex processes of neuropil formation and remodeling during development. To obtain datasets of the M. sexta brain areas, we stained whole brains with an antiserum against the synaptic vesicle protein synapsin. Such labeled brains were then scanned with a confocal laser scanning microscope and selected neuropils were reconstructed with the 3D software AMIRA 4.1.

  19. S4-3: Spatial Processing of Visual Motion

    Directory of Open Access Journals (Sweden)

    Shin'ya Nishida

    2012-10-01

    Full Text Available Local motion signals are extracted in parallel by a bank of motion detectors, and their spatiotemporal interactions are processed in subsequent stages. In this talk, I will review our recent studies on spatial interactions in visual motion processing. First, we found two types of spatial pooling of local motion signals. Directionally ambiguous 1D local motion signals are pooled across orientation and space for solution of the aperture problem, while 2D local motion signals are pooled for estimation of global vector average (e.g., Amano et al., 2009, Journal of Vision 9(3):4, 1–25). Second, when stimulus presentation is brief, coherent motion detection of dynamic random-dot kinematogram is not efficient. Nevertheless, it is significantly improved by transient and synchronous presentation of a stationary surround pattern. This suggests that centre-surround spatial interaction may help rapid perception of motion (Linares et al., submitted). Third, to know how the visual system encodes pairwise relationships between remote motion signals, we measured the temporal rate limit for perceiving the relationship of two motion directions presented at the same time at different spatial locations. Compared with similar tasks with luminance or orientation signals, motion comparison was more rapid and hence efficient. This high performance was affected little by inter-element separation even when it was increased up to 100 deg. These findings indicate the existence of specialized processes to encode long-range relationships between motion signals for quick appreciation of global dynamic scene structure (Maruya et al., in preparation).

  20. 3D interactive augmented reality-enhanced digital learning systems for mobile devices

    Science.gov (United States)

    Feng, Kai-Ten; Tseng, Po-Hsuan; Chiu, Pei-Shuan; Yang, Jia-Lin; Chiu, Chun-Jie

    2013-03-01

    With the enhanced processing capability of mobile platforms, augmented reality (AR) has been considered a promising technology for achieving enhanced user experiences (UX). Augmented reality imposes virtual information, e.g., videos and images, onto a live-view digital display. UX of the real-world environment via the display can be effectively enhanced with the adoption of interactive AR technology. Enhancement of UX can be beneficial for digital learning systems. There are existing research works based on AR targeting the design of e-learning systems. However, none of these works focuses on providing three-dimensional (3-D) object modeling for enhanced UX based on interactive AR techniques. In this paper, the 3-D interactive augmented reality-enhanced learning (IARL) systems will be proposed to provide enhanced UX for digital learning. The proposed IARL systems consist of two major components, including the markerless pattern recognition (MPR) for 3-D models and velocity-based object tracking (VOT) algorithms. A realistic implementation of the proposed IARL system is conducted on Android-based mobile platforms. UX on digital learning can be greatly improved with the adoption of the proposed IARL systems.

  1. Exploring the Impact of Visual Complexity Levels in 3d City Models on the Accuracy of Individuals' Orientation and Cognitive Maps

    Science.gov (United States)

    Rautenbach, V.; Çöltekin, A.; Coetzee, S.

    2015-08-01

    In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of complexity (mainly based on levels of detail, i.e., LoD) in 3D city models, specifically on the participants' orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory where participants (among other things) were given orientation tasks, and in one case also produced sketches of a path they `travelled' in a virtual environment. The experiments were conducted in groups, where individuals provided responses on an answer sheet. The preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed based on a virtual experience. Building an accurate cognitive map is an inherently desired effect of the visualizations in planning tasks, thus the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development planning (urban planning), one of the domains where 3D city models are most relevant.

  2. 3D Geovisualization & Stylization to Manage Comprehensive and Participative Local Urban Plans

    Science.gov (United States)

    Brasebin, M.; Christophe, S.; Jacquinod, F.; Vinesse, A.; Mahon, H.

    2016-10-01

    3D geo-visualization is more and more used and appreciated to support public participation, and is generally used to present predesigned planned projects. Nevertheless, other participatory processes may benefit from such technology such as the elaboration of urban planning documents. In this article, we present one of the objectives of the PLU++ project: the design of a 3D geo-visualization system that eases the participation concerning local urban plans. Through a pluridisciplinary approach, it aims at covering the different aspects of such a system: the simulation of built configurations to represent regulation information, the efficient stylization of these objects to make people understand their meanings and the interaction between 3D simulation and stylization. The system aims at being adaptive according to the participation context and to the dynamic of the participation. It will offer the possibility to modify simulation results and the rendering styles of the 3D representations to support participation. The proposed 3D rendering styles will be used in a set of practical experiments in order to test and validate some hypothesis from past researches of the project members about 3D simulation, 3D semiotics and knowledge about uses.

  3. From 2D to 3D turbulence through 2D3C configurations

    Science.gov (United States)

    Buzzicotti, Michele; Biferale, Luca; Linkmann, Moritz

    2017-11-01

    We study analytically and numerically the geometry of the nonlinear interactions and the resulting energy transfer directions of 2D3C flows. Through a set of suitably designed Direct Numerical Simulations we also study the coupling between several 2D3C flows, where we explore the transition between 2D and fully 3D turbulence. In particular, we find that the coupling of three 2D3C flows on mutually orthogonal planes subject to small-scale forcing leads to a stationary 3D out-of-equilibrium dynamics at the energy containing scales, where the inverse cascade is directly balanced by a forward cascade carried by a different subset of interactions. ERC AdG Grant No 339032 NewTURB.

  4. Three ways to show 3D fluid flow

    NARCIS (Netherlands)

    Wijk, van J.J.; Hin, A.J.S.; Leeuw, de W.C.; Post, F.H.

    1994-01-01

    Visualizing 3D fluid flow fields presents a challenge to scientific visualization, mainly because no natural visual representation of 3D vector fields exists. We can readily recognize geometric objects, color, and texture: unfortunately for computational fluid dynamics (CFD) researchers, vector

  5. What you say matters: exploring visual-verbal interactions in visual working memory.

    Science.gov (United States)

    Mate, Judit; Allen, Richard J; Baqués, Josep

    2012-01-01

    The aim of this study was to explore whether the content of a simple concurrent verbal load task determines the extent of its interference on memory for coloured shapes. The task consisted of remembering four visual items while repeating aloud a pair of words that varied in terms of imageability and relatedness to the task set. At test, a cue appeared that was either the colour or the shape of one of the previously seen objects, with participants required to select the object's other feature from a visual array. During encoding and retention, there were four verbal load conditions: (a) a related, shape-colour pair (from outside the experimental set, i.e., "pink square"); (b) a pair of unrelated but visually imageable, concrete, words (i.e., "big elephant"); (c) a pair of unrelated and abstract words (i.e., "critical event"); and (d) no verbal load. Results showed differential effects of these verbal load conditions. In particular, imageable words (concrete and related conditions) interfered to a greater degree than abstract words. Possible implications for how visual working memory interacts with verbal memory and long-term memory are discussed.

  6. D5.3 Interaction between currents, wave, structure and subsoil

    DEFF Research Database (Denmark)

    Christensen, Erik Damgaard; Sumer, B. Mutlu; Schouten, Jan-Joost

    2015-01-01

    This chapter gives an introduction to deliverable D5.3 – Interaction between currents, waves, structure and subsoil – with respect to the MERMAID project. The deliverable focuses on the conditions in European waters, such as the four sites that are addressed in the MERMAID project. The most important...

  7. Interactive virtual simulation using a 3D computer graphics model for microvascular decompression surgery.

    Science.gov (United States)

    Oishi, Makoto; Fukuda, Masafumi; Hiraishi, Tetsuya; Yajima, Naoki; Sato, Yosuke; Fujii, Yukihiko

    2012-09-01

    The purpose of this paper is to report on the authors' advanced presurgical interactive virtual simulation technique using a 3D computer graphics model for microvascular decompression (MVD) surgery. The authors performed interactive virtual simulation prior to surgery in 26 patients with trigeminal neuralgia or hemifacial spasm. The 3D computer graphics models for interactive virtual simulation were composed of the brainstem, cerebellum, cranial nerves, vessels, and skull individually created by the image analysis, including segmentation, surface rendering, and data fusion for data collected by 3-T MRI and 64-row multidetector CT systems. Interactive virtual simulation was performed by employing novel computer-aided design software with manipulation of a haptic device to imitate the surgical procedures of bone drilling and retraction of the cerebellum. The findings were compared with intraoperative findings. In all patients, interactive virtual simulation provided detailed and realistic surgical perspectives, of sufficient quality, representing the lateral suboccipital route. The causes of trigeminal neuralgia or hemifacial spasm determined by observing 3D computer graphics models were concordant with those identified intraoperatively in 25 (96%) of 26 patients, which was a significantly higher rate than the 73% concordance rate (concordance in 19 of 26 patients) obtained by review of 2D images only. The 3D computer graphics model provided a realistic environment for performing virtual simulations prior to MVD surgery and enabled us to ascertain complex microsurgical anatomy.

  8. FUn: a framework for interactive visualizations of large, high-dimensional datasets on the web.

    Science.gov (United States)

    Probst, Daniel; Reymond, Jean-Louis

    2018-04-15

    During the past decade, big data have become a major tool in scientific endeavors. Although statistical methods and algorithms are well-suited for analyzing and summarizing enormous amounts of data, the results do not allow for a visual inspection of the entire data. Current scientific software, including R packages and Python libraries such as ggplot2, matplotlib and plot.ly, do not support interactive visualizations of datasets exceeding 100 000 data points on the web. Other solutions enable the web-based visualization of big data only through data reduction or statistical representations. However, recent hardware developments, especially advancements in graphical processing units, allow for the rendering of millions of data points on a wide range of consumer hardware such as laptops, tablets and mobile phones. Similar to the challenges and opportunities brought to virtually every scientific field by big data, both the visualization of and interaction with copious amounts of data are both demanding and hold great promise. Here we present FUn, a framework consisting of a client (Faerun) and server (Underdark) module, facilitating the creation of web-based, interactive 3D visualizations of large datasets, enabling record level visual inspection. We also introduce a reference implementation providing access to SureChEMBL, a database containing patent information on more than 17 million chemical compounds. The source code and the most recent builds of Faerun and Underdark, Lore.js and the data preprocessing toolchain used in the reference implementation, are available on the project website (http://doc.gdb.tools/fun/). daniel.probst@dcb.unibe.ch or jean-louis.reymond@dcb.unibe.ch.
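
    The record does not include code, but record-level GPU rendering of millions of points starts with packing them into flat typed arrays. The sketch below is a generic, assumed workflow (it is not the Faerun/Underdark API): positions and per-point colours are written as raw Float32/Uint8 buffers of the kind a WebGL client could upload to the GPU in a single call.

```python
# Illustrative sketch (not the Faerun/Underdark API): pack a large point
# cloud into flat binary buffers suitable for one-shot GPU upload.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                                        # a million example points

xyz = rng.standard_normal((n, 3)).astype(np.float32)   # positions
values = rng.random(n).astype(np.float32)               # scalar property

# Map the scalar property to 8-bit RGB with a simple two-colour ramp.
colors = np.empty((n, 3), dtype=np.uint8)
colors[:, 0] = (255 * values).astype(np.uint8)           # red ramps up
colors[:, 1] = 64                                         # fixed green
colors[:, 2] = (255 * (1.0 - values)).astype(np.uint8)    # blue ramps down

# Compact binary blobs ready to be served to a browser client.
xyz.tofile("points_xyz.f32")
colors.tofile("points_rgb.u8")
print(xyz.nbytes + colors.nbytes, "bytes for", n, "points")
```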

  9. Hund’s Rule-Driven Dzyaloshinskii-Moriya Interaction at 3d−5d Interfaces

    KAUST Repository

    Belabbes, Abderrezak; Bihlmayer, G.; Bechstedt, F.; Blü gel, S.; Manchon, Aurelien

    2016-01-01

    Using relativistic first-principles calculations, we show that the chemical trend of the Dzyaloshinskii-Moriya interaction (DMI) in 3d-5d ultrathin films follows Hund's first rule with a tendency similar to their magnetic moments in either the unsupported 3d monolayers or 3d-5d interfaces. We demonstrate that, besides the spin-orbit coupling (SOC) effect in inversion asymmetric noncollinear magnetic systems, the driving force is the 3d orbital occupations: their spin-flip mixing processes with the spin-orbit active 5d states directly control the sign and magnitude of the DMI. The magnetic chirality changes are discussed in the light of the interplay between SOC, Hund's first rule, and the crystal-field splitting of d orbitals. © 2016 American Physical Society.
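
    For readers unfamiliar with the DMI, the following is the textbook form of the Dzyaloshinskii-Moriya energy term; it is standard background, not an equation quoted from the record.

```latex
% Textbook form of the Dzyaloshinskii-Moriya energy between neighbouring
% spins S_i and S_j; the vector D_ij sets the preferred sense of rotation.
\begin{equation}
  E_{\mathrm{DM}} \;=\; \sum_{\langle i,j \rangle}
  \mathbf{D}_{ij} \cdot \left( \mathbf{S}_i \times \mathbf{S}_j \right)
\end{equation}
```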

  10. A Mobile Personal Informatics System with Interactive Visualizations of Mobility and Social Interactions

    DEFF Research Database (Denmark)

    Cuttone, Andrea; Jørgensen, Sune Lehmann; Larsen, Jakob Eg

    2013-01-01

    We describe a personal informatics system for Android smartphones that provides personal data on mobility and social interactions through interactive visualization interfaces. The mobile app has been made available to N=136 first year university students as part of a study of social network...... interactions in a university campus setting. The design of the interactive visualization interfaces enabling the participants to gain insights into own behaviors is described. We report initial findings based on device logging of participant interactions with the interactive visualization app on the smartphone...

  11. Tweek: Merging 2D and 3D Interaction in Immersive Environments

    Directory of Open Access Journals (Sweden)

    Patrick L Hartling

    2003-06-01

    Full Text Available Developers of virtual environments (VEs) face an often difficult problem: users must have some way to interact with the virtual world. The application designers must determine how to map available inputs (button presses, hand gestures, etc.) to actions within the VE. As a result, interaction within a VE is perhaps the most limiting factor for the development of complex virtual reality (VR) applications. For example, interactions with large amounts of data, alphanumeric information, or abstract operations may not map well to current VR interaction methods, which are primarily spatial. Instead, two-dimensional (2D) interaction could be more effective. Current practices often involve the development of customized interfaces for each application. The custom interfaces try to match the capabilities of the available input devices. To address these issues, we have developed a middleware tool called Tweek. Tweek presents users with an extensible 2D Java graphical user interface (GUI) that communicates with VR applications. Using this tool, developers are free to create a GUI that provides extended capabilities for interacting with a VE. This paper covers in detail the design of Tweek and its use with VR Juggler, an open source virtual reality development tool.

  12. Visual grading of 2D and 3D functional MRI compared with image-based descriptive measures

    Energy Technology Data Exchange (ETDEWEB)

    Ragnehed, Mattias [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Linkoeping University, Department of Medical and Health Sciences, Division of Radiological Sciences/Radiology, Faculty of Health Sciences, Linkoeping (Sweden); Leinhard, Olof Dahlqvist; Pihlsgaard, Johan; Lundberg, Peter [Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Linkoeping University, Division of Radiological Sciences, Radiation Physics, IMH, Linkoeping (Sweden); Wirell, Staffan [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University Hospital, Department of Radiology, Linkoeping (Sweden); Soekjer, Hannibal; Faegerstam, Patrik [Linkoeping University Hospital, Department of Radiology, Linkoeping (Sweden); Jiang, Bo [Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Smedby, Oerjan; Engstroem, Maria [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden)

    2010-03-15

    A prerequisite for successful clinical use of functional magnetic resonance imaging (fMRI) is the selection of an appropriate imaging sequence. The aim of this study was to compare 2D and 3D fMRI sequences using different image quality assessment methods. Descriptive image measures, such as activation volume and temporal signal-to-noise ratio (TSNR), were compared with results from visual grading characteristics (VGC) analysis of the fMRI results. Significant differences in activation volume and TSNR were not directly reflected by differences in VGC scores. The results suggest that better performance on descriptive image measures is not always an indicator of improved diagnostic quality of the fMRI results. In addition to descriptive image measures, it is important to include measures of diagnostic quality when comparing different fMRI data acquisition methods. (orig.)

  13. Functional selectivity of allosteric interactions within G protein-coupled receptor oligomers: the dopamine D1-D3 receptor heterotetramer.

    Science.gov (United States)

    Guitart, Xavier; Navarro, Gemma; Moreno, Estefania; Yano, Hideaki; Cai, Ning-Sheng; Sánchez-Soto, Marta; Kumar-Barodia, Sandeep; Naidu, Yamini T; Mallol, Josefa; Cortés, Antoni; Lluís, Carme; Canela, Enric I; Casadó, Vicent; McCormick, Peter J; Ferré, Sergi

    2014-10-01

    The dopamine D1 receptor-D3 receptor (D1R-D3R) heteromer is being considered as a potential therapeutic target for neuropsychiatric disorders. Previous studies suggested that this heteromer could be involved in the ability of D3R agonists to potentiate locomotor activation induced by D1R agonists. It has also been postulated that its overexpression plays a role in L-dopa-induced dyskinesia and in drug addiction. However, little is known about its biochemical properties. By combining bioluminescence resonance energy transfer, bimolecular complementation techniques, and cell-signaling experiments in transfected cells, evidence was obtained for a tetrameric stoichiometry of the D1R-D3R heteromer, constituted by two interacting D1R and D3R homodimers coupled to Gs and Gi proteins, respectively. Coactivation of both receptors led to the canonical negative interaction at the level of adenylyl cyclase signaling, to a strong recruitment of β-arrestin-1, and to a positive cross talk of D1R and D3R agonists at the level of mitogen-activated protein kinase (MAPK) signaling. Furthermore, D1R or D3R antagonists counteracted β-arrestin-1 recruitment and MAPK activation induced by D3R and D1R agonists, respectively (cross-antagonism). Positive cross talk and cross-antagonism at the MAPK level were counteracted by specific synthetic peptides with amino acid sequences corresponding to D1R transmembrane (TM) domains TM5 and TM6, which also selectively modified the quaternary structure of the D1R-D3R heteromer, as demonstrated by complementation of hemiproteins of yellow fluorescence protein fused to D1R and D3R. These results demonstrate functional selectivity of allosteric modulations within the D1R-D3R heteromer, which can be involved with the reported behavioral synergism of D1R and D3R agonists. U.S. Government work not protected by U.S. copyright.

  14. Functional Selectivity of Allosteric Interactions within G Protein–Coupled Receptor Oligomers: The Dopamine D1-D3 Receptor Heterotetramer

    Science.gov (United States)

    Guitart, Xavier; Navarro, Gemma; Moreno, Estefania; Yano, Hideaki; Cai, Ning-Sheng; Sánchez-Soto, Marta; Kumar-Barodia, Sandeep; Naidu, Yamini T.; Mallol, Josefa; Cortés, Antoni; Lluís, Carme; Canela, Enric I.; Casadó, Vicent; McCormick, Peter J.

    2014-01-01

    The dopamine D1 receptor–D3 receptor (D1R-D3R) heteromer is being considered as a potential therapeutic target for neuropsychiatric disorders. Previous studies suggested that this heteromer could be involved in the ability of D3R agonists to potentiate locomotor activation induced by D1R agonists. It has also been postulated that its overexpression plays a role in L-dopa–induced dyskinesia and in drug addiction. However, little is known about its biochemical properties. By combining bioluminescence resonance energy transfer, bimolecular complementation techniques, and cell-signaling experiments in transfected cells, evidence was obtained for a tetrameric stoichiometry of the D1R–D3R heteromer, constituted by two interacting D1R and D3R homodimers coupled to Gs and Gi proteins, respectively. Coactivation of both receptors led to the canonical negative interaction at the level of adenylyl cyclase signaling, to a strong recruitment of β-arrestin-1, and to a positive cross talk of D1R and D3R agonists at the level of mitogen-activated protein kinase (MAPK) signaling. Furthermore, D1R or D3R antagonists counteracted β-arrestin-1 recruitment and MAPK activation induced by D3R and D1R agonists, respectively (cross-antagonism). Positive cross talk and cross-antagonism at the MAPK level were counteracted by specific synthetic peptides with amino acid sequences corresponding to D1R transmembrane (TM) domains TM5 and TM6, which also selectively modified the quaternary structure of the D1R-D3R heteromer, as demonstrated by complementation of hemiproteins of yellow fluorescence protein fused to D1R and D3R. These results demonstrate functional selectivity of allosteric modulations within the D1R-D3R heteromer, which can be involved with the reported behavioral synergism of D1R and D3R agonists. PMID:25097189

  15. 3D visualization of two-phase flow in the micro-tube by a simple but effective method

    International Nuclear Information System (INIS)

    Fu, X; Zhang, P; Hu, H; Huang, C J; Huang, Y; Wang, R Z

    2009-01-01

    The present study provides a simple but effective method for 3D visualization of the two-phase flow in the micro-tube. An isosceles right-angle prism combined with a mirror located at a 45° bevel to the prism is employed to synchronously obtain the front and side views of the flow patterns with a single camera, where the locations of the prism and the micro-tube for clear imaging should satisfy a fixed relationship which is specified in the present study. The optical design was successfully proven through demanding visualization work in the cryogenic temperature range. The image deformation due to the refraction and geometrical configuration of the test section is quantitatively investigated. It is calculated that the image is enlarged by about 20% in inner diameter compared to the real object, which is validated by the experimental results. Meanwhile, the image deformation by adding a rectangular optical correction box outside the circular tube is comparatively investigated. It is calculated that the image is reduced by about 20% in inner diameter with a rectangular optical correction box compared to the real object. The 3D reconstruction process based on the two views is conducted through three steps, which shows that the 3D visualization method can easily be applied for two-phase flow research in micro-scale channels and improves the measurement accuracy of some important parameters of the two-phase flow such as void fraction, spatial distribution of bubbles, etc.

  16. 3D VISUALIZATION FOR VIRTUAL MUSEUM DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    M. Skamantzari

    2016-06-01

    Full Text Available The interest in the development of virtual museums is nowadays rising rapidly. During the last decades there have been numerous efforts concerning the 3D digitization of cultural heritage and the development of virtual museums, digital libraries and serious games. The realistic result has always been the main concern and a real challenge when it comes to 3D modelling of monuments, artifacts and especially sculptures. This paper implements, investigates and evaluates the results of the photogrammetric methods and 3D surveys that were used for the development of a virtual museum. Moreover, the decisions, the actions, the methodology and the main elements that this kind of application should include and take into consideration are described and analysed. It is believed that the outcomes of this application will be useful to researchers who are planning to develop and further improve the attempts made on virtual museums and mass production of 3D models.

  17. Discussion on the 3D visualizing of 1:200 000 geological map

    Science.gov (United States)

    Wang, Xiaopeng

    2018-01-01

    Using United States National Aeronautics and Space Administration (NASA) Shuttle Radar Topography Mission (SRTM) terrain data as the digital elevation model (DEM) and overlaying the scanned 1:200 000 scale geological map, the author developed a program in the C# language using Microsoft Direct3D to realize three-dimensional visualization of the standard-sheet geological map. Users can inspect the regional geology from any viewing angle, rotate and roam the scene, and examine the composite stratigraphic column, map sections and legend at any time. This provides an intuitive analysis tool for geological practitioners to perform structural analysis with the aid of the landform, lay out field exploration routes, etc.
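
    The record describes draping a scanned map over an SRTM DEM with Direct3D and C#; the sketch below shows the same draping idea in Python with matplotlib, using a synthetic DEM and a synthetic "map" in place of the real data, so it is only an assumed, simplified stand-in for the author's implementation.

```python
# Hedged sketch of the general idea (not the author's C#/Direct3D code):
# drape a colour map over a DEM grid and view the result in 3D.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

ny, nx = 200, 200
x = np.linspace(0, 10, nx)
y = np.linspace(0, 10, ny)
X, Y = np.meshgrid(x, y)
Z = 300 + 120 * np.exp(-((X - 4) ** 2 + (Y - 6) ** 2) / 4)   # synthetic DEM (m)

# Stand-in for the scanned geological map: colour by an arbitrary "unit" field.
units = (np.sin(0.8 * X) + np.cos(0.5 * Y) > 0).astype(float)
rgba = cm.Paired(units)                     # (ny, nx, 4) facecolour array

fig = plt.figure(figsize=(7, 5))
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(X, Y, Z, facecolors=rgba, rstride=2, cstride=2,
                linewidth=0, antialiased=False, shade=False)
ax.set_zlabel("elevation (m)")
plt.show()
```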

  18. A MATLAB®-based program for 3D visualization of stratigraphic setting and subsidence evolution of sedimentary basins: example application to the Vienna Basin

    Science.gov (United States)

    Lee, Eun Young; Novotny, Johannes; Wagreich, Michael

    2015-04-01

    In recent years, 3D visualization of sedimentary basins has become increasingly popular. Stratigraphic and structural mapping is highly important to understand the internal setting of sedimentary basins. And subsequent subsidence analysis provides significant insights for basin evolution. This study focused on developing a simple and user-friendly program which allows geologists to analyze and model sedimentary basin data. The developed program is aimed at stratigraphic and subsidence modelling of sedimentary basins from wells or stratigraphic profile data. This program is mainly based on two numerical methods; surface interpolation and subsidence analysis. For surface visualization four different interpolation techniques (Linear, Natural, Cubic Spline, and Thin-Plate Spline) are provided in this program. The subsidence analysis consists of decompaction and backstripping techniques. The numerical methods are computed in MATLAB® which is a multi-paradigm numerical computing environment used extensively in academic, research, and industrial fields. This program consists of five main processing steps; 1) setup (study area and stratigraphic units), 2) loading of well data, 3) stratigraphic modelling (depth distribution and isopach plots), 4) subsidence parameter input, and 5) subsidence modelling (subsided depth and subsidence rate plots). The graphical user interface intuitively guides users through all process stages and provides tools to analyse and export the results. Interpolation and subsidence results are cached to minimize redundant computations and improve the interactivity of the program. All 2D and 3D visualizations are created by using MATLAB plotting functions, which enables users to fine-tune the visualization results using the full range of available plot options in MATLAB. All functions of this program are illustrated with a case study of Miocene sediments in the Vienna Basin. The basin is an ideal place to test this program, because sufficient data is
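
    The decompaction step mentioned in the record follows the standard backstripping recipe with an exponential porosity-depth law. The sketch below is a minimal Python illustration of that one step (it is not the authors' MATLAB code); the porosity parameters are typical literature values and the layer depths are invented.

```python
# Hedged sketch of one step the record describes: decompacting a single
# layer under an exponential porosity-depth law phi(z) = phi0 * exp(-c*z),
# following the standard backstripping recipe (e.g. Allen & Allen).
import math

def decompact_thickness(y1, y2, new_top, phi0, c, tol=1e-6):
    """Thickness (km) the layer [y1, y2] would have with its top at `new_top`."""
    # Grain (solid) thickness is preserved during (de)compaction.
    solid = (y2 - y1) - (phi0 / c) * (math.exp(-c * y1) - math.exp(-c * y2))
    y2_new = new_top + (y2 - y1)           # initial guess for the new base
    while True:
        rhs = new_top + solid + (phi0 / c) * (
            math.exp(-c * new_top) - math.exp(-c * y2_new))
        if abs(rhs - y2_new) < tol:
            return rhs - new_top
        y2_new = rhs                        # fixed-point iteration

# A shale layer now buried between 2.0 and 2.5 km, restored to the surface
# (phi0 and c are typical shale values; the result is thicker than 0.5 km).
print(decompact_thickness(2.0, 2.5, 0.0, phi0=0.63, c=0.51))
```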

  19. 3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3d Models

    Science.gov (United States)

    Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.

    2013-07-01

    Cultural heritage managers in general and information users in particular are not usually used to dealing with high-technology hardware and software. On the contrary, information providers of metric surveys are most of the time applying the latest developments to real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models to bridge the gap between information users and information providers as regards the management of information which users and providers share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to easily handle, manage and create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. The 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of true documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on highlighting the features of the new user-friendly software to manage virtual projects. Furthermore, the ease of creating controlled interactive animations (both walkthrough and fly-through) by the user, either on-the-fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.

  20. Celeris: A GPU-accelerated open source software with a Boussinesq-type wave solver for real-time interactive simulation and visualization

    Science.gov (United States)

    Tavakkol, Sasan; Lynett, Patrick

    2017-08-01

    In this paper, we introduce an interactive coastal wave simulation and visualization software, called Celeris. Celeris is an open source software which needs minimum preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real-time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.
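
    Celeris solves the extended Boussinesq equations on the GPU; as a rough illustration of the finite-volume flavour of such solvers, the sketch below advances the much simpler 1D shallow-water equations with a Lax-Friedrichs flux on the CPU. It is an assumed, simplified stand-in, not the Celeris scheme.

```python
# Greatly simplified, CPU-only sketch: Lax-Friedrichs finite-volume steps
# for the 1D shallow-water equations (no dispersion, no moving shoreline).
import numpy as np

g = 9.81

def flux(h, hu):
    u = hu / np.maximum(h, 1e-8)
    return np.stack([hu, hu * u + 0.5 * g * h * h])

def lax_friedrichs_step(h, hu, dx, dt):
    q = np.stack([h, hu])                          # conserved variables
    f = flux(h, hu)
    # Lax-Friedrichs flux at each interface between cell i and i+1.
    f_iface = 0.5 * (f[:, :-1] + f[:, 1:]) - 0.5 * dx / dt * (q[:, 1:] - q[:, :-1])
    q_new = q.copy()
    q_new[:, 1:-1] -= dt / dx * (f_iface[:, 1:] - f_iface[:, :-1])
    return q_new[0], q_new[1]

# Small dam-break style test on a flat bed.
x = np.linspace(0, 100, 401)
h = np.where(x < 50, 2.0, 1.0)
hu = np.zeros_like(x)
for _ in range(200):
    h, hu = lax_friedrichs_step(h, hu, dx=x[1] - x[0], dt=0.02)
print("max depth:", h.max(), "min depth:", h.min())
```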

  1. EVALUATION OF THE USER STRATEGY ON 2D AND 3D CITY MAPS BASED ON NOVEL SCANPATH COMPARISON METHOD AND GRAPH VISUALIZATION

    Directory of Open Access Journals (Sweden)

    J. Dolezalova

    2016-06-01

    Full Text Available The paper deals with scanpath comparison of eye-tracking data recorded during a case study focused on the evaluation of 2D and 3D city maps. The experiment contained screenshots from three map portals. Two types of maps were used - standard map and 3D visualization. Respondents' task was to find a particular point symbol on the map as fast as possible. Scanpath comparison is one group of eye-tracking data analysis methods used for revealing the strategy of the respondents. In cartographic studies, the most commonly used application for scanpath comparison is eyePatterns, whose output is hierarchical clustering and a tree graph representing the relationships between analysed sequences. During an analysis of the algorithm generating a tree graph, it was found that the outputs do not correspond to reality. We proceeded to the creation of a new tool called ScanGraph. This tool uses visualization of cliques in simple graphs and is freely available at www.eyetracking.upol.cz/scangraph. Results of the study proved the functionality of the tool and its suitability for analyses of different strategies of map readers. Based on the results of the tool, similar scanpaths were selected, and groups of respondents with similar strategies were identified. With this knowledge, it is possible to analyse the relationship between belonging to the group with similar strategy and data gathered from the questionnaire (age, sex, cartographic knowledge, etc.) or type of stimuli (2D, 3D map).

  2. Evaluation of the User Strategy on 2d and 3d City Maps Based on Novel Scanpath Comparison Method and Graph Visualization

    Science.gov (United States)

    Dolezalova, J.; Popelka, S.

    2016-06-01

    The paper deals with scanpath comparison of eye-tracking data recorded during a case study focused on the evaluation of 2D and 3D city maps. The experiment contained screenshots from three map portals. Two types of maps were used - standard map and 3D visualization. Respondents' task was to find a particular point symbol on the map as fast as possible. Scanpath comparison is one group of eye-tracking data analysis methods used for revealing the strategy of the respondents. In cartographic studies, the most commonly used application for scanpath comparison is eyePatterns, whose output is hierarchical clustering and a tree graph representing the relationships between analysed sequences. During an analysis of the algorithm generating a tree graph, it was found that the outputs do not correspond to reality. We proceeded to the creation of a new tool called ScanGraph. This tool uses visualization of cliques in simple graphs and is freely available at www.eyetracking.upol.cz/scangraph. Results of the study proved the functionality of the tool and its suitability for analyses of different strategies of map readers. Based on the results of the tool, similar scanpaths were selected, and groups of respondents with similar strategies were identified. With this knowledge, it is possible to analyse the relationship between belonging to the group with similar strategy and data gathered from the questionnaire (age, sex, cartographic knowledge, etc.) or type of stimuli (2D, 3D map).
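
    ScanGraph itself is described only at a high level in these records; the sketch below shows one plausible reading of the clique idea (assumed, not the tool's actual code): scanpath strings are compared pairwise, respondents above a similarity threshold are connected in a graph, and candidate strategy groups are read off as cliques.

```python
# Hedged sketch of the clique idea behind a ScanGraph-style analysis.
import itertools
import networkx as nx

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

# Illustrative scanpaths: each letter stands for a fixated area of interest.
scanpaths = {"P1": "ABCDDE", "P2": "ABCDE", "P3": "AFFGE", "P4": "AFGGE"}

G = nx.Graph()
G.add_nodes_from(scanpaths)
for (p, sp), (q, sq) in itertools.combinations(scanpaths.items(), 2):
    if similarity(sp, sq) >= 0.6:           # arbitrary example threshold
        G.add_edge(p, q)

print(list(nx.find_cliques(G)))             # groups with similar strategies
```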

  3. 3D-reconstructions and virtual 4D-visualization to study metamorphic brain development in the sphinx moth Manduca sexta

    Directory of Open Access Journals (Sweden)

    Wolf Huetteroth

    2010-03-01

    Full Text Available During metamorphosis, the transition from the larva to the adult, the insect brain undergoes considerable remodeling: New neurons are integrated while larval neurons are remodeled or eliminated. One well acknowledged model to study metamorphic brain development is the sphinx moth Manduca sexta. To further understand mechanisms involved in the metamorphic transition of the brain we generated a 3D standard brain based on selected brain areas of adult females and 3D reconstructed the same areas during defined stages of pupal development. Selected brain areas include for example mushroom bodies, central complex, antennal- and optic lobes. With this approach we eventually want to quantify developmental changes in neuropilar architecture, but also quantify changes in the neuronal complement and monitor the development of selected neuronal populations. Furthermore, we used a modeling software (Cinema 4D to create a virtual 4D brain, morphing through its developmental stages. Thus the didactical advantages of 3D visualization are expanded to better comprehend complex processes of neuropil formation and remodeling during development. To obtain datasets of the M. sexta brain areas, we stained whole brains with an antiserum against the synaptic vesicle protein synapsin. Such labeled brains were then scanned with a confocal laser scanning microscope and selected neuropils were reconstructed with the 3D software AMIRA 4.1.

  4. Visualizing topography: Effects of presentation strategy, gender, and spatial ability

    Science.gov (United States)

    McAuliffe, Carla

    2003-10-01

    This study investigated the effect of different presentation strategies (2-D static visuals, 3-D animated visuals, and 3-D interactive, animated visuals) and gender on achievement, time-spent-on visual treatment, and attitude during a computer-based science lesson about reading and interpreting topographic maps. The study also examined the relationship of spatial ability and prior knowledge to gender, achievement, and time-spent-on visual treatment. Students enrolled in high school chemistry-physics were pretested and given two spatial ability tests. They were blocked by gender and randomly assigned to one of three levels of presentation strategy or the control group. After controlling for the effects of spatial ability and prior knowledge with analysis of covariance, three significant differences were found between the versions: (a) the 2-D static treatment group scored significantly higher on the posttest than the control group; (b) the 3-D animated treatment group scored significantly higher on the posttest than the control group; and (c) the 2-D static treatment group scored significantly higher on the posttest than the 3-D interactive animated treatment group. Furthermore, the 3-D interactive animated treatment group spent significantly more time on the visual screens than the 2-D static treatment group. Analyses of student attitudes revealed that most students felt the landform visuals in the computer-based program helped them learn, but not in a way they would describe as fun. Significant differences in attitude were found by treatment and by gender. In contrast to findings from other studies, no gender differences were found on either of the two spatial tests given in this study. Cognitive load, cognitive involvement, and solution strategy are offered as three key factors that may help explain the results of this study. Implications for instructional design include suggestions about the use of 2-D static, 3-D animated and 3-D interactive animations as well

  5. Putting it in perspective: designing a 3D visualization to contextualize indigenous knowledge in rural Namibia

    DEFF Research Database (Denmark)

    Jensen, Kasper L; Winschiers-Theophilus, Heike; Rodil, Kasper

    2012-01-01

    As part of a long-term research and co-design project we are creating a 3D visualization interface for an indigenous knowledge (IK) management system with rural dwellers of the Herero tribe in Namibia. Evaluations of earlier prototypes and theories on cultural differences in perception led us...

  6. Visualization and targeted disruption of protein interactions in living cells

    Science.gov (United States)

    Herce, Henry D.; Deng, Wen; Helma, Jonas; Leonhardt, Heinrich; Cardoso, M. Cristina

    2013-01-01

    Protein–protein interactions are the basis of all processes in living cells, but most studies of these interactions rely on biochemical in vitro assays. Here we present a simple and versatile fluorescent-three-hybrid (F3H) strategy to visualize and target protein–protein interactions. A high-affinity nanobody anchors a GFP-fusion protein of interest at a defined cellular structure and the enrichment of red-labelled interacting proteins is measured at these sites. With this approach, we visualize the p53–HDM2 interaction in living cells and directly monitor the disruption of this interaction by Nutlin 3, a drug developed to boost p53 activity in cancer therapy. We further use this approach to develop a cell-permeable vector that releases a highly specific peptide disrupting the p53 and HDM2 interaction. The availability of multiple anchor sites and the simple optical readout of this nanobody-based capture assay enable systematic and versatile analyses of protein–protein interactions in practically any cell type and species. PMID:24154492

  7. A 3D network of helicates fully assembled by pi-stacking interactions.

    Science.gov (United States)

    Vázquez, Miguel; Taglietti, Angelo; Gatteschi, Dante; Sorace, Lorenzo; Sangregorio, Claudio; González, Ana M; Maneiro, Marcelino; Pedrido, Rosa M; Bermejo, Manuel R

    2003-08-07

    The neutral dinuclear dihelicate [Cu2(L)2] x 2CH3CN (1) forms a unique 3D network in the solid state due to pi-stacking interactions, which are responsible for intermolecular antiferromagnetic coupling between Cu(II) ions.

  8. Hund’s Rule-Driven Dzyaloshinskii-Moriya Interaction at 3d−5d Interfaces

    KAUST Repository

    Belabbes, Abderrezak

    2016-12-09

    Using relativistic first-principles calculations, we show that the chemical trend of the Dzyaloshinskii-Moriya interaction (DMI) in 3d-5d ultrathin films follows Hund's first rule with a tendency similar to their magnetic moments in either the unsupported 3d monolayers or 3d-5d interfaces. We demonstrate that, besides the spin-orbit coupling (SOC) effect in inversion asymmetric noncollinear magnetic systems, the driving force is the 3d orbital occupations: their spin-flip mixing processes with the spin-orbit active 5d states directly control the sign and magnitude of the DMI. The magnetic chirality changes are discussed in the light of the interplay between SOC, Hund's first rule, and the crystal-field splitting of d orbitals. © 2016 American Physical Society.

  9. 3D FE simulation of PCMI (Pellet-Cladding Mechanical Interaction) considering frictionless contact

    International Nuclear Information System (INIS)

    Seo, Sang-Kyu; Lee, Sung-Uk; Lee, Eun-Ho; Yang, Dong-Yol; Kim, Hyo-Chan; Yang, Yong-Sik

    2014-01-01

    The goal of this code is to couple every aspect of the physical phenomena. A one-dimensional FE model has been made for METEOR, which is well suited to evaluating the global behavior at high burnup levels. However, a multi-dimensional PCI analysis code is necessary to precisely analyze the stress distribution, especially in the case of crack analysis. The CAST3M 3D finite element code has been developed for the TOUTATIS code, considering the thermo-mechanical interaction in detail. The advanced multidimensional code called ALCYONE has been developed considering chemical-physical and thermomechanical aspects. Although there are many codes that analyze pellet-cladding interaction, it is difficult to consider every physical aspect. In this paper, pellet-to-cladding mechanical interaction in 3D has been simulated with frictionless contact using the developed module, which is written in FORTRAN 90. The 3D PCMI FE model is simulated with frictionless contact and elastic deformation. From the frictionless contact analysis, the interfacial pressure has been calculated and then used to obtain the solid heat coefficient, which is a main factor in analyzing the thermal distribution

  10. 3D Modelling and Visualization Based on the Unity Game Engine - Advantages and Challenges

    Science.gov (United States)

    Buyuksalih, I.; Bayburt, S.; Buyuksalih, G.; Baskaraca, A. P.; Karim, H.; Rahman, A. A.

    2017-11-01

    3D city modelling is increasingly popular and is becoming a valuable tool in managing big cities. Urban and energy planning, landscape, noise-sewage modelling, underground mapping and navigation are among the applications/fields which depend on 3D modelling for effective operations. Several research areas and implementation projects have been carried out to provide the most reliable 3D data format for sharing and functionality, as well as platforms for visualization and analysis. For instance, the BIMTAS company has recently completed a project to estimate potential solar energy on 3D buildings for the whole of Istanbul and is now focusing on 3D underground utility mapping for a pilot case study. The research and implementation standard in the 3D city model domain (3D data sharing and visualization schema) is based on the CityGML schema version 2.0. However, there are some limitations and issues in the implementation phase for large datasets. Most of the limitations were due to the visualization, database integration and analysis platform (Unity3D game engine), as highlighted in this paper.

  11. Not Just a Game … When We Play Together, We Learn Together: Interactive Virtual Environments and Gaming Engines for Geospatial Visualization

    Science.gov (United States)

    Shipman, J. S.; Anderson, J. W.

    2017-12-01

    An ideal tool for ecologists and land managers to investigate the impacts of both projected environmental changes and policy alternatives is the creation of immersive, interactive, virtual landscapes. As a new frontier in visualizing and understanding geospatial data, virtual landscapes require a new toolbox for data visualization that includes traditional GIS tools and uncommon tools such as the Unity3d game engine. Game engines provide capabilities to not only explore data but to build and interact with dynamic models collaboratively. These virtual worlds can be used to display and illustrate data that is often more understandable and plausible to both stakeholders and policy makers than is achieved using traditional maps. Within this context we will present funded research that has been developed utilizing virtual landscapes for geographic visualization and decision support among varied stakeholders. We will highlight the challenges and lessons learned when developing interactive virtual environments that require large multidisciplinary team efforts with varied competences. The results will emphasize the importance of visualization and interactive virtual environments and the link with emerging research disciplines within Visual Analytics.

  12. 3D-Pathology: a real-time system for quantitative diagnostic pathology and visualisation in 3D

    Science.gov (United States)

    Gottrup, Christian; Beckett, Mark G.; Hager, Henrik; Locht, Peter

    2005-02-01

    This paper presents the results of the 3D-Pathology project conducted under the European EC Framework 5. The aim of the project was, through the application of 3D image reconstruction and visualization techniques, to improve the diagnostic and prognostic capabilities of medical personnel when analyzing pathological specimens using transmitted light microscopy. A fully automated, computer-controlled microscope system has been developed to capture 3D images of specimen content. 3D image reconstruction algorithms have been implemented and applied to the acquired volume data in order to facilitate the subsequent 3D visualization of the specimen. Three potential application fields, immunohistology, chromogenic in situ hybridization (CISH) and cytology, have been tested using the prototype system. For both immunohistology and CISH, use of the system furnished significant additional information to the pathologist.

  13. The 3D LAOKOON--Visual and Verbal in 3D Online Learning Environments.

    Science.gov (United States)

    Liestol, Gunnar

    This paper reports on a project where three-dimensional (3D) online gaming environments were exploited for the purpose of academic communication and learning. 3D gaming environments are media and meaning rich and can provide inexpensive solutions for educational purposes. The experiment with teaching and discussions in this setting, however,…

  14. Complex interactions between human myoblasts and the surrounding 3D fibrin-based matrix.

    Directory of Open Access Journals (Sweden)

    Stéphane Chiron

    Full Text Available Anchorage of muscle cells to the extracellular matrix is crucial for a range of fundamental biological processes including migration, survival and differentiation. Three-dimensional (3D) culture has been proposed to provide a more physiological in vitro model of muscle growth and differentiation than routine 2D cultures. However, muscle cell adhesion and cell-matrix interplay of engineered muscle tissue remain to be determined. We have characterized cell-matrix interactions in 3D muscle culture and analyzed their consequences on cell differentiation. Human myoblasts were embedded in a fibrin matrix cast between two posts, cultured until confluence, and then induced to differentiate. Myoblasts in 3D aligned along the longitudinal axis of the gel. They displayed actin stress fibers evenly distributed around the nucleus and a cortical mesh of thin actin filaments. Adhesion sites in 3D were smaller in size than in rigid 2D culture but expression of adhesion site proteins, including α5 integrin and vinculin, was higher in 3D compared with 2D (p<0.05). Myoblasts and myotubes in 3D exhibited thicker and ellipsoid nuclei instead of the thin disk-like shape of the nuclei in 2D (p<0.001). Differentiation kinetics were faster in 3D as demonstrated by higher mRNA concentrations of α-actinin and myosin. More important, the elastic modulus of engineered muscle tissues increased significantly from 3.5 ± 0.8 to 7.4 ± 4.7 kPa during proliferation (p<0.05) and reached 12.2 ± 6.0 kPa during differentiation (p<0.05), thus attesting the increase of matrix stiffness during proliferation and differentiation of the myocytes. In conclusion, we reported modulations of the adhesion complexes, the actin cytoskeleton and nuclear shape in 3D compared with routine 2D muscle culture. These findings point to complex interactions between muscle cells and the surrounding matrix with dynamic regulation of the cell-matrix stiffness.

  15. Canine neuroanatomy: Development of a 3D reconstruction and interactive application for undergraduate veterinary education.

    Science.gov (United States)

    Raffan, Hazel; Guevar, Julien; Poyade, Matthieu; Rea, Paul M

    2017-01-01

    Current methods used to communicate and present the complex arrangement of vasculature related to the brain and spinal cord are limited in undergraduate veterinary neuroanatomy training. Traditionally it is taught with 2-dimensional (2D) diagrams, photographs and medical imaging scans which show a fixed viewpoint. 2D representations of 3-dimensional (3D) objects, however, lead to loss of spatial information, which can present problems when translating this to the patient. Computer-assisted learning packages with interactive 3D anatomical models have become established in medical training, yet equivalent resources are scarce in veterinary education. For this reason, we set out to develop a workflow methodology creating an interactive model depicting the vasculature of the canine brain that could be used in undergraduate education. Using MR images of a dog and several commonly available software programs, we set out to show how combining image editing, segmentation and surface generation, 3D modeling and texturing can result in the creation of a fully interactive application for veterinary training. In addition to clearly identifying a workflow methodology for the creation of this dataset, we have also demonstrated how an interactive tutorial and self-assessment tool can be incorporated into this. In conclusion, we present a workflow which has been successful in developing a 3D reconstruction of the canine brain and associated vasculature through segmentation, surface generation and post-processing of readily available medical imaging data. The reconstructed model was implemented into an interactive application for veterinary education that has been designed to target the problems associated with learning neuroanatomy, primarily the inability to visualise complex spatial arrangements from 2D resources. The lack of similar resources in this field suggests this workflow is original within a veterinary context. There is great potential to explore this method, and introduce
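
    A central step in such a workflow is turning a segmented volume into a surface mesh. The sketch below is a generic, assumed illustration of that step on a synthetic binary volume (not the authors' canine MRI data), using marching cubes from scikit-image and writing the result as a Wavefront OBJ file.

```python
# Hedged sketch: binary segmentation volume -> triangle mesh via marching cubes.
import numpy as np
from skimage import measure

# Synthetic "segmentation": a sphere standing in for a segmented brain/vessel mask.
z, y, x = np.mgrid[:64, :64, :64]
mask = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2).astype(np.float32)

verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5,
                                                  spacing=(1.0, 1.0, 1.0))

# Write a Wavefront OBJ that any 3D package or game engine can import.
with open("mesh.obj", "w") as f:
    for vx, vy, vz in verts:
        f.write(f"v {vx} {vy} {vz}\n")
    for a, b, c in faces + 1:               # OBJ indices are 1-based
        f.write(f"f {a} {b} {c}\n")

print(len(verts), "vertices,", len(faces), "triangles")
```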

  16. 3D pattern of brain atrophy in HIV/AIDS visualized using tensor-based morphometry

    Science.gov (United States)

    Chiang, Ming-Chang; Dutton, Rebecca A.; Hayashi, Kiralee M.; Lopez, Oscar L.; Aizenstein, Howard J.; Toga, Arthur W.; Becker, James T.; Thompson, Paul M.

    2011-01-01

    35% of HIV-infected patients have cognitive impairment, but the profile of HIV-induced brain damage is still not well understood. Here we used tensor-based morphometry (TBM) to visualize brain deficits and clinical/anatomical correlations in HIV/AIDS. To perform TBM, we developed a new MRI-based analysis technique that uses fluid image warping, and a new α-entropy-based information-theoretic measure of image correspondence, called the Jensen–Rényi divergence (JRD). Methods: 3D T1-weighted brain MRIs of 26 AIDS patients (CDC stage C and/or 3 without HIV-associated dementia; 47.2 ± 9.8 years; 25M/1F; CD4+ T-cell count: 299.5 ± 175.7/µl; log10 plasma viral load: 2.57 ± 1.28 RNA copies/ml) and 14 HIV-seronegative controls (37.6 ± 12.2 years; 8M/6F) were fluidly registered by applying forces throughout each deforming image to maximize the JRD between it and a target image (from a control subject). The 3D fluid registration was regularized using the linearized Cauchy–Navier operator. Fine-scale volumetric differences between diagnostic groups were mapped. Regions were identified where brain atrophy correlated with clinical measures. Results: Severe atrophy (~15–20% deficit) was detected bilaterally in the primary and association sensorimotor areas. Atrophy of these regions, particularly in the white matter, correlated with cognitive impairment (P=0.033) and CD4+ T-lymphocyte depletion (P=0.005). Conclusion: TBM facilitates 3D visualization of AIDS neuropathology in living patients scanned with MRI. Severe atrophy in frontoparietal and striatal areas may underlie early cognitive dysfunction in AIDS patients, and may signal the imminent onset of AIDS dementia complex. PMID:17035049
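
    The Jensen-Renyi divergence used to drive the fluid registration has a compact definition for discrete distributions. The sketch below implements that standard definition on toy histograms; it is background illustration only and does not reproduce the paper's registration pipeline.

```python
# Hedged sketch: Jensen-Renyi divergence (JRD) for discrete distributions.
import numpy as np

def renyi_entropy(p, alpha):
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def jensen_renyi_divergence(dists, weights, alpha=0.5):
    dists = [np.asarray(d, float) / np.sum(d) for d in dists]
    weights = np.asarray(weights, float) / np.sum(weights)
    mixture = sum(w * d for w, d in zip(weights, dists))
    # Renyi entropy of the mixture minus the weighted entropies of the parts.
    return renyi_entropy(mixture, alpha) - sum(
        w * renyi_entropy(d, alpha) for w, d in zip(weights, dists))

# Toy example: two normalized intensity histograms.
p = np.array([0.40, 0.30, 0.20, 0.10])
q = np.array([0.10, 0.20, 0.30, 0.40])
print(jensen_renyi_divergence([p, q], weights=[0.5, 0.5]))   # > 0 for distinct p, q
```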

  17. Interaction of 3D dewetting nanodroplets on homogeneous and chemically heterogeneous substrates

    International Nuclear Information System (INIS)

    Asgari, M; Moosavi, A

    2014-01-01

    Long-time interaction of dewetting nanodroplets is investigated using a long-wave approximation method. Although the three-dimensional (3D) droplet evolution dynamics exhibits qualitative behavior analogous to the two-dimensional (2D) dynamics, there is an extensive quantitative difference between them: 3D dynamics is substantially faster than 2D dynamics. This can be related to the larger curvature and, as a consequence, the larger Laplace pressure difference between the droplets in 3D systems. The influence of various chemical heterogeneities on the behavior of the droplets has also been studied. In the case of gradient surfaces, it is shown how the gradient direction can change the dynamics. For a chemical step located between the droplets, the dynamics is enhanced or weakened depending on the initial configuration of the system. (paper)
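
    For context, the long-wave (lubrication) approximation reduces the full hydrodynamics to a single evolution equation for the film thickness h(x, y, t); a standard form (sign conventions and the specific disjoining pressure Π(h) used in the paper are not reproduced here) is

        \[
        \frac{\partial h}{\partial t} \;=\; \nabla\cdot\left[\frac{h^{3}}{3\eta}\,\nabla\big(-\gamma\,\nabla^{2}h - \Pi(h)\big)\right],
        \]

    where η is the viscosity and γ the surface tension; the larger mean curvature of 3D droplets enters through the Laplace-pressure term and is consistent with the faster 3D dynamics reported above.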

  18. 3D visualization of optical ray aberration and its broadcasting to smartphones by ray aberration generator

    Science.gov (United States)

    Hellman, Brandon; Bosset, Erica; Ender, Luke; Jafari, Naveed; McCann, Phillip; Nguyen, Chris; Summitt, Chris; Wang, Sunglin; Takashima, Yuzuru

    2017-11-01

    The ray formalism is critical to understanding light propagation, yet current pedagogy relies on inadequate 2D representations. We present a system in which real light rays are visualized through an optical system by using a collimated laser bundle and a fog chamber. Remote and immersive access is enabled by leveraging a commercially available 3D viewer and gesture-based remote control of the tool via bi-directional communication over the Internet.

  19. Neural correlates of olfactory and visual memory performance in 3D-simulated mazes after intranasal insulin application.

    Science.gov (United States)

    Brünner, Yvonne F; Rodriguez-Raecke, Rea; Mutic, Smiljana; Benedict, Christian; Freiherr, Jessica

    2016-10-01

    This fMRI study intended to establish 3D-simulated mazes with olfactory and visual cues and to examine the effect of intranasally applied insulin on memory performance in healthy subjects. The effect of insulin on hippocampus-dependent brain activation was explored using a double-blind, placebo-controlled design. Following intranasal administration of either insulin (40 IU) or placebo, 16 male subjects participated in two experimental MRI sessions with olfactory and visual mazes. Each maze included two separate runs. The first was an encoding maze during which subjects learned eight olfactory or eight visual cues at different target locations. The second was a recall maze during which subjects were asked to remember the target cues at spatial locations. For the eleven subjects included in the fMRI analysis, we were able to validate brain activation for odor perception and visuospatial tasks. However, we did not observe an enhancement of declarative memory performance in our behavioral data or of hippocampal activity in response to insulin application in the fMRI analysis. It is therefore possible that intranasal insulin application is sensitive to methodological variations, e.g. the timing of task execution and the dose applied. Findings from this study suggest that our method of 3D-simulated mazes is feasible for studying neural correlates of olfactory and visual memory performance. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Novel 3D Approach to Flare Modeling via Interactive IDL Widget Tools

    Science.gov (United States)

    Nita, G. M.; Fleishman, G. D.; Gary, D. E.; Kuznetsov, A.; Kontar, E. P.

    2011-12-01

    Currently available, and soon-to-be-available, sophisticated 3D models of particle acceleration and transport in solar flares require a new level of user-friendly visualization and analysis tools allowing quick and easy adjustment of model parameters and computation of realistic radiation patterns (images, spectra, polarization, etc.). We report the current state of these tools, which are under development and have already proved highly efficient for direct flare modeling. We present an interactive IDL widget application intended to provide a flexible tool that allows the user to generate spatially resolved radio and X-ray spectra. The object-based architecture of this application provides full interaction with imported 3D magnetic field models (e.g., from an extrapolation) that may be embedded in a global coronal model. Various tools allow users to explore the magnetic connectivity of the model by generating magnetic field lines originating at user-specified volume positions. Such lines may serve as reference lines for creating magnetic flux tubes, which are further populated with user-defined analytical thermal/nonthermal particle distribution models. By default, the application integrates IDL-callable DLL and shared libraries containing fast GS emission codes developed in FORTRAN and C++ and soft and hard X-ray codes developed in IDL. However, the interactive interface allows these default libraries to be interchanged with any user-defined IDL or external callable codes designed to solve the radiation transfer equation in the same or other wavelength ranges of interest. To illustrate the tool's capacity and generality, we present a step-by-step real-time computation of microwave and X-ray images from realistic magnetic structures obtained from a magnetic field extrapolation preceding a real event, and compare them with the actual imaging data obtained by the NORH and RHESSI instruments. We discuss further anticipated developments of the tools needed to accommodate

  1. 3D documentation of the Petalaindera: digital heritage preservation methods using 3D laser scanner and photogrammetry

    Science.gov (United States)

    Sharif, Harlina Md; Hazumi, Hazman; Hafizuddin Meli, Rafiq

    2018-01-01

    3D imaging technologies have undergone a massive revolution in recent years. Despite this rapid development, documentation of 3D cultural assets in Malaysia is still very much reliant upon conventional techniques such as measured drawings and manual photogrammetry. There has been very little progress towards exploring new methods or advanced technologies to convert 3D cultural assets into 3D visual representation and visualization models that are easily accessible for information sharing. In recent years, however, the advent of computer vision (CV) algorithms has made it possible to reconstruct the 3D geometry of objects from image sequences taken with digital cameras, which are then processed by web services and freeware applications. This paper presents a completed stage of an exploratory study that investigates the potential of using automated, CV-based, image-driven open-source software and web services to reconstruct and replicate cultural assets. Taking an intricate wooden boat, the Petalaindera, as its subject, the study evaluates the efficiency of CV systems and compares them with 3D laser scanning, which is known for its accuracy, efficiency and high cost. The final aim is to compare the visual accuracy of 3D models generated by the CV system with that of 3D models produced by 3D scanning and manual photogrammetry for an intricate subject such as the Petalaindera, and ultimately to explore cost-effective methods that could provide fundamental guidelines on best-practice approaches for digital heritage in Malaysia.
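
    As an illustration of the CV image-based pipeline mentioned above (and not the actual software used in the study), the following minimal two-view sketch recovers relative camera pose and a sparse point cloud from a pair of overlapping photographs; the image file names and the intrinsic matrix K are placeholders.

        import cv2
        import numpy as np

        K = np.array([[3000.0, 0.0, 2000.0],     # placeholder camera intrinsics
                      [0.0, 3000.0, 1500.0],
                      [0.0, 0.0, 1.0]])

        img1 = cv2.imread("boat_001.jpg", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("boat_002.jpg", cv2.IMREAD_GRAYSCALE)

        # Detect and match SIFT features between the two views.
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Estimate relative pose from the essential matrix, then triangulate.
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        sparse_cloud = (pts4d[:3] / pts4d[3]).T   # sparse 3D points on the hull

    Full photogrammetry pipelines repeat this over many images, run bundle adjustment and densify the cloud; this is what the open-source tools and web services evaluated in the study automate.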

  2. Comparison of 3D TOF-MRA and 3D CE-MRA at 3 T for imaging of intracranial aneurysms

    International Nuclear Information System (INIS)

    Cirillo, Mario; Scomazzoni, Francesco; Cirillo, Luigi; Cadioli, Marcello; Simionato, Franco; Iadanza, Antonella; Kirchin, Miles; Righi, Claudio; Anzalone, Nicoletta

    2013-01-01

    Purpose: To compare 3 T elliptical-centric CE-MRA with 3 T TOF-MRA for the detection and characterization of unruptured intracranial aneurysms (UIAs), using digital subtraction angiography (DSA) as the reference. Materials and methods: Twenty-nine patients (12 male, 17 female; mean age: 62 years) with 41 aneurysms (34 saccular, 7 fusiform; mean diameter: 8.85 mm [range 2.0–26.4 mm]) were evaluated with MRA at 3 T; each underwent a 3D TOF-MRA examination without contrast and then a 3D contrast-enhanced (CE-MRA) examination with 0.1 mmol/kg bodyweight gadobenate dimeglumine and k-space elliptic mapping (Contrast ENhanced Timing Robust Angiography [CENTRA]). Both TOF and CE-MRA images were used to evaluate morphologic features that impact the risk of rupture and the selection of a treatment. Almost half (20/41) of the UIAs were located in the internal carotid artery, 7 in the anterior communicating artery, 9 in the middle cerebral artery and 4 in the vertebro-basilar arterial system. All patients also underwent DSA before or after the MR examination. Results: The CE-MRA results were in all cases consistent with the DSA dataset. No differences were noted between 3D TOF-MRA and CE-MRA concerning the detection and location of the 41 aneurysms or visualization of the parent artery. Differences were apparent concerning the visualization of morphologic features, especially for large aneurysms (>13 mm). An irregular sac shape was demonstrated for 21 aneurysms on CE-MRA but for only 13/21 aneurysms on 3D TOF-MRA. Likewise, CE-MRA permitted visualization of an aneurysmal neck and calculation of the sac/neck ratio for all 34 aneurysms with a neck demonstrated at DSA. Conversely, a neck was visible for only 24/34 aneurysms at 3D TOF-MRA. 3D CE-MRA detected 15 aneurysms with branches originating from the sac and/or neck, whereas branches were recognized in only 12/15 aneurysms at 3D TOF-MRA. Conclusion: For evaluation of intracranial aneurysms at 3 T, 3D CE-MRA is superior to 3D TOF

  3. Efficient data exchange: Integrating a vector GIS with an object-oriented, 3-D visualization system

    International Nuclear Information System (INIS)

    Kuiper, J.; Ayers, A.; Johnson, R.; Tolbert-Smith, M.

    1996-01-01

    A common problem encountered in Geographic Information System (GIS) modeling is the exchange of data between different software packages to best utilize the unique features of each package. This paper describes a project to integrate two systems through efficient data exchange. The first is a widely used GIS based on a relational data model. This system has a broad set of data input, processing, and output capabilities, but lacks three-dimensional (3-D) visualization and certain modeling functions. The second system is a specialized object-oriented package designed for 3-D visualization and modeling. Although this second system is useful for subsurface modeling and hazardous waste site characterization, it does not provide many of the capabilities of a complete GIS. The system-integration project resulted in an easy-to-use program to transfer information between the systems, making many of the more complex conversion issues transparent to the user. The strengths of both systems are accessible, allowing the scientist more time to focus on analysis. This paper details the capabilities of the two systems, explains the technical issues associated with data exchange and how they were solved, and outlines an example analysis project that used the integrated systems

  4. Development and validation of an interactive, efficient dose rate distribution calculation program ARShield for visualization of the radiation field in nuclear power plants

    International Nuclear Information System (INIS)

    He, Shuxiang; Zhang, Han; Wang, Mengqi; Zang, Qiyong; Zhang, Jingyu; Chen, Yixue

    2017-01-01

    The point kernel integration (PKI) method is widely used for the visualization of radiation fields in engineering applications because it can quickly handle large-scale problems with complicated geometries. Traditional PKI programs, however, have many limitations, such as cumbersome modeling, cumbersome source definition, statistics over fine 3D result meshes, and poor efficiency for large-scale computations. To overcome these limitations for the visualization of radiation fields, ARShield was developed. The results show that ARShield can deal with complicated plant radiation shielding problems for visualization of the radiation field. Comparison with SuperMC and QAD shows that the program is reliable and efficient. ARShield also meets the demands of fast calculation and interactive modeling and display of 3D geometries on a graphical user interface, which helps avoid modeling errors in both calculation and visualization. (authors)
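
    For context, the point kernel method evaluates the dose rate at a detector location r by integrating attenuated, build-up-corrected contributions from differential source volumes; a standard textbook form (not quoted from the ARShield paper) is

        \[
        \dot{D}(\mathbf{r}) \;=\; k \int_{V_s} \frac{S_v(\mathbf{r}')\, B\!\big(\mu\,|\mathbf{r}-\mathbf{r}'|\big)\, e^{-\mu\,|\mathbf{r}-\mathbf{r}'|}}{4\pi\,|\mathbf{r}-\mathbf{r}'|^{2}}\; \mathrm{d}V',
        \]

    where S_v is the volumetric source strength, μ the attenuation coefficient accumulated along the ray, B the build-up factor and k a flux-to-dose conversion factor; evaluating this kernel over a fine 3D mesh of detector points is what makes interactive visualization of the radiation field computationally demanding.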

  5. Virtual teeth: a 3D method for editing and visualizing small structures in CT scans

    DEFF Research Database (Denmark)

    Bro-Nielsen, Morten; Larsen, Per; Kreiborg, Sven

    1996-01-01

    The paper presents an interactive method for segmentation and visualization of small structures in CT scans. A combination of isosurface generation, spatial region growing and interactive graphics tools is used to extract small structures interactively. A practical example of segmentation of the
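
    A minimal sketch of the spatial region-growing step (the seed voxel, intensity window and volume loader are illustrative assumptions, not values from the paper):

        from collections import deque
        import numpy as np

        def region_grow(volume, seed, lo, hi):
            """Grow a region from `seed`, accepting 6-connected voxels whose
            intensity lies in [lo, hi]; returns a boolean mask."""
            mask = np.zeros(volume.shape, dtype=bool)
            queue = deque([seed])
            while queue:
                z, y, x = queue.popleft()
                if mask[z, y, x] or not (lo <= volume[z, y, x] <= hi):
                    continue
                mask[z, y, x] = True
                for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                    nz, ny, nx = z + dz, y + dy, x + dx
                    if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                            and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]):
                        queue.append((nz, ny, nx))
            return mask

        # Example: grow a small structure from a user-picked seed voxel, then pass
        # `mask` to an isosurface routine for interactive 3D display.
        # ct = load_ct_volume("scan.nii")                    # hypothetical loader
        # mask = region_grow(ct, seed=(120, 200, 180), lo=1200, hi=3000)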

  6. WebViz: A Web-based Collaborative Interactive Visualization System for Large-Scale Data Sets

    Science.gov (United States)

    Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.

    2010-12-01

    WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota's Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been built upon over the last 3 1/2 years. The motivation behind WebViz lies primarily in the need to parse through an increasing amount of data produced by the scientific community as a result of larger and faster multicore and massively parallel computers coming to the market, including the use of general-purpose GPU computing. WebViz allows these large data sets to be visualized online by anyone with an account. The application allows users to save time and resources by visualizing data 'on the fly', wherever he or she may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE's custom hierarchical volume rendering software provides high-resolution visualizations on the order of 15 million pixels and has been employed for visualizing data primarily from simulations ranging from astrophysics to geophysical fluid dynamics. In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web- and javascript-enabled cell phones. Features in the current version include the ability for users to (1) securely log in, (2) launch multiple visualizations, (3) conduct collaborative visualization sessions, (4) delegate control aspects of a visualization to others and (5) engage in collaborative chats with other users within the user interface

  7. PointCloudExplore 2: Visual exploration of 3D gene expression

    Energy Technology Data Exchange (ETDEWEB)

    International Research Training Group Visualization of Large and Unstructured Data Sets, University of Kaiserslautern, Germany; Institute for Data Analysis and Visualization, University of California, Davis, CA; Computational Research Division, Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA; Genomics Division, LBNL; Computer Science Department, University of California, Irvine, CA; Computer Science Division, University of California, Berkeley, CA; Life Sciences Division, LBNL; Department of Molecular and Cellular Biology and the Center for Integrative Genomics, University of California, Berkeley, CA; Ruebel, Oliver; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Keranen, Soile V.E.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; DePace, Angela H.; Simirenko, L.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2008-03-31

    To better understand how developmental regulatory networks are defined in the genome sequence, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods to describe 3D gene expression data, i.e., the output of the network at cellular resolution for multiple time points. To allow researchers to explore these novel data sets we have developed PointCloudXplore (PCX). In PCX we have linked physical and information visualization views via the concept of brushing (cell selection). For each view, dedicated operations for performing selection of cells are available. In PCX, all cell selections are stored in a central management system. Cells selected in one view can in this way be highlighted in any view, allowing further cell subset properties to be determined. Complex cell queries can be defined by combining different cell selections using logical operations such as AND, OR, and NOT. Here we provide an overview of PointCloudXplore 2 (PCX2), the latest publicly available version of PCX. PCX2 has proven to be an effective tool for visual exploration of 3D gene expression data. We discuss (i) all views available in PCX2, (ii) different strategies to perform cell selection, (iii) the basic architecture of PCX2, and (iv) illustrate the usefulness of PCX2 using selected examples.
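
    Brushing-style selections of the kind described above can be represented as boolean masks over the cell index and combined with logical operators; the sketch below (synthetic attribute arrays and thresholds, not PCX2 code) illustrates the idea.

        import numpy as np

        n_cells = 6078                              # cells at one time point (illustrative)
        x = np.random.rand(n_cells)                 # spatial coordinate used by a physical view
        eve = np.random.rand(n_cells)               # expression level of one gene
        ftz = np.random.rand(n_cells)               # expression level of another gene

        # Each brush made in a view becomes a boolean mask over the cells.
        brush_spatial = x < 0.5                     # selection from a physical view
        brush_eve = eve > 0.7                       # selection from an expression view
        brush_ftz = ftz > 0.7

        # Complex queries combine stored selections with AND, OR and NOT.
        query = brush_spatial & (brush_eve | brush_ftz) & ~(brush_eve & brush_ftz)
        selected_ids = np.flatnonzero(query)        # highlighted in every linked view
        print(f"{selected_ids.size} cells selected")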

  8. Fragility of ferromagnetic double exchange interactions and pressure tuning of magnetism in 3d-5d double perovskite Sr2FeOsO6

    Science.gov (United States)

    Veiga, L. S. I.; Fabbris, G.; van Veenendaal, M.; Souza-Neto, N. M.; Feng, H. L.; Yamaura, K.; Haskel, D.

    2015-06-01

    The ability to tune exchange (magnetic) interactions between 3d transition metals in perovskite structures has proven to be a powerful route to discovery of novel properties. Here we demonstrate that the introduction of 3d-5d exchange pathways in double perovskites enables additional tunability, a result of the large spatial extent of 5d wave functions. Using x-ray probes of magnetism and structure at high pressure, we show that compression of Sr2FeOsO6 drives an unexpected continuous change in the sign of Fe-Os exchange interactions and a transition from antiferromagnetic to ferrimagnetic order. We analyze the relevant electron-electron interactions, shedding light on fundamental differences with the more thoroughly studied 3d-3d systems.

  9. Motivation and Academic Improvement Using Augmented Reality for 3D Architectural Visualization

    Directory of Open Access Journals (Sweden)

    David FONSECA ESCUDERO

    2016-05-01

    Full Text Available This paper discusses the results of an evaluation of motivation, user profile and level of satisfaction in a workflow using 3D augmented visualization of complex models in educational environments. The study presents the results of experiments conducted with first- and second-year students of Architecture and of Science and Construction Technologies (the old Spanish degree of Building Engineering, which is recognized at a European level). We used a mixed method combining quantitative and qualitative student assessment in order to obtain a general overview of the use of new technologies, mobile devices and advanced visual methods in academic environments. The results show how the students involved in the experiments improved their academic results and their engagement with the subject, which allows us to conclude that these hybrid technologies improve both spatial skills and student motivation, a key concept in the current educational framework composed of digital-native students and a wide range of applications and interfaces useful for teaching and learning.

  10. SmartR: an open-source platform for interactive visual analytics for translational research data.

    Science.gov (United States)

    Herzinger, Sascha; Gu, Wei; Satagopam, Venkata; Eifes, Serge; Rege, Kavita; Barbosa-Silva, Adriano; Schneider, Reinhard

    2017-07-15

    In translational research, efficient knowledge exchange between the different fields of expertise is crucial. An open platform that is capable of storing a multitude of data types, such as clinical, pre-clinical or OMICS data, combined with strong visual analytical capabilities will significantly accelerate scientific progress by making data more accessible and hypothesis generation easier. The open data warehouse tranSMART is capable of storing a variety of data types and has a growing user community including both academic institutions and pharmaceutical companies. tranSMART, however, currently lacks interactive and dynamic visual analytics and does not permit any post-processing interaction or exploration. For this reason, we developed SmartR, a plugin for tranSMART that equips the platform not only with several dynamic visual analytical workflows, but also with its own framework for the addition of new custom workflows. Modern web technologies such as D3.js and AngularJS were used to build a set of standard visualizations that were heavily improved with dynamic elements. The source code is licensed under the Apache 2.0 License and is freely available on GitHub: https://github.com/transmart/SmartR. Contact: reinhard.schneider@uni.lu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  11. Efficient Computation of Casimir Interactions between Arbitrary 3D Objects

    International Nuclear Information System (INIS)

    Reid, M. T. Homer; Rodriguez, Alejandro W.; White, Jacob; Johnson, Steven G.

    2009-01-01

    We introduce an efficient technique for computing Casimir energies and forces between objects of arbitrarily complex 3D geometries. In contrast to other recently developed methods, our technique easily handles nonspheroidal, nonaxisymmetric objects, and objects with sharp corners. Using our new technique, we obtain the first predictions of Casimir interactions in a number of experimentally relevant geometries, including crossed cylinders and tetrahedral nanoparticles.

  12. Investigations of the D-multi-ρ interactions

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, C.W. [Institut fuer Kernphysik (Theorie), Institute for Advanced Simulation, and Juelich Center for Hadron Physics, Forschungszentrum Juelich (Germany); Central South University, School of Physics and Electronics, Changsha (China)

    2017-09-15

    In the present work, which aims at searching for bound states, the interactions of the D-multi-ρ systems are investigated by means of the formalism of the fixed-center approximation to the Faddeev equations. Reproducing the states f_2(1270) and D_1(2420) dynamically in the two-body ρρ and ρD interactions, respectively, as the clusters of the fixed-center approximation, the state D(3000)^0 is found as a molecule with D-f_2 or ρ-D_1 structure in the three-body interactions, where we determine its quantum numbers J^P = 2^- and find another possible state D_2(3100) with isospin I = 3/2. Our results also include some predictions with uncertainties: a D_3(3160) state with I(J^P) = 1/2(3^+) in the four-body interactions; a narrow D_4(3730) state with I(J^P) = 1/2(4^-), a wide D_4(3410) state with I(J^P) = 1/2(4^-), and another wide D_4(3770) state with I(J^P) = 3/2(4^-) in the five-body interactions; and a D_5(3570) state with I(J^P) = 1/2(5^+) in the six-body interactions. (orig.)

  13. How the Venetian Blind Percept Emerges from the Laminar Cortical Dynamics of 3D Vision

    OpenAIRE

    Stephen eGrossberg

    2014-01-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model shows how identified neurons that interact in hierarchically organized laminar circuits of the visual cortex can simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in s...

  14. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This brochure describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  15. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This presentation describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  16. Interaction of charged 3D soliton with Coulomb center

    International Nuclear Information System (INIS)

    Rybakov, Yu.P.

    1996-03-01

    The Einstein-de Broglie particle-soliton concept is applied to simulate stationary states of an electron in a hydrogen atom. According to this concept, the electron is described by localized regular solutions to some nonlinear equations. In the framework of the Synge model for interacting scalar and electromagnetic fields, a system of integral equations has been obtained which describes the interaction between a charged 3D soliton and a Coulomb center. The asymptotic expressions for the physical fields describing a soliton moving around the fixed Coulomb center have been obtained with the help of the integral equations. It is shown that the electron-soliton center travels along a stationary orbit around the Coulomb center. Electromagnetic radiation is absent, as the Poynting vector has the non-wave asymptotic behavior O(r^{-3}) after averaging over angles; i.e., the existence of a spherical surface corresponding to zero Poynting-vector flux has been proved. Field lines of the Poynting vector are constructed in the asymptotic region. (author). 22 refs, 2 figs

  17. Immersive Visualization of the Solid Earth

    Science.gov (United States)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

    Immersive visualization using virtual reality (VR) display technology offers unique benefits for the visual analysis of complex three-dimensional data such as tomographic images of the mantle and higher-dimensional data such as computational geodynamics models of mantle convection or even planetary dynamos. Unlike "traditional" visualization, which has to project 3D scalar data or vectors onto a 2D screen for display, VR can display 3D data in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection and interfere with interpretation. As a result, researchers can apply their spatial reasoning skills to 3D data in the same way they can to real objects or environments, as well as to complex objects like vector fields. 3D Visualizer is an application for visualizing 3D volumetric data, such as results from mantle convection simulations or seismic tomography reconstructions, using VR display technology and a strong focus on interactive exploration. Unlike other visualization software, 3D Visualizer does not present static visualizations, such as a set of cross-sections at pre-selected positions and orientations, but instead lets users ask questions of their data, for example by dragging a cross-section through the data's domain with their hands and seeing data mapped onto that cross-section in real time, or by touching a point inside the data domain and immediately seeing an isosurface connecting all points having the same data value as the touched point. Combined with tools allowing 3D measurements of positions, distances, and angles, and with annotation tools that allow free-hand sketching directly in 3D data space, the outcome of using 3D Visualizer is not primarily a set of pictures, but derived data to be used for subsequent analysis. 3D Visualizer works best in virtual reality, either in high-end facility-scale environments such as CAVEs

  18. 3D MR cisternography to identify distal dural rings. Comparison of 3D-CISS and 3D-SPACE sequences

    International Nuclear Information System (INIS)

    Watanabe, Yoshiyuki; Makidono, Akari; Nakamura, Miho; Saida, Yukihisa

    2011-01-01

    The distal dural ring (DDR) is an anatomical landmark used to distinguish intra- and extradural aneurysms. We investigated identification of the DDR using two three-dimensional (3D) magnetic resonance (MR) cisternography sequences, 3D constructive interference in steady state (CISS) and 3D sampling perfection with application-optimized contrasts using different flip angle evolutions (SPACE), at 3.0 tesla. Ten healthy adult volunteers underwent imaging with 3D-CISS, 3D-SPACE, and time-of-flight MR angiography (TOF-MRA) sequences at 3.0 T. We analyzed DDR identification and internal carotid artery (ICA) signal intensity and classified the shape of the carotid cave. We identified the DDR using both 3D-SPACE and 3D-CISS, with no significant difference between the sequences. Visualization of the outline of the ICA in the cavernous sinus (CS) was significantly clearer with 3D-SPACE than with 3D-CISS. In the CS and petrous portions, signal intensity was lower with 3D-SPACE, and the flow void was poor with 3D-CISS in some subjects. We identified the DDR with both 3D-SPACE and 3D-CISS, but the superior contrast of the ICA in the CS using 3D-SPACE suggests the superiority of this sequence for evaluating the DDR. (author)

  19. Fall Prevention Self-Assessments Via Mobile 3D Visualization Technologies: Community Dwelling Older Adults' Perceptions of Opportunities and Challenges.

    Science.gov (United States)

    Hamm, Julian; Money, Arthur; Atwal, Anita

    2017-06-19

    In the field of occupational therapy, the assistive equipment provision process (AEPP) is a prominent preventive strategy used to promote independent living and to identify and alleviate fall risk factors via the provision of assistive equipment within the home environment. Current practice involves the use of paper-based forms that include 2D measurement guidance diagrams, which aim to communicate the precise points and dimensions that must be measured in order to make AEPP assessments. There are, however, issues such as "poor fit" of equipment due to inaccurate measurements being taken and recorded, resulting in more than 50% of equipment installed within the home being abandoned by patients. This paper presents a novel 3D measurement aid prototype (3D-MAP) that provides enhanced measurement and assessment guidance to patients via the use of 3D visualization technologies. The purpose of this study was to explore the perceptions of older adults with regard to the barriers and opportunities of using the 3D-MAP application as a tool that enables patient self-delivery of the AEPP. Thirty-three community-dwelling older adults participated in interactive sessions with a bespoke 3D-MAP application utilizing the retrospective think-aloud protocol and semistructured focus group discussions. The system usability scale (SUS) questionnaire was used to evaluate the application's usability. Thematic template analysis was carried out on the SUS item discussions, think-aloud, and semistructured focus group data. The quantitative SUS results revealed that the application may be described as having "marginal-high" to "good" usability, along with strong agreement with items relating to usability (P=.004) and learnability. Further work is needed to validate the application's utility with regard to the effectiveness, efficiency, accuracy, and reliability of measurements recorded using the application, and to compare it with 2D measurement guidance leaflets. ©Julian Hamm, Arthur Money, Anita Atwal.

  20. 4-D Visualization of Seismic and Geodetic Data of the Big Island of Hawai'i

    Science.gov (United States)

    Burstein, J. A.; Smith-Konter, B. R.; Aryal, A.

    2017-12-01

    For decades Hawai'i has served as a natural laboratory for studying complex interactions between magmatic and seismic processes. Investigating characteristics of these processes, as well as the crustal response to major Hawaiian earthquakes, requires a synthesis of seismic and geodetic data and models. Here, we present a 4-D visualization of the Big Island of Hawai'i that investigates geospatial and temporal relationships of seismicity, seismic velocity structure, and GPS crustal motions to known volcanic and seismically active features. Using the QPS Fledermaus visualization package, we compile 90 m resolution topographic data from NASA's Shuttle Radar Topography Mission (SRTM) and 50 m resolution bathymetric data from the Hawaiian Mapping Research Group (HMRG) with a high-precision earthquake catalog of more than 130,000 events from 1992-2009 [Matoza et al., 2013] and a 3-D seismic velocity model of Hawai'i [Lin et al., 2014] based on seismic data from the Hawaiian Volcano Observatory (HVO). Long-term crustal motion vectors are integrated into the visualization from HVO GPS time-series data. These interactive data sets reveal well-defined seismic structure near the summit areas of Mauna Loa and Kilauea volcanoes, where high Vp and high Vp/Vs anomalies occur at 5-12 km depth, along with clusters of low-magnitude events. These data are also used to help identify seismic clusters associated with the steady crustal detachment of the south flank of Kilauea's East Rift Zone. We also investigate the fault geometry of the 2006 M6.7 Kiholo Bay earthquake by analyzing elastic dislocation deformation modeling results [Okada, 1985] and HVO GPS and seismic data for this event. We demonstrate the 3-D fault mechanism of the Kiholo Bay main shock as a combination of strike-slip and dip-slip components (net slip 0.55 m) delineating a 30 km east-west striking, southward-dipping fault plane at 39 km depth. This visualization serves as a resource for advancing scientific analyses of

  1. 3D Visual Sensing of the Human Hand for the Remote Operation of a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2014-02-01

    Full Text Available New low-cost sensors and open, free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, and human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. This method is based on point clouds from range images captured by an RGBD sensor. It works in real time and does not require visual markers, camera calibration or prior knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light is changed. Furthermore, this method was designed to develop a human interface to control domestic or industrial devices remotely. In this paper, the method was tested by operating a robotic hand. Firstly, the human hand was recognized and the fingers were detected. Secondly, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.
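
    A minimal sketch of the kind of point-cloud processing involved, using the open-source Open3D library rather than the authors' implementation (the file name and parameter values are illustrative):

        import numpy as np
        import open3d as o3d

        # Point cloud captured from an RGB-D sensor (path is a placeholder).
        pcd = o3d.io.read_point_cloud("rgbd_frame.pcd")

        # Remove the dominant plane (e.g., a table or wall) with RANSAC so that
        # free-standing objects such as the hand remain.
        plane, inliers = pcd.segment_plane(distance_threshold=0.01,
                                           ransac_n=3, num_iterations=1000)
        objects = pcd.select_by_index(inliers, invert=True)

        # Cluster the remaining points; take the largest cluster as the hand,
        # which would then be analysed further to locate the fingertips.
        labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=50))
        if labels.max() >= 0:
            largest = np.argmax(np.bincount(labels[labels >= 0]))
            hand = objects.select_by_index(np.flatnonzero(labels == largest))
            o3d.visualization.draw_geometries([hand])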

  2. 3D particle simulations of ultra-short laser interaction

    Energy Technology Data Exchange (ETDEWEB)

    Nishihara, Katsunobu; Okamoto, Takashi; Yasui, Hidekazu [Osaka Univ., Suita (Japan). Inst. of Laser Engineering

    1998-03-01

    Two topics related to the interaction of ultra-short laser pulses with matter, the linear and nonlinear high-frequency conductivity of a solid-density hydrogen plasma and the anisotropic self-focusing of an intense laser in an overdense plasma, have been investigated with the use of 3-d particle codes. The frequency dependence of the linear conductivity in a dense plasma is obtained, which shows anomalous conductivity near the plasma frequency. Since the nonlinear conductivity decreases as v_o^{-3}, where v_o is the quiver velocity, an optimum amplitude exists leading to maximum electron heating. Anisotropic self-focusing of a linearly polarized intense laser is observed in an overdense plasma. (author)

  3. 3D Modeling of Ultrasonic Wave Interaction with Disbonds and Weak Bonds

    Science.gov (United States)

    Leckey, C.; Hinders, M.

    2011-01-01

    Ultrasonic techniques, such as the use of guided waves, can be ideal for finding damage in the plate- and pipe-like structures used in aerospace applications. However, the interaction of waves with real flaw types and geometries can lead to experimental signals that are difficult to interpret. 3-dimensional (3D) elastic wave simulations can be a powerful tool for understanding the complicated wave scattering involved in flaw detection and for optimizing experimental techniques. We have developed and implemented a parallel 3D elastodynamic finite integration technique (3D EFIT) code to investigate Lamb wave scattering from realistic flaws. This paper discusses simulation results for an aluminum-aluminum diffusion disbond and an aluminum-epoxy disbond, and compares results from the disbond case to the common artificial flaw type of a flat-bottom hole. The paper also discusses the potential for extending the 3D EFIT equations to incorporate physics-based weak-bond models for simulating wave scattering from weak adhesive bonds.
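
    The EFIT scheme integrates the linear elastodynamic equations in velocity-stress form on a staggered grid; the governing equations for an isotropic solid (standard form, not reproduced from the paper) are

        \[
        \rho\,\frac{\partial v_i}{\partial t} \;=\; \frac{\partial \sigma_{ij}}{\partial x_j} + f_i,
        \qquad
        \frac{\partial \sigma_{ij}}{\partial t} \;=\; \lambda\,\delta_{ij}\,\frac{\partial v_k}{\partial x_k}
        \;+\; \mu\left(\frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i}\right),
        \]

    where v is the particle velocity, σ the stress tensor, ρ the density, f a body force and λ, μ the Lamé parameters; disbonds and weak bonds enter the simulation as local changes of these material parameters or of the interface conditions.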

  4. Denoising imaging polarimetry by adapted BM3D method.

    Science.gov (United States)

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.
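
    For reference, the degree of linear polarization mentioned above is typically computed per pixel from Stokes parameters estimated from intensity images taken through differently oriented polarizers (the exact estimator used with PBM3D is not given in this record):

        \[
        S_0 = I_{0^\circ} + I_{90^\circ},\qquad
        S_1 = I_{0^\circ} - I_{90^\circ},\qquad
        S_2 = I_{45^\circ} - I_{135^\circ},\qquad
        \mathrm{DoLP} = \frac{\sqrt{S_1^{2} + S_2^{2}}}{S_0}.
        \]

    Because DoLP is a ratio of differences of noisy intensities, even moderate sensor noise is strongly amplified, which is why denoising before computing the Stokes parameters matters.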

  5. ADN-Viewer: a 3D approach for bioinformatic analyses of large DNA sequences.

    Science.gov (United States)

    Hérisson, Joan; Ferey, Nicolas; Gros, Pierre-Emmanuel; Gherbi, Rachid

    2007-01-20

    Most biologists work on textual DNA sequences that are limited to a linear representation of DNA. In this paper, we address the potential offered by virtual reality for 3D modeling and immersive visualization of large genomic sequences. The representation of the 3D structure of naked DNA allows biologists to observe and analyze genomes interactively at different levels. We developed a powerful software platform that provides a new point of view for sequence analysis: ADN-Viewer. Nevertheless, a classical eukaryotic chromosome of 40 million base pairs requires about 6 Gbytes of 3D data. In order to manage these huge amounts of data in real time, we designed various scene-management algorithms and immersive human-computer interaction for user-friendly data exploration. In addition, one bioinformatics study scenario is proposed.

  6. Embryonic staging using a 3D virtual reality system

    NARCIS (Netherlands)

    C.M. Verwoerd-Dikkeboom (Christine); A.H.J. Koning (Anton); P.J. van der Spek (Peter); N. Exalto (Niek); R.P.M. Steegers-Theunissen (Régine)

    2008-01-01

    BACKGROUND: The aim of this study was to demonstrate that Carnegie Stages could be assigned to embryos visualized with a 3D virtual reality system. METHODS: We analysed 48 3D ultrasound scans of 19 IVF/ICSI pregnancies at 7-10 weeks' gestation. These datasets were visualized as 3D

  7. Visualizing UAS-collected imagery using augmented reality

    Science.gov (United States)

    Conover, Damon M.; Beidleman, Brittany; McAlinden, Ryan; Borel-Donohue, Christoph C.

    2017-05-01

    One of the areas where augmented reality will have an impact is in the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving, rotating, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles in collecting aerial imagery and in generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.

  8. Visual exploration and analysis of human-robot interaction rules

    Science.gov (United States)

    Zhang, Hui; Boyles, Michael J.

    2013-01-01

    We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming

  9. Pionic 4f→3d transition in 181Ta, natural Re, and 209Bi and the strong interaction level shift and width of the pionic 3d state

    International Nuclear Information System (INIS)

    Konijn, J.; Panman, J.K.; Koch, J.H.; Doesburg, W. van; Ewan, G.T.; Johansson, T.; Tibell, G.; Fransson, K.; Tauscher, L.

    1979-01-01

    Owing to a powerful Compton-suppression technique it was possible to observe for the first time the pionic 4f→3d X-ray transition in elements heavier than A=150. The strong-interaction monopole shifts ε0 and widths Γ0, as well as the quadrupole splitting of the 3d levels, have been measured in Ta, Re and Bi. Thus, in addition to the strongly shifted and broadened 5g→4f transitions, a second strongly affected line is available for these elements. For the pionic 4f levels, standard optical potentials fit the strong-interaction shifts and broadenings quite well. The now-observed, deeper-lying 3d states in Ta, Re and Bi have shifts and widths that differ by a factor of 2 or more from the standard optical-potential predictions. From the observed relative X-ray intensities of the pionic cascade, the strong-interaction widths of the 5g and 4f levels are also extracted. (Auth.)

  10. Applying Pragmatics Principles for Interaction with Visual Analytics.

    Science.gov (United States)

    Hoque, Enamul; Setlur, Vidya; Tory, Melanie; Dykeman, Isaac

    2018-01-01

    Interactive visual data analysis is most productive when users can focus on answering the questions they have about their data, rather than focusing on how to operate the interface to the analysis tool. One viable approach to engaging users in interactive conversations with their data is a natural language interface to visualizations. These interfaces have the potential to be both more expressive and more accessible than other interaction paradigms. We explore how principles from language pragmatics can be applied to the flow of visual analytical conversations, using natural language as an input modality. We evaluate the effectiveness of pragmatics support in our system Evizeon, and present design considerations for conversation interfaces to visual analytics tools.

  11. A new 3D immersed boundary method for non-Newtonian fluid-structure-interaction with application

    Science.gov (United States)

    Zhu, Luoding

    2017-11-01

    Motivated by fluid-structure-interaction (FSI) phenomena in the life sciences (e.g., motions of sperm and the cytoskeleton in complex fluids), we introduce a new immersed boundary method for FSI problems involving non-Newtonian fluids in three dimensions. The non-Newtonian fluids are modelled by the FENE-P model (which includes the Oldroyd-B model as a special case) and numerically solved by a lattice Boltzmann scheme (the D3Q7 model). The fluid flow is modelled by the lattice Boltzmann equations and numerically solved by the D3Q19 model. The deformable structure and the fluid-structure interaction are handled by the immersed boundary method. As an application, we study an FSI toy problem: the interaction of an elastic plate (flapped at its leading edge and restricted nowhere else) with a non-Newtonian fluid in a 3D flow. This work was supported by NSF-DMS under research Grant 1522554.
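
    For context, the lattice Boltzmann equations referred to above are usually written in the single-relaxation-time (BGK) form over a discrete velocity set {c_i} (the specific collision operator used in the paper is not detailed in this record):

        \[
        f_i(\mathbf{x} + \mathbf{c}_i\,\Delta t,\; t + \Delta t) - f_i(\mathbf{x}, t)
        \;=\; -\frac{1}{\tau}\Big[f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t)\Big],
        \]

    where τ is the relaxation time and f_i^eq the local equilibrium distribution; as described above, the D3Q19 velocity set (19 discrete velocities) carries the flow field while the D3Q7 set (7 velocities) is used for the FENE-P constitutive model.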

  12. The history of visual magic in computers how beautiful images are made in CAD, 3D, VR and AR

    CERN Document Server

    Peddie, Jon

    2013-01-01

    If you have ever looked at a fantastic adventure or science fiction movie, or an amazingly complex and rich computer game, or a TV commercial where cars or gas pumps or biscuits behaved like people, and wondered, "How do they do that?", then you've experienced the magic of 3D worlds generated by a computer. 3D in computers began as a way to represent automotive designs and illustrate the construction of molecules. The use of 3D graphics evolved into visualizations of simulated data and artistic representations of imaginary worlds. In order to overcome the processing limitations of the computer, graph

  13. "We Put on the Glasses and Moon Comes Closer!" Urban Second Graders Exploring the Earth, the Sun and Moon through 3D Technologies in a Science and Literacy Unit

    Science.gov (United States)

    Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin

    2014-01-01

    This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day…

  14. Magmatic Systems in 3-D

    Science.gov (United States)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and the Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements, or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offset studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g., fault scarps, earthquake locations and plate motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As the number of visual objects in a particular scene grows, there is a tendency for overlapping objects to mask one another; this clutter can be managed through the effective use of total or partial transparency (i.e., the alpha channel). In this way, the co-variation between different datasets can be investigated
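
    The voxel opacity filtering described above amounts to a transfer function followed by alpha compositing along each viewing ray; a minimal sketch of front-to-back compositing (illustrative transfer functions and sampling, not the actual rendering code) is given below.

        import numpy as np

        def composite_ray(samples, color_tf, opacity_tf):
            """Front-to-back alpha compositing of data values sampled along one ray.
            `color_tf` and `opacity_tf` map a datum value to an RGB colour and an
            opacity in [0, 1]; values mapped to zero opacity are rejected."""
            rgb, alpha = np.zeros(3), 0.0
            for value in samples:
                a = opacity_tf(value)
                if a <= 0.0:
                    continue                      # transparent voxel: skipped
                rgb += (1.0 - alpha) * a * color_tf(value)
                alpha += (1.0 - alpha) * a
                if alpha > 0.99:                  # early ray termination
                    break
            return rgb, alpha

        # Example transfer functions: show only strong reflectivity (e.g., the
        # melt-lens event) and make everything else fully transparent.
        color_tf = lambda v: np.array([1.0, 0.3, 0.1]) * v
        opacity_tf = lambda v: 0.8 if v > 0.7 else 0.0
        rgb, alpha = composite_ray(np.random.rand(256), color_tf, opacity_tf)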

  15. Collaborative interactive visualization: exploratory concept

    Science.gov (United States)

    Mokhtari, Marielle; Lavigne, Valérie; Drolet, Frédéric

    2015-05-01

    Dealing with an ever-increasing amount of data is a challenge that military intelligence analysts or teams of analysts face day to day. Increased individual and collective comprehension goes through collaboration between people: the better the collaboration, the better the comprehension. Nowadays, various technologies support and enhance collaboration by allowing people to connect and collaborate in settings as varied as mobile devices, networked computers, display walls and tabletop surfaces, to name just a few. A powerful collaboration system includes traditional and multimodal visualization features to achieve effective human communication. Interactive visualization strengthens collaboration because this approach is conducive to incrementally building a mental assessment of the data's meaning. The purpose of this paper is to present an overview of the envisioned collaboration architecture and the interactive visualization concepts underlying the Sensemaking Support System prototype developed to support analysts in the context of the Joint Intelligence Collection and Analysis Capability project at DRDC Valcartier. It presents the current version of the architecture, discusses future capabilities to help analysts in the accomplishment of their tasks, and finally recommends collaboration and visualization technologies that allow going a step further both as an individual and as a team.

  16. Forensic 3D Visualization of CT Data Using Cinematic Volume Rendering: A Preliminary Study.

    Science.gov (United States)

    Ebert, Lars C; Schweitzer, Wolf; Gascho, Dominic; Ruder, Thomas D; Flach, Patricia M; Thali, Michael J; Ampanozi, Garyfalia

    2017-02-01

    The 3D volume-rendering technique (VRT) is commonly used in forensic radiology. Its main function is to explain medical findings to state attorneys, judges, or police representatives. New visualization algorithms permit the generation of almost photorealistic volume renderings of CT datasets. The objective of this study is to present and compare a variety of radiologic findings to illustrate the differences between, and the advantages and limitations of, the current VRT and the physically based cinematic rendering technique (CRT). Seventy volunteers were shown VRT and CRT reconstructions of 10 different cases. They were asked to mark the findings on the images and rate them in terms of realism and understandability. A total of 48 of the 70 questionnaires were returned and included in the analysis. On the basis of most of the findings presented, CRT appears to be equal or superior to VRT with respect to the realism and understandability of the visualized findings. Overall, in terms of realism, the difference between the techniques was statistically significant (p < 0.05). CRT, like conventional VRT, is not primarily intended for diagnostic radiologic image analysis, and therefore it should be used primarily as a tool to deliver visual information in the form of radiologic image reports. Using CRT for forensic visualization might have advantages over using VRT if conveying a high degree of visual realism is of importance. Most of the shortcomings of CRT have to do with the software being an early prototype.

  17. Supporting interactive visual analytics of energy behavior in buildings through affine visualizations

    DEFF Research Database (Denmark)

    Nielsen, Matthias; Brewer, Robert S.; Grønbæk, Kaj

    2016-01-01

    Domain experts dealing with big data are typically not familiar with advanced data mining tools. This especially holds true for domain experts within energy management. In this paper, we introduce a visual analytics approach that empowers such users to visually analyze energy behavior based ...Viz, which interactively maps data from real-world buildings. It is an overview+detail interactive visual analytics tool supporting both rapid ad hoc exploration and structured evaluation of hypotheses about patterns and anomalies in resource consumption data mixed with occupant survey data. We have evaluated ... the approach with five domain experts within energy management, and further with 10 data analytics experts, and found that it was easily attainable and that it supported visual analysis of mixed consumption and survey data. Finally, we discuss future perspectives of affine visual analytics for mixed...

  18. Integrating 4-d light-sheet imaging with interactive virtual reality to recapitulate developmental cardiac mechanics and physiology

    Science.gov (United States)

    Ding, Yichen; Yu, Jing; Abiri, Arash; Abiri, Parinaz; Lee, Juhyun; Chang, Chih-Chiang; Baek, Kyung In; Sevag Packard, René R.; Hsiai, Tzung K.

    2018-02-01

    There currently is a limited ability to interactively study developmental cardiac mechanics and physiology. We therefore combined light-sheet fluorescence microscopy (LSFM) with virtual reality (VR) to provide a hybrid platform for 3-dimensional (3-D) architecture and time-dependent cardiac contractile function characterization. By taking advantage of the rapid acquisition, high axial resolution, low phototoxicity, and high fidelity in 3-D and 4-D (3-D spatial + 1-D time or spectra), this VR-LSFM hybrid methodology enables interactive visualization and quantification otherwise not available by conventional methods such as routine optical microscopes. We hereby demonstrate multi-scale applicability of VR-LSFM to 1) interrogate skin fibroblasts interacting with a hyaluronic acid-based hydrogel, 2) navigate through the endocardial trabecular network during zebrafish development, and 3) localize gene therapy-mediated potassium channel expression in adult murine hearts. We further combined our batch intensity normalized segmentation (BINS) algorithm with deformable image registration (DIR) to interface a VR environment for the analysis of cardiac contraction. Thus, the VR-LSFM hybrid platform demonstrates an efficient and robust framework for creating a user-directed microenvironment in which we uncovered developmental cardiac mechanics and physiology with high spatiotemporal resolution.

  19. Velocity-dependent changes of rotational axes in the non-visual control of unconstrained 3D arm motions.

    Science.gov (United States)

    Isableu, B; Rezzoug, N; Mallet, G; Bernardin, D; Gorce, P; Pagano, C C

    2009-12-29

    We examined the roles of the inertial (e(3)), shoulder-centre of mass (SH-CM), and shoulder-elbow articular (SH-EL) rotation axes in the non-visual control of unconstrained 3D arm rotations. Subjects rotated the arm in elbow configurations that yielded either a constant or a variable separation between these axes. We hypothesized that increasing the motion frequency and the task complexity would cause the limb's rotational axis to align with e(3) in order to minimize rotational resistances. Results showed two velocity-dependent profiles wherein the rotation axis coincided with the SH-EL axis at the S and I velocities and then, at the F velocity, shifted either to a SH-CM/e(3) trade-off axis for one profile or to no preferential axis for the other. A third profile was velocity-independent, with the SH-CM/e(3) trade-off axis being adopted. Our results are the first to provide evidence that the rotational axis of a multi-articulated limb may change from a geometrical axis of rotation to a mass- or inertia-based axis as motion frequency increases. These findings are discussed within the framework of the minimum inertia tensor (MIT) model, which shows that rotations about e(3) reduce the amount of joint muscle torque that must be produced, by employing the interaction torque to assist movement.
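
    A minimal sketch of what the e(3) axis above refers to: the principal axis of minimum moment of inertia of the limb. The arm is approximated here by three lumped point masses whose positions and masses are illustrative assumptions, not anthropometric data from the study.

    import numpy as np

    def inertia_tensor(points, masses):
        """Inertia tensor of a set of point masses about their centre of mass."""
        com = np.average(points, axis=0, weights=masses)
        r = points - com
        inertia = np.zeros((3, 3))
        for (x, y, z), m in zip(r, masses):
            inertia += m * np.array([[y*y + z*z, -x*y,      -x*z],
                                     [-x*y,      x*x + z*z, -y*z],
                                     [-x*z,      -y*z,      x*x + y*y]])
        return inertia

    # Upper arm, forearm, and hand lumped as three point masses along a slightly bent arm (metres, kg).
    points = np.array([[0.00, 0.00, 0.0], [0.30, 0.00, 0.0], [0.52, 0.08, 0.0]])
    masses = np.array([2.1, 1.2, 0.5])
    eigvals, eigvecs = np.linalg.eigh(inertia_tensor(points, masses))
    e3 = eigvecs[:, 0]   # eigenvector of the smallest moment of inertia, i.e. the minimum-inertia axis
    print("minimum-inertia axis e3:", np.round(e3, 3))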

  20. "Eyes On The Solar System": A Real-time, 3D-Interactive Tool to Teach the Wonder of Planetary Science

    Science.gov (United States)

    Hussey, K. J.

    2011-10-01

    NASA's Jet Propulsion Laboratory is using videogame technology to immerse students, the general public, and mission personnel in our solar system and beyond. "Eyes on the Solar System," a cross-platform, real-time, 3D-interactive application that runs inside a Web browser, was released worldwide late last year (solarsystem.nasa.gov/eyes). It gives users an extraordinary view of our solar system by virtually transporting them across space and time to make first-person observations of spacecraft and NASA/ESA missions in action. Key scientific results illustrated with video presentations and supporting imagery are embedded contextually into the solar system. The presentation will include a detailed demonstration of the software along with a description/discussion of how this technology can be adapted for education and public outreach, as well as a preview of coming attractions. This work is being conducted by the Visualization Technology Applications and Development Group at NASA's Jet Propulsion Laboratory, the same team responsible for "Eyes on the Earth 3D," which can be viewed at climate.nasa.gov/Eyes.html.

  1. GammaModeler 3-D gamma-ray imaging technology

    International Nuclear Information System (INIS)

    2000-01-01

    The 3-D GammaModeler™ system was used to survey a portion of the facility and provide a 3-D visual and radiation representation of contaminated equipment located within the facility. The 3-D GammaModeler™ system software was used to deconvolve extended sources into a series of point sources, locate the positions of these sources in space, and calculate the 30-cm dose rates for each of these sources. Localization of the sources in three dimensions provides information on source locations interior to the visual objects and provides a better estimate of the source intensities. The three-dimensional representation of the objects can be made transparent in order to visualize sources located within the objects. Positional knowledge of all the sources can be used to calculate a map of the radiation in the canyon. The use of 3-D visual and gamma-ray information supports improved planning and decision-making, and aids in communications with regulators and stakeholders.

  2. Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content

    Science.gov (United States)

    Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio

    2013-05-01

    With the advent of autostereoscopic display techniques and the increased demand for smartphones, there has been significant growth in mobile TV markets. The rapid growth in technical, economic, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices, so that people have more opportunities to come into contact with 3D content anytime and anywhere. Even though mobile 3D technology is driving the current market growth, there is an important issue to consider for consistent development and growth in the display market: human factors linked to mobile 3D viewing should be taken into consideration before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study aims to investigate the effect of viewing distance on the human visual system when exposed to mobile 3D environments. By recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines for viewers, help protect viewers against undesirable 3D effects, and lead to gradual progress towards human-friendly mobile 3D viewing.

  3. 3D for Geosciences: Interactive Tangibles and Virtual Models

    Science.gov (United States)

    Pippin, J. E.; Matheney, M.; Kitsch, N.; Rosado, G.; Thompson, Z.; Pierce, S. A.

    2016-12-01

    Point cloud processing provides a method of studying and modelling geologic features relevant to geoscience systems and processes. Here, software including Skanect, MeshLab, Blender, PDAL, and PCL is used in conjunction with 3D scanning hardware, including a Structure scanner and a Kinect camera, to create and analyze point cloud images of small-scale topography, karst features, tunnels, and structures at high resolution. This project successfully scanned internal karst features ranging from small stalactites to large rooms, as well as an external waterfall feature. For comparison purposes, multiple scans of the same object were merged into single object files both automatically, using commercial software, and manually, using open-source libraries and code. Files in .ply format were manually converted into numeric data sets and analyzed for similar regions between files in order to match them together; a numeric process can be assumed to be more powerful and efficient than the manual method, although it may lack other useful features that GUIs offer. The digital models have applications in mining as an efficient means of replacing topography functions such as measuring distances and areas. Additionally, it is possible to make simulation models such as drilling templates and calculations related to 3D spaces. Advantages of using the methods described here for these procedures include the relatively short time needed to obtain data and the easy transport of the equipment. With regard to open-pit mining, obtaining precise 3D images of large surfaces and georeferencing the scan data to interactive maps would be a high-value tool. The digital 3D images obtained from scans may be saved as printable files to create tangible, physical 3D-printed models based on scientific information, as well as digital "worlds" that can be navigated virtually. The data, models, and algorithms explored here can be used to convey complex scientific ideas to a range of …
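
    As a minimal sketch of the numeric route mentioned above, the snippet below loads the vertex coordinates of an ASCII .ply scan into a NumPy array so that two scans can be compared numerically; the file names are hypothetical, and binary PLY files and non-xyz vertex properties are not handled.

    import numpy as np

    def load_ascii_ply_vertices(path):
        """Return an (N, 3) array of x, y, z vertex coordinates from an ASCII PLY file."""
        with open(path, "r") as f:
            n_vertices = 0
            # Parse the header to find the vertex count and the end of the header.
            for line in f:
                token = line.strip()
                if token.startswith("element vertex"):
                    n_vertices = int(token.split()[-1])
                elif token == "end_header":
                    break
            # The next n_vertices data lines start with x y z (extra properties are ignored).
            verts = [list(map(float, next(f).split()[:3])) for _ in range(n_vertices)]
        return np.asarray(verts)

    # Example: a crude similarity check between two scans of the same feature,
    # using the offset between their centroids (file names are hypothetical).
    # scan_a = load_ascii_ply_vertices("stalactite_scan_a.ply")
    # scan_b = load_ascii_ply_vertices("stalactite_scan_b.ply")
    # print("centroid offset between scans:", scan_a.mean(axis=0) - scan_b.mean(axis=0))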

  4. 3D visualization and simulation to enhance nuclear learning

    International Nuclear Information System (INIS)

    Dimitri-Hakim, R.

    2012-01-01

    The nuclear power industry is facing a very real challenge that affects its day-to-day activities: a rapidly aging workforce. For New Nuclear Build (NNB) countries, the challenge is even greater, as they have to develop a completely new workforce with little to no prior experience of or exposure to nuclear power. The workforce replacement introduces workers of a new generation with different backgrounds and affinities than their predecessors. Major lifestyle differences between the new and the old generation of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content, and quick access to information are now necessary to achieve a high level of retention. (author)

  5. Exploring individual user differences in the 2D/3D interaction with medical image data

    NARCIS (Netherlands)

    Zudilova-Seinstra, E.; van Schooten, B.; Suinesiaputra, A.; van der Geest, R.; van Dijk, B.; Reiber, J.; Sloot, P.

    2010-01-01

    User-centered design is often performed without regard to individual user differences. In this paper, we report results of an empirical study aimed to evaluate whether computer experience and demographic user characteristics would have an effect on the way people interact with the visualized medical

  6. Exploring individual user differences in the 2D/3D interaction with medical image data

    NARCIS (Netherlands)

    Zudilova-Seinstra, Elena; van Schooten, B.W.; Suinesiaputra, Avan; van der Geest, Rob; van Dijk, Elisabeth M.A.G.; Reiber, Johan; Sloot, Peter

    2009-01-01

    User-centered design is often performed without regard to individual user differences. In this paper, we report results of an empirical study aimed to evaluate whether computer experience and demographic user characteristics would have an effect on the way people interact with the visualized medical

  7. How the Venetian Blind Percept Emerges from the Laminar Cortical Dynamics of 3D Vision

    Directory of Open Access Journals (Sweden)

    Stephen eGrossberg

    2014-08-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model shows how identified neurons that interact in hierarchically organized laminar circuits of the visual cortex can simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. The model describes how monocular and binocular oriented filtering interacts with later stages of 3D boundary formation and surface filling-in in the lateral geniculate nucleus (LGN) and cortical areas V1, V2, and V4. It proposes how interactions between layers 4, 3B, and 2/3 in V1 and V2 contribute to stereopsis, and how binocular and monocular information combine to form 3D boundary and surface representations. The model suggests how surface-to-boundary feedback from V2 thin stripes to pale stripes enables computationally complementary boundary and surface formation properties to generate a single consistent percept, eliminate redundant 3D boundaries, and trigger figure-ground perception. The model also shows how false binocular boundary matches may be eliminated by Gestalt grouping properties. In particular, a disparity filter, which helps to solve the Correspondence Problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, stereopsis with polarity …

  8. Development of 4D jaw movement visualization system for dental diagnosis support

    Science.gov (United States)

    Aoki, Yoshimitsu; Terajima, Masahiko; Nakasima, Akihiko

    2004-10-01

    A person with an asymmetric morphology of the maxillofacial skeleton reportedly has an asymmetric jaw function and a high risk of developing temporomandibular disorder. A comprehensive analysis from the point of view of both morphology and function, including maxillofacial and temporomandibular joint morphology, dental occlusion, and features of the mandibular movement pathways, is therefore essential. In this study, a 4D jaw movement visualization system was developed to allow visual understanding of a patient's characteristic jaw movement, 3D maxillofacial skeleton structure, and alignment of the upper and lower teeth. For this purpose, the 3D reconstructed images of the cranial and mandibular bones, obtained by computed tomography, were combined with morphological images of the dental models measured using a non-contact 3D measuring device, and the integrated model was animated using the 6-DOF jaw movement data. The system was experimentally applied to a patient with jaw deformity, and its usability as a clinical diagnostic support system was verified.
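
    A minimal sketch of the kind of data binding described above: a rigid mandible mesh is posed frame by frame from a 6-DOF time series (three rotations plus three translations per frame). The toy vertices, the pose values, and the x-y-z Euler-angle convention in degrees are illustrative assumptions, not taken from the paper.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def pose_mesh(vertices, euler_deg, translation_mm):
        """Apply one 6-DOF pose to an (N, 3) vertex array and return the moved copy."""
        R = Rotation.from_euler("xyz", euler_deg, degrees=True).as_matrix()
        return vertices @ R.T + np.asarray(translation_mm)

    # A toy "mandible" of three vertices followed over two frames of jaw movement data.
    mandible = np.array([[0.0, 0.0, 0.0], [40.0, 0.0, 0.0], [0.0, 20.0, 0.0]])
    frames = [((0.0, 0.0, 0.0), (0.0, 0.0, 0.0)),      # rest position
              ((5.0, 0.0, 0.0), (0.0, -2.0, -10.0))]   # small opening movement
    for euler, trans in frames:
        print(pose_mesh(mandible, euler, trans))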

  9. Augmented Reality in Scientific Publications-Taking the Visualization of 3D Structures to the Next Level.

    Science.gov (United States)

    Wolle, Patrik; Müller, Matthias P; Rauh, Daniel

    2018-03-16

    The examination of three-dimensional structural models in scientific publications allows the reader to validate or invalidate conclusions drawn by the authors. However, whether due to a (temporary) lack of access to proper visualization software or to a lack of proficiency, this information is not necessarily available to every reader. As the digital revolution progresses rapidly, technologies have become widely available that overcome these limitations and offer everyone the opportunity to appreciate models not only in 2D but also in 3D. Additionally, mobile devices such as smartphones and tablets allow access to this information almost anywhere, at any time. Since access to such information has only recently become standard practice, we want to outline straightforward ways to incorporate 3D models in augmented reality into scientific publications, books, posters, and presentations, and suggest that this should become general practice.

  10. 3D Analytical Calculation of the Interactions between Permanent Magnets

    OpenAIRE

    Allag , Hicham; Yonnet , Jean-Paul

    2008-01-01

    Up to now, analytical calculation has been possible only when the magnets have parallel magnetization directions. We have obtained two new results of prime importance for the analytical calculation: the torque between two magnets, and the force components and torque when the magnetization directions are perpendicular. The latter result allows the analytical calculation of the interactions when the magnetizations lie in any direction. The 3D analytical expressions…

  11. 3-D Surface Visualization of pH Titration "Topos": Equivalence Point Cliffs, Dilution Ramps, and Buffer Plateaus

    Science.gov (United States)

    Smith, Garon C.; Hossain, Md Mainul; MacCarthy, Patrick

    2014-01-01

    3-D topographic surfaces ("topos") can be generated to visualize how pH behaves during titration and dilution procedures. The surfaces are constructed by plotting computed pH values above a composition grid with volume of base added in one direction and overall system dilution on the other. What emerge are surface features that…
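
    A minimal sketch of the idea behind such a surface, for the simplest case of a strong acid titrated with a strong base (the buffer-plateau systems treated in the paper are not covered here): pH is computed from the charge balance on a grid of base volume versus dilution water. All concentrations and volumes are illustrative assumptions.

    import numpy as np

    KW = 1.0e-14           # water autoionization constant at 25 deg C
    CA0, VA0 = 0.10, 50.0  # initial strong-acid concentration (M) and volume (mL)
    CB = 0.10              # titrant (strong base) concentration (M)

    vb = np.linspace(0.0, 100.0, 201)    # mL of base added
    vw = np.linspace(0.0, 1000.0, 201)   # mL of diluting water
    VB, VW = np.meshgrid(vb, vw)

    v_total = VA0 + VB + VW
    delta = (CA0 * VA0 - CB * VB) / v_total            # net strong-acid concentration (can go negative)
    h = (delta + np.sqrt(delta**2 + 4.0 * KW)) / 2.0   # [H+] from the charge balance [H+] - Kw/[H+] = delta
    pH = -np.log10(h)

    # pH is now a 2-D surface: the equivalence-point "cliff" runs along VB = 50 mL,
    # and increasing VW flattens it towards pH 7 (plot e.g. with matplotlib's plot_surface).
    print(pH.shape, round(float(pH.min()), 2), round(float(pH.max()), 2))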

  12. High-quality and interactive animations of 3D time-varying vector fields.

    Science.gov (United States)

    Helgeland, Anders; Elboth, Thomas

    2006-01-01

    In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization method uses a sparse and global representation of the flow, such that it does not suffer from the same perceptual issues as is the case for visualizing dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, these particles are used as seed points to generate field lines using any vector field such as the velocity field or vorticity field. In this way, the animation shows the advection of particles while each frame in the animation shows the instantaneous vector field. In order to maintain a coherent particle density and to avoid clustering as time passes, we have developed a novel particle advection strategy which produces approximately evenly-spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.
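
    A minimal sketch of the seed-and-advect idea described above: particles scattered through the domain are advanced along their path lines with a fourth-order Runge-Kutta step in an unsteady velocity field, and at each time step the advected positions can serve as seed points for field lines. The analytic swirling field is an illustrative stand-in for real simulation data.

    import numpy as np

    def velocity(p, t):
        """Unsteady 3-D velocity field sampled at positions p (N, 3) and time t."""
        x, y, z = p[:, 0], p[:, 1], p[:, 2]
        return np.stack([-y * np.cos(t), x * np.cos(t), 0.2 * np.sin(t) * np.ones_like(z)], axis=1)

    def rk4_step(p, t, dt):
        """One fourth-order Runge-Kutta advection step along the path lines."""
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(p + dt * k3, t + dt)
        return p + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    rng = np.random.default_rng(0)
    particles = rng.uniform(-1.0, 1.0, size=(500, 3))   # evenly scattered seed particles
    dt, steps = 0.05, 100
    for n in range(steps):
        particles = rk4_step(particles, n * dt, dt)
    # 'particles' now holds the advected seed positions for the current animation frame.
    print(particles.mean(axis=0))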

  13. Earthquakes in Action: Incorporating Multimedia, Internet Resources, Large-scale Seismic Data, and 3-D Visualizations into Innovative Activities and Research Projects for Today's High School Students

    Science.gov (United States)

    Smith-Konter, B.; Jacobs, A.; Lawrence, K.; Kilb, D.

    2006-12-01

    The most effective means of communicating science to today's "high-tech" students is through the use of visually attractive and animated lessons, hands-on activities, and interactive Internet-based exercises. To address these needs, we have developed Earthquakes in Action, a summer high school enrichment course offered through the California State Summer School for Mathematics and Science (COSMOS) Program at the University of California, San Diego. The summer course consists of classroom lectures, lab experiments, and a final research project designed to foster geophysical innovations, technological inquiries, and effective scientific communication (http://topex.ucsd.edu/cosmos/earthquakes). Course content includes lessons on plate tectonics, seismic wave behavior, seismometer construction, fault characteristics, California seismicity, global seismic hazards, earthquake stress triggering, tsunami generation, and geodetic measurements of the Earth's crust. Students are introduced to these topics through lectures-made-fun using a range of multimedia, including computer animations, videos, and interactive 3-D visualizations. These lessons are further enforced through both hands-on lab experiments and computer-based exercises. Lab experiments included building hand-held seismometers, simulating the frictional behavior of faults using bricks and sandpaper, simulating tsunami generation in a mini-wave pool, and using the Internet to collect global earthquake data on a daily basis and map earthquake locations using a large classroom map. Students also use Internet resources like Google Earth and UNAVCO/EarthScope's Jules Verne Voyager Jr. interactive mapping tool to study Earth Science on a global scale. All computer-based exercises and experiments developed for Earthquakes in Action have been distributed to teachers participating in the 2006 Earthquake Education Workshop, hosted by the Visualization Center at Scripps Institution of Oceanography (http

  14. DYNA3D, INGRID, and TAURUS: an integrated, interactive software system for crashworthiness engineering

    International Nuclear Information System (INIS)

    Benson, D.J.; Hallquist, J.O.; Stillman, D.W.

    1985-04-01

    Crashworthiness engineering has always been a high priority at Lawrence Livermore National Laboratory because of its role in the safe transport of radioactive material for the nuclear power industry and military. As a result, the authors have developed an integrated, interactive set of finite element programs for crashworthiness analysis. The heart of the system is DYNA3D, an explicit, fully vectorized, large-deformation structural dynamics code. DYNA3D has the following four capabilities that are critical for the efficient and accurate analysis of crashes: (1) fully nonlinear solid, shell, and beam elements for representing a structure, (2) a broad range of constitutive models for representing the materials, (3) sophisticated contact algorithms for the impact interactions, and (4) a rigid body capability to represent the bodies away from the impact zones at a greatly reduced cost without sacrificing any accuracy in the momentum calculations. To generate the large and complex data files for DYNA3D, INGRID, a general-purpose mesh generator, is used. It runs on everything from IBM PCs to Crays, and can generate 1000 nodes/minute on a PC. With its efficient hidden-line algorithms and many options for specifying geometry, INGRID also doubles as a geometric modeller. TAURUS, an interactive post-processor, is used to display DYNA3D output. In addition to the standard monochrome hidden-line display, time-history plotting, and contouring, TAURUS generates interactive color displays on 8-color video screens by plotting color bands, superimposed on the mesh, which indicate the value of the state variables. For higher-quality color output, graphic output files may be sent to the DICOMED film recorders. We have found that color is every bit as important as hidden-line removal in aiding the analyst in understanding the results. In this paper the basic methodologies of the programs are presented along with several crashworthiness calculations.

  15. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... video games will damage the eyes or visual system. Some people complain of headaches or motion sickness when viewing 3-D, ... on the short- and/or long-term effects of 3-D digital products on eye and visual development, health, or function in children, nor are there persuasive, ...

  16. A state-of-the-art pipeline for postmortem CT and MRI visualization: from data acquisition to interactive image interpretation at autopsy

    Energy Technology Data Exchange (ETDEWEB)

    Persson, Anders (Center for Medical Image Science and Visualization (CMIV), Univ. of Linkoeping, Linkoeping (Sweden); Dept. of Radiology of Medical and Health Sciences (IMH), Linkoeping Univ. Hospital, Linkoeping (Sweden)), email: anders.persson@cmiv.liu.se; Lindblom, Maria (Dept. of Radiology of Medical and Health Sciences (IMH), Linkoeping Univ. Hospital, Linkoeping (Sweden)); Jackowski, Christian (Inst. of Legal Medicine, Univ. of Zurich, Zurich (Switzerland))

    2011-06-15

    The importance of autopsy procedures leading to the establishment of the cause of death is well known. A recent addition to the autopsy workflow is the possibility of conducting postmortem imaging, in its 3D version also called virtual autopsy (VA), using multidetector computed tomography (MDCT) or magnetic resonance imaging (MRI) data from scans of cadavers displayed with direct volume rendering (DVR) 3D techniques. The use of the data and their workflow are presented. Data acquisition was performed and high-quality data sets with submillimeter precision were acquired. New data acquisition techniques such as dual-energy CT (DECT) and quantitative MRI were then implemented and provided additional information. Particular findings that are hard to visualize in a conventional autopsy, such as air distributions (e.g. pneumothorax, pneumopericardium, air embolism) and wound channels, can be seen rather easily on full-body CT. MRI reveals natural causes of death such as myocardial infarction. Interactive visualization of these 3D data sets can provide valuable insight into the corpses and enables non-invasive diagnostic procedures. Since postmortem CT imaging is not limited by a patient-dependent radiation dose, the data sets can, however, be generated at such high resolution that they become difficult to handle in today's archive, retrieval, and interactive visualization systems, specifically in the case of full-body scans. To take full advantage of these new technologies, the postmortem workflow needs to be tailored to the demands and opportunities that the new technologies allow.

  17. A state-of-the-art pipeline for postmortem CT and MRI visualization: from data acquisition to interactive image interpretation at autopsy

    International Nuclear Information System (INIS)

    Persson, Anders; Lindblom, Maria; Jackowski, Christian

    2011-01-01

    The importance of autopsy procedures leading to the establishment of the cause of death is well known. A recent addition to the autopsy workflow is the possibility of conducting postmortem imaging, in its 3D version also called virtual autopsy (VA), using multidetector computed tomography (MDCT) or magnetic resonance imaging (MRI) data from scans of cadavers displayed with direct volume rendering (DVR) 3D techniques. The use of the data and their workflow are presented. Data acquisition was performed and high-quality data sets with submillimeter precision were acquired. New data acquisition techniques such as dual-energy CT (DECT) and quantitative MRI were then implemented and provided additional information. Particular findings that are hard to visualize in a conventional autopsy, such as air distributions (e.g. pneumothorax, pneumopericardium, air embolism) and wound channels, can be seen rather easily on full-body CT. MRI reveals natural causes of death such as myocardial infarction. Interactive visualization of these 3D data sets can provide valuable insight into the corpses and enables non-invasive diagnostic procedures. Since postmortem CT imaging is not limited by a patient-dependent radiation dose, the data sets can, however, be generated at such high resolution that they become difficult to handle in today's archive, retrieval, and interactive visualization systems, specifically in the case of full-body scans. To take full advantage of these new technologies, the postmortem workflow needs to be tailored to the demands and opportunities that the new technologies allow.

  18. NoSQL Based 3D City Model Management System

    Science.gov (United States)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on a NoSQL database is proposed in this paper. The framework supports import and export of 3D city models according to international standards such as CityGML, KML/COLLADA, and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price, and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operations. For visualization, a multiple-representation 3D city structure, CityTree, is implemented within the framework to support dynamic LODs based on the user viewpoint. The proposed framework is also easily extensible and supports geo-indexes to speed up querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
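
    A minimal sketch of the geometry/semantics split described above: the geometry-heavy work (here, a footprint area from a polygon ring) is done in a per-building "map" step, while the lightweight semantic grouping is a plain reduction. The building records are illustrative stand-ins for documents in a NoSQL store.

    from collections import defaultdict

    buildings = [
        {"id": "b1", "district": "old town", "footprint": [(0, 0), (10, 0), (10, 8), (0, 8)]},
        {"id": "b2", "district": "old town", "footprint": [(0, 0), (6, 0), (6, 5), (0, 5)]},
        {"id": "b3", "district": "harbour",  "footprint": [(0, 0), (20, 0), (20, 12), (0, 12)]},
    ]

    def shoelace_area(ring):
        """Planar polygon area from its vertex ring (the geometry-heavy map step)."""
        area = 0.0
        for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    mapped = [(b["district"], shoelace_area(b["footprint"])) for b in buildings]

    totals = defaultdict(float)        # reduce step: group and sum per district
    for district, area in mapped:
        totals[district] += area

    print(dict(totals))   # {'old town': 110.0, 'harbour': 240.0}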

  19. Interactive Terascale Particle Visualization

    Science.gov (United States)

    Ellsworth, David; Green, Bryan; Moran, Patrick

    2004-01-01

    This paper describes the methods used to produce an interactive visualization of a 2 TB computational fluid dynamics (CFD) data set using particle tracing (streaklines). We use the method introduced by Bruckschen et al. [2001] that pre-computes a large number of particles, stores them on disk using a space-filling curve ordering that minimizes seeks, and then retrieves and displays the particles according to the user's command. We describe how the particle computation can be performed using a PC cluster, how the algorithm can be adapted to work with a multi-block curvilinear mesh, and how the out-of-core visualization can be scaled to 296 billion particles while still achieving interactive performance on PC hardware. Compared to the earlier work, our data set size and total number of particles are an order of magnitude larger. We also describe a new compression technique that allows lossless compression of the particles by 41% and speeds particle retrieval by about 30%.
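
    A minimal sketch of ordering particles along a space-filling curve before writing them to disk, so that particles close together in space end up close together in the file and can be fetched with few seeks. A 3-D Morton (Z-order) code is used here as a stand-in for whichever curve the original system employs, and the 10-bits-per-axis quantization is an assumption.

    import numpy as np

    def part1by2(v):
        """Spread the lower 10 bits of an integer so they occupy every third bit position."""
        v &= 0x3FF
        v = (v | (v << 16)) & 0xFF0000FF
        v = (v | (v << 8)) & 0x0300F00F
        v = (v | (v << 4)) & 0x030C30C3
        v = (v | (v << 2)) & 0x09249249
        return v

    def morton3d(ix, iy, iz):
        """Interleave three 10-bit grid indices into one 30-bit Morton key."""
        return part1by2(ix) | (part1by2(iy) << 1) | (part1by2(iz) << 2)

    rng = np.random.default_rng(1)
    positions = rng.uniform(0.0, 1.0, size=(1000, 3))            # particle positions in [0, 1)^3
    grid = np.minimum((positions * 1024).astype(np.int64), 1023) # quantize to a 1024^3 grid
    keys = np.array([morton3d(*ijk) for ijk in grid])
    order = np.argsort(keys)                                     # on-disk layout order
    sorted_positions = positions[order]
    print(sorted_positions[:3])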

  20. Words, shape, visual search and visual working memory in 3-year-old children.

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  1. Visual comfort of 3-D TV : models and measurements

    NARCIS (Netherlands)

    Lambooij, M.T.M.

    2012-01-01

    The embracing of 3-D movies by Hollywood and fast LCD panels finally enable the home consumer market to start successful campaigns to get 3-D movies and games in the comfort of the living room. By introducing three-dimensional television (3-D TV) and its desktop-counterpart for gaming and internet

  2. A 3-D mixed-reality system for stereoscopic visualization of medical dataset.

    Science.gov (United States)

    Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco

    2009-11-01

    We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to view preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," which are created by mixing 3-D virtual models of anatomies, obtained by processing preoperative volumetric radiological images (computed tomography or MRI), with live images of the real patient grabbed by means of cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. The cameras are mounted at the position of the user's eyes and grab live images of the patient from the same point of view as the user. The system does not use any external tracker to detect movements of the user or the patient. The movements of the user's head and the alignment of the virtual patient with the real one are handled using machine vision methods applied to pairs of live images. Experimental results concerning frame rate and alignment precision between the virtual and real patient demonstrate that the machine vision methods used for localization are appropriate for the specific application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.

  3. Query2Question: Translating Visualization Interaction into Natural Language.

    Science.gov (United States)

    Nafari, Maryam; Weaver, Chris

    2015-06-01

    Richly interactive visualization tools are increasingly popular for data exploration and analysis in a wide variety of domains. Existing systems and techniques for recording provenance of interaction focus either on comprehensive automated recording of low-level interaction events or on idiosyncratic manual transcription of high-level analysis activities. In this paper, we present the architecture and translation design of a query-to-question (Q2Q) system that automatically records user interactions and presents them semantically using natural language (written English). Q2Q takes advantage of domain knowledge and uses natural language generation (NLG) techniques to translate and transcribe a progression of interactive visualization states into a visual log of styled text that complements and effectively extends the functionality of visualization tools. We present Q2Q as a means to support a cross-examination process in which questions, rather than interactions, are the focus of analytic reasoning and action. We describe the architecture and implementation of the Q2Q system, discuss key design factors and variations that affect question generation, and present several visualizations that incorporate Q2Q for analysis in a variety of knowledge domains.
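
    A minimal, purely illustrative sketch of the general idea of turning a logged visualization interaction into a written question via templates; this is not the actual Q2Q implementation, and the event fields and templates are assumptions chosen only to make the translation concrete.

    QUESTION_TEMPLATES = {
        "filter": "Which {item} have {attribute} {operator} {value}?",
        "brush":  "What characterizes the {item} selected in the {view} view?",
        "sort":   "How do the {item} rank by {attribute}?",
    }

    def interaction_to_question(event):
        """Render one interaction-log event as a written question."""
        template = QUESTION_TEMPLATES[event["type"]]
        return template.format(**event)

    log = [
        {"type": "filter", "item": "counties", "attribute": "population",
         "operator": "greater than", "value": "100,000"},
        {"type": "sort", "item": "counties", "attribute": "median income"},
    ]
    for event in log:
        print(interaction_to_question(event))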

  4. Image-Based Virtual Tours and 3d Modeling of Past and Current Ages for the Enhancement of Archaeological Parks: the Visualversilia 3d Project

    Science.gov (United States)

    Castagnetti, C.; Giannini, M.; Rivola, R.

    2017-05-01

    The research project VisualVersilia 3D aims to offer a new way to promote the territory and its heritage by combining the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned its attention mainly to technologies for reconstructing and narrating the complexity of the territory and its heritage, including 3D scanning, 3D printing, and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of delivering an immersive virtual reality for a successful enhancement of the heritage. The project applies the methodology to the archaeological complex of Massaciuccoli, one of the best-preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project briefly consist in developing: 1. a virtual tour of the site in its current configuration, based on spherical images enhanced by texts, graphics, and audio guides, to enable both an immersive and a remote tourist experience; 2. a 3D reconstruction of the evidence and buildings in their current condition, for documentation and conservation purposes, on the basis of a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions through the main historical periods, on the basis of historical investigation and the analysis of the acquired data.

  5. 3dRPC: a web server for 3D RNA-protein structure prediction.

    Science.gov (United States)

    Huang, Yangyu; Li, Haotian; Xiao, Yi

    2018-04-01

    RNA-protein interactions occur in many biological processes. To understand the mechanism of these interactions one needs to know the three-dimensional (3D) structures of RNA-protein complexes. 3dRPC is an algorithm for the prediction of 3D RNA-protein complex structures and consists of a docking algorithm, RPDOCK, and a scoring function, 3dRPC-Score. RPDOCK is used to sample possible complex conformations of an RNA and a protein by calculating the geometric and electrostatic complementarities and stacking interactions at the RNA-protein interface according to the features of atom packing at the interface. 3dRPC-Score is a knowledge-based potential that uses the conformations of nucleotide-amino-acid pairs as statistical variables and that is used to choose the near-native complex conformations obtained from the docking method above. Recently, we built a web server for 3dRPC. Users can easily use 3dRPC without installing it locally. RNA and protein structures in PDB (Protein Data Bank) format are the only input files needed. The server can also incorporate information on interface residues or residue pairs obtained from experiments or theoretical predictions to improve the prediction. The address of the 3dRPC web server is http://biophy.hust.edu.cn/3dRPC. Contact: yxiao@hust.edu.cn.

  6. Cross-modal interaction between visual and olfactory learning in Apis cerana.

    Science.gov (United States)

    Zhang, Li-Zhen; Zhang, Shao-Wu; Wang, Zi-Long; Yan, Wei-Yu; Zeng, Zhi-Jiang

    2014-10-01

    The power of the small honeybee brain in carrying out behavioral and cognitive tasks has been shown repeatedly to be highly impressive. The present study investigates, for the first time, the cross-modal interaction between visual and olfactory learning in Apis cerana. To explore the role and molecular mechanisms of cross-modal learning in A. cerana, the honeybees were trained and tested in a modified Y-maze with seven visual and five olfactory stimuli, where a robust visual threshold for a black/white grating (period of 2.8°-3.8°) and a relative olfactory threshold (concentration of 50-25%) were obtained. Meanwhile, the expression levels of five genes (AcCREB, Acdop1, Acdop2, Acdop3, Actyr1) related to learning and memory were analyzed under different training conditions by real-time RT-PCR. The experimental results indicate that A. cerana can exhibit cross-modal interactions between visual and olfactory learning by reducing the threshold level of the conditioning stimuli, and that these genes may play important roles in the learning process of honeybees.

  7. A 3D Visualization Method for Bladder Filling Examination Based on EIT

    Directory of Open Access Journals (Sweden)

    Wei He

    2012-01-01

    As research into electrical impedance tomography (EIT) applications in medical examinations deepens, we attempt to produce 3D visualizations of the human bladder. In this paper, a planar electrode array system is introduced as the measuring platform, and a series of feasible methods are proposed to evaluate the simulated volume of the bladder in order to avoid overfilling. The combined regularization algorithm enhances the spatial resolution and presents a distinguishable sketch of disturbances against the background, which provides reliable data from the inverse problem to carry on to the three-dimensional reconstruction. By detecting the edge elements and tracking down the lost information, we extract quantitative morphological features of the object from the noise and background. Preliminary measurements were conducted and the results showed that the proposed algorithm overcomes the defects of holes, protrusions, and debris in the reconstruction. In addition, the targets' locations in space and approximate volumes could be calculated from the finite element grid of the model, a feature that was never achievable with previous 2D imaging.
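
    A minimal sketch of the volume estimate mentioned above: sum the volumes of the tetrahedral finite elements flagged as belonging to the reconstructed target. The toy node coordinates and element flags are illustrative assumptions, not EIT reconstruction output.

    import numpy as np

    def tet_volume(p0, p1, p2, p3):
        """Volume of one tetrahedron from its four corner nodes."""
        return abs(np.linalg.det(np.column_stack([p1 - p0, p2 - p0, p3 - p0]))) / 6.0

    # A toy mesh: 5 nodes, 2 tetrahedral elements, of which only the first lies inside the target.
    nodes = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
    elements = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])
    inside = np.array([True, False])   # e.g. per-element flag from thresholded conductivity

    volume = sum(tet_volume(*nodes[el]) for el in elements[inside])
    print(f"estimated target volume: {volume:.4f}")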

  8. Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering; Die computerassistierte Operationsplanung in der Abdominalchirurgie des Kindes. 3D-Visualisierung mittels ''volume rendering'' in der MRT

    Energy Technology Data Exchange (ETDEWEB)

    Guenther, P.; Holland-Cunz, S.; Waag, K.L. [Universitaetsklinikum Heidelberg (Germany). Kinderchirurgie; Troeger, J. [Universitaetsklinikum Heidelberg, (Germany). Paediatrische Radiologie; Schenk, J.P. [Universitaetsklinikum Heidelberg, (Germany). Paediatrische Radiologie; Universitaetsklinikum, Paediatrische Radiologie, Heidelberg (Germany)

    2006-08-15

    Exact surgical planning is necessary for complex operations on pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being used increasingly for difficult operations in adults. To minimize radiation exposure and for better soft-tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties, 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from it. A newly developed and modified, powerful raycasting-based 3D volume rendering software (VG Studio Max 1.2) adapted to the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to its easy handling and high-quality visualization with an enormous gain of information, the presented system is now an established part of routine surgical planning. (orig.) [German original, translated: Complex operations on pronounced pathological changes of anatomical structures of the pediatric abdomen require exact preoperative planning. 3D visualization and computer-assisted operation planning based on CT data are increasingly used for difficult surgical interventions in adults. For reasons of radiation protection and because of the better soft-tissue differentiation, however, sonography and magnetic resonance imaging (MRI) are the diagnostic methods of choice in children. 3D visualization of these MRI data has nevertheless not been carried out so far owing to manifold difficulties, although the field of embryonal malformations and tumors lends itself to it. Presented here is a further developed, very powerful raycasting-based 3D volume rendering software adapted to the questions of pediatric abdominal surgery (VG Studio Max 1…]

  9. High-order finite difference solution for 3D nonlinear wave-structure interaction

    DEFF Research Database (Denmark)

    Ducrozet, Guillaume; Bingham, Harry B.; Engsig-Karup, Allan Peter

    2010-01-01

    This contribution presents our recent progress on developing an efficient fully-nonlinear potential flow model for simulating 3D wave-wave and wave-structure interaction over arbitrary depths (i.e. in coastal and offshore environment). The model is based on a high-order finite difference scheme O...
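
    As a minimal, generic illustration of what a "high-order finite difference" scheme buys (a one-dimensional toy only, not the 3D potential-flow solver described above), the snippet compares a fourth-order central stencil for the first derivative with the standard second-order one on a smooth test function: the fourth-order error falls by a factor of about 16 each time the grid spacing is halved, versus 4 for the second-order stencil.

    import numpy as np

    def d1_second_order(f, h):
        return (f[2:] - f[:-2]) / (2 * h)                                   # error ~ h^2

    def d1_fourth_order(f, h):
        return (-f[4:] + 8 * f[3:-1] - 8 * f[1:-3] + f[:-4]) / (12 * h)     # error ~ h^4

    for n in (20, 40, 80):
        x = np.linspace(0.0, 2 * np.pi, n + 1)
        h = x[1] - x[0]
        f = np.sin(x)
        err2 = np.max(np.abs(d1_second_order(f, h) - np.cos(x[1:-1])))
        err4 = np.max(np.abs(d1_fourth_order(f, h) - np.cos(x[2:-2])))
        print(f"n={n:3d}  2nd-order error={err2:.2e}  4th-order error={err4:.2e}")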

  10. 3D Space Shift from CityGML LoD3-Based Multiple Building Elements to a 3D Volumetric Object

    Directory of Open Access Journals (Sweden)

    Shen Ying

    2017-01-01

    In contrast with photorealistic visualizations, urban landscape applications, and building information systems (BIM), 3D volumetric presentations highlight specific calculations and applications of 3D building elements for 3D city planning and 3D cadastres. Knowing the precise volumetric quantities and the 3D boundary locations of 3D building spaces is a vital index which must remain constant during data processing, because the values are related to space occupation, tenure, taxes, and valuation. To meet these requirements, this paper presents a five-step algorithm for performing a 3D building space shift. This algorithm is used to convert multiple building elements into a single 3D volumetric building object while maintaining the precise volume of the 3D space and without changing the 3D locations or displacing the building boundaries. As examples, this study used input data and building elements based on City Geography Markup Language (CityGML) LoD3 models. This paper presents a method for 3D urban space and 3D property management with the goal of constructing a 3D volumetric object for an integral building from CityGML objects, by fusing the geometries of various building elements. The resulting objects possess true 3D geometry that can be represented by solid geometry and saved to a CityGML file for effective use in 3D urban planning and 3D cadastres.
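
    A minimal sketch of the volume bookkeeping that motivates the space shift above: the volume of a closed, consistently outward-oriented triangle mesh is the sum of signed tetrahedra spanned by each face and the origin, so it can be checked before and after building elements are fused into one solid. The hard-coded unit cube stands in for a real CityGML-derived mesh.

    import numpy as np

    def mesh_volume(vertices, faces):
        """Signed volume of a closed triangle mesh (positive for outward-oriented faces)."""
        v = vertices[faces]                                              # (F, 3, 3) corner coordinates
        return np.einsum("ij,ij->", np.cross(v[:, 0], v[:, 1]), v[:, 2]) / 6.0

    # A unit cube given as 8 vertices and 12 outward-oriented triangles.
    verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                      [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
    faces = np.array([
        [0, 2, 1], [0, 3, 2],   # bottom (z = 0)
        [4, 5, 6], [4, 6, 7],   # top (z = 1)
        [0, 1, 5], [0, 5, 4],   # front (y = 0)
        [1, 2, 6], [1, 6, 5],   # right (x = 1)
        [2, 3, 7], [2, 7, 6],   # back (y = 1)
        [3, 0, 4], [3, 4, 7],   # left (x = 0)
    ])
    print(mesh_volume(verts, faces))   # expected: 1.0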

  11. Detecting and visualizing internal 3D oleoresin in agarwood by means of micro-computed tomography

    International Nuclear Information System (INIS)

    Khairiah Yazid; Roslan Yahya; Mat Rosol Awang

    2012-01-01

    Detection and analysis of oleoresin is particularly significant since the commercial value of agarwood is related to the quantity of oleoresin present. A modern non-destructive technique can reach the interior regions of the wood. Currently, tomographic image data in particular are most commonly visualized in three dimensions using volume rendering. The aim of this paper is to explore the potential of a high-resolution, non-destructive 3D visualization technique, X-ray micro-computed tomography, as an imaging tool to visualize the micro-structure of oleoresin in agarwood. Investigations involving a desktop X-ray micro-tomography system on a high-grade agarwood sample, performed at the Centre of Tomography at Nuclear Malaysia, demonstrate the applicability of the method. Prior to the experiments, a reference test was conducted to simulate the attenuation of oleoresin in agarwood. Based on the experimental results, micro-CT imaging with a voxel size of 7.0 μm is capable of detecting oleoresin and pores in agarwood. This imaging technique, although sophisticated, can be used for standards development, especially in the grading of agarwood for commercial activities. (author)

  12. A Virtual Rock Physics Laboratory Through Visualized and Interactive Experiments

    Science.gov (United States)

    Vanorio, T.; Di Bonito, C.; Clark, A. C.

    2014-12-01

    As new scientific challenges demand more comprehensive and multidisciplinary investigations, laboratory experiments are not expected to become simpler and/or faster. Experimental investigation is an indispensable element of scientific inquiry and must play a central role in the way current and future generations of scientists make decisions. To turn the complexity of laboratory work (and that of rocks!) into dexterity, engagement, and expanded learning opportunities, we are building an interactive, virtual laboratory reproducing in form and function the Stanford Rock Physics Laboratory, at Stanford University. The objective is to combine lectures on laboratory techniques and an online repository of visualized experiments consisting of interactive, 3-D renderings of equipment used to measure properties central to the study of rock physics (e.g., how to saturate rocks, how to measure porosity, permeability, and elastic wave velocity). We use a game creation system together with 3-D computer graphics, and a narrative voice to guide the user through the different phases of the experimental protocol. The main advantage gained in employing computer graphics over video footage is that students can virtually open the instrument, single out its components, and assemble it. Most importantly, it helps describe the processes occurring within the rock. The latter cannot be tracked while simply recording the physical experiment, but computer animation can efficiently illustrate what happens inside rock samples (e.g., describing acoustic waves, and/or fluid flow through a porous rock under pressure within an opaque core-holder - Figure 1). The repository of visualized experiments will complement lectures on laboratory techniques and constitute an on-line course offered through the EdX platform at Stanford. This will provide a virtual laboratory for anyone, anywhere, to facilitate teaching/learning of introductory laboratory classes in Geophysics and expand the number of courses …

  13. The Digital Space Shuttle, 3D Graphics, and Knowledge Management

    Science.gov (United States)

    Gomez, Julian E.; Keller, Paul J.

    2003-01-01

    The Digital Shuttle is a knowledge management project that seeks to define symbiotic relationships between 3D graphics and formal knowledge representations (ontologies). 3D graphics provides geometric and visual content, in 2D and 3D CAD forms, and the capability to display systems knowledge. Because the data is so heterogeneous, and the interrelated data structures are complex, 3D graphics combined with ontologies provides mechanisms for navigating the data and visualizing relationships.

  14. Anisotropic Diffusion based Brain MRI Segmentation and 3D Reconstruction

    OpenAIRE

    M. Arfan Jaffar; Sultan Zia; Ghaznafar Latif; AnwarM. Mirza; Irfan Mehmood; Naveed Ejaz; Sung Wook Baik

    2012-01-01

    In the medical field, visualization of the organs is imperative for accurate diagnosis and treatment of any disease. Brain tumor diagnosis and surgery also require impressive 3D visualization of the brain for the radiologist. Detection and 3D reconstruction of brain tumors from MRI is a computationally time-consuming and error-prone task. The proposed system detects and presents a 3D visualization model of the brain and the tumor inside it, which greatly helps the radiologist to effectively diagnose and …
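
    A minimal sketch of the pre-processing step named in the title: a few iterations of Perona-Malik anisotropic diffusion on one 2-D slice, which smooths noise while preserving edges before segmentation. The random test image and the parameter values (kappa, lambda, iteration count) are illustrative assumptions.

    import numpy as np

    def anisotropic_diffusion(img, n_iter=10, kappa=30.0, lam=0.2):
        """Perona-Malik diffusion with the exponential conduction function."""
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # Finite differences to the four neighbours (periodic borders via np.roll; fine for a sketch).
            dn = np.roll(u, 1, axis=0) - u
            ds = np.roll(u, -1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # Edge-stopping conduction coefficients: small where the local gradient is large.
            cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
            ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
            u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
        return u

    slice_2d = np.random.default_rng(0).normal(100.0, 20.0, size=(128, 128))
    smoothed = anisotropic_diffusion(slice_2d)
    print(slice_2d.std(), smoothed.std())   # the smoothed slice has a lower standard deviation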

  15. Scientific Visualization Made Easy for the Scientist

    Science.gov (United States)

    Westerhoff, M.; Henderson, B.

    2002-12-01

    amira® is an application program used in creating 3D visualizations and geometric models of 3D image data sets from various application areas, e.g. medicine, biology, biochemistry, chemistry, physics, and engineering. It has demonstrated significant adoption in the market place since becoming commercially available in 2000. The rapid adoption has expanded the features being requested by the user base and broadened the scope of the amira product offering. The amira product offering includes amira Standard, amiraDev™, used to extend the product capabilities by users, amiraMol™, used for molecular visualization, amiraDeconv™, used to improve quality of image data, and amiraVR™, used in immersive VR environments. amira allows the user to construct a visualization tailored to his or her needs without requiring any programming knowledge. It also allows 3D objects to be represented as grids suitable for numerical simulations, notably as triangular surfaces and volumetric tetrahedral grids. The amira application also provides methods to generate such grids from voxel data representing an image volume, and it includes a general-purpose interactive 3D viewer. amiraDev provides an application-programming interface (API) that allows the user to add new components by C++ programming. amira supports many import formats including a 'raw' format allowing immediate access to your native uniform data sets. amira uses the power and speed of the OpenGL® and Open Inventor™ graphics libraries and 3D graphics accelerators to allow you to access over 145 modules, enabling you to process, probe, analyze and visualize your data. The amiraMol™ extension adds powerful tools for molecular visualization to the existing amira platform. amiraMol™ contains support for standard molecular file formats, tools for visualization and analysis of static molecules as well as molecular trajectories (time series). amiraDeconv™ adds tools for the deconvolution of 3D microscopic images. Deconvolution is the …

  16. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    Science.gov (United States)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  17. Procedural 3d Modelling for Traditional Settlements. The Case Study of Central Zagori

    Science.gov (United States)

    Kitsakis, D.; Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.

    2017-02-01

    Over the last decades, 3D modelling has been a fast-growing field in Geographic Information Science, extensively applied in various domains including the reconstruction and visualization of cultural heritage, especially monuments and traditional settlements. Technological advances in computer graphics allow for the modelling of complex 3D objects with high precision and accuracy. Procedural modelling is an effective tool and a relatively novel method based on the concept of algorithmic modelling. It is utilized for the generation of accurate 3D models and composite facade textures from sets of rules, called Computer Generated Architecture (CGA) grammars, that define the objects' detailed geometry, rather than by altering or editing the model manually. In this paper, procedural modelling tools have been exploited to generate the 3D model of a traditional settlement in the region of Central Zagori in Greece. The detailed geometries of the 3D models were derived from the application of shape grammars to selected footprints, and the process resulted in a final 3D model optimally describing the built environment of Central Zagori in three levels of detail (LoD). The final 3D scene was exported and published as a 3D web scene which can be viewed with the 3D CityEngine viewer, giving a walkthrough of the whole model, just as in virtual reality or game environments. This research work addresses issues regarding texture precision, LoD for 3D objects, and interactive visualization within one 3D scene, as well as the effectiveness of large-scale modelling, along with the benefits and drawbacks of procedural modelling techniques in the field of cultural heritage, and more specifically in the 3D modelling of traditional settlements.
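
    A minimal, purely illustrative sketch of the rule-based idea behind CGA-style grammars (not CityEngine's actual CGA language): a footprint is extruded to a building of random height, the building is split into floors, and each floor into window tiles. All rule names and dimensions are assumptions.

    import random

    def lot(footprint_w, footprint_d):
        height = random.uniform(6.0, 9.0)          # rule: Lot -> extrude(height) Building
        return {"type": "Building", "w": footprint_w, "d": footprint_d, "h": height}

    def building_to_floors(building, floor_h=3.0):
        n = max(1, int(building["h"] // floor_h))  # rule: Building -> repeat(floor_h) Floor
        return [{"type": "Floor", "level": i, "w": building["w"]} for i in range(n)]

    def floor_to_tiles(floor, tile_w=1.5):
        n = max(1, int(floor["w"] // tile_w))      # rule: Floor -> repeat(tile_w) WindowTile
        return [{"type": "WindowTile", "level": floor["level"], "index": j} for j in range(n)]

    random.seed(2)
    building = lot(9.0, 7.0)
    tiles = [t for f in building_to_floors(building) for t in floor_to_tiles(f)]
    print(round(building["h"], 2), "m high,", len(tiles), "window tiles generated")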

  18. Bringing VR and spatial 3D interaction to the masses through video games.

    Science.gov (United States)

    LaViola, Joseph J

    2008-01-01

    This article examines why innovations such as the Sony EyeToy and Nintendo Wii have been so successful and discusses the research opportunities presented by the latest commercial push for spatial 3D interaction in games.

  19. Advances in visual representation of molecular potentials.

    Science.gov (United States)

    Du, Qi-Shi; Huang, Ri-Bo; Chou, Kuo-Chen

    2010-06-01

    The recent advances in visual representations of molecular properties in 3D space are summarized, and their applications in molecular modeling studies and rational drug design are introduced. The visual representation methods provide us with detailed insights into protein-ligand interactions, and hence can play a major role in elucidating the structure or reactivity of a biomolecular system. Three newly developed computation and visualization methods for studying the physical and chemical properties of molecules are introduced, including their electrostatic potential, lipophilicity potential, and excess chemical potential. The newest application examples of visual representations in structure-based rational drug design are presented. The 3D electrostatic potentials, calculated using the empirical method (EM-ESP), in which the classical Coulomb equation and traditional atomic partial charges are discarded, are highly consistent with the results of higher-level quantum chemical methods. The 3D lipophilicity potentials, computed by the heuristic molecular lipophilicity potential method based on the principles of quantum mechanics and statistical mechanics, are more accurate and reliable than those obtained using traditional empirical methods. The 3D excess chemical potentials, derived by the reference interaction site model-hypernetted chain theory, provide a new tool for computational chemistry and molecular modeling. For structure-based drug design, the visual representations of molecular properties will play a significant role in practical applications. It is anticipated that the new advances in computational chemistry will stimulate the development of molecular modeling methods, further enriching the visual representation techniques for rational drug design, as well as other relevant fields in life science.

  20. Effect of 3D fractal dimension on contact area and asperity interactions in elastoplastic contact

    Directory of Open Access Journals (Sweden)

    Abdeljalil Jourani

    2016-05-01

    Few models are devoted to investigating the effect of the 3D fractal dimension Ds on contact area and asperity interactions. Those models used statistical approaches or two-dimensional deterministic simulations without considering the asperity interactions and the elastic-plastic transition regime. In this study, a complete 3D deterministic model is adopted to simulate the contact between fractal surfaces which are generated using a modified two-variable Weierstrass-Mandelbrot function. This model incorporates the asperity interactions and considers the different deformation modes of surface asperities, which range from entirely elastic through elastic-plastic to entirely plastic contact. The simulations reveal that the elastoplastic model is more appropriate for calculating the contact area ratio and pressure field. It is also shown that the influence of the asperity interactions cannot be neglected, especially at lower fractal dimension Ds and higher load.
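
    A minimal sketch of generating a self-affine rough surface of prescribed 3D fractal dimension by spectral synthesis (random phases with a power-law spectrum), used here as a stand-in for the modified two-variable Weierstrass-Mandelbrot function of the paper; the grid size and the relation Ds = 3 - H to the Hurst exponent are the stated assumptions. Raising Ds adds fine-scale roughness, which shows up as a steeper rms surface slope.

    import numpy as np

    def fractal_surface(n=256, ds=2.4, seed=0):
        """Random self-affine surface of 3D fractal dimension ds on an n x n grid, unit rms height."""
        hurst = 3.0 - ds                           # Ds = 3 - H for a fractal surface
        rng = np.random.default_rng(seed)
        qx = np.fft.fftfreq(n)[:, None]
        qy = np.fft.fftfreq(n)[None, :]
        q = np.sqrt(qx**2 + qy**2)
        q[0, 0] = np.inf                           # drop the zero-frequency (mean height) term
        amplitude = q ** (-(hurst + 1.0))          # power-law spectrum of a self-affine surface
        phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))
        surface = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
        return surface / surface.std()

    # Higher fractal dimension -> more fine-scale roughness -> steeper rms surface slope.
    for ds in (2.1, 2.4, 2.7):
        gx, gy = np.gradient(fractal_surface(ds=ds))
        print(f"Ds = {ds}: rms slope = {np.sqrt((gx**2 + gy**2).mean()):.3f}")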