WorldWideScience

Sample records for high-level 3d visualization

  1. Using 3D in Visualization

    DEFF Research Database (Denmark)

    Wood, Jo; Kirschenbauer, Sabine; Döllner, Jürgen

    2005-01-01

    to display 3D imagery. The extra cartographic degree of freedom offered by using 3D is explored and offered as a motivation for employing 3D in visualization. The use of VR and the construction of virtual environments exploit navigational and behavioral realism, but become most useful when combined...... with abstracted representations embedded in a 3D space. The interactions between development of geovisualization, the technology used to implement it and the theory surrounding cartographic representation are explored. The dominance of computing technologies, driven particularly by the gaming industry...

  2. 3D IBFV : Hardware-Accelerated 3D Flow Visualization

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique for 2D flow visualization in two main directions. First, we decompose the 3D flow visualization problem in a
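
    The core of an IBFV-style method, as described above, is a loop that repeatedly advects an image along the flow and blends in freshly injected noise ("dye") so that older dye decays. The sketch below illustrates that loop in 2D with NumPy; it is not the authors' hardware-accelerated implementation, and the velocity field, grid size, step, and blend weight are arbitrary assumptions.

```python
# Minimal 2D sketch of the IBFV idea: advect the current image along the flow,
# then blend in fresh noise so injected "dye" decays over time. Illustrative
# only; field, resolution, step and blend weight are arbitrary assumptions.
import numpy as np

N = 128                                   # grid resolution (assumed)
y, x = np.mgrid[0:N, 0:N] / N             # unit-square coordinates (rows, cols)
vx, vy = -(y - 0.5), (x - 0.5)            # toy velocity field: solid rotation

img = np.random.rand(N, N)                # current dye image
noise = np.random.rand(N, N)              # injected noise texture
alpha = 0.1                               # injection weight -> decay rate
dt = 2.0                                  # advection step in pixels

for _ in range(100):
    # backward advection: sample the image upstream of each pixel (nearest neighbour)
    xs = np.clip((x * N - vx * dt).astype(int), 0, N - 1)
    ys = np.clip((y * N - vy * dt).astype(int), 0, N - 1)
    img = (1 - alpha) * img[ys, xs] + alpha * noise
```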

  3. 3-D Mapping Technologies For High Level Waste Tanks

    International Nuclear Information System (INIS)

    Marzolf, A.; Folsom, M.

    2010-01-01

    This research investigated four techniques that could be applicable for mapping of solids remaining in radioactive waste tanks at the Savannah River Site: stereo vision, LIDAR, flash LIDAR, and Structure from Motion (SfM). Stereo vision is the least appropriate technique for the solids mapping application. Although the equipment cost is low and repackaging would be fairly simple, the algorithms to create a 3D image from stereo vision would require significant further development and may not even be applicable since stereo vision works by finding disparity in feature point locations from the images taken by the cameras. When minimal variation in visual texture exists for an area of interest, it becomes difficult for the software to detect correspondences for that object. SfM appears to be appropriate for solids mapping in waste tanks. However, equipment development would be required for positioning and movement of the camera in the tank space to enable capturing a sequence of images of the scene. Since SfM requires the identification of distinctive features and associates those features to their corresponding instantiations in the other image frames, mockup testing would be required to determine the applicability of SfM technology for mapping of waste in tanks. There may be too few features to track between image frame sequences to employ the SfM technology since uniform appearance may exist when viewing the remaining solids in the interior of the waste tanks. Although scanning LIDAR appears to be an adequate solution, the expense of the equipment ($80,000-$120,000) and the need for further development to allow tank deployment may prohibit utilizing this technology. The development would include repackaging of equipment to permit deployment through the 4-inch access ports and to keep the equipment relatively uncontaminated to allow use in additional tanks. 3D flash LIDAR has a number of advantages over stereo vision, scanning LIDAR, and SfM, including full frame

  4. 3D IBFV : hardware-accelerated 3D flow visualization

    NARCIS (Netherlands)

    Telea, A.C.; Wijk, van J.J.

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique presented by van Wijk (2001) for 2D flow visualization in two main directions. First, we decompose the 3D

  5. Interactive 3D Mars Visualization

    Science.gov (United States)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The tool set currently includes tools for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.
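
    The ruler tool mentioned above essentially reports a distance between two surface points. A hypothetical illustration of such a computation is sketched below as a great-circle (haversine) distance using an approximate mean Mars radius; it is not the actual tool's code.

```python
# Hypothetical ruler-tool core: great-circle (haversine) distance between
# two points on the Martian surface, given planetocentric lat/lon in degrees.
import math

MARS_RADIUS_M = 3_389_500.0   # mean Mars radius in metres (approximate)

def mars_distance(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * MARS_RADIUS_M * math.asin(math.sqrt(a))

# e.g. distance between two nearby (invented) waypoints
print(mars_distance(-4.5895, 137.4417, -4.5900, 137.4420))
```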

  6. 3D Scientific Visualization with Blender

    Science.gov (United States)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender (an open source visualization suite widely used in the entertainment and gaming industries) for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  7. 3D Shape Modeling Using High Level Descriptors

    DEFF Research Database (Denmark)

    Andersen, Vedrana

    features like thorns, bark and scales. Presented here is a simple method for easy modeling, transferring and editing that kind of texture. The method is an extension of the height-field texture, but incorporates an additional tilt of the height field. Related to modeling non-heightfield textures, a part...... of my work involved developing feature-aware resizing of models with complex surfaces consisting of underlying shape and a distinctive texture detail. The aim was to deform an object while preserving the shape and size of the features.......The goal of this Ph.D. project is to investigate and improve the methods for describing the surface of 3D objects, with focus on modeling geometric texture on surfaces. Surface modeling being a large field of research, the work done during this project concentrated around a few smaller areas...

  8. 3D Scientific Visualization with Blender

    Science.gov (United States)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  9. 3D Visualization for Planetary Missions

    Science.gov (United States)

    DeWolfe, A. W.; Larsen, K.; Brain, D.

    2018-04-01

    We have developed visualization tools for viewing planetary orbiters and science data in 3D for both Earth and Mars, using the Cesium Javascript library, allowing viewers to visualize the position and orientation of spacecraft and science data.

  10. Enhancing Nuclear Training with 3D Visualization

    International Nuclear Information System (INIS)

    Gagnon, V.; Gagnon, B.

    2016-01-01

    Full text: While the nuclear power industry is trying to reinforce its safety and regain public support post-Fukushima, it is also faced with a very real challenge that affects its day-to-day activities: a rapidly aging workforce. Statistics show that close to 40% of the current nuclear power industry workforce will retire within the next five years. For newcomer countries, the challenge is even greater, having to develop a completely new workforce. The workforce replacement effort introduces nuclear newcomers of a new generation with different backgrounds and affinities. Major lifestyle differences between the two generations of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. To enhance existing training programmes or to support the establishment of new training programmes for newcomer countries, L-3 MAPPS has devised learning tools focused on the “Practice-by-Doing” principle. L-3 MAPPS has coupled 3D computer visualization with high-fidelity simulation to bring real-time, simulation-driven animated components and systems allowing immersive and participatory, individual or classroom learning. (author)

  11. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Gollaz Morales, Jose Alejandro

    2012-09-01

    Measuring visual closeness of 3-D models is an important issue for different problems and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing normal difference as a parameter in a weighted metric in Metro's sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.
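
    The idea of folding normal differences into the correspondence search can be illustrated, under strong simplifying assumptions, with a 6-D nearest-neighbour query over sampled points and weighted unit normals. The sketch below is only an approximation in the spirit of the approach above, not the authors' Metro-based implementation; the sampling, the weight w, and the data are placeholders.

```python
# Sketch of a normal-weighted closeness measure between two sampled surfaces.
# Each model is a set of sample points with unit normals; a 6-D search combines
# position and (weighted) normal difference, then we report the maximum and
# mean of the one-sided (A -> B) distances. Illustrative assumptions only.
import numpy as np
from scipy.spatial import cKDTree

def weighted_closeness(pts_a, nrm_a, pts_b, nrm_b, w=0.5):
    # 6-D correspondence search over [x, y, z, w*nx, w*ny, w*nz]
    tree = cKDTree(np.hstack([pts_b, w * nrm_b]))
    d, _ = tree.query(np.hstack([pts_a, w * nrm_a]))
    return d.max(), d.mean()

# toy usage: two randomly sampled "models" with dummy unit normals
rng = np.random.default_rng(0)
pa, pb = rng.random((1000, 3)), rng.random((1000, 3))
na = nb = np.tile([0.0, 0.0, 1.0], (1000, 1))
print(weighted_closeness(pa, na, pb, nb))
```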

  12. 3D Visualization Development of SIUE Campus

    Science.gov (United States)

    Nellutla, Shravya

    Geographic Information Systems (GIS) has progressed from traditional map-making to a modern technology where information can be created, edited, managed and analyzed. Like any other models, maps are simplified representations of the real world. Hence visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS software and its extensions for 3D modeling and visualization and use them to depict a real world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online will be used to demonstrate ways of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is demonstrated.

  13. Illustrative visualization of 3D city models

    Science.gov (United States)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  14. 3D Membrane Imaging and Porosity Visualization

    KAUST Repository

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D reconstruction additionally enabled the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with applications in ultrafiltration, supports for forward osmosis, etc., offering a complete view of the transport paths in the membrane.
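
    Given a segmented (binary) 3D reconstruction of the kind described above, a layer-wise porosity estimate reduces to the fraction of pore voxels in each layer orthogonal to the membrane surface. The sketch below is a generic illustration of that computation, not the authors' algorithm; the voxel labelling convention and axis orientation are assumptions.

```python
# Layer-wise porosity from a segmented 3D membrane volume.
# Convention assumed here: voxel value 1 = pore (void), 0 = polymer,
# and axis 0 is orthogonal to the membrane surface.
import numpy as np

def porosity_profile(binary_volume):
    """Fraction of pore voxels in each layer orthogonal to the surface."""
    return binary_volume.reshape(binary_volume.shape[0], -1).mean(axis=1)

# toy usage with a random segmented volume
vol = (np.random.rand(50, 128, 128) > 0.7).astype(np.uint8)
print(porosity_profile(vol)[:5])   # porosity of the first five layers
```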

  15. Java 3D Interactive Visualization for Astrophysics

    Science.gov (United States)

    Chae, K.; Edirisinghe, D.; Lingerfelt, E. J.; Guidry, M. W.

    2003-05-01

    We are developing a series of interactive 3D visualization tools that employ the Java 3D API. We have applied this approach initially to a simple 3-dimensional galaxy collision model (restricted 3-body approximation), with quite satisfactory results. Running either as an applet under Web browser control, or as a Java standalone application, this program permits real-time zooming, panning, and 3-dimensional rotation of the galaxy collision simulation under user mouse and keyboard control. We shall also discuss applications of this technology to 3-dimensional visualization for other problems of astrophysical interest such as neutron star mergers and the time evolution of element/energy production networks in X-ray bursts. *Managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.
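
    The restricted 3-body approximation mentioned above treats stars as massless test particles moving in the combined gravitational field of two galaxy point masses. The toy integration below (Python rather than the Java 3D application described) only illustrates the idea; masses, softening, initial conditions and units are arbitrary assumptions.

```python
# Toy restricted 3-body galaxy encounter: massless test particles (stars)
# orbit galaxy 1 and are perturbed by galaxy 2. Kick-drift (semi-implicit
# Euler) integration with G = 1 and arbitrary units; purely illustrative.
import numpy as np

G, m1, m2 = 1.0, 1.0, 0.5
dt, steps = 0.01, 2000

def accel(pos, centers, masses):
    a = np.zeros_like(pos)
    for c, m in zip(centers, masses):
        d = c - pos
        r = np.linalg.norm(d, axis=-1, keepdims=True)
        a += G * m * d / (r**3 + 1e-6)        # softened point-mass attraction
    return a

# galaxy centers on an approach orbit, plus a ring of test particles around galaxy 1
centers = np.array([[0.0, 0.0, 0.0], [8.0, 4.0, 0.0]])
cvel = np.array([[0.0, 0.0, 0.0], [-0.5, 0.0, 0.0]])
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
stars = np.stack([2 * np.cos(theta), 2 * np.sin(theta), np.zeros_like(theta)], axis=1)
vcirc = np.sqrt(G * m1 / 2.0)                 # circular speed at radius 2
svel = np.stack([-vcirc * np.sin(theta), vcirc * np.cos(theta), np.zeros_like(theta)], axis=1)

for _ in range(steps):
    cvel += accel(centers, centers, [m1, m2]) * dt   # galaxies attract each other
    svel += accel(stars, centers, [m1, m2]) * dt     # stars feel both galaxies
    centers = centers + cvel * dt
    stars = stars + svel * dt
```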

  16. Amazing Space: Explanations, Investigations, & 3D Visualizations

    Science.gov (United States)

    Summers, Frank

    2011-05-01

    The Amazing Space website is STScI's online resource for communicating Hubble discoveries and other astronomical wonders to students and teachers everywhere. Our team has developed a broad suite of materials, readings, activities, and visuals that are not only engaging and exciting, but also standards-based and fully supported so that they can be easily used within state and national curricula. These products include stunning imagery, grade-level readings, trading card games, online interactives, and scientific visualizations. We are currently exploring the potential use of stereo 3D in astronomy education.

  17. 3D Visualization of Global Ocean Circulation

    Science.gov (United States)

    Nelson, V. G.; Sharma, R.; Zhang, E.; Schmittner, A.; Jenny, B.

    2015-12-01

    Advanced 3D visualization techniques are seldom used to explore the dynamic behavior of ocean circulation. Streamlines are an effective method for visualization of flow, and they can be designed to clearly show the dynamic behavior of a fluidic system. We employ vector field editing and extraction software to examine the topology of velocity vector fields generated by a 3D global circulation model coupled to a one-layer atmosphere model simulating preindustrial and last glacial maximum (LGM) conditions. This results in a streamline-based visualization along multiple density isosurfaces on which we visualize points of vertical exchange and the distribution of properties such as temperature and biogeochemical tracers. Previous work involving this model examined the change in the energetics driving overturning circulation and mixing between simulations of LGM and preindustrial conditions. This visualization elucidates the relationship between locations of vertical exchange and mixing, as well as demonstrates the effects of circulation and mixing on the distribution of tracers such as carbon isotopes.
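
    Streamlines such as those described above are commonly obtained by integrating particle trajectories through an interpolated, gridded velocity field. The sketch below traces a single streamline with simple forward-Euler steps over a toy field; it is a generic illustration, not the vector field editing and extraction software used in the study.

```python
# Minimal streamline tracer over a gridded 3-D velocity field using forward
# Euler steps and trilinear interpolation. The field, grid and step size are
# toy assumptions; the result is a polyline suitable for 3-D rendering.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# toy gridded velocity field: horizontal rotation plus weak central upwelling
x = y = z = np.linspace(-1.0, 1.0, 32)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
U, V, W = -Y, X, 0.1 * (1 - X**2 - Y**2)
interp = [RegularGridInterpolator((x, y, z), c, bounds_error=False, fill_value=0.0)
          for c in (U, V, W)]

def velocity(p):
    return np.array([f(p)[0] for f in interp])     # interpolated velocity at point p

def streamline(seed, dt=0.01, steps=500):
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        pts.append(pts[-1] + dt * velocity(pts[-1]))   # forward Euler step
    return np.array(pts)

line = streamline([0.5, 0.0, -0.5])
print(line.shape)   # (501, 3) polyline
```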

  18. An overview of 3D software visualization.

    Science.gov (United States)

    Teyseyre, Alfredo R; Campo, Marcelo R

    2009-01-01

    Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. During many years, visualization in 2D space has been actively studied, but in the last decade, researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects like: visual representations, interaction issues, evaluation methods and development tools. We also perform a survey of some representative tools to support different tasks, i.e., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude identifying future research directions.

  19. Virtual reality and 3D animation in forensic visualization.

    Science.gov (United States)

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal media to accurately visualize crime or accident scenes to the viewers and in the courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace the traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing, to investigate the state-of-the-art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level-of-detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.

  20. Immersive 3D Visualization of Astronomical Data

    Science.gov (United States)

    Schaaff, A.; Berthier, J.; Da Rocha, J.; Deparis, N.; Derriere, S.; Gaultier, P.; Houpin, R.; Normand, J.; Ocvirk, P.

    2015-09-01

    The immersive-3D visualization, or Virtual Reality in our study, was previously dedicated to specific uses (research, flight simulators, etc.). The investment in infrastructure and its cost reserved it to large laboratories or companies. Lately we have seen the development of immersive-3D headsets intended for wide distribution, for example the Oculus Rift and the Sony Morpheus projects. The usual reaction is to say that these tools are primarily intended for games, since it is easy to imagine a player in a virtual environment and the added value over conventional 2D screens. Yet it is likely that there are many applications in the professional field if these tools become common. Introducing this technology into existing applications or new developments makes sense only if its interest is properly evaluated. The use in astronomy is clear for education: it is easy to imagine mobile and lightweight planetariums or to reproduce poorly accessible environments (e.g., large instruments). In contrast, in the field of professional astronomy the use is probably less obvious, and studies are required to determine the most appropriate applications and to assess the contributions compared to other display modes.

  1. Real-Time 3D Visualization

    Science.gov (United States)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  2. 3D Flow visualization in virtual reality

    Science.gov (United States)

    Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa

    2017-11-01

    By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can "scroll" forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.
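
    The Q-criterion used above flags vortical regions where rotation dominates strain, Q = 0.5*(||Omega||^2 - ||S||^2) > 0, with S and Omega the symmetric and antisymmetric parts of the velocity gradient tensor. The sketch below computes Q on a gridded velocity field with NumPy; the toy field and grid spacing are assumptions, and the isosurfacing and Unity/VR rendering described above are not shown.

```python
# Q-criterion on a uniformly gridded velocity field (u, v, w):
# Q = 0.5 * (||Omega||^2 - ||S||^2), summed over tensor components, where
# S and Omega are the symmetric and antisymmetric parts of grad(u).
import numpy as np

def q_criterion(u, v, w, dx=1.0):
    grads = [np.gradient(c, dx) for c in (u, v, w)]   # grads[i][j] = d u_i / d x_j
    Q = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            S = 0.5 * (grads[i][j] + grads[j][i])     # strain-rate component
            O = 0.5 * (grads[i][j] - grads[j][i])     # rotation-rate component
            Q += 0.5 * (O**2 - S**2)
    return Q

# toy usage: a columnar vortex aligned with z (solid-body rotation)
x = np.linspace(-1, 1, 64)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
Q = q_criterion(-Y, X, np.zeros_like(X), dx=x[1] - x[0])
print((Q > 0).mean())   # fraction of the domain flagged as vortical
```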

  3. VISUAL3D - An EIT network on visualization of geomodels

    Science.gov (United States)

    Bauer, Tobias

    2017-04-01

    When it comes to interpreting data and understanding deep geological structures and bodies at different scales, modelling tools and modelling experience are vital for deep exploration. Geomodelling provides a platform for the integration of different types of data, including new kinds of information (e.g., from new, improved measuring methods). EIT Raw Materials, initiated by the EIT (European Institute of Innovation and Technology) and funded by the European Commission, is the largest and strongest consortium in the raw materials sector worldwide. The VISUAL3D network of infrastructure is an initiative by EIT Raw Materials and aims at bringing together partners with 3D-4D-visualisation infrastructure and 3D-4D-modelling experience. The recently formed network interlinks hardware, software and expert knowledge in modelling, visualization and output. A special focus will be on linking research, education and industry, integrating multi-disciplinary data, and visualizing the data in three and four dimensions. By aiding network collaborations we aim at improving the combination of geomodels with differing file formats and data characteristics. This will create increased competency in modelling and visualization and the ability to interchange and communicate models more easily. By combining knowledge and experience in geomodelling with expertise in Virtual Reality visualization, partners of EIT Raw Materials as well as external parties will be able to visualize, analyze and validate their geomodels in immersive VR environments. The current network combines partners from universities, research institutes, geological surveys and industry with a strong background in geological 3D-modelling and 3D visualization and comprises: Luleå University of Technology, Geological Survey of Finland, Geological Survey of Denmark and Greenland, TUBA Freiberg, Uppsala University, Geological Survey of France, RWTH Aachen, DMT, KGHM Cuprum, Boliden, Montan

  4. Participation and 3D Visualization Tools

    DEFF Research Database (Denmark)

    Mullins, Michael; Jensen, Mikkel Holm; Henriksen, Sune

    2004-01-01

    With a departure point in a workshop held at the VR Media Lab at Aalborg University, this paper deals with aspects of public participation and the use of 3D visualisation tools. The workshop grew from a desire to involve a broad collaboration between the many actors in the city through using new...... perceptions of architectural representation in urban design where 3D visualisation techniques are used. It is the authors' general finding that, while 3D visualisation media have the potential to increase understanding of virtual space for the lay public, as well as for professionals, the lay public require...

  5. 3D VISUALIZATION FOR VIRTUAL MUSEUM DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    M. Skamantzari

    2016-06-01

    Full Text Available: The interest in the development of virtual museums is nowadays rising rapidly. During the last decades there have been numerous efforts concerning the 3D digitization of cultural heritage and the development of virtual museums, digital libraries and serious games. The realistic result has always been the main concern and a real challenge when it comes to 3D modelling of monuments, artifacts and especially sculptures. This paper implements, investigates and evaluates the results of the photogrammetric methods and 3D surveys that were used for the development of a virtual museum. Moreover, the decisions, the actions, the methodology and the main elements that this kind of application should include and take into consideration are described and analysed. It is believed that the outcomes of this application will be useful to researchers who are planning to develop and further improve the attempts made on virtual museums and mass production of 3D models.

  6. 3D Membrane Imaging and Porosity Visualization

    KAUST Repository

    Sundaramoorthi, Ganesh; Hadwiger, Markus; Ben Romdhane, Mohamed; Behzad, Ali Reza; Madhavan, Poornima; Nunes, Suzana Pereira

    2016-01-01

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore

  7. Diffractive optical element for creating visual 3D images.

    Science.gov (United States)

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-02

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists of the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce the 3D to 3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, safely protected against counterfeit, and designed to protect banknotes, documents, ID cards, etc.

  8. 3D visualization of port simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Horsthemke, W. H.; Macal, C. M.; Nevins, M. R.

    1999-06-14

    Affordable and realistic three dimensional visualization technology can be applied to large scale constructive simulations such as the port simulation model, PORTSIM. These visualization tools enhance the experienced planner's ability to form mental models of how seaport operations will unfold when the simulation model is implemented and executed. They also offer unique opportunities to train new planners not only in the use of the simulation model but on the layout and design of seaports. Simulation visualization capabilities are enhanced by borrowing from work on interface design, camera control, and data presentation. Using selective fidelity, the designers of these visualization systems can reduce their time and efforts by concentrating on those features which yield the most value for their simulation. Offering the user various observational tools allows the freedom to simply watch or engage in the simulation without getting lost. Identifying the underlying infrastructure or cargo items with labels can provide useful information at the risk of some visual clutter. The PortVis visualization expands the PORTSIM user base which can benefit from the results provided by this capability, especially in strategic planning, mission rehearsal, and training. Strategic planners will immediately reap the benefits of seeing the impact of increased throughput visually without keeping track of statistical data. Mission rehearsal and training users will have an effective training tool to supplement their operational training exercises which are limited in number because of their high costs. Having another effective training modality in this visualization system allows more training to take place and more personnel to gain an understanding of seaport operations. This simulation and visualization training can be accomplished at lower cost than would be possible for the operational training exercises alone. The application of PORTSIM and PortVis will lead to more efficient

  9. A STUDY ON USING 3D VISUALIZATION AND SIMULATION PROGRAM (OPTITEX 3D) ON LEATHER APPAREL

    Directory of Open Access Journals (Sweden)

    Ork Nilay

    2016-05-01

    Full Text Available: Leather is a luxury garment material. Design, material, labor, fitting and time costs strongly affect the production cost of a consumer leather good. 3D visualization and simulation programs, which are becoming popular in the textile industry, can be used to save material, labor and time in leather apparel. However, these programs have very limited use in the leather industry because leather material databases are not as well developed as those for textiles. In this research, the material properties of leather and textile fabric were first determined using both textile and leather physical test methods, then interpreted and entered into the program. Detailed measurements of an experimental human body were taken with a 3D body scanner, and an avatar was designed according to these measurements. A prototype dress was then made using a computer-aided design (CAD) program for pattern making. After pattern making, the OptiTex 3D visualization and simulation program was used to visualize and simulate the dresses. Additionally, the leather and cotton fabric dresses were sewn in real life, and the virtual and real dresses were compared and discussed. 3D virtual prototyping shows promising potential for future manufacturing technologies by evaluating garment fit simply and quickly, filling the gap between 3D pattern design and manufacturing, and providing virtual demonstrations to customers.

  10. Highly Realistic 3D Presentation Agents with Visual Attention Capability

    NARCIS (Netherlands)

    Hoekstra, A; Prendinger, H.; Bee, N.; Heylen, Dirk K.J.; Ishizuka, M.

    2007-01-01

    This research proposes 3D graphical agents in the role of virtual presenters with a new type of functionality – the capability to process and respond to visual attention of users communicated by their eye movements. Eye gaze is an excellent clue to users’ attention, visual interest, and visual

  11. Enhancing Nuclear Newcomer Training with 3D Visualization Learning Tools

    International Nuclear Information System (INIS)

    Gagnon, V.

    2016-01-01

    Full text: While the nuclear power industry is trying to reinforce its safety and regain public support post-Fukushima, it is also faced with a very real challenge that affects its day-to-day activities: a rapidly aging workforce. Statistics show that close to 40% of the current nuclear power industry workforce will retire within the next five years. For newcomer countries, the challenge is even greater, having to develop a completely new workforce. The workforce replacement effort introduces nuclear newcomers of a new generation with different backgrounds and affinities. Major lifestyle differences between the two generations of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. To enhance existing training programmes or to support the establishment of new training programmes for newcomer countries, L-3 MAPPS has devised learning tools focused on the “Practice-by-Doing” principle. L-3 MAPPS has coupled 3D computer visualization with high-fidelity simulation to bring real-time, simulation-driven animated components and systems allowing immersive and participatory, individual or classroom learning. (author)

  12. Wearable Gaze Trackers: Mapping Visual Attention in 3D

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Stets, Jonathan Dyssel; Suurmets, Seidi

    2017-01-01

    gaze trackers allows respondents to move freely in any real world 3D environment, removing the previous restrictions. In this paper we propose a novel approach for processing visual attention of respondents using mobile wearable gaze trackers in a 3D environment. The pipeline consists of 3 steps...

  13. Integrating 3D Visualization and GIS in Planning Education

    Science.gov (United States)

    Yin, Li

    2010-01-01

    Most GIS-related planning practices and education are currently limited to two-dimensional mapping and analysis although 3D GIS is a powerful tool to study the complex urban environment in its full spatial extent. This paper reviews current GIS and 3D visualization uses and development in planning practice and education. Current literature…

  14. Visualization of RELAP5-3D best estimate code

    International Nuclear Information System (INIS)

    Mesina, G.L.

    2004-01-01

    The Idaho National Engineering Laboratory has developed a number of nuclear plant analysis codes such as RELAP5-3D, SCDAP/RELAP5-3D, and FLUENT/RELAP5-3D that have multi-dimensional modeling capability. The output of these codes is very difficult to analyze without the aid of visualization tools. The RELAP5-3D Graphical User Interface (RGUI) displays these calculations on plant images, functional diagrams, graphs, and by other means. These representations of the data enhance the analysts' ability to recognize plant behavior visually and reduce the difficulty of analyzing complex three-dimensional models. This paper describes the Graphical User Interface system for the RELAP5-3D suite of Best Estimate codes. The uses of the Graphical User Interface are illustrated. Examples of user problems solved by use of this interface are given. (author)

  15. 3d visualization of atomistic simulations on every desktop

    Science.gov (United States)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-08-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.
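
    The anaglyphic stereo idea described above can be reproduced with a few lines of array code: take the red channel from the left-eye rendering and the green and blue channels from the right-eye rendering, then view the composite through red-cyan glasses. The sketch below is a generic illustration assuming two pre-rendered RGB views; it is not part of the AViz code itself.

```python
# Red-cyan anaglyph composition from two slightly displaced RGB renderings.
# left/right are HxWx3 uint8 arrays (assumed pre-rendered views of the scene);
# the result is viewed through red (left eye) / cyan (right eye) glasses.
import numpy as np

def anaglyph(left, right):
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]        # red channel from the left-eye view
    out[..., 1:] = right[..., 1:]     # green and blue from the right-eye view
    return out

# toy usage with random "renderings"
L = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
R = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
img = anaglyph(L, R)
```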

  16. 3d visualization of atomistic simulations on every desktop

    International Nuclear Information System (INIS)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-01-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given

  17. Visualizing planetary data by using 3D engines

    Science.gov (United States)

    Elgner, S.; Adeli, S.; Gwinner, K.; Preusker, F.; Kersten, E.; Matz, K.-D.; Roatsch, T.; Jaumann, R.; Oberst, J.

    2017-09-01

    We examined 3D gaming engines for their usefulness in visualizing large planetary image data sets. These tools allow us to include recent developments in the field of computer graphics in our scientific visualization systems and to present data products interactively and in higher quality than before. We have started to set up the first applications that will make use of virtual reality (VR) equipment.

  18. Creating 3D visualizations of MRI data: A brief guide

    Science.gov (United States)

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), the production of several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D ‘glass brain’ rendering can sometimes be difficult to interpret, it is useful in showing a more overall representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study’s findings. PMID:26594340
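
    One common way to produce the kind of 3D "glass brain" figure described above is to extract an isosurface from the volume with marching cubes and render it as a translucent mesh. The sketch below uses nibabel, scikit-image and matplotlib on a placeholder NIfTI file; it is a generic illustration of the idea, not the exact pipeline of the cited guide.

```python
# Generic 3-D rendering of an MRI volume: load a NIfTI file, extract an
# isosurface with marching cubes, and draw it as a translucent mesh.
# 'brain.nii.gz' and the intensity threshold are placeholders.
import nibabel as nib
import numpy as np
from skimage import measure
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

vol = nib.load("brain.nii.gz").get_fdata()
verts, faces, normals, values = measure.marching_cubes(vol, level=np.percentile(vol, 75))

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
mesh = Poly3DCollection(verts[faces], alpha=0.2)      # translucent "glass brain"
ax.add_collection3d(mesh)
ax.set_xlim(0, vol.shape[0]); ax.set_ylim(0, vol.shape[1]); ax.set_zlim(0, vol.shape[2])
plt.show()
```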

  19. Computerized diagnostic data analysis and 3-D visualization

    International Nuclear Information System (INIS)

    Schuhmann, D.; Haubner, M.; Krapichler, C.; Englmeier, K.H.; Seemann, M.; Schoepf, U.J.; Gebicke, K.; Reiser, M.

    1998-01-01

    Purpose: To survey methods for 3D data visualization and image analysis which can be used for computer-based diagnostics. Material and methods: The available methods are briefly explained and references to the literature are provided. Methods which allow basic manipulation of 3D data are windowing, rotation and clipping. More complex methods for visualization of 3D data are multiplanar reformation, volume projections (MIP, semi-transparent projections) and surface projections. Methods for image analysis comprise local data transformation (e.g. filtering) and the definition and application of complex models (e.g. deformable models). Results: Volume projections produce an impression of the 3D data set without reducing the data amount. This supports the interpretation of the 3D data set and saves time in comparison to any investigation which requires examination of all slice images. More advanced techniques for visualization, e.g. surface projections and hybrid rendering, visualize anatomical information to a very detailed extent, but both techniques require the segmentation of the structures of interest. Image analysis methods can be used to extract these structures (e.g. an organ) from the image data. Discussion: At the present time volume projections are robust and fast enough to be used routinely. Surface projections can be used to visualize complex and presegmented anatomical features. (orig.)
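
    Of the volume projections listed above, the maximum-intensity projection (MIP) is the simplest: for each ray (here, each voxel column along the chosen viewing axis) the brightest voxel is kept. The sketch below shows this as a one-line NumPy reduction over a synthetic volume; real use would operate on a CT or MR dataset.

```python
# Maximum-intensity projection (MIP): project a 3-D volume onto a 2-D image
# by taking the brightest voxel along the chosen viewing axis.
import numpy as np

def mip(volume, axis=0):
    return volume.max(axis=axis)

# toy usage with a synthetic volume
vol = np.random.rand(120, 256, 256)
axial_mip = mip(vol, axis=0)        # 256x256 projection image
print(axial_mip.shape)
```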

  20. 3D Stereoscopic Visualization of Fenestrated Stent Grafts

    International Nuclear Information System (INIS)

    Sun Zhonghua; Squelch, Andrew; Bartlett, Andrew; Cunningham, Kylie; Lawrence-Brown, Michael

    2009-01-01

    The purpose of this study was to present a technique of stereoscopic visualization in the evaluation of patients with abdominal aortic aneurysm treated with fenestrated stent grafts, compared with conventional 2D visualizations. Two patients with abdominal aortic aneurysm undergoing fenestrated stent grafting were selected for inclusion in the study. Conventional 2D views, including axial, multiplanar reformation, maximum-intensity projection, and volume rendering, and 3D stereoscopic visualizations were assessed by two experienced reviewers independently with regard to the treatment outcomes of fenestrated repair. Interobserver agreement was assessed with Kendall's W statistic. Multiplanar reformation and maximum-intensity projection visualizations were scored the highest in the evaluation of parameters related to the fenestrated stent grafting, while 3D stereoscopic visualization was scored as valuable in the evaluation of the appearance (any distortions) of the fenestrated stent. Volume rendering was found to play a limited role in the follow-up of fenestrated stent grafting. 3D stereoscopic visualization provides additional information that assists endovascular specialists to identify any distortions of the fenestrated stents when compared with 2D visualizations.
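
    Kendall's W, used above to assess interobserver agreement, is computed from the rank sums produced by m raters over n items as W = 12*S / (m^2*(n^3 - n)), where S is the sum of squared deviations of the rank sums from their mean. The sketch below implements this without tie correction; the example scores are invented, not the study's data.

```python
# Kendall's coefficient of concordance W for m raters ranking n items.
# W = 12*S / (m^2 * (n^3 - n)), with S the sum of squared deviations of the
# rank sums from their mean. No tie correction; purely illustrative.
import numpy as np
from scipy.stats import rankdata

def kendalls_w(scores):
    """scores: (m_raters, n_items) array of ratings (higher = better)."""
    ranks = np.apply_along_axis(rankdata, 1, scores)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    S = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * S / (m**2 * (n**3 - n))

# e.g. two hypothetical reviewers scoring five visualization modes
print(kendalls_w(np.array([[4, 5, 3, 1, 2],
                           [5, 4, 3, 2, 1]])))   # -> 0.9
```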

  1. Ray-based approach to integrated 3D visual communication

    Science.gov (United States)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. Then, the following discussions are concentrated on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward answer to the problem of how to represent 3D space, which is an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including some efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for the compression of a tremendous amount of data, and a light ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  2. Interactive 3D visualization for theoretical virtual observatories

    Science.gov (United States)

    Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-06-01

    Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  3. Interactive 3D Visualization for Theoretical Virtual Observatories

    Science.gov (United States)

    Dykes, Tim; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-04-01

    Virtual Observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of datasets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2d or volume rendering in 3d. We analyze the current state of 3d visualization for big theoretical astronomical datasets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3d visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based datasets allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  4. 3D Stereo Visualization for Mobile Robot Tele-Guide

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2006-01-01

    The use of 3D stereoscopic visualization may provide a user with higher comprehension of remote environments in tele-operation when compared to 2D viewing. In particular, a higher perception of environment depth characteristics, spatial localization, remote ambient layout, as well as faster system...

  5. NECTAR: Simulation and Visualization in a 3D Collaborative Environment

    NARCIS (Netherlands)

    Law, Y.W.; Chan, K.Y.

    For simulation and visualization in a 3D collaborative environment, an architecture called the Nanyang Experimental CollaboraTive ARchitecture (NECTAR) has been developed. The objective is to support multi-user collaboration in a virtual environment with an emphasis on cost-effectiveness and

  6. Effects of 3D sound on visual scanning

    NARCIS (Netherlands)

    Veltman, J.A.; Bronkhorst, A.W.; Oving, A.B.

    2000-01-01

    An experiment was conducted in a flight simulator to explore the effectiveness of a 3D sound display as support to visual information from a head down display (HDD). Pilots had to perform two main tasks in separate conditions: intercepting and following a target jet. Performance was measured for

  7. How 3D immersive visualization is changing medical diagnostics

    Science.gov (United States)

    Koning, Anton H. J.

    2011-03-01

    Originally the only way to look inside the human body without opening it up was by means of two dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three dimensional leads to ambiguities in interpretation and problems of occlusion. Three dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, they are still being viewed on 2D screens. However, this way valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics as well as for (bio)medical research.

  8. Advanced 3D Sensing and Visualization System for Unattended Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.J.; Little, C.Q.; Nelson, C.L.

    1999-01-01

    The purpose of this project was to create a reliable, 3D sensing and visualization system for unattended monitoring. The system provides benefits for several of Sandia's initiatives including nonproliferation, treaty verification, national security and critical infrastructure surety. The robust qualities of the system make it suitable for both interior and exterior monitoring applications. The 3D sensing system combines two existing sensor technologies in a new way to continuously maintain accurate 3D models of both static and dynamic components of monitored areas (e.g., portions of buildings, roads, and secured perimeters in addition to real-time estimates of the shape, location, and motion of humans and moving objects). A key strength of this system is the ability to monitor simultaneous activities on a continuous basis, such as several humans working independently within a controlled workspace, while also detecting unauthorized entry into the workspace. Data from the sensing system is used to identify activities or conditions that can signify potential surety (safety, security, and reliability) threats. The system could alert a security operator of potential threats or could be used to cue other detection, inspection or warning systems. An interactive, Web-based, 3D visualization capability was also developed using the Virtual Reality Modeling Language (VRML). The interface allows remote, interactive inspection of a monitored area (via the Internet or Satellite Links) using a 3D computer model of the area that is rendered from actual sensor data.

  9. Storytelling in Interactive 3D Geographic Visualization Systems

    Directory of Open Access Journals (Sweden)

    Matthias Thöny

    2018-03-01

    Full Text Available: The objective of interactive geographic maps is to provide geographic information to a large audience in a captivating and intuitive way. Storytelling helps to create exciting experiences and to explain complex or otherwise hidden relationships of geospatial data. Furthermore, interactive 3D applications offer a wide range of attractive elements for advanced visual story creation and offer the possibility to convey the same story in many different ways. In this paper, we discuss and analyze storytelling techniques in 3D geographic visualizations so that authors and developers working with geospatial data can use these techniques to conceptualize their visualization and interaction design. Finally, we outline two examples which apply the given concepts.

  10. Ideal Positions: 3D Sonography, Medical Visuality, Popular Culture.

    Science.gov (United States)

    Seiber, Tim

    2016-03-01

    As digital technologies are integrated into medical environments, they continue to transform the experience of contemporary health care. Importantly, medicine is increasingly visual. In the history of sonography, visibility has played an important role in accessing fetal bodies for diagnostic and entertainment purposes. With the advent of three-dimensional (3D) rendering, sonography presents the fetus visually as already a child. The aesthetics of this process and the resulting imagery, made possible in digital networks, discloses important changes in the relationship between technology and biology, reproductive health and political debates, and biotechnology and culture.

  11. 3D Planetary Data Visualization with CesiumJS

    Science.gov (United States)

    Larsen, K. W.; DeWolfe, A. W.; Nguyen, D.; Sanchez, F.; Lindholm, D. M.

    2017-12-01

    Complex spacecraft orbits and multi-instrument observations can be challenging to visualize with traditional 2D plots. To facilitate the exploration of planetary science data, we have developed a set of web-based interactive 3D visualizations for the MAVEN and MMS missions using the free CesiumJS library. The Mars Atmospheric and Volatile Evolution (MAVEN) mission has been collecting data at Mars since September 2014. The MAVEN3D project allows playback of one day's orbit at a time, displaying the spacecraft's position and orientation. Selected science data sets can be overplotted on the orbit track, including vectors for magnetic field and ion flow velocities. We also provide an overlay of the M-GITM model on the planet itself. MAVEN3D is available at the MAVEN public website at: https://lasp.colorado.edu/maven/sdc/public/pages/maven3d/ The Magnetospheric MultiScale Mission (MMS) consists of one hundred instruments on four spacecraft flying in formation around Earth, investigating the interactions between the solar wind and Earth's magnetic field. While the highest temporal resolution data isn't received and processed until later, continuous daily observations of the particle and field environments are made available as soon as they are received. Traditional "quick-look" static plots have long been the first interaction with data from a mission of this nature. Our new 3D Quicklook viewer allows data from all four spacecraft to be viewed in an interactive web application as soon as the data is ingested into the MMS Science Data Center, less than one day after collection, in order to better help identify scientifically interesting data.

  12. Realistic terrain visualization based on 3D virtual world technology

    Science.gov (United States)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2010-11-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that help engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for the construction of a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  13. EarthServer - 3D Visualization on the Web

    Science.gov (United States)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open Geospatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers, without requiring the user to install plugins or addons. Additionally, we are able to run the Earth data visualization client on a wide range of platforms with very different software and hardware requirements, such as smartphones (e.g. iOS, Android), different desktop systems, etc. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now, and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies. Underlying the EarthServer web client

  14. 3D computer visualization and animation of CANDU reactor core

    International Nuclear Information System (INIS)

    Qian, T.; Echlin, M.; Tonner, P.; Sur, B.

    1999-01-01

    Three-dimensional (3D) computer visualization and animation models of typical CANDU reactor cores (Darlington, Point Lepreau) have been developed using world-wide-web (WWW) browser based tools: JavaScript, hyper-text-markup language (HTML) and virtual reality modeling language (VRML). The 3D models provide three-dimensional views of internal control and monitoring structures in the reactor core, such as fuel channels, flux detectors, liquid zone controllers, zone boundaries, shutoff rods, poison injection tubes, ion chambers. Animations have been developed based on real in-core flux detector responses and rod position data from reactor shutdown. The animations show flux changing inside the reactor core with the drop of shutoff rods and/or the injection of liquid poison. The 3D models also provide hypertext links to documents giving specifications and historical data for particular components. Data in HTML format (or other format such as PDF, etc.) can be shown in text, tables, plots, drawings, etc., and further links to other sources of data can also be embedded. This paper summarizes the use of these WWW browser based tools, and describes the resulting 3D reactor core static and dynamic models. Potential applications of the models are discussed. (author)

  15. 2D/3D Visual Tracker for Rover Mast

    Science.gov (United States)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field-of-view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match. The program could be a core for building application programs for systems
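
    As an illustration of the pan/tilt step described above, here is a minimal sketch with deliberately simplified kinematics: the mast is modeled as an ideal pan/tilt unit at a fixed offset in the rover body frame, which is an assumption of this example, not the flight kinematic model.

```python
import numpy as np

def pan_tilt_to_target(target_rover, mast_origin_rover):
    """Pan/tilt angles that point a mast camera at a 3D target.

    Simplified kinematics: pan rotates about +z, tilt elevates above the
    xy plane; both inputs are in the rover body frame (meters).
    """
    v = np.asarray(target_rover, dtype=float) - np.asarray(mast_origin_rover, dtype=float)
    pan = np.arctan2(v[1], v[0])                   # azimuth about +z
    tilt = np.arctan2(v[2], np.hypot(v[0], v[1]))  # elevation above the xy plane
    return pan, tilt

# Hypothetical target 10 m ahead and 2 m left on the ground; mast head 1.5 m up.
pan, tilt = pan_tilt_to_target([10.0, 2.0, 0.0], [0.0, 0.0, 1.5])
print(np.degrees([pan, tilt]))
```

    In the tracker described above, the target position would first be corrected by the pose change estimated from visual odometry before these angles are recomputed.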

  16. 3D modeling and visualization software for complex geometries

    International Nuclear Information System (INIS)

    Guse, Guenter; Klotzbuecher, Michael; Mohr, Friedrich

    2011-01-01

    Reactor safety depends on reliable nondestructive testing of reactor components. For 100% detection probability of flaws and the determination of their size using ultrasonic methods, the ultrasonic waves have to hit the flaws within a specific incidence and squint angle. For complex test geometries, like testing of nozzle welds from the outside of the component, these angular ranges can only be determined using elaborate mathematical calculations. The authors developed a 3D modeling and visualization software tool that allows ultrasonic measurement data to be integrated into, and presented within, the 3D geometry. The software package was verified using 1:1 test samples (examples: testing of the nozzle edge of the feedwater nozzle of a steam generator from the outside; testing of the reactor pressure vessel nozzle edge from the inside).

  17. FROMS3D: New Software for 3-D Visualization of Fracture Network System in Fractured Rock Masses

    Science.gov (United States)

    Noh, Y. H.; Um, J. G.; Choi, Y.

    2014-12-01

    A new software package (FROMS3D) is presented to visualize fracture network systems in 3-D. The software consists of several modules that handle the management of borehole and field fracture data, fracture network modelling, visualization of fracture geometry in 3-D, and calculation and visualization of intersections and equivalent pipes between fractures. Intel Parallel Studio XE 2013, Visual Studio.NET 2010 and the open source VTK library were utilized as development tools to efficiently implement the modules and the graphical user interface of the software. The results suggest that the developed software is effective in visualizing 3-D fracture network systems, and can provide useful information to tackle engineering geological problems related to the strength, deformability and hydraulic behavior of fractured rock masses.
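
    FROMS3D itself is a C++/VTK application; purely as an illustration of the kind of VTK scene it builds, the sketch below renders hypothetical fractures as semi-transparent discs using VTK's Python bindings. The fracture centers, normals, and radii are made up for the example.

```python
import vtk

# Hypothetical fractures: (center xyz [m], unit normal, radius [m]).
fractures = [
    ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 2.0),
    ((1.5, 0.5, 1.0), (0.7, 0.0, 0.7), 1.2),
]

renderer = vtk.vtkRenderer()
for center, normal, radius in fractures:
    # Approximate each fracture as a planar disc (a many-sided regular polygon).
    disc = vtk.vtkRegularPolygonSource()
    disc.SetNumberOfSides(48)
    disc.SetRadius(radius)
    disc.SetCenter(*center)
    disc.SetNormal(*normal)

    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(disc.GetOutputPort())
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)
    actor.GetProperty().SetOpacity(0.6)  # keep intersections between fractures visible
    renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```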

  18. 3D Immersive Visualization: An Educational Tool in Geosciences

    Science.gov (United States)

    Pérez-Campos, N.; Cárdenas-Soto, M.; Juárez-Casas, M.; Castrejón-Pineda, R.

    2007-05-01

    3D immersive visualization is an innovative tool currently used in various disciplines, such as medicine, architecture, engineering, video games, etc. Recently, the Universidad Nacional Autónoma de México (UNAM) mounted a visualization theater (Ixtli) with leading edge technology, for academic and research purposes that require immersive 3D tools for a better understanding of the concepts involved. The Division of Engineering in Earth Sciences of the School of Engineering, UNAM, is running a project focused on visualization of geoscience data. Its objective is to incorporate educational material in geoscience courses in order to support and improve the teaching-learning process, especially in topics that are well known to be difficult for students. As part of the project, professors and students are trained in visualization techniques; then their data are adapted and visualized in Ixtli as part of a class or a seminar, where all the attendants can interact, not only with each other but also with the object under study. As part of our results, we present specific examples used in basic geophysics courses, such as interpreted seismic cubes, seismic-wave propagation models, and structural models from bathymetric, gravimetric and seismological data; as well as examples from ongoing applied projects, such as a modeled SH upward wave, the occurrence of an earthquake cluster in 1999 in the Popocatepetl volcano, and a risk atlas from Delegación Alvaro Obregón in Mexico City. All these examples, plus those to come, constitute a library for students and professors willing to explore another dimension of the teaching-learning process. Furthermore, this experience can be enhanced by rich discussions and interactions through videoconferences with other universities and researchers.

  19. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D.

    Science.gov (United States)

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H

    2017-08-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with the concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by the addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.

  20. The 3D LAOKOON--Visual and Verbal in 3D Online Learning Environments.

    Science.gov (United States)

    Liestol, Gunnar

    This paper reports on a project where three-dimensional (3D) online gaming environments were exploited for the purpose of academic communication and learning. 3D gaming environments are media and meaning rich and can provide inexpensive solutions for educational purposes. The experiment with teaching and discussions in this setting, however,…

  1. Visualizing 3D data obtained from microscopy on the Internet.

    Science.gov (United States)

    Pittet, J J; Henn, C; Engel, A; Heymann, J B

    1999-01-01

    The Internet is a powerful communication medium increasingly exploited by business and science alike, especially in structural biology and bioinformatics. Real-world objects that were traditionally presented as static two-dimensional images on the limited medium of paper can now be shown interactively in three dimensions. Many facets of this new capability have already been developed, particularly in the form of VRML (virtual reality modeling language), but there is a need to extend this capability for visualizing scientific data. Here we introduce a real-time isosurfacing node for VRML, based on the marching cubes approach, allowing interactive isosurfacing. A second node performs three-dimensional (3D) texture-based volume rendering for a variety of representations. The use of computers in the microscopic and structural biosciences is extensive, and many scientific file formats exist. To overcome the problem of accessing such data from VRML and other tools, we implemented extensions to SGI's IFL (image format library). IFL is a file format abstraction layer defining communication between a program and a data file. These technologies are developed in support of the BioImage project, aiming to establish a database prototype for multidimensional microscopic data with the ability to view the data within a 3D interactive environment. Copyright 1999 Academic Press.
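
    The record describes a custom real-time VRML isosurfacing node; as a rough offline analogue of the same marching cubes step, the sketch below extracts an isosurface from a synthetic volume with scikit-image. The volume and threshold are invented for the example, and writing the mesh as a VRML IndexedFaceSet is only indicated in a comment.

```python
import numpy as np
from skimage import measure

# Hypothetical 3D density map standing in for a microscopy reconstruction.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.exp(-(x**2 + y**2 + z**2) * 8)  # a smooth blob

# Marching cubes: extract the triangle mesh of the chosen isosurface level.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

# The mesh could now be serialized, e.g. as a VRML IndexedFaceSet
# ('coord Coordinate { point [...] } coordIndex [...]') for a browser plug-in.
print(verts.shape, faces.shape)
```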

  2. A workflow for the 3D visualization of meteorological data

    Science.gov (United States)

    Helbig, Carolin; Rink, Karsten

    2014-05-01

    In the future, climate change will strongly influence our environment and living conditions. To predict possible changes, climate models that include basic and process conditions have been developed, and big data sets are produced as a result of simulations. The combination of various variables of climate models with spatial data from different sources helps to identify correlations and to study key processes. For our case study we use results of the weather research and forecasting (WRF) model for two regions at different scales that include various landscapes in Northern Central Europe and Baden-Württemberg. We visualize these simulation results in combination with observation data and geographic data, such as river networks, to evaluate processes and analyze whether the model represents the atmospheric system sufficiently. For this purpose, a continuous workflow that leads from the integration of heterogeneous raw data to visualization using open source software (e.g. OpenGeoSys Data Explorer, ParaView) is developed. These visualizations can be displayed on a desktop computer or in an interactive virtual reality environment. We established a concept that includes recommended 3D representations and a color scheme for the variables of the data, based on existing guidelines and established traditions in the specific domain. To examine changes over time in observation and simulation data, we added the temporal dimension to the visualization. In a first step of the analysis, the visualizations are used to get an overview of the data and detect areas of interest, such as regions of convection or wind turbulence. Then, subsets of the data sets are extracted and the included variables can be examined in detail. An evaluation by experts from the domains of visualization and atmospheric sciences establishes whether the visualizations are self-explanatory and clearly arranged. These easy-to-understand visualizations of complex data sets are the basis for scientific communication. In addition, they have
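
    To make the kind of scripted step such a workflow can automate concrete, here is a minimal ParaView Python sketch (run with pvpython); the file name and the variable being colored are hypothetical, and this is not the authors' actual pipeline.

```python
from paraview.simple import (ColorBy, GetActiveViewOrCreate, OpenDataFile,
                             Render, SaveScreenshot, Show)

reader = OpenDataFile("wrf_output.vtu")      # simulation results converted to a VTK format
view = GetActiveViewOrCreate("RenderView")
display = Show(reader, view)
ColorBy(display, ("POINTS", "temperature"))  # color the mesh by a model variable
display.RescaleTransferFunctionToDataRange(True)
Render(view)
SaveScreenshot("temperature_overview.png", view)
```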

  3. 3D visualization of numeric planetary data using JMARS

    Science.gov (United States)

    Dickenshied, S.; Christensen, P. R.; Anwar, S.; Carter, S.; Hagee, W.; Noss, D.

    2013-12-01

    JMARS (Java Mission-planning and Analysis for Remote Sensing) is a free geospatial application developed by the Mars Space Flight Facility at Arizona State University. Originally written as a mission planning tool for the THEMIS instrument on board the Mars Odyssey spacecraft, it was released as an analysis tool to the general public in 2003. Since then it has expanded to be used for mission planning and scientific data analysis by additional NASA missions to Mars, the Moon, and Vesta, and it has come to be used by scientists, researchers and students of all ages from more than 40 countries around the world. The public version of JMARS now also includes remote sensing data for Mercury, Venus, Earth, the Moon, Mars, and a number of the moons of Jupiter and Saturn. Additional datasets for asteroids and other smaller bodies are being added as they become available and time permits. In addition to visualizing multiple datasets in context with one another, significant effort has been put into on-the-fly projection of georegistered data over surface topography. This functionality allows a user to easily create and modify 3D visualizations of any regional scene where elevation data are available in JMARS. This can be accomplished through the use of global topographic maps or regional numeric data such as HiRISE or HRSC DTMs. Users can also upload their own regional or global topographic dataset and use it as an elevation source for 3D rendering of their scene. The 3D Layer in JMARS allows the user to exaggerate the z-scale of any elevation source to emphasize the vertical variance throughout a scene. In addition, the user can rotate, tilt, and zoom the scene to any desired angle and then illuminate it with an artificial light source. This scene can be easily overlain with additional JMARS datasets such as maps, images, shapefiles, contour lines, or scale bars, and the scene can be easily saved as a graphic image for use in presentations or publications.

  4. 3D geospatial visualizations: Animation and motion effects on spatial objects

    Science.gov (United States)

    Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos

    2018-02-01

    Digital Elevation Models (DEMs), in combination with high quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (virtual globe) and an amazing navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (Javascript) also makes it possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). One step beyond, by employing WebGL frameworks (e.g. Cesium.js, three.js), animation and motion effects can be attributed to 3D models. However, major GIS-based functionalities in combination with all the above-mentioned visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g. sea waves) as well as motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Towards this, we developed and made available to the research community an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models at user-specified locations, paths and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.

  5. Interactive Scientific Visualization in 3D Virtual Reality Model

    Directory of Open Access Journals (Sweden)

    Filip Popovski

    2016-11-01

    Full Text Available Scientific visualization in virtual reality technology is a graphical representation of a virtual environment in the form of images or animation that can be displayed with various devices, such as a Head Mounted Display (HMD) or monitors that can show a three-dimensional world. Research in real time is a desirable capability for scientific visualization and virtual reality in which we are immersed, and it makes the research process easier. In this paper the interaction between the user and objects in the virtual environment occurs in real time, which gives a sense of reality to the user. The Quest3D VR software package is used, and the movement of the user through the virtual environment, the impossibility of walking through solid objects, and methods for grabbing and displacing objects are programmed so that all interactions between them are possible. Finally, critical analyses of all these techniques were carried out on various computer systems, and excellent results were obtained.

  6. 3D visualization and simulation to enhance nuclear learning

    International Nuclear Information System (INIS)

    Dimitri-Hakim, R.

    2012-01-01

    The nuclear power industry is facing a very real challenge that affects its day-to-day activities: a rapidly aging workforce. For New Nuclear Build (NNB) countries, the challenge is even greater, having to develop a completely new workforce with little to no prior experience or exposure to nuclear power. The workforce replacement introduces workers of a new generation with different backgrounds and affinities than their predecessors. Major lifestyle differences between the new and the old generation of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. (author)

  7. Designing stereoscopic information visualization for 3D-TV: What can we can learn from S3D gaming?

    Science.gov (United States)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.

  8. Visual comfort of 3-D TV : models and measurements

    NARCIS (Netherlands)

    Lambooij, M.T.M.

    2012-01-01

    The embracing of 3-D movies by Hollywood and fast LCD panels finally enable the home consumer market to start successful campaigns to get 3-D movies and games in the comfort of the living room. By introducing three-dimensional television (3-D TV) and its desktop-counterpart for gaming and internet

  9. 3D Visualization of Engendering Collaborative Leadership in the Space

    Directory of Open Access Journals (Sweden)

    Aini-Kristiina Jäppinen

    2012-12-01

    Full Text Available The paper focuses on collaborative leadership in education and how to illustrate its engendering process in a three-dimensional space. This complex and fluid process is examined as distributed and pedagogical within a Finnish vocational upper secondary educational organization. As a consequence, the notion of distributed pedagogical leadership is used when collaborative leadership in education is studied. Collaborative leadership is argued to consist of the innermost substance of a professional learning community, as attributes of a group of people working together for specific purposes. Therefore, collaborative leadership naturally involves actors, activities, and context. However, the innermost substance of the community is the crux of leadership. It is here presented in the form of ten "keys", as ten attributes with several operational nuances. The keys are highly interdependent and a movement in one of them has an effect both on every other key and the whole. Within this framework, the paper provides a presentation of selected study results by means of the 3D program Strata. The visualizations illustrate concrete examples of how the keys relate to the reality in the vocational education organization in question. For this, a novel analysis called Wave is used, based on natural laws and rules of physics.

  10. Sub aquatic 3D visualization and temporal analysis utilizing ArcGIS online and 3D applications

    Science.gov (United States)

    We used 3D Visualization tools to illustrate some complex water quality data we’ve been collecting in the Great Lakes. These data include continuous tow data collected from our research vessel the Lake Explorer II, and continuous water quality data collected from an autono...

  11. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    Science.gov (United States)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software, Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  12. 3D Visual Proxemics: Recognizing Human Interactions in 3D from a Single Image (Open Access)

    Science.gov (United States)

    2013-06-28

    accurate tracking and identity associations of people's motions in videos. Proxemics is a subfield of anthropology that involves the study of people... cinematography, where the shot composition and camera viewpoint is optimized for visual weight [1]. In cinema, a shot is either a long shot, a medium

  13. Development of 3-D Medical Image VIsualization System

    African Journals Online (AJOL)

    User

    uses standard 2-D medical imaging inputs and generates medical images of human body parts ... light wave from points on the 3-D object(s) in ... tools, and communication bandwidth cannot .... locations along the track that correspond with.

  14. Towards an Integrated Visualization Of Semantically Enriched 3D City Models: An Ontology of 3D Visualization Techniques

    OpenAIRE

    Métral, Claudine; Ghoula, Nizar; Falquet, Gilles

    2012-01-01

    3D city models - which represent in 3 dimensions the geometric elements of a city - are increasingly used for an intended wide range of applications. Such uses are made possible by using semantically enriched 3D city models and by presenting such enriched 3D city models in a way that allows decision-making processes to be carried out from the best choices among sets of objectives, and across issues and scales. In order to help in such a decision-making process we have defined a framework to f...

  15. Visual fatigue while watching 3D stimuli from different positions

    Directory of Open Access Journals (Sweden)

    J. Antonio Aznar-Casanova

    2017-07-01

    Conclusion: These results support a mixed model, combining a model based on the visual angle (related to viewing distance) and another based on oculomotor imbalance (related to visual direction). This mixed model could help to predict the distribution of seats in a cinema room, ranging from those that produce greater visual comfort to those that produce more visual discomfort. It could also be a first step towards the pre-diagnosis of binocular vision disorders.

  16. Haptic and Visual feedback in 3D Audio Mixing Interfaces

    DEFF Research Database (Denmark)

    Gelineck, Steven; Overholt, Daniel

    2015-01-01

    This paper describes the implementation and informal evaluation of a user interface that explores haptic feedback for 3D audio mixing. The implementation compares different approaches using either the LEAP Motion for mid-air hand gesture control, or the Novint Falcon for active haptic feedback...

  17. Gipsy 3D : Analysis, Visualization and Vo-Tools

    NARCIS (Netherlands)

    Ruiz, J. E.; Santander-Vela, J. D.; Espigares, V.; Verdes-Montenegro, L.; Hulst, J. M. van der

    2009-01-01

    The scientific goals of the AMIGA project are based on the analysis of a significant amount of spectroscopic 3D data. In order to perform this work we present an initiative to develop a new VO compliant package, including present core applications and tasks offered by the Groningen Image Processing

  18. The role of visual grammar in online 3D games

    DEFF Research Database (Denmark)

    Nobaew, Banphot

    2011-01-01

    theoretical framework of visual language and therefore analyzes the elements of visual design in online three-dimensional games. Of course, there are some theoretical frameworks which have been applied to existing media, such as media studies. These have been used to analyze printed media, images, arts...

  19. 3D Visual Data Mining: goals and experiences

    DEFF Research Database (Denmark)

    Bøhlen, Michael Hanspeter; Bukauskas, Linas; Eriksen, Poul Svante

    2003-01-01

    , statistical analyses, perceptual and cognitive psychology, and scientific visualization. At the conceptual level we offer perceptual and cognitive insights to guide the information visualization process. We then choose cluster surfaces to exemplify the data mining process, to discuss the tasks involved...

  20. 3D panorama stereo visual perception centering on the observers

    International Nuclear Information System (INIS)

    Tang, YiPing; Zhou, Jingkai; Xu, Haitao; Xiang, Yun

    2015-01-01

    For existing three-dimensional (3D) laser scanners, acquiring geometry and color information of the objects simultaneously is difficult. Moreover, the current techniques cannot store, modify, and model the point clouds efficiently. In this work, we have developed a novel sensor system, which is called active stereo omni-directional vision sensor (ASODVS), to address those problems. ASODVS is an integrated system composed of a single-view omni-directional vision sensor and a mobile planar green laser generator platform. Driven by a stepper motor, the laser platform can move vertically along the axis of the ASODVS. During the scanning of the laser generators, the panoramic images of the environment are captured and the characteristics and space location information of the laser points are calculated accordingly. Based on the image information of the laser points, the 3D space can be reconstructed. Experimental results demonstrate that the proposed ASODVS system can measure and reconstruct the 3D space in real-time and with high quality. (paper)

  1. Parallel Computer System for 3D Visualization Stereo on GPU

    Science.gov (United States)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on Graphics Processing Units (GPUs) for 3D stereo image synthesis. The development is based on the modified ray tracing method developed by the authors for fast search of ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. A generalized procedure for 3D stereo image synthesis on the Graphics Processing Unit/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The achieved average acceleration of the multi-threaded implementation on the test GPU and CPU is about 7.5 and 1.6 times, respectively. Studying the influence of the size and configuration of the computational Compute Unified Device Architecture (CUDA) network on the computational speed shows the importance of their correct selection. The obtained experimental estimates can be significantly improved by new GPUs with a large number of processing cores and multiprocessors, as well as an optimized configuration of the computing CUDA network.
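
    The record does not give the stereo camera setup, so the following sketch only illustrates the common parallel-axis arrangement: the left and right ray-tracing origins are displaced by half the interocular distance along the camera's right vector. All values are illustrative and not tied to the paper's GPU implementation.

```python
import numpy as np

def stereo_eye_origins(eye, look_at, up, interocular=0.065):
    """Left/right camera origins for parallel-axis stereo ray tracing."""
    eye, look_at, up = (np.asarray(v, dtype=float) for v in (eye, look_at, up))
    forward = look_at - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    offset = 0.5 * interocular * right
    return eye - offset, eye + offset  # (left eye, right eye)

left_eye, right_eye = stereo_eye_origins([0.0, 0.0, 5.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```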

  2. Visualization of the NASA ICON mission in 3d

    Science.gov (United States)

    Mendez, R. A., Jr.; Immel, T. J.; Miller, N.

    2016-12-01

    The ICON Explorer mission (http://icon.ssl.berkeley.edu) will provide several data products for the atmosphere and ionosphere after its launch in 2017. This project will support the mission by investigating the capability of these tools for visualization of current and predicted observatory characteristics and data acquisition. Visualization of this mission can be accomplished using tools like Google Earth or CesiumJS, as well as assistance from Java or Python. Ideally we will bring this visualization into the homes of people without the need for additional software. The path of launching a standalone website, building this environment, and a full toolkit will be discussed. Eventually, the initial work could lead to the addition of downloadable visualization packages for mission demonstration or science visualization.

  3. Web based 3-D medical image visualization on the PC.

    Science.gov (United States)

    Kim, N; Lee, D H; Kim, J H; Kim, Y; Cho, H J

    1998-01-01

    With the recent advance of the Web and its associated technologies, information sharing in distributed computing environments has gained a great amount of attention from many researchers in many application areas, such as medicine, engineering, and business. One basic requirement of distributed medical consultation systems is that geographically dispersed, disparate participants are allowed to exchange information readily with each other. Such software also needs to be supported on a broad range of computer platforms to increase the software's accessibility. In this paper, the development of a world-wide-web based medical consultation system for radiology imaging is addressed to provide platform independence and greater accessibility. The system supports sharing of 3-dimensional objects. We use VRML (Virtual Reality Modeling Language), which is the de facto standard in 3-D modeling on the Web. 3-D objects are reconstructed from CT or MRI volume data in a VRML format, which can be viewed and manipulated easily in Web browsers with a VRML plug-in. A marching cubes method is used in the transformation of scanned volume data sets to polygonal surfaces of VRML. A decimation algorithm is adopted to reduce the number of meshes in the resulting VRML file. 3-D volume data are often very large in size, hence loading the data on PC-level computers requires a significant reduction of the size of the data while minimizing the loss of the original shape information. This is also important to decrease network delays. A prototype system has been implemented (http://cybernet5.snu.ac.kr/-cyber/mrivrml.html), and several sessions of experiments were carried out.

  4. 2-D and 3-D Visualization of Many-to-Many Relationships

    Directory of Open Access Journals (Sweden)

    SeungJin Lim

    2017-08-01

    Full Text Available With the unprecedented wave of Big Data, the importance of information visualization is catching greater momentum. Understanding the underlying relationships between constituent objects is becoming a common task in every branch of science, and visualization of such relationships is a critical part of data analysis. While the techniques for the visualization of binary relationships are widespread, visualization techniques for ternary or higher relationships are lacking. In this paper, we propose a 3-D visualization primitive which is suitable for such relationships. The design goals of the primitive are discussed, and the effectiveness of the proposed visual primitive with respect to information communication is demonstrated in a 3-D visualization environment.

  5. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    Science.gov (United States)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once the viewer is trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: the gaze remains focused on the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
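
    The paper's method runs on the GPU via vertex programs and texture mapping; as a CPU reference for the underlying idea only, the sketch below uses the classic pattern-copy formulation, in which each pixel repeats the pixel one depth-dependent separation to its left. The depth map and separation parameters are invented for the example.

```python
import numpy as np

def simple_sirds(depth, pattern_width=90, min_sep=50, seed=0):
    """Basic random-dot autostereogram (pattern-copy formulation).

    `depth` is a 2D array in [0, 1], where 1 is nearest to the viewer.
    Repeating dots at a depth-dependent separation encode the parallax.
    """
    rng = np.random.default_rng(seed)
    h, w = depth.shape
    img = rng.random((h, w))
    sep = (pattern_width - (pattern_width - min_sep) * depth).astype(int)
    for y in range(h):
        for x in range(pattern_width, w):
            img[y, x] = img[y, x - sep[y, x]]
    return img

# Hypothetical depth map: a raised rectangle in the middle of the image.
depth = np.zeros((256, 512))
depth[96:160, 192:320] = 0.6
stereogram = simple_sirds(depth)
```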

  6. Haptic perception disambiguates visual perception of 3D shape

    NARCIS (Netherlands)

    Wijntjes, Maarten W A; Volcic, Robert; Pont, Sylvia C.; Koenderink, Jan J.; Kappers, Astrid M L

    We studied the influence of haptics on visual perception of three-dimensional shape. Observers were shown pictures of an oblate spheroid in two different orientations. A gauge-figure task was used to measure their perception of the global shape. In the first two sessions only vision was used. The

  7. Virtual Reality, 3D Stereo Visualization, and Applications in Robotics

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2006-01-01

    , while little can be found about the advantages of stereoscopic visualization in mobile robot tele-guide applications. This work investigates stereoscopic robot tele-guide under different conditions, including typical navigation scenarios and the use of synthetic and real images. This work also...

  8. Visual attention: low-level and high-level viewpoints

    Science.gov (United States)

    Stentiford, Fred W. M.

    2012-06-01

    This paper provides a brief outline of the approaches to modeling human visual attention. Bottom-up and top-down mechanisms are described together with some of the problems that they face. It has been suggested in brain science that memory functions by trading measurement precision for associative power; sensory inputs from the environment are never identical on separate occasions, but the associations with memory compensate for the differences. A graphical representation for image similarity is described that relies on the size of maximally associative structures (cliques) that are found to reflect between pairs of images. This is applied to the recognition of movie posters, the location and recognition of characters, and the recognition of faces. The similarity mechanism is shown to model popout effects when constraints are placed on the physical separation of pixels that correspond to nodes in the maximal cliques. The effect extends to modeling human visual behaviour on the Poggendorff illusion.

  9. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    Science.gov (United States)

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, a general framework for depth mapping that optimizes visual comfort for S3D display is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
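
    The paper's two-stage optimization cannot be reproduced from the abstract alone; the sketch below only illustrates the global stage in its simplest form, linearly compressing a normalized depth map into an assumed comfort sub-range. The range bounds are illustrative placeholders.

```python
import numpy as np

def remap_depth(depth, comfort_near=0.4, comfort_far=0.6):
    """Linearly remap normalized depth values into a comfort sub-range."""
    d = np.asarray(depth, dtype=float)
    d_min, d_max = d.min(), d.max()
    t = (d - d_min) / max(d_max - d_min, 1e-9)   # normalize to [0, 1]
    return comfort_near + t * (comfort_far - comfort_near)

remapped = remap_depth(np.random.rand(480, 640))  # hypothetical depth map
```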

  10. New techniques in 3D scalar and vector field visualization

    International Nuclear Information System (INIS)

    Max, N.; Crawfis, R.; Becker, B.

    1993-01-01

    At Lawrence Livermore National Laboratory (LLNL) we have recently developed several techniques for volume visualization of scalar and vector fields, all of which use back-to-front compositing. The first renders volume density clouds by compositing polyhedral volume cells or their faces. The second is a "splatting" scheme which composites textures used to reconstruct the scalar or vector fields. One version calculates the necessary texture values in software, and another takes advantage of hardware texture mapping. The next technique renders contour surface polygons using semi-transparent textures, which adjust appropriately when the surfaces deform in a flow, or change topology. The final one renders the "flow volume" of smoke or dye tracer swept out by a fluid flowing through a small generating polygon. All of these techniques are applied to a climate model data set, to visualize cloud density and wind velocity.
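
    All of the techniques in this record share the back-to-front compositing step, which for a single ray reduces to repeatedly blending the nearer sample over the accumulated result. The sketch below shows that operation in isolation; the sample colors and opacities are illustrative.

```python
import numpy as np

def composite_back_to_front(colors, alphas):
    """Back-to-front 'over' compositing of samples along one ray.

    `colors` is an (n, 3) sequence of RGB samples ordered farthest first;
    `alphas` holds their opacities in the same order.
    """
    result = np.zeros(3)
    for c, a in zip(colors, alphas):
        result = a * np.asarray(c, dtype=float) + (1.0 - a) * result
    return result

# A far red sample behind a near blue sample, both half-transparent.
print(composite_back_to_front([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]], [0.5, 0.5]))
```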

  11. New techniques in 3D scalar and vector field visualization

    Energy Technology Data Exchange (ETDEWEB)

    Max, N.; Crawfis, R.; Becker, B.

    1993-05-05

    At Lawrence Livermore National Laboratory (LLNL) we have recently developed several techniques for volume visualization of scalar and vector fields, all of which use back-to-front compositing. The first renders volume density clouds by compositing polyhedral volume cells or their faces. The second is a "splatting" scheme which composites textures used to reconstruct the scalar or vector fields. One version calculates the necessary texture values in software, and another takes advantage of hardware texture mapping. The next technique renders contour surface polygons using semi-transparent textures, which adjust appropriately when the surfaces deform in a flow, or change topology. The final one renders the "flow volume" of smoke or dye tracer swept out by a fluid flowing through a small generating polygon. All of these techniques are applied to a climate model data set, to visualize cloud density and wind velocity.

  12. KENO3D Visualization Tool for KENO V.a and KENO-VI Geometry Models

    International Nuclear Information System (INIS)

    Horwedel, J.E.; Bowman, S.M.

    2000-01-01

    Criticality safety analyses often require detailed modeling of complex geometries. Effective visualization tools can enhance checking the accuracy of these models. This report describes the KENO3D visualization tool developed at the Oak Ridge National Laboratory (ORNL) to provide visualization of KENO V.a and KENO-VI criticality safety models. The development of KENO3D is part of the current efforts to enhance the SCALE (Standardized Computer Analyses for Licensing Evaluations) computer software system

  13. 3D flow visualizations by means of laser beam sweeps

    International Nuclear Information System (INIS)

    Prenel, J.P.; Porcar, R.; Diemunsch, G.

    1987-01-01

    A method in which two-dimensional aperiodic or periodic sweeps are used to produce three-dimensional light sweeps makes possible the quasi-simultaneous recording of different specific planes of a flow, or the characterization of a fluid without revolution symmetry. The optical device consists of two scanners (whose axes are orthogonal) set into a telescope, allowing fine focusing of the light sheets in the study zone. The method also allows visualizations on skewed surfaces, particularly those of flows without a cylindrical geometry; it is applicable from low velocity, as in heat convection, to supersonic velocity, as in the analysis of a nonaxisymmetric ejector. 8 references

  14. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    Science.gov (United States)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control on an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing and human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer to deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities helping the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization, quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, the source code must have the following characteristics for enabling collaborative work: open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use in the case of 3-D astronomical data.

  15. Visualizing the process of interaction in a 3D environment

    Science.gov (United States)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Krishnan, Kajoli; Mullick, Rakesh

    2007-03-01

    As the imaging modalities used in medicine transition to increasingly three-dimensional data, the question of how best to interact with and analyze these data becomes ever more pressing. Immersive virtual reality systems seem to hold promise in tackling this, but how individuals learn and interact in these environments is not fully understood. Here we attempt to show some methods by which user interaction in a virtual reality environment can be visualized and how this can allow us to gain greater insight into the process of interaction/learning in these systems. Also explored is the possibility of using this method to improve understanding and management of ergonomic issues within an interface.

  16. Virtual inspector: a flexible visualizer for dense 3D scanned models

    OpenAIRE

    Callieri, Marco; Ponchio, Federico; Cignoni, Paolo; Scopigno, Roberto

    2008-01-01

    The rapid evolution of automatic shape acquisition technologies will make huge amounts of sampled 3D data available in the near future. The Cultural Heritage (CH) domain is one of the ideal fields of application of 3D scanned data, while some issues in the use of those data are: how to visualize at interactive rates and full quality on commodity computers; how to improve visualization ease of use; how to support the integrated visualization of a virtual 3D artwork and the multimedia data which t...

  17. Visualization of cranial nerves by MR cisternography using 3D FASE. Comparison with 2D FSE

    Energy Technology Data Exchange (ETDEWEB)

    Asakura, Hirofumi; Nakano, Satoru; Togami, Taro [Kagawa Medical School, Miki (Japan)] (and others)

    2001-03-01

    MR cisternography using 3D FASE was compared with that of 2D FSE in regard to visualization of normal cranial nerves. In a phantom study, contrast-to-noise ratio (C/N) of fine structures was better in 3D FASE images than in 2D FSE. In clinical cases, visualization of trigeminal nerve, abducent nerve, and facial/vestibulo-cochlear nerve were evaluated. Each cranial nerve was visualized better in 3D FASE images than in 2D FSE, with a significant difference (p<0.05). (author)

  18. Visualization research of 3D radiation field based on Delaunay triangulation

    International Nuclear Information System (INIS)

    Xie Changji; Chen Yuqing; Li Shiting; Zhu Bo

    2011-01-01

    Based on the characteristics of the three-dimensional partition, the triangulation of discrete data sets is improved by the method of point-by-point insertion. The discrete data for the radiation field, obtained by theoretical calculation or actual measurement, are restructured, and the continuous distribution of the radiation field data is obtained. Finally, the 3D virtual scene of the nuclear facilities is built with VR simulation techniques, and the visualization of the 3D radiation field is achieved by visualization mapping techniques. It is shown that the method combining VR and Delaunay triangulation could greatly improve the quality and efficiency of 3D radiation field visualization. (authors)
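
    As a small illustration of the interpolation idea (not the authors' point-by-point insertion code), the sketch below builds a Delaunay tetrahedralization of scattered dose-rate samples with SciPy and evaluates the resulting piecewise-linear field on a grid. The measurement positions and dose values are synthetic.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.spatial import Delaunay

# Hypothetical discrete dose-rate measurements: positions [m] and values.
points = np.random.rand(200, 3) * 10.0
dose = np.exp(-np.linalg.norm(points - 5.0, axis=1))   # synthetic dose rate

tri = Delaunay(points)                       # tetrahedralization of the sample points
interp = LinearNDInterpolator(tri, dose)     # continuous piecewise-linear field

# Query the field on a regular grid, e.g. to feed a volume-rendering pipeline.
grid = np.mgrid[0:10:20j, 0:10:20j, 0:10:20j].reshape(3, -1).T
field = interp(grid)                         # NaN outside the convex hull of the samples
```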

  19. Visualization of cranial nerves by MR cisternography using 3D FASE. Comparison with 2D FSE

    International Nuclear Information System (INIS)

    Asakura, Hirofumi; Nakano, Satoru; Togami, Taro

    2001-01-01

    MR cisternography using 3D FASE was compared with that of 2D FSE in regard to visualization of normal cranial nerves. In a phantom study, contrast-to-noise ratio (C/N) of fine structures was better in 3D FASE images than in 2D FSE. In clinical cases, visualization of trigeminal nerve, abducent nerve, and facial/vestibulo-cochlear nerve were evaluated. Each cranial nerve was visualized better in 3D FASE images than in 2D FSE, with a significant difference (p<0.05). (author)

  20. Methodology for the Efficient Progressive Distribution and Visualization of 3D Building Objects

    Directory of Open Access Journals (Sweden)

    Bo Mao

    2016-10-01

    Full Text Available Three-dimensional (3D) city models have been applied in a variety of fields. One of the main problems in 3D city model utilization, however, is the large volume of data. In this paper, a method is proposed to generalize the 3D building objects in 3D city models at different levels of detail, and to combine multiple Levels of Detail (LODs) for a progressive distribution and visualization of the city models. First, an extended structure for multiple LODs of building objects, BuildingTree, is introduced that supports both single buildings and building groups; second, constructive solid geometry (CSG) representations of buildings are created and generalized. Finally, the BuildingTree is stored in the NoSQL database MongoDB for dynamic visualization requests. The experimental results indicate that the proposed progressive method can efficiently visualize 3D city models, especially for large areas.
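
    The paper's exact BuildingTree schema is not given in the record, so the document layout below is an assumption; it only sketches the general pattern of keeping several LODs per building in MongoDB and fetching the coarse one first for progressive visualization.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
buildings = client["city_models"]["buildings"]     # hypothetical database/collection names

buildings.insert_one({
    "building_id": "B042",
    "group": "block-7",
    "lods": {
        # Coarse LOD: a simple CSG-style block; finer LOD: a full mesh.
        "lod1": {"type": "box", "footprint": [[0, 0], [20, 0], [20, 12], [0, 12]], "height": 15.0},
        "lod2": {"type": "mesh", "vertex_count": 1840},
    },
})

# A viewer would request progressively finer LODs as the camera approaches.
coarse = buildings.find_one({"building_id": "B042"}, {"lods.lod1": 1})
```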

  1. Indoor objects and outdoor urban scenes recognition by 3D visual primitives

    DEFF Research Database (Denmark)

    Fu, Junsheng; Kämäräinen, Joni-Kristian; Buch, Anders Glent

    2014-01-01

    , we propose an alternative appearance-driven approach which first extracts 2D primitives justified by Marr's primal sketch, which are "accumulated" over multiple views, and the most stable ones are "promoted" to 3D visual primitives. The 3D promoted primitives represent both structure and appearance

  2. Introduction of 3D Printing Technology in the Classroom for Visually Impaired Students

    Science.gov (United States)

    Jo, Wonjin; I, Jang Hee; Harianto, Rachel Ananda; So, Ji Hyun; Lee, Hyebin; Lee, Heon Ju; Moon, Myoung-Woon

    2016-01-01

    The authors investigate how 3D printing technology could be utilized for instructional materials that allow visually impaired students to have full access to high-quality instruction in history class. Researchers from the 3D Printing Group of the Korea Institute of Science and Technology (KIST) provided the Seoul National School for the Blind with…

  3. Savage Modeling and Analysis Language (SMAL): Metadata for Tactical Simulations and X3D Visualizations

    National Research Council Canada - National Science Library

    Rauch, Travis M

    2006-01-01

    Visualizing operations environments in three dimensions is in keeping with the military's drive to increase the speed and accuracy with which warfighters make decisions in the command center and in the field. Three-dimensional (3D...

  4. Matlab script for 3D visualizing geodata on a rotating globe

    Czech Academy of Sciences Publication Activity Database

    Bezděk, Aleš; Sebera, Josef

    2013-01-01

    Roč. 56, July (2013), s. 127-130 ISSN 0098-3004 Institutional support: RVO:67985815 Keywords : 3D visualization * geoid height * elevation mode Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 1.562, year: 2013

  5. Integration of Notification with 3D Visualization of Rover Operations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — 3D visualization has proven effective at orienting remote ground controllers about robots operating on a planetary surface. Using such displays, controllers can...

  6. 3D MODELLING AND VISUALIZATION BASED ON THE UNITY GAME ENGINE – ADVANTAGES AND CHALLENGES

    Directory of Open Access Journals (Sweden)

    I. Buyuksalih

    2017-11-01

    Full Text Available 3D city modelling is increasingly popular and is becoming a valuable tool in managing big cities. Urban and energy planning, landscape and noise/sewage modelling, underground mapping and navigation are among the applications/fields that really depend on 3D modelling for their effective operation. Several research areas and implementation projects have been carried out to provide the most reliable 3D data format for sharing and functionality, as well as a visualization and analysis platform. For instance, the BIMTAS company has recently completed a project to estimate potential solar energy on 3D buildings for the whole of Istanbul and is now focusing on 3D underground utility mapping for a pilot case study. The research and implementation standard in the 3D city model domain (3D data sharing and visualization schema) is based on the CityGML schema version 2.0. However, there are some limitations and issues in the implementation phase for large datasets. Most of the limitations were due to the visualization, database integration and analysis platform (the Unity3D game engine), as highlighted in this paper.

  7. How Spatial Abilities and Dynamic Visualizations Interplay When Learning Functional Anatomy with 3D Anatomical Models

    Science.gov (United States)

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material…

  8. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective.

    Science.gov (United States)

    Gillebert, Céline R; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T; Orban, Guy A; Vandenberghe, Rik

    2015-09-16

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial temporal system. We applied

  9. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective

    Science.gov (United States)

    Gillebert, Céline R.; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T.; Orban, Guy A.

    2015-01-01

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. SIGNIFICANCE STATEMENT Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial

  10. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    Science.gov (United States)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output abilities, data conversion capability from ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitudes measured by LiDAR, visualization of 2D/3D images in various processing steps, and automatic reconstruction of 3D city models. The performance and advantages of our graphical user interface (GUI) visualization system for LiDAR data are demonstrated.
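
    As a rough, hypothetical sketch of the conversion step described above, the snippet below rasterizes x, y, z points into a height image whose pixel value is the maximum altitude in each grid cell. The column layout, the cell size, and the use of synthetic points instead of a real ASCII file are assumptions for illustration, not the paper's IDL implementation.

```python
import numpy as np

# Hypothetical sketch: convert LiDAR points (x, y, z) into a height raster
# whose pixel value is the highest return falling in each grid cell.

def points_to_height_image(points, cell_size=1.0):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = ((x - x.min()) / cell_size).astype(int)
    rows = ((y.max() - y) / cell_size).astype(int)        # north-up image rows
    image = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, height in zip(rows, cols, z):
        if np.isnan(image[r, c]) or height > image[r, c]:
            image[r, c] = height                           # keep the highest return
    return image

# Synthetic demo points standing in for an ASCII "points.xyz" file.
pts = np.column_stack([np.random.uniform(0, 50, 1000),
                       np.random.uniform(0, 50, 1000),
                       np.random.uniform(0, 20, 1000)])
print(points_to_height_image(pts, cell_size=5.0).shape)
```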

  11. Development and application of visual support module for remote operator in 3D virtual environment

    International Nuclear Information System (INIS)

    Choi, Kyung Hyun; Cho, Soo Jeong; Yang, Kyung Boo; Bae, Chang Hyun

    2006-02-01

    In this research, a 3D graphic environment including a visual support module was developed for remote operation. The real operation environment was built by employing an experimental robot, and an identical virtual model was also developed. Well-designed virtual models can be used to retrieve the necessary conditions for developing the devices and processes. The integration of the 3D virtual models, the experimental operation environment, and the visual support module was used to evaluate operation efficiency and accuracy by applying different methods, such as using the monitor image only versus adding the visual support module.

  12. Development and application of visual support module for remote operator in 3D virtual environment

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Kyung Hyun; Cho, Soo Jeong; Yang, Kyung Boo [Cheju Nat. Univ., Jeju (Korea, Republic of); Bae, Chang Hyun [Pusan Nat. Univ., Busan (Korea, Republic of)

    2006-02-15

    In this research, a 3D graphic environment including a visual support module was developed for remote operation. The real operation environment was built by employing an experimental robot, and an identical virtual model was also developed. Well-designed virtual models can be used to retrieve the necessary conditions for developing the devices and processes. The integration of the 3D virtual models, the experimental operation environment, and the visual support module was used to evaluate operation efficiency and accuracy by applying different methods, such as using the monitor image only versus adding the visual support module.

  13. Visualizing measurement for 3D smooth density distributions by means of linear programming

    International Nuclear Information System (INIS)

    Tayama, Norio; Yang, Xue-dong

    1994-01-01

    This paper is concerned with the theoretical possibility of a new visualizing measurement method based on an optimum 3D reconstruction from a few selected projections. A theory of optimum 3D reconstruction by linear programming is discussed, utilizing a few projections of a sampled 3D smooth-density-distribution model that satisfies the condition of the 3D sampling theorem. First, by use of the sampling theorem, it is shown that we can set up simultaneous simple equations corresponding to the case of parallel beams. Then we solve the simultaneous simple equations by means of a linear programming algorithm, and we obtain an optimum 3D density distribution image with minimum reconstruction error. The results of a computer simulation with the algorithm are presented. (author)
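
    To make the reconstruction-by-linear-programming idea concrete, here is a minimal sketch that recovers a tiny density vector from a few projection sums by minimizing the total absolute residual with scipy's linprog. The 4-pixel "image", the projection matrix, and the L1 formulation are illustrative assumptions, not the paper's parallel-beam setup.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical sketch: recover densities x >= 0 from projection sums A @ x = b
# by minimizing the total absolute residual with linear programming.

x_true = np.array([1.0, 2.0, 0.5, 3.0])          # unknown "pixel" densities
A = np.array([[1, 1, 0, 0],                       # two row sums
              [0, 0, 1, 1],
              [1, 0, 1, 0],                       # two column sums
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)         # one diagonal sum (makes A full rank)
b = A @ x_true                                    # simulated projection measurements

n, m = A.shape[1], A.shape[0]
# Variables: [x (n), t (m)] with t_i >= |(A x - b)_i|; minimize sum(t).
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + m))
print("reconstructed densities:", np.round(res.x[:n], 3))
```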

  14. Shape Perception in 3-D Scatterplots Using Constant Visual Angle Glyphs

    DEFF Research Database (Denmark)

    Stenholt, Rasmus; Madsen, Claus B.

    2012-01-01

    When viewing 3-D scatterplots in immersive virtual environments, one commonly encountered problem is the presence of clutter, which obscures the view of any structures of interest in the visualization. In order to solve this problem, we propose to render the 3-D glyphs such that they always cover...... to regular perspective glyphs, especially when a large amount of clutter is present. Furthermore, our evaluation revealed that perception of structures in 3-D scatterplots is significantly affected by the volumetric density of the glyphs in the plot....

  15. vrmlgen: An R Package for 3D Data Visualization on the Web

    Directory of Open Access Journals (Sweden)

    Enrico Glaab

    2010-10-01

    Full Text Available The 3-dimensional representation and inspection of complex data is a frequently used strategy in many data analysis domains. Existing data mining software often lacks functionality that would enable users to explore 3D data interactively, especially if one wishes to make dynamic graphical representations directly viewable on the web. In this paper we present vrmlgen, a software package for the statistical programming language R to create 3D data visualizations in web formats like the Virtual Reality Markup Language (VRML) and LiveGraphics3D. vrmlgen can be used to generate 3D charts and bar plots, scatter plots with density estimation contour surfaces, and visualizations of height maps, 3D object models and parametric functions. For greater flexibility, the user can also access low-level plotting methods through a unified interface and freely group different function calls together to create new higher-level plotting methods. Additionally, we present a web tool allowing users to visualize 3D data online and test some of vrmlgen's features without the need to install any software on their computer.

  16. Characteristics of visual fatigue under the effect of 3D animation.

    Science.gov (United States)

    Chang, Yu-Shuo; Hsueh, Ya-Hsin; Tung, Kwong-Chung; Jhou, Fong-Yi; Lin, David Pei-Cheng

    2015-01-01

    Visual fatigue is commonly encountered in modern life. The clinical characteristics of visual fatigue caused by 2-D and 3-D animations may be different, but have not been characterized in detail. This study tried to distinguish the differential effects on visual fatigue caused by 2-D and 3-D animations. A total of 23 volunteers were subjected to accommodation and vergence assessments, followed by a 40-min video game program designed to aggravate their asthenopic symptoms. The volunteers were then assessed again for accommodation and vergence parameters, directed to watch a 5-min 3-D video program, and then assessed for the parameters once more. The results support that 3-D animations cause similar characteristics in visual fatigue parameters in some specific aspects as compared to those caused by 2-D animations. Furthermore, 3-D animations may lead to more exhaustion of both the ciliary and extra-ocular muscles, and such differential effects were more evident under a high demand for near-vision work. The current results indicate that a set of indexes may be promoted in the design of 3-D displays or equipment.

  17. Interactive 3D visualization of structural changes in the brain of a person with corticobasal syndrome

    Directory of Open Access Journals (Sweden)

    Claudia eHänel

    2014-05-01

    Full Text Available The visualization of the progression of brain tissue loss, which occurs in neurodegenerative diseases like corticobasal syndrome (CBS), is an important prerequisite for understanding the course and the causes of this neurodegenerative disorder. Common workflows for visual analysis are often based on single 2D sections, since in 3D visualizations more internally situated structures may be occluded by structures near the surface. The reduction of dimensions from 3D to 2D allows for a holistic view of internal and external structures, but results in a loss of spatial information. Here, we present an application with two 3D visualization designs to resolve these challenges. First, in addition to the volume changes, the semi-transparent anatomy is displayed with an anatomical section and cortical areas for spatial orientation. Second, the principle of importance-driven volume rendering is adapted to give an unrestricted line of sight to relevant structures by means of a frustum-like cutout. To strengthen the benefits of the 3D visualization, we decided to provide the application not only for standard desktop environments but also for immersive virtual environments with stereoscopic viewing. This improves depth perception in general and in particular for the second design. Thus, the application presented in this work allows for an easily comprehensible visual analysis of the extent of brain degeneration and the corresponding affected regions.

  18. High-level face shape adaptation depends on visual awareness : Evidence from continuous flash suppression

    NARCIS (Netherlands)

    Stein, T.; Sterzer, P.

    When incompatible images are presented to the two eyes, one image dominates awareness while the other is rendered invisible by interocular suppression. It has remained unclear whether complex visual information can reach high-level processing stages in the ventral visual pathway during such

  19. Visualization of the lower cranial nerves by 3D-FIESTA

    International Nuclear Information System (INIS)

    Okumura, Yusuke; Suzuki, Masayuki; Takemura, Akihiro; Tsujii, Hideo; Kawahara, Kazuhiro; Matsuura, Yukihiro; Takada, Tadanori

    2005-01-01

    MR cisternography has been introduced for use in neuroradiology. This method is capable of visualizing tiny structures such as blood vessels and cranial nerves in the cerebrospinal fluid (CSF) space because of its superior contrast resolution. The cranial nerves and small vessels are shown as structures of low intensity surrounded by the marked hyperintensity of the CSF. In the present study, we evaluated visualization of the lower cranial nerves (glossopharyngeal, vagus, and accessory) by the three-dimensional fast imaging employing steady-state acquisition (3D-FIESTA) sequence and the multiplanar reformation (MPR) technique. The subjects were 8 men and 3 women, ranging in age from 21 to 76 years (average, 54 years). We examined the visualization of a total of 66 nerves in 11 subjects by 3D-FIESTA. The results were classified into four categories ranging from good visualization to non-visualization. In all cases, all glossopharyngeal and vagus nerves were identified to some extent, while accessory nerves were visualized either partially or entirely in only 16 cases. The total visualization rate was about 91%. In conclusion, 3D-FIESTA may be a useful method for visualization of the lower cranial nerves. (author)

  20. Interactive WebGL-based 3D visualizations for EAST experiment

    International Nuclear Information System (INIS)

    Xia, J.Y.; Xiao, B.J.; Li, Dan; Wang, K.R.

    2016-01-01

    Highlights: • Developing a user-friendly interface to visualize the EAST experimental data and the device is important to scientists and engineers. • The Web3D visualization system is based on HTML5 and WebGL, which runs without the need for plug-ins or third-party components. • The interactive WebGL-based 3D visualization system is a web portal integrating EAST 3D models, experimental data and plasma videos. • The original CAD model was discretized into different layers with different levels of simplification to enable realistic rendering and improve performance. - Abstract: In recent years EAST (Experimental Advanced Superconducting Tokamak) experimental data are being shared and analyzed by an increasing number of international collaborators. Developing a user-friendly interface to visualize the data, metadata and the relevant parts of the device is becoming more and more important to aid scientists and engineers. Compared with the previous virtual EAST system based on VRML/Java3D [1] (Li et al., 2014), a new technology is being adopted to create a 3D visualization system based on HTML5 and WebGL, which runs without the need for plug-ins or third-party components. The interactive WebGL-based 3D visualization system is a web portal integrating EAST 3D models, experimental data and plasma videos. It offers a highly interactive interface allowing scientists to roam inside the EAST device and view the complex 3D structure of the machine. It includes technical details of the device and various diagnostic components, and provides visualization of diagnostic metadata with a direct link to each signal name and its stored data. In order to allow quick access to the device 3D model, the original CAD model was discretized into different layers with different levels of simplification. It allows users to search for plasma videos in any experiment and analyze the video frame by frame. In this paper, we present the implementation details used to enable realistic rendering and improve performance.

  1. Interactive WebGL-based 3D visualizations for EAST experiment

    Energy Technology Data Exchange (ETDEWEB)

    Xia, J.Y., E-mail: jyxia@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Xiao, B.J. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Li, Dan [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Wang, K.R. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China)

    2016-11-15

    Highlights: • Developing a user-friendly interface to visualize the EAST experimental data and the device is important to scientists and engineers. • The Web3D visualization system is based on HTML5 and WebGL, which runs without the need for plug-ins or third-party components. • The interactive WebGL-based 3D visualization system is a web portal integrating EAST 3D models, experimental data and plasma videos. • The original CAD model was discretized into different layers with different levels of simplification to enable realistic rendering and improve performance. - Abstract: In recent years EAST (Experimental Advanced Superconducting Tokamak) experimental data are being shared and analyzed by an increasing number of international collaborators. Developing a user-friendly interface to visualize the data, metadata and the relevant parts of the device is becoming more and more important to aid scientists and engineers. Compared with the previous virtual EAST system based on VRML/Java3D [1] (Li et al., 2014), a new technology is being adopted to create a 3D visualization system based on HTML5 and WebGL, which runs without the need for plug-ins or third-party components. The interactive WebGL-based 3D visualization system is a web portal integrating EAST 3D models, experimental data and plasma videos. It offers a highly interactive interface allowing scientists to roam inside the EAST device and view the complex 3D structure of the machine. It includes technical details of the device and various diagnostic components, and provides visualization of diagnostic metadata with a direct link to each signal name and its stored data. In order to allow quick access to the device 3D model, the original CAD model was discretized into different layers with different levels of simplification. It allows users to search for plasma videos in any experiment and analyze the video frame by frame. In this paper, we present the implementation details used to enable realistic rendering and improve performance.

  2. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    Directory of Open Access Journals (Sweden)

    Richard Chiou

    2010-06-01

    Full Text Available This paper discusses a real-time e-Lab learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-dimensional scheme for viewing the robotic laboratory has been introduced in addition to remote control of the robots. The uniqueness of the project lies in making this process Internet-based, with the robot operated remotely and visualized in 3D. This 3D system approach provides the students with a more realistic feel of the 3D robotic laboratory even though they are working remotely. As a result, the 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback from most of the students. This type of research has introduced a new level of realism and visual communication to online laboratory learning in a remote classroom.

  3. Discovering hidden relationships between renal diseases and regulated genes through 3D network visualizations

    Directory of Open Access Journals (Sweden)

    Bhavnani Suresh K

    2010-11-01

    Full Text Available Abstract Background In a recent study, two-dimensional (2D) network layouts were used to visualize and quantitatively analyze the relationship between chronic renal diseases and regulated genes. The results revealed complex relationships between disease type, gene specificity, and gene regulation type, which led to important insights about the underlying biological pathways. Here we describe an attempt to extend our understanding of these complex relationships by reanalyzing the data using three-dimensional (3D) network layouts, displayed through 2D and 3D viewing methods. Findings The 3D network layout (displayed through the 3D viewing method) revealed that genes implicated in many diseases (non-specific genes) tended to be predominantly down-regulated, whereas genes regulated in a few diseases (disease-specific genes) tended to be up-regulated. This new global relationship was quantitatively validated through comparison to 1000 random permutations of networks of the same size and distribution. Our new finding appeared to be the result of using specific features of the 3D viewing method to analyze the 3D renal network. Conclusions The global relationship between gene regulation and gene specificity is the first clue from human studies that there exist common mechanisms across several renal diseases, which suggest hypotheses for the underlying mechanisms. Furthermore, the study suggests hypotheses for why the 3D visualization helped to make salient a new regularity that was difficult to detect in 2D. Future research that tests these hypotheses should enable a more systematic understanding of when and how to use 3D network visualizations to reveal complex regularities in biological networks.

  4. 3D Printing Meets Astrophysics: A New Way to Visualize and Communicate Science

    Science.gov (United States)

    Madura, Thomas Ignatius; Steffen, Wolfgang; Clementel, Nicola; Gull, Theodore R.

    2015-08-01

    3D printing has the potential to improve the astronomy community’s ability to visualize, understand, interpret, and communicate important scientific results. I summarize recent efforts to use 3D printing to understand in detail the 3D structure of a complex astrophysical system, the supermassive binary star Eta Carinae and its surrounding bipolar ‘Homunculus’ nebula. Using mapping observations of molecular hydrogen line emission obtained with the ESO Very Large Telescope, we obtained a full 3D model of the Homunculus, allowing us to 3D print, for the first time, a detailed replica of a nebula (Steffen et al. 2014, MNRAS, 442, 3316). I also present 3D prints of output from supercomputer simulations of the colliding stellar winds in the highly eccentric binary located near the center of the Homunculus (Madura et al. 2015, arXiv:1503.00716). These 3D prints, the first of their kind, reveal previously unknown ‘finger-like’ structures at orbital phases shortly after periastron (when the two stars are closest to each other) that protrude outward from the spiral wind-wind collision region. The results of both efforts have received significant media attention in recent months, including two NASA press releases (http://www.nasa.gov/content/goddard/astronomers-bring-the-third-dimension-to-a-doomed-stars-outburst/ and http://www.nasa.gov/content/goddard/nasa-observatories-take-an-unprecedented-look-into-superstar-eta-carinae/), demonstrating the potential of using 3D printing for astronomy outreach and education. Perhaps more importantly, 3D printing makes it possible to bring the wonders of astronomy to new, often neglected, audiences, i.e. the blind and visually impaired.

  5. Human tooth pulp anatomy visualization by 3D magnetic resonance microscopy

    International Nuclear Information System (INIS)

    Sustercic, Dusan; Sersa, Igor

    2012-01-01

    Precise assessment of dental pulp anatomy is of extreme importance for a successful endodontic treatment. As standard radiographs of teeth provide very limited information on dental pulp anatomy, more capable methods are highly desirable. One of these is 3D magnetic resonance (MR) microscopy, whose diagnostic capabilities for improved dental pulp anatomy assessment were evaluated in this study. Twenty extracted human teeth were scanned on a 2.35 T MRI system for MR microscopy using the 3D spin-echo method, which enabled image acquisition with an isotropic resolution of 100 μm. The 3D images were then post-processed with the ImageJ program (NIH) to obtain advanced volume-rendered views of dental pulps. MR microscopy at 2.35 T provided accurate data on dental pulp anatomy in vitro. The data were presented as a sequence of thin 2D slices through the pulp in various orientations or as volume-rendered 3D images reconstructed from arbitrary viewpoints. Sequential 2D images enabled only an approximate assessment of the pulp, while volume-rendered 3D images were more precise in visualization of pulp anatomy and clearly showed pulp diverticles, the number of pulp canals and root canal anastomoses. This in vitro study demonstrated that MR microscopy can provide very accurate 3D visualization of dental pulp anatomy. A possible future application of the method in vivo may be of great importance for endodontic treatment

  6. Memory and visual search in naturalistic 2D and 3D environments.

    Science.gov (United States)

    Li, Chia-Ling; Aivar, M Pilar; Kit, Dmitry M; Tong, Matthew H; Hayhoe, Mary M

    2016-06-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.

  7. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    Science.gov (United States)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.

  8. 3D visualization of geo-scientific data for research and development purposes

    International Nuclear Information System (INIS)

    Mangeot, A.; Tabani, P.; Yven, B.; Dewonck, S.; Napier, B.; Waston, C.J.; Baker, G.R.; Shaw, R.P.

    2012-01-01

    Document available in extended abstract form only. In recent years national geoscience organizations have increasingly utilized 3D model data as an output to the stakeholder community. Advances in both software and hardware have led to an increasing use of 3D depictions of geoscience data alongside standard 2D data formats such as maps and GIS data. By characterizing geoscience data in 3D, knowledge transfer between geo-scientists and stakeholders is improved, as the mindset and thought processes are communicated more effectively in a 3D model than in a 2D flat-file format. 3D models allow the user to understand the conceptual basis of the 2D data and aid the decision-making process at local, regional and national scales. On 29 April 2009 a Memorandum of Understanding was signed between BGS and Andra in order to provide an improved mechanism for technical cooperation and collaboration in the Earth sciences. A specific agreement was signed on 1 December 2009 to evaluate the capacity of a 3D software package called GeoVisionary to represent the Underground Research Laboratory and its environment. GeoVisionary is the result of a collaboration between Virtalis and the British Geological Survey. Combining a powerful data engine with a virtual geological tool-kit enables geo-scientists to visualize, analyze and share large datasets seamlessly in an immersive, real-time environment. A typical GeoVisionary environment contains one or more of the following: 3D terrain files, aerial photography, bitmap overlays of specialized data, vector shapes and outlines, and 3D object models. The key benefits are: continuous streaming of geometry and photography in real time, visualization of 2D GIS data in immersive 3D stereo, diverse datasets in a single environment, the ability to 'fly' to any part of the data in seconds, infinite scalability, preparation and evaluation before fieldwork begins, enhanced team-working and increased efficiency of field operations, and clearer communication of results. Now, the 3D model has been

  9. MRI segmentation by active contours model, 3D reconstruction, and visualization

    Science.gov (United States)

    Lopez-Hernandez, Juan M.; Velasquez-Aguilar, J. Guadalupe

    2005-02-01

    Advances in 3D data modelling methods are becoming increasingly popular in the areas of biology, chemistry and medical applications. The Nuclear Magnetic Resonance Imaging (NMRI) technique has progressed at a spectacular rate over the past few years; its uses span many applications throughout the body in both anatomical and functional investigations. In this paper we present the application of Zernike polynomials to a 3D mesh model of the head, using contours extracted from cross-sectional slices by an active contour model, and we propose visualization with OpenGL 3D graphics of the 2D-3D (slice-surface) information as a diagnostic aid in medical applications.
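
    The snippet below is a minimal, hypothetical sketch of the per-slice contour extraction step with an active contour ("snake") from scikit-image. The synthetic bright disk stands in for head tissue in one MRI cross-section, and all parameter values are illustrative rather than the paper's settings.

```python
import numpy as np
from skimage.draw import disk
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Hypothetical sketch: fit a snake to one synthetic "slice"; the resulting
# contour points are the kind of input a 3D mesh model could be built from.

image = np.zeros((200, 200))
rr, cc = disk((100, 100), 60)
image[rr, cc] = 1.0
image = gaussian(image, sigma=3)                   # soften the object boundary

theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 80 * np.sin(theta),  # initial circle (row, col)
                        100 + 80 * np.cos(theta)])

snake = active_contour(image, init, alpha=0.015, beta=10, gamma=0.001)
print("contour points:", snake.shape)              # (200, 2) slice outline
```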

  10. 3D-visualization by MRI for surgical planning of Wilms tumors

    International Nuclear Information System (INIS)

    Schenk, J.P.; Wunsch, R.; Jourdan, C.; Troeger, J.; Waag, K.-L.; Guenther, P.; Graf, N.; Behnisch, W.

    2004-01-01

    Purpose: To improve surgical planning of kidney tumors in childhood (Wilms tumor, mesoblastic nephroma) after radiologic verification of the presumptive diagnosis, using interactive colored 3D-animation in MRI. Materials and Methods: In 7 children (1 boy, 6 girls) with a mean age of 3 years (1 month to 11 years), the MRI database (DICOM) was processed with raycasting-based 3D-volume-rendering software (VG Studio Max 1.1/Volume Graphics). The abdominal MRI sequences (coronal STIR, coronal T1 TSE, transverse T1/T2 TSE, sagittal T2 TSE, transverse and coronal T1 TSE post contrast) were obtained with a 0.5T unit in 4-6 mm slices. Additionally, phase-contrast MR angiography was applied to delineate the large abdominal and retroperitoneal vessels. A notebook was used to demonstrate the 3D-visualization for surgical planning before surgery and during the surgical procedure. Results: In all 7 cases, the surgical approach was influenced by the interactive 3D-animation, and the information was found useful for surgical planning. Above all, the 3D-visualization demonstrates the mass effect of the Wilms tumor and its anatomical relationship to the renal hilum and to the rest of the kidney, as well as the topographic relationship of the tumor to the critical vessels. One rupture of the tumor capsule occurred as a surgical complication. For the surgeon, the transformation of the anatomical situation from MRI to the surgical situs has become much easier. Conclusion: For surgical planning of Wilms tumors, the 3D-visualization with 3D-animation of the situs helps to transfer important information from the pediatric radiologist to the pediatric surgeon and optimizes the surgical preparation. A reduction of complications is to be expected. (orig.)

  11. [3D-visualization by MRI for surgical planning of Wilms tumors].

    Science.gov (United States)

    Schenk, J P; Waag, K-L; Graf, N; Wunsch, R; Jourdan, C; Behnisch, W; Tröger, J; Günther, P

    2004-10-01

    To improve surgical planning of kidney tumors in childhood (Wilms tumor, mesoblastic nephroma) after radiologic verification of the presumptive diagnosis, using interactive colored 3D-animation in MRI. In 7 children (1 boy, 6 girls) with a mean age of 3 years (1 month to 11 years), the MRI database (DICOM) was processed with raycasting-based 3D-volume-rendering software (VG Studio Max 1.1/Volume Graphics). The abdominal MRI sequences (coronal STIR, coronal T1 TSE, transverse T1/T2 TSE, sagittal T2 TSE, transverse and coronal T1 TSE post contrast) were obtained with a 0.5T unit in 4 - 6 mm slices. Additionally, phase-contrast MR angiography was applied to delineate the large abdominal and retroperitoneal vessels. A notebook was used to demonstrate the 3D-visualization for surgical planning before surgery and during the surgical procedure. In all 7 cases, the surgical approach was influenced by the interactive 3D-animation, and the information was found useful for surgical planning. Above all, the 3D-visualization demonstrates the mass effect of the Wilms tumor and its anatomical relationship to the renal hilum and to the rest of the kidney, as well as the topographic relationship of the tumor to the critical vessels. One rupture of the tumor capsule occurred as a surgical complication. For the surgeon, the transformation of the anatomical situation from MRI to the surgical situs has become much easier. For surgical planning of Wilms tumors, the 3D-visualization with 3D-animation of the situs helps to transfer important information from the pediatric radiologist to the pediatric surgeon and optimizes the surgical preparation. A reduction of complications is to be expected.
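
    As a rough stand-in for the raycasting-based volume rendering mentioned above, the sketch below computes a maximum intensity projection (MIP), the simplest ray-casting-style view of a 3D volume. The synthetic volume with one bright ellipsoidal "mass" is purely illustrative; it is not the VG Studio Max pipeline, and real use would load DICOM slices instead.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sketch: a maximum intensity projection of a synthetic 3D volume,
# i.e. along each viewing ray only the brightest voxel is kept.

z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = np.exp(-(((x - 40) / 8.0) ** 2 +         # bright ellipsoidal "tumor"
                  ((y - 30) / 10.0) ** 2 +
                  ((z - 32) / 6.0) ** 2))
volume += 0.2 * np.random.rand(*volume.shape)      # background tissue noise

mip = volume.max(axis=0)                            # project along the axial direction
plt.imshow(mip, cmap="gray")
plt.title("Maximum intensity projection")
plt.show()
```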

  12. Mental practice with interactive 3D visual aids enhances surgical performance.

    Science.gov (United States)

    Yiasemidou, Marina; Glassman, Daniel; Mushtaq, Faisal; Athanasiou, Christos; Williams, Mark-Mon; Jayne, David; Miskovic, Danilo

    2017-10-01

    Evidence suggests that Mental Practice (MP) could be used to finesse surgical skills. However, MP is cognitively demanding and may be dependent on the ability of individuals to produce mental images. In this study, we hypothesised that the provision of interactive 3D visual aids during MP could facilitate surgical skill performance. 20 surgical trainees were case-matched to one of three different preparation methods prior to performing a simulated Laparoscopic Cholecystectomy (LC). Two intervention groups underwent a 25-minute MP session; one with interactive 3D visual aids depicting the relevant surgical anatomy (3D-MP group, n = 5) and one without (MP-Only, n = 5). A control group (n = 10) watched a didactic video of a real LC. Scores relating to technical performance and safety were recorded by a surgical simulator. The Control group took longer to complete the procedure relative to the 3D&MP condition (p = .002). The number of movements was also statistically different across groups (p = .001), with the 3D&MP group making fewer movements relative to controls (p = .001). Likewise, the control group moved further in comparison to the 3D&MP condition and the MP-Only condition (p = .004). No reliable differences were observed for safety metrics. These data provide evidence for the potential value of MP in improving performance. Furthermore, they suggest that 3D interactive visual aids during MP could potentially enhance performance, beyond the benefits of MP alone. These findings pave the way for future RCTs on surgical preparation and performance.

  13. Exploratory Climate Data Visualization and Analysis Using DV3D and UVCDAT

    Science.gov (United States)

    Maxwell, Thomas

    2012-01-01

    Earth system scientists are being inundated by an explosion of data generated by ever-increasing resolution in both global models and remote sensors. Advanced tools for accessing, analyzing, and visualizing very large and complex climate data are required to maintain rapid progress in Earth system research. To meet this need, NASA, in collaboration with the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) consortium, is developing exploratory climate data analysis and visualization tools which provide data analysis capabilities for the Earth System Grid (ESG). This paper describes DV3D, a UV-CDAT package that enables exploratory analysis of climate simulation and observation datasets. DV3D provides user-friendly interfaces for visualization and analysis of climate data at a level appropriate for scientists. It features workflow interfaces, interactive 4D data exploration, hyperwall and stereo visualization, automated provenance generation, and parallel task execution. DV3D's integration with CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of high-performance climate data analysis operations. DV3D expands the scientists' toolbox by incorporating a suite of rich new exploratory visualization and analysis methods for addressing the complexity of climate datasets.
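
    For orientation, the sketch below shows the kind of slice-and-plot exploration such tools automate, using plain numpy/matplotlib rather than the DV3D or UV-CDAT APIs. The synthetic (time, lat, lon) temperature field and the zonal-mean plot are illustrative assumptions standing in for a real ESG model output file.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-in for an exploratory climate-data slice: average a
# (time, lat, lon) temperature field over longitude and plot the zonal mean.

lat = np.linspace(-90, 90, 73)
lon = np.linspace(0, 357.5, 144)
time = np.arange(120)                                        # 120 monthly steps
temp = (30 * np.cos(np.deg2rad(lat))[None, :, None]          # warm tropics
        + 2 * np.sin(2 * np.pi * time / 12)[:, None, None]   # seasonal cycle
        + np.random.randn(time.size, lat.size, lon.size))    # weather noise

zonal_mean = temp.mean(axis=2)                               # average over longitude
plt.contourf(time, lat, zonal_mean.T, levels=20)
plt.xlabel("month"); plt.ylabel("latitude"); plt.title("Zonal-mean temperature")
plt.colorbar(label="degC")
plt.show()
```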

  14. Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

    Science.gov (United States)

    Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

    2013-02-01

    3D microscopy images contain an astronomical amount of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To solve these problems, many people crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level, e.g., the selected ROI strongly depends on the user and there is a loss of original image information. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides various efficient automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images. Users can select the algorithm to be applied. Further, the image processing tool provides visualization of segmented volume data and can set the scale, translation, etc. using a keyboard and mouse. However, even the 3D objects that are visualized quickly still need to be analyzed to obtain information for biologists. To analyze 3D microscopic images, we need quantitative data from the images. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object. This information can be used as classification features. A user can select the object to be analyzed. Our tool allows the selected object to be displayed in a new window, and hence, more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
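
    A minimal sketch of the processing steps described above follows: automatic (Otsu) thresholding of a 3D stack, connected-component labeling, and per-object quantitative measurements. This is CPU-side scikit-image code illustrating the method, not the paper's GPU kernels, and the synthetic two-blob volume is an assumption standing in for a real microscopy stack.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

# Hypothetical sketch: intensity-based segmentation of a 3D stack followed by
# labeling and simple per-object quantification.

volume = np.random.rand(64, 64, 64) * 0.2             # background noise
volume[10:25, 10:25, 10:25] += 0.8                     # synthetic "cell" 1
volume[40:55, 30:50, 20:35] += 0.7                     # synthetic "cell" 2

mask = volume > threshold_otsu(volume)                 # automatic global threshold
labels = label(mask)                                   # connected-component labeling

for region in regionprops(labels):                     # quantitative data per object
    print(f"object {region.label}: {region.area} voxels, centroid {region.centroid}")
```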

  15. 3D visualization of a resistivity data set - an example from a sludge disposal site

    International Nuclear Information System (INIS)

    Bernstone, C.; Dahlin, T.; Jonsson, P.

    1997-01-01

    A relatively large 2D inverted CVES resistivity data set from a waste pond area in southern Sweden was visualized as an animated 3D model using state-of-the-art techniques and tools. The presentation includes a description of the hardware and software used, an outline of the case study, and examples of scenes from the animation

  16. Putting it in perspective: designing a 3D visualization to contextualize indigenous knowledge in rural Namibia

    DEFF Research Database (Denmark)

    Jensen, Kasper L; Winschiers-Theophilus, Heike; Rodil, Kasper

    2012-01-01

    As part of a long-term research and co-design project we are creating a 3D visualization interface for an indigenous knowledge (IK) management system with rural dwellers of the Herero tribe in Namibia. Evaluations of earlier prototypes and theories on cultural differences in perception led us...

  17. Proteopedia: 3D Visualization and Annotation of Transcription Factor-DNA Readout Modes

    Science.gov (United States)

    Dantas Machado, Ana Carolina; Saleebyan, Skyler B.; Holmes, Bailey T.; Karelina, Maria; Tam, Julia; Kim, Sharon Y.; Kim, Keziah H.; Dror, Iris; Hodis, Eran; Martz, Eric; Compeau, Patricia A.; Rohs, Remo

    2012-01-01

    3D visualization assists in identifying diverse mechanisms of protein-DNA recognition that can be observed for transcription factors and other DNA binding proteins. We used Proteopedia to illustrate transcription factor-DNA readout modes with a focus on DNA shape, which can be a function of either nucleotide sequence (Hox proteins) or base pairing…

  18. 3D MODELLING AND INTERACTIVE WEB-BASED VISUALIZATION OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    M. N. Koeva

    2016-06-01

    Full Text Available Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that are useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects in the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria – a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding the principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in the interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. This

  19. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery

    Directory of Open Access Journals (Sweden)

    Teresa eSollfrank

    2015-08-01

    Full Text Available A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during motor imagery. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronisation (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices, thereby potentially improving MI based BCI protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb motor imagery present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (2D vs. 3D). The largest upper alpha band power decrease was obtained during motor imagery after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D visualization modality group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during MI. Realistic visual feedback, consistent with the participant's motor imagery, might be helpful for accomplishing successful motor imagery, and the use of such feedback may assist in making BCI a more natural interface for motor imagery based BCI rehabilitation.

  20. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    Energy Technology Data Exchange (ETDEWEB)

    Data Analysis and Visualization (IDAV) and the Department of Computer Science, University of California, Davis, One Shields Avenue, Davis CA 95616, USA; International Research Training Group "Visualization of Large and Unstructured Data Sets," University of Kaiserslautern, Germany; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Genomics Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley CA 94720, USA; Life Sciences Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley CA 94720, USA; Computer Science Division, University of California, Berkeley, CA, USA; Computer Science Department, University of California, Irvine, CA, USA; All authors are with the Berkeley Drosophila Transcription Network Project, Lawrence Berkeley National Laboratory; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Biggin, Mark D.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; Keranen, Soile V. E.; Eisen, Michael B.; Knowles, David W.; Malik, Jitendra; Hagen, Hans; Hamann, Bernd

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
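
    To make the clustering step concrete, here is a minimal, hypothetical sketch: cells described by expression profiles are grouped with k-means, and the spatial extent of each cluster is then summarized. The synthetic table of (x, y, z) cell positions and two "gene" columns is illustrative only; it is not the framework's PointCloud data format or its clustering workflow.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical sketch: cluster cells by expression, then inspect where each
# cluster sits in space, mimicking the pattern-boundary analysis described above.

rng = np.random.default_rng(0)
n_cells = 500
positions = rng.uniform(0, 100, size=(n_cells, 3))           # x, y, z per cell
gene_a = np.where(positions[:, 0] < 50, 1.0, 0.1)             # anterior "stripe"
gene_b = np.where(positions[:, 2] > 60, 1.0, 0.1)             # dorsal "stripe"
expression = np.column_stack([gene_a, gene_b]) + 0.05 * rng.normal(size=(n_cells, 2))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(expression)
for k in range(4):                                             # spatial summary per cluster
    members = positions[kmeans.labels_ == k]
    print(f"cluster {k}: {len(members)} cells, mean position {members.mean(axis=0).round(1)}")
```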

  1. On 3D Geo-visualization of a Mine Surface Plant and Mine Roadway

    Institute of Scientific and Technical Information of China (English)

    WANG Yunjia; FU Yongming; FU Erjiang

    2007-01-01

    Constructing a 3D virtual scene of a coal mine is an objective requirement for modernizing information processing in coal mining production. It is also the key technology for establishing a "digital mine". By exploring current worldwide research, software and hardware tools and application demands, combined with the case study site (the Dazhuang mine of the Pingdingshan coal group), an approach for 3D geo-visualization of a mine surface plant and mine roadway is discussed in depth. In this study, a rapid modeling method for a large-range virtual scene based on Arc/Info and SiteBuilder3D is studied, and automatic generation of a 3D scene from a 2D scene is realized. Such an automatic method, which can convert mine roadway systems from 2D to 3D, is realized for the Dazhuang mine. Some relevant application questions are studied, including attribute query, coordinate query, distance measurement, collision detection and the dynamic interaction between 2D and 3D virtual scenes in the virtual scene of a mine surface plant and mine roadway. A prototype system is designed and developed.

  2. Intuitive Visualization of Transient Flow: Towards a Full 3D Tool

    Science.gov (United States)

    Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph

    2015-04-01

    Visualization of geoscientific data is a challenging task, especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. With STRING, Fraunhofer ITWM (Kaiserslautern, Germany), in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany), developed commercial software for the intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization, experts can more easily convey their findings to non-professional audiences. In STRING, pathlets moving with the flow provide an intuition of the velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow, which means that the pathlets' movement follows the direction given by pathlines. In order to capture every detail of the flow, an advanced method for intelligent, time-dependent seeding of the pathlets is implemented, based on ideas of the Finite Pointset Method (FPM) originally conceived at and continuously developed by Fraunhofer ITWM. Furthermore, by the same method pathlets are removed during the visualization to avoid visual cluttering. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly to the pathlets or displayed in the background of the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focuses on the graphical presentation of flow data for non-professional audiences, its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for the extension to a full 3D tool
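
    The following sketch illustrates the Lagrangian idea behind pathlets in its simplest form: seed points are advected through a time-dependent velocity field with explicit Euler steps, so each short trail follows a pathline. The rotating synthetic field, step sizes, and seeding are illustrative assumptions; STRING's FPM-based seeding and removal logic is not reproduced here.

```python
import numpy as np

# Hypothetical sketch: advect pathlet seed points through a time-dependent
# 2D velocity field; the stored trail of each seed approximates its pathline.

def velocity(x, y, t):
    # Solid-body rotation whose speed oscillates in time (a transient field).
    speed = 1.0 + 0.5 * np.sin(t)
    return -speed * y, speed * x

seeds = np.random.uniform(-1, 1, size=(50, 2))      # initial pathlet positions
dt, n_steps = 0.05, 100
trails = [seeds.copy()]

pos = seeds.copy()
for step in range(n_steps):
    t = step * dt
    u, v = velocity(pos[:, 0], pos[:, 1], t)
    pos = pos + dt * np.column_stack([u, v])         # Euler step along the pathline
    trails.append(pos.copy())

trails = np.stack(trails)                            # (n_steps + 1, 50, 2) trajectories
print("trajectory array:", trails.shape)
```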

  3. An experiment of a 3D real-time robust visual odometry for intelligent vehicles

    OpenAIRE

    Rodriguez Florez , Sergio Alberto; Fremont , Vincent; Bonnifait , Philippe

    2009-01-01

    International audience; Vision systems are nowadays very promising for many on-board vehicles perception functionalities, like obstacles detection/recognition and ego-localization. In this paper, we present a 3D visual odometric method that uses a stereo-vision system to estimate the 3D ego-motion of a vehicle in outdoor road conditions. In order to run in real-time, the studied technique is sparse meaning that it makes use of feature points that are tracked during several frames. A robust sc...

  4. Visualization of the variability of 3D statistical shape models by animation.

    Science.gov (United States)

    Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter

    2004-01-01

    Models of the 3D shape of anatomical objects and the knowledge about their statistical variability are of great benefit in many computer-assisted medical applications like image analysis, therapy or surgery planning. Statistical models of shape have successfully been applied to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points on two shapes. This remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate the variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.
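
    Statistical shape models of this kind are commonly point-distribution models: once correspondences are found (the hard part the record highlights), PCA of the stacked landmark coordinates yields a mean shape and variation modes whose weights can be swept for the animated inspection mentioned in the title. The sketch below assumes correspondence is already solved and uses purely synthetic data.

```python
# Hedged sketch of a PCA-based statistical shape (point-distribution) model.
import numpy as np

def build_shape_model(shapes):
    """shapes: (n_shapes, n_points, 3) array with point-to-point correspondence."""
    n, m, _ = shapes.shape
    X = shapes.reshape(n, m * 3)
    mean = X.mean(axis=0)
    # PCA via SVD of the centred data matrix; rows of Vt are the variation modes.
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eigenvalues = (s ** 2) / (n - 1)
    return mean, Vt, eigenvalues

def synthesize(mean, modes, eigenvalues, weights):
    """Generate a shape at the given per-mode weights (in standard deviations)."""
    coeffs = np.asarray(weights) * np.sqrt(eigenvalues[: len(weights)])
    flat = mean + coeffs @ modes[: len(weights)]
    return flat.reshape(-1, 3)

# Fake data: 20 corresponding 3D shapes with 100 landmarks each.
rng = np.random.default_rng(1)
shapes = rng.normal(size=(20, 100, 3))
mean, modes, lam = build_shape_model(shapes)
wiggled = synthesize(mean, modes, lam, weights=[+2.0, -1.0])  # +2 sd mode 1, -1 sd mode 2
print(wiggled.shape)
```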

  5. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    Science.gov (United States)

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    Repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated whether a realistic 3D visualization of upper and lower limb movements can amplify motor-related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event-related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices, thereby potentially improving MI-based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI, and the use of such feedback may assist in making BCI a more natural interface for MI-based BCI rehabilitation.

  6. Research on steady-state visual evoked potentials in 3D displays

    Science.gov (United States)

    Chien, Yu-Yi; Lee, Chia-Ying; Lin, Fang-Cheng; Huang, Yi-Pai; Ko, Li-Wei; Shieh, Han-Ping D.

    2015-05-01

    Brain-computer interfaces (BCIs) are intuitive systems for users to communicate with outer electronic devices. Steady-state visual evoked potential (SSVEP) is one of the common inputs for BCI systems due to its easy detection and high information transfer rates. An advanced interactive platform integrated with liquid crystal displays is leading a trend to provide an alternative option not only for the handicapped but also for the public, to make our lives more convenient. Many SSVEP-based BCI systems have been studied in a 2D environment; however, there is little literature about SSVEP-based BCI systems using 3D stimuli. 3D displays have potential in SSVEP-based BCI systems because they can offer vivid images, good quality in presentation, various stimuli and more entertainment. The purpose of this study was to investigate the effect of two important 3D factors (disparity and crosstalk) on SSVEPs. Twelve participants took part in the experiment with a patterned retarder 3D display. The results show that there is a significant difference (p-value<0.05) between large and small disparity angles, and the signal-to-noise ratios (SNRs) of small disparity angles are higher than those of large disparity angles. The 3D stimuli with smaller disparity and lower crosstalk are more suitable for applications, based on the results of 3D perception and SSVEP responses (SNR). Furthermore, we can infer the 3D perception of users from SSVEP responses, and modify the proper disparity of 3D images automatically in the future.
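
    The SNR used to compare disparity conditions is, in one common formulation, the spectral power at the stimulation frequency relative to neighbouring frequency bins; the exact definition the authors used is not given in the record, so the sketch below shows that typical variant on synthetic data.

```python
# Illustrative SSVEP SNR: power at the stimulus frequency over neighbouring bins.
import numpy as np

def ssvep_snr(eeg, fs, f_stim, n_neighbours=10):
    """eeg: 1-D signal; fs: sampling rate (Hz); f_stim: stimulation frequency (Hz)."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_stim))          # bin closest to the stimulus
    lo, hi = max(k - n_neighbours, 1), k + n_neighbours + 1
    noise_bins = np.r_[spectrum[lo:k], spectrum[k + 1:hi]]
    return spectrum[k] / noise_bins.mean()

# Synthetic example: a 12 Hz SSVEP buried in noise, 4 s at 250 Hz.
fs, f = 250, 12.0
t = np.arange(0, 4, 1 / fs)
sig = 0.5 * np.sin(2 * np.pi * f * t) + np.random.default_rng(2).normal(size=t.size)
print(f"SNR at {f} Hz: {ssvep_snr(sig, fs, f):.1f}")
```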

  7. The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data

    Science.gov (United States)

    Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris

    2010-05-01

    Data integration is increasingly important as we strive to combine data from disparate sources and assemble better models of the complex processes operating at the Earth's surface and within its interior. These data are often large, multi-dimensional, and subject to differing conventions for data structures, file formats, coordinate spaces, and units of measure. When visualized, these data require differing, and sometimes conflicting, conventions for visual representations, dimensionality, symbology, and interaction. All of this makes the visualization of integrated Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data integration and visualization suite of applications and libraries being developed by the GEON project at the University of California, San Diego, USA. Funded by the NSF, the project is leveraging virtual globe technology from NASA's WorldWind to create interactive 3D visualization tools that combine and layer data from a wide variety of sources to create a holistic view of features at, above, and beneath the Earth's surface. The OEF architecture is open, cross-platform, modular, and based upon Java. The OEF's modular approach to software architecture yields an array of mix-and-match software components for assembling custom applications. Available modules support file format handling, web service communications, data management, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats used in the field. Each one imports data into a general-purpose common data model supporting multidimensional regular and irregular grids, topography, feature geometry, and more. Data within these data models may be manipulated, combined, reprojected, and visualized. The OEF's visualization features support a variety of conventional and new visualization techniques for looking at topography, tomography, point clouds, imagery, maps, and feature geometry. 3D data such as

  8. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    Energy Technology Data Exchange (ETDEWEB)

    Wong, S.T.C. [Univ. of California, San Francisco, CA (United States)

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now finding their way into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  9. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    International Nuclear Information System (INIS)

    Wong, S.T.C.

    1997-01-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now finding their way into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  10. Visualization of Hyperconjugation and Subsequent Structural Distortions through 3D Printing of Crystal Structures.

    Science.gov (United States)

    Mithila, Farha J; Oyola-Reynoso, Stephanie; Thuo, Martin M; Atkinson, Manza Bj

    2016-01-01

    Structural distortions due to hyperconjugation in organic molecules, like norbornenes, are well captured through X-ray crystallographic data, but are sometimes difficult to visualize, especially for those who apply chemical knowledge but are not chemists. Crystal structures from the Cambridge database were downloaded and converted to .stl format. The structures were then printed at the desired scale using a 3D printer. Replicas of the crystal structures were accurately reproduced in scale, and any resulting distortions were clearly visible from the macroscale models. Through-space interactions, or the effect of through-space hyperconjugation, were illustrated through loss of symmetry or distortions thereof. The norbornene structures exhibit distortions that cannot be observed with conventional ball-and-stick modelling kits. We show that 3D printed models derived from crystallographic data capture even subtle distortions in molecules. We translate such crystallographic data into scaled-up models through 3D printing.

  11. Facilitating role of 3D multimodal visualization and learning rehearsal in memory recall.

    Science.gov (United States)

    Do, Phuong T; Moreland, John R

    2014-04-01

    The present study investigated the influence of 3D multimodal visualization and learning rehearsal on memory recall. Participants (N = 175 college students ranging from 21 to 25 years) were assigned to different training conditions and rehearsal processes to learn a list of 14 terms associated with construction of a wood-frame house. They then completed a memory test determining their cognitive ability to free recall the definitions of the 14 studied terms immediately after training and rehearsal. The audiovisual modality training condition was associated with the highest accuracy, and the visual- and auditory-modality conditions with lower accuracy rates. The no-training condition indicated little learning acquisition. A statistically significant increase in performance accuracy for the audiovisual condition as a function of rehearsal suggested the relative importance of rehearsal strategies in 3D observational learning. Findings revealed the potential application of integrating virtual reality and cognitive sciences to enhance learning and teaching effectiveness.

  12. Suitability of online 3D visualization technique in oil palm plantation management

    Science.gov (United States)

    Mat, Ruzinoor Che; Nordin, Norani; Zulkifli, Abdul Nasir; Yusof, Shahrul Azmi Mohd

    2016-08-01

    The oil palm industry has been the backbone of Malaysia's economic growth, and exports of this commodity increase almost every year. Therefore, there are many studies focusing on how to help this industry increase its productivity. In order to increase productivity, the management of oil palm plantations needs to be improved and strengthened. One of the solutions to help oil palm managers is to implement an online 3D visualization technique for oil palm plantations using game engine technology. The potential of this application is that it can help in fertilizer and irrigation management. For this reason, the aim of this paper is to investigate the issues in managing oil palm plantations from the view of oil palm managers, by interview. The results from these interviews will help identify the issues that could be highlighted when implementing an online 3D visualization technique for oil palm plantation management.

  13. GEOSPATIAL DATA PROCESSING FOR 3D CITY MODEL GENERATION, MANAGEMENT AND VISUALIZATION

    Directory of Open Access Journals (Sweden)

    I. Toschi

    2017-05-01

    Full Text Available Recent developments of 3D technologies and tools have increased availability and relevance of 3D data (from 3D points to complete city models) in the geospatial and geo-information domains. Nevertheless, the potential of 3D data is still underexploited and mainly confined to visualization purposes. Therefore, the major challenge today is to create automatic procedures that make best use of available technologies and data for the benefits and needs of public administrations (PA) and national mapping agencies (NMA) involved in “smart city” applications. The paper aims to demonstrate a step forward in this process by presenting the results of the SENECA project (Smart and SustaiNablE City from Above – http://seneca.fbk.eu). State-of-the-art processing solutions are investigated in order to (i) efficiently exploit the photogrammetric workflow (aerial triangulation and dense image matching), (ii) derive topologically and geometrically accurate 3D geo-objects (i.e. building models) at various levels of detail and (iii) link geometries with non-spatial information within a 3D geo-database management system accessible via web-based client. The developed methodology is tested on two case studies, i.e. the cities of Trento (Italy) and Graz (Austria). Both spatial (i.e. nadir and oblique imagery) and non-spatial (i.e. cadastral information and building energy consumptions) data are collected and used as input for the project workflow, starting from 3D geometry capture and modelling in urban scenarios to geometry enrichment and management within a dedicated webGIS platform.

  14. Geospatial Data Processing for 3d City Model Generation, Management and Visualization

    Science.gov (United States)

    Toschi, I.; Nocerino, E.; Remondino, F.; Revolti, A.; Soria, G.; Piffer, S.

    2017-05-01

    Recent developments of 3D technologies and tools have increased availability and relevance of 3D data (from 3D points to complete city models) in the geospatial and geo-information domains. Nevertheless, the potential of 3D data is still underexploited and mainly confined to visualization purposes. Therefore, the major challenge today is to create automatic procedures that make best use of available technologies and data for the benefits and needs of public administrations (PA) and national mapping agencies (NMA) involved in "smart city" applications. The paper aims to demonstrate a step forward in this process by presenting the results of the SENECA project (Smart and SustaiNablE City from Above - http://seneca.fbk.eu). State-of-the-art processing solutions are investigated in order to (i) efficiently exploit the photogrammetric workflow (aerial triangulation and dense image matching), (ii) derive topologically and geometrically accurate 3D geo-objects (i.e. building models) at various levels of detail and (iii) link geometries with non-spatial information within a 3D geo-database management system accessible via web-based client. The developed methodology is tested on two case studies, i.e. the cities of Trento (Italy) and Graz (Austria). Both spatial (i.e. nadir and oblique imagery) and non-spatial (i.e. cadastral information and building energy consumptions) data are collected and used as input for the project workflow, starting from 3D geometry capture and modelling in urban scenarios to geometry enrichment and management within a dedicated webGIS platform.

  15. Does 3D produce more symptoms of visually induced motion sickness?

    Science.gov (United States)

    Naqvi, Syed Ali Arsalan; Badruddin, Nasreen; Malik, Aamir Saeed; Hazabbah, Wan; Abdullah, Baharudin

    2013-01-01

    3D stereoscopy technology with high-quality images and depth perception provides entertainment to its viewers. However, the technology is not mature yet and may sometimes have adverse effects on viewers; some viewers have reported discomfort when watching videos with 3D technology. In this research we performed an experiment showing a movie to participants in 2D and 3D conditions. Subjective and objective data were recorded and compared in both conditions. Results from subjective reporting show that Visually Induced Motion Sickness (VIMS) is significantly higher in the 3D condition. For objective measurement, ECG data were recorded to derive the Heart Rate Variability (HRV); the LF/HF ratio, an index of sympathetic nerve activity, was analyzed to track changes in the participants' feelings over time. The average scores of nausea, disorientation and the total SSQ score show that there is a significant difference between the 3D and 2D conditions. However, the LF/HF ratio did not show a significant difference throughout the experiment.
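
    The LF/HF ratio mentioned above is conventionally derived from the RR-interval series: resample it to an even time grid, estimate its power spectrum, then integrate the low-frequency (0.04-0.15 Hz) and high-frequency (0.15-0.4 Hz) bands. The sketch below follows that standard recipe on synthetic data; it is not the authors' processing pipeline.

```python
# Hedged sketch of an LF/HF ratio computation from RR intervals.
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_ms, fs_resample=4.0):
    """rr_ms: successive RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    beat_times = np.cumsum(rr) / 1000.0                      # beat times in seconds
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs_resample)
    rr_even = np.interp(grid, beat_times, rr)                # evenly sampled tachogram
    freqs, psd = welch(rr_even - rr_even.mean(), fs=fs_resample, nperseg=256)
    lf_band = (freqs >= 0.04) & (freqs < 0.15)
    hf_band = (freqs >= 0.15) & (freqs < 0.40)
    lf = np.trapz(psd[lf_band], freqs[lf_band])
    hf = np.trapz(psd[hf_band], freqs[hf_band])
    return lf / hf

# Synthetic RR series around 800 ms with some slow variability.
rng = np.random.default_rng(3)
rr = 800 + 50 * np.sin(2 * np.pi * 0.1 * np.arange(300)) + rng.normal(0, 20, 300)
print(f"LF/HF = {lf_hf_ratio(rr):.2f}")
```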

  16. Visualization of documents and concepts in neuroinformatics with the 3D-SE viewer

    Directory of Open Access Journals (Sweden)

    Antoine P Naud

    2007-11-01

    Full Text Available A new interactive visualization tool is proposed for mining text data from various fields of neuroscience. Applications to several text datasets are presented to demonstrate the capability of the proposed interactive tool to visualize complex relationships between pairs of lexical entities (with some semantic contents) such as terms, keywords, posters, or papers' abstracts. Implemented as a Java applet, this tool is based on the spherical embedding (SE) algorithm, which was designed for the visualization of bipartite graphs. Items such as words and documents are linked on the basis of occurrence relationships, which can be represented in a bipartite graph. These items are visualized by embedding the vertices of the bipartite graph on spheres in a three-dimensional (3-D) space. The main advantage of the proposed visualization tool is that 3-D layouts can convey more information than planar or linear displays of items or graphs. Different kinds of information extracted from texts, such as keywords, indexing terms, or topics are visualized, allowing interactive browsing of various fields of research featured by keywords, topics, or research teams. A typical use of the 3D-SE viewer is quick browsing of topics displayed on a sphere; then selecting one or several item(s) displays links to related terms on another sphere representing, e.g., documents or abstracts, and provides direct online access to the document source in a database, such as the Visiome Platform or the SfN Annual Meeting. Developed as a Java applet, it operates as a tool on top of existing resources.

  17. 3D-visualization by MRI for surgical planning of Wilms tumors; 3-D-Visualisierung in der MRT zur Operationsplanung von Wilms-Tumoren

    Energy Technology Data Exchange (ETDEWEB)

    Schenk, J.P.; Wunsch, R.; Jourdan, C.; Troeger, J. [Universitaetsklinik Heidelberg (Germany). Abteilung Paediatrische Radiologie; Waag, K.-L.; Guenther, P. [Universitaetsklinik Heidelberg (Germany). Abteilung Kinderchirurgie; Graf, N. [Universitaetsklinik Homburg (Germany). Abteilung Paediatrische Haematologie und Onkologie; Behnisch, W. [Universitaetsklinik Heidelberg (Germany). Abteilung Paediatrische Haematologie und Onkologie

    2004-10-01

    Purpose: To improve surgical planning of kidney tumors in childhood (Wilms tumor, mesoblastic nephroma) after radiologic verification of the presumptive diagnosis with interactive colored 3D-animation in MRI. Materials and Methods: In 7 children (1 boy, 6 girls) with a mean age of 3 years (1 month to 11 years), the MRI database (DICOM) was processed with a raycasting-based 3D-volume-rendering software (VG Studio Max 1.1/Volume Graphics). The abdominal MRI-sequences (coronal STIR, coronal T1 TSE, transverse T1/T2 TSE, sagittal T2 TSE, transverse and coronal T1 TSE post contrast) were obtained with a 0.5T unit in 4-6 mm slices. Additionally, phase-contrast-MR-angiography was applied to delineate the large abdominal and retroperitoneal vessels. A notebook was used to demonstrate the 3D-visualization for surgical planning before surgery and during the surgical procedure. Results: In all 7 cases, the surgical approach was influenced by interactive 3D-animation and the information found useful for surgical planning. Above all, the 3D-visualization demonstrates the mass effect of the Wilms tumor and its anatomical relationship to the renal hilum and to the rest of the kidney as well as the topographic relationship of the tumor to the critical vessels. One rupture of the tumor capsule occurred as a surgical complication. For the surgeon, the transformation of the anatomical situation from MRI to the surgical situs has become much easier. Conclusion: For surgical planning of Wilms tumors, the 3D-visualization with 3D-animation of the situs helps to transfer important information from the pediatric radiologist to the pediatric surgeon and optimizes the surgical preparation. A reduction of complications is to be expected. (orig.)

  18. 3D visualization of optical ray aberration and its broadcasting to smartphones by ray aberration generator

    Science.gov (United States)

    Hellman, Brandon; Bosset, Erica; Ender, Luke; Jafari, Naveed; McCann, Phillip; Nguyen, Chris; Summitt, Chris; Wang, Sunglin; Takashima, Yuzuru

    2017-11-01

    The ray formalism is critical to understanding light propagation, yet current pedagogy relies on inadequate 2D representations. We present a system in which real light rays are visualized through an optical system by using a collimated laser bundle of light and a fog chamber. Implementation for remote and immersive access is enabled by leveraging a commercially available 3D viewer and gesture-based remote controlling of the tool via bi-directional communication over the Internet.

  19. Study of 3D visualization of fast active reflector based on openGL and EPICS

    International Nuclear Information System (INIS)

    Luo Mingcheng; Wu Wenqing; Liu Jiajing; Tang Pengyi; Wang Jian

    2014-01-01

    The Active Reflector is one of the innovations of the Five-hundred-meter Aperture Spherical Telescope (FAST), and its performance will influence the performance of the whole telescope. To display the full status of the ARS in real time, EPICS (Experimental Physics and Industrial Control System) is used to develop the ARS control system, and the virtual 3D technology OpenGL is used to visualize the status. Thanks to the real-time performance of EPICS, the status visualization is also displayed in real time for users, improving the efficiency of telescope observing. (authors)

  20. Quantification and visualization of alveolar bone resorption from 3D dental CT images

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Jiro; Mori, Kensaku; Kitasaka, Takayuki; Suenaga, Yasuhito [Nagoya University, Graduate School of Information Science, Nagoya (Japan); Yamada, Shohzoh; Naitoh, Munetaka [Aichi-Gakuin University, School of Dentistry, Nagoya (Japan)

    2007-06-15

    Purpose: A computer aided diagnosis (CAD) system for quantifying and visualizing alveolar bone resorption caused by periodontitis was developed based on three-dimensional (3D) image processing of dental CT images. Methods: The proposed system enables visualization and quantification of resorption of alveolar bone surrounding and between the roots of teeth. It has the following functions: (1) vertical measurement of the depth of resorption surrounding the tooth in 3D images, avoiding physical obstruction; (2) quantification of the amount of resorption in the furcation area; and (3) visualization of quantification results by pseudo-color maps, graphs, and motion pictures. The resorption measurement accuracy in the area surrounding teeth was evaluated by comparison with a dentist's assessment on five real patient CT images, giving an average absolute difference of 0.87 mm. An artificial image with mathematical truth was also used for measurement evaluation. Results: The average absolute difference was 0.36 and 0.10 mm for surrounding and furcation areas, respectively. The system provides an intuitive presentation of the measurement results. Conclusion: Computer aided diagnosis of 3D dental CT scans is feasible and the technique is a promising new tool for the quantitative evaluation of periodontal bone loss. (orig.)

  1. Hybrid wide-angle viewing-endoscopic vitrectomy using a 3D visualization system

    Directory of Open Access Journals (Sweden)

    Kita M

    2018-02-01

    Full Text Available Mihori Kita, Yuki Mori, Sachiyo Hama Department of Ophthalmology, National Organization Kyoto Medical Center, Kyoto, Japan Purpose: To introduce a hybrid wide-angle viewing-endoscopic vitrectomy, which we have reported, using a 3D visualization system developed recently. Subjects and methods: We report a single center, retrospective, consecutive surgical case series of 113 eyes that underwent 25 G vitrectomy (rhegmatogenous retinal detachment or proliferative vitreoretinopathy, 49 eyes; epiretinal membrane, 18 eyes; proliferative diabetic retinopathy, 17 eyes; vitreous opacity or vitreous hemorrhage, 11 eyes; macular hole, 11 eyes; vitreomacular traction syndrome, 4 eyes; and luxation of intraocular lens, 3 eyes). Results: This system was successfully used to perform hybrid vitrectomy in the difficult cases, such as proliferative vitreoretinopathy and proliferative diabetic retinopathy. Conclusion: Hybrid wide-angle viewing-endoscopic vitrectomy using a 3D visualization system appears to be a valuable and promising method for managing various types of vitreoretinal disease. Keywords: 25 G vitrectomy, endoscope, wide-angle viewing system, 3D visualization system, hybrid

  2. Quantification and visualization of alveolar bone resorption from 3D dental CT images

    International Nuclear Information System (INIS)

    Nagao, Jiro; Mori, Kensaku; Kitasaka, Takayuki; Suenaga, Yasuhito; Yamada, Shohzoh; Naitoh, Munetaka

    2007-01-01

    Purpose: A computer aided diagnosis (CAD) system for quantifying and visualizing alveolar bone resorption caused by periodontitis was developed based on three-dimensional (3D) image processing of dental CT images. Methods: The proposed system enables visualization and quantification of resorption of alveolar bone surrounding and between the roots of teeth. It has the following functions: (1) vertical measurement of the depth of resorption surrounding the tooth in 3D images, avoiding physical obstruction; (2) quantification of the amount of resorption in the furcation area; and (3) visualization of quantification results by pseudo-color maps, graphs, and motion pictures. The resorption measurement accuracy in the area surrounding teeth was evaluated by comparison with a dentist's assessment on five real patient CT images, giving an average absolute difference of 0.87 mm. An artificial image with mathematical truth was also used for measurement evaluation. Results: The average absolute difference was 0.36 and 0.10 mm for surrounding and furcation areas, respectively. The system provides an intuitive presentation of the measurement results. Conclusion: Computer aided diagnosis of 3D dental CT scans is feasible and the technique is a promising new tool for the quantitative evaluation of periodontal bone loss. (orig.)

  3. Development of an environment for 3D visualization of riser dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bernardes Junior, Joao Luiz; Martins, Clovis de Arruda [Universidade de Sao Paulo (USP), SP (Brazil). Escola Politecnica]. E-mails: joao.bernardes@poli.usp.br; cmartins@usp.br

    2006-07-01

    This paper describes the merging of Virtual Reality and Scientific Visualization techniques in the development of Riser View, a multi-platform 3D environment for real-time, interactive visualization of riser dynamics. Its features, architecture, unusual collision detection algorithm and how it was customized for the project are discussed. Using OpenGL through VRK, the software is able to make use of the resources available in most modern graphics acceleration hardware to improve performance. IUP/LED allows for a native look-and-feel on MS-Windows or Linux platforms. The paper discusses conflicts that arise between scientific visualization and aspects such as realism and immersion, and how the visualization is prioritized. (author)

  4. 3D Visualization Types in Multimedia Applications for Science Learning: A Case Study for 8th Grade Students in Greece

    Science.gov (United States)

    Korakakis, G.; Pavlatou, E. A.; Palyvos, J. A.; Spyrellis, N.

    2009-01-01

    This research aims to determine whether the use of specific types of visualization (3D illustration, 3D animation, and interactive 3D animation) combined with narration and text contributes to the learning process of 13- and 14-year-old students in science courses. The study was carried out with 212 8th grade students in Greece. This…

  5. 3-D vision and figure-ground separation by visual cortex.

    Science.gov (United States)

    Grossberg, S

    1994-01-01

    A neural network theory of three-dimensional (3-D) vision, called FACADE theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a boundary contour system (BCS) and a feature contour system (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded object parts are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, Da Vinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analyzed. The BCS and FCS subsystems model aspects of how the two parvocellular cortical processing streams that join the lateral geniculate nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-DEpth, or FACADE, within area V4. Area V4 is suggested to support figure-ground separation and to interact with

  6. 3D visualization and stereographic techniques for medical research and education.

    Science.gov (United States)

    Rydmark, M; Kling-Petersen, T; Pascher, R; Philip, F

    2001-01-01

    While computers have been able to work with true 3D models for a long time, the same does not generally apply to their users. Over the years, a number of 3D visualization techniques have been developed to enable a scientist or a student to see not only a flat representation of an object, but also an approximation of its Z-axis. In addition to the traditional flat image representation of a 3D object, at least four established methodologies exist: Stereo pairs. Using image analysis tools or 3D software, a set of images can be made, each representing the left and the right eye view of an object. Placed next to each other and viewed through a separator, the three-dimensionality of an object can be perceived. While this is usually done on still images, tests at Mednet have shown this to work with interactively animated models as well. However, this technique requires some training and experience. Pseudo3D, such as VRML or QuickTime VR, where the interactive manipulation of a 3D model lets the user achieve a sense of the model's true proportions. While this technique works reasonably well, it is not a "true" stereographic visualization technique. Red/Green separation, i.e. "the traditional 3D image", where a red and a green representation of a model are superimposed at an angle corresponding to the viewing angle of the eyes; by using a similar set of eyeglasses, a person can create a mental 3D image. The end result does produce a sense of 3D, but the effect is difficult to maintain. Alternating left/right eye systems. These systems (typified by the StereoGraphics CrystalEyes system) let the computer display a "left eye" image followed by a "right eye" image while simultaneously triggering the eyepiece to alternately make one eye "blind". When run at 60 Hz or higher, the brain will fuse the left/right images together and the user will effectively see a 3D object. Depending on configurations, the alternating systems run at between 50 and 60 Hz, thereby creating a
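
    Of the four techniques listed, the colour-separation approach is simple enough to illustrate directly: a red/cyan anaglyph takes the red channel from the left-eye image and the green/blue channels from the right-eye image. The sketch below is a minimal, generic version of that idea; the file names are placeholders.

```python
# Illustrative red/cyan anaglyph from a stereo pair.
import numpy as np
from PIL import Image

def make_anaglyph(left_path, right_path, out_path="anaglyph.png"):
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))
    ana = right.copy()
    ana[..., 0] = left[..., 0]        # red from the left eye, green/blue from the right
    Image.fromarray(ana).save(out_path)

# make_anaglyph("left_eye.png", "right_eye.png")   # view the result with red/cyan glasses
```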

  7. 3D Visualization of Urban Area Using Lidar Technology and CityGML

    Science.gov (United States)

    Popovic, Dragana; Govedarica, Miro; Jovanovic, Dusan; Radulovic, Aleksandra; Simeunovic, Vlado

    2017-12-01

    3D models of urban areas have found use in the modern world in navigation, cartography, urban planning visualization, construction, tourism and even in new mobile navigation applications. With the advancement of technology, much better solutions exist for mapping the earth's surface and spatial objects. A 3D city model enables exploration, analysis, management tasks and presentation of a city. Urban areas consist of terrain surfaces, buildings, vegetation and other parts of the city infrastructure such as city furniture. Nowadays there are many different methods for collecting, processing and publishing 3D models of an area of interest. LIDAR technology is one of the most effective methods for collecting data, due to the large amount of data that can be obtained with high density and geometrical accuracy. CityGML is an open standard data model for storing the alphanumeric and geometric attributes of a city. There are 5 levels of display (LoD0, LoD1, LoD2, LoD3, LoD4). In this study, the main aim is to represent part of the urban area of Novi Sad using LIDAR technology for data collection and different methods for information extraction, using CityGML as the standard for 3D representation. By using a series of programs, it is possible to process the collected data, transform it to CityGML and store it in a spatial database. The final product is a CityGML 3D model which can display textures and colours in order to give better insight into the city. This paper shows the results of the first three levels of display: they consist of a digital terrain model and buildings with differentiated rooftops and differentiated boundary surfaces. The complete model gives a realistic view of the 3D objects.
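
    As context for the LoD levels mentioned above: an LoD1 representation is essentially a building footprint extruded by a single height. The toy sketch below produces such a prism from made-up coordinates; a real pipeline would derive footprints and heights from the LIDAR point cloud and then encode them as proper CityGML.

```python
# Toy LoD1 block-model extrusion of a building footprint.
import numpy as np

def extrude_footprint(footprint_xy, ground_z, height):
    """Return vertices and faces (as vertex-index lists) of an LoD1 prism."""
    fp = np.asarray(footprint_xy, dtype=float)
    n = len(fp)
    bottom = np.column_stack([fp, np.full(n, ground_z)])
    top = np.column_stack([fp, np.full(n, ground_z + height)])
    verts = np.vstack([bottom, top])
    walls = [[i, (i + 1) % n, n + (i + 1) % n, n + i] for i in range(n)]
    roof = list(range(n, 2 * n))
    floor = list(range(n))[::-1]                 # reversed so the floor faces downward
    return verts, walls + [roof, floor]

footprint = [(0, 0), (12, 0), (12, 8), (0, 8)]   # simple rectangular footprint (metres)
verts, faces = extrude_footprint(footprint, ground_z=74.0, height=9.5)
print(len(verts), "vertices,", len(faces), "faces")
```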

  8. An integrative view of storage of low- and high-level visual dimensions in visual short-term memory.

    Science.gov (United States)

    Magen, Hagit

    2017-03-01

    Efficient performance in an environment filled with complex objects is often achieved through the temporal maintenance of conjunctions of features from multiple dimensions. The most striking finding in the study of binding in visual short-term memory (VSTM) is equal memory performance for single features and for integrated multi-feature objects, a finding that has been central to several theories of VSTM. Nevertheless, research on binding in VSTM focused almost exclusively on low-level features, and little is known about how items from low- and high-level visual dimensions (e.g., colored manmade objects) are maintained simultaneously in VSTM. The present study tested memory for combinations of low-level features and high-level representations. In agreement with previous findings, Experiments 1 and 2 showed decrements in memory performance when non-integrated low- and high-level stimuli were maintained simultaneously compared to maintaining each dimension in isolation. However, contrary to previous findings the results of Experiments 3 and 4 showed decrements in memory performance even when integrated objects of low- and high-level stimuli were maintained in memory, compared to maintaining single-dimension objects. Overall, the results demonstrate that low- and high-level visual dimensions compete for the same limited memory capacity, and offer a more comprehensive view of VSTM.

  9. Development of a 3-D Nuclear Event Visualization Program Using Unity

    Science.gov (United States)

    Kuhn, Victoria

    2017-09-01

    Simulations have become increasingly important for science and there is an increasing emphasis on the visualization of simulations within a Virtual Reality (VR) environment. Our group is exploring this capability as a visualization tool not just for those curious about science, but also for educational purposes for K-12 students. Using data collected in 3-D by a Time Projection Chamber (TPC), we are able to visualize nuclear and cosmic events. The Unity game engine was used to recreate the TPC to visualize these events and construct a VR application. The methods used to create these simulations will be presented along with an example of a simulation. I will also present on the development and testing of this program, which I carried out this past summer at MSU as part of an REU program. We used data from the SπRIT TPC, but the software can be applied to other 3-D detectors. This work is supported by the U.S. Department of Energy under Grant Nos. DE-SC0014530, DE-NA0002923 and US NSF under Grant No. PHY-1565546.

  10. Novel 3D/VR interactive environment for MD simulations, visualization and analysis.

    Science.gov (United States)

    Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P

    2014-12-18

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.

  11. COMPARISON OF USER PERFORMANCE WITH INTERACTIVE AND STATIC 3D VISUALIZATION – PILOT STUDY

    Directory of Open Access Journals (Sweden)

    L. Herman

    2016-06-01

    Full Text Available Interactive 3D visualizations of spatial data are currently available and popular through various applications such as Google Earth, ArcScene, etc. Several scientific studies have focused on user performance with 3D visualization, but static perspective views are used as stimuli in most of the studies. The main objective of this paper is to try to identify potential differences in user performance with static perspective views and interactive visualizations. This research is an exploratory study. An experiment was designed as a between-subject study and a customized testing tool based on open web technologies was used for the experiment. The testing set consists of an initial questionnaire, a training task and four experimental tasks. Selection of the highest point and determination of visibility from the top of a mountain were used as the experimental tasks. Speed and accuracy of each task performance of participants were recorded. The movement and actions in the virtual environment were also recorded within the interactive variant. The results show that participants deal with the tasks faster when using static visualization. The average error rate was also higher in the static variant. The findings from this pilot study will be used for further testing, especially for formulating of hypotheses and designing of subsequent experiments.

  12. Interactive visualization and analysis of 3D medical images for neurosurgery

    International Nuclear Information System (INIS)

    Miyazawa, Tatsuo; Otsuki, Taisuke.

    1994-01-01

    We propose a method that makes it possible to interactively rotate and zoom a volume-rendered object and to interactively manipulate the function for transferring data values to color and opacity. The method ray-traces a Value-Intensity-Strength volume (VIS volume) instead of a color-opacity volume, and uses an adaptive refinement technique in generating images. The VIS volume tracing method can reduce by 20-90 percent the computational time of the re-calculation necessitated by changing the function for transferring data values to color and opacity, and can reduce the computational time of pre-processing by 20 percent. It can also reduce the required memory space by 40 percent. The combination of VIS volume tracing and the adaptive refinement method makes it possible to interactively visualize and analyze 3D medical image data. Once we can see a detailed image of the 3D objects to determine their orientation, we can easily manipulate the viewing and rendering parameters even while viewing rough, blurred images. The increase in the computation time for generating full-resolution images when using the adaptive refinement technique is approximately five to ten percent. Its effectiveness is evaluated using the results of visualization for some 3D medical image data. (author)
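
    The record's key point is that the transfer function is applied after ray traversal, so recolouring is cheap. The sketch below illustrates that separation in the simplest possible form: stored per-ray samples of (value, shading intensity), loosely mirroring the VIS idea, are recomposited front to back with an arbitrary, made-up transfer function. It is not the authors' algorithm, only the general compositing pattern.

```python
# Front-to-back compositing with a deferred transfer function.
import numpy as np

def composite(samples, transfer):
    """samples: (n, 2) array of (value, intensity) ordered front to back along a ray."""
    color = np.zeros(3)
    alpha = 0.0
    for value, intensity in samples:
        rgb, a = transfer(value)
        rgb = np.asarray(rgb) * intensity          # modulate by precomputed shading
        color += (1.0 - alpha) * a * rgb
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                           # early ray termination
            break
    return color, alpha

# A made-up transfer function: low values transparent bluish, high values opaque reddish.
def transfer(v):
    return ((v, 0.2, 1.0 - v), 0.05 + 0.6 * v)

ray_samples = np.column_stack([np.linspace(0, 1, 64), np.full(64, 0.8)])
print(composite(ray_samples, transfer))
```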

  13. TOUCH INTERACTION WITH 3D GEOGRAPHICAL VISUALIZATION ON WEB: SELECTED TECHNOLOGICAL AND USER ISSUES

    Directory of Open Access Journals (Sweden)

    L. Herman

    2016-10-01

    Full Text Available The use of both 3D visualization and devices with touch displays is increasing. In this paper, we focused on the Web technologies for 3D visualization of spatial data and its interaction via touch screen gestures. At the first stage, we compared the support of touch interaction in selected JavaScript libraries on different hardware (desktop PCs with touch screens, tablets, and smartphones and software platforms. Afterward, we realized simple empiric test (within-subject design, 6 participants, 2 simple tasks, LCD touch monitor Acer and digital terrain models as stimuli focusing on the ability of users to solve simple spatial tasks via touch screens. An in-house testing web tool was developed and used based on JavaScript, PHP, and X3DOM languages and Hammer.js libraries. The correctness of answers, speed of users’ performances, used gestures, and a simple gesture metric was recorded and analysed. Preliminary results revealed that the pan gesture is most frequently used by test participants and it is also supported by the majority of 3D libraries. Possible gesture metrics and future developments including the interpersonal differences are discussed in the conclusion.

  14. Magnetic assembly of 3D cell clusters: visualizing the formation of an engineered tissue.

    Science.gov (United States)

    Ghosh, S; Kumar, S R P; Puri, I K; Elankumaran, S

    2016-02-01

    Contactless magnetic assembly of cells into 3D clusters has been proposed as a novel means for 3D tissue culture that eliminates the need for artificial scaffolds. However, thus far its efficacy has only been studied by comparing expression levels of generic proteins. Here, it has been evaluated by visualizing the evolution of cell clusters assembled by magnetic forces, to examine their resemblance to in vivo tissues. Cells were labeled with magnetic nanoparticles, then assembled into 3D clusters using magnetic force. Scanning electron microscopy was used to image intercellular interactions and morphological features of the clusters. When cells were held together by magnetic forces for a single day, they formed intercellular contacts through extracellular fibers. These kept the clusters intact once the magnetic forces were removed, thus serving the primary function of scaffolds. The cells self-organized into constructs consistent with the corresponding tissues in vivo. Epithelial cells formed sheets while fibroblasts formed spheroids and exhibited position-dependent morphological heterogeneity. Cells on the periphery of a cluster were flattened while those within were spheroidal, a well-known characteristic of connective tissues in vivo. Cells assembled by magnetic forces presented visual features representative of their in vivo states but largely absent in monolayers. This established the efficacy of contactless assembly as a means to fabricate in vitro tissue models. © 2016 John Wiley & Sons Ltd.

  15. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    International Nuclear Information System (INIS)

    Bancroft, G.; Plessel, T.; Merritt, F.; Watson, V.

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques applied, post-processing, tracking, and steering, are described, as well as the post-processing software packages used, PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis software Toolkit). Using post-processing methods a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers. 7 refs

  16. 3D Geo-Structures Visualization Education Project (3dgeostructuresvis.ucdavis.edu)

    Science.gov (United States)

    Billen, M. I.

    2014-12-01

    Students of field-based geology must master a suite of challenging skills from recognizing rocks, to measuring orientations of features in the field, to finding oneself (and the outcrop) on a map and placing structural information on maps. Students must then synthesize this information to derive meaning from the observations and ultimately to determine the three-dimensional (3D) shape of the deformed structures and their kinematic history. Synthesizing this kind of information requires sophisticated visualizations skills in order to extrapolate observations into the subsurface or missing (eroded) material. The good news is that students can learn 3D visualization skills through practice, and virtual tools can help provide some of that practice. Here I present a suite of learning modules focused at developing students' ability to imagine (visualize) complex 3D structures and their exposure through digital topographic surfaces. Using the software 3DVisualizer, developed by KeckCAVES (keckcaves.org) we have developed visualizations of common geologic structures (e.g., syncline, dipping fold) in which the rock is represented by originally flat-lying layers of sediment, each with a different color, which have been subsequently deformed. The exercises build up in complexity, first focusing on understanding the structure in 3D (penetrative understanding), and then moving to the exposure of the structure at a topographic surface. Individual layers can be rendered as a transparent feature to explore how the layer extends above and below the topographic surface (e.g., to follow an eroded fold limb across a valley). The exercises are provided using either movies of the visualization (which can also be used for examples during lectures), or the data and software can be downloaded to allow for more self-driven exploration and learning. These virtual field models and exercises can be used as "practice runs" before going into the field, as make-up assignments, as a field

  17. Reconstruction of a 3D urban scene using VisualSfM.

    Directory of Open Access Journals (Sweden)

    Laura Inzerillo

    2013-10-01

    Full Text Available Computer vision techniques today make it possible to build detailed 3D models quickly and automatically from photographic datasets. The academic community has paid growing attention to 3D reconstruction at the urban scale. Among the various tools available today, VisualSfM, developed by the University of Washington and Google, stands out. It is an open-source graphical interface built around algorithms dedicated to the Structure from Motion (SfM) technique. VisualSfM uses a feature extractor called SIFTGPU and a multicore Bundle Adjustment algorithm. A dense point cloud can also be obtained using the CMVS/PMVS2 algorithms. The aim of this study is to verify the metric accuracy of the reconstructions through the integrated use of VisualSfM and CMVS/PMVS2. The approach was therefore tested on several sizeable datasets structured as curated photographic collections.
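
    The SfM front end described above starts from pairwise feature matching. VisualSfM itself uses SIFTGPU and its own pipeline; the sketch below instead uses OpenCV's SIFT and a ratio test as a generic stand-in, with placeholder image paths.

```python
# Minimal pairwise feature-matching sketch for an SfM front end (OpenCV stand-in).
import cv2

def match_pair(path_a, path_b, ratio=0.75):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test to keep only distinctive matches.
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2) if m.distance < ratio * n.distance]
    return kp_a, kp_b, good            # matched points would feed pose estimation / bundle adjustment

# kp1, kp2, matches = match_pair("facade_001.jpg", "facade_002.jpg")
```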

  18. Revealing Social Values by 3D City Visualization in City Transformations

    Directory of Open Access Journals (Sweden)

    Tim Johansson

    2016-02-01

    Full Text Available Social sustainability is a widely used concept in urban planning research and practice. However, knowledge of spatial distributions of social values and aspects of social sustainability is required. Visualization of these distributions is also highly valuable, but challenging, and rarely attempted in sparsely populated urban environments in rural areas. This article presents a method that highlights social values in spatial models through 3D visualization, describes the methodology to generate the models, and discusses potential applications. The models were created using survey, building, infrastructure and demographic data for Gällivare, Sweden, a small city facing major transformation due to mining subsidence. It provides an example of how 3D models of important social sustainability indices can be designed to display citizens' attitudes regarding their financial status, the built environment, social inclusion and welfare services. The models helped identify spatial variations in perceptions of the built environment that correlate (inter alia) with closeness to certain locations, gender and distances to public buildings. Potential uses of the model for supporting efforts by practitioners, researchers and citizens to visualize and understand social values in similar urban environments are discussed, together with ethical issues (particularly regarding degrees of anonymity) concerning its wider use for inclusive planning.

  19. Matching-index-of-refraction of transparent 3D printing models for flow visualization

    Energy Technology Data Exchange (ETDEWEB)

    Song, Min Seop; Choi, Hae Yoon; Seong, Jee Hyun; Kim, Eung Soo, E-mail: kes7741@snu.ac.kr

    2015-04-01

    Matching-index-of-refraction (MIR) has been used for obtaining high-quality flow visualization data for fundamental nuclear thermal-hydraulic research. By this method, distortions of optical measurements such as PIV and LDV have been successfully minimized using various combinations of model materials and working fluids. This study investigated a novel 3D printing technology for manufacturing models and an oil-based working fluid for matching the refractive indices. Transparent test samples were fabricated by various rapid prototyping methods including selective laser sintering (SLS), stereolithography (SLA), and vacuum casting. As a result, SLA direct 3D printing was evaluated to be the most suitable for flow visualization considering manufacturability, transparency, and refractive index. In order to match the refractive indices of the 3D printed models, a working fluid was developed based on a mixture of herb essential oils, which exhibit high refractive index, high transparency, high density, low viscosity, low toxicity, and low price. The refractive index and viscosity of the working fluid range over 1.453–1.555 and 2.37–6.94 cP, respectively. In order to validate the MIR method, a simple test using a twisted prism made by the SLA technique and the oil mixture (anise and light mineral oil) was conducted. The experimental results show that MIR can be successfully achieved at a refractive index of 1.51, and the proposed MIR method is expected to be widely used for flow visualization studies and CFD validation in nuclear thermal-hydraulic research.

  20. Matching-index-of-refraction of transparent 3D printing models for flow visualization

    International Nuclear Information System (INIS)

    Song, Min Seop; Choi, Hae Yoon; Seong, Jee Hyun; Kim, Eung Soo

    2015-01-01

    Matching-index-of-refraction (MIR) has been used for obtaining high-quality flow visualization data for the fundamental nuclear thermal-hydraulic researches. By this method, distortions of the optical measurements such as PIV and LDV have been successfully minimized using various combinations of the model materials and the working fluids. This study investigated a novel 3D printing technology for manufacturing models and an oil-based working fluid for matching the refractive indices. Transparent test samples were fabricated by various rapid prototyping methods including selective layer sintering (SLS), stereolithography (SLA), and vacuum casting. As a result, the SLA direct 3D printing was evaluated to be the most suitable for flow visualization considering manufacturability, transparency, and refractive index. In order to match the refractive indices of the 3D printing models, a working fluid was developed based on the mixture of herb essential oils, which exhibit high refractive index, high transparency, high density, low viscosity, low toxicity, and low price. The refractive index and viscosity of the working fluid range 1.453–1.555 and 2.37–6.94 cP, respectively. In order to validate the MIR method, a simple test using a twisted prism made by the SLA technique and the oil mixture (anise and light mineral oil) was conducted. The experimental results show that the MIR can be successfully achieved at the refractive index of 1.51, and the proposed MIR method is expected to be widely used for flow visualization studies and CFD validation for the nuclear thermal-hydraulic researches

  1. An Integrated Web-Based 3d Modeling and Visualization Platform to Support Sustainable Cities

    Science.gov (United States)

    Amirebrahimi, S.; Rajabifard, A.

    2012-07-01

    Sustainable development is seen as the key solution for preserving the sustainability of cities in the face of ongoing population growth and its negative impacts. This is complex and requires holistic and multidisciplinary decision making. A variety of stakeholders with different backgrounds also need to be considered and involved. Numerous web-based modeling and visualization tools have been designed and developed to support this process. There have been some success stories; however, the majority have failed to deliver a comprehensive platform supporting the different aspects of sustainable development. In this work, in the context of SDI and Land Administration, the CSDILA Platform - a 3D visualization and modeling platform - is proposed, which can be used to model and visualize different dimensions to facilitate the achievement of sustainability, particularly in an urban context. The methodology involved the design of a generic framework for the development of an analytical and visualization tool over the web. The CSDILA Platform was then implemented using a number of technologies based on the guidelines provided by the framework. The platform has a modular structure and uses a Service-Oriented Architecture (SOA). It is capable of managing spatial objects in a 4D data store and can flexibly incorporate a variety of developed models using the platform's API. Development scenarios can be modeled and tested using the analysis and modeling component of the platform, and the results are visualized in a seamless 3D environment. The platform was further tested using a number of scenarios and showed promising results and potential to serve wider needs. In this paper, the design process of the generic framework, the implementation of the CSDILA Platform and the technologies used, as well as findings and future research directions, are presented and discussed.

  2. 3D topology of orientation columns in visual cortex revealed by functional optical coherence tomography.

    Science.gov (United States)

    Nakamichi, Yu; Kalatsky, Valery A; Watanabe, Hideyuki; Sato, Takayuki; Rajagopalan, Uma Maheswari; Tanifuji, Manabu

    2018-04-01

    Orientation tuning is a canonical neuronal response property of six-layer visual cortex that is encoded in pinwheel structures with center orientation singularities. Optical imaging of intrinsic signals enables us to map these surface two-dimensional (2D) structures, whereas lack of appropriate techniques has not allowed us to visualize depth structures of orientation coding. In the present study, we performed functional optical coherence tomography (fOCT), a technique capable of acquiring a 3D map of the intrinsic signals, to study the topology of orientation coding inside the cat visual cortex. With this technique, for the first time, we visualized columnar assemblies in orientation coding that had been predicted from electrophysiological recordings. In addition, we found that the columnar structures were largely distorted around pinwheel centers: center singularities were not rigid straight lines running perpendicularly to the cortical surface but formed twisted string-like structures inside the cortex that turned and extended horizontally through the cortex. Looping singularities were observed with their respective termini accessing the same cortical surface via clockwise and counterclockwise orientation pinwheels. These results suggest that a 3D topology of orientation coding cannot be fully anticipated from 2D surface measurements. Moreover, the findings demonstrate the utility of fOCT as an in vivo mesoscale imaging method for mapping functional response properties of cortex in the depth axis. NEW & NOTEWORTHY We used functional optical coherence tomography (fOCT) to visualize three-dimensional structure of the orientation columns with millimeter range and micrometer spatial resolution. We validated vertically elongated columnar structure in iso-orientation domains. The columnar structure was distorted around pinwheel centers. An orientation singularity formed a string with tortuous trajectories inside the cortex and connected clockwise and counterclockwise

  3. 3D visualization of integrated ground penetrating radar data and EM-61 data to determine buried objects and their characteristics

    International Nuclear Information System (INIS)

    Kadioğlu, Selma; Daniels, Jeffrey J

    2008-01-01

    This paper is based on an interactive three-dimensional (3D) visualization of two-dimensional (2D) ground penetrating radar (GPR) data and their integration with electromagnetic induction (EMI) using EM-61 data in a 3D volume. This method was used to locate and identify near-surface buried old industrial remains with shape, depth and type (metallic/non-metallic) in a brownfield site. The aim of the study is to illustrate a new approach to integrating two data sets in a 3D image for monitoring and interpretation of buried remains, and this paper methodically indicates the appropriate amplitude–colour and opacity function constructions to activate buried remains in a transparent 3D view. The results showed that the interactive interpretation of the integrated 3D visualization was done using generated transparent 3D sub-blocks of the GPR data set that highlighted individual anomalies in true locations. Colour assignments and formulating of opacity of the data sets were the keys to the integrated 3D visualization and interpretation. This new visualization provided an optimum visual comparison and an interpretation of the complex data sets to identify and differentiate the metallic and non-metallic remains and to control the true interpretation on exact locations with depth. Therefore, the integrated 3D visualization of two data sets allowed more successful identification of the buried remains

  4. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    OpenAIRE

    Richard Chiou; Yongjin (james) Kwon; Tzu-Liang (bill) Tseng; Robin Kizirian; Yueh-Ting Yang

    2010-01-01

    This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote c...

  5. Discussion on the 3D visualizing of 1:200 000 geological map

    Science.gov (United States)

    Wang, Xiaopeng

    2018-01-01

    Using Shuttle Radar Topography Mission (SRTM) terrain data from the United States National Aeronautics and Space Administration as the digital elevation model (DEM), overlaying the scanned 1:200 000 scale geological map, and programming with Microsoft Direct3D in the C# language, the author realized the three-dimensional visualization of the standard-division geological map. The user can inspect the regional geological content from arbitrary angles, with rotation and roaming, and can examine the composite stratigraphic column, map sections and legend at any moment. This provides an intuitive analysis tool for geological practitioners to perform structural analysis with the assistance of landforms, to lay out field exploration routes, etc.

  6. Arena3D: visualizing time-driven phenotypic differences in biological systems

    Directory of Open Access Journals (Sweden)

    Secrier Maria

    2012-03-01

    Full Text Available Abstract Background Elucidating the genotype-phenotype connection is one of the big challenges of modern molecular biology. To fully understand this connection, it is necessary to consider the underlying networks and the time factor. In this context of data deluge and heterogeneous information, visualization plays an essential role in interpreting complex and dynamic topologies. Thus, software that is able to bring the network, phenotypic and temporal information together is needed. Arena3D has been previously introduced as a tool that facilitates link discovery between processes. It uses a layered display to separate different levels of information while emphasizing the connections between them. We present novel developments of the tool for the visualization and analysis of dynamic genotype-phenotype landscapes. Results Version 2.0 introduces novel features that allow handling time course data in a phenotypic context. Gene expression levels or other measures can be loaded and visualized at different time points and phenotypic comparison is facilitated through clustering and correlation display or highlighting of impacting changes through time. Similarity scoring allows the identification of global patterns in dynamic heterogeneous data. In this paper we demonstrate the utility of the tool on two distinct biological problems of different scales. First, we analyze a medium scale dataset that looks at perturbation effects of the pluripotency regulator Nanog in murine embryonic stem cells. Dynamic cluster analysis suggests alternative indirect links between Nanog and other proteins in the core stem cell network. Moreover, recurrent correlations from the epigenetic to the translational level are identified. Second, we investigate a large scale dataset consisting of genome-wide knockdown screens for human genes essential in the mitotic process. Here, a potential new role for the gene lsm14a in cytokinesis is suggested. We also show how phenotypic

  7. Forensic 3D Visualization of CT Data Using Cinematic Volume Rendering: A Preliminary Study.

    Science.gov (United States)

    Ebert, Lars C; Schweitzer, Wolf; Gascho, Dominic; Ruder, Thomas D; Flach, Patricia M; Thali, Michael J; Ampanozi, Garyfalia

    2017-02-01

    The 3D volume-rendering technique (VRT) is commonly used in forensic radiology. Its main function is to explain medical findings to state attorneys, judges, or police representatives. New visualization algorithms permit the generation of almost photorealistic volume renderings of CT datasets. The objective of this study is to present and compare a variety of radiologic findings to illustrate the differences between and the advantages and limitations of the current VRT and the physically based cinematic rendering technique (CRT). Seventy volunteers were shown VRT and CRT reconstructions of 10 different cases. They were asked to mark the findings on the images and rate them in terms of realism and understandability. A total of 48 of the 70 questionnaires were returned and included in the analysis. On the basis of most of the findings presented, CRT appears to be equal or superior to VRT with respect to the realism and understandability of the visualized findings. Overall, in terms of realism, the difference between the techniques was statistically significant (p < 0.05). CRT, which is similar to conventional VRT, is not primarily intended for diagnostic radiologic image analysis, and therefore it should be used primarily as a tool to deliver visual information in the form of radiologic image reports. Using CRT for forensic visualization might have advantages over using VRT if conveying a high degree of visual realism is of importance. Most of the shortcomings of CRT have to do with the software being an early prototype.

  8. Three-dimensional visualization of ensemble weather forecasts – Part 1: The visualization tool Met.3D (version 1.0

    Directory of Open Access Journals (Sweden)

    M. Rautenhaus

    2015-07-01

    Full Text Available We present "Met.3D", a new open-source tool for the interactive three-dimensional (3-D) visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is applicable to further forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output – 3-D visualization, ensemble visualization and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2-D visualization methods commonly used in meteorology to 3-D visualization by combining both visualization types in a 3-D context. We address the issue of spatial perception in the 3-D view and present approaches to using the ensemble to allow the user to assess forecast uncertainty. Interactivity is key to our approach. Met.3D uses modern graphics technology to achieve interactive visualization on standard consumer hardware. The tool supports forecast data from the European Centre for Medium Range Weather Forecasts (ECMWF) and can operate directly on ECMWF hybrid sigma-pressure level grids. We describe the employed visualization algorithms, and analyse the impact of the ECMWF grid topology on computing 3-D ensemble statistical quantities. Our techniques are demonstrated with examples from the T-NAWDEX-Falcon 2012 (THORPEX – North Atlantic Waveguide and Downstream Impact Experiment) campaign.
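
    The kind of per-grid-point ensemble statistics a tool like Met.3D derives (mean, spread, exceedance probability) can be sketched in a few lines; the snippet below uses a synthetic ensemble array rather than real ECMWF fields and ignores the hybrid sigma-pressure grid topology discussed in the paper.

    ```python
    # Illustrative per-grid-point ensemble statistics on a synthetic ensemble.
    import numpy as np

    # Synthetic ensemble: 50 members on a (level, lat, lon) grid, e.g. wind speed.
    rng = np.random.default_rng(0)
    ensemble = rng.normal(loc=20.0, scale=5.0, size=(50, 10, 90, 180))

    ens_mean = ensemble.mean(axis=0)            # ensemble mean field
    ens_spread = ensemble.std(axis=0)           # ensemble spread (standard deviation)
    p_exceed = (ensemble > 25.0).mean(axis=0)   # probability of exceeding a threshold

    print(ens_mean.shape, ens_spread.shape, p_exceed.max())
    ```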

  9. Model-Based Synthesis of Visual Speech Movements from 3D Video

    Directory of Open Access Journals (Sweden)

    Edge JamesD

    2009-01-01

    Full Text Available We describe a method for the synthesis of visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured using a 3D stereo face capture system and split up into phonetic units. A dynamic parameterisation of this data is constructed which maintains the relationship between lip shapes and velocities; within this parameterisation a model of how lips move is built and is used in the animation of visual speech movements from speech audio input. The mapping from audio parameters to lip movements is disambiguated by selecting only the most similar stored phonetic units to the target utterance during synthesis. By combining properties of model-based synthesis (e.g., HMMs, neural nets) with unit selection we improve the quality of our speech synthesis.

  10. Impacts of a CAREER Award on Advancing 3D Visualization in Geology Education

    Science.gov (United States)

    Billen, M. I.

    2011-12-01

    CAREER awards provide a unique opportunity to develop educational activities as an integrated part of one's research activities. This CAREER award focused on developing interactive 3D visualization tools to aid geology students in improving their 3D visualization skills. Not only is this a key skill for field geologists who need to visualize unseen subsurface structures, but it is also an important aspect of geodynamic research into the processes, such as faulting and viscous flow, that occur during subduction. Working with an undergraduate student researcher and using the KeckCAVES developed volume visualization code 3DVisualizer, we have developed interactive visualization laboratory exercises (e.g., Discovering the Rule of Vs) and a suite of mini-exercises using illustrative 3D geologic structures (e.g., syncline, thrust fault) that students can explore (e.g., rotate, slice, cut-away) to understand how exposure of these structures at the surface can provide insight into the subsurface structure. These exercises have been integrated into the structural geology curriculum and made available on the web through the KeckCAVES Education website as both data-and-code downloads and pre-made movies. One of the main challenges of implementing research and education activities through the award is that progress must be made on both throughout the award period. Therefore, while our original intent was to use subduction model output as the structures in the educational models, delays in the research results required that we develop these models using other simpler input data sets. These delays occurred because one of the other goals of the CAREER grant is to allow the faculty to take their research in a new direction, which may certainly lead to transformative science, but can also lead to more false-starts as the challenges of doing the new science are overcome. However, having created the infrastructure for the educational components, use of the model results in future

  11. 3D Visualization of Trees Based on a Sphere-Board Model

    Directory of Open Access Journals (Sweden)

    Jiangfeng She

    2018-01-01

    Full Text Available Because of the smooth interaction of tree systems, the billboard and crossed-plane techniques of image-based rendering (IBR) have been used for tree visualization for many years. However, both the billboard-based tree model (BBTM) and the crossed-plane tree model (CPTM) have several notable limitations; for example, they give an impression of slicing when viewed from the top side, and they produce an unimpressive stereoscopic effect and insufficient lighting effects. In this study, a sphere-board-based tree model (SBTM) is proposed to eliminate these defects and to improve the final visual effects. Compared with the BBTM or CPTM, the proposed SBTM uses one or more sphere-like 3D geometric surfaces covered with a virtual texture, which can present more details of the foliage than 2D planes can, to represent the 3D outline of a tree crown. However, the profile edge presented by a continuous surface is overly smooth and regular, and when used to delineate the outline of a tree crown, it makes the tree appear very unrealistic. To overcome this shortcoming and achieve a more natural final visual effect of the tree model, an additional process is applied to the edge of the surface profile. In addition, the SBTM can better support lighting effects because of its cubic geometrical features. Interactive visualization effects for a single tree and a grove are presented in a case study of Sabina chinensis. The results show that the SBTM can achieve a better compromise between realism and performance than can the BBTM or CPTM.

  12. Virtual reality hardware for use in interactive 3D data fusion and visualization

    Science.gov (United States)

    Gourley, Christopher S.; Abidi, Mongi A.

    1997-09-01

    Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide giving a variable field-of-view currently set at 160 degrees. A silicon graphics Indigo2 MaxImpact with the impact channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for use in displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package which has built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture mapped model of the environment. A dataglove, magnetic tracker, and spaceball are to be used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.

  13. FaceWarehouse: a 3D facial expression database for visual computing.

    Science.gov (United States)

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
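
    A bilinear model of the kind described above can be evaluated by contracting the rank-3 tensor with identity and expression weight vectors. The sketch below uses a random stand-in for the learned core tensor and reduced dimensions, so the numbers are assumptions for illustration only.

    ```python
    # Sketch of evaluating a bilinear face model: a rank-3 tensor
    # (vertex coordinates x identities x expressions) contracted with identity and
    # expression weights. The tensor is random and the sizes are reduced assumptions.
    import numpy as np

    n_verts, n_id, n_exp = 3 * 2000, 30, 20       # assumed, reduced dimensions
    core = np.random.rand(n_verts, n_id, n_exp)   # stand-in for the learned core tensor

    w_id = np.full(n_id, 1.0 / n_id)              # identity weights (the "mean" identity)
    w_exp = np.zeros(n_exp)
    w_exp[0] = 1.0                                # expression weights (neutral expression)

    # Contract over the identity mode, then over the expression mode.
    face = np.tensordot(np.tensordot(core, w_id, axes=([1], [0])), w_exp, axes=([1], [0]))
    vertices = face.reshape(-1, 3)                # per-vertex xyz positions of the mesh
    print(vertices.shape)
    ```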

  14. Efficient data exchange: Integrating a vector GIS with an object-oriented, 3-D visualization system

    International Nuclear Information System (INIS)

    Kuiper, J.; Ayers, A.; Johnson, R.; Tolbert-Smith, M.

    1996-01-01

    A common problem encountered in Geographic Information System (GIS) modeling is the exchange of data between different software packages to best utilize the unique features of each package. This paper describes a project to integrate two systems through efficient data exchange. The first is a widely used GIS based on a relational data model. This system has a broad set of data input, processing, and output capabilities, but lacks three-dimensional (3-D) visualization and certain modeling functions. The second system is a specialized object-oriented package designed for 3-D visualization and modeling. Although this second system is useful for subsurface modeling and hazardous waste site characterization, it does not provide many of the capabilities of a complete GIS. The system-integration project resulted in an easy-to-use program to transfer information between the systems, making many of the more complex conversion issues transparent to the user. The strengths of both systems are accessible, allowing the scientist more time to focus on analysis. This paper details the capabilities of the two systems, explains the technical issues associated with data exchange and how they were solved, and outlines an example analysis project that used the integrated systems

  15. 3D pattern of brain atrophy in HIV/AIDS visualized using tensor-based morphometry

    Science.gov (United States)

    Chiang, Ming-Chang; Dutton, Rebecca A.; Hayashi, Kiralee M.; Lopez, Oscar L.; Aizenstein, Howard J.; Toga, Arthur W.; Becker, James T.; Thompson, Paul M.

    2011-01-01

    35% of HIV-infected patients have cognitive impairment, but the profile of HIV-induced brain damage is still not well understood. Here we used tensor-based morphometry (TBM) to visualize brain deficits and clinical/anatomical correlations in HIV/AIDS. To perform TBM, we developed a new MRI-based analysis technique that uses fluid image warping, and a new α-entropy-based information-theoretic measure of image correspondence, called the Jensen–Rényi divergence (JRD). Methods 3D T1-weighted brain MRIs of 26 AIDS patients (CDC stage C and/or 3 without HIV-associated dementia; 47.2 ± 9.8 years; 25M/1F; CD4+ T-cell count: 299.5 ± 175.7/µl; log10 plasma viral load: 2.57 ± 1.28 RNA copies/ml) and 14 HIV-seronegative controls (37.6 ± 12.2 years; 8M/6F) were fluidly registered by applying forces throughout each deforming image to maximize the JRD between it and a target image (from a control subject). The 3D fluid registration was regularized using the linearized Cauchy–Navier operator. Fine-scale volumetric differences between diagnostic groups were mapped. Regions were identified where brain atrophy correlated with clinical measures. Results Severe atrophy (~15–20% deficit) was detected bilaterally in the primary and association sensorimotor areas. Atrophy of these regions, particularly in the white matter, correlated with cognitive impairment (P=0.033) and CD4+ T-lymphocyte depletion (P=0.005). Conclusion TBM facilitates 3D visualization of AIDS neuropathology in living patients scanned with MRI. Severe atrophy in frontoparietal and striatal areas may underlie early cognitive dysfunction in AIDS patients, and may signal the imminent onset of AIDS dementia complex. PMID:17035049
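
    For intuition about the Jensen–Rényi divergence used as the image-correspondence measure, the sketch below computes its simple discrete form between two normalized histograms; the full method applies it to image intensity statistics within a fluid registration, which is not reproduced here.

    ```python
    # Jensen-Renyi divergence between two discrete distributions (1-D illustration only).
    import numpy as np

    def renyi_entropy(p, alpha=2.0):
        p = p / p.sum()
        return np.log((p ** alpha).sum()) / (1.0 - alpha)

    def jensen_renyi(p, q, alpha=2.0):
        p, q = p / p.sum(), q / q.sum()
        m = 0.5 * (p + q)                       # equal-weight mixture of the two
        return renyi_entropy(m, alpha) - 0.5 * (renyi_entropy(p, alpha) + renyi_entropy(q, alpha))

    hist_a = np.array([10, 40, 30, 20], dtype=float)   # stand-in intensity histograms
    hist_b = np.array([25, 25, 25, 25], dtype=float)
    print(jensen_renyi(hist_a, hist_b))
    ```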

  16. A 3D visualization of spatial relationship between geological structure and groundwater chemical profile around Iwate volcano, Japan: based on the ARCGIS 3D Analyst

    Science.gov (United States)

    Shibahara, A.; Ohwada, M.; Itoh, J.; Kazahaya, K.; Tsukamoto, H.; Takahashi, M.; Morikawa, N.; Takahashi, H.; Yasuhara, M.; Inamura, A.; Oyama, Y.

    2009-12-01

    We established a 3D geological and hydrological model around Iwate volcano to visualize the 3D relationships between subsurface structure and groundwater profiles. Iwate volcano is a typical polygenetic volcano located in NE Japan, and its body is composed of two stratovolcanoes which have experienced sector collapses several times. Because of this complex structure, groundwater flow around Iwate volcano is strongly restricted by the subsurface construction. For example, Kazahaya and Yasuhara (1999) clarified that the shallow groundwater in the north and east flanks of Iwate volcano is recharged at the mountaintop, and that these flow systems are restricted to the north and east areas because of the structure of the younger volcanic body collapse. In addition, Ohwada et al. (2006) found that the shallow groundwater in the north and east flanks has relatively high concentrations of major chemical components and high 3He/4He ratios. In this study, we succeeded in visualizing the spatial relationship between subsurface structure and the chemical profiles of the shallow and deep groundwater systems using a 3D model on the GIS. In the study region, a number of geological and hydrological datasets, such as boring log data and groundwater chemical profiles, have been reported. All these paper data were digitized, converted to meshed data on the GIS, and plotted in three-dimensional space to visualize their spatial distribution. We also input the digital elevation model (DEM) around Iwate volcano issued by the Geographical Survey Institute of Japan, and digital geological maps issued by the Geological Survey of Japan, AIST. All 3D models are converted into VRML format and can be used as a versatile dataset on a personal computer.

  17. Emotions and personality traits as high-level factors in visual attention: a review

    Directory of Open Access Journals (Sweden)

    Kai Kaspar

    2012-11-01

    Full Text Available The visual sense has outstanding significance for human perception and behavior, and visual attention plays a central role in the processing of the sensory input. Thereby, multiple low- and high-level factors contribute to the guidance of attention. The present review focuses on two neglected high-level factors: emotion and personality. The review starts with an overview of different models of attention, providing a conceptual framework and illustrating the nature of low- and high-level factors in visual attention. Then, the ambiguous concept of emotion is described, and recommendations are made for the experimental practice. In the following, we present several studies showing the influence of emotion on overt attention, whereby the distinction between internally and externally located emotional impacts is emphasized. We also provide evidence showing that emotional stimuli influence perceptual processing outside of the focus of attention, whereby results in this field are mixed. Then, we present some detached studies showing the reversed causal effect: attention can also affect emotional responses. The final section on emotion–attention interactions addresses the interplay on the neuronal level, which has been neglected for a long time in neuroscience. In this context, several conceptual recommendations for future research are made. Finally, based on findings showing inter-individual differences in human sensitivity to emotional items, we introduce the wide range of time-independent personality traits that also influence attention, and in this context we try to raise awareness of the consideration of inter-individual differences in the field of neuroscience.

  18. Thickness and clearance visualization based on distance field of 3D objects

    Directory of Open Access Journals (Sweden)

    Masatomo Inui

    2015-07-01

    Full Text Available This paper proposes a novel method for visualizing the thickness and clearance of 3D objects in a polyhedral representation. The proposed method uses the distance field of the objects in the visualization. A parallel algorithm is developed for constructing the distance field of polyhedral objects using the GPU. The distance between a voxel and the surface polygons of the model is computed many times in the distance field construction. Similar sets of polygons are usually selected as close polygons for close voxels. By using this spatial coherence, a parallel algorithm is designed to compute the distances between a cluster of close voxels and the polygons selected by the culling operation so that the fast shared memory mechanism of the GPU can be fully utilized. The thickness/clearance of the objects is visualized by distributing points on the visible surfaces of the objects and painting them with a unique color corresponding to the thickness/clearance values at those points. A modified ray casting method is developed for computing the thickness/clearance using the distance field of the objects. A system based on these algorithms can compute the distance field of complex objects within a few minutes for most cases. After the distance field construction, thickness/clearance visualization at a near interactive rate is achieved.
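
    A CPU-side sketch of the central data structure, the distance field, is shown below: surface sample points and a k-d tree stand in for the per-polygon GPU computation with culling described above, and the object itself is random placeholder data.

    ```python
    # CPU sketch of building an unsigned distance field on a voxel grid from points
    # sampled on an object's surface (a simplification of the GPU method above).
    import numpy as np
    from scipy.spatial import cKDTree

    surface_pts = np.random.rand(5000, 3)          # stand-in surface samples of the object

    # Voxel grid covering the unit cube.
    n = 64
    axis = (np.arange(n) + 0.5) / n
    gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
    voxels = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])

    dist, _ = cKDTree(surface_pts).query(voxels)   # nearest-surface distance per voxel
    distance_field = dist.reshape(n, n, n)

    # A crude local thickness proxy: twice the distance from an interior voxel to the
    # surface (real thickness needs the modified ray casting described in the paper).
    print(distance_field.max(), 2.0 * distance_field[n // 2, n // 2, n // 2])
    ```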

  19. Development of tactile floor plan for the blind and the visually impaired by 3D printing technique

    Directory of Open Access Journals (Sweden)

    Raša Urbas

    2016-07-01

    Full Text Available The aim of the research was to produce tactile floor plans for blind and visually impaired people for use in the museum. For the production of the tactile floor plans, 3D printing was selected from among three different techniques. The 3D prints were made of white and colored ABS polymer materials. The development of the different elements of the tactile floor plans, as well as the problems encountered and the solutions found during 3D printing, is described in the paper.

  20. 3D visualization of the initial Yersinia ruckeri infection route in rainbow trout (Oncorhynchus mykiss) by optical projection tomography

    DEFF Research Database (Denmark)

    Otani, Maki; Villumsen, Kasper Rømer; Kragelund Strøm, Helene

    2014-01-01

    Optical projection tomography (OPT), a novel three-dimensional (3D) bio-imaging technique, was applied. OPT not only enables the visualization of Y. ruckeri on mucosal surfaces but also the 3D spatial distribution in whole organs, without sectioning. Rainbow trout were infected by bath challenge exposure...

  1. Developing a 3D Game Design Authoring Package to Assist Students' Visualization Process in Design Thinking

    Science.gov (United States)

    Kuo, Ming-Shiou; Chuang, Tsung-Yen

    2013-01-01

    The teaching of 3D digital game design requires the development of students' meta-skills, from story creativity to 3D model construction, and even the visualization process in design thinking. The characteristics a good game designer should possess have been identified as including redesign things, creativity thinking and the ability to…

  2. On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments

    Science.gov (United States)

    Çöltekin, A.; Lokka, I.; Zahner, M.

    2016-06-01

    Whether and when should we show data in 3D is an on-going debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary/unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D as it allows `seeing' invisible phenomena, or designing and printing things that are used in e.g., surgeries, educational settings etc. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been sufficiently conducted in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with similar motivations as in Scivis community. Among many types of 3D visualizations, a popular one that is exploited both for visual analysis and visualization is the highly realistic (geo)virtual environments. Such environments may be engaging and memorable for the viewers because they offer highly immersive experiences. However, it is not yet well-established if we should opt to show the data in 3D; and if yes, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.

  3. An amalgamation of 3D city models in urban air quality modelling for improving visual impact analysis

    DEFF Research Database (Denmark)

    Ujang, U.; Anton, F.; Ariffin, A.

    2015-01-01

    is predominantly vehicular engines, the situation will become worse when pollutants are trapped between buildings, disperse inside the street canyon and move vertically to create a recirculation vortex. Studying and visualizing the recirculation zone in 3D visualization is conceivable by using 3D city models ... engineers and policy makers to design the street geometry (building height and width, green areas, pedestrian walks, road width, etc.) ...

  4. Regional subsidence history and 3D visualization with MATLAB of the Vienna Basin, central Europe

    Science.gov (United States)

    Lee, E.; Novotny, J.; Wagreich, M.

    2013-12-01

    This study reconstructed the subsidence history using backstripping and 3D visualization techniques in order to understand the tectonic evolution of the Neogene Vienna Basin. Backstripping removes the compaction effect of sediment loading and quantifies the tectonic subsidence. The amount of decompaction was calculated from porosity-depth relationships evaluated from seismic velocity data acquired from two boreholes. About 100 wells have been investigated to quantify the subsidence history of the Vienna Basin. The wells have been sorted into 10 groups; N1-4 in the northern part, C1-4 in the central part and L1-2 in the northernmost and easternmost parts, based on their position within the same block bordered by major faults. To visualize 3D subsidence maps, the wells were arranged into a set of 3D points based on their map location (x, y) and depths (z1, z2, z3 ...). The division of the stratigraphic column and age range was arranged based on the Central Paratethys regional Stages. In this study, MATLAB, a numerical computing environment, was used to calculate the TPS interpolation function. The Thin-Plate Spline (TPS) can be employed to reconstruct a smooth surface from a set of 3D points. The basic physical model of the TPS is based on the bending behavior of a thin metal sheet that is constrained only by a sparse set of fixed points. In the Lower Miocene, 3D subsidence maps show strong evidence that the pre-Neogene basement of the Vienna Basin was subsiding along the borders of the Alpine-Carpathian nappes. This subsidence event is represented by a piggy-back basin developed on top of the NW-ward moving thrust sheets. In the late Lower Miocene, Groups C and N display a typical subsidence pattern for a pull-apart basin with a very high subsidence event (0.2 - 1.0 km/Ma). After the event, Group N shows remarkably decreasing subsidence, following the thin-skinned extension which was regarded as the extension model of the Vienna Basin in the literature. But the subsidence in
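
    The thin-plate-spline step can be sketched as follows; the authors used MATLAB, whereas this illustration uses SciPy's thin-plate-spline radial basis interpolator, and the well coordinates and horizon depths are synthetic stand-ins.

    ```python
    # Thin-plate-spline surface reconstruction from scattered well points, sketched
    # with SciPy. Well coordinates and horizon depths below are synthetic stand-ins.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(1)
    wells_xy = rng.uniform(0, 50_000, size=(100, 2))                # map coordinates (m)
    depth = 200 + 0.01 * wells_xy[:, 0] + rng.normal(0, 20, 100)    # horizon depth (m)

    tps = RBFInterpolator(wells_xy, depth, kernel="thin_plate_spline")

    # Evaluate the smooth surface on a regular grid for mapping/visualization.
    gx, gy = np.meshgrid(np.linspace(0, 50_000, 200), np.linspace(0, 50_000, 200))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    surface = tps(grid).reshape(gx.shape)
    print(surface.shape)
    ```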

  5. Design and implementation of a 3D ocean virtual reality and visualization engine

    Science.gov (United States)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean aim at high fidelity simulation of ocean environment, visualization of massive and multidimensional marine data, and imitation of marine lives. VV-Ocean is composed of five modules, i.e. memory management module, resources management module, scene management module, rendering process management module and interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, imitating and simulating marine lives intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce drifting and diffusion processes of oil spilling from sea bottom to surface. Environment factors such as ocean current and wind field have been considered in this simulation. On this platform oil spilling process can be abstracted as movements of abundant oil particles. The result shows that oil particles blend with water well and the platform meets the requirement for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
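
    The particle abstraction of oil drift mentioned above can be sketched as a simple advection-diffusion update; the current, wind-drift factor and diffusivity below are illustrative assumptions rather than values used in VV-Ocean.

    ```python
    # Minimal particle view of oil-spill drift: each particle is advected by an
    # ambient current plus a fraction of the wind, with a random diffusion term.
    # All coefficients are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(42)
    particles = np.zeros((10_000, 2))        # x, y positions of oil particles (m)

    current = np.array([0.30, 0.10])         # ambient current (m/s), assumed uniform
    wind = np.array([8.0, -2.0])             # wind at 10 m (m/s)
    wind_drift = 0.03                        # fraction of wind speed moving the slick
    diffusivity = 1.0                        # horizontal diffusivity (m^2/s)
    dt = 60.0                                # time step (s)

    for _ in range(360):                     # six hours of drift
        drift = (current + wind_drift * wind) * dt
        jitter = rng.normal(0.0, np.sqrt(2.0 * diffusivity * dt), particles.shape)
        particles += drift + jitter

    print(particles.mean(axis=0), particles.std(axis=0))
    ```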

  6. Detecting and visualizing internal 3D oleoresin in agarwood by means of micro-computed tomography

    International Nuclear Information System (INIS)

    Khairiah Yazid; Roslan Yahya; Mat Rosol Awang

    2012-01-01

    Detection and analysis of oleoresin is particularly significant since the commercial value of agarwood is related to the quantity of oleoresins present. A modern non-destructive technique can reach the interior region of the wood. Currently, tomographic image data in particular are most commonly visualized in three dimensions using volume rendering. The aim of this paper is to explore the potential of a high-resolution non-destructive 3D visualization technique, X-ray micro-computed tomography, as an imaging tool to visualize the oleoresin micro-structure in agarwood. Investigations involving a desktop X-ray micro-tomography system on a high-grade agarwood sample, performed at the Centre of Tomography in Nuclear Malaysia, demonstrate the applicability of the method. Prior to the experiments, a reference test was conducted to simulate the attenuation of oleoresin in agarwood. Based on the experimental results, micro-CT imaging with a voxel size of 7.0 μm is capable of detecting oleoresin and pores in agarwood. This imaging technique, although sophisticated, can be used for standards development, especially in the grading of agarwood for commercial activities. (author)

  7. Exposure to organic solvents used in dry cleaning reduces low and high level visual function.

    Directory of Open Access Journals (Sweden)

    Ingrid Astrid Jiménez Barbosa

    significantly higher and almost double that obtained from non-dry-cleaners. However, reaction time performance on both parallel and serial visual search did not differ between dry cleaners and non-dry-cleaners. Exposure to occupational levels of organic solvents is associated with neurotoxicity, which is in turn associated with both low-level deficits (such as the perception of contrast and discrimination of colour) and high-level visual deficits (such as the perception of global form and motion), but not with visual search performance. The latter finding indicates that the deficits in visual function are unlikely to be due to changes in general cognitive performance.

  8. CALCULATING SOLAR ENERGY POTENTIAL OF BUILDINGS AND VISUALIZATION WITHIN UNITY 3D GAME ENGINE

    Directory of Open Access Journals (Sweden)

    G. Buyuksalih

    2017-10-01

    Full Text Available Solar energy modelling is increasingly popular, important, and economically significant in addressing the energy crisis facing big cities. It is a clean and renewable resource of energy that can be utilized to supply electrical power for individual buildings or groups of buildings, as well as for indoor heating. Implementing photovoltaic systems (PV) in urban areas is one of the best options to address the power crisis arising from urban expansion and population growth. However, as the spaces for solar panel installation in cities are becoming limited nowadays, the available strategic options are only the rooftop and façade of the building. Thus, accurate information and selecting the buildings with the highest potential amount of collected solar energy are essential for energy planning, environmental conservation, and sustainable development of the city. Estimating the solar energy/radiation from rooftops and façades does, however, have a limitation - the shadows cast by neighbouring buildings. The implementation of this solar estimation project for Istanbul uses CityGML LoD2-LoD3. The model and analyses were carried out using the Unity 3D game engine with the development of several customized tools and functionalities. The results show the estimation of the potential solar energy received for the whole area per day, week, month and year, so that decisions on installing solar panels can be made. We strongly believe the Unity game engine platform could be utilized for 3D mapping and visualization purposes in the near future.

  9. Calculating Solar Energy Potential of Buildings and Visualization Within Unity 3d Game Engine

    Science.gov (United States)

    Buyuksalih, G.; Bayburt, S.; Baskaraca, A. P.; Karim, H.; Rahman, A. Abdul

    2017-10-01

    Solar energy modelling is increasingly popular, important, and economically significant in addressing the energy crisis facing big cities. It is a clean and renewable resource of energy that can be utilized to supply electrical power for individual buildings or groups of buildings, as well as for indoor heating. Implementing photovoltaic systems (PV) in urban areas is one of the best options to address the power crisis arising from urban expansion and population growth. However, as the spaces for solar panel installation in cities are becoming limited nowadays, the available strategic options are only the rooftop and façade of the building. Thus, accurate information and selecting the buildings with the highest potential amount of collected solar energy are essential for energy planning, environmental conservation, and sustainable development of the city. Estimating the solar energy/radiation from rooftops and façades does, however, have a limitation - the shadows cast by neighbouring buildings. The implementation of this solar estimation project for Istanbul uses CityGML LoD2-LoD3. The model and analyses were carried out using the Unity 3D game engine with the development of several customized tools and functionalities. The results show the estimation of the potential solar energy received for the whole area per day, week, month and year, so that decisions on installing solar panels can be made. We strongly believe the Unity game engine platform could be utilized for 3D mapping and visualization purposes in the near future.
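
    The per-face quantity such an analysis accumulates can be sketched as the projection of direct irradiance onto a roof normal; the snippet below ignores shadowing from neighbouring buildings (the very limitation discussed above) and all numeric values are assumptions.

    ```python
    # Instantaneous irradiance on a tilted roof face from a sun direction and a
    # surface normal, with no shadowing. All numbers are illustrative assumptions.
    import numpy as np

    def irradiance_on_surface(sun_dir, surface_normal, dni=800.0):
        """Direct-normal irradiance projected onto a surface (W/m^2), no shading."""
        sun = sun_dir / np.linalg.norm(sun_dir)
        nrm = surface_normal / np.linalg.norm(surface_normal)
        return dni * max(np.dot(sun, nrm), 0.0)

    sun_dir = np.array([0.3, 0.5, 0.81])        # direction towards the sun
    roof_normal = np.array([0.0, 0.26, 0.97])   # roof face tilted roughly 15 degrees

    power = irradiance_on_surface(sun_dir, roof_normal) * 12.0   # 12 m^2 of panel
    print(f"instantaneous power = {power:.0f} W")
    ```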

  10. PointCloudExplore 2: Visual exploration of 3D gene expression

    Energy Technology Data Exchange (ETDEWEB)

    International Research Training Group Visualization of Large and Unstructured Data Sets, University of Kaiserslautern, Germany; Institute for Data Analysis and Visualization, University of California, Davis, CA; Computational Research Division, Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA; Genomics Division, LBNL; Computer Science Department, University of California, Irvine, CA; Computer Science Division, University of California, Berkeley, CA; Life Sciences Division, LBNL; Department of Molecular and Cellular Biology and the Center for Integrative Genomics, University of California, Berkeley, CA; Ruebel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Keranen, Soile V.E.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; DePace, Angela H.; Simirenko, L.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2008-03-31

    To better understand how developmental regulatory networks are defined in the genome sequence, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods to describe 3D gene expression data, i.e., the output of the network at cellular resolution for multiple time points. To allow researchers to explore these novel data sets we have developed PointCloudXplore (PCX). In PCX we have linked physical and information visualization views via the concept of brushing (cell selection). For each view dedicated operations for performing selection of cells are available. In PCX, all cell selections are stored in a central management system. Cells selected in one view can in this way be highlighted in any view, allowing further cell subset properties to be determined. Complex cell queries can be defined by combining different cell selections using logical operations such as AND, OR, and NOT. Here we are going to provide an overview of PointCloudXplore 2 (PCX2), the latest publicly available version of PCX. PCX2 has proven to be an effective tool for visual exploration of 3D gene expression data. We discuss (i) all views available in PCX2, (ii) different strategies to perform cell selection, (iii) the basic architecture of PCX2, and (iv) illustrate the usefulness of PCX2 using selected examples.
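
    The selection algebra described above (combining brushed cell sets with AND, OR and NOT) reduces to boolean masks over a common cell index; the sketch below uses synthetic expression values, and the gene names are illustrative assumptions.

    ```python
    # Combining cell selections with logical operations over boolean masks.
    # Expression values are synthetic; gene names are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(7)
    n_cells = 6000
    expr_eve = rng.random(n_cells)      # stand-in expression levels for two genes
    expr_ftz = rng.random(n_cells)

    sel_high_eve = expr_eve > 0.8       # selection defined in one view
    sel_high_ftz = expr_ftz > 0.8       # selection defined in another view

    both = sel_high_eve & sel_high_ftz          # AND
    either = sel_high_eve | sel_high_ftz        # OR
    eve_only = sel_high_eve & ~sel_high_ftz     # AND NOT

    print(both.sum(), either.sum(), eve_only.sum())
    ```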

  11. A 3-D mixed-reality system for stereoscopic visualization of medical dataset.

    Science.gov (United States)

    Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco

    2009-11-01

    We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to see preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," which are created by mixing 3-D virtual models of anatomies obtained by processing preoperative volumetric radiological images (computed tomography or MRI) with real patient live images, grabbed by means of cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. Cameras are mounted in correspondence of the user's eyes and allow one to grab live images of the patient with the same point of view of the user. The system does not use any external tracker to detect movements of the user or the patient. The movements of the user's head and the alignment of virtual patient with the real one are done using machine vision methods applied on pairs of live images. Experimental results, concerning frame rate and alignment precision between virtual and real patient, demonstrate that machine vision methods used for localization are appropriate for the specific application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.

  12. VAP3D: a software for dosimetric analysis and visualization of phantoms

    International Nuclear Information System (INIS)

    Lima, Lindeval Fernandes de; Lima, Fernando Roberto de Andrade

    2011-01-01

    The anthropomorphic models used in computational dosimetry of ionizing radiation, usually called voxel phantoms, are produced from stacks of CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) images obtained from patient or volunteer scanning. These phantoms provide the geometry to be irradiated in computational exposure arrangements, using a Monte Carlo code, allowing the estimation of the energy deposited in each voxel of the virtual body. From the data collected in the simulation, it is possible to evaluate the average absorbed dose in the various radiosensitive organs and tissues catalogued by the International Commission on Radiological Protection (ICRP). Therefore, a computational exposure model is constituted primarily by the Monte Carlo code, which simulates the transport, deposition and interaction of radiation, and the phantom being irradiated. The construction of voxel phantoms requires computational skills such as image format conversion, compilation of 2D images into a 3D image, quantization, resampling and image segmentation, among others. The computational dosimetry researcher rarely finds all these capabilities in a single piece of software, and this often results in a slowdown in the pace of their research or in the, sometimes inadequate, use of alternative tools. This paper presents VAP3D (Visualization and Analysis of Phantoms), a software package developed with Qt/VTK in C++ in order to operationalize some of the tasks mentioned above. The current version is based on the DIP (Digital Image Processing) software and contains the File, Conversions and Tools menus, through which the user interacts with the software. (author)

  13. 3D pattern of brain abnormalities in Williams syndrome visualized using tensor-based morphometry

    Science.gov (United States)

    Chiang, Ming-Chang; Reiss, Allan L.; Lee, Agatha D.; Bellugi, Ursula; Galaburda, Albert M.; Korenberg, Julie R.; Mills, Debra L.; Toga, Arthur W.; Thompson, Paul M.

    2009-01-01

    Williams syndrome (WS) is a neurodevelopmental disorder associated with deletion of ~20 contiguous genes in chromosome band 7q11.23. Individuals with WS exhibit mild to moderate mental retardation, but are relatively more proficient in specific language and musical abilities. We used tensor-based morphometry (TBM) to visualize the complex pattern of gray/white matter reductions in WS, based on fluid registration of structural brain images. Methods 3D T1-weighted brain MRIs of 41 WS subjects (age: 29.2±9.2SD years; 23F/18M) and 39 age-matched healthy controls (age: 27.5±7.4 years; 23F/16M) were fluidly registered to a minimum deformation target. Fine-scale volumetric differences were mapped between diagnostic groups. Local regions were identified where regional structure volumes were associated with diagnosis, and with intelligence quotient (IQ) scores. Brain asymmetry was also mapped and compared between diagnostic groups. Results WS subjects exhibited widely distributed brain volume reductions (~10–15% reduction; P < 0.0002, permutation test). After adjusting for total brain volume, the frontal lobes, anterior cingulate, superior temporal gyrus, amygdala, fusiform gyrus and cerebellum were found to be relatively preserved in WS, but parietal and occipital lobes, thalamus and basal ganglia, and midbrain were disproportionally decreased in volume (P < 0.0002). These regional volumes also correlated positively with performance IQ in adult WS subjects (age ≥ 30 years, P = 0.038). Conclusion TBM facilitates 3D visualization of brain volume reductions in WS. Reduced parietal/occipital volumes may be associated with visuospatial deficits in WS. By contrast, frontal lobes, amygdala, and cingulate gyrus are relatively preserved or even enlarged, consistent with unusual affect regulation and language production in WS. PMID:17512756

  14. Thoracic cavity definition for 3D PET/CT analysis and visualization.

    Science.gov (United States)

    Cheirsilp, Ronnarit; Bascom, Rebecca; Allen, Thomas W; Higgins, William E

    2015-07-01

    X-ray computed tomography (CT) and positron emission tomography (PET) serve as the standard imaging modalities for lung-cancer management. CT gives anatomical details on diagnostic regions of interest (ROIs), while PET gives highly specific functional information. During the lung-cancer management process, a patient receives a co-registered whole-body PET/CT scan pair and a dedicated high-resolution chest CT scan. With these data, multimodal PET/CT ROI information can be gleaned to facilitate disease management. Effective image segmentation of the thoracic cavity, however, is needed to focus attention on the central chest. We present an automatic method for thoracic cavity segmentation from 3D CT scans. We then demonstrate how the method facilitates 3D ROI localization and visualization in patient multimodal imaging studies. Our segmentation method draws upon digital topological and morphological operations, active-contour analysis, and key organ landmarks. Using a large patient database, the method showed high agreement to ground-truth regions, with a mean coverage=99.2% and leakage=0.52%. Furthermore, it enabled extremely fast computation. For PET/CT lesion analysis, the segmentation method reduced ROI search space by 97.7% for a whole-body scan, or nearly 3 times greater than that achieved by a lung mask. Despite this reduction, we achieved 100% true-positive ROI detection, while also reducing the false-positive (FP) detection rate by >5 times over that achieved with a lung mask. Finally, the method greatly improved PET/CT visualization by eliminating false PET-avid obscurations arising from the heart, bones, and liver. In particular, PET MIP views and fused PET/CT renderings depicted unprecedented clarity of the lesions and neighboring anatomical structures truly relevant to lung-cancer assessment. Copyright © 2015 Elsevier Ltd. All rights reserved.
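
    A toy version of the morphological portion of such a segmentation pipeline is sketched below; it thresholds a synthetic volume, keeps the largest connected component and closes holes, whereas the actual method also relies on active-contour analysis and organ landmarks not reproduced here.

    ```python
    # Toy morphological/topological segmentation steps on a synthetic CT-like volume.
    import numpy as np
    from scipy import ndimage

    ct = np.random.normal(-800, 50, size=(64, 128, 128))                 # fake air-filled volume
    ct[20:50, 30:100, 30:100] = np.random.normal(40, 30, (30, 70, 70))   # fake soft-tissue block

    body_mask = ct > -500                                         # threshold out air
    labels, _ = ndimage.label(body_mask)                          # connected components
    largest = labels == np.argmax(np.bincount(labels.ravel())[1:]) + 1
    cavity = ndimage.binary_fill_holes(largest)                   # close internal holes
    cavity = ndimage.binary_closing(cavity, structure=np.ones((3, 3, 3)))

    print(cavity.sum(), "voxels in the candidate region")
    ```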

  15. Using Computer-Aided Design Software and 3D Printers to Improve Spatial Visualization

    Science.gov (United States)

    Katsio-Loudis, Petros; Jones, Millie

    2015-01-01

    Many articles have been published on the use of 3D printing technology. From prefabricated homes and outdoor structures to human organs, 3D printing technology has found a niche in many fields, but especially education. With the introduction of AutoCAD technical drawing programs and now 3D printing, learners can use 3D printed models to develop…

  16. Research on fine management and visualization of ancient architectures based on integration of 2D and 3D GIS technology

    International Nuclear Information System (INIS)

    Jun, Yan; Shaohua, Wang; Jiayuan, Li; Qingwu, Hu

    2014-01-01

    Aimed at ancient architectures, which are characterized by huge data volumes, fine granularity and high precision, a fine 3D management and visualization method for ancient architectures based on the integration of 2D and 3D GIS is proposed. Firstly, after analysing the various data types and characteristics of digital ancient architectures, the main problems and key technologies in integrated 2D and 3D data management are discussed. Secondly, a data storage and indexing model for digital ancient architecture based on 2D and 3D GIS integration is designed, and integrated storage and management of 2D and 3D data is achieved. Then, through the study of a data retrieval method based on space-time indexing and a hierarchical object model of ancient architecture, 2D and 3D interaction with fine-grained 3D models of ancient architectures is achieved. Finally, taking the fine database of Liangyi Temple on Wudang Mountain as an example, a fine management and visualization prototype for integrated 2D and 3D digital ancient buildings of Liangyi Temple was built. The integrated management and visual analysis of a 10 GB fine-grained model of the ancient architecture was realized, and a new implementation method for the storage, browsing, reconstruction and architectural art research of ancient architecture models was provided
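
    The paper does not spell out its index structure; one plausible way to sketch integrated indexing of fine-grained 3D components is a 3D R-tree, shown below with the rtree package and made-up bounding boxes for hypothetical building elements.

    ```python
    # One plausible indexing sketch for fine-grained 3D building components: a 3D
    # R-tree over axis-aligned bounding boxes, queried by a spatial window. The
    # component boxes below are made-up values for illustration only.
    from rtree import index

    p = index.Property()
    p.dimension = 3
    idx = index.Index(properties=p)

    # (id, (minx, miny, minz, maxx, maxy, maxz)) for some hypothetical temple components.
    components = {
        1: (0, 0, 0, 10, 8, 4),          # platform
        2: (1, 1, 4, 9, 7, 9),           # main hall walls
        3: (0.5, 0.5, 9, 9.5, 7.5, 12),  # roof structure
    }
    for cid, bbox in components.items():
        idx.insert(cid, bbox)

    # Retrieve every component intersecting a query volume (e.g. a view frustum box).
    hits = list(idx.intersection((0, 0, 8, 10, 8, 13)))
    print(hits)   # components whose bounding boxes intersect the query box
    ```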

  17. Art-Science-Technology collaboration through immersive, interactive 3D visualization

    Science.gov (United States)

    Kellogg, L. H.

    2014-12-01

    At the W. M. Keck Center for Active Visualization in Earth Sciences (KeckCAVES), a group of geoscientists and computer scientists collaborate to develop and use interactive, immersive 3D visualization technology to view, manipulate, and interpret data for scientific research. The visual impact of immersion in a CAVE environment can be extremely compelling, and from the outset KeckCAVES scientists have collaborated with artists to bring this technology to creative works, including theater and dance performance, installations, and gamification. The first full-fledged collaboration designed and produced a performance called "Collapse: Suddenly falling down", choreographed by Della Davidson, which investigated the human and cultural response to natural and man-made disasters. Scientific data (lidar scans of disaster sites, such as landslides and mine collapses) were fully integrated into the performance by the Sideshow Physical Theatre. This presentation will discuss both the technological and creative characteristics of, and lessons learned from, the collaboration. Many parallels between the artistic and scientific processes emerged. We observed that both artists and scientists set out to investigate a topic, solve a problem, or answer a question. Refining that question or problem is an essential part of both the creative and scientific workflow. Both artists and scientists seek understanding (in this case understanding of natural disasters). Differences also emerged; the group noted that the scientists sought clarity (including but not limited to quantitative measurements) as a means to understanding, while the artists embraced ambiguity, also as a means to understanding. Subsequent art-science-technology collaborations have responded to evolving technology for visualization and include gamification as a means to explore data, and use of augmented reality for informal learning in museum settings.

  18. Towards a gestural 3D interaction for tangible and three-dimensional GIS visualizations

    Science.gov (United States)

    Partsinevelos, Panagiotis; Agadakos, Ioannis; Pattakos, Nikolas; Maragakis, Michail

    2014-05-01

    The last decade has been characterized by a significant increase in spatially dependent applications that require storage, visualization, analysis and exploration of geographic information. GIS analysis of spatiotemporal geographic data is carried out by highly trained personnel using an abundance of software and tools that lack interoperability and friendly user interaction. Towards this end, new forms of querying and interaction are emerging, including gestural interfaces. Three-dimensional GIS representations refer to either tangible surfaces or projected representations. Making a 3D tangible geographic representation touch-sensitive may be a convenient solution, but such an approach raises the cost significantly and complicates the hardware and processing required to combine touch-sensitive material (for pinpointing points) with deformable material (for displaying elevations). In this study, a novel interaction scheme for a three-dimensional visualization of GIS data is proposed. While gesture user interfaces are not yet fully accepted due to inconsistencies and complexity, a non-tangible GIS system where 3D visualizations are projected calls for interactions that are based on three-dimensional, non-contact and gestural procedures. Towards these objectives, we use the Microsoft Kinect II system, which includes a time-of-flight camera, allowing for robust, real-time depth map generation along with the capture and translation of a variety of predefined gestures from different simultaneous users. By incorporating these features into our system architecture, we attempt to create a natural way for users to operate on GIS data. Apart from the conventional pan and zoom features, the key functions addressed for the 3-D user interface are the ability to pinpoint particular points, lines and areas of interest, such as destinations, waypoints, landmarks, closed areas, etc. The first results shown concern a projected GIS representation where the user selects points

  19. 3-D visualization of ensemble weather forecasts - Part 2: Forecasting warm conveyor belt situations for aircraft-based field campaigns

    Science.gov (United States)

    Rautenhaus, M.; Grams, C. M.; Schäfler, A.; Westermann, R.

    2015-02-01

    We present the application of interactive 3-D visualization of ensemble weather predictions to forecasting warm conveyor belt situations during aircraft-based atmospheric research campaigns. Motivated by forecast requirements of the T-NAWDEX-Falcon 2012 campaign, a method to predict 3-D probabilities of the spatial occurrence of warm conveyor belts has been developed. Probabilities are derived from Lagrangian particle trajectories computed on the forecast wind fields of the ECMWF ensemble prediction system. Integration of the method into the 3-D ensemble visualization tool Met.3D, introduced in the first part of this study, facilitates interactive visualization of WCB features and derived probabilities in the context of the ECMWF ensemble forecast. We investigate the sensitivity of the method with respect to trajectory seeding and forecast wind field resolution. Furthermore, we propose a visual analysis method to quantitatively analyse the contribution of ensemble members to a probability region and, thus, to assist the forecaster in interpreting the obtained probabilities. A case study, revisiting a forecast case from T-NAWDEX-Falcon, illustrates the practical application of Met.3D and demonstrates the use of 3-D and uncertainty visualization for weather forecasting and for planning flight routes in the medium forecast range (three to seven days before take-off).
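    A hedged sketch of the gridding step such probabilities imply is shown below: each ensemble member's warm-conveyor-belt trajectory points are binned onto a regular longitude-latitude-pressure grid, and each cell's probability is the fraction of members whose trajectories visit it. The grid spacing, data layout and example values are invented for illustration; this is not code from Met.3D or the ECMWF system.

```python
import numpy as np

def wcb_probability(member_points, lon_edges, lat_edges, p_edges):
    """member_points: one (N_i, 3) array of (lon, lat, pressure) WCB points per member."""
    shape = (len(lon_edges) - 1, len(lat_edges) - 1, len(p_edges) - 1)
    hits = np.zeros(shape)
    for pts in member_points:
        counts, _ = np.histogramdd(pts, bins=(lon_edges, lat_edges, p_edges))
        hits += (counts > 0)              # a member contributes at most once per cell
    return hits / len(member_points)      # fraction of members with a WCB in each cell

# Toy example: two ensemble members on a coarse grid
lon = np.arange(-60.0, 21.0, 10.0)
lat = np.arange(30.0, 71.0, 10.0)
prs = np.arange(200.0, 1001.0, 200.0)
members = [np.array([[-30.0, 50.0, 850.0], [-20.0, 55.0, 500.0]]),
           np.array([[-31.0, 51.0, 800.0]])]
probability = wcb_probability(members, lon, lat, prs)
```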

  20. 3D Visualization of Sheath Folds in Roman Marble from Ephesus, Turkey

    Science.gov (United States)

    Wex, Sebastian; Passchier, Cornelis W.; de Kemp, Eric A.; Ilhan, Sinan

    2013-04-01

    Excavation of a palatial 2nd century AD house (Terrace House Two) in the ancient city of Ephesus, Turkey in the 1970s produced 10,313 pieces of colored, folded marble which belonged to 54 marble plates of 1.6 cm thickness that originally covered the walls of the banquet hall of the house. The marble plates were completely reassembled and restored by a team of workers over the last 6 years. The plates were recognized as having been sawn from two separate large blocks of "Cipollino verde", a green mylonitized marble from Karystos on the Island of Euboea, Greece. After restoration, it became clear that all slabs had been placed on the wall in approximately the sequence in which they had been cut off by a Roman stone saw. As a result, the marble plates give full 3D insight into the folded internal structure of a 1 m³ block of mylonite. The restoration of the slabs was recognized as a first, unique opportunity for detailed reconstruction of the 3D geometry of m-scale folds in mylonitized marble. Photographs were taken of each slab and used to reconstruct their exact arrangement within the originally quarried blocks. Outlines of layers were digitized and a full 3D reconstruction of the internal structure of the block was created using ArcMap and GOCAD. Fold structures in the block include curtain folds and multilayered sheath folds. Several different layers showing these structures were digitized on the photographs of the slab surfaces and virtually mounted back together within the model of the marble block. Due to the serial sectioning into slabs, with cm-scale spacing, the visualization of the 3D geometry of sheath folds was accomplished with a resolution better than 4 cm. Final assembled 3D images reveal how sheath folds emerge from continuous layers and show their overall consistency as well as a constant hinge line orientation of the fold structures. Observations suggest that a single deformation phase was responsible for the evolution of "Cipollino verde" structures

  1. Webs on the Web (WOW): 3D visualization of ecological networks on the WWW for collaborative research and education

    Science.gov (United States)

    Yoon, Ilmi; Williams, Rich; Levine, Eli; Yoon, Sanghyuk; Dunne, Jennifer; Martinez, Neo

    2004-06-01

    This paper describes information technology being developed to improve the quality, sophistication, accessibility, and pedagogical simplicity of ecological network data, analysis, and visualization. We present designs for a WWW demonstration/prototype web site that provides database, analysis, and visualization tools for food web research and education. Our early experience with a prototype 3D ecological network visualization guides our design of a more flexible architecture. 3D visualization algorithms include variable node and link sizes, placements according to node connectivity and trophic levels, and visualization of other node and link properties in food web data. The flexible architecture includes an XML application design, FoodWebML, and pipelining of computational components. Based on users' choices of data and visualization options, the WWW prototype site will connect to an XML database (Xindice) and return the visualization in VRML format for browsing and further interactions.
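    One of the layout ideas mentioned here (node placement by trophic level and sizing by connectivity) can be sketched in a few lines; the toy food web, the prey-averaged trophic-level rule and the size scaling below are illustrative assumptions, not FoodWebML or the prototype's actual algorithm.

```python
import numpy as np

links = [("algae", "zooplankton"), ("algae", "snail"),
         ("zooplankton", "fish"), ("snail", "fish")]      # (resource, consumer) pairs
species = sorted({s for link in links for s in link})
prey = {s: [a for a, b in links if b == s] for s in species}

level = {}
def trophic(s):
    """Prey-averaged trophic level; basal species (no prey) are level 1."""
    if s not in level:
        level[s] = 1.0 if not prey[s] else 1.0 + np.mean([trophic(p) for p in prey[s]])
    return level[s]

degree = {s: sum(s in link for link in links) for s in species}
rng = np.random.default_rng(0)
# 3D placement: random x/y, height set by trophic level; marker size scales with degree.
positions = {s: (rng.uniform(-1, 1), rng.uniform(-1, 1), trophic(s)) for s in species}
sizes = {s: 10 + 5 * degree[s] for s in species}
```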

  2. Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis

    Science.gov (United States)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

    The hybrid visualization and interaction tool EarthScape is presented here. The software is able to simultaneously display LiDAR point clouds, draped videos with a moving footprint, volumetric scientific data (using volume rendering, isosurfaces and slice planes), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch screen devices such as tablets. The software is based on open source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video streams. When all these components are included, EarthScape will be a multi-purpose platform that provides data analysis, hybrid visualization and complex interaction at the same time. The software is available on demand for free at france@exelisvis.com.

  3. EARTHSCAPE, A MULTI-PURPOSE INTERACTIVE 3D GLOBE VIEWER FOR HYBRID DATA VISUALIZATION AND ANALYSIS

    Directory of Open Access Journals (Sweden)

    A. Sarthou

    2015-08-01

    Full Text Available The hybrid visualization and interaction tool EarthScape is presented here. The software is able to simultaneously display LiDAR point clouds, draped videos with a moving footprint, volumetric scientific data (using volume rendering, isosurfaces and slice planes), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch screen devices such as tablets. The software is based on open source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video streams. When all these components are included, EarthScape will be a multi-purpose platform that provides data analysis, hybrid visualization and complex interaction at the same time. The software is available on demand for free at france@exelisvis.com.

  4. KENO3D visualization tool for KENO V.a geometry models

    International Nuclear Information System (INIS)

    Bowman, S.M.; Horwedel, J.E.

    1999-01-01

    The standardized computer analyses for licensing evaluations (SCALE) computer software system developed at Oak Ridge National Laboratory (ORNL) is widely used and accepted around the world for criticality safety analyses. SCALE includes the well-known KENO V.a three-dimensional Monte Carlo criticality computer code. Criticality safety analyses often require detailed modeling of complex geometries. Checking the accuracy of these models can be enhanced by effective visualization tools. To address this need, ORNL has recently developed a powerful state-of-the-art visualization tool called KENO3D that enables KENO V.a users to interactively display their three-dimensional geometry models. The interactive options include the following: (1) displaying shaded or wireframe images; (2) showing standard views, such as top view, side view, front view, and isometric three-dimensional view; (3) rotating the model; (4) zooming in on selected locations; (5) selecting parts of the model to display; (6) editing colors and displaying legends; (7) displaying properties of any unit in the model; (8) creating cutaway views; (9) removing units from the model; and (10) printing the image or saving it to common graphics formats.

  5. Motivation and Academic Improvement Using Augmented Reality for 3D Architectural Visualization

    Directory of Open Access Journals (Sweden)

    David FONSECA ESCUDERO

    2016-05-01

    Full Text Available This paper discusses the results of evaluating the motivation, user profile and level of satisfaction in a workflow using augmented 3D visualization of complex models in educational environments. The study shows the results of different experiments conducted with first- and second-year students from Architecture and Science and Construction Technologies (the old Spanish degree of Building Engineering, which is recognized at a European level). We have used a mixed method combining both quantitative and qualitative student assessment in order to complete a general overview of using new technologies, mobile devices and advanced visual methods in academic environments. The results show how the students involved in the experiments improved their academic results and their involvement in the subject, which allows us to conclude that these hybrid technologies improve both spatial skills and student motivation, a key concern in the current educational framework composed of digital-native students and a wide range of different applications and interfaces useful for teaching and learning.

  6. 3D visualization software to analyze topological outcomes of topoisomerase reactions

    Science.gov (United States)

    Darcy, I. K.; Scharein, R. G.; Stasiak, A.

    2008-01-01

    The action of various DNA topoisomerases frequently results in characteristic changes in DNA topology. Important information for understanding mechanistic details of action of these topoisomerases can be provided by investigating the knot types resulting from topoisomerase action on circular DNA forming a particular knot type. Depending on the topological bias of a given topoisomerase reaction, one observes different subsets of knotted products. To establish the character of topological bias, one needs to be aware of all possible topological outcomes of intersegmental passages occurring within a given knot type. However, it is not trivial to systematically enumerate topological outcomes of strand passage from a given knot type. We present here a 3D visualization software (TopoICE-X in KnotPlot) that incorporates topological analysis methods in order to visualize, for example, knots that can be obtained from a given knot by one intersegmental passage. The software has several other options for the topological analysis of mechanisms of action of various topoisomerases. PMID:18440983

  7. 3D Visualization Tools to Support Soil Management In Relation to Sustainable Agriculture and Ecosystem Services

    Science.gov (United States)

    Wang, Chen

    2017-04-01

    Visualization tools [1][2][6] have been used increasingly as part of information, consultation, and collaboration in relation to issues of global significance. Visualization techniques can be used in a variety of different settings, depending on their association with specific types of decision. Initially, they can be used to improve awareness of the local community and landscape, either individually or in groups [5]. They can also be used to communicate different aspects of change, such as digital soil mapping, ecosystem services and climate change [7][8]. A prototype 3D model was developed to present the Tarland Catchment in North East Scotland, which includes 1:25000 soil map data and 1:50000 land capability for agriculture (LCA) data [4]. The model was used to identify issues arising between the growing interest in soil monitoring and management, and the potential effects on existing soil characteristics. An online model was also created that can capture user/stakeholder comments associated with soil features. In addition, when people are located physically within the real-world bounds of the current soil management scenario, they can use Augmented Reality to see the scenario overlaid on their immediate surroundings. Models representing alternative soil use and management were used in the virtual landscape theatre (VLT) [3] with electronic voting designed to elicit public aspirations and concerns regarding future soil uses, and to develop scenarios driven by local input. Preliminary findings suggest positive audience responses to the relevance of the inclusion of soil data within a scene when considering questions regarding the impact of land-use change, such as woodland, agricultural land and open spaces. A future development is the use of the prototype virtual environment in a preference survey of scenarios of changes in land use, and in stakeholder consultations on such changes.

  8. Tiny but complex - interactive 3D visualization of the interstitial acochlidian gastropod Pseudunela cornuta (Challis, 1970)

    Directory of Open Access Journals (Sweden)

    Heß Martin

    2009-09-01

    Full Text Available Abstract Background Mesopsammic acochlidians are small, and organ complexity may be strongly reduced (regressive evolution by progenesis), especially in microhedylacean species. The marine interstitial hedylopsacean Pseudunela cornuta (Challis, 1970), however, was suggested as having a complex reproductive system resembling that of much larger, limnic and benthic species. The present study aims to reconstruct the detailed anatomy and true complexity of P. cornuta from serial, semithin histological sections by using modern computer-based 3D visualization with Amira software, and to explain it in an evolutionary context. Results Our results demonstrate considerable discordance with the original species description, which was based solely on paraffin sections. Here, we show that the nervous system of P. cornuta has paired rhinophoral, optic and gastro-oesophageal ganglia, three distinct ganglia on the visceral nerve cord, and a putative osphradial ganglion, while anterior accessory ganglia are absent. The presence of an anal genital cloaca is clearly rejected and the anus, nephropore and gonopore open separately to the exterior; the circulatory and excretory systems are well-differentiated, including a two-chambered heart and a complex kidney with a long, looped nephroduct; the special androdiaulic reproductive system shows two allosperm receptacles, three nidamental glands, a cavity with unknown function, as well as highly complex anterior copulatory organs with two separate glandular and impregnatory systems including a penial stylet that measures approximately a third of the whole length of the preserved specimen. Conclusion In spite of its small body size, the interstitial hermaphroditic P. cornuta shows high complexity regarding all major organ systems; the excretory system is as differentiated as in species of the sister clade, the limnic and much larger Acochlidiidae, and the reproductive system is by far the most elaborated one ever observed

  9. InterMap3D: predicting and visualizing co-evolving protein residues

    DEFF Research Database (Denmark)

    Oliveira, Rodrigo Gouveia; Roque, Francisco Jose Sousa Simôes Almeida; Wernersson, Rasmus

    2009-01-01

    InterMap3D predicts co-evolving protein residues and plots them on the 3D protein structure. Starting with a single protein sequence, InterMap3D automatically finds a set of homologous sequences, generates an alignment and fetches the most similar 3D structure from the Protein Data Bank (PDB). It can also accept a user-generated alignment. Based on the alignment, co-evolving residues are then predicted using three different methods: Row and Column Weighing of Mutual Information, Mutual Information/Entropy and Dependency. Finally, InterMap3D generates high-quality images of the protein
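    The mutual-information quantity underlying such predictors can be written compactly; the sketch below computes plain mutual information between two alignment columns and does not reproduce InterMap3D's specific row/column weighting or entropy corrections.

```python
import math
from collections import Counter

def mutual_information(col_i, col_j):
    """col_i, col_j: equal-length strings, one residue per aligned sequence."""
    n = len(col_i)
    p_i, p_j, p_ij = Counter(col_i), Counter(col_j), Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), count in p_ij.items():
        pab = count / n
        mi += pab * math.log2(pab / ((p_i[a] / n) * (p_j[b] / n)))
    return mi

# Perfectly covarying columns: MI equals the column entropy (1 bit here).
print(mutual_information("AAGG", "TTCC"))   # 1.0
```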

  10. Use and Evaluation of 3D GeoWall Visualizations in Undergraduate Space Science Classes

    Science.gov (United States)

    Turner, N. E.; Hamed, K. M.; Lopez, R. E.; Mitchell, E. J.; Gray, C. L.; Corralez, D. S.; Robinson, C. A.; Soderlund, K. M.

    2005-12-01

    One persistent difficulty many astronomy students face is the lack of a three-dimensional mental model of the systems being studied, in particular the Sun-Earth-Moon system. Students without such a mental model can have a very hard time conceptualizing the geometric relationships that cause, for example, the cycle of lunar phases or the pattern of seasons. The GeoWall is a recently developed and affordable projection mechanism for three-dimensional stereo visualization which is becoming a popular tool in classrooms and research labs for use in geology classes, but as yet very little work has been done involving the GeoWall for astronomy classes. We present results from a large study involving over 1000 students of varied backgrounds: some students were tested at the University of Texas at El Paso, a large public university on the US-Mexico border, and other students were from the Florida Institute of Technology, a small, private, technical school in Melbourne, Florida. We wrote a lecture tutorial-style lab to go along with a GeoWall 3D visual of the Earth-Moon system and tested the students before and after with several diagnostics. Students were given pre- and post-tests using the Lunar Phase Concept Inventory (LPCI) as well as a separate evaluation written specifically for this project. We found the lab useful for both populations of students, but not equally effective for all. We discuss reactions from the students and their improvement, as well as whether the students are able to correctly assess the usefulness of the project for their own learning.

  11. 3D Visual Sensing of the Human Hand for the Remote Operation of a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2014-02-01

    Full Text Available New low cost sensors and open free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. This method is based on point clouds from range images captured by a RGBD sensor. It works in real time and it does not require visual marks, camera calibration or previous knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light is changed. Furthermore, this method was designed to develop a human interface to control domestic or industrial devices, remotely. In this paper, the method was tested by operating a robotic hand. Firstly, the human hand was recognized and the fingers were detected. Secondly, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.

  12. Reconstructing the Curve-Skeletons of 3D Shapes Using the Visual Hull.

    Science.gov (United States)

    Livesu, Marco; Guggeri, Fabio; Scateni, Riccardo

    2012-11-01

    Curve-skeletons are the most important descriptors for shapes, capable of capturing in a synthetic manner the most relevant features. They are useful for many different applications: from shape matching and retrieval, to medical imaging, to animation. This has led, over the years, to the development of several different techniques for extraction, each trying to comply with specific goals. We propose a novel technique which stems from the intuition of reproducing what a human being does to deduce the shape of an object: holding it in his or her hand and rotating it. To accomplish this, we use the formal definitions of epipolar geometry and visual hull. We show how it is possible to infer the curve-skeleton of a broad class of 3D shapes, along with an estimation of the radii of the maximal inscribed balls, by gathering information about the medial axes of their projections on the image planes of the stereographic vision. It is worth pointing out that our method works equally well on (even unoriented) polygonal meshes, voxel models, and point clouds. Moreover, it is insensitive to noise, pose-invariant, resolution-invariant, and robust when applied to incomplete data sets.
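    The per-view ingredient of this idea, the medial axis of a projected silhouette together with the distance to the boundary (an estimate of inscribed-ball radii), can be sketched with scikit-image; the silhouette below is a toy rectangle, and combining many views into a 3D curve-skeleton is not shown.

```python
import numpy as np
from skimage.morphology import medial_axis

silhouette = np.zeros((64, 64), dtype=bool)
silhouette[20:44, 10:54] = True                  # toy binary projection of a shape

skeleton, distance = medial_axis(silhouette, return_distance=True)
radii = distance[skeleton]                       # inscribed-disc radius along the medial axis
print(skeleton.sum(), radii.max())
```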

  13. Scaling Quelccaya: Using 3-D Animation and Satellite Data To Visualize Climate Change

    Science.gov (United States)

    Malone, A.; Leich, M.

    2017-12-01

    The near-global glacier retreat of recent decades is among the most convincing evidence for contemporary climate change. The epicenter of this action, however, is often far from population-dense centers. How can a glacier's scale, both physical and temporal, be communicated to those far away? This project, an artist-scientist collaboration, proposes an alternate system for presenting climate change data, designed to evoke a more visceral response through a visual, geospatial, poetic approach. Focusing on the Quelccaya Ice Cap, the world's largest tropical glaciated area, located in the Peruvian Andes, we integrate 30 years of satellite imagery and elevation models with 3D animation and gaming software to bring it into a virtual juxtaposition with a model of the city of Chicago. Using Chicago as a cosmopolitan North American "measuring stick," we apply glaciological models to determine, for instance, the amount of ice that has melted on Quelccaya over the last 30 years and how deep an equivalent amount of snow would lie on the city of Chicago (circa 600 feet, higher than the Willis Tower). Placing the two sites in a framework of intimate scale, we present a more imaginative and psychologically astute manner of portraying the sober facts of climate change, by inviting viewers to learn and consider without inducing fear.
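    The scaling arithmetic described here reduces to a short back-of-the-envelope conversion; in the hedged sketch below, the ice-loss volume, snow density and city footprint are placeholder values, not the study's data, so the printed depth will not match the quoted 600-foot figure.

```python
RHO_ICE = 917.0            # kg/m^3, glacier ice
RHO_SNOW = 300.0           # kg/m^3, settled snow (assumed)
CHICAGO_AREA_M2 = 606e6    # ~606 km^2 city footprint (approximate)

ice_loss_volume_m3 = 5e9   # placeholder: 5 km^3 of ice lost
# Same mass of water, expressed as lower-density snow spread over the city:
snow_volume_m3 = ice_loss_volume_m3 * RHO_ICE / RHO_SNOW
snow_depth_m = snow_volume_m3 / CHICAGO_AREA_M2
print(f"{snow_depth_m:.0f} m ({snow_depth_m * 3.281:.0f} ft) of snow over Chicago")
```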

  14. A 3D Visualization Method for Bladder Filling Examination Based on EIT

    Directory of Open Access Journals (Sweden)

    Wei He

    2012-01-01

    Full Text Available As research on electrical impedance tomography (EIT) applications in medical examinations deepens, we attempt to produce 3D visualizations of the human bladder. In this paper, a planar electrode array system is introduced as the measuring platform and a series of feasible methods is proposed to evaluate the simulated volume of the bladder to avoid overfilling. The combined regularization algorithm enhances the spatial resolution and presents a distinguishable sketch of disturbances against the background, which provides us with reliable data from the inverse problem to carry on to the three-dimensional reconstruction. By detecting the edge elements and tracking down the lost information, we extract quantitative morphological features of the object from the noise and background. Preliminary measurements were conducted and the results showed that the proposed algorithm overcomes the defects of holes, protrusions, and debris in reconstruction. In addition, the targets' location in space and rough volume can be calculated according to the finite element grid of the model, a feature not achievable with previous 2D imaging.

  15. 3D Visualization of Solar Data: Preparing for Solar Orbiter and Parker Solar Probe

    Science.gov (United States)

    Mueller, D.; Nicula, B.; Felix, S.; Verstringe, F.; Bourgoignie, B.; Csillaghy, A.; Berghmans, D.; Jiggens, P.; Ireland, J.; Fleck, B.

    2017-12-01

    Solar Orbiter and Parker Solar Probe will focus on exploring the linkage between the Sun and the heliosphere. These new missions will collect unique data that will allow us to study, e.g., the coupling of macroscopic physical processes to those on kinetic scales, the generation of solar energetic particles and their propagation into the heliosphere and the origin and acceleration of solar wind plasma. Combined with the several petabytes of data from NASA's Solar Dynamics Observatory, the scientific community will soon have access to multidimensional remote-sensing and complex in-situ observations from different vantage points, complemented by petabytes of simulation data. Answering overarching science questions like "How do solar transients drive heliospheric variability and space weather?" will only be possible if the community has the necessary tools at hand. In this contribution, we will present recent progress in visualizing the Sun and its magnetic field in 3D using the open-source JHelioviewer framework, which is part of the ESA/NASA Helioviewer Project.

  16. The history of visual magic in computers how beautiful images are made in CAD, 3D, VR and AR

    CERN Document Server

    Peddie, Jon

    2013-01-01

    If you have ever looked at a fantastic adventure or science fiction movie, or an amazingly complex and rich computer game, or a TV commercial where cars or gas pumps or biscuits behaved like people and wondered, "How do they do that?", then you've experienced the magic of 3D worlds generated by a computer. 3D in computers began as a way to represent automotive designs and illustrate the construction of molecules. The use of 3D graphics evolved into visualizations of simulated data and artistic representations of imaginary worlds. In order to overcome the processing limitations of the computer, graph

  17. Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering

    International Nuclear Information System (INIS)

    Guenther, P.; Holland-Cunz, S.; Waag, K.L.

    2006-01-01

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from this. A newly developed and modified raycasting-based powerful 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to the easy handling and high-quality visualization with enormous gain of information, the presented system is now an established part of routine surgical planning. (orig.) [de]

  18. [Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering].

    Science.gov (United States)

    Günther, P; Tröger, J; Holland-Cunz, S; Waag, K L; Schenk, J P

    2006-08-01

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from this. A newly developed and modified raycasting-based powerful 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to the easy handling and high-quality visualization with enormous gain of information, the presented system is now an established part of routine surgical planning.

  19. The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality?

    KAUST Repository

    Bach, Benjamin

    2017-08-29

    We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers, however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that match human perceptual and interaction capabilities better to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.

  20. The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality?

    Science.gov (United States)

    Bach, Benjamin; Sicat, Ronell; Beyer, Johanna; Cordeil, Maxime; Pfister, Hanspeter

    2018-01-01

    We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers, however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that match human perceptual and interaction capabilities better to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.

  1. A 3D Visualization and Analysis Model of the Earth Orbit, Milankovitch Cycles and Insolation.

    Science.gov (United States)

    Kostadinov, Tihomir; Gilb, Roy

    2013-04-01

    Milankovitch theory postulates that periodic variability of Earth's orbital elements is a major climate forcing mechanism. Although controversies remain, ample geologic evidence supports the major role of the Milankovitch cycles in climate, e.g. glacial-interglacial cycles. There are three Milankovitch orbital parameters: orbital eccentricity (main periodicities of ~100,000 and ~400,000 years), precession (quantified as the longitude of perihelion, main periodicities 19,000-24,000 years) and obliquity of the ecliptic (Earth's axial tilt, main periodicity 41,000 years). The combination of these parameters controls the spatio-temporal patterns of incoming solar radiation (insolation) and the timing of the seasons with respect to perihelion, as well as season duration. The complex interplay of the Milankovitch orbital parameters on various time scales makes assessment and visualization of Earth's orbit and insolation variability challenging. It is difficult to appreciate the pivotal importance of Kepler's laws of planetary motion in controlling the effects of Milankovitch cycles on insolation patterns. These factors also make Earth-Sun geometry and Milankovitch theory difficult to teach effectively. Here, an astronomically precise and accurate Earth orbit visualization model is presented. The model offers 3D visualizations of Earth's orbital geometry, Milankovitch parameters and the ensuing insolation forcings. Both research and educational uses are envisioned for the model, which is developed in Matlab® as a user-friendly graphical user interface (GUI). We present the user with a choice between the Berger et al. (1978) and Laskar et al. (2004) astronomical solutions for eccentricity, obliquity and precession. A "demo" mode is also available, which allows the three Milankovitch parameters to be varied independently of each other (and over much larger ranges than the naturally occurring ones), so the user can isolate the effects of each parameter on orbital geometry
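    For readers who want to see how the three parameters enter the insolation calculation, the sketch below implements the standard daily-mean top-of-atmosphere formula as a function of eccentricity, obliquity and longitude of perihelion; this is textbook bookkeeping (the model described above is a Matlab GUI, and this Python fragment is not its code), and the solar-constant value is an assumption.

```python
import numpy as np

S0 = 1361.0   # solar constant in W m^-2 (assumed)

def daily_insolation(lat_deg, solar_lon_deg, e, obliquity_deg, perihelion_lon_deg):
    """Daily-mean TOA insolation for a given latitude and true solar longitude."""
    phi = np.radians(lat_deg)
    lam = np.radians(solar_lon_deg)                  # 0 deg = March equinox
    eps = np.radians(obliquity_deg)
    delta = np.arcsin(np.sin(eps) * np.sin(lam))     # solar declination
    nu = lam - np.radians(perihelion_lon_deg)        # true anomaly
    dist_factor = ((1 + e * np.cos(nu)) / (1 - e**2)) ** 2   # (a/r)^2
    cos_h0 = np.clip(-np.tan(phi) * np.tan(delta), -1.0, 1.0)  # handles polar day/night
    h0 = np.arccos(cos_h0)                           # sunrise/sunset hour angle
    return (S0 / np.pi) * dist_factor * (
        h0 * np.sin(phi) * np.sin(delta) + np.cos(phi) * np.cos(delta) * np.sin(h0))

# Present-day-like parameters, 65 N at the June solstice (roughly 480 W m^-2):
print(daily_insolation(65.0, 90.0, e=0.0167, obliquity_deg=23.44, perihelion_lon_deg=282.9))
```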

  2. Investigation of Rho Signaling Pathways in 3-D Collagen Matrices with Multidimensional Microscopy and Visualization Techniques

    National Research Council Canada - National Science Library

    Trier, Steven

    2008-01-01

    .... Recent progress in the development of 3D culture models has provided a more physiologically relevant growth environment, in which breast cancer cells imbedded within floating collagen matrices...

  3. Investigation of Rho Signaling Pathways in 3D Collagen Matrices via Multidimensional Microscopy and Visualization Techniques

    National Research Council Canada - National Science Library

    Trier, Steven

    2007-01-01

    .... Recent progress in the development of 3D culture models has provided a more physiologically relevant growth environment, in which breast cancer cells imbedded within floating collagen matrices...

  4. 3D visualization of the internal nanostructure of polyamide thin films in RO membranes

    KAUST Repository

    Pacheco Oreamuno, Federico

    2015-11-02

    The front and back surfaces of fully aromatic polyamide thin films isolated from reverse osmosis (RO) membranes were characterized by TEM, SEM and AFM. The front surfaces were relatively rough showing polyamide protuberances of different sizes and shapes; the back surfaces were all consistently smoother with very similar granular textures formed by polyamide nodules of 20–50 nm. Occasional pore openings of approximately the same size as the nodules were observed on the back surfaces. Because traditional microscopic imaging techniques provide limited information about the internal morphology of the thin films, TEM tomography was used to create detailed 3D visualizations that allowed the examination of any section of the thin film volume. These tomograms confirmed the existence of numerous voids within the thin films and revealed structural characteristics that support the water permeance difference between brackish water (BWRO) and seawater (SWRO) RO membranes. Consistent with a higher water permeance, the thin film of the BWRO membrane ESPA3 contained relatively more voids and thinner sections of polyamide than the SWRO membrane SWC3. According to the tomograms, most voids originate near the back surface and many extend all the way to the front surface shaping the polyamide protuberances. Although it is possible for the internal voids to be connected to the outside through the pore openings on the back surface, it was verified that some of these voids comprise nanobubbles that are completely encapsulated by polyamide. TEM tomography is a powerful technique for investigating the internal nanostructure of polyamide thin films. A comprehensive knowledge of the nanostructural distribution of voids and polyamide sections within the thin film may lead to a better understanding of mass transport and rejection mechanisms in RO membranes.

  5. 3D visualization of the internal nanostructure of polyamide thin films in RO membranes

    KAUST Repository

    Pacheco Oreamuno, Federico; Sougrat, Rachid; Reinhard, Martin; Leckie, James O.; Pinnau, Ingo

    2015-01-01

    The front and back surfaces of fully aromatic polyamide thin films isolated from reverse osmosis (RO) membranes were characterized by TEM, SEM and AFM. The front surfaces were relatively rough showing polyamide protuberances of different sizes and shapes; the back surfaces were all consistently smoother with very similar granular textures formed by polyamide nodules of 20–50 nm. Occasional pore openings of approximately the same size as the nodules were observed on the back surfaces. Because traditional microscopic imaging techniques provide limited information about the internal morphology of the thin films, TEM tomography was used to create detailed 3D visualizations that allowed the examination of any section of the thin film volume. These tomograms confirmed the existence of numerous voids within the thin films and revealed structural characteristics that support the water permeance difference between brackish water (BWRO) and seawater (SWRO) RO membranes. Consistent with a higher water permeance, the thin film of the BWRO membrane ESPA3 contained relatively more voids and thinner sections of polyamide than the SWRO membrane SWC3. According to the tomograms, most voids originate near the back surface and many extend all the way to the front surface shaping the polyamide protuberances. Although it is possible for the internal voids to be connected to the outside through the pore openings on the back surface, it was verified that some of these voids comprise nanobubbles that are completely encapsulated by polyamide. TEM tomography is a powerful technique for investigating the internal nanostructure of polyamide thin films. A comprehensive knowledge of the nanostructural distribution of voids and polyamide sections within the thin film may lead to a better understanding of mass transport and rejection mechanisms in RO membranes.

  6. 3D rendering and interactive visualization technology in large industry CT

    International Nuclear Information System (INIS)

    Xiao Yongshun; Zhang Li; Chen Zhiqiang; Kang Kejun

    2001-01-01

    This paper introduces the applications of interactive 3D rendering technology in large industrial CT (ICT). It summarizes and comments on the iso-surface rendering and direct volume rendering methods used in ICT. The paper emphasizes the technical analysis of the 3D rendering process for ICT volume data sets, and summarizes the difficulties of inspection subsystem design in large ICT.

  7. Hands-On Data Analysis: Using 3D Printing to Visualize Reaction Progress Surfaces

    Science.gov (United States)

    Higman, Carolyn S.; Situ, Henry; Blacklin, Peter; Hein, Jason E.

    2017-01-01

    Advances in 3D printing technology over the past decade have led to its expansion into all subfields of science, including chemistry. This technology provides useful teaching tools that facilitate communication of difficult chemical concepts to students and researchers. Presented here is the use of 3D printing technology to create tangible models…

  8. Visualization of the ROOT 3D class objects with openInventor-like viewers

    CERN Document Server

    Fine, V; Kulikova, A; Panebrattsev, M

    2004-01-01

    A class library for converting ROOT 3D class objects to the iv format used by 3D image viewers is described in this paper. So far the library has been tested with the STAR and ATLAS detector geometries without any changes or revisions for a specific detector.

  9. 3D rendering and interactive visualization technology in large industry CT

    International Nuclear Information System (INIS)

    Xiao Yongshun; Zhang Li; Chen Zhiqiang; Kang Kejun

    2002-01-01

    The author introduces the applications of interactive 3D rendering technology in large industrial CT (ICT), summarizing and commenting on the iso-surface rendering and direct volume rendering methods used in ICT. The author emphasizes the technical analysis of the 3D rendering process for ICT volume data sets, and summarizes the difficulties of inspection subsystem design in large ICT.

  10. 3D visualization of medical images for personalized learning of human anatomy

    NARCIS (Netherlands)

    Laurence Alpay; Jelle Scheurleer; Harmen Bijwaard

    2015-01-01

    Medical imaging nowadays often yields high-definition 3D images (from CT, PET, MRI, etc.). Usually these images need to be evaluated on 2D monitors. In the transition from 3D to 2D the image becomes more difficult to interpret as a whole. To aid

  11. 3D Nondestructive Visualization and Evaluation of TRISO Particles Distribution in HTGR Fuel Pebbles Using Cone-Beam Computed Tomography

    Directory of Open Access Journals (Sweden)

    Gongyi Yu

    2017-01-01

    Full Text Available A nonuniform distribution of tristructural isotropic (TRISO) particles within a high-temperature gas-cooled reactor (HTGR) pebble may lead to excessive thermal gradients and nonuniform thermal expansion during operation. If the particles are closely clustered, local hotspots may form, leading to excessive stresses on particle layers and an increased probability of particle failure. Although X-ray digital radiography (DR) is currently used to evaluate the TRISO distributions in pebbles, X-ray DR projection images are two-dimensional in nature, which would potentially miss some details for 3D evaluation. This paper proposes a method of 3D visualization and evaluation of the TRISO distribution in HTGR pebbles using cone-beam computed tomography (CBCT): first, a pebble is scanned on our high-resolution CBCT, and 2D cross-sectional images are reconstructed; secondly, all cross-sectional images are restructured to form the 3D model of the pebble; then, volume rendering is applied to segment and display the TRISO particles in 3D for visualization and distribution evaluation. For method validation, several pebbles were scanned and the 3D distributions of the TRISO particles within the pebbles were produced. Experiment results show that the proposed method provides more 3D information than DR, which will facilitate pebble fabrication research and production quality control.
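    The particle-extraction step implied here can be approximated with generic thresholding and 3D connected-component labelling, as in the hedged sketch below; the threshold, the toy volume and the use of scipy (rather than the authors' reconstruction and rendering chain) are all assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def locate_particles(volume, threshold):
    """Label bright particles in a reconstructed CBCT volume; return centroids and sizes."""
    binary = volume > threshold
    labels, n = ndimage.label(binary)
    centroids = ndimage.center_of_mass(binary, labels, index=range(1, n + 1))
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))   # voxel counts
    return np.array(centroids), np.array(sizes)

# Toy volume with two high-attenuation "particles"
vol = np.zeros((40, 40, 40))
vol[10:13, 10:13, 10:13] = 1.0
vol[25:28, 30:33, 5:8] = 1.0
centroids, sizes = locate_particles(vol, threshold=0.5)
print(centroids, sizes)
```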

  12. Educational Material for 3D Visualization of Spine Procedures: Methods for Creation and Dissemination.

    Science.gov (United States)

    Cramer, Justin; Quigley, Edward; Hutchins, Troy; Shah, Lubdha

    2017-06-01

    Spine anatomy can be difficult to master and is essential for performing spine procedures. We sought to utilize the rapidly expanding field of 3D technology to create freely available, interactive educational materials for spine procedures. Our secondary goal was to convey lessons learned about 3D modeling and printing. This project involved two parallel processes: the creation of 3D-printed physical models and interactive digital models. We segmented illustrative CT studies of the lumbar and cervical spine to create 3D models and then printed them using a consumer 3D printer and a professional 3D printing service. We also included downloadable versions of the models in an interactive eBook and platform-independent web viewer. We then provided these educational materials to residents with a pretest and posttest to assess efficacy. The "Spine Procedures in 3D" eBook has been downloaded 71 times as of October 5, 2016. All models used in the book are available for download and printing. Regarding test results, the mean exam score improved from 70 to 86%, with the most dramatic improvement seen in the least experienced trainees. Participants reported increased confidence in performing lumbar punctures after exposure to the material. We demonstrate the value of 3D models, both digital and printed, in learning spine procedures. Moreover, 3D printing and modeling is a rapidly expanding field with a large potential role for radiologists. We have detailed our process for creating and sharing 3D educational materials in the hopes of motivating and enabling similar projects.
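    A generic segmentation-to-print step of the kind such projects rely on can be sketched as extracting an isosurface from a segmented CT volume and writing it as ASCII STL; the toy sphere, iso-level and file name below are placeholders, and this is not the authors' actual workflow or software.

```python
import numpy as np
from skimage import measure

def volume_to_stl(volume, level, path, name="model"):
    """Extract an isosurface mesh and write it as an ASCII STL file."""
    verts, faces, normals, _ = measure.marching_cubes(volume, level=level)
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in faces:
            n = normals[tri].mean(axis=0)            # average vertex normal per facet
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
            for v in verts[tri]:
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# Toy example: a solid sphere standing in for a segmented vertebra
z, y, x = np.mgrid[-20:21, -20:21, -20:21]
volume_to_stl((x**2 + y**2 + z**2 <= 15**2).astype(float), 0.5, "vertebra.stl")
```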

  13. A Three Pronged Approach for Improved Data Understanding: 3-D Visualization, Use of Gaming Techniques, and Intelligent Advisory Agents

    Science.gov (United States)

    2006-10-01

    [Only fragments of this report's text survive in this record; the recoverable citation reads: C. Cruz-Neira, D.J. Sandin, T.A. DeFanti, R.V. Kenyon and J.C. Hart, "The CAVE: Audio Visual Experience Automatic Virtual Environment", SIGGRAPH '92.]

  14. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    Directory of Open Access Journals (Sweden)

    Jeff A Tracey

    Full Text Available Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.
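    A much simpler stand-in for these estimators, a plain 3D kernel density estimate of telemetry fixes evaluated on a grid, is sketched below with synthetic coordinates; the movement-based kernels described in the abstract are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Synthetic telemetry fixes: easting, northing, height (metres)
fixes = rng.normal(loc=[500.0, 800.0, 60.0], scale=[50.0, 40.0, 15.0], size=(200, 3))

kde = gaussian_kde(fixes.T)                       # rows = dimensions, columns = observations
xi, yi, zi = np.mgrid[350:650:30j, 650:950:30j, 0:120:20j]
density = kde(np.vstack([xi.ravel(), yi.ravel(), zi.ravel()])).reshape(xi.shape)
# A 95% "home range" is then the smallest set of grid cells holding 95% of the density mass.
```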

  15. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation

    Science.gov (United States)

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  16. Visualization of the 3D shape of the articular cartilage of the femoral head from MR images

    International Nuclear Information System (INIS)

    Kubota, Tetsuya; Sato, Yoshinobu; Nakanishi, Katsuyuki

    1999-01-01

    This paper describes methods for visualizing the three-dimensional (3D) cartilage thickness distribution from MR images. Cartilage thickness is one of the most important factors in joint diseases. Although the evaluation of cartilage thickness has received considerable attention from orthopedic surgeons and radiologists, evaluation is usually performed based on visual analysis or measurements obtained using calipers on original MR images. Our aim is to employ computerized quantification of MR images for the evaluation of the cartilage thickness of the femoral head. First, we extract an ROI and interpolate all ROI images by sinc interpolation. Next, we extract cartilage regions from MR images using a 3D multiscale sheet filter. Finally, we reconstruct 3D shapes by summing the extracted cartilage regions. We investigate partial volume effects in this method using synthesized images, and show results for in vitro and in vivo MR images. (author)

  17. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    Science.gov (United States)

    Tracey, Jeff A; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R; Fisher, Robert N

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  18. Visualizing Terrestrial and Aquatic Systems in 3D - in IEEE VisWeek 2014

    Science.gov (United States)

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  19. 3-D Surface Visualization of pH Titration "Topos": Equivalence Point Cliffs, Dilution Ramps, and Buffer Plateaus

    Science.gov (United States)

    Smith, Garon C.; Hossain, Md Mainul; MacCarthy, Patrick

    2014-01-01

    3-D topographic surfaces ("topos") can be generated to visualize how pH behaves during titration and dilution procedures. The surfaces are constructed by plotting computed pH values above a composition grid with volume of base added in one direction and overall system dilution on the other. What emerge are surface features that…
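    A minimal version of such a surface can be computed for the simplest case, a strong acid titrated with a strong base over a grid of titrant volume and overall dilution; the concentrations below are illustrative, and the weak-acid buffer plateaus discussed in the article would require the full equilibrium treatment not shown here.

```python
import numpy as np
import matplotlib.pyplot as plt

Kw = 1e-14
Ca0, Va = 0.1, 50.0                       # 0.1 M strong acid, 50 mL initial volume
Cb0 = 0.1                                 # 0.1 M strong base titrant

vb = np.linspace(0.0, 100.0, 120)         # mL of base added
dil = np.linspace(1.0, 1000.0, 120)       # overall dilution factor
VB, DIL = np.meshgrid(vb, dil)

Vtot = (Va + VB) * DIL
Ca = Ca0 * Va / Vtot                      # analytical acid concentration after mixing
Cb = Cb0 * VB / Vtot                      # analytical base concentration after mixing
diff = Ca - Cb
H = (diff + np.sqrt(diff**2 + 4.0 * Kw)) / 2.0   # charge balance: [H+] - Kw/[H+] = Ca - Cb
pH = -np.log10(H)

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(VB, np.log10(DIL), pH, cmap="viridis")
ax.set_xlabel("base added (mL)"); ax.set_ylabel("log10 dilution"); ax.set_zlabel("pH")
plt.show()
```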

  20. High-Resolution Visual 3D Reconstructions for Rapid Archaeological Characterization

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The final output will be geotiffs and a custom 3D texture model format that allows for dynamic level-of-detail rendering. The work discussed in the proposal will...

  1. 3D printing based on cardiac CT assists anatomic visualization prior to transcatheter aortic valve replacement.

    Science.gov (United States)

    Ripley, Beth; Kelil, Tatiana; Cheezum, Michael K; Goncalves, Alexandra; Di Carli, Marcelo F; Rybicki, Frank J; Steigner, Mike; Mitsouras, Dimitrios; Blankstein, Ron

    2016-01-01

    3D printing is a promising technique that may have applications in medicine, and there is expanding interest in the use of patient-specific 3D models to guide surgical interventions. The aims were to determine the feasibility of using cardiac CT to print individual models of the aortic root complex for transcatheter aortic valve replacement (TAVR) planning and to determine the ability to predict paravalvular aortic regurgitation (PAR). This retrospective study included 16 patients (9 with PAR identified on blinded interpretation of post-procedure trans-thoracic echocardiography and 7 age-, sex-, and valve size-matched controls with no PAR). 3D printed models of the aortic root were created from pre-TAVR cardiac computed tomography data. These models were fitted with printed valves and predictions regarding post-implant PAR were made using a light transmission test. Aortic root 3D models were highly accurate, with excellent agreement between annulus measurements made on 3D models and those made on corresponding 2D data (mean difference of -0.34 mm, 95% limits of agreement: ± 1.3 mm). The 3D printed valve models were within 0.1 mm of their designed dimensions. Examination of the fit of valves within patient-specific aortic root models correctly predicted PAR in 6 of 9 patients (6 true positive, 3 false negative) and absence of PAR in 5 of 7 patients (5 true negative, 2 false positive). Pre-TAVR 3D-printing based on cardiac CT provides a unique patient-specific method to assess the physical interplay of the aortic root and implanted valves. With additional optimization, 3D models may complement traditional techniques used for predicting which patients are more likely to develop PAR. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
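
    The reported prediction counts translate directly into the usual screening statistics; the short check below simply recomputes sensitivity and specificity from the true/false positive and negative counts quoted in the abstract.

```python
# Worked check of the reported prediction counts (6 TP, 3 FN, 5 TN, 2 FP).
tp, fn, tn, fp = 6, 3, 5, 2
sensitivity = tp / (tp + fn)                 # 6/9  ≈ 0.67
specificity = tn / (tn + fp)                 # 5/7  ≈ 0.71
accuracy = (tp + tn) / (tp + fn + tn + fp)   # 11/16 ≈ 0.69
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, accuracy={accuracy:.2f}")
```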

  2. First impressions of 3D visual tools and dose volume histograms for plan evaluation

    International Nuclear Information System (INIS)

    Rattray, G.; Simitcioglu, A.; Parkinson, M.; Biggs, J.

    1999-01-01

    Converting from 2D to 3D treatment planning offers numerous challenges. The practices that have evolved in the 2D environment may not be applicable when translated into the 3D environment. One such practice is the method used to evaluate a plan. In 2D planning, a plane-by-plane comparison method is generally practiced. This type of evaluation method would not be appropriate for plans produced by a 3D planning system. To this end 3D dose displays and Dose Volume Histograms (DVHs) have been developed to facilitate the evaluation of such plans. A survey was conducted to determine the impressions of Radiation Therapists as they used these tools for the first time. The survey involved comparing a number of plans for a small group of patients and selecting the best plan for each patient. Three evaluation methods were assessed. These included the traditional plane-by-plane comparison, the 3D dose display, and DVHs. Those surveyed found the DVH to be the easiest of the three methods to use, with the 3D display being the next easiest. Copyright (1999) Blackwell Science Pty Ltd
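
    For readers unfamiliar with DVHs, a cumulative dose-volume histogram is simply the fraction of a structure's volume receiving at least each dose level. A minimal sketch, assuming a synthetic dose grid and structure mask (not data from the survey):

```python
# Sketch: a cumulative dose-volume histogram (DVH) from a 3D dose grid and a
# binary structure mask. Arrays here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
dose = rng.gamma(shape=5.0, scale=10.0, size=(40, 64, 64))   # Gy, placeholder grid
mask = np.zeros_like(dose, dtype=bool)
mask[15:25, 20:44, 20:44] = True                             # placeholder structure

structure_dose = dose[mask]
bins = np.linspace(0.0, structure_dose.max(), 200)

# Cumulative DVH: fraction of the structure volume receiving at least each dose level.
volume_fraction = np.array([(structure_dose >= d).mean() for d in bins])

d95 = bins[np.argmin(np.abs(volume_fraction - 0.95))]        # approximate dose to 95% of volume
print(f"D95 ≈ {d95:.1f} Gy")
```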

  3. The 3D Visualization of Slope Terrain in Sun Moon Lake.

    Science.gov (United States)

    Deng, F.; Gwo-shyn, S.; Pei-Kun, L.

    2015-12-01

    side-slope using the multi-beam sounder below the water surface. Finally, the side-scan sonar image is taken and merged with contour lines produced from underwater topographic DTM data. By combining those data, our purpose is to create different 3D images that give a good visualization for checking whether the side-slope DTM survey data are well quality-controlled.

  4. Visualization of cranial nerves I-XII: value of 3D CISS and T2-weighted FSE sequences

    Energy Technology Data Exchange (ETDEWEB)

    Yousry, I.; Camelio, S.; Wiesmann, M.; Brueckmann, H.; Yousry, T.A. [Department of Neuroradiology, Klinikum Grosshadern, Ludwig-Maximilians University, Marchioninistrasse 15, D-81377 Munich (Germany); Schmid, U.D. [Neurosurgical Unit, Klinik im Park, 8000 Zurich (Switzerland); Horsfield, M.A. [Department of Medical Physics, University of Leicester, Leicester LE1 5WW (United Kingdom)

    2000-07-01

    The aim of this study was to evaluate the sensitivity of the three-dimensional constructive interference of steady state (3D CISS) sequence (slice thickness 0.7 mm) and that of the T2-weighted fast spin echo (T2-weighted FSE) sequence (slice thickness 3 mm) for the visualization of all cranial nerves in their cisternal course. Twenty healthy volunteers were examined using the T2-weighted FSE and the 3D CISS sequences. Three observers evaluated independently the cranial nerves NI-NXII in their cisternal course. The rates for successful visualization of each nerve for 3D CISS (and for T2-weighted FSE in parentheses) were as follows: NI, NII, NV, NVII, NVIII 40 of 40 (40 of 40), NIII 40 of 40 (18 of 40), NIV 19 of 40 (3 of 40), NVI 39 of 40 (5 of 40), NIX, X, XI 40 of 40 (29 of 40), and NXII 40 of 40 (4 of 40). Most of the cranial nerves can be reliably assessed when using the 3D CISS and the T2-weighted FSE sequences. Increasing the spatial resolution when using the 3D CISS sequence increases the reliability of the identification of the cranial nerves NIII-NXII. (orig.)

  5. Three-dimensional (3D) visualization of reflow porosity and modeling of deformation in Pb-free solder joints

    International Nuclear Information System (INIS)

    Dudek, M.A.; Hunter, L.; Kranz, S.; Williams, J.J.; Lau, S.H.; Chawla, N.

    2010-01-01

    The volume, size, and dispersion of porosity in solder joints are known to affect mechanical performance and reliability. Most of the techniques used to characterize the three-dimensional (3D) nature of these defects are destructive. With the enhancements in high resolution computed tomography (CT), the detection limits of intrinsic microstructures have been significantly improved. Furthermore, the 3D microstructure of the material can be used in finite element models to understand their effect on microscopic deformation. In this paper we describe a technique utilizing high resolution (< 1 μm) X-ray tomography for the three-dimensional (3D) visualization of pores in Sn-3.9Ag-0.7Cu/Cu joints. The characteristics of reflow porosity, including volume fraction and distribution, were investigated for two reflow profiles. The size and spatial distribution of the porosity were visualized in 3D for four different solder joints. In addition, the 3D virtual microstructure was incorporated into a finite element model to quantify the effect of voids on the lap shear behavior of a solder joint. The presence, size, and location of voids significantly increased the severity of strain localization at the solder/copper interface.
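
    The porosity metrics mentioned above (volume fraction and per-pore sizes) can be sketched from a binarized CT volume with a few lines of array code; the segmentation, voxel size, and threshold below are illustrative placeholders, not values from the paper.

```python
# Sketch: void volume fraction and per-pore volumes from a binarized CT volume of a
# solder joint (True = pore voxel). The volume here is a synthetic placeholder.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
pores = rng.random((100, 100, 100)) > 0.995          # placeholder pore segmentation

void_fraction = pores.mean()                          # pore voxels / joint voxels

labels, n_pores = ndimage.label(pores)                # connected-component pores
voxel_volume_um3 = 1.0**3                             # assuming ~1 um isotropic voxels
pore_volumes = ndimage.sum(pores, labels, index=range(1, n_pores + 1)) * voxel_volume_um3

print(f"void fraction = {void_fraction:.4%}, pores = {n_pores}, "
      f"largest = {pore_volumes.max():.0f} um^3")
```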

  6. Geometric characterization and interactive 3D visualization of historical and cultural heritage in the province of Cáceres (Spain)

    Directory of Open Access Journals (Sweden)

    José Manuel Naranjo

    2018-01-01

    Full Text Available The three-dimensional (3D) visualization of historical and cultural heritage in the province of Cáceres is essential for tourism promotion. This study uses panoramic spherical photography and terrestrial laser scanning (TLS) for the geometric characterization and cataloguing of sites of cultural interest, according to the principles of the Charter of Krakow. The benefits of this project include improved knowledge dissemination of the cultural heritage of Cáceres in a society that demands state-of-the-art tourist information. In this sense, this study has three specific aims: to develop a highly reliable methodology for modeling heritage based on a combination of non-destructive geomatics methods; to design and develop software modules for interactive 3D visualization of models; and to promote knowledge of the historical and cultural heritage of Cáceres by creating a hypermedia atlas accessible via the Internet. Through this free-of-charge hypermedia atlas, the tourist accesses 3D photographic and interactive scenes, videos created from 3D point clouds obtained from laser scanning, and 3D models available for downloading in ASCII format, and thus acquires a greater knowledge of the touristic attractions in the province of Cáceres.

  7. Visualization of cranial nerves I-XII: value of 3D CISS and T2-weighted FSE sequences

    International Nuclear Information System (INIS)

    Yousry, I.; Camelio, S.; Wiesmann, M.; Brueckmann, H.; Yousry, T.A.; Schmid, U.D.; Horsfield, M.A.

    2000-01-01

    The aim of this study was to evaluate the sensitivity of the three-dimensional constructive interference of steady state (3D CISS) sequence (slice thickness 0.7 mm) and that of the T2-weighted fast spin echo (T2-weighted FSE) sequence (slice thickness 3 mm) for the visualization of all cranial nerves in their cisternal course. Twenty healthy volunteers were examined using the T2-weighted FSE and the 3D CISS sequences. Three observers evaluated independently the cranial nerves NI-NXII in their cisternal course. The rates for successful visualization of each nerve for 3D CISS (and for T2-weighted FSE in parentheses) were as follows: NI, NII, NV, NVII, NVIII 40 of 40 (40 of 40), NIII 40 of 40 (18 of 40), NIV 19 of 40 (3 of 40), NVI 39 of 40 (5 of 40), NIX, X, XI 40 of 40 (29 of 40), and NXII 40 of 40 (4 of 40). Most of the cranial nerves can be reliably assessed when using the 3D CISS and the T2-weighted FSE sequences. Increasing the spatial resolution when using the 3D CISS sequence increases the reliability of the identification of the cranial nerves NIII-NXII. (orig.)

  8. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    Science.gov (United States)

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.

  9. Dissemination of 3D Visualizations of Complex Function Data for the NIST Digital Library of Mathematical Functions

    Directory of Open Access Journals (Sweden)

    Qiming Wang

    2007-03-01

    Full Text Available The National Institute of Standards and Technology (NIST is developing a digital library to replace the widely used National Bureau of Standards Handbook of Mathematical Functions published in 1964. The NIST Digital Library of Mathematical Functions (DLMF will include formulas, methods of computation, references, and links to software for over forty functions. It will be published both in hardcopy format and as a website featuring interactive navigation, a mathematical equation search, 2D graphics, and dynamic interactive 3D visualizations. This paper focuses on the development and accessibility of the 3D visualizations for the digital library. We examine the techniques needed to produce accurate computations of function data, and through a careful evaluation of several prototypes, we address the advantages and disadvantages of using various technologies, including the Virtual Reality Modeling Language (VRML, interactive embedded graphics, and video capture to render and disseminate the visualizations in an environment accessible to users on various platforms.

  10. Technical Note: Reliability of Suchey-Brooks and Buckberry-Chamberlain methods on 3D visualizations from CT and laser scans

    DEFF Research Database (Denmark)

    Villa, Chiara; Buckberry, Jo; Cattaneo, Cristina

    2013-01-01

    Previous studies have reported that the ageing method of Suchey-Brooks (pubic bone) and some of the features applied by Lovejoy et al. and Buckberry-Chamberlain (auricular surface) can be confidently performed on 3D visualizations from CT-scans. In this study, seven observers applied the Suchey-Brooks and the Buckberry-Chamberlain methods on 3D visualizations based on CT-scans and, for the first time, on 3D visualizations from laser scans. We examined how the bone features can be evaluated on 3D visualizations and whether the different modalities (direct observations of bones, 3D visualization from CT... -observer agreement was obtained in the evaluation of the pubic bone in all modalities. In 3D visualizations of the auricular surfaces, transverse organization and apical changes could be evaluated, although with high inter-observer variability; micro-, macroporosity and surface texture were very difficult to score...

  11. MEVA--An Interactive Visualization Application for Validation of Multifaceted Meteorological Data with Multiple 3D Devices.

    Directory of Open Access Journals (Sweden)

    Carolin Helbig

    Full Text Available To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. Analyzing, comparing, and visualizing the resulting simulations becomes increasingly challenging due to the growing amount and multifaceted character of the data. Various data sources, numerous variables and multiple simulations lead to a complex database. Although a variety of software exists suited for the visualization of meteorological data, none of it fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography and other static data), support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Instead of attempting to develop yet another new visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines different existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable the domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of different data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine. Our customized application includes solutions for the analysis of multirun data, specifically with respect to data uncertainty and

  12. THE VISUALIZATION METHOD OF THE 3D CONCENTRATION DISTRIBUTION OF ASIAN DUST IN THE GOOGLE EARTH

    Directory of Open Access Journals (Sweden)

    W. Okuda

    2012-07-01

    Full Text Available The Asian dust (called "Kosa" in Japan) transported from desert areas in northern China often covers East Asia in the late winter and spring seasons. In this study, first of all, for dust events observed at various places in Japan on April 1, 2007 and March 21, 2010, a long-range transport simulation of Asian dust from the desert areas in northern China to Japan is carried out. Next, the method for representing 3D dust clouds by means of the image overlay functionality provided in Google Earth is described. Since it is very difficult to display 3D dust clouds along the curvature of the Earth on a global scale, the 3D dust cloud distributed up to an altitude of about 6300 m was divided into many thin layers of equal thickness. After each layer was transformed into an image layer, each image layer was displayed at the appropriate altitude in Google Earth. The resulting image layers were displayed every hour in Google Earth. Finally, it is shown that the 3D Asian dust clouds generated by the method described in this study are represented as smooth 3D cloud objects even when the dust clouds are viewed transversely in Google Earth.
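
    A minimal sketch of the layering idea: slice a (level, lat, lon) concentration field into constant-altitude layers and emit each as a KML GroundOverlay pinned at its altitude, which Google Earth then stacks into a translucent 3D cloud. The grid, bounding box, and file names below are hypothetical, and rendering each layer to a semi-transparent PNG is left out.

```python
# Sketch: writing one KML GroundOverlay per thin altitude layer of a 3D dust field.
# Grid sizes, bounds, and image file names are hypothetical placeholders.
import numpy as np

conc = np.random.rand(21, 90, 120)                     # placeholder (level, lat, lon) field
altitudes = np.linspace(0, 6300, conc.shape[0])        # m, one thin layer per level
north, south, east, west = 50.0, 20.0, 150.0, 100.0    # hypothetical bounding box

overlays = []
for k, alt in enumerate(altitudes):
    # Each layer would first be rendered to a semi-transparent PNG (not shown here).
    overlays.append(f"""
  <GroundOverlay>
    <name>dust layer {k}</name>
    <Icon><href>layer_{k:02d}.png</href></Icon>
    <altitude>{alt:.0f}</altitude>
    <altitudeMode>absolute</altitudeMode>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
    </LatLonBox>
  </GroundOverlay>""")

kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
       + "".join(overlays) + "\n</Document></kml>")
with open("dust_layers.kml", "w") as f:
    f.write(kml)
```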

  13. 3-D visualization and quantitation of microvessels in transparent human colorectal carcinoma [corrected].

    Directory of Open Access Journals (Sweden)

    Yuan-An Liu

    Full Text Available Microscopic analysis of tumor vasculature plays an important role in understanding the progression and malignancy of colorectal carcinoma. However, due to the geometry of blood vessels and their connections, standard microtome-based histology is limited in providing the spatial information of the vascular network with a 3-dimensional (3-D) continuum. To facilitate 3-D tissue analysis, we prepared transparent human colorectal biopsies by optical clearing for in-depth confocal microscopy with CD34 immunohistochemistry. Full-depth colons were obtained from colectomies performed for colorectal carcinoma. Specimens were prepared away from (control) and at the tumor site. Taking advantage of the transparent specimens, we acquired anatomic information up to 200 μm in depth for qualitative and quantitative analyses of the vasculature. Examples are given to illustrate: (1) the association between the tumor microstructure and vasculature in space, including the perivascular cuffs of tumor outgrowth, and (2) the difference between the 2-D and 3-D quantitation of microvessels. We also demonstrate that the optically cleared mucosa can be retrieved after 3-D microscopy to perform the standard microtome-based histology (H&E staining and immunohistochemistry) for systematic integration of the two tissue imaging methods. Overall, we established a new tumor histological approach to integrate 3-D imaging, illustration, and quantitation of human colonic microvessels in normal and cancerous specimens. This approach has significant promise to work with the standard histology to better characterize the tumor microenvironment in colorectal carcinoma.

  14. Technical report on implementation of reactor internal 3D modeling and visual database system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yeun Seung; Eom, Young Sam; Lee, Suk Hee; Ryu, Seung Hyun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-06-01

    This report describes a prototype of a reactor internal 3D modeling and VDB system for NSSS design quality improvement. To improve NSSS design quality, several integrated computer aided engineering systems from nuclear-developed nations, such as Mitsubishi's NUWINGS (Japan), AECL's CANDID (Canada) and Duke Power's PASCE (USA), were studied. On the basis of these studies, the strategy for an NSSS design improvement system was extracted and the detailed work scope was implemented as follows: 3D modelling of the reactor internals was implemented using a parametric solid modeler, a prototype system for design document computerization and database was suggested, and a walk-through simulation integrated with 3D modeling and VDB was accomplished. The major effects of an NSSS design quality improvement system using 3D modeling and VDB are plant design optimization by simulation, improved reliability through a single design database system, and engineering cost reduction through improved productivity and efficiency. For applying the VDB to the full scope of NSSS system design, 3D models of the reactor coolant system, nuclear fuel assembly and fuel rod are attached as an appendix. 2 tabs., 31 figs., 7 refs. (Author)

  15. Technical report on implementation of reactor internal 3D modeling and visual database system

    International Nuclear Information System (INIS)

    Kim, Yeun Seung; Eom, Young Sam; Lee, Suk Hee; Ryu, Seung Hyun

    1996-06-01

    This report describes a prototype of a reactor internal 3D modeling and VDB system for NSSS design quality improvement. To improve NSSS design quality, several integrated computer aided engineering systems from nuclear-developed nations, such as Mitsubishi's NUWINGS (Japan), AECL's CANDID (Canada) and Duke Power's PASCE (USA), were studied. On the basis of these studies, the strategy for an NSSS design improvement system was extracted and the detailed work scope was implemented as follows: 3D modelling of the reactor internals was implemented using a parametric solid modeler, a prototype system for design document computerization and database was suggested, and a walk-through simulation integrated with 3D modeling and VDB was accomplished. The major effects of an NSSS design quality improvement system using 3D modeling and VDB are plant design optimization by simulation, improved reliability through a single design database system, and engineering cost reduction through improved productivity and efficiency. For applying the VDB to the full scope of NSSS system design, 3D models of the reactor coolant system, nuclear fuel assembly and fuel rod are attached as an appendix. 2 tabs., 31 figs., 7 refs. (Author)

  16. 3D Multi-Channel Networked Visualization System for National LambdaRail, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Multichannel virtual reality visualization is the future of complex simulation with a large number of visual channels rendered and transmitted over high-speed...

  17. 3D visualization based customer experiences of nuclear plant control room

    International Nuclear Information System (INIS)

    Sun Tienlung; Chou Chinmei; Hung Tamin; Cheng Tsungchieh; Yang Chihwei; Yang Lichen

    2011-01-01

    This paper employs virtual reality (VR) technology to develop an interactive virtual nuclear plant control room in which the general public could easily walk into the 'red zone' and play with the control buttons. The VR-based approach allows deeper and richer customer experiences that the real nuclear plant control room could not offer. When people know more about the serious process control procedures enforced in the nuclear plant control room, they will better appreciate the safety efforts made by the nuclear plant and become more comfortable with it. The virtual nuclear plant control room is built using a 3D game development tool called Unity3D. The 3D scene is connected to a nuclear plant simulation system through Windows API programs. To evaluate the usability of the virtual control room, an experiment will be conducted to see how much 'immersion' the users could feel when they played with the virtual control room. (author)

  18. P1-1: The Effect of Convergence Training on Visual Discomfort in 3D TV Viewing

    Directory of Open Access Journals (Sweden)

    Hyun Min Jeon

    2012-10-01

    Full Text Available The present study investigated whether convergence training has an effect on reducing visual discomfort when viewing a stereoscopic TV. Participants were assigned to either a training group or a control group. In the training group, one of two different training procedures was provided: gradual or random changes in the disparity of a bar stimulus used for convergence training. The training itself was very effective, in that the convergence fusional range improved after three repeated training sessions at two-week intervals. In order to evaluate the effect of convergence training on visual discomfort, visual discomfort in 3D TV viewing was measured before and after the training sessions. The results showed a significant reduction in visual discomfort after training in only one of the training groups. These results demonstrate that repeated convergence training might be helpful in reducing visual discomfort. Further studies are needed to determine the most effective training parameters for this paradigm.

  19. Subjective evaluation of two stereoscopic imaging systems exploiting visual attention to improve 3D quality of experience

    Science.gov (United States)

    Hanhart, Philippe; Ebrahimi, Touradj

    2014-03-01

    Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracking system to measure real-time gaze positions. The gaze points were used in conjunction with the disparity map to extract the disparity of the object-of-interest. Horizontal image translation was performed to bring the fixated object onto the screen plane. The user preference between standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real-time gaze determination. Depth quality is also improved, but the difference is not significant.
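
    The horizontal image translation step can be sketched simply: read the disparity at the gaze point and shift one view so that disparity becomes zero, placing the fixated object at screen depth. The sketch below shifts only one view (a simplification; shifting both views by half the disparity is equivalent), and all arrays and the gaze position are synthetic placeholders.

```python
# Sketch: horizontal image translation that brings the fixated object onto the
# screen plane by shifting the right view by the object's disparity.
import numpy as np

rng = np.random.default_rng(3)
left = rng.integers(0, 255, size=(480, 640, 3), dtype=np.uint8)
right = rng.integers(0, 255, size=(480, 640, 3), dtype=np.uint8)
disparity = rng.integers(-30, 30, size=(480, 640))     # px, placeholder disparity map

gaze_row, gaze_col = 240, 320                           # gaze point (from saliency or eye tracker)
d = int(disparity[gaze_row, gaze_col])                  # disparity of the fixated object

# Shift the right view so the fixated object's disparity becomes zero (on-screen depth).
shifted_right = np.roll(right, d, axis=1)
if d > 0:
    shifted_right[:, :d] = 0                            # blank the wrapped-around columns
elif d < 0:
    shifted_right[:, d:] = 0
```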

  20. Discovering new methods of data fusion, visualization, and analysis in 3D immersive environments for hyperspectral and laser altimetry data

    Science.gov (United States)

    Moore, C. A.; Gertman, V.; Olsoy, P.; Mitchell, J.; Glenn, N. F.; Joshi, A.; Norpchen, D.; Shrestha, R.; Pernice, M.; Spaete, L.; Grover, S.; Whiting, E.; Lee, R.

    2011-12-01

    Immersive virtual reality environments such as the IQ-Station or CAVE (Cave Automated Virtual Environment) offer new and exciting ways to visualize and explore scientific data and are powerful research and educational tools. Combining remote sensing data from a range of sensor platforms in immersive 3D environments can enhance the spectral, textural, spatial, and temporal attributes of the data, which enables scientists to interact and analyze the data in ways never before possible. Visualization and analysis of large remote sensing datasets in immersive environments requires software customization for integrating LiDAR point cloud data with hyperspectral raster imagery, the generation of quantitative tools for multidimensional analysis, and the development of methods to capture 3D visualizations for stereographic playback. This study uses hyperspectral and LiDAR data acquired over the China Hat geologic study area near Soda Springs, Idaho, USA. The data are fused into a 3D image cube for interactive data exploration and several methods of recording and playback are investigated that include: 1) creating and implementing a Virtual Reality User Interface (VRUI) patch configuration file to enable recording and playback of VRUI interactive sessions within the CAVE and 2) using the LiDAR and hyperspectral remote sensing data and GIS data to create an ArcScene 3D animated flyover, where left- and right-eye visuals are captured from two independent monitors for playback in a stereoscopic player. These visualizations can be used as outreach tools to demonstrate how integrated data and geotechnology techniques can help scientists see, explore, and more adequately comprehend scientific phenomena, both real and abstract.

  1. Transparent 3D Visualization of Archaeological Remains in Roman Site in Ankara-Turkey with Ground Penetrating Radar Method

    Science.gov (United States)

    Kadioglu, S.

    2009-04-01

    Anatolia has always been more than a point of transit, a bridge between West and East; it has been a home for ideas moving from all directions. So it is that in the Roman and post-Roman periods the role of Anatolia in general, and of Ancyra (the Roman name of Ankara) in particular, was of the greatest importance. Today, the visible archaeological remains of the Roman period in Ankara are the Roman Bath, the Gymnasium, the Temple of Augustus and Rome, the Street, the Theatre, and the City Defence-Wall. Caesar Augustus, the first Roman Emperor, conquered Asia Minor in 25 BC. A marble temple was then built in Ancyra, the administrative capital of the province and today the capital of the Turkish Republic, Ankara. This monument was consecrated to the Emperor and to the Goddess Rome. The temple is supposed to have been built over an earlier temple dedicated to Kybele and Men between 25-20 BC. After the death of Augustus in 14 AD, a copy of the text of "Res Gestae Divi Augusti" was inscribed on the interior of the pronaos in Latin, whereas a Greek translation is also present on an exterior wall of the cella. In the 5th century, it was converted into a church by the Byzantines. The aim of this study is to determine old buried archaeological remains at the Augustus temple, the Roman Bath and the governorship agora in the Ulus district. These remains were imaged with transparent three-dimensional (3D) visualization of ground penetrating radar (GPR) data. Parallel two-dimensional (2D) GPR profile data were acquired in the study areas, and then a 3D data volume was built using the parallel 2D GPR data. A simplified amplitude-colour range and an appropriate opacity function were constructed, and transparent 3D images were obtained to activate buried
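
    The volume-building and transparency steps described above can be sketched in a few lines: stack the parallel 2D profiles into a 3D amplitude array and map normalized amplitude to opacity so that weak background reflections become transparent. Profile counts, sizes, and the opacity curve below are illustrative, not survey parameters.

```python
# Sketch: stacking parallel 2D GPR profiles into a 3D amplitude volume and mapping
# amplitude to opacity for transparent 3D visualization. All values are placeholders.
import numpy as np

n_profiles, n_samples, n_traces = 20, 256, 128
profiles = [np.random.randn(n_samples, n_traces) for _ in range(n_profiles)]  # placeholder 2D sections

volume = np.stack(profiles, axis=0)          # (profile line, time/depth, trace) amplitude volume

# Simple transfer function: normalize |amplitude| and keep only the strongest reflections.
a = np.abs(volume)
a_norm = a / a.max()
opacity = np.clip((a_norm - 0.6) / 0.4, 0.0, 1.0)   # fully transparent below 60% of max amplitude

print(volume.shape, "voxels visible:", int((opacity > 0).sum()))
```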

  2. 3D visual interfaces and technologies for understanding, sharing and enhancing cultural heritage

    Directory of Open Access Journals (Sweden)

    Elena Ippoliti

    2012-11-01

    Full Text Available The article presents the results of the research project "Integrated information models for understanding, enhancing and sharing the urban and environmental heritage. Experimenting with 3D interfaces for geographic cultural objects: information architecture and computing architecture", whose main objective was to expand the concept of the "integrated information model", deepening it through integrations and overlaps with different environments, in the directions both of geographic space (3D-GIS) and of Web space (3D-WEB and 3D-GIS-WEB), and of Augmented Reality and Augmented Virtuality. The research aimed to identify technological, procedural and operational systems, articulated differently in relation to the identified cases, also favouring technologies based on easy-to-use, low-cost and/or open-source tools, while remaining reliable with respect to the quality of the processed data. In this context, different experiments were carried out, following various reading paths/scales and corresponding data organizations, with the historic centre of Ascoli Piceno chosen as the privileged application setting.

  3. A 3D visualization approach for process training in office environments

    NARCIS (Netherlands)

    Aysolmaz, Banu; Brown, Ross; Bruza, Peter; Reijers, Hajo A.

    2016-01-01

    Process participants need to learn how to perform in the context of their business processes. Process training is challenging due to cognitive difficulties in relating process model elements to real world concepts. In this paper we present a 3D VirtualWorld (VW) process training approach for office

  4. Reduced Mental Load in Learning a Motor Visual Task with Virtual 3D Method

    Science.gov (United States)

    Dan, A.; Reiner, M.

    2018-01-01

    Distance learning is expanding rapidly, fueled by the novel technologies for shared recorded teaching sessions on the Web. Here, we ask whether 3D stereoscopic (3DS) virtual learning environment teaching sessions are more compelling than typical two-dimensional (2D) video sessions and whether this type of teaching results in superior learning. The…

  5. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    Science.gov (United States)

    Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.

    2015-08-01

    Although 3D models are useful to preserve the information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario regarding the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and implementing specific software using Unity. The 3D models were enhanced by adding responsive points of interest in relation to important symbols or features of the artefact. This allows highlighting single parts of the artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process for optimizing the 3D models, the implementation of the interactive scenario and the results of some tests that have been carried out in the lab.

  6. Endolymphatic hydrops in patients with vestibular schwannoma: visualization by non-contrast-enhanced 3D FLAIR

    International Nuclear Information System (INIS)

    Naganawa, Shinji; Kawai, Hisashi; Sone, Michihiko; Nakashima, Tsutomu; Ikeda, Mitsuru

    2011-01-01

    Signal intensity of ipsilateral labyrinthine lymph fluid has been reported to increase in most cases with vestibular schwannoma (VS) on 3D fluid attenuated inversion recovery (FLAIR). The purpose of this study was twofold: (1) to evaluate whether the endolymphatic space can be recognized in patients with VS on non-contrast-enhanced 3D-FLAIR images and (2) to determine whether vertigo in patients with VS correlates with vestibular endolymphatic hydrops. From the introduction of a 32-channel head coil at 3 T in May 2008 to June 2010, 15 cases with unilateral VS were identified in the radiology report database. The two cases without a significant signal increase on 3D FLAIR were excluded. The remaining 13 cases were retrospectively analyzed with regard to the recognition of endolymphatic hydrops in the cochlea and vestibule and to the correlation between the patients' symptoms and endolymphatic hydrops. In all cases, the vestibular endolymphatic space could be recognized on non-contrast-enhanced 3D FLAIR. The cochlear endolymphatic space could be identified in only one case, which had significant hydrops. Vestibular hydrops was identified in four cases. Among these four cases, three had vertigo, and one had no vertigo. In the nine cases without hydrops, two had vertigo, and seven did not. No significant correlation between vertigo and vestibular hydrops was found. The vestibular endolymphatic space can be recognized on non-contrast-enhanced 3D FLAIR. In some patients with VS, vestibular hydrops is seen; however, endolymphatic hydrops in the vestibule might not be the only cause of vertigo in patients with VS. (orig.)

  7. Endolymphatic hydrops in patients with vestibular schwannoma: visualization by non-contrast-enhanced 3D FLAIR

    Energy Technology Data Exchange (ETDEWEB)

    Naganawa, Shinji; Kawai, Hisashi [Nagoya University Graduate School of Medicine, Department of Radiology, Nagoya (Japan); Sone, Michihiko; Nakashima, Tsutomu [Nagoya University Graduate School of Medicine, Department of Otorhinolaryngology, Nagoya (Japan); Ikeda, Mitsuru [Nagoya University School of Health Sciences, Department of Radiological Technology, Nagoya (Japan)

    2011-12-15

    Signal intensity of ipsilateral labyrinthine lymph fluid has been reported to increase in most cases with vestibular schwannoma (VS) on 3D fluid attenuated inversion recovery (FLAIR). The purpose of this study was twofold: (1) to evaluate whether the endolymphatic space can be recognized in patients with VS on non-contrast-enhanced 3D-FLAIR images and (2) to determine whether vertigo in patients with VS correlates with vestibular endolymphatic hydrops. From the introduction of a 32-channel head coil at 3 T in May 2008 to June 2010, 15 cases with unilateral VS were identified in the radiology report database. The two cases without a significant signal increase on 3D FLAIR were excluded. The remaining 13 cases were retrospectively analyzed with regard to the recognition of endolymphatic hydrops in the cochlea and vestibule and to the correlation between the patients' symptoms and endolymphatic hydrops. In all cases, the vestibular endolymphatic space could be recognized on non-contrast-enhanced 3D FLAIR. The cochlear endolymphatic space could be identified in only one case, which had significant hydrops. Vestibular hydrops was identified in four cases. Among these four cases, three had vertigo, and one had no vertigo. In the nine cases without hydrops, two had vertigo, and seven did not. No significant correlation between vertigo and vestibular hydrops was found. The vestibular endolymphatic space can be recognized on non-contrast-enhanced 3D FLAIR. In some patients with VS, vestibular hydrops is seen; however, endolymphatic hydrops in the vestibule might not be the only cause of vertigo in patients with VS. (orig.)

  8. Poster: Observing change in crowded data sets in 3D space - Visualizing gene expression in human tissues

    KAUST Repository

    Rogowski, Marcin

    2013-03-01

    We have been confronted with a real-world problem of visualizing and observing change of gene expression between different human tissues. In this paper, we are presenting a universal representation space based on two-dimensional gel electrophoresis as opposed to force-directed layouts encountered most often in similar problems. We are discussing the methods we devised to make observing change more convenient in a 3D virtual reality environment. © 2013 IEEE.

  9. Innovative Ultrasonic Testing (UT) of nuclear components by sampling phased array with 3D visualization of inspection results

    OpenAIRE

    Pudovikov, Sergey; Bulavinov, Andrey; Pinchuk, Roman

    2011-01-01

    Unlike other industrial branches, the nuclear industry, when performing UT, asks not only for reliable detection but also for exact sizing of material defects. Under these objectives, ultrasonic imaging plays an important role in the practical testing of nuclear components, both in the data evaluation process and for documentation of the inspection results. 2D and 3D sound-field steering by means of phased array technology offers great opportunities for spatially correct visualization of ...

  10. Fast 3D seismic wave simulations of 24 August 2016 Mw 6.0 central Italy earthquake for visual communication

    Directory of Open Access Journals (Sweden)

    Emanuele Casarotti

    2016-12-01

    Full Text Available We present here the first application of a fast-reacting framework for 3D simulations of seismic wave propagation generated by earthquakes of magnitude Mw 5 in the Italian region. The driving motivation is to offer a visualization of the natural phenomenon to the general public, but also to provide preliminary modeling to experts and civil protection operators. We report here a description of this framework during the emergency of the 24 August 2016 Mw 6.0 central Italy earthquake, a discussion of the accuracy of the simulation for this seismic event, and a preliminary critical analysis of the visualization structure and of the reaction of the public.

  11. Keeping a large-pupilled eye on high-level visual processing.

    Science.gov (United States)

    Binda, Paola; Murray, Scott O

    2015-01-01

    The pupillary light response has long been considered an elementary reflex. However, evidence now shows that it integrates information from such complex phenomena as attention, contextual processing, and imagery. These discoveries make pupillometry a promising tool for an entirely new application: the study of high-level vision. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. 3D Room Visualization on an Android-Based Mobile Device (with Philips™ Surround Sound Music Player)

    Directory of Open Access Journals (Sweden)

    Durio Etgar

    2012-12-01

    Full Text Available This project is specifically intended as a demo application, so that anyone can get the experience of a surround-audio room without having to be physically present in it. The main idea is to generate a 3D surround-sound room scene coupled with surround sound in a handier package, namely a "Virtual Listen Room". The Virtual Listen Room sets the foundation for an innovative visualization that will later be developed and released as a form of portable advertisement. This application was built in the Android environment. An Android device was chosen as the implementation target, since it leaves large development space and generally contains the essential components needed for this project, including a graphics processing unit (GPU). Graphics manipulation can be done using an embedded programming interface called OpenGL ES, which is generally available on Android devices. Further, Android has an accelerometer sensor that is coupled with the scene to produce dynamic movement of the camera. The surround-sound effect is achieved with a decoder from Philips called the MPEG Surround Sound Decoder. In summary, the result is an application with sensor-driven dynamic 3D room visualization coupled with Philips' Surround Sound Music Player. Several room properties can be manipulated: subwoofer location, room lighting, and the number of speakers inside it. The application itself works well, despite several earlier performance problems that were later solved. [Keywords: Android; Visualization; OpenGL ES; 3D; Surround; Sensor]

  13. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    Science.gov (United States)

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, mobile devices have been used as tools to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, a mobile device alone cannot provide a satisfactory quality of experience for radiologists. This paper presents a medical system that retrieves medical images from the picture archiving and communication system (PACS) on a mobile device over a wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that automatically changes remote rendering parameters to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network in the proposed application are also discussed. The results demonstrate that the proposed medical application can provide a smooth interactive experience on WLAN and 3G networks.
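
    Two of the server-side rendering techniques named above are easy to illustrate: a maximum intensity projection is a per-pixel maximum along a viewing axis, and a basic multi-planar reconstruction is just a reformatted slice through the volume. A minimal sketch with a synthetic placeholder volume:

```python
# Sketch: a maximum intensity projection (MIP) and a single multi-planar
# reconstruction (MPR) slice from a CT-like volume, as a server-side rendering step.
import numpy as np

volume = np.random.rand(120, 512, 512)     # placeholder stack of axial slices (z, y, x)

mip_axial    = volume.max(axis=0)          # project along z: one 512x512 MIP image
mip_coronal  = volume.max(axis=1)          # project along y
mpr_sagittal = volume[:, :, 256]           # reformat: a sagittal plane at x = 256

print(mip_axial.shape, mip_coronal.shape, mpr_sagittal.shape)
```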

  14. 3D visualization of two-phase flow in the micro-tube by a simple but effective method

    International Nuclear Information System (INIS)

    Fu, X; Zhang, P; Hu, H; Huang, C J; Huang, Y; Wang, R Z

    2009-01-01

    The present study provides a simple but effective method for 3D visualization of two-phase flow in a micro-tube. An isosceles right-angle prism combined with a mirror inclined at 45° to the prism is employed to obtain the front and side views of the flow patterns synchronously with a single camera, where the locations of the prism and the micro-tube required for clear imaging must satisfy a fixed relationship that is specified in the present study. The optical design is successfully proven by demanding visualization work in the cryogenic temperature range. The image deformation due to refraction and the geometrical configuration of the test section is quantitatively investigated. It is calculated that the image is enlarged by about 20% in inner diameter compared to the real object, which is validated by the experimental results. Meanwhile, the image deformation when a rectangular optical correction box is added outside the circular tube is comparatively investigated. It is calculated that, with a rectangular optical correction box, the image is reduced by about 20% in inner diameter compared to the real object. The 3D reconstruction process based on the two views is conducted in three steps, which shows that the 3D visualization method can easily be applied to two-phase flow research in micro-scale channels and improves the measurement accuracy of important two-phase flow parameters such as void fraction, spatial distribution of bubbles, etc.

  15. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    Directory of Open Access Journals (Sweden)

    S. Gonizzi Barsanti

    2015-08-01

    Full Text Available Although 3D models are useful to preserve the information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario regarding the “path of the dead”, an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and implementing specific software using Unity. The 3D models were enhanced by adding responsive points of interest in relation to important symbols or features of the artefact. This allows highlighting single parts of the artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process for optimizing the 3D models, the implementation of the interactive scenario and the results of some tests that have been carried out in the lab.

  16. Thunderstorms in my computer : The effect of visual dynamics and sound in a 3D environment

    NARCIS (Netherlands)

    Houtkamp, J.; Schuurink, E.L.; Toet, A.

    2008-01-01

    We assessed the effects of the addition of dynamic visual elements and sounds to a levee patroller training game on the appraisal of the environment and weather conditions, the engagement of the users and their performance. Results show that the combination of visual dynamics and sounds best conveys

  17. Application of Matlab-based 3D visualization programming in uranium exploration

    International Nuclear Information System (INIS)

    Qi Jianquan

    2012-01-01

    Combining geological theory, geophysical curves and Matlab programming, three-dimensional visualization was applied to uranium exploration production. With its simple programming, convenient numerical processing and graphical visualization features, Matlab proved effective in identifying ore bodies, tracing ore, delineating the extent of ore bodies and analyzing the sedimentary environment. (author)

  18. Augmented reality system for oral surgery using 3D auto stereoscopic visualization.

    Science.gov (United States)

    Tran, Huy Hoang; Suenaga, Hideyuki; Kuwana, Kenta; Masamune, Ken; Dohi, Takeyoshi; Nakajima, Susumu; Liao, Hongen

    2011-01-01

    We present an augmented reality system for oral and maxillofacial surgery in this paper. Instead of being displayed on a separate screen, three-dimensional (3D) virtual presentations of osseous structures and soft tissues are projected onto the patient's body, providing surgeons with exact depth information about high-risk tissues inside the bone. We employ a 3D integral imaging technique which produces motion parallax in both the horizontal and vertical directions over a wide viewing area. In addition, surgeons are able to check the progress of the operation in real time through an intuitive 3D-based interface which is content-rich and hardware accelerated. These features prevent surgeons from penetrating into high-risk areas and thus help improve the quality of the operation. Operational tasks such as hole drilling and screw fixation were performed using our system and showed an overall positional error of less than 1 mm. The feasibility of our system was also verified with a human volunteer experiment.

  19. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    Science.gov (United States)

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-04-20

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  20. 3D Reconstruction in the Presence of Glass and Mirrors by Acoustic and Visual Fusion.

    Science.gov (United States)

    Zhang, Yu; Ye, Mao; Manocha, Dinesh; Yang, Ruigang

    2017-07-06

    We present a practical and inexpensive method to reconstruct 3D scenes that include transparent and mirror objects. Our work is motivated by the need for automatically generating 3D models of interior scenes, which commonly include glass. These large structures are often invisible to cameras. Existing 3D reconstruction methods for transparent objects are usually not applicable in such a room-sized reconstruction setting. Our simple hardware setup augments a regular depth camera with a single ultrasonic sensor, which is able to measure the distance to any object, including transparent surfaces. The key technical challenge is the sparse sampling rate from the acoustic sensor, which only takes one point measurement per frame. To address this challenge, we take advantage of the fact that the large scale glass structures in indoor environments are usually either piece-wise planar or simple parametric surfaces. Based on these assumptions, we have developed a novel sensor fusion algorithm that first segments the (hybrid) depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome. We validated our algorithms with a number of challenging cases, including multiple panes of glass, mirrors, and even a curved glass cabinet.

  1. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    Directory of Open Access Journals (Sweden)

    J. Javier Yebes

    2015-04-01

    Full Text Available Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  2. Principle and engineering implementation of 3D visual representation and indexing of medical diagnostic records (Conference Presentation)

    Science.gov (United States)

    Shi, Liehang; Sun, Jianyong; Yang, Yuanyuan; Ling, Tonghui; Wang, Mingqing; Zhang, Jianguo

    2017-03-01

    Purpose: Due to the generation of large numbers of electronic imaging diagnostic records (IDR) year after year in a digital hospital, the IDR has become a main component of medical big data, which brings huge value to healthcare services, professionals and administration. However, the large volume of IDR in a hospital also brings new challenges to healthcare professionals and services: there may be so many IDRs for each patient that it is difficult for a doctor to review all of them in a limited appointed time slot. In this presentation, we present an innovative method that uses an anatomical 3D structure object to visually represent and index the historical medical status of each patient, called Visual Patient (VP) in this presentation, based on the long-term archived electronic IDR in a hospital, so that a doctor can quickly learn the historical medical status of the patient and quickly point to and retrieve the IDR he or she is interested in within a limited appointed time slot. Method: The engineering implementation of VP was to build a 3D Visual Representation and Index system called the VP system (VPS), including components for natural language processing (NLP) for Chinese, a Visual Index Creator (VIC), and a 3D Visual Rendering Engine. There were three steps in this implementation: (1) an XML-based electronic anatomic structure of the human body for each patient was created and used to visually index all of the abstract information of each IDR for each patient; (2) a number of specifically designed IDR parsing processors were developed and used to extract various kinds of abstract information from IDRs retrieved from hospital information systems; (3) a 3D anatomic rendering object was introduced to visually represent and display the content of the VIO for each patient. Results: The VPS was implemented in a simulated clinical environment including PACS/RIS to show VP instances to doctors. We set up two evaluation scenarios in a hospital radiology department to evaluate whether
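
    As a rough illustration of the indexing idea (not the authors' actual schema), an XML anatomical index can hang IDR abstracts off body-region elements that a 3D renderer later maps onto a figure; all element names, accession numbers, and report texts below are hypothetical.

```python
# Sketch: a tiny XML-based anatomical index in the spirit described above -- each body
# region carries abstracts of the imaging diagnostic records (IDR) that touch it.
import xml.etree.ElementTree as ET

patient = ET.Element("VisualPatient", id="P000123")                 # hypothetical patient id
chest = ET.SubElement(patient, "Region", name="chest")
ET.SubElement(chest, "IDR", accession="A-2015-0042", modality="CT",
              date="2015-06-01").text = "Pulmonary nodule, right upper lobe, 8 mm."
ET.SubElement(chest, "IDR", accession="A-2016-0117", modality="CR",
              date="2016-02-14").text = "No acute cardiopulmonary abnormality."

# A 3D renderer could color the "chest" mesh by the number of indexed records,
# and clicking it would retrieve the listed accessions from PACS/RIS.
print(ET.tostring(patient, encoding="unicode"))
```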

  3. RVA. 3-D Visualization and Analysis Software to Support Management of Oil and Gas Resources

    Energy Technology Data Exchange (ETDEWEB)

    Keefer, Donald A. [Univ. of Illinois, Champaign, IL (United States); Shaffer, Eric G. [Univ. of Illinois, Champaign, IL (United States); Storsved, Brynne [Univ. of Illinois, Champaign, IL (United States); Vanmoer, Mark [Univ. of Illinois, Champaign, IL (United States); Angrave, Lawrence [Univ. of Illinois, Champaign, IL (United States); Damico, James R. [Univ. of Illinois, Champaign, IL (United States); Grigsby, Nathan [Univ. of Illinois, Champaign, IL (United States)

    2015-12-01

    A free software application, RVA, has been developed as a plugin to the US DOE-funded ParaView visualization package, to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed as an open-source plugin to the 64 bit Windows version of ParaView 3.14. RVA was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including
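
As a usage illustration, a pvpython sketch of loading a ParaView plugin and a reservoir data set; the plugin path and file name are placeholders, and RVA-specific filters are omitted because their names depend on the installed plugin:

```python
# Run with ParaView's pvpython. Paths and file names are placeholders.
from paraview.simple import LoadPlugin, OpenDataFile, Show, Render

LoadPlugin("C:/ParaView/plugins/RVA.dll", remote=False, ns=globals())
model = OpenDataFile("reservoir_simulation.vtu")   # placeholder reservoir model file
Show(model)
Render()
```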

  4. Heterogeneity phantoms for visualization of 3D dose distributions by MRI-based polymer gel dosimetry

    International Nuclear Information System (INIS)

    Watanabe, Yoichi; Mooij, Rob; Mark Perera, G.; Maryanski, Marek J.

    2004-01-01

    Heterogeneity corrections in dose calculations are necessary for radiation therapy treatment plans. Dosimetric measurements of the heterogeneity effects are hampered if the detectors are large and their radiological characteristics are not equivalent to water. Gel dosimetry can solve these problems. Furthermore, it provides three-dimensional (3D) dose distributions. We used a cylindrical phantom filled with BANG-3® polymer gel to measure 3D dose distributions in heterogeneous media. The phantom has a cavity, in which water-equivalent or bone-like solid blocks can be inserted. The irradiated phantom was scanned with a magnetic resonance imaging (MRI) scanner. Dose distributions were obtained by calibrating the polymer gel for a relationship between the absorbed dose and the spin-spin relaxation rate of the magnetic resonance (MR) signal. To study dose distributions we had to analyze MR imaging artifacts. This was done in three ways: comparison of a measured dose distribution in a simulated homogeneous phantom with a reference dose distribution, comparison of a sagittally scanned image with a sagittal image reconstructed from axially scanned data, and coregistration of MR and computed-tomography images. We found that the MRI artifacts cause a geometrical distortion of less than 2 mm and less than 10% change in the dose around solid inserts. With these limitations in mind we could make some qualitative measurements. In particular, we observed clear differences between the measured dose distributions around an air gap and around bone-like material for a 6 MV photon beam. In conclusion, gel dosimetry has the potential to qualitatively characterize the dose distributions near heterogeneities in 3D.

  5. 3D visualization of ultra-fine ICON climate simulation data

    Science.gov (United States)

    Röber, Niklas; Spickermann, Dela; Böttinger, Michael

    2016-04-01

    Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, ranging from global coupled climate simulations down to very detailed and high resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well for high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high resolution data. The ICON model has been used for eddy-resolving simulations, and we developed specific plugins for the freely available visualization software ParaView and Vapor, which allow us to read and handle that much data. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore we discuss our current efforts to improve our visualization capabilities, thereby exploring the potential of regular in-situ visualization, as well as of in-situ compression / post visualization.

  6. Neural correlates of olfactory and visual memory performance in 3D-simulated mazes after intranasal insulin application.

    Science.gov (United States)

    Brünner, Yvonne F; Rodriguez-Raecke, Rea; Mutic, Smiljana; Benedict, Christian; Freiherr, Jessica

    2016-10-01

    This fMRI study intended to establish 3D-simulated mazes with olfactory and visual cues and to examine the effect of intranasally applied insulin on memory performance in healthy subjects. The effect of insulin on hippocampus-dependent brain activation was explored using a double-blind, placebo-controlled design. Following intranasal administration of either insulin (40 IU) or placebo, 16 male subjects participated in two experimental MRI sessions with olfactory and visual mazes. Each maze included two separate runs. The first was an encoding maze during which subjects learned eight olfactory or eight visual cues at different target locations. The second was a recall maze during which subjects were asked to remember the target cues at spatial locations. For the eleven subjects included in the fMRI analysis, we were able to validate brain activation for odor perception and visuospatial tasks. However, we did not observe an enhancement of declarative memory performance in our behavioral data or of hippocampal activity in response to insulin application in the fMRI analysis. It is therefore possible that intranasal insulin application is sensitive to methodological variations, e.g. timing of task execution and dose of application. Findings from this study suggest that our method of 3D-simulated mazes is feasible for studying the neural correlates of olfactory and visual memory performance. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Value of PET/CT 3D visualization of head and neck squamous cell carcinoma extended to mandible.

    Science.gov (United States)

    Lopez, R; Gantet, P; Julian, A; Hitzel, A; Herbault-Barres, B; Alshehri, S; Payoux, P

    2018-05-01

    To study an original 3D visualization of head and neck squamous cell carcinoma extending to the mandible by using [18F]-NaF PET/CT and [18F]-FDG PET/CT imaging along with a new innovative FDG and NaF image analysis using dedicated software. The main interest of the 3D evaluation is to obtain a better visualization of bone extension in such cancers, and it could also help avoid unsatisfactory surgical treatment later on. A prospective study was carried out from November 2016 to September 2017. Twenty patients with head and neck squamous cell carcinoma extending to the mandible (stage 4 in the UICC classification) underwent [18F]-NaF and [18F]-FDG PET/CT. We compared the delineation of 3D quantification obtained with [18F]-NaF and [18F]-FDG PET/CT. In order to carry out this comparison, a method of visualisation and quantification of PET images was developed. This new approach was based on a process of quantification of radioactive activity within the mandibular bone that objectively defined the significant limits of this activity on PET images and on a 3D visualization. Furthermore, the spatial limits obtained by analysis of the PET/CT 3D images were compared to those obtained by histopathological examination of the mandibular resection, which confirmed intraosseous extension to the mandible. The [18F]-NaF PET/CT imaging confirmed the mandibular extension in 85% of cases, whereas it was not shown by [18F]-FDG PET/CT imaging. The [18F]-NaF PET/CT was significantly more accurate than [18F]-FDG PET/CT in 3D assessment of intraosseous extension of head and neck squamous cell carcinoma. This new 3D information shows its importance in the imaging approach to these cancers. All cases of mandibular extension suspected on [18F]-NaF PET/CT imaging were confirmed based on histopathological results as a reference. The [18F]-NaF PET/CT 3D visualization should be included in the pre-treatment workups of head and neck cancers. With the use of dedicated software which enables objective delineation of

  8. Method of surface error visualization using laser 3D projection technology

    Science.gov (United States)

    Guo, Lili; Li, Lijuan; Lin, Xuezhu

    2017-10-01

    In the manufacturing of large components in the aerospace, automobile and shipbuilding industries, important molds or stamped metal plates require precise forming of the surface, which usually needs to be verified and, if necessary, the surface needs to be corrected and reprocessed. In order to make the correction of the machined surface more convenient, this paper proposes a method based on a laser 3D projection system. The method uses contour lines, in the manner of terrain contours, to show the deviation between the actually measured data and the theoretical mathematical model (CAD) directly on the measured surface. First, the machined surface is measured to obtain point cloud data and form a triangular mesh; second, through coordinate transformation, the point cloud data are unified with the theoretical model and the three-dimensional deviation is calculated; according to the sign (positive or negative) and magnitude of the deviation, color bands are used to denote the three-dimensional deviation; then, three-dimensional contour lines are drawn to represent each deviation band, creating the projection files; finally, the projection files are imported into the laser projector and the contour lines are projected onto the machined part at 1:1 scale in the form of a laser beam. By comparing the full-color 3D deviation map with the projected graph, deviations can be located and corrected quantitatively to meet the machining precision requirements. The method clearly displays the trend of the machined surface deviation.
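
A sketch of the deviation-computation step under simplifying assumptions: the registered measured point cloud is compared against a densely sampled reference surface by nearest-neighbour distance (unsigned deviation), and the deviations are binned into bands for the colour-coded map and contour lines; array and file names are illustrative:

```python
# Approximate 3D deviation of a registered point cloud against a CAD model
# represented by densely sampled surface points; names are illustrative.
import numpy as np
from scipy.spatial import cKDTree

measured = np.loadtxt("measured_points.xyz")    # N x 3, already registered to CAD frame
reference = np.loadtxt("cad_samples.xyz")       # M x 3, sampled from the CAD surface

dist, _ = cKDTree(reference).query(measured)    # unsigned deviation per measured point

# Bin deviations into bands (here every 0.1 mm) for the colour-coded map and
# the contour lines that are later exported to the laser projector.
bands = np.digitize(dist, bins=np.arange(0.0, 2.0, 0.1))
print(dist.max(), np.bincount(bands))
```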

  9. On the Benefits of Using Constant Visual Angle Glyphs in Interactive Exploration of 3D Scatterplots

    DEFF Research Database (Denmark)

    Stenholt, Rasmus

    2014-01-01

    structures. Furthermore, we introduce a new approach to glyph visualization—constant visual angle (CVA) glyphs—which has the potential to mitigate the effect of clutter at the cost of dispensing with the common real-world depth cue of relative size. In a controlled experiment where test subjects had to locate and select visualized structures in an immersive virtual environment, we identified several significant results. One result is that CVA glyphs ease perception of structures in cluttered environments while not deteriorating it when clutter is absent. Another is the existence of threshold densities...

  10. Augmented Reality in Scientific Publications-Taking the Visualization of 3D Structures to the Next Level.

    Science.gov (United States)

    Wolle, Patrik; Müller, Matthias P; Rauh, Daniel

    2018-03-16

    The examination of three-dimensional structural models in scientific publications allows the reader to validate or invalidate conclusions drawn by the authors. However, either due to a (temporary) lack of access to proper visualization software or a lack of proficiency, this information is not necessarily available to every reader. As the digital revolution is quickly progressing, technologies have become widely available that overcome these limitations and offer everyone the opportunity to appreciate models not only in 2D, but also in 3D. Additionally, mobile devices such as smartphones and tablets allow access to this information almost anywhere, at any time. Since access to such information has only recently become standard practice, we want to outline straightforward ways to incorporate 3D models in augmented reality into scientific publications, books, posters, and presentations, and suggest that this should become general practice.

  11. 3D visualization of mold filling stages in thermal nanoimprint by white light interferometry and atomic force microscopy

    International Nuclear Information System (INIS)

    Schift, Helmut; Gobrecht, Jens; Kim, Geehong; Lee, Jaejong

    2009-01-01

    A method for continuous 3D visualization of the mold filling at a microscopic level during a thermoplastic nanoimprint process was developed. It is based on superposition of micrographs of a series of different stages of imprint. It was applied to two common 3D microscopies with different resolution limitations. Due to advanced image processing, the animated movie sequence, available as supplementary multimedia information in the online version of this journal, gives an unprecedented insight into the complex polymer flow and shows how voids are forming and vanishing during the imprint process around micropillars. The method has advantages over current real-time methods and can be used as an analytical tool for optimization of processes and improvement of stamp design down to the sub-10 nm range.

  12. Between the Real and the Virtual: 3D visualization in the Cultural Heritage domain - expectations and prospects

    Directory of Open Access Journals (Sweden)

    Sorin Hermon

    2011-05-01

    Full Text Available The paper discusses two uses of 3D Visualization and Virtual Reality (hereafter VR) of Cultural Heritage (CH) assets: a less common one, in archaeological/historical research, and a more frequent one, as a communication medium in CH museums. While technological effort has been mainly invested in improving the "accuracy" of VR (determined as how truthfully it reproduces the "CH reality"), issues related to scientific requirements (data transparency, separation between "real" and "virtual", etc.) are largely neglected, or at least not directly related to the 3D outcome, which may explain why, after more than twenty years of producing VR models, they are still rarely used in archaeological research. The paper will present a proposal for developing VR tools so as to be meaningful CH research tools, as well as a methodology for designing VR outcomes to be used as a communication medium in CH museums.

  13. Volume based DCE-MRI breast cancer detection with 3D visualization system

    International Nuclear Information System (INIS)

    Chia, F.K.; Sim, K.S.; Chong, S.S.; Tan, S.T.; Ting, H.Y.; Abbas, S.F.; Omar, S.

    2011-01-01

    In this paper, a computer-aided design auto-probing system is presented to detect breast lesions based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) images. The system is proposed in order to aid radiologists and doctors in the interpretation of MRI breast images and to enhance detection accuracy. A series of approaches is presented to enhance detection accuracy and refine the breast region of interest (ROI) automatically. Besides, a semi-quantitative analysis is used to segment breast lesions from the selected breast ROI and to classify whether the detected tumour is benign, suspicious or malignant. The entire breast ROI, including the detected tumour, is displayed in 3D. The methodology has been applied on 104 sets of Digital Imaging and Communications in Medicine (DICOM) breast MRI datasets. The biopsy results were verified by two radiologists from Hospital Malaysia. The experimental results demonstrate that the proposed scheme can precisely identify breast cancer regions with 93% accuracy. (author)

  14. Visualization of spatial-temporal data based on 3D virtual scene

    Science.gov (United States)

    Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang

    2009-10-01

    The main purpose of this paper is to realize three-dimensional dynamic visualization of spatial-temporal data based on a three-dimensional virtual scene, using three-dimensional visualization technology combined with GIS, so that people's abilities to cognize time and space are enhanced and improved through the design of dynamic symbols and interactive expression. Using particle systems, three-dimensional simulation, virtual reality and other visual means, we can simulate the situations produced by the change of spatial location and property information of geographical entities over time, explore and analyze their movement and transformation rules through interaction, and also replay history and forecast the future. In this paper, the main research objects are vehicle tracks and typhoon paths as spatial-temporal data; through three-dimensional dynamic simulation of their tracks, timely monitoring of trends and replay of historical tracks are realized. Visualization techniques for spatial-temporal data in a three-dimensional virtual scene provide an excellent cognitive instrument for spatial-temporal information: they not only show the changes and developments of a situation with greater clarity, but can also be used for prediction and deduction of future developments and changes.
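
A minimal sketch of one such display, a space-time-cube style rendering of a track where the vertical axis encodes time; the coordinates below are synthetic, and the paper's implementation uses a full 3D virtual scene rather than matplotlib:

```python
# Render a spatial-temporal track (e.g. a typhoon path) as a 3D poly-line
# whose vertical axis encodes time. Coordinates are synthetic.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 48, 100)                       # hours since first fix
lon = 125.0 - 0.15 * t                            # synthetic track
lat = 18.0 + 0.10 * t + 0.5 * np.sin(t / 6.0)

ax = plt.figure().add_subplot(projection="3d")
ax.plot(lon, lat, t)                              # space-time cube style display
ax.set_xlabel("longitude"); ax.set_ylabel("latitude"); ax.set_zlabel("time [h]")
plt.show()
```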

  15. Hybrid 2-D and 3-D Immersive and Interactive User Interface for Scientific Data Visualization

    Science.gov (United States)

    2017-08-01

    pvOSPRay real-time rendering capability is a crucial component in our workflow. (US Army Research Laboratory report by Simon Su and Luis Bravo, Vehicle Technology Directorate, ARL; approved for public release, distribution is unlimited.)

  16. SlicerAstro : A 3-D interactive visual analytics tool for HI data

    NARCIS (Netherlands)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Fillion-Robin, J. C.; Yu, L.

    SKA precursors are capable of detecting hundreds of galaxies in HI in a single 12 h pointing. In deeper surveys one will more easily probe faint HI structures, typically located in the vicinity of galaxies, such as tails, filaments, and extraplanar gas. The importance of interactive visualization in

  17. Virtual teeth: a 3D method for editing and visualizing small structures in CT scans

    DEFF Research Database (Denmark)

    Bro-Nielsen, Morten; Larsen, Per; Kreiborg, Sven

    1996-01-01

    The paper presents an interactive method for segmentation and visualization of small structures in CT scans. A combination of isosurface generation, spatial region growing and interactive graphics tools is used to extract small structures interactively. A practical example of segmentation of the

  18. Examining the Conceptual Understandings of Geoscience Concepts of Students with Visual Impairments: Implications of 3-D Printing

    Science.gov (United States)

    Koehler, Karen E.

    The purpose of this qualitative study was to explore the use of 3-D printed models as an instructional tool in a middle school science classroom for students with visual impairments and compare their use to traditional tactile graphics for aiding conceptual understanding of geoscience concepts. Specifically, this study examined if the students' conceptual understanding of plate tectonics was different when 3-D printed objects were used versus traditional tactile graphics and explored the misconceptions held by students with visual impairments related to plate tectonics and associated geoscience concepts. Interview data were collected one week prior to instruction, one week after instruction and throughout the 3-week instructional period; additional data sources included student journals, other student documents and audio-taped instructional sessions. All students in the middle school classroom received instruction on plate tectonics using the same inquiry-based curriculum but during different time periods of the day. One group of students, the 3D group, had access to 3-D printed models illustrating specific geoscience concepts, and the other group of students, the TG group, had access to tactile graphics illustrating the same geoscience concepts. The videotaped pre- and post-interviews were transcribed, analyzed and coded for conceptual understanding using constant comparative analysis, and also used to uncover student misconceptions. All student responses to the interview questions were categorized in terms of conceptual understanding. Analysis of student journals and classroom talk served to uncover student mental models and misconceptions about plate tectonics and associated geoscience concepts and to measure conceptual understanding. A slight majority of the conceptual understanding before instruction was categorized as no understanding or alternative understanding and after instruction the larger majority of conceptual understanding was categorized as scientific or scientific

  19. Fall Prevention Self-Assessments Via Mobile 3D Visualization Technologies: Community Dwelling Older Adults' Perceptions of Opportunities and Challenges.

    Science.gov (United States)

    Hamm, Julian; Money, Arthur; Atwal, Anita

    2017-06-19

    In the field of occupational therapy, the assistive equipment provision process (AEPP) is a prominent preventive strategy used to promote independent living and to identify and alleviate fall risk factors via the provision of assistive equipment within the home environment. Current practice involves the use of paper-based forms that include 2D measurement guidance diagrams that aim to communicate the precise points and dimensions that must be measured in order to make AEPP assessments. There are, however, issues such as "poor fit" of equipment due to inaccurate measurements taken and recorded, resulting in more than 50% of equipment installed within the home being abandoned by patients. This paper presents a novel 3D measurement aid prototype (3D-MAP) that provides enhanced measurement and assessment guidance to patients via the use of 3D visualization technologies. The purpose of this study was to explore the perceptions of older adults with regard to the barriers and opportunities of using the 3D-MAP application as a tool that enables patient self-delivery of the AEPP. Thirty-three community-dwelling older adults participated in interactive sessions with a bespoke 3D-MAP application, utilizing the retrospective think-aloud protocol and semistructured focus group discussions. The System Usability Scale (SUS) questionnaire was used to evaluate the application's usability. Thematic template analysis was carried out on the SUS item discussions, think-aloud, and semistructured focus group data. The quantitative SUS results revealed that the application may be described as having "marginal-high" and "good" levels of usability, along with strong agreement with items relating to usability (P=.004) and learnability. Further work will evaluate the utility of the application with regard to the effectiveness, efficiency, accuracy, and reliability of measurements that are recorded using the application, and compare it with 2D measurement guidance leaflets. ©Julian Hamm, Arthur Money, Anita Atwal. Originally published in

  20. Functional outcomes following lesions in visual cortex: Implications for plasticity of high-level vision.

    Science.gov (United States)

    Liu, Tina T; Behrmann, Marlene

    2017-10-01

    Understanding the nature and extent of neural plasticity in humans remains a key challenge for neuroscience. Importantly, however, a precise characterization of plasticity and its underlying mechanism has the potential to enable new approaches for enhancing reorganization of cortical function. Investigations of the impairment and subsequent recovery of cognitive and perceptual functions following early-onset cortical lesions in humans provide a unique opportunity to elucidate how the brain changes, adapts, and reorganizes. Specifically, here, we focus on restitution of visual function, and we review the findings on plasticity and re-organization of the ventral occipital temporal cortex (VOTC) in published reports of 46 patients with a lesion to or resection of the visual cortex early in life. Findings reveal that a lesion to the VOTC results in a deficit that affects the visual recognition of more than one category of stimuli (faces, objects and words). In addition, the majority of pediatric patients show limited recovery over time, especially those in whom deficits in low-level vision also persist. Last, given that neither the equipotentiality nor the modularity view on plasticity was clearly supported, we suggest some intermediate possibilities in which some plasticity may be evident, but this might depend on the area that was affected, its maturational trajectory, as well as its structural and functional connectivity constraints. Finally, we offer suggestions for future research that can elucidate plasticity further. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. On the future of 3-D visualization in non-medical industrial x-ray computed tomography

    International Nuclear Information System (INIS)

    Wells, J.M.

    2004-01-01

    The purpose of imaging is to capture and record the details of an object for both current and future analysis in a transportable and archival format. Generally, the development and understanding of the relationships of the features of interest thus revealed in the image is ultimately essential for the beneficial utilization of that knowledge. Modern advanced imaging methods utilized in both medical and industrial applications are predominantly of a digital format, and increasingly moving from a 2-D to 3-D modality to allow for significantly improved detail resolution and clarity of volumetric visualization. Conventional digital radiography (DR), for example, compresses an entire object volume onto a 2-D planar image with a consequent lack of spatial resolution and considerable loss of small-volume feature resolution. Computed tomography (CT) overcomes both of these limitations, providing the highly desirable capability of precise 3-D detection, localization and characterization of multiple features throughout the subject object volume. CT has the further capability to reconstruct virtual 3-D solid object images with arbitrary and reversible planar sectioning and of variable transparency to clearly visualize features of different densities in situ within an otherwise opaque object. While tomographic imaging is utilized in various medical CT, MRI, PET, EBCT and 3-D ultrasound modalities, only X-ray CT imaging is briefly discussed here as it presents comparably high quality images and is quite similar and synergistic with industrial XCT. Medical CT procedures started in the late 1970's (originally known as CAT scans) and have progressed to the extent of being experienced and accepted by much of the general population. Non-medical CT (or industrial XCT) technology has historically followed in the shadow of medical CT but remains today considerably less pervasive. There are, however, an increasing number of important equipment and application distinctions. These will

  2. An Active System for Visually-Guided Reaching in 3D across Binocular Fixations

    Directory of Open Access Journals (Sweden)

    Ester Martinez-Martin

    2014-01-01

    Full Text Available Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. So, the motor information is coded in egocentric coordinates obtained from the allocentric representation of the space (in terms of disparity), generated from the egocentric representation of the visual information (image coordinates). In that way, the different aspects of visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data.
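
A toy sketch of phase-based disparity estimation with a single complex Gabor channel, assuming NumPy: the phase difference between left and right filter responses, divided by the filter's peak frequency, gives a local disparity estimate. The real system pools a whole bank of orientations and frequencies, and only shifts smaller than half the filter wavelength (here 5 px) are unambiguous:

```python
# Toy phase-based disparity from one complex Gabor channel; parameters illustrative.
import numpy as np

def gabor_kernel(freq=0.1, sigma=8.0, size=41):
    x = np.arange(size) - size // 2
    return np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * x)

def phase_disparity(left_row, right_row, freq=0.1):
    k = gabor_kernel(freq)
    rl = np.convolve(left_row, k, mode="same")     # complex left response
    rr = np.convolve(right_row, k, mode="same")    # complex right response
    dphi = np.angle(rl * np.conj(rr))              # phase difference in (-pi, pi]
    return dphi / (2 * np.pi * freq)               # disparity in pixels

# Synthetic check: right row equals the left row shifted by +3 pixels.
rng = np.random.default_rng(0)
left = rng.random(256)
right = np.roll(left, 3)
print(np.median(phase_disparity(left, right)))     # approximately 3
```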

  3. 3D visualization of Thoraco-Lumbar Spinal Lesions in German Shepherd Dog

    International Nuclear Information System (INIS)

    Azpiroz, J.; Krafft, J.; Cadena, M.; Rodriguez, A. O.

    2006-01-01

    Computed tomography (CT) has been found to be an excellent imaging modality due to its sensitivity in characterizing the morphology of the spine in dogs. This technique is considered to be particularly helpful for diagnosing spinal cord atrophy and spinal stenosis. The three-dimensional visualization of organs and bones can significantly improve the diagnosis of certain diseases in dogs. CT images of a German shepherd dog's spinal cord were acquired to generate stacks, which were digitally processed and arranged into a volume image. All imaging experiments were performed using standard clinical protocols on a clinical CT scanner. The three-dimensional visualization allowed us to observe anatomical structures that are otherwise not possible to observe with two-dimensional images. The combination of an imaging modality like CT together with image processing techniques can be a powerful tool for the diagnosis of a number of animal diseases

  4. 3D visualization of Thoraco-Lumbar Spinal Lesions in German Shepherd Dog

    Science.gov (United States)

    Azpiroz, J.; Krafft, J.; Cadena, M.; Rodríguez, A. O.

    2006-09-01

    Computed tomography (CT) has been found to be an excellent imaging modality due to its sensitivity in characterizing the morphology of the spine in dogs. This technique is considered to be particularly helpful for diagnosing spinal cord atrophy and spinal stenosis. The three-dimensional visualization of organs and bones can significantly improve the diagnosis of certain diseases in dogs. CT images of a German shepherd dog's spinal cord were acquired to generate stacks, which were digitally processed and arranged into a volume image. All imaging experiments were performed using standard clinical protocols on a clinical CT scanner. The three-dimensional visualization allowed us to observe anatomical structures that are otherwise not possible to observe with two-dimensional images. The combination of an imaging modality like CT together with image processing techniques can be a powerful tool for the diagnosis of a number of animal diseases.

  5. Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data

    Energy Technology Data Exchange (ETDEWEB)

    Ruebel, Oliver; Keranen, Soile V.E.; Biggin, Mark; Knowles, David W.; Weber, Gunther H.; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2011-03-30

    Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes Point-Cloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets, such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and Matlab via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even skipped using a genetic algorithm implemented in Matlab and integrated into PCX via our Matlab interface.

  6. Automatic delineation and 3D visualization of the human ventricular system using probabilistic neural networks

    Science.gov (United States)

    Hatfield, Fraser N.; Dehmeshki, Jamshid

    1998-09-01

    Neurosurgery is an extremely specialized area of medical practice, requiring many years of training. It has been suggested that virtual reality models of the complex structures within the brain may aid in the training of neurosurgeons as well as playing an important role in the preparation for surgery. This paper focuses on the application of a probabilistic neural network to the automatic segmentation of the ventricles from magnetic resonance images of the brain, and their three dimensional visualization.
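
A minimal Parzen-window classifier sketch in the spirit of a probabilistic neural network: each class density is a sum of Gaussian kernels centred on labelled training feature vectors, and a voxel is assigned to the class with the highest density. The feature choice and smoothing width are illustrative, not those of the paper:

```python
# Minimal Parzen-window ("probabilistic neural network") classifier sketch.
import numpy as np

def pnn_classify(train_x, train_y, test_x, sigma=0.1):
    """train_x: (N, D) features, train_y: (N,) labels, test_x: (M, D)."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        # Squared distances from every test vector to every training vector of class c.
        d2 = ((test_x[:, None, :] - train_x[train_y == c][None, :, :]) ** 2).sum(-1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1))
    return classes[np.argmax(np.stack(scores), axis=0)]

# Toy example: 1-D intensity feature, two classes (ventricular CSF vs. background).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.2, 0.05, 50), rng.normal(0.7, 0.05, 50)])[:, None]
y = np.array([0] * 50 + [1] * 50)
print(pnn_classify(x, y, np.array([[0.25], [0.65]])))   # expected: [0 1]
```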

  7. Generic Space Science Visualization in 2D/3D using SDDAS

    Science.gov (United States)

    Mukherjee, J.; Murphy, Z. B.; Gonzalez, C. A.; Muller, M.; Ybarra, S.

    2017-12-01

    The Southwest Data Display and Analysis System (SDDAS) is a flexible multi-mission / multi-instrument software system intended to support space physics data analysis, and has been in active development for over 20 years. For the Magnetospheric Multi-Scale (MMS), Juno, Cluster, and Mars Express missions, we have modified these generic tools for visualizing data in two and three dimensions. The SDDAS software is open source and makes use of various other open source packages, including VTK and Qwt. The software offers interactive plotting as well as a Python and Lua module to modify the data before plotting. In theory, by writing a Lua or Python module to read the data, any data could be used. Currently, the software can natively read data in IDFS, CEF, CDF, FITS, SEG-Y, ASCII, and XLS formats. We have integrated the software with other Python packages such as SPICE and SpacePy. Included with the visualization software is a database application and other utilities for managing data that can retrieve data from the Cluster Active Archive and the Space Physics Data Facility at Goddard, as well as other local archives. Line plots, spectrograms, geographic plots, volume plots, strip charts, etc. are just some of the types of plots one can generate with SDDAS. Furthermore, due to the design, output is not limited strictly to visualization, as SDDAS can also be used to generate stand-alone IDL or Python visualization code. Lastly, SDDAS has been successfully used as a backend for several web-based analysis systems.

  8. 3D PATTERN OF BRAIN ABNORMALITIES IN WILLIAMS SYNDROME VISUALIZED USING TENSOR-BASED MORPHOMETRY

    OpenAIRE

    Chiang, Ming-Chang; Reiss, Allan L.; Lee, Agatha D.; Bellugi, Ursula; Galaburda, Albert M.; Korenberg, Julie R.; Mills, Debra L.; Toga, Arthur W.; Thompson, Paul M.

    2007-01-01

    Williams syndrome (WS) is a neurodevelopmental disorder associated with deletion of ~20 contiguous genes in chromosome band 7q11.23. Individuals with WS exhibit mild to moderate mental retardation, but are relatively more proficient in specific language and musical abilities. We used tensor-based morphometry (TBM) to visualize the complex pattern of gray/white matter reductions in WS, based on fluid registration of structural brain images.

  9. A browser-based 3D Visualization Tool designed for comparing CERES/CALIOP/CloudSAT level-2 data sets.

    Science.gov (United States)

    Chu, C.; Sun-Mack, S.; Chen, Y.; Heckert, E.; Doelling, D. R.

    2017-12-01

    At NASA Langley, Clouds and the Earth's Radiant Energy System (CERES) and Moderate Resolution Imaging Spectroradiometer (MODIS) data are merged with Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission and CloudSat Cloud Profiling Radar (CPR) data. The CERES merged product (C3M) matches up to three CALIPSO footprints with each MODIS pixel along its ground track. It then assigns the nearest CloudSat footprint to each of those MODIS pixels. The cloud properties from MODIS, retrieved using the CERES algorithms, are included in C3M with the matched CALIPSO and CloudSat products along with radiances from 18 MODIS channels. The dataset is used to validate the CERES-retrieved MODIS cloud properties and the computed TOA and surface flux difference using MODIS or CALIOP/CloudSat retrieved clouds. This information is then used to tune the computed fluxes to match the CERES observed TOA flux. A visualization tool will be invaluable to determine the cause of these large cloud and flux differences in order to improve the methodology. This effort is part of a larger effort to allow users to order the CERES C3M product sub-setted by time and parameter, as well as the previously mentioned visualization capabilities. This presentation will show a new graphical 3D interface, 3D-CERESVis, that allows users to view both passive remote sensing satellites (MODIS and CERES) and active satellites (CALIPSO and CloudSat), such that the detailed vertical structures of cloud properties from CALIPSO and CloudSat are displayed side by side with horizontally retrieved cloud properties from MODIS and CERES. Similarly, the CERES computed profile fluxes, whether using MODIS or CALIPSO and CloudSat clouds, can also be compared. 3D-CERESVis is a browser-based visualization tool that makes use of techniques such as multiple synchronized cursors, COLLADA format data and Cesium.

  10. Hubble Goes IMAX: 3D Visualization of the GOODS Southern Field for a Large Format Short Film

    Science.gov (United States)

    Summers, F. J.; Stoke, J. M.; Albert, L. J.; Bacon, G. T.; Barranger, C. L.; Feild, A. R.; Frattare, L. M.; Godfrey, J. P.; Levay, Z. G.; Preston, B. S.; Fletcher, L. M.; GOODS Team

    2003-12-01

    The Office of Public Outreach at the Space Telescope Science Institute is producing a several minute IMAX film that will have its world premiere at the January 2004 AAS meeting. The film explores the rich tapestry of galaxies in the GOODS Survey Southern Field in both two and three dimensions. This poster describes the visualization efforts from FITS files through the galaxy processing pipeline to 3D modelling and the rendering of approximately 100 billion pixels. The IMAX film will be shown at a special session at Fernbank Science Center, and the video will be shown at the STScI booth.

  11. Visual navigation of the UAVs on the basis of 3D natural landmarks

    Science.gov (United States)

    Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry

    2015-12-01

    This work considers the tracking of the UAV (unmanned aerial vehicle) on the basis of onboard observations of natural landmarks, including azimuth and elevation angles. It is assumed that the UAV's cameras are able to capture the angular position of reference points and to measure the angles of the sight line. Such measurements involve the real position of the UAV in implicit form, and therefore some nonlinear filter, such as the Extended Kalman filter (EKF) or others, must be used in order to exploit these measurements for UAV control. Recently it was shown that a modified pseudomeasurement method may be used to control a UAV on the basis of the observation of reference points assigned along the UAV path in advance. However, the use of such a set of points requires a cumbersome recognition procedure and a huge volume of on-board memory. Natural landmarks serving as such reference points, which may be determined on-line, can significantly reduce the on-board memory and the computational difficulties. The principal difference of this work is the usage of 3D reference point coordinates, which permits determining the position of the UAV more precisely and thereby guiding it along the path with higher accuracy, which is extremely important for the successful performance of autonomous missions. The article suggests the new RANSAC for ISOMETRY algorithm and the use of recently developed estimation and control algorithms for tracking of a given reference path under external perturbation and noisy angular measurements.
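
For orientation only, a generic sketch (not the authors' modified pseudomeasurement method) of an EKF position update from azimuth/elevation observations of a single known 3D landmark, using a finite-difference Jacobian; the landmark coordinates and noise levels are illustrative:

```python
# Generic bearing-only EKF update against one known 3D landmark; illustrative values.
import numpy as np

landmark = np.array([120.0, 40.0, 15.0])            # known 3D reference point

def h(p):
    """Azimuth/elevation of the landmark as seen from UAV position p."""
    d = landmark - p
    az = np.arctan2(d[1], d[0])
    el = np.arctan2(d[2], np.hypot(d[0], d[1]))
    return np.array([az, el])

def ekf_update(x, P, z, R, eps=1e-5):
    # Finite-difference Jacobian of the measurement function (2 x 3).
    H = np.column_stack([(h(x + eps * e) - h(x - eps * e)) / (2 * eps) for e in np.eye(3)])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    innov = (z - h(x) + np.pi) % (2 * np.pi) - np.pi  # wrap angular residuals
    return x + K @ innov, (np.eye(3) - K @ H) @ P

x, P = np.array([100.0, 30.0, 10.0]), 25.0 * np.eye(3)   # prior position estimate
z = h(np.array([102.0, 31.0, 11.0]))                      # simulated noise-free measurement
x_new, P_new = ekf_update(x, P, z, np.deg2rad(0.5) ** 2 * np.eye(2))
print(x_new)
```

A single landmark constrains only two of the three position degrees of freedom, so in practice bearings to several landmarks are fused in successive updates.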

  12. Real-time 3D visualization of cellular rearrangements during cardiac valve formation.

    Science.gov (United States)

    Pestel, Jenny; Ramadass, Radhan; Gauvrit, Sebastien; Helker, Christian; Herzog, Wiebke; Stainier, Didier Y R

    2016-06-15

    During cardiac valve development, the single-layered endocardial sheet at the atrioventricular canal (AVC) is remodeled into multilayered immature valve leaflets. Most of our knowledge about this process comes from examining fixed samples that do not allow a real-time appreciation of the intricacies of valve formation. Here, we exploit non-invasive in vivo imaging techniques to identify the dynamic cell behaviors that lead to the formation of the immature valve leaflets. We find that in zebrafish, the valve leaflets consist of two sets of endocardial cells at the luminal and abluminal side, which we refer to as luminal cells (LCs) and abluminal cells (ALCs), respectively. By analyzing cellular rearrangements during valve formation, we observed that the LCs and ALCs originate from the atrium and ventricle, respectively. Furthermore, we utilized Wnt/β-catenin and Notch signaling reporter lines to distinguish between the LCs and ALCs, and also found that cardiac contractility and/or blood flow is necessary for the endocardial expression of these signaling reporters. Thus, our 3D analyses of cardiac valve formation in zebrafish provide fundamental insights into the cellular rearrangements underlying this process. © 2016. Published by The Company of Biologists Ltd.

  13. Computational Topology Counterexamples with 3D Visualization of Bézier Curves

    Directory of Open Access Journals (Sweden)

    J. Li

    2012-10-01

    Full Text Available For applications in computing, Bézier curves are pervasive and are defined by a piecewise linear curve L which is embedded in R3 and yields a smooth polynomial curve C embedded in R3. It is of interest to understand when L and C have the same embeddings. One class of counterexamples is shown for L being unknotted, while C is knotted. Another class of counterexamples is created where L is equilateral and simple, while C is self-intersecting. These counterexamples were discovered using curve visualizing software and numerical algorithms that produce general procedures to create more examples.
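
A short sketch of the relationship under study: the piecewise linear control polygon L determines the smooth Bézier curve C via de Casteljau evaluation, and comparing the two polylines is the starting point for the knotting and self-intersection counterexamples; the control points below are arbitrary:

```python
# De Casteljau evaluation of the Bézier curve C defined by control polygon L in R^3.
import numpy as np

def de_casteljau(control, t):
    pts = np.asarray(control, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]   # repeated linear interpolation
    return pts[0]

L = [(0, 0, 0), (1, 2, 0), (2, -1, 1), (3, 0, 0)]           # control polygon in R^3
C = np.array([de_casteljau(L, t) for t in np.linspace(0, 1, 50)])
print(C[0], C[-1])   # the curve interpolates the first and last control points
```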

  14. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    Science.gov (United States)

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses with variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances ranging up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. © 2016 Elsevier B.V. All rights reserved.

  15. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    Science.gov (United States)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and understanding the human - autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determines the scene based on the objects present. This paper presents a framework within which low level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.

  16. 3D visualization and finite element mesh formation from wood anatomy samples, Part II – Algorithm approach

    Directory of Open Access Journals (Sweden)

    Petr Koňas

    2009-01-01

    Full Text Available The paper presents the new original application WOOD3D in the form of assembled program code. The work extends the previous article "Part I – Theoretical approach" with a detailed description of the implemented C++ classes of the utilized projects Visualization Toolkit (VTK), Insight Toolkit (ITK) and MIMX. The code is written in CMake style and is available as a multiplatform application; currently GNU Linux (32/64b) and MS Windows (32/64b) platforms are released. The article discusses various filter classes for image filtering; mainly the Otsu and binary threshold filters are assessed for thresholding of anatomical wood samples. Registration of image series, to compensate for differences between colour spaces, is included. The resulting image analysis workflow is a new methodological approach to image processing through composition, visualization, filtering, registration and finite element mesh formation. The application generates a script in the ANSYS parametric design language (APDL) which is fully compatible with the ANSYS finite element solver and designer environment. The script includes the whole definition of the unstructured finite element mesh formed by individual elements and nodes. Due to its simple notation, the same script can be used for the generation of geometrical entities at element positions. Such formed volumetric entities are prepared for further geometry approximation (e.g. by boolean or more advanced methods). Hexahedral and tetrahedral types of mesh elements are formed on user request with specified mesh options. Hexahedral meshes are formed both with uniform element size and with anisotropic character; a modified octree method for hexahedral meshes with anisotropic character was implemented in the application. Multicore CPUs are supported for fast image analysis. Visualization of the image series and of the resulting 3D image is realized in the well-known public VTK format and viewed in the GPL application ParaView. Future work based on mesh
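
For illustration, a Python equivalent of the Otsu thresholding step applied to a wood anatomy slice (the application itself uses the C++ ITK filter classes); the file name is a placeholder:

```python
# Otsu thresholding of one wood anatomy slice using scikit-image; file name is a placeholder.
from skimage import io, filters

slice_img = io.imread("wood_slice_0001.png", as_gray=True)
t = filters.threshold_otsu(slice_img)        # global Otsu threshold
mask = slice_img > t                         # binary segmentation of cell walls vs. lumina
print(t, mask.mean())
```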

  17. 3D visualization of subcellular structures of Schizosaccharomyces pombe by hard X-ray tomography.

    Science.gov (United States)

    Yang, Y; Li, W; Liu, G; Zhang, X; Chen, J; Wu, W; Guan, Y; Xiong, Y; Tian, Y; Wu, Z

    2010-10-01

    Cellular structures of the fission yeast, Schizosaccharomyces pombe, were examined by using hard X-ray tomography. Since cells are nearly transparent to hard X-rays, Zernike phase contrast and heavy metal staining were introduced to improve image contrast. Using such methods, images taken at 8 keV displayed sufficient contrast for observing cellular structures. The cell wall, the intracellular organelles and the entire structural organization of the whole cells were visualized in three dimensions at a resolution better than 100 nm. Comparison between phase contrast and absorption contrast was also made, indicating the obvious advantage of phase contrast for cellular imaging at this energy. Our results demonstrate that hard X-ray tomography with Zernike phase contrast is suitable for cellular imaging. Its unique abilities make it a potentially useful tool for revealing structural information from cells, especially thick eukaryotic cells. © 2010 The Authors Journal compilation © 2010 The Royal Microscopical Society.

  18. 3D Online Visualization and Synergy of NASA A-Train Data Using Google Earth

    Science.gov (United States)

    Chen, Aijun; Kempler, Steven; Leptoukh, Gregory; Smith, Peter

    2010-01-01

    This poster presentation reviews the use of Google Earth to assist in three dimensional online visualization of NASA Earth science and geospatial data. The NASA A-Train satellite constellation is a succession of seven sun-synchronous orbit satellites: (1) OCO-2 (Orbiting Carbon Observatory) (will launch in Feb. 2013), (2) GCOM-W1 (Global Change Observation Mission), (3) Aqua, (4) CloudSat, (5) CALIPSO (Cloud-Aerosol Lidar & Infrared Pathfinder Satellite Observations), (6) Glory, (7) Aura. The A-Train makes possible synergy of information from multiple resources, so more information about earth condition is obtained from the combined observations than would be possible from the sum of the observations taken independently

  19. Visual Understanding of Light Absorption and Waveguiding in Standing Nanowires with 3D Fluorescence Confocal Microscopy.

    Science.gov (United States)

    Frederiksen, Rune; Tutuncuoglu, Gozde; Matteini, Federico; Martinez, Karen L; Fontcuberta I Morral, Anna; Alarcon-Llado, Esther

    2017-09-20

    Semiconductor nanowires are promising building blocks for next-generation photonics. Indirect proofs of large absorption cross sections have been reported in nanostructures with subwavelength diameters, an effect that is even more prominent in vertically standing nanowires. In this work we provide a three-dimensional map of the light around vertical GaAs nanowires standing on a substrate by using fluorescence confocal microscopy, where the strong long-range disruption of the light path along the nanowire is illustrated. We find that the actual long-distance perturbation is much larger in size than calculated extinction cross sections. While the size of the perturbation remains similar, the intensity of the interaction changes dramatically over the visible spectrum. Numerical simulations allow us to distinguish the effects of scattering and absorption in the nanowire leading to these phenomena. This work provides a visual understanding of light absorption in semiconductor nanowire structures, which is of high interest for solar energy conversion applications.

  20. Method for visualization and presentation of priceless old prints based on precise 3D scan

    Science.gov (United States)

    Bunsch, Eryk; Sitnik, Robert

    2014-02-01

    Graphic prints and manuscripts constitute a main part of the cultural heritage objects created by most of the known civilizations. Their presentation has always been a problem due to their high sensitivity to light and to changes of external conditions (temperature, humidity). Today it is possible to use advanced digitalization techniques for the documentation and visualization of such objects. In situations where presentation of the original heritage object is impossible, there is a need to develop a method allowing documentation and then presentation to the audience of all the aesthetic features of the object. During the course of the project, scans of several pages of one of the most valuable books in the collection of the Museum of Warsaw Archdiocese were performed. The book, known as the "Great Dürer Trilogy", consists of three series of woodcuts by Albrecht Dürer. The measurement system used consists of a custom-designed, structured-light-based, high-resolution measurement head with an automated digitization system mounted on an industrial robot. This device was custom built to meet conservators' requirements, especially the lack of ultraviolet or infrared radiation emission in the direction of the measured object. Documentation of one page from the book requires about 380 directional measurements, which constitute about 3 billion sample points. The distance between the points in the cloud is 20 μm. Measurement with this MSD (measurement sampling density) of 2500 points makes it possible to show the public the spatial structure of this graphic print. An important aspect is the complexity of the software environment created for data processing, in which massive data sets can be automatically processed and visualized. A very important advantage of the software, which works directly on point clouds, is the ability to freely manipulate the virtual light source.

  1. 3D Room Visualization on Android Based Mobile Device (with Philips™’ Surround Sound Music Player

    Directory of Open Access Journals (Sweden)

    Durio Etgar

    2013-01-01

    Full Text Available This project’s specifically purposed as a demo application, so anyone can get the experience of a surround audio room without having to physically involved to it, with a main idea of generating a 3D surround sound room scenery coupled with surround sound in a handier package, namely, a “Virtual Listen Room”. Virtual Listen Room set a foundation of an innovative visualization that later will be developed and released as one of way of portable advertisement. This application was built inside of Android environment. Android device had been chosen as the implementation target, since it leaves massive development spaces and mostly contains essential components needed on this project, including graphic processor unit (GPU. Graphic manipulation can be done using an embedded programming interface called OpenGL ES, which is planted in all Android devices generally. Further, Android has a Accelerometer Sensor that is needed to be coupled with scene to produce a dynamic movement of the camera. Surround sound effect can be reached with a decoder from Phillips called MPEG Surround Sound Decoder. To sum the whole project, we got an application with sensor-dynamic 3D room visualization coupled with Philips’ Surround Sound Music Player. We can manipulate several room’s properties; Subwoofer location, Room light, and how many speakers inside it, the application itself works well despite facing several performance problems before, later to be solved.

  2. Valorisation of urban elements through 3D models generated from image matching point clouds and augmented reality visualization based in mobile platforms

    Science.gov (United States)

    Marques, Luís.; Roca Cladera, Josep; Tenedório, José António

    2017-10-01

    The use of multiple sets of images with a high level of overlap to extract 3D point clouds has increased progressively in recent years. There are two fundamental factors at the origin of this progress. First, image matching algorithms have been optimised and the software that supports these techniques has been constantly developed. Second, the emergent paradigm of smart cities has been promoting the virtualization of urban spaces and their elements. The creation of 3D models of urban elements is extremely relevant for urbanists in order to constitute digital archives of urban elements, being especially useful to enrich maps and databases or to reconstruct and analyse objects/areas through time, building and recreating scenarios and implementing intuitive methods of interaction. These characteristics support, for example, greater public participation, creating a completely collaborative solution system and envisioning processes, simulations and results. This paper is organized around two main topics. The first deals with technical data modelling from terrestrial photographs: planning criteria for obtaining photographs, approving or rejecting photos based on their quality, editing photos, creating masks, aligning photos, generating tie points, extracting point clouds, generating meshes, building textures and exporting results. The application of these procedures results in 3D models for the visualization of urban elements of the city of Barcelona. The second concerns the use of augmented reality through mobile platforms, allowing users to understand the city's origins and their relation to the actual city morphology, (en)visioning solutions, processes and simulations, and making it possible for agents in several domains to ground their decisions (and understand them), achieving a faster and wider consensus.
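
A sketch of the tie-point step of such an image-matching pipeline, assuming OpenCV: detect SIFT features in two overlapping terrestrial photographs and keep the matches that pass Lowe's ratio test; file names and the ratio threshold are illustrative:

```python
# Generate tie points between two overlapping photographs; file names are placeholders.
import cv2

img1 = cv2.imread("facade_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("facade_002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to reject ambiguous correspondences.
matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
tie_points = [(k1[m.queryIdx].pt, k2[m.trainIdx].pt)
              for m, n in matches if m.distance < 0.75 * n.distance]
print(len(tie_points), "tie points")
```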

  3. Recording, Visualization and Documentation of 3D Spatial Data for Monitoring Topography in Areas of Cultural Heritage

    Science.gov (United States)

    Maravelakis, Emmanouel; Konstantaras, Antonios; Axaridou, Anastasia; Chrysakis, Ioannis; Xinogalos, Michalis

    2014-05-01

    … allowing them to interchange their knowledge, findings and observations at different time frames. Results outline the successful application of the above systems in certain Greek areas of important cultural heritage [3,11] where significant efforts are being made for their preservation through time.
    Acknowledgement: The authors wish to thank the General Secretariat for Research and Technology of the Ministry of Education and Religious Affairs, Culture and Sports in Greece for their financial support via the program Cooperation: Partnership of Production and Research Institutions in Small and Medium Scale Projects, Project Title: "3D-SYSTEK - Development of a novel system for 3D Documentation, Promotion and Exploitation of Cultural Heritage Monuments via 3D data acquisition, 3D modeling and metadata recording".
    Keywords: spatial data, land degradation monitoring, 3D modeling and visualization, terrestrial laser scanning, documentation and metadata repository, protection of cultural heritage
    References:
    [1] Shalaby, A., and Tateishi, R.: Remote sensing and GIS for mapping and monitoring land cover and land-use changes in the northwestern coastal zone of Egypt. Applied Geography, 27(1), 28-41, (2007)
    [2] Poesen, J. W. A., and Hooke, J. M.: Erosion, flooding and channel management in Mediterranean environments of southern Europe. Progress in Physical Geography, 21(2), 157-199, (1997)
    [3] Maravelakis, E., Bilalis, N., Mantzorou, I., Konstantaras, A., Antoniadis, A.: 3D modeling of the oldest olive tree of the world. IJCER 2(2), 340-347 (2012)
    [4] Manferdini, A.M., Remondino, F.: Reality-Based 3D Modeling, Segmentation and Web-Based Visualization. In: Ioannides, M., Fellner, D., Georgopoulos, A., Hadjimitsis, D.G. (eds.) EuroMed 2010. LNCS, vol. 6436, pp. 110-124. Springer, Heidelberg (2010)
    [5] Tapete, D., Casagli, N., Luzi, G., Fanti, R., Gigli, G., Leva, D.: Integrating radar and laser-based remote sensing techniques for monitoring structural deformation of archaeological monuments

  4. Acoustic position finding of partial discharges in transformers. Combination of partial discharge measurement technology with 3D visualization; Akustische Ortung von Teilentladungen in Transformatoren. TE-Messtechnik und 3-D-Visualisierung kombiniert

    Energy Technology Data Exchange (ETDEWEB)

    Kraetge, Alexander; Hoek, Stefan [Omicron Electronics GmbH, Klaus (Austria)

    2013-11-01

    A new measuring system facilitates the detection of partial discharges in transformers by means of the fully synchronous combination of measurement technology for electrical partial discharges with intuitive 3D visualization of the test object. The contribution under consideration describes the application of this system with examples from the measurement practice.

  5. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks.

    Science.gov (United States)

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P

    2017-01-07

    The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to satisfy both requirements by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) combined with the CHT algorithm to find the proper colour of the tracked target; the target was attached to the end-effector of a six degree-of-freedom (DOF) robot performing a pick-and-place task. Two cooperating eye-to-hand cameras with image averaging filters are used to obtain clear and steady images. The paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system, named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during object tracking. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot.
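    The core detection step, colour segmentation followed by a Circular Hough Transform on the segmented image, can be sketched with OpenCV as follows. This is a minimal illustration of the general technique rather than the authors' implementation; the colour tolerance, the Hough parameters and the find_target helper are assumptions made for the example.

    import cv2
    import numpy as np

    def find_target(frame_bgr, target_lab, tol=25.0):
        """Return (x, y, r) of the most salient circle near the target colour, or None."""
        # Colour segmentation in CIELAB: keep pixels whose Euclidean distance
        # (a simple Delta E approximation) to the target colour is below a tolerance.
        lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        delta_e = np.linalg.norm(lab - np.asarray(target_lab, dtype=np.float32), axis=2)
        mask = (delta_e < tol).astype(np.uint8) * 255

        # Circular Hough Transform on the blurred, colour-masked grey image.
        grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        grey = cv2.bitwise_and(grey, grey, mask=mask)
        grey = cv2.medianBlur(grey, 5)
        circles = cv2.HoughCircles(grey, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                                   param1=100, param2=30, minRadius=5, maxRadius=80)
        if circles is None:
            return None
        x, y, r = circles[0][0]
        return float(x), float(y), float(r)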

  6. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    Directory of Open Access Journals (Sweden)

    Hamza Alzarok

    2017-01-01

    Full Text Available The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to satisfy both requirements by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) combined with the CHT algorithm to find the proper colour of the tracked target; the target was attached to the end-effector of a six degree-of-freedom (DOF) robot performing a pick-and-place task. Two cooperating eye-to-hand cameras with image averaging filters are used to obtain clear and steady images. The paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system, named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during object tracking. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the

  7. A Real-Time Magnetoencephalography Brain-Computer Interface Using Interactive 3D Visualization and the Hadoop Ecosystem

    Directory of Open Access Journals (Sweden)

    Wilbert A. McClay

    2015-09-01

    Full Text Available Ecumenically, the fastest growing segment of Big Data is human biology-related data, and annual data creation is on the order of zettabytes. The implications are global across industries, of which the treatment of brain-related illnesses and trauma could see the most significant and immediate effects. The next generation of health care IT and sensory devices is acquiring and storing massive amounts of patient-related data. An innovative Brain-Computer Interface (BCI) for interactive 3D visualization is presented, utilizing the Hadoop Ecosystem for data analysis and storage. The BCI is an implementation of Bayesian factor analysis algorithms that can distinguish distinct thought actions using magnetoencephalographic (MEG) brain signals. We have collected data on five subjects, yielding 90% positive performance in MEG mid- and post-movement activity. We describe a driver that substitutes the actions of the BCI for mouse button presses for real-time use in visual simulations. This process has been added to a flight visualization demonstration. By thinking left or right, the user experiences the aircraft turning in the chosen direction. The driver components of the BCI can be compiled into any software and substitute a user's intent for specific keyboard strikes or mouse button presses. The BCI's data analytics of a subject's MEG brainwaves and flight visualization performance are stored and analyzed using the Hadoop Ecosystem as a quick-retrieval data warehouse.

  8. Aorta cross-section calculation and 3D visualization from CT or MRT data using VRML

    Science.gov (United States)

    Grabner, Guenther; Modritsch, Robert; Stiegmaier, Wolfgang; Grasser, Simon; Klinger, Thomas

    2005-04-01

    Quantification of vessel diameters in atherosclerotic or congenital stenosis is very important for the diagnosis of vascular diseases. The aorta extraction and cross-section calculation is a software-based application that offers a three-dimensional, platform-independent, colorized visualization of the extracted aorta with augmented reality information from MRT or CT datasets. The project is based on different types of specialized image processing algorithms, dynamic particle filtering and complex mathematical equations. From the resulting three-dimensional model, minimal cross-sections are calculated: at user-specified distances, the aorta is cut along differently defined directions created through vectors of varying length. The extracted aorta and the derived minimal cross-sections are then rendered with the marching cubes algorithm and presented together in a three-dimensional virtual reality with a very high degree of immersion. The aim of this study was to develop imaging software that gives cardiologists the possibility of (i) providing fast vascular diagnosis, (ii) getting precise diameter information, (iii) performing exact, local stenosis detection, (iv) having permanent data storage and easy access to former datasets, and (v) reliably documenting results in the form of tables and graphical printouts.
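    A minimal sketch of the surface extraction step with the marching cubes algorithm, together with a crude single-slice cross-section estimate; this is an assumed simplification (the paper computes minimal cross-sections along user-defined directions), and the aorta_mask volume, the voxel spacing and the slice index are illustrative assumptions.

    import numpy as np
    from skimage import measure

    def aorta_surface_and_slice_area(aorta_mask, spacing=(1.0, 1.0, 1.0), z_index=100):
        # Surface mesh for 3D rendering (e.g. export to VRML/X3D or display in a viewer).
        verts, faces, normals, values = measure.marching_cubes(
            aorta_mask.astype(np.float32), level=0.5, spacing=spacing)

        # Cross-sectional area of the vessel on one axial slice: count segmented
        # voxels and multiply by the in-plane pixel area.
        pixel_area = spacing[1] * spacing[2]
        slice_area = aorta_mask[z_index].sum() * pixel_area
        return verts, faces, slice_area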

  9. Information Extraction of Tourist Geological Resources Based on 3d Visualization Remote Sensing Image

    Science.gov (United States)

    Wang, X.

    2018-04-01

    Tourism geological resources are of high value for scenic appreciation, scientific research and public education, and need to be protected and rationally utilized. In the past, most remote sensing investigations of tourism geological resources used two-dimensional remote sensing interpretation, which made some geological heritages difficult to interpret and led to the omission of some information. The aim of this paper is to assess the value of a method that uses three-dimensional visual remote sensing images to extract information on geological heritages. The Skyline software system is applied to fuse 0.36-m aerial images and a 5-m interval DEM to establish a digital earth model. Based on three-dimensional shape, colour tone, shadow, texture and other image features, the distribution of tourism geological resources in Shandong Province and the locations of geological heritage sites were obtained, including geological structures, DaiGu landforms, granite landforms, volcanic landforms, sandy landforms, waterscapes, etc. The results show that remote sensing interpretation with this method is highly recognizable, making the interpretation more accurate and comprehensive.

  10. Development and Analysis of New 3D Tactile Materials for the Enhancement of STEM Education for the Blind and Visually Impaired

    Science.gov (United States)

    Gonzales, Ashleigh

    Blind and visually impaired individuals have historically demonstrated low participation in the fields of science, technology, engineering, and mathematics (STEM). This low participation is reflected in both their education and career choices. Despite the establishment of the Americans with Disabilities Act (ADA) and the Individuals with Disabilities Education Act (IDEA), blind and visually impaired (BVI) students continue to fall academically below the level of their sighted peers in the areas of science and math. Although this deficit is created by many factors, this study focuses on the lack of adequate accessible image-based materials. Traditional methods for creating accessible image materials for the vision impaired have included detailed verbal descriptions accompanying an image or conversion into a simplified tactile graphic. It is very common that no substitute materials are provided to students within STEM courses, because these are image-rich disciplines and often include a large number of images, diagrams and charts. Additionally, images that are translated into text or simplified into basic line drawings are frequently inadequate because they rely on the interpretations of resource personnel who do not have expertise in STEM. Within this study, a method to create a new type of tactile 3D image was developed using High Density Polyethylene (HDPE) and Computer Numeric Control (CNC) milling. These tactile image boards preserve high levels of detail when compared to the original print image. To determine the discernibility and effectiveness of tactile images, these customizable boards were tested in various university classrooms as well as in participation studies which included BVI and sighted students. Results from these studies indicate that tactile images are discernable and were found to improve performance in lab exercises as much as 60% for those with visual impairment. Incorporating tactile HDPE 3D images into a classroom setting was shown to

  11. Shifting Sands and Turning Tides: Using 3D Visualization Technology to Shape the Environment for Undergraduate Students

    Science.gov (United States)

    Jenkins, H. S.; Gant, R.; Hopkins, D.

    2014-12-01

    Teaching natural science in a technologically advancing world requires that our methods reach beyond the traditional computer interface. Innovative 3D visualization techniques and real-time augmented user interfaces enable students to create realistic environments and to understand the world around them. Here, we present a series of laboratory activities that utilize an Augmented Reality Sandbox to teach basic concepts of hydrology, geology, and geography to undergraduates at Harvard University and the University of Redlands. The Augmented Reality (AR) Sandbox overlays a real sandbox with a digital projection of topography and a color elevation map. A Microsoft Kinect 3D camera feeds altimetry data into a software program that maps this information onto the sand surface using a digital projector. Students can then manipulate the sand and observe as the Sandbox augments their manipulations with projections of contour lines, an elevation color map, and a simulation of water. The idea for the AR Sandbox was conceived at MIT by the Tangible Media Group in 2002, and the simulation software used here was written and developed by Dr. Oliver Kreylos of the University of California - Davis as part of the NSF-funded LakeViz3D project. Between 2013 and 2014, we installed AR Sandboxes at Harvard and the University of Redlands, respectively, and developed laboratory exercises to teach flooding hazard, erosion and watershed development in undergraduate earth and environmental science courses. In 2013, we introduced a series of AR Sandbox laboratories in Introductory Geology, Hydrology, and Natural Disasters courses. We found that laboratories utilizing the AR Sandbox at both universities allowed students to become quickly immersed in the learning process, enabling a more intuitive understanding of the processes that govern the natural world. The physical interface of the AR Sandbox reduces barriers to learning and can be used to rapidly illustrate basic concepts of geology

  12. In vivo and 3D visualization of coronary artery development by optical coherence tomography - art. no. 662709

    DEFF Research Database (Denmark)

    Thrane, Lars; Norozi, K.; Männer, J.

    2007-01-01

    One of the most critical but poorly understood processes during cardiovascular development is the establishment of a functioning coronary artery (CA) system. Due to the lack of suitable imaging technologies, it is currently impossible to visualize this complex dynamic process on living human … The in vivo images were generated by optical coherence tomography (OCT). The OCT system used in this study is a mobile fiber-based time-domain real-time OCT system operating with a center wavelength of 1330 nm, an A-scan rate of 4 kHz, and a typical frame rate of 8 frames/s. The axial resolution is 17 μm (in tissue), and the lateral resolution is 30 μm. The OCT system is optimized for in vivo chick heart visualization and enables OCT movie recording with 8 frames/s, full-automatic 3D OCT scanning, and blood flow visualization, i.e., Doppler OCT imaging. Using this OCT system, we generated in vivo …

  13. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    Science.gov (United States)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The change of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this progress technically, but the interaction of the third dimension with humans is not yet clear. It has previously been found that any increased load on the visual system can create visual fatigue, as with prolonged TV watching, computer work or video gaming. Watching S3D, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of the third dimension based on the characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content. This work shows the difference in the accumulation of visual fatigue and its assessment for the two types of content. In order to perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and provided subjective evaluations. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  14. 3D Geospatial Models for Visualization and Analysis of Groundwater Contamination at a Nuclear Materials Processing Facility

    Science.gov (United States)

    Stirewalt, G. L.; Shepherd, J. C.

    2003-12-01

    Analysis of hydrostratigraphy and of uranium and nitrate contamination in groundwater at a former nuclear materials processing facility in Oklahoma was undertaken employing 3-dimensional (3D) geospatial modeling software. The models constructed played an important role in the regulatory decision process of the U.S. Nuclear Regulatory Commission (NRC) because they enabled visualization of temporal variations in contaminant concentrations and plume geometry. Three aquifer systems occur at the site, comprising water-bearing fractured shales separated by indurated sandstone aquitards. The uppermost terrace groundwater system (TGWS) aquifer is composed of terrace and alluvial deposits and a basal shale. The shallow groundwater system (SGWS) aquifer is made up of three shale units and two sandstones. It is separated from the overlying TGWS and underlying deep groundwater system (DGWS) aquifer by sandstone aquitards. Spills of nitric acid solutions containing uranium and radioactive decay products around the main processing building (MPB), leakage from storage ponds west of the MPB, and leaching of radioactive materials from discarded equipment and waste containers contaminated both the TGWS and SGWS aquifers during facility operation between 1970 and 1993. Constructing 3D geospatial property models for analysis of groundwater contamination at the site involved use of EarthVision (EV), a 3D geospatial modeling software package developed by Dynamic Graphics, Inc. of Alameda, CA. A viable 3D geohydrologic framework model was initially constructed so that property data could be spatially located relative to subsurface geohydrologic units. The framework model contained three hydrostratigraphic zones, equivalent to the TGWS, SGWS, and DGWS aquifers in which groundwater samples were collected, separated by two sandstone aquitards. Groundwater data collected in the three aquifer systems since 1991 indicated high concentrations of uranium (>10,000 micrograms/liter) and nitrate (> 500 milligrams

  15. 3D visualization and quantification of bone and teeth mineralization for the study of osteo/dentinogenesis in mice models

    Science.gov (United States)

    Marchadier, A.; Vidal, C.; Ordureau, S.; Lédée, R.; Léger, C.; Young, M.; Goldberg, M.

    2011-03-01

    Research on bone and teeth mineralization in animal models is critical for understanding human pathologies. Genetically modified mice represent highly valuable models for the study of osteo/dentinogenesis defects and osteoporosis. Current investigations of mouse dental and skeletal phenotypes use destructive and time-consuming methods such as histology and scanning microscopy. Micro-CT imaging is quicker and provides high-resolution qualitative phenotypic description, but reliable quantification of mineralization processes in mouse bone and teeth is still lacking. We have established novel CT imaging-based software for accurate qualitative and quantitative analysis of mouse mandibular bone and molars. Data were obtained from mandibles of mice lacking the Fibromodulin gene, which is involved in mineralization processes. Mandibles were imaged with a micro-CT originally devoted to industrial applications (Viscom, X8060 NDT). Advanced 3D visualization was performed using the VoxBox software (UsefulProgress) with ray casting algorithms. Comparison between control and defective mouse mandibles was made by applying the same transfer function to each 3D dataset, thus allowing the detection of shape, colour and density discrepancies. The 2D images of transverse slices of the mandible and teeth were similar to, and even more accurate than, those obtained with scanning electron microscopy. Image processing of the molars allowed the 3D reconstruction of the pulp chamber, providing a unique tool for the quantitative evaluation of dentinogenesis. This new method is highly powerful for the study of oro-facial mineralization defects in mouse models, complementary to and even competitive with current histological and scanning microscopy approaches.

  16. Development of 3D Visualization Technology for Medium-and Large-sized Radioactive Metal Wastes from Decommissioning Nuclear Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Lee, A Rim; Park, Chan Hee; Lee, Jung Min; Kim, Rinah; Moon, Joo Hyun [Dongguk Univ., Gyongju (Korea, Republic of)

    2013-10-15

    The most important considerations in decommissioning nuclear facilities and nuclear power plants are to spend less money and to carry out the process safely. In order to decommission nuclear facilities and nuclear power plants more effectively, a database of radioactive waste from the decontamination and decommissioning of nuclear facilities should be constructed. This database is described herein, from the radioactive nuclides to the shapes of facility components, and representative results of the status and analysis are presented. With the increasing number of nuclear facilities reaching the end of their useful life, the demand for decommissioning technologies will continue to grow for years to come. The analysis of medium- and large-sized radioactive metal wastes and the 3D visualization technology for these wastes using 3D scanning are planned to be used for constructing the databases. The databases are expected to be used in the development of basic technologies for decommissioning nuclear facilities.

  17. Fine reservoir structure modeling based upon 3D visualized stratigraphic correlation between horizontal wells: methodology and its application

    Science.gov (United States)

    Chenghua, Ou; Chaochun, Li; Siyuan, Huang; Sheng, James J.; Yuan, Xu

    2017-12-01

    As the platform-based horizontal well production mode has been widely applied in the petroleum industry, building a reliable fine reservoir structure model from horizontal well stratigraphic correlation has become very important. Horizontal wells usually extend between the upper and bottom boundaries of the target formation, with limited penetration points. Using these limited penetration points for well deviation correction means that the formation depth information obtained is not accurate, which makes it hard to build a fine structure model. To solve this problem, a method of fine reservoir structure modeling based on 3D visualized stratigraphic correlation among horizontal wells is proposed. This method increases the accuracy of estimating the depth of the penetration points and can also effectively predict the top and bottom interfaces in the horizontally penetrated section. Moreover, it greatly increases both the number of depth data points available and their accuracy, which achieves the goal of building a reliable fine reservoir structure model from the stratigraphic correlation among horizontal wells. Using this method, four 3D fine structural layer models have been successfully built for a shale gas field developed with the platform-based horizontal well production mode. The shale gas field is located in the east of the Sichuan Basin, China; the successful application of the method has proven its feasibility and reliability.

  18. Morphological image processing operators. Reduction of partial volume effects to improve 3D visualization based on CT data

    International Nuclear Information System (INIS)

    Beier, J.; Bittner, R.C.; Hosten, N.; Troeger, J.; Felix, R.

    1998-01-01

    Aim: The quality of segmentation and three-dimensional reconstruction of anatomical structures in tomographic slices is often impaired by disturbances due to partial volume effects (PVE). The potential for artefact reduction by use of the morphological image processing operators (MO) erosion and dilation is investigated. Results: For all patients under review, the artefacts caused by PVE were significantly reduced by erosion (lung: mean SBR_pre = 1.67, SBR_post = 4.83; brain: SBR_pre = 1.06, SBR_post = 1.29), even with only a small number of iterations. Region dilation was applied to integrate further structures (e.g. at tumor borders) into a configurable neighbourhood for segmentation and quantitative analysis. Conclusions: The MO represent an efficient approach for the reduction of PVE artefacts in 3D-CT reconstructions and allow optimised visualization of individual objects. (orig./AJ)
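    A minimal sketch of the erosion and dilation steps described above, using SciPy rather than the software evaluated in the study; the structuring element and the iteration counts are assumptions made for the example.

    import numpy as np
    from scipy import ndimage

    def erode_then_dilate(segmentation, erode_iter=2, dilate_iter=3):
        """segmentation: boolean 3D array from a CT segmentation step."""
        structure = ndimage.generate_binary_structure(rank=3, connectivity=1)
        # Erosion strips the outermost voxel layers, where PVE-affected voxels sit.
        core = ndimage.binary_erosion(segmentation, structure, iterations=erode_iter)
        # Dilation of the original region defines a configurable neighbourhood that
        # can pull adjacent structures (e.g. tumor borders) into the analysis.
        neighbourhood = ndimage.binary_dilation(segmentation, structure, iterations=dilate_iter)
        return core, neighbourhood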

  19. Visual grading of 2D and 3D functional MRI compared with image-based descriptive measures

    Energy Technology Data Exchange (ETDEWEB)

    Ragnehed, Mattias [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Linkoeping University, Department of Medical and Health Sciences, Division of Radiological Sciences/Radiology, Faculty of Health Sciences, Linkoeping (Sweden); Leinhard, Olof Dahlqvist; Pihlsgaard, Johan; Lundberg, Peter [Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Linkoeping University, Division of Radiological Sciences, Radiation Physics, IMH, Linkoeping (Sweden); Wirell, Staffan [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University Hospital, Department of Radiology, Linkoeping (Sweden); Soekjer, Hannibal; Faegerstam, Patrik [Linkoeping University Hospital, Department of Radiology, Linkoeping (Sweden); Jiang, Bo [Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Smedby, Oerjan; Engstroem, Maria [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden)

    2010-03-15

    A prerequisite for successful clinical use of functional magnetic resonance imaging (fMRI) is the selection of an appropriate imaging sequence. The aim of this study was to compare 2D and 3D fMRI sequences using different image quality assessment methods. Descriptive image measures, such as activation volume and temporal signal-to-noise ratio (TSNR), were compared with results from visual grading characteristics (VGC) analysis of the fMRI results. Significant differences in activation volume and TSNR were not directly reflected by differences in VGC scores. The results suggest that better performance on descriptive image measures is not always an indicator of improved diagnostic quality of the fMRI results. In addition to descriptive image measures, it is important to include measures of diagnostic quality when comparing different fMRI data acquisition methods. (orig.)
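    As a side note on one of the descriptive measures compared above, the temporal signal-to-noise ratio is conventionally computed voxel-wise as the temporal mean divided by the temporal standard deviation. The short sketch below illustrates that standard definition; it is not code from the study, and the 4D array layout and the eps guard are assumptions.

    import numpy as np

    def tsnr_map(bold, eps=1e-6):
        """bold: 4D fMRI array (x, y, z, time); returns a voxel-wise TSNR map."""
        mean_t = bold.mean(axis=-1)
        std_t = bold.std(axis=-1)
        return mean_t / (std_t + eps)  # eps avoids division by zero outside the brain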

  20. Photothermal coherence tomography for 3-D visualization and structural non-destructive imaging of a wood inlay

    Science.gov (United States)

    Tavakolian, Pantea; Sfarra, Stefano; Gargiulo, Gianfranco; Sivagurunathan, Koneshwaran; Mandelis, Andreas

    2018-06-01

    The aim of this research is to investigate the suitability of truncated correlation photothermal coherence tomography (TC-PCT) for the non-destructive imaging of a replica of a real inlay to identify subsurface features that often are invisible areas of vulnerability and damage. Defects of inlays involve glue-rich areas, glue-starved areas, termite attack, insect damage, and laminar splitting. These defects have the potential to result in extensive damage to the art design layers of inlays. Therefore, there is a need for an imaging technique to visualize and determine the location of defects within the sample. The recently introduced TC-PCT modality proved capable of providing 3-D images of specimens with high axial resolution, deep subsurface depth profiling capability, and high signal-to-noise ratio (SNR). Therefore, in this study the authors used TC-PCT to image a fabricated inlay sample with various natural and artificial defects in the middle and top layers. The inlay in question reproduces to scale a piece of art preserved in the "Mirror room" of the Castle Laffitte in France. It was built by a professional restorer following the ancient procedure named element by element. Planar TC-PCT images of the inlay were stacked coherently to provide 3-D visualization of areas with known defects in the sample. The experimental results demonstrated the identification of defects such as empty holes, a hole filled with stucco, subsurface delaminations and natural features such as a wood knot and wood grain in different layers of the sample. For this wooden sample that has a very low thermal diffusivity, a depth range of 2 mm was achieved.

  1. Visualization tool for three-dimensional plasma velocity distributions (ISEE_3D) as a plug-in for SPEDAS

    Science.gov (United States)

    Keika, Kunihiro; Miyoshi, Yoshizumi; Machida, Shinobu; Ieda, Akimasa; Seki, Kanako; Hori, Tomoaki; Miyashita, Yukinaga; Shoji, Masafumi; Shinohara, Iku; Angelopoulos, Vassilis; Lewis, Jim W.; Flores, Aaron

    2017-12-01

    This paper introduces ISEE_3D, an interactive visualization tool for three-dimensional plasma velocity distribution functions developed by the Institute for Space-Earth Environmental Research, Nagoya University, Japan. The tool provides a variety of methods to visualize the distribution function of space plasma: scatter, volume, and isosurface modes. The tool also has a wide range of functions, such as displaying magnetic field vectors and two-dimensional slices of distributions, to facilitate extensive analysis. Coordinate transformation to magnetic field coordinates is also implemented in the tool. The source code of the tool is written as scripts in Interactive Data Language, a data analysis software language widely used in the fields of space physics and solar physics. The current version of the tool can be used with data files of the plasma distribution function from the Geotail satellite mission, which are publicly accessible through the Data Archives and Transmission System of the Institute of Space and Astronautical Science (ISAS)/Japan Aerospace Exploration Agency (JAXA). The tool is also available in the Space Physics Environment Data Analysis Software to visualize plasma data from the Magnetospheric Multiscale and the Time History of Events and Macroscale Interactions during Substorms missions. The tool is planned to be applied to data from other missions, such as Arase (ERG) and the Van Allen Probes, after replacing or adding data-loading plug-ins. This visualization tool helps scientists better understand the dynamics of space plasma, particularly in regions where the magnetohydrodynamic approximation is not valid, for example, the Earth's inner magnetosphere, magnetopause, bow shock, and plasma sheet.
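    As a purely illustrative aside (ISEE_3D itself is written in IDL, and the sketch below is neither its code nor its API), a "scatter mode" rendering of a velocity distribution function can be mimicked in a few lines of Python; the synthetic Maxwellian parameters are assumptions.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    v = rng.normal(scale=400.0, size=(5000, 3))          # km/s, synthetic Maxwellian samples
    f = np.exp(-np.sum(v**2, axis=1) / (2 * 400.0**2))   # relative phase-space density

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    sc = ax.scatter(v[:, 0], v[:, 1], v[:, 2], c=np.log10(f + 1e-12), s=2)
    ax.set_xlabel("Vx [km/s]"); ax.set_ylabel("Vy [km/s]"); ax.set_zlabel("Vz [km/s]")
    fig.colorbar(sc, label="log10 f (arb.)")
    plt.show()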

  2. A zero-footprint 3D visualization system utilizing mobile display technology for timely evaluation of stroke patients

    Science.gov (United States)

    Park, Young Woo; Guo, Bing; Mogensen, Monique; Wang, Kevin; Law, Meng; Liu, Brent

    2010-03-01

    When a patient is admitted to the emergency room with suspected stroke, time is of the utmost importance. The infarcted brain area suffers irreparable damage as soon as three hours after the onset of stroke symptoms. A CT scan is one of the standard first-line imaging investigations and is crucial to identify and properly triage stroke cases. The limited availability of an expert radiologist in the emergency environment to diagnose the stroke patient in a timely manner only adds to the challenges within the clinical workflow. Therefore, a truly zero-footprint web-based system with powerful advanced visualization tools for volumetric imaging, including 2D, MIP/MPR and 3D display, can greatly facilitate this dynamic clinical workflow for stroke patients. Together with mobile technology, the proper visualization tools can be delivered at the point of decision, anywhere and anytime. We present a small pilot project to evaluate the use of mobile technologies, such as iPhones, in evaluating stroke patients. The results of the evaluation, as well as any challenges in setting up the system, are also discussed.

  3. Internal structures of scaffold-free 3D cell cultures visualized by synchrotron radiation-based micro-computed tomography

    Science.gov (United States)

    Saldamli, Belma; Herzen, Julia; Beckmann, Felix; Tübel, Jutta; Schauwecker, Johannes; Burgkart, Rainer; Jürgens, Philipp; Zeilhofer, Hans-Florian; Sader, Robert; Müller, Bert

    2008-08-01

    Recently the importance of the third dimension in cell biology has become better understood, resulting in a re-orientation towards three-dimensional (3D) cultivation. Yet adequate tools for the morphological characterization of such cultures still have to be established. Synchrotron radiation-based micro computed tomography (SRμCT) allows such biological systems to be visualized non-destructively with almost isotropic micrometer resolution. We have applied SRμCT to study the internal morphology of human osteoblast-derived, scaffold-free 3D cultures, termed histoids. Primary human osteoblasts, isolated from femoral neck spongy bone, were grown as a 2D culture in non-mineralizing osteogenic medium until a rather thick, multi-cellular membrane was formed. This delicate system was intentionally released to fold itself randomly. The folded cell cultures were grown into histoids of cubic millimeter to centimeter size in various combinations of mineralizing and non-mineralizing osteogenic medium for a total period of at least 56 weeks. The SRμCT measurements were performed in absorption contrast mode at beamlines BW 2 and W 2 (HASYLAB at DESY, Hamburg, Germany), operated by the GKSS Research Center. To investigate the entire volume of interest, several scans were performed under identical conditions and registered to obtain a single dataset for each sample. The histoids grown under different conditions exhibit a similar external morphology of globular or ovoid shape. The SRμCT examination revealed distinctly different morphological structures inside the histoids. One obtains details of the histoids that make it possible to identify and select the most promising slices for subsequent histological characterization.

  4. Selectivity in Postencoding Connectivity with High-Level Visual Cortex Is Associated with Reward-Motivated Memory.

    Science.gov (United States)

    Murty, Vishnu P; Tompary, Alexa; Adcock, R Alison; Davachi, Lila

    2017-01-18

    Reward motivation has been demonstrated to enhance declarative memory by facilitating systems-level consolidation. Although high-reward information is often intermixed with lower-reward information during an experience, memory for high-value information is prioritized. How is this selectivity achieved? One possibility is that postencoding consolidation processes bias memory strengthening toward those representations associated with higher reward. To test this hypothesis, we investigated the influence of differential reward motivation on the selectivity of postencoding markers of systems-level memory consolidation. Human participants encoded intermixed, trial-unique memoranda that were associated with either high or low value during fMRI acquisition. Encoding was interleaved with periods of rest, allowing us to investigate experience-dependent changes in connectivity as they related to later memory. Behaviorally, we found that reward motivation enhanced 24 h associative memory. Analysis of patterns of postencoding connectivity showed that, even though learning trials were intermixed, there was significantly greater connectivity with regions of high-level, category-selective visual cortex associated with high-reward trials. Specifically, increased connectivity of category-selective visual cortex with both the VTA and the anterior hippocampus predicted associative memory for high- but not low-reward memories. Critically, these results were independent of encoding-related connectivity and univariate activity measures. Thus, these findings support a model by which the selective stabilization of memories for salient events is supported by postencoding interactions with sensory cortex associated with reward. Reward motivation is thought to promote memory by supporting memory consolidation. Yet, little is known as to how the brain selects relevant information for subsequent consolidation based on reward. We show that experience-dependent changes in connectivity of both the

  5. Velocity-dependent changes of rotational axes in the non-visual control of unconstrained 3D arm motions.

    Science.gov (United States)

    Isableu, B; Rezzoug, N; Mallet, G; Bernardin, D; Gorce, P; Pagano, C C

    2009-12-29

    We examined the roles of the inertial (e3), shoulder-centre of mass (SH-CM) and shoulder-elbow articular (SH-EL) rotation axes in the non-visual control of unconstrained 3D arm rotations. Subjects rotated the arm in elbow configurations that yielded either a constant or a variable separation between these axes. We hypothesized that increasing the motion frequency and the task complexity would cause the limb's rotational axis to correspond to e3, in order to minimize rotational resistance. Results showed two velocity-dependent profiles wherein the rotation axis coincided with the SH-EL axis at the S and I velocities and then, at the F velocity, shifted either to a SH-CM/e3 trade-off axis in one profile or to no preferential axis in the other. A third profile was velocity-independent, with the SH-CM/e3 trade-off axis being adopted. Our results are the first to provide evidence that the rotational axis of a multi-articulated limb may change from a geometrical axis of rotation to a mass- or inertia-based axis as motion frequency increases. These findings are discussed within the framework of the minimum inertia tensor (MIT) model, which shows that rotations about e3 reduce the amount of joint muscle torque that must be produced, by employing the interaction torque to assist movement.

  6. GIS based 3D visualization of subsurface and surface lineaments / faults and their geological significance, Northern Tamil Nadu, India

    Science.gov (United States)

    Saravanavel, J.; Ramasamy, S. M.

    2014-11-01

    The study area falls in the southern part of the Indian Peninsula and comprises hard crystalline rocks of the Archaeozoic and Proterozoic Eras. In the present study, GIS-based 3D visualizations of gravity, magnetic, resistivity and topographic datasets were made, and from these the basement lineaments, shallow subsurface lineaments and surface lineaments/faults were interpreted. These lineaments were classified as category-1, i.e. exclusively surface lineaments; category-2, i.e. surface lineaments having connectivity with shallow subsurface lineaments; and category-3, i.e. surface lineaments having connectivity with shallow subsurface lineaments and basement lineaments. The three classes of lineaments were analyzed in conjunction with known mineral occurrences and the historical seismicity of the study area in a GIS environment. The study revealed that the category-3 NNE-SSW to NE-SW lineaments have greater control over the mineral occurrences, and that the N-S, NNE-SSW and NE-SW faults/lineaments control the seismicity in the study area.

  7. Visualizing Sungai Batu Ancient River, Lembah Bujang Archeology Site, Kedah – Malaysia using 3-D Resistivity Imaging

    Science.gov (United States)

    Yusoh, R.; Saad, R.; Saidin, M.; Muhammad, S. B.; Anda, S. T.; Ashraf, M. A. M.; Hazreek, Z. A. M.

    2018-04-01

    Sungai Batu at Lembah Bujang has become a spot of interest for archaeologists since it was discovered to be the earliest entrepôt in the history of Malaysia. It is believed that there was a large lost river near the ancient jetty remains. The ground resistivity method was implemented over a large coverage area to locate the direction of the ancient river. Eleven ground resistivity survey lines were carried out using SAS4000 equipment, and the Wenner-Schlumberger array was applied for the measurements. The ground resistivity method was used to detect the alluvial deposits left by the ancient river. The resistivity data were produced as 2D images and presented as 3D contour maps for various selected depths using Rockworks 15 and Surfer 8 software to visualize the alluvial deposit area. The survey found a sedimentary formation, indicated by low resistivity values (0–330 ohm.m), near the existing river. However, the width of the alluvial deposition was 1400 m, which is too wide for a river channel unless it represents deposition accumulated over the ages by the movement of river meanders. It is concluded that the river kept the same general direction, and that its course shifted to the east due to sediment deposition.

  8. 3D visualization reduces operating time when compared to high-definition 2D in laparoscopic liver resection: a case-matched study.

    Science.gov (United States)

    Velayutham, Vimalraj; Fuks, David; Nomi, Takeo; Kawaguchi, Yoshikuni; Gayet, Brice

    2016-01-01

    To evaluate the effect of three-dimensional (3D) visualization on operative performance during elective laparoscopic liver resection (LLR). Major limitations of conventional laparoscopy are lack of depth perception and tactile feedback. Introduction of robotic technology, which employs 3D imaging, has removed only one of these technical obstacles. Despite the significant advantages claimed, 3D systems have not been widely accepted. In this single institutional study, 20 patients undergoing LLR by high-definition 3D laparoscope between April 2014 and August 2014 were matched to a retrospective control group of patients who underwent LLR by two-dimensional (2D) laparoscope. The number of patients who underwent major liver resection was 5 (25%) in the 3D group and 10 (25%) in the 2D group. There was no significant difference in contralateral wedge resection or combined resections between the 3D and 2D groups. There was no difference in the proportion of patients undergoing previous abdominal surgery (70 vs. 77%, p = 0.523) or previous hepatectomy (20 vs. 27.5%, p = 0.75). The operative time was significantly shorter in the 3D group when compared to 2D (225 ± 109 vs. 284 ± 71 min, p = 0.03). There was no significant difference in blood loss in the 3D group when compared to 2D group (204 ± 226 in 3D vs. 252 ± 349 ml in 2D group, p = 0.291). The major complication rates were similar, 5% (1/20) and 7.5% (3/40), respectively, (p ≥ 0.99). 3D visualization may reduce the operating time compared to high-definition 2D. Further large studies, preferably prospective randomized control trials are required to confirm this.

  9. RECONSTRUCTION, QUANTIFICATION, AND VISUALIZATION OF FOREST CANOPY BASED ON 3D TRIANGULATIONS OF AIRBORNE LASER SCANNING POINT DATA

    Directory of Open Access Journals (Sweden)

    J. Vauhkonen

    2015-03-01

    Full Text Available Reconstruction of three-dimensional (3D) forest canopy is described and quantified using airborne laser scanning (ALS) data with densities of 0.6–0.8 points m-2 and field measurements aggregated at resolutions of 400–900 m2. The reconstruction was based on computational geometry, topological connectivity, and numerical optimization. More precisely, triangulations and their filtrations, i.e. ordered sets of simplices belonging to the triangulations, based on the point data were analyzed. Triangulating the ALS point data corresponds to subdividing the underlying space of the points into weighted simplicial complexes with weights quantifying the (empty) space delimited by the points. Reconstructing the canopy volume populated by biomass will thus likely require filtering to exclude that volume from canopy voids. The approaches applied for this purpose were (i) to optimize the degree of filtration with respect to the field measurements, and (ii) to predict this degree by means of analyzing the persistent homology of the obtained triangulations, which is applied for the first time for vegetation point clouds. When derived from optimized filtrations, the total tetrahedral volume had a high degree of determination (R2) with the stem volume considered, both alone (R2 = 0.65) and together with other predictors (R2 = 0.78). When derived by analyzing the topological persistence of the point data and without any field input, the R2 were lower, but the predictions still showed a correlation with the field-measured stem volumes. Finally, producing realistic visualizations of a forested landscape using the persistent homology approach is demonstrated.
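    A heavily simplified sketch of the idea of triangulating a point cloud and summing a filtered tetrahedral volume is given below; it is not the authors' method (which optimizes the filtration or derives it from persistent homology), and the edge-length criterion and the max_edge threshold are assumptions made for the example.

    import numpy as np
    from scipy.spatial import Delaunay

    def filtered_canopy_volume(points, max_edge=2.0):
        """points: (n, 3) array of ALS returns; returns the retained tetrahedral volume."""
        tri = Delaunay(points)
        total = 0.0
        for simplex in tri.simplices:              # each simplex is 4 point indices
            p = points[simplex]
            edges = [np.linalg.norm(p[i] - p[j]) for i in range(4) for j in range(i + 1, 4)]
            if max(edges) > max_edge:              # filtration: drop large, void-spanning tetrahedra
                continue
            # Tetrahedron volume = |det(v1 - v0, v2 - v0, v3 - v0)| / 6
            total += abs(np.linalg.det(p[1:] - p[0])) / 6.0
        return total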

  10. 3D visualization and finite element mesh formation from wood anatomy samples, Part I – Theoretical approach

    Directory of Open Access Journals (Sweden)

    Petr Koňas

    2009-01-01

    Full Text Available The work summarizes the algorithms created for the formation of a finite element (FE) mesh derived from a bitmap pattern. The process of registration, segmentation and meshing is described in detail. The C++ STL library from the Insight Toolkit (ITK) project, together with the Visualization Toolkit (VTK), was used for the basic processing of images. Several methods for appropriate mesh output are discussed. A multiplatform application, WOOD3D, was assembled for the task under the GNU GPL license. Several methods of segmentation and, mainly, different ways of contouring were included. Tetrahedral and rectilinear types of mesh were programmed. Improvement of mesh quality in some simple ways is mentioned. Testing and verification of the final program on wood anatomy samples of spruce and walnut were carried out. Methods of preparing microscopic anatomy samples are depicted. Final utilization of the formed mesh in a simple structural analysis was performed. The article discusses the main problems in image analysis due to incompatible colour spaces, sample preparation, thresholding and the final conversion into a finite element mesh. Assembling the mentioned tasks together and evaluating the application are the main original results of the presented work. In the presented program, two thresholding filters from ITK were used: an Otsu-based filter and a binary filter. The most problematic task was the production of wood anatomy samples under consistent lighting conditions with minimal or zero colour space shift, and the subsequent appropriate definition of thresholds (corresponding thresholding parameters) and connected methods (prefiltering + registration), which influence the continuity and, mainly, the separation of the wood anatomy structure. A solution based on staining the samples is suggested, followed by a quick image analysis. A further original result of the work is a complex, fully automated application which offers three types of finite element mesh
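    A minimal sketch of the Otsu thresholding and contouring steps mentioned above, using scikit-image rather than the ITK/VTK stack of WOOD3D; the file name and the smoothing sigma are assumptions made for the example.

    import numpy as np
    from skimage import filters, io, measure

    image = io.imread("wood_section.png", as_gray=True)          # hypothetical input micrograph
    smoothed = filters.gaussian(image, sigma=1.0)                # pre-filtering step
    threshold = filters.threshold_otsu(smoothed)                 # Otsu's global threshold
    binary = smoothed > threshold                                # cell walls vs. lumina
    contours = measure.find_contours(binary.astype(float), 0.5)  # boundaries that could seed a FE mesh
    print(f"Otsu threshold: {threshold:.3f}, contours found: {len(contours)}")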

  11. CAIPIRINHA accelerated SPACE enables 10-min isotropic 3D TSE MRI of the ankle for optimized visualization of curved and oblique ligaments and tendons

    Energy Technology Data Exchange (ETDEWEB)

    Kalia, Vivek [University of Vermont Medical Center, Department of Radiology, Burlington, VT (United States); Johns Hopkins University School of Medicine, Russell H. Morgan Department of Radiology and Radiological Science, Section of Musculoskeletal Radiology, Baltimore, MD (United States); Fritz, Benjamin [University Medical Center Freiburg, Department of Radiology, Freiburg im Breisgau (Germany); Johnson, Rory [Siemens Healthcare USA, Inc, Cary, NC (United States); Gilson, Wesley D. [Siemens Healthcare USA, Inc, Baltimore, MD (United States); Raithel, Esther [Siemens Healthcare GmbH, Erlangen (Germany); Fritz, Jan [Johns Hopkins University School of Medicine, Russell H. Morgan Department of Radiology and Radiological Science, Section of Musculoskeletal Radiology, Baltimore, MD (United States)

    2017-09-15

    To test the hypothesis that a fourfold CAIPIRINHA accelerated, 10-min, high-resolution, isotropic 3D TSE MRI prototype protocol of the ankle derives equal or better quality than a 20-min 2D TSE standard protocol. Following internal review board approval and informed consent, 3-Tesla MRI of the ankle was obtained in 24 asymptomatic subjects including 10-min 3D CAIPIRINHA SPACE TSE prototype and 20-min 2D TSE standard protocols. Outcome variables included image quality and visibility of anatomical structures using 5-point Likert scales. Non-parametric statistical testing was used. P values ≤0.001 were considered significant. Edge sharpness, contrast resolution, uniformity, noise, fat suppression and magic angle effects were without statistical difference on 2D and 3D TSE images (p > 0.035). Fluid was mildly brighter on intermediate-weighted 2D images (p < 0.001), whereas 3D images had substantially less partial volume, chemical shift and no pulsatile-flow artifacts (p < 0.001). Oblique and curved planar 3D images resulted in mildly-to-substantially improved visualization of joints, spring, bifurcate, syndesmotic, collateral and sinus tarsi ligaments, and tendons (p < 0.001, respectively). 3D TSE MRI with CAIPIRINHA acceleration enables high-spatial resolution oblique and curved planar MRI of the ankle and visualization of ligaments, tendons and joints equally well or better than a more time-consuming anisotropic 2D TSE MRI. (orig.)

  12. CAIPIRINHA accelerated SPACE enables 10-min isotropic 3D TSE MRI of the ankle for optimized visualization of curved and oblique ligaments and tendons

    International Nuclear Information System (INIS)

    Kalia, Vivek; Fritz, Benjamin; Johnson, Rory; Gilson, Wesley D.; Raithel, Esther; Fritz, Jan

    2017-01-01

    To test the hypothesis that a fourfold CAIPIRINHA accelerated, 10-min, high-resolution, isotropic 3D TSE MRI prototype protocol of the ankle derives equal or better quality than a 20-min 2D TSE standard protocol. Following internal review board approval and informed consent, 3-Tesla MRI of the ankle was obtained in 24 asymptomatic subjects including 10-min 3D CAIPIRINHA SPACE TSE prototype and 20-min 2D TSE standard protocols. Outcome variables included image quality and visibility of anatomical structures using 5-point Likert scales. Non-parametric statistical testing was used. P values ≤0.001 were considered significant. Edge sharpness, contrast resolution, uniformity, noise, fat suppression and magic angle effects were without statistical difference on 2D and 3D TSE images (p > 0.035). Fluid was mildly brighter on intermediate-weighted 2D images (p < 0.001), whereas 3D images had substantially less partial volume, chemical shift and no pulsatile-flow artifacts (p < 0.001). Oblique and curved planar 3D images resulted in mildly-to-substantially improved visualization of joints, spring, bifurcate, syndesmotic, collateral and sinus tarsi ligaments, and tendons (p < 0.001, respectively). 3D TSE MRI with CAIPIRINHA acceleration enables high-spatial resolution oblique and curved planar MRI of the ankle and visualization of ligaments, tendons and joints equally well or better than a more time-consuming anisotropic 2D TSE MRI. (orig.)

  13. CAIPIRINHA accelerated SPACE enables 10-min isotropic 3D TSE MRI of the ankle for optimized visualization of curved and oblique ligaments and tendons.

    Science.gov (United States)

    Kalia, Vivek; Fritz, Benjamin; Johnson, Rory; Gilson, Wesley D; Raithel, Esther; Fritz, Jan

    2017-09-01

    To test the hypothesis that a fourfold CAIPIRINHA accelerated, 10-min, high-resolution, isotropic 3D TSE MRI prototype protocol of the ankle derives equal or better quality than a 20-min 2D TSE standard protocol. Following internal review board approval and informed consent, 3-Tesla MRI of the ankle was obtained in 24 asymptomatic subjects including 10-min 3D CAIPIRINHA SPACE TSE prototype and 20-min 2D TSE standard protocols. Outcome variables included image quality and visibility of anatomical structures using 5-point Likert scales. Non-parametric statistical testing was used. P values ≤0.001 were considered significant. Edge sharpness, contrast resolution, uniformity, noise, fat suppression and magic angle effects were without statistical difference on 2D and 3D TSE images (p > 0.035). Fluid was mildly brighter on intermediate-weighted 2D images (p < 0.001), whereas 3D images had substantially less partial volume, chemical shift and no pulsatile-flow artifacts (p < 0.001). Oblique and curved planar 3D images resulted in mildly-to-substantially improved visualization of joints, spring, bifurcate, syndesmotic, collateral and sinus tarsi ligaments, and tendons (p < 0.001, respectively). 3D TSE MRI with CAIPIRINHA acceleration enables high-spatial resolution oblique and curved planar MRI of the ankle and visualization of ligaments, tendons and joints equally well or better than a more time-consuming anisotropic 2D TSE MRI. • High-resolution 3D TSE MRI improves visualization of ankle structures. • Limitations of current 3D TSE MRI include long scan times. • 3D CAIPIRINHA SPACE allows now a fourfold-accelerated data acquisition. • 3D CAIPIRINHA SPACE enables high-spatial-resolution ankle MRI within 10 min. • 10-min 3D CAIPIRINHA SPACE produces equal-or-better quality than 20-min 2D TSE.

  14. Audio-visual perception of 3D cinematography: an fMRI study using condition-based and computation-based analyses.

    Directory of Open Access Journals (Sweden)

    Akitoshi Ogawa

    Full Text Available The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard "condition-based" designs, as well as "computational" methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli.

  15. Audio-visual perception of 3D cinematography: an fMRI study using condition-based and computation-based analyses.

    Science.gov (United States)

    Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano

    2013-01-01

    The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard "condition-based" designs, as well as "computational" methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli.
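
    The computation-based analyses described above amount to regressing each voxel's time course on time-varying stimulus features such as disparity. The following sketch illustrates that idea in its simplest form with an ordinary least-squares fit and a t-map for the feature regressor; the data, regressors and dimensions are random stand-ins, not the study's data or its actual fMRI pipeline.

```python
# Minimal sketch of a "computation-based" analysis: regress each voxel's
# time course on a time-varying stimulus feature (e.g., mean disparity per
# volume). All data here are random stand-ins, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n_vols, n_voxels = 200, 500
disparity = rng.standard_normal(n_vols)          # feature regressor (assumed)
confound = np.linspace(-1, 1, n_vols)            # e.g., a linear drift term
bold = rng.standard_normal((n_vols, n_voxels))   # voxel time courses

# Design matrix: intercept, drift, feature of interest.
X = np.column_stack([np.ones(n_vols), confound, disparity])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)  # shape (3, n_voxels)

# t-statistic for the disparity regressor at each voxel.
resid = bold - X @ beta
dof = n_vols - X.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof
c = np.array([0.0, 0.0, 1.0])                    # contrast on disparity
var_c = sigma2 * (c @ np.linalg.inv(X.T @ X) @ c)
t_map = (c @ beta) / np.sqrt(var_c)
print(t_map.shape, float(t_map.max()))
```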

  16. 3D Isotropic MR Culprit Plaque Visualization of Carotid Plaque Edema and Hemorrhage with Motion Sensitized Blood Suppression

    DEFF Research Database (Denmark)

    Søvsø Szocska Hansen, Esben; Pedersen, Steen Fjord; Bloch, Lars Ø.

    2014-01-01

    hemorrhage and plaque edema may represent advanced stages of atherosclerosis[1, 2]. In this study, we present a novel multi-contrast 3D motion sensitized black-blood CMR imaging sequence, which detects both plaque edema and hemorrhage with positive contrast. Subjects and Methods The 3D imaging sequence...... to lumen was 39.74±6.75. Discussion/Conclusion In conclusion, the proposed 3D isotropic multi-contrast CMR technique detects plaque edema and hemorrhage with positive contrast and excellent black-blood contrast, which may facilitate evaluation of carotid atherosclerosis. Ongoing studies will include CMR...

  17. 3D visual analysis tool in support of the SANDF's growing ground based air defence simulation capability

    CSIR Research Space (South Africa)

    Duvenhage, B

    2007-10-01

    Full Text Available and live field exercises. The 3D visualisation resulted in improved situational awareness during experiment analysis, in increased involvement of the SANDF in experiment analysis and in improved credibility of analysis results presented during live or after...

  18. Poster: Observing change in crowded data sets in 3D space - Visualizing gene expression in human tissues

    KAUST Repository

    Rogowski, Marcin; Cannistraci, Carlo; Alanis Lobato, Gregorio; Weber, Philip P.; Ravasi, Timothy; Schulze, Jürgen P.; Acevedo-Feliz, Daniel

    2013-01-01

    as opposed to force-directed layouts encountered most often in similar problems. We discuss the methods we devised to make observing change more convenient in a 3D virtual reality environment. © 2013 IEEE.

  19. Presurgical visualization of the neurovascular relationship in trigeminal neuralgia with 3D modeling using free Slicer software.

    Science.gov (United States)

    Han, Kai-Wei; Zhang, Dan-Feng; Chen, Ji-Gang; Hou, Li-Jun

    2016-11-01

    To explore whether segmentation and 3D modeling are more accurate in the preoperative detection of the neurovascular relationship (NVR) in patients with trigeminal neuralgia (TN) compared to MRI fast imaging employing steady-state acquisition (FIESTA). Segmentation and 3D modeling using 3D Slicer were conducted for 40 patients undergoing MRI FIESTA and microsurgical vascular decompression (MVD). The NVR, as well as the offending vessel determined by MRI FIESTA and 3D Slicer, was reviewed and compared with intraoperative manifestations using SPSS. The κ agreement between the MRI FIESTA and operation in determining the NVR was 0.232 and that between the 3D modeling and operation was 0.6333. There was no significant difference between these two procedures (χ² = 8.09, P = 0.088). The κ agreement between the MRI FIESTA and operation in determining the offending vessel was 0.373, and that between the 3D modeling and operation was 0.922. There was a significant difference between the two (χ² = 82.01, P = 0.000). The sensitivity and specificity for MRI FIESTA in determining the NVR were 87.2 % and 100 %, respectively, and for 3D modeling were both 100 %. The segmentation and 3D modeling were more accurate than MRI FIESTA in preoperative verification of the NVR and offending vessel. This was consistent with surgical manifestations and was more helpful for the preoperative decision and surgical plan.
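
    The agreement figures quoted above are Cohen's kappa values alongside sensitivity and specificity for a binary neurovascular-contact call. The sketch below shows how such statistics are computed; the rating lists are invented examples, not the study data.

```python
# Hedged sketch: Cohen's kappa plus sensitivity/specificity for a binary
# "contact vs. no contact" call, as in the agreement figures quoted above.
# The rating lists below are made-up examples, not the study data.
import numpy as np

def cohen_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    n = len(a)
    # Confusion matrix between the two raters.
    cm = np.zeros((len(labels), len(labels)))
    for i, la in enumerate(labels):
        for j, lb in enumerate(labels):
            cm[i, j] = np.sum((a == la) & (b == lb))
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(1) @ cm.sum(0)) / n**2         # chance agreement
    return (po - pe) / (1 - pe)

def sens_spec(pred, truth):
    pred, truth = np.asarray(pred), np.asarray(truth)
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    return tp / (tp + fn), tn / (tn + fp)

truth = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]          # intraoperative finding
model = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]          # 3D-model call
mri   = [1, 0, 1, 0, 1, 0, 1, 1, 0, 0]          # FIESTA call
print(cohen_kappa(model, truth), sens_spec(model, truth))
print(cohen_kappa(mri, truth), sens_spec(mri, truth))
```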

  20. Delaunay algorithm and principal component analysis for 3D visualization of mitochondrial DNA nucleoids by Biplane FPALM/dSTORM

    Czech Academy of Sciences Publication Activity Database

    Alán, Lukáš; Špaček, Tomáš; Ježek, Petr

    2016-01-01

    Roč. 45, č. 5 (2016), s. 443-461 ISSN 0175-7571 R&D Projects: GA ČR(CZ) GA13-02033S; GA MŠk(CZ) ED1.1.00/02.0109 Institutional support: RVO:67985823 Keywords : 3D object segmentation * Delaunay algorithm * principal component analysis * 3D super-resolution microscopy * nucleoids * mitochondrial DNA replication Subject RIV: BO - Biophysics Impact factor: 1.472, year: 2016

  1. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Neltje E. Piro

    2016-06-01

    Full Text Available Remote monitoring of Parkinson’s Disease (PD) patients with inertia sensors is a relevant method for a better assessment of symptoms. We present a new approach for symptom quantification based on motion data: the automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar giving the neurologist the impression of having the patient live in front of him. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recording of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated by UPDRS. The mean agreement between the ratings based on video and (b) the automatically classified UPDRS is 0.48 and with (c) the 3D avatar it is 0.47. The 3D avatar is similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team.

  2. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning.

    Science.gov (United States)

    Gee, Carole T

    2013-11-01

    As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction.

  3. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning1

    Science.gov (United States)

    Gee, Carole T.

    2013-01-01

    • Premise of the study: As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • Methods: MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • Results: If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • Conclusions: This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction. PMID:25202495

  4. Exploring the Impact of Visual Complexity Levels in 3d City Models on the Accuracy of Individuals' Orientation and Cognitive Maps

    Science.gov (United States)

    Rautenbach, V.; Çöltekin, A.; Coetzee, S.

    2015-08-01

    In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of complexity (mainly based on levels of detail, i.e., LoD) in 3D city models, specifically on the participants' orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory where participants (among other things) were given orientation tasks, and in one case also produced sketches of a path they 'travelled' in a virtual environment. The experiments were conducted in groups, where individuals provided responses on an answer sheet. The preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed based on a virtual experience. Building an accurate cognitive map is an inherently desired effect of the visualizations in planning tasks, thus the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development planning (urban planning), one of the domains where 3D city models are most relevant.

  5. FluoRender: An application of 2D image space methods for 3D and 4D confocal microscopy data visualization in neurobiology research

    KAUST Repository

    Wan, Yong; Otsuna, Hideo; Chien, Chi-Bin; Hansen, Charles

    2012-01-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data. © 2012 IEEE.

  6. FluoRender: An application of 2D image space methods for 3D and 4D confocal microscopy data visualization in neurobiology research

    KAUST Repository

    Wan, Yong

    2012-02-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data. © 2012 IEEE.

  7. A possible concept for an interactive 3D visualization system for training and planning of liver surgery

    DEFF Research Database (Denmark)

    Bro-Nielsen, Morten; Darvann, T.; Damgaard, K.

    1996-01-01

    A demonstration of a fully interactive (20 frames per second) 3D graphics display of the blood vessels supporting the biliary tree and bile duct, automatically segmented from CT data, is given. Emphasis is on speed of interaction, modularity and programmer friendliness of graphics programming...

  8. Visualization of Buffer Capacity with 3-D "Topo" Surfaces: Buffer Ridges, Equivalence Point Canyons and Dilution Ramps

    Science.gov (United States)

    Smith, Garon C.; Hossain, Md Mainul

    2016-01-01

    BufCap TOPOS is free software that generates 3-D topographical surfaces ("topos") for acid-base equilibrium studies. It portrays pH and buffer capacity behavior during titration and dilution procedures. Topo surfaces are created by plotting computed pH and buffer capacity values above a composition grid with volume of NaOH as the x axis…
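
    As a rough illustration of the kind of quantity BufCap TOPOS plots, the sketch below computes pH and buffer capacity along one titration axis (volume of NaOH) for a weak acid; sweeping an additional dilution axis would give the full topo surface. The Ka value, concentrations and volumes are illustrative assumptions, and the charge-balance model is a simplification rather than the program's own computation.

```python
# Hedged sketch: pH and buffer capacity of a weak acid titrated with NaOH,
# computed along the titration axis. Ka, concentrations and volumes are
# illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

Kw, Ka = 1e-14, 1.8e-5            # water and acetic-acid equilibrium constants
Ca0, Cb0, Va = 0.10, 0.10, 50.0   # acid conc. (M), titrant conc. (M), acid volume (mL)

def ph(ca, cb):
    """pH from the charge balance  h + cb - Kw/h - ca*Ka/(Ka + h) = 0."""
    f = lambda h: h + cb - Kw / h - ca * Ka / (Ka + h)
    return -np.log10(brentq(f, 1e-14, 1.0))

vb = np.linspace(0.0, 100.0, 201)           # mL of NaOH added (the x axis)
ca = Ca0 * Va / (Va + vb)                   # analytical concentrations after mixing
cb = Cb0 * vb / (Va + vb)
pH = np.array([ph(a, b) for a, b in zip(ca, cb)])

# Buffer capacity: moles of base per litre needed per unit pH change.
beta = np.gradient(cb, pH)
print(pH[::50].round(2), beta[::50].round(4))
```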

  9. How Students and Field Geologists Reason in Integrating Spatial Observations from Outcrops to Visualize a 3-D Geological Structure

    Science.gov (United States)

    Kastens, Kim A.; Agrawal, Shruti; Liben, Lynn S.

    2009-01-01

    Geologists and undergraduate students observed eight artificial "rock outcrops" in a realistically scaled field area, and then tried to envision a geological structure that might plausibly be formed by the layered rocks in the set of outcrops. Students were videotaped as they selected which of fourteen 3-D models they thought best…

  10. 3D imaging of cleared human skin biopsies using light-sheet microscopy: A new way to visualize in-depth skin structure.

    Science.gov (United States)

    Abadie, S; Jardet, C; Colombelli, J; Chaput, B; David, A; Grolleau, J-L; Bedos, P; Lobjois, V; Descargues, P; Rouquette, J

    2018-05-01

    Human skin is composed of the superimposition of tissue layers of various thicknesses and components. Histological staining of skin sections is the benchmark approach to analyse the organization and integrity of human skin biopsies; however, this approach does not allow 3D tissue visualization. Alternatively, confocal or two-photon microscopy is an effective approach to perform fluorescent-based 3D imaging. However, owing to light scattering, these methods display limited light penetration in depth. The objectives of this study were therefore to combine optical clearing and light-sheet fluorescence microscopy (LSFM) to perform in-depth optical sectioning of 5 mm-thick human skin biopsies and generate 3D images of entire human skin biopsies. A benzyl alcohol and benzyl benzoate solution was used to successfully optically clear entire formalin-fixed human skin biopsies, making them transparent. In-depth optical sectioning was performed with LSFM on the basis of tissue-autofluorescence observations. 3D image analysis of optical sections generated with LSFM was performed by using the Amira® software. This new approach allowed us to observe in situ the different layers and compartments of human skin, such as the stratum corneum, the dermis and epidermal appendages. With this approach, we easily performed 3D reconstruction to visualise an entire human skin biopsy. Finally, we demonstrated that this method is useful to visualise and quantify histological anomalies, such as epidermal hyperplasia. The combination of optical clearing and LSFM has new applications in dermatology and dermatological research by allowing 3D visualization and analysis of whole human skin biopsies. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Research-Grade 3D Virtual Astromaterials Samples: Novel Visualization of NASA's Apollo Lunar Samples and Antarctic Meteorite Samples to Benefit Curation, Research, and Education

    Science.gov (United States)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K. R.; Zeigler, R. A.; Righter, K.; Hanna, R. D.; Ketcham, R. A.

    2017-01-01

    NASA's vast and growing collections of astromaterials are both scientifically and culturally significant, requiring unique preservation strategies that need to be recurrently updated to contemporary technological capabilities and increasing accessibility demands. New technologies have made it possible to advance documentation and visualization practices that can enhance conservation and curation protocols for NASA's Astromaterials Collections. Our interdisciplinary team has developed a method to create 3D Virtual Astromaterials Samples (VAS) of the existing collections of Apollo Lunar Samples and Antarctic Meteorites. Research-grade 3D VAS will virtually put these samples in the hands of researchers and educators worldwide, increasing accessibility and visibility of these significant collections. With new sample return missions on the horizon, it is of primary importance to develop advanced curation standards for documentation and visualization methodologies.

  12. GBS: Guidance by Semantics-Using High-Level Visual Inference to Improve Vision-Based Mobile Robot Localization

    Science.gov (United States)

    2015-08-28

  13. OmicsNet: a web-based tool for creation and visual analysis of biological networks in 3D space.

    Science.gov (United States)

    Zhou, Guangyan; Xia, Jianguo

    2018-06-07

    Biological networks play increasingly important roles in omics data integration and systems biology. Over the past decade, many excellent tools have been developed to support creation, analysis and visualization of biological networks. However, important limitations remain: most tools are standalone programs, the majority of them focus on protein-protein interaction (PPI) or metabolic networks, and visualizations often suffer from 'hairball' effects when networks become large. To help address these limitations, we developed OmicsNet - a novel web-based tool that allows users to easily create different types of molecular interaction networks and visually explore them in a three-dimensional (3D) space. Users can upload one or multiple lists of molecules of interest (genes/proteins, microRNAs, transcription factors or metabolites) to create and merge different types of biological networks. The 3D network visualization system was implemented using the powerful Web Graphics Library (WebGL) technology that works natively in most major browsers. OmicsNet supports force-directed layout, multi-layered perspective layout, as well as spherical layout to help visualize and navigate complex networks. A rich set of functions have been implemented to allow users to perform coloring, shading, topology analysis, and enrichment analysis. OmicsNet is freely available at http://www.omicsnet.ca.
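
    OmicsNet's force-directed layout can be approximated offline; the hedged sketch below computes 3D spring-layout coordinates for a toy interaction network with networkx. The gene names and edges are invented, and this is not OmicsNet's own WebGL code.

```python
# Hedged sketch of a 3D force-directed layout of the kind OmicsNet renders in
# WebGL; here computed offline with networkx on a toy interaction network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("TP53", "MDM2"), ("TP53", "CDKN1A"), ("MDM2", "MDM4"),
    ("CDKN1A", "CCND1"), ("CCND1", "CDK4"), ("CDK4", "RB1"), ("RB1", "E2F1"),
])

# dim=3 asks the spring (force-directed) algorithm for 3D coordinates.
pos3d = nx.spring_layout(G, dim=3, seed=42)
for node, (x, y, z) in pos3d.items():
    print(f"{node:7s} {x:+.3f} {y:+.3f} {z:+.3f}")
```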

  14. Integrating 2D Mouse Emulation with 3D Manipulation for Visualizations on a Multi-Touch Table

    NARCIS (Netherlands)

    Vlaming, Luc; Collins, Christopher; Hancock, Mark; Nacenta, Miguel; Isenberg, Tobias; Carpendale, Sheelagh

    2010-01-01

    We present the Rizzo, a multi-touch virtual mouse that has been designed to provide the fine grained interaction for information visualization on a multi-touch table. Our solution enables touch interaction for existing mouse-based visualizations. Previously, this transition to a multi-touch

  15. The Bubble Box: Towards an Automated Visual Sensor for 3D Analysis and Characterization of Marine Gas Release Sites

    Directory of Open Access Journals (Sweden)

    Anne Jordt

    2015-12-01

    Full Text Available Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Here, single-camera based methods for bubble stream observation have become an important tool, as they help in estimating flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble to the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce extra uncertainties. In this article, we introduce our wide baseline stereo-camera deep-sea sensor bubble box that overcomes these limitations, as it observes bubbles from two orthogonal directions using calibrated cameras. Besides the setup and the hardware of the system, we discuss appropriate calibration and the different automated processing steps (deblurring, detection, tracking, and 3D fitting) that are crucial to arrive at a 3D ellipsoidal shape and rise speed of each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large scale acoustic surveys. We demonstrate and evaluate the wide baseline stereo measurement model using a controlled test setup with ground truth information.
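
    As a much-simplified, hedged illustration of the final step described above (not the authors' calibrated stereo pipeline), the sketch below combines per-frame ellipse semi-axes from two orthogonal views into an ellipsoid volume, an equivalent radius and a rise speed; all measurements are invented example values.

```python
# Hedged sketch (not the authors' pipeline): combining per-frame ellipse fits
# from two orthogonal, calibrated views into an ellipsoid and a rise speed.
# All measurements below are invented example values in millimetres.
import numpy as np

# Semi-axes of the projected ellipse seen by each camera, per frame:
# camera A looks along y (sees x-z), camera B looks along x (sees y-z).
cam_a = np.array([[2.1, 1.5], [2.2, 1.4], [2.0, 1.6]])   # (a_x, a_z) per frame
cam_b = np.array([[1.8, 1.5], [1.9, 1.5], [1.7, 1.6]])   # (a_y, a_z) per frame
z_centroid = np.array([10.0, 14.8, 19.9])                 # mm, per frame
dt = 1.0 / 25.0                                            # s between frames

a = cam_a[:, 0]                        # semi-axis along x
b = cam_b[:, 0]                        # semi-axis along y
c = 0.5 * (cam_a[:, 1] + cam_b[:, 1])  # z semi-axis, seen by both cameras

volume = 4.0 / 3.0 * np.pi * a * b * c            # mm^3 per frame
r_eq = (a * b * c) ** (1.0 / 3.0)                  # equivalent spherical radius
rise_speed = np.gradient(z_centroid, dt)           # mm/s

print(volume.mean(), r_eq.mean(), rise_speed.mean())
```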

  16. 3D Visualization of Developmental Toxicity of 2,4,6-Trinitrotoluene in Zebrafish Embryogenesis Using Light-Sheet Microscopy

    Directory of Open Access Journals (Sweden)

    Juneyong Eum

    2016-11-01

    Full Text Available Environmental contamination by trinitrotoluene is of global concern due to its widespread use in military ordnance and commercial explosives. Despite known long-term persistence in groundwater and soil, the toxicological profile of trinitrotoluene and other explosive wastes has not been systematically measured using in vivo biological assays. Zebrafish embryos are ideal model vertebrates for high-throughput toxicity screening and live in vivo imaging due to their small size and transparency during embryogenesis. Here, we used Single Plane Illumination Microscopy (SPIM/light sheet microscopy) to assess the developmental toxicity of explosive-contaminated water in zebrafish embryos and report 2,4,6-trinitrotoluene-associated developmental abnormalities, including defects in heart formation and circulation, in 3D. Levels of apoptotic cell death were higher in the actively developing tissues of trinitrotoluene-treated embryos than controls. Live 3D imaging of heart tube development at cellular resolution by light-sheet microscopy revealed trinitrotoluene-associated cardiac toxicity, including hypoplastic heart chamber formation and cardiac looping defects, while real-time PCR (polymerase chain reaction) quantitatively measured the molecular changes in heart and blood development supporting the developmental defects at the molecular level. Identification of cellular toxicity in zebrafish using the state-of-the-art 3D imaging system could form the basis of a sensitive biosensor for environmental contaminants and be further valued by combining it with molecular analysis.

  17. 3D Surgical Simulation

    OpenAIRE

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2010-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive ...

  18. Thoracoscopic anatomical lung segmentectomy using 3D computed tomography simulation without tumour markings for non-palpable and non-visualized small lung nodules.

    Science.gov (United States)

    Kato, Hirohisa; Oizumi, Hiroyuki; Suzuki, Jun; Hamada, Akira; Watarai, Hikaru; Sadahiro, Mitsuaki

    2017-09-01

    Although wedge resection can be curative for small lung tumours, tumour marking is sometimes required for resection of non-palpable or visually undetectable lung nodules as a method for identification of tumours. Tumour marking sometimes fails and occasionally causes serious complications. We have performed many thoracoscopic segmentectomies using 3D computed tomography simulation for undetectable small lung tumours without any tumour markings. The aim of this study was to investigate whether thoracoscopic segmentectomy planned with 3D computed tomography simulation could precisely remove non-palpable and visually undetectable tumours. Between January 2012 and March 2016, 58 patients underwent thoracoscopic segmentectomy using 3D computed tomography simulation for non-palpable, visually undetectable tumours. Surgical outcomes were evaluated. A total of 35, 14 and 9 patients underwent segmentectomy, subsegmentectomy and segmentectomy combined with adjacent subsegmentectomy, respectively. All tumours were correctly resected without tumour marking. The median tumour size and distance from the visceral pleura were 14 ± 5.2 mm (range 5-27 mm) and 11.6 mm (range 1-38.8 mm), respectively. Median values related to the procedures were operative time, 176 min (range 83-370 min); blood loss, 43 ml (range 0-419 ml); duration of chest tube placement, 1 day (range 1-8 days); and postoperative hospital stay, 5 days (range 3-12 days). Two cases were converted to open thoracotomy due to bleeding. Three cases required pleurodesis for pleural fistula. No recurrences occurred during the mean follow-up period of 44.4 months (range 5-53 months). Thoracoscopic segmentectomy using 3D computed tomography simulation was feasible and could be performed to resect undetectable tumours with no tumour markings. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  19. Multimodal-3D imaging based on μMRI and μCT techniques bridges the gap with histology in visualization of the bone regeneration process.

    Science.gov (United States)

    Sinibaldi, R; Conti, A; Sinjari, B; Spadone, S; Pecci, R; Palombo, M; Komlev, V S; Ortore, M G; Tromba, G; Capuani, S; Guidotti, R; De Luca, F; Caputi, S; Traini, T; Della Penna, S

    2018-03-01

    Bone repair/regeneration is usually investigated through X-ray computed microtomography (μCT) supported by histology of extracted samples, to analyse biomaterial structure and new bone formation processes. Magnetic resonance imaging (μMRI) shows a richer tissue contrast than μCT, albeit at lower resolution, and could be combined with μCT in the perspective of conducting non-destructive 3D investigations of bone. A pipeline designed to combine μMRI and μCT images of bone samples is here described and applied on samples of extracted human jawbone core following bone graft. We optimized the coregistration procedure between μCT and μMRI images to avoid bias due to the different resolutions and contrasts. Furthermore, we used an Adaptive Multivariate Clustering, grouping homologous voxels in the coregistered images, to visualize different tissue types within a fused 3D metastructure. The tissue grouping matched the 2D histology applied only on 1 slice, thus extending the histology labelling in 3D. Specifically, in all samples, we could separate and map 2 types of regenerated bone, calcified tissue, soft tissues, and/or fat and marrow space. Remarkably, μMRI and μCT alone were not able to separate the 2 types of regenerated bone. Finally, we computed volumes of each tissue in the 3D metastructures, which might be exploited by quantitative simulation. The 3D metastructure obtained through our pipeline represents a first step to bridge the gap between the quality of information obtained from 2D optical microscopy and the 3D mapping of the bone tissue heterogeneity and could allow researchers and clinicians to non-destructively characterize and follow-up bone regeneration. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Toward 3-D E-field visualization in laser-produced plasma by polarization-spectroscopic imaging

    International Nuclear Information System (INIS)

    Kim, Yong W.

    2004-01-01

    A 3-D volume radiator such as laser-produced plasma (LPP) plumes is observed in the form of a 2-D projection of its radiative structure. The traditional approach to 3-D structure reconstruction relies on multiple projections but is not suitable as a general method for unsteady radiating objects. We have developed a general method for 3-D structure reconstruction for LPP plumes in stages of increasing complexity. We have chosen neutral gas-confined LPP plumes from an aluminum target immersed in high-density argon because the plasma experiences Rayleigh-Taylor instability. We make use of two time-resolved, mutually orthogonal side views of an LPP plume and a front-view snapshot. No symmetry assumptions are needed. Two scaling relations are invoked that connect the plasma temperature and pressure to local specific intensity at selected wavelength(s). Two mutually-orthogonal lateral luminosity views of the plume at each known distance from the target surface are compared with those computed from the trial specific intensity profiles and the scaling relations. The luminosity error signals are minimized to find the structure. The front-view snapshot is used to select the initial trial profile and as a weighting function for allocation of the error signal into corrections for specific intensities from the plasma cells along the line of sight. Full Saha equilibrium for multiple stages of ionization is treated, together with the self-absorption, in the computation of the luminosity. We show the necessary optics for determination of local electric fields through polarization-resolved imaging. (author)
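
    The reconstruction idea, two orthogonal side views corrected iteratively against a trial profile seeded by the front view, can be caricatured with a toy multiplicative update; the sketch below is a generic illustration under that assumption and omits the scaling relations, Saha equilibrium and self-absorption treatment of the actual method.

```python
# Hedged, generic sketch of reconstructing a 3D emissivity field from two
# orthogonal side projections with a front-view prior as weighting: a toy
# multiplicative update, not the authors' Saha-equilibrium treatment.
import numpy as np

rng = np.random.default_rng(1)
true = rng.random((16, 16, 16))                 # unknown emissivity (toy)
side_x = true.sum(axis=0)                       # view along x -> (y, z)
side_y = true.sum(axis=1)                       # view along y -> (x, z)
front = true.sum(axis=2)                        # view along z -> (x, y), used as prior

est = np.tile(front[:, :, None] / front.sum(), (1, 1, 16))  # initial trial profile
est *= side_x.sum() / est.sum()

for _ in range(50):
    # Match the view along x, then the view along y (multiplicative corrections).
    est *= (side_x / np.maximum(est.sum(axis=0), 1e-12))[None, :, :]
    est *= (side_y / np.maximum(est.sum(axis=1), 1e-12))[:, None, :]

err_x = np.abs(est.sum(axis=0) - side_x).mean()
err_y = np.abs(est.sum(axis=1) - side_y).mean()
print(err_x, err_y)
```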

  1. Patient-specific 3D FLAIR for enhanced visualization of brain white matter lesions in multiple sclerosis.

    Science.gov (United States)

    Gabr, Refaat E; Pednekar, Amol S; Govindarajan, Koushik A; Sun, Xiaojun; Riascos, Roy F; Ramírez, María G; Hasan, Khader M; Lincoln, John A; Nelson, Flavia; Wolinsky, Jerry S; Narayana, Ponnada A

    2017-08-01

    To improve the conspicuity of white matter lesions (WMLs) in multiple sclerosis (MS) using patient-specific optimization of single-slab 3D fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI). Sixteen MS patients were enrolled in a prospective 3.0T MRI study. FLAIR inversion time and echo time were automatically optimized for each patient during the same scan session based on measurements of the relative proton density and relaxation times of the brain tissues. The optimization criterion was to maximize the contrast between gray matter (GM) and white matter (WM), while suppressing cerebrospinal fluid. This criterion also helps increase the contrast between WMLs and WM. The performance of the patient-specific 3D FLAIR protocol relative to the fixed-parameter protocol was assessed both qualitatively and quantitatively. Patient-specific optimization achieved a statistically significant 41% increase in the GM-WM contrast ratio (P < 0.05) and 32% increase in the WML-WM contrast ratio (P < 0.01) compared with fixed-parameter FLAIR. The increase in WML-WM contrast ratio correlated strongly with echo time (P < 10⁻¹¹). Two experienced neuroradiologists indicated substantially higher lesion conspicuity on the patient-specific FLAIR images over conventional FLAIR in 3-4 cases (intrarater correlation coefficient ICC = 0.72). In no case was the image quality of patient-specific FLAIR considered inferior to conventional FLAIR by any of the raters (ICC = 0.32). Changes in proton density and relaxation times render fixed-parameter FLAIR suboptimal in terms of lesion contrast. Patient-specific optimization of 3D FLAIR increases lesion conspicuity without scan time penalty, and has potential to enhance the detection of subtle and small lesions in MS. 1 Technical Efficacy: Stage 1 J. MAGN. RESON. IMAGING 2017;46:557-564. © 2016 International Society for Magnetic Resonance in Medicine.
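
    The optimization criterion described above (null CSF, maximize GM-WM contrast) can be sketched with a simplified long-TR inversion-recovery signal model; the tissue proton densities and relaxation times below are illustrative assumptions, not the patient-specific measurements used in the study.

```python
# Hedged sketch of the optimization criterion described above: pick a TI that
# nulls CSF and a TE that maximizes GM-WM contrast, using a simplified long-TR
# inversion-recovery signal model and illustrative 3 T tissue values.
import numpy as np

# Illustrative (assumed) proton densities and relaxation times at 3 T, in ms.
tissues = {           # (PD, T1, T2)
    "WM":  (0.70, 850.0,  70.0),
    "GM":  (0.85, 1400.0, 90.0),
    "CSF": (1.00, 4300.0, 2000.0),
}

def signal(pd, t1, t2, ti, te):
    """Long-TR IR signal magnitude (ignores TR and sequence-specific factors)."""
    return pd * abs(1.0 - 2.0 * np.exp(-ti / t1)) * np.exp(-te / t2)

ti_null = tissues["CSF"][1] * np.log(2.0)        # TI that nulls CSF
te_grid = np.linspace(60.0, 400.0, 200)
contrast = [abs(signal(*tissues["GM"], ti_null, te) -
                signal(*tissues["WM"], ti_null, te)) for te in te_grid]
best_te = te_grid[int(np.argmax(contrast))]
print(f"TI(null CSF) ~ {ti_null:.0f} ms, TE maximizing GM-WM contrast ~ {best_te:.0f} ms")
```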

  2. The Palladiolibrary Geo-Models AN Open 3d Archive to Manage and Visualize Information-Communication Resources about Palladio

    Science.gov (United States)

    Apollonio, F. I.; Baldissini, S.; Clini, P.; Gaiani, M.; Palestini, C.; Trevisan, C.

    2013-07-01

    The paper describes objectives, methods, procedures and outcomes of the development of the digital archive of Palladio works and documentation: the PALLADIOLibrary of Centro Internazionale di Studi di Architettura Andrea Palladio di Vicenza (CISAAP). The core of the application consists of fifty-one reality-based 3D models usable and navigable within a system grounded on GoogleEarth. This information system, a collaboration of four universities each contributing specific skills, returns a comprehensive, structured and coherent semantic interpretation of the Palladian landscape through shapes realistically reconstructed from historical sources and surveys and treated for GoogleEarth with ambient occlusion techniques, overcoming the traditional display mode.

  3. 3D-Reconstructions and Virtual 4D-Visualization to Study Metamorphic Brain Development in the Sphinx Moth Manduca Sexta.

    Science.gov (United States)

    Huetteroth, Wolf; El Jundi, Basil; El Jundi, Sirri; Schachtner, Joachim

    2010-01-01

    During metamorphosis, the transition from the larva to the adult, the insect brain undergoes considerable remodeling: new neurons are integrated while larval neurons are remodeled or eliminated. One well acknowledged model to study metamorphic brain development is the sphinx moth Manduca sexta. To further understand mechanisms involved in the metamorphic transition of the brain we generated a 3D standard brain based on selected brain areas of adult females and 3D reconstructed the same areas during defined stages of pupal development. Selected brain areas include for example mushroom bodies, central complex, antennal- and optic lobes. With this approach we eventually want to quantify developmental changes in neuropilar architecture, but also quantify changes in the neuronal complement and monitor the development of selected neuronal populations. Furthermore, we used a modeling software (Cinema 4D) to create a virtual 4D brain, morphing through its developmental stages. Thus the didactical advantages of 3D visualization are expanded to better comprehend complex processes of neuropil formation and remodeling during development. To obtain datasets of the M. sexta brain areas, we stained whole brains with an antiserum against the synaptic vesicle protein synapsin. Such labeled brains were then scanned with a confocal laser scanning microscope and selected neuropils were reconstructed with the 3D software AMIRA 4.1.

  4. 3D-reconstructions and virtual 4D-visualization to study metamorphic brain development in the sphinx moth Manduca sexta

    Directory of Open Access Journals (Sweden)

    Wolf Huetteroth

    2010-03-01

    Full Text Available During metamorphosis, the transition from the larva to the adult, the insect brain undergoes considerable remodeling: New neurons are integrated while larval neurons are remodeled or eliminated. One well acknowledged model to study metamorphic brain development is the sphinx moth Manduca sexta. To further understand mechanisms involved in the metamorphic transition of the brain we generated a 3D standard brain based on selected brain areas of adult females and 3D reconstructed the same areas during defined stages of pupal development. Selected brain areas include for example mushroom bodies, central complex, antennal- and optic lobes. With this approach we eventually want to quantify developmental changes in neuropilar architecture, but also quantify changes in the neuronal complement and monitor the development of selected neuronal populations. Furthermore, we used a modeling software (Cinema 4D) to create a virtual 4D brain, morphing through its developmental stages. Thus the didactical advantages of 3D visualization are expanded to better comprehend complex processes of neuropil formation and remodeling during development. To obtain datasets of the M. sexta brain areas, we stained whole brains with an antiserum against the synaptic vesicle protein synapsin. Such labeled brains were then scanned with a confocal laser scanning microscope and selected neuropils were reconstructed with the 3D software AMIRA 4.1.

  5. EVALUATION OF THE USER STRATEGY ON 2D AND 3D CITY MAPS BASED ON NOVEL SCANPATH COMPARISON METHOD AND GRAPH VISUALIZATION

    Directory of Open Access Journals (Sweden)

    J. Dolezalova

    2016-06-01

    Full Text Available The paper deals with scanpath comparison of eye-tracking data recorded during a case study focused on the evaluation of 2D and 3D city maps. The experiment contained screenshots from three map portals. Two types of maps were used - a standard map and a 3D visualization. Respondents’ task was to find a particular point symbol on the map as fast as possible. Scanpath comparison is one group of eye-tracking data analysis methods used for revealing the strategy of the respondents. In cartographic studies, the most commonly used application for scanpath comparison is eyePatterns, whose output is hierarchical clustering and a tree graph representing the relationships between the analysed sequences. During an analysis of the algorithm generating the tree graph, it was found that the outputs do not correspond to reality. We therefore created a new tool called ScanGraph. This tool uses visualization of cliques in simple graphs and is freely available at www.eyetracking.upol.cz/scangraph. Results of the study proved the functionality of the tool and its suitability for analysing the different strategies of map readers. Based on the results of the tool, similar scanpaths were selected, and groups of respondents with similar strategies were identified. With this knowledge, it is possible to analyse the relationship between membership of a group with a similar strategy and data gathered from the questionnaire (age, sex, cartographic knowledge, etc.) or the type of stimuli (2D, 3D map).

  6. Evaluation of the User Strategy on 2d and 3d City Maps Based on Novel Scanpath Comparison Method and Graph Visualization

    Science.gov (United States)

    Dolezalova, J.; Popelka, S.

    2016-06-01

    The paper deals with scanpath comparison of eye-tracking data recorded during a case study focused on the evaluation of 2D and 3D city maps. The experiment contained screenshots from three map portals. Two types of maps were used - a standard map and a 3D visualization. Respondents' task was to find a particular point symbol on the map as fast as possible. Scanpath comparison is one group of eye-tracking data analysis methods used for revealing the strategy of the respondents. In cartographic studies, the most commonly used application for scanpath comparison is eyePatterns, whose output is hierarchical clustering and a tree graph representing the relationships between the analysed sequences. During an analysis of the algorithm generating the tree graph, it was found that the outputs do not correspond to reality. We therefore created a new tool called ScanGraph. This tool uses visualization of cliques in simple graphs and is freely available at www.eyetracking.upol.cz/scangraph. Results of the study proved the functionality of the tool and its suitability for analysing the different strategies of map readers. Based on the results of the tool, similar scanpaths were selected, and groups of respondents with similar strategies were identified. With this knowledge, it is possible to analyse the relationship between membership of a group with a similar strategy and data gathered from the questionnaire (age, sex, cartographic knowledge, etc.) or the type of stimuli (2D, 3D map).
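
    The clique idea behind ScanGraph can be sketched as follows: link scanpath strings whose pairwise similarity exceeds a threshold and read off maximal cliques as groups of respondents with a similar strategy. The sequences, the similarity measure and the threshold below are made-up stand-ins, not ScanGraph's own algorithm.

```python
# Hedged sketch of the ScanGraph idea: link scanpath strings whose pairwise
# similarity exceeds a threshold and read off cliques as groups of readers
# with a similar strategy. Sequences and threshold are made-up examples.
import itertools
import difflib
import networkx as nx

scanpaths = {
    "P01": "AABBCCD",
    "P02": "AABCCDD",
    "P03": "DDCCBBA",
    "P04": "AABBCDD",
    "P05": "DCCBBAA",
}

G = nx.Graph()
G.add_nodes_from(scanpaths)
for (p, s1), (q, s2) in itertools.combinations(scanpaths.items(), 2):
    sim = difflib.SequenceMatcher(None, s1, s2).ratio()
    if sim >= 0.7:                       # similarity threshold (assumed)
        G.add_edge(p, q, weight=sim)

# Maximal cliques = candidate groups of respondents sharing a strategy.
for clique in nx.find_cliques(G):
    if len(clique) > 1:
        print(sorted(clique))
```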

  7. Dynamic accommodative response to different visual stimuli (2D vs 3D) while watching television and while playing Nintendo 3DS console.

    Science.gov (United States)

    Oliveira, Sílvia; Jorge, Jorge; González-Méijome, José M

    2012-09-01

    The aim of the present study was to compare the accommodative response to the same visual content presented in two dimensions (2D) and stereoscopically in three dimensions (3D) while participants were watching either a television (TV) or a Nintendo 3DS console. Twenty-two university students, with a mean age of 20.3 ± 2.0 years (mean ± S.D.), were recruited to participate in the TV experiment and fifteen, with a mean age of 20.1 ± 1.5 years, took part in the Nintendo 3DS console study. The accommodative response was measured using a Grand Seiko WAM 5500 autorefractor. In the TV experiment, three conditions were used initially: the film was viewed in 2D mode (TV2D without glasses), the same sequence was watched in 2D whilst shutter-glasses were worn (TV2D with glasses) and the sequence was viewed in 3D mode (TV3D). Measurements were taken for 5 min in each condition, and these sections were sub-divided into ten 30-s segments to examine changes within the film. In addition, the accommodative response to three points of different disparity of one 3D frame was assessed for 30 s. In the Nintendo experiment, two conditions were employed - 2D viewing and stereoscopic 3D viewing. In the TV experiment no statistically significant differences were found between the accommodative response with TV2D without glasses (-0.38 ± 0.32D, mean ± S.D.) and TV3D (-0.37 ± 0.34D). Also, no differences were found between the various segments of the film, or between the accommodative response to different points of one frame (p > 0.05). A significant difference (p = 0.015) was found, however, between the TV2D with (-0.32 ± 0.32D) and without glasses (-0.38 ± 0.32D). In the Nintendo experiment the accommodative responses obtained in modes 2D (-2.57 ± 0.30D) and 3D (-2.49 ± 0.28D) were significantly different (paired t-test p = 0.03). The need to use shutter-glasses may affect the accommodative response during the viewing of displays, and the accommodative response when playing
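
    The console comparison above rests on a paired t-test across subjects; a minimal sketch with invented per-subject accommodative responses follows.

```python
# Hedged sketch of the comparison reported above: a paired t-test on per-subject
# accommodative responses in the 2D and 3D console conditions (values invented).
import numpy as np
from scipy import stats

resp_2d = np.array([-2.61, -2.40, -2.85, -2.55, -2.30, -2.72, -2.48, -2.66,
                    -2.35, -2.58, -2.90, -2.44, -2.52, -2.63, -2.57])
resp_3d = resp_2d + np.random.default_rng(3).normal(0.08, 0.05, resp_2d.size)

t, p = stats.ttest_rel(resp_2d, resp_3d)
print(f"t = {t:.2f}, p = {p:.4f}, mean difference = {(resp_3d - resp_2d).mean():.3f} D")
```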

  8. Visualization of 2-D and 3-D fields from its value in a finite number of points

    International Nuclear Information System (INIS)

    Dari, E.A.; Venere, M.J.

    1990-01-01

    This work describes a method for the visualization of two- and three-dimensional fields, given their values at a finite number of points. These data can originate from experimental measurements, numerical results, or any other source. For the field interpolation, the space is divided into simplices (triangles or tetrahedrons), using the Watson algorithm to obtain the Delaunay triangulation. Inside each simplex, linear interpolation is assumed. The visualization is accomplished by means of Finite Elements post-processors, capable of handling unstructured meshes, which were also developed by the authors. (Author)
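
    The interpolation scheme described above, Delaunay triangulation of the sample points with linear interpolation inside each simplex, maps directly onto standard library calls; the sketch below uses scipy on a synthetic 3D field rather than the authors' own post-processors.

```python
# Hedged sketch of the interpolation scheme described above: Delaunay
# triangulation of scattered sample points with linear interpolation inside
# each simplex (scipy builds the triangulation; the field here is synthetic).
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(7)
points = rng.random((200, 3))                    # scattered 3D sample locations
values = np.sin(4 * points[:, 0]) * points[:, 1] + points[:, 2]  # field samples

tri = Delaunay(points)                           # simplicial decomposition
interp = LinearNDInterpolator(tri, values)       # linear inside each tetrahedron

# Evaluate the reconstructed field on a regular grid slice for visualization.
xx, yy = np.mgrid[0.1:0.9:30j, 0.1:0.9:30j]
query = np.column_stack([xx.ravel(), yy.ravel(), np.full(xx.size, 0.5)])
slice_vals = interp(query).reshape(xx.shape)
print(slice_vals.shape, np.nanmin(slice_vals), np.nanmax(slice_vals))
```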

  9. Microtomographic images of rat's lumbar vertebra microstructure using 30 keV synchrotron X-rays: an analysis in terms of 3D visualization

    Science.gov (United States)

    Rao, D. V.; Takeda, T.; Kawakami, T.; Uesugi, K.; Tsuchiya, Y.; Wu, J.; Lwin, T. T.; Itai, Y.; Zeniya, T.; Yuasa, T.; Akatsuka, T.

    2004-05-01

    Microtomographic images of rat's lumbar vertebra of different age groups (8, 56 and 78 weeks) were obtained at 30 keV using synchrotron X-rays with a spatial resolution of 12 μm. The images are analyzed in terms of 3D visualization and micro-architecture. Density histogram of rat's lumbar vertebra is compared with test phantoms. Rat's lumbar volume and phantom volume are studied at different concentrations of hydroxyapatite with slice number. With the use of 2D slices, 3D images are reconstructed, in order to know the evolution and a state of decline of bone microstructure with aging. Cross-sectional μ-CT images show that the bone of the young rat has a fine trabecular microstructure while that of the old rat has a large meshed structure.

  10. Microtomographic images of rat's lumbar vertebra microstructure using 30 keV synchrotron X-rays: an analysis in terms of 3D visualization

    Energy Technology Data Exchange (ETDEWEB)

    Rao, D.V.; Takeda, T. E-mail: ttakeda@md.tsukuba.ac.jp; Kawakami, T.; Uesugi, K.; Tsuchiya, Y.; Wu, J.; Lwin, T.T.; Itai, Y.; Zeniya, T.; Yuasa, T.; Akatsuka, T

    2004-05-01

    Microtomographic images of rat's lumbar vertebra of different age groups (8, 56 and 78 weeks) were obtained at 30 keV using synchrotron X-rays with a spatial resolution of 12 μm. The images are analyzed in terms of 3D visualization and micro-architecture. Density histogram of rat's lumbar vertebra is compared with test phantoms. Rat's lumbar volume and phantom volume are studied at different concentrations of hydroxyapatite with slice number. With the use of 2D slices, 3D images are reconstructed, in order to know the evolution and a state of decline of bone microstructure with aging. Cross-sectional μ-CT images show that the bone of the young rat has a fine trabecular microstructure while that of the old rat has a large meshed structure.

  11. 3D micro-particle image modeling and its application in measurement resolution investigation for visual sensing based axial localization in an optical microscope

    International Nuclear Information System (INIS)

    Wang, Yuliang; Li, Xiaolai; Bi, Shusheng; Zhu, Xiaofeng; Liu, Jinhua

    2017-01-01

    Visual sensing based three dimensional (3D) particle localization in an optical microscope is important for both fundamental studies and practical applications. Compared with the lateral (X and Y) localization, it is more challenging to achieve a high resolution measurement of axial particle location. In this study, we aim to investigate the effect of different factors on axial measurement resolution through an analytical approach. Analytical models were developed to simulate 3D particle imaging in an optical microscope. A radius vector projection method was applied to convert the simulated particle images into radius vectors. With the obtained radius vectors, a measure termed the axial changing rate was proposed to evaluate the measurement resolution of axial particle localization. Experiments were also conducted for comparison with the results obtained through simulation. Moreover, with the proposed method, the effects of particle size on measurement resolution were discussed. The results show that the method provides an efficient approach to investigate the resolution of axial particle localization. (paper)
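
    A hedged sketch of the radius-vector idea follows: each (here synthetic) particle image is collapsed to an azimuthally averaged radial intensity profile, and the rate at which that profile changes with depth serves as a stand-in for the axial changing rate; the toy image model and bin counts are assumptions, not the paper's analytical model.

```python
# Hedged sketch of the radius-vector idea: collapse each (synthetic) particle
# image to a radial intensity profile and measure how fast that profile changes
# with axial position -- a stand-in for the "axial changing rate" metric.
import numpy as np

def particle_image(z, size=64):
    """Toy defocused particle: ring pattern whose scale depends on depth z (µm)."""
    y, x = np.mgrid[:size, :size] - size // 2
    r = np.hypot(x, y)
    return np.exp(-(r / (6 + 0.4 * abs(z))) ** 2) * (1 + 0.5 * np.cos(r / (1 + 0.05 * abs(z))))

def radius_vector(img, n_bins=30):
    """Azimuthally averaged intensity as a function of radius."""
    size = img.shape[0]
    y, x = np.mgrid[:size, :size] - size // 2
    r = np.hypot(x, y).astype(int)
    return np.array([img[r == b].mean() for b in range(n_bins)])

z_stack = np.arange(-10.0, 10.5, 0.5)                     # axial positions, µm
vectors = np.array([radius_vector(particle_image(z)) for z in z_stack])

# Axial changing rate: how much the radius vector changes per unit depth.
rate = np.linalg.norm(np.gradient(vectors, z_stack, axis=0), axis=1)
print(rate.round(3))
```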

  12. 3D ARCHITECTURAL VIDEOMAPPING

    Directory of Open Access Journals (Sweden)

    R. Catanese

    2013-07-01

    Full Text Available 3D architectural mapping is a video projection technique that relies on a survey of a chosen building in order to achieve a perfect correspondence between its shapes and the projected images. As a performative kind of audiovisual artifact, the real event of the 3D mapping is a combination of a registered video animation file with the real architecture. This new kind of visual art is becoming very popular, and its success with audiences testifies to new expressive possibilities in the field of urban design. My case study was carried out in Pisa for the Luminara feast in 2012.

  13. Images of soft materials: a 3D visualization of interior of the sample in terms of attenuation coefficient

    International Nuclear Information System (INIS)

    Golosio, B.; Brunetti, A.; Cesareo, R.; Amendolia, S.R.; Rao, D.V.; Seltzer, S.M.

    2001-01-01

    Images of soft materials are obtained using an image intensifier based X-ray system (Rao et al., Nucl. Instr. and Meth. A 437 (1999) 141). The interior of the soft material is visualized using the novel software in order to map the distribution of the attenuation coefficient in terms of density. The novel software is based mainly on a graphics library and is applicable to several operating systems without any change. It can serve several applications, from biomedical imaging to industrial quality control. The results for a walnut and a tooth are presented as a set of images from the internal parts of the sample. A description of the principal parameters required for tomographic visualization is given and some results based on this technique are reported and discussed.

  14. The GPlates Portal: Cloud-Based Interactive 3D Visualization of Global Geophysical and Geological Data in a Web Browser.

    Science.gov (United States)

    Müller, R Dietmar; Qin, Xiaodong; Sandwell, David T; Dutkiewicz, Adriana; Williams, Simon E; Flament, Nicolas; Maus, Stefan; Seton, Maria

    2016-01-01

    The pace of scientific discovery is being transformed by the availability of 'big data' and open access, open source software tools. These innovations open up new avenues for how scientists communicate and share data and ideas with each other and with the general public. Here, we describe our efforts to bring to life our studies of the Earth system, both at present day and through deep geological time. The GPlates Portal (portal.gplates.org) is a gateway to a series of virtual globes based on the Cesium Javascript library. The portal allows fast interactive visualization of global geophysical and geological data sets, draped over digital terrain models. The globes use WebGL for hardware-accelerated graphics and are cross-platform and cross-browser compatible with complete camera control. The globes include a visualization of a high-resolution global digital elevation model and the vertical gradient of the global gravity field, highlighting small-scale seafloor fabric such as abyssal hills, fracture zones and seamounts in unprecedented detail. The portal also features globes portraying seafloor geology and a global data set of marine magnetic anomaly identifications. The portal is specifically designed to visualize models of the Earth through geological time. These space-time globes include tectonic reconstructions of the Earth's gravity and magnetic fields, and several models of long-wavelength surface dynamic topography through time, including the interactive plotting of vertical motion histories at selected locations. The globes put the on-the-fly visualization of massive data sets at the fingertips of end-users to stimulate teaching and learning and novel avenues of inquiry.

  15. The GPlates Portal: Cloud-Based Interactive 3D Visualization of Global Geophysical and Geological Data in a Web Browser.

    Directory of Open Access Journals (Sweden)

    R Dietmar Müller

    Full Text Available The pace of scientific discovery is being transformed by the availability of 'big data' and open access, open source software tools. These innovations open up new avenues for how scientists communicate and share data and ideas with each other and with the general public. Here, we describe our efforts to bring to life our studies of the Earth system, both at present day and through deep geological time. The GPlates Portal (portal.gplates.org) is a gateway to a series of virtual globes based on the Cesium Javascript library. The portal allows fast interactive visualization of global geophysical and geological data sets, draped over digital terrain models. The globes use WebGL for hardware-accelerated graphics and are cross-platform and cross-browser compatible with complete camera control. The globes include a visualization of a high-resolution global digital elevation model and the vertical gradient of the global gravity field, highlighting small-scale seafloor fabric such as abyssal hills, fracture zones and seamounts in unprecedented detail. The portal also features globes portraying seafloor geology and a global data set of marine magnetic anomaly identifications. The portal is specifically designed to visualize models of the Earth through geological time. These space-time globes include tectonic reconstructions of the Earth's gravity and magnetic fields, and several models of long-wavelength surface dynamic topography through time, including the interactive plotting of vertical motion histories at selected locations. The globes put the on-the-fly visualization of massive data sets at the fingertips of end-users to stimulate teaching and learning and novel avenues of inquiry.

  16. ARCHITECTURE DEGREE PROJECT: USE OF 3D TECHNOLOGY, MODELS AND AUGMENTED REALITY EXPERIENCE WITH VISUALLY IMPAIRED USERS

    Directory of Open Access Journals (Sweden)

    Isidro Navarro Delgado

    2012-04-01

    Full Text Available Web 3.0 technologies provide effective tools for interpreting architecture and culture in general. Thus, a project may have an emotional impact on people while also having a more widespread effect in society as a whole. This project defines a methodology for evaluating accessibility of architecture for people with visual disabilities and the application of this to visiting emblematic buildings such as the Basilica of the Holy Family in Barcelona, designed by the architect Antoni Gaudí.

  17. Label-free 3D visualization of cellular and tissue structures in intact muscle with second and third harmonic generation microscopy.

    Directory of Open Access Journals (Sweden)

    Markus Rehberg

    Full Text Available Second and Third Harmonic Generation (SHG and THG microscopy is based on optical effects which are induced by specific inherent physical properties of a specimen. As a multi-photon laser scanning approach which is not based on fluorescence it combines the advantages of a label-free technique with restriction of signal generation to the focal plane, thus allowing high resolution 3D reconstruction of image volumes without out-of-focus background several hundred micrometers deep into the tissue. While in mammalian soft tissues SHG is mostly restricted to collagen fibers and striated muscle myosin, THG is induced at a large variety of structures, since it is generated at interfaces such as refraction index changes within the focal volume of the excitation laser. Besides, colorants such as hemoglobin can cause resonance enhancement, leading to intense THG signals. We applied SHG and THG microscopy to murine (Mus musculus muscles, an established model system for physiological research, to investigate their potential for label-free tissue imaging. In addition to collagen fibers and muscle fiber substructure, THG allowed us to visualize blood vessel walls and erythrocytes as well as white blood cells adhering to vessel walls, residing in or moving through the extravascular tissue. Moreover peripheral nerve fibers could be clearly identified. Structure down to the nuclear chromatin distribution was visualized in 3D and with more detail than obtainable by bright field microscopy. To our knowledge, most of these objects have not been visualized previously by THG or any label-free 3D approach. THG allows label-free microscopy with inherent optical sectioning and therefore may offer similar improvements compared to bright field microscopy as does confocal laser scanning microscopy compared to conventional fluorescence microscopy.

  18. Visual impairment and urban orientation. A pilot study on tactile maps produced by 3D printing

    OpenAIRE

    Gual Ortí, Jaume; Puyuelo Cazorla, Marina; Lloveras Macià, Joaquim; Merino, Lola

    2012-01-01

    The work presented here reports a pilot study carried out in Barcelona with blind and visually impaired people. Its aim was to analyse the use and effectiveness of tactile maps produced by 3D printing. Structured interviews, direct observation, cognitive mapping and tasks with prototypes were used. In this way we sought to examine in depth the instrumental and communicative value of these products when it comes to...

  19. Visualization system: animation of the dynamic evolution of the molecular hydrogen cloud during its gravitational collapse in 3D

    International Nuclear Information System (INIS)

    Duarte P, R.; Klapp E, J.; Arreaga D, G.

    2006-01-01

    We present the results of a set of numerical simulations of a region of interest in a molecular hydrogen cloud that collapses under the action of its own gravity. Two models are considered, one with a constant and one with a Gaussian initial density profile for the cloud, together with a barotropic equation of state that allows the transition from isothermal to adiabatic collapse. For each model two values of the critical density and a spectrum of density perturbations are used, yielding a binary, ternary or even quaternary system. The programs needed to generate the visualizations of the models, described in the methodology, were developed. (Author)

  20. BioCichlid: central dogma-based 3D visualization system of time-course microarray data on a hierarchical biological network.

    Science.gov (United States)

    Ishiwata, Ryosuke R; Morioka, Masaki S; Ogishima, Soichi; Tanaka, Hiroshi

    2009-02-15

    BioCichlid is a 3D visualization system of time-course microarray data on molecular networks, aiming at interpretation of gene expression data by transcriptional relationships based on the central dogma with physical and genetic interactions. BioCichlid visualizes both physical (protein) and genetic (regulatory) network layers, and provides animation of time-course gene expression data on the genetic network layer. Transcriptional regulations are represented to bridge the physical network (transcription factors) and genetic network (regulated genes) layers, thus integrating promoter analysis into the pathway mapping. BioCichlid enhances the interpretation of microarray data and allows for revealing the underlying mechanisms causing differential gene expressions. BioCichlid is freely available and can be accessed at http://newton.tmd.ac.jp/. Source codes for both biocichlid server and client are also available.

  1. A MATLAB®-based program for 3D visualization of stratigraphic setting and subsidence evolution of sedimentary basins: example application to the Vienna Basin

    Science.gov (United States)

    Lee, Eun Young; Novotny, Johannes; Wagreich, Michael

    2015-04-01

    In recent years, 3D visualization of sedimentary basins has become increasingly popular. Stratigraphic and structural mapping is highly important to understand the internal setting of sedimentary basins, and subsequent subsidence analysis provides significant insights into basin evolution. This study focused on developing a simple and user-friendly program which allows geologists to analyze and model sedimentary basin data. The developed program is aimed at stratigraphic and subsidence modelling of sedimentary basins from wells or stratigraphic profile data. The program is based on two numerical methods: surface interpolation and subsidence analysis. For surface visualization, four different interpolation techniques (Linear, Natural, Cubic Spline, and Thin-Plate Spline) are provided. The subsidence analysis consists of decompaction and backstripping techniques. The numerical methods are computed in MATLAB®, a multi-paradigm numerical computing environment used extensively in academic, research, and industrial fields. The program consists of five main processing steps: 1) setup (study area and stratigraphic units), 2) loading of well data, 3) stratigraphic modelling (depth distribution and isopach plots), 4) subsidence parameter input, and 5) subsidence modelling (subsided depth and subsidence rate plots). The graphical user interface intuitively guides users through all process stages and provides tools to analyse and export the results. Interpolation and subsidence results are cached to minimize redundant computations and improve the interactivity of the program. All 2D and 3D visualizations are created using MATLAB plotting functions, which enables users to fine-tune the results using the full range of available plot options. All functions of the program are illustrated with a case study of Miocene sediments in the Vienna Basin. The basin is an ideal place to test this program, because sufficient data is
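
    The decompaction and backstripping steps named above follow standard basin-analysis formulas. As a rough orientation only, here is a minimal sketch in Python (not the authors' MATLAB code) of one-layer decompaction with an exponential porosity-depth law and simple Airy backstripping; the porosity parameters and densities are illustrative assumptions.

```python
import numpy as np

# Minimal sketch, not the program described above: decompact a single layer
# using an exponential porosity-depth law phi(z) = phi0 * exp(-c * z), then
# estimate water-loaded (tectonic) subsidence with Airy backstripping.

RHO_MANTLE, RHO_WATER = 3300.0, 1000.0  # kg/m^3 (illustrative values)

def decompact(z1, z2, z1_new, phi0, c, n_iter=50):
    """New base depth z2_new of a layer originally between depths z1 and z2
    (in m) when its top is restored to depth z1_new, conserving grain volume."""
    z2_new = z1_new + (z2 - z1)  # first guess: thickness unchanged
    for _ in range(n_iter):      # simple fixed-point iteration
        z2_new = (z1_new + (z2 - z1)
                  - phi0 / c * (np.exp(-c * z1) - np.exp(-c * z2))
                  + phi0 / c * (np.exp(-c * z1_new) - np.exp(-c * z2_new)))
    return z2_new

def airy_tectonic_subsidence(sediment_thickness, rho_sed_bulk, water_depth=0.0):
    """Water-loaded subsidence from Airy backstripping, neglecting eustasy."""
    return (sediment_thickness * (RHO_MANTLE - rho_sed_bulk)
            / (RHO_MANTLE - RHO_WATER) + water_depth)

# Example: a shale layer buried between 1500 m and 2000 m, restored to the surface.
print(f"decompacted base depth: {decompact(1500.0, 2000.0, 0.0, 0.63, 0.51e-3):.0f} m")
print(f"tectonic subsidence:    {airy_tectonic_subsidence(2000.0, 2300.0):.0f} m")
```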

  2. Hybrid 3D visualization of the chest and virtual endoscopy of the tracheobronchial system: possibilities and limitations of clinical application.

    Science.gov (United States)

    Seemann, M D; Claussen, C D

    2001-06-01

    A hybrid rendering method is described that combines a color-coded surface rendering method with a volume rendering method and enables virtual endoscopic examinations using different representation models. 14 patients with malignancies of the lung and mediastinum (n=11) and lung transplantation (n=3) underwent thin-section spiral computed tomography. The tracheobronchial system and anatomical and pathological features of the chest were segmented using an interactive threshold-interval volume-growing segmentation algorithm and visualized with a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures. For the virtual endoscopy of the tracheobronchial system, a shaded-surface model without color coding, a transparent color-coded shaded-surface model and a triangle-surface model were tested and compared. The hybrid rendering technique exploits the advantages of both rendering methods, provides an excellent overview of the tracheobronchial system and allows a clear depiction of the complex spatial relationships of anatomical and pathological features. Virtual bronchoscopy with a transparent color-coded shaded-surface model allows both simultaneous visualization of an airway, an airway lesion and mediastinal structures and a quantitative assessment of the spatial relationship between these structures, thus improving confidence in the diagnosis of endotracheal and endobronchial diseases. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images. Virtual bronchoscopy with a transparent color-coded shaded-surface model offers a practical alternative to fiberoptic bronchoscopy and is particularly promising for patients in whom fiberoptic bronchoscopy is not feasible, contraindicated or refused. Furthermore, it can be used as a complementary procedure to fiberoptic bronchoscopy in evaluating airway stenosis and

  3. Visualization of morphological parenchymal changes in emphysema: Comparison of different MRI sequences to 3D-HRCT

    International Nuclear Information System (INIS)

    Ley-Zaporozhan, Julia; Ley, Sebastian; Eberhardt, Ralf; Kauczor, Hans-Ulrich; Heussel, Claus Peter

    2010-01-01

    Purpose: Thin-section CT is the modality of choice for morphological imaging of the lung parenchyma, while proton MRI might be used for functional assessment. However, the capability of MRI to visualize morphological parenchymal alterations in emphysema is undetermined. Thus, the aim of the study was to compare different MRI sequences with CT. Materials and methods: 22 patients suffering from emphysema underwent thin-section MSCT, which served as the reference. MRI (1.5 T) was performed using three different sequences: T2-HASTE in coronal and axial orientation, and T1-GRE (VIBE) in axial orientation before and after application of contrast media (ce). All datasets were evaluated for each sequence separately, independent of CT, by four chest radiologists in consensus. The severity of emphysema, leading type, bronchial wall thickening, fibrotic changes and nodules were analyzed visually on a lobar level. Results: The sensitivity for correct categorization of emphysema severity was 44%, 48% and 41%, and the leading type of emphysema was identical to CT in 68%, 55% and 60%, for T2-HASTE, T1-VIBE and T1-ce-VIBE, respectively. Bronchial wall thickening was found in 43 lobes on CT and was correctly seen on MRI in 42%, 33% and 26%. The 74 lobes presenting with fibrotic changes on CT were correctly identified by MRI in 39%, 35% and 58%. Small nodules were mostly underdiagnosed on MRI. Conclusion: MRI matched the CT severity classification and leading type of emphysema in half of the cases. All sequences showed a similar diagnostic performance; however, a combination of HASTE and ce-VIBE should be recommended.

  4. DEVELOPMENT OF A 3D WEBGIS SYSTEM FOR RETRIEVING AND VISUALIZING CITYGML DATA BASED ON THEIR GEOMETRIC AND SEMANTIC CHARACTERISTICS BY USING FREE AND OPEN SOURCE TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    I. Pispidikis

    2016-10-01

    Full Text Available CityGML is considered an optimal standard for representing 3D city models. However, international experience has shown that visualization of such models on the web is quite difficult to implement, due to the large size of the data and the complexity of CityGML. As a result, in the context of this paper, a 3D WebGIS application is developed in order to successfully retrieve and visualize CityGML data in accordance with their respective geometric and semantic characteristics. Furthermore, the available web technologies and the architecture of WebGIS systems are investigated, as documented by international experience, in order to be utilized in the most appropriate way for the purposes of this paper. Specifically, a PostgreSQL/PostGIS database is used, in compliance with the 3DCityDB schema. At the server tier, Apache HTTP Server and GeoServer are utilized, with PHP as the server-side programming language. At the client tier, which implements the interface of the application, the following technologies are used: JQuery, AJAX, JavaScript, HTML5, WebGL and Ol3-Cesium. Finally, it is worth mentioning that the application’s primary objectives are a user-friendly interface and a fully open source development.
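
    As a rough sketch of how such a stack can pull features out of a 3DCityDB-conformant database (written here in Python rather than PHP for brevity): the table and column names follow the 3DCityDB schema as commonly documented (cityobject, building, envelope), but they should be treated as assumptions, not as the authors' implementation.

```python
import json
import psycopg2  # PostgreSQL driver

# Hedged sketch: return building envelopes as GeoJSON-like features from a
# PostGIS database laid out according to the 3DCityDB schema. The schema
# details (cityobject/building tables, envelope column) are assumptions.

def building_envelopes(dsn, limit=10):
    sql = """
        SELECT co.gmlid, ST_AsGeoJSON(ST_Force2D(co.envelope))
        FROM cityobject AS co
        JOIN building AS b ON b.id = co.id
        LIMIT %s;
    """
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(sql, (limit,))
        return [{"gmlid": gmlid, "geometry": json.loads(geojson)}
                for gmlid, geojson in cur.fetchall()]

if __name__ == "__main__":
    for feature in building_envelopes("dbname=citydb user=postgres"):
        print(feature["gmlid"])
```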

  5. Visualizing Angiogenesis by Multiphoton Microscopy In Vivo in Genetically Modified 3D-PLGA/nHAp Scaffold for Calvarial Critical Bone Defect Repair.

    Science.gov (United States)

    Li, Jian; Jahr, Holger; Zheng, Wei; Ren, Pei-Gen

    2017-09-07

    The reconstruction of critically sized bone defects remains a serious clinical problem because of poor angiogenesis within tissue-engineered scaffolds during repair, which gives rise to a lack of sufficient blood supply and causes necrosis of the new tissues. Rapid vascularization is a vital prerequisite for new tissue survival and integration with existing host tissue. The de novo generation of vasculature in scaffolds is one of the most important steps in making bone regeneration more efficient, allowing repairing tissue to grow into a scaffold. To tackle this problem, genetic modification of a biomaterial scaffold is used to accelerate angiogenesis and osteogenesis. However, visualizing and tracking in vivo blood vessel formation in real time and in three-dimensional (3D) scaffolds or new bone tissue is still an obstacle for bone tissue engineering. Multiphoton microscopy (MPM) is a novel bio-imaging modality that can acquire volumetric data from biological structures in a high-resolution and minimally invasive manner. The objective of this study was to visualize angiogenesis with multiphoton microscopy in vivo in a genetically modified 3D-PLGA/nHAp scaffold for calvarial critical bone defect repair. PLGA/nHAp scaffolds were functionalized for the sustained delivery of lentiviral vectors carrying the growth factor pdgf-b gene (LV-pdgfb), in order to facilitate angiogenesis and to enhance bone regeneration. In a scaffold-implanted calvarial critical bone defect mouse model, the blood vessel areas (BVAs) in PHp scaffolds were significantly higher than in PH scaffolds. Additionally, the expression of pdgf-b and the angiogenesis-related genes vWF and VEGFR2 increased correspondingly. MicroCT analysis indicated that new bone formation in the PHp group dramatically improved compared to the other groups. To our knowledge, this is the first time multiphoton microscopy was used in bone tissue engineering to investigate angiogenesis in a 3D bio-degradable scaffold in

  6. Map Learning with a 3D Printed Interactive Small-Scale Model: Improvement of Space and Text Memorization in Visually Impaired Students

    Directory of Open Access Journals (Sweden)

    Stéphanie Giraud

    2017-06-01

    Full Text Available Special education teachers for visually impaired students rely on tools such as raised-line maps (RLMs) to teach spatial knowledge. These tools do not fully and adequately meet the needs of the teachers because they are time-consuming to produce, expensive, and not versatile enough to provide rapid updating of the content. For instance, the same RLM can barely be used during different lessons. In addition, those maps do not provide any interactivity, which reduces students’ autonomy. With the emergence of 3D printing and low-cost microcontrollers, it is now easy to design affordable interactive small-scale models (SSMs) which are adapted to the needs of special education teachers. However, no study has previously been conducted to evaluate non-visual learning using interactive SSMs. In collaboration with a specialized teacher, we designed a SSM and a RLM representing the evolution of the geography and history of a fictitious kingdom. The two conditions were compared in a study with 24 visually impaired students regarding the memorization of the spatial layout and historical contents. The study showed that the interactive SSM improved both space and text memorization as compared to the RLM with braille legend. In conclusion, we argue that affordable home-made interactive small-scale models can improve learning for visually impaired students. Interestingly, they are adaptable to any teaching situation including students with specific needs.

  7. Map Learning with a 3D Printed Interactive Small-Scale Model: Improvement of Space and Text Memorization in Visually Impaired Students.

    Science.gov (United States)

    Giraud, Stéphanie; Brock, Anke M; Macé, Marc J-M; Jouffrais, Christophe

    2017-01-01

    Special education teachers for visually impaired students rely on tools such as raised-line maps (RLMs) to teach spatial knowledge. These tools do not fully and adequately meet the needs of the teachers because they are time-consuming to produce, expensive, and not versatile enough to provide rapid updating of the content. For instance, the same RLM can barely be used during different lessons. In addition, those maps do not provide any interactivity, which reduces students' autonomy. With the emergence of 3D printing and low-cost microcontrollers, it is now easy to design affordable interactive small-scale models (SSMs) which are adapted to the needs of special education teachers. However, no study has previously been conducted to evaluate non-visual learning using interactive SSMs. In collaboration with a specialized teacher, we designed a SSM and a RLM representing the evolution of the geography and history of a fictitious kingdom. The two conditions were compared in a study with 24 visually impaired students regarding the memorization of the spatial layout and historical contents. The study showed that the interactive SSM improved both space and text memorization as compared to the RLM with braille legend. In conclusion, we argue that affordable home-made interactive small-scale models can improve learning for visually impaired students. Interestingly, they are adaptable to any teaching situation including students with specific needs.

  8. A 3-D Approach for Teaching and Learning about Surface Water Systems through Computational Thinking, Data Visualization and Physical Models

    Science.gov (United States)

    Caplan, B.; Morrison, A.; Moore, J. C.; Berkowitz, A. R.

    2017-12-01

    Understanding water is central to understanding environmental challenges. Scientists use 'big data' and computational models to develop knowledge about the structure and function of complex systems, and to make predictions about changes in climate, weather, hydrology, and ecology. Large environmental systems-related data sets and simulation models are difficult for high school teachers and students to access and make sense of. Comp Hydro, a collaboration across four states and multiple school districts, integrates computational thinking and data-related science practices into water systems instruction to enhance development of scientific model-based reasoning, through curriculum, assessment and teacher professional development. Comp Hydro addresses the need for 1) teaching materials for using data and physical models of hydrological phenomena, 2) building teachers' and students' comfort or familiarity with data analysis and modeling, and 3) infusing the computational knowledge and practices necessary to model and visualize hydrologic processes into instruction. Comp Hydro teams in Baltimore, MD and Fort Collins, CO are integrating teaching about surface water systems into high school courses focusing on flooding (MD) and surface water reservoirs (CO). This interactive session will highlight the successes and challenges of our physical and simulation models in helping teachers and students develop proficiency with computational thinking about surface water. We also will share insights from comparing teacher-led vs. project-led development of curriculum and our simulations.

  9. Remote Sensing and GIS Applied to the Landscape for the Environmental Restoration of Urbanizations by Means of 3D Virtual Reconstruction and Visualization (Salamanca, Spain

    Directory of Open Access Journals (Sweden)

    Antonio Miguel Martínez-Graña

    2016-01-01

    Full Text Available The key focus of this paper is to establish a procedure that combines the use of Geographical Information Systems (GIS) and remote sensing in order to achieve simulation and modeling of the landscape impact caused by construction. The procedure should be easy and inexpensive to develop. With the aid of 3D virtual reconstruction and visualization, this paper proposes that the technologies of remote sensing and GIS can be applied to the landscape for post-urbanization environmental restoration. The goal is to create a rural zone in an urban development sector that integrates the residential areas and local infrastructure into the surrounding natural environment, in order to measure the changes to the preliminary urban design. The units of the landscape are determined by means of two cartographic methods: (1) indirect, using the components of the landscape; and (2) direct, using the landscape's elements. The visual basins are calculated for the points most transited by the population, establishing the zones whose landscape is most impacted by urbanization. On this basis, the different construction types (one-family houses, blocks of houses, etc.) are distributed, the types of plant masses (ornamental or for integration) are selected depending on the zone, and water channels, a recirculating water channel, green spaces and leisure facilities are integrated. The techniques of remote sensing and GIS allow for the visualization and modeling of the urbanization in 3D, simulating the virtual reality of the infrastructure as well as the actions that need to be taken for restoration, thereby providing at low cost an understanding of landscape integration before it takes place.

  10. Adjunct use of 3D-SSP analysis improves the ability to discriminate Alzheimer's disease from controls in visual inspection of brain SPECT. Examined by 16 inspectors belonging to 9 institutes

    International Nuclear Information System (INIS)

    Imabayashi, Etsuko; Matsuda, Hiroshi; Machida, Kikuo; Honda, Norinari; Matsumoto, Toru

    2004-01-01

    Sixteen physicians interpreted 99mTc-ECD SPECT images in two sessions, with or without 3D stereotactic surface projections (SSP), for 50 studies of Alzheimer's disease (AD) patients and 40 studies of healthy volunteers. The mean area under the receiver operating characteristic curve for visual interpretation of SPECT with 3D-SSP (0.778±0.060) was significantly greater than that for visual interpretation of SPECT alone (0.679±0.058). (author)

  11. Synchronized 2D/3D optical mapping for interactive exploration and real-time visualization of multi-function neurological images.

    Science.gov (United States)

    Zhang, Qi; Alexander, Murray; Ryner, Lawrence

    2013-01-01

    Efficient software with the ability to display multiple neurological image datasets simultaneously with full real-time interactivity is critical for brain disease diagnosis and image-guided planning. In this paper, we describe the creation and function of a new comprehensive software platform that integrates novel algorithms and functions for multiple medical image visualization, processing, and manipulation. We implement an opacity-adjustment algorithm to build 2D lookup tables for multiple slice image display and fusion, which achieves a better visual result than that of VTK-based methods. We also develop a new real-time 2D and 3D data synchronization scheme for multi-function MR volume and slice image optical mapping and rendering simultaneously, using the same adjustment operation. All these methodologies are integrated into our software framework to provide users with an efficient tool for flexibly, intuitively, and rapidly exploring and analyzing functional and anatomical MR neurological data. Finally, we validate our new techniques and software platform with visual analysis and task-specific user studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
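
    The 2D lookup-table idea can be illustrated with a small, self-contained example; the following is a minimal sketch (not the published algorithm) of fusing two co-registered slices through a per-intensity opacity lookup table.

```python
import numpy as np

# Minimal sketch, not the authors' implementation: overlay a "functional"
# slice on an "anatomical" slice using an opacity lookup table indexed by
# the functional image intensity.

def make_lut(max_opacity, size=256):
    """Linear opacity LUT mapping intensity 0..size-1 to alpha 0..max_opacity."""
    return np.linspace(0.0, max_opacity, size)

def fuse(anatomical, functional, lut):
    """Alpha-blend two uint8 slices of the same shape."""
    alpha = lut[functional]  # per-pixel opacity looked up from intensity
    return (1.0 - alpha) * anatomical.astype(float) + alpha * functional.astype(float)

rng = np.random.default_rng(0)
t1 = rng.integers(0, 256, (64, 64), dtype=np.uint8)     # stand-in anatomical slice
fmri = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in functional slice
fused = fuse(t1, fmri, make_lut(0.7))
print(fused.shape, float(fused.min()), float(fused.max()))
```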

  12. 3D for Graphic Designers

    CERN Document Server

    Connell, Ellery

    2011-01-01

    Helping graphic designers expand their 2D skills into the 3D space. The trend in graphic design is towards 3D, with the demand for motion graphics, animation, photorealism, and interactivity rapidly increasing. And with the meteoric rise of iPads, smartphones, and other interactive devices, the design landscape is changing faster than ever. 2D digital artists who need a quick and efficient way to join this brave new world will want 3D for Graphic Designers. Readers get hands-on basic training in working in the 3D space, including product design, industrial design and visualization, modeling, ani

  13. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the availability of advanced scanning and 3-D imaging technologies in current ophthalmology practice in resource-rich regions, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences can result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
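
    Once cup and disc boundaries have been segmented, the two screening parameters named above reduce to simple ratios. The following is a hedged Python sketch; the segmentation itself (the diffeomorphic demons step) is not reproduced, and the use of the vertical diameter for CDR is an assumption about the convention.

```python
import numpy as np

# Hedged sketch: cup-to-disc diameter ratio (CDR) and area ratio (CAR)
# from binary segmentation masks of the optic cup and optic disc.

def vertical_diameter(mask):
    rows = np.flatnonzero(mask.any(axis=1))
    return 0 if rows.size == 0 else rows[-1] - rows[0] + 1

def cdr_car(cup_mask, disc_mask):
    cdr = vertical_diameter(cup_mask) / vertical_diameter(disc_mask)
    car = cup_mask.sum() / disc_mask.sum()
    return cdr, car

# Toy masks: a rectangular "disc" with a smaller "cup" inside it.
disc = np.zeros((100, 100), dtype=bool)
disc[20:80, 25:75] = True
cup = np.zeros_like(disc)
cup[35:65, 40:60] = True
print(cdr_car(cup, disc))  # -> (0.5, 0.2)
```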

  14. An Indoor Scene Recognition-Based 3D Registration Mechanism for Real-Time AR-GIS Visualization in Mobile Applications

    Directory of Open Access Journals (Sweden)

    Wei Ma

    2018-03-01

    Full Text Available Mobile Augmented Reality (MAR) systems are becoming ideal platforms for visualization, permitting users to better comprehend and interact with spatial information. Subsequently, this technological development has in turn prompted efforts to enhance mechanisms for registering virtual objects in real world contexts. Most existing AR 3D registration techniques lack the scene recognition capabilities needed to describe accurately the positioning of virtual objects in scenes representing reality. Moreover, the application of such registration methods in indoor AR-GIS systems is further impeded by the limited capacity of these systems to detect the geometry and semantic information in indoor environments. In this paper, we propose a novel method for fusing virtual objects and indoor scenes, based on indoor scene recognition technology. To accomplish scene fusion in AR-GIS, we first detect key points in reference images. Then, we perform interior layout extraction using a Fully Connected Networks (FCN) algorithm to acquire layout coordinate points for the tracking targets. We detect and recognize the target scene in a video frame image to track targets and estimate the camera pose. In this method, virtual 3D objects are fused precisely to a real scene, according to the camera pose and the previously extracted layout coordinate points. Our results demonstrate that this approach enables accurate fusion of virtual objects with representations of real world indoor environments. Based on this fusion technique, users can better grasp virtual three-dimensional representations on an AR-GIS platform.

  15. S6-5: Visual Consciousness Tracked with Direct Intracranial Recording from Early and High-Level Visual Cortices in Humans and Monkeys

    Directory of Open Access Journals (Sweden)

    Naotsugu Tsuchiya

    2012-10-01

    Full Text Available Key insights about the neuronal correlates of consciousness have been gained by electrophysiological recording of single neurons from a particular area or by recording of indirect fMRI signals from the whole brain. However, if rapid interaction among neuronal populations in distant cortical areas is essential for consciousness, other methods, such as intracranial electrocorticography (ECoG), that can meet both requirements are necessary. Here we report the results of ECoG experiments in three epilepsy patients and one monkey. We used Continuous Flash Suppression to investigate the neuronal activity when ‘invisible’ stimuli broke interocular suppression. We found that widespread activity in the visual cortex preceded subjective reports of detection by up to 1–2 s and that alpha-band activity in the visual cortex induced by the initial flashes predicted how long the suppression was going to last. We will discuss the implications of these findings for the neuronal dynamics associated with consciousness.

  16. Visualizing 3-D microscopic specimens

    Science.gov (United States)

    Forsgren, Per-Ola; Majlof, Lars L.

    1992-06-01

    The confocal microscope can be used in a vast number of fields and applications to gather more information than is possible with a regular light microscope, in particular about depth. Compared to other three-dimensional imaging devices such as CAT, NMR, and PET, the variations of the objects studied are larger and not known from macroscopic dissections. It is therefore important to have several complementary ways of displaying the gathered information. We present a system where the user can choose display techniques such as extended focus, depth coding, solid surface modeling, maximum intensity and other techniques, some of which may be combined. A graphical user interface provides easy and direct control of all input parameters. Motion and stereo are available options. Many three-dimensional imaging devices give recordings where one dimension has different resolution and sampling than the other two, which requires interpolation to obtain correct geometry. We have evaluated algorithms with interpolation in object space and in projection space. There are many ways to simplify the geometrical transformations to gain performance. We present results of some ways to simplify the calculations.
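
    Several of the display techniques listed above can be expressed in a few lines once the recorded stack is held as a (z, y, x) array; the following Python sketch is an illustration of the general idea, not the system described in the record (the "extended focus" here is a crude average rather than a focus-weighted combination).

```python
import numpy as np

# Illustrative sketch only: three simple projections of a confocal z-stack.

def max_intensity_projection(stack):
    """Brightest voxel along z for every (y, x) pixel."""
    return stack.max(axis=0)

def depth_coded(stack, z_spacing=1.0):
    """Depth (in z units) of the brightest voxel per pixel, for depth coding."""
    return stack.argmax(axis=0) * z_spacing

def extended_focus(stack):
    """Crude extended-focus image: plain average along z."""
    return stack.mean(axis=0)

stack = np.random.default_rng(1).random((32, 128, 128))   # stand-in z-stack
print(max_intensity_projection(stack).shape, float(depth_coded(stack, 0.5).max()))
```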

  17. Earthquakes in Action: Incorporating Multimedia, Internet Resources, Large-scale Seismic Data, and 3-D Visualizations into Innovative Activities and Research Projects for Today's High School Students

    Science.gov (United States)

    Smith-Konter, B.; Jacobs, A.; Lawrence, K.; Kilb, D.

    2006-12-01

    The most effective means of communicating science to today's "high-tech" students is through the use of visually attractive and animated lessons, hands-on activities, and interactive Internet-based exercises. To address these needs, we have developed Earthquakes in Action, a summer high school enrichment course offered through the California State Summer School for Mathematics and Science (COSMOS) Program at the University of California, San Diego. The summer course consists of classroom lectures, lab experiments, and a final research project designed to foster geophysical innovations, technological inquiries, and effective scientific communication (http://topex.ucsd.edu/cosmos/earthquakes). Course content includes lessons on plate tectonics, seismic wave behavior, seismometer construction, fault characteristics, California seismicity, global seismic hazards, earthquake stress triggering, tsunami generation, and geodetic measurements of the Earth's crust. Students are introduced to these topics through lectures-made-fun using a range of multimedia, including computer animations, videos, and interactive 3-D visualizations. These lessons are further reinforced through both hands-on lab experiments and computer-based exercises. Lab experiments included building hand-held seismometers, simulating the frictional behavior of faults using bricks and sandpaper, simulating tsunami generation in a mini-wave pool, and using the Internet to collect global earthquake data on a daily basis and map earthquake locations using a large classroom map. Students also use Internet resources like Google Earth and UNAVCO/EarthScope's Jules Verne Voyager Jr. interactive mapping tool to study Earth Science on a global scale. All computer-based exercises and experiments developed for Earthquakes in Action have been distributed to teachers participating in the 2006 Earthquake Education Workshop, hosted by the Visualization Center at Scripps Institution of Oceanography (http

  18. The GPlates Portal: Cloud-based interactive 3D and 4D visualization of global geological and geophysical data and models in a browser

    Science.gov (United States)

    Müller, Dietmar; Qin, Xiaodong; Sandwell, David; Dutkiewicz, Adriana; Williams, Simon; Flament, Nicolas; Maus, Stefan; Seton, Maria

    2017-04-01

    stimulate teaching and learning and novel avenues of inquiry. This technology offers many future opportunities for providing additional functionality, especially on-the-fly big data analytics. Müller, R.D., Qin, X., Sandwell, D.T., Dutkiewicz, A., Williams, S.E., Flament, N., Maus, S. and Seton, M., 2016, The GPlates Portal: Cloud-based interactive 3D visualization of global geophysical and geological data in a web browser, PLoS ONE 11(3): e0150883. doi:10.1371/journal.pone.0150883

  19. Separate visualization of endolymphatic space, perilymphatic space and bone by a single pulse sequence; 3D-inversion recovery imaging utilizing real reconstruction after intratympanic Gd-DTPA administration at 3 tesla

    International Nuclear Information System (INIS)

    Naganawa, Shinji; Satake, Hiroko; Kawamura, Minako; Fukatsu, Hiroshi; Sone, Michihiko; Nakashima, Tsutomu

    2008-01-01

    Twenty-four hours after intratympanic administration of gadolinium contrast material (Gd), the Gd was distributed mainly in the perilymphatic space. Three-dimensional FLAIR can differentiate endolymphatic space from perilymphatic space, but not from surrounding bone. The purpose of this study was to evaluate whether 3D inversion-recovery turbo spin echo (3D-IR TSE) with real reconstruction could separate the signals of perilymphatic space (positive value), endolymphatic space (negative value) and bone (near zero) by setting the inversion time between the null point of Gd-containing perilymph fluid and that of the endolymph fluid without Gd. Thirteen patients with clinically suspected endolymphatic hydrops underwent intratympanic Gd injection and were scanned at 3 T. A 3D FLAIR and 3D-IR TSE with real reconstruction were obtained. In all patients, low signal of endolymphatic space in the labyrinth on 3D FLAIR was observed in the anatomically appropriate position, and it showed negative signal on 3D-IR TSE. The low signal area of surrounding bone on 3D FLAIR showed near zero signal on 3D-IR TSE. Gd-containing perilymphatic space showed high signal on 3D-IR TSE. In conclusion, by optimizing the inversion time, endolymphatic space, perilymphatic space and surrounding bone can be separately visualized on a single image using a 3D-IR TSE with real reconstruction. (orig.)
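
    For orientation, the null-point reasoning behind choosing the inversion time can be written down explicitly; the expression below uses the common long-TR approximation, which is an assumption and not stated in the abstract.

```latex
% Longitudinal magnetization at inversion time TI for a tissue with
% relaxation time T1 (assuming TR >> T1):
\[
  M_z(\mathrm{TI}) = M_0\,\bigl(1 - 2\,e^{-\mathrm{TI}/T_1}\bigr),
  \qquad
  M_z = 0 \;\Rightarrow\; \mathrm{TI}_{\mathrm{null}} = T_1 \ln 2 .
\]
% Gd shortens T1 of the perilymph, so its null point lies at a shorter TI than
% that of the unenhanced endolymph; a TI chosen between the two null points
% yields a positive real signal for perilymph and a negative one for endolymph,
% while bone (very low signal) stays near zero.
```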

  20. 3D Surgical Simulation

    Science.gov (United States)

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real time during the surgical procedure. The CAS system aids in dealing with complex cases, with benefits for the patient, for surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessment of the difficulty of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727 and DE018962. PMID: 20816308

  1. 3D Animation Essentials

    CERN Document Server

    Beane, Andy

    2012-01-01

    The essential fundamentals of 3D animation for aspiring 3D artists. 3D is everywhere--video games, movie and television special effects, mobile devices, etc. Many aspiring artists and animators have grown up with 3D and computers, and naturally gravitate to this field as their area of interest. Bringing a blend of studio and classroom experience to offer you thorough coverage of the 3D animation industry, this must-have book shows you what it takes to create compelling and realistic 3D imagery. Serves as the first step to understanding the language of 3D and computer graphics (CG). Covers 3D anim

  2. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest in 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  3. Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering; Die computerassistierte Operationsplanung in der Abdominalchirurgie des Kindes. 3D-Visualisierung mittels ''volume rendering'' in der MRT

    Energy Technology Data Exchange (ETDEWEB)

    Guenther, P.; Holland-Cunz, S.; Waag, K.L. [Universitaetsklinikum Heidelberg (Germany). Kinderchirurgie; Troeger, J. [Universitaetsklinikum Heidelberg, (Germany). Paediatrische Radiologie; Schenk, J.P. [Universitaetsklinikum Heidelberg, (Germany). Paediatrische Radiologie; Universitaetsklinikum, Paediatrische Radiologie, Heidelberg (Germany)

    2006-08-15

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from this. A newly developed and modified raycasting-based powerful 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to the easy handling and high-quality visualization with enormous gain of information, the presented system is now an established part of routine surgical planning. (orig.)

  4. 3-D Topo Surface Visualization of Acid-Base Species Distributions: Corner Buttes, Corner Pits, Curving Ridge Crests, and Dilution Plains

    Science.gov (United States)

    Smith, Garon C.; Hossain, Md Mainul

    2017-01-01

    Species TOPOS is a free software package for generating three-dimensional (3-D) topographic surfaces ("topos") for acid-base equilibrium studies. This upgrade adds 3-D species distribution topos to earlier surfaces that showed pH and buffer capacity behavior during titration and dilution procedures. It constructs topos by plotting…

  5. Visualization system: animation of the dynamic evolution of the molecular hydrogen cloud during its gravitational collapse in 3D; Sistema de visualizacion: animacion de la evolucion dinamica de la nube de hidrogeno molecular durante su colapso gravitacional en 3D

    Energy Technology Data Exchange (ETDEWEB)

    Duarte P, R.; Klapp E, J.; Arreaga D, G. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)]. e-mail: rdp@nuclear.inin.mx

    2006-07-01

    We present the results of a set of numerical simulations of a region of interest in a molecular hydrogen cloud that collapses under the action of its own gravity. Two models are considered, one with a constant and one with a Gaussian initial density profile for the cloud, together with a barotropic equation of state that allows the transition from isothermal to adiabatic collapse. For each model two values of the critical density and a spectrum of density perturbations are used, yielding a binary, ternary or even quaternary system. The programs needed to generate the visualizations of the models, described in the methodology, were developed. (Author)

  6. EUROPEANA AND 3D

    Directory of Open Access Journals (Sweden)

    D. Pletinckx

    2012-09-01

    Full Text Available The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  7. 3D Terahertz Beam Profiling

    DEFF Research Database (Denmark)

    Pedersen, Pernille Klarskov; Strikwerda, Andrew; Jepsen, Peter Uhd

    2013-01-01

    We present a characterization of THz beams generated both in a two-color air plasma and in a LiNbO3 crystal. Using a commercial THz camera, we record intensity images as a function of distance through the beam waist, from which we extract 2D beam profiles and visualize our measurements into 3D beam...

  8. Open 3D Projects

    Directory of Open Access Journals (Sweden)

    Felician ALECU

    2010-01-01

    Full Text Available Many professionals and 3D artists consider Blender to be the best open source solution for 3D computer graphics. Its main features are related to modeling, rendering, shading, imaging, compositing, animation, physics and particles, and real-time 3D/game creation.

  9. Integrated Tsunami Database: simulation and identification of seismic tsunami sources, 3D visualization and post-disaster assessment on the shore

    Science.gov (United States)

    Krivorot'ko, Olga; Kabanikhin, Sergey; Marinin, Igor; Karas, Adel; Khidasheli, David

    2013-04-01

    One of the most important problems in tsunami investigation is the reconstruction of the seismic tsunami source. The non-profit organization WAPMERR (http://wapmerr.org) has provided a historical database of presumed tsunami sources around the world, obtained with the help of information about seaquakes. WAPMERR also has a database of observations of tsunami waves in coastal areas. The main idea of the presentation consists in determining the tsunami source parameters using seismic data and observations of the tsunami waves on the shore, and in expanding and refining the database of presumed tsunami sources for operative and accurate prediction of hazards and assessment of risks and consequences. We also present 3D visualization of real-time tsunami wave propagation and loss assessment, characterizing the nature of the building stock in cities at risk, and monitoring by satellite images using the modern GIS technology ITRIS (Integrated Tsunami Research and Information System) developed by WAPMERR and Informap Ltd. Special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. The most suitable physical models for the simulation of tsunamis are based on the shallow water equations. We consider the initial-boundary value problem in Ω := {(x,y) ∈ ℝ² : x ∈ (0,Lx), y ∈ (0,Ly), Lx, Ly > 0} for the well-known linear shallow water equations in the Cartesian coordinate system, written in dimensional form in terms of the liquid flow components (u,v): ∂η/∂t + ∂(Hu)/∂x + ∂(Hv)/∂y = 0, ∂u/∂t + g ∂η/∂x = 0, ∂v/∂t + g ∂η/∂y = 0, with η(x,y,0) = q(x,y), where H(x,y) is the water depth and g is the gravitational acceleration. Here η(x,y,t) defines the free water surface vertical displacement, i.e. the amplitude of the tsunami wave, and q(x,y) is the initial amplitude of the tsunami wave. The lateral boundary is assumed to be a non-reflecting boundary of the domain, that is, it allows the free passage of the propagating waves. Assume that the free surface oscillation data at points (xm, ym) are given as measured output data from tsunami records: fm(t) := η(xm, ym, t), (xm
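
    As a toy illustration of the linear system written above (not the ITRIS software), a few dozen lines of Python suffice to propagate an initial bump q(x, y) on a constant-depth basin; boundary handling here is crude and does not implement the non-reflecting condition described in the abstract.

```python
import numpy as np

# Toy sketch only: explicit time stepping of the linear shallow-water system
# on a square basin of constant depth H, starting from a Gaussian initial
# free-surface displacement q(x, y).

g, H = 9.81, 4000.0                     # gravity (m/s^2), water depth (m)
L, n = 400e3, 201                       # basin size (m), grid points per side
dx = L / (n - 1)
dt = 0.4 * dx / np.sqrt(g * H)          # CFL-limited time step

x = np.linspace(0.0, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
eta = np.exp(-((X - L / 2) ** 2 + (Y - L / 2) ** 2) / (20e3) ** 2)  # q(x, y)
u = np.zeros_like(eta)
v = np.zeros_like(eta)

d_dx = lambda f: np.gradient(f, dx, axis=0)
d_dy = lambda f: np.gradient(f, dx, axis=1)

for _ in range(200):
    # semi-implicit (Euler-Cromer) update: velocities first, then eta
    u -= dt * g * d_dx(eta)
    v -= dt * g * d_dy(eta)
    eta -= dt * H * (d_dx(u) + d_dy(v))

print("max |eta| after 200 steps:", float(np.abs(eta).max()))
```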

  10. 3DSEM: A 3D microscopy dataset

    Directory of Open Access Journals (Sweden)

    Ahmad P. Tafti

    2016-03-01

    Full Text Available The Scanning Electron Microscope (SEM), as a 2D imaging instrument, has been widely used in many scientific disciplines, including the biological, mechanical, and materials sciences, to determine the surface attributes of microscopic objects. However, SEM micrographs still remain 2D images. To effectively measure and visualize surface properties, we need to truly restore the 3D shape model from the 2D SEM images. Having 3D surfaces would provide the anatomic shape of micro-samples, allowing for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. Keywords: 3D microscopy dataset, 3D microscopy vision, 3D SEM surface reconstruction, Scanning Electron Microscope (SEM

  11. New Technologies for Acquisition and 3-D Visualization of Geophysical and Other Data Types Combined for Enhanced Understandings and Efficiencies of Oil and Gas Operations, Deepwater Gulf of Mexico

    Science.gov (United States)

    Thomson, J. A.; Gee, L. J.; George, T.

    2002-12-01

    This presentation shows results of a visualization method used to display and analyze multiple data types in a geospatially referenced three-dimensional (3-D) space. The integrated data types include sonar and seismic geophysical data, pipeline and geotechnical engineering data, and 3-D facilities models. Visualization of these data collectively in proper 3-D orientation yields insights and synergistic understandings not previously obtainable. Key technological components of the method are: 1) high-resolution geophysical data obtained using a newly developed autonomous underwater vehicle (AUV), 2) 3-D visualization software that delivers correctly positioned display of multiple data types and full 3-D flight navigation within the data space and 3) a highly immersive visualization environment (HIVE) where multidisciplinary teams can work collaboratively to develop enhanced understandings of geospatially complex data relationships. The initial study focused on an active deepwater development area in the Green Canyon protraction area, Gulf of Mexico. Here several planned production facilities required detailed, integrated data analysis for design and installation purposes. To meet the challenges of tight budgets and short timelines, an innovative new method was developed based on the combination of newly developed technologies. Key benefits of the method include enhanced understanding of geologically complex seabed topography and marine soils yielding safer and more efficient pipeline and facilities siting. Environmental benefits include rapid and precise identification of potential locations of protected deepwater biological communities for avoidance and protection during exploration and production operations. In addition, the method allows data presentation and transfer of learnings to an audience outside the scientific and engineering team. This includes regulatory personnel, marine archaeologists, industry partners and others.

  12. Illustrating Mathematics using 3D Printers

    OpenAIRE

    Knill, Oliver; Slavkovsky, Elizabeth

    2013-01-01

    3D printing technology can help to visualize proofs in mathematics. In this document we aim to illustrate how 3D printing can help to visualize concepts and mathematical proofs. As already known to educators in ancient Greece, models bring mathematics closer to the public. The new 3D printing technology makes the realization of such tools more accessible than ever. This is an updated version of a paper included in the book Low-Cost 3D Printing for Science, Education and Sustainable Devel...

  13. New generation of 3D desktop computer interfaces

    Science.gov (United States)

    Skerjanc, Robert; Pastoor, Siegmund

    1997-05-01

    Today's computer interfaces use 2-D displays showing windows, icons and menus and support mouse interactions for handling programs and data files. The interface metaphor is that of a writing desk with (partly) overlapping sheets of documents placed on its top. Recent advances in the development of 3-D display technology give the opportunity to take the interface concept a radical stage further by breaking the design limits of the desktop metaphor. The major advantage of the envisioned 'application space' is that it offers an additional, immediately perceptible dimension to clearly and constantly visualize the structure and current state of interrelations between documents, videos, application programs and networked systems. In this context, we describe the development of a visual operating system (VOS). Under VOS, applications appear as objects in 3-D space. Users can graphically connect selected objects to enable communication between the respective applications. VOS includes a general concept of visual and object-oriented programming for tasks ranging from low-level programming up to high-level application configuration. In order to enable practical operation in an office or at home for many hours, the system should be very comfortable to use. Since the typical 3-D equipment used, e.g., in virtual-reality applications (head-mounted displays, data gloves) is rather cumbersome and straining, we suggest using off-head displays and contact-free interaction techniques. In this article, we introduce an autostereoscopic 3-D display and associated video-based interaction techniques which allow viewpoint-dependent imaging (by head tracking) and visually controlled modification of data objects and links (by gaze tracking, e.g., to pick 3-D objects just by looking at them).

  14. Refined 3d-3d correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Alday, Luis F.; Genolini, Pietro Benetti; Bullimore, Mathew; Loon, Mark van [Mathematical Institute, University of Oxford, Andrew Wiles Building,Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2017-04-28

    We explore aspects of the correspondence between Seifert 3-manifolds and 3d N=2 supersymmetric theories with a distinguished abelian flavour symmetry. We give a prescription for computing the squashed three-sphere partition functions of such 3d N=2 theories constructed from boundary conditions and interfaces in a 4d N=2* theory, mirroring the construction of Seifert manifold invariants via Dehn surgery. This is extended to include links in the Seifert manifold by the insertion of supersymmetric Wilson-’t Hooft loops in the 4d N=2* theory. In the presence of a mass parameter for the distinguished flavour symmetry, we recover aspects of refined Chern-Simons theory with complex gauge group, and in particular construct an analytic continuation of the S-matrix of refined Chern-Simons theory.

  15. 3D Printing of Fluid Flow Structures

    OpenAIRE

    Taira, Kunihiko; Sun, Yiyang; Canuto, Daniel

    2017-01-01

    We discuss the use of 3D printing to physically visualize (materialize) fluid flow structures. Such 3D models can serve as a refreshing hands-on means to gain deeper physical insights into the formation of complex coherent structures in fluid flows. In this short paper, we present a general procedure for taking 3D flow field data and producing a file format that can be supplied to a 3D printer, with two examples of 3D printed flow structures. A sample code to perform this process is also prov...
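
    The general pipeline described above (scalar field, isosurface, printable mesh) can be sketched as follows; this is not the authors' sample code, and it assumes scikit-image is available for the marching cubes step.

```python
import numpy as np
from skimage import measure  # marching cubes (scikit-image)

# Hedged sketch: extract an isosurface from a 3D scalar field (e.g. a vortex
# criterion computed from a flow solution) and write it as an ASCII STL file.

def field_to_stl(field, level, path, name="flow_structure"):
    verts, faces, _, _ = measure.marching_cubes(field, level=level)
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in faces:
            v0, v1, v2 = verts[tri]
            n = np.cross(v1 - v0, v2 - v0)
            n = n / (np.linalg.norm(n) + 1e-12)
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# Synthetic stand-in field (not real flow data) just to produce a mesh.
x = np.linspace(-1.0, 1.0, 64)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
field = np.exp(-10.0 * (X**2 + Y**2)) * np.cos(4.0 * np.pi * Z)
field_to_stl(field, level=0.3, path="flow_structure.stl")
```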

  16. A 3d game in python

    OpenAIRE

    Xu, Minghui

    2014-01-01

    3D games have been widely accepted and loved by many players, and more and more kinds of 3D games have been developed to meet their needs. Nowadays the most common programming language for 3D game development is C++. Python is a high-level scripting language; it is simple and clear, and its concise syntax can speed up the development cycle. This project was to develop a 3D game using only Python. The game is about how a cat lives in the street. In order to live, the player need...
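
    The abstract is cut off before it names an engine, so the following is only one common way to start a 3D game in pure Python, using the Panda3D engine; treating Panda3D as the thesis' actual choice would be an assumption.

```python
# Hedged sketch: a minimal Panda3D application (Panda3D is an assumption here,
# not necessarily the engine used in the thesis described above).
from direct.showbase.ShowBase import ShowBase

class CatGame(ShowBase):
    def __init__(self):
        super().__init__()
        # Load a sample model shipped with Panda3D as a stand-in street scene.
        scene = self.loader.loadModel("models/environment")
        scene.reparentTo(self.render)
        scene.setScale(0.25)
        scene.setPos(-8, 42, 0)

if __name__ == "__main__":
    CatGame().run()
```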

  17. A 3d-3d appetizer

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Du; Ye, Ke [Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA, 91125 (United States)

    2016-11-02

    We test the 3d-3d correspondence for theories that are labeled by Lens spaces. We find a full agreement between the index of the 3d N=2 “Lens space theory” T[L(p,1)] and the partition function of complex Chern-Simons theory on L(p,1). In particular, for p=1, we show how the familiar S^3 partition function of Chern-Simons theory arises from the index of a free theory. For large p, we find that the index of T[L(p,1)] becomes a constant independent of p. In addition, we study T[L(p,1)] on the squashed three-sphere S_b^3. This enables us to see clearly, at the level of partition function, to what extent G_ℂ complex Chern-Simons theory can be thought of as two copies of Chern-Simons theory with compact gauge group G.

  18. Model for 3D-visualization of streams and techno-economic estimate of locations for construction of small hydropower plants

    International Nuclear Information System (INIS)

    Izeiroski, Subija

    2012-01-01

    The main research of this dissertation is focused on the development of a model for the preliminary assessment of the hydropower potential for small hydropower plant construction using a Geographic Information System - GIS. For this purpose, in the first part of the dissertation a contemporary methodological approach for 3D visualization of the land surface and river streams on a GIS platform is developed. In this methodological approach, digitized maps at a scale of 1:25000 are used as input graphical data, where each map covers an area of 10x14 km and consists of many layers of graphic data in shape (vector) format. Using GIS tools, a digital elevation model - DEM - has been obtained from the input point and isohyetal contour data layers with different interpolation techniques, which is further used for the derivation of additional graphic maps with useful land surface parameters such as slope raster maps, hillshade models of the surface, different maps with hydrologic parameters and many others. The main focus of the research is directed toward developing contemporary methodological approaches, based on GIS systems, for the assessment of the hydropower potential and the selection of suitable locations for small hydropower plant construction - SHPs, especially in mountainous and hilly areas that are rich in water resources. For this purpose, a practical analysis has been carried out on a study area which encompasses the watershed of the Brajchanska River on the east side of Prespa Lake. The main emphasis in the analysis of suitable locations for SHP construction is placed on techno-engineering criteria, and in this context a topographic analysis has been made regarding the slope (gradient) both of all river streams and of particular streams. A hydrological analysis regarding the flow rates (discharges) has also been made. The slope analysis is executed at a pixel (cell) level as well as at a segment (line) level along a given stream. The slope value at segment level gives in GIS
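
    To make the slope-raster step described above concrete, here is a minimal sketch (not from the dissertation) of deriving a slope map from a DEM grid with numpy; the synthetic DEM, the 25 m cell size and the 8-degree screening threshold are illustrative assumptions, not values from the study.

        import numpy as np

        cell_size = 25.0                                        # grid resolution in metres (assumed)
        rows, cols = np.mgrid[0:200, 0:200]
        dem = 900.0 + 3.0 * rows + 40.0 * np.sin(cols / 15.0)   # synthetic hillside, not real data

        # Elevation gradients along the row and column axes, then slope in degrees per cell.
        dz_dy, dz_dx = np.gradient(dem, cell_size)
        slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

        # Simple screening: flag cells steeper than a chosen threshold (8 degrees here),
        # a crude proxy for stream segments with enough head for a small hydropower plant.
        candidate_mask = slope_deg > 8.0
        print(f"mean slope: {slope_deg.mean():.1f} deg, candidate cells: {candidate_mask.sum()}")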

  19. Automated 3-D Radiation Mapping

    International Nuclear Information System (INIS)

    Tarpinian, J. E.

    1991-01-01

    This work describes an automated radiation detection and imaging system which combines several state-of-the-art technologies to produce a portable but very powerful visualization tool for planning work in radiation environments. The system combines a radiation detection system, a computerized radiation imaging program, and computerized 3-D modeling to automatically locate and map radiation fields. Measurements are automatically collected, and imaging techniques are used to produce colored 'isodose' images of the measured radiation fields. The isodose lines from the images are then superimposed over the 3-D model of the area. The final display shows the various components in a room and their associated radiation fields. The use of an automated radiation detection system increases the quality of the radiation survey measurements obtained. The additional use of a three-dimensional display allows easier visualization of the area and associated radiological conditions than two-dimensional sketches.
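
    As a rough illustration of the isodose-imaging step described above (not the system's actual software), the sketch below interpolates scattered dose-rate measurements onto a grid and draws labelled isodose contours with SciPy and Matplotlib; the survey points, dose rates and contour levels are invented for the example.

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.interpolate import griddata

        rng = np.random.default_rng(1)
        points = rng.uniform(0, 10, size=(40, 2))                         # survey positions (m), invented
        dose = 5.0 / (0.5 + np.linalg.norm(points - [6.0, 4.0], axis=1))  # dose rate with one hot spot

        # Interpolate the scattered measurements onto a regular grid covering the room footprint.
        gx, gy = np.mgrid[0:10:200j, 0:10:200j]
        dose_grid = griddata(points, dose, (gx, gy), method="cubic")

        # Draw labelled isodose lines at fixed dose-rate levels.
        contours = plt.contour(gx, gy, dose_grid, levels=[0.5, 1.0, 2.0, 4.0], cmap="jet")
        plt.clabel(contours, fmt="%.1f")
        plt.xlabel("x (m)")
        plt.ylabel("y (m)")
        plt.savefig("isodose_map.png")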

  20. Do-It-Yourself: 3D Models of Hydrogenic Orbitals through 3D Printing

    Science.gov (United States)

    Griffith, Kaitlyn M.; de Cataldo, Riccardo; Fogarty, Keir H.

    2016-01-01

    Introductory chemistry students often have difficulty visualizing the 3-dimensional shapes of the hydrogenic electron orbitals without the aid of physical 3D models. Unfortunately, commercially available models can be quite expensive. 3D printing offers a solution for producing models of hydrogenic orbitals. 3D printing technology is widely…

  1. 3D future internet media

    CERN Document Server

    Dagiuklas, Tasos

    2014-01-01

    This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D Media, and quality of user experience (QoE). The main contributions are based on the results of the FP7 European Project ROMEO, which focuses on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the Future Internet (www.ict-romeo.eu). The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user’s context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of constant video quality to both fixed and mobile users. ROMEO will design and develop hybrid-networking solutions that co...

  2. Novel 3D media technologies

    CERN Document Server

    Dagiuklas, Tasos

    2015-01-01

    This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D Media, and quality of user experience (QoE). The contributions are based on the results of the FP7 European Project ROMEO, which focuses on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the future Internet. The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user’s context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of consistent video quality to fixed and mobile users. ROMEO will present hybrid networking solutions that combine the DVB-T2 and DVB-NGH broadcas...

  3. Technical feasibility of 2D-3D coregistration for visualization of self-expandable microstents to facilitate coil embolization of broad-based intracranial aneurysms: an in vitro study

    Energy Technology Data Exchange (ETDEWEB)

    Richter, Gregor [University of Erlangen-Nuernberg, Department of Neuroradiology, Erlangen (Germany); Kreisklinikum Siegen, Department of Radiology and Neuroradiology, Siegen (Germany); Pfister, Marcus [Siemens AG, Healthcare Sector, Forchheim (Germany); Struffert, Tobias; Engelhorn, Tobias; Doelken, Marc; Doerfler, Arnd [University of Erlangen-Nuernberg, Department of Neuroradiology, Erlangen (Germany); Spiegel, Martin; Hornegger, Joachim [University of Erlangen, Department of Informatics 5, Erlangen (Germany)

    2009-12-15

    The use of self-expandable microstents for the treatment of broad-based intracranial aneurysms is widespread. However, poor fluoroscopic visibility of the stents remains disadvantageous during the coiling procedure. Flat detector angiographic computed tomography (ACT) provides high-resolution imaging of microstents, even though integration of this imaging modality in the neurointerventional workflow has not been widely reported. An acrylic glass model was used to simulate the situation of a broad-based sidewall aneurysm. After insertion of a self-expandable microstent, ACT was performed. The resulting 3D dataset of the microstent was subsequently projected into a conventional 2D fluoroscopic roadmap. This 3D visualization of the stent supported the coil embolization procedure of the in vitro aneurysm. In vitro 2D-3D coregistration with integration of 3D ACT data of a self-expandable microstent in a conventional 2D roadmap is feasible. Unsatisfactory stent visibility is a constraint in clinical cases with complex parent vessel anatomy and challenging aneurysm geometry; hence, this technique may potentially be useful in such cases. In our opinion, the clinical feasibility and utility of this new technique should be verified in a clinical aneurysm embolization study series using 2D-3D coregistration. (orig.)

  4. Technical feasibility of 2D-3D coregistration for visualization of self-expandable microstents to facilitate coil embolization of broad-based intracranial aneurysms: an in vitro study

    International Nuclear Information System (INIS)

    Richter, Gregor; Pfister, Marcus; Struffert, Tobias; Engelhorn, Tobias; Doelken, Marc; Doerfler, Arnd; Spiegel, Martin; Hornegger, Joachim

    2009-01-01

    The use of self-expandable microstents for the treatment of broad-based intracranial aneurysms is widespread. However, poor fluoroscopic visibility of the stents remains disadvantageous during the coiling procedure. Flat detector angiographic computed tomography (ACT) provides high-resolution imaging of microstents, even though integration of this imaging modality in the neurointerventional workflow has not been widely reported. An acrylic glass model was used to simulate the situation of a broad-based sidewall aneurysm. After insertion of a self-expandable microstent, ACT was performed. The resulting 3D dataset of the microstent was subsequently projected into a conventional 2D fluoroscopic roadmap. This 3D visualization of the stent supported the coil embolization procedure of the in vitro aneurysm. In vitro 2D-3D coregistration with integration of 3D ACT data of a self-expandable microstent in a conventional 2D roadmap is feasible. Unsatisfactory stent visibility is a constraint in clinical cases with complex parent vessel anatomy and challenging aneurysm geometry; hence, this technique may potentially be useful in such cases. In our opinion, the clinical feasibility and utility of this new technique should be verified in a clinical aneurysm embolization study series using 2D-3D coregistration. (orig.)

  5. 3-D volume rendering visualization for calculated distributions of diesel spray; Diesel funmu kyodo suchi keisan kekka no sanjigen volume rendering hyoji

    Energy Technology Data Exchange (ETDEWEB)

    Yoshizaki, T; Imanishi, H; Nishida, K; Yamashita, H; Hiroyasu, H; Kaneda, K [Hiroshima University, Hiroshima (Japan)

    1997-10-01

    A three-dimensional visualization technique based on the volume rendering method has been developed in order to translate calculated results of a diesel combustion simulation into realistic spray and flame images. This paper presents an overview of the diesel combustion model which has been developed at Hiroshima University, a description of the three-dimensional visualization technique, and some examples of spray and flame images generated by this visualization technique. 8 refs., 8 figs., 1 tab.

  6. “Lend a Hand” Project Helps Students: Improved Spatial Visualization Skills Through Engaging in Hands-On 3-D Printed Prosthetics Project During a 9th Grade Engineering Course

    OpenAIRE

    Smith, Shaunna; Talley, Kimberly

    2018-01-01

    Research shows that high spatial ability is linked to success and persistence in STEM. Empirical investigations often report a gender gap in favor of male students. The purpose of this research study was to assess changes to 9th grade engineering students’ spatial visualization skills through engagement in a nine-week collaborative 3-D printed prosthetics project embedded within their existing “Beginning Concepts of Engineering” course curriculum. Using concurrent mixed methods, this study ...

  7. 3D virtual exhibition

    DEFF Research Database (Denmark)

    Tournay, Bruno; Rüdiger, Bjarne

    2006-01-01

    3D digital model of the courtyard of the School of Architecture (Arkitektskolen) with a virtual exhibition of graduation projects from the summer 2006 graduating class. 10 pp.

  8. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    Science.gov (United States)

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach that allows tracking of deformable structures in 3D ultrasound sequences. Our method consists of obtaining the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation in the presence of different ultrasound shortcomings, including speckle noise, large shadows and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Underwater 3D filming

    Directory of Open Access Journals (Sweden)

    Roberto Rinaldi

    2014-12-01

    Full Text Available After an experimental phase of many years, 3D filming is now effective and successful. Improvements are still possible, but the film industry has achieved memorable success at the 3D box office due to the overall quality of its products. Special environments such as space (“Gravity”) and the underwater realm seem perfect candidates to be reproduced in 3D. “Filming in space” was possible in “Gravity” using special effects and computer graphics. The underwater realm is still difficult to handle. Until not long ago, underwater filming in 3D was not as easy and effective as filming in 2D. After almost 3 years of research, a French, Austrian and Italian team realized a perfect tool to film underwater, in 3D, without any constraints. This allows filmmakers to bring the audience deep inside an environment where they most probably will never have the chance to be.

  10. Fault-related dolomitization in the Vajont Limestone (Southern Alps, Italy): photogrammetric 3D outcrop reconstruction, visualization with textured surfaces, and structural analysis

    OpenAIRE

    Bistacchi, Andrea; Balsamo, Fabrizio; Storti, Fabrizio; Mozafari, Mahtab; Swennen, Rudy; Solum, John; Taberner, Conxita

    2013-01-01

    The Vajont Gorge (Dolomiti Bellunesi, Italy) provides spectacular outcrops of Jurassic limestones (Vajont Limestone Formation) in which Mesozoic and Alpine faults and fracture corridors are continuously exposed. Some of these faults acted as conduits for fluids, resulting in structurally-controlled dolomitization of the Vajont Limestone, associated with significant porosity increase. We carried out a 3D surface characterization of the outcrops, combining high resolution topography and imaging...

  11. Creation, Visualization and 3D Printing of Online Collections of Three Dimensional Educative Models with Low-Cost Technologies. Practical Case of Canarian Marine Fossil Heritage

    Directory of Open Access Journals (Sweden)

    Jose Luis SAORIN PÉREZ

    2016-12-01

    Full Text Available In many educational settings, tangible objects are used to enhance learning (models, replicas of works of art, fossils...). When knowledge is disseminated through virtual environments, the value of these tangible objects is sometimes lost. New low-cost technologies make it possible to solve this problem, enabling teachers to include access to and manipulation of three-dimensional objects in their virtual classrooms. This article describes the process of creation and dissemination of three-dimensional, interactive educational content for learning in a virtual environment. As a practical study, we have worked on the Canarian marine fossil heritage. Fossils are used as tangible material in paleontology teaching; however, they are not available for work outside the classroom. For this work, a selection of 18 fossils has been digitized in 3D. The 3D files obtained are available to students in an online environment, allowing download, multi-touch display and interaction on mobile devices. In addition, if students prefer, they can print them using a 3D printer. Finally, an experience was carried out with 70 university students who, after accessing the online files, responded to a questionnaire to assess the materials produced.

  12. LandSIM3D: real-time 3D modelling of geographic data

    Directory of Open Access Journals (Sweden)

    Lambo Srl

    2009-03-01

    Full Text Available LandSIM3D: real-time 3D modelling of geographic data. LandSIM3D allows an existing, geo-referenced landscape to be modelled in 3D in only a few hours, offering powerful landscape analysis and understanding tools. 3D projects can then be inserted into the existing landscape with ease and precision, and the project alternatives and their impact can be visualized and studied in their immediate environment. The complex future evolution of the landscape can also be simulated, and the landscape model can be manipulated interactively and shared more easily with colleagues. For that reason, LandSIM3D is different from traditional 3D imagery solutions, which are normally reserved for computer graphics experts. For more information about LandSIM3D, go to www.landsim3d.com.

  13. Magmatic Systems in 3-D

    Science.gov (United States)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offset studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As the number of visual objects grows in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., the alpha channel). In this way, the co-variation between different datasets can be investigated
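
    As a toy illustration of the voxel opacity filtering described above (not the authors' workflow), the sketch below color-codes a small synthetic reflectivity cube, makes weak-amplitude voxels fully transparent, and renders only the remaining voxels with Matplotlib; the field, colormap and 0.75 threshold are arbitrary choices standing in for a real volume-rendering engine.

        import numpy as np
        import matplotlib.pyplot as plt

        # Small synthetic "reflectivity" cube with a bright, melt-lens-like horizon.
        x, y, z = np.mgrid[0:16, 0:16, 0:16]
        amp = np.exp(-((z - 8.0) ** 2) / 6.0) * np.sin(x / 2.5)

        # Color-code every voxel, then use the alpha channel as an opacity filter:
        # weak amplitudes become fully transparent and are not drawn at all.
        norm = (amp - amp.min()) / np.ptp(amp)
        colors = plt.get_cmap("seismic")(norm)
        colors[..., 3] = np.where(norm > 0.75, 0.6, 0.0)

        fig = plt.figure()
        ax = fig.add_subplot(projection="3d")
        ax.voxels(colors[..., 3] > 0, facecolors=colors)   # draw only the opaque voxels
        plt.savefig("voxel_volume.png")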

  14. Underwater 3D filming

    OpenAIRE

    Rinaldi, Roberto

    2014-01-01

    After an experimental phase of many years, 3D filming is now effective and successful. Improvements are still possible, but the film industry has achieved memorable success at the 3D box office due to the overall quality of its products. Special environments such as space (“Gravity”) and the underwater realm seem perfect candidates to be reproduced in 3D. “Filming in space” was possible in “Gravity” using special effects and computer graphics. The underwater realm is still difficult to handle. Unde...

  15. Blender 3D cookbook

    CERN Document Server

    Valenza, Enrico

    2015-01-01

    This book is aimed at professionals who already have good 3D CGI experience with commercial packages and have now decided to try the open source Blender and want to experiment with something more complex than the average tutorials on the web. However, it is also aimed at intermediate Blender users who simply want to go some steps further. It is taken for granted that you already know how to move inside the Blender interface, that you already have 3D modeling knowledge, as well as knowledge of basic 3D modeling and rendering concepts, for example, edge-loops, n-gons, or samples. In any case, it'

  16. DELTA 3D PRINTER

    Directory of Open Access Journals (Sweden)

    ȘOVĂILĂ Florin

    2016-07-01

    Full Text Available 3D printing is a widely used process in industry, the generic name being “rapid prototyping”. The essential advantage of a 3D printer is that it allows designers to produce a prototype in a very short time, which is then tested and quickly remodeled, considerably reducing the time required to get from the prototype phase to the final product. At the same time, through this technique we can achieve components with very precise forms, complex pieces that, through classical methods, could have been accomplished only over a large amount of time. In this paper, the stages of executing a 3D model are presented, as well as the physical realization of a Delta 3D printer based on the model.

  17. Professional Papervision3D

    CERN Document Server

    Lively, Michael

    2010-01-01

    Professional Papervision3D describes how Papervision3D works and how real world applications are built, with a clear look at essential topics such as building websites and games, creating virtual tours, and Adobe's Flash 10. Readers learn important techniques through hands-on applications, and build on those skills as the book progresses. The companion website contains all code examples, video step-by-step explanations, and a collada repository.

  18. Evaluation of the 3d Urban Modelling Capabilities in Geographical Information Systems

    Science.gov (United States)

    Dogru, A. O.; Seker, D. Z.

    2010-12-01

    Geographical Information System (GIS) technology, which provides successful solutions to basic spatial problems, is currently widely used in 3-dimensional (3D) modeling of physical reality with its developing visualization tools. The modeling of large and complicated phenomena is a challenging problem for the computer graphics currently in use. However, it is possible to visualize such phenomena in 3D by using computer systems. 3D models are used in developing computer games, military training, urban planning, tourism, etc. The use of 3D models for the planning and management of urban areas is a very popular issue for city administrations. In this context, 3D city models are produced and used for various purposes. However, the requirements of the models vary depending on the type and scope of the application. While a high level of visualization, where photorealistic visualization techniques are widely used, is required for touristic and recreational purposes, an abstract visualization of the physical reality is generally sufficient for the communication of thematic information. The visual variables, which are the principal components of cartographic visualization, such as color, shape, pattern, orientation, size, position, and saturation, are used for communicating the thematic information. These kinds of 3D city models are called abstract models. Standardization of the technologies used for 3D modeling is now available through the use of CityGML. CityGML implements several novel concepts to support interoperability, consistency and functionality. For example, it supports different Levels-of-Detail (LoD), which may arise from independent data collection processes and are used for efficient visualization and efficient data analysis. In one CityGML data set, the same object may be represented in different LoD simultaneously, enabling the analysis and visualization of the same object with regard to different degrees of resolution. Furthermore, two CityGML data sets
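
    As a small, hedged sketch of the multi-LoD idea just described (not code from the paper), the snippet below uses Python's standard ElementTree to list which LoD geometries each building in a CityGML 2.0 file carries; the input file name is hypothetical, and the tag list covers only a few common LoD representations, so real datasets (or CityGML 3.0 files) may need different namespaces and tags.

        import xml.etree.ElementTree as ET

        NS = {
            "bldg": "http://www.opengis.net/citygml/building/2.0",
            "gml": "http://www.opengis.net/gml",
        }
        LOD_TAGS = ["lod1Solid", "lod2Solid", "lod2MultiSurface", "lod3MultiSurface"]

        tree = ET.parse("city_model.gml")                      # hypothetical CityGML 2.0 file
        for building in tree.findall(".//bldg:Building", NS):
            gml_id = building.get("{http://www.opengis.net/gml}id", "unnamed")
            lods = [tag for tag in LOD_TAGS
                    if building.find(f"bldg:{tag}", NS) is not None]
            print(f"{gml_id}: {', '.join(lods) if lods else 'no LoD geometry found'}")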

  19. Automated Visualization and Quantification of Spiral Artery Blood Flow Entering the First-Trimester Placenta, Using 3-