WorldWideScience

Sample records for volume rendering algorithm

  1. Feed-forward volume rendering algorithm for moderately parallel MIMD machines

    Science.gov (United States)

    Yagel, Roni

    1993-01-01

Algorithms for direct volume rendering on parallel and vector processors are investigated. Volumes are transformed efficiently on parallel processors by dividing the data into slices and beams of voxels. Equal-sized sets of slices along one axis are distributed to processors. Parallelism is achieved at two levels. Because each slice can be transformed independently of the others, processors transform their assigned slices with no communication, thus providing maximum possible parallelism at the first level. Within each slice, consecutive beams are incrementally transformed using coherency in the transformation computation. Coherency across slices can also be exploited to further enhance performance. This coherency yields the second level of parallelism through the use of vector processing or pipelining. Other ongoing efforts include investigations into image reconstruction techniques, load balancing strategies, and improving performance.
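The two-level scheme described, independent slices distributed across processors, with incremental beam transforms inside each slice, can be sketched serially as below. Everything here (helper names, the affine setup) is an illustrative reconstruction under assumptions, not the paper's code:

```python
import numpy as np

def split_slices(n_slices, n_procs):
    # Equal-sized contiguous sets of slices along one axis, one set per processor.
    bounds = np.linspace(0, n_slices, n_procs + 1).astype(int)
    return [range(bounds[i], bounds[i + 1]) for i in range(n_procs)]

def transform_slice(z, rotation, shape):
    # Transform all voxel coordinates of slice z. Within the slice,
    # consecutive beams (rows) are obtained incrementally: moving one step
    # along y adds a constant delta to the transformed coordinates, so the
    # full matrix multiply is done only once per slice.
    ny, nx = shape
    base = rotation @ np.array([0.0, 0.0, float(z)])   # origin of the slice
    dy = rotation @ np.array([0.0, 1.0, 0.0])          # per-beam increment
    dx = rotation @ np.array([1.0, 0.0, 0.0])          # per-voxel increment
    out = np.empty((ny, nx, 3))
    row = base.copy()
    for y in range(ny):
        pos = row.copy()
        for x in range(nx):
            out[y, x] = pos
            pos += dx            # incremental transform along the beam
        row += dy                # coherency across consecutive beams
    return out

# Each slice is independent, so 'processors' would need no communication.
parts = split_slices(8, 3)
assert sum(len(p) for p in parts) == 8
```

With an identity rotation the incremental scheme reproduces the voxel coordinates exactly; with a general rotation it trades one matrix-vector product per voxel for one vector addition.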

  2. Local and Global Illumination in the Volume Rendering Integral

    Energy Technology Data Exchange (ETDEWEB)

    Max, N; Chen, M

    2005-10-21

This article is intended as an update of the major survey by Max [1] on optical models for direct volume rendering. It provides a brief overview of the subject scope covered by [1] and brings recent developments, such as new shadow algorithms and refraction rendering, into perspective. In particular, we examine three fundamental aspects of direct volume rendering, namely the volume rendering integral, local illumination models, and global illumination models, in a wavelength-independent manner. We also review developments in spectral volume rendering, in which visible light is treated as a form of electromagnetic radiation and optical models are implemented in conjunction with representations of spectral power distribution. This survey can provide a basis for, and encourage, new efforts to develop and use complex illumination models to achieve better realism and perception through optical correctness.

  3. Real-time volume rendering of digital medical images on an iOS device

    Science.gov (United States)

    Noon, Christian; Holub, Joseph; Winer, Eliot

    2013-03-01

Performing high-quality 3D visualizations on mobile devices, while tantalizingly close in many areas, is still a quite difficult task. This is especially true for 3D volume rendering of digital medical images. Enabling it would give medical personnel a powerful tool to diagnose and treat patients and to train the next generation of physicians. This research focuses on performing real-time volume rendering of digital medical images on iOS devices using custom-developed GPU shaders for orthogonal texture slicing. An interactive volume renderer was designed and developed with several new features, including dynamic modification of render resolutions, an incremental render loop, a shader-based clipping algorithm to support OpenGL ES 2.0, and an internal backface culling algorithm for properly sorting rendered geometry with alpha blending. The application was developed using several application programming interfaces (APIs), such as OpenSceneGraph (OSG) as the primary graphics renderer, coupled with iOS Cocoa Touch for user interaction and DCMTK for DICOM I/O. The application rendered volume datasets of over 450 slices at up to 50-60 frames per second, depending on the specific model of the iOS device. All rendering is done locally on the device, so no Internet connection is required.

  4. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

Texture-based volume rendering is a memory-intensive algorithm whose performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory, resulting in incoherent memory access patterns and low cache hit rates in certain cases. The distance between samples taken by the threads of an atomic scheduling unit (e.g., a warp of 32 threads in CUDA) of the GPU is a crucial factor affecting texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. In addition, a pipelined color blending approach is introduced, and warp-level GPU operations are leveraged to improve the efficiency of parallel execution on the GPU. The rendering performance of Warp Marching is view-independent, and it outperforms existing empty-space-skipping techniques in scenarios that require rendering large dynamic volumes at low image resolutions. Through a series of micro-benchmarks and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance gains over existing sampling methods.
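The cache effect underlying the strategy can be illustrated with a toy direct-mapped cache model (purely illustrative; real GPU texture caches and the paper's actual mappings differ). Threads of a warp that sample adjacent pixels at the same depth touch consecutive addresses and share cache lines; threads that each march their own ray at independent depths scatter across the volume:

```python
import numpy as np

def cache_hits(addresses, line_bytes=128, elem_bytes=4, lines=64):
    # Tiny direct-mapped cache model: count hits for a sequence of
    # element addresses (an assumption-laden stand-in for real hardware).
    cache = {}
    hits = 0
    per_line = line_bytes // elem_bytes
    for a in addresses:
        line = a // per_line
        slot = line % lines
        if cache.get(slot) == line:
            hits += 1
        else:
            cache[slot] = line
    return hits

# A 256^3 volume stored x-fastest; 32 "threads" of a warp each take one sample.
n = 256
def addr(x, y, z):
    return (z * n + y) * n + x

# Coherent (Warp-Marching-like): threads cover 32 adjacent x-pixels at one depth.
coherent = [addr(x, 10, 10) for x in range(32)]
# Incoherent: each thread samples at a different depth along its own ray.
scattered = [addr(t, 10, t) for t in range(32)]

assert cache_hits(coherent) > cache_hits(scattered)
```

In this model the coherent warp misses once and hits on the remaining 31 fetches, while the scattered warp misses on every fetch.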

  5. Remote volume rendering pipeline for mHealth applications

    Science.gov (United States)

    Gutenko, Ievgeniia; Petkov, Kaloian; Papadopoulos, Charilaos; Zhao, Xin; Park, Ji Hwan; Kaufman, Arie; Cha, Ronald

    2014-03-01

We introduce a novel remote volume rendering pipeline for medical visualization targeted at mHealth (mobile health) applications. The necessity of such a pipeline stems from the large size of the medical imaging data produced by current CT and MRI scanners, combined with the complexity of volumetric rendering algorithms. For example, the resolution of typical CT Angiography (CTA) data easily reaches 512^3 voxels and can exceed 6 gigabytes in size by spanning over the time domain while capturing a beating heart. This explosion in data size makes data transfers to mobile devices challenging, and even when the transfer problem is resolved, the rendering performance of the device still remains a bottleneck. To deal with this issue, we propose a thin-client architecture, where the entirety of the data resides on a remote server, where the image is rendered and then streamed to the client mobile device. We utilize the display and interaction capabilities of the mobile device, while performing interactive volume rendering on a server capable of handling large datasets. Specifically, upon user interaction the volume is rendered on the server and encoded into an H.264 video stream. H.264 is ubiquitously hardware accelerated, resulting in faster compression and lower power requirements. The choice of low-latency CPU- and GPU-based encoders is particularly important in enabling the interactive nature of our system. We demonstrate a prototype of our framework using various medical datasets on commodity tablet devices.

  6. Transform coding for hardware-accelerated volume rendering.

    Science.gov (United States)

    Fout, Nathaniel; Ma, Kwan-Liu

    2007-01-01

    Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
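The consolidation described, folding dequantization into the inverse transform so that most of the reprojection can be precomputed, can be illustrated with a toy 1D block codec. This is a generic orthonormal-DCT sketch under assumed parameters, not the paper's scheme:

```python
import numpy as np

# Asymmetric transform coding in miniature: expensive encode (transform +
# quantize) happens offline; decode is one precomputable matrix multiply.

def dct_matrix(n):
    # Orthonormal DCT-II basis as an n x n matrix.
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

D = dct_matrix(4)

def encode(block, q=8.0):
    # Forward transform + scalar quantization (offline, can be slow).
    return np.round(D @ block / q)

def decode(coeffs, q=8.0):
    # Dequantization folded into the inverse transform: the matrix (q * D.T)
    # is fixed and can be precomputed once for all blocks.
    return (q * D.T) @ coeffs

block = np.array([100.0, 104.0, 108.0, 112.0])
rec = decode(encode(block))
assert np.max(np.abs(rec - block)) < 8.0   # bounded quantization error
```

Because the basis is orthonormal, the reconstruction error is bounded by the quantizer step, and the decoder's cost is independent of how much work the encoder spent optimizing the coefficients.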

  7. Evaluating progressive-rendering algorithms in appearance design tasks.

    Science.gov (United States)

    Jiawei Ou; Karlik, Ondrej; Křivánek, Jaroslav; Pellacini, Fabio

    2013-01-01

Progressive rendering is becoming a popular alternative to precomputational approaches to appearance design. However, progressive algorithms create images exhibiting visual artifacts at early stages. A user study investigated these artifacts' effects on user performance in appearance design tasks. Novice and expert subjects performed lighting and material editing tasks with four algorithms: random path tracing, quasirandom path tracing, progressive photon mapping, and virtual-point-light rendering. Both the novices and experts strongly preferred path tracing to progressive photon mapping and virtual-point-light rendering. None of the participants preferred random path tracing to quasirandom path tracing or vice versa; the same held between progressive photon mapping and virtual-point-light rendering. The user workflow didn't differ significantly across the four algorithms. The Web Extras include a video showing how the four progressive-rendering algorithms converge (at http://youtu.be/ck-Gevl1e9s), the source code used, and other supplementary materials.

  8. Real-time 3-dimensional fetal echocardiography with an instantaneous volume-rendered display: early description and pictorial essay.

    Science.gov (United States)

    Sklansky, Mark S; DeVore, Greggory R; Wong, Pierre C

    2004-02-01

    Random fetal motion, rapid fetal heart rates, and cumbersome processing algorithms have limited reconstructive approaches to 3-dimensional fetal cardiac imaging. Given the recent development of real-time, instantaneous volume-rendered sonographic displays of volume data, we sought to apply this technology to fetal cardiac imaging. We obtained 1 to 6 volume data sets on each of 30 fetal hearts referred for formal fetal echocardiography. Each volume data set was acquired over 2 to 8 seconds and stored on the system's hard drive. Rendered images were subsequently processed to optimize translucency, smoothing, and orientation and cropped to reveal "surgeon's eye views" of clinically relevant anatomic structures. Qualitative comparison was made with conventional fetal echocardiography for each subject. Volume-rendered displays identified all major abnormalities but failed to identify small ventricular septal defects in 2 patients. Important planes and views not visualized during the actual scans were generated with minimal processing of rendered image displays. Volume-rendered displays tended to have slightly inferior image quality compared with conventional 2-dimensional images. Real-time 3-dimensional echocardiography with instantaneous volume-rendered displays of the fetal heart represents a new approach to fetal cardiac imaging with tremendous clinical potential.

  9. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    KAUST Repository

    Sicat, Ronell Barrera; Kruger, Jens; Moller, Torsten; Hadwiger, Markus

    2014-01-01

This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range.

  10. Efficient visibility encoding for dynamic illumination in direct volume rendering.

    Science.gov (United States)

    Kronander, Joel; Jönsson, Daniel; Löw, Joakim; Ljung, Patric; Ynnerman, Anders; Unger, Jonas

    2012-03-01

    We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights, and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multiresolution grid over the extent of the volume. Our method enables high-frequency shadows in the spatial domain, but is limited to a low-frequency approximation of visibility and illumination in the angular domain. In a first pass, level of detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid online computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility, and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
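The key trick, projecting both visibility and lighting into spherical harmonics so that the shading integral collapses to a dot product of coefficient vectors, can be sketched with bands 0 and 1 only. This is a toy stand-in (assumed light and visibility functions, Monte Carlo projection), not the paper's multiresolution grid:

```python
import numpy as np

def sh_basis(d):
    # Real spherical harmonics, bands 0-1 (4 coefficients), unit direction d.
    x, y, z = d
    c0 = 0.5 * np.sqrt(1.0 / np.pi)
    c1 = 0.5 * np.sqrt(3.0 / np.pi)
    return np.array([c0, c1 * y, c1 * z, c1 * x])

def project(f, dirs, weights):
    # Quadrature projection of a spherical function f onto the SH basis.
    return sum(w * f(d) * sh_basis(d) for d, w in zip(dirs, weights))

rng = np.random.default_rng(0)
v = rng.normal(size=(4000, 3))
dirs = v / np.linalg.norm(v, axis=1, keepdims=True)   # uniform sphere samples
weights = np.full(len(dirs), 4.0 * np.pi / len(dirs))

light = lambda d: max(0.0, d[2])                       # assumed light from +z
visibility = lambda d: 1.0 if d[2] > -0.2 else 0.0     # assumed mostly unoccluded

L = project(light, dirs, weights)
V = project(visibility, dirs, weights)
integral_sh = float(L @ V)      # SH estimate of the integral of light * visibility
integral_ref = float(sum(w * light(d) * visibility(d) for d, w in zip(dirs, weights)))
```

For low-frequency functions like these, the 4-coefficient dot product lands close to the directly integrated reference, which is exactly why the method is limited to low-frequency visibility and illumination in the angular domain.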

  11. Development of volume rendering module for real-time visualization system

    International Nuclear Information System (INIS)

    Otani, Takayuki; Muramatsu, Kazuhiro

    2000-03-01

Volume rendering is a method for visualizing the distribution of physical quantities in three-dimensional space from any viewpoint by tracing rays onto an ordinary two-dimensional display. By producing translucent images, it presents interior as well as surface information. It is therefore regarded as a very useful and important means of analyzing the results of scientific computations, although it unfortunately has the disadvantage of requiring a large amount of computing time. This report describes the algorithm and performance of the volume rendering software developed as an important functional module in the real-time visualization system PATRAS. This module can directly visualize computed results on a BFC grid. Moreover, parts of the software have already been sped up by a newly developed heuristic technique. This report also includes an investigation of speeding up the software through parallel processing. (author)

  12. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    KAUST Repository

    Sicat, Ronell Barrera

    2014-12-31

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
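The consistency argument can be made concrete with a toy example, using a binary transfer function and an 8-bin histogram standing in for the paper's 4D Gaussian-mixture pdfs. Low-pass filtering intensities and then classifying (TF of the mean) differs from classifying and then averaging (mean of the TF); a stored per-voxel pdf lets the coarse level compute the latter:

```python
import numpy as np

tf = lambda v: (v > 0.5).astype(float)       # hypothetical binary transfer function

block = np.array([0.1, 0.1, 0.9, 0.9])       # a 4-voxel neighbourhood
standard = tf(np.array([block.mean()]))[0]   # low-pass then classify: 0.0

# Keep a pdf (here: a histogram) of the neighbourhood instead of its mean.
hist, edges = np.histogram(block, bins=8, range=(0, 1))
pdf = hist / hist.sum()
centers = 0.5 * (edges[:-1] + edges[1:])
consistent = float(pdf @ tf(centers))        # expected TF value under the pdf

fine = tf(block).mean()                      # ground truth at full resolution
assert consistent == fine and standard != fine
```

Applying the transfer function to the pdf reproduces the full-resolution classification (0.5 here), whereas classifying the downsampled mean gives 0.0, the visible inconsistency between resolution levels that the representation avoids.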

  13. Technical analysis of volume-rendering algorithms: application in low-contrast structures using liver vascularisation as a model

    International Nuclear Information System (INIS)

    Cademartiri, Filippo; Luccichenti, Giacomo; Runza, Giuseppe; Bartolotta, Tommaso Vincenzo; Midiri, Massimo; Gualerzi, Massimo; Brambilla, Lorenzo; Coruzzi, Paolo; Soliani, Paolo; Sianesi, Mario

    2005-01-01

Purpose: To assess the influence of pre-set volume rendering opacity curves (OCs) on image quality and to identify which absolute parameters (density of the aorta, hepatic parenchyma, and portal vein) affect the visualization of portal vascular structures (low-contrast structures). Materials and methods: Twenty-two patients underwent dual-phase spiral CT with the following parameters: collimation 3 mm, pitch 2, increment 1 mm. Three scans were performed: one without contrast medium and two after the injection of contrast material (conventionally identified as 'arterial' and 'portal'). The images were sent to a workstation running on an NT platform, equipped with post-processing software for three-dimensional (3D) reconstruction, to generate volume-rendered images of the vascular supply to the liver. Correlations between the absolute density values of the aorta, liver, and portal vein, the OC parameters, and image quality were assessed. Results: 3D images generated using pre-set OCs obtained a much lower overall quality score than those produced with OCs set by the operator. High contrast between the liver and the portal vein, for example during the portal vascular phase, allows wider windows, thus improving image quality. Conversely, the OC for the parenchymal-phase scans must have a high gradient in order to better differentiate the vascular structures from the surrounding hepatic parenchyma. Conclusions: Image features considered to be of interest by the operator cannot be reproduced simply by means of pre-set OCs. Due to their strong individual variability, automatic 3D algorithms cannot be universally applied: they should be adapted to both image and patient characteristics.

  14. Volume rendering in treatment planning for moving targets

    Energy Technology Data Exchange (ETDEWEB)

    Gemmel, Alexander [GSI-Biophysics, Darmstadt (Germany); Massachusetts General Hospital, Boston (United States); Wolfgang, John A.; Chen, George T.Y. [Massachusetts General Hospital, Boston (United States)

    2009-07-01

Advances in computer technologies have facilitated the development of tools for 3-dimensional visualization of CT-data sets with volume rendering. The company Fovia has introduced a high definition volume rendering engine (HDVR™, Fovia Inc., Palo Alto, USA) that is capable of representing large CT data sets with high user interactivity even on standard PCs. Fovia provides a software development kit (SDK) that offers control of all the features of the rendering engine. We extended the SDK by functionalities specific to the task of treatment planning for moving tumors. This included navigation of the patient's anatomy in beam's eye view, fast point-and-click measurement of lung tumor trajectories as well as estimation of range perturbations due to motion by calculation of (differential) water equivalent path lengths for protons and carbon ions on 4D-CT data sets. We present patient examples to demonstrate the advantages and disadvantages of volume rendered images as compared to standard 2-dimensional axial plane images. Furthermore, we show an example of a range perturbation analysis. We conclude that volume rendering is a powerful technique for the representation and analysis of large time resolved data sets in treatment planning.
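The range-perturbation quantity mentioned can be illustrated with a toy computation. The numbers, the `wepl` helper, and the two-phase setup are illustrative assumptions, not the authors' tool:

```python
import numpy as np

# Water-equivalent path length (WEPL): accumulate relative stopping power
# along the beam; the motion-induced range perturbation is the difference
# in WEPL between two 4D-CT phases along the same ray.

def wepl(rsp_along_ray, step_mm):
    # rsp_along_ray: relative stopping power sampled at fixed steps.
    return float(np.sum(rsp_along_ray) * step_mm)

inhale = np.array([0.001] * 30 + [1.05] * 20)   # mostly lung, then soft tissue
exhale = np.array([0.001] * 20 + [1.05] * 30)   # tumour shifted into the beam
dw = wepl(exhale, 1.0) - wepl(inhale, 1.0)      # differential WEPL between phases
assert dw > 0
```

A positive differential WEPL of this kind is what signals that a proton or carbon-ion beam would overshoot or undershoot as the anatomy moves.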

  15. Color-coded volume rendering for three-dimensional reconstructions of CT data

    International Nuclear Information System (INIS)

    Rieker, O.; Mildenberger, P.; Thelen, M.

    1999-01-01

Purpose: To evaluate a technique for colored three-dimensional reconstruction without segmentation. Material and methods: Color-coded volume-rendered images were reconstructed from the volume data of 25 thoracic, abdominal, musculoskeletal, and vascular helical CT scans using commercial software. The voxels were color-encoded in the following manner: opacity, hue, lightness, and chroma were assigned to each of four classes defined by CT number. Color-coded reconstructions were compared with the corresponding grey-scale reconstructions. Results: Color-coded volume rendering enabled realistic visualization of pathologic findings when there was a sufficient difference in CT density. Segmentation was necessary in some cases to demonstrate small details in a complex volume. Conclusion: Color-coded volume rendering allowed lifelike visualization of CT volumes without the need for segmentation in most cases. (orig.)

  16. Wobbled splatting: a fast perspective volume rendering method for simulation of x-ray images from CT

    International Nuclear Information System (INIS)

    Birkfellner, Wolfgang; Seemann, Rudolf; Figl, Michael; Hummel, Johann; Ede, Christopher; Homolka, Peter; Yang Xinhui; Niederer, Peter; Bergmann, Helmar

    2005-01-01

3D/2D registration, the automatic assignment of a global rigid-body transformation matching the coordinate systems of patient and preoperative volume scan using projection images, is an important topic in image-guided therapy and radiation oncology. A crucial part of most 3D/2D registration algorithms is the fast computation of digitally rendered radiographs (DRRs) to be compared iteratively to radiographs or portal images. Since registration is an iterative process, fast generation of DRRs, which are perspective summed-voxel renderings, is desired. In this note, we present a simple and rapid method for generating DRRs based on splat rendering. As opposed to conventional splatting, antialiasing of the resulting images is achieved not by computing a discrete point spread function (a so-called footprint), but by stochastic distortion of either the voxel positions in the volume scan or by simulating a focal spot of the x-ray tube with non-zero diameter. Our method generates slightly blurred DRRs suitable for registration purposes at frame rates of approximately 10 Hz when rendering volume images with a size of 30 MB. (note)
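The wobbling idea, jittering projected voxel positions instead of evaluating a footprint kernel, might be sketched as follows. This is an orthographic toy stand-in for the perspective DRR, with assumed sizes and jitter scale:

```python
import numpy as np

rng = np.random.default_rng(1)

def splat_drr(points, densities, size=64, jitter=0.0):
    # Summed-voxel projection: drop z (orthographic stand-in for the
    # perspective projection), optionally wobble x/y stochastically,
    # and accumulate density per pixel.
    img = np.zeros((size, size))
    xy = points[:, :2] + rng.normal(scale=jitter, size=(len(points), 2))
    ij = np.clip(np.round(xy).astype(int), 0, size - 1)
    np.add.at(img, (ij[:, 1], ij[:, 0]), densities)
    return img

pts = np.array([[32.0, 32.0, z] for z in range(16)])   # a column of voxels
dens = np.ones(len(pts))
sharp = splat_drr(pts, dens)              # no wobble: all energy in one pixel
soft = splat_drr(pts, dens, jitter=1.0)   # wobble spreads (blurs) the energy
assert sharp[32, 32] == 16 and np.isclose(soft.sum(), 16)
```

The total projected density is preserved while the jitter redistributes it over neighbouring pixels, giving the slightly blurred, antialiased look the method relies on, without ever evaluating a footprint.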

  17. Three-dimensional spiral CT during arterial portography: comparison of three rendering techniques.

    Science.gov (United States)

    Heath, D G; Soyer, P A; Kuszyk, B S; Bliss, D F; Calhoun, P S; Bluemke, D A; Choti, M A; Fishman, E K

    1995-07-01

The three most common techniques for three-dimensional reconstruction are surface rendering, maximum-intensity projection (MIP), and volume rendering. Surface-rendering algorithms model objects as collections of geometric primitives that are displayed with surface shading. The MIP algorithm renders an image by selecting the voxel with the maximum intensity signal along a line extended from the viewer's eye through the data volume. Volume-rendering algorithms sum the weighted contributions of all voxels along the line. Each technique has advantages and shortcomings that must be considered during selection of one for a specific clinical problem and during interpretation of the resulting images. With surface rendering, sharp-edged, clear three-dimensional reconstruction can be completed on modest computer systems; however, overlapping structures cannot be visualized and artifacts are a problem. MIP is a computationally fast technique, but it does not allow depiction of overlapping structures, and its images are three-dimensionally ambiguous unless depth cues are provided. Both surface rendering and MIP use less than 10% of the image data. In contrast, volume rendering uses nearly all of the data, allows demonstration of overlapping structures, and engenders few artifacts, but it requires substantially more computer power than the other techniques.
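The three reprojection rules described above can be contrasted on a single illustrative ray; the opacity mapping and the isosurface threshold below are assumptions for the sketch, not part of the source:

```python
import numpy as np

ray = np.array([0.1, 0.3, 0.9, 0.4, 0.2])   # voxel intensities along one ray

mip = ray.max()                              # maximum-intensity projection

def composite(values, opacity):
    # Volume rendering: front-to-back sum of weighted contributions,
    # attenuated by the transmittance accumulated so far.
    color, transmittance = 0.0, 1.0
    for v in values:
        a = opacity(v)
        color += transmittance * a * v
        transmittance *= 1.0 - a
    return color

vr = composite(ray, opacity=lambda v: v)     # assumed opacity = intensity

surface = next(v for v in ray if v >= 0.5)   # first-hit isosurface at 0.5
assert mip == 0.9 and surface == 0.9 and 0.0 < vr < 1.0
```

Note how MIP and the surface hit each keep a single voxel per ray (a sliver of the data), while the composite uses every sample, which is the "less than 10% versus nearly all of the data" contrast drawn in the abstract.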

  18. Functionality and Performance Visualization of the Distributed High Quality Volume Renderer (HVR)

    KAUST Repository

    Shaheen, Sara

    2012-07-01

Volume rendering systems are designed to enable scientists and a variety of experts to interactively explore volume data through 3D views of the volume. However, volume rendering techniques are computationally intensive tasks. Parallel distributed volume rendering systems and multi-threading architectures have been suggested as natural solutions for providing acceptable volume rendering performance on very large volume data, such as electron microscopy (EM) data. This in turn adds another level of complexity when developing and maintaining volume rendering systems. Given that distributed parallel volume rendering systems are among the most complex systems to develop, trace, and debug, traditional debugging tools do not provide enough support, and there is a great demand for tools that facilitate working with such systems. This can be achieved by utilizing the power of computer graphics to design visual representations that reflect how the system works and that visualize its current performance state. The work presented falls within the field of software visualization, in which visualization is used to help understand various software. This thesis presents a number of visual representations that reflect functionality and performance aspects of the distributed HVR, a high-quality volume rendering system that uses various techniques to visualize large volumes interactively. The representations cover different stages of HVR's parallel volume rendering pipeline, along with means of performance analysis through flexible and dynamic visualizations that reflect the current state of the system and can be manipulated at runtime. These visualizations aim to facilitate debugging, understanding, and analyzing the distributed HVR.

  19. Advantages and disadvantages of 3D ultrasound of thyroid nodules including thin slice volume rendering

    Directory of Open Access Journals (Sweden)

    Slapa Rafal

    2011-01-01

Background: The purpose of this study was to assess the advantages and disadvantages of 3D gray-scale and power Doppler ultrasound, including thin slice volume rendering (TSVR), applied to the evaluation of thyroid nodules. Methods: A retrospective evaluation by two observers of the volumes of 71 thyroid nodules (55 benign, 16 cancers) was performed using the new TSVR technique. A dedicated 4D ultrasound scanner with an automatic 6-12 MHz 4D probe was used. Statistical analysis was performed with Stata v. 8.2. Results: Multiple logistic regression analysis demonstrated that independent risk factors for thyroid cancer identified by 3D ultrasound include: (a) ill-defined borders of the nodule on MPR presentation, (b) a lobulated shape of the nodule in the c-plane, and (c) a density of central vessels in the nodule within the minimal or maximal ranges. A combination of features provided a sensitivity of 100% and a specificity of 60-69% for thyroid cancer. Calcification/microcalcification-like echogenic foci on 3D ultrasound proved not to be a risk factor for thyroid cancer. Storage of the 3D data of the whole nodule enabled subsequent evaluation of new parameters and with new rendering algorithms. Conclusions: Our results indicate that 3D ultrasound is a practical and reproducible method for the evaluation of thyroid nodules. 3D ultrasound stores volumes comprising the whole lesion or organ. Future detailed evaluations of the data are possible, looking for features that were not fully appreciated at the time of collection or applying new algorithms for volume rendering in order to gain important information. Three-dimensional ultrasound data could be included in thyroid cancer databases. Further multicenter large-scale studies are warranted.

  20. Fast algorithm for the rendering of three-dimensional surfaces

    Science.gov (United States)

    Pritt, Mark D.

    1994-02-01

It is often desirable to draw a detailed and realistic representation of surface data on a computer graphics display. One such representation is a 3D shaded surface. Conventional techniques for rendering shaded surfaces are slow, however, and require substantial computational power. Furthermore, many techniques suffer from aliasing effects, which appear as jagged lines and edges. This paper describes an algorithm for the fast rendering of shaded surfaces without aliasing effects. It is much faster than conventional ray tracing and polygon-based rendering techniques and is suitable for interactive use. On an IBM RISC System/6000™ workstation it renders a 1000 × 1000 surface in about 7 seconds.

  1. Anisotropic 3D texture synthesis with application to volume rendering

    DEFF Research Database (Denmark)

    Laursen, Lasse Farnung; Ersbøll, Bjarne Kjær; Bærentzen, Jakob Andreas

    2011-01-01

...images using a 12.1 megapixel camera. Next, we extend the volume rendering pipeline by creating a transfer function which yields not only color and opacity from the input intensity, but also texture coordinates for our synthesized 3D texture. Thus, we add texture to the volume rendered images... This method is applied to a high quality visualization of a pig carcass, where samples of meat, bone, and fat have been used to produce the anisotropic 3D textures.

  2. Graphical User Interfaces for Volume Rendering Applications in Medical Imaging

    OpenAIRE

    Lindfors, Lisa; Lindmark, Hanna

    2002-01-01

    Volume rendering applications are used in medical imaging in order to facilitate the analysis of three-dimensional image data. This study focuses on how to improve the usability of graphical user interfaces of these systems, by gathering user requirements. This is achieved by evaluations of existing systems, together with interviews and observations at clinics in Sweden that use volume rendering to some extent. The usability of the applications of today is not sufficient, according to the use...

  3. Depth of Field Effects for Interactive Direct Volume Rendering

    KAUST Repository

    Schott, Mathias; Pascal Grosset, A.V.; Martin, Tobias; Pegoraro, Vincent; Smith, Sean T.; Hansen, Charles D.

    2011-01-01

    In this paper, a method for interactive direct volume rendering is proposed for computing depth of field effects, which previously were shown to aid observers in depth and size perception of synthetically generated images. The presented technique extends those benefits to volume rendering visualizations of 3D scalar fields from CT/MRI scanners or numerical simulations. It is based on incremental filtering and as such does not depend on any precomputation, thus allowing interactive explorations of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions. © 2011 The Author(s).

  4. Depth of Field Effects for Interactive Direct Volume Rendering

    KAUST Repository

    Schott, Mathias

    2011-06-01

    In this paper, a method for interactive direct volume rendering is proposed for computing depth of field effects, which previously were shown to aid observers in depth and size perception of synthetically generated images. The presented technique extends those benefits to volume rendering visualizations of 3D scalar fields from CT/MRI scanners or numerical simulations. It is based on incremental filtering and as such does not depend on any precomputation, thus allowing interactive explorations of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions. © 2011 The Author(s).

  5. Dynamic Resolution in GPU-Accelerated Volume Rendering to Autostereoscopic Multiview Lenticular Displays

    Directory of Open Access Journals (Sweden)

    Daniel Ruijters

    2008-09-01

    The generation of multiview stereoscopic images of large volume rendered data demands an enormous amount of calculations. We propose a method for hardware-accelerated volume rendering of medical data sets to multiview lenticular displays, offering interactive manipulation throughout. The method is based on buffering GPU-accelerated direct volume rendered visualizations of the individual views from their respective focal spot positions, and composing the output signal for the multiview lenticular screen in a second pass. This compositing phase is facilitated by the fact that the view assignment per subpixel is static, and therefore can be precomputed. We decouple the resolution of the individual views from the resolution of the composited signal, and adjust the resolution on the fly, depending on the available processing resources, in order to maintain interactive refresh rates. The optimal resolution for the volume rendered views is determined by means of an analysis of the lattice of the output signal for the lenticular screen in the Fourier domain.
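The static per-subpixel view assignment described above can be precomputed once per screen and reused every frame. A minimal compositing sketch, assuming a generic slanted-lenticular mapping (the real mapping is display-specific, so the formula below is an illustrative placeholder):

```python
import numpy as np

def composite_lenticular(views, n_views, height, width, slant=1):
    """Compose an interleaved output frame for a multiview lenticular screen.

    views : array (n_views, height, width, 3), one rendered image per view.
    The per-subpixel view index is static for a given screen, so it is
    precomputed once; the slanted-lens formula here is illustrative only.
    """
    y = np.arange(height)[:, None, None]
    x = np.arange(width)[None, :, None]
    c = np.arange(3)[None, None, :]                 # RGB subpixel index
    view_map = (3 * x + c + slant * y) % n_views    # precomputed assignment
    out = np.take_along_axis(
        views.transpose(1, 2, 3, 0),                # (H, W, 3, n_views)
        view_map[..., None], axis=3)[..., 0]
    return out, view_map
```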

  6. Haptic rendering foundations, algorithms, and applications

    CERN Document Server

    Lin, Ming C

    2008-01-01

    For a long time, human beings have dreamed of a virtual world where it is possible to interact with synthetic entities as if they were real. It has been shown that the ability to touch virtual objects increases the sense of presence in virtual environments. This book provides an authoritative overview of state-of-the-art haptic rendering algorithms and their applications. The authors examine various approaches and techniques for designing touch-enabled interfaces for a number of applications, including medical training, model design, and maintainability analysis for virtual prototyping, scienti...

  7. Mucosal detail at CT virtual reality: surface versus volume rendering.

    Science.gov (United States)

    Hopper, K D; Iyriboz, A T; Wise, S W; Neuman, J D; Mauger, D T; Kasales, C J

    2000-02-01

    To evaluate computed tomographic virtual reality with volumetric versus surface rendering. Virtual reality images were reconstructed for 27 normal or pathologic colonic, gastric, or bronchial structures in four ways: with the transition zone (a) reconstructed separately from the wall by using volume rendering; (b) with attenuation equal to air; (c) with attenuation equal to wall (soft tissue); and (d) with attenuation halfway between air and wall. The four reconstructed images were randomized. Four experienced imagers blinded to the reconstruction graded them from best to worst with predetermined criteria. All readers rated images with the transition zone as a separate structure as overwhelmingly superior (P < ...). Virtual reality is best with volume rendering, with the transition zone (mucosa) between the wall and air reconstructed as a separate structure.

  8. In Vivo CT Direct Volume Rendering: A Three-Dimensional Anatomical Description of the Heart

    International Nuclear Information System (INIS)

    Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Cacciola, Alberto; Cinquegrani, Maria; Duca, Antonio; Rizzo, Giuseppina; Alati, Emanuela; Gaeta, Michele; Milardi, Demetrio

    2016-01-01

    Since cardiac anatomy continues to play an important role in the practice of medicine and in the development of medical devices, studying the heart in three dimensions is particularly useful for understanding its real structure, function and proper location in the body. This study demonstrates a refined use of direct volume rendering, processing the data set images obtained by Computed Tomography (CT) of the hearts of 5 subjects aged between 18 and 42 years (2 male, 3 female), with no history of any overt cardiac disease. The cardiac structure in the CT images was first extracted from the thorax by manually marking the regions of interest on the computer, and then stacked to create new volumetric data. The use of a specific algorithm allowed us to observe the heart and the skeleton of the thorax at the same time, with a good perception of depth. Moreover, in all examined subjects, it was possible to depict the heart's structure and position within the body and to study the integrity of the papillary muscles, the fibrous tissue of the cardiac valves and chordae tendineae, and the course of the coronary arteries. Our results demonstrated that one of the greatest advantages of algorithmic modification of direct volume rendering parameters is that this method provides much necessary information in a single radiologic study. It implies better accuracy in the study of the heart, being complementary to other diagnostic methods and facilitating therapeutic planning.

  9. Technical analysis of volume-rendering algorithms: application in low-contrast structures using liver vascularisation as a model

    Energy Technology Data Exchange (ETDEWEB)

    Cademartiri, Filippo [Erasmus Medical Center, Rotterdam (Netherlands); Luccichenti, Giacomo [Fondazione Biomedica Europea ONLUS, Roma (Italy); Runza, Giuseppe; Bartolotta, Tommaso Vincenzo; Midiri, Massimo [Palermo Univ., Palermo (Italy). Sezione di scienze radiologiche; Gualerzi, Massimo; Brambilla, Lorenzo; Coruzzi, Paolo [Parma Univ., Parma (Italy). UO di prevenzione e riabilitazione vascolare, Fondazione Don C. Gnocchi ONLUS; Soliani, Paolo; Sianesi, Mario [Parma Univ., Parma (Italy). Dipartimento di chirurgia

    2005-04-01

    Purpose: To assess the influence of pre-set volume rendering opacity curves (OC) on image quality and to identify which absolute parameters (density of the aorta, hepatic parenchyma and portal vein) affect visualization of portal vascular structures (low-contrast structures). Materials and methods: Twenty-two patients underwent a dual-phase spiral CT with the following parameters: collimation 3 mm, pitch 2, increment 1 mm. Three scans were performed: one without contrast medium and the latter two after the injection of contrast material (conventionally identified as 'arterial' and 'portal'). The images were sent to a workstation running on an NT platform equipped with post-processing software allowing three-dimensional (3D) reconstructions to generate volume-rendered images of the vascular supply to the liver. Correlations between the absolute values of aorta, liver and portal vein density, OC parameters, and image quality were assessed. Results: 3D images generated using pre-set OC obtained a much lower overall quality score than those produced with OC set by the operator. High contrast between the liver and the portal vein, for example during the portal vascular phase, allows wider windows, thus improving image quality. Conversely, the OC in the parenchymal phase scans must have a high gradient in order to better differentiate between the vascular structures and the surrounding hepatic parenchyma. Conclusions: Image features considered to be of interest by the operator cannot be simplified by means of pre-set OC. Due to their strong individual variability, automatic 3D algorithms cannot be universally applied: they should be adapted to both image and patient characteristics.
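An opacity curve of the kind the operators adjusted can be sketched as a piecewise-linear mapping from CT attenuation (HU) to opacity, with a steep segment separating enhanced portal vein from parenchyma. The control points below are illustrative assumptions, not the presets evaluated in the paper:

```python
import numpy as np

def opacity_curve(hu, control_hu, control_alpha):
    """Piecewise-linear opacity curve: CT value (HU) -> opacity in [0, 1]."""
    return np.interp(hu, control_hu, control_alpha)

# steep-gradient curve for the parenchymal phase (illustrative values only:
# parenchyma ~100 HU post-contrast stays transparent, enhanced vessels opacify)
ctrl_hu    = [-1000.0, 80.0, 120.0, 200.0, 3000.0]
ctrl_alpha = [0.0, 0.0, 0.1, 0.9, 1.0]
```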

  10. On the design of a real-time volume rendering engine

    NARCIS (Netherlands)

    Smit, Jaap; Wessels, H.L.F.; van der Horst, A.; Bentum, Marinus Jan

    1992-01-01

    An architecture for a Real-Time Volume Rendering Engine (RT-VRE) is given, capable of computing 750 × 750 × 512 samples from a 3D dataset at a rate of 25 images per second. The RT-VRE uses for this purpose 64 dedicated rendering chips, cooperating with 16 RISC-processors. A plane interpolator...

  12. Forensic 3D Visualization of CT Data Using Cinematic Volume Rendering: A Preliminary Study.

    Science.gov (United States)

    Ebert, Lars C; Schweitzer, Wolf; Gascho, Dominic; Ruder, Thomas D; Flach, Patricia M; Thali, Michael J; Ampanozi, Garyfalia

    2017-02-01

    The 3D volume-rendering technique (VRT) is commonly used in forensic radiology. Its main function is to explain medical findings to state attorneys, judges, or police representatives. New visualization algorithms permit the generation of almost photorealistic volume renderings of CT datasets. The objective of this study is to present and compare a variety of radiologic findings to illustrate the differences between, and the advantages and limitations of, the current VRT and the physically based cinematic rendering technique (CRT). Seventy volunteers were shown VRT and CRT reconstructions of 10 different cases. They were asked to mark the findings on the images and rate them in terms of realism and understandability. A total of 48 of the 70 questionnaires were returned and included in the analysis. On the basis of most of the findings presented, CRT appears to be equal or superior to VRT with respect to the realism and understandability of the visualized findings. Overall, in terms of realism, the difference between the techniques was statistically significant (p < 0.05). CRT, like conventional VRT, is not primarily intended for diagnostic radiologic image analysis, and therefore it should be used primarily as a tool to deliver visual information in the form of radiologic image reports. Using CRT for forensic visualization might have advantages over using VRT if conveying a high degree of visual realism is of importance. Most of the shortcomings of CRT have to do with the software being an early prototype.

  13. Adaptive B-spline volume representation of measured BRDF data for photorealistic rendering

    Directory of Open Access Journals (Sweden)

    Hyungjun Park

    2015-01-01

    Measured bidirectional reflectance distribution function (BRDF) data have been used to represent the complex interaction between lights and surface materials for photorealistic rendering. However, their massive size makes them hard to adopt in practical rendering applications. In this paper, we propose an adaptive method for B-spline volume representation of measured BRDF data. It basically performs approximate B-spline volume lofting, which decomposes the problem into three sub-problems of multiple B-spline curve fitting along the u-, v-, and w-parametric directions. In particular, it makes efficient use of knots in the multiple B-spline curve fitting and thereby accomplishes adaptive knot placement along each parametric direction of the resulting B-spline volume. The proposed method is quite useful for achieving efficient data reduction while smoothing out noise and preserving the overall features of the BRDF data. By applying the B-spline volume models of real materials for rendering, we show that the B-spline volume models are effective in preserving the features of material appearance and are suitable for representing BRDF data.
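The lofting decomposition can be sketched as three passes of multiple B-spline curve fitting, one per parametric direction. The sketch below uses a fixed clamped knot vector rather than the paper's adaptive knot placement, and a synthetic smooth grid as a stand-in for measured BRDF samples:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline, BSpline

def lsq_fit_axis(data, u, t, k=3):
    # multiple B-spline curve fitting: one least-squares fit for every
    # "column" of samples along axis 0 (make_lsq_spline accepts N-D y)
    return make_lsq_spline(u, data, t, k=k).c

n, k = 20, 3
u = np.linspace(0.0, 1.0, n)
t = np.r_[[0.0] * (k + 1), [0.25, 0.5, 0.75], [1.0] * (k + 1)]  # clamped knots

# toy stand-in for a dense BRDF sample grid
X, Y, Z = np.meshgrid(u, u, u, indexing="ij")
data = np.sin(2 * np.pi * X) * np.cos(np.pi * Y) * Z

# lofting: fit along the u-, v-, then w-parametric direction
ctrl = data
for axis in range(3):
    ctrl = np.moveaxis(lsq_fit_axis(np.moveaxis(ctrl, axis, 0), u, t, k), 0, axis)

# evaluate the resulting B-spline volume back on the sample grid
recon = ctrl
for axis in range(3):
    recon = np.moveaxis(BSpline(t, np.moveaxis(recon, axis, 0), k)(u), 0, axis)
```

The 20×20×20 sample grid is compressed to a 7×7×7 control lattice while the reconstruction stays close to the data, which is the data-reduction effect the abstract describes.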

  14. Interactive dual-volume rendering visualization with real-time fusion and transfer function enhancement

    Science.gov (United States)

    Macready, Hugh; Kim, Jinman; Feng, David; Cai, Weidong

    2006-03-01

    Dual-modality imaging scanners combining functional PET and anatomical CT constitute a challenge in volumetric visualization that can be limited by high computational demand and expense. This study aims at providing physicians with multi-dimensional visualization tools to navigate and manipulate the data running on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All the software was developed using OpenGL and Silicon Graphics Inc. Volumizer, and tested on a Pentium mobile CPU in a PC notebook with 64 MB of graphics memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest in volume renderings of PET/CT. This works by assigning a non-linear opacity to the voxels, thus allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. As the PET and CT are rendered independently, manipulations can be applied to individual volumes; for instance, a transfer function can be applied to the CT to reveal the lung boundary while the fusion ratio between the CT and PET is adjusted to enhance the contrast of a tumour region, with the resulting manipulated data sets fused together in real time as the adjustments are made. In addition to conventional navigation and manipulation tools, such as scaling, LUT, volume slicing, and others, our strategy permits efficient visualization of PET/CT volume rendering, which can potentially aid in interpretation and diagnosis.
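The exact shape of the "alpha-spike" function is not given in this record; a plausible sketch assigns a narrow opacity bump around a chosen intensity, with a separate per-voxel fusion step controlled by an adjustable ratio (both function names and the Gaussian spike shape are assumptions):

```python
import numpy as np

def alpha_spike(values, center, width, peak=1.0):
    """Non-linear "alpha-spike"-style opacity: voxels whose intensity lies
    in a narrow band around `center` become visible, the rest fade out.
    (The authors' exact spike shape is unspecified; a Gaussian bump is
    one plausible sketch.)"""
    return peak * np.exp(-(((values - center) / width) ** 2))

def fuse(ct_rgb, pet_rgb, ratio):
    # real-time per-voxel fusion with an adjustable PET/CT blending ratio
    return (1.0 - ratio) * ct_rgb + ratio * pet_rgb
```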

  15. Frequency Analysis of Gradient Estimators in Volume Rendering

    NARCIS (Netherlands)

    Bentum, Marinus Jan; Lichtenbelt, Barthold B.A.; Malzbender, Tom

    1996-01-01

    Gradient information is used in volume rendering to classify and color samples along a ray. In this paper, we present an analysis of the theoretically ideal gradient estimator and compare it to some commonly used gradient estimators. A new method is presented to calculate the gradient at arbitrary

  16. Immersive volume rendering of blood vessels

    Science.gov (United States)

    Long, Gregory; Kim, Han Suk; Marsden, Alison; Bazilevs, Yuri; Schulze, Jürgen P.

    2012-03-01

    In this paper, we present a novel method of visualizing flow in blood vessels. Our approach reads unstructured tetrahedral data, resamples it, and uses slice-based 3D texture volume rendering. Due to the sparse structure of blood vessels, we utilize an octree to efficiently store the resampled data by discarding empty regions of the volume. We use animation to convey time series data, a wireframe surface to give structure, and the StarCAVE, a 3D virtual reality environment, to add a fully immersive element to the visualization. Our tool has great value in interdisciplinary work, helping scientists collaborate with clinicians by improving the understanding of blood flow simulations. Full immersion in the flow field allows for a more intuitive understanding of the flow phenomena, and can be a great help to medical experts for treatment planning.
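The empty-region-discarding octree can be sketched as a recursive subdivision that stores nothing for all-zero octants, which is what makes sparse vessel data cheap to hold (a simplified CPU version; all names are illustrative):

```python
import numpy as np

def build_octree(vol, origin=(0, 0, 0), min_size=4):
    """Recursively subdivide a cubic volume, discarding empty octants.

    Returns a nested dict; `None` marks an all-zero (empty) region, so
    sparse data such as a vessel tree stores far fewer leaf blocks.
    """
    if not vol.any():
        return None                                  # empty region: store nothing
    n = vol.shape[0]
    if n <= min_size:
        return {"origin": origin, "data": vol}       # leaf block
    h = n // 2
    children = []
    for dz in (0, h):
        for dy in (0, h):
            for dx in (0, h):
                sub = vol[dz:dz + h, dy:dy + h, dx:dx + h]
                o = (origin[0] + dz, origin[1] + dy, origin[2] + dx)
                children.append(build_octree(sub, o, min_size))
    return {"origin": origin, "children": children}

def count_leaves(node):
    if node is None:
        return 0
    if "data" in node:
        return 1
    return sum(count_leaves(c) for c in node["children"])
```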

  17. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
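As one concrete instance of the data-decomposition ideas surveyed, a sort-last renderer splits the volume into slabs, renders each independently, and composites the partial images. The sketch below uses maximum-intensity projection so the compositing operator is a simple per-pixel max, and runs the "processors" serially for clarity:

```python
import numpy as np

def render_slab(slab):
    # each processor's local render: MIP along the view (first) axis
    return slab.max(axis=0)

def sort_last_mip(volume, n_procs):
    # data decomposition: split the volume into slabs, one per processor
    slabs = np.array_split(volume, n_procs, axis=0)
    partials = [render_slab(s) for s in slabs]   # the embarrassingly parallel part
    return np.maximum.reduce(partials)           # image-assembly / compositing step
```

Because max is associative and commutative, the composited result is identical regardless of how the volume is partitioned, which is what makes MIP a convenient example of sort-last parallel rendering.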

  18. View compensated compression of volume rendered images for remote visualization.

    Science.gov (United States)

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of a compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.

  19. A concept of volume rendering guided search process to analyze medical data set.

    Science.gov (United States)

    Zhou, Jianlong; Xiao, Chun; Wang, Zhiyan; Takatsuka, Masahiro

    2008-03-01

    This paper first presents a parallel-coordinates-based parameter control panel (PCP). The PCP is used to control the parameters of focal region-based volume rendering (FRVR) during data analysis. It uses a parallel-coordinates-style interface: different rendering parameters are represented as nodes on each axis, and renditions based on related parameters are connected using polylines to show the dependencies between renditions and parameters. Based on the PCP, a concept of a volume rendering guided search process is proposed. The search pipeline is divided into four phases. The different parameters of FRVR are recorded and modulated in the PCP during the search phases. The concept shows that volume visualization can play the role of guiding a search process in the rendition space, helping users efficiently find local structures of interest. The usability of the proposed approach is evaluated to show its effectiveness.

  20. Use of volume-rendered images in registration of nuclear medicine studies

    International Nuclear Information System (INIS)

    Wallis, J.W.; Miller, T.R.; Hsu, S.S.

    1995-01-01

    A simple operator-guided alignment technique based on volume-rendered images was developed to register tomographic nuclear medicine studies. For each of two three-dimensional data sets to be registered, volume-rendered images were generated in three orthogonal projections (x, y, z) using the method of maximum-activity projection. Registration was achieved as follows: (a) one of the rendering orientations (e.g. x) was chosen for manipulation; (b) the two-dimensional rendering was translated and rotated under operator control to achieve the best alignment as determined by visual assessment; (c) this rotation and translation was then applied to the underlying three-dimensional data set, with updating of the rendered images in each of the orthogonal projections; (d) another orientation was chosen, and the process repeated. Since manipulation was performed on the small two-dimensional rendered image, feedback was instantaneous. To aid the visual alignment, difference images and flicker images (toggling between the two data sets) were displayed. Accuracy was assessed by analysis of separate clinical data sets acquired without patient movement: after arbitrary rotation and translation of one of the two data sets, the two data sets were registered, and the mean registration error was 0.36 pixels, corresponding to a 2.44 mm registration error. Thus, accurate registration can be achieved in under 10 minutes using this simple technique. The accuracy of registration was also assessed using duplicate SPECT studies originating from separate reconstructions of the data from each of the detectors of a triple-head gamma camera.
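The two core operations, maximum-activity projection and applying a 2-D alignment back to the 3-D data set, can be sketched as follows. The axis conventions and helper names are assumptions for illustration, not the authors' code:

```python
import numpy as np
from scipy.ndimage import rotate, shift

def mip_views(vol):
    # maximum-activity projection renderings in three orthogonal directions
    return [vol.max(axis=i) for i in range(3)]

def apply_alignment(vol, axis, angle_deg, offsets):
    """Apply a rotation/translation found on a 2-D rendering to the 3-D set.

    `axis` selects the projection direction that was manipulated; an
    in-plane rotation of that rendering maps to a rotation of the volume
    within the corresponding image plane.
    """
    planes = [(1, 2), (0, 2), (0, 1)][axis]   # image plane for each view axis
    out = rotate(vol, angle_deg, axes=planes, reshape=False, order=1)
    return shift(out, offsets, order=1)
```

After each `apply_alignment`, the three `mip_views` would be regenerated, matching step (c) of the procedure above.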

  1. Lighting design for globally illuminated volume rendering.

    Science.gov (United States)

    Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    With the evolution of graphics hardware, high-quality global illumination has become available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects that are closer to real-world scenes, and has proven useful for enhancing volume data visualization to enable better depth and shape perception. However, setting up optimal lighting can be a nontrivial task for average users. Previous lighting design work for volume visualization did not consider global light transport. In this paper, we present a lighting design method for volume visualization employing global illumination. The resulting system takes into account the view- and transfer-function-dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light, which is not used by previous volume visualization systems. By also including global shadows and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone mapping operator that recovers visual details from overexposed areas while maintaining sufficient contrast in the dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is more clearly and correctly presented under the automatically generated light sources.
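The paper's tone mapping operator is not detailed in this record; a standard Reinhard-style global operator illustrates the stated goal of compressing overexposed highlights while preserving contrast in dark areas (this is a stand-in, not the authors' operator):

```python
import numpy as np

def reinhard_tonemap(lum, white=2.0):
    """Reinhard-style global tone mapping sketch.

    `white` is the smallest input luminance mapped to 1.0; values above it
    are compressed, while near-zero luminances pass through almost linearly,
    so dark-area contrast is retained.
    """
    return lum * (1.0 + lum / white**2) / (1.0 + lum)
```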

  2. Morphological pyramids in multiresolution MIP rendering of large volume data : Survey and new results

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.

    We survey and extend nonlinear signal decompositions based on morphological pyramids, and their application to multiresolution maximum intensity projection (MIP) volume rendering with progressive refinement and perfect reconstruction. The structure of the resulting multiresolution rendering...

  3. An Extension of Fourier-Wavelet Volume Rendering by View Interpolation

    NARCIS (Netherlands)

    Westenberg, Michel A.; Roerdink, Jos B.T.M.

    2001-01-01

    This paper describes an extension to Fourier-wavelet volume rendering (FWVR), which is a Fourier domain implementation of the wavelet X-ray transform. This transform combines integration along the line of sight with a simultaneous 2-D wavelet transform in the view plane perpendicular to this line.

  4. Efficient Algorithms for Real-Time GPU Volumetric Cloud Rendering with Enhanced Geometry

    Directory of Open Access Journals (Sweden)

    Carlos Jiménez de Parga

    2018-04-01

    This paper presents several new techniques for volumetric cloud rendering using efficient algorithms and data structures based on ray-tracing methods for cumulus generation, achieving an optimum balance between realism and performance. These techniques target applications such as flight simulations, computer games, and educational software, even with conventional graphics hardware. The contours of clouds are defined by implicit mathematical expressions or triangulated structures inside which volumetric rendering is performed. Novel techniques are used to reproduce the asymmetrical nature of clouds and the effects of light-scattering, with low computing costs. The work includes a new method to create randomized fractal clouds using a recursive grammar. The graphical results are comparable to those produced by state-of-the-art, hyper-realistic algorithms. These methods provide real-time performance, and are superior to particle-based systems. These outcomes suggest that our methods offer a good balance between realism and performance, and are suitable for use in the standard graphics industry.
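An implicit cloud contour of the kind described can be sketched as a union of spheres with a smooth density falloff toward each surface, inside which ray-marched volumetric rendering would accumulate opacity. This is an illustrative stand-in for the paper's implicit expressions and noise shaping:

```python
import numpy as np

def cumulus_density(points, spheres):
    """Density of an implicit cumulus built as a union of spheres.

    points  : (N, 3) sample positions, e.g. along a marched ray
    spheres : list of (center, radius) pairs defining the cloud contour;
              density is 1 at a sphere center and falls off linearly to 0
              at its surface (illustrative falloff, no noise shaping).
    """
    d = np.zeros(len(points))
    for c, r in spheres:
        dist = np.linalg.norm(points - np.asarray(c), axis=1)
        d = np.maximum(d, np.clip(1.0 - dist / r, 0.0, 1.0))
    return d
```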

  5. 3-D volume rendering visualization for calculated distributions of diesel spray

    Energy Technology Data Exchange (ETDEWEB)

    Yoshizaki, T; Imanishi, H; Nishida, K; Yamashita, H; Hiroyasu, H; Kaneda, K [Hiroshima University, Hiroshima (Japan)

    1997-10-01

    A three-dimensional visualization technique based on the volume rendering method has been developed in order to translate the calculated results of diesel combustion simulation into realistic spray and flame images. This paper presents an overview of the diesel combustion model developed at Hiroshima University, a description of the three-dimensional visualization technique, and some examples of spray and flame images generated by this visualization technique. 8 refs., 8 figs., 1 tab.

  6. Using neutrosophic graph cut segmentation algorithm for qualified rendering image selection in thyroid elastography video.

    Science.gov (United States)

    Guo, Yanhui; Jiang, Shuang-Quan; Sun, Baiqing; Siuly, Siuly; Şengür, Abdulkadir; Tian, Jia-Wei

    2017-12-01

    Recently, elastography has become very popular in clinical investigation for thyroid cancer detection and diagnosis. In an elastogram, the stress results of the thyroid are displayed using pseudo-colors. Due to the variation of the rendering results in different frames, it is difficult for radiologists to manually select the qualified frame image quickly and efficiently. The purpose of this study is to find the qualified rendering result in the thyroid elastogram. This paper employs an efficient thyroid ultrasound image segmentation algorithm based on a neutrosophic graph cut to find the qualified rendering images. Firstly, a thyroid ultrasound image is mapped into a neutrosophic set, and an indeterminacy filter is constructed to reduce the indeterminacy of the spatial and intensity information in the image. A graph is defined on the image, and the weight for each pixel is represented using the value after indeterminacy filtering. The segmentation results are obtained using a maximum-flow algorithm on the graph. Then the anatomic structure is identified in the thyroid ultrasound image. Finally, the rendering colors in these anatomic regions are extracted and validated to find the frames which satisfy the selection criteria. To test the performance of the proposed method, a thyroid elastogram dataset of 33 cases was built. An experienced radiologist manually evaluated the selection results of the proposed method. Experimental results demonstrate that the proposed method finds the qualified rendering frame with 100% accuracy. The proposed scheme assists radiologists in diagnosing thyroid diseases using the qualified rendering images.

  7. Fast DRR splat rendering using common consumer graphics hardware

    International Nuclear Information System (INIS)

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-01-01

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. A description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2 × 10^6 voxels is feasible at an update rate of 38 Hz, compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher-resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine.
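Wobbled splatting itself is not reproduced here; an unoptimized CPU reference shows the integral a DRR computes, summing attenuation along parallel rays and applying Beer-Lambert absorption (axis choice and names are illustrative):

```python
import numpy as np

def drr_raycast(mu_volume, step=1.0):
    """Digitally rendered radiograph by summing attenuation along rays
    parallel to axis 0 (a plain CPU reference; the paper's GPU-based
    wobbled splatting computes the same line integral far faster).

    mu_volume : 3-D array of linear attenuation coefficients
    step      : sample spacing along the ray
    """
    line_integral = mu_volume.sum(axis=0) * step
    return np.exp(-line_integral)          # Beer-Lambert transmitted intensity
```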

  8. Three-dimensional volume rendering of the ankle based on magnetic resonance images enables the generation of images comparable to real anatomy.

    Science.gov (United States)

    Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio

    2009-11-01

    We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon-bone-muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18-30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data.

  9. Diagnostic Accuracy of the Volume Rendering Images of Multi-Detector CT for the Detection of Lumbar Transverse Process Fractures

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yun Hak; Chun, Tong Jin [Dept. of Radiology, Eulji University Hospital, Daejeon (Korea, Republic of)

    2012-01-15

    To compare the accuracy of three-dimensional computed tomographic (3D CT) volume rendering techniques with axial images of multi-detector row computed tomography for identifying lumbar transverse process (LTP) fractures in trauma patients. We retrospectively evaluated 42 patients with back pain as a result of blunt trauma between January and June of 2010. Two radiologists examined the 3D CT volume rendering images independently. The confirmation of an LTP fracture was based on the consensus of the two radiologists on the axial images. The results of the 3D CT volume rendering images were compared with the axial images, and the diagnostic powers (sensitivity, specificity, and accuracy) were calculated. Seven of the 42 patients had twenty-five lumbar transverse process fractures. The diagnostic power of the 3D CT volume rendering technique was as accurate as the axial images (Reader 1: sensitivity 96%, specificity 100%, accuracy 99.9%; Reader 2: sensitivity 100%, specificity 99.8%, accuracy 99.8%). Agreement between the two radiologists was 99.8%. 3D CT volume rendering images can substitute for axial images in detecting lumbar transverse process fractures, with good image quality.
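The reported figures are standard confusion-matrix statistics. A minimal helper, with counts below chosen purely for illustration (they reproduce Reader 1's 96% sensitivity, not the study's actual per-level tallies):

```python
def diagnostic_power(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),          # fraction of fractures detected
        "specificity": tn / (tn + fp),          # fraction of normals cleared
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }
```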

  10. An interactive tool for CT volume rendering and sagittal plane-picking of the prostate for radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Jani, Ashesh B.; Pelizzari, Charles A.; Chen, George T.Y.; Grzezcszuk, Robert P.; Vijayakumar, Srinivasan

    1997-01-01

    Objective: Accurate and precise target volume and critical structure definition is a basic necessity in radiotherapy. The prostate, particularly the apex (an important potential site of recurrence in prostate cancer patients), is a challenging structure to define using any modality, including conventional axial CT. Invasive or expensive techniques, such as retrograde urethrography or MRI, could be avoided if localization of the prostate were possible using information already available on the planning CT. Our primary objective was to build a software tool to determine whether volume rendering and sagittal plane-picking, which are CT-based, noninvasive visualization techniques, were of utility in radiotherapy treatment planning for the prostate. Methods: Using AVS (Application Visualization System) on a Silicon Graphics Indigo 2 High Impact workstation, we have developed a tool that enables the clinician to efficiently navigate a CT volume and to use volume rendering and sagittal plane-picking to better define structures at any anatomic site. We applied the tool to the specific example of the prostate to compare the two visualization techniques with the current standard of axial CT. The prostate was defined on 80-slice CT scans (scanning thickness 4mm, pixel size 2mm x 2mm) of prostate cancer patients using axial CT images, volume-rendered CT images, and sagittal plane-picked images. Results: The navigation of the prostate using the different visualization techniques qualitatively demonstrated that the sagittal plane-picked images, and even more so the volume-rendered images, revealed the prostate (particularly the lower border) better in relationship to the surrounding regional anatomy (bladder, rectum, pelvis, and penile structures) than did the axial images. 
A quantitative comparison of the target volumes obtained by navigating using the different visualization techniques demonstrated that, when compared to the prostate volume defined on axial CT, a larger volume
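
    Sagittal plane-picking amounts to reslicing the axially acquired CT volume along an orthogonal plane. A minimal sketch, assuming the volume is stored as a (slice, row, column) array; the function name and dimensions are illustrative, not from the tool described above.

```python
import numpy as np

def pick_sagittal(volume, x):
    """Reslice an axially acquired CT volume, stored as a
    (slice, row, column) array, along the sagittal plane at column x."""
    return volume[:, :, x]

# Synthetic stand-in for an 80-slice CT scan
vol = np.arange(80 * 64 * 64).reshape(80, 64, 64)
plane = pick_sagittal(vol, x=32)   # one sagittal plane: (slice, row)
```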

  11. Efficient Algorithms for Real-Time GPU Volumetric Cloud Rendering with Enhanced Geometry

    OpenAIRE

    Carlos Jiménez de Parga; Sebastián Rubén Gómez Palomo

    2018-01-01

    This paper presents several new techniques for volumetric cloud rendering using efficient algorithms and data structures based on ray-tracing methods for cumulus generation, achieving an optimum balance between realism and performance. These techniques target applications such as flight simulations, computer games, and educational software, even with conventional graphics hardware. The contours of clouds are defined by implicit mathematical expressions or triangulated structures inside which ...

  12. Visualization of normal and abnormal inner ear with volume rendering technique using multislice spiral CT

    International Nuclear Information System (INIS)

    Ma Hui; Han Ping; Liang Bo; Lei Ziqiao; Liu Fang; Tian Zhiliang

    2006-01-01

    Objective: To evaluate the ability of the volume rendering technique to display the normal and abnormal inner ear structures. Methods: Forty normal earand 61 abnormal inner ears (40 congenital inner ear malformations, 7 labyrinthitis ossificans, and 14 inner ear erosion caused by cholesteatomas) were examined with a MSCT scanner. Axial imaging were performed using the following parameters: 120 kV, 100 mAs, 0.75 mm slice thickness, a pitch factor of 1. The axial images of interested ears were reconstructed with 0.1 mm reconstruction increment and a FOV of 50 mm. The 3D reconstructions were done with volume rendering technique on the workstation. Results: In the subjects without ear disorders a high quality 3D visualization of the inner ear could be achieved. In the patients with inner ear' disorders all inner ear malformations could be clearly displayed on 3D images as follows: (1) Michel deformity (one ear): There was complete absence of all cochlear and vestibular structures. (2) common cavity deformity (3 ears): The cochlea and vestibule were represented by a cystic cavity and couldn't be differentiated from each other. (3)incomplete partition type I (3 ears): The cochlea lacked the entire modiolus and cribriform area, resulting in a cystic appearance. (4) incomplete partition type II (Mondini deformity) (5 ears): The cochlea consisted of 1.5 turns, in which the middle and apical turns coalesced to form a cystic apex. (5) vestibular and semicircular canal malformations (14 ears): Cochlea was normal, vestibule dilated, semicircular canals were absent, hypoplastic or enlarged. (6) dilated vestibular aqueduct (14 ears): The vestibular aqueduct was bell-mouthed. In 7 patients with labyrinthifis ossificans, 3D images failed to clearly show the completeinner ears in 4 ears because of too high ossifications in the membranous labyrinth. In the other 3 ears volume rendering could display the thin cochlea basal turn and the intermittent semicircular canals. 
In the patients

  13. A Sort-Last Rendering System over an Optical Backplane

    Directory of Open Access Journals (Sweden)

    Yasuhiro Kirihata

    2005-06-01

    Full Text Available Sort-Last is a computer graphics technique for rendering extremely large data sets on clusters of computers. Sort-Last works by dividing the data set into evenly sized chunks for parallel rendering and then compositing the partial images to form the final result. Since sort-last rendering requires the movement of large amounts of image data among cluster nodes, the network interconnecting the nodes becomes a major bottleneck. In this paper, we describe a sort-last rendering system implemented on a cluster of computers whose nodes are connected by an all-optical switch. The rendering system introduces the notion of the Photonic Computing Engine, a computing system built dynamically by using the optical switch to create dedicated network connections among cluster nodes. The sort-last volume rendering algorithm was implemented on the Photonic Computing Engine, and its performance is evaluated. Preliminary experiments show that performance is affected by the image composition time and average payload size. In an attempt to stabilize the performance of the system, we have designed a flow control mechanism that uses feedback messages to dynamically adjust the data flow rate within the computing engine.
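
    The compositing stage of sort-last rendering can be sketched with the Porter-Duff "over" operator, assuming each node produces a full-resolution, premultiplied-alpha RGBA image of its chunk and the images are supplied in front-to-back depth order. Network transport and the Photonic Computing Engine itself are omitted here.

```python
import numpy as np

def over(front, back):
    """Porter-Duff 'over' operator for premultiplied-alpha RGBA images."""
    front_alpha = front[..., 3:4]
    return front + (1.0 - front_alpha) * back

def sort_last_composite(partial_images):
    """Fold per-node partial renderings, given in front-to-back order,
    into the final image."""
    result = partial_images[0]
    for img in partial_images[1:]:
        result = over(result, img)
    return result

# Two nodes, each rendering its data chunk into a full-size 2x2 RGBA image
near = np.zeros((2, 2, 4)); near[..., 0] = 0.3; near[..., 3] = 0.5  # translucent red
far = np.zeros((2, 2, 4));  far[..., 2] = 0.8;  far[..., 3] = 1.0   # opaque blue
final = sort_last_composite([near, far])
```

    It is exactly these full-size partial images that must cross the interconnect on every frame, which is why the network becomes the bottleneck the paper addresses.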

  14. Video-based rendering

    CERN Document Server

    Magnor, Marcus A

    2005-01-01

    Driven by consumer-market applications that enjoy steadily increasing economic importance, graphics hardware and rendering algorithms are a central focus of computer graphics research. Video-based rendering is an approach that aims to overcome the current bottleneck in the time-consuming modeling process and has applications in areas such as computer games, special effects, and interactive TV. This book offers an in-depth introduction to video-based rendering, a rapidly developing new interdisciplinary topic employing techniques from computer graphics, computer vision, and telecommunication en

  15. Volumetric ambient occlusion for real-time rendering and games.

    Science.gov (United States)

    Szirmay-Kalos, L; Umenhoffer, T; Toth, B; Szecsi, L; Sbert, M

    2010-01-01

    This new algorithm, based on GPUs, can compute ambient occlusion to inexpensively approximate global-illumination effects in real-time systems and games. The first step in deriving this algorithm is to examine how ambient occlusion relates to the physically founded rendering equation. The correspondence stems from a fuzzy membership function that defines what constitutes nearby occlusions. The next step is to develop a method to calculate ambient occlusion in real time without precomputation. The algorithm is based on a novel interpretation of ambient occlusion that measures the relative volume of the visible part of the surface's tangent sphere. The new formula's integrand has low variation and thus can be estimated accurately with a few samples.
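
    The key idea above — ambient occlusion as the unoccupied relative volume of the surface's tangent sphere — can be illustrated with a Monte Carlo estimate. This is only a sketch: the `occupied` predicate and the sampling are hypothetical stand-ins, whereas the paper evaluates the volume integral cheaply on the GPU without per-pixel sampling of this kind.

```python
import numpy as np

rng = np.random.default_rng(0)

def tangent_sphere_ao(occupied, p, n, radius, samples=2000):
    """Monte Carlo estimate of ambient occlusion as the unoccupied
    fraction of the tangent sphere at surface point p with normal n.

    occupied(q) -> True if point q lies inside an occluder.
    """
    center = p + radius * n
    # Uniform points in the unit ball via rejection sampling
    cand = rng.uniform(-1.0, 1.0, size=(samples * 2, 3))
    cand = cand[np.sum(cand * cand, axis=1) <= 1.0][:samples]
    pts = center + radius * cand
    occluded = np.array([occupied(q) for q in pts])
    return 1.0 - occluded.mean()

# Hypothetical scene: an unobstructed point on a flat floor at z = 0
occupied = lambda q: q[2] < 0.0
ao = tangent_sphere_ao(occupied, p=np.zeros(3),
                       n=np.array([0.0, 0.0, 1.0]), radius=1.0)
```

    For the flat, unobstructed floor the tangent sphere lies entirely above the occluder, so the estimate is 1.0 (fully open); nearby geometry intruding into the sphere would lower it.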

  16. [Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering].

    Science.gov (United States)

    Günther, P; Tröger, J; Holland-Cunz, S; Waag, K L; Schenk, J P

    2006-08-01

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties, 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from this. A newly developed and modified powerful raycasting-based 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to the easy handling and high-quality visualization with enormous gain of information, the presented system is now an established part of routine surgical planning.

  17. Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering

    International Nuclear Information System (INIS)

    Guenther, P.; Holland-Cunz, S.; Waag, K.L.

    2006-01-01

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from this. A newly developed and modified raycasting-based powerful 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to the easy handling and high-quality visualization with enormous gain of information, the presented system is now an established part of routine surgical planning. (orig.) [de

  18. Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering; Die computerassistierte Operationsplanung in der Abdominalchirurgie des Kindes. 3D-Visualisierung mittels ''volume rendering'' in der MRT

    Energy Technology Data Exchange (ETDEWEB)

    Guenther, P.; Holland-Cunz, S.; Waag, K.L. [Universitaetsklinikum Heidelberg (Germany). Kinderchirurgie; Troeger, J. [Universitaetsklinikum Heidelberg, (Germany). Paediatrische Radiologie; Schenk, J.P. [Universitaetsklinikum Heidelberg, (Germany). Paediatrische Radiologie; Universitaetsklinikum, Paediatrische Radiologie, Heidelberg (Germany)

    2006-08-15

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from this. A newly developed and modified raycasting-based powerful 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to the easy handling and high-quality visualization with enormous gain of information, the presented system is now an established part of routine surgical planning. (orig.) [German] Komplexe Operationen bei ausgepraegten pathologischen Veraenderungen anatomischer Strukturen des kindlichen Abdomens benoetigen eine exakte Operationsvorbereitung. 3D-Visualisierung und computerassistierte Operationsplanung anhand von CT-Daten finden fuer schwierige chirurgische Eingriffe bei Erwachsenen in zunehmendem Masse Anwendung. Aus strahlenhygienischen Gruenden und bei besserer Weichteildifferenzierung ist jedoch neben der Sonographie die Magnetresonanztomographie (MRT) bei Kindern das Diagnostikum der Wahl. Die 3D-Visualisierung dieser MRT-Daten ist dabei jedoch aufgrund vielfaeltiger Schwierigkeiten bisher nicht durchgefuehrt worden, obwohl sich das Gebiet embryonaler Fehlbildungen und Tumoren geradezu anbietet. Vorgestellt wird eine weiterentwickelte und an die Fragestellungen der abdominellen Kinderchirurgie angepasste, sehr leistungsstarke raycastingbasierte 3D-volume-rendering-Software (VG Studio Max 1

  19. Volume rendering based on magnetic resonance imaging: advances in understanding the three-dimensional anatomy of the human knee

    Science.gov (United States)

    Anastasi, Giuseppe; Bramanti, Placido; Di Bella, Paolo; Favaloro, Angelo; Trimarchi, Fabio; Magaudda, Ludovico; Gaeta, Michele; Scribano, Emanuele; Bruschetta, Daniele; Milardi, Demetrio

    2007-01-01

    The choice of medical imaging techniques, for the purpose of the present work aimed at studying the anatomy of the knee, derives from the increasing use of images in diagnostics, research and teaching, and the subsequent importance that these methods are gaining within the scientific community. Medical systems using virtual reality techniques also offer a good alternative to traditional methods, and are considered among the most important tools in the areas of research and teaching. In our work we have shown some possible uses of three-dimensional imaging for the study of the morphology of the normal human knee, and its clinical applications. We used the direct volume rendering technique, and created a data set of images and animations to allow us to visualize the single structures of the human knee in three dimensions. Direct volume rendering makes use of specific algorithms to transform conventional two-dimensional magnetic resonance imaging sets of slices into see-through volume data set images. It is a technique which does not require the construction of intermediate geometric representations, and has the advantage of allowing the visualization of a single image of the full data set, using semi-transparent mapping. Digital images of human structures, and in particular of the knee, offer important information about anatomical structures and their relationships, and are of great value in the planning of surgical procedures. On this basis we studied seven volunteers with an average age of 25 years, who underwent magnetic resonance imaging. After elaboration of the data through post-processing, we analysed the structure of the knee in detail. The aim of our investigation was the three-dimensional image, in order to comprehend better the interactions between anatomical structures. We believe that these results, applied to living subjects, widen the frontiers in the areas of teaching, diagnostics, therapy and scientific research. PMID:17645453

  20. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology.

    Science.gov (United States)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-02-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRRs), from computed tomography volume data with planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels from a 53 MB CT dataset at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general-purpose graphics processing unit (GPGPU) programming paradigms were utilized throughout. Rendering quality and performance, as well as the influence on the quality and performance of the overall registration process, were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.
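
    A digitally rendered radiograph is, at its simplest, a line integral of attenuation through the CT volume mapped through an exponential detector model. The sketch below uses an axis-aligned parallel-beam projection; the paper's actual methods (perspective raycasting, wobbled splatting, GPGPU execution) are simplified away, and the attenuation values are synthetic.

```python
import numpy as np

def render_drr(ct, axis=0):
    """Parallel-beam digitally rendered radiograph: integrate attenuation
    along one volume axis and apply an exponential detector model."""
    line_integral = ct.sum(axis=axis).astype(float)
    return 1.0 - np.exp(-line_integral)   # detector response in [0, 1)

# Synthetic CT: a denser block embedded in air
ct = np.zeros((8, 16, 16))
ct[:, 4:12, 4:12] = 0.2
drr = render_drr(ct, axis=0)
```

    In the iterative 2D/3D registration loop, an image like `drr` is regenerated at every candidate pose and compared against the planar reference x-ray, which is why rendering speed dominates the registration time.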

  1. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph [Medical University of Vienna (Austria). Center of Medical Physics and Biomedical Engineering] [and others

    2012-07-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRRs), from computed tomography volume data with planar reference X-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 x 512 pixels from a 53 MB CT dataset at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general-purpose graphics processing unit (GPGPU) programming paradigms were utilized throughout. Rendering quality and performance, as well as the influence on the quality and performance of the overall registration process, were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. (orig.)

  2. Usefulness of PC based 3D volume rendering technique in the evaluation of suspected aneurysm on brain MRA

    International Nuclear Information System (INIS)

    Baek, Seung Il; Lee, Ghi Jai; Shim, Jae Chan; Bang, Sun Woo; Ryu, Seok Jong; Kim, Ho Kyun

    2002-01-01

    To evaluate the usefulness of a volume rendering technique using 3D visualization software on a PC in patients with suspected intracranial aneurysm on brain MRA. We prospectively analyzed 21 patients with suspected aneurysms on the routine MIP images, which were obtained at 15° increments along the axial and sagittal planes, from among 135 patients in whom brain MRA was done because of stroke symptoms during the preceding 5 months. The locations were the anterior communicating artery (A-com) in 8 patients, the posterior communicating artery (P-com) in 3, the ICA bifurcation in 5, the MCA bifurcation in 4, and the basilar tip in one. The male-to-female ratio was 14:7 and the mean age was 62 years. MRA source images were sent to a PC through a LAN, and the existence of an aneurysm was evaluated with the volume rendering technique using 3D visualization software on the PC. The presence or absence of an aneurysm on MIP and volume rendering images was decided by the consensus of two radiologists. With the volume rendering technique, we found aneurysms in 1 of the 8 patients with a suspected A-com aneurysm and in 1 of the 3 patients with a suspected P-com aneurysm on routine MIP images. Confirmatory angiography and interventional procedures were done in these 2 patients. The causes of findings mimicking an aneurysm on MIP were flow displacement artifact in 9 patients, a normal P-com infundibulum in 2, and overlapped or narrowed vessels in 8; among them, confirmatory angiography was done in 2 patients. The volume rendering technique using 3D visualization software on a PC is useful to scrutinize suspected aneurysms on routine MIP images and to avoid further invasive angiography.
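
    The MIP images referred to above keep, for each ray through the MRA volume, only the brightest voxel — which is also why overlapping vessels can mimic an aneurysm on a single view. A minimal axis-aligned sketch on synthetic data (clinical MIP is computed at arbitrary view angles, such as the 15° increments described):

```python
import numpy as np

def mip(volume, axis):
    """Maximum intensity projection: keep only the brightest voxel along
    each axis-aligned ray, as used for MRA vessel display."""
    return volume.max(axis=axis)

vol = np.zeros((8, 8, 8))
vol[2:6, 4, 4] = 1.0   # a bright vessel segment
vol[3, 3, 3] = 0.4     # a dimmer structure a single MIP view may obscure
axial = mip(vol, axis=0)      # project along the slice axis
sagittal = mip(vol, axis=2)   # project along the left-right axis
```

    Because each projection discards all depth information, volume rendering — which preserves spatial relationships through opacity — is better suited to disentangling the overlapped vessels that cause false positives on MIP.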

  3. Rendering of Gemstones

    OpenAIRE

    Krtek, Lukáš

    2012-01-01

    The distinctive appearance of gemstones is caused by the way light reflects and refracts multiple times inside of them. The goal of this thesis is to design and implement an application for photorealistic rendering of gems. The most important effects we aim for are realistic dispersion of light and refractive caustics. For rendering we use the well-known path tracing algorithm with an experimental modification for faster computation of caustic effects. In this thesis we also design and impleme...

  4. Clinical application of three-dimensional spiral CT cerebral angiography with volume rendering

    International Nuclear Information System (INIS)

    Duan Shaoyin; Huang Xi'en; Kang Jianghe; Zhang Dantong; Lin Qingchi; Cai Guoxiang; Xu Meixin; Pang Ruilin

    2002-01-01

    Objective: To study the methodology and assess the clinical value of three-dimensional CT angiography (3D-CTA) with volume rendering (VR) in cerebral vessels. Methods: Sixty-two patients were examined by means of 3D-CTA with volume rendering. VR was used in the reconstruction of 3D images, and the demonstration of normal vessels and vascular lesions was analyzed in particular. At the same time, comparisons were made between the images of VR and SSD, MIP, and also between the diagnoses of VR-CTA and DSA or postoperative results. Results: In VR images, cerebral vessel courses and vessel lumina were shown clearly, and the relationships among vascular lesions, surrounding vessels, and neighboring structures were well distinguished. 50 cases (80.6%) were found positive, of which 48 were correct and 2 were false-positive compared with DSA or postoperative results. The diagnostic accuracy rate was 96.0%. There was no obvious difference in showing the cerebral vessels among the images of VR, SSD and MIP (P > 0.25). Conclusion: Three-dimensional CT cerebral angiography with VR is a new, noninvasive, effective method. It can even partly replace DSA. The 3D images have the advantage of showing the cerebral vascular lumen and overlapping vessels without cutting away the skull.

  5. 3D Volume Rendering and 3D Printing (Additive Manufacturing).

    Science.gov (United States)

    Katkar, Rujuta A; Taft, Robert M; Grant, Gerald T

    2018-07-01

    Three-dimensional (3D) volume-rendered images allow 3D insight into the anatomy, facilitating surgical treatment planning and teaching. 3D printing, additive manufacturing, and rapid prototyping techniques are being used with satisfactory accuracy, mostly for diagnosis and surgical planning, followed by direct manufacture of implantable devices. The major limitation is the time and money spent generating 3D objects. Printer type, material, and build thickness are known to influence the accuracy of printed models. In implant dentistry, the use of 3D-printed surgical guides is strongly recommended to facilitate planning and reduce risk of operative complications. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Volume-Rendered 3D Display Of MR Angiograms in the Diagnosis of Cerebral Arteriovenous Malformations

    Energy Technology Data Exchange (ETDEWEB)

    Tsuchiya, K.; Katase, S.; Hachiya, J. [Kyorin Univ. School of Medicine, Tokyo (Japan). Dept. of Radiology; Shiokawa, Y. [Kyorin Univ. School of Medicine, Tokyo (Japan). Dept. of Neurosurgery

    2003-11-01

    Purpose: To determine whether application of a volume-rendered display of 3D time-of-flight (TOF) MR angiography could assist the diagnosis of cerebral arteriovenous malformations (AVMs). Material and Methods: Volume-rendered 3D images of postcontrast 3D time-of-flight MR angiography were compared with conventional angiograms in 12 patients. The correlation between the 3D images and the operative findings was also analyzed in 5 patients. Results: The 3D-displayed images showed all of the feeders and drainers in 10 and 9 patients, respectively. In all patients, the nidus was three-dimensionally visualized. In 3 patients with hematomas, the relationship between the hematoma and the AVM was well demonstrated. The 3D images corresponded well with the operative findings in the 5 patients. Conclusion: This method is of help in assessing the relationship between the components of an AVM as well as that between an AVM and an associated hematoma.

  7. Volume-Rendered 3D Display Of MR Angiograms in the Diagnosis of Cerebral Arteriovenous Malformations

    International Nuclear Information System (INIS)

    Tsuchiya, K.; Katase, S.; Hachiya, J.; Shiokawa, Y.

    2003-01-01

    Purpose: To determine whether application of a volume-rendered display of 3D time-of-flight (TOF) MR angiography could assist the diagnosis of cerebral arteriovenous malformations (AVMs). Material and Methods: Volume-rendered 3D images of postcontrast 3D time-of-flight MR angiography were compared with conventional angiograms in 12 patients. The correlation between the 3D images and the operative findings was also analyzed in 5 patients. Results: The 3D-displayed images showed all of the feeders and drainers in 10 and 9 patients, respectively. In all patients, the nidus was three-dimensionally visualized. In 3 patients with hematomas, the relationship between the hematoma and the AVM was well demonstrated. The 3D images corresponded well with the operative findings in the 5 patients. Conclusion: This method is of help in assessing the relationship between the components of an AVM as well as that between an AVM and an associated hematoma

  8. Evaluation of the relationship between extremity soft tissue sarcomas and adjacent major vessels using contrast-enhanced multidetector CT and three-dimensional volume-rendered CT angiography - A preliminary study

    International Nuclear Information System (INIS)

    Li, YangKang; Lin, JianBang; Cai, AiQun; Zhou, XiuGuo; Zheng, Yu; Wei, XiaoLong; Cheng, Ying; Liu, GuoRui

    2013-01-01

    Background: Accurate description of the relationship between extremity soft tissue sarcoma and the adjacent major vessels is crucial for successful surgery. In addition to magnetic resonance imaging (MRI) or in patients who cannot undergo MRI, two-dimensional (2D) postcontrast computed tomography (CT) images and three-dimensional (3D) volume-rendered CT angiography may be valuable alternative imaging techniques for preoperative evaluation of extremity sarcomas. Purpose: To preoperatively assess extremity sarcomas using multidetector CT (MDCT), with emphasis on postcontrast MDCT images and 3D volume-rendered MDCT angiography in evaluating the relationship between tumors and adjacent major vessels. Material and Methods: MDCT examinations were performed on 13 patients with non-metastatic extremity sarcomas. Conventional CT images and 3D volume-rendered CT angiography were evaluated, with focus on the relationship between tumors and adjacent major vessels. Kappa consistency statistics were performed with surgery serving as the reference standard. Results: The relationship between sarcomas and adjacent vessels was described as one of three patterns: proximity, adhesion, and encasement. Proximity was seen in five cases on postcontrast CT images or in eight cases on volume-rendered images. Adhesion was seen in three cases on both postcontrast CT images and volume-rendered images. Encasement was seen in five cases on postcontrast CT images or in two cases on volume-rendered images. Compared to surgical results, postcontrast CT images had 100% sensitivity, 83.3% specificity, 87.5% positive predictive value, 100% negative predictive value, and 92.3% accuracy in the detection of vascular invasion (κ = 0.843, P = 0.002). 3D volume-rendered CT angiography had 71.4% sensitivity, 100% specificity, 100% positive predictive value, 75% negative predictive value, and 84.6% accuracy in the detection of vascular invasion (κ = 0.698, P = 0.008). On volume-rendered images, all cases

  9. Volume Ray Casting with Peak Finding and Differential Sampling

    KAUST Repository

    Knoll, A.

    2009-11-01

    Direct volume rendering and isosurfacing are ubiquitous rendering techniques in scientific visualization, commonly employed in imaging 3D data from simulation and scan sources. Conventionally, these methods have been treated as separate modalities, necessitating different sampling strategies and rendering algorithms. In reality, an isosurface is a special case of a transfer function, namely a Dirac impulse at a given isovalue. However, artifact-free rendering of discrete isosurfaces in a volume rendering framework is an elusive goal, requiring either infinite sampling or smoothing of the transfer function. While preintegration approaches solve the most obvious deficiencies in handling sharp transfer functions, artifacts can still result, limiting classification. In this paper, we introduce a method for rendering such features by explicitly solving for isovalues within the volume rendering integral. In addition, we present a sampling strategy inspired by ray differentials that automatically matches the frequency of the image plane, resulting in fewer artifacts near the eye and better overall performance. These techniques exhibit clear advantages over standard uniform ray casting with and without preintegration, and allow for high-quality interactive volume rendering with sharp C0 transfer functions. © 2009 IEEE.
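
    The central idea above — explicitly solving for isovalues between ray samples rather than sampling infinitely finely — can be sketched with linear interpolation between consecutive samples. This is a simplification: the paper solves for the crossing within the reconstructed scalar field, and the sample values below are synthetic.

```python
import numpy as np

def find_isovalue_hits(samples, ts, isovalue):
    """Locate isovalue crossings between consecutive ray samples by
    linear interpolation, instead of relying on an ever finer sampling
    rate to catch the Dirac-impulse transfer function."""
    hits = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        if (a - isovalue) * (b - isovalue) < 0:   # sign change: a crossing
            frac = (isovalue - a) / (b - a)
            hits.append(ts[i] + frac * (ts[i + 1] - ts[i]))
    return hits

ts = np.linspace(0.0, 1.0, 5)                  # sample positions along a ray
samples = np.array([0.0, 0.2, 0.6, 0.3, 0.1])  # scalar field values there
hits = find_isovalue_hits(samples, ts, isovalue=0.5)  # surface entry and exit
```

    Each returned `t` marks where the ray pierces the isosurface, so the shading contribution can be placed exactly at the crossing instead of being smeared across a sampling interval.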

  10. A Virtual Reality System for PTCD Simulation Using Direct Visuo-Haptic Rendering of Partially Segmented Image Data.

    Science.gov (United States)

    Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz

    2016-01-01

    This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We use only partially segmented image data instead of a full segmentation, circumventing the need for surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in CUDA and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and the deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.

  11. State of the Art in Transfer Functions for Direct Volume Rendering

    KAUST Repository

    Ljung, Patric; Krüger, Jens; Groller, Eduard; Hadwiger, Markus; Hansen, Charles D.; Ynnerman, Anders

    2016-01-01

    A central topic in scientific visualization is the transfer function (TF) for volume rendering. The TF serves a fundamental role in translating scalar and multivariate data into color and opacity to express and reveal the relevant features present in the data studied. Beyond this core functionality, TFs also serve as a tool for encoding and utilizing domain knowledge and as an expression for visual design of material appearances. TFs also enable interactive volumetric exploration of complex data. The purpose of this state-of-the-art report (STAR) is to provide an overview of research into the various aspects of TFs, which lead to interpretation of the underlying data through the use of meaningful visual representations. The STAR classifies TF research into the following aspects: dimensionality, derived attributes, aggregated attributes, rendering aspects, automation, and user interfaces. The STAR concludes with some interesting research challenges that form the basis of an agenda for the development of next generation TF tools and methodologies. © 2016 The Author(s) Computer Graphics Forum © 2016 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
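The core TF mechanism the report surveys, classifying each scalar sample through a lookup table into color and opacity and then compositing front to back, can be sketched minimally. The ramp-shaped TF below is a hypothetical example, not one taken from the report:

```python
import numpy as np

def make_tf(n=256):
    """Hypothetical ramp TF: low values are transparent blue,
    high values opaque red."""
    s = np.linspace(0.0, 1.0, n)
    tf = np.zeros((n, 4))
    tf[:, 0] = s            # red increases with the scalar value
    tf[:, 2] = 1.0 - s      # blue decreases
    tf[:, 3] = s ** 2       # opacity ramps up quadratically
    return tf

def composite(samples, tf):
    """Classify each scalar sample (normalized to [0,1]) through the TF,
    then composite front to back with associated colors."""
    n = tf.shape[0]
    color = np.zeros(3)
    alpha = 0.0
    for v in samples:
        idx = min(int(v * (n - 1)), n - 1)
        r, g, b, a = tf[idx]
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:    # early ray termination
            break
    return color, alpha
```

The same two-stage structure (lookup, then compositing) underlies all the TF dimensionality and derived-attribute variants the report classifies; only the domain of the lookup changes.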

  12. State of the Art in Transfer Functions for Direct Volume Rendering

    KAUST Repository

    Ljung, Patric

    2016-07-04

    A central topic in scientific visualization is the transfer function (TF) for volume rendering. The TF serves a fundamental role in translating scalar and multivariate data into color and opacity to express and reveal the relevant features present in the data studied. Beyond this core functionality, TFs also serve as a tool for encoding and utilizing domain knowledge and as an expression for visual design of material appearances. TFs also enable interactive volumetric exploration of complex data. The purpose of this state-of-the-art report (STAR) is to provide an overview of research into the various aspects of TFs, which lead to interpretation of the underlying data through the use of meaningful visual representations. The STAR classifies TF research into the following aspects: dimensionality, derived attributes, aggregated attributes, rendering aspects, automation, and user interfaces. The STAR concludes with some interesting research challenges that form the basis of an agenda for the development of next generation TF tools and methodologies. © 2016 The Author(s) Computer Graphics Forum © 2016 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  13. GRAPHICS-IMAGE MIXED METHOD FOR LARGE-SCALE BUILDINGS RENDERING

    Directory of Open Access Journals (Sweden)

    Y. Zhou

    2018-05-01

    Full Text Available Urban 3D model data is huge and unstructured, so level-of-detail (LOD) and out-of-core algorithms are usually used to reduce the amount of data drawn in each frame and thereby improve rendering efficiency. When the scene is large enough, however, even complex optimization algorithms struggle to achieve good results. Building on these traditional approaches, we propose a graphics-image mixed method for large-scale building rendering. First, the view field is divided into several regions; the graphics-image mixed method then renders the scene both on screen and to an FBO, and finally the FBO is blended with the screen. The algorithm was tested on huge CityGML model data for the urban area of New York, containing 188,195 public building models, and compared with the Cesium platform. The system ran smoothly in the experiments, which confirm that the algorithm supports roaming of more massive building scenes under the same hardware conditions and can render the scene without visual loss.

  14. Role of volume rendered 3-D computed tomography in conservative management of trauma-related thoracic injuries.

    LENUS (Irish Health Repository)

    OʼLeary, Donal Peter

    2012-09-01

    Pneumatic nail guns are a tool used commonly in the construction industry and are widely available. Accidental injuries from nail guns are common, and several cases of suicide using a nail gun have been reported. Computed tomographic (CT) imaging, together with echocardiography, has been shown to be the gold standard for investigation of these cases. We present a case of a 55-year-old man who presented to the accident and emergency unit of a community hospital following an accidental pneumatic nail gun injury to his thorax. Volume-rendered CT of the thorax allowed an accurate assessment of the thoracic injuries sustained by this patient. As there was no evidence of any acute life-threatening injury, a sternotomy was avoided and the patient was observed closely until discharge. In conclusion, volume-rendered 3-dimensional CT can greatly help in the decision to avoid an unnecessary sternotomy in patients with a thoracic nail gun injury.

  15. 3D rendering and interactive visualization technology in large industry CT

    International Nuclear Information System (INIS)

    Xiao Yongshun; Zhang Li; Chen Zhiqiang; Kang Kejun

    2002-01-01

    The author introduces the applications of interactive 3D rendering technology in large ICT. It summarizes and comments on the iso-surface rendering and direct volume rendering methods used in ICT. The author emphasizes the technical analysis of the 3D rendering process for ICT volume data sets, and summarizes the difficulties of inspection subsystem design in large ICT.

  16. 3D rendering and interactive visualization technology in large industry CT

    International Nuclear Information System (INIS)

    Xiao Yongshun; Zhang Li; Chen Zhiqiang; Kang Kejun

    2001-01-01

    This paper introduces the applications of interactive 3D rendering technology in large ICT. It summarizes and comments on the iso-surface rendering and direct volume rendering methods used in ICT. The paper emphasizes the technical analysis of the 3D rendering process for ICT volume data sets, and summarizes the difficulties of inspection subsystem design in large ICT.

  17. Hybrid rendering of the chest and virtual bronchoscopy [corrected].

    Science.gov (United States)

    Seemann, M D; Seemann, O; Luboldt, W; Gebicke, K; Prime, G; Claussen, C D

    2000-10-30

    Thin-section spiral computed tomography was used to acquire volume data sets of the thorax. The tracheobronchial system and pathological changes of the chest were visualized using a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures, producing a hybrid rendering. The hybrid rendering technique exploits the advantages of both rendering methods and enables virtual bronchoscopic examinations using different representation models. Virtual bronchoscopic examination with a transparent color-coded shaded-surface model enables the simultaneous visualization of both the airways and the adjacent structures behind the tracheobronchial wall, and therefore offers a practical alternative to fiberoptic bronchoscopy. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images.

  18. SPATIOTEMPORAL VISUALIZATION OF TIME-SERIES SATELLITE-DERIVED CO2 FLUX DATA USING VOLUME RENDERING AND GPU-BASED INTERPOLATION ON A CLOUD-DRIVEN DIGITAL EARTH

    Directory of Open Access Journals (Sweden)

    S. Wu

    2017-10-01

    Full Text Available The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.

  19. Pulmonary nodules: sensitivity of maximum intensity projection versus that of volume rendering of 3D multidetector CT data

    NARCIS (Netherlands)

    Peloschek, Philipp; Sailer, Johannes; Weber, Michael; Herold, Christian J.; Prokop, Mathias; Schaefer-Prokop, Cornelia

    2007-01-01

    PURPOSE: To prospectively compare maximum intensity projection (MIP) and volume rendering (VR) of multidetector computed tomographic (CT) data for the detection of small intrapulmonary nodules. MATERIALS AND METHODS: This institutional review board-approved prospective study included 20 oncology

  20. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus; Al-Awami, Ali K.; Beyer, Johanna; Agus, Marco; Pfister, Hanspeter

    2017-01-01

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
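The payoff of empty space skipping can be illustrated with a much-simplified stand-in: a coarse occupancy grid plays the role of SparseLeap's rasterized per-pixel ray segment lists, and a ray samples only inside non-empty bricks. This is a sketch of the general principle, not the paper's GPU pipeline; the brick size, function names, and toy volume are assumptions.

```python
import numpy as np

def build_occupancy(volume, brick=4, threshold=0.0):
    """Coarse occupancy grid: True where a brick contains any value
    above the threshold (i.e., the brick is non-empty)."""
    bx, by, bz = (s // brick for s in volume.shape)
    occ = np.zeros((bx, by, bz), dtype=bool)
    for i in range(bx):
        for j in range(by):
            for k in range(bz):
                block = volume[i*brick:(i+1)*brick,
                               j*brick:(j+1)*brick,
                               k*brick:(k+1)*brick]
                occ[i, j, k] = block.max() > threshold
    return occ

def count_samples(origin, direction, occ, brick=4, step=1.0, n_steps=64):
    """Walk a ray, sampling only inside occupied bricks. Returns how many
    samples were actually taken vs. how many a uniform ray caster
    would have taken over the same in-volume path."""
    taken = walked = 0
    for i in range(n_steps):
        p = origin + i * step * direction
        b = (p // brick).astype(int)
        if np.any(b < 0) or np.any(b >= np.array(occ.shape)):
            break               # ray left the volume
        walked += 1             # a uniform ray caster samples here
        if occ[tuple(b)]:
            taken += 1          # non-empty brick: actually sample
        # empty brick: leap over without sampling or hierarchy traversal
    return taken, walked
```

For a mostly empty volume with one occupied slab, the skipping ray pays only for the slab, while a uniform caster pays for every in-volume step; SparseLeap's contribution is producing this empty/non-empty information per pixel before ray-casting, via rasterized bounding boxes, instead of a per-step grid lookup.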

  1. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus

    2017-08-28

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.

  2. Freely-available, true-color volume rendering software and cryohistology data sets for virtual exploration of the temporal bone anatomy.

    Science.gov (United States)

    Kahrs, Lüder Alexander; Labadie, Robert Frederick

    2013-01-01

    Cadaveric dissection of temporal bone anatomy is not always possible or feasible in certain educational environments. Volume rendering of CT and/or MRI data helps in understanding spatial relationships, but such renderings suffer from unrealistic depiction, especially regarding the color of anatomical structures. Freely available, nonstained histological data sets, together with software able to render them in realistic color, could overcome this limitation and be a very effective teaching tool. With the recent availability of specialized public-domain software, volume rendering of true-color histological data sets is now possible. We present both feasibility and step-by-step instructions for processing publicly available data sets (Visible Human Female and Visible Ear) into easily navigable 3-dimensional models using free software. Example renderings are shown to demonstrate the utility of these free methods in virtual exploration of the complex anatomy of the temporal bone. After exploring the data sets, the Visible Ear appears more natural than the Visible Human. We provide directions for easy-to-use, open-source software in conjunction with freely available histological data sets. This work facilitates self-directed study of the spatial relationships of anatomical structures inside the human temporal bone, and allows exploration of surgical approaches prior to cadaveric testing and/or clinical implementation. Copyright © 2013 S. Karger AG, Basel.

  3. Three dimensional volume rendering virtual endoscopy of the ossicles using a multi-row detector CT: applications and limitations

    International Nuclear Information System (INIS)

    Kim, Su Yeon; Choi, Sun Seob; Kang, Myung Jin; Shin, Tae Beom; Lee, Ki Nam; Kang, Myung Koo

    2005-01-01

    This study was conducted to determine the applications and limitations of three-dimensional volume rendering virtual endoscopy of the ossicles using a multi-row detector CT. We examined 25 patients who underwent temporal bone CT on a 16-row detector CT because of hearing problems or trauma. The axial CT scan of the temporal bone was performed with a 0.6 mm collimation, and the reconstruction was carried out with a sharp kernel (U70u), a 1 mm thickness, and 0.5-1.0 mm increments. After observing the ossicles in the axial and coronal images, virtual endoscopy was performed using a three-dimensional volume rendering technique with a threshold value of -500 HU. Intra-operative otoendoscopy was performed in 12 ears and compared with the virtual endoscopy findings. Virtual endoscopy of the 29 ears without hearing problems demonstrated hypoplastic or incomplete depiction of the stapes superstructures in 25 ears and a normal depiction in 4 ears. Virtual endoscopy of the 21 ears with hearing problems demonstrated no ossicles in 1 ear, no malleus in 3 ears, a malleoincudal subluxation in 6 ears, a dysplastic incus in 5 ears, an incudostapedial subluxation in 9 ears, dysplastic stapes in 2 ears, a hypoplastic or incomplete depiction of the stapes in 16 ears, and no stapes in 1 ear. Compared with intra-operative otoendoscopy, 8 of 12 ears showed hypoplastic or deformed stapes on virtual endoscopy. Volume rendering virtual endoscopy using a multi-row detector CT is an excellent method for evaluating the ossicles in three dimensions, even though the partial volume effect on the stapes superstructures needs to be considered.

  4. Simulation and training of lumbar punctures using haptic volume rendering and a 6DOF haptic device

    Science.gov (United States)

    Färber, Matthias; Heller, Julika; Handels, Heinz

    2007-03-01

    A lumbar puncture is performed by inserting a needle into the spinal canal of the patient to inject medicaments or to extract liquor. Training of this procedure is usually done on the patient, guided by experienced supervisors. A virtual reality lumbar puncture simulator has been developed in order to minimize training costs and the patient's risk. We use a haptic device with six degrees of freedom (6DOF) to feed back forces that resist needle insertion and rotation. An improved haptic volume rendering approach is used to calculate the forces. This approach makes use of label data of relevant structures like skin, bone, muscles or fat, and of the original CT data, which contributes information about image structures that cannot be segmented. A real-time 3D visualization with optional stereo view shows the punctured region. 2D visualizations of orthogonal slices enable a detailed impression of the anatomical context. The input data, consisting of CT and label data and surface models of relevant structures, is defined in an XML file together with haptic rendering and visualization parameters. In a first evaluation, the Visible Human male data set has been used to generate a virtual training body. Several users with different levels of medical experience tested the lumbar puncture trainer. The simulator gives a good haptic and visual impression of the needle insertion, and the haptic volume rendering technique enables the feeling of unsegmented structures. In particular, the restriction of transversal needle movement, together with the rotation constraints enabled by the 6DOF device, facilitates a realistic puncture simulation.

  5. Distributed rendering for multiview parallax displays

    Science.gov (United States)

    Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.

    2006-02-01

    3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.

  6. MRI of the labyrinth with volume rendering for cochlear implants candidates

    International Nuclear Information System (INIS)

    Sakata, Motomichi; Harada, Kuniaki; Shirase, Ryuji; Suzuki, Junpei; Nagahama, Hiroshi

    2009-01-01

    We demonstrated three-dimensional models of the labyrinth created by volume rendering (VR) in the preoperative assessment for cochlear implantation. MRI data sets were acquired in selected subjects using a three-dimensional fast spin echo sequence (3D-FSE). We produced the three-dimensional models of the labyrinth from axial heavily T2-weighted images. The three-dimensional models distinguished the scala tympani and scala vestibuli and provided multidirectional views. The optimal-threshold three-dimensional models clearly showed the focal region of signal loss in the cochlear turns (47.1%) and the presence of inner ear anomalies (17.3%) in our series of patients. It was concluded that these three-dimensional VR models provide the oto-surgeon with precise, detailed, and easily interpreted information about the cochlear turns for cochlear implant candidates. (author)

  7. Real-time interactive three-dimensional display of CT and MR imaging volume data

    International Nuclear Information System (INIS)

    Yla-Jaaski, J.; Kubler, O.; Kikinis, R.

    1987-01-01

    Real-time reconstruction of surfaces from CT and MR imaging volume data is demonstrated using a new algorithm and implementation in a parallel computer system. The display algorithm accepts noncubic 16-bit voxels directly as input. Operations such as interpolation, classification by thresholding, depth coding, simple lighting effects, and removal of parts of the volume by clipping planes are all supported on-line. An eight-processor implementation of the algorithm renders surfaces from typical CT data sets in real time to allow interactive rotation of the volume

  8. Use of multidetector row CT with volume renderings in right lobe living liver transplantation

    International Nuclear Information System (INIS)

    Ishifuro, Minoru; Akiyama, Yuji; Kushima, Toshio; Horiguchi, Jun; Nakashige, Aya; Tamura, Akihisa; Marukawa, Kazushi; Fukuda, Hiroshi; Ono, Chiaki; Ito, Katsuhide

    2002-01-01

    Multidetector row CT is a feasible diagnostic tool in pre- and postoperative liver partial transplantation. We can assess vascular anatomy and liver parenchyma as well as volumetry, which provide useful information for both donor selection and surgical planning. Disorders of the vascular and biliary systems are carefully observed in recipients. In addition, we evaluate liver regeneration of both the donor and the recipient by serial volumetry. We present how multidetector row CT with state-of-the-art three-dimensional volume renderings may be used in right lobe liver transplantation. (orig.)

  9. Developing a Tile-Based Rendering Method to Improve Rendering Speed of 3D Geospatial Data with HTML5 and WebGL

    Directory of Open Access Journals (Sweden)

    Seokchan Kang

    2017-01-01

    Full Text Available A dedicated plug-in had to be installed to visualize three-dimensional (3D) city modeling spatial data in web-based applications. However, plug-in methods are gradually becoming obsolete, owing to their limited performance with respect to installation errors, lack of cross-browser support, and security vulnerabilities. In particular, in 2015 the NPAPI service was terminated in most existing web browsers except Internet Explorer. To overcome these problems, the HTML5/WebGL technology (the next-generation web standard, confirmed in October 2014) emerged. In particular, WebGL is able to display 3D spatial data in browsers without plug-ins. In this study, we identify the requirements and limitations of displaying 3D city modeling spatial data using HTML5/WebGL, and we propose an alternative approach based on a bin-packing algorithm that aggregates individual 3D city model data, including buildings, into tile units. The proposed method reduces the operational complexity and the number and volume of transmissions required for rendering, thereby improving the speed of 3D data rendering. The proposed method was validated on real data, demonstrating its effectiveness for 3D visualization of city modeling data in web-based applications.
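The aggregation idea, merging many individual building models into per-tile batches so the renderer issues one draw call (and one transmission) per tile rather than one per building, can be sketched with a simple grid bucketing standing in for the paper's bin-packing step. The data layout (centroid plus vertex count per building) and function name are hypothetical:

```python
from collections import defaultdict

def aggregate_into_tiles(buildings, tile_size=100.0):
    """Group individual building meshes by the tile their centroid falls
    into, so each tile becomes one merged batch. Each building is a
    hypothetical (centroid_x, centroid_y, n_vertices) tuple; the return
    value maps a tile key to the total vertex count of its batch."""
    tiles = defaultdict(list)
    for x, y, nverts in buildings:
        key = (int(x // tile_size), int(y // tile_size))
        tiles[key].append(nverts)
    # one draw call per tile instead of one per building
    return {k: sum(v) for k, v in tiles.items()}
```

Three buildings spanning two tiles collapse into two batches; at city scale (188,195 buildings in the related work above), this kind of aggregation is what makes the per-frame draw-call count tractable for WebGL.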

  10. Enhancement method for rendered images of home decoration based on SLIC superpixels

    Science.gov (United States)

    Dai, Yutong; Jiang, Xiaotong

    2018-04-01

    Rendering technology has been widely used in the home decoration industry in recent years to produce images of home decoration designs. However, because rendered images of home decoration designs depend heavily on the renderer parameters and the lighting of the scene, most rendered images in this industry require further optimization afterwards. To reduce this workload and enhance rendered images automatically, an algorithm utilizing neural networks is proposed in this manuscript. In addition, to handle a few extreme conditions such as strong sunlight and artificial lights, SLIC-superpixel-based segmentation is used to select the bright areas of an image and enhance them independently. Finally, the selected areas are merged back into the entire image. Experimental results show that the proposed method enhances rendered images more effectively than some existing algorithms. Moreover, the proposed strategy proves adaptable, especially to images with pronounced bright regions.

  11. Second-order accurate volume-of-fluid algorithms for tracking material interfaces

    International Nuclear Information System (INIS)

    Pilliod, James Edward; Puckett, Elbridge Gerry

    2004-01-01

    We introduce two new volume-of-fluid interface reconstruction algorithms and compare the accuracy of these algorithms to four other widely used volume-of-fluid interface reconstruction algorithms. We find that when the interface is smooth (e.g., continuous with two continuous derivatives) the new methods are second-order accurate and the other algorithms are first-order accurate. We propose a design criterion for a volume-of-fluid interface reconstruction algorithm to be second-order accurate. Namely, that it reproduce lines in two space dimensions or planes in three space dimensions exactly. We also introduce a second-order, unsplit, volume-of-fluid advection algorithm that is based on a second-order, finite difference method for scalar conservation laws due to Bell, Dawson and Shubin. We test this advection algorithm by modeling several different interface shapes propagating in two simple incompressible flows and compare the results with the standard second-order, operator-split advection algorithm. Although both methods are second-order accurate when the interface is smooth, we find that the unsplit algorithm exhibits noticeably better resolution in regions where the interface has discontinuous derivatives, such as at corners.
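The proposed design criterion, that a second-order reconstruction must reproduce linear interfaces exactly, rests on a forward computation every VOF method shares: the exact volume (area, in 2D) a cell holds under a linear interface. A sketch of that computation for a unit cell, via half-plane clipping of the square, is shown below; it is illustrative, not the authors' code:

```python
def area_below_line(m, b):
    """Exact area of the unit cell [0,1]^2 under the line y = m*x + b,
    computed by clipping the square against the half-plane
    y <= m*x + b (Sutherland-Hodgman) and applying the shoelace formula."""
    square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    inside = lambda p: p[1] <= m * p[0] + b

    def intersect(p, q):
        # intersection of edge p-q with the line y = m*x + b;
        # only called when p and q straddle the line, so denom != 0
        (x1, y1), (x2, y2) = p, q
        denom = (y2 - y1) - m * (x2 - x1)
        t = (m * x1 + b - y1) / denom
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    out = []
    for i, p in enumerate(square):
        q = square[(i + 1) % 4]
        if inside(p):
            out.append(p)
            if not inside(q):
                out.append(intersect(p, q))
        elif inside(q):
            out.append(intersect(p, q))
    # shoelace area of the clipped polygon (0 if fully above the line)
    return 0.5 * abs(sum(out[i][0] * out[(i + 1) % len(out)][1]
                         - out[(i + 1) % len(out)][0] * out[i][1]
                         for i in range(len(out))))
```

A reconstruction algorithm meets the paper's criterion when, given the volume fractions this forward map produces for any line, it recovers that same line; first-order methods such as centered-difference slope estimates fail this test in general.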

  12. Integral image rendering procedure for aberration correction and size measurement.

    Science.gov (United States)

    Sommer, Holger; Ihrig, Andreas; Ebenau, Melanie; Flühs, Dirk; Spaan, Bernhard; Eichmann, Marion

    2014-05-20

    The challenge in rendering integral images is to use as much information preserved by the light field as possible to reconstruct a captured scene in a three-dimensional way. We propose a rendering algorithm based on the projection of rays through a detailed simulation of the optical path, considering all the physical properties and locations of the optical elements. The rendered images contain information about the correct size of imaged objects without the need to calibrate the imaging device. Additionally, aberrations of the optical system may be corrected, depending on the setup of the integral imaging device. We show simulation data that illustrates the aberration correction ability and experimental data from our plenoptic camera, which illustrates the capability of our proposed algorithm to measure size and distance. We believe this rendering procedure will be useful in the future for three-dimensional ophthalmic imaging of the human retina.

  13. Real-time photorealistic stereoscopic rendering of fire

    Science.gov (United States)

    Rose, Benjamin M.; McAllister, David F.

    2007-02-01

    We propose a method for real-time photorealistic stereo rendering of the natural phenomenon of fire. Applications include the use of virtual reality in fire fighting, military training, and entertainment. Rendering fire in real-time presents a challenge because of the transparency and non-static fluid-like behavior of fire. It is well known that, in general, methods that are effective for monoscopic rendering are not necessarily easily extended to stereo rendering because monoscopic methods often do not provide the depth information necessary to produce the parallax required for binocular disparity in stereoscopic rendering. We investigate the existing techniques used for monoscopic rendering of fire and discuss their suitability for extension to real-time stereo rendering. Methods include the use of precomputed textures, dynamic generation of textures, and rendering models resulting from the approximation of solutions of fluid dynamics equations through the use of ray-tracing algorithms. We have found that in order to attain real-time frame rates, our method based on billboarding is effective. Slicing is used to simulate depth. Texture mapping or 2D images are mapped onto polygons and alpha blending is used to treat transparency. We can use video recordings or prerendered high-quality images of fire as textures to attain photorealistic stereo.

  14. Democratizing rendering for multiple viewers in surround VR systems

    KAUST Repository

    Schulze, Jürgen P.; Acevedo-Feliz, Daniel; Mangan, John; Prudhomme, Andrew; Nguyen, Phi Khanh; Weber, Philip P.

    2012-01-01

    We present a new approach to rendering multiple users' views in a surround virtual environment without using special multi-view hardware. It is based on the idea that different parts of the screen are often viewed by different users, so each part can be rendered from that user's own viewpoint, or at least from a point closer to their viewpoint than traditionally expected. The vast majority of 3D virtual reality systems are designed for one head-tracked user and a number of passive viewers. Only the head-tracked user gets to see the correct view of the scene; everybody else sees a distorted image. We reduce this problem by algorithmically democratizing the rendering viewpoint among all tracked users. Researchers have proposed solutions for multiple tracked users, but most of them require major changes to the display hardware of the VR system, such as additional projectors or custom VR glasses. Our approach requires no additional hardware beyond the ability to track each participating user. We propose three versions of our multi-viewer algorithm. Each of them balances image distortion and frame rate differently, making them more or less suitable for certain application scenarios. Our most sophisticated algorithm renders each pixel from its own optimized camera perspective, which depends on all tracked users' head positions and orientations. © 2012 IEEE.
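
    The per-pixel camera mentioned above suggests blending tracked head positions into one eye point per pixel. The sketch below is an illustrative guess at that idea, not the paper's algorithm: each head is weighted by how directly its gaze aligns with the pixel's screen direction, so the pixel is rendered from a viewpoint closest to whoever is actually looking there. All names and the weighting rule are assumptions.

```python
import numpy as np

def blended_eye_position(pixel_dir, heads):
    """Blend tracked users' head positions into one eye point for a pixel.
    `heads` is a list of (position, gaze_direction) pairs; `pixel_dir` is
    the unit direction from the screen centre to the pixel. Users facing
    the pixel dominate the blend; others contribute almost nothing."""
    weights = np.asarray([max(np.dot(gaze, pixel_dir), 1e-3)
                          for _, gaze in heads])
    positions = np.asarray([pos for pos, _ in heads])
    return (weights[:, None] * positions).sum(axis=0) / weights.sum()
```
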

  16. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Loring, Burlen; Karimabadi, Homa; Roytershteyn, Vadim

    2014-07-01

    The surface line integral convolution (LIC) visualization technique produces dense visualizations of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL, we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
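
    The core LIC operation the abstract builds on can be sketched in a few lines: each pixel averages a noise texture along the streamline through it, smearing the noise in the flow direction. This is a plain 2D reference version (seed handling and step counts are illustrative), not the screen-space GPU variant of the paper:

```python
import numpy as np

def lic(vector_field, noise, num_steps=10, step=0.5):
    """Line integral convolution: for every pixel, trace the streamline
    of `vector_field` (shape h x w x 2, storing (vy, vx)) forwards and
    backwards, averaging `noise` samples along the way."""
    h, w = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for yy in range(h):
        for xx in range(w):
            total, count = 0.0, 0
            for direction in (1.0, -1.0):        # trace both ways from the seed
                y, x = float(yy), float(xx)
                for _ in range(num_steps):
                    iy, ix = int(y), int(x)
                    if not (0 <= iy < h and 0 <= ix < w):
                        break
                    total += noise[iy, ix]
                    count += 1
                    vy, vx = vector_field[iy, ix]
                    norm = np.hypot(vy, vx)
                    if norm == 0:                # stagnation point: stop tracing
                        break
                    y += direction * step * vy / norm
                    x += direction * step * vx / norm
            out[yy, xx] = total / max(count, 1)
    return out
```

    The screen-space formulation in the paper applies the same convolution to the projected vector field after rendering, which is what makes a sort-last parallel decomposition workable.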

  17. Clustered deep shadow maps for integrated polyhedral and volume rendering

    KAUST Repository

    Bornik, Alexander

    2012-01-01

    This paper presents a hardware-accelerated approach for shadow computation in scenes containing both complex volumetric objects and polyhedral models. Our system is the first hardware-accelerated complete implementation of deep shadow maps, which unifies the computation of volumetric and geometric shadows. Up to now such unified computation was limited to software-only rendering. Previous hardware-accelerated techniques can handle only geometric or only volumetric scenes, both resulting in the loss of important properties of the original concept. Our approach supports interactive rendering of polyhedrally bounded volumetric objects on the GPU based on ray casting. The ray casting can be conveniently used for both the shadow map computation and the rendering. We show how anti-aliased high-quality shadows are feasible in scenes composed of multiple overlapping translucent objects, and how sparse scenes can be handled efficiently using clustered deep shadow maps. © 2012 Springer-Verlag.

  18. Evaluation of obstructive airway lesions in complex congenital heart disease using composite volume-rendered images from multislice CT

    International Nuclear Information System (INIS)

    Choo, Ki Seok; Kim, Chang Won; Lee, Tae Hong; Kim, Suk; Kim, Kun Il; Lee, Hyoung Doo; Ban, Ji Eun; Sung, Si Chan; Chang, Yun Hee

    2006-01-01

    Multislice CT (MSCT) allows high-quality volume-rendered (VR) and composite volume-rendered images. Our aim was to investigate the clinical usefulness of composite VR images in evaluating the relationship between cardiovascular structures and the airway in children with complex congenital heart disease (CHD). Four- or 16-slice MSCT scanning was performed consecutively in 77 children (mean age 6.4 months) with CHD and respiratory symptoms, a chest radiographic abnormality, or an abnormal course of the pulmonary artery on echocardiography. MSCT scanning was performed during breathing or after sedation. Contrast medium (2 ml/kg) was administered through a pedal venous route or arm vein in all patients. The VR technique was used to reconstruct the cardiovascular structures and airway, and the two VR images were then composed using commercial software (VoxelPlus 2; Daejeon, Korea). Stenoses were seen in the trachea in 1 patient and in the bronchi in 14 patients (19%). The other patients with complex CHD did not have significant airway stenoses. Composite VR images from MSCT can provide more exact airway images in relation to the surrounding cardiovascular structures and thus help in optimizing management strategies for treating CHD. (orig.)

  19. Ambient Occlusion Effects for Combined Volumes and Tubular Geometry

    KAUST Repository

    Schott, M.; Martin, T.; Grosset, A. V. P.; Smith, S. T.; Hansen, C. D.

    2013-01-01

    This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations combining both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The algorithm extends the recently presented directional occlusion shading model to allow the rendering of these geometric shapes in combination with a context-providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface-deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed.

  1. Preoperative evaluation of living renal donors: value of contrast-enhanced 3D magnetic resonance angiography and comparison of three rendering algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Fink, C. [Abteilung Radiologische Diagnostik, Radiologische Universitaetsklinik Heidelberg, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany); Abteilung Onkologische Diagnostik und Therapie, Forschungsschwerpunkt Radiologische Diagnostik und Therapie, Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Hallscheidt, P.J.; Hosch, W.P.; Kauffmann, G.W.; Duex, M. [Abteilung Radiologische Diagnostik, Radiologische Universitaetsklinik Heidelberg, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany); Ott, R.C.; Wiesel, M. [Abteilung Urologie und Poliklinik, Chirurgische Universitaetsklinik Heidelberg, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany)

    2003-04-01

    The aim of this study was to assess the value of contrast-enhanced three-dimensional MR angiography (CE 3D MRA) in the preoperative assessment of potential living renal donors, and to compare the accuracy for the depiction of the vascular anatomy using three different rendering algorithms. Twenty-three potential living renal donors were examined with CE 3D MRA (TE/TR=1.3 ms/3.7 ms, field of view 260-320 x 350 mm, 384-448 x 512 matrix, slab thickness 9.4 cm, 72 partitions, section thickness 1.3 mm, scan time 24 s, 0.1 mmol/kg body weight gadobenate dimeglumine). Magnetic resonance angiography data sets were processed with maximum intensity projection (MIP), volume rendering (VR), and shaded-surface display (SSD) algorithms. The image analysis was performed independently by three MR-experienced radiologists recording the number of renal arteries, the presence of early branching or vascular pathology. The combination of digital subtraction angiography (DSA) and intraoperative findings served as the gold standard for the image analysis. In total, 52 renal arteries were correspondingly observed in 23 patients at DSA and surgery. Other findings were 3 cases of early branching of the renal arteries, 4 cases of arterial stenosis and 1 case of bilateral fibromuscular dysplasia. With MRA source data all 52 renal arteries were correctly identified by all readers, compared with 51 (98.1%), 51-52 (98.1-100%) and 49-50 renal arteries (94.2-96.2%) with the MIP, VR and SSD projections, respectively. Similarly, the sensitivity, specificity and accuracy was highest with the MRA source data followed by MIP, VR and SSD. Time requirements were lowest for the MIP reconstructions and highest for the VR reconstructions. Contrast-enhanced 3D MRA is a reliable, non-invasive tool for the preoperative evaluation of potential living renal donors. Maximum intensity projection is favourable for the processing of 3D MRA data, as it has minimal time and computational requirements, while having
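
    The finding above that maximum intensity projection has the lowest time and computational requirements follows from how simple the operation is: each output pixel keeps only the brightest voxel along its ray. A minimal sketch on a synthetic volume (the dimensions echo the 72-partition acquisition, but the data are random, not MRA):

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.random((72, 256, 256))   # partitions x rows x cols, synthetic

# Maximum intensity projection along the slab axis:
# one max-reduction per ray, no compositing or surface extraction.
mip = volume.max(axis=0)
```

    Volume rendering and shaded-surface display, by contrast, require per-ray compositing or an explicit surface extraction pass, which is why their reconstruction times were higher.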

  3. Visualization of plasma collision phenomenon by particle based rendering

    International Nuclear Information System (INIS)

    Yamamoto, Takeshi; Takagishi, Hironori; Hasegawa, Kyoko; Nakata, Susumu; Tanaka, Satoshi; Tanaka, Kazuo

    2012-01-01

    In this paper, we visualize plasma collision phenomena based on XYT-space (space and time) volume data to support research in plasma physics. We create 3D volume data in XYT-space by piling up a time series of XY-plane photographic images taken during the experiment. As a result, we can visualize the entire time behavior of the plasma plume in one still image. In addition, we adopt 'fused' visualization based on the particle-based rendering technique. Using that technique, we can easily fuse volume renderings of different materials and compare the physics of different elements in flexible ways. We also propose a method to generate pseudo-3D images from pictures shot by ICCD cameras from two perspectives, above and to the side. (author)

  4. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  5. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    Science.gov (United States)

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

    Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate the potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthetic characteristics to restore various functions such as reading, object and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance, but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current electrode arrays is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, the viewing distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was based not on the brightness of the image pixels but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environment were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. These results show that low resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate
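
    The control rendering described above (resizing the image onto the electrode array according to average pixel brightness) amounts to a block average down to the 15 × 18 grid. A sketch under that reading, with the even-tiling crop as an added assumption:

```python
import numpy as np

def control_rendering(image, rows=15, cols=18):
    """Downsample a grayscale image to a simulated electrode array
    by averaging pixel brightness over each electrode's block."""
    h, w = image.shape
    # Crop so the image tiles evenly into rows x cols blocks.
    h2, w2 = h - h % rows, w - w % cols
    blocks = image[:h2, :w2].reshape(rows, h2 // rows, cols, w2 // cols)
    return blocks.mean(axis=(1, 3))
```

    The distance-based strategy would feed a depth map instead of `image` into the same reduction; the wireframe strategy would feed an edge map.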

  6. Free-viewpoint depth image based rendering

    NARCIS (Netherlands)

    Zinger, S.; Do, Q.L.; With, de P.H.N.

    2010-01-01

    In 3D TV research, one approach is to employ multiple cameras for creating a 3D multi-view signal with the aim to make interactive free-viewpoint selection possible in 3D TV media. This paper explores a new rendering algorithm that enables to compute a free-viewpoint between two reference views from

  7. Realistic Real-Time Outdoor Rendering in Augmented Reality

    Science.gov (United States)

    Kolivand, Hoshang; Sunar, Mohd Shahrizal

    2014-01-01

    Realistic rendering techniques for outdoor Augmented Reality (AR) have been an attractive topic over the last two decades, considering the sizeable number of publications in computer graphics. Realistic virtual objects in outdoor AR rendering systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are restricted to non-real-time rendering. However, the problem remains, especially in outdoor rendering. This paper proposes a new technique to achieve realistic real-time outdoor rendering, taking into account the interaction between sky colours and objects in AR systems with respect to shadows at any specific location, date, and time. The approach involves three main phases, which cover different outdoor AR rendering requirements. First, the sky colour is generated with respect to the position of the sun. The second phase involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique significantly improves the realism of real-time outdoor AR rendering, thus addressing the realism problem of AR systems. PMID:25268480

  9. Single-dose volume regulation algorithm for a gas-compensated intrathecal infusion pump.

    Science.gov (United States)

    Nam, Kyoung Won; Kim, Kwang Gi; Sung, Mun Hyun; Choi, Seong Wook; Kim, Dae Hyun; Jo, Yung Ho

    2011-01-01

    The internal pressure of the medication reservoir of a gas-compensated intrathecal infusion pump decreases as medication is discharged, and these discharge-induced pressure drops can reduce the volume of medication delivered. To prevent such reductions, the discharged volumes must be adjusted to maintain the required dosage levels. In this study, the authors developed an automatic control algorithm that regulates single-dose volumes for an intrathecal infusion pump developed by the Korean National Cancer Center. The proposed algorithm estimates the amount of medication remaining and adjusts the control parameters automatically to maintain single-dose volumes at predetermined levels. Experimental results demonstrated that the proposed algorithm can regulate mean single-dose volumes to within 98% of the predetermined levels. © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
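
    The abstract does not give the control law, but the compensation idea can be sketched under simple assumptions: if the gas chamber obeys Boyle's law, its pressure falls as the chamber expands into the volume vacated by discharged medication, and a valve-timed discharge can be lengthened in inverse proportion to keep each dose constant. Every name and the ideal-gas model below are illustrative assumptions, not the paper's algorithm:

```python
def valve_open_time(base_time, initial_pressure, gas_volume, discharged_total):
    """Scale the valve-open time to compensate for reservoir pressure drop.
    Boyle's law: P1 * V1 = P2 * V2, with the gas chamber expanding by the
    total volume of medication discharged so far. Lower driving pressure
    means a proportionally longer discharge to deliver the same dose."""
    current_pressure = (initial_pressure * gas_volume
                        / (gas_volume + discharged_total))
    return base_time * initial_pressure / current_pressure
```

    The estimate of medication remaining mentioned in the abstract would supply `discharged_total` here.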

  10. A kinesthetic washout filter for force-feedback rendering.

    Science.gov (United States)

    Danieau, Fabien; Lecuyer, Anatole; Guillotel, Philippe; Fleureau, Julien; Mollet, Nicolas; Christie, Marc

    2015-01-01

    Today, haptic feedback can be designed and associated with audiovisual content (haptic-audiovisuals, or HAV). Although there are multiple means of creating individual haptic effects, the issue of how to properly adapt such effects to force-feedback devices has not been addressed and remains mostly a manual endeavor. We propose a new approach for the haptic rendering of HAV, based on a washout filter for force-feedback devices. A body model and an inverse kinematics algorithm simulate the user's kinesthetic perception. The haptic rendering is then adapted to handle transitions between haptic effects and to optimize the amplitude of effects with respect to the device's capabilities. Results of a user study show that this new haptic rendering can successfully improve the HAV experience.

  11. Three-dimensional volume rendering of tibiofibular joint space and quantitative analysis of change in volume due to tibiofibular syndesmosis diastases

    International Nuclear Information System (INIS)

    Taser, F.; Shafiq, Q.; Ebraheim, N.A.

    2006-01-01

    The diagnosis of ankle syndesmosis injuries is made using various imaging techniques. The present study was undertaken to examine whether three-dimensional reconstruction of axial CT images and calculation of the volume of the tibiofibular joint space enhance the sensitivity of diastasis diagnosis. Six adult cadaveric ankle specimens were used for spiral CT-scan assessment of the tibiofibular syndesmosis. After the specimens were dissected, external fixation was performed and diastases of 1, 2, and 3 mm were simulated by a precalibrated device. Helical CT scans were obtained with 1.0-mm slice thickness. The data were transferred to the computer software AcquariusNET. The contours of the tibiofibular syndesmosis joint space were then outlined on each axial CT slice, and the slices were stacked using the computer software AutoCAD 2005, according to the spatial arrangement and geometrical coordinates of each slice, to produce a three-dimensional reconstruction of the joint space. The area of each slice and the volume of the entire tibiofibular joint space were calculated. The tibiofibular joint space at the 10th-mm slice level was also measured on axial CT scan images at normal, 1-, 2- and 3-mm joint space diastases. The three-dimensional volume rendering of the tibiofibular syndesmosis joint space from the spiral CT data demonstrated the shape of the joint space and was found to be a sensitive method for calculating joint space volume. We found that a 1-mm diastasis (from normal to 1 mm) increases the joint space volume by approximately 43%, while from 1 to 3 mm there is about a 20% increase for each additional millimetre. Volume calculation using this method can be performed in cases of syndesmotic instability after ankle injuries and for preoperative and postoperative evaluation of the integrity of the tibiofibular syndesmosis. (orig.)
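
    The volume calculation described above (per-slice areas stacked at the 1.0-mm slice spacing) is a simple summation; the percentage changes follow directly. The area values below are illustrative placeholders, not measurements from the study:

```python
def joint_space_volume(slice_areas_mm2, slice_thickness_mm=1.0):
    """Approximate the joint-space volume by summing per-slice
    cross-sectional areas times the CT slice thickness."""
    return sum(slice_areas_mm2) * slice_thickness_mm

# Hypothetical areas: a uniform joint space, and the same space after
# a 1-mm diastasis widens every slice's outline by the reported ~43%.
normal = joint_space_volume([20.0] * 10)
diastasis_1mm = joint_space_volume([28.6] * 10)
percent_increase = 100 * (diastasis_1mm - normal) / normal
```
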

  12. 3D reconstruction from X-ray fluoroscopy for clinical veterinary medicine using differential volume rendering

    International Nuclear Information System (INIS)

    Khongsomboon, K.; Hamamoto, Kazuhiko; Kondo, Shozo

    2007-01-01

    3D reconstruction from ordinary X-ray equipment, rather than CT or MRI, is required in clinical veterinary medicine. The authors have previously proposed a 3D reconstruction technique from X-ray photographs to present bone structure. Although that reconstruction is useful for veterinary medicine, the technique has two problems: one concerns X-ray exposure and the other the data acquisition process. An X-ray modality that is not specialized but can solve both problems is X-ray fluoroscopy. Therefore, in this paper, we propose a method for 3D reconstruction from X-ray fluoroscopy for clinical veterinary medicine. Fluoroscopy is usually used to observe the movement of an organ, or to identify its position for surgery, using weak X-ray intensity. Since fluoroscopy can output the observed result as a movie, the two problems caused by the use of X-ray photographs can be solved. However, a new problem arises from the weak X-ray intensity: although fluoroscopy can present information on soft tissues as well as bone structure, the contrast is very low and some soft tissues are very difficult to recognize. It would be very useful to observe not only bone structure but also soft tissues clearly with ordinary X-ray equipment in clinical veterinary medicine. To solve this problem, this paper proposes a new method of determining opacity in the volume rendering process. The opacity is determined according to the 3D differential coefficient of the 3D reconstruction. This differential volume rendering can present a 3D structure image of multiple organs volumetrically and clearly for clinical veterinary medicine. This paper shows the results of simulations and an experimental investigation of a small dog, with evaluation by veterinarians. (author)
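
    Deriving opacity from the 3D differential coefficient, as described above, can be read as a gradient-magnitude transfer function: tissue boundaries, where intensity changes sharply, become opaque, while homogeneous low-contrast regions stay transparent. A sketch under that reading (the normalization and scale factor are illustrative assumptions):

```python
import numpy as np

def gradient_opacity(volume, scale=1.0):
    """Assign per-voxel opacity from the magnitude of the volume's
    3D gradient, so boundaries between organs become visible even
    when absolute contrast is low."""
    gz, gy, gx = np.gradient(volume.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    return np.clip(scale * mag / (mag.max() + 1e-12), 0.0, 1.0)
```
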

  13. Signal Processing Implementation and Comparison of Automotive Spatial Sound Rendering Strategies

    Directory of Open Access Journals (Sweden)

    Bai, Mingsian R.

    2009-01-01

    Design and implementation strategies for spatial sound rendering are investigated in this paper for automotive scenarios. Six design methods are implemented for various rendering modes with different numbers of passengers. Specifically, downmixing algorithms aimed at balancing the front and back reproductions are developed for the 5.1-channel input. The other five algorithms, based on inverse filtering, are implemented in two approaches. The first approach utilizes binaural Head-Related Transfer Functions (HRTFs) measured in the car interior, whereas the second approach, named the point-receiver model, targets a point receiver positioned at the center of the passenger's head. The proposed processing algorithms were compared via objective and subjective experiments under various listening conditions. Test data were processed by the multivariate analysis of variance (MANOVA) method and the least significant difference (Fisher's LSD) method as a post hoc test to establish the statistical significance of the experimental data. The results indicate that inverse filtering algorithms are preferred for the single-passenger mode. For the multipassenger mode, however, downmixing algorithms generally outperformed the other processing techniques.

  14. Three-dimensional reconstructions of the orbital floor by volume-rendering of multidetector-row CT data

    International Nuclear Information System (INIS)

    Yoshikawa, Tetsuya; Miyajima, Akira; Fujita, Yuko; Yamada, Kazuo

    2011-01-01

    The advent of 3D-CT has made the evaluation of complicated facial fractures much easier than before. However, its use in injuries involving the orbital floor has been limited by the difficulty of visualizing the thin bony structures, given artifacts caused by the partial volume effect. Nevertheless, high-technology machines such as multidetector-row CT (MDCT) and new-generation software have improved the quality of 3D imaging, and this paper describes a procedure for obtaining better visualization of the orbital floor using an MDCT scanner. Forty trauma cases were subjected to MDCT: 13 with injury to the orbital floor and 27 without. All scans were performed in the standard manner at a slice thickness of 0.5 mm. 3D-CT images overlooking the orbital floor, including soft tissue, were created to minimize the pseudo-foramen artifacts produced through volume rendering. Bone deficits, fracture lines, and grafted bone were visible in the 3D images, and visualization was supported by the ready creation of stereoscopic images from MDCT volume data. Measurement of the pseudo-foramina revealed approximately half the artifacts to be less than 5 mm in diameter, suggesting the practicality of this method, without subjecting the patient to undue increases in radiation exposure, in the treatment of cases involving injury to the orbital floor. (author)

  15. An improved method of continuous LOD based on fractal theory in terrain rendering

    Science.gov (United States)

    Lin, Lan; Li, Lijun

    2007-11-01

    With the improvement of computer graphics hardware capability, real-time 3D terrain rendering has become a hot topic in visualization. To resolve the conflict between rendering speed and rendering realism, this paper presents an improved method of terrain rendering that refines the traditional continuous level-of-detail technique using fractal theory. Rather than repeatedly operating on memory to obtain terrain models at different resolutions, the method obtains the fractal characteristic parameters of different regions according to the movement of the viewpoint. Experimental results show that the method preserves the authenticity of the landscape while increasing the speed of real-time 3D terrain rendering.

  16. Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images

    Science.gov (United States)

    Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun

    2013-01-01

    This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points where a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by over three times compared with the conventional algorithm, while image quality is well preserved. PMID:23424608
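The two ingredients of such a ray caster, taking samples where rays cross a parallel plane cluster and compositing them front to back, can be sketched as follows. This is an illustrative sketch, not the authors' code; the helper assumes the plane cluster is perpendicular to the z axis.

```python
def composite_ray(colors, opacities):
    """Front-to-back compositing of the samples collected along one ray."""
    color, alpha = 0.0, 0.0
    for c, a in zip(colors, opacities):
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:          # early ray termination
            break
    return color, alpha

def plane_sample_positions(origin, direction, plane_zs):
    """Ray parameters t at which origin + t*direction crosses the planes z = z_k.
    Assumes the planes are perpendicular to the z axis and direction[2] != 0."""
    ts = [(z - origin[2]) / direction[2] for z in plane_zs]
    return [t for t in ts if t >= 0.0]
```

Because the sample positions come from plane intersections, they can be shared by all rays instead of being recomputed per ray, which is where the speedup comes from.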

  17. The effect of depth compression on multiview rendering quality

    NARCIS (Netherlands)

    Merkle, P.; Morvan, Y.; Smolic, A.; Farin, D.S.; Mueller, K..; With, de P.H.N.; Wiegand, T.

    2010-01-01

    This paper presents a comparative study on different techniques for depth-image compression and its implications on the quality of multiview video plus depth virtual view rendering. A novel coding algorithm for depth images that concentrates on their special characteristics, namely smooth regions

  18. Virtual endoscopy and 3D volume rendering in the management of frontal sinus fractures.

    Science.gov (United States)

    Belina, Stanko; Cuk, Viseslav; Klapan, Ivica

    2009-12-01

    Frontal sinus fractures (FSF) are commonly caused by traffic accidents, assaults, industrial accidents, and gunshot wounds. Classical roentgenography has a high proportion of false-negative findings in cases of FSF and is not particularly useful in examining the severity of damage to the frontal sinus posterior table and the nasofrontal duct region. High-resolution computed tomography became unavoidable during the management of such patients, but it may produce a large quantity of 2D images. Postprocessing of datasets acquired by high-resolution computed tomography from patients with severe head trauma may offer valuable additional help in diagnostics and surgery planning. We performed virtual endoscopy (VE) and 3D volume rendering (3DVR) on high-resolution CT data acquired from a 54-year-old man with both anterior and posterior frontal sinus wall fractures in order to demonstrate the advantages and disadvantages of these methods. Data acquisition was done with a Siemens Somatom Emotion scanner, and postprocessing was performed with Syngo 2006G software. VE and 3DVR were performed in a man who suffered blunt trauma to his forehead and nose in a traffic accident. A left frontal sinus anterior wall fracture without dislocation and a fracture of the tabula interna with dislocation were found. The 3D position and orientation of the fracture lines were shown by the 3D rendering software. We concluded that VE and 3DVR can clearly display the anatomic structure of the paranasal sinuses and nasopharyngeal cavity, revealing damage to the sinus wall caused by a fracture and its relationship to surrounding anatomical structures.

  19. New reconstruction algorithm in helical-volume CT

    International Nuclear Information System (INIS)

    Toki, Y.; Rifu, T.; Aradate, H.; Hirao, Y.; Ohyama, N.

    1990-01-01

    This paper reports on helical scanning, an application of continuous-scanning CT to acquire volume data in a short time for three-dimensional study. In a helical scan, the patient couch moves during continuous-rotation scanning, and the acquired data are then processed by interpolation to synthesize a projection data set for a vertical section. But the synthesized section is not thin enough, and the image may have artifacts caused by the couch movement. A new reconstruction algorithm that helps resolve such problems has been developed and compared with the ordinary algorithm. The authors constructed a helical scan system based on the TCT-900S, which can perform 1-second rotations continuously for 30 seconds. The authors measured section thickness using both algorithms on an AAPM phantom and also compared the degree of artifacts on clinical data.
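The interpolation step can be illustrated with a toy helper (a hypothetical sketch, not the TCT-900S implementation): for a target slice position, projection data measured at the same view angle during adjacent rotations are blended linearly according to the table position.

```python
def interpolate_projection(p_before, p_after, z_before, z_after, z_target):
    """Linearly interpolate two projections measured at the same view angle but at
    different couch positions, synthesizing data for a flat slice at z_target."""
    w = (z_target - z_before) / (z_after - z_before)
    return [(1.0 - w) * a + w * b for a, b in zip(p_before, p_after)]
```

The effective slice thickness and the motion artifacts that the new algorithm targets both come from how this weighting spreads data from a range of couch positions into one synthesized section.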

  20. FluoRender: joint freehand segmentation and visualization for many-channel fluorescence data analysis.

    Science.gov (United States)

    Wan, Yong; Otsuna, Hideo; Holman, Holly A; Bagley, Brig; Ito, Masayoshi; Lewis, A Kelsey; Colasanto, Mary; Kardon, Gabrielle; Ito, Kei; Hansen, Charles

    2017-05-26

    Image segmentation and registration techniques have enabled biologists to place large amounts of volume data from fluorescence microscopy, morphed three-dimensionally, onto a common spatial frame. Existing tools built on volume visualization pipelines for single channel or red-green-blue (RGB) channels have become inadequate for the new challenges of fluorescence microscopy. For a three-dimensional atlas of the insect nervous system, hundreds of volume channels are rendered simultaneously, whereas fluorescence intensity values from each channel need to be preserved for versatile adjustment and analysis. Although several existing tools have incorporated support of multichannel data using various strategies, the lack of a flexible design has made true many-channel visualization and analysis unavailable. The most common practice for many-channel volume data presentation is still converting and rendering pseudosurfaces, which are inaccurate for both qualitative and quantitative evaluations. Here, we present an alternative design strategy that accommodates the visualization and analysis of about 100 volume channels, each of which can be interactively adjusted, selected, and segmented using freehand tools. Our multichannel visualization includes a multilevel streaming pipeline plus a triple-buffer compositing technique. Our method also preserves original fluorescence intensity values on graphics hardware, a crucial feature that allows graphics-processing-unit (GPU)-based processing for interactive data analysis, such as freehand segmentation. We have implemented the design strategies as a thorough restructuring of our original tool, FluoRender. The redesign of FluoRender not only maintains the existing multichannel capabilities for a greatly extended number of volume channels, but also enables new analysis functions for many-channel data from emerging biomedical-imaging techniques.

  1. Hybrid fur rendering: combining volumetric fur with explicit hair strands

    DEFF Research Database (Denmark)

    Andersen, Tobias Grønbeck; Falster, Viggo; Frisvad, Jeppe Revall

    2016-01-01

    Hair is typically modeled and rendered using either explicitly defined hair strand geometry or a volume texture of hair densities. Taken each on their own, these two hair representations have difficulties in the case of animal fur as it consists of very dense and thin undercoat hairs in combination...... with coarse guard hairs. Explicit hair strand geometry is not well-suited for the undercoat hairs, while volume textures are not well-suited for the guard hairs. To efficiently model and render both guard hairs and undercoat hairs, we present a hybrid technique that combines rasterization of explicitly...... defined guard hairs with ray marching of a prismatic shell volume with dynamic resolution. The latter is the key to practical combination of the two techniques, and it also enables a high degree of detail in the undercoat. We demonstrate that our hybrid technique creates a more detailed and soft fur...

  2. TransCut: interactive rendering of translucent cutouts.

    Science.gov (United States)

    Li, Dongping; Sun, Xin; Ren, Zhong; Lin, Stephen; Tong, Yiying; Guo, Baining; Zhou, Kun

    2013-03-01

    We present TransCut, a technique for interactive rendering of translucent objects undergoing fracturing and cutting operations. As the object is fractured or cut open, the user can directly examine and intuitively understand the complex translucent interior, as well as edit material properties through painting on cross sections and recombining the broken pieces—all with immediate and realistic visual feedback. This new mode of interaction with translucent volumes is made possible with two technical contributions. The first is a novel solver for the diffusion equation (DE) over a tetrahedral mesh that produces high-quality results comparable to the state-of-the-art finite element method (FEM) of Arbree et al. but at substantially higher speeds. This accuracy and efficiency are obtained by computing the discrete divergences of the diffusion equation and constructing the DE matrix using analytic formulas derived for linear finite elements. The second contribution is a multiresolution algorithm to significantly accelerate our DE solver while adapting to the frequent changes in topological structure of dynamic objects. The entire multiresolution DE solver is highly parallel and easily implemented on the GPU. We believe TransCut provides a novel visual effect for heterogeneous translucent objects undergoing fracturing and cutting operations.

  3. Tomographic image reconstruction and rendering with texture-mapping hardware

    International Nuclear Information System (INIS)

    Azevedo, S.G.; Cabral, B.K.; Foran, J.

    1994-07-01

    The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on the modern texture-mapping hardware of current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around a 600-times speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and volumetric rendering. The techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalty.
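The warp-and-sum view of backprojection can be sketched in NumPy as a CPU stand-in for what the texture hardware does per view: for each angle, every image pixel looks up its detector coordinate in the (already filtered) projection, and the results are summed. Names and the nearest-neighbor lookup are illustrative, not the paper's implementation.

```python
import numpy as np

def backproject(sinogram, angles, size):
    """Accumulate filtered projections into an image: each projection row is
    'smeared' across the image along its view angle (a texture warp + sum)."""
    img = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs, ys = xs - c, ys - c
    for proj, theta in zip(sinogram, angles):
        # detector coordinate of each pixel for this view angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + c
        idx = np.clip(np.round(t).astype(int), 0, size - 1)
        img += proj[idx]                     # nearest-neighbor "texture" lookup
    return img * np.pi / (2 * len(angles))
```

On texture hardware, the per-angle lookup and summation become a textured quad drawn with additive blending, which is where the reported speedup originates.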

  4. Drishti: a volume exploration and presentation tool

    Science.gov (United States)

    Limaye, Ajay

    2012-10-01

    Among the several rendering techniques for volumetric data, direct volume rendering is a powerful visualization tool for a wide variety of applications. This paper describes the major features of the hardware-based volume exploration and presentation tool Drishti. The word Drishti stands for vision or insight in Sanskrit, an ancient Indian language. Drishti is a cross-platform open-source volume rendering system that delivers high-quality, state-of-the-art renderings. The features in Drishti include, but are not limited to, production-quality rendering, volume sculpting, multi-resolution zooming, transfer function blending, profile generation, measurement tools, mesh generation, and stereo/anaglyph/cross-eye renderings. Ultimately, Drishti provides an intuitive and powerful interface for choreographing animations.

  5. Volume definition system for treatment planning

    International Nuclear Information System (INIS)

    Alakuijala, Jyrki; Pekkarinen, Ari; Puurunen, Harri

    1997-01-01

    Purpose: Volume definition is a difficult and time-consuming task in 3D treatment planning. We have studied a systems approach for constructing an efficient and reliable set of tools for volume definition. Our intent is to automate the definition of the body outline, air cavities, and bone volumes, and to accelerate the definition of other anatomical structures. An additional focus is on assisting in the definition of the CTV and PTV. The primary goals of this work are to cut down the time spent on contouring and to improve the accuracy of volume definition. Methods: We used the following tool categories: manual, semi-automatic, automatic, structure management, target volume definition, and visualization tools. The manual tools include mouse contouring tools with contour-editing possibilities and painting tools with a scalable circular brush and an intelligent brush. The intelligent brush adapts its shape to CT value boundaries. The semi-automatic tools consist of edge point chaining, classical 3D region growing of a single segment, and competitive volume growing of multiple segments. We tuned the volume growing function to take into account both local and global region image values, local volume homogeneity, and distance. Heuristic seeding followed by competitive volume growing finds the body outline, couch, and air automatically. The structure management tool stores ICD-O-coded structures in a database. The codes have predefined volume growing parameters and thus are able to accommodate the volume growing dissimilarity function for different volume types. The target definition tools include elliptical 3D automargin for the CTV-to-PTV transformation and target volume interpolation and extrapolation by distance transform. Both the CTV and the PTV can overlap with anatomical structures. Visualization tools show the volumes as contours or color wash overlaid on an image and display voxel rendering or translucent triangle-mesh rendering in 3D. Results: The competitive volume growing speeds up the
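The classical 3D region growing listed among the semi-automatic tools can be sketched as follows. This is a minimal single-segment version with a simple intensity tolerance standing in for the paper's richer dissimilarity function (which also weighs homogeneity and distance); the competitive multi-segment variant is not reproduced.

```python
from collections import deque

def region_grow(volume, seed, tol):
    """Grow a single segment from `seed` in a nested-list volume [z][y][x],
    accepting 6-connected neighbors within `tol` of the seed value."""
    dz, dy, dx = len(volume), len(volume[0]), len(volume[0][0])
    seed_val = volume[seed[0]][seed[1]][seed[2]]
    grown, queue = {seed}, deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for n in ((z+1,y,x), (z-1,y,x), (z,y+1,x), (z,y-1,x), (z,y,x+1), (z,y,x-1)):
            nz, ny, nx = n
            if 0 <= nz < dz and 0 <= ny < dy and 0 <= nx < dx and n not in grown:
                if abs(volume[nz][ny][nx] - seed_val) <= tol:
                    grown.add(n)
                    queue.append(n)
    return grown
```

In the competitive variant, several seeds grow simultaneously and each voxel is claimed by the segment with the lowest dissimilarity, which is what allows body outline, couch, and air to be separated in one pass.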

  6. A point-based rendering approach for real-time interaction on mobile devices

    Institute of Scientific and Technical Information of China (English)

    LIANG XiaoHui; ZHAO QinPing; HE ZhiYing; XIE Ke; LIU YuBo

    2009-01-01

    Mobile devices are an important interactive platform. Due to their limited computation, memory, display area, and energy, how to realize efficient, real-time interaction with 3D models on mobile devices is an important research topic. Considering the features of mobile devices, this paper adopts a remote rendering mode and point models, and then proposes a transmission and rendering approach for real-time interaction. First, an improved simplification algorithm based on MLS and the display resolution of mobile devices is proposed. Then, a hierarchy selection of point models and a QoS transmission control strategy are given based on the operator's area of interest, the interest degree of each object in the virtual environment, and the rendering error. These can reduce energy consumption. Finally, the rendering and interaction of point models are completed on mobile devices. The experiments show that our method is efficient.

  7. GPU Pro advanced rendering techniques

    CERN Document Server

    Engel, Wolfgang

    2010-01-01

    This book covers essential tools and techniques for programming the graphics processing unit. Brought to you by Wolfgang Engel and the same team of editors who made the ShaderX series a success, this volume covers advanced rendering techniques, engine design, GPGPU techniques, related mathematical techniques, and game postmortems. A special emphasis is placed on handheld programming to account for the increased importance of graphics on mobile devices, especially the iPhone and iPod touch. Example programs and source code can be downloaded from the book's CRC Press web page.

  8. Three-dimensional rendering of segmented object using matlab - biomed 2010.

    Science.gov (United States)

    Anderson, Jeffrey R; Barrett, Steven F

    2010-01-01

    The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux, which basically limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and to merge these capabilities with the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.

  9. An algorithm to estimate the volume of the thyroid lesions using SPECT

    International Nuclear Information System (INIS)

    Pina, Jorge Luiz Soares de; Mello, Rossana Corbo de; Rebelo, Ana Maria

    2000-01-01

    An algorithm was developed to estimate the volume of the thyroid and its functioning lesions, that is, those which capture iodine. This estimate is achieved by the use of SPECT, Single Photon Emission Computed Tomography. The algorithm was written in an extended PASCAL language subset and was adapted to run on the Siemens ICON system, a special Macintosh environment that controls tomographic image acquisition and processing. In spite of being developed for the Siemens DIACAM gamma camera, the algorithm can be easily adapted for the ECAM camera. These two camera models are among the most common ones used in nuclear medicine in Brazil nowadays. A phantom study was used to validate the algorithm; it showed that, with a threshold of 42% of the maximum pixel intensity of the images, it is possible to estimate the volume of the phantoms with an error of 10% in the range of 30 to 70 ml. (author)
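The threshold rule described above reduces to a simple voxel count, sketched here for illustration. The helper name and the `voxel_ml` parameter (the per-voxel volume in milliliters, derived from the SPECT pixel size and slice thickness) are assumptions, not the original PASCAL code.

```python
def estimate_volume(voxels, voxel_ml, threshold_frac=0.42):
    """Estimate a functioning-lesion volume from SPECT counts: keep voxels above
    a fraction of the maximum intensity and multiply by the per-voxel volume."""
    peak = max(v for sl in voxels for row in sl for v in row)
    cutoff = threshold_frac * peak
    n = sum(v > cutoff for sl in voxels for row in sl for v in row)
    return n * voxel_ml
```

The choice of 42% is the empirically validated threshold from the phantom study; a different camera or reconstruction filter would generally require re-calibrating it.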

  10. Interactive definition of transfer functions in volume rendering based on image markers

    International Nuclear Information System (INIS)

    Teistler, Michael; Nowinski, Wieslaw L.; Breiman, Richard S.; Liong, Sauw Ming; Ho, Liang Yoong; Shahab, Atif

    2007-01-01

    Objectives A user interface for transfer function (TF) definition in volume rendering (VR) was developed that allows the user to intuitively assign color and opacity to the original image intensities. This software may surpass solutions currently deployed in clinical practice by simplifying the use of TFs beyond predefined settings that are not always applicable. Materials and methods The TF definition is usually a cumbersome task that requires the user to manipulate graphical representations of the TF (e.g. trapezoids). A new method that allows the user to place markers at points of interest directly on CT and MRI images or orthogonal reformations was developed based on two-dimensional region growing and a few user-definable marker-related parameters. For each user defined image marker, a segment of the transfer function is computed. The resulting TF can also be applied to the slice image views. Results were judged subjectively. Results Each individualized TF can be defined interactively in a few simple steps. For every user interaction, immediate visual feedback is given. Clinicians who tested the application appreciated being able to directly work on familiar slice images to generate the desired 3D views. Conclusion Interactive TF definition can increase the actual utility of VR, help to understand the role of the TF with its variations, and increase the acceptance of VR as a clinical tool. (orig.)
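How a marker-derived TF segment might map image intensity to color and opacity can be sketched as follows. The triangular segment shape and the `width` parameter are hypothetical simplifications; the actual system derives each segment from 2D region growing around the marker and its user-definable parameters.

```python
def tf_from_markers(markers, width=100.0):
    """Build a transfer function from (center_intensity, rgb, peak_opacity) markers.
    Each marker contributes a triangular opacity segment around its intensity."""
    def tf(intensity):
        color, opacity = (0.0, 0.0, 0.0), 0.0
        for center, rgb, peak_opacity in markers:
            w = max(0.0, 1.0 - abs(intensity - center) / width)
            if w * peak_opacity > opacity:      # keep the dominant segment
                color, opacity = rgb, w * peak_opacity
        return color, opacity
    return tf
```

Because the function is defined per intensity value, the same TF can be applied both to the 3D volume rendering and to the slice image views, as the abstract describes.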

  11. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    Science.gov (United States)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by optimized BIMART, SMART, and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time.

  12. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    International Nuclear Information System (INIS)

    Martins, Fabio J W A; Foucaut, Jean-Marc; Stanislas, Michel; Thomas, Lionel; Azevedo, Luis F A

    2015-01-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by optimized BIMART, SMART, and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time. (paper)

  13. A Volume Clearing Algorithm for Muon Tomography

    OpenAIRE

    Mitra, D.; Day, K.; Hohlmann, M.

    2014-01-01

    The primary objective is to enhance muon-tomographic image reconstruction capability by providing distinctive information for deciding on the properties of regions or voxels within a probed volume "V" at any point of scanning: threat type, non-threat type, or not-sufficient data. An algorithm (MTclear) is being developed to ray-trace muon tracks and count how many straight tracks pass through each voxel. If a voxel "v" has a sufficient number of straight tracks (t), then "v" is ...

  14. RenderToolbox3: MATLAB tools that facilitate physically based stimulus rendering for vision research.

    Science.gov (United States)

    Heasly, Benjamin S; Cottaris, Nicolas P; Lichtman, Daniel P; Xiao, Bei; Brainard, David H

    2014-02-07

    RenderToolbox3 provides MATLAB utilities and prescribes a workflow that should be useful to researchers who want to employ graphics in the study of vision and perhaps in other endeavors as well. In particular, RenderToolbox3 facilitates rendering scene families in which various scene attributes and renderer behaviors are manipulated parametrically, enables spectral specification of object reflectance and illuminant spectra, enables the use of physically based material specifications, helps validate renderer output, and converts renderer output to physical units of radiance. This paper describes the design and functionality of the toolbox and discusses several examples that demonstrate its use. We have designed RenderToolbox3 to be portable across computer hardware and operating systems and to be free and open source (except for MATLAB itself). RenderToolbox3 is available at https://github.com/DavidBrainard/RenderToolbox3.

  15. Emotion rendering in auditory simulations of imagined walking styles

    DEFF Research Database (Denmark)

    Turchet, Luca; Rodá, Antonio

    2016-01-01

    This paper investigated how different emotional states of a walker can be rendered and recognized by means of footstep sound synthesis algorithms. In a first experiment, participants were asked to render, according to imagined walking scenarios, five emotions (aggressive, happy, neutral, sad, and tender) by manipulating the parameters of synthetic footstep sounds simulating various combinations of surface materials and shoe types. Results made it possible to identify, for the involved emotions and sound conditions, the mean values and ranges of variation of two parameters: sound level and temporal distance between consecutive steps. Results were in accordance with those reported in previous studies on real walking, suggesting that the expression of emotions in walking is independent of the real or imagined motor activity. In a second experiment participants were asked to identify the emotions...

  16. GPU PRO 3 Advanced rendering techniques

    CERN Document Server

    Engel, Wolfgang

    2012-01-01

    GPU Pro3, the third volume in the GPU Pro book series, offers practical tips and techniques for creating real-time graphics that are useful to beginners and seasoned game and graphics programmers alike. Section editors Wolfgang Engel, Christopher Oat, Carsten Dachsbacher, Wessam Bahnassi, and Sebastien St-Laurent have once again brought together a high-quality collection of cutting-edge techniques for advanced GPU programming. With contributions by more than 50 experts, GPU Pro3: Advanced Rendering Techniques covers battle-tested tips and tricks for creating interesting geometry, realistic sha

  17. [Hybrid 3-D rendering of the thorax and surface-based virtual bronchoscopy in surgical and interventional therapy control].

    Science.gov (United States)

    Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D

    2001-07-01

    The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface rendering and a volume rendering method, together with the feasibility of performing surface-based virtual endoscopy with different representation models, in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4), thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model, and a transparent shaded-surface model. The hybrid 3D visualization exploits the advantages of both the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and of the complex topographical relationships of morphological and pathological changes, without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model enables a reasonable to optimal simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for time-consuming detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.

  18. Digital representations of the real world how to capture, model, and render visual reality

    CERN Document Server

    Magnor, Marcus A; Sorkine-Hornung, Olga; Theobalt, Christian

    2015-01-01

    Create Genuine Visual Realism in Computer Graphics Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality explains how to portray visual worlds with a high degree of realism using the latest video acquisition technology, computer graphics methods, and computer vision algorithms. It explores the integration of new capture modalities, reconstruction approaches, and visual perception into the computer graphics pipeline.Understand the Entire Pipeline from Acquisition, Reconstruction, and Modeling to Realistic Rendering and ApplicationsThe book covers sensors fo

  19. Unsupervised Learning Through Randomized Algorithms for High-Volume High-Velocity Data (ULTRA-HV).

    Energy Technology Data Exchange (ETDEWEB)

    Pinar, Ali [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kolda, Tamara G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carlberg, Kevin Thomas [Wake Forest Univ., Winston-Salem, MA (United States); Ballard, Grey [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mahoney, Michael [Univ. of California, Berkeley, CA (United States)

    2018-01-01

    Through long-term investments in computing, algorithms, facilities, and instrumentation, DOE is an established leader in massive-scale, high-fidelity simulations, as well as science-leading experimentation. In both cases, DOE is generating more data than it can analyze, and the problem is intensifying quickly. The need for advanced algorithms that can automatically convert the abundance of data into a wealth of useful information by discovering hidden structures is well recognized. Such efforts, however, are hindered by the massive volume of the data and its high velocity. Here, the challenge is developing unsupervised learning methods to discover hidden structure in high-volume, high-velocity data.

  20. Adaptive statistical iterative reconstruction for volume-rendered computed tomography portovenography. Improvement of image quality

    International Nuclear Information System (INIS)

    Matsuda, Izuru; Hanaoka, Shohei; Akahane, Masaaki

    2010-01-01

    Adaptive statistical iterative reconstruction (ASIR) is a reconstruction technique for computed tomography (CT) that reduces image noise. The purpose of our study was to investigate whether ASIR improves the quality of volume-rendered (VR) CT portovenography. Institutional review board approval, with waived consent, was obtained. A total of 19 patients (12 men, 7 women; mean age 69.0 years; range 25-82 years) suspected of having liver lesions underwent three-phase enhanced CT. VR image sets were prepared with both the conventional method and ASIR. The required time to make VR images was recorded. Two radiologists performed independent qualitative evaluations of the image sets. The Wilcoxon signed-rank test was used for statistical analysis. Contrast-noise ratios (CNRs) of the portal and hepatic vein were also evaluated. Overall image quality was significantly improved by ASIR (P<0.0001 and P=0.0155 for each radiologist). ASIR enhanced CNRs of the portal and hepatic vein significantly (P<0.0001). The time required to create VR images was significantly shorter with ASIR (84.7 vs. 117.1 s; P=0.014). ASIR enhances CNRs and improves image quality in VR CT portovenography. It also shortens the time required to create liver VR CT portovenographs. (author)

  1. Time varying, multivariate volume data reduction

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, James P [Los Alamos National Laboratory]; Fout, Nathaniel [UC Davis]; Ma, Kwan-Liu [UC Davis]

    2010-01-01

    Large-scale supercomputing is revolutionizing the way science is conducted. A growing challenge, however, is understanding the massive quantities of data produced by large-scale simulations. The data, typically time-varying, multivariate, and volumetric, can occupy from hundreds of gigabytes to several terabytes of storage space. Transferring and processing volume data of such sizes is prohibitively expensive and resource intensive. Although it may not be possible to entirely alleviate these problems, data compression should be considered as part of a viable solution, especially when the primary means of data analysis is volume rendering. In this paper we present our study of multivariate compression, which exploits correlations among related variables, for volume rendering. Two configurations for multidimensional compression based on vector quantization are examined. We emphasize quality reconstruction and interactive rendering, which leads us to a solution using graphics hardware to perform on-the-fly decompression during rendering. In this paper we present a solution which addresses the need for data reduction in large supercomputing environments where data resulting from simulations occupies tremendous amounts of storage. Our solution employs a lossy encoding scheme to achieve data reduction with several options in terms of rate-distortion behavior. We focus on encoding of multiple variables together, with optional compression in space and time. The compressed volumes can be rendered directly with commodity graphics cards at interactive frame rates and rendering quality similar to that of static volume renderers. Compression results using a multivariate time-varying data set indicate that encoding multiple variables results in acceptable performance in the case of spatial and temporal encoding as compared to independent compression of variables. The relative performance of spatial vs. temporal compression is data dependent, although temporal compression has the
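    The vector-quantization idea at the heart of this record can be sketched in a few lines: train a codebook over multivariate samples, store only a small index per sample, and decode by table lookup, which is what makes on-the-fly decompression during rendering cheap. The joint two-variable encoding below mirrors the paper's "multiple variables together" configuration, but the toy data, codebook size, and naive k-means trainer are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Assign each sample to its nearest codebook vector (the lossy step)."""
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)                 # one small index per sample

def vq_train(blocks, k, iters=20, seed=0):
    """Naive k-means codebook training (a stand-in for the paper's encoder)."""
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), k, replace=False)].copy()
    for _ in range(iters):
        idx = vq_encode(blocks, codebook)
        for j in range(k):
            members = blocks[idx == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

# toy volume with two correlated variables per voxel, encoded jointly
rng = np.random.default_rng(1)
v1 = rng.random((16, 16, 16)).astype(np.float32)
vol = np.stack([v1, 0.5 * v1 + 0.1], axis=-1)    # variable 2 tracks variable 1
blocks = vol.reshape(-1, 2)                      # joint (v1, v2) samples
codebook = vq_train(blocks, k=32)
decoded = codebook[vq_encode(blocks, codebook)]  # decode = table lookup
print(decoded.shape, float(np.abs(decoded - blocks).mean()) < 0.1)  # → (4096, 2) True
```

    In the paper the lookup runs on the graphics card during rendering; here the same decode is plain array indexing, which is why correlated variables compress so well under a joint codebook.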

  2. Pyrite: A blender plugin for visualizing molecular dynamics simulations using industry-standard rendering techniques.

    Science.gov (United States)

    Rajendiran, Nivedita; Durrant, Jacob D

    2018-05-05

    Molecular dynamics (MD) simulations provide critical insights into many biological mechanisms. Programs such as VMD, Chimera, and PyMOL can produce impressive simulation visualizations, but they lack many advanced rendering algorithms common in the film and video-game industries. In contrast, the modeling program Blender includes such algorithms but cannot import MD-simulation data. MD trajectories often require many gigabytes of memory/disk space, complicating Blender import. We present Pyrite, a Blender plugin that overcomes these limitations. Pyrite allows researchers to visualize MD simulations within Blender, with full access to Blender's cutting-edge rendering techniques. We expect Pyrite-generated images to appeal to students and non-specialists alike. A copy of the plugin is available at http://durrantlab.com/pyrite/, released under the terms of the GNU General Public License Version 3. © 2017 Wiley Periodicals, Inc.

  3. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method based on Archimedes’ principle. The water displacement method is inaccurate and considered destructive. Computer vision offers an accurate and nondestructive method for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From the silhouettes, the Monte Carlo method was applied to approximate the volume of the object. The simulation results show that the algorithm produces high accuracy and precision in volume measurement.
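    The silhouette-plus-Monte-Carlo idea can be sketched directly: sample random points in a bounding box and count a point as inside the object only if its projection falls inside the silhouette of every view (the visual-hull criterion). For a self-contained check, the sketch below uses three axis-aligned views with an analytic unit-disk silhouette instead of the paper's five camera images; the resulting visual hull is the classic three-cylinder intersection, whose exact volume 8(2 − √2) ≈ 4.686 lets us verify the estimate.

```python
import numpy as np

def silhouette_mask(points_2d):
    """Hypothetical silhouette test: a unit disk stands in for the binary
    silhouette image that would be extracted from one camera view."""
    return (points_2d ** 2).sum(axis=1) <= 1.0

def monte_carlo_volume(n_samples, seed=0):
    """Estimate the visual-hull volume from three orthographic silhouettes.

    A sample counts as inside the object only if its projection lies
    inside the silhouette of every view."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n_samples, 3))   # bounding box [-1,1]^3
    inside = (silhouette_mask(pts[:, [0, 1]])            # view along z
              & silhouette_mask(pts[:, [0, 2]])          # view along y
              & silhouette_mask(pts[:, [1, 2]]))         # view along x
    box_volume = 2.0 ** 3
    return box_volume * inside.mean()

est = monte_carlo_volume(200_000)
print(f"estimated volume: {est:.3f}")   # tricylinder intersection, 8*(2-sqrt(2)) ≈ 4.686
```

    With 200,000 samples the standard error is below 0.01, so the estimate lands well within 0.1 of the exact value; accuracy scales as 1/√n, which is why the paper reports high precision from modest sampling.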

  4. Enabling Real-Time Volume Rendering of Functional Magnetic Resonance Imaging on an iOS Device.

    Science.gov (United States)

    Holub, Joseph; Winer, Eliot

    2017-12-01

    Powerful non-invasive imaging technologies like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used daily by medical professionals to diagnose and treat patients. While 2D slice viewers have long been the standard, many tools allowing 3D representations of digital medical data are now available. The newest imaging advancement, functional MRI (fMRI) technology, has changed medical imaging from viewing static to dynamic physiology (4D) over time, particularly to study brain activity. Add this to the rapid adoption of mobile devices for everyday work and the need to visualize fMRI data on tablets or smartphones arises. However, there are few mobile tools available to visualize 3D MRI data, let alone 4D fMRI data. Building volume rendering tools on mobile devices to visualize 3D and 4D medical data is challenging given the limited computational power of the devices. This paper describes research that explored the feasibility of performing real-time 3D and 4D volume raycasting on a tablet device. The prototype application was tested on a 9.7" iPad Pro using two different fMRI datasets of brain activity. The results show that mobile raycasting is able to achieve between 20 and 40 frames per second for traditional 3D datasets, depending on the sampling interval, and up to 9 frames per second for 4D data. While the prototype application did not always achieve true real-time interaction, these results clearly demonstrated that visualizing 3D and 4D digital medical data is feasible with a properly constructed software framework.

  5. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    Science.gov (United States)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: The gaze remains focused onto the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
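    The core of any SIRDS generator is a per-scanline constraint: two pixels separated by a depth-dependent parallax must share the same color. A minimal CPU sketch of that constraint follows; it is not the paper's texture-based GPU pipeline, and the eye-separation parameters are made up for illustration.

```python
import numpy as np

def autostereogram(depth, e_max=60, e_min=40, seed=0):
    """Generate a single-image random-dot stereogram from a depth map.

    depth: 2D array in [0, 1], where 1 means near; e_max/e_min are
    hypothetical far/near pixel separations."""
    rng = np.random.default_rng(seed)
    h, w = depth.shape
    img = rng.integers(0, 2, size=(h, w), dtype=np.uint8)  # random dots
    for y in range(h):
        same = np.arange(w)              # same[x] = earlier pixel x must copy
        for x in range(w):
            sep = int(round(e_max - depth[y, x] * (e_max - e_min)))
            left, right = x - sep // 2, x + sep - sep // 2
            if 0 <= left and right < w:
                same[right] = left       # link the stereo pair
        for x in range(w):               # resolve links left to right
            if same[x] != x:
                img[y, x] = img[y, same[x]]
    return img

# a raised disk in the middle of a flat plane
yy, xx = np.mgrid[0:128, 0:256]
depth = ((xx - 128) ** 2 + (yy - 64) ** 2 < 40 ** 2).astype(float)
sirds = autostereogram(depth)
print(sirds.shape)   # → (128, 256)
```

    The paper's contribution is moving exactly this pairing work onto graphics hardware via a vertex program and the texture unit, so whole pixel runs are emitted per pass instead of one pixel at a time.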

  6. Conservation of old renderings - the consolidation of rendering with loss of cohesion

    Directory of Open Access Journals (Sweden)

    Martha Tavares

    2008-01-01

    Full Text Available The study of external renderings in the scope of conservation and restoration has made great methodological, scientific, and technical advances in recent years. These renderings are important elements of the built structure, for besides serving a protective function, they often have a decorative function of great relevance for the image of the monument. The maintenance of these renderings implies the conservation of traditional constructive techniques and the use of compatible materials, as similar to the originals as possible. The main objective of this study is to define a methodology of conservative restoration using strategies for the maintenance of renderings and traditional constructive techniques. The minimum-intervention principle is maintained, as well as the use of materials compatible with the original ones. This paper describes the technique and products used for the consolidation of renderings with loss of cohesion. The testing campaign was conducted under controlled conditions, in the laboratory and in situ, in order to evaluate their efficacy for the consolidation of old renders. A set of tests is presented to evaluate the effectiveness of the process. The results are analysed and a reflection is added regarding the applicability of these techniques. Finally, the paper presents a proposal for further research.

  7. D.Vanwijnsberghe, Autour de la Madeleine Renders

    Directory of Open Access Journals (Sweden)

    Muriel Verbeeck-Boutin

    2008-10-01

    Full Text Available The Institut royal du Patrimoine artistique in Brussels, a Belgian federal institution of international repute, celebrates its sixtieth anniversary this year: an occasion to recall the prestige this institute of research, training, and dissemination of knowledge has enjoyed for decades. To mark the event, the IRPA is publishing the fourth volume of the Scientia Artis collection. Under the title Autour de la Madeleine Renders, it presents a body of research documenting...

  8. Volume Visualization and Compositing on Large-Scale Displays Using Handheld Touchscreen Interaction

    KAUST Repository

    Gastelum, Cristhopper Jacobo Armenta

    2011-07-27

    Advances in the physical sciences have progressively delivered ever increasing, already extremely large data sets to be analyzed. High-performance volume rendering has become critical for scientists to better understand the massive amounts of data to be visualized. Cluster-based rendering systems have become the baseline to achieve the power and flexibility required to perform this task. Furthermore, display arrays have become the most suitable solution to display these data sets at their natural size and resolution, which can be critical for human perception and evaluation. The work in this thesis aims at improving the scalability and usability of volume rendering systems that target visualization on display arrays. The first part deals with improving performance by introducing implementations of two parallel compositing algorithms for volume rendering: direct send and binary swap. The High quality Volume Rendering (HVR) framework has been extended to accommodate parallel compositing where previously only serial compositing was possible. The preliminary results show improvements in compositing times for direct send, even for a small number of processors. Unfortunately, binary swap shows worse behavior. This is due to the naive use of the graphics hardware blending mechanism; the expensive transfers account for the lengthy compositing times. The second part targets the development of scalable and intuitive interaction mechanisms. It introduces a new client application for multitouch tablet devices, like the Apple iPad. The main goal is to provide the HVR framework, which has been extended to use tiled displays, with a more intuitive and portable interaction mechanism that can take advantage of the new environment. The previous client is a PC application for typical desktop settings that uses a mouse and keyboard as sources of interaction. The current implementation of the client lets the user steer and
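    The essence of direct-send compositing is that each compositor owns a fixed region of the final image and blends, in depth order, the corresponding region of every renderer's partial image with the "over" operator. A toy CPU simulation follows; the horizontal-strip assignment, four-renderer setup, and premultiplied-alpha test data are invented for illustration and stand in for the thesis's GPU/network implementation.

```python
import numpy as np

def over(front, back):
    """Front-to-back 'over' operator on premultiplied RGBA arrays."""
    a_f = front[..., 3:4]
    return front + (1.0 - a_f) * back

def direct_send(partials, n_compositors):
    """Simulate direct send: each compositor owns a strip of rows and
    composites that strip of every partial image, front to back."""
    h = partials[0].shape[0]
    strips = np.array_split(np.arange(h), n_compositors)
    out = np.zeros_like(partials[0])
    for rows in strips:                      # each compositor, independently
        acc = np.zeros_like(partials[0][rows])
        for img in partials:                 # assumed front-to-back order
            acc = over(acc, img[rows])
        out[rows] = acc
    return out

rng = np.random.default_rng(0)
# four pretend-premultiplied partial RGBA images from four renderers
partials = [np.clip(rng.random((64, 64, 4)), 0, 0.5) for _ in range(4)]
final = direct_send(partials, n_compositors=4)
print(final.shape)   # → (64, 64, 4)
```

    Because the strips partition the image, the distributed result is pixel-for-pixel identical to a serial front-to-back composite; the engineering question the thesis measures is only how the strip exchange and blending cost scale with processor count.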

  9. Common crus aplasia: diagnosis by 3D volume rendering imaging using 3DFT-CISS sequence

    International Nuclear Information System (INIS)

    Kim, H.J.; Song, J.W.; Chon, K.-M.; Goh, E.-K.

    2004-01-01

    AIM: The purpose of this study was to evaluate the findings of three-dimensional (3D) volume rendering (VR) imaging in common crus aplasia (CCA) of the inner ear. MATERIALS AND METHODS: Using 3D VR imaging of temporal bone constructive interference in steady state (CISS) magnetic resonance (MR) images, we retrospectively reviewed seven inner ears of six children who were candidates for cochlear implants and who had been diagnosed with CCA. As controls, we used the same method to examine 402 inner ears of 201 patients who had no clinical symptoms or signs of sensorineural hearing loss. Temporal bone MR imaging (MRI) was performed with a 1.5 T MR machine using a CISS sequence, and VR of the inner ear was performed on a work station. Morphological image analysis was performed on rotation views of 3D VR images. RESULTS: In all seven cases, CCA was diagnosed by the absence of the common crus. The remaining superior semicircular canal (SCC) was normal in five and hypoplastic in two inner ears, while the posterior SCC was normal in all seven. One patient showed bilateral symmetrical CCA. Complicated combined anomalies were seen in the cochlea, vestibule and lateral SCC. CONCLUSION: 3D VR imaging findings with MR CISS sequence can directly diagnose CCA. This technique may be useful in delineating detailed anomalies of SCCs

  10. Transformative Rendering of Internet Resources

    Science.gov (United States)

    2012-10-01

    using either the Firefox or Google Chrome rendering engine. The rendering server then captures a screenshot of the page and creates code that positions...be compromised at web pages the hackers had built for that hacking competition to exploit that particular OS/browser configuration. During...of risk with no benefit. They include: - The rendering server is hosted on a Linux-based operating system (OS). The OS is much more secure than the

  11. FluoRender: An application of 2D image space methods for 3D and 4D confocal microscopy data visualization in neurobiology research

    KAUST Repository

    Wan, Yong; Otsuna, Hideo; Chien, Chi-Bin; Hansen, Charles

    2012-01-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data. © 2012 IEEE.

  12. FluoRender: An application of 2D image space methods for 3D and 4D confocal microscopy data visualization in neurobiology research

    KAUST Repository

    Wan, Yong

    2012-02-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data. © 2012 IEEE.

  13. Clinical Application of an Open-Source 3D Volume Rendering Software to Neurosurgical Approaches.

    Science.gov (United States)

    Fernandes de Oliveira Santos, Bruno; Silva da Costa, Marcos Devanir; Centeno, Ricardo Silva; Cavalheiro, Sergio; Antônio de Paiva Neto, Manoel; Lawton, Michael T; Chaddad-Neto, Feres

    2018-02-01

    Preoperative recognition of the anatomic individualities of each patient can help to achieve more precise and less invasive approaches. It also may help to anticipate potential complications and intraoperative difficulties. Here we describe the use, accuracy, and precision of a free tool for planning microsurgical approaches using 3-dimensional (3D) reconstructions from magnetic resonance imaging (MRI). We used the 3D volume rendering tool of a free open-source software program for 3D reconstruction of images of surgical sites obtained by MRI volumetric acquisition. We recorded anatomic reference points, such as the sulcus and gyrus, and vascularization patterns for intraoperative localization of lesions. Lesion locations were confirmed during surgery by intraoperative ultrasound and/or electrocorticography and later by postoperative MRI. Between August 2015 and September 2016, a total of 23 surgeries were performed using this technique for 9 low-grade gliomas, 7 high-grade gliomas, 4 cortical dysplasias, and 3 arteriovenous malformations. The technique helped delineate lesions with an overall accuracy of 2.6 ± 1.0 mm. 3D reconstructions were successfully performed in all patients, and images showed sulcus, gyrus, and venous patterns corresponding to the intraoperative images. All lesion areas were confirmed both intraoperatively and at the postoperative evaluation. With the technique described herein, it was possible to successfully perform 3D reconstruction of the cortical surface. This reconstruction tool may serve as an adjunct to neuronavigation systems or may be used alone when such a system is unavailable. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. State-of-the-Art in GPU-Based Large-Scale Volume Visualization

    KAUST Repository

    Beyer, Johanna

    2015-05-01

    This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. 'output-sensitive' algorithms and system designs. This leads to recent output-sensitive approaches that are 'ray-guided', 'visualization-driven' or 'display-aware'. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks: the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context: the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey. © 2015 The Eurographics Association and John Wiley & Sons Ltd.
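    The "address translation" notion the survey centers on can be pictured as a page table for volume bricks: virtual brick coordinates map to slots in a fixed-size cache, and a miss means the brick must be streamed in (evicting a stale one). The CPU sketch below is an assumed, minimal analogue of such a GPU page table, with an invented LRU policy; real systems add multi-level tables and resolution fallbacks.

```python
class BrickPageTable:
    """Map virtual brick coordinates to slots in a fixed-size brick cache,
    evicting the least-recently-used brick when the cache is full."""
    def __init__(self, n_slots):
        self.n_slots = n_slots
        self.table = {}            # (bx, by, bz) -> cache slot
        self.lru = []              # slots in recency order, oldest first

    def resolve(self, brick):
        """Return (slot, hit). On a miss the caller streams the brick in."""
        if brick in self.table:                      # translation hit
            slot = self.table[brick]
            self.lru.remove(slot)
            self.lru.append(slot)
            return slot, True
        if len(self.table) < self.n_slots:           # free slot available
            slot = len(self.table)
        else:                                        # evict the LRU brick
            slot = self.lru.pop(0)
            victim = next(k for k, v in self.table.items() if v == slot)
            del self.table[victim]
        self.table[brick] = slot
        self.lru.append(slot)
        return slot, False

pt = BrickPageTable(n_slots=2)
print(pt.resolve((0, 0, 0)))   # → (0, False): miss, loaded into slot 0
print(pt.resolve((1, 0, 0)))   # → (1, False)
print(pt.resolve((0, 0, 0)))   # → (0, True): hit
print(pt.resolve((2, 0, 0)))   # → (1, False): evicts brick (1, 0, 0)
```

    Ray-guided renderers drive `resolve` from the rays themselves, so the working set naturally shrinks to exactly the bricks visible at the output resolution.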

  15. State-of-the-Art in GPU-Based Large-Scale Volume Visualization

    KAUST Repository

    Beyer, Johanna; Hadwiger, Markus; Pfister, Hanspeter

    2015-01-01

    This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. 'output-sensitive' algorithms and system designs. This leads to recent output-sensitive approaches that are 'ray-guided', 'visualization-driven' or 'display-aware'. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks-the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context-the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey. © 2015 The Eurographics Association and John Wiley & Sons Ltd.

  16. Effectiveness of the random sequential absorption algorithm in the analysis of volume elements with nanoplatelets

    DEFF Research Database (Denmark)

    Pontefisso, Alessandro; Zappalorto, Michele; Quaresimin, Marino

    2016-01-01

    In this work, a study of the Random Sequential Absorption (RSA) algorithm in the generation of nanoplatelet Volume Elements (VEs) is carried out. The effect of the algorithm input parameters on the reinforcement distribution is studied through the implementation of statistical tools, showing that the platelet distribution is systematically affected by these parameters. The consequence is that a parametric analysis of the VE input parameters may be biased by hidden differences in the filler distribution. The same statistical tools used in the analysis are implemented in a modified RSA algorithm...

  17. Sketchy Rendering for Information Visualization

    NARCIS (Netherlands)

    Wood, Jo; Isenberg, Petra; Isenberg, Tobias; Dykes, Jason; Boukhelifa, Nadia; Slingsby, Aidan

    2012-01-01

    We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These

  18. NUMERICAL ALGORITHMS AT NON-ZERO CHEMICAL POTENTIAL. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP, VOLUME 19

    International Nuclear Information System (INIS)

    Blum, T.; Creutz, M.

    1999-01-01

    The RIKEN BNL Research Center hosted its 19th workshop April 27th through May 1, 1999. The topic was Numerical Algorithms at Non-Zero Chemical Potential. QCD at a non-zero chemical potential (non-zero density) poses a long-standing unsolved challenge for lattice gauge theory. Indeed, it is the primary unresolved issue in the fundamental formulation of lattice gauge theory. The chemical potential renders conventional lattice actions complex, practically excluding the usual Monte Carlo techniques, which rely on a positive definite measure for the partition function. This 'sign' problem appears in a wide range of physical systems, ranging from strongly coupled electronic systems to QCD. The lack of a viable numerical technique at non-zero density is particularly acute since new exotic 'color superconducting' phases of quark matter have recently been predicted in model calculations. A first-principles confirmation of the phase diagram is desirable since experimental verification is not expected soon. At the workshop several proposals for new algorithms were made: cluster algorithms, direct simulation of Grassmann variables, and a bosonization of the fermion determinant. All generated considerable discussion and seem worthy of continued investigation. Several interesting results using conventional algorithms were also presented: condensates in four-fermion models, SU(2) gauge theory in fundamental and adjoint representations, and lessons learned from strong coupling, non-zero temperature, and heavy quarks applied to non-zero density simulations

  19. Vertex shading of the three-dimensional model based on ray-tracing algorithm

    Science.gov (United States)

    Hu, Xiaoming; Sang, Xinzhu; Xing, Shujun; Yan, Binbin; Wang, Kuiru; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Ray tracing is one of the research hotspots in photorealistic graphics and an important light-and-shadow technique in many industries that work with three-dimensional (3D) content, such as aerospace, games, and video. Unlike the traditional method of pixel shading based on ray tracing, a novel ray tracing algorithm is presented to color and render the vertices of the 3D model directly. Rendering results depend on the degree of subdivision of the 3D model. A good light-and-shade effect is achieved by using a quad-tree data structure to adaptively subdivide a triangle according to the brightness difference of its vertices. The uniform grid algorithm is adopted to improve rendering efficiency, and the rendering time is independent of the screen resolution. In theory, as long as the subdivision of a model is fine enough, effects equal to those of per-pixel shading will be obtained. In practice, the application can strike a compromise between efficiency and quality.
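    The adaptive-subdivision step described above can be sketched independently of the ray tracer: split a triangle into four via edge midpoints whenever the shade difference between its vertices exceeds a threshold, and recurse. The brightness callable, threshold, and depth cap below are illustrative assumptions standing in for the paper's ray-traced vertex shades.

```python
def subdivide(tri, brightness, threshold, depth=0, max_depth=6):
    """Split a triangle into four via edge midpoints while the brightness
    difference between its vertices exceeds `threshold` (quad-tree style).

    tri: three (x, y, z) vertices; brightness: callable standing in for
    the ray-traced shade computed at a vertex."""
    shades = [brightness(v) for v in tri]
    if depth >= max_depth or max(shades) - min(shades) <= threshold:
        return [tri]                         # uniform enough: keep as one patch
    mid = lambda p, q: tuple((a + b) / 2 for a, b in zip(p, q))
    v0, v1, v2 = tri
    m01, m12, m20 = mid(v0, v1), mid(v1, v2), mid(v2, v0)
    out = []
    for child in ((v0, m01, m20), (m01, v1, m12),
                  (m20, m12, v2), (m01, m12, m20)):
        out += subdivide(child, brightness, threshold, depth + 1, max_depth)
    return out

# toy shade: a sharp highlight edge near x = 0.1 forces local refinement
shade = lambda v: 1.0 if v[0] < 0.1 else 0.0
tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
patches = subdivide(tri, shade, threshold=0.5)
print(len(patches))
```

    Only triangles straddling the highlight edge keep splitting, so refinement concentrates where the shading varies, which is exactly why vertex shading can approach per-pixel quality at a fraction of the cost.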

  20. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
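    The combination the abstract describes, stratified sampling plus importance sampling merged by multiple importance sampling, can be shown on a one-dimensional toy integral. The balance heuristic weights each sample by its strategy's share of the total sampling density; the integrand and the linear-ramp importance pdf below are invented stand-ins for the occlusion integral.

```python
import random

def f(x):                        # toy integrand; exact integral on [0, 1] is 1/3
    return x * x

def mis_estimate(n, seed=0):
    """Combine a stratified uniform strategy and an importance strategy
    (pdf proportional to x) using the balance heuristic."""
    rng = random.Random(seed)
    p_u = lambda x: 1.0                      # uniform pdf on [0, 1]
    p_i = lambda x: 2.0 * x                  # linear-ramp importance pdf
    w = lambda pa, pb: pa / (pa + pb)        # balance heuristic, equal counts
    total = 0.0
    for i in range(n):
        xu = (i + rng.random()) / n          # stratified uniform sample
        total += w(p_u(xu), p_i(xu)) * f(xu) / p_u(xu)
        xi = (1.0 - rng.random()) ** 0.5     # inverse-CDF sample from p_i
        total += w(p_i(xi), p_u(xi)) * f(xi) / p_i(xi)
    return total / n

print(mis_estimate(10_000))   # close to the exact value 1/3
```

    The weights of the two strategies sum to one at every point, so the combined estimator stays unbiased while inheriting the lower variance of whichever strategy suits each region: the same reason MIS lets the SSAO pass get away with far fewer samples.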

  1. Mathematical models for volume rendering and neutron transport

    International Nuclear Information System (INIS)

    Max, N.

    1994-09-01

    This paper reviews several different models for light interaction with volume densities of absorbing, glowing, reflecting, or scattering material. They include absorption only, glow only, glow and absorption combined, single scattering of external illumination, and multiple scattering. The models are derived from differential equations, and illustrated on a data set representing a cloud. They are related to corresponding models in neutron transport. The multiple scattering model uses an efficient method to propagate the radiation which does not suffer from the ray effect
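    The glow-and-absorption model reviewed here discretizes, per ray, into the familiar front-to-back compositing loop. A minimal sketch, checked against the closed-form solution for a homogeneous medium (constant emission c and extinction sigma give radiance c(1 − e^(−σL)) and transmittance e^(−σL)):

```python
import math

def composite_ray(colors, sigmas, dt):
    """Discretize the emission-absorption volume rendering integral along
    one ray: front-to-back compositing with accumulated transmittance T."""
    C, T = 0.0, 1.0
    for c, s in zip(colors, sigmas):
        alpha = 1.0 - math.exp(-s * dt)   # opacity of this ray segment
        C += T * alpha * c                # glow, attenuated by material in front
        T *= 1.0 - alpha                  # light surviving to the next segment
    return C, T

# homogeneous medium: the discrete sum reproduces the closed-form answer
sigma, c, L, n = 2.0, 1.0, 1.0, 1000
C, T = composite_ray([c] * n, [sigma] * n, L / n)
print(round(C, 4), round(T, 4))   # → 0.8647 0.1353, i.e. 1-e^-2 and e^-2
```

    Using `1 - exp(-s*dt)` as the segment opacity makes the sum telescope exactly for piecewise-constant data; scattering models add a source term per segment but keep the same transmittance bookkeeping.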

  2. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    Science.gov (United States)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
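    The finding that only 3-5 percent of the standard Roe dissipation should be retained can be illustrated on a much simpler problem than LES: a one-dimensional linear-advection finite-volume scheme whose interface flux is a central average plus a tunable fraction `eps` of the Roe (upwind) dissipation. The periodic toy setup below is an invented stand-in for the paper's compressible solver.

```python
import numpy as np

def advect_step(u, a, dt, dx, eps):
    """One finite-volume Euler step for u_t + a u_x = 0. The interface flux
    blends a central average with a fraction `eps` of Roe-type upwind
    dissipation; the paper's LES result corresponds to eps ≈ 0.03-0.05."""
    uL, uR = u, np.roll(u, -1)                           # periodic domain
    flux = 0.5 * a * (uL + uR) - 0.5 * eps * abs(a) * (uR - uL)
    return u - dt / dx * (flux - np.roll(flux, 1))       # conservative update

n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2.0 * np.pi * x)
for _ in range(10):
    u = advect_step(u, a=1.0, dt=0.005, dx=1.0 / n, eps=0.05)
print(float(np.abs(u).max()))
```

    With `eps=1.0` this reduces to the full Roe/upwind flux, which visibly damps the wave; at `eps=0.05` the amplitude is nearly preserved over the short run, mirroring why excess upwind dissipation degrades resolved turbulence.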

  3. Iterative algorithm for the volume integral method for magnetostatics problems

    International Nuclear Information System (INIS)

    Pasciak, J.E.

    1980-11-01

    Volume integral methods for solving nonlinear magnetostatics problems are considered in this paper. The integral method is discretized by a Galerkin technique. Estimates are given which show that the linearized problems are well conditioned and hence easily solved using iterative techniques. Comparisons of iterative algorithms with the elimination method of GFUN3D show that the iterative method gives an order of magnitude improvement in computational time as well as memory requirements for large problems. Computational experiments for a test problem as well as a double-layer dipole magnet are given. Error estimates for the linearized problem are also derived
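    The claim that well-conditioned linearized systems are "easily solved using iterative techniques" can be illustrated with plain conjugate gradients, whose iteration count grows only with the square root of the condition number. The synthetic SPD matrix below is invented for demonstration; the paper's actual Galerkin matrices are not reproduced.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradients for SPD systems; converges in few
    iterations when, as the abstract argues, the system is well conditioned."""
    x = np.zeros_like(b)
    r = b - A @ x                # initial residual
    p = r.copy()                 # first search direction
    rs = r @ r
    for it in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)    # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new ** 0.5 < tol:
            return x, it + 1
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

rng = np.random.default_rng(0)
M = rng.random((50, 50))
A = M @ M.T + 50.0 * np.eye(50)   # well-conditioned SPD test matrix
b = rng.random(50)
x, iters = conjugate_gradient(A, b)
print(iters, float(np.linalg.norm(A @ x - b)))
```

    Memory is the other advantage the abstract cites: the iteration needs only matrix-vector products and a few vectors, whereas elimination must store and factor the full dense system.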

  4. CT portography by multidetector helical CT. Comparison of three rendering models

    International Nuclear Information System (INIS)

    Nakayama, Yoshiharu; Imuta, Masanori; Funama, Yoshinori; Kadota, Masataka; Utsunomiya, Daisuke; Shiraishi, Shinya; Hayashida, Yoshiko; Yamashita, Yasuyuki

    2002-01-01

The purpose of this study was to assess the value of multidetector CT portography in visualizing varices and portosystemic collaterals in comparison with conventional portography, and to compare the visualizations obtained by three rendering models (volume rendering, VR; maximum intensity projection, MIP; and shaded surface display, SSD). A total of 46 patients with portal hypertension were examined by CT and conventional portography for evaluation of portosystemic collaterals. CT portography was performed with a multidetector CT (MD-CT) scanner with a slice thickness of 2.5 mm and table feed of 7.5 mm. Three types of CT portographic models were generated and compared with transarterial portography. Among 46 patients, 48 collaterals were identified on CT transverse images, while 38 collaterals were detected on transarterial portography. Forty-four of 48 collaterals identified on CT transverse images were visualized with the MIP model, while 34 and 29 collaterals were visualized by the VR and SSD methods, respectively. The average CT value for the portal vein and varices was 198 HU with data acquisition 50 sec after contrast material injection. CT portography by multidetector CT provides excellent images in the visualization of portosystemic collaterals. The images of collaterals produced by MD-CT are superior to those of transarterial portography. Among the three rendering techniques, MIP provides the best visualization of portosystemic collaterals. (author)
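Of the three rendering models compared, MIP is the simplest to sketch: a maximum intensity projection collapses the volume along the viewing axis by keeping the brightest voxel per ray, which is why contrast-enhanced vessels stand out. A minimal numpy illustration on an invented synthetic volume (the 198 HU "vessel" value matches the mean enhancement reported in the abstract; real CT MIPs also involve resampling and windowing):

```python
import numpy as np

# Synthetic CT-like volume: soft-tissue background plus one bright "vessel".
rng = np.random.default_rng(1)
volume = rng.normal(40.0, 10.0, size=(64, 64, 64))   # background (HU)
volume[30:34, 10:54, 30:34] = 198.0                  # vessel at ~198 HU

# Maximum intensity projection along the first axis: brightest voxel per ray.
mip = volume.max(axis=0)
print(mip.shape, mip.max())
```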

  5. CT portography by multidetector helical CT. Comparison of three rendering models

    Energy Technology Data Exchange (ETDEWEB)

    Nakayama, Yoshiharu; Imuta, Masanori; Funama, Yoshinori; Kadota, Masataka; Utsunomiya, Daisuke; Shiraishi, Shinya; Hayashida, Yoshiko; Yamashita, Yasuyuki [Kumamoto Univ. (Japan). School of Medicine

    2002-12-01

The purpose of this study was to assess the value of multidetector CT portography in visualizing varices and portosystemic collaterals in comparison with conventional portography, and to compare the visualizations obtained by three rendering models (volume rendering, VR; maximum intensity projection, MIP; and shaded surface display, SSD). A total of 46 patients with portal hypertension were examined by CT and conventional portography for evaluation of portosystemic collaterals. CT portography was performed with a multidetector CT (MD-CT) scanner with a slice thickness of 2.5 mm and table feed of 7.5 mm. Three types of CT portographic models were generated and compared with transarterial portography. Among 46 patients, 48 collaterals were identified on CT transverse images, while 38 collaterals were detected on transarterial portography. Forty-four of 48 collaterals identified on CT transverse images were visualized with the MIP model, while 34 and 29 collaterals were visualized by the VR and SSD methods, respectively. The average CT value for the portal vein and varices was 198 HU with data acquisition 50 sec after contrast material injection. CT portography by multidetector CT provides excellent images in the visualization of portosystemic collaterals. The images of collaterals produced by MD-CT are superior to those of transarterial portography. Among the three rendering techniques, MIP provides the best visualization of portosystemic collaterals. (author)

  6. Volume Ray Casting with Peak Finding and Differential Sampling

    KAUST Repository

    Knoll, A.; Hijazi, Y.; Westerteiger, R.; Schott, M.; Hansen, C.; Hagen, H.

    2009-01-01

    classification. In this paper, we introduce a method for rendering such features by explicitly solving for isovalues within the volume rendering integral. In addition, we present a sampling strategy inspired by ray differentials that automatically matches
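A hedged sketch of what "explicitly solving for isovalues" along a ray can mean in its simplest form: sample a scalar field along the ray, bracket the first isovalue crossing, and refine the hit with linear interpolation. The distance-field example and sample counts are invented; this is an illustration of the idea, not the authors' peak-finding method:

```python
import numpy as np

def find_isosurface_hit(field, origin, direction, isovalue, t_max=2.0, n=200):
    # Sample the field along the ray, then locate the first sign change
    ts = np.linspace(0.0, t_max, n)
    vals = np.array([field(origin + t * direction) for t in ts])
    signs = vals - isovalue
    idx = np.where(np.diff(np.sign(signs)) != 0)[0]
    if idx.size == 0:
        return None                       # ray misses the isosurface
    i = idx[0]
    # Refine the bracketed crossing by linear interpolation
    t0, t1, v0, v1 = ts[i], ts[i + 1], vals[i], vals[i + 1]
    return t0 + (isovalue - v0) / (v1 - v0) * (t1 - t0)

sphere = lambda p: np.linalg.norm(p)      # distance field: isovalue 1 = unit sphere
origin = np.array([-1.5, 0.0, 0.0])
direction = np.array([1.0, 0.0, 0.0])
t_hit = find_isosurface_hit(sphere, origin, direction, isovalue=1.0)
print(t_hit)   # ray enters the unit sphere at x = -1, i.e. t = 0.5
```

A full renderer would composite the shaded isosurface hit into the volume rendering integral; here only the root-finding step is shown.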

  7. RenderGAN: Generating Realistic Labeled Data

    Directory of Open Access Journals (Sweden)

    Leon Sixt

    2018-06-01

Full Text Available Deep Convolutional Neuronal Networks (DCNNs) are showing remarkable performance on many computer vision tasks. Due to their large parameter space, they require many labeled samples when trained in a supervised setting. The costs of annotating data manually can render the use of DCNNs infeasible. We present a novel framework called RenderGAN that can generate large amounts of realistic, labeled images by combining a 3D model and the Generative Adversarial Network framework. In our approach, image augmentations (e.g., lighting, background, and detail) are learned from unlabeled data such that the generated images are strikingly realistic while preserving the labels known from the 3D model. We apply the RenderGAN framework to generate images of barcode-like markers that are attached to honeybees. Training a DCNN on data generated by the RenderGAN yields considerably better performance than training it on various baselines.

  8. Physically based rendering from theory to implementation

    CERN Document Server

    Pharr, Matt

    2010-01-01

"Physically Based Rendering, 2nd Edition" describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation. A method known as 'literate programming' combines human-readable documentation and source code into a single reference that is specifically designed to aid comprehension. The result is a stunning achievement in graphics education. Through the ideas and software in this book, you will learn to design and employ a full-featured rendering system for creating stunning imagery. This book features new sections on subsurface scattering, Metropolis light transport, precomputed light transport, multispectral rendering, and much more. It includes a companion site complete with source code for the rendering system described in the book, with support for Windows, OS X, and Linux. Code and text are tightly woven together through a unique indexing feature that lists each function, variable, and method on the page where it is first described.

  9. Virtual Whipple: preoperative surgical planning with volume-rendered MDCT images to identify arterial variants relevant to the Whipple procedure.

    Science.gov (United States)

    Brennan, Darren D; Zamboni, Giulia; Sosna, Jacob; Callery, Mark P; Vollmer, Charles M V; Raptopoulos, Vassilios D; Kruskal, Jonathan B

    2007-05-01

    The purposes of this study were to combine a thorough understanding of the technical aspects of the Whipple procedure with advanced rendering techniques by introducing a virtual Whipple procedure and to evaluate the utility of this new rendering technique in prediction of the arterial variants that cross the anticipated surgical resection plane. The virtual Whipple is a novel technique that follows the complex surgical steps in a Whipple procedure. Three-dimensional reconstructed angiographic images are used to identify arterial variants for the surgeon as part of the preoperative radiologic assessment of pancreatic and ampullary tumors.

  10. Detection of Prion Proteins and TSE Infectivity in the Rendering and Biodiesel Manufacture Processes

    Energy Technology Data Exchange (ETDEWEB)

    Brown, R.; Keller, B.; Oleschuk, R. [Queen' s University, Kingston, Ontario (Canada)

    2007-03-15

This paper addresses emerging issues related to monitoring prion proteins and TSE infectivity in the products and waste streams of rendering and biodiesel manufacture processes. Monitoring is critical to addressing the knowledge gaps identified in 'Biodiesel from Specified Risk Material Tallow: An Appraisal of TSE Risks and their Reduction' (IEA's AMF Annex XXX, 2006) that prevent comprehensive risk assessment of TSE infectivity in products and waste. The most important challenge for monitoring TSE risk is the wide variety of sample types, which are generated at different points in the rendering/biodiesel production continuum. Conventional transmissible spongiform encephalopathy (TSE) assays were developed for specified risk material (SRM) and other biological tissues. These, however, are insufficient to address the diverse sample matrices produced in rendering and biodiesel manufacture. This paper examines the sample types expected in rendering and biodiesel manufacture and the implications of applying TSE assay methods to them. The authors then discuss a sample preparation filtration method, which has not yet been applied to these sample types, but which has the potential to provide or significantly improve TSE monitoring. The main improvement will come from transfer of the prion proteins from the sample matrix to a matrix compatible with conventional and emerging bioassays. A second improvement will come from preconcentrating the prion proteins, which means transferring proteins from a larger sample volume into a smaller volume for analysis to provide greater detection sensitivity. This filtration method may also be useful for monitoring other samples, including wash waters and other waste streams, which may contain SRM, including those from abattoirs and on-farm operations. Finally, there is a discussion of emerging mass spectrometric methods, which Prusiner and others have shown to be suitable for detection and characterisation of prion proteins (Stahl

  11. Advanced Material Rendering in Blender

    Czech Academy of Sciences Publication Activity Database

    Hatka, Martin; Haindl, Michal

    2012-01-01

    Roč. 11, č. 2 (2012), s. 15-23 ISSN 1081-1451 R&D Projects: GA ČR GAP103/11/0335; GA ČR GA102/08/0593 Grant - others:CESNET(CZ) 387/2010; CESNET(CZ) 409/2011 Institutional support: RVO:67985556 Keywords : realistic material rendering * bidirectional texture function * Blender Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2013/RO/haindl-advanced material rendering in blender.pdf

  12. RenderSelect: a Cloud Broker Framework for Cloud Renderfarm Services

    OpenAIRE

    Ruby, Annette J; Aisha, Banu W; Subash, Chandran P

    2016-01-01

In 3D studios, animation scene files undergo a process called rendering, in which the 3D wireframe models are converted into 3D photorealistic images. As the rendering process is both a computationally intensive and a time-consuming task, cloud-services-based rendering in cloud render farms is gaining popularity among animators. Though cloud render farms offer many benefits, the animators hesitate to move from their traditional offline rendering to cloud services based render ...

  13. Design and implementation of three-dimension texture mapping algorithm for panoramic system based on smart platform

    Science.gov (United States)

    Liu, Zhi; Zhou, Baotong; Zhang, Changnian

    2017-03-01

Vehicle-mounted panoramic systems are important safety-assistance equipment for driving. However, traditional systems only render a fixed top-down perspective view of a limited field of view, which may pose a potential safety hazard. In this paper, a texture mapping algorithm for a 3D vehicle-mounted panoramic system is introduced, and an implementation of the algorithm utilizing the OpenGL ES library on the Android smart platform is presented. Initial experimental results show that the proposed algorithm can render a good 3D panorama and has the ability to change the viewpoint freely.

  14. Modeling a color-rendering operator for high dynamic range images using a cone-response function

    Science.gov (United States)

    Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju

    2015-09-01

    Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that is a combination of tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapping resultant image is obtained using chromatic and achromatic colors to avoid well-known color distortions shown in the conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method covers the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.
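To make the HDR-to-LDR mapping the abstract discusses concrete, here is a standard global tone-mapping operator (the classic Reinhard L/(1+L) curve) in numpy. This is a textbook baseline, not the proposed cone-response model; the key value and synthetic image are assumptions:

```python
import numpy as np

def tone_map(hdr_rgb, key=0.18):
    # Luminance (Rec. 709 weights), compressed with L/(1+L); chroma preserved.
    lum = hdr_rgb @ np.array([0.2126, 0.7152, 0.0722])
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))   # log-average scene luminance
    scaled = key * lum / log_avg                    # expose for middle grey
    ldr_lum = scaled / (1.0 + scaled)               # compress into [0, 1)
    ratio = (ldr_lum / np.maximum(lum, 1e-6))[..., None]
    return np.clip(hdr_rgb * ratio, 0.0, 1.0)

# Synthetic HDR image spanning six orders of magnitude of luminance
ramp = np.geomspace(0.01, 1e4, 64)
hdr = np.stack([np.tile(ramp, (64, 1))] * 3, axis=-1)
ldr = tone_map(hdr)
print(ldr.min(), ldr.max())
```

Scaling all channels by a common luminance ratio is the simplest way to avoid the hue shifts that the abstract attributes to conventional methods.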

  15. Image Based Rendering and Virtual Reality

    DEFF Research Database (Denmark)

    Livatino, Salvatore

The presentation gives an overview of Image Based Rendering approaches and their use in Virtual Reality, including Virtual Photography and Cinematography, and Mobile Robot Navigation.

  16. Diastolic chamber properties of the left ventricle assessed by global fitting of pressure-volume data: improving the gold standard of diastolic function.

    Science.gov (United States)

    Bermejo, Javier; Yotti, Raquel; Pérez del Villar, Candelas; del Álamo, Juan C; Rodríguez-Pérez, Daniel; Martínez-Legazpi, Pablo; Benito, Yolanda; Antoranz, J Carlos; Desco, M Mar; González-Mansilla, Ana; Barrio, Alicia; Elízaga, Jaime; Fernández-Avilés, Francisco

    2013-08-15

In cardiovascular research, relaxation and stiffness are calculated from pressure-volume (PV) curves by separately fitting the data during the isovolumic and end-diastolic phases (end-diastolic PV relationship), respectively. This method is limited because it assumes uncoupled active and passive properties during these phases, it penalizes statistical power, and it cannot account for elastic restoring forces. We aimed to improve this analysis by implementing a method based on global optimization of all PV diastolic data. In 1,000 Monte Carlo experiments, the optimization algorithm recovered entered parameters of diastolic properties below and above the equilibrium volume (intraclass correlation coefficients = 0.99). Inotropic modulation experiments in 26 pigs modified passive pressure generated by restoring forces due to changes in the operative and/or equilibrium volumes. Volume overload and coronary microembolization caused incomplete relaxation at end diastole (active pressure > 0.5 mmHg), rendering the end-diastolic PV relationship method ill-posed. In 28 patients undergoing PV cardiac catheterization, the new algorithm reduced the confidence intervals of stiffness parameters by one-fifth. The Jacobian matrix allowed visualizing the contribution of each property to instantaneous diastolic pressure on a per-patient basis. The algorithm allowed estimating stiffness from single-beat PV data (derivative of left ventricular pressure with respect to volume at end-diastolic volume; intraclass correlation coefficient = 0.65, error = 0.07 ± 0.24 mmHg/ml). Thus, in clinical and preclinical research, global optimization algorithms provide the most complete, accurate, and reproducible assessment of global left ventricular diastolic chamber properties from PV data. Using global optimization, we were able to fully uncouple relaxation and passive PV curves for the first time in the intact heart.
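The core idea, fitting active (relaxation) and passive (stiffness) pressure components jointly from all diastolic samples rather than from separate phases, can be sketched with an invented linear-in-parameters model: passive pressure linear in volume plus a mono-exponential active decay with a known time constant, so the global fit reduces to one ordinary least-squares problem. The model form and all constants here are illustrative only, not the study's:

```python
import numpy as np

tau = 40.0                                  # relaxation time constant (ms), assumed known
t = np.linspace(0.0, 500.0, 200)            # samples spanning all of diastole (ms)
V = 60.0 + 0.12 * t                         # filling volume (ml), invented
a_true, b_true, c_true = -10.0, 0.25, 30.0
P = a_true + b_true * V + c_true * np.exp(-t / tau)   # synthetic pressure (mmHg)

# Global fit: a single least-squares problem over the whole diastolic record,
# estimating passive (a, b) and active (c) parameters simultaneously.
A = np.column_stack([np.ones_like(t), V, np.exp(-t / tau)])
(a, b, c), *_ = np.linalg.lstsq(A, P, rcond=None)
print(round(a, 3), round(b, 3), round(c, 3))   # recovers -10.0, 0.25, 30.0
```

Because every sample constrains every parameter, confidence intervals shrink relative to phase-by-phase fitting, which mirrors the abstract's clinical finding.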

  17. Earth mortars and earth-lime renders

    Directory of Open Access Journals (Sweden)

    Maria Fernandes

    2008-01-01

Full Text Available Earth surface coatings play a decorative architectural role, apart from their function as wall protection. In Portuguese vernacular architecture, earth mortars were usually applied on stone masonry, while earth renders and plasters were used as indoor surface coatings. Limestone exists only in certain areas of the country and consequently lime was not easily available everywhere, especially in granite and schist regions where stone masonry was a current building technique. In the central west coast of Portugal, the lime slaking procedure entailed slaking the quicklime mixed with earth (sandy soil) in a pit; the resulting mixture would then be combined in a mortar or plaster. This was also the procedure for manufactured adobes stabilized with lime. Adobe buildings with earth-lime renderings and plasters were also traditional in the same region, using lime putty and lime wash for the final coat and decoration. Classic decoration on earth architecture from the 18th-19th century was in many countries a consequence of the François Cointeraux (1740-1830) manuals, "Les Cahiers d'Architecture Rurale" (1793), a French guide for earth architecture and building construction. This manual arrived in Portugal at the beginning of the 19th century, but was never translated into Portuguese. References about decoration for earth houses were explained in this manual, as well as procedures for earth-lime renders and the ornamentation of earth walls; in fact, these procedures are exactly the same as the ones used in adobe buildings in this Portuguese region. The specific purpose of the present paper is to show some cases of earth mortars, renders and plasters on stone buildings in Portugal and to explain the methods of producing earth-lime renders, and also to show some examples of rendering and coating with earth-lime in Portuguese adobe vernacular architecture.

  18. A simple algorithm for subregional striatal uptake analysis with partial volume correction in dopaminergic PET imaging

    International Nuclear Information System (INIS)

    Lue Kunhan; Lin Hsinhon; Chuang Kehshih; Kao Chihhao, K.; Hsieh Hungjen; Liu Shuhsin

    2014-01-01

    In positron emission tomography (PET) of the dopaminergic system, quantitative measurements of nigrostriatal dopamine function are useful for differential diagnosis. A subregional analysis of striatal uptake enables the diagnostic performance to be more powerful. However, the partial volume effect (PVE) induces an underestimation of the true radioactivity concentration in small structures. This work proposes a simple algorithm for subregional analysis of striatal uptake with partial volume correction (PVC) in dopaminergic PET imaging. The PVC algorithm analyzes the separate striatal subregions and takes into account the PVE based on the recovery coefficient (RC). The RC is defined as the ratio of the PVE-uncorrected to PVE-corrected radioactivity concentration, and is derived from a combination of the traditional volume of interest (VOI) analysis and the large VOI technique. The clinical studies, comprising 11 patients with Parkinson's disease (PD) and 6 healthy subjects, were used to assess the impact of PVC on the quantitative measurements. Simulations on a numerical phantom that mimicked realistic healthy and neurodegenerative situations were used to evaluate the performance of the proposed PVC algorithm. In both the clinical and the simulation studies, the striatal-to-occipital ratio (SOR) values for the entire striatum and its subregions were calculated with and without PVC. In the clinical studies, the SOR values in each structure (caudate, anterior putamen, posterior putamen, putamen, and striatum) were significantly higher by using PVC in contrast to those without. Among the PD patients, the SOR values in each structure and quantitative disease severity ratings were shown to be significantly related only when PVC was used. For the simulation studies, the average absolute percentage error of the SOR estimates before and after PVC were 22.74% and 1.54% in the healthy situation, respectively; those in the neurodegenerative situation were 20.69% and 2
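The recovery-coefficient correction described above reduces to dividing a measured subregional concentration by RC before forming the striatal-to-occipital ratio (SOR). A minimal sketch with invented numbers (the abstract defines RC but does not give example values):

```python
# RC = PVE-uncorrected / PVE-corrected concentration, so dividing a measured
# subregional mean by RC undoes the partial-volume signal loss.
def corrected_sor(measured_striatal, occipital, rc):
    corrected = measured_striatal / rc        # partial volume correction
    return corrected / occipital              # striatal-to-occipital ratio

occipital = 1.0       # reference-region concentration (arbitrary units)
measured = 2.0        # PVE-depressed caudate uptake (invented)
rc = 0.8              # 20% of the signal lost to the partial volume effect
print(corrected_sor(measured, occipital, rc))  # -> 2.5
```

The simulation result in the abstract (error dropping from ~22% to ~1.5%) is exactly this mechanism: the uncorrected SOR underestimates the true ratio by roughly the factor RC.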

  19. Non-Photorealistic Rendering in Chinese Painting of Animals

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

A set of algorithms is proposed in this paper to automatically transform 3D animal models into Chinese painting style. Inspired by the real painting process in Chinese painting of animals, we divide the whole rendering process into two parts: borderline stroke making and interior shading. In the borderline stroke making process we first find 3D model silhouettes in real time, depending on the viewing direction of the user. After retrieving silhouette information from all model edges, a stroke linking mechanism is applied to link these independent edges into a long stroke. Finally we grow a plain thin silhouette line into a stylized stroke with various widths at each control point, and a 2D brush model is combined with it to simulate a Chinese painting stroke. In the interior shading pipeline, three stages are used to convert a Gouraud-shading image into a Chinese painting style image: color quantization, ink diffusion and box filtering. The color quantization stage assigns all pixels in an image to four color levels, and each level represents a color layer in a Chinese painting. The ink diffusion stage is used to transfer inks and water between different levels and to grow areas in an irregular way. The box filtering stage blurs sharp borders between different levels to embellish the appearance of the final interior shading image. In addition to automatic rendering, an interactive Chinese painting system equipped with friendly input devices can also be used to generate more artistic Chinese painting images manually.
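Two stages of the interior-shading pipeline, four-level color quantization and box filtering, are easy to sketch in numpy on a grey ramp standing in for a Gouraud-shaded image (ink diffusion is omitted, and the bin layout is an assumption):

```python
import numpy as np

def quantize_four_levels(img):
    # Assign each pixel (values in [0, 1]) to one of four equal-width bins,
    # then replace it with a representative grey per "color layer".
    levels = np.clip((img * 4).astype(int), 0, 3)
    return levels / 3.0

def box_filter_3x3(img):
    # Mean over each pixel's 3x3 neighbourhood, edge-padded: softens the
    # sharp borders between quantization layers.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return out / 9.0

shading = np.linspace(0.0, 1.0, 64)[None, :].repeat(64, axis=0)  # smooth ramp
layers = quantize_four_levels(shading)
softened = box_filter_3x3(layers)
print(np.unique(layers))   # four discrete levels
```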

  20. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 8. Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article Volume 2 ... Author Affiliations. R K Shyamasundar1. Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India ...

  1. A parallel coordinates style interface for exploratory volume visualization.

    Science.gov (United States)

    Tory, Melanie; Potts, Simeon; Möller, Torsten

    2005-01-01

    We present a user interface, based on parallel coordinates, that facilitates exploration of volume data. By explicitly representing the visualization parameter space, the interface provides an overview of rendering options and enables users to easily explore different parameters. Rendered images are stored in an integrated history bar that facilitates backtracking to previous visualization options. Initial usability testing showed clear agreement between users and experts of various backgrounds (usability, graphic design, volume visualization, and medical physics) that the proposed user interface is a valuable data exploration tool.

  2. 3D virtual rendering in thoracoscopic treatment of congenital malformation of the lung

    Directory of Open Access Journals (Sweden)

    Destro F.

    2013-10-01

Full Text Available Introduction: Congenital malformations of the lung (CML) are rare but potentially dangerous lesions. Their identification is important in order to define the most appropriate management. Materials and methods: We retrospectively reviewed data from 37 patients affected by CML treated in our Pediatric Surgery Unit in the last four years with minimally invasive surgery (MIS). Results: Prenatal diagnosis was possible in 26/37 patients. Surgery was performed in the first month of life in 3 symptomatic patients and between 6 and 12 months in the others. All patients underwent radiological evaluation prior to thoracoscopic surgery. Images collected were reconstructed using the VR render software. Discussion and conclusions: Volume rendering gives high anatomical resolution and can be useful to guide the surgical procedure. Thoracoscopy should be the technique of choice because it is safe, effective, and feasible. Furthermore, it has the benefit of a minimal access technique and can be easily performed in children.

  3. Implementation of Finite Volume based Navier Stokes Algorithm Within General Purpose Flow Network Code

    Science.gov (United States)

    Schallhorn, Paul; Majumdar, Alok

    2012-01-01

This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system level network flow analysis. There are several thermo-fluid engineering problems where higher fidelity solutions are needed that are not within the capacity of system level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions of Poiseuille flow, Couette flow, and flow in a driven cavity.
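One of the cited benchmarks, steady Couette flow, has a linear analytic velocity profile that a one-dimensional finite-volume discretization reproduces to machine precision, which is why it is a standard validation case. A minimal sketch (not GFSSP's actual solver; grid size and wall speed are invented):

```python
import numpy as np

# Steady Couette flow between a fixed bottom wall and a top wall moving at U:
# the momentum balance d2u/dy2 = 0, discretized on n interior cells, gives a
# tridiagonal system u[i-1] - 2*u[i] + u[i+1] = 0 with Dirichlet boundaries.
n, U = 20, 1.0
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
b = np.zeros(n)
b[-1] = -U                        # moving-lid boundary enters the last equation
u = np.linalg.solve(A, b)

y = (np.arange(n) + 1) / (n + 1)  # interior node positions in (0, 1)
print(np.max(np.abs(u - U * y)))  # matches the linear analytic profile u = U*y
```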

  4. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion.

    Science.gov (United States)

    Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao; Lu, Jun-Ying; Zeng, Yan-Hong; Meng, Fan-Jie; Cao, Bin; Zi, Xue-Rong; Han, Shu-Ming; Zhang, Yu-Huan

    2013-09-01

Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precise measurement of pleural effusion volume still involves many challenges, and there is currently no recognized, accurate measuring method. To explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the pleural effusion. The 64-slice CT volume-rendering technique was used to measure and analyze three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume. No significant differences were found, P = 0.836. For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences, P = 0.989. The following linear regression equation was used to compare the pleural effusion volume (V) (measured by the CT volume-rendering technique) with the pleural effusion greatest depth (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression was used to compare the volume with the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0.000). The 64-slice CT volume-rendering technique can
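The two regression equations reported above can be wrapped directly as functions. This is a transcription of the abstract's fitted constants with units as in the study (the abstract does not restate them); it is a sketch, not a validated clinical tool:

```python
# Linear regression estimators for pleural effusion volume, as fitted in the
# study: d is the greatest depth and l, h, d the three effusion diameters.
def volume_from_depth(d):
    return 158.16 * d - 116.01            # V = 158.16*d - 116.01 (r = 0.91)

def volume_from_diameters(l, h, d):
    return 0.56 * (l * h * d) + 39.44     # V = 0.56*(l*h*d) + 39.44 (r = 0.92)

print(volume_from_depth(3.0))             # hypothetical depth of 3 units
print(volume_from_diameters(10.0, 8.0, 3.0))
```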

  5. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion

    International Nuclear Information System (INIS)

    Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao

    2013-01-01

Background: Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precise measurement of pleural effusion volume still involves many challenges, and there is currently no recognized, accurate measuring method. Purpose: To explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the pleural effusion. Material and Methods: The 64-slice CT volume-rendering technique was used to measure and analyze three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. Results: After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume. No significant differences were found, P = 0.836. For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences, P = 0.989. The following linear regression equation was used to compare the pleural effusion volume (V) (measured by the CT volume-rendering technique) with the pleural effusion greatest depth (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression was used to compare the volume with the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0

  6. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Zhi-Jun [Dept. of Radiology, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China)], e-mail: Gzj3@163.com; Lin, Qiang [Dept. of Oncology, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China); Liu, Hai-Tao [Dept. of General Surgery, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China)] [and others])

    2013-09-15

Background: Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precise measurement of pleural effusion volume still involves many challenges, and there is currently no recognized, accurate measuring method. Purpose: To explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the pleural effusion. Material and Methods: The 64-slice CT volume-rendering technique was used to measure and analyze three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. Results: After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume. No significant differences were found, P = 0.836. For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences, P = 0.989. The following linear regression equation was used to compare the pleural effusion volume (V) (measured by the CT volume-rendering technique) with the pleural effusion greatest depth (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression was used to compare the volume with the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0

  7. Rendering the Topological Spines

    Energy Technology Data Exchange (ETDEWEB)

    Nieves-Rivera, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-05

    Many tools to analyze and represent high-dimensional data already exist, yet most of them are not flexible, informative, and intuitive enough to help scientists make the corresponding analyses and predictions, understand the structure and complexity of scientific data, get a complete picture of it, and explore a greater number of hypotheses. With this in mind, N-Dimensional Data Analysis and Visualization (ND²AV) is being developed to serve as an interactive visual analysis platform that couples a number of existing tools, ranging from statistics, machine learning, and data mining, with new techniques, in particular new visualization approaches. My task is to create the rendering and implementation of a new concept called topological spines in order to extend ND²AV's scope. Other existing visualization tools create representations preserving either the topological properties or the structural (geometric) ones, because it is challenging to preserve both simultaneously. To overcome that challenge by striking a balance between the two, topological spines are introduced as a new approach that aims to preserve both. They are rendered using OpenGL and C++ and are currently being tested before being integrated into ND²AV. In this paper I present what topological spines are and how they are rendered.

  8. Probability of failure of the watershed algorithm for peak detection in comprehensive two-dimensional chromatography

    NARCIS (Netherlands)

    Vivó-Truyols, G.; Janssen, H.-G.

    2010-01-01

    The watershed algorithm is the most common method used for peak detection and integration in two-dimensional chromatography. However, the retention-time variability in the second dimension may cause the algorithm to fail. A study calculating the probabilities of failure of the watershed algorithm was

  9. Effect of the averaging volume and algorithm on the in situ electric field for uniform electric- and magnetic-field exposures

    International Nuclear Information System (INIS)

    Hirata, Akimasa; Takano, Yukinori; Fujiwara, Osamu; Kamimura, Yoshitsugu

    2010-01-01

    The present study quantified the volume-averaged in situ electric field in nerve tissues of anatomically based numeric Japanese male and female models for exposure to extremely low-frequency electric and magnetic fields. A quasi-static finite-difference time-domain method was applied to analyze this problem. The motivation for our investigation is that the dependence of the electric field induced in nerve tissue on the averaging volume/distance is not clear, although a cubical volume of 5 × 5 × 5 mm³ or a straight-line segment of 5 mm is suggested in some documents. The influence of non-nerve tissue surrounding nerve tissue is also discussed by considering three algorithms for calculating the averaged in situ electric field in nerve tissue. The computational results obtained herein reveal that the volume-averaged electric field in the nerve tissue decreases with the averaging volume. In addition, the 99th-percentile value of the volume-averaged in situ electric field in nerve tissue is more stable than the maximal value across different averaging volumes. When non-nerve tissue surrounding nerve tissue is included in the averaging volume, the resultant in situ electric fields are less dependent on the averaging volume than when it is excluded. In situ electric fields averaged over a distance of 5 mm were comparable to or larger than those for a 5 × 5 × 5 mm³ cube, depending on the algorithm, the nerve tissue considered, and the exposure scenario. (note)
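The effect the record describes, a single extreme voxel dominating the raw maximum while being diluted by volume averaging, can be illustrated with a toy field. This is a hedged sketch, not the study's FDTD code; the field, cube size, and values are invented for illustration:

```python
# Average a scalar field over k x k x k voxel cubes: a single spurious hot
# voxel dominates the raw maximum but is diluted by the cube averaging.

def cube_averages(field, k):
    """Return all k*k*k-voxel cube averages of a cubic nested-list field."""
    n = len(field)
    out = []
    for z in range(n - k + 1):
        for y in range(n - k + 1):
            for x in range(n - k + 1):
                s = sum(field[z + i][y + j][x + l]
                        for i in range(k) for j in range(k) for l in range(k))
                out.append(s / k ** 3)
    return out

n = 8
# Toy field: a smooth gradient plus one spurious hot voxel (an artifact).
field = [[[float(x + y + z) for x in range(n)] for y in range(n)] for z in range(n)]
field[4][4][4] = 1000.0
raw_max = max(v for plane in field for row in plane for v in row)
avg_max = max(cube_averages(field, 5))
print(raw_max, avg_max)  # the averaged maximum is far below the raw spike
```

This mirrors the study's finding that averaged (and percentile-based) metrics are more stable against isolated numerical artifacts than the pointwise maximum.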

  10. An adaptive occlusion culling algorithm for use in large VEs

    DEFF Research Database (Denmark)

    Bormann, Karsten

    2000-01-01

    The Hierarchical Occlusion Map algorithm is combined with Frustum Slicing to give a simpler occlusion-culling algorithm that more adequately caters to large, open VEs. The algorithm adapts to the level of visual congestion and is well suited for use with large, complex models with a long mean free line of sight ('the great outdoors'), models for which it is not feasible to construct, or search, a database of occluders to be rendered each frame.

  11. Moisture movements in render on brick wall

    DEFF Research Database (Denmark)

    Hansen, Kurt Kielsgaard; Munch, Thomas Astrup; Thorsen, Peter Schjørmann

    2003-01-01

    A three-layer render on brick wall used for building facades is studied in the laboratory. The vertical render surface is held in contact with water for 24 hours simulating driving rain while it is measured with non-destructive X-ray equipment every hour in order to follow the moisture front...

  12. Value of 3D-Volume Rendering in the Assessment of Coronary Arteries with Retrospectively Ecg-Gated Multislice Spiral CT

    International Nuclear Information System (INIS)

    Mahnken, A.H.; Wildberger, J.E.; Dedden, K.; Schmitz-Rode, T.; Guenther, R.W.; Sinha, A.M.; Hoffmann, R.; Stanzel, S.

    2003-01-01

    Purpose: To assess the diagnostic value and measurement precision of the 3D volume-rendering technique (3D-VRT) from retrospectively ECG-gated multislice spiral CT (MSCT) data sets for imaging of the coronary arteries. Material and Methods: In 35 patients, retrospectively ECG-gated MSCT of the heart using a four-detector-row MSCT scanner with a standardized examination protocol was performed, as well as quantitative X-ray coronary angiography (QCA). The MSCT data were assessed on a segmental basis using 3D-VRT exclusively. The coronary artery diameters were measured at the origin of each main coronary branch and 1 cm, 3 cm, and 5 cm distally. The minimum, maximum, and mean diameters were determined from MSCT angiography and compared to QCA. Results: A total of 353 of 525 (67.2%) coronary artery segments were assessable by MSCT angiography. The proximal segments were more often assessable than the distal segments. Stenoses were detected with a sensitivity of 82.6% and a specificity of 92.8%. According to the Bland-Altman method, the mean differences between QCA and MSCT ranged from 0.55 to 1.07 mm, with limits of agreement from 2.2 mm to 2.7 mm. Conclusion: When compared to QCA, the ability of 3D-VRT to quantitatively assess coronary artery diameters and coronary artery stenoses is insufficient for clinical purposes.
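The record evaluates inter-method agreement via the Bland-Altman method: the mean difference (bias) plus 95% limits of agreement at ±1.96 standard deviations. A minimal generic sketch, using hypothetical diameter values rather than the study's data:

```python
# Bland-Altman agreement statistics for two measurement methods.

import math

def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) for paired measurements a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

qca  = [3.1, 2.8, 3.5, 2.2, 4.0]   # hypothetical vessel diameters (mm)
msct = [3.3, 2.6, 3.9, 2.5, 4.2]
print(bland_altman(qca, msct))
```

Wide limits of agreement relative to the measured quantity, as in the record's 2.2-2.7 mm limits for millimetre-scale vessels, are what makes a method "insufficient for clinical purposes" despite a small bias.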

  13. Sketchy Rendering for Information Visualization.

    Science.gov (United States)

    Wood, J; Isenberg, P; Isenberg, T; Dykes, J; Boukhelifa, N; Slingsby, A

    2012-12-01

    We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These primitives allow higher-level graphical features such as bar charts, line charts, treemaps and node-link diagrams to be drawn in a sketchy style with a specified degree of sketchiness. The framework is designed to be easily integrated into existing visualization implementations with minimal programming modification or design effort. We show examples of use for statistical graphics, conveying spatial imprecision and for enhancing aesthetic and narrative qualities of visualization. We evaluate user perception of sketchiness of areal features through a series of stimulus-response tests in order to assess users' ability to place sketchiness on a ratio scale, and to estimate area. Results suggest that relative area judgment is compromised by sketchy rendering and that its influence is dependent on the shape being rendered. They show that degree of sketchiness may be judged on an ordinal scale but that its judgment varies strongly between individuals. We evaluate higher-level impacts of sketchiness through user testing of scenarios that encourage user engagement with data visualization and willingness to critique visualization design. Results suggest that where a visualization is clearly sketchy, engagement may be increased and that attitudes to participating in visualization annotation are more positive. The results of our work have implications for effective information visualization design that go beyond the traditional role of sketching as a tool for prototyping or its use for an indication of general uncertainty.
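The core idea of redefining a drawing primitive with a controllable degree of sketchiness can be illustrated as a jittered polyline. This is a toy sketch under my own parameterization, not the Processing renderer the paper describes:

```python
# A 'sketchy' line primitive: subdivide a line and perturb interior points
# by a sketchiness amount, keeping the endpoints anchored.

import random

def sketchy_line(x0, y0, x1, y1, sketchiness, segments=8, seed=0):
    """Return a polyline approximating (x0,y0)-(x1,y1) with random jitter."""
    rng = random.Random(seed)  # seeded so repeated frames look identical
    pts = []
    for i in range(segments + 1):
        t = i / segments
        x = x0 + t * (x1 - x0)
        y = y0 + t * (y1 - y0)
        if 0 < i < segments:  # only interior points are perturbed
            x += rng.uniform(-sketchiness, sketchiness)
            y += rng.uniform(-sketchiness, sketchiness)
        pts.append((x, y))
    return pts

pts = sketchy_line(0, 0, 100, 0, sketchiness=3.0)
print(pts[0], pts[-1])  # endpoints are exact
```

Higher-level marks (bars, polygons, ellipses) can then be built from this primitive, which is how a single sketchiness parameter propagates to whole charts.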

  14. [Rendering surgical care to wounded with neck wounds in an armed conflict].

    Science.gov (United States)

    Samokhvalov, I M; Zavrazhnov, A A; Fakhrutdinov, A M; Sychev, M I

    2001-10-01

    The results of rendering medical care (first aid, qualified, and specialized care) to 172 servicemen with neck injuries in the Republic of Chechnya during the period from 09.08.1999 to 28.07.2000 were analyzed. Based on the results of this analysis and their experience in treating these casualties, the authors discuss the problems of the sequence and volume of surgical care in this group of casualties with reference to the available medical evacuation system, and the surgical tactics at the stage of specialized care. They also consider the peculiarities of operative treatment of casualties with neck injuries.

  15. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek; Skiadopoulos, Spiros; Kalnis, Panos

    2017-01-01

    They either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations, rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving

  16. Standardized rendering from IR surveillance motion imagery

    Science.gov (United States)

    Prokoski, F. J.

    2014-06-01

    Government agencies, including defense and law enforcement, increasingly make use of video from surveillance systems and camera phones owned by non-government entities. Making advanced and standardized motion-imaging technology available to private and commercial users at cost-effective prices would benefit all parties. In particular, incorporating thermal infrared into commercial surveillance systems offers substantial benefits beyond night-vision capability. Face rendering is a process to facilitate exploitation of thermal infrared surveillance imagery from the general area of a crime scene, to assist investigations with and without cooperating eyewitnesses. Face rendering automatically generates greyscale representations similar to police artist sketches for faces in surveillance imagery collected from locations and times proximate to a crime under investigation. Near-realtime generation of face renderings can provide law enforcement with an investigation tool to assess witness memory and credibility, and to integrate reports from multiple eyewitnesses. Renderings can be quickly disseminated through social media to warn of a person who may pose an immediate threat, and to solicit the public's help in identifying possible suspects and witnesses. Renderings are pose-standardized so as not to divulge the presence and location of eyewitnesses and surveillance cameras. Incorporation of thermal infrared imaging into commercial surveillance systems will significantly improve system performance and reduce manual review times, at an incremental cost that will continue to decrease. Benefits to criminal justice would include improved reliability of eyewitness testimony and improved accuracy in distinguishing among minority groups in eyewitness and surveillance identifications.

  17. On-the-Fly Decompression and Rendering of Multiresolution Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P; Cohen, J D

    2009-04-02

    We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.

  18. Comparison of 2D and 3D algorithms for adding a margin to the gross tumor volume in the conformal radiotherapy planning of prostate cancer

    International Nuclear Information System (INIS)

    Khoo, V.S.; Bedford, J.L.; Webb, S.; Dearnaley, D.P.

    1997-01-01

    Purpose: To evaluate the adequacy of tumor volume coverage using a three-dimensional (3D) margin-growing algorithm compared to a two-dimensional (2D) margin-growing algorithm in the conformal radiotherapy planning of prostate cancer. Methods and Materials: Two gross tumor volumes (GTV) were segmented in each of ten patients with localized prostate cancer: prostate gland only (PO) and prostate with seminal vesicles (PSV). A margin of 10 mm was applied to these two groups (PO and PSV) using both the 2D and 3D margin-growing algorithms. The true planning target volume (PTV) was defined as the region delineated by the 3D algorithm. Adequacy of geometric coverage of the GTV with the two algorithms was examined throughout the target volume. Discrepancies between the two margin methods were measured in the transaxial plane. Results: The 2D algorithm underestimated the PTV by 17% (range 12-20) in the PO group and by 20% (range 13-28) in the PSV group when compared to the 3D algorithm. For both the PO and PSV groups, the inferior coverage of the PTV was consistently underestimated by the 2D margin algorithm when compared to the 3D margins, with a mean radial distance of 4.8 mm (range 0-10). In the central region of the prostate gland, the anterior, posterior, and lateral PTV borders were underestimated with the 2D margin in both the PO and PSV groups by a mean of 3.6 mm (range 0-9), 2.1 mm (range 0-8), and 1.8 mm (range 0-9), respectively. The PTV coverage of the PO group superiorly was radially underestimated by 4.5 mm (range 0-14) when comparing the 2D margins to the 3D margins. For the PSV group, the junction region between the prostate and the seminal vesicles was underestimated by the 2D margin by a mean transaxial distance of 18.1 mm at the anterior PTV border (range 4-30), 7.2 mm posteriorly (range 0-20), and 3.7 mm laterally (range 0-14). The superior region of the seminal vesicles in the PSV group was also consistently underestimated, with a radial discrepancy of 3.3 mm
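The systematic 2D-versus-3D discrepancy the study measures follows directly from how the margin is grown. A toy voxel sketch (invented grid and radii, not the planning-system implementation) shows that in-plane-only growth misses the superior and inferior extensions entirely:

```python
# 2D (in-plane only) versus 3D margin growing around a GTV voxel mask,
# using a Chebyshev (cube-shaped) structuring element of radius r voxels.

def grow(mask, r, use_3d):
    n = len(mask)
    out = [[[0] * n for _ in range(n)] for _ in range(n)]
    for z in range(n):
        for y in range(n):
            for x in range(n):
                if not mask[z][y][x]:
                    continue
                zr = range(max(0, z - r), min(n, z + r + 1)) if use_3d else [z]
                for zz in zr:
                    for yy in range(max(0, y - r), min(n, y + r + 1)):
                        for xx in range(max(0, x - r), min(n, x + r + 1)):
                            out[zz][yy][xx] = 1
    return out

def count(m):
    return sum(v for plane in m for row in plane for v in row)

n = 9
# A 3x3x3 'GTV' centered in a 9x9x9 grid.
gtv = [[[1 if 3 <= i <= 5 and 3 <= j <= 5 and 3 <= k <= 5 else 0
         for k in range(n)] for j in range(n)] for i in range(n)]
ptv2d, ptv3d = grow(gtv, 1, False), grow(gtv, 1, True)
print(count(gtv), count(ptv2d), count(ptv3d))  # 27 75 125
```

The 2D result covers no voxels above or below the GTV's own slices, which is exactly the inferior/superior underestimation the abstract reports.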

  19. GPU Pro 5 advanced rendering techniques

    CERN Document Server

    Engel, Wolfgang

    2014-01-01

    In GPU Pro5: Advanced Rendering Techniques, section editors Wolfgang Engel, Christopher Oat, Carsten Dachsbacher, Michal Valient, Wessam Bahnassi, and Marius Bjorge have once again assembled a high-quality collection of cutting-edge techniques for advanced graphics processing unit (GPU) programming. Divided into six sections, the book covers rendering, lighting, effects in image space, mobile devices, 3D engine design, and compute. It explores rasterization of liquids, ray tracing of art assets that would otherwise be used in a rasterized engine, physically based area lights, volumetric light

  20. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    Energy Technology Data Exchange (ETDEWEB)

    Maier, Joscha, E-mail: joscha.maier@dkfz.de [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Sawall, Stefan; Kachelrieß, Marc [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany and Institute of Medical Physics, University of Erlangen–Nürnberg, 91052 Erlangen (Germany)

    2014-05-15

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF) shows poor performance in the case of low-dose scans, so more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction provides only small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for the real mouse data sets, the
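Among the segmentation methods the record lists, Otsu's method chooses the threshold that maximizes the between-class variance. A minimal generic sketch over raw voxel values (my own illustration, not the authors' pipeline):

```python
# Otsu thresholding: split values into two classes (<= t and > t) at the
# threshold t that maximizes the between-class variance.

def otsu_threshold(values):
    best_t, best_var = None, -1.0
    for t in sorted(set(values))[:-1]:  # every candidate split point
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        w_lo, w_hi = len(lo) / len(values), len(hi) / len(values)
        mu_lo, mu_hi = sum(lo) / len(lo), sum(hi) / len(hi)
        var_between = w_lo * w_hi * (mu_lo - mu_hi) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Hypothetical voxel intensities: dark background vs. contrast-filled ventricle.
voxels = [10, 12, 11, 13, 90, 95, 92, 11, 94, 10]
print(otsu_threshold(voxels))
```

At low dose the two intensity clusters broaden and overlap, which is why segmentation accuracy (and hence the measured ventricular volume) degrades with noisier reconstructions.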

  1. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    International Nuclear Information System (INIS)

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-01-01

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF) shows poor performance in the case of low-dose scans, so more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction provides only small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for the real mouse data sets, the

  2. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure.

    Science.gov (United States)

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-01

    Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF) shows poor performance in the case of low-dose scans, so more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction provides only small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for the real mouse data sets, the HDTV algorithm shows the

  3. An algorithm based on OmniView technology to reconstruct sagittal and coronal planes of the fetal brain from volume datasets acquired by three-dimensional ultrasound.

    Science.gov (United States)

    Rizzo, G; Capponi, A; Pietrolucci, M E; Capece, A; Aiello, E; Mammarella, S; Arduini, D

    2011-08-01

    To describe a novel algorithm, based on the new display technology 'OmniView', developed to visualize diagnostic sagittal and coronal planes of the fetal brain from volumes obtained by three-dimensional (3D) ultrasonography. We developed an algorithm to image standard neurosonographic planes by drawing dissecting lines through the axial transventricular view of 3D volume datasets acquired transabdominally. The algorithm was tested on 106 normal fetuses at 18-24 weeks of gestation and the visualization rates of brain diagnostic planes were evaluated by two independent reviewers. The algorithm was also applied to nine cases with proven brain defects. The two reviewers, using the algorithm on normal fetuses, found satisfactory images with visualization rates ranging between 71.7% and 96.2% for sagittal planes and between 76.4% and 90.6% for coronal planes. The agreement rate between the two reviewers, as expressed by Cohen's kappa coefficient, was > 0.93 for sagittal planes and > 0.89 for coronal planes. All nine abnormal volumes were identified by a single observer from among a series including normal brains, and eight of these nine cases were diagnosed correctly. This novel algorithm can be used to visualize standard sagittal and coronal planes in the fetal brain. This approach may simplify the examination of the fetal brain and reduce dependency of success on operator skill. Copyright © 2011 ISUOG. Published by John Wiley & Sons, Ltd.
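The reconstruction of diagnostic planes from a 3D volume dataset can be illustrated by parameterizing a plane and resampling the volume along it. This nearest-neighbour toy sketch is my own, not the OmniView implementation:

```python
# Sample a 3D volume vol[z][y][x] on the plane origin + s*u + t*v,
# rounding to the nearest voxel (out-of-bounds samples map to 0).

def extract_plane(vol, origin, u, v, ns, nt):
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    plane = []
    for s in range(ns):
        row = []
        for t in range(nt):
            x = round(origin[0] + s * u[0] + t * v[0])
            y = round(origin[1] + s * u[1] + t * v[1])
            z = round(origin[2] + s * u[2] + t * v[2])
            inside = 0 <= x < nx and 0 <= y < ny and 0 <= z < nz
            row.append(vol[z][y][x] if inside else 0)
        plane.append(row)
    return plane

# Toy volume whose value encodes its own (z, y, x) coordinates.
vol = [[[100 * z + 10 * y + x for x in range(4)] for y in range(4)] for z in range(4)]
# An axis-aligned 'sagittal' plane (fixed x = 2) must match direct indexing.
sag = extract_plane(vol, (2, 0, 0), (0, 1, 0), (0, 0, 1), 4, 4)
print(sag[1][3])  # voxel at x=2, y=1, z=3
```

Drawing a dissecting line through an axial view, as the algorithm above does, amounts to choosing `origin`, `u`, and `v` for such a plane; real scanners interpolate rather than round to the nearest voxel.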

  4. Innovative Lime Pozzolana Renders for Reconstruction of Historical Buildings

    International Nuclear Information System (INIS)

    Vejmelkova, E.; Maca, P.; Konvalinka, P.; Cerny, R.

    2011-01-01

    Bulk density, matrix density, open porosity, compressive strength, bending strength, water sorptivity, moisture diffusivity, water vapor diffusion coefficient, thermal conductivity, specific heat capacity, and thermal diffusivity of two innovative renovation renders on a lime-pozzolana basis are analyzed. The obtained results are compared with a reference lime plaster and two commercial renovation renders, and conclusions on the applicability of the particular renders in practical reconstruction works are drawn. (author)

  5. Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations

    Science.gov (United States)

    Bang, Youngsuk

    Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm that renders the reduction with the reduction errors bounded by a user-defined error tolerance; providing such bounds is the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel

  6. A stable algorithm for calculating phase equilibria with capillarity at specified moles, volume and temperature using a dynamic model

    KAUST Repository

    Kou, Jisheng

    2017-09-30

    Capillary pressure can significantly affect the phase properties and flow of liquid-gas fluids in porous media, and thus the phase equilibrium calculation incorporating capillary pressure is crucial for simulating such problems accurately. Recently, the phase equilibrium calculation at specified moles, volume, and temperature (NVT-flash) has become an attractive issue. In this paper, capillarity is incorporated into the phase equilibrium calculation at specified moles, volume, and temperature. A dynamic model for this problem is developed for the first time by using the laws of thermodynamics and Onsager's reciprocal principle. This model consists of evolutionary equations for moles and volume, and it can characterize the evolution from a non-equilibrium state to an equilibrium state in the presence of the capillarity effect at specified moles, volume, and temperature. The phase equilibrium equations are naturally derived. To simulate the proposed dynamic model efficiently, we adopt a convex-concave splitting of the total Helmholtz energy and propose a thermodynamically stable numerical algorithm, which is proved to preserve the second law of thermodynamics at the discrete level. Using thermodynamic relations, we derive a phase stability condition with the capillarity effect at specified moles, volume, and temperature. Moreover, we propose a stable numerical algorithm for phase stability testing, which can provide feasible initial conditions. The performance of the proposed methods in predicting phase properties under the capillarity effect is demonstrated on various cases of pure-substance and mixture systems.
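The convex-concave splitting the authors adopt is a generic energy-stable device: the convex part of the energy is treated implicitly and the concave part explicitly, which makes the discrete energy non-increasing. A one-dimensional sketch (using a double-well stand-in for the Helmholtz energy, not the paper's NVT-flash model):

```python
# Gradient flow on W(x) = (x^2 - 1)^2 / 4, split into a convex part
# x^4/4 + 1/4 (implicit) and a concave part -x^2/2 (explicit):
#   x_new + dt * x_new^3 = x + dt * x
# The implicit cubic is solved by Newton's method; the discrete energy
# W(x_n) then decreases monotonically for any dt > 0.

def W(x):
    return (x * x - 1.0) ** 2 / 4.0

def step(x, dt, newton_iters=30):
    rhs = x + dt * x
    x_new = x
    for _ in range(newton_iters):
        f = x_new + dt * x_new ** 3 - rhs
        x_new -= f / (1.0 + 3.0 * dt * x_new ** 2)
    return x_new

x, dt = 2.0, 0.5
energies = [W(x)]
for _ in range(50):
    x = step(x, dt)
    energies.append(W(x))
print(x)  # converges toward the minimizer x = 1
```

The monotone energy decay is the one-dimensional analogue of the discrete second-law property the paper proves for its scheme.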

  7. Value of three-dimensional volume rendering images in the assessment of the centrality index for preoperative planning in patients with renal masses.

    Science.gov (United States)

    Sofia, C; Magno, C; Silipigni, S; Cantisani, V; Mucciardi, G; Sottile, F; Inferrera, A; Mazziotti, S; Ascenti, G

    2017-01-01

    To evaluate the precision of the centrality index (CI) measurement on three-dimensional (3D) volume-rendering technique (VRT) images in patients with renal masses, compared to its standard measurement on axial images. Sixty-five patients with renal lesions underwent contrast-enhanced multidetector (MD) computed tomography (CT) for preoperative imaging. Two readers calculated the CI on two-dimensional axial images and on VRT images, measuring it in the plane in which the tumour and the centre of the kidney were lying. Correlation and agreement of interobserver measurements and inter-method results were calculated using intraclass correlation (ICC) coefficients and the Bland-Altman method. Time saving was also calculated. The correlation coefficients were r=0.99. The present study showed that VRT and axial images produce almost identical values of CI, with the advantages of greater ease of execution and a time saving of almost 50% for 3D VRT images. In addition, VRT provides an integrated perspective that can better assist surgeons in clinical decision making and in operative planning, suggesting this technique as a possible standard method for CI measurement. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  8. Validation of a colour rendering index based on memory colours

    OpenAIRE

    Smet, Kevin; Jost-Boissard, Sophie; Ryckaert, Wouter; Deconinck, Geert; Hanselaer, Peter

    2010-01-01

    In this paper the performance of a colour rendering index based on memory colours is investigated in comparison with the current CIE Colour Rendering Index, the NIST Colour Quality Scale and visual appreciation results obtained at CNRS at Lyon University for a set of 3000K and 4000K LED light sources. The Pearson and Spearman correlation coefficients between each colour rendering metric and the two sets of visual results were calculated. It was found that the memory colour based colour render...

  9. Media Presentation Synchronisation for Non-monolithic Rendering Architectures

    NARCIS (Netherlands)

    I. Vaishnavi (Ishan); D.C.A. Bulterman (Dick); P.S. Cesar Garcia (Pablo Santiago); B. Gao (Bo)

    2007-01-01

    Non-monolithic renderers are physically distributed media playback engines. Non-monolithic renderers may use a number of different underlying network connection types to transmit media items belonging to a presentation. There is therefore a need for a media based and inter-network- type

  10. Single minimum incision endoscopic radical nephrectomy for renal tumors with preoperative virtual navigation using 3D-CT volume-rendering

    Directory of Open Access Journals (Sweden)

    Shioyama Yasukazu

    2010-04-01

    Full Text Available Abstract Background Single minimum incision endoscopic surgery (MIES) involves the use of a flexible high-definition laparoscope to facilitate open surgery. We reviewed our method of radical nephrectomy for renal tumors, which is single MIES combined with preoperative virtual surgery employing three-dimensional CT images reconstructed by the volume rendering method (3D-CT images) in order to safely and appropriately approach the renal hilar vessels. We also assessed the usefulness of 3D-CT images. Methods Radical nephrectomy was done by single MIES via the translumbar approach in 80 consecutive patients. We performed the initial 20 MIES nephrectomies without preoperative 3D-CT images and the subsequent 60 MIES nephrectomies with preoperative 3D-CT images for evaluation of the renal hilar vessels and the relation of each tumor to the surrounding structures. On the basis of the 3D information, preoperative virtual surgery was performed with a computer. Results Single MIES nephrectomy was successful in all patients. In the 60 patients who underwent 3D-CT, the number of renal arteries and veins corresponded exactly with the preoperative 3D-CT data (100% sensitivity and 100% specificity). These 60 nephrectomies were completed with a shorter operating time and smaller blood loss than the initial 20 nephrectomies. Conclusions Single MIES radical nephrectomy combined with 3D-CT and virtual surgery achieved a shorter operating time and less blood loss, possibly due to safer and easier handling of the renal hilar vessels.

  11. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    Energy Technology Data Exchange (ETDEWEB)

    Wong, S.T.C. [Univ. of California, San Francisco, CA (United States)

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.
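
    As a concrete illustration of the computer-display-based approach described above, the sketch below projects a tiny synthetic volume onto a 2D image with two classic operators: maximum intensity projection (MIP) and front-to-back emission-absorption compositing. The volume, colors, and opacities are invented for the example.

```python
import numpy as np

# A 4x4x4 synthetic volume with two nonzero voxels along one viewing ray.
vol = np.zeros((4, 4, 4))
vol[1, 2, 2] = 0.8    # a bright voxel
vol[3, 2, 2] = 0.3    # a dimmer voxel behind it (along axis 0)

# MIP: keep the brightest sample along each axis-aligned viewing ray.
mip = vol.max(axis=0)

def composite(colors, alphas):
    """Front-to-back emission-absorption compositing of one ray's samples."""
    c_out, t = 0.0, 1.0           # accumulated color, remaining transmittance
    for c, a in zip(colors, alphas):
        c_out += t * a * c
        t *= (1.0 - a)
    return c_out

# A two-sample ray: an emissive front sample hides part of the back sample.
ray = composite([1.0, 0.0], [0.5, 0.5])
```

    Both operators reduce the 3D data to a 2D image suitable for an ordinary screen, which is exactly the display assumption the presentation makes.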

  12. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    International Nuclear Information System (INIS)

    Wong, S.T.C.

    1997-01-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  13. Extreme simplification and rendering of point sets using algebraic multigrid

    NARCIS (Netherlands)

    Reniers, D.; Telea, A.C.

    2009-01-01

    We present a novel approach for extreme simplification of point set models, in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However, this requires using many primitives to render even moderately simple shapes. Often, one

  14. Combined surface and volumetric occlusion shading

    KAUST Repository

    Schott, Matthias O.; Martin, Tobias; Grosset, A. V Pascal; Brownlee, Carson; Hollt, Thomas; Brown, Benjamin P.; Smith, Sean T.; Hansen, Charles D.

    2012-01-01

    In this paper, a method for interactive direct volume rendering is proposed that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The proposed algorithm extends the recently proposed Directional Occlusion Shading model to allow the rendering of those geometric shapes in combination with a context providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. © 2012 IEEE.

  15. Combined surface and volumetric occlusion shading

    KAUST Repository

    Schott, Matthias O.

    2012-02-01

    In this paper, a method for interactive direct volume rendering is proposed that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The proposed algorithm extends the recently proposed Directional Occlusion Shading model to allow the rendering of those geometric shapes in combination with a context providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. © 2012 IEEE.

  16. A Real-Time Sound Field Rendering Processor

    Directory of Open Access Journals (Sweden)

    Tan Yiyu

    2017-12-01

    Full Text Available Real-time sound field rendering is computationally and memory intensive. Traditional rendering systems based on computer simulations are limited by memory bandwidth and the number of arithmetic units. The computation is time-consuming, and the sample rate of the output sound is low because of the long computation time at each time step. In this work, a processor with a hybrid architecture is proposed to speed up computation and improve the sample rate of the output sound, and an interface is developed for system scalability by simply cascading many chips to enlarge the simulated area. To render a three-minute Beethoven wave sound in a small shoe-box room with dimensions of 1.28 m × 1.28 m × 0.64 m, the field-programmable gate array (FPGA)-based prototype machine with the proposed architecture carries out the sound rendering at run time, while a software simulation with OpenMP parallelization takes about 12.70 min on a personal computer (PC) with 32 GB random access memory (RAM) and an Intel i7-6800K six-core processor running at 3.4 GHz. The throughput of the software simulation is about 194 M grids/s, while it is 51.2 G grids/s on the prototype machine, even though the clock frequency of the prototype machine is much lower than that of the PC. The rendering processor with one processing element (PE) and interfaces occupies about 238,515 gates when fabricated with the 0.18 µm process technology from ROHM Semiconductor Co., Ltd. (Kyoto, Japan), and its power consumption is about 143.8 mW.
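
    The record does not state which numerical scheme the processor implements; a common choice for room-acoustic rendering (assumed here purely for illustration) is an explicit FDTD leapfrog update of the scalar wave equation, whose per-step cost is one update of every grid point. That is what a "grids/s" throughput figure counts.

```python
import numpy as np

# Minimal 3D FDTD leapfrog update of the scalar wave equation:
#   p[n+1] = 2*p[n] - p[n-1] + C^2 * laplacian(p[n]),  with C^2 <= 1/3 in 3D.
# Periodic boundaries (np.roll) are used for brevity; a real room simulation
# would use reflecting/absorbing walls.

N = 16
C2 = 1.0 / 3.0                        # squared Courant number at the 3D limit
p_prev = np.zeros((N, N, N))
p = np.zeros((N, N, N))
p[N // 2, N // 2, N // 2] = 1.0       # impulse source at the grid centre

def laplacian(f):
    return (sum(np.roll(f, s, axis=ax) for ax in range(3) for s in (1, -1))
            - 6.0 * f)

for _ in range(5):                    # each step updates all N**3 grid points
    p_next = 2.0 * p - p_prev + C2 * laplacian(p)
    p_prev, p = p, p_next
```

    Measuring wall-clock time around the loop and dividing N**3 times the step count by it reproduces the grids/s metric used to compare the software and FPGA implementations.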

  17. Blender cycles lighting and rendering cookbook

    CERN Document Server

    Iraci, Bernardo

    2013-01-01

    An in-depth guide full of step-by-step recipes to explore the concepts behind the usage of Cycles. Packed with illustrations and lots of tips and tricks, the easy-to-understand nature of the book will help the reader understand even the most complex concepts with ease. If you are a digital artist who already knows your way around Blender, and you want to learn about the new Cycles rendering engine, this is the book for you. Even experts will be able to pick up new tips and tricks to make the most of the rendering capabilities of Cycles.

  18. Comparison of 2D and 3D algorithms for adding a margin to the gross tumor volume in the conformal radiotherapy planning of prostate cancer

    International Nuclear Information System (INIS)

    Khoo, Vincent S.; Bedford, James L.; Webb, Steve; Dearnaley, David P.

    1998-01-01

    Purpose: To evaluate the adequacy of tumor volume coverage using a three-dimensional (3D) margin-growing algorithm compared to a two-dimensional (2D) margin-growing algorithm in the conformal radiotherapy planning of prostate cancer. Methods and Materials: Two gross tumor volumes (GTV) were segmented in each of 10 patients with localized prostate cancer; prostate gland only (PO) and prostate with seminal vesicles (PSV). A predetermined margin of 10 mm was applied to these two groups (PO and PSV) using both 2D and 3D margin-growing algorithms. The 2D algorithm added a transaxial margin to each GTV slice, whereas the 3D algorithm added a volumetric margin all around the GTV. The true planning target volume (PTV) was defined as the region delineated by the 3D algorithm. The adequacy of geometric coverage of the GTV by the two algorithms was examined in a series of transaxial planes throughout the target volume. Results: The 2D margin-growing algorithm underestimated the PTV by 17% (range 12-20) in the PO group and by 20% (range 13-28) for the PSV group when compared to the 3D-margin algorithm. For the PO group, the mean transaxial difference between the 2D and 3D algorithm was 3.8 mm inferiorly (range 0-20), 1.8 mm centrally (range 0-9), and 4.4 mm superiorly (range 0-22). Considering all of these regions, the mean discrepancy anteriorly was 5.1 mm (range 0-22), posteriorly 2.2 mm (range 0-20), right border 2.8 mm (range 0-14), and left border 3.1 mm (range 0-12). For the PSV group, the mean discrepancy in the inferior region was 3.8 mm (range 0-20), central region of the prostate was 1.8 mm (range 0-9), the junction region of the prostate and the seminal vesicles was 5.5 mm (range 0-30), and the superior region of the seminal vesicles was 4.2 mm (range 0-55). When the different borders were considered in the PSV group, the mean discrepancies for the anterior, posterior, right, and left borders were 6.4 mm (range 0-55), 2.5 mm (range 0-20), 2.6 mm (range 0-14), and 3
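
    The geometric difference between the two algorithms can be sketched with a toy binary mask: a 2D algorithm dilates each transaxial slice in-plane only, while a 3D algorithm grows the margin volumetrically, so it also covers voxels superior and inferior to the GTV. The helper below is a naive pure-NumPy dilation written only for this illustration.

```python
import numpy as np

def dilate(mask, r, in_plane_only=False):
    """Binary dilation of a 3D mask with a Euclidean ball of radius r voxels.
    in_plane_only=True restricts the margin to each axial slice (2D growing)."""
    out = np.zeros_like(mask)
    rr = int(np.ceil(r))
    for dz in range(-rr, rr + 1):
        if in_plane_only and dz != 0:
            continue
        for dy in range(-rr, rr + 1):
            for dx in range(-rr, rr + 1):
                if dx * dx + dy * dy + dz * dz <= r * r:
                    out |= np.roll(mask, (dz, dy, dx), axis=(0, 1, 2))
    return out

# A one-voxel "GTV" in a small grid; grow a 1-voxel margin both ways.
gtv = np.zeros((7, 7, 7), dtype=bool)
gtv[3, 3, 3] = True
ptv2d = dilate(gtv, 1, in_plane_only=True)   # margin in the axial plane only
ptv3d = dilate(gtv, 1)                       # true volumetric margin
```

    The 3D margin is a strict superset of the 2D one: the extra voxels lie above and below the GTV, which is exactly where the study finds the 2D algorithm underestimating the PTV.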

  19. Real-Time Location-Based Rendering of Urban Underground Pipelines

    Directory of Open Access Journals (Sweden)

    Wei Li

    2018-01-01

    Full Text Available The concealment and complex spatial relationships of urban underground pipelines present challenges in managing them. Recently, augmented reality (AR has been a hot topic around the world, because it can enhance our perception of reality by overlaying information about the environment and its objects onto the real world. Using AR, underground pipelines can be displayed accurately, intuitively, and in real time. We analyzed the characteristics of AR and their application in underground pipeline management. We mainly focused on the AR pipeline rendering procedure based on the BeiDou Navigation Satellite System (BDS and simultaneous localization and mapping (SLAM technology. First, in aiming to improve the spatial accuracy of pipeline rendering, we used differential corrections received from the Ground-Based Augmentation System to compute the precise coordinates of users in real time, which helped us accurately retrieve and draw pipelines near the users, and by scene recognition the accuracy can be further improved. Second, in terms of pipeline rendering, we used Visual-Inertial Odometry (VIO to track the rendered objects and made some improvements to visual effects, which can provide steady dynamic tracking of pipelines even in relatively markerless environments and outdoors. Finally, we used the occlusion method based on real-time 3D reconstruction to realistically express the immersion effect of underground pipelines. We compared our methods to the existing methods and concluded that the method proposed in this research improves the spatial accuracy of pipeline rendering and the portability of the equipment. Moreover, the updating of our rendering procedure corresponded with the moving of the user’s location, thus we achieved a dynamic rendering of pipelines in the real environment.

  20. 3D cinematic rendering of the calvarium, maxillofacial structures, and skull base: preliminary observations.

    Science.gov (United States)

    Rowe, Steven P; Zinreich, S James; Fishman, Elliot K

    2018-06-01

    Three-dimensional (3D) visualizations of volumetric data from CT have gained widespread clinical acceptance and are an important method for evaluating complex anatomy and pathology. Recently, cinematic rendering (CR), a new 3D visualization methodology, has become available. CR utilizes a lighting model that allows for the production of photorealistic images from isotropic voxel data. Given how new this technique is, studies to evaluate its clinical utility and any potential advantages or disadvantages relative to other 3D methods such as volume rendering have yet to be published. In this pictorial review, we provide examples of normal calvarial, maxillofacial, and skull base anatomy and pathological conditions that highlight the potential for CR images to aid in patient evaluation and treatment planning. The highly detailed images and nuanced shadowing that are intrinsic to CR are well suited to the display of the complex anatomy in this region of the body. We look forward to studies with CR that will ascertain the ultimate value of this methodology to evaluate calvarium, maxillofacial, and skull base morphology as well as other complex anatomic structures.

  1. Particle-based non-photorealistic volume visualization

    NARCIS (Netherlands)

    Busking, S.; Vilanova, A.; Van Wijk, J.J.

    2007-01-01

    Non-photorealistic techniques are usually applied to produce stylistic renderings. In visualization, these techniques are often able to simplify data, producing clearer images than traditional visualization methods. We investigate the use of particle systems for visualizing volume datasets using

  2. Particle-based non-photorealistic volume visualization

    NARCIS (Netherlands)

    Busking, S.; Vilanova, A.; Wijk, van J.J.

    2008-01-01

    Non-photorealistic techniques are usually applied to produce stylistic renderings. In visualization, these techniques are often able to simplify data, producing clearer images than traditional visualization methods. We investigate the use of particle systems for visualizing volume datasets using

  3. Chromium: A Stress-Processing Framework for Interactive Rendering on Clusters

    International Nuclear Information System (INIS)

    Humphreys, G.; Houston, M.; Ng, Y.-R.; Frank, R.; Ahern, S.; Kirchner, P.D.; Klosowski, J.T.

    2002-01-01

    We describe Chromium, a system for manipulating streams of graphics API commands on clusters of workstations. Chromium's stream filters can be arranged to create sort-first and sort-last parallel graphics architectures that, in many cases, support the same applications while using only commodity graphics accelerators. In addition, these stream filters can be extended programmatically, allowing the user to customize the stream transformations performed by nodes in a cluster. Because our stream processing mechanism is completely general, any cluster-parallel rendering algorithm can be either implemented on top of or embedded in Chromium. In this paper, we give examples of real-world applications that use Chromium to achieve good scalability on clusters of workstations, and describe other potential uses of this stream processing technology. By completely abstracting the underlying graphics architecture, network topology, and API command processing semantics, we allow a variety of applications to run in different environments.

  4. The algorithms and principles of non-photorealistic graphics

    CERN Document Server

    Geng, Weidong

    2011-01-01

    "The Algorithms and Principles of Non-photorealistic Graphics: Artistic Rendering and Cartoon Animation" provides a conceptual framework for, and comprehensive and up-to-date coverage of, research on non-photorealistic computer graphics, including methodologies, algorithms and software tools dedicated to generating artistic and meaningful images and animations. This book mainly discusses how to create art from a blank canvas, how to convert source images into pictures with the desired visual effects, how to generate artistic renditions from 3D models, and how to synthesize expressive pictures f

  5. The PHMC algorithm for simulations of dynamical fermions; 1, description and properties

    CERN Document Server

    Frezzotti, R

    1999-01-01

    We give a detailed description of the so-called Polynomial Hybrid Monte Carlo (PHMC) algorithm. The effects of the correction factor, which is introduced to render the algorithm exact, are discussed, stressing their relevance for the statistical fluctuations and (almost) zero mode contributions to physical observables. We also investigate rounding-error effects and propose several ways to reduce memory requirements.

  6. Assessment of left ventricular function and volumes by myocardial perfusion scintigraphy - comparison of two algorithms

    International Nuclear Information System (INIS)

    Zajic, T.; Fischer, R.; Brink, I.; Moser, E.; Krause, T.; Saurbier, B.

    2001-01-01

    Aim: Left ventricular volume and function can be computed from gated SPECT myocardial perfusion imaging using the Emory Cardiac Toolbox (ECT) or Gated SPECT Quantification (GS-Quant). The aim of this study was to compare both programs with respect to their practical application, stability and precision on heart models as well as in clinical use. Methods: The volumes of five cardiac models were calculated by ECT and GS-Quant. 48 patients (13 female, 35 male) underwent a one-day stress-rest protocol and gated SPECT. From these 96 gated SPECT images, left ventricular ejection fraction (LVEF), end-diastolic volume (EDV) and end-systolic volume (ESV) were estimated by ECT and GS-Quant. For 42 patients LVEF was also determined by echocardiography. Results: For the cardiac models the computed volumes showed high correlation with the model volumes as well as high correlation between ECT and GS-Quant (r ≥0.99). Both programs underestimated the volume by approximately 20-30% independent of ventricle size. Calculating LVEF, EDV and ESV, GS-Quant and ECT correlated well with each other and with the LVEF estimated by echocardiography (r ≥0.86). LVEF values determined with ECT were about 10% higher than values determined with GS-Quant or echocardiography. The incorrect surfaces calculated by the automatic algorithm of GS-Quant for three examinations could not be corrected manually. 34 of the ECT studies were optimized by the operator. Conclusion: GS-Quant and ECT are two reliable programs for estimating LVEF. Both seem to underestimate the cardiac volume. In practical application GS-Quant was faster and easier to use. ECT allows the user to define the contour of the ventricle and thus is less susceptible to artifacts. (orig.)
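
    The quantities compared in the study are linked by LVEF = (EDV - ESV) / EDV × 100. The sketch below (with invented volumes) also shows why a systematic underestimation of both volumes, as observed for ECT and GS-Quant, can still yield a reliable ejection fraction: a common scale factor cancels in the ratio.

```python
def lvef(edv, esv):
    """Left ventricular ejection fraction in percent, from end-diastolic
    and end-systolic volumes."""
    return (edv - esv) / edv * 100.0

# HYPOTHETICAL volumes: EDV 120 ml, ESV 60 ml.
true_ef = lvef(120.0, 60.0)

# Both volumes underestimated by the same 25%: the EF is unchanged.
scaled_ef = lvef(0.75 * 120.0, 0.75 * 60.0)
```

    This cancellation is consistent with the study's finding that both programs underestimate the volumes yet remain reliable for LVEF.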

  7. Emission of VOC's from modified rendering process

    International Nuclear Information System (INIS)

    Bhatti, Z.A.; Raja, I.A.; Saddique, M.; Langenhove, H.V.

    2005-01-01

    Rendering is a technique for processing dead animals and slaughterhouse wastes into valuable products. It involves cooking of the raw material; a sterilization step was later added to reduce the risk of Bovine Spongiform Encephalopathy (BSE). Studies have previously been carried out on rendering emissions with the normal cooking process. Our study shows that the sterilization step in the rendering process increases the emission of volatile organic compounds (VOCs). Gas samples containing VOCs were analyzed by GC/MS (Gas Chromatography/Mass Spectrometry). The most important groups of compounds, alcohols and cyclic hydrocarbons, were identified. In the group of alcohols, 1-butanol, 1-pentanol and 1-hexanol were found, while in the group of cyclic hydrocarbons, methylcyclopentane and cyclohexane were detected. Other groups, such as aldehydes, sulphur-containing compounds, ketones and furans, were also found. Some compounds in these groups, such as 1-pentanol, 2-methylpropanal, dimethyl disulfide and dimethyl trisulfide, cause malodor. It is important to know these compounds in order to treat odorous gases. (author)

  8. SeaWiFS Technical Report Series. Volume 42; Satellite Primary Productivity Data and Algorithm Development: A Science Plan for Mission to Planet Earth

    Science.gov (United States)

    Falkowski, Paul G.; Behrenfeld, Michael J.; Esaias, Wayne E.; Balch, William; Campbell, Janet W.; Iverson, Richard L.; Kiefer, Dale A.; Morel, Andre; Yoder, James A.; Hooker, Stanford B. (Editor)

    1998-01-01

    Two issues regarding primary productivity, as it pertains to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Program and the National Aeronautics and Space Administration (NASA) Mission to Planet Earth (MTPE) are presented in this volume. Chapter 1 describes the development of a science plan for deriving primary production for the world ocean using satellite measurements, by the Ocean Primary Productivity Working Group (OPPWG). Chapter 2 presents discussions by the same group, of algorithm classification, algorithm parameterization and data availability, algorithm testing and validation, and the benefits of a consensus primary productivity algorithm.

  9. Processing-in-Memory Enabled Graphics Processors for 3D Rendering

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Chenhao; Song, Shuaiwen; Wang, Jing; Zhang, Weigong; Fu, Xin

    2017-02-06

    The performance of 3D rendering on a Graphics Processing Unit, which converts 3D vector streams into 2D frames with 3D image effects, significantly impacts users' gaming experience on modern computer systems. Due to the high texture throughput in 3D rendering, main memory bandwidth becomes a critical obstacle to improving overall rendering performance. 3D stacked memory systems such as the Hybrid Memory Cube (HMC) provide opportunities to significantly overcome the memory wall by directly connecting logic controllers to DRAM dies. Based on the observation that texel fetches significantly impact off-chip memory traffic, we propose two architectural designs to enable Processing-In-Memory-based GPUs for efficient 3D rendering.

  10. Ray Tracing Rendering Using Fragment Anti-Aliasing

    Directory of Open Access Journals (Sweden)

    Febriliyan Samopa

    2008-07-01

    Full Text Available Rendering is the generation of surface and three-dimensional effects for an object displayed on a monitor screen. Ray tracing, a rendering method that traces a ray for each image pixel, has a drawback: aliasing (the jaggies effect). There are several methods for performing anti-aliasing; one of them is OGSS (Ordered Grid Super Sampling). OGSS performs anti-aliasing well. However, this method requires more computation time, since the sampling of every pixel in the image is increased. Fragment Anti-Aliasing (FAA) is a new alternative method that can cope with this drawback. FAA examines the image while rendering a scene. The jaggies effect occurs only on curved and gradient objects, so only these parts of an object undergo sampling magnification. After this sampling magnification and the pixel values are computed, downsampling is performed to retrieve the final pixel values. Experimental results show that the software implements ray tracing correctly to form images, and that it can apply both the FAA and OGSS techniques for anti-aliasing. In general, rendering using FAA is faster than using OGSS.
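
    The downsampling step shared by OGSS and FAA can be sketched directly: render at a higher ordered-grid sampling rate, then box-filter each 2×2 block back to one output pixel. The supersampled image below is synthetic; FAA would apply the same reduction only to fragments flagged as edges.

```python
import numpy as np

def downsample_2x(img):
    """Average each 2x2 block of a 2x-supersampled image (box filter)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A hypothetical 4x4 supersampled rendering, reduced to the 2x2 final image.
supersampled = np.arange(16, dtype=float).reshape(4, 4)
final = downsample_2x(supersampled)
```

    Restricting this reduction to an edge mask instead of the whole frame is what lets FAA avoid OGSS's cost of supersampling every pixel.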

  11. Digital color acquisition, perception, coding and rendering

    CERN Document Server

    Fernandez-Maloigne, Christine; Macaire, Ludovic

    2013-01-01

    In this book the authors identify the basic concepts and recent advances in the acquisition, perception, coding and rendering of color. The fundamental aspects related to the science of colorimetry in relation to physiology (the human visual system) are addressed, as are constancy and color appearance. It also addresses the more technical aspects related to sensors and the color management screen. Particular attention is paid to the notion of color rendering in computer graphics. Beyond color, the authors also look at coding, compression, protection and quality of color images and videos.

  12. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    Science.gov (United States)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of multi-view image arrays combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. Firstly, the depth information of the reference viewpoint image is quickly obtained; during this process, SAD is chosen as the similarity measure function. Then the reference image is layered and the parallax is calculated based on the depth information. Through the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as the high-precision requirements of the depth map and complex mapping operations. Experiments show that this algorithm can achieve the synthesis of virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is also very impressive. On average, the SSIM value of the results relative to real viewpoint images reaches 0.9525, the PSNR value reaches 38.353, and the image histogram similarity reaches 93.77%.
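
    The SAD (sum of absolute differences) matching step used for the depth estimation can be sketched as a one-dimensional block search. The images below are synthetic, with the second view shifted by a known displacement so the expected answer is clear; this is an illustration of the similarity measure, not the paper's full pipeline.

```python
import numpy as np

def best_disparity(left, right, y, x, half=1, max_d=4):
    """Return the horizontal displacement d minimizing the SAD between the
    block around (y, x) in `left` and the block around (y, x + d) in `right`."""
    block = left[y - half:y + half + 1, x - half:x + half + 1]
    sads = [np.abs(block - right[y - half:y + half + 1,
                                 x + d - half:x + d + half + 1]).sum()
            for d in range(max_d + 1)]
    return int(np.argmin(sads))

# Synthetic views: the second view is the first shifted by 2 pixels.
left = np.arange(100, dtype=float).reshape(10, 10)
right = np.roll(left, 2, axis=1)
d = best_disparity(left, right, y=5, x=4)
```

    Running the same search at every pixel yields the disparity (and hence depth) map that the layering step then quantizes into image layers.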

  13. Planar graphs theory and algorithms

    CERN Document Server

    Nishizeki, T

    1988-01-01

    Collected in this volume are most of the important theorems and algorithms currently known for planar graphs, together with constructive proofs for the theorems. Many of the algorithms are written in Pidgin PASCAL, and are the best-known ones; the complexities are linear or O(n log n). The first two chapters provide the foundations of graph theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independent set, subgraph listing, planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows. Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.
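
    One planar-graph theorem from the material covered in such a volume makes a compact code example: by Euler's formula, a simple planar graph with v ≥ 3 vertices has at most 3v - 6 edges. This gives a quick necessary (not sufficient) screening test; genuine planarity testing needs the linear-time algorithms the book describes.

```python
def may_be_planar(v, e):
    """Necessary (not sufficient) edge-count condition for planarity of a
    simple graph with v vertices and e edges: e <= 3v - 6 for v >= 3."""
    if v < 3:
        return True
    return e <= 3 * v - 6

k4_ok = may_be_planar(4, 6)     # K4: 6 <= 3*4 - 6 = 6, passes (K4 is planar)
k5_ok = may_be_planar(5, 10)    # K5: 10 > 3*5 - 6 = 9, cannot be planar
```

    The K5 case shows the bound at work: the edge count alone already rules out planarity, with no embedding attempt needed.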

  14. Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine

    Science.gov (United States)

    Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.

    2017-12-01

    Spatio-temporal dynamic visualization is more vivid than static visualization. It is important to use dynamic visualization techniques to reveal the variation process and trend of a geographical phenomenon vividly and comprehensively. The challenges posed by the dynamic visualization of both 2D and 3D spatial dynamic targets, especially across different spatial data types, require a high-performance GIS dynamic objects rendering engine. The main approach to improving a rendering engine that handles vast numbers of dynamic targets relies on key technologies of high-performance GIS, including in-memory computing, parallel computing, GPU computing and high-performance algorithms. In this study, a high-performance GIS dynamic objects rendering engine is designed and implemented based on hybrid acceleration techniques. The engine combines GPU computing, OpenGL technology, and high-performance algorithms with the advantage of 64-bit in-memory computing. It processes 2D and 3D dynamic target data efficiently and runs smoothly with vast amounts of dynamic target data. The prototype system of the high-performance GIS dynamic objects rendering engine is developed based on SuperMap GIS iObjects. Experiments were designed for large-scale spatial data visualization; the results showed that the engine achieves high performance, rendering two-dimensional and three-dimensional dynamic objects about 20 times faster on the GPU than on the CPU.

  15. Haptic rendering for simulation of fine manipulation

    CERN Document Server

    Wang, Dangxiao; Zhang, Yuru

    2014-01-01

    This book introduces the latest progress in six degrees of freedom (6-DoF) haptic rendering with the focus on a new approach for simulating force/torque feedback in performing tasks that require dexterous manipulation skills. One of the major challenges in 6-DoF haptic rendering is to resolve the conflict between high speed and high fidelity requirements, especially in simulating a tool interacting with both rigid and deformable objects in a narrow space and with fine features. The book presents a configuration-based optimization approach to tackle this challenge. Addressing a key issue in man

  16. North American Rendering: processing high quality protein and fats for feed North American Rendering: processamento de proteínas e gorduras de alta qualidade para alimentos para animais

    Directory of Open Access Journals (Sweden)

    David L. Meeker

    2009-07-01

    One third to one half of each animal produced for meat, milk, eggs, and fiber is not consumed by humans. These raw materials are subjected to rendering processes resulting in many useful products. Meat and bone meal, meat meal, poultry meal, hydrolyzed feather meal, blood meal, fish meal, and animal fats are the primary products resulting from the rendering process. The most important and valuable use for these animal by-products is as feed ingredients for livestock, poultry, aquaculture, and companion animals. There are volumes of scientific references validating the nutritional qualities of these products, and there are no scientific reasons for altering the practice of feeding rendered products to animals. Government agencies regulate the processing of food and feed, and the rendering industry is scrutinized often. In addition, industry programs include good manufacturing practices, HACCP, Codes of Practice, and third-party certification. The rendering industry clearly understands its role in the safe and nutritious production of animal feed ingredients and has done it very effectively for over 100 years. The availability of rendered products for animal feeds in the future depends on regulation and the market. Regulatory agencies will determine whether certain raw materials can be used for animal feed. The National Renderers Association (NRA) supports the use of science as the basis for regulation, while aesthetics, product specifications, and quality differences should be left to the market place. Without the rendering industry, the accumulation of unprocessed animal by-products would impede the meat industries and pose a serious potential hazard to animal and human health.

  17. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less

  18. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    International Nuclear Information System (INIS)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-01-01

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time
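    The final volumetric comparison described above reduces to counting segmented voxels and comparing against the manual gold standard. A minimal, hypothetical sketch (the function names and nested-list mask representation are illustrative assumptions, not the authors' implementation):

```python
# Hedged sketch, not the paper's code: volume of a binary segmentation mask
# and its percent volume error against a manually traced gold standard.

def segmented_volume_cc(mask, voxel_dims_mm):
    """Volume in cc (cm^3) of a binary mask given as nested lists
    (slices x rows x cols) of 0/1, with (dz, dy, dx) voxel spacing in mm."""
    dz, dy, dx = voxel_dims_mm
    voxel_cc = (dz * dy * dx) / 1000.0  # 1 cc = 1000 mm^3
    n_voxels = sum(v for sl in mask for row in sl for v in row)
    return n_voxels * voxel_cc

def percent_volume_error(auto_cc, manual_cc):
    """Absolute volume difference relative to the manual volume, in %."""
    return 100.0 * abs(auto_cc - manual_cc) / manual_cc
```

    For example, a mask of four 2 x 1 x 1 mm voxels yields 0.008 cc, and an automated 110 cc result against a 100 cc manual trace gives a 10% volume error.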

  19. Maximum volume cuboids for arbitrarily shaped in-situ rock blocks as determined by discontinuity analysis—A genetic algorithm approach

    Science.gov (United States)

    Ülker, Erkan; Turanboy, Alparslan

    2009-07-01

    The block stone industry is one of the main commercial uses of rock. The economic potential of any block quarry depends on the recovery rate, which is defined as the total volume of useful rough blocks extractable from a fixed rock volume in relation to the total volume of moved material. The natural fracture system, the rock type(s) and the extraction method used directly influence the recovery rate. The major aims of this study are to establish a theoretical framework for optimising the extraction process in marble quarries for a given fracture system, and for predicting the recovery rate of the excavated blocks. We have developed a new approach by taking into consideration only the fracture structure for maximum block recovery in block quarries. The complete model uses a linear approach based on basic geometric features of discontinuities for 3D models, a tree structure (TS) for individual investigation and finally a genetic algorithm (GA) for the obtained cuboid volume(s). We tested our new model in a selected marble quarry in the town of İscehisar (AFYONKARAHİSAR—TURKEY).
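    The GA step can be illustrated with a minimal, generic genetic algorithm that evolves cuboid edge lengths to maximize volume under a feasibility test. Everything below is an illustrative assumption: the simple sum-bound constraint stands in for the discontinuity-derived block geometry of the paper, and the operator choices (tournament selection, uniform crossover, Gaussian mutation, elitism) are standard textbook ones, not the authors' formulation.

```python
import random

def ga_max_cuboid(feasible, lo, hi, pop_size=40, gens=80, seed=1):
    """Evolve cuboid edge lengths (x, y, z) maximizing volume x*y*z
    subject to feasible((x, y, z)); infeasible cuboids score zero."""
    rng = random.Random(seed)

    def fitness(ind):
        x, y, z = ind
        return x * y * z if feasible(ind) else 0.0

    def tournament(popn):
        a, b = rng.sample(popn, 2)
        return a if fitness(a) >= fitness(b) else b

    popn = [[rng.uniform(lo, hi) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [max(popn, key=fitness)[:]]  # elitism: keep the best as-is
        while len(nxt) < pop_size:
            p1, p2 = tournament(popn), tournament(popn)
            # uniform crossover of the two parents
            child = [p1[i] if rng.random() < 0.5 else p2[i] for i in range(3)]
            # Gaussian mutation, clipped to the bounds
            child = [min(hi, max(lo, g + rng.gauss(0.0, 0.05 * (hi - lo))))
                     for g in child]
            nxt.append(child)
        popn = nxt
    return max(popn, key=fitness)
```

    With `feasible = lambda d: sum(d) <= 3.0` and bounds (0, 2), the true optimum is the unit cube with volume 1, which the sketch approaches after a few dozen generations.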

  20. Contrast-enhanced computed tomography angiography and volume-rendered imaging for evaluation of cellophane banding in a dog with extrahepatic portosystemic shunt

    Directory of Open Access Journals (Sweden)

    H. Yoon

    2011-04-01

    A 4-year-old, 1.8 kg, male, castrated Maltese was presented for evaluation of urolithiasis. Urinary calculi were composed of ammonium biurate. Preprandial and postprandial bile acids were 44.2 and 187.3 μmol/L, respectively (reference ranges 0–10 and 0–20 μmol/L, respectively). Single-phase contrast-enhanced computed tomography angiography (CTA) with volume-rendered imaging (VRI) was obtained. VRI revealed a portocaval shunt originating just cranial to a tributary of the gastroduodenal vein and draining into the caudal vena cava at the level of the epiploic foramen. CTA revealed a 3.66 mm-diameter shunt measured at the level of the termination of the shunt and a 3.79 mm-diameter portal vein measured at the level between the origin of the shunt and the porta of the liver. Surgery was performed using cellophane banding without attenuation. Follow-up single-phase CTA with VRI was obtained 10 weeks after surgery. VRI revealed no evidence of portosystemic communication at the level of the cellophane band or caudal to it. CTA demonstrated an increased portal vein diameter (3.79–5.27 mm) measured at the level between the origin of the shunt and the porta of the liver. Preprandial and postprandial bile acids were 25 and 12.5 μmol/L, respectively (aforementioned respective reference ranges), 3 months post-surgery. No problems were evident at 6 months.

  1. Contrast-enhanced MDCT gastrography for detection of early gastric cancer: Initial assessment of “wall-carving image”, a novel volume rendering technique

    International Nuclear Information System (INIS)

    Komori, Masahiro; Kawanami, Satoshi; Tsurumaru, Daisuke; Matsuura, Shuji; Hiraka, Kiyohisa; Nishie, Akihiro; Honda, Hiroshi

    2012-01-01

    Objective: We developed a new volume rendering technique, the CT gastrography wall carving image (WC) technique, which provides a clear visualization of localized enhanced tumors in the gastric wall. We evaluated the diagnostic performance of the WC as an adjunct to conventional images in detecting early gastric cancer (EGC). Materials and methods: Thirty-nine patients with 43 EGCs underwent contrast-enhanced MDCT gastrography for preoperative examination. Two observers independently reviewed the images under three different conditions: term 1, Axial CT; term 2, Axial CT, MPR and VE; and term 3, Axial CT, MPR, VE and WC for the detection of EGC. The accuracy of each condition as reviewed by each of the two observers was evaluated by receiver operating characteristic analysis. Interobserver agreement was calculated using weighted-κ statistics. Results: The best diagnostic performance and interobserver agreement were obtained in term 3. The AUCs of the two observers for terms 1, 2, and 3 were 0.63, 0.73, and 0.84, and 0.57, 0.73, and 0.76, respectively. The interobserver agreement improved from fair at term 1 to substantial at term 3. Conclusions: The addition of WC to conventional MDCT display improved the diagnostic accuracy and interobserver reproducibility for the detection of EGC. WC represents a suitable alternative for the visualization of localized enhanced tumors in the gastric wall.

  2. Method and system for rendering and interacting with an adaptable computing environment

    Science.gov (United States)

    Osbourn, Gordon Cecil [Albuquerque, NM; Bouchard, Ann Marie [Albuquerque, NM

    2012-06-12

    An adaptable computing environment is implemented with software entities termed "s-machines", which self-assemble into hierarchical data structures capable of rendering and interacting with the computing environment. A hierarchical data structure includes a first hierarchical s-machine bound to a second hierarchical s-machine. The first hierarchical s-machine is associated with a first layer of a rendering region on a display screen and the second hierarchical s-machine is associated with a second layer of the rendering region overlaying at least a portion of the first layer. A screen element s-machine is linked to the first hierarchical s-machine. The screen element s-machine manages data associated with a screen element rendered to the display screen within the rendering region at the first layer.

  3. Light Field Rendering for Head Mounted Displays using Pixel Reprojection

    DEFF Research Database (Denmark)

    Hansen, Anne Juhler; Klein, Jákup; Kraus, Martin

    2017-01-01

    Light field displays have advantages over traditional stereoscopic head mounted displays, for example, because they can overcome the vergence-accommodation conflict. However, rendering light fields can be a heavy task for computers due to the number of images that have to be rendered. Since much ...

  4. Light Field Rendering for Head Mounted Displays using Pixel Reprojection

    DEFF Research Database (Denmark)

    Hansen, Anne Juhler; Klein, Jákup; Kraus, Martin

    2017-01-01

    of the information of the different images is redundant, we use pixel reprojection from the corner cameras to compute the remaining images in the light field. We compare the reprojected images with directly rendered images in a user test. In most cases, the users were unable to distinguish the images. In extreme...... cases, the reprojection approach is not capable of creating the light field. We conclude that pixel reprojection is a feasible method for rendering light fields as far as quality of perspective and diffuse shading is concerned, but render time needs to be reduced to make the method practical....
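    The pixel-reprojection idea in this record can be reduced to a much simpler, hypothetical sketch: with a per-pixel depth map and a horizontal camera offset, a source pixel at column x lands at column x - disparity in the target view, where disparity = focal_px * baseline / depth. A z-buffer resolves collisions and unfilled pixels are the disocclusion holes mentioned in the abstract. All names below are illustrative, not the authors' renderer.

```python
# One-scanline sketch of depth-based pixel reprojection between two
# horizontally offset cameras (reduced from the corner-camera setup).

def reproject_row(colors, depths, focal_px, baseline):
    """Reproject one scanline; returns target colors, with None marking
    columns no source pixel reaches (disocclusion holes)."""
    w = len(colors)
    out = [None] * w
    zbuf = [float("inf")] * w
    for x in range(w):
        d = depths[x]
        tx = round(x - focal_px * baseline / d)  # disparity shift
        if 0 <= tx < w and d < zbuf[tx]:         # nearest surface wins
            zbuf[tx] = d
            out[tx] = colors[x]
    return out
```

    For a constant-depth scanline the whole row simply shifts by one disparity, leaving a hole at the far edge, which matches the intuition that reprojection cannot synthesize content the source camera never saw.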

  5. Architecture for high performance stereoscopic game rendering on Android

    Science.gov (United States)

    Flack, Julien; Sanderson, Hugh; Shetty, Sampath

    2014-03-01

    Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low-power processors. Such systems are now being integrated directly into the next generation of 3D TVs, potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high-profile titles on established platforms like Windows PC and PS3, there is a lack of GPU-independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set-top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real-time. The architecture includes a method of analyzing 2D games and using rule-based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique to separate the views in the depth domain and render directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing the performance in comparison to more traditional rendering techniques, including depth-based image rendering, both in terms of frame rates and impact on battery consumption.

  6. Extreme Simplification and Rendering of Point Sets using Algebraic Multigrid

    NARCIS (Netherlands)

    Reniers, Dennie; Telea, Alexandru

    2005-01-01

    We present a novel approach for extreme simplification of point set models in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However efficient, simple primitives are less effective in approximating large surface areas. A large

  7. Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.

    Science.gov (United States)

    Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael

    2016-07-01

    'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume that is observable by users during interactive volume rendering. Manipulating this 'visibility' improves volume rendering processes, for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume rendered medical images have been a primary beneficiary of the VH given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of the VH to medical images that have large intensity ranges and volume dimensions and hence require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins is used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphical processing units (GPUs), which enables efficient computation of the histogram.
We show the application of our method to single-modality computed tomography (CT), magnetic resonance
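    The adaptive-binning idea above, clustering voxel intensities into a small number of bins and accumulating per-bin visibility, can be sketched with a plain 1-D k-means. This is a hedged illustration of the description in the abstract (the paper uses a GPU implementation); the function names and the choice of k-means as the clustering algorithm are assumptions.

```python
# Hypothetical CPU sketch of an adaptive binned visibility histogram:
# cluster intensities into k bins (1-D k-means, k >= 2), then sum the
# per-voxel visibility into the bin of the nearest cluster center.

def kmeans_1d(values, k, iters=20):
    """Return k cluster centers for scalar values (Lloyd's algorithm)."""
    vs = sorted(values)
    # initialize centers at evenly spaced order statistics
    centers = [vs[(len(vs) - 1) * i // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def adaptive_visibility_histogram(intensities, visibilities, k):
    """Bin centers plus total visibility accumulated into each adaptive bin."""
    centers = kmeans_1d(intensities, k)
    hist = [0.0] * k
    for v, vis in zip(intensities, visibilities):
        i = min(range(k), key=lambda j: abs(v - centers[j]))
        hist[i] += vis
    return centers, hist
```

    Two well-separated intensity groups thus collapse into two bins while their visibility totals are preserved, which is the compression the AB-VH relies on.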

  8. Volume-rendered hemorrhage-responsible arteriogram created by 64 multidetector-row CT during aortography: utility for catheterization in transcatheter arterial embolization for acute arterial bleeding.

    Science.gov (United States)

    Minamiguchi, Hiroki; Kawai, Nobuyuki; Sato, Morio; Ikoma, Akira; Sanda, Hiroki; Nakata, Kouhei; Tanaka, Fumihiro; Nakai, Motoki; Sonomura, Tetsuo; Murotani, Kazuhiro; Hosokawa, Seiki; Nishioku, Tadayoshi

    2014-01-01

    Aortography for detecting hemorrhage is limited when determining the catheter treatment strategy because the artery responsible for hemorrhage commonly overlaps organs and non-responsible arteries. Selective catheterization of untargeted arteries would result in repeated arteriography, large volumes of contrast medium, and extended time. A volume-rendered hemorrhage-responsible arteriogram created with 64 multidetector-row CT (64MDCT) during aortography (MDCTAo) can be used both for hemorrhage mapping and catheter navigation. The MDCTAo depicted hemorrhage in 61 of 71 cases of suspected acute arterial bleeding treated at our institute in the last 3 years. Complete hemostasis by embolization was achieved in all cases. The hemorrhage-responsible arteriogram was used for navigation during catheterization, thus assisting successful embolization. Hemorrhage was not visualized in the remaining 10 patients, of whom 6 had a pseudoaneurysm in a visceral artery; 1 with urinary bladder bleeding and 1 with chest wall hemorrhage had gauze tamponade; and 1 with urinary bladder hemorrhage and 1 with uterine hemorrhage had spastic arteries. Six patients with pseudoaneurysm underwent preventive embolization and the other 4 patients were managed by watchful observation. MDCTAo has the advantage of depicting the arteries responsible for hemoptysis, whether from the bronchial arteries or other systemic arteries, in a single scan. MDCTAo is particularly useful for identifying the source of acute arterial bleeding in the pancreatic arcade area, which is supplied by both the celiac and superior mesenteric arteries. In a case of pelvic hemorrhage, MDCTAo identified the responsible artery from among numerous overlapping visceral arteries that branched from the internal iliac arteries. In conclusion, a hemorrhage-responsible arteriogram created by 64MDCT immediately before catheterization is useful for deciding the catheter treatment strategy for acute arterial bleeding.

  9. Physically-Based Rendering of Particle-Based Fluids with Light Transport Effects

    Science.gov (United States)

    Beddiaf, Ali; Babahenini, Mohamed Chaouki

    2018-03-01

    Recent interactive rendering approaches aim to produce images efficiently. However, time constraints deeply affect their output accuracy and realism (many light phenomena are poorly supported, or not supported at all). To remedy this issue, in this paper we propose a physically-based fluid rendering approach. First, while state-of-the-art methods focus on isosurface rendering with only two refractions, our proposal (1) considers the fluid as a heterogeneous participating medium with refractive boundaries, and (2) supports both multiple refractions and scattering. Second, the proposed solution is fully particle-based in the sense that no transformation of particles into a grid is required. This feature makes it able to handle many particle types (water, bubble, foam, and sand). On top of that, a medium with different fluids (color, phase function, etc.) can also be rendered.

  10. High Fidelity Haptic Rendering

    CERN Document Server

    Otaduy, Miguel A

    2006-01-01

    The human haptic system, among all senses, provides unique and bidirectional communication between humans and their physical environment. Yet, to date, most human-computer interactive systems have focused primarily on the graphical rendering of visual information and, to a lesser extent, on the display of auditory information. Extending the frontier of visual computing, haptic interfaces, or force feedback devices, have the potential to increase the quality of human-computer interaction by accommodating the sense of touch. They provide an attractive augmentation to visual display and enhance t

  11. Methodology, Algorithms, and Emerging Tool for Automated Design of Intelligent Integrated Multi-Sensor Systems

    Directory of Open Access Journals (Sweden)

    Andreas König

    2009-11-01

    The emergence of novel sensing elements, computing nodes, wireless communication and integration technology provides unprecedented possibilities for the design and application of intelligent systems. Each new application system must be designed from scratch, employing sophisticated methods ranging from conventional signal processing to computational intelligence. Currently, a significant part of this overall algorithmic chain of the computational system model still has to be assembled manually by experienced designers in a time- and labor-consuming process. In this research work, this challenge is taken up, and a methodology and algorithms for the automated design of intelligent, integrated, and resource-aware multi-sensor systems employing multi-objective evolutionary computation are introduced. The proposed methodology tackles the challenge of rapid prototyping of such systems under realization constraints and, additionally, includes features of system-instance-specific self-correction for sustained operation at large volume and in a dynamically changing environment. The extension of these concepts to a reconfigurable hardware platform yields so-called self-x sensor systems, where the x stands for, e.g., self-monitoring, -calibrating, -trimming, and -repairing/-healing. Selected experimental results prove the applicability and effectiveness of our proposed methodology and emerging tool. With our approach, competitive results were achieved with regard to classification accuracy, flexibility, and design speed under additional design constraints.

  12. Fast rendering of scanned room geometries

    DEFF Research Database (Denmark)

    Olesen, Søren Krarup; Markovic, Milos; Hammershøi, Dorte

    2014-01-01

    Room acoustics are rendered in Virtual Realities based on models of the real world. These are typically rather coarse representations of the true geometry resulting in room impulse responses with a lack of natural detail. This problem can be overcome by using data scanned by sensors, such as e...

  13. On-the-fly generation and rendering of infinite cities on the GPU

    KAUST Repository

    Steinberger, Markus

    2014-05-01

    In this paper, we present a new approach for shape-grammar-based generation and rendering of huge cities in real-time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the geometry produced as a preprocessing step. During rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view in real-time directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame-to-frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  14. On-the-fly generation and rendering of infinite cities on the GPU

    KAUST Repository

    Steinberger, Markus; Kenzel, Michael; Kainz, Bernhard K.; Wonka, Peter; Schmalstieg, Dieter

    2014-01-01

    In this paper, we present a new approach for shape-grammar-based generation and rendering of huge cities in real-time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the geometry produced as a preprocessing step. During rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view in real-time directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame-to-frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  15. Facilitating the design of multidimensional and local transfer functions for volume visualization

    NARCIS (Netherlands)

    Sereda, P.

    2007-01-01

    The importance of volume visualization is increasing since the sizes of the datasets that need to be inspected grow with every new version of medical scanners (e.g., CT and MR). Direct volume rendering is a 3D visualization technique that has, in many cases, clear benefits over 2D views. It is able

  16. Antireflective sub-wavelength structures for improvement of the extraction efficiency and color rendering index of monolithic white light-emitting diode

    DEFF Research Database (Denmark)

    Ou, Yiyu; Corell, Dennis Dan; Dam-Hansen, Carsten

    2011-01-01

    We have theoretically investigated the influence of antireflective sub-wavelength structures on a monolithic white light-emitting diode (LED). The simulation is based on the rigorous coupled wave analysis (RCWA) algorithm, and both cylinder and moth-eye structures have been studied in the work. Our...... simulation results show that a moth-eye structure enhances the light extraction efficiency over the entire visible light range with an extraction efficiency enhancement of up to 26 %. Also for the first time to our best knowledge, the influence of sub-wavelength structures on both the color rendering index...

  17. Genetic algorithm approach

    African Journals Online (AJOL)

    Structure / acentric factor relationship of alcohols and phenols: genetic ... descriptors of geometrical type selected by genetic algorithm, among more than 1600 ..... Practical handbook of genetic algorithms: Applications Volume I; CRC Press.

  18. Generalized-ensemble molecular dynamics and Monte Carlo algorithms beyond the limit of the multicanonical algorithm

    International Nuclear Information System (INIS)

    Okumura, Hisashi

    2010-01-01

    I review two new generalized-ensemble algorithms for molecular dynamics and Monte Carlo simulations of biomolecules, that is, the multibaric–multithermal algorithm and the partial multicanonical algorithm. In the multibaric–multithermal algorithm, two-dimensional random walks not only in the potential-energy space but also in the volume space are realized. One can discuss the temperature dependence and pressure dependence of biomolecules with this algorithm. The partial multicanonical simulation samples a wide range of only an important part of potential energy, so that one can concentrate the effort to determine a multicanonical weight factor only on the important energy terms. This algorithm has higher sampling efficiency than the multicanonical and canonical algorithms. (review)
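    The core of the generalized-ensemble idea above is to replace the Boltzmann factor in the Metropolis acceptance test with a weight factor w(E), so that the simulation performs a random walk in potential-energy (and, in the multibaric-multithermal case, volume) space. A single-update sketch under illustrative assumptions (scalar coordinate, user-supplied log-weight; not the paper's code):

```python
import math
import random

def metropolis_muca(x, energy, log_weight, step, rng):
    """One generalized-ensemble Metropolis update of a scalar coordinate x:
    accept x -> x' with probability min(1, w(E')/w(E)), where
    log_weight(E) = ln w(E) replaces the Boltzmann factor -E/kT.
    A multicanonical weight makes the walk in E roughly flat."""
    x_new = x + rng.uniform(-step, step)
    dlw = log_weight(energy(x_new)) - log_weight(energy(x))
    if dlw >= 0.0 or rng.random() < math.exp(dlw):
        return x_new
    return x

# The canonical ensemble is recovered as the special case log_weight = -beta*E:
if __name__ == "__main__":
    rng = random.Random(0)
    energy = lambda x: 0.5 * x * x       # toy harmonic potential
    log_weight = lambda e: -e            # beta = 1 (canonical limit)
    x = 0.0
    for _ in range(1000):
        x = metropolis_muca(x, energy, log_weight, 0.5, rng)
```

    In an actual multicanonical run the weight factor is estimated iteratively from the sampled energy histogram; the update rule itself is unchanged.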

  19. Method of producing hydrogen, and rendering a contaminated biomass inert

    Science.gov (United States)

    Bingham, Dennis N [Idaho Falls, ID; Klingler, Kerry M [Idaho Falls, ID; Wilding, Bruce M [Idaho Falls, ID

    2010-02-23

    A method for rendering a contaminated biomass inert includes providing a first composition, providing a second composition, reacting the first and second compositions together to form an alkaline hydroxide, providing a contaminated biomass feedstock and reacting the alkaline hydroxide with the contaminated biomass feedstock to render the contaminated biomass feedstock inert and further producing hydrogen gas, and a byproduct that includes the first composition.

  20. A comparison of two dose calculation algorithms-anisotropic analytical algorithm and Acuros XB-for radiation therapy planning of canine intranasal tumors.

    Science.gov (United States)

    Nagata, Koichi; Pethel, Timothy D

    2017-07-01

    Although the anisotropic analytical algorithm (AAA) and Acuros XB (AXB) are both radiation dose calculation algorithms that take into account the heterogeneity within the radiation field, Acuros XB is inherently more accurate. The purpose of this retrospective method comparison study was to compare the two algorithms and evaluate the dose discrepancy within the planning target volume (PTV). Radiation therapy (RT) plans of 11 dogs with intranasal tumors treated by radiation therapy at the University of Georgia were evaluated. All dogs were planned for intensity-modulated radiation therapy using nine equally spaced coplanar X-ray beams, and the dose was calculated with the anisotropic analytical algorithm. The same plan with the same monitor units was then recalculated using Acuros XB for comparison. Each dog's planning target volume was separated into air, bone, and tissue and evaluated. The mean dose to the planning target volume estimated by Acuros XB was 1.3% lower; it was 1.4% higher for air, 3.7% lower for bone, and 0.9% lower for tissue. The volume of the planning target volume covered by the prescribed dose decreased by 21% when Acuros XB was used, due to increased dose heterogeneity within the planning target volume. The anisotropic analytical algorithm relatively underestimates the dose heterogeneity and relatively overestimates the dose to the bone and tissue within the planning target volume for radiation therapy planning of canine intranasal tumors. This can be clinically significant, especially if tumor cells are present within the bone, because it may result in relative underdosing of the tumor. © 2017 American College of Veterinary Radiology.

  1. Volume Decomposition and Feature Recognition for Hexahedral Mesh Generation

    Energy Technology Data Exchange (ETDEWEB)

    GADH,RAJIT; LU,YONG; TAUTGES,TIMOTHY J.

    1999-09-27

    Considerable progress has been made on automatic hexahedral mesh generation in recent years. Several automatic meshing algorithms have proven to be very reliable on certain classes of geometry. While it is always worth pursuing general algorithms viable on more general geometry, a combination of the well-established algorithms is ready to take on classes of complicated geometry. By partitioning the entire geometry into meshable pieces, each matched with an appropriate meshing algorithm, the original geometry becomes meshable and may achieve better mesh quality. Each meshable portion is recognized as a meshing feature. This paper, which is a part of the feature-based meshing methodology, presents the work on shape recognition and volume decomposition to automatically decompose a CAD model into meshable volumes. There are four phases in this approach: (1) Feature Determination, to extract decomposition features; (2) Cutting Surfaces Generation, to form the ''tailored'' cutting surfaces; (3) Body Decomposition, to get the imprinted volumes; and (4) Meshing Algorithm Assignment, to match the decomposed volumes with appropriate meshing algorithms. The feature determination procedure is based on the CLoop feature recognition algorithm, which is extended to be more general. Results are demonstrated on several parts with complicated topology and geometry.
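
    The four-phase approach can be outlined as a skeleton in which each phase is a stand-in callable; all names here are hypothetical, since real implementations operate on CAD geometry kernels:

```python
def mesh_by_decomposition(body, recognize, make_cut, split, pick_mesher):
    """Skeleton of the feature-based decomposition pipeline.

    Each callable stands in for one phase of the approach:
    feature determination, cutting-surface generation, body
    decomposition, and meshing-algorithm assignment.
    """
    features = recognize(body)                         # (1) feature determination
    surfaces = [make_cut(body, f) for f in features]   # (2) cutting surfaces
    volumes = split(body, surfaces)                    # (3) body decomposition
    return [(v, pick_mesher(v)) for v in volumes]      # (4) algorithm assignment
```

    The value of the skeleton is only that it makes the data flow between the four phases explicit.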

  2. Rendering Visible: Painting and Sexuate Subjectivity

    Science.gov (United States)

    Daley, Linda

    2015-01-01

    In this essay, I examine Luce Irigaray's aesthetic of sexual difference, which she develops by extrapolating from Paul Klee's idea that the role of painting is to render the non-visible rather than represent the visible. This idea is the premise of her analyses of phenomenology and psychoanalysis and their respective contributions to understanding…

  3. Influence of rendering methods on yield and quality of chicken fat recovered from broiler skin

    Directory of Open Access Journals (Sweden)

    Liang-Kun Lin

    2017-06-01

    Full Text Available Objective In order to utilize fat from broiler byproducts efficiently, it is necessary to develop an appropriate rendering procedure and establish quality information for the rendered fat. A study was therefore undertaken to evaluate the influence of rendering methods on the amounts and general properties of the fat recovered from broiler skin. Methods The yield and quality of the broiler skin fat rendered through high power microwave rendering (HPMR, 3.6 W/g for 10 min), low power microwave rendering (LPMR, 2.4 W/g for 10 min), oven baking (OB, at 180°C for 40 min), and water cooking (WC, boiling for 40 min) were compared. Results Microwave-rendered skin exhibited the highest yields and fat recovery rates, followed by OB and WC fats (p<0.05). HPMR fat had the highest L*, a*, and b* values, whereas WC fat had the highest moisture content, acid values, and thiobarbituric acid (TBA) values (p<0.05). There was no significant difference in the acid, peroxide, and TBA values between HPMR and LPMR fats. Conclusion Microwave rendering at a power level of 3.6 W/g for 10 min is suggested based on the yield and quality of the chicken fat.

  4. Beaming teaching application: recording techniques for spatial xylophone sound rendering

    DEFF Research Database (Denmark)

    Markovic, Milos; Madsen, Esben; Olesen, Søren Krarup

    2012-01-01

    BEAMING is a telepresence research project aiming at providing a multimodal interaction between two or more participants located at distant locations. One of the BEAMING applications allows a distant teacher to give a xylophone playing lecture to the students. Therefore, rendering of the xylophon...... to spatial improvements mainly in terms of the Apparent Source Width (ASW). Rendered examples are subjectively evaluated in listening tests by comparing them with binaural recording....

  5. Chromium Renderserver: Scalable and Open Source Remote Rendering Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Brian; Ahern, Sean; Bethel, E. Wes; Brugger, Eric; Cook,Rich; Daniel, Jamison; Lewis, Ken; Owen, Jens; Southard, Dale

    2007-12-01

    Chromium Renderserver (CRRS) is software infrastructure that provides the ability for one or more users to run and view image output from unmodified, interactive OpenGL and X11 applications on a remote, parallel computational platform equipped with graphics hardware accelerators via industry-standard Layer 7 network protocols and client viewers. The new contributions of this work include a solution to the problem of synchronizing X11 and OpenGL command streams, remote delivery of parallel hardware-accelerated rendering, and a performance analysis of several different optimizations that are generally applicable to a variety of rendering architectures. CRRS is fully operational, Open Source software.

  6. SPAM-assisted partial volume correction algorithm for PET

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Sung Il; Kang, Keon Wook; Lee, Jae Sung; Lee, Dong Soo; Chung, June Key; Soh, Kwang Sup; Lee, Myung Chul [College of Medicine, Seoul National Univ., Seoul (Korea, Republic of)

    2000-07-01

    A probabilistic atlas of the human brain (Statistical Probability Anatomical Maps: SPAM) was developed by the International Consortium for Brain Mapping (ICBM). It will be a good frame for calculating volume of interest (VOI) according to statistical variability of the human brain in many fields of brain imaging. We show that we can get more exact quantification of the counts in a VOI by using SPAM in the correction of the partial volume effect for a simulated PET image. The MRI of a patient with dementia was segmented into gray matter and white matter, and then they were smoothed to PET resolution. A simulated PET image was made by adding one third of the smoothed white matter to the smoothed gray matter. Spillover effect and partial volume effect were corrected for this simulated PET image with the aid of the segmented and smoothed MR images. The images were spatially normalized to the average brain MRI atlas of ICBM, and were multiplied by the probabilities of 98 VOIs of the SPAM images of the Montreal Neurological Institute. After the correction of the partial volume effect, the counts of the frontal, parietal, temporal, and occipital lobes were increased by 38{+-}6%, while those of the hippocampus and amygdala by 4{+-}3%. By calculating the counts in each VOI using the product of the probability in the SPAM images and the counts in the simulated PET image, the counts increase and become closer to the true values. SPAM-assisted partial volume correction is useful for quantification of VOIs in PET images.
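
    The final quantification step, multiplying SPAM probability images with the PET image, amounts to a probability-weighted mean over the VOI. A minimal sketch, assuming both images are already spatially normalised to the same grid (the function name is invented for illustration):

```python
import numpy as np

def spam_voi_mean(pet, prob_map):
    """Probability-weighted mean count in a VOI.

    `pet` is a spatially normalised PET image and `prob_map` the SPAM
    probability image of one VOI, as arrays of the same shape.  Each
    voxel contributes in proportion to its probability of belonging
    to the VOI.
    """
    prob = np.asarray(prob_map, dtype=float)
    img = np.asarray(pet, dtype=float)
    return float((img * prob).sum() / prob.sum())
```
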

  7. SPAM-assisted partial volume correction algorithm for PET

    International Nuclear Information System (INIS)

    Cho, Sung Il; Kang, Keon Wook; Lee, Jae Sung; Lee, Dong Soo; Chung, June Key; Soh, Kwang Sup; Lee, Myung Chul

    2000-01-01

    A probabilistic atlas of the human brain (Statistical Probability Anatomical Maps: SPAM) was developed by the International Consortium for Brain Mapping (ICBM). It will be a good frame for calculating volume of interest (VOI) according to statistical variability of the human brain in many fields of brain imaging. We show that we can get more exact quantification of the counts in a VOI by using SPAM in the correction of the partial volume effect for a simulated PET image. The MRI of a patient with dementia was segmented into gray matter and white matter, and then they were smoothed to PET resolution. A simulated PET image was made by adding one third of the smoothed white matter to the smoothed gray matter. Spillover effect and partial volume effect were corrected for this simulated PET image with the aid of the segmented and smoothed MR images. The images were spatially normalized to the average brain MRI atlas of ICBM, and were multiplied by the probabilities of 98 VOIs of the SPAM images of the Montreal Neurological Institute. After the correction of the partial volume effect, the counts of the frontal, parietal, temporal, and occipital lobes were increased by 38±6%, while those of the hippocampus and amygdala by 4±3%. By calculating the counts in each VOI using the product of the probability in the SPAM images and the counts in the simulated PET image, the counts increase and become closer to the true values. SPAM-assisted partial volume correction is useful for quantification of VOIs in PET images.

  8. 6. Algorithms for Sorting and Searching

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Algorithms - Algorithms for Sorting and Searching. R K Shyamasundar. Series Article ... Author Affiliations. R K Shyamasundar1. Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India ...

  9. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology

    Science.gov (United States)

    Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael

    2012-02-01

    Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs but leaving out rib-cage and spine. The problem is addressed in a model based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.

  10. Optimized Data Indexing Algorithms for OLAP Systems

    Directory of Open Access Journals (Sweden)

    Lucian BORNAZ

    2010-12-01

    Full Text Available The need to process and analyze large data volumes, as well as to convey the information contained therein to decision makers, naturally led to the development of OLAP systems. Similarly to DBMSs, OLAP systems must ensure optimum access to the storage environment. Although there are several ways to optimize database systems, implementing a correct data indexing solution is the most effective and least costly. Thus, OLAP uses indexing algorithms for relational data and n-dimensional summarized data stored in cubes. Today's database systems implement derived indexing algorithms based on the well-known Tree, Bitmap and Hash indexing algorithms, because no single indexing algorithm provides the best performance for every situation (type, structure, data volume, application). This paper presents a new n-dimensional cube indexing algorithm, derived from the well-known B-Tree index, which indexes data stored in data warehouses taking into consideration their multi-dimensional nature, and provides better performance in comparison to the already implemented Tree-like index types.
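
    A toy illustration of an ordered index over n-dimensional cube cells, in the spirit of the B-Tree-derived index described above: coordinates are linearised into composite keys and kept in sorted order, with Python's `bisect` standing in for the tree search. The class and method names are invented for this sketch:

```python
import bisect

class CubeIndex:
    """Toy ordered index over n-dimensional cube cells."""

    def __init__(self):
        self._keys = []   # sorted composite keys (coordinate tuples)
        self._vals = {}

    def insert(self, coords, measure):
        key = tuple(coords)
        if key not in self._vals:
            bisect.insort(self._keys, key)
        self._vals[key] = measure

    def range_query(self, lo, hi):
        """All cells whose leading coordinate lies in [lo, hi]."""
        start = bisect.bisect_left(self._keys, (lo,))
        end = bisect.bisect_right(self._keys, (hi, float("inf")))
        return [(k, self._vals[k]) for k in self._keys[start:end]]
```

    Because keys are tuples, a range scan on the leading dimension is a contiguous slice of the sorted key list, which is the property a B-Tree-style cube index exploits.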

  11. A Hierarchical Volumetric Shadow Algorithm for Single Scattering

    OpenAIRE

    Baran, Ilya; Chen, Jiawen; Ragan-Kelley, Jonathan Millar; Durand, Fredo; Lehtinen, Jaakko

    2010-01-01

    Volumetric effects such as beams of light through participating media are an important component in the appearance of the natural world. Many such effects can be faithfully modeled by a single scattering medium. In the presence of shadows, rendering these effects can be prohibitively expensive: current algorithms are based on ray marching, i.e., integrating the illumination scattered towards the camera along each view ray, modulated by visibility to the light source at each sample. Visibility...
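
    The ray-marching baseline that such hierarchical algorithms accelerate can be sketched as follows; all optical coefficients here are illustrative placeholders, not values from the paper:

```python
import math

def single_scatter(ts, visibility, sigma_t=0.2, sigma_s=0.1, light=1.0, step=0.5):
    """Brute-force ray-marched single scattering along one view ray.

    `ts` are sample positions along the ray and `visibility(t)` is 1
    if the light source is unoccluded at that sample, 0 if shadowed.
    Each sample's in-scattered light is attenuated back to the camera.
    """
    radiance = 0.0
    for t in ts:
        transmittance = math.exp(-sigma_t * t)  # camera-to-sample attenuation
        radiance += transmittance * sigma_s * visibility(t) * light * step
    return radiance
```

    The per-sample visibility query is exactly the cost the paper's hierarchical shadow structure is designed to reduce.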

  12. Efficient algorithms of multidimensional γ-ray spectra compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    2006-01-01

    Efficient algorithms to compress multidimensional γ-ray events are presented. Two alternative kinds of compression algorithms, based on the adaptive orthogonal and randomizing transforms respectively, are proposed. Both algorithms exploit the symmetry of the γ-ray spectra to reduce the data volume

  13. A contrast-oriented algorithm for FDG-PET-based delineation of tumour volumes for the radiotherapy of lung cancer: derivation from phantom measurements and validation in patient data

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, Andrea; Hellwig, Dirk; Kirsch, Carl-Martin; Nestle, Ursula [Saarland University Medical Center, Department of Nuclear Medicine, Homburg (Germany); Kremp, Stephanie; Ruebe, Christian [Saarland University Medical Center, Department of Radiotherapy, Homburg (Germany)

    2008-11-15

    An easily applicable algorithm for the FDG-PET-based delineation of tumour volumes for the radiotherapy of lung cancer was developed by phantom measurements and validated in patient data. PET scans were performed (ECAT-ART tomograph) on two cylindrical phantoms (phan1, phan2) containing glass spheres of different volumes (7.4-258 ml) which were filled with identical FDG concentrations. Gradually increasing the activity of the fillable background, signal-to-background ratios from 33:1 to 2.5:1 were realised. The mean standardised uptake value (SUV) of the region-of-interest (ROI) surrounded by a 70% isocontour (mSUV70) was used to represent the FDG accumulation of each sphere (or tumour). Image contrast was defined as C = (mSUV70 - BG)/BG, where BG is the mean background SUV. For the spheres of phan1, the threshold SUVs (TS) best matching the known sphere volumes were determined. A regression function representing the relationship between TS/(mSUV70 - BG) and C was calculated and used for delineation of the spheres in phan2 and the gross tumour volumes (GTVs) of eight primary lung tumours. These GTVs were compared to those defined using CT. The relationship between TS/(mSUV70 - BG) and C is best described by an inverse regression function which can be converted to the linear relationship TS = a x mSUV70 + b x BG. Using this algorithm, the volumes delineated in phan2 differed by only -0.4 to +0.7 mm in radius from the true ones, whilst the PET-GTVs differed by only -0.7 to +1.2 mm compared with the values determined by CT. With the contrast-oriented algorithm presented in this study, a PET-based delineation of GTVs for primary tumours of lung cancer patients is feasible. (orig.)
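
    The linear thresholding relationship TS = a x mSUV70 + b x BG lends itself to a simple segmentation sketch; the coefficients below are placeholders, since the paper derives its values from the phantom measurements:

```python
import numpy as np

def pet_gtv_mask(img, a=0.5, b=0.6):
    """Contrast-oriented threshold segmentation, TS = a*mSUV70 + b*BG.

    `a` and `b` are made-up placeholders.  mSUV70 is approximated as
    the mean of voxels at or above 70% of the maximum, BG as the mean
    of the remaining voxels.
    """
    img = np.asarray(img, dtype=float)
    hot = img >= 0.7 * img.max()
    msuv70 = img[hot].mean()
    bg = img[~hot].mean() if (~hot).any() else 0.0
    ts = a * msuv70 + b * bg
    return img >= ts
```

    With a = 0.5 and b = 0.6, a profile of background 1 and peak 10 yields TS = 5.6, so only the hot voxels survive the threshold.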

  14. Evaluation and Improvement of the CIE Metameric and Colour Rendering Index

    Directory of Open Access Journals (Sweden)

    Radovan Slavuj

    2015-12-01

    Full Text Available Artificial light sources are intended to simulate daylight and its properties of colour rendering and colour discrimination. Two indices defined by the CIE are used to quantify the quality of artificial light sources: the Colour Rendering Index, which quantifies the ability of a light source to render colours, and the Metamerism Index, which describes the metamerism potential of a given light source. The calculation of both indices is defined by the CIE and has been a subject of discussion and change in the past. In this work, the problem of the number and type of samples used in the calculation is addressed and evaluated. It is observed that both indices depend on the choice and number of samples, and that these should be determined based on the application.

  15. Automated planning volume definition in soft-tissue sarcoma adjuvant brachytherapy

    International Nuclear Information System (INIS)

    Lee, Eva K.; Fung, Albert Y.C.; Zaider, Marco; Brooks, J. Paul

    2002-01-01

    In current practice, the planning volume for adjuvant brachytherapy treatment for soft-tissue sarcoma is either not determined a priori (in this case, seed locations are selected based on isodose curves conforming to a visual estimate of the planning volume), or it is derived via a tedious manual process. In either case, the process is subjective and time consuming, and is highly dependent on the human planner. The focus of the work described herein involves the development of an automated contouring algorithm to outline the planning volume. Such an automatic procedure will save time and provide a consistent and objective method for determining planning volumes. In addition, a definitive representation of the planning volume will allow for sophisticated brachytherapy treatment planning approaches to be applied when designing treatment plans, so as to maximize local tumour control and minimize normal tissue complications. An automated tumour volume contouring algorithm is developed utilizing computational geometry and numerical interpolation techniques in conjunction with an artificial intelligence method. The target volume is defined to be the slab of tissue r cm perpendicularly away from the curvilinear plane defined by the mesh of catheters. We assume that if adjacent catheters are over 2r cm apart, the tissue between the two catheters is part of the tumour bed. Input data consist of the digitized coordinates of the catheter positions in each of several cross-sectional slices of the tumour bed, and the estimated distance r from the catheters to the tumour surface. Mathematically, one can view the planning volume as the volume enclosed within a minimal smoothly-connected surface which contains a set of circles, each circle centred at a given catheter position in a given cross-sectional slice. The algorithm performs local interpolation on consecutive triplets of circles. The effectiveness of the algorithm is evaluated based on its performance on a collection of

  16. Automated planning volume definition in soft-tissue sarcoma adjuvant brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eva K. [Department of Radiation Oncology, Emory University School of Medicine, Atlanta, GA (United States); School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA (United States); Fung, Albert Y.C.; Zaider, Marco [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY (United States); Brooks, J. Paul [School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA (United States)

    2002-06-07

    In current practice, the planning volume for adjuvant brachytherapy treatment for soft-tissue sarcoma is either not determined a priori (in this case, seed locations are selected based on isodose curves conforming to a visual estimate of the planning volume), or it is derived via a tedious manual process. In either case, the process is subjective and time consuming, and is highly dependent on the human planner. The focus of the work described herein involves the development of an automated contouring algorithm to outline the planning volume. Such an automatic procedure will save time and provide a consistent and objective method for determining planning volumes. In addition, a definitive representation of the planning volume will allow for sophisticated brachytherapy treatment planning approaches to be applied when designing treatment plans, so as to maximize local tumour control and minimize normal tissue complications. An automated tumour volume contouring algorithm is developed utilizing computational geometry and numerical interpolation techniques in conjunction with an artificial intelligence method. The target volume is defined to be the slab of tissue r cm perpendicularly away from the curvilinear plane defined by the mesh of catheters. We assume that if adjacent catheters are over 2r cm apart, the tissue between the two catheters is part of the tumour bed. Input data consist of the digitized coordinates of the catheter positions in each of several cross-sectional slices of the tumour bed, and the estimated distance r from the catheters to the tumour surface. Mathematically, one can view the planning volume as the volume enclosed within a minimal smoothly-connected surface which contains a set of circles, each circle centred at a given catheter position in a given cross-sectional slice. The algorithm performs local interpolation on consecutive triplets of circles. The effectiveness of the algorithm is evaluated based on its performance on a collection of

  17. Experiencing "Macbeth": From Text Rendering to Multicultural Performance.

    Science.gov (United States)

    Reisin, Gail

    1993-01-01

    Shows how one teacher used innovative methods in teaching William Shakespeare's "Macbeth." Outlines student assignments including text renderings, rewriting a scene from the play, and creating a multicultural scrapbook for the play. (HB)

  18. Application of Rectangle-Partition-Based Partial Rendering in Wireless Image Communication

    Institute of Scientific and Technical Information of China (English)

    刘德胜

    2012-01-01

    To improve image-rendering performance and reduce the volume of transmitted data in wireless image communication, an optimized algorithm is proposed that applies rectangle-partition-based partial rendering: only the changed region of each frame is re-rendered and transmitted, which saves CPU resources while reducing power consumption and bandwidth dependence. Experiments show that when the image changes relatively little, the algorithm can improve rendering performance by up to a factor of two, and that the volume of transmitted data is roughly proportional to the number of objects that must be re-rendered. The results show that the algorithm has a clear range of applicability: when the image is relatively stable, it improves computational performance by more than 30% on average and reduces data transmission by more than 50% in the common cases.
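
    The core of such a partial-rendering scheme is finding the changed region of each frame. A minimal dirty-rectangle sketch, reduced to a single bounding rectangle of the changed pixels (the function name and single-rectangle simplification are assumptions of this sketch, not the paper's exact partitioning):

```python
import numpy as np

def dirty_rect(prev, curr):
    """Bounding rectangle of the pixels that changed between frames.

    Returns (top, left, bottom, right), inclusive, or None if the
    frames are identical.  Only this region would be re-rendered and
    transmitted.
    """
    diff = np.asarray(prev) != np.asarray(curr)
    if not diff.any():
        return None
    rows = np.flatnonzero(diff.any(axis=1))
    cols = np.flatnonzero(diff.any(axis=0))
    return rows[0], cols[0], rows[-1], cols[-1]
```
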

  19. 3D-shaded surface rendering of gadolinium-enhanced MR angiography in congenital heart disease

    International Nuclear Information System (INIS)

    Okuda, S.; Kikinis, R.; Dumanli, H.; Geva, T.; Powell, A.J.; Chung, T.

    2000-01-01

    Background. Gadolinium-enhanced three-dimensional (3D) MR angiography is a useful imaging technique for patients with congenital heart disease. Objective. This study sought to determine the added value of creating 3D shaded surface displays compared to standard maximal intensity projection (MIP) and multiplanar reformatting (MPR) techniques when analyzing 3D MR angiography data. Materials and methods. Seventeen patients (range, 3 months to 51 years old) with a variety of congenital cardiovascular defects underwent gadolinium-enhanced 3D MR angiography of the thorax. Color-coded 3D shaded surface models were rendered from the image data using manual segmentation and computer-based algorithms. Models could be rotated, translocated, or zoomed interactively by the viewer. Information available from the 3D models was compared to analysis based on viewing standard MIP/MPR displays. Results. Median postprocessing time for the 3D models was 6 h (range, 3-25 h) compared to approximately 20 min for MIP/MPR viewing. No additional diagnostic information was gained from 3D model analysis. All major findings with MIP/MPR postprocessing were also apparent on the 3D models. Qualitatively, the 3D models were more easily interpreted and enabled adjacent vessels to be distinguished more readily. Conclusion. Routine use of 3D shaded surface reconstructions for visualization of contrast enhanced MR angiography in congenital heart disease cannot be recommended. 3D surface rendering may be more useful for presenting complex anatomy to an audience unfamiliar with congenital heart disease and as an educational tool. (orig.)

  20. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  1. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis.

    Science.gov (United States)

    Xu, Z N

    2014-12-01

    In this study, an error analysis is performed to study real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A significant number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the three algorithms are complementary: the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement. However, the ADSA-P algorithm introduces significant errors in the case of small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a given contact angle error are obtained through a significant amount of computation. To improve the precision of static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail, retaining the advantages of the three algorithms while overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm, and erroneous judgments in static contact angle measurements are avoided. The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop
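
    The combined selection rule can be sketched as a small dispatch function; the numeric cut-offs below are placeholders invented for illustration, not the paper's computed critical values:

```python
def pick_contact_angle_algorithm(volume_ul, angle_deg):
    """Illustrative rule combining the three algorithms.

    Fitting algorithms are used for small/medium drop volumes and
    contact angles, ADSA-P for large ones; the cut-offs (50 uL,
    60 and 120 degrees) only mimic the structure of such a rule.
    """
    if volume_ul <= 50 and angle_deg <= 120:
        return "ellipse-fit" if angle_deg > 60 else "circle-fit"
    return "ADSA-P"
```
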

  2. SU-F-J-115: Target Volume and Artifact Evaluation of a New Device-Less 4D CT Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Martin, R; Pan, T [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: 4DCT is often used in radiation therapy treatment planning to define the extent of motion of the visible tumor (IGTV). Recently available software allows 4DCT images to be created without the use of an external motion surrogate. This study aims to compare this device-less algorithm to a standard device-driven technique (RPM) with regard to artifacts and the creation of treatment volumes. Methods: 34 lung cancer patients who had previously received a cine 4DCT scan on a GE scanner with an RPM-determined respiratory signal were selected. Cine images were sorted into 10 phases based on both the RPM signal and the device-less algorithm. Contours were created on standard and device-less maximum intensity projection (MIP) images using a region growing algorithm and manual adjustment to remove other structures. Variations in measurements due to intra-observer differences in contouring were assessed by repeating a subset of 6 patients 2 additional times. Artifacts in each phase image were assessed using normalized cross correlation at each bed position transition. A score between +1 (artifacts “better” in all phases for device-less) and −1 (RPM similarly better) was assigned for each patient based on these results. Results: Device-less IGTV contours were 2.1 ± 1.0% smaller than standard IGTV contours (not significant, p = 0.15). The Dice similarity coefficient (DSC) was 0.950 ± 0.006, indicating good similarity between the contours. Intra-observer variation resulted in standard deviations of 1.2 percentage points in percent volume difference and 0.005 in DSC measurements. Only two patients had improved artifacts with RPM, and the average artifact score (0.40) was significantly greater than zero. Conclusion: Device-less 4DCT can be used in place of the standard method for target definition due to no observed difference between standard and device-less IGTVs. Phase image artifacts were significantly reduced with the device-less method.
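
    The bed-position transition check rests on normalised cross-correlation between adjacent image regions; a minimal sketch of the metric itself (slice extraction and the scoring scheme are left out, and the function name is invented):

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two same-sized regions.

    Values near 1 indicate a smooth transition between adjacent bed
    positions; lower values suggest a sorting artifact.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 1.0
```
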

  3. Long-term thermophilic mono-digestion of rendering wastes and co-digestion with potato pulp

    International Nuclear Information System (INIS)

    Bayr, S.; Ojanperä, M.; Kaparaju, P.; Rintala, J.

    2014-01-01

    Highlights: • Rendering wastes’ mono-digestion and co-digestion with potato pulp were studied. • CSTR process with OLR of 1.5 kg VS/m³·d, HRT of 50 d was unstable in mono-digestion. • Free NH₃ inhibited mono-digestion of rendering wastes. • CSTR process with OLR of 1.5 kg VS/m³·d, HRT of 50 d was stable in co-digestion. • Co-digestion increased methane yield somewhat compared to mono-digestion. - Abstract: In this study, mono-digestion of rendering wastes and co-digestion of rendering wastes with potato pulp were studied for the first time in continuous stirred tank reactor (CSTR) experiments at 55 °C. Rendering wastes have high protein and lipid contents and are considered good substrates for methane production. However, accumulation of digestion intermediate products, viz. volatile fatty acids (VFAs), long chain fatty acids (LCFAs) and ammonia nitrogen (NH₄-N and/or free NH₃), can cause process imbalance during the digestion. Mono-digestion of rendering wastes at an organic loading rate (OLR) of 1.5 kg volatile solids (VS)/m³·d and hydraulic retention time (HRT) of 50 d was unstable and resulted in methane yields of 450 dm³/kg VSfed. On the other hand, co-digestion of rendering wastes with potato pulp (60% wet weight, WW) at the same OLR and HRT improved the process stability and increased methane yields (500–680 dm³/kg VSfed). Thus, it can be concluded that co-digestion of rendering wastes with potato pulp could improve the process stability and methane yields from these difficult to treat industrial waste materials

  4. ACCELERATION RENDERING METHOD ON RAY TRACING WITH ANGLE COMPARISON AND DISTANCE COMPARISON

    Directory of Open Access Journals (Sweden)

    Liliana liliana

    2007-01-01

    Full Text Available In computer graphics applications, a method that is often used to produce realistic images is ray tracing. Ray tracing models not only local illumination but also global illumination. Local illumination accounts for ambient, diffuse and specular effects only, while global illumination also accounts for mirroring and transparency. Local illumination considers effects from the lamp(s) only, but global illumination considers effects from other object(s) too. Objects that are usually modeled are primitive objects and mesh objects. The advantage of mesh modeling is its varied, interesting and realistic shapes. A mesh contains many primitive objects such as triangles or, rarely, squares. A problem in mesh object modeling is long rendering time, because every ray must be checked against a large number of the mesh's triangles. With the addition of rays spawned by other objects, the number of rays traced increases, which increases rendering time. To solve this problem, this research develops new methods to make the rendering process of mesh objects faster. The new methods are angle comparison and distance comparison. These methods are used to reduce the number of ray checks: rays predicted not to intersect the mesh are not tested for intersection with it. With angle comparison, if a small comparison angle is used, rendering is fast; the disadvantage is that if the triangles are large, some triangles will be corrupted. If the comparison angle is bigger, mesh corruption can be avoided but the rendering time will be longer than without comparison. With distance comparison, the rendering time is less than without comparison, and no triangle is corrupted.
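    The angle-comparison idea can be sketched in a few lines: before any ray-triangle intersection tests, discard triangles whose centroids lie outside a cone around the ray direction. This is a minimal illustration under our own assumptions (a centroid-based test and a hypothetical `candidate_triangles` helper), not the paper's implementation.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def candidate_triangles(origin, direction, centroids, max_angle_deg):
    """Angle comparison: keep only triangles whose centroid lies within
    max_angle_deg of the ray direction; the rest are skipped without an
    intersection test."""
    cos_limit = math.cos(math.radians(max_angle_deg))
    d_len = norm(direction)
    keep = []
    for i, c in enumerate(centroids):
        to_c = [c[j] - origin[j] for j in range(3)]
        denom = d_len * norm(to_c)
        if denom == 0.0:
            continue  # degenerate: centroid at the ray origin
        if dot(direction, to_c) / denom >= cos_limit:
            keep.append(i)
    return keep
```

    The returned indices are the only triangles that then undergo the full ray-triangle intersection test, which is where the rendering-time saving comes from.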

  5. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    Science.gov (United States)

    Mizutani, Eiji; Demmel, James W

    2003-01-01

    This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in the statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).

  6. 31 CFR 515.548 - Services rendered by Cuba to United States aircraft.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Services rendered by Cuba to United... REGULATIONS Licenses, Authorizations, and Statements of Licensing Policy § 515.548 Services rendered by Cuba to United States aircraft. Specific licenses are issued for payment to Cuba of charges for services...

  7. COMPANION ANIMALS SYMPOSIUM: Rendered ingredients significantly influence sustainability, quality, and safety of pet food.

    Science.gov (United States)

    Meeker, D L; Meisinger, J L

    2015-03-01

    The rendering industry collects and safely processes approximately 25 million t of animal byproducts each year in the United States. Rendering plants process a variety of raw materials from food animal production, principally offal from slaughterhouses, but including whole animals that die on farms or in transit and other materials such as bone, feathers, and blood. By recycling these byproducts into various protein, fat, and mineral products, including meat and bone meal, hydrolyzed feather meal, blood meal, and various types of animal fats and greases, the sustainability of food animal production is greatly enhanced. The rendering industry is conscious of its role in the prevention of disease, in microbiological control, and in providing safe feed ingredients for livestock, poultry, aquaculture, and pets. The processing of otherwise low-value OM from the livestock production and meat processing industries through rendering drastically reduces the amount of waste. If not rendered, biological materials would be deposited in landfills, burned, buried, or inappropriately dumped, with large amounts of carbon dioxide, ammonia, and other compounds polluting air and water. The majority of rendered protein products are used as animal feed. Rendered products are especially valuable to the livestock and pet food industries because of their high protein content, digestible AA levels (especially lysine), mineral availability (especially calcium and phosphorus), and relatively low cost in relation to their nutrient value. The use of these reclaimed and recycled materials in pet food is a much more sustainable model than using human food for pets.

  8. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3).
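    As a concrete instance of the accept/reject updating mentioned above, here is a minimal Metropolis sweep for a free scalar field on a one-dimensional periodic lattice. The toy action is ours (not a gauge theory); it only illustrates the local accept/reject step.

```python
import math
import random

def metropolis_sweep(phi, beta, rng, step=0.5):
    """One Metropolis sweep over a 1-D periodic lattice with toy action
    S = beta * sum_i (phi[i] - phi[i+1])**2 / 2.  Returns the acceptance rate."""
    n = len(phi)
    accepted = 0
    for i in range(n):
        left, right = phi[(i - 1) % n], phi[(i + 1) % n]
        old = phi[i]
        new = old + rng.uniform(-step, step)
        # change in action from updating a single site
        dS = beta * 0.5 * (((new - left) ** 2 + (new - right) ** 2)
                           - ((old - left) ** 2 + (old - right) ** 2))
        # Metropolis accept/reject
        if dS <= 0 or rng.random() < math.exp(-dS):
            phi[i] = new
            accepted += 1
    return accepted / n
```

    At beta = 0 every proposal is accepted; as beta grows, the acceptance rate falls, which is the usual tuning trade-off for the step size.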

  9. Programmatic implications of implementing the relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory sites, test volumes, platform distribution and space requirements

    Directory of Open Access Journals (Sweden)

    Naseem Cassim

    2017-02-01

    Full Text Available Introduction: CD4 testing in South Africa is based on an integrated tiered service delivery model that matches testing demand with capacity. The National Health Laboratory Service has predominantly implemented laboratory-based CD4 testing. Coverage gaps, over-/under-capacitation and optimal placement of point-of-care (POC) testing sites need investigation. Objectives: We assessed the impact of relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory and POC testing sites. Methods: The RACL algorithm was developed to allocate laboratories and POC sites to ensure coverage using a set-coverage approach for a defined travel time (T). The algorithm was repeated for three scenarios (A: T = 4; B: T = 3; C: T = 2 hours). Drive times for a representative sample of health facility clusters were used to approximate T. Outcomes included allocation of testing sites, Euclidean distances and test volumes. Additional analysis included platform distribution and space requirement assessment. Scenarios were reported as fusion table maps. Results: Scenario A would offer a fully-centralised approach with 15 CD4 laboratories without any POC testing. A significant increase in volumes would result in a four-fold increase at busier laboratories. CD4 laboratories would increase to 41 in scenario B and 61 in scenario C. POC testing would be offered at two sites in scenario B and 20 sites in scenario C. Conclusion: The RACL algorithm provides an objective methodology to address coverage gaps through the allocation of CD4 laboratories and POC sites for a given T. The algorithm outcomes need to be assessed in the context of local conditions.
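    The set-coverage step can be illustrated with the textbook greedy approximation: repeatedly pick the site that covers the most still-uncovered clusters. The site names and reachability sets below are made up for illustration; the RACL algorithm itself is more elaborate.

```python
def greedy_set_cover(facilities, clusters):
    """facilities: dict mapping site -> set of clusters reachable within T.
    Returns a list of sites that together cover every cluster (greedy
    approximation to the set-cover allocation step)."""
    uncovered = set(clusters)
    chosen = []
    while uncovered:
        # site covering the most clusters that are still uncovered
        best = max(facilities, key=lambda s: len(facilities[s] & uncovered))
        gain = facilities[best] & uncovered
        if not gain:
            raise ValueError("some clusters are unreachable from any site")
        chosen.append(best)
        uncovered -= gain
    return chosen
```

    The greedy choice gives a cover at most a logarithmic factor larger than optimal, which is why it is a common surrogate when the exact capacitated formulation is too expensive.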

  10. Matching rendered and real world images by digital image processing

    Science.gov (United States)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real-world images with those rendered from virtual-space software shows a more or less visible mismatch between their image quality performance. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to a number of image degradation factors such as residual lens aberrations, diffraction, the sensor's low-pass anti-aliasing filter, color-pattern demosaicing, etc. The effect of all those image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered with a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
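    Degrading a rendered image with a Gaussian approximation of the measured PSF amounts to a separable convolution. The clamp-to-edge boundary handling and the 3-sigma kernel radius below are our choices, not the paper's.

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Discrete 1-D Gaussian, normalized to sum to 1."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    k = [math.exp(-x * x / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def conv1d(row, k):
    """Convolve one scanline with kernel k, clamping at the edges."""
    r, n = len(k) // 2, len(row)
    return [sum(k[i + r] * row[min(max(x + i, 0), n - 1)]
                for i in range(-r, r + 1)) for x in range(n)]

def psf_blur(image, sigma):
    """Separable Gaussian blur of a 2-D grayscale image (list of rows),
    approximating the camera PSF applied to a rendered image."""
    k = gaussian_kernel(sigma)
    rows = [conv1d(row, k) for row in image]              # horizontal pass
    cols = [conv1d(list(col), k) for col in zip(*rows)]   # vertical pass
    return [list(row) for row in zip(*cols)]
```

    Because the kernel is normalized, flat regions keep their value and only edges and fine detail lose contrast, which is the mismatch the paper is trying to reproduce.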

  11. Method and apparatus for imaging volume data

    International Nuclear Information System (INIS)

    Drebin, R.; Carpenter, L.C.

    1987-01-01

    An imaging system projects a two-dimensional representation of three-dimensional volumes where surface boundaries and objects internal to the volumes are readily shown, and hidden surfaces and the surface boundaries themselves are accurately rendered by determining volume elements or voxels. An image volume representing a volume object or data structure is written into memory. A color and opacity are assigned to each voxel within the volume and stored as a red (R), green (G), blue (B), and opacity (A) component in a three-dimensional data volume. The RGBA assignment for each voxel is determined based on the percentage component composition of the materials represented in the volume, and thus the percentage of color and transparency associated with those materials. The voxels in the RGBA volume are used as mathematical filters such that each successive voxel filter is overlaid over a prior background voxel filter. Through a linear interpolation, a new background filter is determined and generated. The interpolation is successively performed for all voxels up to the frontmost voxel for the plane of view. The method is repeated until all display voxels are determined for the plane of view. (author)
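    The voxel-filter overlay described here corresponds to the standard "over" compositing operator with premultiplied colour. A back-to-front sketch of one ray through the RGBA volume:

```python
def composite_ray(voxels):
    """Back-to-front 'over' compositing of (R, G, B, A) voxels with
    premultiplied colour.  `voxels` is ordered from the farthest voxel
    to the one nearest the viewing plane."""
    r = g = b = a = 0.0
    for vr, vg, vb, va in voxels:
        # each voxel acts as a filter laid over the accumulated background
        r = vr + (1.0 - va) * r
        g = vg + (1.0 - va) * g
        b = vb + (1.0 - va) * b
        a = va + (1.0 - va) * a
    return (r, g, b, a)
```

    A fully opaque voxel hides everything behind it, while a fully transparent one leaves the accumulated background untouched, matching the linear-interpolation filter behaviour in the abstract.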

  12. Design of an optimum computer vision-based automatic abalone (Haliotis discus hannai) grading algorithm.

    Science.gov (United States)

    Lee, Donggil; Lee, Kyounghoon; Kim, Seonghun; Yang, Yongsu

    2015-04-01

    An automatic abalone grading algorithm that estimates abalone weights on the basis of computer vision using 2D images is developed and tested. The algorithm overcomes the problems experienced by conventional abalone grading methods that utilize manual sorting and mechanical automatic grading. To design an optimal algorithm, a regression formula and R² value were investigated by performing a regression analysis for each of total length, body width, thickness, view area, and actual volume against abalone weights. The R² value between the actual volume and abalone weight was 0.999, showing a relatively high correlation. As a result, to easily estimate the actual volumes of abalones based on computer vision, the volumes were calculated under the assumption that abalone shapes are half-oblate ellipsoids, and a regression formula was derived to estimate the volumes of abalones through linear regression analysis between the calculated and actual volumes. The final automatic abalone grading algorithm is designed using the abalone volume estimation regression formula derived from test results, and the actual volumes and abalone weights regression formula. In the range of abalones weighing from 16.51 to 128.01 g, the results of evaluating the algorithm's performance via cross-validation indicate root-mean-square and worst-case prediction errors of 2.8 g and ±8 g, respectively. © 2015 Institute of Food Technologists®
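    The two regression ingredients can be sketched directly: a half-oblate-ellipsoid volume estimate from the 2D measurements and an ordinary least-squares line. The axis mapping (semi-axes length/2 and width/2, with thickness as the height) is our reading of the shape assumption, not a formula quoted from the paper.

```python
import math

def half_ellipsoid_volume(length, width, thickness):
    """Half of an oblate ellipsoid with semi-axes length/2, width/2 and
    height `thickness` (our assumed mapping of measurements to axes)."""
    return (2.0 / 3.0) * math.pi * (length / 2.0) * (width / 2.0) * thickness

def fit_line(x, y):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx
```

    Grading then chains the two: estimate the volume from the image measurements, map volume to weight with the fitted line, and bin the weight into grades.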

  13. The design of 3D scaffold for tissue engineering using automated scaffold design algorithm.

    Science.gov (United States)

    Mahmoud, Shahenda; Eldeib, Ayman; Samy, Sherif

    2015-06-01

    Several advances have been introduced in the field of bone regenerative medicine, and a new term, tissue engineering (TE), was created. In TE, a highly porous artificial extracellular matrix, or scaffold, is required to accommodate cells and guide their growth in three dimensions. The design of scaffolds with desirable internal and external structure represents a challenge for TE. In this paper, we introduce a new method, known as automated scaffold design (ASD), for designing a 3D scaffold with minimum mismatches in its geometrical parameters. The method makes use of the k-means clustering algorithm to separate the different tissues and hence delineate the defective bone portions. The segmented portions of different slices are registered to construct the 3D volume of the data. It also uses an isosurface rendering technique for 3D visualization of the scaffold and bones, providing the ability to visualize the transplanted as well as the normal bone portions. The proposed system shows good performance in both the segmentation results and the visualization aspects.
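    The tissue-separation step can be illustrated with plain Lloyd's k-means on scalar voxel intensities. This is a one-dimensional toy of ours; the paper clusters full CT slices.

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Lloyd's k-means on scalar intensities: partitions voxel values
    into k tissue clusters and returns the sorted cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # assignment step: each value joins its nearest center
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)
        # update step: each center moves to its group mean
        new = [sum(g) / len(g) if g else centers[i]
               for i, g in enumerate(groups)]
        if new == centers:
            break
        centers = new
    return sorted(centers)
```

    In the segmentation pipeline, the voxels assigned to the "bone" cluster in each slice would then be registered across slices to build the 3D volume.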

  14. Long-term thermophilic mono-digestion of rendering wastes and co-digestion with potato pulp

    Energy Technology Data Exchange (ETDEWEB)

    Bayr, S., E-mail: suvi.bayr@jyu.fi; Ojanperä, M.; Kaparaju, P.; Rintala, J.

    2014-10-15

    Highlights: • Rendering wastes’ mono-digestion and co-digestion with potato pulp were studied. • CSTR process with OLR of 1.5 kg VS/m³ d, HRT of 50 d was unstable in mono-digestion. • Free NH₃ inhibited mono-digestion of rendering wastes. • CSTR process with OLR of 1.5 kg VS/m³ d, HRT of 50 d was stable in co-digestion. • Co-digestion increased methane yield somewhat compared to mono-digestion. - Abstract: In this study, mono-digestion of rendering wastes and co-digestion of rendering wastes with potato pulp were studied for the first time in continuous stirred tank reactor (CSTR) experiments at 55 °C. Rendering wastes have high protein and lipid contents and are considered good substrates for methane production. However, accumulation of digestion intermediate products viz., volatile fatty acids (VFAs), long chain fatty acids (LCFAs) and ammonia nitrogen (NH₄-N and/or free NH₃) can cause process imbalance during the digestion. Mono-digestion of rendering wastes at an organic loading rate (OLR) of 1.5 kg volatile solids (VS)/m³ d and hydraulic retention time (HRT) of 50 d was unstable and resulted in methane yields of 450 dm³/kg VS_fed. On the other hand, co-digestion of rendering wastes with potato pulp (60% wet weight, WW) at the same OLR and HRT improved the process stability and increased methane yields (500–680 dm³/kg VS_fed). Thus, it can be concluded that co-digestion of rendering wastes with potato pulp could improve the process stability and methane yields from these difficult-to-treat industrial waste materials.

  15. Influence of different contributions of scatter and attenuation on the threshold values in contrast-based algorithms for volume segmentation.

    Science.gov (United States)

    Matheoud, Roberta; Della Monica, Patrizia; Secco, Chiara; Loi, Gianfranco; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco

    2011-01-01

    The aim of this work is to evaluate the role of different amounts of attenuation and scatter on FDG-PET image volume segmentation using a contrast-oriented method based on the target-to-background (TB) ratio and target dimensions. A phantom study was designed employing 3 phantom sets, which provided a clinical range of attenuation and scatter conditions, equipped with 6 spheres of different volumes (0.5-26.5 ml). The phantoms were: (1) the Hoffman 3-dimensional brain phantom, (2) a modified International Electrotechnical Commission (IEC) phantom with an annular ring of water bags of 3 cm thickness fit over the IEC phantom, and (3) a modified IEC phantom with an annular ring of water bags of 9 cm. The phantom cavities were filled with a solution of FDG at 5.4 kBq/ml activity concentration, and the spheres with activity concentration ratios of about 16, 8, and 4 times the background activity concentration. Images were acquired with a Biograph 16 HI-REZ PET/CT scanner. Thresholds (TS) were determined as a percentage of the maximum intensity in the cross-section area of the spheres. To reduce statistical fluctuations, a nominal maximum value was calculated as the mean of all voxels > 95% of the maximum. To find the TS value that yielded an area A best matching the true value, the cross sections were auto-contoured in the attenuation-corrected slices, varying TS in steps of 1%, until the area so determined differed by less than 10 mm² from its known physical value. Multiple regression methods were used to derive an adaptive thresholding algorithm and to test its dependence on different conditions of attenuation and scatter. The errors of scatter and attenuation correction increased with increasing amounts of attenuation and scatter in the phantoms. Despite these increasing inaccuracies, the PET threshold segmentation algorithm was not influenced by the different conditions of attenuation and scatter.
The test of the hypothesis of coincident regression lines for the three phantoms used
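    The calibration loop can be sketched as follows: the nominal maximum is the mean of voxels above 95% of the peak, and TS is scanned in 1% steps until the segmented cross-section area best matches the known physical area. The 4x4 toy slice in the test and the pixel-area unit are ours.

```python
def nominal_max(slice_):
    """Mean of all voxels above 95% of the peak, to damp statistical noise."""
    vals = [v for row in slice_ for v in row]
    peak = max(vals)
    top = [v for v in vals if v > 0.95 * peak]
    return sum(top) / len(top)

def best_threshold(slice_, true_area, pixel_area=1.0):
    """Scan TS from 1% to 99% of the nominal maximum and return the
    percentage whose segmented area is closest to the known area."""
    m = nominal_max(slice_)
    best_ts, best_err = None, float("inf")
    for ts in range(1, 100):
        # count pixels at or above the candidate threshold
        area = sum(v >= ts / 100.0 * m
                   for row in slice_ for v in row) * pixel_area
        err = abs(area - true_area)
        if err < best_err:
            best_ts, best_err = ts, err
    return best_ts
```

    Repeating this over spheres of different sizes and TB ratios yields the (TS, volume, contrast) samples that the multiple regression then turns into an adaptive thresholding rule.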

  16. Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations

    KAUST Repository

    Sicat, Ronell B.

    2015-11-25

    The resolutions of acquired image and volume data are ever increasing. However, the resolutions of commodity display devices remain limited. This leads to an increasing gap between data and display resolutions. To bridge this gap, the standard approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i.e., the output, and not the full size of the input. Multi-resolution representations, such as image mipmaps, and volume octrees, are crucial in providing these operations direct access to any subset of the data at any resolution corresponding to the output. Despite its widespread use, this standard approach has some shortcomings in three important application areas, namely non-linear image operations, multi-resolution volume rendering, and large-scale image exploration. This dissertation presents new multi-resolution representations for large-scale images and volumes that address these shortcomings. Standard multi-resolution representations require low-pass pre-filtering for anti-aliasing. However, linear pre-filters do not commute with non-linear operations. This becomes problematic when applying non-linear operations directly to any coarse resolution levels in standard representations. Particularly, this leads to inaccurate output when applying non-linear image operations, e.g., color mapping and detail-aware filters, to multi-resolution images. Similarly, in multi-resolution volume rendering, this leads to inconsistency artifacts which manifest as erroneous differences in rendering outputs across resolution levels. To address these issues, we introduce the sparse pdf maps and sparse pdf volumes representations for large-scale images and volumes, respectively. These representations sparsely encode continuous probability density functions (pdfs) of multi-resolution pixel
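    The core problem, that linear pre-filters do not commute with non-linear operations, fits in a few lines: a 2:1 box prefilter followed by a hard color map gives a different coarse image than mapping first. The toy scanline below is ours.

```python
def box_downsample(pixels):
    """2:1 linear pre-filter (box average), as in a standard mipmap level."""
    return [(a + b) / 2.0 for a, b in zip(pixels[::2], pixels[1::2])]

def color_map(v, threshold=0.5):
    """A deliberately non-linear transfer function (hard threshold)."""
    return 1.0 if v >= threshold else 0.0

scanline = [0.0, 1.0, 0.0, 1.0]
filter_then_map = [color_map(v) for v in box_downsample(scanline)]
map_then_filter = box_downsample([color_map(v) for v in scanline])
# the two orders disagree: [1.0, 1.0] vs [0.5, 0.5]
```

    The second order is the ground truth (apply the operation at full resolution, then filter); representations like the sparse pdf maps are designed so that coarse levels reproduce it.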

  17. Phase-only stereoscopic hologram calculation based on Gerchberg–Saxton iterative algorithm

    International Nuclear Information System (INIS)

    Xia Xinyi; Xia Jun

    2016-01-01

    A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated using the Gerchberg–Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm used in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for a phase-only hologram encoded from the complex distribution. Both simulation and optical experiment results demonstrate that the proposed method gives higher-quality reconstructions than the traditional method. (special topic)
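    A minimal one-dimensional Gerchberg–Saxton loop looks like this: alternate between the hologram plane (unit amplitude, i.e. phase-only) and the image plane (target amplitude), keeping the computed phase each time. Using a plain DFT in place of the optical propagation is our simplification; the paper's stereoscopic 2D pipeline is more involved.

```python
import cmath
import random

def dft(x, inverse=False):
    """Naive O(n^2) DFT, sufficient for a demo."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[m] * cmath.exp(s * 2j * cmath.pi * m * k / n)
               for m in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def gerchberg_saxton(target_amp, iters=50, seed=1):
    """Phase-only hologram whose far field (DFT) approximates target_amp."""
    n = len(target_amp)
    rng = random.Random(seed)
    field = [cmath.exp(2j * cmath.pi * rng.random()) for _ in range(n)]
    for _ in range(iters):
        img = dft(field)                                  # propagate to image plane
        img = [a * cmath.exp(1j * cmath.phase(v))         # impose target amplitude,
               for a, v in zip(target_amp, img)]          # keep computed phase
        field = dft(img, inverse=True)                    # propagate back
        field = [cmath.exp(1j * cmath.phase(v)) for v in field]  # phase-only
    return field
```

    Both steps are projections onto constraint sets, so the image-plane amplitude error is non-increasing from one iteration to the next, which is the classic convergence property of the GS algorithm.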

  18. Bench test evaluation of volume delivered by modern ICU ventilators during volume-controlled ventilation.

    Science.gov (United States)

    Lyazidi, Aissam; Thille, Arnaud W; Carteaux, Guillaume; Galia, Fabrice; Brochard, Laurent; Richard, Jean-Christophe M

    2010-12-01

    During volume-controlled ventilation, part of the volume delivered is compressed into the circuit. To correct for this phenomenon, modern ventilators use compensation algorithms. Humidity and temperature also influence the delivered volume. In a bench study at a research laboratory in a university hospital, we compared nine ICU ventilators equipped with compensation algorithms, one with a proximal pneumotachograph and one without compensation. Each ventilator was evaluated under normal, obstructive, and restrictive conditions of respiratory mechanics. For each condition, three tidal volumes (V (T)) were set (300, 500, and 800 ml), with and without an inspiratory pause. The insufflated volume and the volume delivered at the Y-piece were measured independently, without a humidification device, under ambient temperature and pressure and dry gas conditions. We computed the actually delivered V (T) to the lung under body temperature and pressure and saturated water vapour conditions (BTPS). For target V (T) values of 300, 500, and 800 ml, actually delivered V (T) under BTPS conditions ranged from 261 to 396 ml (-13 to +32%), from 437 to 622 ml (-13 to +24%), and from 681 to 953 ml (-15 to +19%), respectively (p ventilators.
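    The compressible-volume correction that such compensation algorithms perform can be sketched with a linear circuit-compliance model. The model itself and the ATPD-to-BTPS conversion factor of 1.09 are textbook assumptions of ours, not values measured in this study.

```python
def delivered_volume(vt_set_ml, circuit_compliance_ml_per_cmh2o,
                     p_peak_cmh2o, peep_cmh2o, btps_factor=1.09):
    """Estimate the volume actually reaching the patient: the set VT minus
    the gas compressed in the circuit, converted to BTPS conditions.

    Assumes compressed volume = circuit compliance * driving pressure."""
    compressed = circuit_compliance_ml_per_cmh2o * (p_peak_cmh2o - peep_cmh2o)
    return (vt_set_ml - compressed) * btps_factor
```

    With a typical circuit compliance of about 2 ml/cmH2O, a 25 cmH2O pressure swing already diverts 50 ml of a 500 ml breath into the circuit, which is the size of error the bench study quantifies across ventilators.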

  19. Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.

    Science.gov (United States)

    Friston, Sebastian; Steed, Anthony; Tilbury, Simon; Gaydadjiev, Georgi

    2016-04-01

    Latency - the delay between a user's action and the response to this action - is known to be detrimental to virtual reality. Latency is typically considered to be a discrete value characterising a delay, constant in time and space - but this characterisation is incomplete. Latency changes across the display during scan-out, and how it does so is dependent on the rendering approach used. In this study, we present an ultra-low latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of ~1 ms from 'tracker to pixel'. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan-beam. This is in contrast to frame-based systems such as those using typical GPUs, for which the latency increases as scan-out proceeds. Using a series of high- and low-speed videos of our system in use, we confirm its latency of ~1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2. We contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system - one with a zero-latency tracker, renderer and display running at 1 kHz. Finally, we examine the results of these quality measures, and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with a lower average latency, can more faithfully draw what the ideal virtual reality system would. Further, we find that with low display persistence, the sensitivity to velocity of both systems is lowered, but that it is much lower for ours.

  20. CT two-dimensional reformation versus three-dimensional volume rendering with regard to surgical findings in the preoperative assessment of the ossicular chain in chronic suppurative otitis media

    International Nuclear Information System (INIS)

    Guo, Yong; Liu, Yang; Lu, Qiao-hui; Zheng, Kui-hong; Shi, Li-jing; Wang, Qing-jun

    2013-01-01

    Purpose: To assess the role of three-dimensional volume rendering (3DVR) in the preoperative assessment of the ossicular chain in chronic suppurative otitis media (CSOM). Materials and methods: Sixty-six patients with CSOM were included in this prospective study. Temporal bone was scanned with a 128-channel multidetector row CT and the axial data was transferred to the workstation for multiplanar reformation (MPR) and 3DVR reconstructions. Evaluation of the ossicular chain according to a three-point scoring system on two-dimensional reformation (2D) and 3DVR was performed independently by two radiologists. The evaluation results were compared with surgical findings. Results: 2D showed over 89% accuracy in the assessment of segmental absence of the ossicular chain in CSOM, no matter how small the segmental size was. 3DVR was as accurate as 2D for the assessment of segmental absence. However, 3DVR was found to be more accurate than 2D in the evaluation of partial erosion of segments. Conclusion: Both 3DVR and 2D are accurate and reliable for the assessment of the ossicular chain in CSOM. The inclusion of 3DVR images in the imaging protocol improves the accuracy of 2D in detecting ossicular erosion from CSOM

  1. CT two-dimensional reformation versus three-dimensional volume rendering with regard to surgical findings in the preoperative assessment of the ossicular chain in chronic suppurative otitis media

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yong, E-mail: guoyong27@hotmail.com [Department of Radiology, Navy General Hospital, 6# Fucheng Road, Beijing 100048 (China); Liu, Yang, E-mail: liuyangdoc@sina.com [Department of Otorhinolaryngology, Navy General Hospital, 6# Fucheng Road, Beijing 100048 (China); Lu, Qiao-hui, E-mail: Luqiaohui465@126.com [Department of Radiology, Navy General Hospital, 6# Fucheng Road, Beijing 100048 (China); Zheng, Kui-hong, E-mail: zhengkuihong1971@sina.com [Department of Radiology, Navy General Hospital, 6# Fucheng Road, Beijing 100048 (China); Shi, Li-jing, E-mail: Shilijing2003@yahoo.com.cn [Department of Radiology, Navy General Hospital, 6# Fucheng Road, Beijing 100048 (China); Wang, Qing-jun, E-mail: wangqingjun77@163.com [Department of Radiology, Navy General Hospital, 6# Fucheng Road, Beijing 100048 (China)

    2013-09-15

    Purpose: To assess the role of three-dimensional volume rendering (3DVR) in the preoperative assessment of the ossicular chain in chronic suppurative otitis media (CSOM). Materials and methods: Sixty-six patients with CSOM were included in this prospective study. Temporal bone was scanned with a 128-channel multidetector row CT and the axial data was transferred to the workstation for multiplanar reformation (MPR) and 3DVR reconstructions. Evaluation of the ossicular chain according to a three-point scoring system on two-dimensional reformation (2D) and 3DVR was performed independently by two radiologists. The evaluation results were compared with surgical findings. Results: 2D showed over 89% accuracy in the assessment of segmental absence of the ossicular chain in CSOM, no matter how small the segmental size was. 3DVR was as accurate as 2D for the assessment of segmental absence. However, 3DVR was found to be more accurate than 2D in the evaluation of partial erosion of segments. Conclusion: Both 3DVR and 2D are accurate and reliable for the assessment of the ossicular chain in CSOM. The inclusion of 3DVR images in the imaging protocol improves the accuracy of 2D in detecting ossicular erosion from CSOM.

  2. The lime renderings from plaza de la Corredera, Córdoba

    Directory of Open Access Journals (Sweden)

    González, T.

    2002-09-01

    Full Text Available The causes of the pathologies found on the lime renderings of the Plaza de la Corredera façades are analysed in this study. For this purpose, mineralogical and chemical analyses of the building materials - brick masonry and rendering mortar - have been carried out, and their physical, hydric and mechanical properties have been determined. The results obtained from both unaltered and altered materials, and the analysis of the rendering's raw materials, have allowed us to establish that the rendering deterioration is connected to the presence of saline compounds (gypsum, halite) which, present in the brickwork substratum, have been mobilized by the water saturation of that brickwork. The main cause of the alteration forms found on the renderings - efflorescence, crusts, granular disintegration, bulging, flaking - has been the precipitation of the salts (halite, hexahydrite, epsomite) on their way towards the external surface.

    In this study, the causes of the pathologies of the lime renderings on the façades of the Plaza de la Corredera are analysed. To this end, mineralogical and chemical analyses of the building materials - brick masonry and rendering mortar - were carried out, and their physical, hydric and mechanical properties were determined. By comparing the results obtained for the unaltered and the altered materials, and after analysing the raw materials used to make the rendering, it could be established that the alteration of the latter is related to the presence of saline compounds (gypsum, halite) which, present in the brick masonry substrate, have been exuded through its water saturation. The precipitation of the salts (halite, hexahydrite, epsomite) in their migration towards the exterior has been mainly responsible for the alteration forms - efflorescence, crusts, granular disintegration, bulging, flaking - that appear on the renderings.

  3. Modern Algorithms for Real-Time Terrain Visualization on Commodity Hardware

    Directory of Open Access Journals (Sweden)

    Radek Bartoň

    2011-05-01

    Full Text Available The amount of input data acquired from remote sensing equipment is rapidly growing. Interactive visualization of those datasets is a necessity for their correct interpretation. With the ability of modern hardware to display hundreds of millions of triangles per second, it is possible to visualize massive terrains at one-pixel display error on HD displays with interactive frame rates when batched rendering is applied. Algorithms able to do this are an area of intensive research and the topic of this article. The paper first explains some of the theory behind terrain visualization, categorizes its algorithms according to several criteria, and describes six of the most significant methods in more detail.

  4. Geometric optimization of thermoelectric coolers in a confined volume using genetic algorithms

    International Nuclear Information System (INIS)

    Cheng, Y.-H.; Lin, W.-K.

    2005-01-01

    The demand for thermoelectric coolers (TEC) has grown significantly because of the need for a steady, low-temperature operating environment for various electronic devices such as laser diodes, semiconductor equipment, infrared detectors and others. The cooling capacity and the coefficient of performance (COP) are both extremely important in considering applications. Optimizing the dimensions of the TEC legs provides the advantage of increasing the cooling capacity, while simultaneously respecting a minimum COP. This study proposed a method of optimizing the dimensions of the TEC legs using genetic algorithms (GAs), to maximize the cooling capacity. A confined volume in which the TEC can be placed and the technological limitations in manufacturing a TEC leg were considered, and three parameters - leg length, leg area and the number of legs - were taken as the variables to be optimized. The constraints of minimum COP and maximum cost of the material were set, and a genetic search was performed to determine the optimal dimensions of the TEC legs. This work reveals that optimizing the dimensions of the TEC can increase its cooling capacity. The results also show that GAs can determine the optimal dimensions for various input currents and various cold-side operating temperatures.
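A GA of the kind described fits in a few dozen lines: individuals are (leg length, leg area, number of legs) triples, the fitness is the cooling capacity, and designs violating the minimum-COP or confined-volume constraints are rejected. The material constants, bounds and constraint values below are illustrative, roughly Bi2Te3-like assumptions, not the paper's data:

```python
import random

# Illustrative, roughly Bi2Te3-like constants -- NOT the paper's data.
ALPHA, RHO, KAPPA = 2.0e-4, 1.0e-5, 1.5   # Seebeck (V/K), resistivity (ohm m), thermal cond. (W/m K)
TC, TH, I = 285.0, 300.0, 2.0             # cold/hot side temperatures (K), drive current (A)
V_MAX, COP_MIN = 1.0e-6, 0.7              # confined volume (m^3), minimum acceptable COP

def performance(L, A, N):
    """Cooling capacity (W) and COP of a TEC with N legs of length L (m) and area A (m^2)."""
    R, K = RHO * L / A, KAPPA * A / L
    qc = N * (ALPHA * I * TC - 0.5 * I * I * R - K * (TH - TC))
    power = N * (I * I * R + ALPHA * I * (TH - TC))
    return qc, qc / power

def fitness(ind):
    qc, cop = performance(*ind)
    L, A, N = ind
    if cop < COP_MIN or N * L * A > V_MAX:   # constraint handling: infeasible designs lose
        return -1.0
    return qc

def optimize(pop_size=40, gens=60, seed=0):
    rng = random.Random(seed)
    rand_ind = lambda: (rng.uniform(5e-4, 3e-3), rng.uniform(5e-7, 4e-6), rng.randint(10, 200))
    pop = [(1e-3, 1e-6, 50)] + [rand_ind() for _ in range(pop_size - 1)]  # seed one feasible design
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            L, A, N = max(rng.sample(pop, 2), key=fitness)   # binary tournament selection
            if rng.random() < 0.3:                           # bounded mutation
                L = min(3e-3, max(5e-4, L * rng.uniform(0.8, 1.2)))
                A = min(4e-6, max(5e-7, A * rng.uniform(0.8, 1.2)))
                N = min(200, max(10, N + rng.randint(-10, 10)))
            nxt.append((L, A, N))
        pop = nxt
        best = max(pop + [best], key=fitness)   # elitism: never lose the best design
    return best
```

Seeding the population with one known-feasible design plus elitism guarantees the search always returns a constraint-satisfying TEC geometry.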

  5. Modern algorithms for large sparse eigenvalue problems

    International Nuclear Information System (INIS)

    Meyer, A.

    1987-01-01

    The volume is written for mathematicians interested in (numerical) linear algebra and in the solution of large sparse eigenvalue problems, as well as for specialists in engineering, who use the considered algorithms in the investigation of eigenoscillations of structures, in reactor physics, etc. Some variants of the algorithms based on the idea of a gradient-type direction of movement are presented and their convergence properties are discussed. From this, a general strategy for the direct use of preconditionings for the eigenvalue problem is derived. In this new approach the necessity of the solution of large linear systems is entirely avoided. Hence, these methods represent a new alternative to some other modern eigenvalue algorithms, as they show a slightly slower convergence on the one hand but essentially lower numerical and data processing problems on the other hand. A brief description and comparison of some well-known methods (i.e. simultaneous iteration, Lanczos algorithm) completes this volume. (author)

  6. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    Science.gov (United States)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
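The LIC idea itself fits in a short function: white noise is convolved along streamlines of the field, so intensity becomes correlated along the flow and stays uncorrelated across it. A minimal nearest-neighbour sketch with unit Euler steps (not the authors' fast dye-recompute algorithm):

```python
import numpy as np

def lic(noise, vx, vy, L=10):
    """Minimal Line Integral Convolution: average white noise along the
    streamline through each pixel with a box kernel of half-length L.
    Nearest-neighbour sampling keeps the sketch short; the centre pixel is
    counted in both trace directions, which is fine for illustration."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):             # trace forward and backward
                px, py = float(x), float(y)
                for _ in range(L):
                    ix, iy = int(round(px)), int(round(py))
                    if not (0 <= ix < w and 0 <= iy < h):
                        break
                    total += noise[iy, ix]
                    count += 1
                    v = np.hypot(vx[iy, ix], vy[iy, ix])
                    if v == 0:
                        break
                    px += sign * vx[iy, ix] / v  # unit step along the field
                    py += sign * vy[iy, ix] / v
            out[y, x] = total / max(count, 1)
    return out
```

For a uniform horizontal field the output is a moving average along each row, so adjacent pixels along the flow direction are strongly correlated while rows remain independent, which is exactly the visual cue LIC exploits.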

  7. Insurance of professional responsibility at medical aid rendering

    Directory of Open Access Journals (Sweden)

    Abyzova N.V.

    2011-12-01

    Full Text Available The article discusses the necessity of adopting a professional responsibility insurance act in the public health service. Such insurance is considered the basic mechanism of compensation in cases of harm to a patient during the rendering of medical aid.

  8. Invariance algorithms for processing NDE signals

    Science.gov (United States)

    Mandayam, Shreekanth; Udpa, Lalita; Udpa, Satish S.; Lord, William

    1996-11-01

    Signals that are obtained in a variety of nondestructive evaluation (NDE) processes capture information not only about the characteristics of the flaw, but also reflect variations in the specimen's material properties. Such signal changes may be viewed as anomalies that could obscure defect related information. An example of this situation occurs during in-line inspection of gas transmission pipelines. The magnetic flux leakage (MFL) method is used to conduct noninvasive measurements of the integrity of the pipe-wall. The MFL signals contain information both about the permeability of the pipe-wall and the dimensions of the flaw. Similar operational effects can be found in other NDE processes. This paper presents algorithms to render NDE signals invariant to selected test parameters, while retaining defect related information. Wavelet transform based neural network techniques are employed to develop the invariance algorithms. The invariance transformation is shown to be a necessary pre-processing step for subsequent defect characterization and visualization schemes. Results demonstrating the successful application of the method are presented.

  9. An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.

    Science.gov (United States)

    Zhang, Ye; Yu, Tenglong; Wang, Wenwu

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.
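The two fixes the abstract describes (computing the sparse analysis representation directly from the observed data, and constraining the dictionary so the trivial null solution is excluded) can be illustrated with a toy alternating scheme. Here the orthogonality constraint is enforced by an orthogonal-Procrustes projection via the SVD; this is a simplified sketch of the idea, not the authors' exact optimization model:

```python
import numpy as np

def learn_analysis_dictionary(Y, n_atoms, k, iters=30, seed=0):
    """Toy analysis dictionary learning with an orthogonality constraint.
    Y: observed (noisy) signals as columns; k: nonzeros kept per analysed
    column. The dictionary Omega is kept row-orthonormal by projecting onto
    the orthonormal set with an SVD (orthogonal Procrustes), which also
    rules out the trivial all-zero dictionary."""
    rng = np.random.default_rng(seed)
    Q = np.linalg.qr(rng.standard_normal((Y.shape[0], n_atoms)))[0]
    Omega = Q.T                                      # orthonormal rows to start
    for _ in range(iters):
        Z = Omega @ Y                                # analysis coefficients from observed data
        thresh = np.sort(np.abs(Z), axis=0)[-k]      # k-th largest magnitude per column
        Z[np.abs(Z) < thresh] = 0.0                  # hard thresholding -> sparse representation
        U, _, Vt = np.linalg.svd(Z @ Y.T, full_matrices=False)
        Omega = U @ Vt                               # Procrustes: nearest row-orthonormal Omega
    return Omega
```

Because `U @ Vt` always has orthonormal rows, every iterate satisfies the constraint exactly, so no extra penalty term is needed to avoid the null dictionary.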

  10. An Analysis Dictionary Learning Algorithm under a Noisy Data Model with Orthogonality Constraint

    Directory of Open Access Journals (Sweden)

    Ye Zhang

    2014-01-01

    Full Text Available Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  11. Reflection curves—new computation and rendering techniques

    Directory of Open Access Journals (Sweden)

    Dan-Eugen Ulmet

    2004-05-01

    Full Text Available Reflection curves on surfaces are important tools for free-form surface interrogation. They are essential for industrial 3D CAD/CAM systems and for rendering purposes. In this note, new approaches regarding the computation and rendering of reflection curves on surfaces are introduced. These approaches are designed to take advantage of the graphics libraries of recent releases of commercial systems such as the OpenInventor toolkit (developed by Silicon Graphics) or Matlab (developed by The MathWorks). A new relation between reflection curves and contour curves is derived; this theoretical result is used for a straightforward Matlab implementation of reflection curves. A new type of reflection curves is also generated using the OpenInventor texture and environment mapping implementations. This allows the computation, rendering, and animation of reflection curves at interactive rates, which makes it particularly useful for industrial applications.

  12. Cement-Based Renders Manufactured with Phase-Change Materials: Applications and Feasibility

    Directory of Open Access Journals (Sweden)

    Luigi Coppola

    2016-01-01

    Full Text Available The paper focuses on the evaluation of the rheological and mechanical performance of cement-based renders manufactured with phase-change materials (PCM) in the form of microencapsulated paraffin for innovative and ecofriendly residential buildings. Specifically, cement-based renders were manufactured by incorporating different amounts of paraffin microcapsules, ranging from 5% to 20% by weight with respect to the binder. Specific mass, entrained or entrapped air, and setting time were evaluated on fresh mortars. Compressive strength was measured over time to evaluate the effect of the PCM addition on the hydration kinetics of cement. Drying shrinkage was also evaluated. Experimental results confirmed that the compressive strength decreases as the amount of PCM increases. Furthermore, the higher the PCM content, the higher the drying shrinkage. The results confirm the possibility of manufacturing cement-based renders containing up to 20% by weight of PCM microcapsules with respect to the binder.

  13. A Comparison of Amplitude-Based and Phase-Based Positron Emission Tomography Gating Algorithms for Segmentation of Internal Target Volumes of Tumors Subject to Respiratory Motion

    International Nuclear Information System (INIS)

    Jani, Shyam S.; Robinson, Clifford G.; Dahlbom, Magnus; White, Benjamin M.; Thomas, David H.; Gaudio, Sergio; Low, Daniel A.; Lamb, James M.

    2013-01-01

    Purpose: To quantitatively compare the accuracy of tumor volume segmentation in amplitude-based and phase-based respiratory gating algorithms in respiratory-correlated positron emission tomography (PET). Methods and Materials: List-mode fluorodeoxyglucose-PET data was acquired for 10 patients with a total of 12 fluorodeoxyglucose-avid tumors and 9 lymph nodes. Additionally, a phantom experiment was performed in which 4 plastic butyrate spheres with inner diameters ranging from 1 to 4 cm were imaged as they underwent 1-dimensional motion based on 2 measured patient breathing trajectories. PET list-mode data were gated into 8 bins using 2 amplitude-based (equal amplitude bins [A1] and equal counts per bin [A2]) and 2 temporal phase-based gating algorithms. Gated images were segmented using a commercially available gradient-based technique and a fixed 40% threshold of maximum uptake. Internal target volumes (ITVs) were generated by taking the union of all 8 contours per gated image. Segmented phantom ITVs were compared with their respective ground-truth ITVs, defined as the volume subtended by the tumor model positions covering 99% of breathing amplitude. Superior-inferior distances between sphere centroids in the end-inhale and end-exhale phases were also calculated. Results: Tumor ITVs from amplitude-based methods were significantly larger than those from temporal-based techniques (P=.002). For lymph nodes, A2 resulted in ITVs that were significantly larger than either of the temporal-based techniques (P<.0323). A1 produced the largest and most accurate ITVs for spheres with diameters of ≥2 cm (P=.002). No significant difference was shown between algorithms in the 1-cm sphere data set. For phantom spheres, amplitude-based methods recovered an average of 9.5% more motion displacement than temporal-based methods under regular breathing conditions and an average of 45.7% more in the presence of baseline drift (P<.001). Conclusions: Target volumes in images generated
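The binning strategies compared in the study can be sketched directly: amplitude gating partitions the amplitude range (A1: equal-width bins; A2: equal-count bins via quantile edges), while phase gating partitions the breathing cycle in time. The 8-bin setup mirrors the study; the breathing trace and the assumption of a known, uniform-sampling period are illustrative:

```python
import numpy as np

def gate(amplitude, n_bins=8, method="phase", period=None):
    """Assign each respiratory-amplitude sample to one of n_bins bins.
    'A1'    : equal-width amplitude bins over the observed range
    'A2'    : equal-count amplitude bins (quantile edges)
    'phase' : temporal phase bins, assuming a known breathing period in
              samples (samples taken as uniformly spaced in time)."""
    a = np.asarray(amplitude)
    if method == "A1":
        edges = np.linspace(a.min(), a.max(), n_bins + 1)
        return np.clip(np.digitize(a, edges[1:-1]), 0, n_bins - 1)
    if method == "A2":
        edges = np.quantile(a, np.linspace(0.0, 1.0, n_bins + 1))
        return np.clip(np.digitize(a, edges[1:-1]), 0, n_bins - 1)
    t = np.arange(a.size)
    phase = (t % period) / period
    return np.minimum((phase * n_bins).astype(int), n_bins - 1)
```

By construction A2 equalizes the counts per bin (and hence the image noise per gate), whereas A1 and phase gating can put very different numbers of events into each bin, which is one source of the segmentation differences the study quantifies.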

  14. Hardware-accelerated Point Generation and Rendering of Point-based Impostors

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas

    2005-01-01

    This paper presents a novel scheme for generating points from triangle models. The method is fast and lends itself well to implementation using graphics hardware. The triangle-to-point conversion is done by rendering the models, and the rendering may be performed procedurally or by a black box API. I describe the technique in detail and discuss how the generated point sets can easily be used as impostors for the original triangle models used to create the points. Since the points reside solely in GPU memory, these impostors are fairly efficient. Source code is available online.

  15. The Visualization Toolkit (VTK): Rewriting the rendering code for modern graphics cards

    Science.gov (United States)

    Hanwell, Marcus D.; Martin, Kenneth M.; Chaudhary, Aashish; Avila, Lisa S.

    2015-09-01

    The Visualization Toolkit (VTK) is an open source, permissively licensed, cross-platform toolkit for scientific data processing, visualization, and data analysis. It is over two decades old, originally developed for a very different graphics card architecture. Modern graphics cards feature fully programmable, highly parallelized architectures with large core counts. VTK's rendering code was rewritten to take advantage of modern graphics cards, maintaining most of the toolkit's programming interfaces. This offers the opportunity to compare the performance of old and new rendering code on the same systems/cards. Significant improvements in rendering speeds and memory footprints mean that scientific data can be visualized in greater detail than ever before. The widespread use of VTK means that these improvements will reap significant benefits.

  16. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must verify certain properties such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.

  17. Rendering Falling Leaves on Graphics Hardware

    OpenAIRE

    Marcos Balsa; Pere-Pau Vázquez

    2008-01-01

    There is a growing interest in simulating natural phenomena in computer graphics applications. Animating natural scenes in real time is one of the most challenging problems due to the inherent complexity of their structure, formed by millions of geometric entities, and the interactions that happen within. An example of a natural scenario needed for games or simulation programs is a forest. Forests are difficult to render because of the huge number of geometric entities and the large amount...

  18. VIDEO ANIMASI 3D PENGENALAN RUMAH ADAT DAN ALAT MUSIK KEPRI DENGAN MENGUNAKAN TEKNIK RENDER CEL-SHADING

    Directory of Open Access Journals (Sweden)

    Jianfranco Irfian Asnawi

    2016-11-01

    Full Text Available This animation, entitled "a 3D animated video of the traditional houses and musical instruments of the Riau Islands using the cel-shading rendering technique", is a video that aims to introduce the musical instruments originating from the Riau Islands. The animation is realized using the cel-shading rendering technique. Cel-shading is a rendering technique that displays 3D graphics so that they resemble hand-drawn images, such as comics and cartoons. The technique has also been applied in 3D games, where it has attracted considerable interest. Here it is applied to the 3D animation "a 3D animated video of the traditional houses and musical instruments of the Riau Islands using the cel-shading rendering technique". The animation was designed using a scenario and a storyboard and then implemented in the 3D MAYA AUTODESK software using the cel-shading rendering technique. Once applied, the cel-shading rendering technique was assessed for success against the global-illumination rendering technique, in terms of rendering speed and the brightness of the colours in the video. Keywords: animation, 3D games, cel-shading.

  19. Specification and time required for the application of a lime-based render inside historic buildings

    Directory of Open Access Journals (Sweden)

    Vasco Peixoto de Freitas

    2008-01-01

    Full Text Available Intervention in ancient buildings with historical and architectural value requires traditional techniques, such as the use of lime mortars for internal and external wall renderings. In order to ensure the desired performance, these rendering mortars must be rigorously specified and quality controls have to be performed during application. The choice of mortar composition should take account of factors such as compatibility with the substrate, mechanical requirements and water behaviour. The construction schedule, which used to be considered a second order variable, nowadays plays a decisive role in the selection of the rendering technique, given its effects upon costs. How should lime-based mortars be specified? How much time is required for the application and curing of a lime-based render? This paper reflects upon the feasibility of using traditional lime mortars in three-layer renders inside churches and monasteries under adverse hygrothermal conditions and when time is critical. A case study is presented in which internal lime mortar renderings were applied in a church in Northern Portugal, where the very high relative humidity meant that several months were necessary before the drying process was complete.

  20. ARE: Ada Rendering Engine

    Directory of Open Access Journals (Sweden)

    Stefano Penge

    2009-10-01

    Full Text Available It is by now common practice in web application development to use templates and powerful template engines to automate the generation of the content presented to the user. However, the power of such engines is sometimes obtained by mixing logic and interface, by introducing languages other than the page-description language, or even by inventing new dedicated languages. ARE (ADA Rendering Engine) is designed to manage the entire flow of dynamic HTML/XHTML content creation (selection of the correct template, CSS and JavaScript, and production of the output) while completely separating logic from interface. The templates used are pure HTML with no fragments in other languages, and can therefore be managed and viewed on their own. The generated HTML code is uniform and parameterized. ARE consists of two modules, CORE (Common Output Rendering Engine) and ALE (ADA Layout Engine). The first (CORE) is used for object-oriented generation of DOM elements and is intended to help the developer produce code that is valid with respect to the DTD in use; CORE automatically generates DOM elements according to the DTD set in the configuration. The second (ALE) is used as a template engine to select automatically, on the basis of several parameters (module, user profile, type of node, course, installation preferences), the appropriate HTML template, CSS and JavaScript files. ALE allows the use of default templates and recursive microtemplates to simplify the graphic designer's work. The two modules can in any case be used independently of each other. It is possible to generate and render an HTML page using CORE alone, or to send the CORE objects to the ALE template engine, which renders the HTML page. Conversely, it is possible to generate HTML without using CORE and send it to the ALE template engine. CORE is at its first release and is already in use at

  1. Parallel algorithms for numerical linear algebra

    CERN Document Server

    van der Vorst, H

    1990-01-01

    This is the first in a new series of books presenting research results and developments concerning the theory and applications of parallel computers, including vector, pipeline, array, fifth/future generation computers, and neural computers. All aspects of high-speed computing fall within the scope of the series, e.g. algorithm design, applications, software engineering, networking, taxonomy, models and architectural trends, performance, peripheral devices. Papers in Volume One cover the main streams of parallel linear algebra: systolic array algorithms, message-passing systems, algorithms for p

  2. LOD map--A visual interface for navigating multiresolution volume visualization.

    Science.gov (United States)

    Wang, Chaoli; Shen, Han-Wei

    2006-01-01

    In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for us to examine, compare, and validate different LOD selection algorithms. While traditional methods rely on the final rendered images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure of LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. A LOD map is generated by mapping key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
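The entropy-based quality measure can be stated in a few lines: treat each block's contribution-weighted distortion as a probability and take the Shannon entropy, which peaks when no block dominates. A sketch under that reading of the measure (the paper's exact distortion and contribution ingredients differ in detail):

```python
import numpy as np

def lod_quality(contribution, distortion):
    """Entropy-style LOD quality: each block's probability is its
    (contribution x distortion) share of the total, and the entropy of that
    distribution is highest when distortion is spread evenly over blocks
    in proportion to their contribution."""
    p = np.asarray(contribution, float) * np.asarray(distortion, float)
    p = p / p.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A selection where one block carries almost all of the weighted distortion scores far lower than a balanced one, which is the property the LOD map visualizes per treemap cell.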

  3. An image-space parallel convolution filtering algorithm based on shadow map

    Science.gov (United States)

    Li, Hua; Yang, Huamin; Zhao, Jianping

    2017-07-01

    Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method for generating soft shadows from planar area lights. The method first generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as the shadow boundaries. These areas are then encoded as binary values in a texture map called the binary light-visibility map, and a GPU-based parallel convolution filtering algorithm smooths out the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time, with more detail at shadow boundaries than previous works.
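Smoothing a binary light-visibility map with a box filter, as the abstract describes, turns the hard 0/1 shadow boundary into graded penumbra values. A small numpy sketch of the separable box filter with edge clamping (the GPU parallelization is elided; the array stands in for the visibility texture):

```python
import numpy as np

def box_filter(img, radius=1):
    """Separable box filter with edge clamping, as used to smooth a binary
    light-visibility map into soft shadow factors in [0, 1]."""
    k = 2 * radius + 1
    padded = np.pad(img.astype(float), radius, mode="edge")
    # Horizontal then vertical pass (separability of the box kernel).
    horiz = sum(padded[:, i:i + img.shape[1]] for i in range(k)) / k
    out = sum(horiz[i:i + img.shape[0], :] for i in range(k)) / k
    return out
```

Pixels far from the boundary keep their 0 or 1 visibility; only the band of width ~2*radius around the boundary receives fractional values, which is what reads as a soft penumbra.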

  4. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    Energy Technology Data Exchange (ETDEWEB)

    Wognum, S., E-mail: s.wognum@gmail.com; Heethuis, S. E.; Bel, A. [Department of Radiation Oncology, Academic Medical Center, Meibergdreef 9, 1105 AZ Amsterdam (Netherlands); Rosario, T. [Department of Radiation Oncology, VU University Medical Center, De Boelelaan 1117, 1081 HZ Amsterdam (Netherlands); Hoogeman, M. S. [Department of Radiation Oncology, Erasmus MC Cancer Institute, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2014-07-15

    Purpose: The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Methods: Five excised porcine bladders with a grid of 30–40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100–400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. Results: The authors found good structure
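The marker-based error calculation in this validation amounts to: map each reference marker through the deformation vector field and measure the residual Euclidean distance to its counterpart in the target image. A minimal sketch (the DVF here is a plain Python function standing in for the algorithms' voxel-grid output):

```python
import numpy as np

def marker_registration_error(markers_ref, markers_target, dvf):
    """Target registration error for fiducial markers: apply the deformation
    vector field (a callable mapping a reference coordinate to a displacement
    vector) to the reference marker positions and return the residual
    Euclidean distance to the corresponding target markers."""
    mapped = markers_ref + np.array([dvf(p) for p in markers_ref])
    return np.linalg.norm(mapped - markers_target, axis=1)
```

A perfect registration yields zero error at every marker, while the identity DVF recovers the raw marker displacement magnitudes, which bounds the worst case.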

  5. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers.

    Science.gov (United States)

    Wognum, S; Heethuis, S E; Rosario, T; Hoogeman, M S; Bel, A

    2014-07-01

    The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Five excised porcine bladders with a grid of 30-40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100-400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. The authors found good structure accuracy without dependency on

  6. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    International Nuclear Information System (INIS)

    Wognum, S.; Heethuis, S. E.; Bel, A.; Rosario, T.; Hoogeman, M. S.

    2014-01-01

    Purpose: The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Methods: Five excised porcine bladders with a grid of 30–40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100–400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. Results: The authors found good structure

  7. Tactile display for virtual 3D shape rendering

    CERN Document Server

    Mansutti, Alessandro; Bordegoni, Monica; Cugini, Umberto

    2017-01-01

    This book describes a novel system for the simultaneous visual and tactile rendering of product shapes which allows designers to simultaneously touch and see new product shapes during the conceptual phase of product development. This system offers important advantages, including potential cost and time savings, compared with the standard product design process in which digital 3D models and physical prototypes are often repeatedly modified until an optimal design is achieved. The system consists of a tactile display that is able to represent, within a real environment, the shape of a product. Designers can explore the rendered surface by touching curves lying on the product shape, selecting those curves that can be considered style features and evaluating their aesthetic quality. In order to physically represent these selected curves, a flexible surface is modeled by means of servo-actuated modules controlling a physical deforming strip. The tactile display is designed so as to be portable, low cost, modular,...

  8. Differentiating aneurysm from infundibular dilatation by volume rendering MRA. Techniques for improving depiction of the posterior communicating and anterior choroidal arteries

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Takaaki; Ito, Takeo; Hasunuma, Masahiro; Sakamoto, Yasuo; Kohama, Ikuhide; Yonemori, Terutake; Izumo, Masaki [Hakodate Shintoshi Hospital, Hokkaido (Japan)

    2002-12-01

    With the spread of brain dock procedures, non-invasive magnetic resonance angiography (MRA) is being utilized to broadly screen for brain blood vessel diseases. However, diagnosis of cerebral aneurysm can be difficult by routine MRA. In particular, differentiating aneurysms and infundibular dilatations (IDs) of the posterior communicating artery (PCoA) and anterior choroidal artery (AChA) at their bifurcations with the internal carotid artery (ICA) is extremely difficult, and additional studies are frequently necessary. In this situation, three-dimensional computed tomography angiography (3D-CTA) and cerebral angiography have been utilized, but both techniques are invasive. Furthermore, images from cerebral angiography are only two-dimensional, and 3D-CTA requires differentiation between aneurysm and ID by observing configurational changes at the apex of the protrusion and by following gradual changes to the threshold. We therefore undertook the following steps to improve both depiction of the PCoA and AChA and differential diagnosis between aneurysm and ID: reduced slice thickness and increased number of excitations; utilized volume rendering methods to construct images; lowered thresholds for the origins of the PCoA and AChA, which represent the regions of interest. In all 11 cases that we operated on, cerebral aneurysms were diagnosed correctly, and the minimum neck diameter of the cerebral aneurysms was 1.2 mm. In addition, the numbers of AChAs and PCoAs present in target MRA and in operative views were evaluated. In one case with an AChA aneurysm, a PCoA was not detected by target MRA, because the ICA deviated posterolaterally and pushed the PCoA to the posterior clinoid process, and blood flow was poor in operative views. In another 2 cases with AChA aneurysms, only one AChA was described in target MRA, whereas two aneurysms were present. However, one of these had a diameter less than 1 mm. In conclusion, this method offers an extremely useful aid

  9. Differentiating aneurysm from infundibular dilatation by volume rendering MRA. Techniques for improving depiction of the posterior communicating and anterior choroidal arteries

    International Nuclear Information System (INIS)

    Kato, Takaaki; Ito, Takeo; Hasunuma, Masahiro; Sakamoto, Yasuo; Kohama, Ikuhide; Yonemori, Terutake; Izumo, Masaki

    2002-01-01

    With the spread of brain dock procedures, non-invasive magnetic resonance angiography (MRA) is being utilized to broadly screen for brain blood vessel diseases. However, diagnosis of cerebral aneurysm can be difficult by routine MRA. In particular, differentiating aneurysms and infundibular dilatations (IDs) of the posterior communicating artery (PCoA) and anterior choroidal artery (AChA) at their bifurcations with the internal carotid artery (ICA) is extremely difficult, and additional studies are frequently necessary. In this situation, three-dimensional computed tomography angiography (3D-CTA) and cerebral angiography have been utilized, but both techniques are invasive. Furthermore, images from cerebral angiography are only two-dimensional, and 3D-CTA requires differentiation between aneurysm and ID by observing configurational changes at the apex of the protrusion and by following gradual changes to the threshold. We therefore undertook the following steps to improve both depiction of the PCoA and AChA and differential diagnosis between aneurysm and ID: reduced slice thickness and increased number of excitations; utilized volume rendering methods to construct images; lowered thresholds for the origins of the PCoA and AChA, which represent the regions of interest. In all 11 cases that we operated on, cerebral aneurysms were diagnosed correctly, and the minimum neck diameter of the cerebral aneurysms was 1.2 mm. In addition, the numbers of AChAs and PCoAs present in target MRA and in operative views were evaluated. In one case with an AChA aneurysm, a PCoA was not detected by target MRA, because the ICA deviated posterolaterally and pushed the PCoA to the posterior clinoid process, and blood flow was poor in operative views. In another 2 cases with AChA aneurysms, only one AChA was described in target MRA, whereas two aneurysms were present. However, one of these had a diameter less than 1 mm. In conclusion, this method offers an extremely useful aid

  10. Parametric model of the scala tympani for haptic-rendered cochlear implantation.

    Science.gov (United States)

    Todd, Catherine; Naghdy, Fazel

    2005-01-01

    A parametric model of the human scala tympani has been designed for use in a haptic-rendered computer simulation of cochlear implant surgery. It will be the first surgical simulator of this kind. A geometric model of the scala tympani has been derived from measured data for this purpose. The model is compared with two existing descriptions of the cochlear spiral. A first approximation of the basilar membrane is also produced. The structures are imported into a force-rendering software application for system development.

  11. Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants

    Science.gov (United States)

    Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo

    2017-10-01

    Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selecting the appropriate quantity and types of LEDs that compose the light source. The multiobjective approach of this algorithm seeks the best spectral simulation with minimum fitness error relative to the target spectrum, a correlated color temperature (CCT) equal to that of the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed for complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative analysis of the M-GEO evolutionary algorithm against the conventional deterministic Levenberg-Marquardt algorithm is also presented.
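The GEO move itself is simple to sketch: each candidate LED gets a rank from the fitness obtained by toggling it, and one toggle is chosen with probability proportional to rank^(-tau). The toy version below selects a subset of synthetic LED spectra to match a target spectrum; the spectra, tau value, and iteration count are illustrative and not from the paper, and the real method optimizes several objectives (CCT, CRI, flux) rather than this single spectral error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 12 candidate LED spectra over 40 wavelength bins;
# the target is a known mixture of three of them.
spectra = rng.random((12, 40))
target = spectra[1] + spectra[4] + spectra[7]

def fitness(bits):
    """Spectral error of the on/off LED mixture against the target."""
    return np.linalg.norm(bits @ spectra - target)

def geo_step(bits, tau=1.5):
    """One GEO move: rank each bit by the fitness obtained when it is
    flipped, then flip one bit chosen with rank**(-tau) probability."""
    n = len(bits)
    gains = np.array([fitness(np.logical_xor(bits, np.eye(n, dtype=bool)[i]))
                      for i in range(n)])
    ranks = np.empty_like(gains)
    ranks[np.argsort(gains)] = np.arange(1, n + 1)   # best flip gets rank 1
    probs = ranks ** -tau
    probs /= probs.sum()
    i = rng.choice(n, p=probs)
    bits = bits.copy()
    bits[i] = ~bits[i]
    return bits

bits = np.zeros(12, dtype=bool)
best, best_f = bits, fitness(bits)
for _ in range(300):
    bits = geo_step(bits)
    f = fitness(bits)
    if f < best_f:
        best, best_f = bits, f
print(best_f)  # decreases as bits matching the target mixture switch on
```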

  12. Radionuclide cisternography: SPECT and 3D-rendering

    International Nuclear Information System (INIS)

    Henkes, H.; Huber, G.; Piepgras, U.; Hierholzer, J.; Cordes, M.

    1991-01-01

    Radionuclide cisternography is indicated in the clinical work-up for hydrocephalus, when searching for CSF leaks, and when testing whether or not intracranial cystic lesions are communicating with the adjacent subarachnoid space. This paper demonstrates the feasibility and diagnostic value of SPECT and subsequent 3D surface rendering in addition to conventional rectilinear CSF imaging in eight patients. Planar images allowed the evaluation of CSF circulation and the detection of CSF fistula. They were advantageous in examinations 48 h after application of ¹¹¹In-DTPA. SPECT scans, generated 4-24 h after tracer application, were superior in the delineation of basal cisterns, especially in early scans; this was helpful in patients with pooling due to CSF fistula and in cystic lesions near the skull base. A major drawback was the limited image quality of delayed scans, when the SPECT data were degraded by a low count rate. 3D surface rendering was easily feasible from SPECT data and yielded high quality images. The presentation of the spatial distribution of nuclide-contaminated CSF proved especially helpful in the area of the basal cisterns. (orig.) [de]

  13. Factors affecting extension workers in their rendering of effective ...

    African Journals Online (AJOL)

    Factors affecting extension workers in their rendering of effective service to pre and ... Small, micro and medium entrepreneurs play an important role in economic ... such as production, marketing and management to adequately service the ...

  14. Robust volume calculations for Constructive Solid Geometry (CSG) components in Monte Carlo transport calculations

    Energy Technology Data Exchange (ETDEWEB)

    Millman, D. L. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States); Griesheimer, D. P.; Nease, B. R. [Bechtel Marine Propulsion Corporation, Bettis Atomic Power Laboratory (United States); Snoeyink, J. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States)

    2012-07-01

    In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)
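The recursive decomposition idea can be illustrated compactly. The sketch below subdivides a bounding box and classifies leaf cells by a center sample; a production implementation would instead use conservative inside/outside tests on the CSG tree and fall back to stochastic sampling where needed, as the paper describes. The sphere component and tolerance here are illustrative.

```python
import numpy as np

def inside_sphere(p, c=(0.5, 0.5, 0.5), r=0.4):
    """Example component: a sphere, standing in for a full CSG tree
    (union/intersection/difference of primitives)."""
    return np.linalg.norm(np.asarray(p) - c) <= r

def volume(inside, lo, hi, tol):
    """Recursively split the bounding box along its longest axis; leaf
    boxes smaller than the tolerance are classified by a center sample.
    (A robust method would bound the error per cell instead.)"""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    size = hi - lo
    if size.max() <= tol:
        return np.prod(size) if inside((lo + hi) / 2) else 0.0
    axis = int(np.argmax(size))
    mid = lo.copy()
    mid[axis] = (lo[axis] + hi[axis]) / 2   # lower corner of second half
    hi2 = hi.copy()
    hi2[axis] = mid[axis]                   # upper corner of first half
    return volume(inside, lo, hi2, tol) + volume(inside, mid, hi, tol)

v = volume(inside_sphere, (0, 0, 0), (1, 1, 1), tol=0.05)
print(v, 4 / 3 * np.pi * 0.4 ** 3)  # estimate vs exact ≈ 0.268
```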

  15. Robust volume calculations for Constructive Solid Geometry (CSG) components in Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Millman, D. L.; Griesheimer, D. P.; Nease, B. R.; Snoeyink, J.

    2012-01-01

    In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)

  16. Fast Physically Accurate Rendering of Multimodal Signatures of Distributed Fracture in Heterogeneous Materials.

    Science.gov (United States)

    Visell, Yon

    2015-04-01

    This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.

  17. Brain tumor locating in 3D MR volume using symmetry

    Science.gov (United States)

    Dvorak, Pavel; Bartusek, Karel

    2014-03-01

    This work deals with the automatic determination of a brain tumor location in 3D magnetic resonance volumes. The aim of this work is not the precise segmentation of the tumor and its parts but only the detection of its location. This work is the first step in the tumor segmentation process, an important topic in neuro-image processing. The algorithm expects 3D magnetic resonance volumes of a brain containing a tumor. The detection is based on locating the area that breaks the left-right symmetry of the brain. This is done by multi-resolution comparison of corresponding regions in the left and right hemispheres. The output of the computation is a probabilistic map of the tumor location. The created algorithm was tested on 80 volumes from publicly available BRATS databases containing 3D brain volumes afflicted by a brain tumor. These pathological structures had various sizes and shapes and were located in various parts of the brain. The locating performance of the algorithm was 85% for T1-weighted volumes, 91% for T1-weighted contrast enhanced volumes, 96% for FLAIR and T2-weighted volumes and 95% for their combinations.
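A bare-bones version of the symmetry check is easy to write down: reflect the volume across the mid-sagittal plane, take block-wise differences, and read the peak of the resulting map as the probable tumor location. The block size and toy volume below are illustrative, and the volume is assumed pre-aligned so the central plane of axis 0 is the symmetry plane. Note that the mirror block scores just as high, so a single peak cannot by itself distinguish the two hemispheres.

```python
import numpy as np

def asymmetry_map(vol, block=4):
    """Block-wise left-right asymmetry: compare each block of the volume
    with its mirror across the central plane of axis 0."""
    diff = np.abs(vol - vol[::-1])          # reflect across axis 0
    x, y, z = (s // block * block for s in vol.shape)
    d = diff[:x, :y, :z].reshape(x // block, block, y // block, block,
                                 z // block, block)
    return d.mean(axis=(1, 3, 5))           # per-block mean difference

# Toy volume: a symmetric background plus a bright blob in one hemisphere.
rng = np.random.default_rng(1)
vol = rng.random((32, 32, 32)) * 0.1
vol = vol + vol[::-1]                       # make the background symmetric
vol[4:8, 12:16, 12:16] += 5.0               # the "tumor"
amap = asymmetry_map(vol)
peak = np.unravel_index(np.argmax(amap), amap.shape)
print(peak)  # block containing the blob, (1, 3, 3), or its mirror (6, 3, 3)
```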

  18. Diagnostic accuracy of a volume-rendered computed tomography movie and other computed tomography-based imaging methods in assessment of renal vascular anatomy for laparoscopic donor nephrectomy.

    Science.gov (United States)

    Yamamoto, Shingo; Tanooka, Masao; Ando, Kumiko; Yamano, Toshiko; Ishikura, Reiichi; Nojima, Michio; Hirota, Shozo; Shima, Hiroki

    2009-12-01

    To evaluate the diagnostic accuracy of computed tomography (CT)-based imaging methods for assessing renal vascular anatomy, imaging studies, including standard axial CT, three-dimensional volume-rendered CT (3DVR-CT), and a 3DVR-CT movie, were performed on 30 patients who underwent laparoscopic donor nephrectomy (10 right side, 20 left side) for predicting the location of the renal arteries and renal, adrenal, gonadal, and lumbar veins. These findings were compared with videos obtained during the operation. Two of 37 renal arteries observed intraoperatively were missed by standard axial CT and 3DVR-CT, whereas all arteries were identified by the 3DVR-CT movie. Two of 36 renal veins were missed by standard axial CT and 3DVR-CT, whereas 1 was missed by the 3DVR-CT movie. In 20 left renal hilar anatomical structures, 20 adrenal, 20 gonadal, and 22 lumbar veins were observed during the operation. Preoperatively, the standard axial CT, 3DVR-CT, and 3DVR-CT movie detected 11, 19, and 20 adrenal veins; 13, 14, and 19 gonadal veins; and 6, 11, and 15 lumbar veins, respectively. Overall, of 135 renal vascular structures, the standard axial CT, 3DVR-CT, and 3DVR-CT movie accurately detected 99 (73.3%), 113 (83.7%), and 126 (93.3%) vessels, respectively, indicating that the 3DVR-CT movie has a significantly higher detection rate than the other CT-based imaging methods and is a useful tool for assessing renal vascular anatomy before laparoscopic donor nephrectomy.

  19. Ultrasound automated volume calculation in reproduction and in pregnancy.

    Science.gov (United States)

    Ata, Baris; Tulandi, Togas

    2011-06-01

    To review studies assessing the application of ultrasound automated volume calculation in reproductive medicine. We performed a literature search using the keywords "SonoAVC, sonography-based automated volume calculation, automated ultrasound, 3D ultrasound, antral follicle, follicle volume, follicle monitoring, follicle tracking, in vitro fertilization, controlled ovarian hyperstimulation, embryo volume, embryonic volume, gestational sac, and fetal volume" and conducted the search in PubMed, Medline, EMBASE, and the Cochrane Database of Systematic Reviews. Reference lists of identified reports were manually searched for other relevant publications. Automated volume measurements are in very good agreement with actual volumes of the assessed structures or with other validated measurement methods. The technique seems to provide reliable and highly reproducible results under a variety of conditions. Automated measurements take less time than manual measurements. Ultrasound automated volume calculation is a promising new technology which is already used in daily practice especially for assisted reproduction. Improvements to the technology will undoubtedly render it more effective and increase its use. Copyright © 2011 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  20. Irregular Morphing for Real-Time Rendering of Large Terrain

    Directory of Open Access Journals (Sweden)

    S. Kalem

    2016-06-01

    Full Text Available The following paper proposes an alternative approach to the real-time adaptive triangulation problem. A new region-based multi-resolution approach for terrain rendering is described which improves, on the fly, the distribution of the density of triangles inside a tile after selecting the appropriate level of detail by adaptive sampling. The proposed approach organizes the heightmap into a quadtree of tiles that are processed independently. This technique combines the benefits of both the triangulated irregular network approach and the region-based multi-resolution approach by improving the distribution of the density of triangles inside the tile. Our technique morphs the initial regular grid of the tile into a deformed grid in order to minimize the approximation error. The proposed technique strives to combine large tile sizes and real-time processing while guaranteeing an upper bound on the screen-space error. Thus, this approach adapts the terrain rendering process to local surface characteristics and enables on-the-fly handling of large amounts of terrain data. Morphing is based on multi-resolution wavelet analysis. The use of the D2WT multi-resolution analysis of the terrain heightmap speeds up processing and permits interactive terrain rendering. Tests and experiments demonstrate that the Haar B-spline wavelet, well known for its localization properties and compact support, is suitable for fast and accurate redistribution. Such a technique could be exploited in a client-server architecture to support interactive high-quality remote visualization of very large terrains.

  1. GPU Pro 4 advanced rendering techniques

    CERN Document Server

    Engel, Wolfgang

    2013-01-01

    GPU Pro4: Advanced Rendering Techniques presents ready-to-use ideas and procedures that can help solve many of your day-to-day graphics programming challenges. Focusing on interactive media and games, the book covers up-to-date methods producing real-time graphics. Section editors Wolfgang Engel, Christopher Oat, Carsten Dachsbacher, Michal Valient, Wessam Bahnassi, and Sebastien St-Laurent have once again assembled a high-quality collection of cutting-edge techniques for advanced graphics processing unit (GPU) programming. Divided into six sections, the book begins with discussions on the abi

  2. Validation of Thermal Lethality against Salmonella enterica in Poultry Offal during Rendering.

    Science.gov (United States)

    Jones-Ibarra, Amie-Marie; Acuff, Gary R; Alvarado, Christine Z; Taylor, T Matthew

    2017-09-01

    Recent outbreaks of human disease following contact with companion animal foods cross-contaminated with enteric pathogens, such as Salmonella enterica, have resulted in increased concern regarding the microbiological safety of animal foods. Additionally, the U.S. Food and Drug Administration Food Safety Modernization Act and its implementing rules have stipulated the implementation of current good manufacturing practices and food safety preventive controls for livestock and companion animal foods. Animal foods and feeds are sometimes formulated to include thermally rendered animal by-product meals. The objective of this research was to determine the thermal inactivation of S. enterica in poultry offal during rendering at differing temperatures. Raw poultry offal was obtained from a commercial renderer and inoculated with a mixture of Salmonella serovars Senftenberg, Enteritidis, and Gallinarum (an avian pathogen) prior to being subjected to heating at 150, 155, or 160°F (65.5, 68.3, or 71.1°C) for up to 15 min. Following heat application, surviving Salmonella bacteria were enumerated. Mean D-values for the Salmonella cocktail at 150, 155, and 160°F were 0.254 ± 0.045, 0.172 ± 0.012, and 0.086 ± 0.004 min, respectively, indicative of increasing susceptibility to increased application of heat during processing. The mean thermal process constant (z-value) was 21.948 ± 3.87°F. Results indicate that a 7.0-log-cycle inactivation of Salmonella may be obtained from the cumulative lethality encountered during the heating come-up period and subsequent rendering of raw poultry offal at temperatures not less than 150°F. Current poultry rendering procedures are anticipated to be effective for achieving necessary pathogen control when completed under sanitary conditions.
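The reported D- and z-values plug directly into the standard first-order thermal death model: a D-value is the time for a one-log reduction at constant temperature, and the z-value converts D-values between temperatures via D(T) = D_ref · 10^((T_ref − T)/z). A quick check with the study's numbers:

```python
def d_value_at(d_ref, t_ref, t, z):
    """Predict the decimal-reduction time at temperature t from a
    reference D-value via the z-value model."""
    return d_ref * 10 ** ((t_ref - t) / z)

def log_reduction_time(d, logs=7.0):
    """Process time for a given log-cycle reduction at constant temperature."""
    return logs * d

# Values reported in the study (°F, minutes).
d150, z = 0.254, 21.948
print(log_reduction_time(d150))       # ≈ 1.78 min for 7 logs at 150°F
print(d_value_at(d150, 150, 160, z))  # ≈ 0.089 min, near the measured 0.086
```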

  3. Multidetector-row computed tomography in the preoperative diagnosis of intestinal complications caused by clinically unsuspected ingested dietary foreign bodies: a case series emphasizing the use of volume rendering techniques

    Energy Technology Data Exchange (ETDEWEB)

    Teixeira, Augusto Cesar Vieira; Torres, Ulysses dos Santos; Oliveira, Eduardo Portela de; Gual, Fabiana; Bauab Junior, Tufik, E-mail: usantor@yahoo.com.br [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Hospital de Base. Serv. de Radiologia e Diagnostico por Imagem; Westin, Carlos Eduardo Garcia [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Hospital de Base. Cirurgia Geral; Cardoso, Luciana Vargas [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Hospital de Base. Setor de Tomografia Computadorizada

    2013-11-15

    Objective: the present study was aimed at describing a case series where a preoperative diagnosis of intestinal complications secondary to accidentally ingested dietary foreign bodies was made by multidetector-row computed tomography (MDCT), with emphasis on complementary findings yielded by volume rendering techniques (VRT) and curved multiplanar reconstructions (MPR). Materials and Methods: The authors retrospectively assessed five patients with surgically confirmed intestinal complications (perforation and/or obstruction) secondary to unsuspected ingested dietary foreign bodies, consecutively assisted in their institution between 2010 and 2012. Demographic, clinical, laboratory and radiological data were analyzed. VRT and curved MPR were subsequently performed. Results: preoperative diagnosis of intestinal complications was originally performed in all cases. In one case the presence of a foreign body was not initially identified as the causal factor, and the use of complementary techniques facilitated its retrospective identification. In all cases these tools allowed a better depiction of the entire foreign bodies on a single image section, contributing to the assessment of their morphology. Conclusion: although the use of complementary techniques has not had a direct impact on diagnostic performance in most cases of this series, they may provide a better depiction of foreign bodies' morphology on a single image section. (author)

  4. Real-time interpolation for true 3-dimensional ultrasound image volumes.

    Science.gov (United States)

    Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D

    2011-02-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm³ voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
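Trilinear interpolation itself is a short computation: three successive linear blends along x, y, and z using the eight surrounding voxels. A minimal sketch (voxel-coordinate convention assumed, no boundary handling):

```python
import numpy as np

def trilinear(vol, p):
    """Trilinear interpolation of a 3D volume at continuous point p
    (in voxel coordinates)."""
    x, y, z = p
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    dx, dy, dz = x - x0, y - y0, z - z0
    c = vol[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2].astype(float)  # 2x2x2 corners
    c = c[0] * (1 - dx) + c[1] * dx    # collapse x
    c = c[0] * (1 - dy) + c[1] * dy    # collapse y
    return c[0] * (1 - dz) + c[1] * dz # collapse z

# A linear field is reproduced exactly by trilinear interpolation.
xs, ys, zs = np.mgrid[0:4, 0:4, 0:4]
vol = 2 * xs + 3 * ys + 5 * zs
print(trilinear(vol, (1.25, 2.5, 0.75)))  # → 2*1.25 + 3*2.5 + 5*0.75 = 13.75
```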

  5. A comparison of several cluster algorithms on artificial binary data [Part 2]. Scenarios from travel market segmentation. Part 2 (Addition to Working Paper No. 7).

    OpenAIRE

    Dolnicar, Sara; Leisch, Friedrich; Steiner, Gottfried; Weingessel, Andreas

    1998-01-01

    The search for clusters in empirical data is an important and often encountered research problem. Numerous algorithms exist that are able to render groups of objects or individuals. Of course each algorithm has its strengths and weaknesses. In order to identify these crucial points artificial data was generated - based primarily on experience with structures of empirical data - and used as benchmark for evaluating the results of numerous cluster algorithms. This work is an addition to SFB Wor...

  6. Sophisticated visualization algorithms for analysis of multidimensional experimental nuclear spectra

    International Nuclear Information System (INIS)

    Morhac, M.; Kliman, J.; Matousek, V.; Turzo, I.

    2004-01-01

    This paper describes graphical models for the visualization of 2-, 3-, and 4-dimensional scalar data used in the nuclear data acquisition, processing, and visualization system developed at the Institute of Physics, Slovak Academy of Sciences. It focuses on the presentation of nuclear spectra (histograms), but it can be successfully applied to the visualization of arrays of other data types. In the paper we present conventional as well as newly developed surface and volume rendering visualization techniques. (Authors)

  7. Streaming Model Based Volume Ray Casting Implementation for Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Jusub Kim

    2009-01-01

    Full Text Available Interactive high-quality volume rendering is becoming increasingly important as the amount of complex volumetric data steadily grows. While a number of volume rendering techniques have been widely used, ray casting has been recognized as an effective approach for generating high-quality visualizations. However, for most users, the use of ray casting has been limited to very small datasets because of its high demands on computational power and memory bandwidth. The recent introduction of the Cell Broadband Engine (Cell B.E.) processor, which consists of 9 heterogeneous cores designed to handle extremely demanding computations with large streams of data, provides an opportunity to put ray casting into practical use. In this paper, we introduce an efficient parallel implementation of volume ray casting on the Cell B.E. The implementation is designed to take full advantage of the computational power and memory bandwidth of the Cell B.E. through an intricate orchestration of the ray casting computation on the available heterogeneous resources. Specifically, we introduce streaming-model-based schemes and techniques to efficiently implement acceleration techniques for ray casting on the Cell B.E. In addition to ensuring effective SIMD utilization, our method provides two key benefits: there is no cost for empty-space skipping, and there is no memory bottleneck when moving volumetric data for processing. Our experimental results show that we can interactively render practical datasets on a single Cell B.E. processor.
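The core compositing loop that such an implementation accelerates can be sketched in scalar form. The version below samples front to back, skips work for empty samples, and terminates early once opacity saturates; the real Cell B.E. implementation streams blocks of the volume through SIMD cores, and the emission/opacity transfer function here is a crude placeholder.

```python
import numpy as np

def cast_ray(vol, origin, direction, step=0.5, opacity_scale=0.1):
    """Front-to-back compositing along one ray with per-sample empty
    skipping and early ray termination once opacity saturates."""
    color, alpha = 0.0, 0.0
    pos = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    while alpha < 0.99:                       # early ray termination
        idx = np.floor(pos).astype(int)
        if np.any(idx < 0) or np.any(idx >= vol.shape):
            break                             # left the volume
        s = vol[tuple(idx)]
        if s > 0:                             # skip empty samples
            a = min(1.0, s * opacity_scale)   # placeholder transfer function
            color += (1 - alpha) * a * s      # emission ∝ sample value
            alpha += (1 - alpha) * a
        pos += step * d
    return color, alpha

vol = np.zeros((16, 16, 16))
vol[8:12, 8:12, 8:12] = 1.0                   # a dense cube in empty space
c, a = cast_ray(vol, (0.0, 9.0, 9.0), (1.0, 0.0, 0.0))
print(round(a, 3))  # 8 unit-density samples: 1 - 0.9**8 ≈ 0.57
```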

  8. Graphics Gems III IBM version

    CERN Document Server

    Kirk, David

    1994-01-01

    This sequel to Graphics Gems (Academic Press, 1990), and Graphics Gems II (Academic Press, 1991) is a practical collection of computer graphics programming tools and techniques. Graphics Gems III contains a larger percentage of gems related to modeling and rendering, particularly lighting and shading. This new edition also covers image processing, numerical and programming techniques, modeling and transformations, 2D and 3D geometry and algorithms,ray tracing and radiosity, rendering, and more clever new tools and tricks for graphics programming. Volume III also includes a

  9. Target Centroid Position Estimation of Phase-Path Volume Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Fengjun Hu

    2016-01-01

    Full Text Available For the problem of easily losing the target when obstacles appear during intelligent robot target tracking, this paper proposes a target tracking algorithm that integrates a reduced-dimension optimal Kalman filtering algorithm based on the phase-path volume integral with the Camshift algorithm. After analyzing the defects of the Camshift algorithm and comparing its performance with the SIFT and Mean Shift algorithms, Kalman filtering is used for fusion optimization to address those defects. Then, to counter the increased amount of calculation in the integrated algorithm, the dimension is reduced by substituting the phase-path volume integral for the Gaussian integral in the Kalman algorithm, which reduces the number of sampling points in the filtering process without affecting the operational precision of the original algorithm. Finally, the target centroid position from the Camshift iteration is used as the observation value of the improved Kalman filtering algorithm to correct the predicted value, yielding an optimal estimate of the target centroid position and keeping track of the target so that the robot can understand the environmental scene and react correctly and in time to changes. Experiments show that the improved algorithm proposed in this paper performs well in target tracking with obstructions and reduces the computational complexity of the algorithm through dimension reduction.
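The Kalman side of the fusion is standard: predict with a constant-velocity motion model, then correct with the measured centroid (in the paper, the Camshift output). A minimal sketch with assumed noise covariances and a simulated measurement stream standing in for Camshift:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)  # constant-velocity model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # observe position only
Q, R = np.eye(4) * 1e-3, np.eye(2) * 0.5           # assumed noise levels

def kalman_step(x, P, z):
    """One predict/update cycle; z is the measured target centroid."""
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)                        # correct with measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a target moving at (1, 2) px/frame from noisy centroid readings.
rng = np.random.default_rng(2)
x, P = np.zeros(4), np.eye(4) * 10.0
truth = np.array([0.0, 0.0])
for _ in range(50):
    truth = truth + [1.0, 2.0]
    z = truth + rng.normal(0, 0.5, 2)
    x, P = kalman_step(x, P, z)
print(np.round(x[2:], 1))  # estimated velocity, close to [1. 2.]
```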

  10. Fast and robust ray casting algorithms for virtual X-ray imaging

    International Nuclear Information System (INIS)

    Freud, N.; Duvauchelle, P.; Letang, J.M.; Babot, D.

    2006-01-01

    Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on speed and robustness issues. Computation times are given in a typical case of radiography simulation.
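As an illustration of the kind of path-length computation this abstract centres on (a sketch, not the authors' implementation), the classic slab method gives the length a ray travels inside an axis-aligned box:

```python
def ray_box_path_length(origin, direction, box_min, box_max):
    """Slab-method intersection: length of the forward ray segment inside an
    axis-aligned box, 0.0 if the ray misses it.  `direction` must be a unit
    vector so that parameter differences are geometric lengths."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:      # ray parallel to and outside this slab
                return 0.0
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:            # slab intervals do not overlap: miss
            return 0.0
    return t_far - t_near
```

In a full simulator this would run per ray, per object, and the resulting path lengths feed the attenuation integral.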

  11. Integration by cell algorithm for Slater integrals in a spline basis

    International Nuclear Information System (INIS)

    Qiu, Y.; Fischer, C.F.

    1999-01-01

    An algorithm for evaluating Slater integrals in a B-spline basis is introduced. Based on the piecewise property of the B-splines, the algorithm divides the two-dimensional (r1, r2) region into a number of rectangular cells according to the chosen grid and implements the two-dimensional integration over each individual cell using Gaussian quadrature. Over the off-diagonal cells, the integrands are separable, so that each two-dimensional cell integral reduces to a product of two one-dimensional integrals. Furthermore, the scaling invariance of the B-splines in the logarithmic region of the chosen grid is fully exploited, such that only some of the cell integrations need to be carried out. The values of given Slater integrals are obtained by assembling the cell integrals. This algorithm significantly improves the efficiency and accuracy of the traditional method, which relies on the solution of differential equations, and renders the B-spline method more effective when applied to multi-electron atomic systems.
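The off-diagonal factorization described above can be sketched as follows; `gauss_1d`, `off_diagonal_cell` and the example integrand are illustrative names, not the authors' implementation:

```python
import numpy as np

def gauss_1d(f, a, b, n=8):
    """n-point Gauss-Legendre quadrature of f on [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (a + b)   # map nodes from [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

def off_diagonal_cell(f1, f2, cell1, cell2, n=8):
    """Over an off-diagonal cell the integrand f1(r1)*f2(r2) is separable,
    so the 2-D cell integral is a product of two 1-D quadratures."""
    return gauss_1d(f1, *cell1, n) * gauss_1d(f2, *cell2, n)
```

For example, integrating r1*r2 over the cell [0,1] x [2,3] factorizes into (1/2) * (5/2) = 1.25; only the diagonal cells, where the integrand involves min/max of r1 and r2, need a genuinely two-dimensional treatment.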

  12. Fast and robust ray casting algorithms for virtual X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)]. E-mail: Nicolas.Freud@insa-lyon.fr; Duvauchelle, P. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Letang, J.M. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Babot, D. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)

    2006-07-15

    Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on speed and robustness issues. Computation times are given in a typical case of radiography simulation.

  13. 7 CFR 54.1016 - Advance information concerning service rendered.

    Science.gov (United States)

    2010-01-01

    ... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND THE EGG PRODUCTS INSPECTION ACT... rendered. Upon request of any applicant, all or any part of the contents of any report issued to the...

  14. Fast algorithms for transport models. Final report

    International Nuclear Information System (INIS)

    Manteuffel, T.A.

    1994-01-01

    This project has developed a multigrid-in-space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space, which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid-in-space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state of the art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel-in-angle algorithm was developed, along with a parallel version. At first glance, the shifted transport sweep has limited parallelism. Once the right-hand side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial-value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel-in-angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).
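The cyclic-reduction idea, turning a seemingly sequential sweep into a log-depth computation, can be illustrated on the simplest possible sweep, a first-order recurrence x[i] = a[i]*x[i-1] + b[i]. The sketch below composes the affine maps with a Hillis-Steele-style scan; it shows the principle in serial Python and is not the report's code:

```python
def parallel_recurrence(a, b, x0):
    """Solve x[i] = a[i]*x[i-1] + b[i] by an inclusive scan over the
    composition of affine maps x -> a*x + b (an associative operation).
    Serially this loop does O(M log M) work, but each doubling pass is
    one fully parallel step, giving O(log M) depth on a parallel machine."""
    comp = list(zip(a, b))            # comp[i] will become maps[0..i] composed
    n, step = len(comp), 1
    while step < n:
        new = comp[:]
        for i in range(step, n):
            a2, b2 = comp[i]          # right map (applied second)
            a1, b1 = comp[i - step]   # left map (applied first)
            new[i] = (a2 * a1, a2 * b1 + b2)
        comp = new
        step *= 2
    return [ai * x0 + bi for ai, bi in comp]
```

Applying each composed map to the initial value x0 recovers the whole sweep at once, which is the sense in which the sweep becomes parallel.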

  15. The theory of hybrid stochastic algorithms

    International Nuclear Information System (INIS)

    Kennedy, A.D.

    1989-01-01

    These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs
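As a minimal illustration of the Metropolis algorithm on which the hybrid schemes build (a sketch sampling a one-dimensional distribution, not the lecture material itself):

```python
import math
import random

def metropolis(logp, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + step*u with u ~ U(-1, 1),
    accept with probability min(1, p(x')/p(x)); on rejection the current
    state is repeated, which keeps the target distribution invariant."""
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + step * rng.uniform(-1.0, 1.0)
        lpp = logp(xp)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        samples.append(x)
    return samples
```

Because the uniform proposal is symmetric, detailed balance holds with the simple Metropolis acceptance ratio; run against a standard normal, the chain's sample mean and variance approach 0 and 1.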

  16. Simulation of the radiography formation process from CT patient volume

    International Nuclear Information System (INIS)

    Bifulco, P.; Cesarelli, M.; Verso, E.; Roccasalva Firenze, M.; Sansone, M.; Bracale, M.

    1998-01-01

    The aim of this work is to develop an algorithm to simulate the radiographic image formation process using volumetric anatomical data of the patient, obtained from 3D diagnostic CT images. Many applications, including radiographically guided surgery, virtual reality in medicine, and radiology teaching and training, may take advantage of such a technique. The algorithm has been designed to simulate generic radiographic equipment at an arbitrary orientation with respect to the patient. The simulated radiograph is obtained by considering a discrete number of X-ray paths departing from the focus, passing through the patient volume and reaching the radiographic plane. To evaluate a generic pixel of the simulated radiograph, the cumulative absorption along the corresponding X-ray is computed. To estimate X-ray absorption at a generic point of the patient volume, 3D interpolation of the CT data is adopted. The proposed technique is quite similar to those employed in ray tracing. A computer-designed test volume has been used to assess the reliability of the radiography simulation algorithm as a measuring tool. The error analysis shows that the accuracy achieved by the radiographic simulation algorithm is largely confined within the sampling step of the CT volume. (authors)
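The cumulative-absorption step described above can be sketched as follows, assuming a regular voxel grid of attenuation coefficients, unit-length ray directions, and fixed-step sampling with trilinear interpolation; this is an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def trilinear(vol, p):
    """Trilinear interpolation of volume vol at continuous index p = (x, y, z),
    clamped to the grid interior."""
    p = np.asarray(p, float)
    i = np.clip(np.floor(p).astype(int), 0, np.array(vol.shape) - 2)
    f = p - i
    acc = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (f[0] if dx else 1 - f[0]) * \
                    (f[1] if dy else 1 - f[1]) * \
                    (f[2] if dz else 1 - f[2])
                acc += w * vol[i[0] + dx, i[1] + dy, i[2] + dz]
    return acc

def pixel_value(mu, start, direction, n_steps, step):
    """One simulated-radiograph pixel: Beer-Lambert attenuation
    I/I0 = exp(-sum mu ds) along the X-ray from the focus."""
    p = np.asarray(start, float)
    d = np.asarray(direction, float)
    path = sum(trilinear(mu, p + k * step * d) for k in range(n_steps)) * step
    return float(np.exp(-path))
```

Sampling every pixel of the detector plane this way yields the simulated radiograph; the step size plays the role of the CT sampling step that bounds the accuracy in the abstract.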

  17. Direct Numerical Simulation of Acoustic Waves Interacting with a Shock Wave in a Quasi-1D Convergent-Divergent Nozzle Using an Unstructured Finite Volume Algorithm

    Science.gov (United States)

    Bui, Trong T.; Mankbadi, Reda R.

    1995-01-01

    Numerical simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle is performed using an unstructured finite volume algorithm with piecewise-linear least-squares reconstruction, Roe flux difference splitting, and second-order MacCormack time marching. First, the spatial accuracy of the algorithm is evaluated for steady flows with and without the normal shock by running the simulation with a sequence of successively finer meshes. Then the accuracy of the Roe flux difference splitting near the sonic transition point is examined for different reconstruction schemes. Finally, the unsteady numerical solutions with the acoustic perturbation are presented and compared with linear theory results.

  18. A Single Swede Midge (Diptera: Cecidomyiidae) Larva Can Render Cauliflower Unmarketable.

    Science.gov (United States)

    Stratton, Chase A; Hodgdon, Elisabeth A; Zuckerman, Samuel G; Shelton, Anthony M; Chen, Yolanda H

    2018-05-01

    Swede midge, Contarinia nasturtii Kieffer (Diptera: Cecidomyiidae), is an invasive pest causing significant damage to Brassica crops in the Northeastern United States and Eastern Canada. Heading brassicas, like cauliflower, appear to be particularly susceptible. Swede midge is difficult to control because larvae feed concealed inside meristematic tissues of the plant. In order to develop the damage and marketability thresholds necessary for integrated pest management, it is important to determine how many larvae render plants unmarketable and whether the timing of infestation affects the severity of damage. We manipulated larval density (0, 1, 3, 5, 10, or 20 per plant) and the timing of infestation (30, 55, and 80 d after seeding) on cauliflower in the lab and field to answer the following questions: 1) What is the swede midge damage threshold? 2) How many swede midge larvae can render cauliflower crowns unmarketable? and 3) Does the age of cauliflower at infestation influence the severity of damage and marketability? We found that even a single larva can cause mild twisting and scarring in the crown, rendering cauliflower unmarketable 52% of the time, with more larvae causing more severe damage and additional losses, regardless of cauliflower age at infestation.

  19. Technical Note: A 3-D rendering algorithm for electromechanical wave imaging of a beating heart.

    Science.gov (United States)

    Nauleau, Pierre; Melki, Lea; Wan, Elaine; Konofagou, Elisa

    2017-09-01

    Arrhythmias can be treated by ablating the heart tissue in the regions of abnormal contraction. The current clinical standard provides electroanatomic 3-D maps to visualize the electrical activation and locate the arrhythmogenic sources. However, the procedure is time-consuming and invasive. Electromechanical wave imaging is an ultrasound-based noninvasive technique that can provide 2-D maps of the electromechanical activation of the heart. In order to fully visualize the complex 3-D pattern of activation, several 2-D views are acquired and processed separately. They are then manually registered with a 3-D rendering software to generate a pseudo-3-D map. However, this last step is operator-dependent and time-consuming. This paper presents a method to generate a full 3-D map of the electromechanical activation using multiple 2-D images. Two canine models were considered to illustrate the method: one in normal sinus rhythm and one paced from the lateral region of the heart. Four standard echographic views of each canine heart were acquired. Electromechanical wave imaging was applied to generate four 2-D activation maps of the left ventricle. The radial positions and activation timings of the walls were automatically extracted from those maps. In each slice, from apex to base, these values were interpolated around the circumference to generate a full 3-D map. In both cases, a 3-D activation map and a cine-loop of the propagation of the electromechanical wave were automatically generated. The 3-D map showing the electromechanical activation timings overlaid on realistic anatomy assists with the visualization of the sources of earlier activation (which are potential arrhythmogenic sources). The earliest sources of activation corresponded to the expected ones: septum for the normal rhythm and lateral for the pacing case. The proposed technique provides, automatically, a 3-D electromechanical activation map with a realistic anatomy. This represents a step towards a
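The per-slice circumferential interpolation described above can be sketched as follows, with periodic wraparound so the ring closes; the sample angles and timings are assumed for illustration, and this is not the authors' code:

```python
import numpy as np

def interpolate_ring(angles, timings, n_out=360):
    """Periodic linear interpolation of activation timings around one
    short-axis ring, given sparse (angle, timing) samples taken from the
    2-D electromechanical activation maps."""
    order = np.argsort(angles)
    a = np.asarray(angles, float)[order]
    t = np.asarray(timings, float)[order]
    # append the first sample one period later so np.interp closes the ring
    a_ext = np.concatenate([a, [a[0] + 2 * np.pi]])
    t_ext = np.concatenate([t, [t[0]]])
    out_angles = np.linspace(0.0, 2 * np.pi, n_out, endpoint=False)
    # shift query angles that fall before the first sample up by one period
    q = np.where(out_angles < a[0], out_angles + 2 * np.pi, out_angles)
    return out_angles, np.interp(q, a_ext, t_ext)
```

Stacking one interpolated ring per slice, from apex to base, gives the dense 3-D activation map that is then overlaid on the anatomy.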

  20. PET functional volume delineation: a robustness and repeatability study

    International Nuclear Information System (INIS)

    Hatt, Mathieu; Cheze-le Rest, Catherine; Albarghach, Nidal; Pradier, Olivier; Visvikis, Dimitris

    2011-01-01

    Current state-of-the-art algorithms for functional uptake volume segmentation in PET imaging consist of threshold-based approaches, whose parameters often require specific optimization for a given scanner and its associated reconstruction algorithms. Advanced image segmentation approaches previously proposed and extensively validated, such as, among others, fuzzy C-means (FCM) clustering or the fuzzy locally adaptive Bayesian (FLAB) algorithm, have the potential to improve the robustness of functional uptake volume measurements. The objective of this study was to investigate their robustness and repeatability with respect to various scanner models, reconstruction algorithms and acquisition conditions. Robustness was evaluated using a series of IEC phantom acquisitions carried out on different PET/CT scanners (Philips Gemini and Gemini Time-of-Flight, Siemens Biograph and GE Discovery LS) with their associated reconstruction algorithms (RAMLA, TF MLEM, OSEM). A range of acquisition parameters (contrast, duration) and reconstruction parameters (voxel size) was considered for each scanner model, and the repeatability of each method was evaluated on simulated and clinical tumours and compared to manual delineation. For all the scanner models, acquisition parameters and reconstruction algorithms considered, the FLAB algorithm demonstrated higher robustness in delineation of the spheres, with low mean errors (10%) and variability (5%), with respect to threshold-based methodologies and FCM. The repeatability provided by all segmentation algorithms considered was very high, with a negligible variability of <5% in comparison to that associated with manual delineation (5-35%). The use of advanced image segmentation algorithms may not only allow high accuracy as previously demonstrated, but also provide a robust and repeatable tool to aid physicians as an initial guess in determining functional volumes in PET. (orig.)

  1. Mastering Mental Ray Rendering Techniques for 3D and CAD Professionals

    CERN Document Server

    O'Connor, Jennifer

    2010-01-01

    Proven techniques for using mental ray effectively. If you're a busy artist seeking high-end results for your 3D, design, or architecture renders using mental ray, this is the perfect book for you. It distills the highly technical nature of rendering into easy-to-follow steps and tutorials that you can apply immediately to your own projects. The book uses 3ds Max and 3ds Max Design to show the integration with mental ray, but users of any 3D or CAD software can learn valuable techniques for incorporating mental ray into their pipelines.: Takes you under the hood of mental ray, a stand-alone or

  2. Water driven leaching of biocides from paints and renders

    DEFF Research Database (Denmark)

    Bester, Kai; Vollertsen, Jes; Bollmann, Ulla E

    …were so high that professional urban gardening (flowers and greenhouses), rather than the handling of biocides from construction materials, seems able to explain the findings. While the use in agriculture is restricted, the use in greenhouses is currently considered legal in Denmark. Leaching/partitioning: considering material properties, it was found that, for all of the compounds, the sorption (and leaching) is highly pH-dependent. It must be taken into account that the pH in the porewater of the tested render materials is between 9 and 10 while that of rainwater is around 5, which makes prediction difficult at this stage. For some of the compounds the sorption depends on the amount of polymer in the render, while it is only rarely of importance which polymer is used. Considering the interaction of weather with the leaching of biocides from real walls, it turned out that many parameters, such as irradiation…

  3. Optimisation of Hidden Markov Model using Baum–Welch algorithm

    Indian Academy of Sciences (India)

    Journal of Earth System Science, Volume 126, Issue 1, February 2017. Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya. J C Joshi, Tankeshwar Kumar, Sunita Srivastava, Divya Sachdeva.

  4. Development of a computer simulation system of intraoral radiography using perspective volume rendering of CT data

    International Nuclear Information System (INIS)

    Okamura, Kazutoshi; Tanaka, Takemasa; Yoshiura, Kazunori; Tokumori, Kenji; Kanda, Shigenobu

    2002-01-01

    The purpose of this study was to evaluate the usefulness of a computer simulation system for intraoral radiography as an educational aid in radiographic training for dental students. A dried skull was scanned with a multidetector CT, and the series of slice data was transferred to a workstation. The software AVS Express Developer was used to construct the X-ray projected images from the CT slice data. Geometrical reproducibility was confirmed using numerical phantoms. We simulated images using the perspective projection method with an average-value algorithm in this software. Simulated images were compared with conventional film images projected from the same geometrical positions, including eccentric projections. Furthermore, to confirm how the image changes with the projection angle of the X-ray beam, we constructed simulation images in which the root apexes were enhanced with a maximum-value algorithm. Using this method, high-resolution simulated images with perspective projection, as opposed to parallel projection, were constructed. Compared with conventional film images, all major anatomic components could be visualized easily. Intraoral radiographs at any arbitrary projection angle could be simulated, which is impossible in the conventional training scheme for radiographic technique. Therefore, not only standard projected images but also eccentric projections could be displayed. A computer simulation system of intraoral radiography based on this method may be useful for training in intraoral radiographic technique for dental students. (author)

  5. Autostereoscopic image creation by hyperview matrix controlled single pixel rendering

    Science.gov (United States)

    Grasnick, Armin

    2017-06-01

    …technology just with a simple equation. This formula can be utilized to create a specific hyperview matrix for a certain 3D display, independent of the technology used. A hyperview matrix may contain references to a large number of images and act as an instruction for a subsequent rendering process of particular pixels. Naturally, a single pixel delivers an image with no resolution and does not provide any idea of the rendered scene. However, by implementing the method of pixel recycling, a 3D image can be perceived, even if all source images are different. It will be proven that several million perspectives can be rendered with the support of GPU rendering, benefiting from the hyperview matrix. As a result, a conventional autostereoscopic display, which is designed to represent only a few perspectives, can be used to show a hyperview image by using a suitable hyperview matrix. It will be shown that a millions-of-views hyperview image can be presented on a conventional autostereoscopic display. For such a hyperview image it is required that all pixels of the display are allocated to different source images. Controlled by the hyperview matrix, an adapted renderer can render a full hyperview image in real time.

  6. Visualization and computer graphics on isotropically emissive volumetric displays.

    Science.gov (United States)

    Mora, Benjamin; Maciejewski, Ross; Chen, Min; Ebert, David S

    2009-01-01

    The availability of commodity volumetric displays provides ordinary users with a new means of visualizing 3D data. Many of these displays are in the class of isotropically emissive light devices, which are designed to directly illuminate voxels in a 3D frame buffer, producing X-ray-like visualizations. While this technology can offer intuitive insight into a 3D object, the visualizations are perceptually different from what a computer graphics or visualization system would render on a 2D screen. This paper formalizes rendering on isotropically emissive displays and introduces a novel technique that emulates traditional rendering effects on isotropically emissive volumetric displays, delivering results that are much closer to what is traditionally rendered on regular 2D screens. Such a technique can significantly broaden the capability and usage of isotropically emissive volumetric displays. Our method takes a 3D dataset or object as the input, creates an intermediate light field, and outputs a special 3D volume dataset called a lumi-volume. This lumi-volume encodes approximated rendering effects in a form suitable for display with accumulative integrals along unobtrusive rays. When a lumi-volume is fed directly into an isotropically emissive volumetric display, it creates a 3D visualization with surface shading effects that are familiar to the users. The key to this technique is an algorithm for creating a 3D lumi-volume from a 4D light field. In this paper, we discuss a number of technical issues, including transparency effects due to the dimension reduction and sampling rates for light fields and lumi-volumes. We show the effectiveness and usability of this technique with a selection of experimental results captured from an isotropically emissive volumetric display, and we demonstrate its potential capability and scalability with computer-simulated high-resolution results.

  7. An efficient data structure for a three-dimensional vertex-based finite volume method

    Science.gov (United States)

    Akkurt, Semih; Sahin, Mehmet

    2017-11-01

    A vertex-based three-dimensional finite volume algorithm has been developed using an edge-based data structure. The mesh data structure of the given algorithm is similar to ones that exist in the literature. However, the data structures are redesigned and simplified to fit the requirements of the vertex-based finite volume method. To increase cache efficiency, the data access patterns of the vertex-based finite volume method are investigated, and the data are packed/allocated so that items accessed together are close to each other in memory. The present data structure is not limited to tetrahedra; arbitrary polyhedra are also supported in the mesh without any additional effort. Furthermore, the present data structure also supports adaptive refinement and coarsening. For the implicit and parallel implementation of the FVM algorithm, the PETSc and MPI libraries are employed. The performance and accuracy of the present algorithm are tested on classical benchmark problems by comparing CPU times with those of open-source algorithms.
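A hedged sketch of what an edge-based assembly loop looks like in a vertex-centred finite volume scheme (illustrative, with a simple central flux, not the authors' data structure):

```python
def accumulate_fluxes(edges, edge_normals, u):
    """Edge-based residual assembly for a vertex-centred finite volume
    scheme: each edge stores its two vertex ids and a (1-D, scalar here)
    dual-face normal; one flux evaluation per edge is scattered to both
    vertices with opposite signs, so conservation holds by construction."""
    residual = [0.0] * len(u)
    for (i, j), n in zip(edges, edge_normals):
        flux = 0.5 * (u[i] + u[j]) * n   # central flux through the dual face
        residual[i] -= flux              # leaves the control volume of i
        residual[j] += flux              # enters the control volume of j
    return residual
```

Evaluating each flux once per edge, rather than once per incident vertex, halves the work and touches each edge record exactly once, which is what makes the cache-aware packing described in the abstract pay off.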

  8. Comparison of three-dimensional visualization techniques for depicting the scala vestibuli and scala tympani of the cochlea by using high-resolution MR imaging.

    Science.gov (United States)

    Hans, P; Grant, A J; Laitt, R D; Ramsden, R T; Kassner, A; Jackson, A

    1999-08-01

    Cochlear implantation requires introduction of a stimulating electrode array into the scala vestibuli or scala tympani. Although these structures can be separately identified on many high-resolution scans, it is often difficult to ascertain whether these channels are patent throughout their length. The aim of this study was to determine whether an optimized combination of an imaging protocol and a visualization technique allows routine 3D rendering of the scala vestibuli and scala tympani. A submillimeter T2 fast spin-echo imaging sequence was designed to optimize the performance of 3D visualization methods. The spatial resolution was determined experimentally using primary images and 3D surface and volume renderings from eight healthy subjects. These data were used to develop the imaging sequence and to compare the quality and signal-to-noise dependency of four data visualization algorithms: maximum intensity projection, ray casting with transparent voxels, ray casting with opaque voxels, and isosurface rendering. The ability of these methods to produce 3D renderings of the scala tympani and scala vestibuli was also examined. The imaging technique was used in five patients with sensorineural deafness. Visualization techniques produced optimal results in combination with an isotropic volume imaging sequence. Clinicians preferred the isosurface-rendered images to other 3D visualizations. Both isosurface and ray casting displayed the scala vestibuli and scala tympani throughout their length. Abnormalities were shown in three patients, and in one of these, a focal occlusion of the scala tympani was confirmed at surgery. Three-dimensional images of the scala vestibuli and scala tympani can be routinely produced. The combination of an MR sequence optimized for use with isosurface rendering or ray-casting algorithms can produce 3D images with greater spatial resolution and anatomic detail than has been possible previously.
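Two of the compared visualization methods, maximum intensity projection and ray casting with opaque voxels, reduce to simple per-ray operations when the rays are aligned with a volume axis; the following is an illustrative sketch, not the study's software:

```python
import numpy as np

def mip(volume, axis=2):
    """Maximum intensity projection: each output pixel keeps the brightest
    voxel along its ray (here, rays parallel to one volume axis)."""
    return np.max(volume, axis=axis)

def first_hit_depth(volume, threshold, axis=2):
    """Opaque-voxel ray casting along an axis: depth index of the first
    voxel exceeding the threshold, or -1 where the ray misses the surface.
    The depth map can then be shaded to mimic an isosurface rendering."""
    mask = volume > threshold
    hit = mask.any(axis=axis)
    depth = mask.argmax(axis=axis)     # index of the first True along the ray
    return np.where(hit, depth, -1)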

  9. High-dimensional cluster analysis with the Masked EM Algorithm

    Science.gov (United States)

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  10. Pure JavaScript Storyline Layout Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    2017-10-02

    This is a JavaScript library for a storyline layout algorithm. Storylines are adept at communicating complex change by encoding time on the x-axis and using the proximity of lines in the y direction to represent interaction between entities. The library in this disclosure takes as input a list of objects containing an id, time, and state. The output is a data structure that can be used to conveniently render a storyline visualization. Most importantly, the library computes the y-coordinates of the entities over time so as to reduce layout artifacts, including crossings, wiggles, and whitespace. This is accomplished through a multi-objective, multi-stage optimization problem, where the output of one stage produces the input and constraints for the next stage.

  11. Long-term, low-level radwaste volume-reduction strategies. Volume 4. Waste disposal costs. Final report

    International Nuclear Information System (INIS)

    Sutherland, A.A.; Adam, J.A.; Rogers, V.C.; Merrell, G.B.

    1984-11-01

    Volume 4 establishes pricing levels at new shallow land burial grounds. The following conclusions can be drawn from the analyses described in the preceding chapters. Application of volume reduction techniques by utilities can have a significant impact on the volumes of waste going to low-level radioactive waste disposal sites. Using the relative waste stream volumes in NRC81 and the maximum volume reduction ratios provided by Burns and Roe, Inc., it was calculated that if all utilities use maximum volume reduction, the rate of waste receipt at disposal sites will be reduced by 40 percent. When a disposal site receives a lower volume of waste, its total cost of operation does not decrease in the same proportion; therefore the average cost per unit volume of waste received goes up. Whether the disposal site operator knows in advance that he will receive a smaller amount of waste has little influence on the average unit cost ($/ft³) of the waste disposed. For the pricing algorithm postulated, the average disposal cost to utilities that volume reduce is relatively independent of whether all utilities practice volume reduction or only a few do. The general effect of volume reduction by utilities is to reduce their average disposal site costs by a factor of between 1.5 and 2.5. This factor is generally independent of the size of the disposal site. The largest absolute savings in disposal site costs when utilities volume reduce occur with small disposal sites, because unit costs are higher at small sites. Including in the pricing algorithm a factor that penalizes waste generators who contribute larger amounts of the mobile nuclides ³H, ¹⁴C, ⁹⁹Tc, and ¹²⁹I, which may be the subject of site inventory limits, lowers unit disposal costs for utility wastes that contain only small amounts of these nuclides and raises unit costs for other utility wastes.
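A toy model shows why the average unit cost rises when a site receives less waste while a volume-reducing utility's total bill still falls; all figures below are assumed for illustration and are not the report's pricing algorithm:

```python
def unit_disposal_cost(fixed_cost, variable_cost_per_ft3, volume_ft3):
    """Average price per cubic foot when a site spreads its fixed costs
    over the volume it actually receives (illustrative model)."""
    return fixed_cost / volume_ft3 + variable_cost_per_ft3

# assumed illustrative figures, not from the report
full_volume = 1.0e6                  # ft3/yr received without volume reduction
reduced_volume = 0.6e6               # 40 percent less waste shipped site-wide
price_full = unit_disposal_cost(5.0e6, 2.0, full_volume)      # $/ft3
price_reduced = unit_disposal_cost(5.0e6, 2.0, reduced_volume)

# one utility's bill: a volume reducer ships less, so its bill still drops
bill_full = price_full * 1.0e5
bill_reduced = price_reduced * 0.6e5
```

The fixed component of site cost is what drives the unit price up as volume falls; how large the net savings factor is then depends on the fixed/variable split and the pricing algorithm, which the report quantifies as roughly 1.5 to 2.5.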

  12. [Comparison of dose calculation algorithms in stereotactic radiation therapy in lung].

    Science.gov (United States)

    Tomiyama, Yuki; Araki, Fujio; Kanetake, Nagisa; Shimohigashi, Yoshinobu; Tominaga, Hirofumi; Sakata, Jyunichi; Oono, Takeshi; Kouno, Tomohiro; Hioki, Kazunari

    2013-06-01

    Dose calculation algorithms in radiation treatment planning systems (RTPSs) play a crucial role in stereotactic body radiation therapy (SBRT) in the lung with heterogeneous media. This study investigated the performance and accuracy of dose calculation for three algorithms: analytical anisotropic algorithm (AAA), pencil beam convolution (PBC) and Acuros XB (AXB) in Eclipse (Varian Medical Systems), by comparison against the Voxel Monte Carlo algorithm (VMC) in iPlan (BrainLab). The dose calculations were performed with clinical lung treatments under identical planning conditions, and the dose distributions and the dose volume histogram (DVH) were compared among algorithms. AAA underestimated the dose in the planning target volume (PTV) compared to VMC and AXB in most clinical plans. In contrast, PBC overestimated the PTV dose. AXB tended to slightly overestimate the PTV dose compared to VMC but the discrepancy was within 3%. The discrepancy in the PTV dose between VMC and AXB appears to be due to differences in physical material assignments, material voxelization methods, and an energy cut-off for electron interactions. The dose distributions in lung treatments varied significantly according to the calculation accuracy of the algorithms. VMC and AXB are better algorithms than AAA for SBRT.
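The dose volume histogram used above as a comparison metric can be computed directly from a dose grid and a structure mask. The sketch below uses a synthetic dose grid and PTV mask (not clinical data) to build a cumulative DVH:

```python
import numpy as np

# Sketch of a cumulative dose-volume histogram (DVH); the dose grid and
# PTV mask here are synthetic placeholders, not clinical data.
def cumulative_dvh(dose, mask, bins=100):
    """Fraction of the masked volume receiving at least each dose level."""
    d = dose[mask]
    levels = np.linspace(0.0, d.max(), bins)
    volume_fraction = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_fraction

rng = np.random.default_rng(0)
dose = rng.normal(60.0, 2.0, size=(16, 16, 16))   # hypothetical dose grid (Gy)
ptv = np.zeros(dose.shape, dtype=bool)
ptv[4:12, 4:12, 4:12] = True                      # hypothetical target region
levels, vf = cumulative_dvh(dose, ptv)
assert vf[0] == 1.0                               # whole PTV receives at least 0 Gy
assert np.all(np.diff(vf) <= 0)                   # cumulative DVH never increases
```

Algorithm comparisons like the one in this record then reduce to comparing such curves, and summary statistics derived from them, between calculation methods.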

  13. Algorithms, architectures and information systems security

    CERN Document Server

    Sur-Kolay, Susmita; Nandy, Subhas C; Bagchi, Aditya

    2008-01-01

    This volume contains articles written by leading researchers in the fields of algorithms, architectures, and information systems security. The first five chapters address several challenging geometric problems and related algorithms. These topics have major applications in pattern recognition, image analysis, digital geometry, surface reconstruction, computer vision and in robotics. The next five chapters focus on various optimization issues in VLSI design and test architectures, and in wireless networks. The last six chapters comprise scholarly articles on information systems security coverin

  14. Predicting the long-term durability of hemp–lime renders in inland and coastal areas using Mediterranean, Tropical and Semi-arid climatic simulations

    International Nuclear Information System (INIS)

    Arizzi, Anna; Viles, Heather; Martín-Sanchez, Inés; Cultrone, Giuseppe

    2016-01-01

    Hemp-based composites are eco-friendly building materials as they improve energy efficiency in buildings and entail low waste production and pollutant emissions during their manufacturing process. Nevertheless, the organic nature of hemp enhances the bio-receptivity of the material, with likely negative consequences for its long-term performance in the building. The main purpose of this study was to examine the macro- and micro-scale response of hemp–lime renders subjected to weathering simulations in an environmental cabinet (one year was condensed into twelve days), so as to predict their long-term durability in coastal and inland areas with Mediterranean, Tropical and Semi-arid climates, in relation to the lime type used. The simulated climatic conditions caused almost unnoticeable mass, volume and colour changes in hemp–lime renders. No efflorescence or physical breakdown was detected in samples subjected to NaCl, because the salt mainly precipitates on the surface of the samples and is washed away by the rain. Although there was no visible microbial colonisation, alkaliphilic fungi (mainly Penicillium and Aspergillus) and bacteria (mainly Bacillus and Micrococcus) were isolated in all samples. Microbial growth and diversification were higher under the Tropical climate, due to heavier rainfall. The influence of bacterial activity on the hardening of the samples is also discussed and related to the formation and stabilisation of vaterite in hemp–lime mixes. This study has demonstrated that hemp–lime renders show good durability under a wide range of environmental conditions and factors. However, it might be useful to take some specific preventive and maintenance measures to reduce the bio-receptivity of this material, thus ensuring a longer durability on site. - Highlights: • Realistic simulations in the cabinet of one-year exposure to environmental conditions • Influence of the lime type on the durability of hemp–lime renders

  15. Predicting the long-term durability of hemp–lime renders in inland and coastal areas using Mediterranean, Tropical and Semi-arid climatic simulations

    Energy Technology Data Exchange (ETDEWEB)

    Arizzi, Anna, E-mail: anna.arizzi@ouce.ox.ac.uk [School of Geography and the Environment, University of Oxford, Dyson Perrins Building, South Parks Road, Oxford OX1 3QY (United Kingdom); Viles, Heather [School of Geography and the Environment, University of Oxford, Dyson Perrins Building, South Parks Road, Oxford OX1 3QY (United Kingdom); Martín-Sanchez, Inés [Departamento de Microbiología, Universidad de Granada, Avda. Fuentenueva s/n, 18002 Granada (Spain); Cultrone, Giuseppe [Departamento de Mineralogía y Petrología, Universidad de Granada, Avda. Fuentenueva s/n, 18002 Granada (Spain)

    2016-01-15

    Hemp-based composites are eco-friendly building materials as they improve energy efficiency in buildings and entail low waste production and pollutant emissions during their manufacturing process. Nevertheless, the organic nature of hemp enhances the bio-receptivity of the material, with likely negative consequences for its long-term performance in the building. The main purpose of this study was to examine the macro- and micro-scale response of hemp–lime renders subjected to weathering simulations in an environmental cabinet (one year was condensed into twelve days), so as to predict their long-term durability in coastal and inland areas with Mediterranean, Tropical and Semi-arid climates, in relation to the lime type used. The simulated climatic conditions caused almost unnoticeable mass, volume and colour changes in hemp–lime renders. No efflorescence or physical breakdown was detected in samples subjected to NaCl, because the salt mainly precipitates on the surface of the samples and is washed away by the rain. Although there was no visible microbial colonisation, alkaliphilic fungi (mainly Penicillium and Aspergillus) and bacteria (mainly Bacillus and Micrococcus) were isolated in all samples. Microbial growth and diversification were higher under the Tropical climate, due to heavier rainfall. The influence of bacterial activity on the hardening of the samples is also discussed and related to the formation and stabilisation of vaterite in hemp–lime mixes. This study has demonstrated that hemp–lime renders show good durability under a wide range of environmental conditions and factors. However, it might be useful to take some specific preventive and maintenance measures to reduce the bio-receptivity of this material, thus ensuring a longer durability on site. - Highlights: • Realistic simulations in the cabinet of one-year exposure to environmental conditions • Influence of the lime type on the durability of hemp–lime renders

  16. Leveraging Disturbance Observer Based Torque Control for Improved Impedance Rendering with Series Elastic Actuators

    Science.gov (United States)

    Mehling, Joshua S.; Holley, James; O'Malley, Marcia K.

    2015-01-01

    The fidelity with which series elastic actuators (SEAs) render desired impedances is important. Numerous approaches to SEA impedance control have been developed under the premise that high-precision actuator torque control is a prerequisite. Indeed, the design of an inner torque compensator has a significant impact on actuator impedance rendering. The disturbance observer (DOB) based torque control implemented in NASA's Valkyrie robot is considered here and a mathematical model of this torque control, cascaded with an outer impedance compensator, is constructed. While previous work has examined the impact a disturbance observer has on torque control performance, little has been done regarding DOBs and impedance rendering accuracy. Both simulation and a series of experiments are used to demonstrate the significant improvements possible in an SEA's ability to render desired dynamic behaviors when utilizing a DOB. Actuator transparency at low impedances is improved, closed loop hysteresis is reduced, and the actuator's dynamic response to both commands and interaction torques more faithfully matches that of the desired model. All of this is achieved by leveraging DOB based control rather than increasing compensator gains, thus making improved SEA impedance control easier to achieve in practice.

  17. Custom OpenStreetMap Rendering – OpenTrackMap Experience

    Directory of Open Access Journals (Sweden)

    Radek Bartoň

    2010-02-01

    Full Text Available After 5 years of its existence, OpenStreetMap [1] is becoming an important and valuable source of geographic data for people all over the world. Although initially targeted at providing a map of cities for routing services, it can be exploited for other, often unexpected, purposes. One such use is the effort to map the network of hiking tracks of the Czech Tourist Club [2]. To support this endeavour, the OpenTrackMap [3] project was started. Its primary aim is to provide a customized rendering style for the Mapnik renderer which emphasizes map features important to tourists and displays a layer with hiking tracks. This article presents the obstacles which such a project must face, and it can serve as a tutorial for other projects of a similar type.

  18. Simulation of the radiography formation process from CT patient volume

    Energy Technology Data Exchange (ETDEWEB)

    Bifulco, P; Cesarelli, M; Verso, E; Roccasalva Firenze, M; Sansone, M; Bracale, M [University of Naples, Federico II, Electronic Engineering Department, Bioengineering Unit, Via Claudio, 21 - 80125 Naples (Italy)

    1999-12-31

    The aim of this work is to develop an algorithm to simulate the radiographic image formation process using volumetric anatomical data of the patient, obtained from 3D diagnostic CT images. Many applications, including radiography-driven surgery, virtual reality in medicine, and radiologist teaching and training, may take advantage of such a technique. The algorithm has been designed to simulate generic radiographic equipment in an arbitrary orientation with respect to the patient. The simulated radiography is obtained by considering a discrete number of X-ray paths departing from the focus, passing through the patient volume and reaching the radiographic plane. To evaluate a generic pixel of the simulated radiography, the cumulative absorption along the corresponding X-ray is computed. To estimate X-ray absorption at a generic point of the patient volume, 3D interpolation of the CT data has been adopted. The proposed technique is quite similar to those employed in ray tracing. A computer-designed test volume has been used to assess the reliability of the radiography simulation algorithm as a measuring tool. The error analysis shows that the accuracy achieved by the radiographic simulation algorithm is largely confined within the sampling step of the CT volume. (authors) 16 refs., 12 figs., 1 tab.
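A minimal sketch of the described formation process, simplified to a parallel-beam geometry with one ray per pixel and no interpolation (the paper uses a point focus and 3D interpolation of the CT data), and with a made-up attenuation volume:

```python
import numpy as np

# Simplified sketch: each simulated-radiograph pixel is the cumulative
# absorption along one X-ray through the volume. Parallel rays along axis 0
# replace the paper's point focus; `mu` is an invented attenuation volume.
def simulate_radiograph(mu):
    """Beer-Lambert transmitted intensity for one parallel ray per pixel."""
    path_integral = mu.sum(axis=0)    # cumulative absorption along each ray
    return np.exp(-path_integral)     # transmitted fraction reaching the film

mu = np.zeros((32, 32, 32))
mu[10:20, 12:18, 12:18] = 0.02        # toy dense block inside the volume
drr = simulate_radiograph(mu)
assert drr[0, 0] == 1.0               # ray through empty space: no absorption
assert drr[15, 15] < 1.0              # ray through the block is attenuated
```

A divergent-beam version would instead march from the focal spot to each detector pixel, interpolating `mu` at sample points along the ray, exactly as the abstract describes.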

  19. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Hatt, M [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Lamare, F [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609, (France); Boussion, N [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Turzo, A [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Collet, C [Ecole Nationale Superieure de Physique de Strasbourg (ENSPS), ULP, Strasbourg, F-67000 (France); Salzenstein, F [Institut d' Electronique du Solide et des Systemes (InESS), ULP, Strasbourg, F-67000 (France); Roux, C [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Jarritt, P [Medical Physics Agency, Royal Victoria Hospital, Belfast (United Kingdom); Carson, K [Medical Physics Agency, Royal Victoria Hospital, Belfast (United Kingdom); Rest, C Cheze-Le [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Visvikis, D [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France)

    2007-07-21

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely the fuzzy hidden Markov chains (FHMC), with that of the threshold-based techniques that are the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimate of imprecision, which should subsequently lead to a better modelling of the 'fuzzy' nature of the object-of-interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm³ and 64 mm³). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold-based techniques. The

  20. COMPARISON OF VOLUMETRIC REGISTRATION ALGORITHMS FOR TENSOR-BASED MORPHOMETRY.

    Science.gov (United States)

    Villalon, Julio; Joshi, Anand A; Toga, Arthur W; Thompson, Paul M

    2011-01-01

    Nonlinear registration of brain MRI scans is often used to quantify morphological differences associated with disease or genetic factors. Recently, surface-guided fully 3D volumetric registrations have been developed that combine intensity-guided volume registrations with cortical surface constraints. In this paper, we compare one such algorithm to two popular high-dimensional volumetric registration methods: large-deformation viscous fluid registration, formulated in a Riemannian framework, and the diffeomorphic "Demons" algorithm. We performed an objective morphometric comparison, using a large MRI dataset from 340 young adult twin subjects to examine 3D patterns of correlations in anatomical volumes. Surface-constrained volume registration gave greater effect sizes for detecting morphometric associations near the cortex, while the other two approaches gave greater effect sizes subcortically. These findings suggest novel ways to combine the advantages of multiple methods in the future.

  1. Field Operations and Enforcement Manual for Air Pollution Control. Volume III: Inspection Procedures for Specific Industries.

    Science.gov (United States)

    Weisburd, Melvin I.

    The Field Operations and Enforcement Manual for Air Pollution Control, Volume III, explains in detail the following: inspection procedures for specific sources, kraft pulp mills, animal rendering, steel mill furnaces, coking operations, petroleum refineries, chemical plants, non-ferrous smelting and refining, foundries, cement plants, aluminum…

  2. Stereoscopy in diagnostic radiology and procedure planning: does stereoscopic assessment of volume-rendered CT angiograms lead to more accurate characterisation of cerebral aneurysms compared with traditional monoscopic viewing?

    International Nuclear Information System (INIS)

    Stewart, Nikolas; Lock, Gregory; Coucher, John; Hopcraft, Anthony

    2014-01-01

    Stereoscopic vision is a critical part of the human visual system, conveying more information than two-dimensional, monoscopic observation alone. This study aimed to quantify the contribution of stereoscopy to the assessment of radiographic data, using widely available three-dimensional (3D)-capable display monitors, by assessing whether stereoscopic viewing improved the characterisation of cerebral aneurysms. Nine radiology registrars were shown 40 different volume-rendered (VR) models of cerebral computed tomography angiograms (CTAs), each in both monoscopic and stereoscopic format, and then asked to record aneurysm characteristics on short multiple-choice answer sheets. The monitor used was a current-model, commercially available 3D television. Responses were marked against a gold standard of assessments made by a consultant radiologist using the original CT planar images on a diagnostic radiology computer workstation. The participants' results were fairly homogeneous, with most showing no difference in diagnosis using stereoscopic VR models. One participant performed better on the monoscopic VR models. On average, monoscopic VR models yielded slightly better diagnoses, by 2.0%. Stereoscopy has a long history, but it has only recently become technically feasible for stored cross-sectional data to be adequately reformatted and displayed in this format. Scant literature exists to quantify the technology's possible contribution to medical imaging; this study attempts to build on this limited knowledge base and promote discussion within the field. Stereoscopic viewing of images should be further investigated and may well eventually find a permanent place in procedural and diagnostic medical imaging.

  3. Heterogeneous Deformable Modeling of Bio-Tissues and Haptic Force Rendering for Bio-Object Modeling

    Science.gov (United States)

    Lin, Shiyong; Lee, Yuan-Shin; Narayan, Roger J.

    This paper presents a novel technique for modeling soft biological tissues as well as the development of an innovative interface for bio-manufacturing and medical applications. Heterogeneous deformable models may be used to represent the actual internal structures of deformable biological objects, which possess multiple components and nonuniform material properties. Both heterogeneous deformable object modeling and accurate haptic rendering can greatly enhance the realism and fidelity of virtual reality environments. In this paper, a tri-ray node snapping algorithm is proposed to generate a volumetric heterogeneous deformable model from a set of object interface surfaces between different materials. A constrained local static integration method is presented for simulating deformation and accurate force feedback based on the material properties of a heterogeneous structure. Biological soft tissue modeling is used as an example to demonstrate the proposed techniques. By integrating the heterogeneous deformable model into a virtual environment, users can both observe different materials inside a deformable object as well as interact with it by touching the deformable object using a haptic device. The presented techniques can be used for surgical simulation, bio-product design, bio-manufacturing, and medical applications.

  4. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  5. 3D Animated Video Introducing the Traditional Houses and Musical Instruments of Kepri Using the Cel-Shading Rendering Technique

    OpenAIRE

    Jianfranco Irfian Asnawi; Afdhol Dzikri

    2016-01-01

    This animation, titled "3D animated video of the traditional houses and musical instruments of the Riau Islands (Kepulauan Riau) using the cel-shading rendering technique", is a video intended to introduce the musical instruments originating from the Riau Islands. The animation is produced using the cel-shading rendering technique. Cel-shading is a rendering technique that displays 3D graphics resembling hand-drawn images, such as comics and cartoons. This technique has also been applied in 3D games, where it has proved to attract many ...

  6. Subsurface Scattering-Based Object Rendering Techniques for Real-Time Smartphone Games

    Directory of Open Access Journals (Sweden)

    Won-Sun Lee

    2014-01-01

    Full Text Available Subsurface scattering, which simulates the path of light through a material in a scene, is one of the advanced rendering techniques in the computer graphics community. Because it requires many expensive operations, it cannot easily be implemented in real-time smartphone games. In this paper, we propose a subsurface scattering-based object rendering technique that is optimized for smartphone games. We employ our subsurface scattering method in a real-time smartphone game, and an example game is designed to validate that the proposed method operates seamlessly in real time. Finally, we show comparison results between the bidirectional reflectance distribution function, the bidirectional scattering distribution function, and our proposed subsurface scattering method in a smartphone game.

  7. The effects of multiview depth video compression on multiview rendering

    NARCIS (Netherlands)

    Merkle, P.; Morvan, Y.; Smolic, A.; Farin, D.S.; Mueller, K.; With, de P.H.N.; Wiegang, T.

    2009-01-01

    This article investigates the interaction between different techniques for depth compression and view synthesis rendering with multiview video plus scene depth data. Two different approaches for depth coding are compared, namely H.264/MVC, using temporal and inter-view reference images for efficient

  8. Evaluation of segmentation algorithms for generation of patient models in radiofrequency hyperthermia

    International Nuclear Information System (INIS)

    Wust, P.; Gellermann, J.; Beier, J.; Tilly, W.; Troeger, J.; Felix, R.; Wegner, S.; Oswald, H.; Stalling, D.; Hege, H.C.; Deuflhard, P.

    1998-01-01

    Time-efficient and easy-to-use segmentation algorithms (contour generation) are a precondition for various applications in radiation oncology, especially for planning purposes in hyperthermia. We have developed the three following algorithms for contour generation and implemented them in an editor of the HyperPlan hyperthermia planning system. Firstly, a manual contour input with numerous correction and editing options. Secondly, a volume growing algorithm with adjustable threshold range and minimal region size. Thirdly, a watershed transformation in two and three dimensions. In addition, the region input function of the Helax commercial radiation therapy planning system was available for comparison. All four approaches were applied under routine conditions to two-dimensional computed tomographic slices of the superior thoracic aperture, mid-chest, upper abdomen, mid-abdomen, pelvis and thigh; they were also applied to a 3D CT sequence of 72 slices using the three-dimensional extension of the algorithms. Time to generate the contours and their quality with respect to a reference model were determined. Manual input for a complete patient model required approximately 5 to 6 h for 72 CT slices (4.5 min/slice). If slight irregularities at object boundaries are accepted, this time can be reduced to 3.5 min/slice using the volume growing algorithm. However, generating a tetrahedron mesh from such a contour sequence for hyperthermia planning (the basis for finite-element algorithms) requires a significant amount of postediting. With the watershed algorithm extended to three dimensions, processing time can be further reduced to 3 min/slice while achieving satisfactory contour quality. Therefore, this method is currently regarded as offering some potential for efficient automated model generation in hyperthermia. In summary, the 3D volume growing algorithm and watershed transformation are both suitable for segmentation of even low-contrast objects. However, they are not
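The volume-growing approach described in this record can be sketched in a few lines. This 2D toy version (breadth-first growth from a seed, with the abstract's adjustable threshold range and minimal region size as parameters; all values invented) illustrates the idea without the HyperPlan specifics:

```python
from collections import deque

import numpy as np

# 2D toy sketch of threshold-range region growing with a minimal region size.
def region_grow(image, seed, lo, hi, min_size=1):
    grown = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if grown[y, x] or not (lo <= image[y, x] <= hi):
            continue                                  # visited or out of range
        grown[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected growth
            ny, nx = y + dy, x + dx
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]:
                queue.append((ny, nx))
    return grown if grown.sum() >= min_size else np.zeros_like(grown)

img = np.zeros((8, 8))
img[2:5, 2:5] = 100.0                                 # bright 3x3 "organ"
mask = region_grow(img, seed=(3, 3), lo=50.0, hi=150.0)
assert mask.sum() == 9 and not mask[0, 0]
```

The watershed transformation mentioned alongside it instead floods the image from local minima of the gradient, which removes the need for a per-object threshold range at the cost of requiring post-merging of over-segmented basins.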

  9. One-Dimensional Haptic Rendering Using Audio Speaker with Displacement Determined by Inductance

    Directory of Open Access Journals (Sweden)

    Avin Khera

    2016-03-01

    Full Text Available We report overall design considerations and preliminary results for a new haptic rendering device based on an audio loudspeaker. Our application models tissue properties during microsurgery. For example, the device could respond to the tip of a tool by simulating a particular tissue, displaying a desired compressibility and viscosity, giving way as the tissue is disrupted, or exhibiting independent motion, such as that caused by pulsations in blood pressure. Although limited to one degree of freedom and with a relatively small range of displacement compared to other available haptic rendering devices, our design exhibits high bandwidth, low friction, low hysteresis, and low mass. These features are consistent with modeling interactions with delicate tissues during microsurgery. In addition, our haptic rendering device is designed to be simple and inexpensive to manufacture, in part through an innovative method of measuring displacement by existing variations in the speaker’s inductance as the voice coil moves over the permanent magnet. Low latency and jitter are achieved by running the real-time simulation models on a dedicated microprocessor, while maintaining bidirectional communication with a standard laptop computer for user controls and data logging.

  10. Optimization of Pressurizer Based on Genetic-Simplex Algorithm

    International Nuclear Information System (INIS)

    Wang, Cheng; Yan, Chang Qi; Wang, Jian Jun

    2014-01-01

    The pressurizer is one of the key components in a nuclear power system, and it is important to control its dimensions through optimization techniques during design. In this work, a mathematical model of a vertical electrically heated pressurizer was established. A new Genetic-Simplex Algorithm (GSA), which combines the genetic algorithm and the simplex algorithm, was developed to enhance the searching ability, and the modified and original algorithms were compared on benchmark functions. Furthermore, the optimization design of the pressurizer, taking minimization of volume and net weight as objectives, was carried out through GSA, considering thermal-hydraulic and geometric constraints. The results indicate that the mathematical model is suitable for the pressurizer and that the new algorithm is more effective than the traditional genetic algorithm. The optimized design shows obvious validity and can provide guidance for real engineering design.
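The hybrid strategy can be sketched as a genetic global search whose best individual seeds a simplex (Nelder-Mead) refinement. The following is a generic illustration on the Rosenbrock benchmark, not the paper's actual GSA; population size, mutation scale, and iteration counts are invented:

```python
import numpy as np

# Generic genetic-plus-simplex hybrid sketch (not the paper's GSA).
def rosenbrock(x):
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

def nelder_mead(f, x0, iters=300):
    """Bare-bones Nelder-Mead: reflection, expansion, contraction, shrink."""
    n = len(x0)
    simplex = [np.asarray(x0, float)] + [x0 + 0.1 * np.eye(n)[i] for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + (centroid - worst)                 # reflection
        if f(xr) < f(best):
            xe = centroid + 2.0 * (centroid - worst)       # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:
            xc = centroid + 0.5 * (worst - centroid)       # contraction
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:
                simplex = [best + 0.5 * (x - best) for x in simplex]  # shrink
    return min(simplex, key=f)

rng = np.random.default_rng(1)
pop = rng.uniform(-2.0, 2.0, size=(40, 2))                 # genetic stage:
for _ in range(30):                                        # truncation selection
    fittest = pop[np.argsort([rosenbrock(p) for p in pop])[:10]]
    pop = np.repeat(fittest, 4, axis=0) + rng.normal(0.0, 0.1, size=(40, 2))
seed = min(pop, key=rosenbrock)                            # best individual
best = nelder_mead(rosenbrock, seed)                       # simplex refinement
assert rosenbrock(best) < 1e-3                             # near the optimum (1, 1)
```

The division of labour mirrors the record's motivation: the genetic stage supplies global exploration, while the simplex stage supplies fast local convergence that a plain genetic algorithm lacks.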

  11. About the use of the Monte-Carlo code based tracing algorithm and the volume fraction method for Sₙ full core calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gurevich, M. I.; Oleynik, D. S. [RRC Kurchatov Inst., Kurchatov Sq., 1, 123182, Moscow (Russian Federation); Russkov, A. A.; Voloschenko, A. M. [Keldysh Inst. of Applied Mathematics, Miusskaya Sq., 4, 125047, Moscow (Russian Federation)

    2006-07-01

    The tracing algorithm implemented in the geometrical module of the Monte-Carlo transport code MCU is applied to calculate the volume fractions of the original materials within the spatial cells of a mesh that overlays the problem geometry. In this way the 3D combinatorial-geometry representation of the problem geometry used by the MCU code is transformed into user-defined 2D or 3D bit-mapped ones. Next, these data are used in the volume fraction (VF) method to approximate the problem geometry by introducing additional mixtures for spatial cells where several original materials are included. We have found that in solving realistic 2D and 3D core problems, sufficiently fast convergence of the VF method takes place as the spatial mesh is refined. In practice, the proposed implementation of the VF method appears to be a suitable geometry interface between Monte-Carlo and Sₙ transport codes. (authors)
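The volume-fraction idea, approximating each mesh cell by the mix of materials it contains, can be illustrated by point sampling. MCU derives these fractions with its tracing algorithm rather than the naive sampling shown here, and the disc geometry below is invented:

```python
import numpy as np

# Illustration of the volume fraction (VF) idea: sample points inside each
# overlay-mesh cell and record the fraction of each material, so that cells
# crossed by a material boundary become mixtures. Geometry is a toy disc.
def material_id(x, y):
    return 1 if x * x + y * y <= 1.0 else 0           # "fuel" disc in "moderator"

def volume_fractions(cell, samples=20):
    (x0, x1), (y0, y1) = cell
    ids = np.array([[material_id(x, y)
                     for x in np.linspace(x0, x1, samples)]
                    for y in np.linspace(y0, y1, samples)])
    return {int(m): float((ids == m).mean()) for m in np.unique(ids)}

pure = volume_fractions(((0.0, 0.5), (0.0, 0.5)))     # cell fully inside the disc
mixed = volume_fractions(((0.5, 1.5), (0.0, 1.0)))    # cell cut by the boundary
assert pure == {1: 1.0}
assert 0.0 < mixed[1] < 1.0                           # boundary cell is a mixture
```

A deterministic Sₙ code would then assign each mixed cell a homogenized cross-section weighted by these fractions, which is why refining the mesh drives the approximation toward the exact combinatorial geometry.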

  12. Clinical Recommendations on Emergency Medical Care Rendering to Children with Acute Intoxication

    Directory of Open Access Journals (Sweden)

    A. A. Baranov

    2015-01-01

    Full Text Available The article is dedicated to the issue of intoxication in children. Acute accidental intoxication is especially relevant to pediatric practice. Drugs, various chemicals frequently used in everyday life and in farming, as well as animal poisons, including snake poisons, may have a toxic effect on children. Specialists of the professional associations of physicians ("Russian Society of Emergency Medicine") and pediatricians ("Union of Pediatricians of Russia") formulated and briefly described the main causes of acute intoxication in children, the clinical manifestations and the most significant laboratory indicators of toxicity for various substances, as well as therapy principles and algorithms for such conditions in compliance with the principles of evidence-based medicine. The article presents pathognomonic symptoms and peculiarities of drug intoxication, provides a description of mediator symptoms of intoxication with various substances, as well as the symptoms that may indicate a toxic effect. The article contains a description of the principles of correction of vital body functions, measures for removing toxic substances from the body and information on the main antidotes. Special attention is given to the most frequent types of intoxication (with organic acids, lye, naphazoline, paracetamol, and snake poisons [viper bite]). The article lists the stages of medical care rendered to children suffering from acute intoxication and presents the prognosis and further management of pediatric patients suffering from such conditions.

  13. Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography

    Science.gov (United States)

    Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.

    2014-11-01

    Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is a complicated algorithm to implement and computationally intensive and there is currently no published, fast and easily-implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al, presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
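
    For orientation, the PoCA step the paper critiques can be sketched as follows. This minimal implementation is our own, not the authors' code: it returns the midpoint of the shortest segment between the incoming and outgoing muon tracks, which PoCA-based algorithms treat as the scattering location.

    ```python
    import numpy as np

    def poca(p1, d1, p2, d2):
        """Point of Closest Approach between the incoming track (p1, d1)
        and the outgoing track (p2, d2), each given as a point and a
        direction vector. Returns the midpoint of the shortest segment
        joining the two lines, or None for parallel tracks."""
        p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
        p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
        w = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w, d2 @ w
        denom = a * c - b * b
        if abs(denom) < 1e-12:      # parallel tracks: no unique PoCA
            return None
        t = (b * e - c * d) / denom  # parameter along the incoming track
        s = (a * e - b * d) / denom  # parameter along the outgoing track
        return 0.5 * ((p1 + t * d1) + (p2 + s * d2))
    ```

    The inaccuracy the paper analyses stems from this single-point assumption: real muons scatter continuously along their path, so assigning all scattering to one voxel misplaces density.
    
    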

  14. Global left ventricular function in cardiac CT. Evaluation of an automated 3D region-growing segmentation algorithm

    International Nuclear Information System (INIS)

    Muehlenbruch, Georg; Das, Marco; Hohl, Christian; Wildberger, Joachim E.; Guenther, Rolf W.; Mahnken, Andreas H.; Rinck, Daniel; Flohr, Thomas G.; Koos, Ralf; Knackstedt, Christian

    2006-01-01

    The purpose was to evaluate a new semi-automated 3D region-growing segmentation algorithm for functional analysis of the left ventricle in multislice CT (MSCT) of the heart. Twenty patients underwent contrast-enhanced MSCT of the heart (collimation 16 x 0.75 mm; 120 kV; 550 mAseff). Multiphase image reconstructions with 1-mm axial slices and 8-mm short-axis slices were performed. Left ventricular volume measurements (end-diastolic volume, end-systolic volume, ejection fraction and stroke volume) from manually drawn endocardial contours in the short-axis slices were compared to semi-automated region-growing segmentation of the left ventricle from the 1-mm axial slices. The post-processing time for both methods was recorded. With the new region-growing algorithm, proper segmentation of the left ventricle was feasible in 13/20 patients (65%). In these patients, the signal-to-noise ratio was higher than in the remaining patients (3.2±1.0 vs. 2.6±0.6). Volume measurements of both segmentation algorithms showed an excellent correlation (all P≤0.0001); the limits of agreement for the ejection fraction were 2.3±8.3 ml. In the patients with proper segmentation the mean post-processing time using the region-growing algorithm was diminished by 44.2%. On the basis of a good contrast-enhanced data set, a left ventricular volume analysis using the new semi-automated region-growing segmentation algorithm is technically feasible, accurate and more time-effective. (orig.)
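
    A minimal sketch of the general region-growing idea (not the vendor's algorithm; the 6-connectivity and fixed intensity tolerance are our simplifying assumptions): starting from a seed voxel, the region expands to neighbours whose intensity is close to the seed's.

    ```python
    from collections import deque

    import numpy as np

    def region_grow(volume, seed, tol):
        """Grow a region from `seed` in a 3D array: include 6-connected
        voxels whose intensity is within `tol` of the seed intensity."""
        mask = np.zeros(volume.shape, bool)
        ref = volume[seed]
        queue = deque([seed])
        mask[seed] = True
        while queue:
            z, y, x = queue.popleft()
            # visit the six face-adjacent neighbours
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= c < s for c, s in zip(n, volume.shape)) \
                        and not mask[n] and abs(volume[n] - ref) <= tol:
                    mask[n] = True
                    queue.append(n)
        return mask
    ```

    The abstract's dependence on signal-to-noise ratio is intuitive under this model: noise pushes ventricle voxels outside the tolerance band and the region leaks or stalls.
    
    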

  15. Social signals and algorithmic trading of Bitcoin.

    Science.gov (United States)

    Garcia, David; Schweitzer, Frank

    2015-09-01

    The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from being trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various data sources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology and transaction volume of Bitcoin. We add social signals related to information search, word of mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading based on social media sentiment has the potential to yield positive returns on investment.

  16. ACCURATUM: improved calcium volume scoring using a mesh-based algorithm - a phantom study

    International Nuclear Information System (INIS)

    Saur, Stefan C.; Szekely, Gabor; Alkadhi, Hatem; Desbiolles, Lotus; Cattin, Philippe C.

    2009-01-01

    To overcome the limitations of the classical volume scoring method for quantifying coronary calcifications, including accuracy, variability between examinations, and dependency on plaque density and acquisition parameters, a mesh-based volume measurement method has been developed. It was evaluated and compared with the classical volume scoring method for accuracy, i.e., the normalized volume (measured volume/ground-truthed volume), and for variability between examinations (standard deviation of accuracy). A cardiac computed-tomography (CT) phantom containing various cylindrical calcifications was scanned using different tube voltages and reconstruction kernels, at various positions and orientations on the CT table and using different slice thicknesses. Mean accuracy for all plaques was significantly higher (p<0.0001) for the proposed method (1.220±0.507) than for the classical volume score (1.896±1.095). In contrast to the classical volume score, plaque density (p=0.84), reconstruction kernel (p=0.19), and tube voltage (p=0.27) had no impact on the accuracy of the developed method. In conclusion, the method presented herein is more accurate than classical calcium scoring and is less dependent on tube voltage, reconstruction kernel, and plaque density. (orig.)
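
    The two evaluation quantities used in the abstract — accuracy as the normalized volume (measured volume / ground-truth volume) and variability as its standard deviation over examinations — can be computed directly; a small sketch (the function name and values are illustrative, not from the study):

    ```python
    import statistics

    def accuracy_stats(measured, truth):
        """Normalized volumes (measured / ground truth) and their
        mean and standard deviation across examinations."""
        ratios = [m / t for m, t in zip(measured, truth)]
        return statistics.mean(ratios), statistics.stdev(ratios)
    ```

    Under this definition a perfect scorer has mean accuracy 1.0 and standard deviation 0; the reported 1.220±0.507 vs. 1.896±1.095 means both methods overestimate, the mesh-based one by far less and far more consistently.
    
    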

  17. 11th International Workshop on the Algorithmic Foundations of Robotics

    CERN Document Server

    Amato, Nancy; Isler, Volkan; Stappen, A

    2015-01-01

    This carefully edited volume is the outcome of the eleventh edition of the Workshop on Algorithmic Foundations of Robotics (WAFR), which is the premier venue showcasing cutting edge research in algorithmic robotics. The eleventh WAFR, which was held August 3-5, 2014 at Boğaziçi University in Istanbul, Turkey, continued this tradition. This volume contains extended versions of the 42 papers presented at WAFR. These contributions highlight the cutting edge research in classical robotics problems (e.g. manipulation, motion, path, multi-robot and kinodynamic planning), geometric and topological computation in robotics as well as novel applications such as informative path planning, active sensing and surgical planning. This book, rich in topics and authoritative contributors, is a unique reference on the current developments and new directions in the field of algorithmic foundations.

  18. Rendering LGBTQ+ Visible in Nursing: Embodying the Philosophy of Caring Science.

    Science.gov (United States)

    Goldberg, Lisa; Rosenburg, Neal; Watson, Jean

    2017-06-01

    Although health care institutions continue to address the importance of diversity initiatives, the standard(s) for treatment remain historically and institutionally grounded in a sociocultural privileging of heterosexuality. As a result, lesbian, gay, bisexual, transgender, and queer (LGBTQ+) communities in health care remain largely invisible. This marked invisibility serves as a call to action, a renaissance of thinking within redefined boundaries and limitations. We must therefore refocus our habits of attention on the wholeness of persons and the diversity of their storied experiences as embodied through contemporary society. By rethinking current understandings of LGBTQ+ identities through innovative representation(s) of the media, music industry, and pop culture within a caring science philosophy, nurses have a transformative opportunity to render LGBTQ+ visible and in turn render a transformative opportunity for themselves.

  19. The Peshitta Rendering of Psalm 25: Spelling, Synonyms, and Syntax’

    NARCIS (Netherlands)

    Dyk, J.W.; Loopstra, J.; Sokoloff, M.

    2013-01-01

    The very act of making a translation implies that the rendered text will differ from the source text. The underlying presupposition is that the grammar, syntax, and semantics of the source and target languages are sufficiently divergent as to warrant a translation. Translations differ in how close

  20. Remote parallel rendering for high-resolution tiled display walls

    KAUST Repository

    Nachbaur, Daniel; Dumusc, Raphael; Bilgili, Ahmet; Hernando, Juan; Eilemann, Stefan

    2014-01-01

    © 2014 IEEE. We present a complete, robust and simple to use hardware and software stack delivering remote parallel rendering of complex geometrical and volumetric models to high resolution tiled display walls in a production environment. We describe the setup and configuration, present preliminary benchmarks showing interactive framerates, and describe our contributions for a seamless integration of all the software components.

  2. Variability of left ventricular ejection fraction and volumes with quantitative gated SPECT: influence of algorithm, pixel size and reconstruction parameters in small and normal-sized hearts

    International Nuclear Information System (INIS)

    Hambye, Anne-Sophie; Vervaet, Ann; Dobbeleir, Andre

    2004-01-01

    Several software packages are commercially available for quantification of left ventricular ejection fraction (LVEF) and volumes from myocardial gated single-photon emission computed tomography (SPECT), all of which display a high reproducibility. However, their accuracy has been questioned in patients with a small heart. This study aimed to evaluate the performances of different software and the influence of modifications in acquisition or reconstruction parameters on LVEF and volume measurements, depending on the heart size. In 31 patients referred for gated SPECT, 64² and 128² matrix acquisitions were consecutively obtained. After reconstruction by filtered back-projection (Butterworth, 0.4, 0.5 or 0.6 cycles/cm cut-off, order 6), LVEF and volumes were computed with different software [three versions of Quantitative Gated SPECT (QGS), the Emory Cardiac Toolbox (ECT) and the Stanford University (SU-Segami) Medical School algorithm] and processing workstations. Depending upon their end-systolic volume (ESV), patients were classified into two groups: group I (ESV>30 ml, n=14) and group II (ESV≤30 ml, n=17). Increasing the matrix size from 64² to 128² was associated with significantly larger volumes as well as lower LVEF values. Increasing the filter cut-off frequency had the same effect. With SU-Segami, a larger matrix was associated with larger end-diastolic volumes and smaller ESVs, resulting in a highly significant increase in LVEF. Increasing the filter sharpness, on the other hand, had no influence on LVEF though the measured volumes were significantly larger. (orig.)
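
    The quantities the packages compute are linked by the standard definition of ejection fraction; as a reminder sketch (values illustrative only):

    ```python
    def ejection_fraction(edv, esv):
        """LVEF (%) from end-diastolic and end-systolic volumes (ml)."""
        stroke_volume = edv - esv          # blood ejected per beat
        return 100.0 * stroke_volume / edv
    ```

    This ratio explains the sensitivity reported in the abstract: because ESV is small in small hearts, any systematic volume error from matrix size or filtering shifts the numerator and denominator disproportionately and moves the LVEF.
    
    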

  3. Virtual reality in medicine-computer graphics and interaction techniques.

    Science.gov (United States)

    Haubner, M; Krapichler, C; Lösch, A; Englmeier, K H; van Eimeren, W

    1997-03-01

    This paper describes several new visualization and interaction techniques that enable the use of virtual environments for routine medical purposes. A new volume-rendering method supports shaded and transparent visualization of medical image sequences in real-time with an interactive threshold definition. Based on these rendering algorithms two complementary segmentation approaches offer an intuitive assistance for a wide range of requirements in diagnosis and therapy planning. In addition, a hierarchical data representation for geometric surface descriptions guarantees an optimal use of available hardware resources and prevents inaccurate visualization. The combination of the presented techniques empowers the improved human-machine interface of virtual reality to support every interactive task in medical three-dimensional (3-D) image processing, from visualization of unsegmented data volumes up to the simulation of surgical procedures.

  4. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    Science.gov (United States)

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  5. Multi-objective genetic algorithm parameter estimation in a reduced nuclear reactor model

    Energy Technology Data Exchange (ETDEWEB)

    Marseguerra, M.; Zio, E.; Canetta, R. [Polytechnic of Milan, Dept. of Nuclear Engineering, Milano (Italy)

    2005-07-01

    The fast increase in computing power has rendered, and will continue to render, more and more feasible the incorporation of dynamics in the safety and reliability models of complex engineering systems. In particular, the Monte Carlo simulation framework offers a natural environment for estimating the reliability of systems with dynamic features. However, the time-integration of the dynamic processes may render the Monte Carlo simulation quite burdensome so that it becomes mandatory to resort to validated, simplified models of process evolution. Such models are typically based on lumped effective parameters whose values need to be suitably estimated so as to best fit to the available plant data. In this paper we propose a multi-objective genetic algorithm approach for the estimation of the effective parameters of a simplified model of nuclear reactor dynamics. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest to the actual evolution profiles. A case study is reported in which the real reactor is simulated by the QUAndry based Reactor Kinetics (Quark) code available from the Nuclear Energy Agency and the simplified model is based on the point kinetics approximation to describe the neutron balance in the core and on thermal equilibrium relations to describe the energy exchange between the different loops. (authors)

  7. Physically Based Rendering in the Nightshade NG Visualization Platform

    Science.gov (United States)

    Berglund, Karrie; Larey-Williams, Trystan; Spearman, Rob; Bogard, Arthur

    2015-01-01

    This poster describes our work on creating a physically based rendering model in Nightshade NG planetarium simulation and visualization software (project website: NightshadeSoftware.org). We discuss techniques used for rendering realistic scenes in the universe and dealing with astronomical distances in real time on consumer hardware. We also discuss some of the challenges of rewriting the software from scratch, a project which began in 2011. Nightshade NG can be a powerful tool for sharing data and visualizations. The desktop version of the software is free for anyone to download, use, and modify; it runs on Windows and Linux (and eventually Mac). If you are looking to disseminate your data or models, please stop by to discuss how we can work together. Nightshade software is used in literally hundreds of digital planetarium systems worldwide. Countless teachers and astronomy education groups run the software on flat screens. This wide use makes Nightshade an effective tool for dissemination to educators and the public. Nightshade NG is an especially powerful visualization tool when projected on a dome. We invite everyone to enter our inflatable dome in the exhibit hall to see this software in a 3D environment.

  8. The rendering context for stereoscopic 3D web

    Science.gov (United States)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web has been studied for many years, but they are basically monoscopic 3D. With the stereoscopic technology gradually maturing, we are researching to integrate the binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand new experience of human-computer interaction. In this paper, we propose a novel approach to apply stereoscopy technologies to the CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation and rotation, can be applied and be perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. After that we discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility is also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating the 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser will create two slightly different images, each representing the left-eye view and right-eye view, both to be combined on the 3D display to generate the illusion of depth. And as the result turns out, elements can be manipulated in a truly 3D space.

  9. Efficient Record Linkage Algorithms Using Complete Linkage Clustering.

    Science.gov (United States)

    Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar

    2016-01-01

    Datasets from different agencies often contain records of the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of available algorithms for record linkage are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient as well as reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a sub-routine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times.
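
    Two of the supporting techniques the abstract names — duplicate elimination via sorting and blocking — are simple enough to sketch. This is our illustration of the general techniques, not the authors' implementation:

    ```python
    from collections import defaultdict

    def deduplicate(records):
        """Eliminate identical copies by sorting: equal records become
        adjacent, so one linear pass keeps a single representative."""
        records = sorted(records)
        return [r for i, r in enumerate(records) if i == 0 or r != records[i - 1]]

    def block_by(records, key_index):
        """Group records by a blocking key (e.g. a name field) so that
        expensive pairwise comparisons happen only within each block."""
        blocks = defaultdict(list)
        for r in records:
            blocks[r[key_index]].append(r)
        return blocks
    ```

    Blocking is what makes clustering-based linkage tractable at millions of records: instead of comparing all O(n²) pairs, the complete-linkage clustering runs independently inside each (much smaller) block.
    
    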

  10. Towards the Availability of the Distributed Cluster Rendering System: Automatic Modeling and Verification

    DEFF Research Database (Denmark)

    Wang, Kemin; Jiang, Zhengtao; Wang, Yongbin

    2012-01-01

    In this study, we proposed a Continuous Time Markov Chain model towards the availability of n-node clusters of the Distributed Rendering System. The model is an infinite one; we formalized it and, based on the model, implemented a software tool which can automatically build the model in the PRISM language. With the tool, whenever the number of nodes n and related parameters vary, we can create the PRISM model file rapidly and then use the PRISM model checker to verify related system properties. At the end of this study, we analyzed and verified the availability distributions of the Distributed Cluster Rendering System...

  11. Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm.

    Directory of Open Access Journals (Sweden)

    Higinio Mora

    Full Text Available The Iterative Closest Point (ICP algorithm is currently one of the most popular methods for rigid registration so that it has become the standard in the Robotics and Computer Vision communities. Many applications take advantage of it to align 2D/3D surfaces due to its popularity and simplicity. Nevertheless, some of its phases present a high computational cost thus rendering impossible some of its applications. In this work, it is proposed an efficient approach for the matching phase of the Iterative Closest Point algorithm. This stage is the main bottleneck of that method so that any efficiency improvement has a great positive impact on the performance of the algorithm. The proposal consists in using low computational cost point-to-point distance metrics instead of classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distance metrics due to their simpler formulation. The experiments carried out have validated the performance, robustness and quality of the proposal. Different experimental cases and configurations have been set up including a heterogeneous set of 3D figures, several scenarios with partial data and random noise. The results prove that an average speed up of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results.

  12. Comparison of two heterogeneity correction algorithms in pituitary gland treatments with intensity-modulated radiation therapy

    International Nuclear Information System (INIS)

    Albino, Lucas D.; Santos, Gabriela R.; Ribeiro, Victor A.B.; Rodrigues, Laura N.; Weltman, Eduardo; Braga, Henrique F.

    2013-01-01

    The dose accuracy calculated by a treatment planning system is directly related to the chosen algorithm. Nowadays, several dose calculation algorithms are commercially available and they differ in calculation time and accuracy, especially when individual tissue densities are taken into account. The aim of this study was to compare two different calculation algorithms from iPlan®, BrainLAB, in the treatment of pituitary gland tumor with intensity-modulated radiation therapy (IMRT). These tumors are located in a region with variable electronic density tissues. The deviations from the plan with no heterogeneity correction were evaluated. For initial validation of the data entered into the planning system, an IMRT plan was simulated in an anthropomorphic phantom and the dose distribution was measured with a radiochromic film. The gamma analysis was performed on the film, comparing it with dose distributions calculated with the X-ray Voxel Monte Carlo (XVMC) algorithm and pencil beam convolution (PBC). Next, 33 patient plans, initially calculated by the PBC algorithm, were recalculated with the XVMC algorithm. The treatment volumes and organs-at-risk dose-volume histograms were compared. No relevant differences were found in dose-volume histograms between XVMC and PBC. However, differences were obtained when comparing each plan with the plan without heterogeneity correction. (author)

  13. Partial volume and aliasing artefacts in helical cone-beam CT

    International Nuclear Information System (INIS)

    Zou Yu; Sidky, Emil Y; Pan, Xiaochuan

    2004-01-01

    A generalization of the quasi-exact algorithms of Kudo et al (2000 IEEE Trans. Med. Imaging 19 902-21) is developed that allows for data acquisition in a 'practical' frame for clinical diagnostic helical, cone-beam computed tomography (CT). The algorithm is investigated using data that model nonlinear partial volume averaging. This investigation leads to an understanding of aliasing artefacts in helical, cone-beam CT image reconstruction. An ad hoc scheme is proposed to mitigate artefacts due to the nonlinear partial volume and aliasing artefacts

  14. Image registration with auto-mapped control volumes

    International Nuclear Information System (INIS)

    Schreibmann, Eduard; Xing Lei

    2006-01-01

    Many image registration algorithms rely on the use of homologous control points on the two input image sets to be registered. In reality, the interactive identification of the control points on both images is tedious, difficult, and often a source of error. We propose a two-step algorithm to automatically identify homologous regions that are used as a priori information during the image registration procedure. First, a number of small control volumes having distinct anatomical features are identified on the model image in a somewhat arbitrary fashion. Instead of attempting to find their correspondences in the reference image through user interaction, in the proposed method, each of the control regions is mapped to the corresponding part of the reference image by using an automated image registration algorithm. A normalized cross-correlation (NCC) function or mutual information was used as the auto-mapping metric and a limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm (L-BFGS) was employed to optimize the function to find the optimal mapping. For rigid registration, the transformation parameters of the system are obtained by averaging those derived from the individual control volumes. In our deformable calculation, the mapped control volumes are treated as the nodes or control points with known positions on the two images. If the number of control volumes is not enough to cover the whole image to be registered, additional nodes are placed on the model image and then located on the reference image in a manner similar to the conventional BSpline deformable calculation. For deformable registration, the established correspondence by the auto-mapped control volumes provides valuable guidance for the registration calculation and greatly reduces the dimensionality of the problem. The two-step registration was applied to three rigid registration cases (two PET-CT registrations and a brain MRI-CT registration) and one deformable registration of
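
    The NCC similarity used as the auto-mapping metric above has a standard definition; a minimal sketch (our own implementation, not the authors' code):

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation between two equally sized image
        patches: 1 for a perfect linear match, -1 for an inverted one."""
        a = np.asarray(a, float).ravel()
        b = np.asarray(b, float).ravel()
        a = a - a.mean()            # remove local brightness offsets
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0
    ```

    Because NCC is invariant to linear intensity changes, it is well suited to matching the same anatomy across images with different contrast, which is why it serves here as an objective for the L-BFGS optimizer.
    
    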

  15. Fast mapping algorithm of lighting spectrum and GPS coordinates for a large area

    Science.gov (United States)

    Lin, Chih-Wei; Hsu, Ke-Fang; Hwang, Jung-Min

    2016-09-01

    In this study, we propose a fast rebuild technology for evaluating light quality in large areas. Outdoor light quality, which is measured by illuminance uniformity and the color rendering index, is difficult to confirm after improvement. We develop an algorithm for a lighting quality mapping system and coordinates using a micro spectrometer and GPS tracker integrated with a quadcopter or unmanned aerial vehicle. After cruising at a constant altitude, lighting quality data is transmitted and immediately mapped to evaluate the light quality in a large area.

  16. Randomized algorithms in automatic control and data mining

    CERN Document Server

    Granichin, Oleg; Toledano-Kitai, Dvora

    2015-01-01

    In the fields of data mining and control, the huge amount of unstructured data and the presence of uncertainty in system descriptions have always been critical issues. The book Randomized Algorithms in Automatic Control and Data Mining introduces the readers to the fundamentals of randomized algorithm applications in data mining (especially clustering) and in automatic control synthesis. The methods proposed in this book guarantee that the computational complexity of classical algorithms and the conservativeness of standard robust control techniques will be reduced. It is shown that when a problem requires "brute force" in selecting among options, algorithms based on random selection of alternatives offer good results with certain probability for a restricted time and significantly reduce the volume of operations.

  17. Auto-recognition of surfaces and auto-generation of material removal volume for finishing process

    Science.gov (United States)

    Kataraki, Pramod S.; Salman Abu Mansor, Mohd

    2018-03-01

    Auto-recognition of surfaces and auto-generation of material removal volumes for the recognised surfaces have become necessary for successful downstream manufacturing activities such as automated process planning and scheduling. A few researchers have contributed to the generation of material removal volumes for a product, but their methods resulted in discontinuity between two adjacent material removal volumes generated from two adjacent faces that form a convex geometry. The need for limitation-free material removal volume generation was addressed, and an algorithm was developed that automatically recognises a computer aided design (CAD) model’s surfaces and auto-generates the material removal volume for the finishing process of the recognised surfaces. The surfaces of the CAD model are successfully recognised by the developed algorithm and the required material removal volume is obtained. The material removal volume discontinuity limitation that occurred in earlier studies is eliminated.

  18. Flight-appropriate 3D Terrain-rendering Toolkit for Synthetic Vision, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The TerraBlocksTM 3D terrain data format and terrain-block-rendering methodology provides an enabling basis for successful commercial deployment of...

  19. Securing mobile ad hoc networks using danger theory-based artificial immune algorithm.

    Science.gov (United States)

    Abdelhaq, Maha; Alsaqour, Raed; Abdelhaq, Shawkat

    2015-01-01

    A mobile ad hoc network (MANET) is a set of mobile, decentralized, and self-organizing nodes that are used in special cases, such as in the military. MANET properties render the environment of this network vulnerable to different types of attacks, including black hole, wormhole and flooding-based attacks. Flooding-based attacks are among the most dangerous attacks, aiming to consume all network resources and thus paralyze the functionality of the whole network. Therefore, the objective of this paper is to investigate the capability of a danger theory-based artificial immune algorithm called the mobile dendritic cell algorithm (MDCA) to detect flooding-based attacks in MANETs. The MDCA applies the dendritic cell algorithm (DCA) to secure the MANET with additional improvements. The MDCA is tested and validated using the Qualnet v7.1 simulation tool. This work also introduces a new simulation module for a flooding attack called the resource consumption attack (RCA) using Qualnet v7.1. The results highlight the high efficiency of the MDCA in detecting RCAs in MANETs.

  20. Korea Microlensing Telescope Network Microlensing Events from 2015: Event-finding Algorithm, Vetting, and Photometry

    Science.gov (United States)

    Kim, D.-J.; Kim, H.-W.; Hwang, K.-H.; Albrow, M. D.; Chung, S.-J.; Gould, A.; Han, C.; Jung, Y. K.; Ryu, Y.-H.; Shin, I.-G.; Yee, J. C.; Zhu, W.; Cha, S.-M.; Kim, S.-L.; Lee, C.-U.; Lee, D.-J.; Lee, Y.; Park, B.-G.; Pogge, R. W.; The KMTNet Collaboration

    2018-02-01

    We present microlensing events in the 2015 Korea Microlensing Telescope Network (KMTNet) data and our procedure for identifying these events. In particular, candidates were detected with a novel “completed-event” microlensing event-finder algorithm. The algorithm works by making linear fits to a (t0, teff, u0) grid of point-lens microlensing models. This approach is rendered computationally efficient by restricting u0 to just two values (0 and 1), which we show is quite adequate. The implementation presented here is specifically tailored to the commission-year character of the 2015 data, but the algorithm is quite general and has already been applied to a completely different (non-KMTNet) data set. We outline expected improvements for 2016 and future KMTNet data. The light curves of the 660 “clear microlensing” and 182 “possible microlensing” events that were found in 2015 are presented along with our policy for their public release.
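
    A rough illustration of why a grid of this kind is cheap: for each trial (t0, teff, u0), the only free parameters left are the source and blend fluxes, which enter linearly, so every grid point reduces to a small linear least-squares solve. The sketch below is an illustrative reconstruction, not KMTNet code: it uses the standard Paczyński point-lens magnification, parameterizes the width as teff = u0·tE, and uses nonzero u0 values plus made-up grid and noise settings of my own choosing.

```python
import numpy as np

def magnification(t, t0, teff, u0):
    """Paczynski point-lens magnification, u(t)^2 = u0^2 + ((t - t0)/tE)^2,
    with the event width parameterized (illustratively) as teff = u0 * tE."""
    tE = teff / u0
    u2 = u0 ** 2 + ((t - t0) / tE) ** 2
    return (u2 + 2.0) / np.sqrt(u2 * (u2 + 4.0))

def point_lens_chi2(t, flux, ferr, t0, teff, u0):
    """Chi^2 of the best *linear* fit flux = fs * A(t) + fb.
    Only the two fluxes are free at each grid point, so the fit is a
    cheap weighted linear least-squares solve."""
    A = magnification(t, t0, teff, u0)
    X = np.column_stack([A, np.ones_like(A)]) / ferr[:, None]
    y = flux / ferr
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ coef) ** 2))

# simulate one event and recover it from a coarse grid
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 400)
flux = 1.0 * magnification(t, t0=50.0, teff=5.0, u0=1.0) + 0.5
flux += rng.normal(0.0, 0.005, t.size)
ferr = np.full(t.size, 0.005)

best = None
for t0 in np.arange(30.0, 70.5, 2.0):
    for teff in (2.5, 5.0, 10.0):
        for u0 in (0.5, 1.0):  # coarse restriction of u0, in the spirit of the paper
            c2 = point_lens_chi2(t, flux, ferr, t0, teff, u0)
            if best is None or c2 < best[0]:
                best = (c2, t0, teff, u0)
```

    The grid search recovers the injected (t0, teff, u0); the real algorithm additionally handles the u0 = 0 limiting profile and far denser grids.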

  1. Predicting the long-term durability of hemp-lime renders in inland and coastal areas using Mediterranean, Tropical and Semi-arid climatic simulations.

    Science.gov (United States)

    Arizzi, Anna; Viles, Heather; Martín-Sanchez, Inés; Cultrone, Giuseppe

    2016-01-15

    Hemp-based composites are eco-friendly building materials as they improve energy efficiency in buildings and entail low waste production and pollutant emissions during their manufacturing process. Nevertheless, the organic nature of hemp enhances the bio-receptivity of the material, with likely negative consequences for its long-term performance in the building. The main purpose of this study was to study the response at macro- and micro-scale of hemp-lime renders subjected to weathering simulations in an environmental cabinet (one year was condensed in twelve days), so as to predict their long-term durability in coastal and inland areas with Mediterranean, Tropical and Semi-arid climates, also in relation with the lime type used. The simulated climatic conditions caused almost unnoticeable mass, volume and colour changes in hemp-lime renders. No efflorescence or physical breakdown was detected in samples subjected to NaCl, because the salt mainly precipitates on the surface of samples and is washed away by the rain. Although there was no visible microbial colonisation, alkaliphilic fungi (mainly Penicillium and Aspergillus) and bacteria (mainly Bacillus and Micrococcus) were isolated in all samples. Microbial growth and diversification were higher under Tropical climate, due to heavier rainfall. The influence of the bacterial activity on the hardening of samples has also been discussed here and related with the formation and stabilisation of vaterite in hemp-lime mixes. This study has demonstrated that hemp-lime renders show good durability towards a wide range of environmental conditions and factors. However, it might be useful to take some specific preventive and maintenance measures to reduce the bio-receptivity of this material, thus ensuring a longer durability on site. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Deep Exemplar 2D-3D Detection by Adapting from Real to Rendered Views

    OpenAIRE

    Massa, Francisco; Russell, Bryan; Aubry, Mathieu

    2015-01-01

    This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefi...

  3. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek

    2017-10-17

    Betweenness centrality quantifies the importance of nodes in a graph in many applications, including network analysis, community detection and identification of influential users. Typically, graphs in such applications evolve over time. Thus, the computation of betweenness centrality should be performed incrementally. This is challenging because updating even a single edge may trigger the computation of all-pairs shortest paths in the entire graph. Existing approaches cannot scale to large graphs: they either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations, rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving graphs. We decompose the graph into biconnected components and prove that processing can be localized within the affected components. iCentral is the first algorithm to support incremental betweenness centrality computation within a graph component. This is done efficiently, in linear space; consequently, iCentral scales to large graphs. We demonstrate with real datasets that the serial implementation of iCentral is up to 3.7 times faster than existing serial methods. Our parallel implementation, which scales to large graphs, is an order of magnitude faster than the state-of-the-art parallel algorithm, while using an order of magnitude less computational resources.
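
    For context, the from-scratch computation that incremental methods like iCentral avoid repeating after every edge update is Brandes' algorithm: one BFS plus a dependency-accumulation pass per source node. A minimal sketch for unweighted, undirected graphs (this is the classical baseline, not the iCentral algorithm itself):

```python
from collections import deque

def brandes_betweenness(adj):
    """Brandes' exact betweenness centrality for an unweighted graph
    given as {node: list_of_neighbours}."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        # single-source shortest paths by BFS
        sigma = dict.fromkeys(adj, 0)      # number of shortest paths from s
        dist = dict.fromkeys(adj, -1)
        pred = {v: [] for v in adj}        # shortest-path predecessors
        sigma[s], dist[s] = 1, 0
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # accumulate dependencies in reverse BFS order
        delta = dict.fromkeys(adj, 0.0)
        for w in reversed(order):
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # each unordered pair was counted twice in an undirected graph
    return {v: c / 2.0 for v, c in bc.items()}

# the middle node of a 3-node path lies on the single (0, 2) shortest path
path = {0: [1], 1: [0, 2], 2: [1]}
bc = brandes_betweenness(path)
```

    Running this from scratch after every edge update is exactly the cost that biconnected-component localization is designed to avoid.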

  4. Automatic segmentation of tumor-laden lung volumes from the LIDC database

    Science.gov (United States)

    O'Dell, Walter G.

    2012-03-01

    The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding extra-pulmonary tissue from consideration. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on tumor-laden lungs. Of particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goals of maximally excluding extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D floodfilling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung where in a 2-D slice the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
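
    Two of the listed building blocks, intensity thresholding and flood-filling to separate internal air from outside air, can be illustrated on a toy 2D "slice". This is a simplified sketch of those steps only, with made-up geometry and threshold, not the paper's algorithm:

```python
import numpy as np
from collections import deque

def flood_fill(mask, seed):
    """Return the 4-connected component of True pixels containing seed."""
    comp = np.zeros_like(mask, dtype=bool)
    if not mask[seed]:
        return comp
    q = deque([seed])
    comp[seed] = True
    h, w = mask.shape
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and mask[rr, cc] and not comp[rr, cc]:
                comp[rr, cc] = True
                q.append((rr, cc))
    return comp

def segment_lung_like(image, threshold=-400):
    """Toy thresholding + flood-fill pipeline:
    1. keep pixels below the air/tissue threshold (lungs plus outside air),
    2. flood-fill from a corner to identify the outside air,
    3. subtract it, leaving only the internal low-intensity (lung) region."""
    low = image < threshold
    outside = flood_fill(low, (0, 0))
    return low & ~outside

# synthetic slice: soft tissue (0 HU), outside air at the left edge,
# and an internal air pocket standing in for lung
img = np.zeros((20, 20))
img[:, :2] = -1000
img[8:12, 8:12] = -800
lungs = segment_lung_like(img)
```

    The real pipeline adds morphological operations and snake-based clipping precisely because this simple version would also snip off juxtapleural nodules.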

  5. Algorithm development for Maxwell's equations for computational electromagnetism

    Science.gov (United States)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.
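
    The paper's scheme is implicit (alternating direction implicit) and finite-volume; as a much simpler point of reference, the explicit 1D staggered-grid (Yee) update below shows what a central-difference time-domain solution of Maxwell's curl equations looks like. All parameters are illustrative, and the endpoints of Ez are held at zero (perfect-conductor boundaries).

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=300, courant=0.5):
    """Minimal explicit 1D Yee FDTD update for Maxwell's curl equations
    in normalized units (c = 1, unit grid spacing). Note: this is the
    standard *explicit* counterpart shown only to illustrate central
    differencing in the time domain; the paper's method is implicit."""
    Ez = np.zeros(n_cells)        # E nodes
    Hy = np.zeros(n_cells - 1)    # H nodes, staggered between E nodes
    for step in range(n_steps):
        Hy += courant * np.diff(Ez)        # update H from the curl of E
        Ez[1:-1] += courant * np.diff(Hy)  # update E from the curl of H
        # soft Gaussian source in the middle of the grid
        Ez[n_cells // 2] += np.exp(-((step - 30) / 10.0) ** 2)
    return Ez

Ez = fdtd_1d()
```

    With a Courant number below 1 this explicit scheme is stable; the appeal of the implicit ADI formulation in the paper is that it lifts that time-step restriction.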

  6. Impact of respiratory-correlated CT sorting algorithms on the choice of margin definition for free-breathing lung radiotherapy treatments.

    Science.gov (United States)

    Thengumpallil, Sheeba; Germond, Jean-François; Bourhis, Jean; Bochud, François; Moeckli, Raphaël

    2016-06-01

    To investigate the impact of the Toshiba phase- and amplitude-sorting algorithms on margin strategies for free-breathing lung radiotherapy treatments in the presence of breathing variations, a 4D CT of a sphere inside a dynamic thorax phantom was acquired. The 4D CT was reconstructed according to the phase- and amplitude-sorting algorithms. The phantom was moved by reproducing amplitude, frequency, and mixed amplitude and frequency variations. Artefact analysis was performed for Mid-Ventilation and ITV-based strategies on the images reconstructed by the phase- and amplitude-sorting algorithms. The target volume deviation was assessed by comparing the target volume acquired during irregular motion to the volume acquired during regular motion. The amplitude-sorting algorithm shows reduced artefacts only for amplitude variations, and the phase-sorting algorithm only for frequency variations. For combined amplitude and frequency variations, both algorithms perform similarly. Most of the artefacts are blurring and incomplete structures. We found larger artefacts and volume differences for the Mid-Ventilation strategy than for the ITV strategy, resulting in a higher relative difference of the surface distortion value, which ranged between a maximum of 14.6% and a minimum of 4.1%. The amplitude-sorting algorithm is superior to the phase-sorting algorithm in reducing motion artefacts for amplitude variations, while the phase-sorting algorithm is superior for frequency variations. A proper choice of 4D CT sorting algorithm is important in order to reduce motion artefacts, especially if the Mid-Ventilation strategy is used. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
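
    The two sorting strategies can be stated abstractly: phase sorting bins each acquisition by the fraction of the breathing cycle elapsed, while amplitude sorting bins it by where the respiratory signal sits between its extremes. A toy version is sketched below; real scanners derive phase from detected peaks of an irregular trace, whereas this sketch assumes a known, regular period.

```python
import numpy as np

def phase_sort(times, period, n_bins=10):
    """Assign acquisition times to respiratory bins by phase, i.e. the
    fraction of the (assumed regular) breathing cycle elapsed."""
    phase = (times % period) / period
    return np.minimum((phase * n_bins).astype(int), n_bins - 1)

def amplitude_sort(signal, n_bins=10):
    """Assign samples to bins by where the respiratory amplitude falls
    between its observed minimum and maximum."""
    lo, hi = signal.min(), signal.max()
    frac = (signal - lo) / (hi - lo)
    return np.minimum((frac * n_bins).astype(int), n_bins - 1)

t = np.linspace(0.0, 20.0, 400)
breathing = np.sin(2 * np.pi * t / 4.0)   # 4 s breathing period
p_bins = phase_sort(t, period=4.0)
a_bins = amplitude_sort(breathing)
```

    The sketch makes the failure modes plausible: if the amplitude drifts, the phase bins stay valid but amplitude bins shift, and vice versa when the frequency drifts, matching the complementary behaviour reported above.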

  7. Rendering potential wearable robot designs with the LOPES gait trainer.

    Science.gov (United States)

    Koopman, B; van Asseldonk, E H F; van der Kooij, H; van Dijk, W; Ronsse, R

    2011-01-01

    In recent years, wearable robots (WRs) for rehabilitation, personal assistance, or human augmentation have been gaining increasing interest. To make these devices more energy efficient, radical changes to the mechanical structure of the device are being considered. However, it remains very difficult to predict how people will respond to, and interact with, WRs that differ in terms of mechanical design. Users may adjust their gait pattern in response to the mechanical restrictions or properties of the device. The goal of this pilot study is to show the feasibility of rendering the mechanical properties of different potential WR designs using the robotic gait training device LOPES. This paper describes a new method that selectively cancels the dynamics of LOPES itself and adds the dynamics of the rendered WR using two parallel inverse models. Adaptive frequency oscillators were used to obtain estimates of the joint position, velocity, and acceleration. Using the inverse models, different WR designs can be evaluated, eliminating the need to build several prototypes. As a proof of principle, we simulated the effect of a very simple WR that consisted of a mass attached to the ankles. Preliminary results show that we are partially able to cancel the dynamics of LOPES. Additionally, the simulation of the mass showed an increase in muscle activity, but not at the same level as in the control condition, where subjects actually carried the mass. In conclusion, the results in this paper suggest that LOPES can be used to render different WRs. In addition, it is very likely that the results can be further improved when more effort is put into retrieving proper estimates of the velocity and acceleration, which are required for the inverse models. © 2011 IEEE

  8. Rendering of HDR content on LDR displays: an objective approach

    Science.gov (United States)

    Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick

    2015-09-01

    Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is however non-trivial, and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.
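
    As an example of the content-dependent parameters a TMO needs, the classic Reinhard global operator compresses luminance as L/(1+L) after scaling by a "key" value; the sketch below (a standard textbook operator, not the paper's method) makes that parameter explicit.

```python
import numpy as np

def reinhard_tonemap(luminance, key=0.18):
    """Reinhard global operator: scale the scene to a chosen 'key' via
    the log-average luminance, then compress with L / (1 + L).
    The key is exactly the kind of content-dependent parameter that an
    objective selection method would tune per image."""
    eps = 1e-6                                   # avoid log(0)
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    scaled = key * luminance / log_avg
    return scaled / (1.0 + scaled)

# a toy HDR luminance map spanning ~6 orders of magnitude
hdr = np.concatenate([np.full(100, 0.01), np.full(100, 10.0), np.full(100, 5000.0)])
ldr = reinhard_tonemap(hdr)
```

    The output is monotone in the input and bounded in [0, 1), but how much detail survives in shadows versus highlights depends entirely on the key, which is why a fixed default can fail on atypical content.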

  9. Deep Learning Algorithm for Auto-Delineation of High-Risk Oropharyngeal Clinical Target Volumes With Built-In Dice Similarity Coefficient Parameter Optimization Function.

    Science.gov (United States)

    Cardenas, Carlos E; McCarroll, Rachel E; Court, Laurence E; Elgohari, Baher A; Elhalawani, Hesham; Fuller, Clifton D; Kamal, Mona J; Meheissen, Mohamed A M; Mohamed, Abdallah S R; Rao, Arvind; Williams, Bowman; Wong, Andrew; Yang, Jinzhong; Aristophanous, Michalis

    2018-06-01

    Automating and standardizing the contouring of clinical target volumes (CTVs) can reduce interphysician variability, which is one of the largest sources of uncertainty in head and neck radiation therapy. In addition to using uniform margin expansions to auto-delineate high-risk CTVs, very little work has been performed to provide patient- and disease-specific high-risk CTVs. The aim of the present study was to develop a deep neural network for the auto-delineation of high-risk CTVs. Fifty-two oropharyngeal cancer patients were selected for the present study. All patients were treated at The University of Texas MD Anderson Cancer Center from January 2006 to August 2010 and had previously contoured gross tumor volumes and CTVs. We developed a deep learning algorithm using deep auto-encoders to identify physician contouring patterns at our institution. These models use distance map information from surrounding anatomic structures and the gross tumor volume as input parameters and conduct voxel-based classification to identify voxels that are part of the high-risk CTV. In addition, we developed a novel probability threshold selection function, based on the Dice similarity coefficient (DSC), to improve the generalization of the predicted volumes. The DSC-based function is implemented during an inner cross-validation loop, and probability thresholds are selected a priori during model parameter optimization. We performed a volumetric comparison between the predicted and manually contoured volumes to assess our model. The predicted volumes had a median DSC value of 0.81 (range 0.62-0.90), median mean surface distance of 2.8 mm (range 1.6-5.5), and median 95th Hausdorff distance of 7.5 mm (range 4.7-17.9) when comparing our predicted high-risk CTVs with the physician manual contours. These predicted high-risk CTVs provided close agreement to the ground-truth compared with current interobserver variability. The predicted contours could be implemented clinically, with only
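
    The Dice similarity coefficient that drives both the evaluation and the threshold-selection function is simple to state. The sketch below is a simplified stand-in for the paper's inner-cross-validation selection procedure; the helper names are mine.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient DSC = 2|A ∩ B| / (|A| + |B|)
    between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def pick_threshold(prob, reference, thresholds):
    """Binarize a voxel-probability map at each candidate cutoff and
    keep the cutoff whose volume best matches a reference contour by
    DSC -- a toy version of DSC-based threshold selection."""
    return max(thresholds, key=lambda t: dice(prob >= t, reference))

# toy probability map over six voxels and a reference contour
prob = np.array([0.10, 0.20, 0.30, 0.80, 0.90, 0.95])
reference = np.array([0, 0, 0, 1, 1, 1], dtype=bool)
best_t = pick_threshold(prob, reference, thresholds=(0.25, 0.50, 0.75))
```

    In the paper this selection happens a priori, inside an inner cross-validation loop, so the chosen threshold generalizes beyond the cases used to pick it.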

  10. Unconscious neural processing differs with method used to render stimuli invisible

    Directory of Open Access Journals (Sweden)

    Sergey Victor Fogelson

    2014-06-01

    Visual stimuli can be kept from awareness using various methods. The extent of processing that a given stimulus receives in the absence of awareness is typically used to make claims about the role of consciousness more generally. The neural processing elicited by a stimulus, however, may also depend on the method used to keep it from awareness, and not only on whether the stimulus reaches awareness. Here we report that the method used to render an image invisible has a dramatic effect on how category information about the unseen stimulus is encoded across the human brain. We collected fMRI data while subjects viewed images of faces and tools that were rendered invisible using either continuous flash suppression (CFS) or chromatic flicker fusion (CFF). In a third condition, we presented the same images under normal fully visible viewing conditions. We found that category information about visible images could be extracted from patterns of fMRI responses throughout areas of neocortex known to be involved in face or tool processing. However, category information about stimuli kept from awareness using CFS could be recovered exclusively within occipital cortex, whereas information about stimuli kept from awareness using CFF was also decodable within temporal and frontal regions. We conclude that unconsciously presented objects are processed differently depending on how they are rendered subjectively invisible. Caution should therefore be used in making generalizations on the basis of any one method about the neural basis of consciousness or the extent of information processing without consciousness.

  11. Unconscious neural processing differs with method used to render stimuli invisible.

    Science.gov (United States)

    Fogelson, Sergey V; Kohler, Peter J; Miller, Kevin J; Granger, Richard; Tse, Peter U

    2014-01-01

    Visual stimuli can be kept from awareness using various methods. The extent of processing that a given stimulus receives in the absence of awareness is typically used to make claims about the role of consciousness more generally. The neural processing elicited by a stimulus, however, may also depend on the method used to keep it from awareness, and not only on whether the stimulus reaches awareness. Here we report that the method used to render an image invisible has a dramatic effect on how category information about the unseen stimulus is encoded across the human brain. We collected fMRI data while subjects viewed images of faces and tools that were rendered invisible using either continuous flash suppression (CFS) or chromatic flicker fusion (CFF). In a third condition, we presented the same images under normal fully visible viewing conditions. We found that category information about visible images could be extracted from patterns of fMRI responses throughout areas of neocortex known to be involved in face or tool processing. However, category information about stimuli kept from awareness using CFS could be recovered exclusively within occipital cortex, whereas information about stimuli kept from awareness using CFF was also decodable within temporal and frontal regions. We conclude that unconsciously presented objects are processed differently depending on how they are rendered subjectively invisible. Caution should therefore be used in making generalizations on the basis of any one method about the neural basis of consciousness or the extent of information processing without consciousness.

  12. Evaluation of margining algorithms in commercial treatment planning systems

    International Nuclear Information System (INIS)

    Pooler, Alistair M.; Mayles, Helen M.; Naismith, Olivia F.; Sage, John P.; Dearnaley, David P.

    2008-01-01

    Introduction: During commissioning of the Pinnacle (Philips) treatment planning system (TPS), the margining algorithm was investigated and was found to produce larger PTVs than Plato (Nucletron) for identical GTVs. Subsequent comparison of PTV volumes resulting from the QA outlining exercise for the CHHIP (Conventional or Hypofractionated High Dose IMRT for Prostate Ca.) trial confirmed that there were differences in the TPSs' margining algorithms. Margining, and the clinical impact of the different PTVs, in seven different planning and virtual simulation systems (Pinnacle, Plato, Prosoma (MedCom), Eclipse (7.3 and 7.5) (Varian), MasterPlan (Nucletron), Xio (CMS) and Advantage Windows (AW) (GE)) are investigated, and a simple test for 3D margining consistency is proposed. Methods: Using each TPS, two different sets of prostate GTVs on 2.5 mm and 5 mm slices were margined according to the CHHIP protocol to produce PTV3 (prostate + 5 mm/0 mm post), PTV2 (PTV3 + 5 mm) and PTV1 (prostate and seminal vesicles + 10 mm). GTVs and PTVs were imported into Pinnacle for volume calculation. DVHs for 5 mm slice plans, created using the smallest PTVs, were recalculated on the largest PTV dataset and vice versa. Since adding a margin of 50 mm to a structure should give the same result as adding five margins of 10 mm, this was tested for each TPS (consistency test) using an octahedron as the GTV and CT datasets with 2.5 mm and 5 mm slices. Results: The CHHIP PTV3 and PTV1 volumes had a standard deviation, across the seven systems, of 5%, and PTV2 (margined twice) of 9%, on the 5 mm slices. For 2.5 mm slices the standard deviations were 4% and 6%. The ratio of the Pinnacle and the Eclipse 7.3 PTV2 volumes was 1.25. Rectal doses were significantly increased when encompassing Pinnacle PTVs (V50 = 42.8%) compared to Eclipse 7.3 PTVs (V50 = 36.4%). Conversely, fields that adequately treated an Eclipse 7.3 PTV2 were inadequate for a Pinnacle PTV2. AW and Plato PTV volumes were the most consistent
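
    The consistency test translates directly to voxel masks: dilating once by a large margin must equal dilating repeatedly by smaller margins that sum to it. An illustrative check with a city-block (cross-shaped) structuring element, using margins in voxels rather than mm (the 50 mm versus 5 × 10 mm test in the paper is the same idea):

```python
import numpy as np

def dilate(mask, r):
    """Dilate a 2D binary mask by r voxels (city-block metric) via
    repeated one-voxel dilation with a 4-neighbour cross."""
    out = mask.copy()
    for _ in range(r):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]
        grown[:-1, :] |= out[1:, :]
        grown[:, 1:] |= out[:, :-1]
        grown[:, :-1] |= out[:, 1:]
        out = grown
    return out

# consistency test: one 6-voxel margin vs. three successive 2-voxel margins
gtv = np.zeros((32, 32), dtype=bool)
gtv[15:17, 15:17] = True
once = dilate(gtv, 6)
stepwise = dilate(dilate(dilate(gtv, 2), 2), 2)
consistent = np.array_equal(once, stepwise)
```

    For true morphological dilation the two results agree exactly, because dilations with the same structuring element compose; a TPS whose margining fails this check is approximating the expansion rather than computing it.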

  13. Automated CT-based segmentation and quantification of total intracranial volume

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Carlos; Wahlund, Lars-Olof; Westman, Eric [Karolinska Institute, Department of Neurobiology, Care Sciences and Society (NVS), Division of Clinical Geriatrics, Stockholm (Sweden); Edholm, Kaijsa; Cavallin, Lena; Muller, Susanne; Axelsson, Rimma [Karolinska Institute, Department of Clinical Science, Intervention and Technology, Division of Medical Imaging and Technology, Stockholm (Sweden); Karolinska University Hospital in Huddinge, Department of Radiology, Stockholm (Sweden); Simmons, Andrew [King's College London, Institute of Psychiatry, London (United Kingdom); NIHR Biomedical Research Centre for Mental Health and Biomedical Research Unit for Dementia, London (United Kingdom); Skoog, Ingmar [Gothenburg University, Department of Psychiatry and Neurochemistry, The Sahlgrenska Academy, Gothenburg (Sweden); Larsson, Elna-Marie [Uppsala University, Department of Surgical Sciences, Radiology, Akademiska Sjukhuset, Uppsala (Sweden)

    2015-11-15

    To develop an algorithm to segment and obtain an estimate of total intracranial volume (tICV) from computed tomography (CT) images. Thirty-six CT examinations from 18 patients were included. Ten patients were examined twice the same day and eight patients twice six months apart (these patients also underwent MRI). The algorithm combines morphological operations, intensity thresholding and mixture modelling. The method was validated against manual delineation and its robustness assessed from repeated imaging examinations. Using automated MRI software, the comparability with MRI was investigated. Volumes were compared based on average relative volume differences and their magnitudes; agreement was shown by a Bland-Altman analysis graph. We observed good agreement between our algorithm and manual delineation by a trained radiologist: the Pearson's correlation coefficient was r = 0.94, tICV[manual] = 1.05 × tICV[automated] − 33.78 mL (R² = 0.88). Bland-Altman analysis showed a bias of 31 mL and a standard deviation of 30 mL over a range of 1265 to 1526 mL. tICV measurements derived from CT using our proposed algorithm have been shown to be reliable and consistent compared to manual delineation. However, it appears difficult to directly compare tICV measures between CT and MRI. (orig.)
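
    The "mixture modelling" step can be illustrated with a minimal two-component 1D Gaussian mixture fitted by expectation-maximization. This is a generic sketch, not the authors' implementation; the initialization scheme and the synthetic intensity values are my own choices.

```python
import numpy as np

def gmm2_em(x, n_iter=50):
    """Fit a two-component 1D Gaussian mixture by EM and return
    (means, variances, weights). Separating two intensity classes
    this way yields a data-driven threshold between them."""
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])            # spread the initial means
    var = np.array([x.var(), x.var()]) + 1e-9
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: per-sample responsibilities of each component
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-9
        pi = n / len(x)
    return mu, var, pi

# synthetic intensities: two well-separated "tissue classes"
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(20, 5, 500), rng.normal(120, 10, 500)])
mu, var, pi = gmm2_em(x)
```

    In the CT pipeline such a fit would follow the morphological and thresholding steps, refining the boundary between intensity classes rather than relying on a single fixed cutoff.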

  14. Automated CT-based segmentation and quantification of total intracranial volume

    International Nuclear Information System (INIS)

    Aguilar, Carlos; Wahlund, Lars-Olof; Westman, Eric; Edholm, Kaijsa; Cavallin, Lena; Muller, Susanne; Axelsson, Rimma; Simmons, Andrew; Skoog, Ingmar; Larsson, Elna-Marie

    2015-01-01

    To develop an algorithm to segment and obtain an estimate of total intracranial volume (tICV) from computed tomography (CT) images. Thirty-six CT examinations from 18 patients were included. Ten patients were examined twice the same day and eight patients twice six months apart (these patients also underwent MRI). The algorithm combines morphological operations, intensity thresholding and mixture modelling. The method was validated against manual delineation and its robustness assessed from repeated imaging examinations. Using automated MRI software, the comparability with MRI was investigated. Volumes were compared based on average relative volume differences and their magnitudes; agreement was shown by a Bland-Altman analysis graph. We observed good agreement between our algorithm and manual delineation by a trained radiologist: the Pearson's correlation coefficient was r = 0.94, tICV[manual] = 1.05 × tICV[automated] − 33.78 mL (R² = 0.88). Bland-Altman analysis showed a bias of 31 mL and a standard deviation of 30 mL over a range of 1265 to 1526 mL. tICV measurements derived from CT using our proposed algorithm have been shown to be reliable and consistent compared to manual delineation. However, it appears difficult to directly compare tICV measures between CT and MRI. (orig.)

  15. 2nd International Conference on Harmony Search Algorithm

    CERN Document Server

    Geem, Zong

    2016-01-01

    The Harmony Search Algorithm (HSA) is one of the most well-known techniques in the field of soft computing, an important paradigm in the science and engineering community.  This volume, the proceedings of the 2nd International Conference on Harmony Search Algorithm 2015 (ICHSA 2015), brings together contributions describing the latest developments in the field of soft computing with a special focus on HSA techniques. It includes coverage of new methods that have potentially immense application in various fields. Contributed articles cover aspects of the following topics related to the Harmony Search Algorithm: analytical studies; improved, hybrid and multi-objective variants; parameter tuning; and large-scale applications.  The book also contains papers discussing recent advances on the following topics: genetic algorithms; evolutionary strategies; the firefly algorithm and cuckoo search; particle swarm optimization and ant colony optimization; simulated annealing; and local search techniques.   This book ...

  16. Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering

    KAUST Repository

    Fonarev, Alexander; Mikhalev, Alexander; Serdyukov, Pavel; Gusev, Gleb; Oseledets, Ivan

    2017-01-01

    preference information for making good recommendations. One of the most successful approaches, called Representative Based Matrix Factorization, is based on Maxvol algorithm. Unfortunately, this approach has one important limitation - a seed set of a

  17. Software System for Vocal Rendering of Printed Documents

    Directory of Open Access Journals (Sweden)

    Marian DARDALA

    2008-01-01

    The objective of this paper is to present a software system architecture developed to render printed documents in vocal form. The paper also describes the software solutions that exist as software components and are necessary for document processing, as well as for controlling the multimedia devices used by the system. The system is useful for people with visual disabilities, who can access the contents of documents without the documents having to be printed in the Braille system or to exist in audio form.

  18. Flight-appropriate 3D Terrain-rendering Toolkit for Synthetic Vision, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — TerraMetrics proposes an SBIR Phase I R/R&D effort to develop a key 3D terrain-rendering technology that provides the basis for successful commercial deployment...

  19. Actuator Disc Model Using a Modified Rhie-Chow/SIMPLE Pressure Correction Algorithm

    DEFF Research Database (Denmark)

    Rethore, Pierre-Elouan; Sørensen, Niels

    2008-01-01

    An actuator disc model for the flow solver EllipSys (2D&3D) is proposed. It is based on a correction of the Rhie-Chow algorithm for using discrete body forces in a collocated-variable finite volume CFD code. It is compared with three cases where an analytical solution is known.

  20. New reconstruction algorithm for digital breast tomosynthesis: better image quality for humans and computers.

    Science.gov (United States)

    Rodriguez-Ruiz, Alejandro; Teuwen, Jonas; Vreemann, Suzan; Bouwman, Ramona W; van Engen, Ruben E; Karssemeijer, Nico; Mann, Ritse M; Gubern-Merida, Albert; Sechopoulos, Ioannis

    2017-01-01

    Background The image quality of digital breast tomosynthesis (DBT) volumes depends greatly on the reconstruction algorithm. Purpose To compare two DBT reconstruction algorithms used by the Siemens Mammomat Inspiration system, filtered back projection (FBP), and FBP with iterative optimizations (EMPIRE), using qualitative analysis by human readers and detection performance of machine learning algorithms. Material and Methods Visual grading analysis was performed by four readers specialized in breast imaging who scored 100 cases reconstructed with both algorithms (70 lesions). Scoring (5-point scale: 1 = poor to 5 = excellent quality) was performed on presence of noise and artifacts, visualization of skin-line and Cooper's ligaments, contrast, and image quality, and, when present, lesion visibility. In parallel, a three-dimensional deep-learning convolutional neural network (3D-CNN) was trained (n = 259 patients, 51 positives with BI-RADS 3, 4, or 5 calcifications) and tested (n = 46 patients, nine positives), separately with FBP and EMPIRE volumes, to discriminate between samples with and without calcifications. The partial area under the receiver operating characteristic curve (pAUC) of each 3D-CNN was used for comparison. Results EMPIRE reconstructions showed better contrast (3.23 vs. 3.10, P = 0.010), image quality (3.22 vs. 3.03, P algorithm provides DBT volumes with better contrast and image quality, fewer artifacts, and improved visibility of calcifications for human observers, as well as improved detection performance with deep-learning algorithms.

  1. Brain volume measurement using three-dimensional magnetic resonance images

    International Nuclear Information System (INIS)

    Ishimaru, Yoshihiro

    1996-01-01

    This study was designed to validate an accurate measurement method of human brain volume using three-dimensional (3D) MRI data on a workstation, and to establish an optimal correction method for human brain volume in the diagnosis of brain atrophy. 3D MRI data were acquired by a fast SPGR sequence using a 1.5 T MR imager. 3D MRI data were segmented by the region growing method and the 3D image was displayed by the surface rendering method on the workstation. Brain volume was measured by the volume measurement function of the workstation. In order to validate the accurate measurement method, phantoms and a specimen of human brain were examined. Phantom volume was measured by changing the lower level of the threshold value. At the appropriate threshold value, the percentage errors of the phantoms and the specimen were within 0.6% and 0.08%, respectively. To establish the optimal correction method, 130 normal volunteers were examined. Brain volumes corrected with height, weight, body surface area, and alternative skull volume were evaluated. Brain volume index, which is defined as brain volume divided by alternative skull volume, had the best correlation with age (r=0.624, p<0.05). No gender difference was observed in brain volume index, in contrast to brain volume. The clinical usefulness of this correction method for brain atrophy diagnosis was evaluated in 85 patients. Diagnosis by 2D spin echo MR images was compared with brain volume index. Diagnosis of brain atrophy by 2D MR images was concordant with the evaluation by brain volume index. These results indicate that this measurement method has high accuracy, and that it is important to set the appropriate threshold value. Brain volume index is an appropriate index for the evaluation of human brain volume and is considered to be useful for the diagnosis of brain atrophy. (author)
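The measurement principle, counting voxels above a threshold and normalising brain volume by an estimate of skull volume, can be sketched as below. The voxel size, threshold, and the 1.25 ratio between skull and brain volume are invented for illustration.

```python
import numpy as np

# Toy 3D "MRI" volume; a real pipeline would segment by region growing.
rng = np.random.default_rng(0)
mri = rng.random((64, 64, 64))

voxel_mm3 = 1.0 * 1.0 * 1.5        # assumed voxel dimensions in mm
threshold = 0.5                    # the paper stresses choosing this carefully
brain_voxels = np.count_nonzero(mri > threshold)
brain_volume_ml = brain_voxels * voxel_mm3 / 1000.0

skull_volume_ml = 1.25 * brain_volume_ml    # stand-in for the measured skull volume
brain_volume_index = brain_volume_ml / skull_volume_ml
print(round(brain_volume_index, 2))  # → 0.8
```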

  2. Linear programming algorithms and applications

    CERN Document Server

    Vajda, S

    1981-01-01

    This text is based on a course of about 16 hours of lectures to students of mathematics, statistics, and/or operational research. It is intended to introduce readers to the very wide range of applicability of linear programming, covering problems of management, administration, transportation and a number of other uses which are mentioned in their context. The emphasis is on numerical algorithms, which are illustrated by examples of such modest size that the solutions can be obtained using pen and paper. It is clear that these methods, if applied to larger problems, can also be carried out on automatic (electronic) computers. Commercially available computer packages are, in fact, mainly based on algorithms explained in this book. The author is convinced that the user of these algorithms ought to be knowledgeable about the underlying theory. Therefore this volume is not merely addressed to the practitioner, but also to the mathematician who is interested in relatively new developments in algebraic theory and in...
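A pen-and-paper-sized LP of the kind the book illustrates can even be solved without a library, by enumerating the vertices of the feasible polygon (the optimum of an LP lies at a vertex). The objective and constraints below are our own toy example.

```python
from itertools import combinations

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0
lines = [(1, 1, 4), (1, 3, 6), (1, 0, 0), (0, 1, 0)]  # each as a*x + b*y = c boundary

def intersect(c1, c2):
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                      # parallel boundaries
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    x, y = p
    eps = 1e-9
    return x >= -eps and y >= -eps and x + y <= 4 + eps and x + 3 * y <= 6 + eps

vertices = [p for c1, c2 in combinations(lines, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])  # → (4.0, 0.0) 12.0
```

This enumeration quickly becomes impractical as the problem grows, which is exactly why the simplex-type algorithms the book covers matter.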

  3. A physics-based algorithm for real-time simulation of electrosurgery procedures in minimally invasive surgery.

    Science.gov (United States)

    Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu

    2014-12-01

    High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.
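The block compressed row storage (BCRS) structure the authors credit for cheap topology updates stores dense sub-blocks plus CSR-style index arrays, so removing a vaporized element touches only a few blocks. A minimal sketch with our own toy layout (not the paper's matrices):

```python
import numpy as np

bs = 2  # block size
# 4x4 matrix holding two nonzero 2x2 blocks, at block positions (0,0) and (1,1)
blocks  = np.array([[[1., 2.], [3., 4.]],
                    [[5., 6.], [7., 8.]]])
col_idx = np.array([0, 1])       # block-column index of each stored block
row_ptr = np.array([0, 1, 2])    # block-row i owns blocks[row_ptr[i]:row_ptr[i+1]]

def bcrs_matvec(blocks, col_idx, row_ptr, x, bs):
    # y = A @ x using only the stored blocks
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows * bs)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            j = col_idx[k]
            y[i*bs:(i+1)*bs] += blocks[k] @ x[j*bs:(j+1)*bs]
    return y

x = np.ones(4)
print(bcrs_matvec(blocks, col_idx, row_ptr, x, bs))  # → [ 3.  7. 11. 15.]
```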

  4. Photometric and Colorimetric Comparison of HDR and Spectrally Resolved Rendering Images

    DEFF Research Database (Denmark)

    Amdemeskel, Mekbib Wubishet; Soreze, Thierry Silvio Claude; Thorseth, Anders

    2017-01-01

    In this paper, we will demonstrate a comparison between measured colorimetric images and simulated images from a physics-based rendering engine. The colorimetric images are high dynamic range (HDR) and taken with a luminance and colour camera mounted on a goniometer. For the comparison, we have ...

  5. 3D-TV Rendering on a Multiprocessor System on a Chip

    NARCIS (Netherlands)

    Van Eijndhoven, J.T.J.; Li, X.

    2006-01-01

    This thesis focuses on the issue of mapping 3D-TV rendering applications to a multiprocessor platform. The target platform aims to address tomorrow's multi-media consumer market. The prototype chip, called Wasabi, contains a set of TriMedia processors that communicate via a shared memory, fast

  6. Iso-surface volume rendering for implant surgery

    NARCIS (Netherlands)

    van Foreest-Timp, Sheila; Lemke, H.U.; Inamura, K.; Doi, K.; Vannier, M.W.; Farman, A.G.

    2001-01-01

    Many clinical situations ask for the simultaneous visualization of anatomical surfaces and synthetic meshes. Common examples include hip replacement surgery, intra-operative visualization of surgical instruments or probes, visualization of planning information, or implant surgery. To be useful for

  7. Interactive Volume Rendering of Diffusion Tensor Data

    Energy Technology Data Exchange (ETDEWEB)

    Hlawitschka, Mario; Weber, Gunther; Anwander, Alfred; Carmichael, Owen; Hamann, Bernd; Scheuermann, Gerik

    2007-03-30

    As 3D volumetric images of the human body become an increasingly crucial source of information for the diagnosis and treatment of a broad variety of medical conditions, advanced techniques that allow clinicians to efficiently and clearly visualize volumetric images become increasingly important. Interaction has proven to be a key concept in analysis of medical images because static images of 3D data are prone to artifacts and misunderstanding of depth. Furthermore, fading out clinically irrelevant aspects of the image while preserving contextual anatomical landmarks helps medical doctors to focus on important parts of the images without becoming disoriented. Our goal was to develop a tool that unifies interactive manipulation and context preserving visualization of medical images with a special focus on diffusion tensor imaging (DTI) data. At each image voxel, DTI provides a 3 x 3 tensor whose entries represent the 3D statistical properties of water diffusion locally. Water motion that is preferential to specific spatial directions suggests structural organization of the underlying biological tissue; in particular, in the human brain, the naturally occurring diffusion of water in the axon portion of neurons is predominantly anisotropic along the longitudinal direction of the elongated, fiber-like axons [MMM+02]. This property has made DTI an emerging source of information about the structural integrity of axons and axonal connectivity between brain regions, both of which are thought to be disrupted in a broad range of medical disorders including multiple sclerosis, cerebrovascular disease, and autism [Mos02, FCI+01, JLH+99, BGKM+04, BJB+03].
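The anisotropy that makes DTI informative is usually summarized per voxel by fractional anisotropy (FA), computed from the eigenvalues of the 3 x 3 tensor. A short sketch with an invented tensor whose diffusion is preferential to the x-axis:

```python
import numpy as np

D = np.array([[1.5, 0.0, 0.0],
              [0.0, 0.3, 0.0],
              [0.0, 0.0, 0.2]])   # toy diffusion tensor (e.g. in 1e-3 mm^2/s)

lam = np.linalg.eigvalsh(D)       # eigenvalues of the symmetric tensor
md = lam.mean()                   # mean diffusivity
fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
print(round(fa, 2))  # → 0.81
```

FA near 0 indicates isotropic diffusion; values approaching 1 suggest the elongated, fiber-like structure described above.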

  8. A GPU implementation of a track-repeating algorithm for proton radiotherapy dose calculations

    International Nuclear Information System (INIS)

    Yepes, Pablo P; Mirkovic, Dragan; Taddei, Phillip J

    2010-01-01

    An essential component in proton radiotherapy is the algorithm to calculate the radiation dose to be delivered to the patient. The most common dose algorithms are fast but approximate analytical approaches, and their level of accuracy is not always satisfactory, especially for heterogeneous anatomical areas like the thorax. Monte Carlo techniques provide superior accuracy; however, they often require large computation resources, which render them impractical for routine clinical use. Track-repeating algorithms, for example the fast dose calculator, have shown promise for achieving the accuracy of Monte Carlo simulations for proton radiotherapy dose calculations in a fraction of the computation time. We report on the implementation of the fast dose calculator for proton radiotherapy on a card equipped with graphics processing units (GPUs) rather than on a central processing unit architecture. This implementation reproduces the full Monte Carlo and CPU-based track-repeating dose calculations within 2%, while achieving a statistical uncertainty of 2% in less than 1 min using a single GPU card, which should allow real-time accurate dose calculations.

  9. A volume of fluid method based on multidimensional advection and spline interface reconstruction

    International Nuclear Information System (INIS)

    Lopez, J.; Hernandez, J.; Gomez, P.; Faura, F.

    2004-01-01

    A new volume of fluid method for tracking two-dimensional interfaces is presented. The method involves a multidimensional advection algorithm based on the use of edge-matched flux polygons to integrate the volume fraction evolution equation, and a spline-based reconstruction algorithm. The accuracy and efficiency of the proposed method are analyzed using different tests, and the results are compared with those obtained recently by other authors. Despite its simplicity, the proposed method represents a significant improvement, and compares favorably with other volume of fluid methods as regards the accuracy and efficiency of both the advection and reconstruction steps
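The bookkeeping behind volume-of-fluid methods, updating a per-cell volume fraction from fluxes through cell faces, can be sketched in 1D. This toy update uses plain upwind fluxes, so it smears the interface; the geometric, spline-based reconstruction in the paper exists precisely to keep the interface sharp. Grid size and CFL number are our own choices.

```python
import numpy as np

nx, cfl = 50, 0.5                 # number of cells; u * dt / dx
f = np.zeros(nx)
f[10:20] = 1.0                    # a slab of liquid: volume fraction 1

for _ in range(20):
    flux = cfl * f                # volume leaving each cell through its right face
    f = f - flux + np.roll(flux, 1)

print(round(f.sum(), 6))  # → 10.0  (total liquid volume is conserved)
```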

  10. SAPHIRE 8 Volume 2 - Technical Reference

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; S. T. Wood; W. J. Galyean; J. A. Schroeder; M. B. Sattison

    2011-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of computer programs that were developed to create and analyze probabilistic risk assessments (PRAs). Herein, information is provided on the principles used in the construction and operation of Version 8.0 of the SAPHIRE system. This report summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms used to construct a fault tree and to obtain the minimal cut sets. It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that apply for various assumptions concerning reparability and mission time. It defines the measures of basic event importance that SAPHIRE can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by this program to generate random basic event probabilities from various distributions. Also covered are enhanced capabilities such as seismic analysis, Workspace algorithms, cut set "recovery," end state manipulation, and use of "compound events."
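One formula of the kind this volume documents, the minimal cut set upper bound for the top-event probability, is simple enough to sketch directly: each cut set's probability is the product of its basic-event probabilities, and the bound combines them as 1 - Π(1 - P(Ci)). Event names and probabilities below are invented.

```python
from math import prod

basic = {"PUMP-A": 0.01, "PUMP-B": 0.02, "VALVE-C": 0.005}       # toy failure probs
min_cut_sets = [{"PUMP-A", "PUMP-B"}, {"VALVE-C"}]               # toy minimal cut sets

cut_probs = [prod(basic[e] for e in cs) for cs in min_cut_sets]  # 0.0002 and 0.005
top = 1.0 - prod(1.0 - p for p in cut_probs)
print(round(top, 6))  # → 0.005199, slightly below the rare-event sum 0.0052
```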

  11. Complex adaptation-based LDR image rendering for 3D image reconstruction

    Science.gov (United States)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.
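The combination of global and local adaptation described above can be sketched as a tone curve blended with a neighbourhood-relative term, so bright and dark regions are compressed differently instead of being clipped. The gamma, blend weight, and box-filter size are our own placeholders, not the paper's model.

```python
import numpy as np

def box_mean(a, k=5):
    # local neighbourhood mean via an edge-padded k x k box filter
    pad = k // 2
    p = np.pad(a, pad, mode="edge")
    out = np.zeros_like(a)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def tone_map(L, gamma=0.6, alpha=0.5):
    Ln = L / L.max()                                  # normalized luminance
    local = box_mean(Ln)
    # global gamma curve blended with a local, neighbourhood-relative term
    return (1 - alpha) * Ln ** gamma + alpha * Ln / (Ln + local + 1e-6)

L = np.random.default_rng(0).uniform(0.0, 100.0, size=(32, 32))  # toy luminance
out = tone_map(L)
print(out.min() >= 0.0 and out.max() <= 1.0)  # → True (no clipping needed)
```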

  12. Hybrid 3D visualization of the chest and virtual endoscopy of the tracheobronchial system: possibilities and limitations of clinical application.

    Science.gov (United States)

    Seemann, M D; Claussen, C D

    2001-06-01

    A hybrid rendering method is described which combines a color-coded surface rendering method and a volume rendering method, enabling virtual endoscopic examinations using different representation models. 14 patients with malignancies of the lung and mediastinum (n=11) and lung transplantation (n=3) underwent thin-section spiral computed tomography. The tracheobronchial system and anatomical and pathological features of the chest were segmented using an interactive threshold-interval volume-growing segmentation algorithm and visualized with a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures. For the virtual endoscopy of the tracheobronchial system, a shaded-surface model without color coding, a transparent color-coded shaded-surface model and a triangle-surface model were tested and compared. The hybrid rendering technique exploits the advantages of both rendering methods, provides an excellent overview of the tracheobronchial system and allows a clear depiction of the complex spatial relationships of anatomical and pathological features. Virtual bronchoscopy with a transparent color-coded shaded-surface model allows both a simultaneous visualization of an airway, an airway lesion and mediastinal structures and a quantitative assessment of the spatial relationship between these structures, thus improving confidence in the diagnosis of endotracheal and endobronchial diseases. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images. Virtual bronchoscopy with a transparent color-coded shaded-surface model offers a practical alternative to fiberoptic bronchoscopy and is particularly promising for patients in whom fiberoptic bronchoscopy is not feasible, contraindicated or refused.
Furthermore, it can be used as a complementary procedure to fiberoptic bronchoscopy in evaluating airway stenosis and

  13. USER EVALUATION OF EIGHT LED LIGHT SOURCES WITH DIFFERENT SPECIAL COLOUR RENDERING INDICES R9

    DEFF Research Database (Denmark)

    Markvart, Jakob; Iversen, Anne; Logadóttir, Ásta

    2013-01-01

    In this study we evaluated the influence of the special colour rendering index R9 on subjective red colour perception and Caucasian skin appearance among untrained test subjects. The light sources tested are commercially available LED based light sources with similar correlated colour temperature...... and general colour rendering index, but with varying R9. It was found that the test subjects in general are more positive towards light sources with higher R9. The shift from a majority of negative responses to a majority of positive responses is found to occur at R9 values of ~20....

  14. A highly accurate dynamic contact angle algorithm for drops on inclined surface based on ellipse-fitting.

    Science.gov (United States)

    Xu, Z N; Wang, S Y

    2015-02-01

    To improve the accuracy of dynamic contact angle calculation for drops on an inclined surface, a significant number of numerical drop profiles on the inclined surface with different inclination angles, drop volumes, and contact angles are generated based on the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After performing a tremendous amount of computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can decrease the dynamic contact angle error in the inclined plane method to less than a certain value, even for different types of liquids.
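The core of the method, fitting a conic by least squares and reading the contact angle off the fitted tangent, can be reproduced on a toy profile. Here the "drop" is a circular cap (centre (0, 0.5), radius 1), whose true contact angle is 120 degrees by construction; nothing below is the paper's data.

```python
import numpy as np

# Sample the cap profile above the surface y = 0
t = np.linspace(np.radians(-30), np.radians(210), 200)
x, y = np.cos(t), 0.5 + np.sin(t)

# Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 (conic, f fixed at -1)
D = np.column_stack([x**2, x * y, y**2, x, y])
a, b, c, d, e = np.linalg.lstsq(D, np.ones_like(x), rcond=None)[0]

# Tangent at the right contact point via implicit differentiation
cx, cy = np.sqrt(3) / 2, 0.0
grad = np.array([2*a*cx + b*cy + d, b*cx + 2*c*cy + e])
tang = np.array([-grad[1], grad[0]])
if tang[1] < 0:
    tang = -tang                    # orient the tangent upward, into the drop
theta = np.degrees(np.arccos(-tang[0] / np.linalg.norm(tang)))
print(round(theta, 1))  # → 120.0
```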

  15. An Algorithm for Real-Time Pulse Waveform Segmentation and Artifact Detection in Photoplethysmograms.

    Science.gov (United States)

    Fischer, Christoph; Domer, Benno; Wibmer, Thomas; Penzel, Thomas

    2017-03-01

    Photoplethysmography has been used in a wide range of medical devices for measuring oxygen saturation, cardiac output, assessing autonomic function, and detecting peripheral vascular disease. Artifacts can render the photoplethysmogram (PPG) useless. Thus, algorithms capable of identifying artifacts are critically important. However, the published PPG algorithms are limited in algorithm and study design. Therefore, the authors developed a novel embedded algorithm for real-time pulse waveform (PWF) segmentation and artifact detection based on a contour analysis in the time domain. This paper provides an overview about PWF and artifact classifications, presents the developed PWF analysis, and demonstrates the implementation on a 32-bit ARM core microcontroller. The PWF analysis was validated with data records from 63 subjects acquired in a sleep laboratory, ergometry laboratory, and intensive care unit in equal parts. The output of the algorithm was compared with harmonized experts' annotations of the PPG with a total duration of 31.5 h. The algorithm achieved a beat-to-beat comparison sensitivity of 99.6%, specificity of 90.5%, precision of 98.5%, and accuracy of 98.3%. The interrater agreement expressed as Cohen's kappa coefficient was 0.927 and as F-measure was 0.990. In conclusion, the PWF analysis seems to be a suitable method for PPG signal quality determination, real-time annotation, data compression, and calculation of additional pulse wave metrics such as amplitude, duration, and rise time.
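The validation metrics quoted above all follow from a beat-by-beat confusion matrix. The counts below are invented to illustrate the formulas (including Cohen's kappa); they are not the study's data.

```python
tp, fp, tn, fn = 950, 15, 190, 5          # toy beat-level confusion counts
n = tp + fp + tn + fn

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
accuracy = (tp + tn) / n

# Cohen's kappa: observed agreement corrected for chance agreement
p_o = accuracy
p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
kappa = (p_o - p_e) / (1 - p_e)
print(round(sensitivity, 3), round(kappa, 3))  # → 0.995 0.94
```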

  16. Algorithms and Models for the Web Graph

    NARCIS (Netherlands)

    Gleich, David F.; Komjathy, Julia; Litvak, Nelli

    2015-01-01

    This volume contains the papers presented at WAW2015, the 12th Workshop on Algorithms and Models for the Web-Graph held during December 10–11, 2015, in Eindhoven. There were 24 submissions. Each submission was reviewed by at least one, and on average two, Program Committee members. The committee

  17. Conservation of the historical render in the Church of Nossa Senhora da Assunção in Elvas

    Directory of Open Access Journals (Sweden)

    Sofia Salema

    2008-01-01

    Full Text Available In this paper we present a practical case of conservation of the historical renders in the pyramidal tower of the Church of Nossa Senhora da Assunção in Elvas (Portugal), carried out by the former IPPAR (Portuguese Institute of Architectonic Heritage, now the Regional Direction of Culture of Alentejo). Awareness of the value and of the risks facing these renders points towards the necessity to safeguard their material authenticity. During the works of conservation of the main façade, under the layers of non-decorated covering render, a previous decorated render was discovered, simulating stone masonry, with raised joints reproducing stone divisions and the internal structure in solid brick. After material and historical analysis we came to the conclusion that it was highly probable that this render was contemporary to the construction of the Church and, as such, it seemed essential to conserve and restore this covering as historical evidence and cultural heritage. Treatment of the pyramidal tower render included removal of the non-original covering mortars, a survey of the ancient materials, execution of the technical and decorative scheme, cleaning of the surfaces, and consolidation of the weaker original old mortars. In order to fill the gaps in the original surface, specific lime mortars, prepared with washed sand and a standard grain size, were used. Restoration techniques were used to reconstitute and integrate the lacunas. These actions not only conserved the workmanship, but also reconstructed the decorative structure and clarity of reading, allowing the identification of the restoration without the connotation of a mimetic integration. This joint action, only possible with the help of the conservation and restoration team, puts into evidence the possibility of continuous evaluation and learning. It is clear that, in cases where there are unknown, unpredictable factors, due to the specific work and value of the materials, it is possible to change the course

  18. A two-metric proposal to specify the color-rendering properties of light sources for retail lighting

    Science.gov (United States)

    Freyssinier, Jean Paul; Rea, Mark

    2010-08-01

    Lighting plays an important role in supporting retail operations, from attracting customers, to enabling the evaluation of merchandise, to facilitating the completion of the sale. Lighting also contributes to the identity, comfort, and visual quality of a retail store. With the increasing availability and quality of white LEDs, retail lighting specifiers are now considering LED lighting in stores. The color rendering of light sources is a key factor in supporting retail lighting goals and thus influences a light source's acceptance by users and specifiers. However, there is limited information on what consumers' color preferences are, and metrics used to describe the color properties of light sources often are equivocal and fail to predict preference. The color rendering of light sources is described in the industry solely by the color rendering index (CRI), which is only indirectly related to human perception. CRI is intended to characterize the appearance of objects illuminated by the source and is increasingly being challenged because new sources are being developed with increasingly exotic spectral power distributions. This paper discusses how CRI might be augmented to better use it in support of the design objectives for retail merchandising. The proposed guidelines include the use of gamut area index as a complementary metric to CRI for assuring good color rendering.
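The gamut area index proposed as a complementary metric is driven by the area that a fixed set of colour samples spans in a chromaticity diagram, which reduces to a polygon area (shoelace formula) once the sample chromaticities are known. The coordinates below are invented stand-ins, not CIE sample data.

```python
def shoelace_area(pts):
    # polygon area from vertices given in order
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Toy chromaticities of colour samples under the test source and a reference
samples_test = [(0.25, 0.52), (0.21, 0.48), (0.23, 0.42),
                (0.30, 0.41), (0.35, 0.45), (0.33, 0.51)]
samples_ref  = [(0.24, 0.53), (0.20, 0.47), (0.22, 0.41),
                (0.31, 0.40), (0.36, 0.46), (0.34, 0.52)]

gai_like = 100.0 * shoelace_area(samples_test) / shoelace_area(samples_ref)
print(round(gai_like, 1))  # smaller gamut than the reference, so below 100
```

A value near 100 would indicate a gamut comparable to the reference source; together with CRI, this is the two-metric idea the paper proposes.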

  19. Hardware Algorithms For Tile-Based Real-Time Rendering

    NARCIS (Netherlands)

    Crisu, D.

    2012-01-01

    In this dissertation, we present the GRAphics AcceLerator (GRAAL) framework for developing embedded tile-based rasterization hardware for mobile devices, meant to accelerate real-time 3-D graphics (OpenGL compliant) applications. The goal of the framework is a low-cost, low-power, high-performance

  20. Continuous Surface Rendering, Passing from CAD to Physical Representation

    Directory of Open Access Journals (Sweden)

    Mario Covarrubias

    2013-06-01

    Full Text Available This paper describes a desktop-mechatronic interface that has been conceived to support designers in the evaluation of aesthetic virtual shapes. This device allows a continuous and smooth free hand contact interaction on a real and developable plastic tape actuated by a servo-controlled mechanism. The objective in designing this device is to reproduce a virtual surface with a consistent physical rendering well adapted to designers' needs. The desktop-mechatronic interface consists in a servo-actuated plastic strip that has been devised and implemented using seven interpolation points. In fact, by using the MEC (Minimal Energy Curve Spline approach, a developable real surface is rendered taking into account the CAD geometry of the virtual shapes. In this paper, we describe the working principles of the interface by using both absolute and relative approaches to control the position on each single control point on the MEC spline. Then, we describe the methodology that has been implemented, passing from the CAD geometry, linked to VisualNastran in order to maintain the parametric properties of the virtual shape. Then, we present the co-simulation between VisualNastran and MATLAB/Simulink used for achieving this goal and controlling the system and finally, we present the results of the subsequent testing session specifically carried out to evaluate the accuracy and the effectiveness of the mechatronic device.

  1. Photon Differential Splatting for Rendering Caustics

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Schjøth, Lars; Erleben, Kenny

    2014-01-01

    We present a photon splatting technique which reduces noise and blur in the rendering of caustics. Blurring of illumination edges is an inherent problem in photon splatting, as each photon is unaware of its neighbours when being splatted. This means that the splat size is usually based...... on heuristics rather than knowledge of the local flux density. We use photon differentials to determine the size and shape of the splats such that we achieve adaptive anisotropic flux density estimation in photon splatting. As compared to previous work that uses photon differentials, we present the first method...... where no photons or beams or differentials need to be stored in a map. We also present improvements in the theory of photon differentials, which give more accurate results and a faster implementation. Our technique has good potential for GPU acceleration, and we limit the number of parameters requiring...

  2. A solution algorithm for fluid-particle flows across all flow regimes

    Science.gov (United States)

    Kong, Bo; Fox, Rodney O.

    2017-09-01

    Many fluid-particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle-particle collisions are rare. Thus, in order to simulate such fluid-particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas-particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid-particle flows.

  3. Medical review practices for driver licensing volume 2: case studies of medical referrals and licensing outcomes in Maine, Ohio, Oregon, Texas, Washington, and Wisconsin.

    Science.gov (United States)

    2017-03-01

    This is the second of three reports examining driver medical review practices in the United States and how they fulfill the basic functions of identifying, assessing, and rendering licensing decisions on medically at-risk drivers. This volume pre...

  4. Scan-based volume animation driven by locally adaptive articulated registrations.

    Science.gov (United States)

    Rhee, Taehyun; Lewis, J P; Neumann, Ulrich; Nayak, Krishna S

    2011-03-01

    This paper describes a complete system to create anatomically accurate example-based volume deformation and animation of articulated body regions, starting from multiple in vivo volume scans of a specific individual. In order to solve the correspondence problem across volume scans, a template volume is registered to each sample. The wide range of pose variations is first approximated by volume blend deformation (VBD), providing proper initialization of the articulated subject in different poses. A novel registration method is presented to efficiently reduce the computation cost while avoiding strong local minima inherent in complex articulated body volume registration. The algorithm highly constrains the degrees of freedom and search space involved in the nonlinear optimization, using hierarchical volume structures and locally constrained deformation based on the biharmonic clamped spline. Our registration step establishes a correspondence across scans, allowing a data-driven deformation approach in the volume domain. The results provide an occlusion-free person-specific 3D human body model, asymptotically accurate inner tissue deformations, and realistic volume animation of articulated movements driven by standard joint control estimated from the actual skeleton. Our approach also addresses the practical issues arising in using scans from living subjects. The robustness of our algorithms is tested by their applications on the hand, probably the most complex articulated region in the body, and the knee, a frequent subject area for medical imaging due to injuries. © 2011 IEEE

  5. Selection and determination of beam weights based on genetic algorithms for conformal radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Xingen Wu; Zunliang Wang

    2000-01-01

    A genetic algorithm has been used to optimize the selection of beam weights for external beam three-dimensional conformal radiotherapy treatment planning. A fitness function is defined, which includes a difference function to achieve a least-squares fit to doses at preselected points in a planning target volume, and a penalty item to constrain the maximum allowable doses delivered to critical organs. Adjustment between the dose uniformity within the target volume and the dose constraint to the critical structures can be achieved by varying the beam weight variables in the fitness function. A floating-point encoding scheme and several operators, such as uniform crossover, arithmetical crossover, geometrical crossover, Gaussian mutation and uniform mutation, have been used to evolve the population. Three different cases were used to verify the correctness of the algorithm, and quality assessment based on dose-volume histograms and three-dimensional dose distributions was given. The results indicate that the genetic algorithm presented here has considerable potential. (author)
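The formulation above, floating-point weights, a least-squares dose term, and a penalty for exceeding a critical-organ cap, fits in a short sketch. The three-beam dose coefficients, limits, and GA settings are invented toy numbers, and only two of the paper's operators (uniform crossover and Gaussian mutation) are shown.

```python
import random

random.seed(1)

# dose per unit beam weight at 2 target points and 1 critical-organ point
D_target = [[0.9, 0.5, 0.3], [0.4, 0.8, 0.6]]
D_organ = [0.5, 0.2, 0.7]
prescribed, organ_limit, penalty_w = 1.0, 0.4, 10.0

def fitness(w):
    # least-squares fit to the prescribed target dose ...
    sq = sum((sum(d * wi for d, wi in zip(row, w)) - prescribed) ** 2
             for row in D_target)
    # ... plus a penalty when the organ dose cap is exceeded
    organ = sum(d * wi for d, wi in zip(D_organ, w))
    return sq + penalty_w * max(0.0, organ - organ_limit) ** 2

def evolve(pop=40, gens=200):
    P = [[random.uniform(0, 2) for _ in range(3)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        parents = P[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            child = [ai if random.random() < 0.5 else bi
                     for ai, bi in zip(a, b)]                          # uniform crossover
            child = [max(0.0, g + random.gauss(0, 0.05)) for g in child]  # Gaussian mutation
            children.append(child)
        P = parents + children
    return min(P, key=fitness)

best = evolve()
print(fitness(best) < 0.5)  # → True
```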

  6. High-fidelity haptic and visual rendering for patient-specific simulation of temporal bone surgery.

    Science.gov (United States)

    Chan, Sonny; Li, Peter; Locketz, Garrett; Salisbury, Kenneth; Blevins, Nikolas H

    2016-12-01

    Medical imaging techniques provide a wealth of information for surgical preparation, but it is still often the case that surgeons are examining three-dimensional pre-operative image data as a series of two-dimensional images. With recent advances in visual computing and interactive technologies, there is much opportunity to provide surgeons an ability to actively manipulate and interpret digital image data in a surgically meaningful way. This article describes the design and initial evaluation of a virtual surgical environment that supports patient-specific simulation of temporal bone surgery using pre-operative medical image data. Computational methods are presented that enable six degree-of-freedom haptic feedback during manipulation, and that simulate virtual dissection according to the mechanical principles of orthogonal cutting and abrasive wear. A highly efficient direct volume renderer simultaneously provides high-fidelity visual feedback during surgical manipulation of the virtual anatomy. The resulting virtual surgical environment was assessed by evaluating its ability to replicate findings in the operating room, using pre-operative imaging of the same patient. Correspondences between surgical exposure, anatomical features, and the locations of pathology were readily observed when comparing intra-operative video with the simulation, indicating the predictive ability of the virtual surgical environment.

  7. Inadequate increase in the volume of major epicardial coronary arteries compared with that in left ventricular mass. Novel concept for characterization of coronary arteries using 64-slice computed tomography.

    Science.gov (United States)

    Ehara, Shoichi; Okuyama, Takuhiro; Shirai, Nobuyuki; Sugioka, Kenichi; Oe, Hiroki; Itoh, Toshihide; Matsuoka, Toshiyuki; Ikura, Yoshihiro; Ueda, Makiko; Naruko, Takahiko; Hozumi, Takeshi; Yoshiyama, Minoru

    2009-08-01

    Previous studies have shown a correlation between coronary artery cross-sectional diameter and left ventricular (LV) mass. However, no studies have examined the correlation between actual coronary artery volume (CAV) and LV mass. In the present study, measurements of CAV by 64-multislice computed tomography (MSCT) were validated and the relationship between CAV and LV mass was investigated. First, coronary artery phantoms consisting of syringes filled with solutions of contrast medium moving at simulated heart rates were scanned by 64-MSCT. Display window settings permitting accurate calculation of small volumes were optimized by evaluating volume-rendered images of the segmented contrast medium at different window settings. Next, 61 patients without significant coronary artery stenosis were scanned by 64-MSCT with the same protocol as for the phantoms. Coronary arteries were segmented on a workstation and the same window settings were applied to the volume-rendered images to calculate total CAV. Significant correlations were found between total CAV and LV mass (r=0.660). The concept of "CAV" for the characterization of coronary arteries may prove useful for future research, particularly on the causes of LV hypertrophy.

  8. An optimisation algorithm for determination of treatment margins around moving and deformable targets

    International Nuclear Information System (INIS)

    Redpath, Anthony Thomas; Muren, Ludvig Paul

    2005-01-01

    Purpose: Determining treatment margins for inter-fractional motion of moving and deformable clinical target volumes (CTVs) remains a major challenge. This paper describes and applies an optimisation algorithm designed to derive such margins. Material and methods: The algorithm works by expanding the CTV, as determined from a pre-treatment or planning scan, to enclose the CTV positions observed during treatment. CTV positions during treatment may be obtained using, for example, repeat CT scanning and/or repeat electronic portal imaging (EPI). The algorithm can be applied to both individual patients and to a set of patients. The margins derived will minimise the excess volume outside the envelope that encloses all observed CTV positions (the CTV envelope). Initially, margins are set such that the envelope is more than adequately covered when the planning CTV is expanded. The algorithm uses an iterative method where the margins are sampled randomly and are then either increased or decreased randomly. The algorithm is tested on a set of 19 bladder cancer patients that underwent weekly repeat CT scanning and EPI throughout their treatment course. Results: From repeated runs on individual patients, the algorithm produces margins within a range of ±2 mm that lie among the best results found with an exhaustive search approach, and that agree within 3 mm with margins determined by a manual approach on the same data. The algorithm could be used to determine margins to cover any specified geometrical uncertainty, and allows for the determination of reduced margins by relaxing the coverage criteria, for example disregarding extreme CTV positions, or an arbitrarily selected volume fraction of the CTV envelope, and/or patients with extreme geometrical uncertainties. Conclusion: An optimisation approach to margin determination is found to give reproducible results within the accuracy required. 
The major advantage of this algorithm is that it is completely empirical.
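The iterative margin search described above can be illustrated on a toy voxel grid. In this hypothetical sketch, CTVs are sets of integer voxel coordinates, margins are per-axis integer expansions, and a random walk shrinks the margins while keeping the CTV envelope covered; the acceptance rule and step sizes are assumptions, not the authors' implementation.

```python
import random
from itertools import product

def expand(ctv, margins):
    """Anisotropically dilate a voxel set by integer margins (mx, my, mz)."""
    mx, my, mz = margins
    out = set()
    for (x, y, z) in ctv:
        for dx, dy, dz in product(range(-mx, mx + 1),
                                  range(-my, my + 1),
                                  range(-mz, mz + 1)):
            out.add((x + dx, y + dy, z + dz))
    return out

def optimise_margins(planning_ctv, envelope, start=(5, 5, 5), iters=500, seed=0):
    """Randomly perturb one margin at a time; accept a move only if the expanded
    planning CTV still covers the envelope and the excess volume does not grow."""
    random.seed(seed)
    margins = list(start)
    assert envelope <= expand(planning_ctv, margins), "start margins must cover the envelope"
    best_excess = len(expand(planning_ctv, margins) - envelope)
    for _ in range(iters):
        axis = random.randrange(3)
        trial = margins[:]
        trial[axis] = max(0, trial[axis] + random.choice([-1, 1]))
        vol = expand(planning_ctv, trial)
        if envelope <= vol and len(vol - envelope) <= best_excess:
            margins, best_excess = trial, len(vol - envelope)
    return tuple(margins), best_excess
```

On a single-voxel planning CTV whose envelope is exactly a (2, 1, 0) expansion, the walk recovers those margins with zero excess volume.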

  9. SU-C-BRA-05: Delineating High-Dose Clinical Target Volumes for Head and Neck Tumors Using Machine Learning Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Cardenas, C [Department of Radiation Physics, The University of Texas M.D. Anderson Cancer Center, Houston, TX (United States); The University of Texas Graduate School of Biomedical Sciences, Houston, TX (United States); Wong, A [Department of Radiation Oncology, The University of Texas M.D. Anderson Cancer Center, Houston, TX (United States); School of Medicine, The University of Texas Health Sciences Center at San Antonio, San Antonio, TX (United States); Mohamed, A; Fuller, C [Department of Radiation Oncology, The University of Texas M.D. Anderson Cancer Center, Houston, TX (United States); Yang, J; Court, L; Aristophanous, M [Department of Radiation Physics, The University of Texas M.D. Anderson Cancer Center, Houston, TX (United States); Rao, A [Department of Bioinformatics and Computational Biology, The University of Texas M.D. Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: To develop and test population-based machine learning algorithms for delineating high-dose clinical target volumes (CTVs) in H&N tumors. Automating and standardizing the contouring of CTVs can reduce both physician contouring time and inter-physician variability, which is one of the largest sources of uncertainty in H&N radiotherapy. Methods: Twenty-five node-negative patients treated with definitive radiotherapy were selected (6 right base of tongue, 11 left and 9 right tonsil). All patients had GTV and CTVs manually contoured by an experienced radiation oncologist prior to treatment. This contouring process, which is driven by anatomical, pathological, and patient-specific information, typically results in non-uniform margin expansions about the GTV. Therefore, we tested two methods to delineate the high-dose CTV given a manually-contoured GTV: (1) regression support vector machines (SVM) and (2) classification SVM. These models were trained and tested on each patient group using leave-one-out cross-validation. The volume difference (VD) and Dice similarity coefficient (DSC) between the manual and auto-contoured CTVs were calculated to evaluate the results. Distances from GTV to CTV were computed about each patient's GTV, and these distances, in addition to distances from the GTV to surrounding anatomy in the expansion direction, were utilized in the regression-SVM method. The classification-SVM method used categorical voxel information (GTV, selected anatomical structures, or neither) from a 3×3×3 cm³ ROI centered about each voxel to classify voxels as CTV. Results: Volumes of the auto-contoured CTVs ranged from 17.1 to 149.1 cc and 17.4 to 151.9 cc; the average (range) VD between manual and auto-contoured CTVs was 0.93 (0.48–1.59) and 1.16 (0.48–1.97), while average (range) DSC values were 0.75 (0.59–0.88) and 0.74 (0.59–0.81) for the regression-SVM and classification-SVM methods, respectively. Conclusion: We developed two novel machine learning methods to delineate high-dose CTVs in H&N tumors given a manually contoured GTV.
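As a rough illustration of the classification-SVM idea (labeling voxels as CTV or not from feature vectors), the following is a minimal linear SVM trained by stochastic sub-gradient descent on the hinge loss, a Pegasos-style stand-in for whatever SVM library the authors used; the one-feature-per-voxel setup and all hyper-parameters are hypothetical.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Minimal linear SVM: stochastic sub-gradient descent on the regularized
    hinge loss (Pegasos-style schedule, with the early step size capped)."""
    random.seed(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        idx = list(range(len(X)))
        random.shuffle(idx)
        for i in idx:
            t += 1
            eta = min(1.0 / (lam * t), 10.0)  # capped 1/(lam*t) schedule
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [wj * (1 - eta * lam) for wj in w]  # regularization shrink
            if margin < 1:  # hinge-loss sub-gradient step
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    """Classify a voxel feature vector as CTV (+1) or not (-1)."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

On a tiny linearly separable toy set (two features per "voxel"), the trained separator classifies all training points correctly.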

  10. SU-C-BRA-05: Delineating High-Dose Clinical Target Volumes for Head and Neck Tumors Using Machine Learning Algorithms

    International Nuclear Information System (INIS)

    Cardenas, C; Wong, A; Mohamed, A; Fuller, C; Yang, J; Court, L; Aristophanous, M; Rao, A

    2016-01-01

    Purpose: To develop and test population-based machine learning algorithms for delineating high-dose clinical target volumes (CTVs) in H&N tumors. Automating and standardizing the contouring of CTVs can reduce both physician contouring time and inter-physician variability, which is one of the largest sources of uncertainty in H&N radiotherapy. Methods: Twenty-five node-negative patients treated with definitive radiotherapy were selected (6 right base of tongue, 11 left and 9 right tonsil). All patients had GTV and CTVs manually contoured by an experienced radiation oncologist prior to treatment. This contouring process, which is driven by anatomical, pathological, and patient-specific information, typically results in non-uniform margin expansions about the GTV. Therefore, we tested two methods to delineate the high-dose CTV given a manually-contoured GTV: (1) regression support vector machines (SVM) and (2) classification SVM. These models were trained and tested on each patient group using leave-one-out cross-validation. The volume difference (VD) and Dice similarity coefficient (DSC) between the manual and auto-contoured CTVs were calculated to evaluate the results. Distances from GTV to CTV were computed about each patient's GTV, and these distances, in addition to distances from the GTV to surrounding anatomy in the expansion direction, were utilized in the regression-SVM method. The classification-SVM method used categorical voxel information (GTV, selected anatomical structures, or neither) from a 3×3×3 cm³ ROI centered about each voxel to classify voxels as CTV. Results: Volumes of the auto-contoured CTVs ranged from 17.1 to 149.1 cc and 17.4 to 151.9 cc; the average (range) VD between manual and auto-contoured CTVs was 0.93 (0.48–1.59) and 1.16 (0.48–1.97), while average (range) DSC values were 0.75 (0.59–0.88) and 0.74 (0.59–0.81) for the regression-SVM and classification-SVM methods, respectively. Conclusion: We developed two novel machine learning methods to delineate high-dose CTVs in H&N tumors given a manually contoured GTV.

  11. Local intelligent electronic device (IED) rendering templates over limited bandwidth communication link to manage remote IED

    Science.gov (United States)

    Bradetich, Ryan; Dearien, Jason A; Grussling, Barry Jakob; Remaley, Gavin

    2013-11-05

    The present disclosure provides systems and methods for remote device management. According to various embodiments, a local intelligent electronic device (IED) may be in communication with a remote IED via a limited bandwidth communication link, such as a serial link. The limited bandwidth communication link may not support traditional remote management interfaces. According to one embodiment, a local IED may present an operator with a management interface for a remote IED by rendering locally stored templates. The local IED may render the locally stored templates using sparse data obtained from the remote IED. According to various embodiments, the management interface may be a web client interface and/or an HTML interface. The bandwidth required to present a remote management interface may be significantly reduced by rendering locally stored templates rather than requesting an entire management interface from the remote IED. According to various embodiments, an IED may comprise an encryption transceiver.
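The core idea, rendering a locally stored template from a handful of key/value pairs polled over the limited-bandwidth link, can be sketched with Python's `string.Template`; the field names and page layout below are hypothetical, not from the patent.

```python
from string import Template

# Locally stored management-page template for the remote IED.
# The field names (device_id, breaker_state, frequency) are illustrative only.
STATUS_TEMPLATE = Template(
    "<html><body>"
    "<h1>Relay $device_id</h1>"
    "<p>Breaker: $breaker_state</p>"
    "<p>Frequency: $frequency Hz</p>"
    "</body></html>"
)

def render_status(sparse_data):
    """Render the management interface locally from sparse data obtained from
    the remote IED, instead of transferring an entire HTML page over the link."""
    return STATUS_TEMPLATE.substitute(sparse_data)
```

Only the few substituted values cross the serial link; the surrounding markup never does, which is the bandwidth saving the disclosure describes.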

  12. New impressive capabilities of SE-workbench for EO/IR real-time rendering of animated scenarios including flares

    Science.gov (United States)

    Le Goff, Alain; Cathala, Thierry; Latger, Jean

    2015-10-01

    To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority uses synthetic image generation to model the operational battlefield of an aircraft as viewed by EO/IR threats. For this purpose, it extended the SE-Workbench suite from OKTAL-SE with functionalities to predict a realistic aircraft IR signature, and is now integrating SE-FAST-IR, the real-time EO/IR rendering engine of SE-Workbench. This engine is a set of physics-based software and libraries for preparing and visualizing a 3D scene in the EO/IR domain, and it takes advantage of recent advances in GPU computing techniques. Recent evolutions mainly concern the realistic and physical rendering of reflections, the rendering of both radiative and thermal shadows, the use of procedural techniques for managing and rendering very large terrains, the implementation of Image-Based Rendering for dynamic interpolation of static plume signatures and, for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests, and is based on particle-system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs with experimental results concerning the flare signature and, above all, the behavior of the stimulated threat.

  13. Organ volume estimation using SPECT

    CERN Document Server

    Zaidi, H

    1996-01-01

    Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of the absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual-window method was used for scatter subtraction. We used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray-level histogram (GLH) method. Thyroid phantom and patient studies were performed, and the influence of (1) fixed thresholding, (2) automatic thresholding, (3) attenuation, (4) scatter, and (5) the reconstruction filter was investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are perform...
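The fixed-threshold voxel-counting step can be sketched as follows, assuming attenuation and scatter corrections have already been applied to the reconstructed image; the threshold fraction of the image maximum is an illustrative assumption.

```python
def estimate_volume(image, threshold_fraction, voxel_volume_ml):
    """Estimate organ volume from a reconstructed SPECT image (nested 3-D list):
    count voxels at or above a fixed fraction of the image maximum, then
    multiply the count by the physical volume of one voxel."""
    peak = max(max(max(row) for row in plane) for plane in image)
    thr = threshold_fraction * peak
    count = sum(1 for plane in image for row in plane for v in row if v >= thr)
    return count * voxel_volume_ml
```

A 50% threshold on a toy image with two hot voxels and 0.25 ml voxels yields 0.5 ml, i.e. the count of supra-threshold voxels times the voxel volume.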

  14. INCREASING SAVING BEHAVIOR THROUGH AGE-PROGRESSED RENDERINGS OF THE FUTURE SELF

    Science.gov (United States)

    HERSHFIELD, HAL E.; GOLDSTEIN, DANIEL G.; SHARPE, WILLIAM F.; FOX, JESSE; YEYKELIS, LEO; CARSTENSEN, LAURA L.; BAILENSON, JEREMY N.

    2014-01-01

    Many people fail to save what they need to for retirement (Munnell, Webb, and Golub-Sass 2009). Research on excessive discounting of the future suggests that removing the lure of immediate rewards by pre-committing to decisions, or elaborating the value of future rewards can both make decisions more future-oriented. In this article, we explore a third and complementary route, one that deals not with present and future rewards, but with present and future selves. In line with thinkers who have suggested that people may fail, through a lack of belief or imagination, to identify with their future selves (Parfit 1971; Schelling 1984), we propose that allowing people to interact with age-progressed renderings of themselves will cause them to allocate more resources toward the future. In four studies, participants interacted with realistic computer renderings of their future selves using immersive virtual reality hardware and interactive decision aids. In all cases, those who interacted with virtual future selves exhibited an increased tendency to accept later monetary rewards over immediate ones. PMID:24634544

  15. INCREASING SAVING BEHAVIOR THROUGH AGE-PROGRESSED RENDERINGS OF THE FUTURE SELF.

    Science.gov (United States)

    Hershfield, Hal E; Goldstein, Daniel G; Sharpe, William F; Fox, Jesse; Yeykelis, Leo; Carstensen, Laura L; Bailenson, Jeremy N

    2011-11-01

    Many people fail to save what they need to for retirement (Munnell, Webb, and Golub-Sass 2009). Research on excessive discounting of the future suggests that removing the lure of immediate rewards by pre-committing to decisions, or elaborating the value of future rewards can both make decisions more future-oriented. In this article, we explore a third and complementary route, one that deals not with present and future rewards, but with present and future selves. In line with thinkers who have suggested that people may fail, through a lack of belief or imagination, to identify with their future selves (Parfit 1971; Schelling 1984), we propose that allowing people to interact with age-progressed renderings of themselves will cause them to allocate more resources toward the future. In four studies, participants interacted with realistic computer renderings of their future selves using immersive virtual reality hardware and interactive decision aids. In all cases, those who interacted with virtual future selves exhibited an increased tendency to accept later monetary rewards over immediate ones.

  16. Technical Report Series on Global Modeling and Data Assimilation. Volume 12; Comparison of Satellite Global Rainfall Algorithms

    Science.gov (United States)

    Suarez, Max J. (Editor); Chang, Alfred T. C.; Chiu, Long S.

    1997-01-01

    Seventeen months of rainfall data (August 1987-December 1988) from nine satellite rainfall algorithms (Adler, Chang, Kummerow, Prabhakara, Huffman, Spencer, Susskind, and Wu) were analyzed to examine the uncertainty of satellite-derived rainfall estimates. The variability among algorithms, measured as the standard deviation computed from the ensemble of algorithms, shows regions of high algorithm variability tend to coincide with regions of high rain rates. Histograms of pattern correlation (PC) between algorithms suggest a bimodal distribution, with separation at a PC-value of about 0.85. Applying this threshold as a criteria for similarity, our analyses show that algorithms using the same sensor or satellite input tend to be similar, suggesting the dominance of sampling errors in these satellite estimates.
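The two diagnostics used in this comparison, per-grid-point standard deviation across the algorithm ensemble and pattern correlation with a 0.85 similarity threshold, can be sketched as follows (fields are flattened to 1-D lists for simplicity):

```python
import math

def ensemble_std(fields):
    """Per-grid-point (population) standard deviation across an ensemble of
    rainfall fields, each given as a flattened list of rain rates."""
    n = len(fields)
    out = []
    for pt in zip(*fields):  # iterate over grid points
        mean = sum(pt) / n
        out.append(math.sqrt(sum((v - mean) ** 2 for v in pt) / n))
    return out

def pattern_correlation(a, b):
    """Pearson correlation between two rainfall maps flattened to 1-D."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def similar(a, b, threshold=0.85):
    """Apply the PC > 0.85 similarity criterion from the study."""
    return pattern_correlation(a, b) > threshold
```

Two perfectly proportional maps give PC = 1 (similar); anti-correlated maps give PC = -1 (dissimilar).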

  17. A diffusion tensor imaging tractography algorithm based on Navier-Stokes fluid mechanics.

    Science.gov (United States)

    Hageman, Nathan S; Toga, Arthur W; Narr, Katherine L; Shattuck, David W

    2009-03-01

    We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color images of the DTI dataset.
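The path-extraction step, following the fluid velocity vector field between points, can be illustrated with a simple Euler streamline integrator. This sketch omits the Navier-Stokes solve itself and the coupling to the diffusion tensor, and uses a 2-D analytic field for clarity.

```python
def streamline(velocity, start, step=0.5, max_steps=100):
    """Follow a velocity field from a seed point by fixed-arc-length Euler steps.
    `velocity(x, y)` returns a (vx, vy) tuple; integration stops where the
    speed vanishes (no preferred connection direction)."""
    path = [start]
    x, y = start
    for _ in range(max_steps):
        vx, vy = velocity(x, y)
        speed = (vx * vx + vy * vy) ** 0.5
        if speed < 1e-9:
            break
        # Normalize so each step advances the same arc length.
        x, y = x + step * vx / speed, y + step * vy / speed
        path.append((x, y))
    return path
```

In a uniform field pointing along +x, four steps of length 0.5 carry the seed from the origin to x = 2, as expected.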

  18. Iterative volume morphing and learning for mobile tumor based on 4DCT.

    Science.gov (United States)

    Mao, Songan; Wu, Huanmei; Sandison, George; Fang, Shiaofen

    2017-02-21

    During image-guided cancer radiation treatment, three-dimensional (3D) tumor volumetric information is important for treatment success. However, it is typically not feasible to image a patient's 3D tumor continuously in real time during treatment due to concern over excessive patient radiation dose. We present a new iterative morphing algorithm to predict the real-time 3D tumor volume based on time-resolved computed tomography (4DCT) acquired before treatment. An offline iterative learning process has been designed to derive a target volumetric deformation function from one breathing phase to another. Real-time volumetric prediction is performed to derive the target 3D volume during treatment delivery. The proposed iterative deformable approach for tumor volume morphing and prediction based on 4DCT is innovative because it makes three major contributions: (1) a novel approach to landmark selection on 3D tumor surfaces using a minimum bounding box; (2) an iterative morphing algorithm to generate the 3D tumor volume using mapped landmarks; and (3) an online tumor volume prediction strategy based on previously trained deformation functions utilizing 4DCT. The experimental performance showed that the maximum morphing deviations are 0.27% and 1.25% for original patient data and artificially generated data, which is promising. This newly developed algorithm and implementation will have important applications for treatment planning, dose calculation and treatment validation in cancer radiation treatment.
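Contribution (1), landmark selection on the tumor surface via a minimum bounding box, can be sketched as follows for an axis-aligned box; using the eight box corners as the landmark set is an illustrative assumption, not the paper's exact procedure.

```python
def bounding_box_landmarks(points):
    """Minimum axis-aligned bounding box of a 3-D surface point set,
    returned as (lo corner, hi corner, list of the 8 box corners)."""
    xs, ys, zs = zip(*points)
    lo = (min(xs), min(ys), min(zs))
    hi = (max(xs), max(ys), max(zs))
    corners = [(x, y, z)
               for x in (lo[0], hi[0])
               for y in (lo[1], hi[1])
               for z in (lo[2], hi[2])]
    return lo, hi, corners
```

Mapping corresponding landmarks between breathing phases would then drive the iterative volume morphing described in the abstract.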

  19. Named Entity Linking Algorithm

    Directory of Open Access Journals (Sweden)

    M. F. Panteleev

    2017-01-01

    Full Text Available In natural language processing, Named Entity Linking (NEL) is the task of identifying an entity mentioned in a text and linking it to an entity in a knowledge base (for example, DBpedia). There is currently a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine-learning-based ones. An algorithm combining graph-based and machine learning approaches is proposed, based on stated assumptions about the interrelations of named entities within a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on a knowledge base. Owing to limitations in processing power, solving this task directly is impossible, so a modification is proposed. An independent solution cannot be built on machine learning algorithms alone, owing to the small volume of training datasets relevant to the NEL task; however, their use can help improve the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally. A test dataset was independently generated, and on this basis the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mock-up based on the proposed algorithm was slower than DBpedia Spotlight but showed higher accuracy, which makes further work in this direction promising. The main proposed directions of development are increasing the accuracy of the system and its performance.
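The graph-based component, choosing one candidate entity per mention so that the linked entities are close in the knowledge-base graph, can be sketched with brute-force BFS. As the abstract notes, exhaustive search over candidate combinations is infeasible at scale, which is exactly why a modification is proposed; the sketch below is only viable for tiny candidate sets, and the toy graph is hypothetical.

```python
from collections import deque
from itertools import product

def bfs_distance(graph, a, b):
    """Shortest-path length between two knowledge-base nodes (None if unreachable)."""
    if a == b:
        return 0
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt == b:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

def best_linking(graph, candidate_sets):
    """Pick one candidate per mention minimizing the summed pairwise graph
    distance; combinations with unreachable pairs are discarded."""
    best, best_cost = None, float("inf")
    for combo in product(*candidate_sets):
        dists = [bfs_distance(graph, a, b)
                 for i, a in enumerate(combo) for b in combo[i + 1:]]
        if any(d is None for d in dists):
            continue
        cost = sum(dists)
        if cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost
```

On a toy graph where "Paris" links to "France" but "Paris_Hilton" does not, the mentions ("Paris", "France") resolve to the geographically related pair.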

  20. LOD 1 VS. LOD 2 - Preliminary Investigations Into Differences in Mobile Rendering Performance

    Science.gov (United States)

    Ellul, C.; Altenbuchner, J.

    2013-09-01

    The increasing availability, size and detail of 3D City Model datasets has led to a challenge when rendering such data on mobile devices. Understanding the limitations to the usability of such models on these devices is particularly important given the broadening range of applications - such as pollution or noise modelling, tourism, planning, solar potential - for which these datasets and resulting visualisations can be utilized. Much 3D City Model data is created by extrusion of 2D topographic datasets, resulting in what is known as Level of Detail (LoD) 1 buildings - with flat roofs. However, in the UK the National Mapping Agency (the Ordnance Survey, OS) is now releasing test datasets to Level of Detail (LoD) 2 - i.e. including roof structures. These datasets are designed to integrate with the LoD 1 datasets provided by the OS, and provide additional detail in particular on larger buildings and in town centres. The availability of such integrated datasets at two different Levels of Detail permits investigation into the impact of the additional roof structures (and hence the display of a more realistic 3D City Model) on rendering performance on a mobile device. This paper describes preliminary work carried out to investigate this issue, for the test area of the city of Sheffield (in the UK Midlands). The data is stored in a 3D spatial database as triangles and then extracted and served as a web-based data stream which is queried by an App developed on the mobile device (using the Android environment, Java and OpenGL for graphics). Initial tests have been carried out on two dataset sizes, for the city centre and a larger area, rendering the data onto a tablet to compare results. Results of 52 seconds for rendering LoD 1 data, and 72 seconds for LoD 1 mixed with LoD 2 data, show that the impact of LoD 2 is significant.

  1. Towards the confirmation of QCD on the lattice. Improved actions and algorithms

    International Nuclear Information System (INIS)

    Krieg, Stefan F.

    2009-01-01

    Lattice Quantum Chromodynamics has made tremendous progress over the last decade. New and improved simulation algorithms and lattice actions enable simulations of the theory with unprecedented accuracy. In the first part of this thesis, novel simulation algorithms for dynamical overlap fermions are presented. The generic Hybrid Monte Carlo algorithm is adapted to treat the singularity in the Molecular Dynamics force, to increase the tunneling rate between different topological sectors and to improve the overall volume scaling of the combined algorithm. With this new method, simulations with dynamical overlap fermions can reach smaller lattice spacings, larger volumes, smaller quark masses, and therefore higher precision than had previously been possible. The second part of this thesis is focused on a large-scale simulation aiming to compute the light hadron mass spectrum. This simulation is based on a tree-level Symanzik-improved gauge action and a tree-level improved stout-smeared Wilson clover action. The efficiency of this combination of action and improved simulation algorithms makes it possible to control all systematic errors completely. The simulation therefore provides a highly accurate ab initio calculation of the masses of the light hadrons, such as the proton, which are responsible for 95% of the mass of the visible universe, and confirms Lattice QCD in the light hadron sector. (orig.)

  2. Towards the confirmation of QCD on the lattice. Improved actions and algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Krieg, Stefan F.

    2009-07-01

    Lattice Quantum Chromodynamics has made tremendous progress over the last decade. New and improved simulation algorithms and lattice actions enable simulations of the theory with unprecedented accuracy. In the first part of this thesis, novel simulation algorithms for dynamical overlap fermions are presented. The generic Hybrid Monte Carlo algorithm is adapted to treat the singularity in the Molecular Dynamics force, to increase the tunneling rate between different topological sectors and to improve the overall volume scaling of the combined algorithm. With this new method, simulations with dynamical overlap fermions can reach smaller lattice spacings, larger volumes, smaller quark masses, and therefore higher precision than had previously been possible. The second part of this thesis is focused on a large-scale simulation aiming to compute the light hadron mass spectrum. This simulation is based on a tree-level Symanzik-improved gauge action and a tree-level improved stout-smeared Wilson clover action. The efficiency of this combination of action and improved simulation algorithms makes it possible to control all systematic errors completely. The simulation therefore provides a highly accurate ab initio calculation of the masses of the light hadrons, such as the proton, which are responsible for 95% of the mass of the visible universe, and confirms Lattice QCD in the light hadron sector. (orig.)

  3. Diagnostic Value of Multidetector CT and Its Multiplanar Reformation, Volume Rendering and Virtual Bronchoscopy Postprocessing Techniques for Primary Trachea and Main Bronchus Tumors.

    Directory of Open Access Journals (Sweden)

    Mingyue Luo

    Full Text Available To evaluate the diagnostic value of multidetector CT (MDCT) and its multiplanar reformation (MPR), volume rendering (VR) and virtual bronchoscopy (VB) postprocessing techniques for primary trachea and main bronchus tumors. Detection results of 31 primary trachea and main bronchus tumors with MDCT and its MPR, VR and VB postprocessing techniques, were analyzed retrospectively with regard to tumor locations, tumor morphologies, extramural invasions of tumors, longitudinal involvements of tumors, morphologies and extents of luminal stenoses, distances between main bronchus tumors and trachea carinae, and internal features of tumors. The detection results were compared with that of surgery and pathology. Detection results with MDCT and its MPR, VR and VB were consistent with that of surgery and pathology, included tumor locations (tracheae, n = 19; right main bronchi, n = 6; left main bronchi, n = 6), tumor morphologies (endoluminal nodes with narrow bases, n = 2; endoluminal nodes with wide bases, n = 13; both intraluminal and extraluminal masses, n = 16), extramural invasions of tumors (brokethrough only serous membrane, n = 1; 4.0 mm-56.0 mm, n = 14; no clear border with right atelectasis, n = 1), longitudinal involvements of tumors (3.0 mm, n = 1; 5.0 mm-68.0 mm, n = 29; whole right main bronchus wall and trachea carina, n = 1), morphologies of luminal stenoses (irregular, n = 26; circular, n = 3; eccentric, n = 1; conical, n = 1) and extents (mild, n = 5; moderate, n = 7; severe, n = 19), distances between main bronchus tumors and trachea carinae (16.0 mm, n = 1; invaded trachea carina, n = 1; >20.0 mm, n = 10), and internal features of tumors (fairly homogeneous densities with rather obvious enhancements, n = 26; homogeneous density with obvious enhancement, n = 1; homogeneous density without obvious enhancement, n = 1; not enough homogeneous density with obvious enhancement, n = 1; punctate calcification with obvious enhancement, n = 1; low density

  4. Diagnostic Value of Multidetector CT and Its Multiplanar Reformation, Volume Rendering and Virtual Bronchoscopy Postprocessing Techniques for Primary Trachea and Main Bronchus Tumors.

    Science.gov (United States)

    Luo, Mingyue; Duan, Chaijie; Qiu, Jianping; Li, Wenru; Zhu, Dongyun; Cai, Wenli

    2015-01-01

    To evaluate the diagnostic value of multidetector CT (MDCT) and its multiplanar reformation (MPR), volume rendering (VR) and virtual bronchoscopy (VB) postprocessing techniques for primary trachea and main bronchus tumors. Detection results of 31 primary trachea and main bronchus tumors with MDCT and its MPR, VR and VB postprocessing techniques, were analyzed retrospectively with regard to tumor locations, tumor morphologies, extramural invasions of tumors, longitudinal involvements of tumors, morphologies and extents of luminal stenoses, distances between main bronchus tumors and trachea carinae, and internal features of tumors. The detection results were compared with that of surgery and pathology. Detection results with MDCT and its MPR, VR and VB were consistent with that of surgery and pathology, included tumor locations (tracheae, n = 19; right main bronchi, n = 6; left main bronchi, n = 6), tumor morphologies (endoluminal nodes with narrow bases, n = 2; endoluminal nodes with wide bases, n = 13; both intraluminal and extraluminal masses, n = 16), extramural invasions of tumors (brokethrough only serous membrane, n = 1; 4.0 mm-56.0 mm, n = 14; no clear border with right atelectasis, n = 1), longitudinal involvements of tumors (3.0 mm, n = 1; 5.0 mm-68.0 mm, n = 29; whole right main bronchus wall and trachea carina, n = 1), morphologies of luminal stenoses (irregular, n = 26; circular, n = 3; eccentric, n = 1; conical, n = 1) and extents (mild, n = 5; moderate, n = 7; severe, n = 19), distances between main bronchus tumors and trachea carinae (16.0 mm, n = 1; invaded trachea carina, n = 1; >20.0 mm, n = 10), and internal features of tumors (fairly homogeneous densities with rather obvious enhancements, n = 26; homogeneous density with obvious enhancement, n = 1; homogeneous density without obvious enhancement, n = 1; not enough homogeneous density with obvious enhancement, n = 1; punctate calcification with obvious enhancement, n = 1; low density without

  5. A Genetic Algorithm Approach to the Optimization of a Radioactive Waste Treatment System

    International Nuclear Information System (INIS)

    Yang, Yeongjin; Lee, Kunjai; Koh, Y.; Mun, J.H.; Kim, H.S.

    1998-01-01

    This study applies goal programming and genetic algorithm techniques to the analysis of management and operational problems in a radioactive waste treatment system (RWTS). A typical RWTS is modeled and solved by goal programming and a genetic algorithm to study and resolve the effects of conflicting objectives such as cost, the limitation of radioactivity released to the environment, equipment utilization, and the total treatable radioactive waste volume before discharge and disposal. The developed model is validated and verified using actual data obtained from the RWTS at Kyoto University in Japan. The solution obtained by goal programming and the genetic algorithm indicates the optimal operating point, which maximizes the total treatable radioactive waste volume and minimizes the released radioactivity of liquid waste even under restricted resources. A comparison of the two methods shows very similar results. (author)
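
    A minimal sketch of the genetic-algorithm side of such an optimization, assuming a toy two-variable model (a treated-volume fraction and a treatment intensity; the variables, penalty weights and release limit are hypothetical, not the paper's):

```python
import random

def fitness(x, max_release=1.0):
    # x = (treated_volume_fraction, treatment_intensity), both in [0, 1]
    volume, intensity = x
    release = volume * (1.0 - intensity)           # more treatment -> less release
    cost = 2.0 * intensity                         # treatment effort drives cost
    penalty = 10.0 * max(0.0, release - max_release)
    return volume - 0.1 * cost - penalty

def genetic_algorithm(pop_size=40, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            # crossover by averaging, plus small Gaussian mutation, clipped to [0, 1]
            child = tuple(min(1.0, max(0.0, (x + y) / 2 + rng.gauss(0, 0.05)))
                          for x, y in zip(a, b))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm()
```

Because the parents are carried over unchanged, the best solution never degrades from one generation to the next.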

  6. Energy management algorithm for an optimum control of a photovoltaic water pumping system

    International Nuclear Information System (INIS)

    Sallem, Souhir; Chaabene, Maher; Kamoun, M.B.A.

    2009-01-01

    The effectiveness of photovoltaic water pumping systems depends on the adequacy between the generated energy and the volume of pumped water. This paper presents an intelligent algorithm that decides on the interconnection modes and instants of the photovoltaic installation components: battery, water pump and photovoltaic panel. The decision is made by fuzzy rules on the basis of the Photovoltaic Panel Generation (PVPG) forecast for the considered day and the power required by the load, while accounting for battery safety. The algorithm aims to extend the operation time of the water pump by controlling a switching unit that links the system components with respect to multi-objective management criteria. The algorithm implementation demonstrates that the approach extends the pumping period by more than 5 h a day, which yields a mean daily improvement of 97% in the volume of water pumped.
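
    A simplified sketch of such a switching decision, with crisp rules standing in for the paper's fuzzy rule base (the mode names and the battery limits soc_min/soc_max are illustrative assumptions, not taken from the paper):

```python
def switching_mode(pv_power, load_power, soc, soc_min=0.3, soc_max=0.9):
    """Pick an interconnection mode from PV power, load power and battery
    state of charge (soc). Illustrative crisp rules, not the paper's fuzzy base."""
    surplus = pv_power - load_power
    if surplus >= 0 and soc < soc_max:
        return "pv_to_pump_and_charge"    # run pump from PV, charge battery with surplus
    if surplus >= 0:
        return "pv_to_pump"               # battery full: PV feeds the pump only
    if soc > soc_min:
        return "pv_plus_battery_to_pump"  # battery covers the PV deficit
    return "pump_off"                     # protect the battery from deep discharge
```

A fuzzy version would replace the hard thresholds with membership functions and blend the modes' firing strengths.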

  7. Moisture transport properties of brick – comparison of exposed, impregnated and rendered brick

    DEFF Research Database (Denmark)

    Hansen, Tessa Kvist; Bjarløv, Søren Peter; Peuhkuri, Ruut

    2016-01-01

    With regard to internal insulation of preservation-worthy brick façades, external moisture sources, such as wind-driven rain exposure, inevitably have an impact on moisture conditions within the masonry construction. Surface treatments, such as hydrophobation or render, may remedy the impacts...... of external moisture. In the present paper, the surface absorption of liquid water on masonry façades of untreated, hydrophobated and rendered brick is determined experimentally and compared. The experimental work focuses on methods that can be applied on-site, namely Karsten tube measurements. These measurements...... are supplemented with results from laboratory measurements of the water absorption coefficient by partial immersion. Based on the obtained measurement results, simulations are made with external liquid water loads for determination of moisture conditions within the masonry for different surface treatments. Experimental...

  8. Hardware Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32 bit floating point texture capabilities to obtain validated solutions to the radiative transport equation for X-rays. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedra that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. We show that the hardware accelerated solution is faster than the current technique used by scientists.
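
    The absorption-only regime reduces to the Beer-Lambert line integral of attenuation along each ray. A CPU sketch on a regular voxel grid (the paper operates on curvilinear hexahedra with GPU textures; this stand-in only illustrates the underlying math):

```python
import numpy as np

def simulated_radiograph(mu, dz, i0=1.0):
    """Absorption-only radiograph for a regular voxel grid.
    mu: attenuation coefficients, shape (nx, ny, nz); rays travel along z;
    dz: voxel thickness along the ray; i0: incident intensity."""
    optical_depth = mu.sum(axis=2) * dz        # line integral of mu along each ray
    return i0 * np.exp(-optical_depth)         # Beer-Lambert attenuation

mu = np.zeros((4, 4, 8))
mu[1:3, 1:3, :] = 0.5                          # a dense block in the middle
image = simulated_radiograph(mu, dz=0.1)       # shadow of the block on a 4x4 detector
```

The GPU versions in the paper accumulate the same per-cell optical depths via projected hexahedra or sorted tetrahedra instead of axis-aligned sums.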

  9. Evaluating the agreement between tumour volumetry and the estimated volumes of tumour lesions using an algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Laubender, Ruediger P. [German Cancer Consortium (DKTK), Heidelberg (Germany); University Hospital Munich - Campus Grosshadern, Institute of Medical Informatics, Biometry, and Epidemiology (IBE), Munich (Germany); German Cancer Research Center (DKFZ), Heidelberg (Germany); Lynghjem, Julia; D' Anastasi, Melvin; Graser, Anno [University Hospital Munich - Campus Grosshadern, Institute for Clinical Radiology, Munich (Germany); Heinemann, Volker; Modest, Dominik P. [University Hospital Munich - Campus Grosshadern, Department of Medical Oncology, Munich (Germany); Mansmann, Ulrich R. [University Hospital Munich - Campus Grosshadern, Institute of Medical Informatics, Biometry, and Epidemiology (IBE), Munich (Germany); Sartorius, Ute; Schlichting, Michael [Merck KGaA, Darmstadt (Germany)

    2014-07-15

    To evaluate the agreement between tumour volume derived from semiautomated volumetry (SaV) and tumour volume estimated either as a spherical volume from the longest lesion diameter (LD), according to Response Evaluation Criteria In Solid Tumors (RECIST), or as an ellipsoid volume from the LD and the longest orthogonal diameter (LOD), according to World Health Organization (WHO) criteria. Twenty patients with metastatic colorectal cancer from the CIOX trial were included. A total of 151 target lesions were defined by baseline computed tomography and followed until disease progression. All assessments were performed by a single reader. A variance component model was used to compare the three volume versions. There was a significant difference between the SaV and RECIST-based tumour volumes, while the same model showed no significant difference between the SaV and WHO-based volumes. Scatter plots showed that the RECIST-based volumes overestimate lesion volume. Intraclass correlation showed nearly perfect agreement between the SaV and WHO-based relative changes in tumour volume. Estimating the volume of metastatic lesions using both the LD and LOD (WHO) is more accurate than estimates based on the LD only (RECIST), which overestimate lesion volume. The good agreement between the SaV and WHO-based relative changes in tumour volume enables a reasonable approximation of three-dimensional tumour burden. (orig.)
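
    The abstract does not spell out the volume formulas; the sketch below uses the conventional choices, a sphere of diameter LD for the RECIST-based estimate and an ellipsoid built from LD and LOD for the WHO-based one (taking the third axis equal to the LOD is our assumption):

```python
import math

def sphere_volume_recist(ld):
    """RECIST-style estimate: sphere with diameter equal to the longest diameter (LD)."""
    return math.pi / 6.0 * ld ** 3

def ellipsoid_volume_who(ld, lod):
    """WHO-style estimate: ellipsoid with axes LD, LOD and (assumed) LOD."""
    return math.pi / 6.0 * ld * lod ** 2
```

For a flattened lesion with LD = 30 mm and LOD = 20 mm, the sphere model gives roughly 14,137 mm^3 versus about 6,283 mm^3 for the ellipsoid, illustrating why the LD-only estimate overestimates non-spherical lesions.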

  10. A fast, robust algorithm for power line interference cancellation in neural recording

    Science.gov (United States)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    interference variations, significant SNR improvement, low computational complexity and memory requirement and straightforward parameter adjustment. These features render the algorithm suitable for wearable and implantable sensor applications, where reliable and real-time cancellation of the interference is desired.
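
    The surviving fragment does not describe the algorithm itself. As background only, a fixed second-order IIR notch at the power line frequency is the classic non-adaptive baseline that such work improves on (the frequency f0, sampling rate fs and pole radius r below are illustrative):

```python
import math

def iir_notch(signal, fs, f0=50.0, r=0.99):
    """Second-order IIR notch at f0 Hz; a simple non-adaptive stand-in for the
    adaptive power line interference canceller described in the paper."""
    w0 = 2.0 * math.pi * f0 / fs
    b = [1.0, -2.0 * math.cos(w0), 1.0]          # zeros on the unit circle at +/- w0
    a = [1.0, -2.0 * r * math.cos(w0), r * r]    # poles just inside, radius r
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in signal:
        out = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y.append(out)
        x1, x2, y1, y2 = x, x1, out, y1          # shift the delay line
    return y

fs = 1000.0
sig = [math.sin(2 * math.pi * 50.0 * n / fs) for n in range(2000)]
clean = iir_notch(sig, fs)   # 50 Hz component strongly attenuated after the transient
```

An adaptive canceller tracks drifting interference frequency and amplitude instead of assuming a fixed f0, which is precisely what fixed notches handle poorly.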

  11. Dose calculation algorithm for narrow heavy charged-particle beams

    Energy Technology Data Exchange (ETDEWEB)

    Barna, E A; Kappas, C [Department of Medical Physics, School of Medicine, University of Patras (Greece); Scarlat, F [National Institute for Laser and Plasma Physics, Bucharest (Romania)

    1999-12-31

    The dose distributional advantages of heavy charged particles can be fully exploited by using very efficient and accurate dose calculation algorithms, which can generate optimal three-dimensional scanning patterns. An inverse therapy planning algorithm for dynamically scanned, narrow heavy charged-particle beams is presented in this paper. The irradiation 'start point' is defined at the distal end of the target volume, at the lower right in a beam's-eye view. The peak dose of the first elementary beam is set equal to the prescribed dose in the target volume and is defined as the reference dose. The weighting factor of any Bragg peak is determined by the residual dose at the point of irradiation, calculated as the difference between the reference dose and the cumulative dose delivered at that point by all the previous Bragg peaks. The final pattern consists of the weighted Bragg-peak irradiation density. Dose distributions were computed using two different scanning steps, equal to 0.5 mm and 1 mm respectively. Very accurate and precisely localized dose distributions, conformal to the target volume, were obtained. (authors) 6 refs., 3 figs.
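
    The residual-dose weighting scheme can be sketched as follows, assuming unit-peak-height dose kernels and distal-first scan order (the kernel matrix in the usage example is invented for illustration, not measured beam data):

```python
def bragg_peak_weights(dose_kernels, prescribed_dose):
    """Weights for sequentially scanned Bragg peaks (sketch of the residual-dose
    scheme). dose_kernels[i][j] = dose at spot j from a unit-weight peak centred
    at spot i; peaks are processed distal-first, matching the scan order."""
    n = len(dose_kernels)
    delivered = [0.0] * n
    weights = []
    for i in range(n):
        residual = prescribed_dose - delivered[i]   # dose still needed at spot i
        w = max(0.0, residual)                      # unit peak height assumed
        weights.append(w)
        for j in range(n):
            delivered[j] += w * dose_kernels[i][j]  # accumulate upstream dose
    return weights

weights = bragg_peak_weights(
    [[1.0, 0.3, 0.1],      # most distal peak also doses the two upstream spots
     [0.0, 1.0, 0.3],
     [0.0, 0.0, 1.0]], prescribed_dose=2.0)
```

The first weight equals the prescribed (reference) dose; each later peak only fills in what the earlier, deeper peaks have not already delivered.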

  12. Metal artifact reduction in x-ray computed tomography by using an analytical DBP-type algorithm

    Science.gov (United States)

    Wang, Zhen; Kudo, Hiroyuki

    2012-03-01

    This paper investigates the common metal artifact problem in X-ray computed tomography (CT). Artifacts may render the reconstructed image non-diagnostic: because of inaccurate beam-hardening correction near high-attenuation objects, a satisfactory image cannot be reconstructed from projections with missing or distorted data. The traditional analytical metal artifact reduction (MAR) method proceeds in three steps: first, subtract the metallic-object part of the projection data from the originally acquired projection; second, complete the subtracted part of the projection using some interpolation method; third, reconstruct from the interpolated projection with the filtered back-projection (FBP) algorithm. The interpolation error introduced in the second step amounts to unrealistic assumptions about the missing data, leading to DC-shift artifacts in the reconstructed images. We propose a differentiated back-projection (DBP) type MAR method that replaces the FBP algorithm in the third step with a DBP algorithm. In FBP, the interpolated projection is filtered at each view angle before back-projection, so the interpolation error propagates across the whole projection. The DBP algorithm, by contrast, allows filtering after back-projection along the Hilbert-filter direction, so the influence of the interpolation error is reduced and improved reconstruction quality can be expected. In other words, choosing DBP instead of FBP means that less of the projection data used in reconstruction is contaminated by interpolation error. A simulation study with a given phantom was performed to evaluate the proposed method.
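
    The first two steps of the traditional MAR pipeline, metal-trace removal and in-painting, can be sketched on a sinogram as below (simple per-view linear interpolation; the paper's contribution, the DBP-type reconstruction of step three, is not reproduced here):

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Classic MAR pre-processing: replace metal-affected detector bins in each
    projection view by linear interpolation from their clean neighbours."""
    repaired = sinogram.astype(float).copy()
    cols = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):        # one detector row per view angle
        bad = metal_mask[i]
        if bad.any() and (~bad).any():
            repaired[i, bad] = np.interp(cols[bad], cols[~bad], sinogram[i, ~bad])
    return repaired

sino = np.array([[1.0, 2.0, 100.0, 4.0, 5.0]])       # one view, metal spike in bin 2
mask = np.array([[False, False, True, False, False]])
repaired = interpolate_metal_trace(sino, mask)       # spike replaced by neighbours
```

Whatever error this in-painting introduces is then either spread by FBP's per-view filtering or, as the paper argues, confined by filtering after back-projection in the DBP scheme.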

  13. Apparatus for rendering at least a portion of a device inoperable and related methods

    Energy Technology Data Exchange (ETDEWEB)

    Daniels, Michael A.; Steffler, Eric D.; Hartenstein, Steven D.; Wallace, Ronald S.

    2016-11-08

    Apparatus for rendering at least a portion of a device inoperable may include a containment structure having a first compartment that is configured to receive a device therein and a movable member configured to receive a cartridge having reactant material therein. The movable member is configured to be inserted into the first compartment of the containment structure and to ignite the reactant material within the cartridge. Methods of rendering at least a portion of a device inoperable may include disposing the device into the first compartment of the containment structure, inserting the movable member into the first compartment of the containment structure, igniting the reactant material in the cartridge, and expelling molten metal onto the device.

  14. Configuration space analysis of common cost functions in radiotherapy beam-weight optimization algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rowbottom, Carl Graham [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom); Webb, Steve [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom)

    2002-01-07

    The successful implementation of downhill search engines in radiotherapy optimization algorithms depends on the absence of local minima in the search space. Such techniques are much faster than stochastic optimization methods but may become trapped in local minima if they exist. A technique known as 'configuration space analysis' was applied to examine the search space of cost functions used in radiotherapy beam-weight optimization algorithms. A downhill-simplex beam-weight optimization algorithm was run repeatedly to produce a frequency distribution of final cost values. By plotting the frequency distribution as a function of final cost, the existence of local minima can be determined. Common cost functions such as the quadratic deviation of dose to the planning target volume (PTV), integral dose to organs-at-risk (OARs), dose-threshold and dose-volume constraints for OARs were studied. Combinations of the cost functions were also considered. The simple cost function terms such as the quadratic PTV dose and integral dose to OAR cost function terms are not susceptible to local minima. In contrast, dose-threshold and dose-volume OAR constraint cost function terms are able to produce local minima in the example case studied. (author)
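
    Configuration space analysis can be illustrated with a toy 1-D cost that has one local and one global minimum: repeated downhill runs from random starts yield a frequency distribution of final cost values whose distinct clusters expose the local minimum (the cost function and the greedy descent below are illustrative, not the paper's simplex code):

```python
import random

def cost(x):
    """1-D cost with a global minimum near x = -1.30 and a local one near x = 1.13."""
    return x ** 4 - 3.0 * x ** 2 + x

def local_descent(f, x0, step=0.1, tol=1e-6):
    """Greedy downhill search: step while it improves, otherwise halve the step."""
    x = x0
    while step > tol:
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            step *= 0.5
    return x

rng = random.Random(1)
final_costs = [cost(local_descent(cost, rng.uniform(-3.0, 3.0))) for _ in range(200)]
basins = sorted({round(c, 2) for c in final_costs})
# two distinct clusters of final cost values reveal the local minimum
```

A single cluster in the histogram indicates a search space safe for downhill engines; multiple clusters, as here, indicate the local minima that dose-volume constraint terms can introduce.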

  15. Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data

    Science.gov (United States)

    Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.

    2017-12-01

    With growing attention to the ocean and the rapid development of marine sensing, there is increasing demand for realistic simulation and interactive visualization of the marine environment in real time. Based on advanced technologies such as GPU rendering, CUDA parallel computing and a rapid grid-oriented strategy, a series of efficient, high-quality visualization methods that can deal with large-scale, multi-dimensional marine data in different environmental circumstances is proposed in this paper. Firstly, a high-quality seawater simulation is realized using an FFT algorithm, bump mapping and texture-animation technology. Secondly, large-scale multi-dimensional marine hydrological environmental data are visualized with 3D interaction techniques and volume rendering. Thirdly, seabed terrain data are simulated with an improved Delaunay algorithm, surface reconstruction, a dynamic LOD algorithm and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a convincing marine environment simulation but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is built on the OSG 3D rendering engine. It is integrated with the marine visualization methods mentioned above and shows movement processes, physical parameters, and current velocity and direction for different types of deep-water oil spill particles (oil particles, hydrate particles, gas particles, etc.) dynamically and simultaneously in multiple dimensions. Such an application provides valuable reference and decision-making information for understanding the progress of an oil spill in deep water, which is helpful for ocean disaster forecasting, warning and emergency response.

  16. Waste-aware fluid volume assignment for flow-based microfluidic biochips

    DEFF Research Database (Denmark)

    Schneider, Alexander Rüdiger; Pop, Paul; Madsen, Jan

    2017-01-01

    complex Fluidic Units (FUs) such as switches, micropumps, mixers and separators can be constructed. When running a biochemical application on a FBMB, fluid volumes are dispensed from input reservoirs and used by the FUs. Given a biochemical application and a biochip, we are interested in determining...... the fluid volume assignment for each operation of the application, such that the FUs volume requirements are satisfied, while over- and underflow are avoided and the total volume of fluid used is minimized. We propose an algorithm for this fluid assignment problem. Compared to previous work, our method...

  17. Semantics by analogy for illustrative volume visualization☆

    Science.gov (United States)

    Gerl, Moritz; Rautek, Peter; Isenberg, Tobias; Gröller, Eduard

    2012-01-01

    We present an interactive graphical approach for the explicit specification of semantics for volume visualization. This explicit and graphical specification of semantics for volumetric features allows us to visually assign meaning to both input and output parameters of the visualization mapping. This is in contrast to the implicit way of specifying semantics using transfer functions. In particular, we demonstrate how to realize a dynamic specification of semantics which allows a wide range of mappings to be explored flexibly. Our approach is based on three concepts. First, we use semantic shader augmentation to automatically add rule-based rendering functionality to static visualization mappings in a shader program, while preserving the visual abstraction that the initial shader encodes. With this technique we extend recent developments that define a mapping between data attributes and visual attributes with rules, which are evaluated using fuzzy logic. Second, we let users define the semantics by analogy through brushing on renderings of the data attributes of interest. Third, the rules are specified graphically in an interface that provides visual clues for potential modifications. Together, the presented methods offer a high degree of freedom in the specification and exploration of rule-based mappings and avoid the limitations of a linguistic rule formulation. PMID:23576827
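
    A toy example of a fuzzy-logic rule mapping a data attribute to a visual attribute, in the spirit of the rule evaluation described above (the membership function and the rule itself are invented for illustration):

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function rising from a to b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def opacity_from_density(density):
    """Rule: 'IF density IS high THEN opacity IS high', evaluated with fuzzy
    logic; a toy stand-in for the rule-based mapping in the paper."""
    high = triangular(density, 0.4, 0.8, 1.2)   # membership in 'high density'
    return high * 1.0                            # defuzzified opacity in [0, 1]
```

In the paper such rules are specified graphically and compiled into shader code rather than written by hand.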

  18. CVFEM for Multiphase Flow with Disperse and Interface Tracking, and Algorithms Performances

    Directory of Open Access Journals (Sweden)

    M. Milanez

    2015-12-01

    Full Text Available A Control-Volume Finite-Element Method (CVFEM) is newly formulated within Eulerian and spatial averaging frameworks for effective simulation of disperse transport, deposit distribution and interface tracking. The algorithms are implemented alongside an existing continuous-phase algorithm. Flow terms are newly implemented for a control volume (CV) fixed in space, and the CVs' equations are assembled based on a finite-element method (FEM). Upon impacting stationary and moving boundaries, the disperse phase changes its phase, and the solver triggers identification of CVs with excess deposit and of their neighboring CVs for its accommodation in front of an interface. The solver then updates boundary conditions on the moving interface as well as domain conditions on the accumulating deposit. The algorithms' performance is corroborated on illustrative simulations against novel and existing Eulerian and Lagrangian solutions, including (i) other, i.e. external, methods with analytical and physical experimental formulations, and (ii) characteristics internal to the CVFEM.

  19. Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering

    KAUST Repository

    Fonarev, Alexander

    2017-02-07

    Cold start problem in Collaborative Filtering can be solved by asking new users to rate a small seed set of representative items or by asking representative users to rate a new item. The question is how to build a seed set that can give enough preference information for making good recommendations. One of the most successful approaches, called Representative Based Matrix Factorization, is based on the Maxvol algorithm. Unfortunately, this approach has one important limitation: a seed set of a particular size requires a rating matrix factorization of fixed rank that should coincide with that size. This is not necessarily optimal in the general case. In the current paper, we introduce a fast algorithm for an analytical generalization of this approach that we call Rectangular Maxvol. It allows the rank of the factorization to be lower than the required size of the seed set. Moreover, the paper includes the theoretical analysis of the method's error, the complexity analysis of the existing methods and the comparison to the state-of-the-art approaches.
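
    As a rough illustration of volume-maximizing row selection (a greedy projection heuristic in the spirit of the Maxvol family, not the actual Maxvol or Rectangular Maxvol algorithm), one can repeatedly pick the row with the largest residual norm and deflate the chosen direction:

```python
import numpy as np

def greedy_row_selection(a, k):
    """Greedily pick k rows of a (shape n x r) with large spanned volume:
    take the row with the largest residual norm, project it out, repeat.
    Assumes rank(a) >= k. A simple proxy for Maxvol-style seed selection."""
    r = a.astype(float).copy()
    selected = []
    for _ in range(k):
        norms = np.linalg.norm(r, axis=1)
        i = int(np.argmax(norms))
        selected.append(i)
        q = r[i] / norms[i]                  # unit vector of the chosen row
        r = r - np.outer(r @ q, q)           # deflate that direction everywhere
    return selected

a = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.5, 0.5]])
rows = greedy_row_selection(a, 2)            # picks the two most independent rows
```

The rectangular setting of the paper additionally allows k to exceed the factorization rank r, which this naive deflation cannot handle.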

  20. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    Science.gov (United States)

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

    Thermographic inspection has been widely applied to non-destructive testing and evaluation, with the capabilities of rapid, contactless, large-surface-area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high level of performance, specific physics-based models that describe defect generation and enable the precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns via an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold and render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is used as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.
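
    A minimal sketch of genetic threshold selection for segmentation, using an Otsu-style between-class variance as the fitness (the GA operators and parameters are illustrative; the paper couples this idea with physics-based first-order statistical features):

```python
import random

def between_class_variance(pixels, t):
    """Otsu-style fitness: variance between the two classes split at threshold t."""
    low = [p for p in pixels if p < t]
    high = [p for p in pixels if p >= t]
    if not low or not high:
        return 0.0
    w0, w1 = len(low) / len(pixels), len(high) / len(pixels)
    m0, m1 = sum(low) / len(low), sum(high) / len(high)
    return w0 * w1 * (m0 - m1) ** 2

def genetic_threshold(pixels, pop_size=20, generations=40, seed=0):
    """Evolve a scalar threshold maximizing between-class variance."""
    rng = random.Random(seed)
    lo, hi = min(pixels), max(pixels)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: between_class_variance(pixels, t), reverse=True)
        elite = pop[: pop_size // 2]
        # refill with mutated copies of elite members, clipped to the data range
        pop = elite + [min(hi, max(lo, rng.choice(elite) + rng.gauss(0, (hi - lo) * 0.05)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda t: between_class_variance(pixels, t))

pixels = [0.1] * 50 + [0.9] * 50        # two clearly separated intensity classes
t = genetic_threshold(pixels)           # any threshold between the classes is optimal
```

A plain Otsu scan over a fixed histogram would also work here; the genetic control becomes useful when the fitness is tied to richer, physics-derived statistics.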