WorldWideScience

Sample records for volume rendering algorithm

  1. Feed-forward volume rendering algorithm for moderately parallel MIMD machines

    Science.gov (United States)

    Yagel, Roni

    1993-01-01

    Algorithms for direct volume rendering on parallel and vector processors are investigated. Volumes are transformed efficiently on parallel processors by dividing the data into slices and beams of voxels. Equal-sized sets of slices along one axis are distributed to processors. Parallelism is achieved at two levels. Because each slice can be transformed independently of others, processors transform their assigned slices with no communication, thus providing maximum possible parallelism at the first level. Within each slice, consecutive beams are incrementally transformed using coherency in the transformation computation. Also, coherency across slices can be exploited to further enhance performance. This coherency yields the second level of parallelism through the use of vector processing or pipelining. Other ongoing efforts include investigations into image reconstruction techniques, load balancing strategies, and improving performance.
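
    A minimal sketch of the first level of parallelism described above, with each slice transformed independently by a pool of worker processes. The in-slice affine matrix A and offset T are hypothetical stand-ins for the viewing transformation, not the paper's actual decomposition into slices and beams:

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor
    from scipy.ndimage import affine_transform

    # Hypothetical in-slice part of the viewing transformation (rotation/scale + shift).
    A = np.array([[0.9, -0.2],
                  [0.2,  0.9]])
    T = np.array([3.0, -1.0])

    def transform_slice(slice_2d):
        # Each slice is resampled independently of every other slice,
        # so workers need no communication.
        return affine_transform(slice_2d, A, offset=T, order=1)

    if __name__ == "__main__":
        volume = np.random.rand(64, 256, 256)   # slices along axis 0
        with ProcessPoolExecutor() as pool:
            transformed = np.stack(list(pool.map(transform_slice, volume)))
        print(transformed.shape)                # (64, 256, 256)
    ```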

  2. Technical analysis of volume-rendering algorithms: application in low-contrast structures using liver vascularisation as a model

    International Nuclear Information System (INIS)

    Cademartiri, Filippo; Luccichenti, Giacomo; Runza, Giuseppe; Bartolotta, Tommaso Vincenzo; Midiri, Massimo; Gualerzi, Massimo; Brambilla, Lorenzo; Coruzzi, Paolo; Soliani, Paolo; Sianesi, Mario

    2005-01-01

    Purpose: To assess the influence of pre-set volume rendering opacity curves (OC) on image quality and to identify which absolute parameters (density of aorta, hepatic parenchyma and portal vein) affect visualization of portal vascular structures (low-contrast structures). Materials and methods: Twenty-two patients underwent a dual-phase spiral CT with the following parameters: collimation 3 mm, pitch 2, increment 1 mm. Three scans were performed: one without contrast medium and the other two after the injection of contrast material (conventionally identified as 'arterial' and 'portal'). The images were sent to a workstation running on an NT platform equipped with post-processing software allowing three-dimensional (3D) reconstructions to generate volume-rendered images of the vascular supply to the liver. Correlations between the absolute values of aorta, liver and portal vein density, OC parameters, and image quality were assessed. Results: 3D images generated using pre-set OC obtained a much lower overall quality score than those produced with OC set by the operator. High contrast between the liver and the portal vein, for example during the portal vascular phase, allows wider windows, thus improving image quality. Conversely, the OC in the parenchymal phase scans must have a high gradient in order to better differentiate between the vascular structures and the surrounding hepatic parenchyma. Conclusions: Image features considered to be of interest by the operator cannot be adequately captured by means of pre-set OC. Due to their strong individual variability, automatic 3D algorithms cannot be universally applied: they should be adapted to both image and patient characteristics.

  3. Technical analysis of volume-rendering algorithms: application in low-contrast structures using liver vascularisation as a model; Analisi tecnica degli algoritmi di volume rendering: applicazione alle strutture a basso contrasto usando come modello la vascolarizzazione epatica

    Energy Technology Data Exchange (ETDEWEB)

    Cademartiri, Filippo [Erasmus Medical Center, Rotterdam (Netherlands); Luccichenti, Giacomo [Fondazione Biomedica Europea ONLUS, Roma (Italy); Runza, Giuseppe; Bartolotta, Tommaso Vincenzo; Midiri, Massimo [Palermo Univ., Palermo (Italy). Sezione di scienze radiologiche; Gualerzi, Massimo; Brambilla, Lorenzo; Coruzzi, Paolo [Parma Univ., Parma (Italy). UO di prevenzione e riabilitazione vascolare, Fondazione Don C. Gnocchi ONLUS; Soliani, Paolo; Sianesi, Mario [Parma Univ., Parma (Italy). Dipartimento di chirurgia

    2005-04-01

    Purpose: To assess the influence of pre-set volume rendering opacity curves (OC) on image quality and to identify which absolute parameters (density of aorta, hepatic parenchyma and portal vein) affect visualization of portal vascular structures (low-contrast structures). Materials and methods: Twenty-two patients underwent a dual-phase spiral CT with the following parameters: collimation 3 mm, pitch 2, increment 1 mm. Three scans were performed: one without contrast medium and the other two after the injection of contrast material (conventionally identified as 'arterial' and 'portal'). The images were sent to a workstation running on an NT platform equipped with post-processing software allowing three-dimensional (3D) reconstructions to generate volume-rendered images of the vascular supply to the liver. Correlations between the absolute values of aorta, liver and portal vein density, OC parameters, and image quality were assessed. Results: 3D images generated using pre-set OC obtained a much lower overall quality score than those produced with OC set by the operator. High contrast between the liver and the portal vein, for example during the portal vascular phase, allows wider windows, thus improving image quality. Conversely, the OC in the parenchymal phase scans must have a high gradient in order to better differentiate between the vascular structures and the surrounding hepatic parenchyma. Conclusions: Image features considered to be of interest by the operator cannot be adequately captured by means of pre-set OC. Due to their strong individual variability, automatic 3D algorithms cannot be universally applied: they should be adapted to both image and patient characteristics.

  4. Local and Global Illumination in the Volume Rendering Integral

    Energy Technology Data Exchange (ETDEWEB)

    Max, N; Chen, M

    2005-10-21

    This article is intended as an update of the major survey by Max [1] on optical models for direct volume rendering. It provides a brief overview of the subject scope covered by [1], and brings recent developments, such as new shadow algorithms and refraction rendering, into perspective. In particular, we examine three fundamental aspects of direct volume rendering, namely the volume rendering integral, local illumination models and global illumination models, in a wavelength-independent manner. We also review developments in spectral volume rendering, in which visible light is treated as a form of electromagnetic radiation and optical models are implemented in conjunction with representations of spectral power distributions. This survey can provide a basis for, and encourage, new efforts for developing and using complex illumination models to achieve better realism and perception through optical correctness.
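
    For readers new to the topic, the wavelength-independent emission-absorption model at the heart of the volume rendering integral can be sketched with a generic front-to-back compositing loop (an illustrative formulation, not code from the article):

    ```python
    import numpy as np

    def composite_ray(colors, extinctions, ds=1.0):
        """Front-to-back evaluation of the emission-absorption volume rendering
        integral: each sample contributes its color weighted by its own opacity
        and by the transmittance accumulated in front of it."""
        radiance, transmittance = 0.0, 1.0
        for c, tau in zip(colors, extinctions):
            alpha = 1.0 - np.exp(-tau * ds)        # opacity of this ray segment
            radiance += transmittance * alpha * c  # add attenuated emission
            transmittance *= 1.0 - alpha           # update remaining transparency
            if transmittance < 1e-4:               # early ray termination
                break
        return radiance

    # toy ray: constant color, material getting denser toward the back
    print(composite_ray(colors=np.full(100, 0.8), extinctions=np.linspace(0.0, 0.5, 100)))
    ```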

  5. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    KAUST Repository

    Sicat, Ronell Barrera; Kruger, Jens; Moller, Torsten; Hadwiger, Markus

    2014-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined

  6. Haptic rendering foundations, algorithms, and applications

    CERN Document Server

    Lin, Ming C

    2008-01-01

    For a long time, human beings have dreamed of a virtual world where it is possible to interact with synthetic entities as if they were real. It has been shown that the ability to touch virtual objects increases the sense of presence in virtual environments. This book provides an authoritative overview of state-of-the-art haptic rendering algorithms and their applications. The authors examine various approaches and techniques for designing touch-enabled interfaces for a number of applications, including medical training, model design, and maintainability analysis for virtual prototyping, scienti

  7. Remote volume rendering pipeline for mHealth applications

    Science.gov (United States)

    Gutenko, Ievgeniia; Petkov, Kaloian; Papadopoulos, Charilaos; Zhao, Xin; Park, Ji Hwan; Kaufman, Arie; Cha, Ronald

    2014-03-01

    We introduce a novel remote volume rendering pipeline for medical visualization targeted for mHealth (mobile health) applications. The necessity of such a pipeline stems from the large size of the medical imaging data produced by current CT and MRI scanners with respect to the complexity of the volumetric rendering algorithms. For example, the resolution of typical CT Angiography (CTA) data easily reaches 512^3 voxels and can exceed 6 gigabytes in size by spanning over the time domain while capturing a beating heart. This explosion in data size makes data transfers to mobile devices challenging, and even when the transfer problem is resolved the rendering performance of the device still remains a bottleneck. To deal with this issue, we propose a thin-client architecture, where the entirety of the data resides on a remote server where the image is rendered and then streamed to the client mobile device. We utilize the display and interaction capabilities of the mobile device, while performing interactive volume rendering on a server capable of handling large datasets. Specifically, upon user interaction the volume is rendered on the server and encoded into an H.264 video stream. H.264 is ubiquitously hardware accelerated, resulting in faster compression and lower power requirements. The choice of low-latency CPU- and GPU-based encoders is particularly important in enabling the interactive nature of our system. We demonstrate a prototype of our framework using various medical datasets on commodity tablet devices.

  8. Lighting design for globally illuminated volume rendering.

    Science.gov (United States)

    Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    With the evolution of graphics hardware, high quality global illumination becomes available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects which are closer to real world scenes, and has proven useful for enhancing volume data visualization to enable better depth and shape perception. However, setting up optimal lighting could be a nontrivial task for average users. Previous lighting design work for volume visualization did not consider global light transport. In this paper, we present a lighting design method for volume visualization employing global illumination. The resulting system takes into account view and transfer-function dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light which is not used by previous volume visualization systems. By also including global shadow and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone mapping operator which recovers visual details from overexposed areas while maintaining sufficient contrast in the dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is more clearly and correctly presented under the automatically generated light sources.

  9. Immersive volume rendering of blood vessels

    Science.gov (United States)

    Long, Gregory; Kim, Han Suk; Marsden, Alison; Bazilevs, Yuri; Schulze, Jürgen P.

    2012-03-01

    In this paper, we present a novel method of visualizing flow in blood vessels. Our approach reads unstructured tetrahedral data, resamples it, and uses slice-based 3D texture volume rendering. Due to the sparse structure of blood vessels, we utilize an octree to efficiently store the resampled data by discarding empty regions of the volume. We use animation to convey time series data, a wireframe surface to give structure, and the StarCAVE, a 3D virtual reality environment, to add a fully immersive element to the visualization. Our tool has great value in interdisciplinary work, helping scientists collaborate with clinicians, by improving the understanding of blood flow simulations. Full immersion in the flow field allows for a more intuitive understanding of the flow phenomena, and can be a great help to medical experts for treatment planning.
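
    The empty-region skipping idea can be sketched as follows; the paper uses an octree, whereas this simplified illustration stores only non-empty bricks in a flat dictionary (block size and threshold are arbitrary choices):

    ```python
    import numpy as np

    BLOCK = 16  # brick edge length (arbitrary)

    def build_sparse_bricks(volume, threshold=0.0):
        """Keep only non-empty BLOCK^3 bricks in a dictionary keyed by the brick
        origin; empty bricks are never stored, so they cost no memory and can be
        skipped entirely during rendering."""
        bricks = {}
        nz, ny, nx = volume.shape
        for bz in range(0, nz, BLOCK):
            for by in range(0, ny, BLOCK):
                for bx in range(0, nx, BLOCK):
                    brick = volume[bz:bz+BLOCK, by:by+BLOCK, bx:bx+BLOCK]
                    if brick.max() > threshold:
                        bricks[(bz, by, bx)] = brick.copy()
        return bricks

    vol = np.zeros((64, 64, 64))
    vol[20:30, 20:30, 20:30] = 1.0   # a small vessel-like region
    print(len(build_sparse_bricks(vol)), "of", (64 // BLOCK) ** 3, "bricks stored")
    ```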

  10. Evaluating progressive-rendering algorithms in appearance design tasks.

    Science.gov (United States)

    Jiawei Ou; Karlik, Ondrej; Křivánek, Jaroslav; Pellacini, Fabio

    2013-01-01

    Progressive rendering is becoming a popular alternative to precomputational approaches to appearance design. However, progressive algorithms create images exhibiting visual artifacts at early stages. A user study investigated these artifacts' effects on user performance in appearance design tasks. Novice and expert subjects performed lighting and material editing tasks with four algorithms: random path tracing, quasirandom path tracing, progressive photon mapping, and virtual-point-light rendering. Both the novices and experts strongly preferred path tracing to progressive photon mapping and virtual-point-light rendering. None of the participants preferred random path tracing to quasirandom path tracing or vice versa; the same situation held between progressive photon mapping and virtual-point-light rendering. The user workflow didn’t differ significantly with the four algorithms. The Web Extras include a video showing how four progressive-rendering algorithms converged (at http://youtu.be/ck-Gevl1e9s), the source code used, and other supplementary materials.

  11. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    KAUST Repository

    Sicat, Ronell Barrera

    2014-12-31

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
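
    A rough sketch of the core idea, assuming per-block normalized histograms as a stand-in for the paper's sparse 4D Gaussian-mixture pdfs; applying a sampled transfer function to such a pdf reduces to an expectation, i.e. a dot product:

    ```python
    import numpy as np

    def block_pdfs(volume, block=8, bins=64, vrange=(0.0, 1.0)):
        """Per-block normalized intensity histograms: a coarse stand-in for the
        sparse pdf representation of voxel neighborhoods."""
        pdfs = {}
        nz, ny, nx = volume.shape
        for bz in range(0, nz, block):
            for by in range(0, ny, block):
                for bx in range(0, nx, block):
                    vals = volume[bz:bz+block, by:by+block, bx:bx+block].ravel()
                    hist, _ = np.histogram(vals, bins=bins, range=vrange)
                    pdfs[(bz, by, bx)] = hist / hist.sum()
        return pdfs

    def apply_transfer_function(pdf, tf_samples):
        """Applying a sampled transfer function to a pdf is an expectation,
        i.e. a dot product with the transfer function at the bin centers."""
        return float(pdf @ tf_samples)

    vol = np.random.rand(32, 32, 32)
    tf = np.linspace(0.0, 1.0, 64) ** 2          # hypothetical opacity ramp
    pdfs = block_pdfs(vol)
    print(apply_transfer_function(next(iter(pdfs.values())), tf))
    ```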

  12. Fast algorithm for the rendering of three-dimensional surfaces

    Science.gov (United States)

    Pritt, Mark D.

    1994-02-01

    It is often desirable to draw a detailed and realistic representation of surface data on a computer graphics display. One such representation is a 3D shaded surface. Conventional techniques for rendering shaded surfaces are slow, however, and require substantial computational power. Furthermore, many techniques suffer from aliasing effects, which appear as jagged lines and edges. This paper describes an algorithm for the fast rendering of shaded surfaces without aliasing effects. It is much faster than conventional ray tracing and polygon-based rendering techniques and is suitable for interactive use. On an IBM RISC System/6000TM workstation it renders a 1000 X 1000 surface in about 7 seconds.

  13. Efficient visibility encoding for dynamic illumination in direct volume rendering.

    Science.gov (United States)

    Kronander, Joel; Jönsson, Daniel; Löw, Joakim; Ljung, Patric; Ynnerman, Anders; Unger, Jonas

    2012-03-01

    We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights, and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multiresolution grid over the extent of the volume. Our method enables high-frequency shadows in the spatial domain, but is limited to a low-frequency approximation of visibility and illumination in the angular domain. In a first pass, level of detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid online computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility, and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
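
    The SH encoding step can be illustrated with a low-order projection of a sampled visibility function; this is a generic band-1 real SH projection under uniform direction sampling, not the paper's multiresolution grid implementation:

    ```python
    import numpy as np

    def sh_basis(dirs):
        """Real spherical harmonics up to band l = 1 (four coefficients)."""
        x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
        return np.stack([0.282095 * np.ones_like(x),   # Y_0^0
                         0.488603 * y,                 # Y_1^-1
                         0.488603 * z,                 # Y_1^0
                         0.488603 * x], axis=1)        # Y_1^1

    def project_visibility(visibility, dirs):
        """Monte Carlo SH projection: c_i = (4*pi/N) * sum_k V(w_k) Y_i(w_k)
        for uniformly distributed directions w_k."""
        return (4.0 * np.pi / len(dirs)) * (visibility @ sh_basis(dirs))

    def reconstruct(coeffs, dirs):
        return sh_basis(dirs) @ coeffs

    rng = np.random.default_rng(0)
    d = rng.normal(size=(4096, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    vis = (d[:, 2] > 0).astype(float)   # toy visibility: upper hemisphere unoccluded
    c = project_visibility(vis, d)      # a few coefficients instead of a full visibility map
    print(c)
    ```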

  14. Transform coding for hardware-accelerated volume rendering.

    Science.gov (United States)

    Fout, Nathaniel; Ma, Kwan-Liu

    2007-01-01

    Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
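
    A minimal sketch of block-based transform coding in the same spirit, assuming an orthonormal 3-D DCT and uniform scalar quantization (the paper's actual transform, block classification, and GPU decoding path are more elaborate):

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def encode_brick(brick, q=0.02):
        """Orthonormal 3-D DCT of a brick followed by uniform quantization."""
        return np.round(dctn(brick, norm='ortho') / q).astype(np.int16)

    def decode_brick(qcoeffs, q=0.02):
        """Dequantization folded into the inverse transform, conceptually the
        step that must stay cheap for real-time decoding during rendering."""
        return idctn(qcoeffs.astype(np.float64) * q, norm='ortho')

    brick = np.random.rand(8, 8, 8)
    print(np.abs(decode_brick(encode_brick(brick)) - brick).max())  # small reconstruction error
    ```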

  15. Real-time volume rendering of digital medical images on an iOS device

    Science.gov (United States)

    Noon, Christian; Holub, Joseph; Winer, Eliot

    2013-03-01

    Performing high quality 3D visualizations on mobile devices, while tantalizingly close in many areas, is still a quite difficult task. This is especially true for 3D volume rendering of digital medical images. Achieving this would give medical personnel a powerful tool to diagnose and treat patients and to train the next generation of physicians. This research focuses on performing real time volume rendering of digital medical images on iOS devices using custom developed GPU shaders for orthogonal texture slicing. An interactive volume renderer was designed and developed with several new features including dynamic modification of render resolutions, an incremental render loop, a shader-based clipping algorithm to support OpenGL ES 2.0, and an internal backface culling algorithm for properly sorting rendered geometry with alpha blending. The application was developed using several application programming interfaces (APIs) such as OpenSceneGraph (OSG) as the primary graphics renderer coupled with iOS Cocoa Touch for user interaction, and DCMTK for DICOM I/O. The developed application rendered volume datasets of over 450 slices at up to 50-60 frames per second, depending on the specific model of the iOS device. All rendering is done locally on the device so no Internet connection is required.

  16. Anisotropic 3D texture synthesis with application to volume rendering

    DEFF Research Database (Denmark)

    Laursen, Lasse Farnung; Ersbøll, Bjarne Kjær; Bærentzen, Jakob Andreas

    2011-01-01

    ... images using a 12.1 megapixel camera. Next, we extend the volume rendering pipeline by creating a transfer function which yields not only color and opacity from the input intensity, but also texture coordinates for our synthesized 3D texture. Thus, we add texture to the volume rendered images. ... This method is applied to a high quality visualization of a pig carcass, where samples of meat, bone, and fat have been used to produce the anisotropic 3D textures.

  17. Graphical User Interfaces for Volume Rendering Applications in Medical Imaging

    OpenAIRE

    Lindfors, Lisa; Lindmark, Hanna

    2002-01-01

    Volume rendering applications are used in medical imaging in order to facilitate the analysis of three-dimensional image data. This study focuses on how to improve the usability of graphical user interfaces of these systems, by gathering user requirements. This is achieved by evaluations of existing systems, together with interviews and observations at clinics in Sweden that use volume rendering to some extent. The usability of the applications of today is not sufficient, according to the use...

  18. Volume rendering in treatment planning for moving targets

    Energy Technology Data Exchange (ETDEWEB)

    Gemmel, Alexander [GSI-Biophysics, Darmstadt (Germany); Massachusetts General Hospital, Boston (United States); Wolfgang, John A.; Chen, George T.Y. [Massachusetts General Hospital, Boston (United States)

    2009-07-01

    Advances in computer technologies have facilitated the development of tools for 3-dimensional visualization of CT-data sets with volume rendering. The company Fovia has introduced a high definition volume rendering engine (HDVR trademark by Fovia Inc., Palo Alto, USA) that is capable of representing large CT data sets with high user interactivity even on standard PCs. Fovia provides a software development kit (SDK) that offers control of all the features of the rendering engine. We extended the SDK by functionalities specific to the task of treatment planning for moving tumors. This included navigation of the patient's anatomy in beam's eye view, fast point-and-click measurement of lung tumor trajectories as well as estimation of range perturbations due to motion by calculation of (differential) water equivalent path lengths for protons and carbon ions on 4D-CT data sets. We present patient examples to demonstrate the advantages and disadvantages of volume rendered images as compared to standard 2-dimensional axial plane images. Furthermore, we show an example of a range perturbation analysis. We conclude that volume rendering is a powerful technique for the representation and analysis of large time resolved data sets in treatment planning.

  19. Depth of Field Effects for Interactive Direct Volume Rendering

    KAUST Repository

    Schott, Mathias; Pascal Grosset, A.V.; Martin, Tobias; Pegoraro, Vincent; Smith, Sean T.; Hansen, Charles D.

    2011-01-01

    In this paper, a method for interactive direct volume rendering is proposed for computing depth of field effects, which previously were shown to aid observers in depth and size perception of synthetically generated images. The presented technique extends those benefits to volume rendering visualizations of 3D scalar fields from CT/MRI scanners or numerical simulations. It is based on incremental filtering and as such does not depend on any precomputation, thus allowing interactive explorations of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions. © 2011 The Author(s).

  20. Depth of Field Effects for Interactive Direct Volume Rendering

    KAUST Repository

    Schott, Mathias

    2011-06-01

    In this paper, a method for interactive direct volume rendering is proposed for computing depth of field effects, which previously were shown to aid observers in depth and size perception of synthetically generated images. The presented technique extends those benefits to volume rendering visualizations of 3D scalar fields from CT/MRI scanners or numerical simulations. It is based on incremental filtering and as such does not depend on any precomputation, thus allowing interactive explorations of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions. © 2011 The Author(s).

  1. View compensated compression of volume rendered images for remote visualization.

    Science.gov (United States)

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.

  2. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory, resulting in incoherent memory access patterns and low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g., a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of the Warp Marching is view-independent, and it outperforms existing empty-space-skipping techniques in scenarios that need to render large dynamic volumes in a low-resolution image. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  3. Mucosal detail at CT virtual reality: surface versus volume rendering.

    Science.gov (United States)

    Hopper, K D; Iyriboz, A T; Wise, S W; Neuman, J D; Mauger, D T; Kasales, C J

    2000-02-01

    To evaluate computed tomographic virtual reality with volumetric versus surface rendering. Virtual reality images were reconstructed for 27 normal or pathologic colonic, gastric, or bronchial structures in four ways, with the transition zone (a) reconstructed separately from the wall by using volume rendering, (b) assigned attenuation equal to air, (c) assigned attenuation equal to the wall (soft tissue), or (d) assigned attenuation halfway between air and wall. The four reconstructed images were randomized. Four experienced imagers blinded to the reconstruction graded them from best to worst with predetermined criteria. All readers rated images with the transition zone as a separate structure as overwhelmingly superior (P ...). Virtual reality is best with volume rendering, with the transition zone (mucosa) between the wall and air reconstructed as a separate structure.

  4. Frequency Analysis of Gradient Estimators in Volume Rendering

    NARCIS (Netherlands)

    Bentum, Marinus Jan; Lichtenbelt, Barthold B.A.; Malzbender, Tom

    1996-01-01

    Gradient information is used in volume rendering to classify and color samples along a ray. In this paper, we present an analysis of the theoretically ideal gradient estimator and compare it to some commonly used gradient estimators. A new method is presented to calculate the gradient at arbitrary
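
    The commonly used baseline discussed in the paper, the central-difference gradient estimator, looks roughly like this (a generic sketch, not the authors' code):

    ```python
    import numpy as np

    def central_difference_normals(volume):
        """Central-difference gradient estimator; the normalized gradient is what
        classification and shading along a ray typically consume."""
        gz, gy, gx = np.gradient(volume)                 # gradients along z, y, x
        g = np.stack([gx, gy, gz], axis=-1)
        mag = np.linalg.norm(g, axis=-1, keepdims=True)
        return g / np.maximum(mag, 1e-12), mag[..., 0]   # unit normals, gradient magnitude

    normals, magnitude = central_difference_normals(np.random.rand(32, 32, 32))
    print(normals.shape, magnitude.shape)
    ```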

  5. 3D Volume Rendering and 3D Printing (Additive Manufacturing).

    Science.gov (United States)

    Katkar, Rujuta A; Taft, Robert M; Grant, Gerald T

    2018-07-01

    Three-dimensional (3D) volume-rendered images allow 3D insight into the anatomy, facilitating surgical treatment planning and teaching. 3D printing, additive manufacturing, and rapid prototyping techniques are being used with satisfactory accuracy, mostly for diagnosis and surgical planning, followed by direct manufacture of implantable devices. The major limitation is the time and money spent generating 3D objects. Printer type, material, and build thickness are known to influence the accuracy of printed models. In implant dentistry, the use of 3D-printed surgical guides is strongly recommended to facilitate planning and reduce risk of operative complications. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Real-time 3-dimensional fetal echocardiography with an instantaneous volume-rendered display: early description and pictorial essay.

    Science.gov (United States)

    Sklansky, Mark S; DeVore, Greggory R; Wong, Pierre C

    2004-02-01

    Random fetal motion, rapid fetal heart rates, and cumbersome processing algorithms have limited reconstructive approaches to 3-dimensional fetal cardiac imaging. Given the recent development of real-time, instantaneous volume-rendered sonographic displays of volume data, we sought to apply this technology to fetal cardiac imaging. We obtained 1 to 6 volume data sets on each of 30 fetal hearts referred for formal fetal echocardiography. Each volume data set was acquired over 2 to 8 seconds and stored on the system's hard drive. Rendered images were subsequently processed to optimize translucency, smoothing, and orientation and cropped to reveal "surgeon's eye views" of clinically relevant anatomic structures. Qualitative comparison was made with conventional fetal echocardiography for each subject. Volume-rendered displays identified all major abnormalities but failed to identify small ventricular septal defects in 2 patients. Important planes and views not visualized during the actual scans were generated with minimal processing of rendered image displays. Volume-rendered displays tended to have slightly inferior image quality compared with conventional 2-dimensional images. Real-time 3-dimensional echocardiography with instantaneous volume-rendered displays of the fetal heart represents a new approach to fetal cardiac imaging with tremendous clinical potential.

  7. Morphological pyramids in multiresolution MIP rendering of large volume data : Survey and new results

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.

    We survey and extend nonlinear signal decompositions based on morphological pyramids, and their application to multiresolution maximum intensity projection (MIP) volume rendering with progressive refinement and perfect reconstruction. The structure of the resulting multiresolution rendering

  8. Advantages and disadvantages of 3D ultrasound of thyroid nodules including thin slice volume rendering

    Directory of Open Access Journals (Sweden)

    Slapa Rafal

    2011-01-01

    Background: The purpose of this study was to assess the advantages and disadvantages of 3D gray-scale and power Doppler ultrasound, including thin slice volume rendering (TSVR), applied for evaluation of thyroid nodules. Methods: The retrospective evaluation by two observers of volumes of 71 thyroid nodules (55 benign, 16 cancers) was performed using a new TSVR technique. A dedicated 4D ultrasound scanner with an automatic 6-12 MHz 4D probe was used. Statistical analysis was performed with Stata v. 8.2. Results: Multiple logistic regression analysis demonstrated that independent risk factors of thyroid cancers identified by 3D ultrasound include: (a) ill-defined borders of the nodule on MPR presentation, (b) a lobulated shape of the nodule in the c-plane, and (c) a density of central vessels in the nodule within the minimal or maximal ranges. A combination of features provided a sensitivity of 100% and a specificity of 60-69% for thyroid cancer. Calcification/microcalcification-like echogenic foci on 3D ultrasound proved not to be a risk factor of thyroid cancer. Storage of the 3D data of the whole nodules enabled subsequent evaluation of new parameters and the application of new rendering algorithms. Conclusions: Our results indicate that 3D ultrasound is a practical and reproducible method for the evaluation of thyroid nodules. 3D ultrasound stores volumes comprising the whole lesion or organ. Future detailed evaluations of the data are possible, looking for features that were not fully appreciated at the time of collection or applying new algorithms for volume rendering in order to gain important information. Three-dimensional ultrasound data could be included in thyroid cancer databases. Further multicenter large-scale studies are warranted.

  9. Forensic 3D Visualization of CT Data Using Cinematic Volume Rendering: A Preliminary Study.

    Science.gov (United States)

    Ebert, Lars C; Schweitzer, Wolf; Gascho, Dominic; Ruder, Thomas D; Flach, Patricia M; Thali, Michael J; Ampanozi, Garyfalia

    2017-02-01

    The 3D volume-rendering technique (VRT) is commonly used in forensic radiology. Its main function is to explain medical findings to state attorneys, judges, or police representatives. New visualization algorithms permit the generation of almost photorealistic volume renderings of CT datasets. The objective of this study is to present and compare a variety of radiologic findings to illustrate the differences between and the advantages and limitations of the current VRT and the physically based cinematic rendering technique (CRT). Seventy volunteers were shown VRT and CRT reconstructions of 10 different cases. They were asked to mark the findings on the images and rate them in terms of realism and understandability. A total of 48 of the 70 questionnaires were returned and included in the analysis. On the basis of most of the findings presented, CRT appears to be equal or superior to VRT with respect to the realism and understandability of the visualized findings. Overall, in terms of realism, the difference between the techniques was statistically significant (p < 0.05). CRT, which is similar to conventional VRT, is not primarily intended for diagnostic radiologic image analysis, and therefore it should be used primarily as a tool to deliver visual information in the form of radiologic image reports. Using CRT for forensic visualization might have advantages over using VRT if conveying a high degree of visual realism is of importance. Most of the shortcomings of CRT have to do with the software being an early prototype.

  10. In Vivo CT Direct Volume Rendering: A Three-Dimensional Anatomical Description of the Heart

    International Nuclear Information System (INIS)

    Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Cacciola, Alberto; Cinquegrani, Maria; Duca, Antonio; Rizzo, Giuseppina; Alati, Emanuela; Gaeta, Michele; Milardi, Demetrio

    2016-01-01

    Since cardiac anatomy continues to play an important role in the practice of medicine and in the development of medical devices, studying the heart in three dimensions is particularly useful for understanding its real structure, function and proper location in the body. This study demonstrates the use of direct volume rendering, processing the data set images obtained by Computed Tomography (CT) of the hearts of 5 subjects aged between 18 and 42 years (2 male, 3 female), with no history of any overt cardiac disease. The cardiac structure in the CT images was first extracted from the thorax by manually marking the regions of interest on the computer, and then it was stacked to create a new volumetric data set. The use of a specific algorithm allowed us to observe, with a good perception of depth, the heart and the skeleton of the thorax at the same time. Moreover, in all examined subjects, it was possible to depict the heart's structure and its position within the body and to study the integrity of the papillary muscles, the fibrous tissue of the cardiac valves and chordae tendineae, and the course of the coronary arteries. Our results demonstrated that one of the greatest advantages of algorithmic modification of direct volume rendering parameters is that this method provides much of the necessary information in a single radiologic study. It implies better accuracy in the study of the heart, being complementary to other diagnostic methods and facilitating therapeutic planning.

  11. Clustered deep shadow maps for integrated polyhedral and volume rendering

    KAUST Repository

    Bornik, Alexander

    2012-01-01

    This paper presents a hardware-accelerated approach for shadow computation in scenes containing both complex volumetric objects and polyhedral models. Our system is the first hardware accelerated complete implementation of deep shadow maps, which unifies the computation of volumetric and geometric shadows. Up to now such unified computation was limited to software-only rendering. Previous hardware accelerated techniques can handle only geometric or only volumetric scenes - both resulting in the loss of important properties of the original concept. Our approach supports interactive rendering of polyhedrally bounded volumetric objects on the GPU based on ray casting. The ray casting can be conveniently used for both the shadow map computation and the rendering. We show how anti-aliased high-quality shadows are feasible in scenes composed of multiple overlapping translucent objects, and how sparse scenes can be handled efficiently using clustered deep shadow maps. © 2012 Springer-Verlag.

  12. Color-coded volume rendering for three-dimensional reconstructions of CT data

    International Nuclear Information System (INIS)

    Rieker, O.; Mildenberger, P.; Thelen, M.

    1999-01-01

    Purpose: To evaluate a technique of colored three-dimensional reconstructions without segmentation. Material and methods: Color-coded volume rendered images were reconstructed from the volume data of 25 thoracic, abdominal, musculoskeletal, and vascular helical CT scans using commercial software. The CT volume rendered voxels were encoded with color in the following manner. Opacity, hue, lightness, and chroma were assigned to each of four classes defined by CT number. Color-coded reconstructions were compared to the corresponding grey-scale coded reconstructions. Results: Color-coded volume rendering enabled realistic visualization of pathologic findings when there was sufficient difference in CT density. Segmentation was necessary in some cases to demonstrate small details in a complex volume. Conclusion: Color-coded volume rendering allowed lifelike visualisation of CT volumes without the need of segmentation in most cases. (orig.)
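
    A minimal sketch of class-based color coding by CT number; the HU ranges and RGBA values below are hypothetical examples, not the values used in the study:

    ```python
    # Hypothetical Hounsfield-unit classes with per-class RGBA, in the spirit of
    # assigning opacity, hue, lightness and chroma to four classes defined by CT number.
    CT_CLASSES = [
        ((-1000, -200), (0.65, 0.80, 1.00, 0.02)),  # air / lung
        (( -200,   80), (1.00, 0.60, 0.50, 0.10)),  # soft tissue
        ((   80,  200), (1.00, 0.25, 0.25, 0.35)),  # contrast-filled vessels
        ((  200, 3000), (1.00, 1.00, 0.90, 0.90)),  # bone
    ]

    def classify_voxel(hu):
        """Map a CT number (HU) to an RGBA tuple by class membership; voxels
        outside all classes are left fully transparent."""
        for (lo, hi), rgba in CT_CLASSES:
            if lo <= hu < hi:
                return rgba
        return (0.0, 0.0, 0.0, 0.0)

    print(classify_voxel(400))  # bone-like voxel
    ```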

  13. Development of volume rendering module for real-time visualization system

    International Nuclear Information System (INIS)

    Otani, Takayuki; Muramatsu, Kazuhiro

    2000-03-01

    Volume rendering is a method to visualize the distribution of physical quantities in three-dimensional space from any viewpoint by tracing rays and displaying the result on an ordinary two-dimensional monitor. It can provide interior as well as surface information by producing translucent images. It is therefore regarded as a very useful and important means of analyzing the results of scientific computations, although it has the disadvantage of requiring a large amount of computing time. This report describes the algorithm and performance of the volume rendering software which was developed as an important functional module in the real-time visualization system PATRAS. This module can directly visualize computed results on a BFC grid. Moreover, speed-ups in some parts of the software have already been achieved by using a newly developed heuristic technique. This report also includes an investigation of speeding up the software by parallel processing. (author)

  14. Functionality and Performance Visualization of the Distributed High Quality Volume Renderer (HVR)

    KAUST Repository

    Shaheen, Sara

    2012-07-01

    Volume rendering systems are designed to enable scientists and a variety of experts to interactively explore volume data through 3D views of the volume. However, volume rendering techniques are computationally intensive. Parallel distributed volume rendering systems and multi-threaded architectures have been suggested as natural solutions to provide acceptable volume rendering performance for very large volume data sizes, such as Electron Microscopy (EM) data. This in turn adds another level of complexity when developing and manipulating volume rendering systems. Given that distributed parallel volume rendering systems are among the most complex systems to develop, trace and debug, traditional debugging tools do not provide enough support. As a consequence, there is a great demand for tools that facilitate the manipulation of such systems. This can be achieved by utilizing the power of computer graphics to design visual representations that reflect how the system works and that visualize its current performance state. The work presented falls within the field of software visualization, where visualization is used to help understand and analyze software. In this thesis, a number of visual representations are presented that reflect functionality and performance aspects of the distributed HVR, a high-quality volume rendering system that uses various techniques to visualize large volumes interactively. The work visualizes different stages of the parallel volume rendering pipeline of HVR, along with means of performance analysis through a number of flexible and dynamic visualizations that reflect the current state of the system and can be manipulated at runtime. These visualizations aim to facilitate debugging, understanding and analyzing the distributed HVR.

  15. On the design of a real-time volume rendering engine

    NARCIS (Netherlands)

    Smit, Jaap; Wessels, H.L.F.; van der Horst, A.; Bentum, Marinus Jan

    1992-01-01

    An architecture for a Real-Time Volume Rendering Engine (RT-VRE) is given, capable of computing 750 × 750 × 512 samples from a 3D dataset at a rate of 25 images per second. The RT-VRE uses for this purpose 64 dedicated rendering chips, cooperating with 16 RISC-processors. A plane interpolator

  16. On the design of a real-time volume rendering engine

    NARCIS (Netherlands)

    Smit, Jaap; Wessels, H.J.; van der Horst, A.; Bentum, Marinus Jan

    1995-01-01

    An architecture for a Real-Time Volume Rendering Engine (RT-VRE) is given, capable of computing 750 × 750 × 512 samples from a 3D dataset at a rate of 25 images per second. The RT-VRE uses for this purpose 64 dedicated rendering chips, cooperating with 16 RISC-processors. A plane interpolator

  17. Using neutrosophic graph cut segmentation algorithm for qualified rendering image selection in thyroid elastography video.

    Science.gov (United States)

    Guo, Yanhui; Jiang, Shuang-Quan; Sun, Baiqing; Siuly, Siuly; Şengür, Abdulkadir; Tian, Jia-Wei

    2017-12-01

    Recently, elastography has become very popular in clinical investigation for thyroid cancer detection and diagnosis. In an elastogram, the stress results of the thyroid are displayed using pseudo colors. Due to variation of the rendering results in different frames, it is difficult for radiologists to manually select the qualified frame image quickly and efficiently. The purpose of this study is to find the qualified rendering result in the thyroid elastogram. This paper employs an efficient thyroid ultrasound image segmentation algorithm based on neutrosophic graph cut to find the qualified rendering images. Firstly, a thyroid ultrasound image is mapped into a neutrosophic set, and an indeterminacy filter is constructed to reduce the indeterminacy of the spatial and intensity information in the image. A graph is defined on the image and the weight for each pixel is represented using the value after indeterminacy filtering. The segmentation results are obtained using a maximum-flow algorithm on the graph. Then the anatomic structure is identified in the thyroid ultrasound image. Finally the rendering colors on these anatomic regions are extracted and validated to find the frames which satisfy the selection criteria. To test the performance of the proposed method, a thyroid elastogram dataset was built, and a total of 33 cases were collected. An experienced radiologist manually evaluated the selection results of the proposed method. Experimental results demonstrate that the proposed method finds the qualified rendering frame with 100% accuracy. The proposed scheme assists the radiologists to diagnose thyroid diseases using the qualified rendering images.

  18. Mathematical models for volume rendering and neutron transport

    International Nuclear Information System (INIS)

    Max, N.

    1994-09-01

    This paper reviews several different models for light interaction with volume densities of absorbing, glowing, reflecting, or scattering material. They include absorption only, glow only, glow and absorption combined, single scattering of external illumination, and multiple scattering. The models are derived from differential equations, and illustrated on a data set representing a cloud. They are related to corresponding models in neutron transport. The multiple scattering model uses an efficient method to propagate the radiation which does not suffer from the ray effect
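
    The simplest of these models can be checked numerically; the sketch below integrates the constant-coefficient emission-absorption equation and compares it with its closed-form solution (a generic illustration, not code from the paper):

    ```python
    import numpy as np

    # Emission-absorption model from the differential-equation derivation:
    #   dI/ds = g - tau * I, with constant glow g and extinction tau,
    #   closed form: I(L) = I0*exp(-tau*L) + (g/tau)*(1 - exp(-tau*L)).
    g, tau, I0, L, n = 0.8, 0.5, 0.1, 10.0, 100000
    ds, I = L / n, I0
    for _ in range(n):               # straightforward forward-Euler integration
        I += ds * (g - tau * I)
    closed_form = I0 * np.exp(-tau * L) + (g / tau) * (1.0 - np.exp(-tau * L))
    print(I, closed_form)            # the two values agree closely
    ```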

  19. Wobbled splatting-a fast perspective volume rendering method for simulation of x-ray images from CT

    International Nuclear Information System (INIS)

    Birkfellner, Wolfgang; Seemann, Rudolf; Figl, Michael; Hummel, Johann; Ede, Christopher; Homolka, Peter; Yang Xinhui; Niederer, Peter; Bergmann, Helmar

    2005-01-01

    3D/2D registration, the automatic assignment of a global rigid-body transformation matching the coordinate systems of patient and preoperative volume scan using projection images, is an important topic in image-guided therapy and radiation oncology. A crucial part of most 3D/2D registration algorithms is the fast computation of digitally rendered radiographs (DRRs) to be compared iteratively to radiographs or portal images. Since registration is an iterative process, fast generation of DRRs-which are perspective summed voxel renderings-is desired. In this note, we present a simple and rapid method for generation of DRRs based on splat rendering. As opposed to conventional splatting, antialiasing of the resulting images is not achieved by means of computing a discrete point spread function (a so-called footprint), but by stochastic distortion of either the voxel positions in the volume scan or by the simulation of a focal spot of the x-ray tube with non-zero diameter. Our method generates slightly blurred DRRs suitable for registration purposes at framerates of approximately 10 Hz when rendering volume images with a size of 30 MB. (note)
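
    A much-simplified sketch of a summed-voxel DRR with stochastic voxel jitter ("wobbling"); the geometry (source on the +z axis, detector in a fixed plane) and all parameters are assumptions for illustration only:

    ```python
    import numpy as np

    def wobbled_drr(volume, spacing=1.0, source_dist=1000.0, det=(256, 256),
                    pixel=2.0, sigma=0.5, rng=None):
        """Perspective summed-voxel DRR with random jitter of voxel positions,
        one of the two anti-aliasing variants the note describes."""
        if rng is None:
            rng = np.random.default_rng(0)
        nz, ny, nx = volume.shape
        z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx), indexing='ij')
        pts = np.stack([x, y, z], -1).reshape(-1, 3).astype(float) * spacing
        pts += rng.normal(scale=sigma, size=pts.shape)      # the "wobble"
        pts -= pts.mean(axis=0)                             # center the volume
        depth = source_dist - pts[:, 2]
        mag = source_dist / np.maximum(depth, 1e-3)         # perspective magnification
        u = np.round(pts[:, 0] * mag / pixel + det[1] / 2).astype(int)
        v = np.round(pts[:, 1] * mag / pixel + det[0] / 2).astype(int)
        img = np.zeros(det)
        ok = (u >= 0) & (u < det[1]) & (v >= 0) & (v < det[0])
        np.add.at(img, (v[ok], u[ok]), volume.reshape(-1)[ok])  # splat (sum) the voxels
        return img

    drr = wobbled_drr(np.random.rand(32, 32, 32))
    print(drr.shape, drr.max())
    ```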

  20. 3-D volume rendering visualization for calculated distributions of diesel spray; Diesel funmu kyodo suchi keisan kekka no sanjigen volume rendering hyoji

    Energy Technology Data Exchange (ETDEWEB)

    Yoshizaki, T; Imanishi, H; Nishida, K; Yamashita, H; Hiroyasu, H; Kaneda, K [Hiroshima University, Hiroshima (Japan)

    1997-10-01

    A three-dimensional visualization technique based on the volume rendering method has been developed in order to translate calculated results of a diesel combustion simulation into realistic spray and flame images. This paper presents an overview of the diesel combustion model which has been developed at Hiroshima University, a description of the three-dimensional visualization technique, and some examples of spray and flame images generated by this visualization technique. 8 refs., 8 figs., 1 tab.

  1. Efficient Algorithms for Real-Time GPU Volumetric Cloud Rendering with Enhanced Geometry

    OpenAIRE

    Carlos Jiménez de Parga; Sebastián Rubén Gómez Palomo

    2018-01-01

    This paper presents several new techniques for volumetric cloud rendering using efficient algorithms and data structures based on ray-tracing methods for cumulus generation, achieving an optimum balance between realism and performance. These techniques target applications such as flight simulations, computer games, and educational software, even with conventional graphics hardware. The contours of clouds are defined by implicit mathematical expressions or triangulated structures inside which ...

  2. A concept of volume rendering guided search process to analyze medical data set.

    Science.gov (United States)

    Zhou, Jianlong; Xiao, Chun; Wang, Zhiyan; Takatsuka, Masahiro

    2008-03-01

    This paper first presents a parallel-coordinates-based parameter control panel (PCP). The PCP is used to control parameters of focal region-based volume rendering (FRVR) during data analysis. It uses a parallel-coordinates-style interface: different rendering parameters are represented as nodes on each axis, and renditions based on related parameters are connected using polylines to show dependencies between renditions and parameters. Based on the PCP, a concept of a volume rendering guided search process is proposed. The search pipeline is divided into four phases. Different parameters of FRVR are recorded and modulated in the PCP during the search phases. The concept shows that volume visualization can play the role of guiding a search process in the rendition space to help users efficiently find local structures of interest. The usability of the proposed approach is evaluated to show its effectiveness.

  3. Interactive dual-volume rendering visualization with real-time fusion and transfer function enhancement

    Science.gov (United States)

    Macready, Hugh; Kim, Jinman; Feng, David; Cai, Weidong

    2006-03-01

    Dual-modality imaging scanners combining functional PET and anatomical CT constitute a challenge in volumetric visualization that can be limited by high computational demand and expense. This study aims at providing physicians with multi-dimensional visualization tools to navigate and manipulate the data on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All the software was developed using OpenGL and Silicon Graphics Inc. Volumizer, and tested on a Pentium mobile CPU in a PC notebook with 64 MB of graphics memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest from volume rendering of PET/CT. This works by assigning a non-linear opacity to the voxels, thus allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. As the PET and CT are rendered independently, manipulations can be applied to individual volumes, for instance, the application of a transfer function to the CT to reveal the lung boundary while adjusting the fusion ratio between the CT and PET to enhance the contrast of a tumour region, with the resultant manipulated data sets fused together in real time as the adjustments are made. In addition to conventional navigation and manipulation tools, such as scaling, LUT, volume slicing, and others, our strategy permits efficient visualization of PET/CT volume rendering which can potentially aid in interpretation and diagnosis.
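
    A minimal sketch of an "alpha-spike"-style opacity curve; the Gaussian spike shape and parameters are hypothetical, chosen only to illustrate the idea of a narrow non-linear opacity peak around a selected intensity:

    ```python
    import numpy as np

    def alpha_spike(intensity, center, width, peak=1.0):
        """Narrow opacity 'spike' around a chosen intensity value; voxels far
        from `center` become transparent, selectively revealing or suppressing
        structures in the fused PET/CT rendering."""
        return peak * np.exp(-0.5 * ((intensity - center) / width) ** 2)

    # e.g. emphasize a high-uptake band while hiding the rest of the volume
    opacity = alpha_spike(np.linspace(0.0, 1.0, 256), center=0.75, width=0.03)
    print(opacity.argmax(), opacity.max())
    ```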

  4. Dynamic Resolution in GPU-Accelerated Volume Rendering to Autostereoscopic Multiview Lenticular Displays

    Directory of Open Access Journals (Sweden)

    Daniel Ruijters

    2008-09-01

    The generation of multiview stereoscopic images of large volume rendered data demands an enormous amount of calculations. We propose a method for hardware accelerated volume rendering of medical data sets to multiview lenticular displays, offering interactive manipulation throughout. The method is based on buffering GPU-accelerated direct volume rendered visualizations of the individual views from their respective focal spot positions, and composing the output signal for the multiview lenticular screen in a second pass. This compositing phase is facilitated by the fact that the view assignment per subpixel is static, and therefore can be precomputed. We decoupled the resolution of the individual views from the resolution of the composited signal, and adjust the resolution on-the-fly, depending on the available processing resources, in order to maintain interactive refresh rates. The optimal resolution for the volume rendered views is determined by means of an analysis of the lattice of the output signal for the lenticular screen in the Fourier domain.

  5. An Extension of Fourier-Wavelet Volume Rendering by View Interpolation

    NARCIS (Netherlands)

    Westenberg, Michel A.; Roerdink, Jos B.T.M.

    2001-01-01

    This paper describes an extension to Fourier-wavelet volume rendering (FWVR), which is a Fourier domain implementation of the wavelet X-ray transform. This transform combines integration along the line of sight with a simultaneous 2-D wavelet transform in the view plane perpendicular to this line.
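
    The identity underlying Fourier-domain X-ray rendering, the projection-slice theorem, can be verified numerically in a few lines (a generic check, not the FWVR implementation):

    ```python
    import numpy as np

    # Projection-slice theorem: summing the volume along z equals the 2-D inverse
    # FFT of the kz = 0 plane of its 3-D FFT, which is why projections can be
    # generated in the Fourier domain.
    vol = np.random.rand(32, 32, 32)
    proj_spatial = vol.sum(axis=2)                 # direct X-ray projection along z
    F = np.fft.fftn(vol)
    proj_fourier = np.fft.ifft2(F[:, :, 0]).real   # central slice, then 2-D IFFT
    print(np.allclose(proj_spatial, proj_fourier)) # True
    ```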

  6. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  7. Adaptive B-spline volume representation of measured BRDF data for photorealistic rendering

    Directory of Open Access Journals (Sweden)

    Hyungjun Park

    2015-01-01

    Measured bidirectional reflectance distribution function (BRDF) data have been used to represent the complex interaction between lights and surface materials for photorealistic rendering. However, their massive size makes it hard to adopt them in practical rendering applications. In this paper, we propose an adaptive method for B-spline volume representation of measured BRDF data. It basically performs approximate B-spline volume lofting, which decomposes the problem into three sub-problems of multiple B-spline curve fitting along the u-, v-, and w-parametric directions. In particular, it makes efficient use of knots in the multiple B-spline curve fitting and thereby accomplishes adaptive knot placement along each parametric direction of the resulting B-spline volume. The proposed method is quite useful for achieving efficient data reduction while smoothing out noise and preserving the overall features of the BRDF data. By applying the B-spline volume models of real materials for rendering, we show that the B-spline volume models are effective in preserving the features of material appearance and are suitable for representing BRDF data.

  8. Use of volume-rendered images in registration of nuclear medicine studies

    International Nuclear Information System (INIS)

    Wallis, J.W.; Miller, T.R.; Hsu, S.S.

    1995-01-01

    A simple operator-guided alignment technique based on volume-rendered images was developed to register tomographic nuclear medicine studies. For each of the two three-dimensional data sets to be registered, volume-rendered images were generated in 3 orthogonal projections (x, y, z) using the method of maximum-activity projection. Registration was achieved as follows: (a) one of the rendering orientations (e.g. x) was chosen for manipulation; (b) the two-dimensional rendering was translated and rotated under operator control to achieve the best alignment as determined by visual assessment; (c) this rotation and translation was then applied to the underlying three-dimensional data set, with updating of the rendered images in each of the orthogonal projections; (d) another orientation was chosen, and the process repeated. Since manipulation was performed on the small two-dimensional rendered image, feedback was instantaneous. To aid in the visual alignment, difference images and flicker images (toggling between the two data sets) were displayed. Accuracy was assessed by analysis of separate clinical data sets acquired without patient movement, using duplicate SPECT studies originating from separate reconstructions of the data from each detector of a triple-head gamma camera. After arbitrary rotation and translation of one of the two data sets, the two data sets were registered; the mean registration error was 0.36 pixels, corresponding to 2.44 mm. Thus, accurate registration can be achieved in under 10 minutes using this simple technique.
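
    A minimal sketch of the underlying mechanics, assuming SciPy: form a maximum-activity projection for one orientation, then apply an in-plane rotation and translation (chosen interactively in the real system) to the full three-dimensional data set. The angle and shift below are placeholders, not values from the study.

    ```python
    import numpy as np
    from scipy import ndimage

    def mip(volume, axis):
        """Maximum-activity projection along one axis."""
        return volume.max(axis=axis)

    def apply_inplane_rigid(volume, axis, angle_deg, shift_xy):
        """Apply the rotation/translation judged best on a 2-D MIP to the 3-D
        volume: rotate about the projection axis, shift within the view plane."""
        plane = tuple(a for a in range(3) if a != axis)
        rotated = ndimage.rotate(volume, angle_deg, axes=plane, reshape=False, order=1)
        shift = [0.0, 0.0, 0.0]
        shift[plane[0]], shift[plane[1]] = shift_xy
        return ndimage.shift(rotated, shift, order=1)

    fixed = np.random.rand(64, 64, 64)
    moving = ndimage.rotate(fixed, 5.0, axes=(0, 1), reshape=False, order=1)
    aligned = apply_inplane_rigid(moving, axis=2, angle_deg=-5.0, shift_xy=(0.0, 0.0))
    difference = mip(fixed, 2) - mip(aligned, 2)   # the kind of feedback image described above
    ```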

  9. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Loring, Burlen; Karimabadi, Homa; Roytershteyn, Vadim

    2014-07-01

    The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
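
    The convolution step at the heart of LIC can be sketched on a planar vector field as below (plain NumPy, kept small so the Python loops stay tolerable); the screen-space surface variant described above additionally projects surface vectors into the image plane and runs on the GPU, which this sketch does not attempt.

    ```python
    import numpy as np

    def lic(vx, vy, noise, length=15, step=0.5):
        """For every pixel, average the noise texture along the local streamline,
        traced forward and backward through the vector field."""
        h, w = noise.shape
        out = np.zeros_like(noise)
        for y in range(h):
            for x in range(w):
                total, count = 0.0, 0
                for sign in (1.0, -1.0):
                    px, py = float(x), float(y)
                    for _ in range(length):
                        ix, iy = int(round(px)), int(round(py))
                        if not (0 <= ix < w and 0 <= iy < h):
                            break
                        total += noise[iy, ix]
                        count += 1
                        u, v = vx[iy, ix], vy[iy, ix]
                        norm = np.hypot(u, v) + 1e-8
                        px += sign * step * u / norm
                        py += sign * step * v / norm
                out[y, x] = total / max(count, 1)
        return out

    ys, xs = np.mgrid[0:96, 0:96] - 48.0       # circular flow around the image centre
    image = lic(-ys, xs, np.random.rand(96, 96))
    ```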

  10. Efficient Algorithms for Real-Time GPU Volumetric Cloud Rendering with Enhanced Geometry

    Directory of Open Access Journals (Sweden)

    Carlos Jiménez de Parga

    2018-04-01

    This paper presents several new techniques for volumetric cloud rendering using efficient algorithms and data structures based on ray-tracing methods for cumulus generation, achieving an optimum balance between realism and performance. These techniques target applications such as flight simulations, computer games, and educational software, even with conventional graphics hardware. The contours of clouds are defined by implicit mathematical expressions or triangulated structures inside which volumetric rendering is performed. Novel techniques are used to reproduce the asymmetrical nature of clouds and the effects of light-scattering, with low computing costs. The work includes a new method to create randomized fractal clouds using a recursive grammar. The graphical results are comparable to those produced by state-of-the-art, hyper-realistic algorithms. These methods provide real-time performance, and are superior to particle-based systems. These outcomes suggest that our methods offer a good balance between realism and performance, and are suitable for use in the standard graphics industry.
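
    A minimal ray-marching sketch in the spirit of the description above: a toy implicit "cumulus" density is sampled along a single ray and accumulated with the Beer-Lambert law. The density function, extinction coefficient and step counts are illustrative assumptions, not the models used in the paper.

    ```python
    import numpy as np

    def cloud_density(p, centre=np.array([0.0, 0.0, 4.0]), radius=1.5):
        """Toy cumulus: an implicit sphere modulated by a cheap pseudo-noise term."""
        d = radius - np.linalg.norm(p - centre)
        noise = 0.5 + 0.5 * np.sin(7.0 * p[0]) * np.sin(9.0 * p[1]) * np.sin(5.0 * p[2])
        return max(d, 0.0) * noise

    def march(origin, direction, steps=64, step_len=0.15, sigma=1.8):
        """Accumulate in-scattered light and transmittance along one ray."""
        transmittance, radiance = 1.0, 0.0
        p = origin.astype(float)
        for _ in range(steps):
            p = p + step_len * direction
            rho = cloud_density(p)
            if rho > 0.0:
                absorb = np.exp(-sigma * rho * step_len)
                radiance += transmittance * (1.0 - absorb)   # white light assumed
                transmittance *= absorb
                if transmittance < 1e-3:                     # early ray termination
                    break
        return radiance

    pixel_value = march(np.zeros(3), np.array([0.0, 0.0, 1.0]))
    ```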

  11. Volume-Rendered 3D Display Of MR Angiograms in the Diagnosis of Cerebral Arteriovenous Malformations

    Energy Technology Data Exchange (ETDEWEB)

    Tsuchiya, K.; Katase, S.; Hachiya, J. [Kyorin Univ. School of Medicine, Tokyo (Japan). Dept. of Radiology; Shiokawa, Y. [Kyorin Univ. School of Medicine, Tokyo (Japan). Dept. of Neurosurgery

    2003-11-01

    Purpose: To determine whether application of a volume-rendered display of 3D time-of-flight (TOF) MR angiography could assist the diagnosis of cerebral arteriovenous malformations (AVMs). Material and Methods: Volume-rendered 3D images of postcontrast 3D time-of-flight MR angiography were compared with conventional angiograms in 12 patients. The correlation between the 3D images and the operative findings was also analyzed in 5 patients. Results: The 3D-displayed images showed all of the feeders and drainers in 10 and 9 patients, respectively. In all patients, the nidus was three-dimensionally visualized. In 3 patients with hematomas, the relationship between the hematoma and the AVM was well demonstrated. The 3D images corresponded well with the operative findings in the 5 patients. Conclusion: This method is of help in assessing the relationship between the components of an AVM as well as that between an AVM and an associated hematoma.

  12. Volume-Rendered 3D Display Of MR Angiograms in the Diagnosis of Cerebral Arteriovenous Malformations

    International Nuclear Information System (INIS)

    Tsuchiya, K.; Katase, S.; Hachiya, J.; Shiokawa, Y.

    2003-01-01

    Purpose: To determine whether application of a volume-rendered display of 3D time-of-flight (TOF) MR angiography could assist the diagnosis of cerebral arteriovenous malformations (AVMs). Material and Methods: Volume-rendered 3D images of postcontrast 3D time-of-flight MR angiography were compared with conventional angiograms in 12 patients. The correlation between the 3D images and the operative findings was also analyzed in 5 patients. Results: The 3D-displayed images showed all of the feeders and drainers in 10 and 9 patients, respectively. In all patients, the nidus was three-dimensionally visualized. In 3 patients with hematomas, the relationship between the hematoma and the AVM was well demonstrated. The 3D images corresponded well with the operative findings in the 5 patients. Conclusion: This method is of help in assessing the relationship between the components of an AVM as well as that between an AVM and an associated hematoma

  13. Visualization of normal and abnormal inner ear with volume rendering technique using multislice spiral CT

    International Nuclear Information System (INIS)

    Ma Hui; Han Ping; Liang Bo; Lei Ziqiao; Liu Fang; Tian Zhiliang

    2006-01-01

    Objective: To evaluate the ability of the volume rendering technique to display normal and abnormal inner ear structures. Methods: Forty normal ears and 61 abnormal inner ears (40 congenital inner ear malformations, 7 labyrinthitis ossificans, and 14 inner ear erosions caused by cholesteatomas) were examined with an MSCT scanner. Axial imaging was performed using the following parameters: 120 kV, 100 mAs, 0.75 mm slice thickness, and a pitch factor of 1. The axial images of the ears of interest were reconstructed with a 0.1 mm reconstruction increment and a FOV of 50 mm. The 3D reconstructions were done with the volume rendering technique on the workstation. Results: In the subjects without ear disorders a high-quality 3D visualization of the inner ear could be achieved. In the patients with inner ear disorders all inner ear malformations could be clearly displayed on 3D images as follows: (1) Michel deformity (one ear): complete absence of all cochlear and vestibular structures. (2) Common cavity deformity (3 ears): the cochlea and vestibule were represented by a cystic cavity and could not be differentiated from each other. (3) Incomplete partition type I (3 ears): the cochlea lacked the entire modiolus and cribriform area, resulting in a cystic appearance. (4) Incomplete partition type II (Mondini deformity) (5 ears): the cochlea consisted of 1.5 turns, in which the middle and apical turns coalesced to form a cystic apex. (5) Vestibular and semicircular canal malformations (14 ears): the cochlea was normal, the vestibule dilated, and the semicircular canals absent, hypoplastic or enlarged. (6) Dilated vestibular aqueduct (14 ears): the vestibular aqueduct was bell-mouthed. In 7 patients with labyrinthitis ossificans, 3D images failed to clearly show the complete inner ear in 4 ears because of extensive ossification in the membranous labyrinth. In the other 3 ears volume rendering could display the thin cochlear basal turn and the intermittent semicircular canals. In the patients

  14. Simulation and training of lumbar punctures using haptic volume rendering and a 6DOF haptic device

    Science.gov (United States)

    Färber, Matthias; Heller, Julika; Handels, Heinz

    2007-03-01

    The lumbar puncture is performed by inserting a needle into the spinal canal of the patient to inject medicaments or to extract cerebrospinal fluid. The training of this procedure is usually done on the patient, guided by experienced supervisors. A virtual reality lumbar puncture simulator has been developed in order to minimize the training costs and the patient's risk. We use a haptic device with six degrees of freedom (6DOF) to feed back forces that resist needle insertion and rotation. An improved haptic volume rendering approach is used to calculate the forces. This approach makes use of label data of relevant structures like skin, bone, muscles or fat and of the original CT data, which contributes information about image structures that cannot be segmented. A real-time 3D visualization with optional stereo view shows the punctured region. 2D visualizations of orthogonal slices enable a detailed impression of the anatomical context. The input data, consisting of CT and label data and surface models of relevant structures, is defined in an XML file together with haptic rendering and visualization parameters. In a first evaluation the Visible Human male data set has been used to generate a virtual training body. Several users with different medical experience tested the lumbar puncture trainer. The simulator gives a good haptic and visual impression of the needle insertion, and the haptic volume rendering technique enables the feeling of unsegmented structures. In particular, the restriction of transversal needle movement together with the rotation constraints enabled by the 6DOF device facilitates a realistic puncture simulation.
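
    The force computation can be caricatured as follows: look up a per-tissue resistance for the label under the needle tip and push back along the shaft. The resistance table and geometry are hypothetical, and the published approach additionally blends in the original CT values and constrains transversal motion, which this sketch omits.

    ```python
    import numpy as np

    # Hypothetical per-label resistance coefficients (force per unit of advance).
    RESISTANCE = {0: 0.0,   # outside the body
                  1: 0.3,   # skin
                  2: 0.1,   # fat
                  3: 0.5,   # muscle
                  4: 5.0}   # bone

    def needle_force(labels, tip_voxel, needle_dir, spacing_mm=1.0):
        """Oppose needle advance with a force proportional to the resistance of
        the tissue label at the current tip position."""
        x, y, z = np.clip(tip_voxel, 0, np.array(labels.shape) - 1).astype(int)
        k = RESISTANCE.get(int(labels[x, y, z]), 0.2)
        direction = needle_dir / (np.linalg.norm(needle_dir) + 1e-9)
        return -k * spacing_mm * direction      # force vector pushing back along the shaft

    label_volume = np.zeros((64, 64, 64), dtype=np.int8)
    label_volume[:, :, 30:] = 3                 # everything beyond depth 30 is "muscle"
    force = needle_force(label_volume, np.array([32, 32, 35]), np.array([0.0, 0.0, 1.0]))
    ```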

  15. Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering

    International Nuclear Information System (INIS)

    Guenther, P.; Holland-Cunz, S.; Waag, K.L.

    2006-01-01

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from this. A newly developed and modified raycasting-based powerful 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to the easy handling and high-quality visualization with enormous gain of information, the presented system is now an established part of routine surgical planning. (orig.) [de

  16. [Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering].

    Science.gov (United States)

    Günther, P; Tröger, J; Holland-Cunz, S; Waag, K L; Schenk, J P

    2006-08-01

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from this. A newly developed and modified raycasting-based powerful 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to the easy handling and high-quality visualization with enormous gain of information, the presented system is now an established part of routine surgical planning.

  17. Use of multidetector row CT with volume renderings in right lobe living liver transplantation

    International Nuclear Information System (INIS)

    Ishifuro, Minoru; Akiyama, Yuji; Kushima, Toshio; Horiguchi, Jun; Nakashige, Aya; Tamura, Akihisa; Marukawa, Kazushi; Fukuda, Hiroshi; Ono, Chiaki; Ito, Katsuhide

    2002-01-01

    Multidetector row CT is a feasible diagnostic tool in pre- and postoperative liver partial transplantation. We can assess vascular anatomy and liver parenchyma as well as volumetry, which provide useful information for both donor selection and surgical planning. Disorders of the vascular and biliary systems are carefully observed in recipients. In addition, we evaluate liver regeneration of both the donor and the recipient by serial volumetry. We present how multidetector row CT with state-of-the-art three-dimensional volume renderings may be used in right lobe liver transplantation. (orig.)

  18. Pulmonary nodules: sensitivity of maximum intensity projection versus that of volume rendering of 3D multidetector CT data

    NARCIS (Netherlands)

    Peloschek, Philipp; Sailer, Johannes; Weber, Michael; Herold, Christian J.; Prokop, Mathias; Schaefer-Prokop, Cornelia

    2007-01-01

    PURPOSE: To prospectively compare maximum intensity projection (MIP) and volume rendering (VR) of multidetector computed tomographic (CT) data for the detection of small intrapulmonary nodules. MATERIALS AND METHODS: This institutional review board-approved prospective study included 20 oncology

  19. Volume rendering based on magnetic resonance imaging: advances in understanding the three-dimensional anatomy of the human knee

    Science.gov (United States)

    Anastasi, Giuseppe; Bramanti, Placido; Di Bella, Paolo; Favaloro, Angelo; Trimarchi, Fabio; Magaudda, Ludovico; Gaeta, Michele; Scribano, Emanuele; Bruschetta, Daniele; Milardi, Demetrio

    2007-01-01

    The choice of medical imaging techniques for the present work, aimed at studying the anatomy of the knee, derives from the increasing use of images in diagnostics, research and teaching, and from the growing importance that these methods are gaining within the scientific community. Medical systems using virtual reality techniques also offer a good alternative to traditional methods, and are considered among the most important tools in the areas of research and teaching. In our work we have shown some possible uses of three-dimensional imaging for the study of the morphology of the normal human knee, and its clinical applications. We used the direct volume rendering technique, and created a data set of images and animations that allow us to visualize the single structures of the human knee in three dimensions. Direct volume rendering makes use of specific algorithms to transform conventional two-dimensional magnetic resonance imaging sets of slices into see-through volume data set images. It is a technique which does not require the construction of intermediate geometric representations, and has the advantage of allowing the visualization of a single image of the full data set, using semi-transparent mapping. Digital images of human structures, and in particular of the knee, offer important information about anatomical structures and their relationships, and are of great value in the planning of surgical procedures. On this basis we studied seven volunteers with an average age of 25 years, who underwent magnetic resonance imaging. After elaboration of the data through post-processing, we analysed the structure of the knee in detail. The aim of our investigation was to obtain three-dimensional images in order to better comprehend the interactions between anatomical structures. We believe that these results, applied to living subjects, widen the frontiers in the areas of teaching, diagnostics, therapy and scientific research. PMID:17645453

  20. Clinical application of three-dimensional spiral CT cerebral angiography with volume rendering

    International Nuclear Information System (INIS)

    Duan Shaoyin; Huang Xi'en; Kang Jianghe; Zhang Dantong; Lin Qingchi; Cai Guoxiang; Xu Meixin; Pang Ruilin

    2002-01-01

    Objective: To study the methodology and assess the clinical value of three-dimensional CT angiography (3D-CTA) with volume rendering (VR) in the cerebral vessels. Methods: Sixty-two patients were examined by means of 3D-CTA with volume rendering. VR was used in the reconstruction of 3D images, and the demonstration of normal vessels and vascular lesions was analyzed in particular. At the same time, comparisons were made between the VR images and the SSD and MIP images, and between the VR-CTA diagnosis and the DSA or postoperative results. Results: In the VR images, cerebral vessel courses and lumina were shown clearly, and the relationship among vascular lesions, surrounding vessels, and neighboring structures could be distinguished. Fifty cases (80.6%) were found positive, 48 of which were correct and 2 were false-positive compared with the DSA or postoperative results. The diagnostic accuracy was 96.0%. There was no obvious difference among the VR, SSD and MIP images in the depiction of the cerebral vessels (P > 0.25). Conclusion: Three-dimensional CT cerebral angiography with VR is a new, noninvasive, effective method; it can even partly replace DSA. The 3D images are characterized by showing the cerebral vascular lumen and overlapped vessels without removal of the skull.

  1. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus

    2017-08-28

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
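
    A CPU sketch of the segment-skipping idea, with a coarse occupancy grid standing in for SparseLeap's rasterized bounding-box geometry: per-pixel lists of non-empty intervals are built first, and the ray then samples only inside them. Brick size and threshold are arbitrary choices, and the axis-aligned viewing direction is a simplification.

    ```python
    import numpy as np

    def occupancy(volume, brick=8, threshold=0.05):
        """Coarse per-brick flag: does a brick contain any non-empty voxels?"""
        d, h, w = (s // brick for s in volume.shape)
        v = volume[:d * brick, :h * brick, :w * brick]
        return v.reshape(d, brick, h, brick, w, brick).max(axis=(1, 3, 5)) > threshold

    def ray_segments(occ, bx, by):
        """Non-empty [start, end) brick intervals along z for one pixel column."""
        segs, start = [], None
        for z, filled in enumerate(occ[:, by, bx]):
            if filled and start is None:
                start = z
            elif not filled and start is not None:
                segs.append((start, z)); start = None
        if start is not None:
            segs.append((start, occ.shape[0]))
        return segs

    def cast(volume, x, y, brick=8):
        """Integrate only inside non-empty segments, leaping over empty space."""
        occ = occupancy(volume, brick)
        return sum(volume[z0 * brick:z1 * brick, y, x].sum()
                   for z0, z1 in ray_segments(occ, x // brick, y // brick))

    vol = np.zeros((64, 64, 64), dtype=np.float32)
    vol[40:48, 10:20, 10:20] = 1.0
    print(cast(vol, x=15, y=15))    # only the single occupied segment is sampled
    ```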

  2. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus; Al-Awami, Ali K.; Beyer, Johanna; Agus, Marco; Pfister, Hanspeter

    2017-01-01

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.

  3. State of the Art in Transfer Functions for Direct Volume Rendering

    KAUST Repository

    Ljung, Patric; Krüger, Jens; Gröller, Eduard; Hadwiger, Markus; Hansen, Charles D.; Ynnerman, Anders

    2016-01-01

    A central topic in scientific visualization is the transfer function (TF) for volume rendering. The TF serves a fundamental role in translating scalar and multivariate data into color and opacity to express and reveal the relevant features present in the data studied. Beyond this core functionality, TFs also serve as a tool for encoding and utilizing domain knowledge and as an expression for visual design of material appearances. TFs also enable interactive volumetric exploration of complex data. The purpose of this state-of-the-art report (STAR) is to provide an overview of research into the various aspects of TFs, which lead to interpretation of the underlying data through the use of meaningful visual representations. The STAR classifies TF research into the following aspects: dimensionality, derived attributes, aggregated attributes, rendering aspects, automation, and user interfaces. The STAR concludes with some interesting research challenges that form the basis of an agenda for the development of next generation TF tools and methodologies. © 2016 The Author(s) Computer Graphics Forum © 2016 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
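
    A minimal one-dimensional transfer function, the simplest of the dimensionalities surveyed above, can be sketched as a piecewise-linear lookup table that maps scalars to RGBA; the control points below are hypothetical CT-like values chosen for illustration, not recommendations from the report.

    ```python
    import numpy as np

    def make_tf(control_points, resolution=256):
        """Build a 1-D RGBA lookup table from (scalar, r, g, b, alpha) control
        points by piecewise-linear interpolation over the scalar range."""
        pts = np.array(sorted(control_points), dtype=float)
        scalars = np.linspace(pts[0, 0], pts[-1, 0], resolution)
        table = np.stack([np.interp(scalars, pts[:, 0], pts[:, i]) for i in range(1, 5)], axis=1)
        return scalars, table

    def classify(volume, scalars, table):
        """Map every voxel scalar to an RGBA tuple via the lookup table."""
        idx = np.clip(np.searchsorted(scalars, volume) - 1, 0, len(scalars) - 1)
        return table[idx]

    # Hypothetical CT-like TF: air transparent, soft tissue faintly red, bone opaque white.
    scalars, table = make_tf([(-1000, 0.0, 0.0, 0.0, 0.00),
                              (   40, 1.0, 0.2, 0.2, 0.05),
                              (  400, 1.0, 1.0, 1.0, 0.90)])
    rgba = classify(np.random.uniform(-1000, 400, (32, 32, 32)), scalars, table)
    ```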

  4. Virtual endoscopy and 3D volume rendering in the management of frontal sinus fractures.

    Science.gov (United States)

    Belina, Stanko; Cuk, Viseslav; Klapan, Ivica

    2009-12-01

    Frontal sinus fractures (FSF) are commonly caused by traffic accidents, assaults, industrial accidents and gunshot wounds. Classical roentgenography has a high proportion of false-negative findings in cases of FSF and is not particularly useful in examining the severity of damage to the frontal sinus posterior table and the nasofrontal duct region. High-resolution computed tomography was unavoidable during the management of such patients, but it may produce a large quantity of 2D images. Postprocessing of datasets acquired by high-resolution computed tomography from patients with severe head trauma may offer valuable additional help in diagnostics and surgery planning. We performed virtual endoscopy (VE) and 3D volume rendering (3DVR) on high-resolution CT data acquired from a 54-year-old man with both anterior and posterior frontal sinus wall fractures in order to demonstrate the advantages and disadvantages of these methods. Data acquisition was done with a Siemens Somatom Emotion scanner and postprocessing was performed with Syngo 2006G software. VE and 3DVR were performed in a man who suffered blunt trauma to his forehead and nose in a traffic accident. A left frontal sinus anterior wall fracture without dislocation and a fracture of the tabula interna with dislocation were found. The 3D position and orientation of the fracture lines were shown by the 3D rendering software. We concluded that VE and 3DVR can clearly display the anatomic structure of the paranasal sinuses and nasopharyngeal cavity, revealing damage to the sinus wall caused by a fracture and its relationship to surrounding anatomical structures.

  5. State of the Art in Transfer Functions for Direct Volume Rendering

    KAUST Repository

    Ljung, Patric

    2016-07-04

    A central topic in scientific visualization is the transfer function (TF) for volume rendering. The TF serves a fundamental role in translating scalar and multivariate data into color and opacity to express and reveal the relevant features present in the data studied. Beyond this core functionality, TFs also serve as a tool for encoding and utilizing domain knowledge and as an expression for visual design of material appearances. TFs also enable interactive volumetric exploration of complex data. The purpose of this state-of-the-art report (STAR) is to provide an overview of research into the various aspects of TFs, which lead to interpretation of the underlying data through the use of meaningful visual representations. The STAR classifies TF research into the following aspects: dimensionality, derived attributes, aggregated attributes, rendering aspects, automation, and user interfaces. The STAR concludes with some interesting research challenges that form the basis of an agenda for the development of next generation TF tools and methodologies. © 2016 The Author(s) Computer Graphics Forum © 2016 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  6. MRI of the labyrinth with volume rendering for cochlear implants candidates

    International Nuclear Information System (INIS)

    Sakata, Motomichi; Harada, Kuniaki; Shirase, Ryuji; Suzuki, Junpei; Nagahama, Hiroshi

    2009-01-01

    We demonstrated three-dimensional models of the labyrinth by volume rendering (VR) in the preoperative assessment for cochlear implantation. MRI data sets were acquired in selected subjects using three-dimensional fast spin echo sequences (3D-FSE). We produced the three-dimensional models of the labyrinth from axial heavily T2-weighted images. The three-dimensional models distinguished the scala tympani and scala vestibuli and provided multidirectional images. The optimal-threshold three-dimensional models clearly showed the focal region of signal loss in the cochlear turns (47.1%) and the presence of inner ear anomalies (17.3%) in our series of patients. It was concluded that these three-dimensional models by VR provide the oto-surgeon with precise, detailed, and easily interpreted information about the cochlear turns for cochlear implant candidates. (author)

  7. 3D reconstruction from X-ray fluoroscopy for clinical veterinary medicine using differential volume rendering

    International Nuclear Information System (INIS)

    Khongsomboon, K.; Hamamoto, Kazuhiko; Kondo, Shozo

    2007-01-01

    3D reconstruction from ordinary X-ray equipment, rather than CT or MRI, is required in clinical veterinary medicine. The authors have already proposed a 3D reconstruction technique from X-ray photographs to present bone structure. Although the reconstruction is useful for veterinary medicine, the technique has two problems: one concerns X-ray exposure, and the other the data acquisition process. An X-ray modality that is not a special one but can solve these problems is X-ray fluoroscopy. Therefore, in this paper, we propose a method for 3D reconstruction from X-ray fluoroscopy for clinical veterinary medicine. Fluoroscopy is usually used to observe the movement of an organ or to identify the position of an organ for surgery using weak X-ray intensity. Since fluoroscopy can output the observed result as a movie, the two problems caused by the use of X-ray photographs can be solved. However, a new problem arises due to the weak X-ray intensity. Although fluoroscopy can present information not only on bone structure but also on soft tissues, the contrast is very low and it is very difficult to recognize some soft tissues. It would be very useful to be able to observe not only bone structure but also soft tissues clearly with ordinary X-ray equipment in the field of clinical veterinary medicine. To solve this problem, this paper proposes a new method to determine opacity in the volume rendering process. The opacity is determined according to the 3D differential coefficient of the 3D reconstruction. This differential volume rendering can present a 3D structure image of multiple organs volumetrically and clearly for clinical veterinary medicine. This paper shows results of simulation and experimental investigation of a small dog and evaluation by veterinarians. (author)
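
    A simplified reading of the opacity rule described above, in NumPy: derive opacity from the 3-D gradient magnitude of the reconstruction and composite intensities front to back along one axis. The exponential mapping and its scale are assumptions rather than the authors' exact formulation.

    ```python
    import numpy as np

    def differential_opacity(volume, scale=4.0):
        """Assign opacity from the 3-D gradient magnitude, so tissue boundaries
        become visible even when the overall contrast is low."""
        gx, gy, gz = np.gradient(volume.astype(np.float32))
        grad = np.sqrt(gx * gx + gy * gy + gz * gz)
        return 1.0 - np.exp(-scale * grad / (grad.max() + 1e-6))

    def render_axis(volume, alpha, axis=0):
        """Front-to-back compositing of intensity weighted by the derived opacity."""
        vol = np.moveaxis(volume, axis, 0).astype(np.float32)
        a = np.moveaxis(alpha, axis, 0)
        transmittance = np.ones(vol.shape[1:], dtype=np.float32)
        image = np.zeros_like(transmittance)
        for s in range(vol.shape[0]):
            image += transmittance * a[s] * vol[s]
            transmittance *= 1.0 - a[s]
        return image

    reconstruction = np.random.rand(64, 64, 64)           # stand-in for the 3-D reconstruction
    image = render_axis(reconstruction, differential_opacity(reconstruction))
    ```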

  8. Diagnostic Accuracy of the Volume Rendering Images of Multi-Detector CT for the Detection of Lumbar Transverse Process Fractures

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yun Hak; Chun, Tong Jin [Dept. of Radiology, Eulji University Hospital, Daejeon (Korea, Republic of)]

    2012-01-15

    To compare the accuracy of the three-dimensional computed tomographic (3D CT) volume rendering technique with axial images of multi-detector row computed tomography for identifying lumbar transverse process (LTP) fractures in trauma patients. We retrospectively evaluated 42 patients with back pain as a result of blunt trauma between January and June of 2010. Two radiologists examined the 3D CT volume rendering images independently. The confirmation of an LTP fracture was based on the consensus of the two radiologists on the axial images. The results of the 3D CT volume rendering images were compared with the axial images and the diagnostic powers (sensitivity, specificity, and accuracy) were calculated. Seven of the 42 patients had twenty-five lumbar transverse process fractures. The diagnostic power of the 3D CT volume rendering technique was as accurate as that of the axial images: Reader 1, sensitivity 96%, specificity 100%, accuracy 99.9%; Reader 2, sensitivity 100%, specificity 99.8%, accuracy 99.8%. The agreement between the two radiologists was 99.8%. 3D CT volume rendering images can serve as an alternative to axial images for detecting lumbar transverse process fractures, with good image quality.

  9. Interactive definition of transfer functions in volume rendering based on image markers

    International Nuclear Information System (INIS)

    Teistler, Michael; Nowinski, Wieslaw L.; Breiman, Richard S.; Liong, Sauw Ming; Ho, Liang Yoong; Shahab, Atif

    2007-01-01

    Objectives A user interface for transfer function (TF) definition in volume rendering (VR) was developed that allows the user to intuitively assign color and opacity to the original image intensities. This software may surpass solutions currently deployed in clinical practice by simplifying the use of TFs beyond predefined settings that are not always applicable. Materials and methods The TF definition is usually a cumbersome task that requires the user to manipulate graphical representations of the TF (e.g. trapezoids). A new method that allows the user to place markers at points of interest directly on CT and MRI images or orthogonal reformations was developed based on two-dimensional region growing and a few user-definable marker-related parameters. For each user defined image marker, a segment of the transfer function is computed. The resulting TF can also be applied to the slice image views. Results were judged subjectively. Results Each individualized TF can be defined interactively in a few simple steps. For every user interaction, immediate visual feedback is given. Clinicians who tested the application appreciated being able to directly work on familiar slice images to generate the desired 3D views. Conclusion Interactive TF definition can increase the actual utility of VR, help to understand the role of the TF with its variations, and increase the acceptance of VR as a clinical tool. (orig.)
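
    The marker-to-segment step might look roughly like the sketch below: grow a 2-D region around the user's marker within an intensity tolerance, then turn the region's intensity range into one opacity segment of the transfer function. The tolerance, percentile trimming and segment shape are illustrative choices, not the published parameters.

    ```python
    import numpy as np
    from collections import deque

    def grow(image, seed, tolerance):
        """Simple 2-D region growing from a user-placed marker: collect connected
        pixels whose intensity stays within +/- tolerance of the seed value."""
        h, w = image.shape
        seed_value = image[seed]
        visited = np.zeros(image.shape, dtype=bool)
        visited[seed] = True
        queue, region = deque([seed]), []
        while queue:
            y, x = queue.popleft()
            region.append(image[y, x])
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and not visited[ny, nx]
                        and abs(image[ny, nx] - seed_value) <= tolerance):
                    visited[ny, nx] = True
                    queue.append((ny, nx))
        return np.array(region)

    def tf_segment(region_values, color, max_alpha=0.8):
        """Turn the grown region into one TF segment over its intensity range."""
        lo, hi = np.percentile(region_values, [5, 95])
        return {"range": (float(lo), float(hi)), "color": color, "alpha": max_alpha}

    ct_slice = np.random.normal(60.0, 10.0, (128, 128))   # stand-in for one CT slice
    segment = tf_segment(grow(ct_slice, (64, 64), tolerance=20.0), color=(1.0, 0.8, 0.6))
    ```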

  10. Common crus aplasia: diagnosis by 3D volume rendering imaging using 3DFT-CISS sequence

    International Nuclear Information System (INIS)

    Kim, H.J.; Song, J.W.; Chon, K.-M.; Goh, E.-K.

    2004-01-01

    AIM: The purpose of this study was to evaluate the findings of three-dimensional (3D) volume rendering (VR) imaging in common crus aplasia (CCA) of the inner ear. MATERIALS AND METHODS: Using 3D VR imaging of temporal bone constructive interference in steady state (CISS) magnetic resonance (MR) images, we retrospectively reviewed seven inner ears of six children who were candidates for cochlear implants and who had been diagnosed with CCA. As controls, we used the same method to examine 402 inner ears of 201 patients who had no clinical symptoms or signs of sensorineural hearing loss. Temporal bone MR imaging (MRI) was performed with a 1.5 T MR machine using a CISS sequence, and VR of the inner ear was performed on a work station. Morphological image analysis was performed on rotation views of 3D VR images. RESULTS: In all seven cases, CCA was diagnosed by the absence of the common crus. The remaining superior semicircular canal (SCC) was normal in five and hypoplastic in two inner ears, while the posterior SCC was normal in all seven. One patient showed bilateral symmetrical CCA. Complicated combined anomalies were seen in the cochlea, vestibule and lateral SCC. CONCLUSION: 3D VR imaging findings with MR CISS sequence can directly diagnose CCA. This technique may be useful in delineating detailed anomalies of SCCs

  11. Adaptive statistical iterative reconstruction for volume-rendered computed tomography portovenography. Improvement of image quality

    International Nuclear Information System (INIS)

    Matsuda, Izuru; Hanaoka, Shohei; Akahane, Masaaki

    2010-01-01

    Adaptive statistical iterative reconstruction (ASIR) is a reconstruction technique for computed tomography (CT) that reduces image noise. The purpose of our study was to investigate whether ASIR improves the quality of volume-rendered (VR) CT portovenography. Institutional review board approval, with waived consent, was obtained. A total of 19 patients (12 men, 7 women; mean age 69.0 years; range 25-82 years) suspected of having liver lesions underwent three-phase enhanced CT. VR image sets were prepared with both the conventional method and ASIR. The required time to make VR images was recorded. Two radiologists performed independent qualitative evaluations of the image sets. The Wilcoxon signed-rank test was used for statistical analysis. Contrast-to-noise ratios (CNRs) of the portal and hepatic vein were also evaluated. Overall image quality was significantly improved by ASIR (P<0.0001 and P=0.0155 for each radiologist). ASIR enhanced CNRs of the portal and hepatic vein significantly (P<0.0001). The time required to create VR images was significantly shorter with ASIR (84.7 vs. 117.1 s; P=0.014). ASIR enhances CNRs and improves image quality in VR CT portovenography. It also shortens the time required to create liver VR CT portovenographs. (author)

  12. An interactive tool for CT volume rendering and sagittal plane-picking of the prostate for radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Jani, Ashesh B.; Pelizzari, Charles A.; Chen, George T.Y.; Grzezcszuk, Robert P.; Vijayakumar, Srinivasan

    1997-01-01

    Objective: Accurate and precise target volume and critical structure definition is a basic necessity in radiotherapy. The prostate, particularly the apex (an important potential site of recurrence in prostate cancer patients), is a challenging structure to define using any modality, including conventional axial CT. Invasive or expensive techniques, such as retrograde urethrography or MRI, could be avoided if localization of the prostate were possible using information already available on the planning CT. Our primary objective was to build a software tool to determine whether volume rendering and sagittal plane-picking, which are CT-based, noninvasive visualization techniques, were of utility in radiotherapy treatment planning for the prostate. Methods: Using AVS (Application Visualization System) on a Silicon Graphics Indigo 2 High Impact workstation, we have developed a tool that enables the clinician to efficiently navigate a CT volume and to use volume rendering and sagittal plane-picking to better define structures at any anatomic site. We applied the tool to the specific example of the prostate to compare the two visualization techniques with the current standard of axial CT. The prostate was defined on 80-slice CT scans (scanning thickness 4mm, pixel size 2mm x 2mm) of prostate cancer patients using axial CT images, volume-rendered CT images, and sagittal plane-picked images. Results: The navigation of the prostate using the different visualization techniques qualitatively demonstrated that the sagittal plane-picked images, and even more so the volume-rendered images, revealed the prostate (particularly the lower border) better in relationship to the surrounding regional anatomy (bladder, rectum, pelvis, and penile structures) than did the axial images. A quantitative comparison of the target volumes obtained by navigating using the different visualization techniques demonstrated that, when compared to the prostate volume defined on axial CT, a larger volume

  13. Clinical Application of an Open-Source 3D Volume Rendering Software to Neurosurgical Approaches.

    Science.gov (United States)

    Fernandes de Oliveira Santos, Bruno; Silva da Costa, Marcos Devanir; Centeno, Ricardo Silva; Cavalheiro, Sergio; Antônio de Paiva Neto, Manoel; Lawton, Michael T; Chaddad-Neto, Feres

    2018-02-01

    Preoperative recognition of the anatomic individualities of each patient can help to achieve more precise and less invasive approaches. It also may help to anticipate potential complications and intraoperative difficulties. Here we describe the use, accuracy, and precision of a free tool for planning microsurgical approaches using 3-dimensional (3D) reconstructions from magnetic resonance imaging (MRI). We used the 3D volume rendering tool of a free open-source software program for 3D reconstruction of images of surgical sites obtained by MRI volumetric acquisition. We recorded anatomic reference points, such as the sulcus and gyrus, and vascularization patterns for intraoperative localization of lesions. Lesion locations were confirmed during surgery by intraoperative ultrasound and/or electrocorticography and later by postoperative MRI. Between August 2015 and September 2016, a total of 23 surgeries were performed using this technique for 9 low-grade gliomas, 7 high-grade gliomas, 4 cortical dysplasias, and 3 arteriovenous malformations. The technique helped delineate lesions with an overall accuracy of 2.6 ± 1.0 mm. 3D reconstructions were successfully performed in all patients, and images showed sulcus, gyrus, and venous patterns corresponding to the intraoperative images. All lesion areas were confirmed both intraoperatively and at the postoperative evaluation. With the technique described herein, it was possible to successfully perform 3D reconstruction of the cortical surface. This reconstruction tool may serve as an adjunct to neuronavigation systems or may be used alone when such a system is unavailable. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Three-dimensional volume rendering of the ankle based on magnetic resonance images enables the generation of images comparable to real anatomy.

    Science.gov (United States)

    Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio

    2009-11-01

    We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon-bone-muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18-30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data.

  15. Usefulness of PC based 3D volume rendering technique in the evaluation of suspected aneurysm on brain MRA

    International Nuclear Information System (INIS)

    Baek, Seung Il; Lee, Ghi Jai; Shim, Jae Chan; Bang, Sun Woo; Ryu, Seok Jong; Kim, Ho Kyun

    2002-01-01

    To evaluate the usefulness of a volume rendering technique using 3D visualization software on a PC in patients with suspected intracranial aneurysm on brain MRA. We prospectively analyzed 21 patients with suspected aneurysms on routine MIP images, which were obtained at 15° increments along the axial and sagittal planes, among 135 patients in whom brain MRA was performed for stroke symptoms during the preceding 5 months. The locations were the anterior communicating artery (A-com) in 8 patients, the posterior communicating artery (P-com) in 3, the ICA bifurcation in 5, the MCA bifurcation in 4, and the basilar tip in one. The male to female ratio was 14:7 and the mean age was 62 years. MRA source images were sent to a PC through a LAN, and the existence of an aneurysm was evaluated with the volume rendering technique using 3D visualization software on the PC. The presence or absence of an aneurysm on MIP and volume rendering images was decided by the consensus of two radiologists. With the volume rendering technique, we found aneurysms in 1 of the 8 patients with a suspected aneurysm at the A-com and in 1 of the 3 patients with a suspected aneurysm at the P-com on routine MIP images. Confirmative angiography and interventional procedures were performed in these 2 patients. The causes for mimicking an aneurysm on MIP were flow displacement artifact in 9, a normal P-com infundibulum in 2, and overlapped or narrowed vessels in 8 patients; among them, confirmative angiography was performed in 2 patients. The volume rendering technique using 3D visualization software on a PC is useful to scrutinize suspected aneurysms on routine MIP images and to avoid further invasive angiography.

  16. Role of volume rendered 3-D computed tomography in conservative management of trauma-related thoracic injuries.

    LENUS (Irish Health Repository)

    OʼLeary, Donal Peter

    2012-09-01

    Pneumatic nail guns are a tool used commonly in the construction industry and are widely available. Accidental injuries from nail guns are common, and several cases of suicide using a nail gun have been reported. Computed tomographic (CT) imaging, together with echocardiography, has been shown to be the gold standard for investigation of these cases. We present a case of a 55-year-old man who presented to the accident and emergency unit of a community hospital following an accidental pneumatic nail gun injury to his thorax. Volume-rendered CT of the thorax allowed an accurate assessment of the thoracic injuries sustained by this patient. As there was no evidence of any acute life-threatening injury, a sternotomy was avoided and the patient was observed closely until discharge. In conclusion, volume-rendered 3-dimensional CT can greatly help in the decision to avoid an unnecessary sternotomy in patients with a thoracic nail gun injury.

  17. SPATIOTEMPORAL VISUALIZATION OF TIME-SERIES SATELLITE-DERIVED CO2 FLUX DATA USING VOLUME RENDERING AND GPU-BASED INTERPOLATION ON A CLOUD-DRIVEN DIGITAL EARTH

    Directory of Open Access Journals (Sweden)

    S. Wu

    2017-10-01

    The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.

  18. Technical Note: A 3-D rendering algorithm for electromechanical wave imaging of a beating heart.

    Science.gov (United States)

    Nauleau, Pierre; Melki, Lea; Wan, Elaine; Konofagou, Elisa

    2017-09-01

    Arrhythmias can be treated by ablating the heart tissue in the regions of abnormal contraction. The current clinical standard provides electroanatomic 3-D maps to visualize the electrical activation and locate the arrhythmogenic sources. However, the procedure is time-consuming and invasive. Electromechanical wave imaging is an ultrasound-based noninvasive technique that can provide 2-D maps of the electromechanical activation of the heart. In order to fully visualize the complex 3-D pattern of activation, several 2-D views are acquired and processed separately. They are then manually registered with a 3-D rendering software to generate a pseudo-3-D map. However, this last step is operator-dependent and time-consuming. This paper presents a method to generate a full 3-D map of the electromechanical activation using multiple 2-D images. Two canine models were considered to illustrate the method: one in normal sinus rhythm and one paced from the lateral region of the heart. Four standard echographic views of each canine heart were acquired. Electromechanical wave imaging was applied to generate four 2-D activation maps of the left ventricle. The radial positions and activation timings of the walls were automatically extracted from those maps. In each slice, from apex to base, these values were interpolated around the circumference to generate a full 3-D map. In both cases, a 3-D activation map and a cine-loop of the propagation of the electromechanical wave were automatically generated. The 3-D map showing the electromechanical activation timings overlaid on realistic anatomy assists with the visualization of the sources of earlier activation (which are potential arrhythmogenic sources). The earliest sources of activation corresponded to the expected ones: septum for the normal rhythm and lateral for the pacing case. The proposed technique provides, automatically, a 3-D electromechanical activation map with a realistic anatomy. This represents a step towards a
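
    The circumferential interpolation step can be sketched for one short-axis slice as below: sparse activation timings at a few wall angles (the walls visible in the standard echocardiographic views) are interpolated periodically around the full circumference, and stacking such slices from apex to base yields the full map. The angles and timings used here are hypothetical.

    ```python
    import numpy as np

    def circumferential_map(angles_deg, timings_ms, n_out=360):
        """Periodic interpolation of sparse activation timings around one slice."""
        order = np.argsort(angles_deg)
        a = np.asarray(angles_deg, dtype=float)[order]
        t = np.asarray(timings_ms, dtype=float)[order]
        # Pad with wrapped copies so np.interp handles the 0/360-degree seam.
        a_ext = np.concatenate([a - 360.0, a, a + 360.0])
        t_ext = np.concatenate([t, t, t])
        query = np.linspace(0.0, 360.0, n_out, endpoint=False)
        return query, np.interp(query, a_ext, t_ext)

    # Hypothetical timings (ms) at eight wall positions of one short-axis slice.
    angles = [0, 45, 90, 135, 180, 225, 270, 315]
    timings = [40, 55, 70, 80, 75, 60, 50, 45]
    theta, activation = circumferential_map(angles, timings)
    ```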

  19. Three dimensional volume rendering virtual endoscopy of the ossicles using a multi-row detector CT: applications and limitations

    International Nuclear Information System (INIS)

    Kim, Su Yeon; Choi, Sun Seob; Kang, Myung Jin; Shin, Tae Beom; Lee, Ki Nam; Kang, Myung Koo

    2005-01-01

    This study was conducted to know the applications and limitations of three dimensional volume rendering virtual endoscopy of the ossicles using a multi-row detector CT. This study examined 25 patients who underwent temporal bone CT using a 16-row detector CT as a result of hearing problems or trauma. The axial CT scan of the temporal bone was performed with a 0.6 mm collimation, and a reconstruction was carried out with a U70u sharp of kernel value, a 1 mm thickness and 0.5-1.0 mm increments. After observing the ossicles in the axial and coronal images, virtual endoscopy was performed using a three dimensional volume rendering technique with a threshold value of-500 HU. The intra-operative otoendoscopy was performed in 12 ears, and was compared with the virtual endoscopy findings. Virtual endoscopy of the 29 ears without hearing problems demonstrated hypoplastic or an incomplete depiction of the stapes superstructures in 25 ears and a normal depiction in 4 ears. Virtual endoscopy of 21 ears with hearing problems demonstrated no ossicles in 1 ears, no malleus in 3 ears, a malleoincudal subluxation in 6 ears, a dysplastic incus in 5 ears, an incudostapedial subluxation in 9 ears, dysplastic stapes in 2 ears, a hypoplastic or incomplete depiction of the stapes in 16 ears and no stapes in 1 ears. In contrast to the intra-operative otoendoscopy, 8 out of 12 ears showed a hypoplastic or deformed stapes in the virtual endoscopy. Volume rendering virtual endoscopy using a multi-row detector CT is an excellent method for evaluation the ossicles in three dimension, even thought the partial volume effect for the stapes superstructures needs to be considered

  20. A Volume Clearing Algorithm for Muon Tomography

    OpenAIRE

    Mitra, D.; Day, K.; Hohlmann, M.

    2014-01-01

    The primary objective is to enhance muon-tomographic image reconstruction capability by providing distinctive information for deciding on the properties of regions or voxels within a probed volume "V" at any point during scanning: threat type, non-threat type, or not-sufficient data. An algorithm (MTclear) is being developed to ray-trace muon tracks and count how many straight tracks are passing through a voxel. If a voxel "v" has a sufficient number of straight tracks (t), then "v" is ...
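
    The track-counting step (not the full MTclear algorithm, whose decision rules are only partly quoted above) can be sketched by sampling points along each entry-exit line and incrementing the voxels they fall in; the track endpoints below are made up.

    ```python
    import numpy as np

    def count_tracks(tracks, shape, n_samples=200):
        """For every voxel of the probed volume, count how many straight muon
        tracks pass through it, by sampling points along each entry-exit line."""
        counts = np.zeros(shape, dtype=np.int32)
        dims = np.array(shape)
        for entry, exit_point in tracks:
            entry = np.asarray(entry, dtype=float)
            exit_point = np.asarray(exit_point, dtype=float)
            pts = entry + np.linspace(0.0, 1.0, n_samples)[:, None] * (exit_point - entry)
            idx = np.floor(pts).astype(int)
            inside = np.all((idx >= 0) & (idx < dims), axis=1)
            hit = np.unique(idx[inside], axis=0)   # each voxel at most once per track
            counts[hit[:, 0], hit[:, 1], hit[:, 2]] += 1
        return counts

    tracks = [((0.5, 10.2, 3.3), (31.5, 20.7, 28.1)),
              ((5.0, 0.1, 0.1), (5.0, 31.9, 31.9))]
    voxel_hits = count_tracks(tracks, shape=(32, 32, 32))
    ```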

  1. Evaluation of obstructive airway lesions in complex congenital heart disease using composite volume-rendered images from multislice CT

    International Nuclear Information System (INIS)

    Choo, Ki Seok; Kim, Chang Won; Lee, Tae Hong; Kim, Suk; Kim, Kun Il; Lee, Hyoung Doo; Ban, Ji Eun; Sung, Si Chan; Chang, Yun Hee

    2006-01-01

    Multislice CT (MSCT) allows high-quality volume-rendered (VR) and composite volume-rendered images. To investigate the clinical usefulness of composite VR images in the evaluation of the relationship between cardiovascular structures and the airway in children with complex congenital heart disease (CHD). Four- or 16-slice MSCT scanning was performed consecutively in 77 children (mean age 6.4 months) with CHD and respiratory symptoms, a chest radiographic abnormality, or abnormal course of the pulmonary artery on ECHO. MSCT scanning was performed during breathing or after sedation. Contrast medium (2 ml/kg) was administered through a pedal venous route or arm vein in all patients. The VR technique was used to reconstruct the cardiovascular structures and airway, and then both VR images were composed using the commercial software (VoxelPlus 2; Daejeon, Korea). Stenoses were seen in the trachea in 1 patient and in the bronchi in 14 patients (19%). Other patients with complex CHD did not have significant airway stenoses. Composite VR images with MSCT can provide more exact airway images in relationship to the surrounding cardiovascular structures and thus help in optimizing management strategies in treating CHD. (orig.)

  2. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology

    Science.gov (United States)

    Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael

    2012-02-01

    Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs but leaving out rib cage and spine. The problem is addressed with a model-based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2 mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.

  3. Development of a computer simulation system of intraoral radiography using perspective volume rendering of CT data

    International Nuclear Information System (INIS)

    Okamura, Kazutoshi; Tanaka, Takemasa; Yoshiura, Kazunori; Tokumori, Kenji; Kanda, Shigenobu

    2002-01-01

    The purpose of this study was to evaluate the usefulness of a computer simulation system for intraoral radiography as an educational aid in radiographic training for dental students. A dried skull was scanned with a multidetector CT, and the series of slice data was transferred to a workstation. The software AVS Express Developer was used to construct the X-ray projected images from the CT slice data. Geometric reproducibility was confirmed using numerical phantoms. We simulated images using the perspective projection method with an average value algorithm in this software. Simulated images were compared with conventional film images projected from the same geometrical positions, including eccentric projections. Furthermore, to confirm the changes in the image depending on the projection angle of the X-ray beam, we constructed simulation images in which the root apices were enhanced with the maximum value algorithm. Using this method, high-resolution simulated images with perspective, as opposed to parallel, projection were constructed. Compared with conventional film images, all major anatomic components could be visualized easily. Intraoral radiographs at any arbitrary projection angle could be simulated, which is impossible in the conventional training scheme for radiographic technique. Therefore, not only standard projected images but also eccentric projections could be displayed. A computer simulation system for intraoral radiography using this method may be useful for training in intraoral radiographic technique for dental students. (author)
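
    The projection step can be sketched as a perspective average-intensity projection: rays from a virtual X-ray source through each detector pixel average the CT values they traverse, and swapping the mean for a max gives the maximum-value variant mentioned above. The source position and detector geometry below are arbitrary stand-ins.

    ```python
    import numpy as np

    def perspective_projection(volume, source, detector_origin, du, dv,
                               shape=(128, 128), n_samples=256, reduce=np.mean):
        """Cast a ray from the source through every detector pixel and reduce the
        CT values sampled along it (mean = average value, max = maximum value)."""
        h, w = shape
        image = np.zeros((h, w), dtype=np.float32)
        dims = np.array(volume.shape)
        for i in range(h):
            for j in range(w):
                target = detector_origin + i * dv + j * du
                pts = source + np.linspace(0.0, 1.0, n_samples)[:, None] * (target - source)
                idx = np.floor(pts).astype(int)
                inside = np.all((idx >= 0) & (idx < dims), axis=1)
                if inside.any():
                    image[i, j] = reduce(volume[idx[inside, 0], idx[inside, 1], idx[inside, 2]])
        return image

    ct = np.random.rand(64, 64, 64).astype(np.float32)
    radiograph = perspective_projection(ct,
                                        source=np.array([32.0, 32.0, -120.0]),
                                        detector_origin=np.array([0.0, 0.0, 96.0]),
                                        du=np.array([0.0, 0.5, 0.0]),
                                        dv=np.array([0.5, 0.0, 0.0]))
    ```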

  4. Three-dimensional reconstructions of the orbital floor by volume-rendering of multidetector-row CT data

    International Nuclear Information System (INIS)

    Yoshikawa, Tetsuya; Miyajima, Akira; Fujita, Yuko; Yamada, Kazuo

    2011-01-01

    The advent of 3D-CT has made the evaluation of complicated facial fractures much easier than before. However, its use in injuries involving the orbital floor has been limited by the difficulty of visualizing the thin bony structures given artifacts caused by the partial volume effect. Nevertheless, high-technology machines such as multidetector-row CT (MDCT) and new-generation software have improved the quality of 3D imaging, and this paper describes a procedure for obtaining better visualization of the orbital floor using an MDCT scanner. Forty trauma cases were subjected to MDCT: 13 with injury to the orbital floor, and 27 without. All scans were performed in the standard manner, at a slice thickness of 0.5 mm. 3D-CT images overlooking the orbital floor were created with soft tissue included, to minimize the pseudo-foramen artifacts produced through volume rendering. Bone deficits, fracture lines, and grafted bone were visible in the 3D images, and visualization was supported by the ready creation of stereoscopic images from MDCT volume data. Measurement of the pseudo-foramina revealed approximately half the artifacts to be less than 5 mm in diameter, suggesting that this method is practical in the treatment of cases involving injury to the orbital floor without subjecting the patient to undue increases in radiation exposure. (author)

  5. Preoperative evaluation of living renal donors: value of contrast-enhanced 3D magnetic resonance angiography and comparison of three rendering algorithms

    International Nuclear Information System (INIS)

    Fink, C.; Hallscheidt, P.J.; Hosch, W.P.; Kauffmann, G.W.; Duex, M.; Ott, R.C.; Wiesel, M.

    2003-01-01

    The aim of this study was to assess the value of contrast-enhanced three-dimensional MR angiography (CE 3D MRA) in the preoperative assessment of potential living renal donors, and to compare the accuracy for the depiction of the vascular anatomy using three different rendering algorithms. Twenty-three potential living renal donors were examined with CE 3D MRA (TE/TR=1.3 ms/3.7 ms, field of view 260-320 x 350 mm, 384-448 x 512 matrix, slab thickness 9.4 cm, 72 partitions, section thickness 1.3 mm, scan time 24 s, 0.1 mmol/kg body weight gadobenate dimeglumine). Magnetic resonance angiography data sets were processed with maximum intensity projection (MIP), volume rendering (VR), and shaded-surface display (SSD) algorithms. The image analysis was performed independently by three MR-experienced radiologists recording the number of renal arteries, the presence of early branching or vascular pathology. The combination of digital subtraction angiography (DSA) and intraoperative findings served as the gold standard for the image analysis. In total, 52 renal arteries were correspondingly observed in 23 patients at DSA and surgery. Other findings were 3 cases of early branching of the renal arteries, 4 cases of arterial stenosis and 1 case of bilateral fibromuscular dysplasia. With MRA source data all 52 renal arteries were correctly identified by all readers, compared with 51 (98.1%), 51-52 (98.1-100%) and 49-50 renal arteries (94.2-96.2%) with the MIP, VR and SSD projections, respectively. Similarly, the sensitivity, specificity and accuracy was highest with the MRA source data followed by MIP, VR and SSD. Time requirements were lowest for the MIP reconstructions and highest for the VR reconstructions. Contrast-enhanced 3D MRA is a reliable, non-invasive tool for the preoperative evaluation of potential living renal donors. Maximum intensity projection is favourable for the processing of 3D MRA data, as it has minimal time and computational requirements, while having

  6. Preoperative evaluation of living renal donors: value of contrast-enhanced 3D magnetic resonance angiography and comparison of three rendering algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Fink, C. [Abteilung Radiologische Diagnostik, Radiologische Universitaetsklinik Heidelberg, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany); Abteilung Onkologische Diagnostik und Therapie, Forschungsschwerpunkt Radiologische Diagnostik und Therapie, Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Hallscheidt, P.J.; Hosch, W.P.; Kauffmann, G.W.; Duex, M. [Abteilung Radiologische Diagnostik, Radiologische Universitaetsklinik Heidelberg, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany); Ott, R.C.; Wiesel, M. [Abteilung Urologie und Poliklinik, Chirurgische Universitaetsklinik Heidelberg, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany)

    2003-04-01

    The aim of this study was to assess the value of contrast-enhanced three-dimensional MR angiography (CE 3D MRA) in the preoperative assessment of potential living renal donors, and to compare the accuracy for the depiction of the vascular anatomy using three different rendering algorithms. Twenty-three potential living renal donors were examined with CE 3D MRA (TE/TR=1.3 ms/3.7 ms, field of view 260-320 x 350 mm, 384-448 x 512 matrix, slab thickness 9.4 cm, 72 partitions, section thickness 1.3 mm, scan time 24 s, 0.1 mmol/kg body weight gadobenate dimeglumine). Magnetic resonance angiography data sets were processed with maximum intensity projection (MIP), volume rendering (VR), and shaded-surface display (SSD) algorithms. The image analysis was performed independently by three MR-experienced radiologists recording the number of renal arteries, the presence of early branching or vascular pathology. The combination of digital subtraction angiography (DSA) and intraoperative findings served as the gold standard for the image analysis. In total, 52 renal arteries were correspondingly observed in 23 patients at DSA and surgery. Other findings were 3 cases of early branching of the renal arteries, 4 cases of arterial stenosis and 1 case of bilateral fibromuscular dysplasia. With MRA source data all 52 renal arteries were correctly identified by all readers, compared with 51 (98.1%), 51-52 (98.1-100%) and 49-50 renal arteries (94.2-96.2%) with the MIP, VR and SSD projections, respectively. Similarly, the sensitivity, specificity and accuracy was highest with the MRA source data followed by MIP, VR and SSD. Time requirements were lowest for the MIP reconstructions and highest for the VR reconstructions. Contrast-enhanced 3D MRA is a reliable, non-invasive tool for the preoperative evaluation of potential living renal donors. Maximum intensity projection is favourable for the processing of 3D MRA data, as it has minimal time and computational requirements, while having

  7. Enabling Real-Time Volume Rendering of Functional Magnetic Resonance Imaging on an iOS Device.

    Science.gov (United States)

    Holub, Joseph; Winer, Eliot

    2017-12-01

    Powerful non-invasive imaging technologies like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used daily by medical professionals to diagnose and treat patients. While 2D slice viewers have long been the standard, many tools allowing 3D representations of digital medical data are now available. The newest imaging advancement, functional MRI (fMRI) technology, has changed medical imaging from viewing static to dynamic physiology (4D) over time, particularly to study brain activity. Combined with the rapid adoption of mobile devices for everyday work, this creates a need to visualize fMRI data on tablets or smartphones. However, there are few mobile tools available to visualize 3D MRI data, let alone 4D fMRI data. Building volume rendering tools on mobile devices to visualize 3D and 4D medical data is challenging given the limited computational power of the devices. This paper describes research that explored the feasibility of performing real-time 3D and 4D volume raycasting on a tablet device. The prototype application was tested on a 9.7" iPad Pro using two different fMRI datasets of brain activity. The results show that mobile raycasting is able to achieve between 20 and 40 frames per second for traditional 3D datasets, depending on the sampling interval, and up to 9 frames per second for 4D data. While the prototype application did not always achieve true real-time interaction, these results clearly demonstrated that visualizing 3D and 4D digital medical data is feasible with a properly constructed software framework.

  8. Three-dimensional volume rendering of tibiofibular joint space and quantitative analysis of change in volume due to tibiofibular syndesmosis diastases

    International Nuclear Information System (INIS)

    Taser, F.; Shafiq, Q.; Ebraheim, N.A.

    2006-01-01

    The diagnosis of ankle syndesmosis injuries is made by various imaging techniques. The present study was undertaken to examine whether three-dimensional reconstruction of axial CT images and calculation of the volume of the tibiofibular joint space enhance the sensitivity of diastasis diagnosis. Six adult cadaveric ankle specimens were used for spiral CT-scan assessment of the tibiofibular syndesmosis. After the specimens were dissected, external fixation was performed and diastases of 1, 2, and 3 mm were simulated by a precalibrated device. Helical CT scans were obtained with 1.0-mm slice thickness. The data were transferred to the computer software AcquariusNET. The contours of the tibiofibular syndesmosis joint space were then outlined on each axial CT slice, and the collection of these slices was stacked using the computer software AutoCAD 2005, according to the spatial arrangement and geometrical coordinates of each slice, to produce a three-dimensional reconstruction of the joint space. The area of each slice and the volume of the entire tibiofibular joint space were calculated. The tibiofibular joint space at the 10th-mm slice level was also measured on axial CT scan images at normal, 1, 2 and 3-mm joint space diastases. The three-dimensional volume-rendering of the tibiofibular syndesmosis joint space from the spiral CT data demonstrated the shape of the joint space and has been found to be a sensitive method for calculating joint space volume. We found that, from normal to 1 mm, a 1-mm diastasis increases the joint space volume by approximately 43%, while from 1 to 3 mm there is an increase of about 20% for each 1-mm increase. Volume calculation using this method can be performed in cases of syndesmotic instability after ankle injuries and for preoperative and postoperative evaluation of the integrity of the tibiofibular syndesmosis. (orig.)
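    The volume computation described above reduces to summing the segmented joint-space area of each axial slice and multiplying by the slice spacing. The following Python sketch illustrates that step under simplifying assumptions; the function name, the use of binary masks as input, and the example pixel spacing are illustrative choices, not details taken from the study.

```python
import numpy as np

def joint_space_volume_mm3(slice_masks, pixel_spacing_mm, slice_thickness_mm=1.0):
    """Sum segmented joint-space areas over stacked axial slices.

    slice_masks: list of 2D boolean arrays, one per axial CT slice, where True
    marks pixels inside the outlined tibiofibular joint space.
    """
    pixel_area = pixel_spacing_mm[0] * pixel_spacing_mm[1]        # mm^2 per pixel
    slice_areas = [mask.sum() * pixel_area for mask in slice_masks]
    return sum(slice_areas) * slice_thickness_mm                  # mm^3

# Toy example: two 4x4 slices with a small segmented region each
masks = [np.zeros((4, 4), dtype=bool) for _ in range(2)]
masks[0][1:3, 1:3] = True   # 4 pixels
masks[1][1:3, 1:2] = True   # 2 pixels
print(joint_space_volume_mm3(masks, pixel_spacing_mm=(0.5, 0.5)))  # 1.5 mm^3
```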

  9. Rendering of Gemstones

    OpenAIRE

    Krtek, Lukáš

    2012-01-01

    The distinctive appearance of gemstones is caused by the way light reflects and refracts multiple times inside them. The goal of this thesis is to design and implement an application for photorealistic rendering of gems. The most important effects we aim for are realistic dispersion of light and refractive caustics. For rendering we use the well-known path tracing algorithm with an experimental modification for faster computation of caustic effects. In this thesis we also design and impleme...

  10. Video-based rendering

    CERN Document Server

    Magnor, Marcus A

    2005-01-01

    Driven by consumer-market applications that enjoy steadily increasing economic importance, graphics hardware and rendering algorithms are a central focus of computer graphics research. Video-based rendering is an approach that aims to overcome the current bottleneck in the time-consuming modeling process and has applications in areas such as computer games, special effects, and interactive TV. This book offers an in-depth introduction to video-based rendering, a rapidly developing new interdisciplinary topic employing techniques from computer graphics, computer vision, and telecommunication en

  11. Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering; Die computerassistierte Operationsplanung in der Abdominalchirurgie des Kindes. 3D-Visualisierung mittels ''volume rendering'' in der MRT

    Energy Technology Data Exchange (ETDEWEB)

    Guenther, P.; Holland-Cunz, S.; Waag, K.L. [Universitaetsklinikum Heidelberg (Germany). Kinderchirurgie; Troeger, J. [Universitaetsklinikum Heidelberg, (Germany). Paediatrische Radiologie; Schenk, J.P. [Universitaetsklinikum Heidelberg, (Germany). Paediatrische Radiologie; Universitaetsklinikum, Paediatrische Radiologie, Heidelberg (Germany)

    2006-08-15

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties, 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit greatly from it. A newly developed, powerful raycasting-based 3D volume rendering software package (VG Studio Max 1.2), modified for the planning of pediatric abdominal surgery, is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to its easy handling and high-quality visualization with an enormous gain of information, the presented system is now an established part of routine surgical planning. (orig.)

  12. Freely-available, true-color volume rendering software and cryohistology data sets for virtual exploration of the temporal bone anatomy.

    Science.gov (United States)

    Kahrs, Lüder Alexander; Labadie, Robert Frederick

    2013-01-01

    Cadaveric dissection of temporal bone anatomy is not always possible or feasible in certain educational environments. Volume rendering using CT and/or MRI helps in understanding spatial relationships, but it suffers from nonrealistic depictions, especially regarding the color of anatomical structures. Freely available, nonstained histological data sets, together with software able to render such data sets in realistic color, could overcome this limitation and be a very effective teaching tool. With the recent availability of specialized public-domain software, volume rendering of true-color histological data sets is now possible. We present both feasibility and step-by-step instructions to allow processing of publicly available data sets (Visible Female Human and Visible Ear) into easily navigable 3-dimensional models using free software. Example renderings are shown to demonstrate the utility of these free methods in virtual exploration of the complex anatomy of the temporal bone. After exploring the data sets, the Visible Ear appears more natural than the Visible Human. We provide directions for an easy-to-use, open-source software in conjunction with freely available histological data sets. This work facilitates self-education in the spatial relationships of anatomical structures inside the human temporal bone and allows exploration of surgical approaches prior to cadaveric testing and/or clinical implementation. Copyright © 2013 S. Karger AG, Basel.

  13. Virtual Whipple: preoperative surgical planning with volume-rendered MDCT images to identify arterial variants relevant to the Whipple procedure.

    Science.gov (United States)

    Brennan, Darren D; Zamboni, Giulia; Sosna, Jacob; Callery, Mark P; Vollmer, Charles M V; Raptopoulos, Vassilios D; Kruskal, Jonathan B

    2007-05-01

    The purposes of this study were to combine a thorough understanding of the technical aspects of the Whipple procedure with advanced rendering techniques by introducing a virtual Whipple procedure and to evaluate the utility of this new rendering technique in prediction of the arterial variants that cross the anticipated surgical resection plane. The virtual Whipple is a novel technique that follows the complex surgical steps in a Whipple procedure. Three-dimensional reconstructed angiographic images are used to identify arterial variants for the surgeon as part of the preoperative radiologic assessment of pancreatic and ampullary tumors.

  14. Three-dimensional spiral CT during arterial portography: comparison of three rendering techniques.

    Science.gov (United States)

    Heath, D G; Soyer, P A; Kuszyk, B S; Bliss, D F; Calhoun, P S; Bluemke, D A; Choti, M A; Fishman, E K

    1995-07-01

    The three most common techniques for three-dimensional reconstruction are surface rendering, maximum-intensity projection (MIP), and volume rendering. Surface-rendering algorithms model objects as collections of geometric primitives that are displayed with surface shading. The MIP algorithm renders an image by selecting the voxel with the maximum intensity signal along a line extended from the viewer's eye through the data volume. Volume-rendering algorithms sum the weighted contributions of all voxels along the line. Each technique has advantages and shortcomings that must be considered during selection of one for a specific clinical problem and during interpretation of the resulting images. With surface rendering, sharp-edged, clear three-dimensional reconstruction can be completed on modest computer systems; however, overlapping structures cannot be visualized and artifacts are a problem. MIP is computationally a fast technique, but it does not allow depiction of overlapping structures, and its images are three-dimensionally ambiguous unless depth cues are provided. Both surface rendering and MIP use less than 10% of the image data. In contrast, volume rendering uses nearly all of the data, allows demonstration of overlapping structures, and engenders few artifacts, but it requires substantially more computer power than the other techniques.
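    The contrast drawn above between MIP and volume rendering comes down to how the voxels along each ray are combined: MIP keeps only the brightest sample, while volume rendering accumulates a weighted contribution from every sample. The Python sketch below illustrates the two ray operators for parallel rays along one volume axis; the constant per-sample opacity stands in for a real transfer function and is purely an assumption made for the example.

```python
import numpy as np

def mip_projection(volume, axis=0):
    """Maximum-intensity projection: keep the brightest voxel along each ray."""
    return volume.max(axis=axis)

def composite_projection(volume, axis=0, step_opacity=0.05):
    """Emission-absorption compositing: weighted sum of all voxels along each ray.

    step_opacity is a toy transfer function (constant per-sample opacity); real
    renderers map each voxel value to a colour and an opacity.
    """
    vol = np.moveaxis(volume.astype(float), axis, 0)
    accum = np.zeros(vol.shape[1:])
    transmittance = np.ones(vol.shape[1:])
    for sample in vol:                       # front-to-back along the ray
        accum += transmittance * step_opacity * sample
        transmittance *= (1.0 - step_opacity)
    return accum

volume = np.random.rand(64, 128, 128)
print(mip_projection(volume).shape, composite_projection(volume).shape)
```

    In a front-to-back loop like this one, early-ray termination (stopping a ray once its transmittance is negligible) is the usual optimization; MIP needs no such bookkeeping, which is one reason it is computationally fast.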

  15. New reconstruction algorithm in helical-volume CT

    International Nuclear Information System (INIS)

    Toki, Y.; Rifu, T.; Aradate, H.; Hirao, Y.; Ohyama, N.

    1990-01-01

    This paper reports on helical scanning, an application of continuous-scanning CT that acquires volume data in a short time for three-dimensional studies. In a helical scan, the patient couch keeps moving during continuous-rotation scanning, and the acquired data are then processed by interpolation to synthesize a projection data set for the desired section. However, the synthesized section is not thin enough, and the image may contain artifacts caused by couch movement. A new reconstruction algorithm that helps resolve these problems has been developed and compared with the ordinary algorithm. The authors constructed a helical scan system based on the TCT-900S, which can perform 1-second rotations continuously for 30 seconds. Section thickness was measured using both algorithms on an AAPM phantom, and the degree of artifacts was compared on clinical data.
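    The interpolation step mentioned above synthesizes, for every view angle, a projection at the desired couch position from measurements taken in adjacent gantry rotations. The sketch below shows the simplest linear interpolation between two such measurements; it is a generic illustration, not the new algorithm proposed in this record, and the variable names are invented for the example.

```python
def interpolate_view(z_target, z_prev, proj_prev, z_next, proj_next):
    """Linearly interpolate a projection at couch position z_target from the two
    acquisitions of the same view angle in adjacent rotations (z_prev <= z_target <= z_next)."""
    w = (z_target - z_prev) / (z_next - z_prev)
    return [(1.0 - w) * a + w * b for a, b in zip(proj_prev, proj_next)]

# Toy example: two detector readings of the same view angle, one rotation apart
print(interpolate_view(2.5, 2.0, [10.0, 12.0], 3.0, [14.0, 20.0]))  # [12.0, 16.0]
```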

  16. Evaluation of the relationship between extremity soft tissue sarcomas and adjacent major vessels using contrast-enhanced multidetector CT and three-dimensional volume-rendered CT angiography - A preliminary study

    International Nuclear Information System (INIS)

    Li, YangKang; Lin, JianBang; Cai, AiQun; Zhou, XiuGuo; Zheng, Yu; Wei, XiaoLong; Cheng, Ying; Liu, GuoRui

    2013-01-01

    Background: Accurate description of the relationship between extremity soft tissue sarcoma and the adjacent major vessels is crucial for successful surgery. In addition to magnetic resonance imaging (MRI) or in patients who cannot undergo MRI, two-dimensional (2D) postcontrast computed tomography (CT) images and three-dimensional (3D) volume-rendered CT angiography may be valuable alternative imaging techniques for preoperative evaluation of extremity sarcomas. Purpose: To preoperatively assess extremity sarcomas using multidetector CT (MDCT), with emphasis on postcontrast MDCT images and 3D volume-rendered MDCT angiography in evaluating the relationship between tumors and adjacent major vessels. Material and Methods: MDCT examinations were performed on 13 patients with non-metastatic extremity sarcomas. Conventional CT images and 3D volume-rendered CT angiography were evaluated, with focus on the relationship between tumors and adjacent major vessels. Kappa consistency statistics were performed with surgery serving as the reference standard. Results: The relationship between sarcomas and adjacent vessels was described as one of three patterns: proximity, adhesion, and encasement. Proximity was seen in five cases on postcontrast CT images or in eight cases on volume-rendered images. Adhesion was seen in three cases on both postcontrast CT images and volume-rendered images. Encasement was seen in five cases on postcontrast CT images or in two cases on volume-rendered images. Compared to surgical results, postcontrast CT images had 100% sensitivity, 83.3% specificity, 87.5% positive predictive value, 100% negative predictive value, and 92.3% accuracy in the detection of vascular invasion (κ = 0.843, P = 0.002). 3D volume-rendered CT angiography had 71.4% sensitivity, 100% specificity, 100% positive predictive value, 75% negative predictive value, and 84.6% accuracy in the detection of vascular invasion (κ = 0.698, P = 0.008). On volume-rendered images, all cases

  17. Iterative algorithm for the volume integral method for magnetostatics problems

    International Nuclear Information System (INIS)

    Pasciak, J.E.

    1980-11-01

    Volume integral methods for solving nonlinear magnetostatics problems are considered in this paper. The integral method is discretized by a Galerkin technique. Estimates are given which show that the linearized problems are well conditioned and hence easily solved using iterative techniques. Comparisons of iterative algorithms with the elimination method of GFUN3D show that the iterative method gives an order of magnitude improvement in computational time as well as memory requirements for large problems. Computational experiments for a test problem as well as a double-layer dipole magnet are given. Error estimates for the linearized problem are also derived.
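    The argument above is that each linearization of the nonlinear magnetostatics problem is well conditioned, so a Krylov iteration can replace direct elimination. The Python sketch below pairs a Picard-style relinearization loop with a conjugate-gradient solve; it is a generic illustration of that idea, not the solver used in the report, and the toy assembly function at the end exists only to make the example runnable.

```python
import numpy as np
from scipy.sparse.linalg import cg

def solve_nonlinear(assemble_matrix, rhs, x0, outer_iters=20, tol=1e-8):
    """Picard-style outer loop: relinearize, then solve each well-conditioned
    linear system iteratively (conjugate gradients) instead of by elimination."""
    x = x0.copy()
    for _ in range(outer_iters):
        A = assemble_matrix(x)                 # linearization about the current iterate
        x_new, _ = cg(A, rhs, x0=x)            # iterative inner solve
        if np.linalg.norm(x_new - x) <= tol * (np.linalg.norm(x_new) + 1e-30):
            return x_new
        x = x_new
    return x

# Toy usage: A(x) = I * (2 + ||x||^2) is symmetric positive definite for any x
n = 5
rhs = np.ones(n)
assemble = lambda x: np.eye(n) * (2.0 + float(x @ x))
print(solve_nonlinear(assemble, rhs, np.zeros(n)))
```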

  18. Fast DRR splat rendering using common consumer graphics hardware

    International Nuclear Information System (INIS)

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-01-01

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2×10^6 voxels is feasible at an update rate of 38 Hz compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine.
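    Whatever the rendering scheme, a DRR is essentially a set of attenuation line integrals through the CT volume mapped to detector intensities. The sketch below computes a heavily simplified parallel-beam DRR on the CPU; it does not implement wobbled splatting or any GPU path, the HU-to-attenuation mapping is a rough assumption, and the sample spacing is taken as one voxel.

```python
import numpy as np

def simple_parallel_drr(ct_volume_hu, axis=1, mu_water=0.19):
    """Very simplified DRR: convert HU to linear attenuation, integrate along the
    beam axis, and map the line integrals to intensities (I = exp(-sum(mu) * dx),
    with dx assumed to be one voxel)."""
    mu = mu_water * (1.0 + ct_volume_hu / 1000.0)   # rough HU -> attenuation [1/voxel]
    mu = np.clip(mu, 0.0, None)
    line_integrals = mu.sum(axis=axis)              # parallel rays along 'axis'
    return np.exp(-line_integrals)                  # detector intensity per ray

drr = simple_parallel_drr(np.zeros((32, 64, 64)))   # water-equivalent block (HU = 0)
print(drr.shape, float(drr.min()))
```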

  19. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
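    For readers unfamiliar with the underlying linear system mentioned above, the classical (non-hierarchical) radiosity formulation is summarized below; the symmetric reformulation and the hierarchical solver developed in the dissertation are not reproduced here.

```latex
% Classical radiosity system for n patches:
% B_i = radiosity, E_i = emission, \rho_i = reflectivity, F_{ij} = form factor.
\begin{equation*}
  B_i \;=\; E_i \;+\; \rho_i \sum_{j=1}^{n} F_{ij}\, B_j ,
  \qquad i = 1,\dots,n,
\end{equation*}
% i.e. the linear system (I - R F) B = E with R = \mathrm{diag}(\rho_i).
% The reciprocity relation A_i F_{ij} = A_j F_{ji} is what makes a symmetric
% reformulation of the coefficient matrix possible.
```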

  20. Single minimum incision endoscopic radical nephrectomy for renal tumors with preoperative virtual navigation using 3D-CT volume-rendering

    Directory of Open Access Journals (Sweden)

    Shioyama Yasukazu

    2010-04-01

    Background: Single minimum incision endoscopic surgery (MIES) involves the use of a flexible high-definition laparoscope to facilitate open surgery. We reviewed our method of radical nephrectomy for renal tumors, which is single MIES combined with preoperative virtual surgery employing three-dimensional CT images reconstructed by the volume rendering method (3D-CT images), in order to safely and appropriately approach the renal hilar vessels. We also assessed the usefulness of 3D-CT images. Methods: Radical nephrectomy was done by single MIES via the translumbar approach in 80 consecutive patients. We performed the initial 20 MIES nephrectomies without preoperative 3D-CT images and the subsequent 60 MIES nephrectomies with preoperative 3D-CT images for evaluation of the renal hilar vessels and the relation of each tumor to the surrounding structures. On the basis of the 3D information, preoperative virtual surgery was performed with a computer. Results: Single MIES nephrectomy was successful in all patients. In the 60 patients who underwent 3D-CT, the number of renal arteries and veins corresponded exactly with the preoperative 3D-CT data (100% sensitivity and 100% specificity). These 60 nephrectomies were completed with a shorter operating time and smaller blood loss than the initial 20 nephrectomies. Conclusions: Single MIES radical nephrectomy combined with 3D-CT and virtual surgery achieved a shorter operating time and less blood loss, possibly due to safer and easier handling of the renal hilar vessels.

  1. Value of three-dimensional volume rendering images in the assessment of the centrality index for preoperative planning in patients with renal masses.

    Science.gov (United States)

    Sofia, C; Magno, C; Silipigni, S; Cantisani, V; Mucciardi, G; Sottile, F; Inferrera, A; Mazziotti, S; Ascenti, G

    2017-01-01

    To evaluate the precision of the centrality index (CI) measurement on three-dimensional (3D) volume rendering technique (VRT) images in patients with renal masses, compared to its standard measurement on axial images. Sixty-five patients with renal lesions underwent contrast-enhanced multidetector (MD) computed tomography (CT) for preoperative imaging. Two readers calculated the CI on two-dimensional axial images and on VRT images, measuring it in the plane that the tumour and centre of the kidney were lying in. Correlation and agreement of interobserver measurements and inter-method results were calculated using intraclass correlation (ICC) coefficients and the Bland-Altman method. Time saving was also calculated. The correlation coefficients were r=0.99. The present study showed that VRT and axial images produce almost identical values of CI, with the advantages of greater ease of execution and a time saving of almost 50% for 3D VRT images. In addition, VRT provides an integrated perspective that can better assist surgeons in clinical decision making and in operative planning, suggesting this technique as a possible standard method for CI measurement. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  2. Value of 3D-Volume Rendering in the Assessment of Coronary Arteries with Retrospectively ECG-Gated Multislice Spiral CT

    International Nuclear Information System (INIS)

    Mahnken, A.H.; Wildberger, J.E.; Dedden, K.; Schmitz-Rode, T.; Guenther, R.W.; Sinha, A.M.; Hoffmann, R.; Stanzel, S.

    2003-01-01

    Purpose: To assess the diagnostic value and measurement precision of 3D volume rendering technique (3D-VRT) from retrospectively ECG-gated multislice spiral CT (MSCT) data sets for imaging of the coronary arteries. Material and Methods: In 35 patients, retrospectively ECG-gated MSCT of the heart using a four detector row MSCT scanner with a standardized examination protocol was performed as well as quantitative X-ray coronary angiography (QCA). The MSCT data was assessed on segmental basis using 3D-VRT exclusively. The coronary artery diameters were measured at the origin of each main coronary branch and 1 cm, 3 cm and 5 cm distally. The minimum, maximum and mean diameters were determined from MSCT angiography and compared to QCA. Results: A total of 353 of 525 (67.2%) coronary artery segments were assessable by MSCT angiography. The proximal segments were more often assessable when compared to the distal segments. Stenoses were detected with a sensitivity of 82.6% and a specificity of 92.8%. According to the Bland-Altman method the mean differences between QCA and MSCT ranged from 0.55 to 1.07 mm with limits of agreement from 2.2 mm to 2.7 mm. Conclusion: When compared to QCA, the ability of 3D-VRT to quantitatively assess coronary artery diameters and coronary artery stenoses is insufficient for clinical purposes

  3. Contrast-enhanced MDCT gastrography for detection of early gastric cancer: Initial assessment of “wall-carving image”, a novel volume rendering technique

    International Nuclear Information System (INIS)

    Komori, Masahiro; Kawanami, Satoshi; Tsurumaru, Daisuke; Matsuura, Shuji; Hiraka, Kiyohisa; Nishie, Akihiro; Honda, Hiroshi

    2012-01-01

    Objective: We developed a new volume rendering technique, the CT gastrography wall carving image (WC) technique, which provides a clear visualization of localized enhanced tumors in the gastric wall. We evaluated the diagnostic performance of the WC as an adjunct to conventional images in detecting early gastric cancer (EGC). Materials and methods: Thirty-nine patients with 43 EGCs underwent contrast-enhanced MDCT gastrography for preoperative examination. Two observers independently reviewed the images under three different conditions: term 1, Axial CT; term 2, Axial CT, MPR and VE; and term 3, Axial CT, MPR, VE and WC for the detection of EGC. The accuracy of each condition as reviewed by each of the two observers was evaluated by receiver operating characteristic analysis. Interobserver agreement was calculated using weighted-κ statistics. Results: The best diagnostic performance and interobserver agreement were obtained in term 3. The AUCs of the two observers for terms 1, 2, and 3 were 0.63, 0.73, and 0.84, and 0.57, 0.73, and 0.76, respectively. The interobserver agreement improved from fair at term 1 to substantial at term 3. Conclusions: The addition of WC to the conventional MDCT display improved the diagnostic accuracy and interobserver reproducibility for the detection of EGC. WC represents a suitable alternative for the visualization of localized enhanced tumors in the gastric wall.

  4. Contrast-enhanced computed tomography angiography and volume-rendered imaging for evaluation of cellophane banding in a dog with extrahepatic portosystemic shunt

    Directory of Open Access Journals (Sweden)

    H. Yoon

    2011-04-01

    A 4-year-old, 1.8 kg, male, castrated Maltese was presented for evaluation of urolithiasis. Urinary calculi were composed of ammonium biurate. Preprandial and postprandial bile acids were 44.2 and 187.3 μmol/L, respectively (reference ranges 0–10 and 0–20 μmol/L, respectively). Single-phase contrast-enhanced computed tomography angiography (CTA) with volume-rendered imaging (VRI) was obtained. VRI revealed a portocaval shunt originating just cranial to a tributary of the gastroduodenal vein and draining into the caudal vena cava at the level of the epiploic foramen. CTA revealed a 3.66 mm-diameter shunt measured at the level of the termination of the shunt and a 3.79 mm-diameter portal vein measured at the level between the origin of the shunt and the porta of the liver. Surgery was performed using cellophane banding without attenuation. Follow-up single-phase CTA with VRI was obtained 10 weeks after surgery. VRI revealed no evidence of portosystemic communication at the level of the cellophane band or caudal to it. CTA demonstrated an increased portal vein diameter (3.79–5.27 mm) measured at the level between the origin of the shunt and the porta of the liver. Preprandial and postprandial bile acids were 25 and 12.5 μmol/L, respectively (aforementioned respective reference ranges), 3 months post-surgery. No problems were evident at 6 months.

  5. Volume-rendered hemorrhage-responsible arteriogram created by 64 multidetector-row CT during aortography: utility for catheterization in transcatheter arterial embolization for acute arterial bleeding.

    Science.gov (United States)

    Minamiguchi, Hiroki; Kawai, Nobuyuki; Sato, Morio; Ikoma, Akira; Sanda, Hiroki; Nakata, Kouhei; Tanaka, Fumihiro; Nakai, Motoki; Sonomura, Tetsuo; Murotani, Kazuhiro; Hosokawa, Seiki; Nishioku, Tadayoshi

    2014-01-01

    Aortography for detecting hemorrhage is limited when determining the catheter treatment strategy because the artery responsible for hemorrhage commonly overlaps organs and non-responsible arteries. Selective catheterization of untargeted arteries would result in repeated arteriography, large volumes of contrast medium, and extended time. A volume-rendered hemorrhage-responsible arteriogram created with 64 multidetector-row CT (64MDCT) during aortography (MDCTAo) can be used both for hemorrhage mapping and catheter navigation. The MDCTAo depicted hemorrhage in 61 of 71 cases of suspected acute arterial bleeding treated at our institute in the last 3 years. Complete hemostasis by embolization was achieved in all cases. The hemorrhage-responsible arteriogram was used for navigation during catheterization, thus assisting successful embolization. Hemorrhage was not visualized in the remaining 10 patients, of whom 6 had a pseudoaneurysm in a visceral artery; 1 with urinary bladder bleeding and 1 with chest wall hemorrhage had gauze tamponade; and 1 with urinary bladder hemorrhage and 1 with uterine hemorrhage had spastic arteries. Six patients with pseudoaneurysm underwent preventive embolization and the other 4 patients were managed by watchful observation. MDCTAo has the advantage of depicting the arteries responsible for hemoptysis, whether from the bronchial arteries or other systemic arteries, in a single scan. MDCTAo is particularly useful for identifying the source of acute arterial bleeding in the pancreatic arcade area, which is supplied by both the celiac and superior mesenteric arteries. In a case of pelvic hemorrhage, MDCTAo identified the responsible artery from among numerous overlapping visceral arteries that branched from the internal iliac arteries. In conclusion, a hemorrhage-responsible arteriogram created by 64MDCT immediately before catheterization is useful for deciding the catheter treatment strategy for acute arterial bleeding.

  6. A Sort-Last Rendering System over an Optical Backplane

    Directory of Open Access Journals (Sweden)

    Yasuhiro Kirihata

    2005-06-01

    Sort-Last is a computer graphics technique for rendering extremely large data sets on clusters of computers. Sort-Last works by dividing the data set into even-sized chunks for parallel rendering and then composing the images to form the final result. Since sort-last rendering requires the movement of large amounts of image data among cluster nodes, the network interconnecting the nodes becomes a major bottleneck. In this paper, we describe a sort-last rendering system implemented on a cluster of computers whose nodes are connected by an all-optical switch. The rendering system introduces the notion of the Photonic Computing Engine, a computing system built dynamically by using the optical switch to create dedicated network connections among cluster nodes. The sort-last volume rendering algorithm was implemented on the Photonic Computing Engine, and its performance is evaluated. Preliminary experiments show that performance is affected by the image composition time and average payload size. In an attempt to stabilize the performance of the system, we have designed a flow control mechanism that uses feedback messages to dynamically adjust the data flow rate within the computing engine.
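    After each node has rendered its chunk, the partial images must be composited into one frame. The Python sketch below shows the simplest form of that step, back-to-front "over" compositing of RGBA partial images using one depth value per node; real sort-last systems composite per pixel and often use distributed schemes such as binary swap, so this is only an illustration with invented inputs.

```python
import numpy as np

def composite_sort_last(partial_images, depths):
    """Back-to-front 'over' compositing of RGBA partial images (values in [0, 1]).

    partial_images: list of HxWx4 arrays, one per render node.
    depths: per-node average depth used to sort back-to-front (a simplifying
    assumption; distributed systems composite per pixel or fragment).
    """
    order = np.argsort(depths)[::-1]                 # farthest node first
    h, w, _ = partial_images[0].shape
    out = np.zeros((h, w, 4))
    for idx in order:
        img = partial_images[idx]
        alpha = img[..., 3:4]
        out[..., :3] = img[..., :3] * alpha + out[..., :3] * (1.0 - alpha)
        out[..., 3:4] = alpha + out[..., 3:4] * (1.0 - alpha)
    return out

imgs = [np.random.rand(4, 4, 4) for _ in range(3)]
print(composite_sort_last(imgs, depths=[3.0, 1.0, 2.0]).shape)
```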

  7. Diagnostic accuracy of a volume-rendered computed tomography movie and other computed tomography-based imaging methods in assessment of renal vascular anatomy for laparoscopic donor nephrectomy.

    Science.gov (United States)

    Yamamoto, Shingo; Tanooka, Masao; Ando, Kumiko; Yamano, Toshiko; Ishikura, Reiichi; Nojima, Michio; Hirota, Shozo; Shima, Hiroki

    2009-12-01

    To evaluate the diagnostic accuracy of computed tomography (CT)-based imaging methods for assessing renal vascular anatomy, imaging studies, including standard axial CT, three-dimensional volume-rendered CT (3DVR-CT), and a 3DVR-CT movie, were performed on 30 patients who underwent laparoscopic donor nephrectomy (10 right side, 20 left side) for predicting the location of the renal arteries and renal, adrenal, gonadal, and lumbar veins. These findings were compared with videos obtained during the operation. Two of 37 renal arteries observed intraoperatively were missed by standard axial CT and 3DVR-CT, whereas all arteries were identified by the 3DVR-CT movie. Two of 36 renal veins were missed by standard axial CT and 3DVR-CT, whereas 1 was missed by the 3DVR-CT movie. In 20 left renal hilar anatomical structures, 20 adrenal, 20 gonadal, and 22 lumbar veins were observed during the operation. Preoperatively, the standard axial CT, 3DVR-CT, and 3DVR-CT movie detected 11, 19, and 20 adrenal veins; 13, 14, and 19 gonadal veins; and 6, 11, and 15 lumbar veins, respectively. Overall, of 135 renal vascular structures, the standard axial CT, 3DVR-CT, and 3DVR-CT movie accurately detected 99 (73.3%), 113 (83.7%), and 126 (93.3%) vessels, respectively, indicating that the 3DVR-CT movie had a significantly higher detection rate than the other CT-based imaging methods and supporting its use for assessing renal vascular anatomy before laparoscopic donor nephrectomy.

  8. Differentiating aneurysm from infundibular dilatation by volume rendering MRA. Techniques for improving depiction of the posterior communicating and anterior choroidal arteries

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Takaaki; Ito, Takeo; Hasunuma, Masahiro; Sakamoto, Yasuo; Kohama, Ikuhide; Yonemori, Terutake; Izumo, Masaki [Hakodate Shintoshi Hospital, Hokkaido (Japan)

    2002-12-01

    With the spread of brain dock procedures, non-invasive magnetic resonance angiography (MRA) is being utilized to broadly screen for brain blood vessel diseases. However, diagnosis of cerebral aneurysm can be difficult by routine MRA. In particular, differentiating aneurysms and infundibular dilatations (IDS) of the posterior communicating artery (PCoA) and anterior choroidal artery (AChA) at their bifurcations with the internal carotid artery (ICA) is extremely difficult and additional studies are frequently necessary. In this situation, three-dimensional computed tomography angiography (3D-CTA) and cerebral angiography have been utilized, but both techniques are invasive. Furthermore, images from cerebral angiography are only two-dimensional, and 3D-CTA requires differentiation between aneurysm and ID by observing configurational changes at the apex of the protrusion and by following gradual changes to the threshold. We therefore undertook the following steps to improve both depiction of the PCoA and AChA and differential diagnosis between aneurysm and ID: reduced slice thickness and increased number of excitations; utilized volume rendering methods to construct images; lowered thresholds for the beginning of the PCoA and AChA arteries, which represent the regions of interest. In all 11 cases that we operated on, cerebral aneurysms were diagnosed correctly and the minimum neck diameter of the cerebral aneurysm was 1.2 mm. In addition, the number of AChAs and PCoAs present in target MRA and in operational views were evaluated. In one case with an AChA aneurysm, a PCoA was not detected by target MRA, because the ICA deviated posterolaterally and pushed the PCoA to the posterior clinoid process, and blood flow was poor in operational views. In another 2 cases with AChA aneurysms, only one AChA was described in target MRA, whereas two aneurysms were present. However, one of these had a diameter less than 1 mm. In conclusion, this method offers an extremely useful aid

  9. Differentiating aneurysm from infundibular dilatation by volume rendering MRA. Techniques for improving depiction of the posterior communicating and anterior choroidal arteries

    International Nuclear Information System (INIS)

    Kato, Takaaki; Ito, Takeo; Hasunuma, Masahiro; Sakamoto, Yasuo; Kohama, Ikuhide; Yonemori, Terutake; Izumo, Masaki

    2002-01-01

    With the spread of brain dock procedures, non-invasive magnetic resonance angiography (MRA) is being utilized to broadly screen for brain blood vessel diseases. However, diagnosis of cerebral aneurysm can be difficult by routine MRA. In particular, differentiating aneurysms and infundibular dilatations (IDS) of the posterior communicating artery (PCoA) and anterior choroidal artery (AChA) at their bifurcations with the internal carotid artery (ICA) is extremely difficult and additional studies are frequently necessary. In this situation, three-dimensional computed tomography angiography (3D-CTA) and cerebral angiography have been utilized, but both techniques are invasive. Furthermore, images from cerebral angiography are only two-dimensional, and 3D-CTA requires differentiation between aneurysm and ID by observing configurational changes at the apex of the protrusion and by following gradual changes to the threshold. We therefore undertook the following steps to improve both depiction of the PCoA and AChA and differential diagnosis between aneurysm and ID: reduced slice thickness and increased number of excitations; utilized volume rendering methods to construct images; lowered thresholds for the beginning of the PCoA and AChA arteries, which represent the regions of interest. In all 11 cases that we operated on, cerebral aneurysms were diagnosed correctly and the minimum neck diameter of the cerebral aneurysm was 1.2 mm. In addition, the number of AChAs and PCoAs present in target MRA and in operational views were evaluated. In one case with an AChA aneurysm, a PCoA was not detected by target MRA, because the ICA deviated posterolaterally and pushed the PCoA to the posterior clinoid process, and blood flow was poor in operational views. In another 2 cases with AChA aneurysms, only one AChA was described in target MRA, whereas two aneurysms were present. However, one of these had a diameter less than 1 mm. In conclusion, this method offers an extremely useful aid

  10. Diagnostic Value of Multidetector CT and Its Multiplanar Reformation, Volume Rendering and Virtual Bronchoscopy Postprocessing Techniques for Primary Trachea and Main Bronchus Tumors.

    Directory of Open Access Journals (Sweden)

    Mingyue Luo

    To evaluate the diagnostic value of multidetector CT (MDCT) and its multiplanar reformation (MPR), volume rendering (VR) and virtual bronchoscopy (VB) postprocessing techniques for primary trachea and main bronchus tumors. Detection results of 31 primary trachea and main bronchus tumors with MDCT and its MPR, VR and VB postprocessing techniques were analyzed retrospectively with regard to tumor locations, tumor morphologies, extramural invasions of tumors, longitudinal involvements of tumors, morphologies and extents of luminal stenoses, distances between main bronchus tumors and trachea carinae, and internal features of tumors. The detection results were compared with those of surgery and pathology. Detection results with MDCT and its MPR, VR and VB were consistent with those of surgery and pathology, and included tumor locations (tracheae, n = 19; right main bronchi, n = 6; left main bronchi, n = 6), tumor morphologies (endoluminal nodes with narrow bases, n = 2; endoluminal nodes with wide bases, n = 13; both intraluminal and extraluminal masses, n = 16), extramural invasions of tumors (broke through only the serous membrane, n = 1; 4.0 mm-56.0 mm, n = 14; no clear border with right atelectasis, n = 1), longitudinal involvements of tumors (3.0 mm, n = 1; 5.0 mm-68.0 mm, n = 29; whole right main bronchus wall and trachea carina, n = 1), morphologies of luminal stenoses (irregular, n = 26; circular, n = 3; eccentric, n = 1; conical, n = 1) and extents (mild, n = 5; moderate, n = 7; severe, n = 19), distances between main bronchus tumors and trachea carinae (16.0 mm, n = 1; invaded trachea carina, n = 1; >20.0 mm, n = 10), and internal features of tumors (fairly homogeneous densities with rather obvious enhancements, n = 26; homogeneous density with obvious enhancement, n = 1; homogeneous density without obvious enhancement, n = 1; not enough homogeneous density with obvious enhancement, n = 1; punctate calcification with obvious enhancement, n = 1; low density

  11. Diagnostic Value of Multidetector CT and Its Multiplanar Reformation, Volume Rendering and Virtual Bronchoscopy Postprocessing Techniques for Primary Trachea and Main Bronchus Tumors.

    Science.gov (United States)

    Luo, Mingyue; Duan, Chaijie; Qiu, Jianping; Li, Wenru; Zhu, Dongyun; Cai, Wenli

    2015-01-01

    To evaluate the diagnostic value of multidetector CT (MDCT) and its multiplanar reformation (MPR), volume rendering (VR) and virtual bronchoscopy (VB) postprocessing techniques for primary trachea and main bronchus tumors. Detection results of 31 primary trachea and main bronchus tumors with MDCT and its MPR, VR and VB postprocessing techniques were analyzed retrospectively with regard to tumor locations, tumor morphologies, extramural invasions of tumors, longitudinal involvements of tumors, morphologies and extents of luminal stenoses, distances between main bronchus tumors and trachea carinae, and internal features of tumors. The detection results were compared with those of surgery and pathology. Detection results with MDCT and its MPR, VR and VB were consistent with those of surgery and pathology, and included tumor locations (tracheae, n = 19; right main bronchi, n = 6; left main bronchi, n = 6), tumor morphologies (endoluminal nodes with narrow bases, n = 2; endoluminal nodes with wide bases, n = 13; both intraluminal and extraluminal masses, n = 16), extramural invasions of tumors (broke through only the serous membrane, n = 1; 4.0 mm-56.0 mm, n = 14; no clear border with right atelectasis, n = 1), longitudinal involvements of tumors (3.0 mm, n = 1; 5.0 mm-68.0 mm, n = 29; whole right main bronchus wall and trachea carina, n = 1), morphologies of luminal stenoses (irregular, n = 26; circular, n = 3; eccentric, n = 1; conical, n = 1) and extents (mild, n = 5; moderate, n = 7; severe, n = 19), distances between main bronchus tumors and trachea carinae (16.0 mm, n = 1; invaded trachea carina, n = 1; >20.0 mm, n = 10), and internal features of tumors (fairly homogeneous densities with rather obvious enhancements, n = 26; homogeneous density with obvious enhancement, n = 1; homogeneous density without obvious enhancement, n = 1; not enough homogeneous density with obvious enhancement, n = 1; punctate calcification with obvious enhancement, n = 1; low density without

  12. Second-order accurate volume-of-fluid algorithms for tracking material interfaces

    International Nuclear Information System (INIS)

    Pilliod, James Edward; Puckett, Elbridge Gerry

    2004-01-01

    We introduce two new volume-of-fluid interface reconstruction algorithms and compare the accuracy of these algorithms to four other widely used volume-of-fluid interface reconstruction algorithms. We find that when the interface is smooth (e.g., continuous with two continuous derivatives) the new methods are second-order accurate and the other algorithms are first-order accurate. We propose a design criteria for a volume-of-fluid interface reconstruction algorithm to be second-order accurate. Namely, that it reproduce lines in two space dimensions or planes in three space dimensions exactly. We also introduce a second-order, unsplit, volume-of-fluid advection algorithm that is based on a second-order, finite difference method for scalar conservation laws due to Bell, Dawson and Shubin. We test this advection algorithm by modeling several different interface shapes propagating in two simple incompressible flows and compare the results with the standard second-order, operator-split advection algorithm. Although both methods are second-order accurate when the interface is smooth, we find that the unsplit algorithm exhibits noticeably better resolution in regions where the interface has discontinuous derivatives, such as at corners
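    The second-order criterion cited above (reproducing linear interfaces exactly) rests on being able to relate a cell's volume fraction to the line that cuts it. The Python sketch below computes the fluid area fraction of a unit 2D cell cut by a line n·x = alpha and inverts that relation numerically; it is a simplified stand-in for the analytic "flood" step of a real VOF reconstruction, and the names and brute-force sampling are choices made only for the example.

```python
import numpy as np
from scipy.optimize import brentq

def area_fraction(normal, alpha, samples=400):
    """Fluid area fraction of the unit cell cut by the line n . x = alpha
    (fluid lies on the side n . x <= alpha). Brute-force sampling keeps the sketch short."""
    xs = (np.arange(samples) + 0.5) / samples
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    return np.mean(normal[0] * X + normal[1] * Y <= alpha)

def reconstruct_alpha(normal, target_fraction):
    """Invert area_fraction: find the line constant matching a given volume fraction."""
    lo, hi = 0.0, float(normal[0] + normal[1])
    return brentq(lambda a: area_fraction(normal, a) - target_fraction, lo, hi)

n = np.array([0.6, 0.8])                    # unit interface normal
alpha_true = 0.7
f = area_fraction(n, alpha_true)
print(f, reconstruct_alpha(n, f))           # recovered alpha is close to 0.7
```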

  13. Single-dose volume regulation algorithm for a gas-compensated intrathecal infusion pump.

    Science.gov (United States)

    Nam, Kyoung Won; Kim, Kwang Gi; Sung, Mun Hyun; Choi, Seong Wook; Kim, Dae Hyun; Jo, Yung Ho

    2011-01-01

    The internal pressures of the medication reservoirs of gas-compensated intrathecal medication infusion pumps decrease when medication is discharged, and these discharge-induced pressure drops can decrease the volume of medication discharged. To prevent these reductions, the volumes discharged must be adjusted to maintain the required dosage levels. In this study, the authors developed an automatic control algorithm, for an intrathecal infusion pump developed by the Korean National Cancer Center, that regulates single-dose volumes. The proposed algorithm estimates the amount of medication remaining and adjusts control parameters automatically to maintain single-dose volumes at predetermined levels. Experimental results demonstrated that the proposed algorithm can regulate mean single-dose volumes at the predetermined levels. © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
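    One simple way to picture the compensation described above is a Boyle's-law model: as medication leaves the reservoir, the propellant gas expands, pressure falls, and a fixed valve-open time delivers less volume, so the open time is stretched by the inverse pressure ratio. The Python sketch below is only that toy model; the actual controller, parameters, and pump internals of the cited device are not described here, and every name and number is an assumption for illustration.

```python
def adjusted_open_time(base_open_time_s, initial_gas_volume_ml, dispensed_ml):
    """Scale the valve-open time as reservoir pressure falls (toy Boyle's-law model).

    Hypothetical illustration: gas pressure ~ 1 / gas volume, and delivered volume
    per dose ~ pressure * open time, so the open time is scaled by the inverse
    pressure ratio to keep the single-dose volume roughly constant.
    """
    gas_volume = initial_gas_volume_ml + dispensed_ml          # gas expands as drug leaves
    pressure_ratio = initial_gas_volume_ml / gas_volume        # relative to the full reservoir
    return base_open_time_s / pressure_ratio

# After 10 ml of a 20-ml reservoir is used, open the valve ~1.5x longer
print(adjusted_open_time(2.0, initial_gas_volume_ml=20.0, dispensed_ml=10.0))  # 3.0
```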

  14. SPAM-assisted partial volume correction algorithm for PET

    International Nuclear Information System (INIS)

    Cho, Sung Il; Kang, Keon Wook; Lee, Jae Sung; Lee, Dong Soo; Chung, June Key; Soh, Kwang Sup; Lee, Myung Chul

    2000-01-01

    A probabilistic atlas of the human brain (Statistical Probability Anatomical Maps: SPAM) was developed by the International Consortium for Brain Mapping (ICBM). It provides a good framework for calculating volumes of interest (VOIs) that accounts for the statistical variability of the human brain in many fields of brain imaging. We show that more exact quantification of the counts in a VOI can be obtained by using SPAM in the correction of the partial volume effect for a simulated PET image. The MRI of a patient with dementia was segmented into gray matter and white matter, and these were then smoothed to PET resolution. A simulated PET image was made by adding one third of the smoothed white matter to the smoothed gray matter. The spillover effect and partial volume effect were corrected for this simulated PET image with the aid of the segmented and smoothed MR images. The images were spatially normalized to the average brain MRI atlas of the ICBM, and were multiplied by the probabilities of 98 VOIs of the SPAM images of the Montreal Neurological Institute. After the correction of the partial volume effect, the counts of the frontal, parietal, temporal, and occipital lobes were increased by 38±6%, while those of the hippocampus and amygdala increased by 4±3%. By calculating the counts in a VOI using the product of the probability of the SPAM images and the counts in the simulated PET image, the counts increase and become closer to the true values. SPAM-assisted partial volume correction is useful for quantification of VOIs in PET images.

  15. SPAM-assisted partial volume correction algorithm for PET

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Sung Il; Kang, Keon Wook; Lee, Jae Sung; Lee, Dong Soo; Chung, June Key; Soh, Kwang Sup; Lee, Myung Chul [College of Medicine, Seoul National Univ., Seoul (Korea, Republic of)

    2000-07-01

    A probabilistic atlas of the human brain (Statistical Probability Anatomical Maps: SPAM) was developed by the International Consortium for Brain Mapping (ICBM). It provides a good framework for calculating volumes of interest (VOIs) that accounts for the statistical variability of the human brain in many fields of brain imaging. We show that more exact quantification of the counts in a VOI can be obtained by using SPAM in the correction of the partial volume effect for a simulated PET image. The MRI of a patient with dementia was segmented into gray matter and white matter, and these were then smoothed to PET resolution. A simulated PET image was made by adding one third of the smoothed white matter to the smoothed gray matter. The spillover effect and partial volume effect were corrected for this simulated PET image with the aid of the segmented and smoothed MR images. The images were spatially normalized to the average brain MRI atlas of the ICBM, and were multiplied by the probabilities of 98 VOIs of the SPAM images of the Montreal Neurological Institute. After the correction of the partial volume effect, the counts of the frontal, parietal, temporal, and occipital lobes were increased by 38±6%, while those of the hippocampus and amygdala increased by 4±3%. By calculating the counts in a VOI using the product of the probability of the SPAM images and the counts in the simulated PET image, the counts increase and become closer to the true values. SPAM-assisted partial volume correction is useful for quantification of VOIs in PET images.
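    The probability-weighted quantification step described in this record amounts to multiplying the spatially normalized image by each VOI's probability map and summing. A minimal numpy sketch of that step is shown below; the normalization by the summed probabilities and all names are illustrative assumptions, and the partial-volume/spill-over correction itself is assumed to have been applied beforehand.

```python
import numpy as np

def spam_weighted_counts(pet_image, voi_probability_maps):
    """Probability-weighted VOI quantification: counts_v = sum(prob_v * PET) / sum(prob_v).

    pet_image and each probability map are assumed to be co-registered in the
    same (e.g. ICBM) space after spatial normalization.
    """
    return {name: float((p * pet_image).sum() / p.sum())
            for name, p in voi_probability_maps.items()}

# Toy inputs standing in for a normalized PET image and two SPAM probability maps
pet = np.random.rand(8, 8, 8)
maps = {"frontal": np.random.rand(8, 8, 8), "hippocampus": np.random.rand(8, 8, 8)}
print(spam_weighted_counts(pet, maps))
```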

  16. Volumetric ambient occlusion for real-time rendering and games.

    Science.gov (United States)

    Szirmay-Kalos, L; Umenhoffer, T; Toth, B; Szecsi, L; Sbert, M

    2010-01-01

    This new algorithm, based on GPUs, can compute ambient occlusion to inexpensively approximate global-illumination effects in real-time systems and games. The first step in deriving this algorithm is to examine how ambient occlusion relates to the physically founded rendering equation. The correspondence stems from a fuzzy membership function that defines what constitutes nearby occlusions. The next step is to develop a method to calculate ambient occlusion in real time without precomputation. The algorithm is based on a novel interpretation of ambient occlusion that measures the relative volume of the visible part of the surface's tangent sphere. The new formula's integrand has low variation and thus can be estimated accurately with a few samples.

  17. GPU Pro advanced rendering techniques

    CERN Document Server

    Engel, Wolfgang

    2010-01-01

    This book covers essential tools and techniques for programming the graphics processing unit. Brought to you by Wolfgang Engel and the same team of editors who made the ShaderX series a success, this volume covers advanced rendering techniques, engine design, GPGPU techniques, related mathematical techniques, and game postmortems. A special emphasis is placed on handheld programming to account for the increased importance of graphics on mobile devices, especially the iPhone and iPod touch. Example programs and source code can be downloaded from the book's CRC Press web page.

  18. An algorithm to estimate the volume of the thyroid lesions using SPECT

    International Nuclear Information System (INIS)

    Pina, Jorge Luiz Soares de; Mello, Rossana Corbo de; Rebelo, Ana Maria

    2000-01-01

    An algorithm was developed to estimate the volume of the thyroid and its functioning lesions, that is, those which capture iodine. This estimate is achieved by the use of SPECT, Single Photon Emission Computed Tomography. The algorithm was written in an extended PASCAL language subset and was designed to run on the Siemens ICON system, a special Macintosh environment that controls tomographic image acquisition and processing. Although it was developed for the Siemens DIACAN gamma camera, the algorithm can be easily adapted for the ECAN camera. These two camera models are among the most common ones used in nuclear medicine in Brazil nowadays. A phantom study used to validate the algorithm showed that, with a threshold of 42% of the maximum pixel intensity of the images, it is possible to estimate the volume of the phantoms with an error of 10% in the range of 30 to 70 ml. (author)
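
    The 42% threshold reported above lends itself to a small worked example. The sketch below is a hedged illustration, not the original PASCAL/ICON code: voxels at or above the threshold are counted and multiplied by the voxel volume, using a synthetic SPECT array and an assumed voxel size.

```python
# Hedged sketch of the reported thresholding rule: voxels at or above 42% of the
# maximum intensity are counted and multiplied by the voxel volume. The SPECT
# array and the voxel size are made up for illustration.
import numpy as np

def thresholded_volume_ml(spect, voxel_volume_ml, threshold_fraction=0.42):
    mask = spect >= threshold_fraction * spect.max()
    return mask.sum() * voxel_volume_ml

# Toy reconstruction: a bright ellipsoid ("functioning tissue") on a low background.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
lesion = ((x - 32) / 10.0) ** 2 + ((y - 32) / 8.0) ** 2 + ((z - 32) / 12.0) ** 2 <= 1.0
spect = 10.0 + 90.0 * lesion
voxel_ml = 0.2 ** 3            # assumed 2 mm isotropic voxels, expressed in ml
print("estimated volume: %.1f ml" % thresholded_volume_ml(spect, voxel_ml))
```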

  19. Unsupervised Learning Through Randomized Algorithms for High-Volume High-Velocity Data (ULTRA-HV).

    Energy Technology Data Exchange (ETDEWEB)

    Pinar, Ali [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kolda, Tamara G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carlberg, Kevin Thomas [Wake Forest Univ., Winston-Salem, MA (United States); Ballard, Grey [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mahoney, Michael [Univ. of California, Berkeley, CA (United States)

    2018-01-01

    Through long-term investments in computing, algorithms, facilities, and instrumentation, DOE is an established leader in massive-scale, high-fidelity simulations, as well as science-leading experimentation. In both cases, DOE is generating more data than it can analyze and the problem is intensifying quickly. The need for advanced algorithms that can automatically convert the abundance of data into a wealth of useful information by discovering hidden structures is well recognized. Such efforts, however, are hindered by the massive volume of the data and its high velocity. Here, the challenge is developing unsupervised learning methods to discover hidden structure in high-volume, high-velocity data.

  20. Effectiveness of the random sequential absorption algorithm in the analysis of volume elements with nanoplatelets

    DEFF Research Database (Denmark)

    Pontefisso, Alessandro; Zappalorto, Michele; Quaresimin, Marino

    2016-01-01

    In this work, a study of the Random Sequential Absorption (RSA) algorithm in the generation of nanoplatelet Volume Elements (VEs) is carried out. The effect of the algorithm input parameters on the reinforcement distribution is studied through the implementation of statistical tools, showing...... that the platelet distribution is systematically affected by these parameters. The consequence is that a parametric analysis of the VE input parameters may be biased by hidden differences in the filler distribution. The same statistical tools used in the analysis are implemented in a modified RSA algorithm...

  1. 3D rendering and interactive visualization technology in large industry CT

    International Nuclear Information System (INIS)

    Xiao Yongshun; Zhang Li; Chen Zhiqiang; Kang Kejun

    2001-01-01

    This paper introduces the applications of interactive 3D rendering technology in the large ICT. It summarizes and comments on the iso-surface rendering and the direct volume rendering methods used in ICT. The paper emphasizes the technical analysis of the 3D rendering process of ICT volume data sets, and summarizes the difficulties of the inspection subsystem design in large ICT

  2. 3D rendering and interactive visualization technology in large industry CT

    International Nuclear Information System (INIS)

    Xiao Yongshun; Zhang Li; Chen Zhiqiang; Kang Kejun

    2002-01-01

    The author introduces the applications of interactive 3D rendering technology in the large ICT. It summarizes and comments on the iso-surface rendering and the direct volume rendering methods used in ICT. The author emphasizes the technical analysis of the 3D rendering process of ICT volume data sets, and summarizes the difficulties of the inspection subsystem design in large ICT

  3. Free-viewpoint depth image based rendering

    NARCIS (Netherlands)

    Zinger, S.; Do, Q.L.; With, de P.H.N.

    2010-01-01

    In 3D TV research, one approach is to employ multiple cameras for creating a 3D multi-view signal with the aim to make interactive free-viewpoint selection possible in 3D TV media. This paper explores a new rendering algorithm that enables the computation of a free viewpoint between two reference views from

  4. Rendering the Topological Spines

    Energy Technology Data Exchange (ETDEWEB)

    Nieves-Rivera, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-05

    Many tools to analyze and represent high dimensional data already exist, yet most of them are not flexible, informative, and intuitive enough to help scientists make the corresponding analysis and predictions, understand the structure and complexity of scientific data, get a complete picture of it, and explore a greater number of hypotheses. With this in mind, N-Dimensional Data Analysis and Visualization (ND²AV) is being developed to serve as an interactive visual analysis platform with the purpose of coupling together a number of these existing tools that range from statistics, machine learning, and data mining, with new techniques, in particular with new visualization approaches. My task is to create the rendering and implementation of a new concept called topological spines in order to extend ND²AV's scope. Other existing visualization tools create a representation preserving either the topological properties or the structural (geometric) ones because it is challenging to preserve them both simultaneously. Overcoming this challenge by creating a balance between them, topological spines are introduced as a new approach that aims to preserve both. They are rendered using OpenGL and C++ and are currently being tested for later integration into ND²AV. In this paper I present what topological spines are and how they are rendered.

  5. Distributed rendering for multiview parallax displays

    Science.gov (United States)

    Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.

    2006-02-01

    3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.

  6. High Fidelity Haptic Rendering

    CERN Document Server

    Otaduy, Miguel A

    2006-01-01

    The human haptic system, among all senses, provides unique and bidirectional communication between humans and their physical environment. Yet, to date, most human-computer interactive systems have focused primarily on the graphical rendering of visual information and, to a lesser extent, on the display of auditory information. Extending the frontier of visual computing, haptic interfaces, or force feedback devices, have the potential to increase the quality of human-computer interaction by accommodating the sense of touch. They provide an attractive augmentation to visual display and enhance t

  7. Hybrid rendering of the chest and virtual bronchoscopy [corrected].

    Science.gov (United States)

    Seemann, M D; Seemann, O; Luboldt, W; Gebicke, K; Prime, G; Claussen, C D

    2000-10-30

    Thin-section spiral computed tomography was used to acquire the volume data sets of the thorax. The tracheobronchial system and pathological changes of the chest were visualized using a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures, thus producing a hybrid rendering. The hybrid rendering technique exploits the advantages of both rendering methods and enables virtual bronchoscopic examinations using different representation models. Virtual bronchoscopic examination with a transparent color-coded shaded-surface model enables the simultaneous visualization of both the airways and the adjacent structures behind the tracheobronchial wall and therefore offers a practical alternative to fiberoptic bronchoscopy. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images.

  8. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    Science.gov (United States)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes with quality equivalent to that of the standard MART, with the benefit of reduced computational time.

  9. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    International Nuclear Information System (INIS)

    Martins, Fabio J W A; Foucaut, Jean-Marc; Stanislas, Michel; Thomas, Lionel; Azevedo, Luis F A

    2015-01-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes with quality equivalent to that of the standard MART, with the benefit of reduced computational time. (paper)
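
    Both records above rely on MART-family reconstructions. The following sketch shows the standard multiplicative MART update on a small dense system; it is not the authors' optimized tomo-PIV code, and the weight matrix, relaxation parameter and toy data are assumptions made only for illustration.

```python
# Hedged, dense sketch of the MART update. W[i, j] weights voxel j in camera
# pixel i, I holds the recorded pixel intensities, and mu is the relaxation
# parameter; everything here is synthetic.
import numpy as np

def mart(W, I, n_iter=50, mu=1.0, eps=1e-12):
    E = np.ones(W.shape[1])                      # uniform, strictly positive initial guess
    for _ in range(n_iter):
        for i in range(W.shape[0]):              # sequential update, one pixel at a time
            proj = W[i] @ E + eps                # current projection of the estimate
            E *= (I[i] / proj) ** (mu * W[i])    # multiplicative correction
    return E

rng = np.random.default_rng(1)
W = rng.random((40, 25))                         # toy weight matrix (pixels x voxels)
E_true = rng.random(25)                          # toy "particle" intensity field
I = W @ E_true                                   # simulated recorded projections
E_rec = mart(W, I)
print("relative residual:", np.linalg.norm(W @ E_rec - I) / np.linalg.norm(I))
```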

  10. ARE: Ada Rendering Engine

    Directory of Open Access Journals (Sweden)

    Stefano Penge

    2009-10-01

    Full Text Available In web application development it has become common practice to use templates and powerful template engines to automate the generation of the content presented to the user. Sometimes, however, the power of such engines is obtained by mixing logic and interface, by introducing languages other than the page-description languages, or even by inventing new dedicated languages. ARE (ADA Rendering Engine) is designed to manage the entire flow of dynamic HTML/XHTML content creation - the selection of the correct template, CSS and JavaScript, and the production of the output - while completely separating logic and interface. The templates used are pure HTML with no parts in other languages, and can therefore be managed and displayed on their own. The generated HTML code is uniform and parameterized. ARE consists of two modules, CORE (Common Output Rendering Engine) and ALE (ADA Layout Engine). The first (CORE) is used for the object-oriented generation of DOM elements and is meant to help the developer produce code that is valid with respect to the DTD in use; CORE automatically generates the DOM elements according to the DTD set in the configuration. The second (ALE) is used as a template engine to automatically select the appropriate HTML template, CSS and JavaScript files on the basis of a number of parameters (module, user profile, node type, course type, installation preferences). ALE allows default templates and recursive microtemplates to be used to simplify the graphic designer's work. The two modules can in any case be used independently of each other: it is possible to generate and render an HTML page using only CORE, or to send the CORE objects to the ALE template engine, which renders the HTML page; conversely, it is possible to generate HTML without using CORE and send it to the ALE template engine. CORE is at its first release and is already in use in

  11. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  12. Interactive Volume Rendering of Diffusion Tensor Data

    Energy Technology Data Exchange (ETDEWEB)

    Hlawitschka, Mario; Weber, Gunther; Anwander, Alfred; Carmichael, Owen; Hamann, Bernd; Scheuermann, Gerik

    2007-03-30

    As 3D volumetric images of the human body become an increasingly crucial source of information for the diagnosis and treatment of a broad variety of medical conditions, advanced techniques that allow clinicians to efficiently and clearly visualize volumetric images become increasingly important. Interaction has proven to be a key concept in analysis of medical images because static images of 3D data are prone to artifacts and misunderstanding of depth. Furthermore, fading out clinically irrelevant aspects of the image while preserving contextual anatomical landmarks helps medical doctors to focus on important parts of the images without becoming disoriented. Our goal was to develop a tool that unifies interactive manipulation and context preserving visualization of medical images with a special focus on diffusion tensor imaging (DTI) data. At each image voxel, DTI provides a 3 x 3 tensor whose entries represent the 3D statistical properties of water diffusion locally. Water motion that is preferential to specific spatial directions suggests structural organization of the underlying biological tissue; in particular, in the human brain, the naturally occurring diffusion of water in the axon portion of neurons is predominantly anisotropic along the longitudinal direction of the elongated, fiber-like axons [MMM+02]. This property has made DTI an emerging source of information about the structural integrity of axons and axonal connectivity between brain regions, both of which are thought to be disrupted in a broad range of medical disorders including multiple sclerosis, cerebrovascular disease, and autism [Mos02, FCI+01, JLH+99, BGKM+04, BJB+03].
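
    As a small, hedged companion to the record above: fractional anisotropy (FA) is one standard scalar commonly derived from the 3 x 3 diffusion tensor when DTI data are visualized. The snippet below computes it from the tensor eigenvalues for a synthetic tensor; it is not part of the described tool, and the tensor values are assumptions.

```python
# Hedged illustration: fractional anisotropy (FA) from the eigenvalues of a
# synthetic 3 x 3 diffusion tensor.
import numpy as np

def fractional_anisotropy(tensor):
    lam = np.linalg.eigvalsh(tensor)                  # eigenvalues of the diffusion tensor
    num = np.sqrt(((lam - lam.mean()) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

# Strongly anisotropic diffusion (mostly along one axis, e.g. a fiber-like structure).
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])                 # assumed values, in mm^2/s
print("FA =", round(fractional_anisotropy(D), 3))     # high FA indicates fiber-like diffusion
```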

  13. Assessment of left ventricular function and volumes by myocardial perfusion scintigraphy - comparison of two algorithms

    International Nuclear Information System (INIS)

    Zajic, T.; Fischer, R.; Brink, I.; Moser, E.; Krause, T.; Saurbier, B.

    2001-01-01

    Aim: Left ventricular volume and function can be computed from gated SPECT myocardial perfusion imaging using the Emory Cardiac Toolbox (ECT) or gated SPECT quantification (GS-Quant). The aim of this study was to compare both programs with respect to their practical application, stability and precision on heart models as well as in clinical use. Methods: The volumes of five cardiac models were calculated by ECT and GS-Quant. 48 patients (13 female, 35 male) underwent a one-day stress-rest protocol and gated SPECT. From these 96 gated SPECT images, left ventricular ejection fraction (LVEF), end-diastolic volume (EDV) and end-systolic volume (ESV) were estimated by ECT and GS-Quant. For 42 patients LVEF was also determined by echocardiography. Results: For the cardiac models the computed volumes showed high correlation with the model volumes as well as high correlation between ECT and GS-Quant (r ≥0.99). Both programs underestimated the volume by approximately 20-30% independent of the ventricle size. Calculating LVEF, EDV and ESV, GS-Quant and ECT correlated well to each other and to the LVEF estimated by echocardiography (r ≥0.86). LVEF values determined with ECT were about 10% higher than values determined with GS-Quant or echocardiography. The incorrect surfaces calculated by the automatic algorithm of GS-Quant for three examinations could not be corrected manually. 34 of the ECT studies were optimized by the operator. Conclusion: GS-Quant and ECT are two reliable programs for estimating LVEF. Both seem to underestimate the cardiac volume. In practical application GS-Quant was faster and easier to use. ECT allows the user to define the contour of the ventricle and thus is less susceptible to artifacts. (orig.) [de

  14. Transformative Rendering of Internet Resources

    Science.gov (United States)

    2012-10-01

    using either the Firefox or Google Chrome rendering engine. The rendering server then captures a screen shot of the page and creates code that positions...be compromised at web pages the hackers had built for that hacking competition to exploit that particular OS/browser configuration. During...of risk with no benefit. They include: - The rendering server is hosted on a Linux-based operating system (OS). The OS is much more secure than the

  15. CT two-dimensional reformation versus three-dimensional volume rendering with regard to surgical findings in the preoperative assessment of the ossicular chain in chronic suppurative otitis media

    International Nuclear Information System (INIS)

    Guo, Yong; Liu, Yang; Lu, Qiao-hui; Zheng, Kui-hong; Shi, Li-jing; Wang, Qing-jun

    2013-01-01

    Purpose: To assess the role of three-dimensional volume rendering (3DVR) in the preoperative assessment of the ossicular chain in chronic suppurative otitis media (CSOM). Materials and methods: Sixty-six patients with CSOM were included in this prospective study. Temporal bone was scanned with a 128-channel multidetector row CT and the axial data was transferred to the workstation for multiplanar reformation (MPR) and 3DVR reconstructions. Evaluation of the ossicular chain according to a three-point scoring system on two-dimensional reformation (2D) and 3DVR was performed independently by two radiologists. The evaluation results were compared with surgical findings. Results: 2D showed over 89% accuracy in the assessment of segmental absence of the ossicular chain in CSOM, no matter how small the segmental size was. 3DVR was as accurate as 2D for the assessment of segmental absence. However, 3DVR was found to be more accurate than 2D in the evaluation of partial erosion of segments. Conclusion: Both 3DVR and 2D are accurate and reliable for the assessment of the ossicular chain in CSOM. The inclusion of 3DVR images in the imaging protocol improves the accuracy of 2D in detecting ossicular erosion from CSOM

  16. Multidetector-row computed tomography in the preoperative diagnosis of intestinal complications caused by clinically unsuspected ingested dietary foreign bodies: a case series emphasizing the use of volume rendering techniques

    Energy Technology Data Exchange (ETDEWEB)

    Teixeira, Augusto Cesar Vieira; Torres, Ulysses dos Santos; Oliveira, Eduardo Portela de; Gual, Fabiana; Bauab Junior, Tufik, E-mail: usantor@yahoo.com.br [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Hospital de Base. Serv. de Radiologia e Diagnostico por Imagem; Westin, Carlos Eduardo Garcia [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Hospital de Base. Cirurgia Geral; Cardoso, Luciana Vargas [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Hospital de Base. Setor de Tomografia Computadorizada

    2013-11-15

    Objective: the present study was aimed at describing a case series where a preoperative diagnosis of intestinal complications secondary to accidentally ingested dietary foreign bodies was made by multidetector-row computed tomography (MDCT), with emphasis on complementary findings yielded by volume rendering techniques (VRT) and curved multiplanar reconstructions (MPR). Materials and Methods: The authors retrospectively assessed five patients with surgically confirmed intestinal complications (perforation and/or obstruction) secondary to unsuspected ingested dietary foreign bodies, consecutively assisted in their institution between 2010 and 2012. Demographic, clinical, laboratory and radiological data were analyzed. VRT and curved MPR were subsequently performed. Results: preoperative diagnosis of intestinal complications was originally performed in all cases. In one case the presence of a foreign body was not initially identified as the causal factor, and the use of complementary techniques facilitated its retrospective identification. In all cases these tools allowed a better depiction of the entire foreign bodies on a single image section, contributing to the assessment of their morphology. Conclusion: although the use of complementary techniques has not had a direct impact on diagnostic performance in most cases of this series, they may provide a better depiction of foreign bodies' morphology on a single image section. (author)

  17. CT two-dimensional reformation versus three-dimensional volume rendering with regard to surgical findings in the preoperative assessment of the ossicular chain in chronic suppurative otitis media

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yong, E-mail: guoyong27@hotmail.com [Department of Radiology, Navy General Hospital, 6# Fucheng Road, Beijing 100048 (China); Liu, Yang, E-mail: liuyangdoc@sina.com [Department of Otorhinolaryngology, Navy General Hospital, 6# Fucheng Road, Beijing 100048 (China); Lu, Qiao-hui, E-mail: Luqiaohui465@126.com [Department of Radiology, Navy General Hospital, 6# Fucheng Road, Beijing 100048 (China); Zheng, Kui-hong, E-mail: zhengkuihong1971@sina.com [Department of Radiology, Navy General Hospital, 6# Fucheng Road, Beijing 100048 (China); Shi, Li-jing, E-mail: Shilijing2003@yahoo.com.cn [Department of Radiology, Navy General Hospital, 6# Fucheng Road, Beijing 100048 (China); Wang, Qing-jun, E-mail: wangqingjun77@163.com [Department of Radiology, Navy General Hospital, 6# Fucheng Road, Beijing 100048 (China)

    2013-09-15

    Purpose: To assess the role of three-dimensional volume rendering (3DVR) in the preoperative assessment of the ossicular chain in chronic suppurative otitis media (CSOM). Materials and methods: Sixty-six patients with CSOM were included in this prospective study. Temporal bone was scanned with a 128-channel multidetector row CT and the axial data was transferred to the workstation for multiplanar reformation (MPR) and 3DVR reconstructions. Evaluation of the ossicular chain according to a three-point scoring system on two-dimensional reformation (2D) and 3DVR was performed independently by two radiologists. The evaluation results were compared with surgical findings. Results: 2D showed over 89% accuracy in the assessment of segmental absence of the ossicular chain in CSOM, no matter how small the segmental size was. 3DVR was as accurate as 2D for the assessment of segmental absence. However, 3DVR was found to be more accurate than 2D in the evaluation of partial erosion of segments. Conclusion: Both 3DVR and 2D are accurate and reliable for the assessment of the ossicular chain in CSOM. The inclusion of 3DVR images in the imaging protocol improves the accuracy of 2D in detecting ossicular erosion from CSOM.

  18. Stereoscopy in diagnostic radiology and procedure planning: does stereoscopic assessment of volume-rendered CT angiograms lead to more accurate characterisation of cerebral aneurysms compared with traditional monoscopic viewing?

    International Nuclear Information System (INIS)

    Stewart, Nikolas; Lock, Gregory; Coucher, John; Hopcraft, Anthony

    2014-01-01

    Stereoscopic vision is a critical part of the human visual system, conveying more information than two-dimensional, monoscopic observation alone. This study aimed to quantify the contribution of stereoscopy in assessment of radiographic data, using widely available three-dimensional (3D)-capable display monitors, by assessing whether stereoscopic viewing improved the characterisation of cerebral aneurysms. Nine radiology registrars were shown 40 different volume-rendered (VR) models of cerebral computed tomography angiograms (CTAs), each in both monoscopic and stereoscopic format, and then asked to record aneurysm characteristics on short multiple-choice answer sheets. The monitor used was a current model commercially available 3D television. Responses were marked against a gold standard of assessments made by a consultant radiologist, using the original CT planar images on a diagnostic radiology computer workstation. The participants' results were fairly homogeneous, with most showing no difference in diagnosis using stereoscopic VR models. One participant performed better on the monoscopic VR models. On average, monoscopic VRs achieved slightly better diagnostic accuracy, by 2.0%. Stereoscopy has a long history, but it has only recently become technically feasible for stored cross-sectional data to be adequately reformatted and displayed in this format. Scant literature exists to quantify the technology's possible contribution to medical imaging - this study attempts to build on this limited knowledge base and promote discussion within the field. Stereoscopic viewing of images should be further investigated and may well eventually find a permanent place in procedural and diagnostic medical imaging.

  19. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    Science.gov (United States)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.

  20. Implementation of Finite Volume based Navier Stokes Algorithm Within General Purpose Flow Network Code

    Science.gov (United States)

    Schallhorn, Paul; Majumdar, Alok

    2012-01-01

    This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system level network flow analysis. There are several thermo-fluid engineering problems where higher fidelity solutions are needed that are not within the capacity of system level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions of Poiseuille, Couette and flow in a driven cavity.

  1. A simple algorithm for subregional striatal uptake analysis with partial volume correction in dopaminergic PET imaging

    International Nuclear Information System (INIS)

    Lue Kunhan; Lin Hsinhon; Chuang Kehshih; Kao Chihhao, K.; Hsieh Hungjen; Liu Shuhsin

    2014-01-01

    In positron emission tomography (PET) of the dopaminergic system, quantitative measurements of nigrostriatal dopamine function are useful for differential diagnosis. A subregional analysis of striatal uptake makes the diagnostic performance more powerful. However, the partial volume effect (PVE) induces an underestimation of the true radioactivity concentration in small structures. This work proposes a simple algorithm for subregional analysis of striatal uptake with partial volume correction (PVC) in dopaminergic PET imaging. The PVC algorithm analyzes the separate striatal subregions and takes into account the PVE based on the recovery coefficient (RC). The RC is defined as the ratio of the PVE-uncorrected to PVE-corrected radioactivity concentration, and is derived from a combination of the traditional volume of interest (VOI) analysis and the large VOI technique. The clinical studies, comprising 11 patients with Parkinson's disease (PD) and 6 healthy subjects, were used to assess the impact of PVC on the quantitative measurements. Simulations on a numerical phantom that mimicked realistic healthy and neurodegenerative situations were used to evaluate the performance of the proposed PVC algorithm. In both the clinical and the simulation studies, the striatal-to-occipital ratio (SOR) values for the entire striatum and its subregions were calculated with and without PVC. In the clinical studies, the SOR values in each structure (caudate, anterior putamen, posterior putamen, putamen, and striatum) were significantly higher with PVC than without. Among the PD patients, the SOR values in each structure and quantitative disease severity ratings were shown to be significantly related only when PVC was used. For the simulation studies, the average absolute percentage errors of the SOR estimates before and after PVC were 22.74% and 1.54% in the healthy situation, respectively; those in the neurodegenerative situation were 20.69% and 2
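
    A minimal numeric sketch of the recovery-coefficient idea defined above: since the RC is the ratio of the PVE-uncorrected to the PVE-corrected concentration, a measured subregional value is corrected by dividing by its RC before the striatal-to-occipital ratio (SOR) is formed. All numbers below are assumptions for illustration, and the SOR definition used is one common convention, not necessarily the authors' exact formula.

```python
# Hedged sketch: RC-based partial volume correction before computing the SOR.
def striatal_to_occipital_ratio(striatal_counts, occipital_counts, recovery_coefficient=1.0):
    corrected = striatal_counts / recovery_coefficient   # undo the PVE underestimation
    return (corrected - occipital_counts) / occipital_counts

occipital = 100.0           # assumed non-specific reference counts
putamen_measured = 180.0    # PVE-blurred (underestimated) subregional counts
rc_putamen = 0.65           # assumed recovery coefficient for a small structure

print("SOR without PVC:", striatal_to_occipital_ratio(putamen_measured, occipital))
print("SOR with PVC   :", striatal_to_occipital_ratio(putamen_measured, occipital, rc_putamen))
```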

  2. Sketchy Rendering for Information Visualization

    NARCIS (Netherlands)

    Wood, Jo; Isenberg, Petra; Isenberg, Tobias; Dykes, Jason; Boukhelifa, Nadia; Slingsby, Aidan

    2012-01-01

    We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These

  3. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method based on Archimedes' principle. The water displacement method is inaccurate and considered a destructive method. Computer vision offers an accurate and nondestructive method for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From these silhouettes, the Monte Carlo method was used to approximate the volume of the object. The simulation results show that the algorithm produces high accuracy and precision for volume measurement.
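
    The silhouette/Monte Carlo idea above can be sketched in a few lines: random points inside a bounding box are kept only if their projections fall inside every silhouette, and the object volume is the kept fraction times the box volume. The sketch below uses three orthographic, axis-aligned views and a box-shaped toy object as assumptions to stay self-contained; the paper itself uses five calibrated camera views.

```python
# Hedged sketch (not the paper's calibrated pipeline) of Monte Carlo volume
# estimation from silhouettes.
import numpy as np

def monte_carlo_volume(silhouettes, project, bbox_min, bbox_max, n_samples=200_000, rng=None):
    rng = rng or np.random.default_rng(0)
    lo, hi = np.asarray(bbox_min, float), np.asarray(bbox_max, float)
    pts = lo + (hi - lo) * rng.random((n_samples, 3))
    inside = np.ones(n_samples, bool)
    for view, sil in enumerate(silhouettes):
        u, v = project(pts, view)                               # pixel coordinates in this view
        ok = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        inside &= ok
        inside[ok] &= sil[v[ok], u[ok]]                         # must lie inside every silhouette
    return float(np.prod(hi - lo)) * inside.mean()

# Toy object: an axis-aligned box, whose visual hull from axis-aligned views is exact.
sil = np.zeros((64, 64), bool)
sil[20:44, 20:44] = True
axes = [(0, 1), (0, 2), (1, 2)]            # which two coordinates each orthographic view keeps

def project(pts, view):
    a, b = axes[view]
    return pts[:, a].astype(int), pts[:, b].astype(int)

est = monte_carlo_volume([sil, sil, sil], project, (0, 0, 0), (64, 64, 64))
print("estimated volume:", round(est), " true volume:", 24 ** 3)
```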

  4. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  5. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology.

    Science.gov (United States)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-02-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.

  6. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph [Medical University of Vienna (Austria). Center of Medical Physics and Biomedical Engineering] [and others

    2012-07-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference X-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 x 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. (orig.)
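
    As a hedged, CPU-only illustration of what a DRR is (a line integral of CT attenuation along each ray), the sketch below uses a parallel-beam simplification in place of the perspective raycasting and wobbled splatting described in the two records above. The toy CT volume, voxel spacing and HU-to-attenuation mapping are assumptions.

```python
# Heavily simplified DRR sketch: sum attenuation along one axis (parallel-beam
# geometry) and apply Beer-Lambert; not the GPU methods of the papers above.
import numpy as np

def parallel_beam_drr(ct_hu, voxel_spacing_mm, axis=0, mu_water=0.02):
    mu = mu_water * (1.0 + ct_hu / 1000.0)          # crude HU -> linear attenuation (1/mm)
    mu = np.clip(mu, 0.0, None)
    path = mu.sum(axis=axis) * voxel_spacing_mm     # integrate along the ray direction
    return np.exp(-path)                            # transmitted intensity

# Toy CT volume: air everywhere, a water cylinder with a "bone" rod inside.
z, y, x = np.mgrid[0:64, 0:128, 0:128]
ct = np.full((64, 128, 128), -1000.0)
ct[(x - 64) ** 2 + (y - 64) ** 2 <= 40 ** 2] = 0.0      # water cylinder
ct[(x - 64) ** 2 + (y - 40) ** 2 <= 8 ** 2] = 700.0     # bone-like rod
drr = parallel_beam_drr(ct, voxel_spacing_mm=1.0, axis=1)
print(drr.shape, float(drr.min()), float(drr.max()))
```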

  7. Integral image rendering procedure for aberration correction and size measurement.

    Science.gov (United States)

    Sommer, Holger; Ihrig, Andreas; Ebenau, Melanie; Flühs, Dirk; Spaan, Bernhard; Eichmann, Marion

    2014-05-20

    The challenge in rendering integral images is to use as much information preserved by the light field as possible to reconstruct a captured scene in a three-dimensional way. We propose a rendering algorithm based on the projection of rays through a detailed simulation of the optical path, considering all the physical properties and locations of the optical elements. The rendered images contain information about the correct size of imaged objects without the need to calibrate the imaging device. Additionally, aberrations of the optical system may be corrected, depending on the setup of the integral imaging device. We show simulation data that illustrates the aberration correction ability and experimental data from our plenoptic camera, which illustrates the capability of our proposed algorithm to measure size and distance. We believe this rendering procedure will be useful in the future for three-dimensional ophthalmic imaging of the human retina.

  8. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    Science.gov (United States)

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

    Visual neuroprostheses are still limited and simulated prosthetic vision (SPV) is used to evaluate potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirement on visual neuroprosthetic characteristics to restore various functions such as reading, objects and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current arrays of electrodes is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, vision distance was limited to 3, 6, or 9 m, respectively. In the second strategy, the rendering was not based on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environments were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. These results show that low resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate
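
    The "distance-based" rendering described above can be approximated with a short sketch: a depth map is reduced to a 15 x 18 electrode grid and nearer surfaces are shown brighter, with everything beyond a viewing-distance limit dropped. The grid size matches the simulated array in the study; the block averaging, quantization and toy depth map are assumptions.

```python
# Hedged sketch of a distance-based low-resolution rendering for simulated
# prosthetic vision; not the study's actual rendering pipeline.
import numpy as np

def distance_based_rendering(depth_m, grid_shape=(15, 18), max_distance_m=6.0, levels=8):
    h, w = depth_m.shape
    gh, gw = grid_shape
    blocks = depth_m[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    block_depth = blocks.mean(axis=(1, 3))                      # one depth per electrode
    brightness = np.clip(1.0 - block_depth / max_distance_m, 0.0, 1.0)
    return np.round(brightness * (levels - 1)) / (levels - 1)   # quantized "phosphene" levels

depth = np.tile(np.linspace(1.0, 9.0, 180), (150, 1))           # toy scene: a receding floor
print(distance_based_rendering(depth).shape)                    # (15, 18) electrode grid
```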

  9. Real-time photorealistic stereoscopic rendering of fire

    Science.gov (United States)

    Rose, Benjamin M.; McAllister, David F.

    2007-02-01

    We propose a method for real-time photorealistic stereo rendering of the natural phenomenon of fire. Applications include the use of virtual reality in fire fighting, military training, and entertainment. Rendering fire in real-time presents a challenge because of the transparency and non-static fluid-like behavior of fire. It is well known that, in general, methods that are effective for monoscopic rendering are not necessarily easily extended to stereo rendering because monoscopic methods often do not provide the depth information necessary to produce the parallax required for binocular disparity in stereoscopic rendering. We investigate the existing techniques used for monoscopic rendering of fire and discuss their suitability for extension to real-time stereo rendering. Methods include the use of precomputed textures, dynamic generation of textures, and rendering models resulting from the approximation of solutions of fluid dynamics equations through the use of ray-tracing algorithms. We have found that in order to attain real-time frame rates, our method based on billboarding is effective. Slicing is used to simulate depth. Texture mapping or 2D images are mapped onto polygons and alpha blending is used to treat transparency. We can use video recordings or prerendered high-quality images of fire as textures to attain photorealistic stereo.

  10. Realistic Real-Time Outdoor Rendering in Augmented Reality

    Science.gov (United States)

    Kolivand, Hoshang; Sunar, Mohd Shahrizal

    2014-01-01

    Realistic rendering techniques for outdoor Augmented Reality (AR) have been an attractive topic over the last two decades, considering the sizeable amount of publications in computer graphics. Realistic virtual objects in outdoor rendering AR systems require sophisticated effects such as: shadows, daylight and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are related to non real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposed a much newer, unique technique to achieve realistic real-time outdoor rendering, while taking into account the interaction between sky colours and objects in AR systems with respect to shadows in any specific location, date and time. This approach involves three main phases, which cover different outdoor AR rendering requirements. Firstly, sky colour was generated with respect to the position of the sun. The second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique has significantly improved the realism of real-time outdoor AR rendering, thus solving the problem of realistic AR systems. PMID:25268480

  11. Realistic real-time outdoor rendering in augmented reality.

    Directory of Open Access Journals (Sweden)

    Hoshang Kolivand

    Full Text Available Realistic rendering techniques for outdoor Augmented Reality (AR) have been an attractive topic over the last two decades, considering the sizeable amount of publications in computer graphics. Realistic virtual objects in outdoor rendering AR systems require sophisticated effects such as: shadows, daylight and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are related to non real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposed a much newer, unique technique to achieve realistic real-time outdoor rendering, while taking into account the interaction between sky colours and objects in AR systems with respect to shadows in any specific location, date and time. This approach involves three main phases, which cover different outdoor AR rendering requirements. Firstly, sky colour was generated with respect to the position of the sun. The second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique has significantly improved the realism of real-time outdoor AR rendering, thus solving the problem of realistic AR systems.

  12. GPU PRO 3 Advanced rendering techniques

    CERN Document Server

    Engel, Wolfgang

    2012-01-01

    GPU Pro3, the third volume in the GPU Pro book series, offers practical tips and techniques for creating real-time graphics that are useful to beginners and seasoned game and graphics programmers alike. Section editors Wolfgang Engel, Christopher Oat, Carsten Dachsbacher, Wessam Bahnassi, and Sebastien St-Laurent have once again brought together a high-quality collection of cutting-edge techniques for advanced GPU programming. With contributions by more than 50 experts, GPU Pro3: Advanced Rendering Techniques covers battle-tested tips and tricks for creating interesting geometry, realistic sha

  13. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  14. Advanced Material Rendering in Blender

    Czech Academy of Sciences Publication Activity Database

    Hatka, Martin; Haindl, Michal

    2012-01-01

    Vol. 11, No. 2 (2012), pp. 15-23, ISSN 1081-1451. R&D Projects: GA ČR GAP103/11/0335; GA ČR GA102/08/0593. Grants - others: CESNET(CZ) 387/2010; CESNET(CZ) 409/2011. Institutional support: RVO:67985556. Keywords: realistic material rendering * bidirectional texture function * Blender. Subject RIV: BD - Theory of Information. http://library.utia.cas.cz/separaty/2013/RO/haindl-advanced material rendering in blender.pdf

  15. NUMERICAL ALGORITHMS AT NON-ZERO CHEMICAL POTENTIAL. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP, VOLUME 19

    International Nuclear Information System (INIS)

    Blum, T.; Creutz, M.

    1999-01-01

    The RIKEN BNL Research Center hosted its 19th workshop April 27th through May 1, 1999. The topic was Numerical Algorithms at Non-Zero Chemical Potential. QCD at a non-zero chemical potential (non-zero density) poses a long-standing unsolved challenge for lattice gauge theory. Indeed, it is the primary unresolved issue in the fundamental formulation of lattice gauge theory. The chemical potential renders conventional lattice actions complex, practically excluding the usual Monte Carlo techniques which rely on a positive definite measure for the partition function. This 'sign' problem appears in a wide range of physical systems, ranging from strongly coupled electronic systems to QCD. The lack of a viable numerical technique at non-zero density is particularly acute since new exotic 'color superconducting' phases of quark matter have recently been predicted in model calculations. A first principles confirmation of the phase diagram is desirable since experimental verification is not expected soon. At the workshop several proposals for new algorithms were made: cluster algorithms, direct simulation of Grassmann variables, and a bosonization of the fermion determinant. All generated considerable discussion and seem worthy of continued investigation. Several interesting results using conventional algorithms were also presented: condensates in four-fermion models, SU(2) gauge theory in fundamental and adjoint representations, and lessons learned from strong coupling, non-zero temperature, and heavy quarks applied to non-zero density simulations

  16. Volume Ray Casting with Peak Finding and Differential Sampling

    KAUST Repository

    Knoll, A.

    2009-11-01

    Direct volume rendering and isosurfacing are ubiquitous rendering techniques in scientific visualization, commonly employed in imaging 3D data from simulation and scan sources. Conventionally, these methods have been treated as separate modalities, necessitating different sampling strategies and rendering algorithms. In reality, an isosurface is a special case of a transfer function, namely a Dirac impulse at a given isovalue. However, artifact-free rendering of discrete isosurfaces in a volume rendering framework is an elusive goal, requiring either infinite sampling or smoothing of the transfer function. While preintegration approaches solve the most obvious deficiencies in handling sharp transfer functions, artifacts can still result, limiting classification. In this paper, we introduce a method for rendering such features by explicitly solving for isovalues within the volume rendering integral. In addition, we present a sampling strategy inspired by ray differentials that automatically matches the frequency of the image plane, resulting in fewer artifacts near the eye and better overall performance. These techniques exhibit clear advantages over standard uniform ray casting with and without preintegration, and allow for high-quality interactive volume rendering with sharp C0 transfer functions. © 2009 IEEE.
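
    The following sketch illustrates, in a hedged way, the core idea of solving for isovalues along a ray rather than relying on the sampling rate alone: march in coarse steps, detect a bracketed crossing of the isovalue, then refine it. Bisection is used here as a simple stand-in for the paper's solver, and the analytic scalar field is an assumption.

```python
# Hedged sketch of explicit isovalue solving along a ray: coarse marching plus
# bisection refinement of the bracketed crossing.
import numpy as np

def first_isosurface_hit(field, origin, direction, isovalue, t_max=10.0, step=0.25, refine=30):
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    direction = direction / np.linalg.norm(direction)
    f = lambda t: field(origin + t * direction) - isovalue
    t_prev, f_prev = 0.0, f(0.0)
    t = step
    while t <= t_max:
        f_cur = f(t)
        if f_prev * f_cur <= 0.0:                 # bracketed a crossing of the isovalue
            lo, hi = t_prev, t
            for _ in range(refine):               # bisection refinement
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        t_prev, f_prev = t, f_cur
        t += step
    return None                                   # no isosurface along this ray

# Analytic "volume": distance from the point (0, 0, 5); the isosurface at 2.0 is a sphere.
sphere = lambda p: np.linalg.norm(p - np.array([0.0, 0.0, 5.0]))
print(first_isosurface_hit(sphere, origin=(0, 0, 0), direction=(0, 0, 1), isovalue=2.0))
# expected hit near t = 3.0
```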

  17. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  18. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3n/2 - 2 is the solution to the above ...

  19. Geometric optimization of thermoelectric coolers in a confined volume using genetic algorithms

    International Nuclear Information System (INIS)

    Cheng, Y.-H.; Lin, W.-K.

    2005-01-01

    The demand for thermoelectric coolers (TEC) has grown significantly because of the need for a steady, low-temperature operating environment for various electronic devices such as laser diodes, semiconductor equipment, infrared detectors and others. The cooling capacity and its coefficient of performance (COP) are both extremely important in considering applications. Optimizing the dimensions of the TEC legs provides the advantage of increasing the cooling capacity, while simultaneously considering its minimum COP. This study proposed a method of optimizing the dimensions of the TEC legs using genetic algorithms (GAs), to maximize the cooling capacity. A confined volume in which the TEC can be placed and the technological limitation in manufacturing a TEC leg were considered, and three parameters - leg length, leg area and the number of legs - were taken as the variables to be optimized. The constraints of minimum COP and maximum cost of the material were set, and a genetic search was performed to determine the optimal dimensions of the TEC legs. This work reveals that optimizing the dimensions of the TEC can increase its cooling capacity. The results also show that GAs can determine the optimal dimensions according to various input currents and various cold-side operating temperatures
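
    A toy genetic-algorithm sketch in the spirit of the study above is given below: it maximizes the textbook single-stage TEC cooling capacity over leg length, leg area and leg count, subject to a confined volume and a minimum COP. The material properties, operating point, bounds and GA settings are assumptions, not the paper's data, and the material-cost constraint is omitted.

```python
# Hedged GA sketch for TEC leg geometry; the TEC model is the standard textbook
# single-stage formula, and all parameter values are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

ALPHA, RHO, KAPPA = 2.0e-4, 1.0e-5, 1.5      # Seebeck (V/K), resistivity (ohm*m), k (W/m/K)
I, T_COLD, DT = 3.0, 280.0, 30.0             # drive current (A), cold-side T (K), T_hot - T_cold (K)
V_MAX, COP_MIN = 2.0e-7, 0.3                 # confined leg volume (m^3), minimum COP

BOUNDS = np.array([[0.5e-3, 3.0e-3],         # leg length L (m)
                   [0.5e-6, 4.0e-6],         # leg cross-section A (m^2)
                   [10.0,   200.0]])         # number of legs N

def performance(x):
    L, A, N = x[0], x[1], np.round(x[2])
    q_c = N * (ALPHA * I * T_COLD - 0.5 * I**2 * RHO * L / A - KAPPA * A / L * DT)
    p_in = N * (I**2 * RHO * L / A + ALPHA * I * DT)
    cop = q_c / p_in if p_in > 0 else 0.0
    return q_c, cop, N * A * L

def fitness(x):
    q_c, cop, volume = performance(x)
    penalty = 1e3 * (max(0.0, volume - V_MAX) / V_MAX + max(0.0, COP_MIN - cop))
    return q_c - penalty

def evolve(pop_size=60, generations=120, mutation=0.1):
    pop = BOUNDS[:, 0] + (BOUNDS[:, 1] - BOUNDS[:, 0]) * rng.random((pop_size, 3))
    for _ in range(generations):
        scores = np.array([fitness(x) for x in pop])
        new_pop = [pop[scores.argmax()].copy()]                 # elitism
        while len(new_pop) < pop_size:
            a, b = rng.integers(pop_size, size=2)
            p1 = pop[a] if scores[a] > scores[b] else pop[b]    # tournament selection
            a, b = rng.integers(pop_size, size=2)
            p2 = pop[a] if scores[a] > scores[b] else pop[b]
            w = rng.random(3)
            child = w * p1 + (1 - w) * p2                       # blend crossover
            child += mutation * (BOUNDS[:, 1] - BOUNDS[:, 0]) * rng.normal(size=3)
            new_pop.append(np.clip(child, BOUNDS[:, 0], BOUNDS[:, 1]))
        pop = np.array(new_pop)
    best = pop[np.argmax([fitness(x) for x in pop])]
    return best, performance(best)

best, (q_c, cop, vol) = evolve()
print("L=%.2f mm  A=%.2f mm^2  N=%d  Qc=%.2f W  COP=%.2f" %
      (best[0] * 1e3, best[1] * 1e6, round(best[2]), q_c, cop))
```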

  20. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to B using C as auxiliary rod. • move_disk(A, C); (No + 1)th disk is moved from A to C directly ...

  1. Sketchy Rendering for Information Visualization.

    Science.gov (United States)

    Wood, J; Isenberg, P; Isenberg, T; Dykes, J; Boukhelifa, N; Slingsby, A

    2012-12-01

    We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These primitives allow higher-level graphical features such as bar charts, line charts, treemaps and node-link diagrams to be drawn in a sketchy style with a specified degree of sketchiness. The framework is designed to be easily integrated into existing visualization implementations with minimal programming modification or design effort. We show examples of use for statistical graphics, conveying spatial imprecision and for enhancing aesthetic and narrative qualities of visualization. We evaluate user perception of sketchiness of areal features through a series of stimulus-response tests in order to assess users' ability to place sketchiness on a ratio scale, and to estimate area. Results suggest relative area judgment is compromised by sketchy rendering and that its influence is dependent on the shape being rendered. They show that degree of sketchiness may be judged on an ordinal scale but that its judgement varies strongly between individuals. We evaluate higher-level impacts of sketchiness through user testing of scenarios that encourage user engagement with data visualization and willingness to critique visualization design. Results suggest that where a visualization is clearly sketchy, engagement may be increased and that attitudes to participating in visualization annotation are more positive. The results of our work have implications for effective information visualization design that go beyond the traditional role of sketching as a tool for prototyping or its use for an indication of general uncertainty.

  2. The effect of depth compression on multiview rendering quality

    NARCIS (Netherlands)

    Merkle, P.; Morvan, Y.; Smolic, A.; Farin, D.S.; Mueller, K..; With, de P.H.N.; Wiegand, T.

    2010-01-01

    This paper presents a comparative study on different techniques for depth-image compression and its implications on the quality of multiview video plus depth virtual view rendering. A novel coding algorithm for depth images that concentrates on their special characteristics, namely smooth regions

  3. A Virtual Reality System for PTCD Simulation Using Direct Visuo-Haptic Rendering of Partially Segmented Image Data.

    Science.gov (United States)

    Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz

    2016-01-01

    This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in Cuda and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.
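
    The simulator above builds on GPU direct volume rendering of the (deformed) image grid. As a reminder of the core compositing operation only, and not the CUDA implementation described in the paper, here is a minimal front-to-back ray-marching sketch in NumPy with an assumed toy transfer function:

    # Minimal front-to-back compositing along one volume axis (illustrative only).
    import numpy as np

    def transfer_function(v):
        """Assumed toy transfer function: normalised value -> (grey colour, opacity)."""
        alpha = np.clip((v - 0.3) * 2.0, 0.0, 1.0) * 0.1
        return v, alpha

    def render_axis_aligned(volume):
        """Composite the volume front-to-back along axis 0 and return a 2D image."""
        vol = (volume - volume.min()) / (volume.max() - volume.min() + 1e-9)
        color = np.zeros(vol.shape[1:])
        alpha = np.zeros(vol.shape[1:])
        for slice_ in vol:                     # march one slice (= one step per ray) at a time
            c, a = transfer_function(slice_)
            color += (1.0 - alpha) * a * c
            alpha += (1.0 - alpha) * a
            if np.all(alpha > 0.99):           # early ray termination
                break
        return color

    image = render_axis_aligned(np.random.rand(64, 128, 128))
    print(image.shape)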

  4. TransCut: interactive rendering of translucent cutouts.

    Science.gov (United States)

    Li, Dongping; Sun, Xin; Ren, Zhong; Lin, Stephen; Tong, Yiying; Guo, Baining; Zhou, Kun

    2013-03-01

    We present TransCut, a technique for interactive rendering of translucent objects undergoing fracturing and cutting operations. As the object is fractured or cut open, the user can directly examine and intuitively understand the complex translucent interior, as well as edit material properties through painting on cross sections and recombining the broken pieces - all with immediate and realistic visual feedback. This new mode of interaction with translucent volumes is made possible with two technical contributions. The first is a novel solver for the diffusion equation (DE) over a tetrahedral mesh that produces high-quality results comparable to the state-of-the-art finite element method (FEM) of Arbree et al. but at substantially higher speeds. This accuracy and efficiency are obtained by computing the discrete divergences of the diffusion equation and constructing the DE matrix using analytic formulas derived for linear finite elements. The second contribution is a multiresolution algorithm to significantly accelerate our DE solver while adapting to the frequent changes in topological structure of dynamic objects. The entire multiresolution DE solver is highly parallel and easily implemented on the GPU. We believe TransCut provides a novel visual effect for heterogeneous translucent objects undergoing fracturing and cutting operations.

  5. A stable algorithm for calculating phase equilibria with capillarity at specified moles, volume and temperature using a dynamic model

    KAUST Repository

    Kou, Jisheng

    2017-09-30

    Capillary pressure can significantly affect the phase properties and flow of liquid-gas fluids in porous media, and thus the phase equilibrium calculation incorporating capillary pressure is crucial to simulate such problems accurately. Recently, the phase equilibrium calculation at specified moles, volume and temperature (NVT-flash) has become an attractive problem. In this paper, capillarity is incorporated into the phase equilibrium calculation at specified moles, volume and temperature. A dynamical model for such a problem is developed for the first time by using the laws of thermodynamics and Onsager's reciprocal principle. This model consists of the evolutionary equations for moles and volume, and it can characterize the evolutionary process from a non-equilibrium state to an equilibrium state in the presence of the capillarity effect at specified moles, volume and temperature. The phase equilibrium equations are naturally derived. To simulate the proposed dynamical model efficiently, we adopt the convex-concave splitting of the total Helmholtz energy, and propose a thermodynamically stable numerical algorithm, which is proved to preserve the second law of thermodynamics at the discrete level. Using the thermodynamical relations, we derive a phase stability condition with the capillarity effect at specified moles, volume and temperature. Moreover, we propose a stable numerical algorithm for the phase stability testing, which can provide the feasible initial conditions. The performance of the proposed methods in predicting phase properties under the capillarity effect is demonstrated on various cases of pure substance and mixture systems.

  6. Evaluating the agreement between tumour volumetry and the estimated volumes of tumour lesions using an algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Laubender, Ruediger P. [German Cancer Consortium (DKTK), Heidelberg (Germany); University Hospital Munich - Campus Grosshadern, Institute of Medical Informatics, Biometry, and Epidemiology (IBE), Munich (Germany); German Cancer Research Center (DKFZ), Heidelberg (Germany); Lynghjem, Julia; D' Anastasi, Melvin; Graser, Anno [University Hospital Munich - Campus Grosshadern, Institute for Clinical Radiology, Munich (Germany); Heinemann, Volker; Modest, Dominik P. [University Hospital Munich - Campus Grosshadern, Department of Medical Oncology, Munich (Germany); Mansmann, Ulrich R. [University Hospital Munich - Campus Grosshadern, Institute of Medical Informatics, Biometry, and Epidemiology (IBE), Munich (Germany); Sartorius, Ute; Schlichting, Michael [Merck KGaA, Darmstadt (Germany)

    2014-07-15

    To evaluate the agreement between tumour volume derived from semiautomated volumetry (SaV) and tumor volume defined by spherical volume using longest lesion diameter (LD) according to Response Evaluation Criteria In Solid Tumors (RECIST) or ellipsoid volume using LD and longest orthogonal diameter (LOD) according to World Health Organization (WHO) criteria. Twenty patients with metastatic colorectal cancer from the CIOX trial were included. A total of 151 target lesions were defined by baseline computed tomography and followed until disease progression. All assessments were performed by a single reader. A variance component model was used to compare the three volume versions. There was a significant difference between the SaV and RECIST-based tumour volumes. The same model showed no significant difference between the SaV and WHO-based volumes. Scatter plots showed that the RECIST-based volumes overestimate lesion volume. The agreement between the SaV and WHO-based relative changes in tumour volume, evaluated by intraclass correlation, showed nearly perfect agreement. Estimating the volume of metastatic lesions using both the LD and LOD (WHO) is more accurate than those based on LD only (RECIST), which overestimates lesion volume. The good agreement between the SaV and WHO-based relative changes in tumour volume enables a reasonable approximation of three-dimensional tumour burden. (orig.)
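
    The two diameter-based estimates compared above have simple closed forms. The sketch below assumes the usual sphere formula for the RECIST-based volume and an ellipsoid whose third axis equals the LOD for the WHO-based volume; it is illustrative only and is not the trial's analysis code.

    # Diameter-based lesion volume estimates (sketch, not trial code).
    import math

    def volume_recist(ld_mm):
        """Sphere from the longest diameter (LD): (pi/6) * LD^3."""
        return math.pi / 6.0 * ld_mm ** 3

    def volume_who(ld_mm, lod_mm):
        """Ellipsoid from LD and the longest orthogonal diameter (LOD),
        assuming the third axis equals the LOD: (pi/6) * LD * LOD^2."""
        return math.pi / 6.0 * ld_mm * lod_mm ** 2

    ld, lod = 32.0, 21.0                       # example diameters in mm
    print("RECIST sphere: %.1f ml, WHO ellipsoid: %.1f ml"
          % (volume_recist(ld) / 1000.0, volume_who(ld, lod) / 1000.0))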

  7. Democratizing rendering for multiple viewers in surround VR systems

    KAUST Repository

    Schulze, Jürgen P.

    2012-03-01

    We present a new approach for how multiple users' views can be rendered in a surround virtual environment without using special multi-view hardware. It is based on the idea that different parts of the screen are often viewed by different users, so that they can be rendered from their own view point, or at least from a point closer to their view point than traditionally expected. The vast majority of 3D virtual reality systems are designed for one head-tracked user, and a number of passive viewers. Only the head tracked user gets to see the correct view of the scene, everybody else sees a distorted image. We reduce this problem by algorithmically democratizing the rendering view point among all tracked users. Researchers have proposed solutions for multiple tracked users, but most of them require major changes to the display hardware of the VR system, such as additional projectors or custom VR glasses. Our approach does not require additional hardware, except the ability to track each participating user. We propose three versions of our multi-viewer algorithm. Each of them balances image distortion and frame rate in different ways, making them more or less suitable for certain application scenarios. Our most sophisticated algorithm renders each pixel from its own, optimized camera perspective, which depends on all tracked users' head positions and orientations. © 2012 IEEE.

  8. Democratizing rendering for multiple viewers in surround VR systems

    KAUST Repository

    Schulze, Jürgen P.; Acevedo-Feliz, Daniel; Mangan, John; Prudhomme, Andrew; Nguyen, Phi Khanh; Weber, Philip P.

    2012-01-01

    We present a new approach for how multiple users' views can be rendered in a surround virtual environment without using special multi-view hardware. It is based on the idea that different parts of the screen are often viewed by different users, so that they can be rendered from their own view point, or at least from a point closer to their view point than traditionally expected. The vast majority of 3D virtual reality systems are designed for one head-tracked user, and a number of passive viewers. Only the head tracked user gets to see the correct view of the scene, everybody else sees a distorted image. We reduce this problem by algorithmically democratizing the rendering view point among all tracked users. Researchers have proposed solutions for multiple tracked users, but most of them require major changes to the display hardware of the VR system, such as additional projectors or custom VR glasses. Our approach does not require additional hardware, except the ability to track each participating user. We propose three versions of our multi-viewer algorithm. Each of them balances image distortion and frame rate in different ways, making them more or less suitable for certain application scenarios. Our most sophisticated algorithm renders each pixel from its own, optimized camera perspective, which depends on all tracked users' head positions and orientations. © 2012 IEEE.

  9. Enhancement method for rendered images of home decoration based on SLIC superpixels

    Science.gov (United States)

    Dai, Yutong; Jiang, Xiaotong

    2018-04-01

    Rendering technology has been widely used in the home decoration industry in recent years to produce images of home decoration designs. However, because rendered images depend heavily on the renderer parameters and the scene lighting, most rendered images in this industry require further optimization afterwards. To reduce this workload and enhance rendered images automatically, an algorithm utilizing neural networks is proposed in this manuscript. In addition, to handle a few extreme conditions such as strong sunlight and artificial lights, SLIC superpixel-based segmentation is used to select the bright areas of an image and enhance them independently. Finally, these selected areas are merged back into the entire image. Experimental results show that the proposed method enhances rendered images more effectively than several existing algorithms, and the proposed strategy proves especially adaptable to images with pronounced bright regions.
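
    A hedged sketch of the superpixel step described above, using scikit-image's slic: the learned enhancement from the paper is replaced here by a simple per-region gamma lift, and the segment count, brightness threshold and gamma are placeholder values.

    # Selecting and lifting bright superpixels with SLIC (scikit-image); illustrative only.
    import numpy as np
    from skimage.segmentation import slic

    def enhance_bright_regions(image, n_segments=400, brightness_thresh=0.85, gamma=0.8):
        """image: float RGB array in [0, 1]; bright superpixels get a separate gamma lift."""
        labels = slic(image, n_segments=n_segments, compactness=10.0)
        out = image.copy()
        luminance = image.mean(axis=2)
        for lab in np.unique(labels):
            mask = labels == lab
            if luminance[mask].mean() > brightness_thresh:   # treat as a very bright area
                out[mask] = out[mask] ** gamma               # enhance this region independently
        return np.clip(out, 0.0, 1.0)

    img = np.clip(np.random.rand(120, 160, 3) + 0.3, 0.0, 1.0)
    print(enhance_bright_regions(img).shape)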

  10. A kinesthetic washout filter for force-feedback rendering.

    Science.gov (United States)

    Danieau, Fabien; Lecuyer, Anatole; Guillotel, Philippe; Fleureau, Julien; Mollet, Nicolas; Christie, Marc

    2015-01-01

    Today haptic feedback can be designed and associated to audiovisual content (haptic-audiovisuals or HAV). Although there are multiple means to create individual haptic effects, the issue of how to properly adapt such effects on force-feedback devices has not been addressed and is mostly a manual endeavor. We propose a new approach for the haptic rendering of HAV, based on a washout filter for force-feedback devices. A body model and an inverse kinematics algorithm simulate the user's kinesthetic perception. Then, the haptic rendering is adapted in order to handle transitions between haptic effects and to optimize the amplitude of effects regarding the device capabilities. Results of a user study show that this new haptic rendering can successfully improve the HAV experience.

  11. Developing a Tile-Based Rendering Method to Improve Rendering Speed of 3D Geospatial Data with HTML5 and WebGL

    Directory of Open Access Journals (Sweden)

    Seokchan Kang

    2017-01-01

    Dedicated plug-ins have been installed to visualize three-dimensional (3D) city modeling spatial data in web-based applications. However, plug-in methods are gradually becoming obsolete, owing to installation errors, lack of cross-browser support, and security vulnerabilities. In particular, in 2015 the NPAPI service was terminated in most existing web browsers except Internet Explorer. To overcome these problems, the HTML5/WebGL technology (the next-generation web standard, confirmed in October 2014) emerged; WebGL can display 3D spatial data in browsers without plug-ins. In this study, we identify the requirements and limitations of displaying 3D city modeling spatial data using HTML5/WebGL, and we propose an alternative approach based on a bin-packing algorithm that aggregates individual 3D city modeling data, including buildings, into tile units. The proposed method reduces the operational complexity and the number and volume of transmissions required for rendering, thereby improving the speed of 3D data rendering. The method was validated on real data to evaluate its effectiveness for 3D visualization of city modeling data in web-based applications.
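
    The tile aggregation relies on 2D bin packing. The following minimal first-fit "shelf" packer only illustrates the general idea of grouping many small items (for example per-building texture patches or footprints) into fixed-size tiles to reduce the number of transfers; it is not the paper's exact algorithm.

    # Minimal first-fit shelf bin packing sketch (illustrative only).
    def shelf_pack(items, tile_w, tile_h):
        """items: list of (w, h). Returns placements as (tile_index, x, y, w, h)."""
        placements, tiles = [], []                          # each tile holds a list of shelves
        for w, h in sorted(items, key=lambda s: -s[1]):     # tallest first helps shelf packing
            placed = False
            for ti, tile in enumerate(tiles):
                for shelf in tile["shelves"]:               # try existing shelves in this tile
                    if shelf["x"] + w <= tile_w and h <= shelf["h"]:
                        placements.append((ti, shelf["x"], shelf["y"], w, h))
                        shelf["x"] += w
                        placed = True
                        break
                if not placed and tile["y"] + h <= tile_h:  # open a new shelf in this tile
                    placements.append((ti, 0, tile["y"], w, h))
                    tile["shelves"].append({"x": w, "y": tile["y"], "h": h})
                    tile["y"] += h
                    placed = True
                if placed:
                    break
            if not placed:                                  # start a new tile
                tiles.append({"y": h, "shelves": [{"x": w, "y": 0, "h": h}]})
                placements.append((len(tiles) - 1, 0, 0, w, h))
        return placements

    print(shelf_pack([(30, 20), (50, 40), (20, 20), (60, 10)], tile_w=64, tile_h=64))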

  12. Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering

    KAUST Repository

    Fonarev, Alexander; Mikhalev, Alexander; Serdyukov, Pavel; Gusev, Gleb; Oseledets, Ivan

    2017-01-01

    preference information for making good recommendations. One of the most successful approaches, called Representative Based Matrix Factorization, is based on Maxvol algorithm. Unfortunately, this approach has one important limitation - a seed set of a

  13. ACCURATUM: improved calcium volume scoring using a mesh-based algorithm - a phantom study

    International Nuclear Information System (INIS)

    Saur, Stefan C.; Szekely, Gabor; Alkadhi, Hatem; Desbiolles, Lotus; Cattin, Philippe C.

    2009-01-01

    To overcome the limitations of the classical volume scoring method for quantifying coronary calcifications, including accuracy, variability between examinations, and dependency on plaque density and acquisition parameters, a mesh-based volume measurement method has been developed. It was evaluated and compared with the classical volume scoring method for accuracy, i.e., the normalized volume (measured volume/ground-truthed volume), and for variability between examinations (standard deviation of accuracy). A cardiac computed-tomography (CT) phantom containing various cylindrical calcifications was scanned using different tube voltages and reconstruction kernels, at various positions and orientations on the CT table and using different slice thicknesses. Mean accuracy for all plaques was significantly higher (p<0.0001) for the proposed method (1.220±0.507) than for the classical volume score (1.896±1.095). In contrast to the classical volume score, plaque density (p=0.84), reconstruction kernel (p=0.19), and tube voltage (p=0.27) had no impact on the accuracy of the developed method. In conclusion, the method presented herein is more accurate than classical calcium scoring and is less dependent on tube voltage, reconstruction kernel, and plaque density. (orig.)

  14. Visualization of plasma collision phenomenon by particle based rendering

    International Nuclear Information System (INIS)

    Yamamoto, Takeshi; Takagishi, Hironori; Hasegawa, Kyoko; Nakata, Susumu; Tanaka, Satoshi; Tanaka, Kazuo

    2012-01-01

    In this paper, we visualize the plasma collision phenomenon based on XYT-space (space and time) volume data to support research in plasma physics. We create 3D volume data in XYT-space by stacking a time series of XY-plane photographic images taken in the experiment. As a result, the entire time behavior of the plasma plume can be visualized in a single still image. In addition, we adopt 'fused' visualization based on the particle-based rendering technique, which makes it easy to fuse volume renderings of different materials and to compare the physics of different elements in flexible ways. We also propose a method to generate pseudo-3D images from pictures shot by ICCD cameras from two perspectives, top and side. (author)

  15. Hybrid fur rendering: combining volumetric fur with explicit hair strands

    DEFF Research Database (Denmark)

    Andersen, Tobias Grønbeck; Falster, Viggo; Frisvad, Jeppe Revall

    2016-01-01

    Hair is typically modeled and rendered using either explicitly defined hair strand geometry or a volume texture of hair densities. Taken on its own, each of these two hair representations has difficulties in the case of animal fur, as it consists of very dense and thin undercoat hairs in combination with coarse guard hairs. Explicit hair strand geometry is not well-suited for the undercoat hairs, while volume textures are not well-suited for the guard hairs. To efficiently model and render both guard hairs and undercoat hairs, we present a hybrid technique that combines rasterization of explicitly defined guard hairs with ray marching of a prismatic shell volume with dynamic resolution. The latter is the key to practical combination of the two techniques, and it also enables a high degree of detail in the undercoat. We demonstrate that our hybrid technique creates a more detailed and soft fur...

  16. Binaural Rendering in MPEG Surround

    Directory of Open Access Journals (Sweden)

    Kristofer Kjörling

    2008-04-01

    This paper describes novel methods for evoking a multichannel audio experience over stereo headphones. In contrast to the conventional convolution-based approach where, for example, five input channels are filtered using ten head-related transfer functions, the current approach is based on a parametric representation of the multichannel signal, along with either a parametric representation of the head-related transfer functions or a reduced set of head-related transfer functions. An audio scene with multiple virtual sound sources is represented by a mono or a stereo downmix signal of all sound source signals, accompanied by certain statistical (spatial) properties. These statistical properties of the sound sources are either combined with statistical properties of head-related transfer functions to estimate “binaural parameters” that represent the perceptually relevant aspects of the auditory scene or used to create a limited set of combined head-related transfer functions that can be applied directly on the downmix signal. Subsequently, a binaural rendering stage reinstates the statistical properties of the sound sources by applying the estimated binaural parameters or the reduced set of combined head-related transfer functions directly on the downmix. If combined with parametric multichannel audio coders such as MPEG Surround, the proposed methods are advantageous over conventional methods in terms of perceived quality and computational complexity.

  17. MR renography : An algorithm for calculation and correction of cortical volume averaging in medullary renographs

    NARCIS (Netherlands)

    de Priester, JA; den Boer, JA; Giele, ELW; Christiaans, MHL; Kessels, A; Hasman, A; van Engelshoven, JMA

    We evaluated a mathematical algorithm for the generation of medullary signal from raw dynamic magnetic resonance (MR) data. Five healthy volunteers were studied. MR examination consisted of a run of 100 T1-weighted coronal scans (gradient echo: TR/TE 11/3.4 msec, flip angle 60 degrees; slice

  18. RenderToolbox3: MATLAB tools that facilitate physically based stimulus rendering for vision research.

    Science.gov (United States)

    Heasly, Benjamin S; Cottaris, Nicolas P; Lichtman, Daniel P; Xiao, Bei; Brainard, David H

    2014-02-07

    RenderToolbox3 provides MATLAB utilities and prescribes a workflow that should be useful to researchers who want to employ graphics in the study of vision and perhaps in other endeavors as well. In particular, RenderToolbox3 facilitates rendering scene families in which various scene attributes and renderer behaviors are manipulated parametrically, enables spectral specification of object reflectance and illuminant spectra, enables the use of physically based material specifications, helps validate renderer output, and converts renderer output to physical units of radiance. This paper describes the design and functionality of the toolbox and discusses several examples that demonstrate its use. We have designed RenderToolbox3 to be portable across computer hardware and operating systems and to be free and open source (except for MATLAB itself). RenderToolbox3 is available at https://github.com/DavidBrainard/RenderToolbox3.

  19. Quantum Algorithms for Computational Physics: Volume 3 of Lattice Gas Dynamics

    Science.gov (United States)

    2007-01-03

    by the “divine” Greek mathematician Pythagoras. Today, most people normally think of the square root operation as it applies to a positive number ... attempted to prove the age-old isoperimetric theorem of geometry. It may be loosely stated as: a circle is the optimal closed curve, because of the ... ([Ĥn, Ĥm] ≠ 0), as shown by the Campbell-Baker-Hausdorff theorem. Nevertheless, in certain special cases (type-I quantum algorithms) we are able

  20. SeaWiFS Technical Report Series. Volume 42; Satellite Primary Productivity Data and Algorithm Development: A Science Plan for Mission to Planet Earth

    Science.gov (United States)

    Falkowski, Paul G.; Behrenfeld, Michael J.; Esaias, Wayne E.; Balch, William; Campbell, Janet W.; Iverson, Richard L.; Kiefer, Dale A.; Morel, Andre; Yoder, James A.; Hooker, Stanford B. (Editor)

    1998-01-01

    Two issues regarding primary productivity, as it pertains to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Program and the National Aeronautics and Space Administration (NASA) Mission to Planet Earth (MTPE) are presented in this volume. Chapter 1 describes the development of a science plan for deriving primary production for the world ocean using satellite measurements, by the Ocean Primary Productivity Working Group (OPPWG). Chapter 2 presents discussions by the same group, of algorithm classification, algorithm parameterization and data availability, algorithm testing and validation, and the benefits of a consensus primary productivity algorithm.

  1. Hardware Algorithms For Tile-Based Real-Time Rendering

    NARCIS (Netherlands)

    Crisu, D.

    2012-01-01

    In this dissertation, we present the GRAphics AcceLerator (GRAAL) framework for developing embedded tile-based rasterization hardware for mobile devices, meant to accelerate real-time 3-D graphics (OpenGL compliant) applications. The goal of the framework is a low-cost, low-power, high-performance

  2. Distributed Database Control and Allocation. Volume 1. Frameworks for Understanding Concurrency Control and Recovery Algorithms.

    Science.gov (United States)

    1983-10-01

    an Aborti, it forwards the operation directly to the recovery system. When the recovery system acknowledges that the operation has been processed, the ... list ... Aborti. Write Ti into the abort list. Then undo all of Ti's writes by reading their before-images from the audit trail and writing them back ... into the stable database. [Ack] Then, delete Ti from the active list. Restart. Process Aborti for each Ti on the active list. [Ack] In this algorithm

  3. Image Based Rendering and Virtual Reality

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    The presentation gives an overview of Image Based Rendering approaches and their use in Virtual Reality, including Virtual Photography and Cinematography, and Mobile Robot Navigation.

  4. Moisture movements in render on brick wall

    DEFF Research Database (Denmark)

    Hansen, Kurt Kielsgaard; Munch, Thomas Astrup; Thorsen, Peter Schjørmann

    2003-01-01

    A three-layer render on brick wall used for building facades is studied in the laboratory. The vertical render surface is held in contact with water for 24 hours simulating driving rain while it is measured with non-destructive X-ray equipment every hour in order to follow the moisture front...

  5. Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering

    KAUST Repository

    Fonarev, Alexander

    2017-02-07

    Cold start problem in Collaborative Filtering can be solved by asking new users to rate a small seed set of representative items or by asking representative users to rate a new item. The question is how to build a seed set that can give enough preference information for making good recommendations. One of the most successful approaches, called Representative Based Matrix Factorization, is based on Maxvol algorithm. Unfortunately, this approach has one important limitation - a seed set of a particular size requires a rating matrix factorization of fixed rank that should coincide with that size. This is not necessarily optimal in the general case. In the current paper, we introduce a fast algorithm for an analytical generalization of this approach that we call Rectangular Maxvol. It allows the rank of factorization to be lower than the required size of the seed set. Moreover, the paper includes the theoretical analysis of the method's error, the complexity analysis of the existing methods and the comparison to the state-of-the-art approaches.

  6. Physically based rendering from theory to implementation

    CERN Document Server

    Pharr, Matt

    2010-01-01

    "Physically Based Rendering, 2nd Edition" describes both the mathematical theory behind a modern photorealistic rendering system as well as its practical implementation. A method - known as 'literate programming'- combines human-readable documentation and source code into a single reference that is specifically designed to aid comprehension. The result is a stunning achievement in graphics education. Through the ideas and software in this book, you will learn to design and employ a full-featured rendering system for creating stunning imagery. This book features new sections on subsurface scattering, Metropolis light transport, precomputed light transport, multispectral rendering, and much more. It includes a companion site complete with source code for the rendering system described in the book, with support for Windows, OS X, and Linux. Code and text are tightly woven together through a unique indexing feature that lists each function, variable, and method on the page that they are first described.

  7. Tomographic image reconstruction and rendering with texture-mapping hardware

    International Nuclear Information System (INIS)

    Azevedo, S.G.; Cabral, B.K.; Foran, J.

    1994-07-01

    The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around a 600-fold speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and volumetric rendering. The techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalty.
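
    A CPU reference for the backprojection step that the texture-mapping hardware accelerates: each (filtered) projection is smeared across the image at its acquisition angle and the contributions are summed, which is exactly the "image warping with summing" described above. The sketch below is illustrative only and omits the ramp filtering.

    # Simple backprojection by rotating and accumulating replicated projections.
    import numpy as np
    from scipy.ndimage import rotate

    def backproject(sinogram, angles_deg, size):
        """sinogram: (n_angles, n_detectors) array; returns a size x size reconstruction."""
        recon = np.zeros((size, size))
        for proj, angle in zip(sinogram, angles_deg):
            smear = np.tile(proj, (size, 1))                # constant along the ray direction
            recon += rotate(smear, angle, reshape=False, order=1)
        return recon * np.pi / (2 * len(angles_deg))

    angles = np.linspace(0.0, 180.0, 90, endpoint=False)
    sino = np.random.rand(len(angles), 128)                 # stand-in for filtered projections
    print(backproject(sino, angles, size=128).shape)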

  8. Influence of different contributions of scatter and attenuation on the threshold values in contrast-based algorithms for volume segmentation.

    Science.gov (United States)

    Matheoud, Roberta; Della Monica, Patrizia; Secco, Chiara; Loi, Gianfranco; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco

    2011-01-01

    The aim of this work is to evaluate the role of different amounts of attenuation and scatter on FDG-PET image volume segmentation using a contrast-oriented method based on the target-to-background (TB) ratio and target dimensions. A phantom study was designed employing three phantom sets, which provided a clinically relevant range of attenuation and scatter conditions, each equipped with six spheres of different volumes (0.5-26.5 ml). The phantoms were: (1) the Hoffman 3-dimensional brain phantom, (2) a modified International Electrotechnical Commission (IEC) phantom with an annular ring of water bags of 3 cm thickness fitted over the IEC phantom, and (3) a modified IEC phantom with an annular ring of water bags of 9 cm. The phantom cavities were filled with a solution of FDG at 5.4 kBq/ml activity concentration, and the spheres with activity concentration ratios of about 16, 8, and 4 times the background activity concentration. Images were acquired with a Biograph 16 HI-REZ PET/CT scanner. Thresholds (TS) were determined as a percentage of the maximum intensity in the cross-sectional area of the spheres. To reduce statistical fluctuations, a nominal maximum value was calculated as the mean of all voxels > 95% of the maximum. To find the TS value that yielded an area A best matching the true value, the cross sections were auto-contoured in the attenuation-corrected slices, varying TS in steps of 1% until the area so determined differed by less than 10 mm² from its known physical value. Multiple regression methods were used to derive an adaptive thresholding algorithm and to test its dependence on different conditions of attenuation and scatter. The errors of scatter and attenuation correction increased with increasing amounts of attenuation and scatter in the phantoms. Despite these increasing inaccuracies, the PET threshold segmentation algorithms were not influenced by the different conditions of attenuation and scatter. The test of the hypothesis of coincident regression lines for the three phantoms used
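
    A hedged sketch of the threshold search described above: the nominal maximum is the mean of all voxels above 95% of the peak, and TS is swept in 1% steps until the segmented cross-sectional area matches the known physical area within 10 mm². The code is illustrative and is not the authors' implementation.

    # Contrast-oriented threshold search on a single attenuation-corrected slice (sketch).
    import numpy as np

    def find_threshold(slice_img, true_area_mm2, pixel_area_mm2, tol_mm2=10.0):
        nominal_max = slice_img[slice_img > 0.95 * slice_img.max()].mean()
        for ts_percent in range(100, 0, -1):                 # sweep TS in 1% steps
            area = np.count_nonzero(slice_img >= ts_percent / 100.0 * nominal_max) * pixel_area_mm2
            if abs(area - true_area_mm2) < tol_mm2:
                return ts_percent, area
        return None, None

    # toy example: a hot disc of known area on a warm background (4 mm^2 pixels assumed)
    yy, xx = np.mgrid[:64, :64]
    mask = (xx - 32) ** 2 + (yy - 32) ** 2 < 10 ** 2
    img = 1.0 + 7.0 * mask + 0.1 * np.random.rand(64, 64)
    print(find_threshold(img, true_area_mm2=mask.sum() * 4.0, pixel_area_mm2=4.0))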

  9. Real-time interactive three-dimensional display of CT and MR imaging volume data

    International Nuclear Information System (INIS)

    Yla-Jaaski, J.; Kubler, O.; Kikinis, R.

    1987-01-01

    Real-time reconstruction of surfaces from CT and MR imaging volume data is demonstrated using a new algorithm and implementation in a parallel computer system. The display algorithm accepts noncubic 16-bit voxels directly as input. Operations such as interpolation, classification by thresholding, depth coding, simple lighting effects, and removal of parts of the volume by clipping planes are all supported on-line. An eight-processor implementation of the algorithm renders surfaces from typical CT data sets in real time to allow interactive rotation of the volume

  10. Emotion rendering in auditory simulations of imagined walking styles

    DEFF Research Database (Denmark)

    Turchet, Luca; Rodá, Antonio

    2016-01-01

    This paper investigated how different emotional states of a walker can be rendered and recognized by means of footstep sound synthesis algorithms. In a first experiment, participants were asked to render, according to imagined walking scenarios, five emotions (aggressive, happy, neutral, sad, and tender) by manipulating the parameters of synthetic footstep sounds simulating various combinations of surface materials and shoe types. The results made it possible to identify, for the involved emotions and sound conditions, the mean values and ranges of variation of two parameters: sound level and temporal distance between consecutive steps. The results were in accordance with those reported in previous studies on real walking, suggesting that the expression of emotions in walking is independent of the real or imagined motor activity. In a second experiment participants were asked to identify the emotions...

  11. Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, L. (ed.)

    1979-01-01

    The goals of this workshop are to identify which of the existing computer codes for solving the coupled equations of quantum molecular scattering theory perform most efficiently on a variety of test problems, and to make tested versions of those codes available to the chemistry community through the NRCC software library. To this end, many of the most active developers and users of these codes have been invited to discuss the methods and to solve a set of test problems using the LBL computers. The first volume of this workshop report is a collection of the manuscripts of the talks that were presented at the first meeting held at the Argonne National Laboratory, Argonne, Illinois June 25-27, 1979. It is hoped that this will serve as an up-to-date reference to the most popular methods with their latest refinements and implementations.

  12. Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I

    International Nuclear Information System (INIS)

    Thomas, L.

    1979-01-01

    The goals of this workshop are to identify which of the existing computer codes for solving the coupled equations of quantum molecular scattering theory perform most efficiently on a variety of test problems, and to make tested versions of those codes available to the chemistry community through the NRCC software library. To this end, many of the most active developers and users of these codes have been invited to discuss the methods and to solve a set of test problems using the LBL computers. The first volume of this workshop report is a collection of the manuscripts of the talks that were presented at the first meeting held at the Argonne National Laboratory, Argonne, Illinois June 25-27, 1979. It is hoped that this will serve as an up-to-date reference to the most popular methods with their latest refinements and implementations

  13. Volume-of-fluid algorithm on a non-orthogonal grid

    International Nuclear Information System (INIS)

    Jang, W.; Lien, F.S.; Ji, H.

    2005-01-01

    In the present study, a novel VOF method on a non-orthogonal grid is proposed and tested on several benchmark problems, including a simple translation test, a reversed single vortex flow and a shearing flow, with the objective of demonstrating the feasibility and accuracy of the present approach. Excellent agreement between the solutions obtained on orthogonal and non-orthogonal meshes is achieved. The sensitivity of various methods, measured by the L1 error, in evaluating the interface normal and volume flux at each face of a non-orthogonal cell is examined. Time integration methods based on the operator-splitting approach in curvilinear coordinates, including the explicit-implicit (EX-IM) and explicit-explicit (EX-EX) combinations, are tested. (author)

  14. SU-F-J-115: Target Volume and Artifact Evaluation of a New Device-Less 4D CT Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Martin, R; Pan, T [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: 4DCT is often used in radiation therapy treatment planning to define the extent of motion of the visible tumor (IGTV). Recently available software allows 4DCT images to be created without the use of an external motion surrogate. This study aims to compare this device-less algorithm to a standard device-driven technique (RPM) with regard to artifacts and the creation of treatment volumes. Methods: 34 lung cancer patients who had previously received a cine 4DCT scan on a GE scanner with an RPM-determined respiratory signal were selected. Cine images were sorted into 10 phases based on both the RPM signal and the device-less algorithm. Contours were created on standard and device-less maximum intensity projection (MIP) images using a region-growing algorithm and manual adjustment to remove other structures. Variations in measurements due to intra-observer differences in contouring were assessed by repeating a subset of 6 patients 2 additional times. Artifacts in each phase image were assessed using normalized cross correlation at each bed position transition. A score between +1 (artifacts “better” in all phases for device-less) and −1 (RPM similarly better) was assigned for each patient based on these results. Results: Device-less IGTV contours were 2.1 ± 1.0% smaller than standard IGTV contours (not significant, p = 0.15). The Dice similarity coefficient (DSC) was 0.950 ± 0.006, indicating good similarity between the contours. Intra-observer variation resulted in standard deviations of 1.2 percentage points in percent volume difference and 0.005 in DSC measurements. Only two patients had improved artifacts with RPM, and the average artifact score (0.40) was significantly greater than zero. Conclusion: Device-less 4DCT can be used in place of the standard method for target definition, since no difference was observed between standard and device-less IGTVs. Phase image artifacts were significantly reduced with the device-less method.
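
    The Dice similarity coefficient (DSC) used above to compare standard and device-less IGTVs is a simple overlap measure; a minimal sketch, assuming binary masks on a common voxel grid:

    # Dice similarity coefficient between two binary contour masks.
    import numpy as np

    def dice(mask_a, mask_b):
        """DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 means identical volumes."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    a = np.zeros((32, 32, 32), dtype=bool); a[8:24, 8:24, 8:24] = True
    b = np.zeros_like(a);                   b[9:25, 8:24, 8:24] = True
    print("DSC = %.3f" % dice(a, b))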

  15. An algorithm based on OmniView technology to reconstruct sagittal and coronal planes of the fetal brain from volume datasets acquired by three-dimensional ultrasound.

    Science.gov (United States)

    Rizzo, G; Capponi, A; Pietrolucci, M E; Capece, A; Aiello, E; Mammarella, S; Arduini, D

    2011-08-01

    To describe a novel algorithm, based on the new display technology 'OmniView', developed to visualize diagnostic sagittal and coronal planes of the fetal brain from volumes obtained by three-dimensional (3D) ultrasonography. We developed an algorithm to image standard neurosonographic planes by drawing dissecting lines through the axial transventricular view of 3D volume datasets acquired transabdominally. The algorithm was tested on 106 normal fetuses at 18-24 weeks of gestation and the visualization rates of brain diagnostic planes were evaluated by two independent reviewers. The algorithm was also applied to nine cases with proven brain defects. The two reviewers, using the algorithm on normal fetuses, found satisfactory images with visualization rates ranging between 71.7% and 96.2% for sagittal planes and between 76.4% and 90.6% for coronal planes. The agreement rate between the two reviewers, as expressed by Cohen's kappa coefficient, was > 0.93 for sagittal planes and > 0.89 for coronal planes. All nine abnormal volumes were identified by a single observer from among a series including normal brains, and eight of these nine cases were diagnosed correctly. This novel algorithm can be used to visualize standard sagittal and coronal planes in the fetal brain. This approach may simplify the examination of the fetal brain and reduce dependency of success on operator skill. Copyright © 2011 ISUOG. Published by John Wiley & Sons, Ltd.

  16. An improved method of continuous LOD based on fractal theory in terrain rendering

    Science.gov (United States)

    Lin, Lan; Li, Lijun

    2007-11-01

    With the improvement of computer graphics hardware, 3D terrain rendering has become a hot topic in real-time visualization. To resolve the conflict between rendering speed and rendering realism, this paper presents an improved terrain rendering method that extends the traditional continuous level-of-detail technique with fractal theory. With this method, the program need not repeatedly access memory to obtain terrain models at different resolutions; instead, it obtains the fractal characteristic parameters of each region according to the movement of the viewpoint. Experimental results show that the method preserves the authenticity of the landscape while increasing the speed of real-time 3D terrain rendering.

  17. RenderGAN: Generating Realistic Labeled Data

    Directory of Open Access Journals (Sweden)

    Leon Sixt

    2018-06-01

    Deep Convolutional Neural Networks (DCNNs) are showing remarkable performance on many computer vision tasks. Due to their large parameter space, they require many labeled samples when trained in a supervised setting. The costs of annotating data manually can render the use of DCNNs infeasible. We present a novel framework called RenderGAN that can generate large amounts of realistic, labeled images by combining a 3D model and the Generative Adversarial Network framework. In our approach, image augmentations (e.g., lighting, background, and detail) are learned from unlabeled data such that the generated images are strikingly realistic while preserving the labels known from the 3D model. We apply the RenderGAN framework to generate images of barcode-like markers that are attached to honeybees. Training a DCNN on data generated by the RenderGAN yields considerably better performance than training it on various baselines.

  18. Iso-surface volume rendering for implant surgery

    NARCIS (Netherlands)

    van Foreest-Timp, Sheila; Lemke, H.U.; Inamura, K.; Doi, K.; Vannier, M.W.; Farman, A.G.

    2001-01-01

    Many clinical situations ask for the simultaneous visualization of anatomical surfaces and synthetic meshes. Common examples include hip replacement surgery, intra-operative visualization of surgical instruments or probes, visualization of planning information, or implant surgery. To be useful for

  19. Clustered deep shadow maps for integrated polyhedral and volume rendering

    KAUST Repository

    Bornik, Alexander; Knecht, Wolfgang; Hadwiger, Markus; Schmalstieg, Dieter

    2012-01-01

    This paper presents a hardware-accelerated approach for shadow computation in scenes containing both complex volumetric objects and polyhedral models. Our system is the first hardware accelerated complete implementation of deep shadow maps, which

  20. Sigmoid Colon Elongation Evaluation by Volume Rendering Technique

    Directory of Open Access Journals (Sweden)

    Atilla SENAYLI

    2011-06-01

    Sigmoid colons vary in length, shape, and configuration between individuals. Few clinical trials have addressed whether sigmoid colon maldevelopment predicts a risk of volvulus; sigmoid colon measurement may therefore help in assessing that risk. In one study, sigmoid colon dimensions were evaluated during abdominal surgery, and the median length was 47 cm and the median vertical mesocolon length 13 cm. We report a 14-year-old female patient whose sigmoid colon measured nearly 54 cm. We used computed tomography for this evaluation. MRI has also been used for this purpose, but there are no data on MRI predicting sigmoid volvulus. We hope that these findings contribute to the sparse literature on sigmoid elongation. [J Contemp Med 2011; 1(2): 71-73]

  1. Standardized rendering from IR surveillance motion imagery

    Science.gov (United States)

    Prokoski, F. J.

    2014-06-01

    Government agencies, including defense and law enforcement, increasingly make use of video from surveillance systems and camera phones owned by non-government entities. Making advanced and standardized motion imaging technology available to private and commercial users at cost-effective prices would benefit all parties. In particular, incorporating thermal infrared into commercial surveillance systems offers substantial benefits beyond night vision capability. Face rendering is a process to facilitate exploitation of thermal infrared surveillance imagery from the general area of a crime scene, to assist investigations with and without cooperating eyewitnesses. Face rendering automatically generates greyscale representations similar to police artist sketches for faces in surveillance imagery collected from proximate locations and times to a crime under investigation. Near-realtime generation of face renderings can provide law enforcement with an investigation tool to assess witness memory and credibility, and integrate reports from multiple eyewitnesses. Renderings can be quickly disseminated through social media to warn of a person who may pose an immediate threat, and to solicit the public's help in identifying possible suspects and witnesses. Renderings are pose-standardized so as not to divulge the presence and location of eyewitnesses and surveillance cameras. Incorporation of thermal infrared imaging into commercial surveillance systems will significantly improve system performance, and reduce manual review times, at an incremental cost that will continue to decrease. Benefits to criminal justice would include improved reliability of eyewitness testimony and improved accuracy of distinguishing among minority groups in eyewitness and surveillance identifications.

  2. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    Science.gov (United States)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: The gaze remains focused onto the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
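
    The constraint evaluated per pixel in a SIRDS is that two screen positions whose separation encodes the local depth must receive the same colour. Below is a minimal CPU sketch of a conventional SIRDS generator (eye separation and depth scaling are arbitrary choices, and hidden-surface handling is omitted); the paper's contribution is evaluating this parallax per vertex and rendering the dot pattern on texture-mapping hardware.

    # Simplified single-image random-dot stereogram generator (illustrative only).
    import numpy as np

    def sirds(depth, e=80, mu=0.33):
        """depth: 2D array in [0, 1] (1 = near). Returns a random-dot image of the same size."""
        h, w = depth.shape
        img = np.zeros((h, w), dtype=np.uint8)
        rng = np.random.default_rng(0)
        for y in range(h):
            same = np.arange(w)                              # same[x] = pixel forced equal to x
            for x in range(w):
                sep = int(round(e * (1 - mu * depth[y, x]) / (2 - mu * depth[y, x])))
                left, right = x - sep // 2, x - sep // 2 + sep
                if 0 <= left and right < w:
                    same[right] = left                       # constrain the two pixels to match
            row = (rng.integers(0, 2, size=w) * 255).astype(np.uint8)
            for x in range(w):
                if same[x] != x:
                    row[x] = row[same[x]]                    # same[x] < x, so already resolved
            img[y] = row
        return img

    yy, xx = np.mgrid[:128, :256]
    depth = ((xx - 128) ** 2 + (yy - 64) ** 2 < 40 ** 2).astype(float)   # a raised disc
    print(sirds(depth).shape)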

  3. Earth mortars and earth-lime renders

    Directory of Open Access Journals (Sweden)

    Maria Fernandes

    2008-01-01

    Full Text Available Earth surface coatings play a decorative architectural role, apart from their function as wall protection. In Portuguese vernacular architecture, earth mortars were usually applied on stone masonry, while earth renders and plasters were used on indoors surface coatings. Limestone exists only in certain areas of the country and consequently lime was not easily available everywhere, especially on granite and schist regions where stone masonry was a current building technique. In the central west coast of Portugal, the lime slaking procedure entailed slaking the quicklime mixed with earth (sandy soil, in a pit; the resulting mixture would then be combined in a mortar or plaster. This was also the procedure for manufactured adobes stabilized with lime. Adobe buildings with earth-lime renderings and plasters were also traditional in the same region, using lime putty and lime wash for final coat and decoration. Classic decoration on earth architecture from the 18th-19th century was in many countries a consequence of the François Cointeraux (1740-1830 manuals - Les Cahiers d'Architecture Rurale" (1793 - a French guide for earth architecture and building construction. This manual arrived to Portugal in the beginning of XIX century, but was never translated to Portuguese. References about decoration for earth houses were explained on this manual, as well as procedures about earth-lime renders and ornamentation of earth walls; in fact, these procedures are exactly the same as the ones used in adobe buildings in this Portuguese region. The specific purpose of the present paper is to show some cases of earth mortars, renders and plasters on stone buildings in Portugal and to explain the methods of producing earth-lime renders, and also to show some examples of rendering and coating with earth-lime in Portuguese adobe vernacular architecture.

  4. Comparison of 2D and 3D algorithms for adding a margin to the gross tumor volume in the conformal radiotherapy planning of prostate cancer

    International Nuclear Information System (INIS)

    Khoo, V.S.; Bedford, J.L.; Webb, S.; Dearnaley, D.P.

    1997-01-01

    Purpose: To evaluate the adequacy of tumor volume coverage using a three dimensional (3D) margin growing algorithm compared to a two dimensional (2D) margin growing algorithm in the conformal radiotherapy planning of prostate cancer. Methods and Materials: Two gross tumor volumes (GTV) were segmented in each of ten patients with localized prostate cancer: prostate gland only (PO) and prostate with seminal vesicles (PSV). A margin of 10 mm was applied to these two groups (PO and PSV) using both the 2D and 3D margin growing algorithms. The true planning target volume (PTV) was defined as the region delineated by the 3D algorithm. Adequacy of geometric coverage of the GTV with the two algorithms was examined throughout the target volume. Discrepancies between the two margin methods were measured in the transaxial plane. Results: The 2D algorithm underestimated the PTV by 17% (range 12-20) in the PO group and by 20% (range 13-28) for the PSV group when compared to the 3D algorithm. For both the PO and PSV groups, the inferior coverage of the PTV was consistently underestimated by the 2D margin algorithm when compared to the 3D margins with a mean radial distance of 4.8 mm (range 0-10). In the central region of the prostate gland, the anterior, posterior, and lateral PTV borders were underestimated with the 2D margin in both the PO and PSV groups by a mean of 3.6 mm (range 0-9), 2.1 mm (range 0-8), and 1.8 (range 0-9) respectively. The PTV coverage of the PO group superiorly was radially underestimated by 4.5mm (range 0-14) when comparing the 2D margins to the 3D margins. For the PSV group, the junction region between the prostate and the seminal vesicles was underestimated by the 2D margin by a mean transaxial distance of 18.1 mm in the anterior PTV border (range 4-30), 7.2 mm posteriorly (range 0-20), and 3.7 mm laterally (range 0-14). The superior region of the seminal vesicles in the PSV group was also consistently underestimated with a radial discrepancy of 3.3 mm
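
    The difference between per-slice (2D) and volumetric (3D) margin growing can be reproduced with binary dilation. The sketch below is illustrative only: it assumes isotropic 2 mm voxels and a simple block-shaped GTV, whereas a clinical system would use exact expansion with the true voxel spacing.

    # 2D (slice-by-slice) versus 3D margin growing by binary dilation (sketch).
    import numpy as np
    from scipy.ndimage import binary_dilation

    def ball(radius_vox):
        r = int(radius_vox)
        zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
        return zz ** 2 + yy ** 2 + xx ** 2 <= radius_vox ** 2

    def grow_margin_3d(gtv, margin_vox):
        return binary_dilation(gtv, structure=ball(margin_vox))

    def grow_margin_2d(gtv, margin_vox):
        disk = ball(margin_vox)[int(margin_vox)]             # central slice of the ball = a disk
        return np.stack([binary_dilation(sl, structure=disk) for sl in gtv])

    gtv = np.zeros((40, 64, 64), dtype=bool)
    gtv[15:25, 24:40, 24:40] = True                          # toy GTV block on a 2 mm voxel grid
    ptv3d, ptv2d = grow_margin_3d(gtv, 5), grow_margin_2d(gtv, 5)   # 10 mm margin = 5 voxels
    print("2D underestimates the 3D PTV by %.1f%%"
          % (100.0 * (ptv3d.sum() - ptv2d.sum()) / ptv3d.sum()))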

  5. Digital color acquisition, perception, coding and rendering

    CERN Document Server

    Fernandez-Maloigne, Christine; Macaire, Ludovic

    2013-01-01

    In this book the authors identify the basic concepts and recent advances in the acquisition, perception, coding and rendering of color. The fundamental aspects related to the science of colorimetry in relation to physiology (the human visual system) are addressed, as are constancy and color appearance. It also addresses the more technical aspects related to sensors and the color management screen. Particular attention is paid to the notion of color rendering in computer graphics. Beyond color, the authors also look at coding, compression, protection and quality of color images and videos.

  6. Haptic rendering for simulation of fine manipulation

    CERN Document Server

    Wang, Dangxiao; Zhang, Yuru

    2014-01-01

    This book introduces the latest progress in six degrees of freedom (6-DoF) haptic rendering with the focus on a new approach for simulating force/torque feedback in performing tasks that require dexterous manipulation skills. One of the major challenges in 6-DoF haptic rendering is to resolve the conflict between high speed and high fidelity requirements, especially in simulating a tool interacting with both rigid and deformable objects in a narrow space and with fine features. The book presents a configuration-based optimization approach to tackle this challenge. Addressing a key issue in man

  7. Blender cycles lighting and rendering cookbook

    CERN Document Server

    Iraci, Bernardo

    2013-01-01

    An in-depth guide full of step-by-step recipes to explore the concepts behind the usage of Cycles. Packed with illustrations, and lots of tips and tricks; the easy-to-understand nature of the book will help the reader understand even the most complex concepts with ease.If you are a digital artist who already knows your way around Blender, and you want to learn about the new Cycles' rendering engine, this is the book for you. Even experts will be able to pick up new tips and tricks to make the most of the rendering capabilities of Cycles.

  8. GPU Pro 5 advanced rendering techniques

    CERN Document Server

    Engel, Wolfgang

    2014-01-01

    In GPU Pro5: Advanced Rendering Techniques, section editors Wolfgang Engel, Christopher Oat, Carsten Dachsbacher, Michal Valient, Wessam Bahnassi, and Marius Bjorge have once again assembled a high-quality collection of cutting-edge techniques for advanced graphics processing unit (GPU) programming. Divided into six sections, the book covers rendering, lighting, effects in image space, mobile devices, 3D engine design, and compute. It explores rasterization of liquids, ray tracing of art assets that would otherwise be used in a rasterized engine, physically based area lights, volumetric light

  9. Comparison of 2D and 3D algorithms for adding a margin to the gross tumor volume in the conformal radiotherapy planning of prostate cancer

    International Nuclear Information System (INIS)

    Khoo, Vincent S.; Bedford, James L.; Webb, Steve; Dearnaley, David P.

    1998-01-01

    Purpose: To evaluate the adequacy of tumor volume coverage using a three-dimensional (3D) margin-growing algorithm compared to a two-dimensional (2D) margin-growing algorithm in the conformal radiotherapy planning of prostate cancer. Methods and Materials: Two gross tumor volumes (GTV) were segmented in each of 10 patients with localized prostate cancer; prostate gland only (PO) and prostate with seminal vesicles (PSV). A predetermined margin of 10 mm was applied to these two groups (PO and PSV) using both 2D and 3D margin-growing algorithms. The 2D algorithm added a transaxial margin to each GTV slice, whereas the 3D algorithm added a volumetric margin all around the GTV. The true planning target volume (PTV) was defined as the region delineated by the 3D algorithm. The adequacy of geometric coverage of the GTV by the two algorithms was examined in a series of transaxial planes throughout the target volume. Results: The 2D margin-growing algorithm underestimated the PTV by 17% (range 12-20) in the PO group and by 20% (range 13-28) for the PSV group when compared to the 3D-margin algorithm. For the PO group, the mean transaxial difference between the 2D and 3D algorithm was 3.8 mm inferiorly (range 0-20), 1.8 mm centrally (range 0-9), and 4.4 mm superiorly (range 0-22). Considering all of these regions, the mean discrepancy anteriorly was 5.1 mm (range 0-22), posteriorly 2.2 (range 0-20), right border 2.8 mm (range 0-14), and left border 3.1 mm (range 0-12). For the PSV group, the mean discrepancy in the inferior region was 3.8 mm (range 0-20), central region of the prostate was 1.8 mm ( range 0-9), the junction region of the prostate and the seminal vesicles was 5.5 mm (range 0-30), and the superior region of the seminal vesicles was 4.2 mm (range 0-55). When the different borders were considered in the PSV group, the mean discrepancies for the anterior, posterior, right, and left borders were 6.4 mm (range 0-55), 2.5 mm (range 0-20), 2.6 mm (range 0-14), and 3

  10. Effect of the averaging volume and algorithm on the in situ electric field for uniform electric- and magnetic-field exposures

    International Nuclear Information System (INIS)

    Hirata, Akimasa; Takano, Yukinori; Fujiwara, Osamu; Kamimura, Yoshitsugu

    2010-01-01

    The present study quantified the volume-averaged in situ electric field in nerve tissues of anatomically based numeric Japanese male and female models for exposure to extremely low-frequency electric and magnetic fields. A quasi-static finite-difference time-domain method was applied to analyze this problem. The motivation of our investigation is that the dependence of the electric field induced in nerve tissue on the averaging volume/distance is not clear, while a cubical volume of 5 × 5 × 5 mm³ or a straight-line segment of 5 mm is suggested in some documents. The influence of non-nerve tissue surrounding nerve tissue is also discussed by considering three algorithms for calculating the averaged in situ electric field in nerve tissue. The computational results obtained herein reveal that the volume-averaged electric field in the nerve tissue decreases with the averaging volume. In addition, the 99th percentile value of the volume-averaged in situ electric field in nerve tissue is more stable than the maximal value across different averaging volumes. When non-nerve tissue surrounding nerve tissue is included in the averaging volume, the resultant in situ electric fields are less dependent on the averaging volume than in the case excluding non-nerve tissue. In situ electric fields averaged over a distance of 5 mm were comparable to or larger than those for a 5 × 5 × 5 mm³ cube, depending on the algorithm, the nerve tissue considered and the exposure scenario. (note)
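
    A masked box average of the kind discussed above can be sketched in Python with SciPy; the grid resolution, array names and the two averaging variants shown are illustrative assumptions, not the authors' FDTD post-processing code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cube_averaged_field(E, nerve_mask, res_mm=2.0, cube_mm=5.0):
    """Average |E| over a cube_mm cube centred on every voxel.

    Two of the averaging choices discussed above are returned, restricted to
    nerve voxels: 'all' includes every voxel in the cube, 'nerve' averages
    only over the nerve voxels inside the cube (a masked mean).
    """
    size = max(1, int(round(cube_mm / res_mm)))
    mask = nerve_mask.astype(float)

    avg_all = uniform_filter(E, size=size)
    num = uniform_filter(E * mask, size=size)
    den = uniform_filter(mask, size=size)
    avg_nerve = np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)

    return avg_all[nerve_mask], avg_nerve[nerve_mask]

# The 99th percentile metric used in the study would then be, e.g.,
# np.percentile(avg_nerve_values, 99).
```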

  11. Fast rendering of scanned room geometries

    DEFF Research Database (Denmark)

    Olesen, Søren Krarup; Markovic, Milos; Hammershøi, Dorte

    2014-01-01

    Room acoustics are rendered in Virtual Realities based on models of the real world. These are typically rather coarse representations of the true geometry resulting in room impulse responses with a lack of natural detail. This problem can be overcome by using data scanned by sensors, such as e...

  12. Rendering Visible: Painting and Sexuate Subjectivity

    Science.gov (United States)

    Daley, Linda

    2015-01-01

    In this essay, I examine Luce Irigaray's aesthetic of sexual difference, which she develops by extrapolating from Paul Klee's idea that the role of painting is to render the non-visible rather than represent the visible. This idea is the premise of her analyses of phenomenology and psychoanalysis and their respective contributions to understanding…

  13. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    International Nuclear Information System (INIS)

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-01-01

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the
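
    The image-quality metric and the dose-reduction trick mentioned above (keeping only a fraction of the projections) are simple to express in Python; the definitions below are common conventions with hypothetical array names, not code from the study.

```python
import numpy as np

def contrast_to_noise_ratio(image, roi_mask, background_mask):
    """One common CNR definition: (mean ROI - mean background) / std background."""
    bg = image[background_mask]
    return (image[roi_mask].mean() - bg.mean()) / bg.std()

def simulate_lower_dose(projections, full_dose_mGy=500.0, target_dose_mGy=50.0):
    """Emulate a lower-dose scan by keeping a matching fraction of projections."""
    keep = max(1, int(len(projections) * target_dose_mGy / full_dose_mGy))
    idx = np.linspace(0, len(projections) - 1, keep).astype(int)  # evenly spaced subset
    return projections[idx]
```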

  14. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure.

    Science.gov (United States)

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-01

    Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the HDTV algorithm shows the

  15. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    Energy Technology Data Exchange (ETDEWEB)

    Maier, Joscha, E-mail: joscha.maier@dkfz.de [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Sawall, Stefan; Kachelrieß, Marc [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany and Institute of Medical Physics, University of Erlangen–Nürnberg, 91052 Erlangen (Germany)

    2014-05-15

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the

  16. A point-based rendering approach for real-time interaction on mobile devices

    Institute of Scientific and Technical Information of China (English)

    LIANG XiaoHui; ZHAO QinPing; HE ZhiYing; XIE Ke; LIU YuBo

    2009-01-01

    The mobile device is an important interactive platform. Due to its limited computation, memory, display area and energy, how to realize efficient, real-time interaction with 3D models on mobile devices is an important research topic. Considering the features of mobile devices, this paper adopts a remote rendering mode and point models, and then proposes a transmission and rendering approach that supports real-time interaction. First, an improved simplification algorithm based on MLS and the display resolution of the mobile device is proposed. Then, a hierarchy selection of point models and a QoS transmission control strategy are given based on the operator's area of interest, the interest degree of objects in the virtual environment and the rendering error; these measures reduce energy consumption. Finally, the rendering and interaction of point models are completed on mobile devices. The experiments show that our method is efficient.

  17. RAY TRACING RENDERING USING FRAGMENT ANTI ALIASING

    Directory of Open Access Journals (Sweden)

    Febriliyan Samopa

    2008-07-01

    Full Text Available Rendering is the generation of surface and three-dimensional effects on an object displayed on a monitor screen. Ray tracing, a rendering method that traces a ray for each image pixel, has a drawback, namely aliasing (the jaggies effect). There are several methods for performing anti aliasing; one of them is OGSS (Ordered Grid Super Sampling). OGSS performs anti aliasing well, but it requires more computation time since the sampling of every pixel in the image is increased. Fragment Anti Aliasing (FAA) is a new alternative method that copes with this drawback. FAA inspects the image while rendering a scene: the jaggies effect only occurs at curved and gradient objects, so only these parts of an object undergo sampling magnification. After the magnified samples and their pixel values are computed, downsampling is performed to retrieve the final pixel values. Experimental results show that the software implements ray tracing correctly to form images, and that it can apply both the FAA and OGSS techniques for anti aliasing. In general, rendering using FAA is faster than using OGSS
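
    The resolve step of ordered grid super sampling (render at k-times resolution, then box-average each k-by-k block) can be sketched in a few lines of Python; the array shapes are assumptions, and a fragment-based scheme such as FAA would restrict the extra sampling to edge fragments only.

```python
import numpy as np

def ogss_downsample(hires, k=2):
    """OGSS resolve: average every k x k block of subsamples into one pixel.

    hires: (k*H, k*W, C) image rendered at k-times resolution.
    Returns the (H, W, C) anti-aliased image.
    """
    kh, kw, c = hires.shape
    h, w = kh // k, kw // k
    blocks = hires[:h * k, :w * k].reshape(h, k, w, k, c)
    return blocks.mean(axis=(1, 3))

# A fragment-based approach (as in FAA) would first detect edge pixels, e.g. via
# depth or normal discontinuities, and supersample only those fragments.
```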

  18. Variability of left ventricular ejection fraction and volumes with quantitative gated SPECT: influence of algorithm, pixel size and reconstruction parameters in small and normal-sized hearts

    International Nuclear Information System (INIS)

    Hambye, Anne-Sophie; Vervaet, Ann; Dobbeleir, Andre

    2004-01-01

    Several software packages are commercially available for quantification of left ventricular ejection fraction (LVEF) and volumes from myocardial gated single-photon emission computed tomography (SPECT), all of which display a high reproducibility. However, their accuracy has been questioned in patients with a small heart. This study aimed to evaluate the performances of different software and the influence of modifications in acquisition or reconstruction parameters on LVEF and volume measurements, depending on the heart size. In 31 patients referred for gated SPECT, 64² and 128² matrix acquisitions were consecutively obtained. After reconstruction by filtered back-projection (Butterworth, 0.4, 0.5 or 0.6 cycles/cm cut-off, order 6), LVEF and volumes were computed with different software [three versions of Quantitative Gated SPECT (QGS), the Emory Cardiac Toolbox (ECT) and the Stanford University (SU-Segami) Medical School algorithm] and processing workstations. Depending upon their end-systolic volume (ESV), patients were classified into two groups: group I (ESV>30 ml, n=14) and group II (ESV ≤30 ml). Increasing the matrix size from 64² to 128² was associated with significantly larger volumes as well as lower LVEF values. Increasing the filter cut-off frequency had the same effect. With SU-Segami, a larger matrix was associated with larger end-diastolic volumes and smaller ESVs, resulting in a highly significant increase in LVEF. Increasing the filter sharpness, on the other hand, had no influence on LVEF though the measured volumes were significantly larger. (orig.)

  19. Technical Report Series on Global Modeling and Data Assimilation. Volume 12; Comparison of Satellite Global Rainfall Algorithms

    Science.gov (United States)

    Suarez, Max J. (Editor); Chang, Alfred T. C.; Chiu, Long S.

    1997-01-01

    Seventeen months of rainfall data (August 1987-December 1988) from nine satellite rainfall algorithms (Adler, Chang, Kummerow, Prabhakara, Huffman, Spencer, Susskind, and Wu) were analyzed to examine the uncertainty of satellite-derived rainfall estimates. The variability among algorithms, measured as the standard deviation computed from the ensemble of algorithms, shows regions of high algorithm variability tend to coincide with regions of high rain rates. Histograms of pattern correlation (PC) between algorithms suggest a bimodal distribution, with separation at a PC-value of about 0.85. Applying this threshold as a criteria for similarity, our analyses show that algorithms using the same sensor or satellite input tend to be similar, suggesting the dominance of sampling errors in these satellite estimates.
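
    The pattern-correlation similarity test described above reduces to a Pearson correlation over common grid points; a minimal Python sketch with hypothetical field arrays is:

```python
import numpy as np

def pattern_correlation(field_a, field_b):
    """Pearson correlation between two rain-rate maps over valid grid points."""
    valid = np.isfinite(field_a) & np.isfinite(field_b)
    return np.corrcoef(field_a[valid], field_b[valid])[0, 1]

def similar(field_a, field_b, threshold=0.85):
    """Similarity criterion used in the analysis: PC above roughly 0.85."""
    return pattern_correlation(field_a, field_b) > threshold
```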

  20. Three-dimensional rendering of segmented object using matlab - biomed 2010.

    Science.gov (United States)

    Anderson, Jeffrey R; Barrett, Steven F

    2010-01-01

    The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work has been described of a semi-automatic segmentation process of fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, windows and Linux. This transition basically limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and merge these capabilities to the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, representing a cross-sectional slice of the object, be reassembled in a 3D space, and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This inter-active process becomes a powerful visual tool to study and understand microscopic objects.
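
    The core operation described above, reassembling binary slice masks into a volume and covering it with a surface, has a close Python analogue using scikit-image's marching cubes; this is a stand-in sketch, not the authors' Matlab implementation, and the spacing argument is an assumption.

```python
import numpy as np
from skimage import measure

def surface_from_slices(binary_slices, spacing=(1.0, 1.0, 1.0)):
    """Stack 2D binary masks into a volume and extract a triangulated surface.

    binary_slices: iterable of 2D boolean arrays, one per confocal slice.
    spacing: physical voxel size (z, y, x) so the mesh keeps correct proportions.
    Returns vertices and faces suitable for 3D rendering.
    """
    volume = np.stack([s.astype(np.float32) for s in binary_slices], axis=0)
    verts, faces, _normals, _values = measure.marching_cubes(volume, level=0.5,
                                                             spacing=spacing)
    return verts, faces
```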

  1. Rendering Falling Leaves on Graphics Hardware

    OpenAIRE

    Marcos Balsa; Pere-Pau Vázquez

    2008-01-01

    There is a growing interest in simulating natural phenomena in computer graphics applications. Animating natural scenes in real time is one of the most challenging problems due to the inherent complexity of their structure, formed by millions of geometric entities, and the interactions that happen within. An example of natural scenario that is needed for games or simulation programs are forests. Forests are difficult to render because the huge amount of geometric entities and the large amount...

  2. GRAPHICS-IMAGE MIXED METHOD FOR LARGE-SCALE BUILDINGS RENDERING

    Directory of Open Access Journals (Sweden)

    Y. Zhou

    2018-05-01

    Full Text Available Urban 3D model data are huge and unstructured; LOD and out-of-core algorithms are usually used to reduce the amount of data drawn in each frame and so improve rendering efficiency. When the scene is large enough, however, even complex optimization algorithms struggle to achieve better results. Building on these traditional approaches, a novel idea was developed: we propose a graphics and image mixed method for rendering large-scale buildings. First, the view field is divided into several regions; the graphics-image mixed method renders the scene both to the screen and to an FBO, and the FBO is then blended with the screen. The algorithm is tested on the huge CityGML model data of the urban areas of New York, which contains 188195 public building models, and compared with the Cesium platform. The experiments show that the system runs smoothly and confirm that the algorithm can achieve roaming of more massive building scenes under the same hardware conditions, and can render the scene without visual loss.

  3. Emission of VOC's from modified rendering process

    International Nuclear Information System (INIS)

    Bhatti, Z.A.; Raja, I.A.; Saddique, M.; Langenhove, H.V.

    2005-01-01

    Rendering is a technique for processing dead animal and slaughterhouse wastes into valuable products. It involves cooking of the raw material; a sterilization step was later added to reduce the risk of Bovine Spongiform Encephalopathy (BSE). Previous studies of rendering emissions have addressed the normal cooking process. Our study shows that the sterilization step in the rendering process increases the emission of volatile organic compounds (VOCs). Gas samples containing VOCs were analyzed by GC/MS (gas chromatography/mass spectrometry). The most important groups of compounds, alcohols and cyclic hydrocarbons, were identified. Among the alcohols, 1-butanol, 1-pentanol and 1-hexanol were found, while among the cyclic hydrocarbons, methylcyclopentane and cyclohexane were detected. Other groups, such as aldehydes, sulphur-containing compounds, ketones and furans, were also found. Some compounds belonging to these groups, such as 1-pentanol, 2-methylpropanal, dimethyl disulfide and dimethyl trisulfide, cause malodor. Knowledge of these compounds is important for treating odorous gases. (author)

  4. Non-Photorealistic Rendering in Chinese Painting of Animals

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A set of algorithms is proposed in this paper to automatically transform 3D animal models to Chinese painting style. Inspired by real painting process in Chinese painting of animals, we divide the whole rendering process into two parts: borderline stroke making and interior shading. In borderline stroke making process we first find 3D model silhouettes in real-time depending on the viewing direction of a user. After retrieving silhouette information from all model edges, a stroke linking mechanism is applied to link these independent edges into a long stroke. Finally we grow a plain thin silhouette line to a stylus stroke with various widths at each control point and a 2D brush model is combined with it to simulate a Chinese painting stroke. In the interior shading pipeline, three stages are used to convert a Gouraud-shading image to a Chinese painting style image: color quantization, ink diffusion and box filtering. The color quantization stage assigns all pixels in an image into four color levels and each level represents a color layer in a Chinese painting. Ink diffusion stage is used to transfer inks and water between different levels and to grow areas in an irregular way. The box filtering stage blurs sharp borders between different levels to embellish the appearance of final interior shading image. In addition to automatic rendering, an interactive Chinese painting system which is equipped with friendly input devices can be also combined to generate more artistic Chinese painting images manually.
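
    Two of the interior-shading stages named above, four-level color quantization and box filtering, are easy to sketch in Python; the thresholds and filter size are illustrative assumptions, and the ink-diffusion stage is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def quantize_four_levels(shaded, thresholds=(0.25, 0.5, 0.75)):
    """Assign each pixel of a [0, 1] Gouraud-shaded image to one of four ink layers."""
    levels = np.digitize(shaded, thresholds)      # integer layer index 0..3
    return levels / 3.0                           # back to a greyscale layer value

def box_filter_borders(quantized, size=3):
    """Blur the hard borders between layers, as in the final embellishment stage."""
    return uniform_filter(quantized, size=size)

# stylised = box_filter_borders(quantize_four_levels(shaded_image))
```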

  5. Ambient Occlusion Effects for Combined Volumes and Tubular Geometry

    KAUST Repository

    Schott, M.

    2013-06-01

    This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The algorithm extends the recently presented directional occlusion shading model to allow the rendering of those geometric shapes in combination with a context-providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface-deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed.

  6. Ambient Occlusion Effects for Combined Volumes and Tubular Geometry

    KAUST Repository

    Schott, M.; Martin, T.; Grosset, A. V. P.; Smith, S. T.; Hansen, C. D.

    2013-01-01

    This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The algorithm extends the recently presented directional occlusion shading model to allow the rendering of those geometric shapes in combination with a context-providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface-deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed.

  7. Programmatic implications of implementing the relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory sites, test volumes, platform distribution and space requirements

    Directory of Open Access Journals (Sweden)

    Naseem Cassim

    2017-02-01

    Full Text Available Introduction: CD4 testing in South Africa is based on an integrated tiered service delivery model that matches testing demand with capacity. The National Health Laboratory Service has predominantly implemented laboratory-based CD4 testing. Coverage gaps, over-/under-capacitation and optimal placement of point-of-care (POC) testing sites need investigation. Objectives: We assessed the impact of relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory and POC testing sites. Methods: The RACL algorithm was developed to allocate laboratories and POC sites to ensure coverage using a set coverage approach for a defined travel time (T). The algorithm was repeated for three scenarios (A: T = 4; B: T = 3; C: T = 2 hours). Drive times for a representative sample of health facility clusters were used to approximate T. Outcomes included allocation of testing sites, Euclidean distances and test volumes. Additional analysis included platform distribution and space requirement assessment. Scenarios were reported as fusion table maps. Results: Scenario A would offer a fully-centralised approach with 15 CD4 laboratories without any POC testing. A significant increase in volumes would result in a four-fold increase at busier laboratories. CD4 laboratories would increase to 41 in scenario B and 61 in scenario C. POC testing would be offered at two sites in scenario B and 20 sites in scenario C. Conclusion: The RACL algorithm provides an objective methodology to address coverage gaps through the allocation of CD4 laboratories and POC sites for a given T. The algorithm outcomes need to be assessed in the context of local conditions.
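
    The RACL formulation itself is not reproduced in the record, but the underlying set-coverage idea (choose sites until every facility is reachable within the travel time T) can be illustrated with a generic greedy set-cover sketch in Python; the data structures and names are hypothetical.

```python
def allocate_sites(coverage, clinics):
    """Greedy set cover: pick candidate sites until every clinic is covered.

    coverage: dict mapping candidate site -> set of clinics reachable within T.
    clinics:  set of all clinics that must be covered.
    Returns the chosen sites (laboratories or POC sites).
    """
    uncovered = set(clinics)
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:            # remaining clinics unreachable within T
            break
        chosen.append(best)
        uncovered -= gained
    return chosen
```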

  8. GPU Pro 4 advanced rendering techniques

    CERN Document Server

    Engel, Wolfgang

    2013-01-01

    GPU Pro4: Advanced Rendering Techniques presents ready-to-use ideas and procedures that can help solve many of your day-to-day graphics programming challenges. Focusing on interactive media and games, the book covers up-to-date methods producing real-time graphics. Section editors Wolfgang Engel, Christopher Oat, Carsten Dachsbacher, Michal Valient, Wessam Bahnassi, and Sebastien St-Laurent have once again assembled a high-quality collection of cutting-edge techniques for advanced graphics processing unit (GPU) programming. Divided into six sections, the book begins with discussions on the abi

  9. D.Vanwijnsberghe, Autour de la Madeleine Renders

    Directory of Open Access Journals (Sweden)

    Muriel Verbeeck-Boutin

    2008-10-01

    Full Text Available A Belgian federal institution of international repute, the Institut royal du Patrimoine artistique in Brussels celebrates its sixtieth anniversary this year: an occasion to recall the prestige that this institute of research, training and dissemination of knowledge has enjoyed for decades. To mark the event, the IRPA is publishing the fourth volume of the Scientia Artis collection. Under the title Autour de la Madeleine Renders, it presents a body of research documenting...

  10. Conservation of old renderings - the consolidation of rendering with loss of cohesion

    Directory of Open Access Journals (Sweden)

    Martha Tavares

    2008-01-01

    Full Text Available In recent years, the study of external renderings within the scope of conservation and restoration has seen great methodological, scientific and technical advances. These renderings are important elements of the built structure: besides their protective function, they often have a decorative function of great relevance to the image of the monument. The maintenance of these renderings implies the conservation of traditional construction techniques and the use of compatible materials, as similar to the originals as possible. The main objective of this study is to define a methodology of conservative restoration using strategies for the maintenance of renderings and traditional construction techniques. The minimum intervention principle is maintained, as well as the use of materials compatible with the original ones. This paper describes the technique and products used to consolidate renders suffering loss of cohesion. The testing campaign was carried out under controlled laboratory conditions and in situ in order to evaluate their efficacy for the consolidation of old renders. A set of tests is presented to evaluate the effectiveness of the process; the results are analysed and the applicability of these techniques is discussed. Finally, the paper presents a proposal for further research.

  11. Digital representations of the real world how to capture, model, and render visual reality

    CERN Document Server

    Magnor, Marcus A; Sorkine-Hornung, Olga; Theobalt, Christian

    2015-01-01

    Create Genuine Visual Realism in Computer Graphics Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality explains how to portray visual worlds with a high degree of realism using the latest video acquisition technology, computer graphics methods, and computer vision algorithms. It explores the integration of new capture modalities, reconstruction approaches, and visual perception into the computer graphics pipeline.Understand the Entire Pipeline from Acquisition, Reconstruction, and Modeling to Realistic Rendering and ApplicationsThe book covers sensors fo

  12. Maximum volume cuboids for arbitrarily shaped in-situ rock blocks as determined by discontinuity analysis—A genetic algorithm approach

    Science.gov (United States)

    Ülker, Erkan; Turanboy, Alparslan

    2009-07-01

    The block stone industry is one of the main commercial uses of rock. The economic potential of any block quarry depends on the recovery rate, which is defined as the total volume of useful rough blocks extractable from a fixed rock volume in relation to the total volume of moved material. The natural fracture system, the rock type(s) and the extraction method used directly influence the recovery rate. The major aims of this study are to establish a theoretical framework for optimising the extraction process in marble quarries for a given fracture system, and for predicting the recovery rate of the excavated blocks. We have developed a new approach by taking into consideration only the fracture structure for maximum block recovery in block quarries. The complete model uses a linear approach based on basic geometric features of discontinuities for 3D models, a tree structure (TS) for individual investigation and finally a genetic algorithm (GA) for the obtained cuboid volume(s). We tested our new model in a selected marble quarry in the town of İscehisar (AFYONKARAHİSAR—TURKEY).

  13. Direct Numerical Simulation of Acoustic Waves Interacting with a Shock Wave in a Quasi-1D Convergent-Divergent Nozzle Using an Unstructured Finite Volume Algorithm

    Science.gov (United States)

    Bui, Trong T.; Mankbadi, Reda R.

    1995-01-01

    Numerical simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle is performed using an unstructured finite volume algorithm with a piece-wise linear, least square reconstruction, Roe flux difference splitting, and second-order MacCormack time marching. First, the spatial accuracy of the algorithm is evaluated for steady flows with and without the normal shock by running the simulation with a sequence of successively finer meshes. Then the accuracy of the Roe flux difference splitting near the sonic transition point is examined for different reconstruction schemes. Finally, the unsteady numerical solutions with the acoustic perturbation are presented and compared with linear theory results.

  14. SeaWiFS technical report series. Volume 4: An analysis of GAC sampling algorithms. A case study

    Science.gov (United States)

    Yeh, Eueng-Nan (Editor); Hooker, Stanford B. (Editor); Mccain, Charles R. (Editor); Fu, Gary (Editor)

    1992-01-01

    The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) instrument will sample at approximately a 1 km resolution at nadir which will be broadcast for reception by realtime ground stations. However, the global data set will be comprised of coarser four kilometer data which will be recorded and broadcast to the SeaWiFS Project for processing. Several algorithms for degrading the one kilometer data to four kilometer data are examined using imagery from the Coastal Zone Color Scanner (CZCS) in an effort to determine which algorithm would best preserve the statistical characteristics of the derived products generated from the one kilometer data. Of the algorithms tested, subsampling based on a fixed pixel within a 4 x 4 pixel array is judged to yield the most consistent results when compared to the one kilometer data products.
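
    The fixed-pixel subsampling judged most consistent above is a one-line operation in Python; which pixel within each 4 x 4 array to keep is an assumption here.

```python
import numpy as np

def gac_subsample(lac, row=1, col=1):
    """Keep one fixed pixel from every 4 x 4 block of 1 km LAC data (~4 km GAC)."""
    return lac[row::4, col::4]
```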

  15. Validation of a colour rendering index based on memory colours

    OpenAIRE

    Smet, Kevin; Jost-Boissard, Sophie; Ryckaert, Wouter; Deconinck, Geert; Hanselaer, Peter

    2010-01-01

    In this paper the performance of a colour rendering index based on memory colours is investigated in comparison with the current CIE Colour Rendering Index, the NIST Colour Quality Scale and visual appreciation results obtained at CNRS at Lyon University for a set of 3000K and 4000K LED light sources. The Pearson and Spearman correlation coefficients between each colour rendering metric and the two sets of visual results were calculated. It was found that the memory colour based colour render...

  16. Muon tomography imaging algorithms for nuclear threat detection inside large volume containers with the Muon Portal detector

    Science.gov (United States)

    Riggi, S.; Antonuccio-Delogu, V.; Bandieramonte, M.; Becciani, U.; Costa, A.; La Rocca, P.; Massimino, P.; Petta, C.; Pistagna, C.; Riggi, F.; Sciacca, E.; Vitello, F.

    2013-11-01

    Muon tomographic visualization techniques try to reconstruct a 3D image as close as possible to the real localization of the objects being probed. Statistical algorithms under test for the reconstruction of muon tomographic images in the Muon Portal Project are discussed here. Autocorrelation analysis and clustering algorithms have been employed within the context of methods based on the Point Of Closest Approach (POCA) reconstruction tool. An iterative method based on the log-likelihood approach was also implemented. Relative merits of all such methods are discussed, with reference to full GEANT4 simulations of different scenarios, incorporating medium and high-Z objects inside a container.
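
    The POCA reconstruction tool mentioned above rests on a standard geometric calculation, the point of closest approach between the incoming and outgoing muon tracks; a minimal NumPy sketch (not the project's code) is:

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Point Of Closest Approach between two straight tracks.

    p1, d1: a point on, and the direction of, the incoming track.
    p2, d2: the same for the outgoing track.
    Returns the midpoint of the shortest segment joining the two lines,
    or None for (nearly) parallel tracks.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))
```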

  17. Muon tomography imaging algorithms for nuclear threat detection inside large volume containers with the Muon Portal detector

    Energy Technology Data Exchange (ETDEWEB)

    Riggi, S., E-mail: simone.riggi@ct.infn.it [INAF—Osservatorio Astrofisico di Catania (Italy); Antonuccio-Delogu, V.; Bandieramonte, M.; Becciani, U.; Costa, A. [INAF—Osservatorio Astrofisico di Catania (Italy); La Rocca, P. [Dip. di Fisica e Astronomia, Università di Catania (Italy); INFN Section of Catania (Italy); Massimino, P. [INAF—Osservatorio Astrofisico di Catania (Italy); Petta, C. [Dip. di Fisica e Astronomia, Università di Catania (Italy); INFN Section of Catania (Italy); Pistagna, C. [INAF—Osservatorio Astrofisico di Catania (Italy); Riggi, F. [Dip. di Fisica e Astronomia, Università di Catania (Italy); INFN Section of Catania (Italy); Sciacca, E.; Vitello, F. [INAF—Osservatorio Astrofisico di Catania (Italy)

    2013-11-11

    Muon tomographic visualization techniques try to reconstruct a 3D image as close as possible to the real localization of the objects being probed. Statistical algorithms under test for the reconstruction of muon tomographic images in the Muon Portal Project are discussed here. Autocorrelation analysis and clustering algorithms have been employed within the context of methods based on the Point Of Closest Approach (POCA) reconstruction tool. An iterative method based on the log-likelihood approach was also implemented. Relative merits of all such methods are discussed, with reference to full GEANT4 simulations of different scenarios, incorporating medium and high-Z objects inside a container.

  18. Photon Differential Splatting for Rendering Caustics

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Schjøth, Lars; Erleben, Kenny

    2014-01-01

    We present a photon splatting technique which reduces noise and blur in the rendering of caustics. Blurring of illumination edges is an inherent problem in photon splatting, as each photon is unaware of its neighbours when being splatted. This means that the splat size is usually based on heuristics rather than knowledge of the local flux density. We use photon differentials to determine the size and shape of the splats such that we achieve adaptive anisotropic flux density estimation in photon splatting. As compared to previous work that uses photon differentials, we present the first method where no photons or beams or differentials need to be stored in a map. We also present improvements in the theory of photon differentials, which give more accurate results and a faster implementation. Our technique has good potential for GPU acceleration, and we limit the number of parameters requiring...

  19. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less
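
    A rough 2D analogue of the pipeline above can be written with scikit-image; the morphological geodesic active contour used here is a stand-in for the level-set formulation in the paper, a Gaussian filter replaces the anisotropic diffusion step, and all parameters are illustrative assumptions.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def segment_liver_slice(ct_slice, seed_mask, iterations=200):
    """Per-slice sketch: smooth, build an edge-stopping map, evolve a contour.

    ct_slice:  2D float array (portal-venous-phase slice, normalised to [0, 1]).
    seed_mask: 2D boolean array giving a rough initial region inside the liver.
    """
    smoothed = gaussian(ct_slice, sigma=2)              # simple noise reduction
    edge_map = inverse_gaussian_gradient(smoothed)      # small where gradients are strong
    contour = morphological_geodesic_active_contour(
        edge_map, iterations, init_level_set=seed_mask.astype(np.int8),
        smoothing=2, balloon=1)                         # outward balloon force
    return contour.astype(bool)

def liver_volume_cc(mask_3d, voxel_volume_mm3):
    """Volume of a stacked 3D mask in cubic centimetres."""
    return mask_3d.sum() * voxel_volume_mm3 / 1000.0
```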

  20. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    International Nuclear Information System (INIS)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-01-01

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time

  1. About the use of the Monte-Carlo code based tracing algorithm and the volume fraction method for Sn full core calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gurevich, M. I.; Oleynik, D. S. [RRC Kurchatov Inst., Kurchatov Sq., 1, 123182, Moscow (Russian Federation); Russkov, A. A.; Voloschenko, A. M. [Keldysh Inst. of Applied Mathematics, Miusskaya Sq., 4, 125047, Moscow (Russian Federation)

    2006-07-01

    The tracing algorithm implemented in the geometrical module of the Monte-Carlo transport code MCU is applied to calculate the volume fractions of the original materials in the spatial cells of a mesh that overlays the problem geometry. In this way the 3D combinatorial geometry representation of the problem geometry used by the MCU code is transformed into a user-defined 2D or 3D bit-mapped one. Next, these data are used in the volume fraction (VF) method to approximate the problem geometry by introducing additional mixtures for spatial cells in which several original materials are included. We have found that in solving realistic 2D and 3D core problems a sufficiently fast convergence of the VF method takes place if the spatial mesh is refined. The proposed implementation of the VF method thus appears to be a suitable geometry interface between Monte-Carlo and Sn transport codes. (authors)
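
    The volume-fraction idea can be illustrated by sampling random points in each mesh cell and recording which original material they fall in; material_at below stands in for the combinatorial-geometry tracing module and is purely hypothetical.

```python
import numpy as np
from collections import Counter

def cell_volume_fractions(cell_bounds, material_at, n_samples=1000, rng=None):
    """Estimate material volume fractions inside one rectangular mesh cell.

    cell_bounds: ((x0, x1), (y0, y1), (z0, z1)) extents of the cell.
    material_at: callable (x, y, z) -> material id (stand-in for the geometry module).
    Cells containing several materials become mixtures with these weights.
    """
    rng = rng or np.random.default_rng()
    pts = np.column_stack([rng.uniform(lo, hi, n_samples) for lo, hi in cell_bounds])
    counts = Counter(material_at(x, y, z) for x, y, z in pts)
    return {mat: n / n_samples for mat, n in counts.items()}
```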

  2. Extreme simplification and rendering of point sets using algebraic multigrid

    NARCIS (Netherlands)

    Reniers, D.; Telea, A.C.

    2009-01-01

    We present a novel approach for extreme simplification of point set models, in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However, this requires using many primitives to render even moderately simple shapes. Often, one

  3. On-the-Fly Decompression and Rendering of Multiresolution Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P; Cohen, J D

    2009-04-02

    We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.
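
    The lossless part of such a codec, linear prediction plus residual coding, can be sketched in Python with a simple previous-neighbour predictor; the real codec's prediction, quantization and entropy coding are more elaborate, so the following is only a toy illustration.

```python
import numpy as np

def encode_residuals(heights):
    """Predict each height from its left neighbour (first column from the row
    above) and keep the integer residuals, which are cheap to entropy-code."""
    h = heights.astype(np.int64)
    pred = np.empty_like(h)
    pred[:, 1:] = h[:, :-1]
    pred[1:, 0] = h[:-1, 0]
    pred[0, 0] = 0
    return h - pred

def decode_residuals(residuals):
    """Invert the prediction to recover the original heights exactly."""
    h = np.zeros_like(residuals)
    for i in range(residuals.shape[0]):
        h[i, 0] = residuals[i, 0] + (h[i - 1, 0] if i > 0 else 0)
        for j in range(1, residuals.shape[1]):
            h[i, j] = residuals[i, j] + h[i, j - 1]
    return h
```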

  4. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    Science.gov (United States)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  5. Fast volume reconstruction in positron emission tomography: Implementation of four algorithms on a high-performance scalable parallel platform

    International Nuclear Information System (INIS)

    Egger, M.L.; Scheurer, A.H.; Joseph, C.

    1996-01-01

    The issue of long reconstruction times in PET has been addressed from several points of view, resulting in an affordable dedicated system capable of handling routine 3D reconstruction in a few minutes per frame: on the hardware side using fast processors and a parallel architecture, and on the software side, using efficient implementations of computationally less intensive algorithms. Execution times obtained for the PRT-1 data set on a parallel system of five hybrid nodes, each combining an Alpha processor for computation and a transputer for communication, are the following (256 sinograms of 96 views by 128 radial samples): Ramp algorithm 56 s, Favor 81 s and reprojection algorithm of Kinahan and Rogers 187 s. The implementation of fast rebinning algorithms has shown our hardware platform to become communications-limited; they execute faster on a conventional single-processor Alpha workstation: single-slice rebinning 7 s, Fourier rebinning 22 s, 2D filtered backprojection 5 s. The scalability of the system has been demonstrated, and a saturation effect at network sizes above ten nodes has become visible; new T9000-based products lifting most of the constraints on network topology and link throughput are expected to result in improved parallel efficiency and scalability properties

  6. FluoRender: joint freehand segmentation and visualization for many-channel fluorescence data analysis.

    Science.gov (United States)

    Wan, Yong; Otsuna, Hideo; Holman, Holly A; Bagley, Brig; Ito, Masayoshi; Lewis, A Kelsey; Colasanto, Mary; Kardon, Gabrielle; Ito, Kei; Hansen, Charles

    2017-05-26

    Image segmentation and registration techniques have enabled biologists to place large amounts of volume data from fluorescence microscopy, morphed three-dimensionally, onto a common spatial frame. Existing tools built on volume visualization pipelines for single channel or red-green-blue (RGB) channels have become inadequate for the new challenges of fluorescence microscopy. For a three-dimensional atlas of the insect nervous system, hundreds of volume channels are rendered simultaneously, whereas fluorescence intensity values from each channel need to be preserved for versatile adjustment and analysis. Although several existing tools have incorporated support of multichannel data using various strategies, the lack of a flexible design has made true many-channel visualization and analysis unavailable. The most common practice for many-channel volume data presentation is still converting and rendering pseudosurfaces, which are inaccurate for both qualitative and quantitative evaluations. Here, we present an alternative design strategy that accommodates the visualization and analysis of about 100 volume channels, each of which can be interactively adjusted, selected, and segmented using freehand tools. Our multichannel visualization includes a multilevel streaming pipeline plus a triple-buffer compositing technique. Our method also preserves original fluorescence intensity values on graphics hardware, a crucial feature that allows graphics-processing-unit (GPU)-based processing for interactive data analysis, such as freehand segmentation. We have implemented the design strategies as a thorough restructuring of our original tool, FluoRender. The redesign of FluoRender not only maintains the existing multichannel capabilities for a greatly extended number of volume channels, but also enables new analysis functions for many-channel data from emerging biomedical-imaging techniques.

  7. A Comparison of Amplitude-Based and Phase-Based Positron Emission Tomography Gating Algorithms for Segmentation of Internal Target Volumes of Tumors Subject to Respiratory Motion

    International Nuclear Information System (INIS)

    Jani, Shyam S.; Robinson, Clifford G.; Dahlbom, Magnus; White, Benjamin M.; Thomas, David H.; Gaudio, Sergio; Low, Daniel A.; Lamb, James M.

    2013-01-01

    Purpose: To quantitatively compare the accuracy of tumor volume segmentation in amplitude-based and phase-based respiratory gating algorithms in respiratory-correlated positron emission tomography (PET). Methods and Materials: List-mode fluorodeoxyglucose-PET data was acquired for 10 patients with a total of 12 fluorodeoxyglucose-avid tumors and 9 lymph nodes. Additionally, a phantom experiment was performed in which 4 plastic butyrate spheres with inner diameters ranging from 1 to 4 cm were imaged as they underwent 1-dimensional motion based on 2 measured patient breathing trajectories. PET list-mode data were gated into 8 bins using 2 amplitude-based (equal amplitude bins [A1] and equal counts per bin [A2]) and 2 temporal phase-based gating algorithms. Gated images were segmented using a commercially available gradient-based technique and a fixed 40% threshold of maximum uptake. Internal target volumes (ITVs) were generated by taking the union of all 8 contours per gated image. Segmented phantom ITVs were compared with their respective ground-truth ITVs, defined as the volume subtended by the tumor model positions covering 99% of breathing amplitude. Superior-inferior distances between sphere centroids in the end-inhale and end-exhale phases were also calculated. Results: Tumor ITVs from amplitude-based methods were significantly larger than those from temporal-based techniques (P=.002). For lymph nodes, A2 resulted in ITVs that were significantly larger than either of the temporal-based techniques (P<.0323). A1 produced the largest and most accurate ITVs for spheres with diameters of ≥2 cm (P=.002). No significant difference was shown between algorithms in the 1-cm sphere data set. For phantom spheres, amplitude-based methods recovered an average of 9.5% more motion displacement than temporal-based methods under regular breathing conditions and an average of 45.7% more in the presence of baseline drift (P<.001). Conclusions: Target volumes in images generated
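
    The three binning strategies compared above can be sketched for a 1D breathing-amplitude signal; the peak-based phase definition and the bin count of 8 are simplifying assumptions, not the clinical gating software.

```python
import numpy as np
from scipy.signal import find_peaks

def amplitude_bins_equal_width(amplitude, n_bins=8):
    """A1: bins spanning equal amplitude ranges."""
    edges = np.linspace(amplitude.min(), amplitude.max(), n_bins + 1)
    return np.clip(np.digitize(amplitude, edges[1:-1]), 0, n_bins - 1)

def amplitude_bins_equal_counts(amplitude, n_bins=8):
    """A2: bins holding approximately equal numbers of samples (counts)."""
    edges = np.quantile(amplitude, np.linspace(0, 1, n_bins + 1))
    return np.clip(np.digitize(amplitude, edges[1:-1]), 0, n_bins - 1)

def phase_bins(amplitude, n_bins=8):
    """Temporal phase gating: phase advances linearly between successive inhale peaks."""
    peaks, _ = find_peaks(amplitude)
    phase = np.zeros(len(amplitude))
    for a, b in zip(peaks[:-1], peaks[1:]):
        phase[a:b] = np.linspace(0.0, 1.0, b - a, endpoint=False)
    return np.minimum((phase * n_bins).astype(int), n_bins - 1)
```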

  8. Pyrite: A blender plugin for visualizing molecular dynamics simulations using industry-standard rendering techniques.

    Science.gov (United States)

    Rajendiran, Nivedita; Durrant, Jacob D

    2018-05-05

    Molecular dynamics (MD) simulations provide critical insights into many biological mechanisms. Programs such as VMD, Chimera, and PyMOL can produce impressive simulation visualizations, but they lack many advanced rendering algorithms common in the film and video-game industries. In contrast, the modeling program Blender includes such algorithms but cannot import MD-simulation data. MD trajectories often require many gigabytes of memory/disk space, complicating Blender import. We present Pyrite, a Blender plugin that overcomes these limitations. Pyrite allows researchers to visualize MD simulations within Blender, with full access to Blender's cutting-edge rendering techniques. We expect Pyrite-generated images to appeal to students and non-specialists alike. A copy of the plugin is available at http://durrantlab.com/pyrite/, released under the terms of the GNU General Public License Version 3. © 2017 Wiley Periodicals, Inc.

  9. [Hybrid 3-D rendering of the thorax and surface-based virtual bronchoscopy in surgical and interventional therapy control].

    Science.gov (United States)

    Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D

    2001-07-01

    The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface and volume rendering method, together with the feasibility of performing surface-based virtual endoscopy with different representation models in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4) a thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model and a transparent shaded-surface model. The hybrid 3D visualization uses the advantages of both the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and the complex topographical relationship of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model facilitates a reasonable to optimal, simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for time-consuming detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.

  10. Simple Coatings to Render Polystyrene Protein Resistant

    Directory of Open Access Journals (Sweden)

    Marcelle Hecker

    2018-02-01

    Full Text Available Non-specific protein adsorption is detrimental to the performance of many biomedical devices. Polystyrene is a commonly used material in devices and thin films. Simple reliable surface modification of polystyrene to render it protein resistant is desired in particular for device fabrication and orthogonal functionalisation schemes. This report details modifications carried out on a polystyrene surface to prevent protein adsorption. The trialed surfaces included Pluronic F127 and PLL-g-PEG, adsorbed on polystyrene, using a polydopamine-assisted approach. Quartz crystal microbalance with dissipation (QCM-D results showed only short-term anti-fouling success of the polystyrene surface modified with F127, and the subsequent failure of the polydopamine intermediary layer in improving its stability. In stark contrast, QCM-D analysis proved the success of the polydopamine assisted PLL-g-PEG coating in preventing bovine serum albumin adsorption. This modified surface is equally as protein-rejecting after 24 h in buffer, and thus a promising simple coating for long term protein rejection of polystyrene.

  11. Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images

    Science.gov (United States)

    Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun

    2013-01-01

    This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by more than a factor of three compared with the conventional algorithm while preserving image quality. PMID:23424608
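
    As a companion to this record, the sketch below illustrates the general idea of sampling a volume only where a ray crosses an equidistant parallel plane cluster and compositing the samples front to back. It is a minimal NumPy illustration under simplifying assumptions (axis-aligned planes, nearest-neighbour lookup, a toy opacity transfer function), not the authors' implementation.

        # Minimal plane-based sampling sketch for volume ray casting.
        # Assumptions: planes z = k*dz, nearest-neighbour lookup, toy transfer function.
        import numpy as np

        def cast_ray(volume, origin, direction, dz=1.0):
            """Composite one ray front to back, sampling only at plane crossings."""
            direction = direction / np.linalg.norm(direction)
            if abs(direction[2]) < 1e-8:          # ray parallel to the plane cluster
                return 0.0
            ks = np.arange(volume.shape[2])
            t = (ks * dz - origin[2]) / direction[2]   # ray parameters at plane hits
            t = t[t >= 0.0]
            color, alpha = 0.0, 0.0
            for ti in np.sort(t):
                p = origin + ti * direction
                idx = np.round(p).astype(int)
                if np.any(idx < 0) or np.any(idx >= volume.shape):
                    continue
                s = volume[tuple(idx)]
                a = np.clip(s, 0.0, 1.0)          # toy opacity transfer function
                color += (1.0 - alpha) * a * s    # emission proportional to the sample
                alpha += (1.0 - alpha) * a
                if alpha > 0.99:                  # early ray termination
                    break
            return color

        vol = np.random.rand(32, 32, 32)
        print(cast_ray(vol, origin=np.array([16.0, 16.0, -1.0]),
                       direction=np.array([0.0, 0.0, 1.0])))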

  12. Signal Processing Implementation and Comparison of Automotive Spatial Sound Rendering Strategies

    Directory of Open Access Journals (Sweden)

    Bai MingsianR

    2009-01-01

    Full Text Available Design and implementation strategies of spatial sound rendering are investigated in this paper for automotive scenarios. Six design methods are implemented for various rendering modes with different numbers of passengers. Specifically, the downmixing algorithms aimed at balancing the front and back reproductions are developed for the 5.1-channel input. The other five algorithms based on inverse filtering are implemented in two approaches. The first approach utilizes binaural head-related transfer functions (HRTFs) measured in the car interior, whereas the second approach, named the point-receiver model, targets a point receiver positioned at the center of the passenger's head. The proposed processing algorithms were compared via objective and subjective experiments under various listening conditions. Test data were processed by the multivariate analysis of variance (MANOVA) method and the least significant difference (Fisher's LSD) method as a post hoc test to justify the statistical significance of the experimental data. The results indicate that inverse filtering algorithms are preferred for the single passenger mode. For the multipassenger mode, however, downmixing algorithms generally outperformed the other processing techniques.
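
    The downmixing idea referred to above can be illustrated with a generic passive 5.1-to-stereo downmix using the common -3 dB weights for the centre and surround channels; the paper's algorithms are tuned for front/back balance in the car cabin, which this sketch does not attempt to reproduce.

        # Generic passive 5.1-to-stereo downmix sketch (not the paper's tuned algorithms).
        import numpy as np

        def downmix_51_to_stereo(ch):
            """ch: dict with keys 'FL','FR','C','LFE','LS','RS', each a 1-D array."""
            g = 1.0 / np.sqrt(2.0)                    # -3 dB weight
            left = ch['FL'] + g * ch['C'] + g * ch['LS']
            right = ch['FR'] + g * ch['C'] + g * ch['RS']
            # LFE is commonly omitted in a simple stereo downmix.
            peak = max(np.max(np.abs(left)), np.max(np.abs(right)), 1.0)
            return left / peak, right / peak          # normalise to avoid clipping

        n = 48000
        channels = {k: np.random.randn(n) * 0.1 for k in ('FL', 'FR', 'C', 'LFE', 'LS', 'RS')}
        L, R = downmix_51_to_stereo(channels)
        print(L.shape, R.shape)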

  13. Innovative Lime Pozzolana Renders for Reconstruction of Historical Buildings

    International Nuclear Information System (INIS)

    Vejmelkova, E.; Maca, P.; Konvalinka, P.; Cerny, R.

    2011-01-01

    Bulk density, matrix density, open porosity, compressive strength, bending strength, water sorptivity, moisture diffusivity, water vapor diffusion coefficient, thermal conductivity, specific heat capacity and thermal diffusivity of two innovative renovation renders on a lime-pozzolana basis are analyzed. The obtained results are compared with a reference lime plaster and two commercial renovation renders, and conclusions on the applicability of the particular renders in practical reconstruction works are drawn. (author)

  14. Deep Learning Algorithm for Auto-Delineation of High-Risk Oropharyngeal Clinical Target Volumes With Built-In Dice Similarity Coefficient Parameter Optimization Function.

    Science.gov (United States)

    Cardenas, Carlos E; McCarroll, Rachel E; Court, Laurence E; Elgohari, Baher A; Elhalawani, Hesham; Fuller, Clifton D; Kamal, Mona J; Meheissen, Mohamed A M; Mohamed, Abdallah S R; Rao, Arvind; Williams, Bowman; Wong, Andrew; Yang, Jinzhong; Aristophanous, Michalis

    2018-06-01

    Automating and standardizing the contouring of clinical target volumes (CTVs) can reduce interphysician variability, which is one of the largest sources of uncertainty in head and neck radiation therapy. Beyond the use of uniform margin expansions to auto-delineate high-risk CTVs, very little work has been performed to provide patient- and disease-specific high-risk CTVs. The aim of the present study was to develop a deep neural network for the auto-delineation of high-risk CTVs. Fifty-two oropharyngeal cancer patients were selected for the present study. All patients were treated at The University of Texas MD Anderson Cancer Center from January 2006 to August 2010 and had previously contoured gross tumor volumes and CTVs. We developed a deep learning algorithm using deep auto-encoders to identify physician contouring patterns at our institution. These models use distance map information from surrounding anatomic structures and the gross tumor volume as input parameters and conduct voxel-based classification to identify voxels that are part of the high-risk CTV. In addition, we developed a novel probability threshold selection function, based on the Dice similarity coefficient (DSC), to improve the generalization of the predicted volumes. The DSC-based function is implemented during an inner cross-validation loop, and probability thresholds are selected a priori during model parameter optimization. We performed a volumetric comparison between the predicted and manually contoured volumes to assess our model. The predicted volumes had a median DSC value of 0.81 (range 0.62-0.90), median mean surface distance of 2.8 mm (range 1.6-5.5), and median 95th Hausdorff distance of 7.5 mm (range 4.7-17.9) when comparing our predicted high-risk CTVs with the physician manual contours. These predicted high-risk CTVs provided close agreement with the ground truth relative to current interobserver variability. The predicted contours could be implemented clinically, with only
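
    For reference, the Dice similarity coefficient reported above reduces to a few lines of NumPy when the predicted and manual CTVs are available as binary voxel masks; the masks below are synthetic stand-ins.

        # Dice similarity coefficient between two binary voxel masks (synthetic data).
        import numpy as np

        def dice(pred, truth):
            pred = pred.astype(bool)
            truth = truth.astype(bool)
            intersection = np.logical_and(pred, truth).sum()
            denom = pred.sum() + truth.sum()
            return 2.0 * intersection / denom if denom > 0 else 1.0

        a = np.zeros((64, 64, 64), dtype=bool); a[10:40, 10:40, 10:40] = True
        b = np.zeros_like(a);                   b[15:45, 12:42, 10:40] = True
        print(round(dice(a, b), 3))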

  15. An inverse analysis of a transient 2-D conduction-radiation problem using the lattice Boltzmann method and the finite volume method coupled with the genetic algorithm

    International Nuclear Information System (INIS)

    Das, Ranjan; Mishra, Subhash C.; Ajith, M.; Uppaluri, R.

    2008-01-01

    This article deals with the simultaneous estimation of parameters in a 2-D transient conduction-radiation heat transfer problem. The homogeneous medium is assumed to be absorbing, emitting and scattering. The boundaries of the enclosure are diffuse gray. Three parameters, viz. the scattering albedo, the conduction-radiation parameter and the boundary emissivity, are simultaneously estimated by the inverse method involving the lattice Boltzmann method (LBM) and the finite volume method (FVM) in conjunction with the genetic algorithm (GA). In the direct method, the FVM is used for computing the radiative information while the LBM is used to solve the energy equation. The temperature field obtained in the direct method is used in the inverse method for simultaneous estimation of unknown parameters using the LBM-FVM and the GA. The LBM-FVM-GA combination has been found to accurately predict the unknown parameters
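
    The inverse-estimation loop described above can be sketched with a population-based optimiser searching for parameters that minimise the mismatch between measured and predicted fields. The example below uses SciPy's differential_evolution as a stand-in for the genetic algorithm and a toy surrogate in place of the coupled LBM-FVM forward solver, so it only illustrates the structure of the method.

        # Population-based parameter estimation sketch; the forward model is a toy
        # surrogate, not the coupled LBM-FVM solver of the paper.
        import numpy as np
        from scipy.optimize import differential_evolution

        def forward_model(params, x):
            albedo, crp, emissivity = params
            return albedo * np.exp(-crp * x) + emissivity * x   # illustrative surrogate

        x = np.linspace(0.0, 1.0, 50)
        true_params = (0.5, 2.0, 0.8)
        measured = forward_model(true_params, x)                # synthetic "measurement"

        def objective(params):
            return np.sum((forward_model(params, x) - measured) ** 2)

        bounds = [(0.0, 1.0), (0.1, 5.0), (0.1, 1.0)]           # albedo, N, emissivity
        result = differential_evolution(objective, bounds, seed=0)
        print(result.x)   # best-fit parameters, close to the true values for this toy data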

  16. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)

  17. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 8. Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article Volume 2 ... Author Affiliations. R K Shyamasundar1. Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India ...

  18. Volume Ray Casting with Peak Finding and Differential Sampling

    KAUST Repository

    Knoll, A.; Hijazi, Y.; Westerteiger, R.; Schott, M.; Hansen, C.; Hagen, H.

    2009-01-01

    classification. In this paper, we introduce a method for rendering such features by explicitly solving for isovalues within the volume rendering integral. In addition, we present a sampling strategy inspired by ray differentials that automatically matches

  19. Method of producing hydrogen, and rendering a contaminated biomass inert

    Science.gov (United States)

    Bingham, Dennis N [Idaho Falls, ID; Klingler, Kerry M [Idaho Falls, ID; Wilding, Bruce M [Idaho Falls, ID

    2010-02-23

    A method for rendering a contaminated biomass inert includes providing a first composition, providing a second composition, reacting the first and second compositions together to form an alkaline hydroxide, providing a contaminated biomass feedstock and reacting the alkaline hydroxide with the contaminated biomass feedstock to render the contaminated biomass feedstock inert and further producing hydrogen gas, and a byproduct that includes the first composition.

  20. Extreme Simplification and Rendering of Point Sets using Algebraic Multigrid

    NARCIS (Netherlands)

    Reniers, Dennie; Telea, Alexandru

    2005-01-01

    We present a novel approach for extreme simplification of point set models in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However efficient, simple primitives are less effective in approximating large surface areas. A large

  1. Light Field Rendering for Head Mounted Displays using Pixel Reprojection

    DEFF Research Database (Denmark)

    Hansen, Anne Juhler; Klein, Jákup; Kraus, Martin

    2017-01-01

    Light field displays have advantages over traditional stereoscopic head mounted displays, for example, because they can overcome the vergence-accommodation conflict. However, rendering light fields can be a heavy task for computers due to the number of images that have to be rendered. Since much ...

  2. Media Presentation Synchronisation for Non-monolithic Rendering Architectures

    NARCIS (Netherlands)

    I. Vaishnavi (Ishan); D.C.A. Bulterman (Dick); P.S. Cesar Garcia (Pablo Santiago); B. Gao (Bo)

    2007-01-01

    Non-monolithic renderers are physically distributed media playback engines. Non-monolithic renderers may use a number of different underlying network connection types to transmit media items belonging to a presentation. There is therefore a need for a media based and inter-network- type

  3. Volume definition system for treatment planning

    International Nuclear Information System (INIS)

    Alakuijala, Jyrki; Pekkarinen, Ari; Puurunen, Harri

    1997-01-01

    Purpose: Volume definition is a difficult and time-consuming task in 3D treatment planning. We have studied a systems approach for constructing an efficient and reliable set of tools for volume definition. Our intent is to automate the definition of the body outline, air cavities and bone volumes, and to accelerate the definition of other anatomical structures. An additional focus is on assisting in definition of CTV and PTV. The primary goals of this work are to cut down the time used in contouring and to improve the accuracy of volume definition. Methods: We used the following tool categories: manual, semi-automatic, automatic, structure management, target volume definition, and visualization tools. The manual tools include mouse contouring tools with contour editing possibilities and painting tools with a scalable circular brush and an intelligent brush. The intelligent brush adapts its shape to CT value boundaries. The semi-automatic tools consist of edge point chaining, classical 3D region growing of a single segment and competitive volume growing of multiple segments. We tuned the volume growing function to take into account both local and global region image values, local volume homogeneity, and distance. Heuristic seeding followed by competitive volume growing finds the body outline, couch and air automatically. The structure management tool stores ICD-O coded structures in a database. The codes have predefined volume growing parameters and thus can adapt the volume-growing dissimilarity function to different volume types. The target definition tools include elliptical 3D automargin for CTV to PTV transformation and target volume interpolation and extrapolation by distance transform. Both the CTV and the PTV can overlap with anatomical structures. Visualization tools show the volumes as contours or color wash overlaid on an image and display voxel rendering or translucent triangle mesh rendering in 3D. Results: The competitive volume growing speeds up the
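
    A minimal sketch of the 3D region-growing idea behind the semi-automatic tools is given below: starting from a seed voxel, neighbours are added while their values stay within a tolerance of the seed value. The 6-connectivity and the purely intensity-based criterion are simplifying assumptions; the system described above also uses homogeneity and distance terms.

        # Minimal 3-D region growing from a seed voxel (intensity tolerance, 6-connectivity).
        import numpy as np
        from collections import deque

        def region_grow(volume, seed, tol=50.0):
            grown = np.zeros(volume.shape, dtype=bool)
            seed_val = volume[seed]
            queue = deque([seed])
            grown[seed] = True
            offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
            while queue:
                z, y, x = queue.popleft()
                for dz, dy, dx in offsets:
                    n = (z + dz, y + dy, x + dx)
                    if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                            and not grown[n] and abs(volume[n] - seed_val) <= tol:
                        grown[n] = True
                        queue.append(n)
            return grown

        ct = np.random.normal(0.0, 10.0, (40, 40, 40))
        ct[10:30, 10:30, 10:30] += 300.0              # a bright synthetic "structure"
        mask = region_grow(ct, seed=(20, 20, 20), tol=100.0)
        print(mask.sum())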

  4. Drishti: a volume exploration and presentation tool

    Science.gov (United States)

    Limaye, Ajay

    2012-10-01

    Among several rendering techniques for volumetric data, direct volume rendering is a powerful visualization tool for a wide variety of applications. This paper describes the major features of a hardware-based volume exploration and presentation tool, Drishti. The word Drishti stands for vision or insight in Sanskrit, an ancient Indian language. Drishti is a cross-platform open-source volume rendering system that delivers high-quality, state-of-the-art renderings. Drishti's features include, but are not limited to, production-quality rendering, volume sculpting, multi-resolution zooming, transfer function blending, profile generation, measurement tools, mesh generation, stereo/anaglyph/crosseye renderings. Ultimately, Drishti provides an intuitive and powerful interface for choreographing animations.

  5. SU-C-BRA-05: Delineating High-Dose Clinical Target Volumes for Head and Neck Tumors Using Machine Learning Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Cardenas, C [Department of Radiation Physics, The University of Texas M.D. Anderson Cancer Center, Houston, TX (United States); The University of Texas Graduate School of Biomedical Sciences, Houston, TX (United States); Wong, A [Department of Radiation Oncology, The University of Texas M.D. Anderson Cancer Center, Houston, TX (United States); School of Medicine, The University of Texas Health Sciences Center at San Antonio, San Antonio, TX (United States); Mohamed, A; Fuller, C [Department of Radiation Oncology, The University of Texas M.D. Anderson Cancer Center, Houston, TX (United States); Yang, J; Court, L; Aristophanous, M [Department of Radiation Physics, The University of Texas M.D. Anderson Cancer Center, Houston, TX (United States); Rao, A [Department of Bioinformatics and Computational Biology, The University of Texas M.D. Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: To develop and test population-based machine learning algorithms for delineating high-dose clinical target volumes (CTVs) in H&N tumors. Automating and standardizing the contouring of CTVs can reduce both physician contouring time and inter-physician variability, which is one of the largest sources of uncertainty in H&N radiotherapy. Methods: Twenty-five node-negative patients treated with definitive radiotherapy were selected (6 right base of tongue, 11 left and 9 right tonsil). All patients had GTV and CTVs manually contoured by an experienced radiation oncologist prior to treatment. This contouring process, which is driven by anatomical, pathological, and patient specific information, typically results in non-uniform margin expansions about the GTV. Therefore, we tested two methods to delineate high-dose CTV given a manually-contoured GTV: (1) regression-support vector machines(SVM) and (2) classification-SVM. These models were trained and tested on each patient group using leave-one-out cross-validation. The volume difference(VD) and Dice similarity coefficient(DSC) between the manual and auto-contoured CTV were calculated to evaluate the results. Distances from GTV-to-CTV were computed about each patient’s GTV and these distances, in addition to distances from GTV to surrounding anatomy in the expansion direction, were utilized in the regression-SVM method. The classification-SVM method used categorical voxel-information (GTV, selected anatomical structures, else) from a 3×3×3cm3 ROI centered about the voxel to classify voxels as CTV. Results: Volumes for the auto-contoured CTVs ranged from 17.1 to 149.1cc and 17.4 to 151.9cc; the average(range) VD between manual and auto-contoured CTV were 0.93 (0.48–1.59) and 1.16(0.48–1.97); while average(range) DSC values were 0.75(0.59–0.88) and 0.74(0.59–0.81) for the regression-SVM and classification-SVM methods, respectively. Conclusion: We developed two novel machine learning methods to delineate
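
    The classification-SVM idea can be sketched as follows: each voxel is described by a small feature vector and a support vector machine labels it as CTV or not. The two distance features and the synthetic labels below are illustrative stand-ins for the categorical 3 x 3 x 3 cm neighbourhood information used in the abstract.

        # Voxel-wise SVM classification sketch with stand-in distance features.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n = 2000
        dist_gtv = rng.uniform(0.0, 30.0, n)       # mm from the GTV surface (hypothetical)
        dist_oar = rng.uniform(0.0, 30.0, n)       # mm from a nearby structure (hypothetical)
        X = np.column_stack([dist_gtv, dist_oar])
        # Synthetic "ground truth": CTV voxels lie close to the GTV but clear of the structure.
        y = ((dist_gtv < 10.0) & (dist_oar > 5.0)).astype(int)

        clf = SVC(kernel='rbf', C=1.0, gamma='scale')
        clf.fit(X[:1500], y[:1500])
        print("held-out accuracy:", clf.score(X[1500:], y[1500:]))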

  6. SU-C-BRA-05: Delineating High-Dose Clinical Target Volumes for Head and Neck Tumors Using Machine Learning Algorithms

    International Nuclear Information System (INIS)

    Cardenas, C; Wong, A; Mohamed, A; Fuller, C; Yang, J; Court, L; Aristophanous, M; Rao, A

    2016-01-01

    Purpose: To develop and test population-based machine learning algorithms for delineating high-dose clinical target volumes (CTVs) in H&N tumors. Automating and standardizing the contouring of CTVs can reduce both physician contouring time and inter-physician variability, which is one of the largest sources of uncertainty in H&N radiotherapy. Methods: Twenty-five node-negative patients treated with definitive radiotherapy were selected (6 right base of tongue, 11 left and 9 right tonsil). All patients had GTV and CTVs manually contoured by an experienced radiation oncologist prior to treatment. This contouring process, which is driven by anatomical, pathological, and patient specific information, typically results in non-uniform margin expansions about the GTV. Therefore, we tested two methods to delineate high-dose CTV given a manually-contoured GTV: (1) regression-support vector machines(SVM) and (2) classification-SVM. These models were trained and tested on each patient group using leave-one-out cross-validation. The volume difference(VD) and Dice similarity coefficient(DSC) between the manual and auto-contoured CTV were calculated to evaluate the results. Distances from GTV-to-CTV were computed about each patient’s GTV and these distances, in addition to distances from GTV to surrounding anatomy in the expansion direction, were utilized in the regression-SVM method. The classification-SVM method used categorical voxel-information (GTV, selected anatomical structures, else) from a 3×3×3cm3 ROI centered about the voxel to classify voxels as CTV. Results: Volumes for the auto-contoured CTVs ranged from 17.1 to 149.1cc and 17.4 to 151.9cc; the average(range) VD between manual and auto-contoured CTV were 0.93 (0.48–1.59) and 1.16(0.48–1.97); while average(range) DSC values were 0.75(0.59–0.88) and 0.74(0.59–0.81) for the regression-SVM and classification-SVM methods, respectively. Conclusion: We developed two novel machine learning methods to delineate

  7. Temporally rendered automatic cloud extraction (TRACE) system

    Science.gov (United States)

    Bodrero, Dennis M.; Yale, James G.; Davis, Roger E.; Rollins, John M.

    1999-10-01

    Smoke/obscurant testing requires that 2D cloud extent be extracted from visible and thermal imagery. These data are used alone or in combination with 2D data from other aspects to make 3D calculations of cloud properties, including dimensions, volume, centroid, travel, and uniformity. Determining cloud extent from imagery has historically been a time-consuming manual process. To reduce time and cost associated with smoke/obscurant data processing, automated methods to extract cloud extent from imagery were investigated. The TRACE system described in this paper was developed and implemented at U.S. Army Dugway Proving Ground, UT by the Science and Technology Corporation--Acuity Imaging Incorporated team with Small Business Innovation Research funding. TRACE uses dynamic background subtraction and 3D fast Fourier transform as primary methods to discriminate the smoke/obscurant cloud from the background. TRACE has been designed to run on a PC-based platform using Windows. The PC-Windows environment was chosen for portability, to give TRACE the maximum flexibility in terms of its interaction with peripheral hardware devices such as video capture boards, removable media drives, network cards, and digital video interfaces. Video for Windows provides all of the necessary tools for the development of the video capture utility in TRACE and allows for interchangeability of video capture boards without any software changes. TRACE is designed to take advantage of future upgrades in all aspects of its component hardware. A comparison of cloud extent determined by TRACE with manual method is included in this paper.
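
    The dynamic background-subtraction step at the core of TRACE can be illustrated with a running-median background estimate and a simple difference threshold; the FFT-based discrimination stage and the real imagery are not reproduced here.

        # Background-subtraction sketch: median of earlier frames vs. the current frame.
        import numpy as np

        def cloud_mask(frames, threshold=20.0):
            """frames: (t, h, w) stack of grey-level images; mask for the last frame."""
            background = np.median(frames[:-1], axis=0)
            difference = np.abs(frames[-1] - background)
            return difference > threshold

        stack = np.random.randint(0, 30, (10, 120, 160)).astype(float)
        stack[-1, 40:80, 60:120] += 60.0               # synthetic "cloud" in the last frame
        print(cloud_mask(stack).sum())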

  8. Light Field Rendering for Head Mounted Displays using Pixel Reprojection

    DEFF Research Database (Denmark)

    Hansen, Anne Juhler; Klein, Jákup; Kraus, Martin

    2017-01-01

    of the information of the different images is redundant, we use pixel reprojection from the corner cameras to compute the remaining images in the light field. We compare the reprojected images with directly rendered images in a user test. In most cases, the users were unable to distinguish the images. In extreme...... cases, the reprojection approach is not capable of creating the light field. We conclude that pixel reprojection is a feasible method for rendering light fields as far as quality of perspective and diffuse shading is concerned, but render time needs to be reduced to make the method practical....

  9. SU-F-T-340: Direct Editing of Dose Volume Histograms: Algorithms and a Unified Convex Formulation for Treatment Planning with Dose Constraints

    Energy Technology Data Exchange (ETDEWEB)

    Ungun, B [Stanford University, Stanford, CA (United States); Stanford University School of Medicine, Stanford, CA (United States); Fu, A; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Boyd, S [Stanford University, Stanford, CA (United States)

    2016-06-15

    Purpose: To develop a procedure for including dose constraints in convex programming-based approaches to treatment planning, and to support dynamic modification of such constraints during planning. Methods: We present a mathematical approach that allows mean dose, maximum dose, minimum dose and dose volume (i.e., percentile) constraints to be appended to any convex formulation of an inverse planning problem. The first three constraint types are convex and readily incorporated. Dose volume constraints are not convex, however, so we introduce a convex restriction that is related to CVaR-based approaches previously proposed in the literature. To compensate for the conservatism of this restriction, we propose a new two-pass algorithm that solves the restricted problem on a first pass and uses this solution to form exact constraints on a second pass. In another variant, we introduce slack variables for each dose constraint to prevent the problem from becoming infeasible when the user specifies an incompatible set of constraints. We implement the proposed methods in Python using the convex programming package cvxpy in conjunction with the open source convex solvers SCS and ECOS. Results: We show, for several cases taken from the clinic, that our proposed method meets specified constraints (often with margin) when they are feasible. Constraints are met exactly when we use the two-pass method, and infeasible constraints are replaced with the nearest feasible constraint when slacks are used. Finally, we introduce ConRad, a Python-embedded free software package for convex radiation therapy planning. ConRad implements the methods described above and offers a simple interface for specifying prescriptions and dose constraints. Conclusion: This work demonstrates the feasibility of using modifiable dose constraints in a convex formulation, making it practical to guide the treatment planning process with interactively specified dose constraints. This work was supported by the
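
    Since the abstract names cvxpy explicitly, the convex mean-, maximum- and minimum-dose constraints it discusses can be illustrated directly; the influence matrices below are random stand-ins, and neither the dose-volume restriction, the two-pass scheme nor ConRad itself is reproduced.

        # Convex inverse-planning sketch with mean-, max- and min-dose constraints (cvxpy).
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        n_beamlets, n_tumor, n_oar = 50, 200, 150
        A_tumor = rng.uniform(0.0, 1.0, (n_tumor, n_beamlets))   # stand-in dose-influence rows
        A_oar = rng.uniform(0.0, 0.1, (n_oar, n_beamlets))

        x = cp.Variable(n_beamlets, nonneg=True)                 # beamlet intensities
        tumor_dose = A_tumor @ x
        oar_dose = A_oar @ x

        objective = cp.Minimize(cp.sum_squares(tumor_dose - 60.0))   # prescribe 60 Gy
        constraints = [
            tumor_dose >= 54.0,                  # minimum-dose constraint (convex)
            cp.sum(oar_dose) / n_oar <= 20.0,    # mean-dose constraint (convex)
            cp.max(oar_dose) <= 35.0,            # maximum-dose constraint (convex)
        ]
        problem = cp.Problem(objective, constraints)
        problem.solve()                          # ECOS or SCS can be passed explicitly
        print(problem.status, round(float(np.mean(tumor_dose.value)), 2))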

  10. SU-F-T-340: Direct Editing of Dose Volume Histograms: Algorithms and a Unified Convex Formulation for Treatment Planning with Dose Constraints

    International Nuclear Information System (INIS)

    Ungun, B; Fu, A; Xing, L; Boyd, S

    2016-01-01

    Purpose: To develop a procedure for including dose constraints in convex programming-based approaches to treatment planning, and to support dynamic modification of such constraints during planning. Methods: We present a mathematical approach that allows mean dose, maximum dose, minimum dose and dose volume (i.e., percentile) constraints to be appended to any convex formulation of an inverse planning problem. The first three constraint types are convex and readily incorporated. Dose volume constraints are not convex, however, so we introduce a convex restriction that is related to CVaR-based approaches previously proposed in the literature. To compensate for the conservatism of this restriction, we propose a new two-pass algorithm that solves the restricted problem on a first pass and uses this solution to form exact constraints on a second pass. In another variant, we introduce slack variables for each dose constraint to prevent the problem from becoming infeasible when the user specifies an incompatible set of constraints. We implement the proposed methods in Python using the convex programming package cvxpy in conjunction with the open source convex solvers SCS and ECOS. Results: We show, for several cases taken from the clinic, that our proposed method meets specified constraints (often with margin) when they are feasible. Constraints are met exactly when we use the two-pass method, and infeasible constraints are replaced with the nearest feasible constraint when slacks are used. Finally, we introduce ConRad, a Python-embedded free software package for convex radiation therapy planning. ConRad implements the methods described above and offers a simple interface for specifying prescriptions and dose constraints. Conclusion: This work demonstrates the feasibility of using modifiable dose constraints in a convex formulation, making it practical to guide the treatment planning process with interactively specified dose constraints. This work was supported by the

  11. An improved optimization algorithm of the three-compartment model with spillover and partial volume corrections for dynamic FDG PET images of small animal hearts in vivo

    Science.gov (United States)

    Li, Yinlin; Kundu, Bijoy K.

    2018-03-01

    The three-compartment model with spillover (SP) and partial volume (PV) corrections has been widely used for noninvasive kinetic parameter studies of dynamic 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography images of small animal hearts in vivo. However, the approach still suffers from estimation uncertainty or slow convergence caused by the commonly used optimization algorithms. The aim of this study was to develop an improved optimization algorithm with better estimation performance. Femoral artery blood samples, image-derived input functions from heart ventricles and myocardial time-activity curves (TACs) were derived from data on 16 C57BL/6 mice obtained from the UCLA Mouse Quantitation Program. Parametric equations of the average myocardium and the blood pool TACs with SP and PV corrections in a three-compartment tracer kinetic model were formulated. A hybrid method integrating artificial immune-system and interior-reflective Newton methods was developed to solve the equations. Two penalty functions and one late time-point tail vein blood sample were used to constrain the objective function. The estimation accuracy of the method was validated by comparing results with experimental values using the errors in the areas under curves (AUCs) of the model corrected input function (MCIF) and the 18F-FDG influx constant Ki. Moreover, the elapsed time was used to measure the convergence speed. The overall AUC error of MCIF for the 16 mice averaged -1.4 ± 8.2%, with correlation coefficients of 0.9706. Similar results can be seen in the overall Ki error percentage, which was 0.4 ± 5.8% with a correlation coefficient of 0.9912. The t-test P value for both showed no significant difference. The mean and standard deviation of the MCIF AUC and Ki percentage errors are lower than those of previously published methods. The computation time of the hybrid method is also several times lower than using just a stochastic
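
    The deterministic half of the hybrid optimiser can be sketched with SciPy's bounded trust-region-reflective least-squares routine (the Python analogue of an interior-reflective Newton step), here fitting a toy bi-exponential model to a synthetic time-activity curve; the immune-system global stage and the spillover/partial-volume terms are not reproduced.

        # Bounded trust-region-reflective fit of a toy bi-exponential uptake model.
        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0.1, 60.0, 120)                       # minutes
        true = np.array([5.0, 0.8, 2.0, 0.05])                # A1, k1, A2, k2 (hypothetical)

        def model(p, t):
            a1, k1, a2, k2 = p
            return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

        rng = np.random.default_rng(1)
        tac = model(true, t) + rng.normal(0.0, 0.05, t.size)  # noisy "measured" TAC

        def residuals(p):
            return model(p, t) - tac

        fit = least_squares(residuals, x0=[1.0, 0.5, 1.0, 0.01],
                            bounds=([0, 0, 0, 0], [20, 5, 20, 1]), method='trf')
        print(np.round(fit.x, 3))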

  12. 3D virtual rendering in thoracoscopic treatment of congenital malformation of the lung

    Directory of Open Access Journals (Sweden)

    Destro F.

    2013-10-01

    Full Text Available Introduction: Congenital malformations of the lung (CML) are rare but potentially dangerous anomalies. Their identification is important in order to define the most appropriate management. Materials and methods: We retrospectively reviewed data from 37 patients affected by CML treated in our Pediatric Surgery Unit in the last four years with minimally invasive surgery (MIS). Results: Prenatal diagnosis was possible in 26/37 patients. Surgery was performed in the first month of life in 3 symptomatic patients and between 6 and 12 months in the others. All patients underwent radiological evaluation prior to thoracoscopic surgery. Images collected were reconstructed using the VR render software. Discussion and conclusions: Volume rendering gives high anatomical resolution and can be useful for guiding the surgical procedure. Thoracoscopy should be the technique of choice because it is safe, effective and feasible. Furthermore, it has the benefits of a minimal-access technique and can be easily performed in children.

  13. Factors affecting extension workers in their rendering of effective ...

    African Journals Online (AJOL)

    Factors affecting extension workers in their rendering of effective service to pre and ... Small, micro and medium entrepreneurs play an important role in economic ... such as production, marketing and management to adequately service the ...

  14. Experiencing "Macbeth": From Text Rendering to Multicultural Performance.

    Science.gov (United States)

    Reisin, Gail

    1993-01-01

    Shows how one teacher used innovative methods in teaching William Shakespeare's "Macbeth." Outlines student assignments including text renderings, rewriting a scene from the play, and creating a multicultural scrapbook for the play. (HB)

  15. Insurance of professional responsibility at medical aid rendering

    Directory of Open Access Journals (Sweden)

    Abyzova N.V.

    2011-12-01

    Full Text Available The article discusses the necessity of adopting a professional responsibility insurance act in the public health service. Such insurance is considered the basic mechanism of compensation in case of harm to a patient during the rendering of medical aid

  16. Beaming teaching application: recording techniques for spatial xylophone sound rendering

    DEFF Research Database (Denmark)

    Markovic, Milos; Madsen, Esben; Olesen, Søren Krarup

    2012-01-01

    BEAMING is a telepresence research project aiming at providing a multimodal interaction between two or more participants located at distant locations. One of the BEAMING applications allows a distant teacher to give a xylophone playing lecture to the students. Therefore, rendering of the xylophon...... to spatial improvements mainly in terms of the Apparent Source Width (ASW). Rendered examples are subjectively evaluated in listening tests by comparing them with binaural recording....

  17. Detection of Prion Proteins and TSE Infectivity in the Rendering and Biodiesel Manufacture Processes

    Energy Technology Data Exchange (ETDEWEB)

    Brown, R.; Keller, B.; Oleschuk, R. [Queen' s University, Kingston, Ontario (Canada)

    2007-03-15

    This paper addresses emerging issues related to monitoring prion proteins and TSE infectivity in the products and waste streams of rendering and biodiesel manufacture processes. Monitoring is critical to addressing the knowledge gaps identified in 'Biodiesel from Specified Risk Material Tallow: An Appraisal of TSE Risks and their Reduction' (IEA's AMF Annex XXX, 2006) that prevent comprehensive risk assessment of TSE infectivity in products and waste. The most important challenge for monitoring TSE risk is the wide variety of sample types, which are generated at different points in the rendering/biodiesel production continuum. Conventional transmissible spongiform encephalopathy (TSE) assays were developed for specified risk material (SRM) and other biological tissues. These, however, are insufficient to address the diverse sample matrices produced in rendering and biodiesel manufacture. This paper examines the sample types expected in rendering and biodiesel manufacture and the implications of applying TSE assay methods to them. The authors then discuss a sample preparation filtration, which has not yet been applied to these sample types, but which has the potential to provide or significantly improve TSE monitoring. The main improvement will come from transfer of the prion proteins from the sample matrix to a matrix compatible with conventional and emerging bioassays. A second improvement will come from preconcentrating the prion proteins, which means transferring proteins from a larger sample volume into a smaller volume for analysis to provide greater detection sensitivity. This filtration method may also be useful for monitoring other samples, including wash waters and other waste streams, which may contain SRM, including those from abattoirs and on-farm operations. Finally, there is a discussion of emerging mass spectrometric methods, which Prusiner and others have shown to be suitable for detection and characterisation of prion proteins (Stahl

  18. Clouds and the Earth's Radiant Energy System (CERES) algorithm theoretical basis document. volume 2; Geolocation, calibration, and ERBE-like analyses (subsystems 1-3)

    Science.gov (United States)

    Wielicki, B. A. (Principal Investigator); Barkstrom, B. R. (Principal Investigator); Charlock, T. P.; Baum, B. A.; Green, R. N.; Minnis, P.; Smith, G. L.; Coakley, J. A.; Randall, D. R.; Lee, R. B., III

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 2 details the techniques used to geolocate and calibrate the CERES scanning radiometer measurements of shortwave and longwave radiance to invert the radiances to top-of-the-atmosphere (TOA) and surface fluxes following the Earth Radiation Budget Experiment (ERBE) approach, and to average the fluxes over various time and spatial scales to produce an ERBE-like product. Spacecraft ephemeris and sensor telemetry are used with calibration coefficients to produce a chronologically ordered data product called bidirectional scan (BDS) radiances. A spatially organized instrument Earth scan product is developed for the cloud-processing subsystem. The ERBE-like inversion subsystem converts BDS radiances to unfiltered instantaneous TOA and surface fluxes. The TOA fluxes are determined by using established ERBE techniques. Hourly TOA fluxes are computed from the instantaneous values by using ERBE methods. Hourly surface fluxes are estimated from TOA fluxes by using simple parameterizations based on recent research. The averaging process produces daily, monthly-hourly, and monthly means of TOA and surface fluxes at various scales. This product provides a continuation of the ERBE record.

  19. RenderSelect: a Cloud Broker Framework for Cloud Renderfarm Services

    OpenAIRE

    Ruby, Annette J; Aisha, Banu W; Subash, Chandran P

    2016-01-01

    In 3D studios, animation scene files undergo a process called rendering, in which the 3D wireframe models are converted into 3D photorealistic images. As the rendering process is both computationally intensive and time-consuming, cloud-based rendering in cloud render farms is gaining popularity among animators. Though cloud render farms offer many benefits, animators hesitate to move from their traditional offline rendering to cloud services based render ...

  20. Architecture for high performance stereoscopic game rendering on Android

    Science.gov (United States)

    Flack, Julien; Sanderson, Hugh; Shetty, Sampath

    2014-03-01

    Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low power processors. Such systems are now being integrated directly into the next generation of 3D TVs potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high profile titles on established platforms like Windows PC and PS3 there is a lack of GPU independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real-time. The architecture includes a method of analyzing 2D games and using rule based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique to separate the views in the depth domain and render directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing the performance in comparison to more traditional render techniques, including depth based image rendering, both in terms of frame rates and impact on battery consumption.

  1. [Rendering surgical care to wounded with neck wounds in an armed conflict].

    Science.gov (United States)

    Samokhvalov, I M; Zavrazhnov, A A; Fakhrutdinov, A M; Sychev, M I

    2001-10-01

    The results of rendering medical care (first aid, qualified and specialized care) to 172 servicemen with neck injuries in the Republic of Chechnya between 09.08.1999 and 28.07.2000 were analyzed. Based on the results of this analysis and on experience in treating casualties, the authors discuss the sequence and volume of surgical care in this group with reference to the available medical evacuation system and the surgical tactics at the stage of specialized care. They also consider the peculiarities of operative treatment of casualties with neck injuries.

  2. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
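
    The multiple importance sampling combination mentioned above follows Veach's balance heuristic; the sketch below applies it to a toy 1-D integrand rather than the SSAO occlusion integral, combining a stratified uniform strategy with an importance strategy.

        # Multiple importance sampling with the balance heuristic on a toy integrand.
        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda x: 3.0 * x ** 2                 # integral over [0, 1] is exactly 1
        p1 = lambda x: np.ones_like(x)             # uniform pdf
        p2 = lambda x: 2.0 * x                     # importance pdf ~ x
        n = 512

        u = (np.arange(n) + rng.random(n)) / n     # stratified draws, uniform strategy
        x1 = u
        x2 = np.sqrt(1.0 - rng.random(n))          # inverse-CDF draws from p2, in (0, 1]

        def balance_weight(x, p_self, p_other):
            return p_self(x) / (p_self(x) + p_other(x))

        estimate = (np.mean(balance_weight(x1, p1, p2) * f(x1) / p1(x1)) +
                    np.mean(balance_weight(x2, p2, p1) * f(x2) / p2(x2)))
        print(round(float(estimate), 4))           # close to 1.0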

  3. CT portography by multidetector helical CT. Comparison of three rendering models

    International Nuclear Information System (INIS)

    Nakayama, Yoshiharu; Imuta, Masanori; Funama, Yoshinori; Kadota, Masataka; Utsunomiya, Daisuke; Shiraishi, Shinya; Hayashida, Yoshiko; Yamashita, Yasuyuki

    2002-01-01

    The purpose of this study was to assess the value of multidetector CT portography in visualizing varices and portosystemic collaterals in comparison with conventional portography, and to compare the visualizations obtained by three rendering models (volume rendering, VR; minimum intensity projection, MIP; and shaded surface display, SSD). A total of 46 patients with portal hypertension were examined by CT and conventional portography for evaluation of portosystemic collaterals. CT portography was performed by multidetector CT (MD-CT) scanner with a slice thickness of 2.5 mm and table feed of 7.5 mm. Three types of CT portographic models were generated and compared with transarterial portography. Among 46 patients, 48 collaterals were identified on CT transverse images, while 38 collaterals were detected on transarterial portography. Forty-four of 48 collaterals identified on CT transverse images were visualized with the MIP model, while 34 and 29 collaterals were visualized by the VR and SSD methods, respectively. The average CT value for the portal vein and varices was 198 HU with data acquisition of 50 sec after contrast material injection. CT portography by multidetector CT provides excellent images in the visualization of portosystemic collaterals. The images of collaterals produced by MD-CT are superior to those of transarterial portography. Among the three rendering techniques, MIP provides the best visualization of portosystemic collaterals. (author)
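
    Projection renderings such as those compared above reduce, in their simplest form, to a single reduction along the viewing axis; the sketch below shows maximum- and minimum-intensity projections of a synthetic volume standing in for CT data.

        # Intensity-projection rendering of a volume along one axis (synthetic data).
        import numpy as np

        def intensity_projection(volume, axis=2, mode='max'):
            reduce = np.max if mode == 'max' else np.min
            return reduce(volume, axis=axis)

        ct = np.random.normal(40.0, 20.0, (256, 256, 120))    # pseudo Hounsfield units
        ct[100:140, 100:140, :] += 160.0                      # a bright synthetic "vessel"
        mip = intensity_projection(ct, axis=2, mode='max')
        minip = intensity_projection(ct, axis=2, mode='min')
        print(mip.shape, round(float(mip.max()), 1), round(float(minip.min()), 1))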

  4. CT portography by multidetector helical CT. Comparison of three rendering models

    Energy Technology Data Exchange (ETDEWEB)

    Nakayama, Yoshiharu; Imuta, Masanori; Funama, Yoshinori; Kadota, Masataka; Utsunomiya, Daisuke; Shiraishi, Shinya; Hayashida, Yoshiko; Yamashita, Yasuyuki [Kumamoto Univ. (Japan). School of Medicine

    2002-12-01

    The purpose of this study was to assess the value of multidetector CT portography in visualizing varices and portosystemic collaterals in comparison with conventional portography, and to compare the visualizations obtained by three rendering models (volume rendering, VR; minimum intensity projection, MIP; and shaded surface display, SSD). A total of 46 patients with portal hypertension were examined by CT and conventional portography for evaluation of portosystemic collaterals. CT portography was performed by multidetector CT (MD-CT) scanner with a slice thickness of 2.5 mm and table feed of 7.5 mm. Three types of CT portographic models were generated and compared with transarterial portography. Among 46 patients, 48 collaterals were identified on CT transverse images, while 38 collaterals were detected on transarterial portography. Forty-four of 48 collaterals identified on CT transverse images were visualized with the MIP model, while 34 and 29 collaterals were visualized by the VR and SSD methods, respectively. The average CT value for the portal vein and varices was 198 HU with data acquisition of 50 sec after contrast material injection. CT portography by multidetector CT provides excellent images in the visualization of portosystemic collaterals. The images of collaterals produced by MD-CT are superior to those of transarterial portography. Among the three rendering techniques, MIP provides the best visualization of portosystemic collaterals. (author)

  5. Chromium Renderserver: Scalable and Open Source Remote Rendering Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Brian; Ahern, Sean; Bethel, E. Wes; Brugger, Eric; Cook, Rich; Daniel, Jamison; Lewis, Ken; Owen, Jens; Southard, Dale

    2007-12-01

    Chromium Renderserver (CRRS) is software infrastructure that provides the ability for one or more users to run and view image output from unmodified, interactive OpenGL and X11 applications on a remote, parallel computational platform equipped with graphics hardware accelerators via industry-standard Layer 7 network protocols and client viewers. The new contributions of this work include a solution to the problem of synchronizing X11 and OpenGL command streams, remote delivery of parallel hardware-accelerated rendering, and a performance analysis of several different optimizations that are generally applicable to a variety of rendering architectures. CRRS is fully operational, Open Source software.

  6. A contrast-oriented algorithm for FDG-PET-based delineation of tumour volumes for the radiotherapy of lung cancer: derivation from phantom measurements and validation in patient data

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, Andrea; Hellwig, Dirk; Kirsch, Carl-Martin; Nestle, Ursula [Saarland University Medical Center, Department of Nuclear Medicine, Homburg (Germany); Kremp, Stephanie; Ruebe, Christian [Saarland University Medical Center, Department of Radiotherapy, Homburg (Germany)

    2008-11-15

    An easily applicable algorithm for the FDG-PET-based delineation of tumour volumes for the radiotherapy of lung cancer was developed by phantom measurements and validated in patient data. PET scans were performed (ECAT-ART tomograph) on two cylindrical phantoms (phan1, phan2) containing glass spheres of different volumes (7.4-258 ml) which were filled with identical FDG concentrations. Gradually increasing the activity of the fillable background, signal-to-background ratios from 33:1 to 2.5:1 were realised. The mean standardised uptake value (SUV) of the region-of-interest (ROI) surrounded by a 70% isocontour (mSUV70) was used to represent the FDG accumulation of each sphere (or tumour). Image contrast was defined as: C = (mSUV70 - BG)/BG, where BG is the mean background SUV. For the spheres of phan1, the threshold SUVs (TS) best matching the known sphere volumes were determined. A regression function representing the relationship between TS/(mSUV70 - BG) and C was calculated and used for delineation of the spheres in phan2 and the gross tumour volumes (GTVs) of eight primary lung tumours. These GTVs were compared to those defined using CT. The relationship between TS/(mSUV70 - BG) and C is best described by an inverse regression function which can be converted to the linear relationship TS = a × mSUV70 + b × BG. Using this algorithm, the volumes delineated in phan2 differed by only -0.4 to +0.7 mm in radius from the true ones, whilst the PET-GTVs differed by only -0.7 to +1.2 mm compared with the values determined by CT. By the contrast-oriented algorithm presented in this study, a PET-based delineation of GTVs for primary tumours of lung cancer patients is feasible. (orig.)
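
    The contrast-oriented rule TS = a × mSUV70 + b × BG can be applied as sketched below; the coefficients a and b are placeholders rather than the published regression values, and the uptake map is synthetic.

        # Contrast-oriented PET thresholding sketch; a and b are hypothetical coefficients.
        import numpy as np

        def pet_gtv_mask(suv, background, a=0.5, b=0.6):
            roi70 = suv >= 0.7 * suv.max()                    # 70% isocontour ROI
            m_suv70 = suv[roi70].mean()
            threshold = a * m_suv70 + b * background          # TS = a*mSUV70 + b*BG
            return suv >= threshold

        suv = np.random.normal(1.0, 0.1, (64, 64, 32))        # background uptake
        suv[28:36, 28:36, 12:20] = np.random.normal(6.0, 0.3, (8, 8, 8))   # "tumour"
        mask = pet_gtv_mask(suv, background=1.0)
        print(mask.sum())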

  7. 7 CFR 54.1016 - Advance information concerning service rendered.

    Science.gov (United States)

    2010-01-01

    ... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND THE EGG PRODUCTS INSPECTION ACT... rendered. Upon request of any applicant, all or any part of the contents of any report issued to the...

  8. Matching rendered and real world images by digital image processing

    Science.gov (United States)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real-world images with those rendered by virtual-space software reveals a more or less visible mismatch in image quality. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras that introduce image degradation from factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color-pattern demosaicing, etc. The effect of all those image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object by the system PSF, its characterization shows the amount of image degradation added to any taken picture. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match both virtual and real world image qualities. The system MTF is determined by the slanted edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
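
    The matching step itself (blurring the rendered image with a Gaussian approximation of the camera PSF) is compactly expressed with SciPy; the sigma used below is a placeholder, whereas in the paper it follows from the slanted-edge MTF measurement.

        # Blur a rendered image with a Gaussian PSF approximation (placeholder sigma).
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def match_rendered_to_camera(rendered, sigma_px=1.2):
            """rendered: float image in [0, 1]."""
            return gaussian_filter(rendered, sigma=sigma_px)

        rendered = np.zeros((200, 200))
        rendered[:, 100:] = 1.0                     # a perfect rendered edge
        blurred = match_rendered_to_camera(rendered, sigma_px=1.2)
        print(round(float(blurred[100, 100]), 3))   # the edge is no longer a hard step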

  9. The effects of multiview depth video compression on multiview rendering

    NARCIS (Netherlands)

    Merkle, P.; Morvan, Y.; Smolic, A.; Farin, D.S.; Mueller, K.; With, de P.H.N.; Wiegang, T.

    2009-01-01

    This article investigates the interaction between different techniques for depth compression and view synthesis rendering with multiview video plus scene depth data. Two different approaches for depth coding are compared, namely H.264/MVC, using temporal and inter-view reference images for efficient

  10. The Peshitta Rendering of Psalm 25: Spelling, Synonyms, and Syntax’

    NARCIS (Netherlands)

    Dyk, J.W.; Loopstra, J.; Sokoloff, M.

    2013-01-01

    The very act of making a translation implies that the rendered text will differ from the source text. The underlying presupposition is that the grammar, syntax, and semantics of the source and target languages are sufficiently divergent as to warrant a translation. Translations differ in how close

  11. Remote parallel rendering for high-resolution tiled display walls

    KAUST Repository

    Nachbaur, Daniel

    2014-11-01

    © 2014 IEEE. We present a complete, robust and simple to use hardware and software stack delivering remote parallel rendering of complex geometrical and volumetric models to high resolution tiled display walls in a production environment. We describe the setup and configuration, present preliminary benchmarks showing interactive framerates, and describe our contributions for a seamless integration of all the software components.

  12. Remote parallel rendering for high-resolution tiled display walls

    KAUST Repository

    Nachbaur, Daniel; Dumusc, Raphael; Bilgili, Ahmet; Hernando, Juan; Eilemann, Stefan

    2014-01-01

    © 2014 IEEE. We present a complete, robust and simple to use hardware and software stack delivering remote parallel rendering of complex geometrical and volumetric models to high resolution tiled display walls in a production environment. We describe the setup and configuration, present preliminary benchmarks showing interactive framerates, and describe our contributions for a seamless integration of all the software components.

  13. FluoRender: An application of 2D image space methods for 3D and 4D confocal microscopy data visualization in neurobiology research

    KAUST Repository

    Wan, Yong; Otsuna, Hideo; Chien, Chi-Bin; Hansen, Charles

    2012-01-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data. © 2012 IEEE.

  14. FluoRender: An application of 2D image space methods for 3D and 4D confocal microscopy data visualization in neurobiology research

    KAUST Repository

    Wan, Yong

    2012-02-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data. © 2012 IEEE.
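
    Two of the 2D image-space operations mentioned above, tone mapping and compositing, can be written generically as below (a Reinhard-style curve and the standard 'over' operator); this is not FluoRender's implementation.

        # Generic 2-D image-space operations: global tone mapping and 'over' compositing.
        import numpy as np

        def tone_map(hdr):
            """Reinhard-style global operator: compresses highlights into [0, 1)."""
            return hdr / (1.0 + hdr)

        def composite_over(front_rgb, front_alpha, back_rgb):
            """Standard 'over' operator for straight (non-premultiplied) RGB layers."""
            a = front_alpha[..., None]
            return a * front_rgb + (1.0 - a) * back_rgb

        layer_a = np.random.rand(128, 128, 3) * 4.0      # bright rendered channel
        alpha_a = np.random.rand(128, 128)
        layer_b = np.random.rand(128, 128, 3)
        image = composite_over(tone_map(layer_a), alpha_a, layer_b)
        print(image.shape, float(image.max()) <= 1.0)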

  15. A Real-Time Sound Field Rendering Processor

    Directory of Open Access Journals (Sweden)

    Tan Yiyu

    2017-12-01

    Full Text Available Real-time sound field renderings are computationally intensive and memory-intensive. Traditional rendering systems based on computer simulations are limited by memory bandwidth and arithmetic units. The computation is time-consuming, and the sample rate of the output sound is low because of the long computation time at each time step. In this work, a processor with a hybrid architecture is proposed to speed up computation and improve the sample rate of the output sound, and an interface is developed for system scalability through simply cascading many chips to enlarge the simulated area. To render a three-minute Beethoven wave sound in a small shoe-box room with dimensions of 1.28 m × 1.28 m × 0.64 m, the field-programmable gate array (FPGA)-based prototype machine with the proposed architecture carries out the sound rendering at run-time while the software simulation with the OpenMP parallelization takes about 12.70 min on a personal computer (PC) with 32 GB random access memory (RAM) and an Intel i7-6800K six-core processor running at 3.4 GHz. The throughput in the software simulation is about 194 M grids/s while it is 51.2 G grids/s in the prototype machine even if the clock frequency of the prototype machine is much lower than that of the PC. The rendering processor with a processing element (PE) and interfaces consumes about 238,515 gates after fabrication with the 0.18 µm process technology from ROHM Semiconductor Co., Ltd. (Kyoto, Japan), and the power consumption is about 143.8 mW.
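
    The kind of grid update such a processor accelerates can be sketched as one leapfrog time step of the 3D scalar wave equation on a uniform grid; this generic finite-difference scheme is an assumption for illustration and does not reproduce the paper's hybrid hardware architecture.

        # One leapfrog FDTD step of the 3-D scalar wave equation on a uniform grid.
        import numpy as np

        def fdtd_step(p, p_prev, courant2):
            """p, p_prev: pressure at t and t-dt; returns pressure at t+dt."""
            lap = (-6.0 * p[1:-1, 1:-1, 1:-1]
                   + p[2:, 1:-1, 1:-1] + p[:-2, 1:-1, 1:-1]
                   + p[1:-1, 2:, 1:-1] + p[1:-1, :-2, 1:-1]
                   + p[1:-1, 1:-1, 2:] + p[1:-1, 1:-1, :-2])
            p_next = p.copy()
            p_next[1:-1, 1:-1, 1:-1] = (2.0 * p[1:-1, 1:-1, 1:-1]
                                        - p_prev[1:-1, 1:-1, 1:-1]
                                        + courant2 * lap)
            return p_next            # boundary layer left unchanged (rigid walls)

        nx = 64
        p_prev = np.zeros((nx, nx, nx))
        p = np.zeros((nx, nx, nx))
        p[nx // 2, nx // 2, nx // 2] = 1.0           # impulse source
        for _ in range(100):
            p, p_prev = fdtd_step(p, p_prev, courant2=0.3), p   # below the 1/3 stability limit
        print(round(float(np.abs(p).max()), 4))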

  16. Water driven leaching of biocides from paints and renders

    DEFF Research Database (Denmark)

    Bester, Kai; Vollertsen, Jes; Bollmann, Ulla E

    ) were so high, that rather professional urban gardening (flower and greenhouses) than handling of biocides from construction materials seem to be able to explain the findings. While the use in agriculture is restricted, the use in greenhouses is currently considered legal in Denmark. Leaching....../partitioning: Considering material properties, it was found that, for all of the compounds, the sorption (and leaching) is highly pH-dependent. It must be taken into account that the pH in the porewater of the tested render materials is between 9 and 10 while the rainwater is around 5, thus making prediction difficult...... at this stage. For some of the compounds the sorption is dependent on the amount of polymer in the render, while it is only rarely of importance which polymer is used. Considering the interaction of weather with the leaching of biocides from real walls it turned out that a lot of parameters such as irradiation...

  17. Tactile display for virtual 3D shape rendering

    CERN Document Server

    Mansutti, Alessandro; Bordegoni, Monica; Cugini, Umberto

    2017-01-01

    This book describes a novel system for the simultaneous visual and tactile rendering of product shapes which allows designers to simultaneously touch and see new product shapes during the conceptual phase of product development. This system offers important advantages, including potential cost and time savings, compared with the standard product design process in which digital 3D models and physical prototypes are often repeatedly modified until an optimal design is achieved. The system consists of a tactile display that is able to represent, within a real environment, the shape of a product. Designers can explore the rendered surface by touching curves lying on the product shape, selecting those curves that can be considered style features and evaluating their aesthetic quality. In order to physically represent these selected curves, a flexible surface is modeled by means of servo-actuated modules controlling a physical deforming strip. The tactile display is designed so as to be portable, low cost, modular,...

  18. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek; Skiadopoulos, Spiros; Kalnis, Panos

    2017-01-01

    : they either require excessive memory (i.e., quadratic to the size of the input graph) or perform unnecessary computations rendering them prohibitively slow. We propose iCentral; a novel incremental algorithm for computing betweenness centrality in evolving

  19. Chromium: A Stream-Processing Framework for Interactive Rendering on Clusters

    International Nuclear Information System (INIS)

    Humphreys, G.; Houston, M.; Ng, Y.-R.; Frank, R.; Ahern, S.; Kirchner, P.D.; Klosowski, J.T.

    2002-01-01

    We describe Chromium, a system for manipulating streams of graphics API commands on clusters of workstations. Chromium's stream filters can be arranged to create sort-first and sort-last parallel graphics architectures that, in many cases, support the same applications while using only commodity graphics accelerators. In addition, these stream filters can be extended programmatically, allowing the user to customize the stream transformations performed by nodes in a cluster. Because our stream processing mechanism is completely general, any cluster-parallel rendering algorithm can be either implemented on top of or embedded in Chromium. In this paper, we give examples of real-world applications that use Chromium to achieve good scalability on clusters of workstations, and describe other potential uses of this stream processing technology. By completely abstracting the underlying graphics architecture, network topology, and API command processing semantics, we allow a variety of applications to run in different environments

  20. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  1. 3D cinematic rendering of the calvarium, maxillofacial structures, and skull base: preliminary observations.

    Science.gov (United States)

    Rowe, Steven P; Zinreich, S James; Fishman, Elliot K

    2018-06-01

    Three-dimensional (3D) visualizations of volumetric data from CT have gained widespread clinical acceptance and are an important method for evaluating complex anatomy and pathology. Recently, cinematic rendering (CR), a new 3D visualization methodology, has become available. CR utilizes a lighting model that allows for the production of photorealistic images from isotropic voxel data. Given how new this technique is, studies to evaluate its clinical utility and any potential advantages or disadvantages relative to other 3D methods such as volume rendering have yet to be published. In this pictorial review, we provide examples of normal calvarial, maxillofacial, and skull base anatomy and pathological conditions that highlight the potential for CR images to aid in patient evaluation and treatment planning. The highly detailed images and nuanced shadowing that are intrinsic to CR are well suited to the display of the complex anatomy in this region of the body. We look forward to studies with CR that will ascertain the ultimate value of this methodology to evaluate calvarium, maxillofacial, and skull base morphology as well as other complex anatomic structures.

  2. Autostereoscopic image creation by hyperview matrix controlled single pixel rendering

    Science.gov (United States)

    Grasnick, Armin

    2017-06-01

    technology just with a simple equation. This formula can be utilized to create a specific hyperview matrix for a certain 3D display - independent of the technology used. A hyperview matrix may contain references to a large number of images and acts as an instruction for a subsequent rendering process of particular pixels. Naturally, a single pixel will deliver an image with no resolution and does not provide any idea of the rendered scene. However, by implementing the method of pixel recycling, a 3D image can be perceived, even if all source images are different. It will be proven that several million perspectives can be rendered with the support of GPU rendering and benefit from the hyperview matrix. As a result, a conventional autostereoscopic display, which is designed to represent only a few perspectives, can be used to show a hyperview image by using a suitable hyperview matrix. It will be shown that a millions-of-views hyperview image can be presented on a conventional autostereoscopic display. For such a hyperview image, it is required that all pixels of the display are allocated to different source images. Controlled by the hyperview matrix, an adapted renderer can render a full hyperview image in real time.
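
    The central idea of this record — every output pixel may be taken from a different source perspective, selected by a per-pixel hyperview matrix — can be sketched in a few lines of NumPy. The array shapes and the randomly generated matrix below are purely illustrative assumptions; a real display would derive its matrix from the optical layout described by the author's equation.

    ```python
    import numpy as np

    def compose_hyperview(source_views, hyperview_matrix):
        """Assemble an autostereoscopic output image pixel by pixel.

        source_views     : array of shape (N, H, W, 3), one rendered image per perspective.
        hyperview_matrix : int array of shape (H, W), giving for every output pixel
                           the index of the source perspective it is taken from.
        """
        n, h, w, _ = source_views.shape
        rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        return source_views[hyperview_matrix, rows, cols]

    if __name__ == "__main__":
        views = np.random.rand(8, 120, 160, 3)                 # 8 hypothetical perspectives
        matrix = np.random.randint(0, 8, size=(120, 160))      # stand-in hyperview matrix
        out = compose_hyperview(views, matrix)
        print(out.shape)                                       # (120, 160, 3)
    ```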

  3. Irregular Morphing for Real-Time Rendering of Large Terrain

    Directory of Open Access Journals (Sweden)

    S. Kalem

    2016-06-01

    Full Text Available The following paper proposes an alternative approach to the real-time adaptive triangulation problem. A new region-based multi-resolution approach for terrain rendering is described which improves, on the fly, the distribution of the density of triangles inside the tile after selecting the appropriate level of detail by adaptive sampling. This proposed approach organizes the heightmap into a QuadTree of tiles that are processed independently. This technique combines the benefits of both the Triangular Irregular Network approach and the region-based multi-resolution approach by improving the distribution of the density of triangles inside the tile. Our technique morphs the initial regular grid of the tile into a deformed grid in order to minimize the approximation error. The proposed technique strives to combine large tile size and real-time processing while guaranteeing an upper bound on the screen-space error. Thus, this approach adapts the terrain rendering process to local surface characteristics and enables on-the-fly handling of large amounts of terrain data. Morphing is based on multi-resolution wavelet analysis. The use of the D2WT multi-resolution analysis of the terrain heightmap speeds up processing and permits interactive terrain rendering. Tests and experiments demonstrate that the Haar B-spline wavelet, well known for its localization properties and compact support, is suitable for fast and accurate redistribution. Such a technique could be exploited in a client-server architecture to support interactive, high-quality remote visualization of very large terrains.
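
    The wavelet step underlying this approach can be illustrated with a single level of a 2D Haar decomposition of a heightmap tile; large detail coefficients then mark regions where the morphed grid should keep a higher triangle density. The code below is a minimal sketch under that reading, not the authors' D2WT/B-spline implementation.

    ```python
    import numpy as np

    def haar_decompose_2d(tile):
        """One level of a 2D Haar decomposition of a square heightmap tile.

        Returns the coarse approximation and the three detail bands
        (horizontal, vertical, diagonal). The tile side length must be even.
        """
        a = tile[0::2, 0::2]
        b = tile[0::2, 1::2]
        c = tile[1::2, 0::2]
        d = tile[1::2, 1::2]
        approx = (a + b + c + d) / 4.0
        horiz  = (a + b - c - d) / 4.0
        vert   = (a - b + c - d) / 4.0
        diag   = (a - b - c + d) / 4.0
        return approx, horiz, vert, diag

    if __name__ == "__main__":
        heights = np.random.rand(128, 128).astype(np.float32)   # stand-in heightmap tile
        approx, h, v, dg = haar_decompose_2d(heights)
        # Large detail magnitudes indicate where triangle density should stay high.
        detail_energy = np.abs(h) + np.abs(v) + np.abs(dg)
        print(approx.shape, float(detail_energy.max()))
    ```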

  4. Rendering potential wearable robot designs with the LOPES gait trainer.

    Science.gov (United States)

    Koopman, B; van Asseldonk, E H F; van der Kooij, H; van Dijk, W; Ronsse, R

    2011-01-01

    In recent years, wearable robots (WRs) for rehabilitation, personal assistance, or human augmentation have been gaining increasing interest. To make these devices more energy efficient, radical changes to the mechanical structure of the device are being considered. However, it remains very difficult to predict how people will respond to, and interact with, WRs that differ in terms of mechanical design. Users may adjust their gait pattern in response to the mechanical restrictions or properties of the device. The goal of this pilot study is to show the feasibility of rendering the mechanical properties of different potential WR designs using the robotic gait training device LOPES. This paper describes a new method that selectively cancels the dynamics of LOPES itself and adds the dynamics of the rendered WR using two parallel inverse models. Adaptive frequency oscillators were used to get estimates of the joint position, velocity, and acceleration. Using the inverse models, different WR designs can be evaluated, eliminating the need to build several prototypes. As a proof of principle, we simulated the effect of a very simple WR that consisted of a mass attached to the ankles. Preliminary results show that we are partially able to cancel the dynamics of LOPES. Additionally, the simulation of the mass showed an increase in muscle activity, but not at the same level as during the control condition, where subjects actually carried the mass. In conclusion, the results in this paper suggest that LOPES can be used to render different WRs. In addition, it is very likely that the results can be further optimized when more effort is put into obtaining proper estimates of the velocity and acceleration, which are required for the inverse models. © 2011 IEEE.

  5. Software System for Vocal Rendering of Printed Documents

    Directory of Open Access Journals (Sweden)

    Marian DARDALA

    2008-01-01

    Full Text Available The objective of this paper is to present a software system architecture developed to render printed documents in vocal form. The paper also describes the software solutions that exist as software components and are necessary for document processing, as well as for controlling the multimedia devices used by the system. The system is useful for people with visual disabilities, who can access the contents of documents without their being printed in the Braille system or existing in an audio form.

  6. Partitioning of biocides between water and inorganic phases of render

    DEFF Research Database (Denmark)

    Urbanczyk, Michal; Bollmann, Ulla E.; Bester, Kai

    The use of biocides as additives for building materials has gained importance in recent years. These biocides are, e.g., applied to renders and paints to prevent them from microbial spoilage. However, these biocides can leach out into the environment. In order to better understand this leaching...... compared. The partitioning constants for calcium carbonate varied between 0.1 (isoproturon) and 1.1 (iodocarb) and 84.6 (dichlorooctylisothiazolinone), respectively. The results for barite, kaolinite and mica were in a similar range and usually the compounds with high partitioning constants for one mineral...

  7. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    Energy Technology Data Exchange (ETDEWEB)

    Wong, S.T.C. [Univ. of California, San Francisco, CA (United States)

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now stemming into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  8. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    International Nuclear Information System (INIS)

    Wong, S.T.C.

    1997-01-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now stemming into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  9. Radionuclide cisternography: SPECT and 3D-rendering

    International Nuclear Information System (INIS)

    Henkes, H.; Huber, G.; Piepgras, U.; Hierholzer, J.; Cordes, M.

    1991-01-01

    Radionuclide cisternography is indicated in the clinical work-up for hydrocephalus, when searching for CSF leaks, and when testing whether or not intracranial cystic lesions are communicating with the adjacent subarachnoid space. This paper demonstrates the feasibility and diagnostic value of SPECT and subsequent 3D surface rendering in addition to conventional rectilinear CSF imaging in eight patients. Planar images allowed the evaluation of CSF circulation and the detection of CSF fistula. They were advantageous in examinations 48 h after application of 111 In-DTPA. SPECT scans, generated 4-24 h after tracer application, were superior in the delineation of basal cisterns, especially in early scans; this was helpful in patients with pooling due to CSF fistula and in cystic lesions near the skull base. A major drawback was the limited image quality of delayed scans, when the SPECT data were degraded by a low count rate. 3D surface rendering was easily feasible from SPECT data and yielded high quality images. The presentation of the spatial distribution of nuclide-contaminated CSF proved especially helpful in the area of the basal cisterns. (orig.) [de

  10. Continuous Surface Rendering, Passing from CAD to Physical Representation

    Directory of Open Access Journals (Sweden)

    Mario Covarrubias

    2013-06-01

    Full Text Available This paper describes a desktop-mechatronic interface that has been conceived to support designers in the evaluation of aesthetic virtual shapes. This device allows a continuous and smooth free-hand contact interaction on a real and developable plastic tape actuated by a servo-controlled mechanism. The objective in designing this device is to reproduce a virtual surface with a consistent physical rendering well adapted to designers' needs. The desktop-mechatronic interface consists of a servo-actuated plastic strip that has been devised and implemented using seven interpolation points. In fact, by using the MEC (Minimal Energy Curve) spline approach, a developable real surface is rendered taking into account the CAD geometry of the virtual shapes. In this paper, we describe the working principles of the interface by using both absolute and relative approaches to control the position of each single control point on the MEC spline. Then, we describe the methodology that has been implemented, passing from the CAD geometry, linked to VisualNastran in order to maintain the parametric properties of the virtual shape. Then, we present the co-simulation between VisualNastran and MATLAB/Simulink used for achieving this goal and controlling the system, and finally we present the results of the subsequent testing session specifically carried out to evaluate the accuracy and the effectiveness of the mechatronic device.

  11. Rendering of HDR content on LDR displays: an objective approach

    Science.gov (United States)

    Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick

    2015-09-01

    Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is however non-trivial and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate into a new objective method for TMO parameters optimization. Our method is based on quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently on the used TMO. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.

  12. The rendering context for stereoscopic 3D web

    Science.gov (United States)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are working to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand-new experience of human-computer interaction. In this paper, we propose a novel approach to apply stereoscopic technologies to CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. After that, we discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility is also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating the 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser will create two slightly different images, representing the left-eye and right-eye views, both to be combined on the 3D display to generate the illusion of depth. As a result, elements can be manipulated in a truly 3D space.

  13. Physically Based Rendering in the Nightshade NG Visualization Platform

    Science.gov (United States)

    Berglund, Karrie; Larey-Williams, Trystan; Spearman, Rob; Bogard, Arthur

    2015-01-01

    This poster describes our work on creating a physically based rendering model in Nightshade NG planetarium simulation and visualization software (project website: NightshadeSoftware.org). We discuss techniques used for rendering realistic scenes in the universe and dealing with astronomical distances in real time on consumer hardware. We also discuss some of the challenges of rewriting the software from scratch, a project which began in 2011.Nightshade NG can be a powerful tool for sharing data and visualizations. The desktop version of the software is free for anyone to download, use, and modify; it runs on Windows and Linux (and eventually Mac). If you are looking to disseminate your data or models, please stop by to discuss how we can work together.Nightshade software is used in literally hundreds of digital planetarium systems worldwide. Countless teachers and astronomy education groups run the software on flat screens. This wide use makes Nightshade an effective tool for dissemination to educators and the public.Nightshade NG is an especially powerful visualization tool when projected on a dome. We invite everyone to enter our inflatable dome in the exhibit hall to see this software in a 3D environment.

  14. A faster technique for rendering meshes in multiple display systems

    Science.gov (United States)

    Hand, Randall E.; Moorhead, Robert J., II

    2003-05-01

    Level-of-detail algorithms have been widely implemented in architectural VR walkthroughs and video games, but have not had widespread use in VR terrain visualization systems. This thesis explains a set of optimizations that allow most current level-of-detail algorithms to run in the types of multiple display systems used in VR. It improves the visual quality of the system through the use of graphics hardware acceleration, and improves the framerate and running time through modifications to the computations that drive the algorithms. Using ROAM as a testbed, results show improvements between 10% and 100% on varying machines.

  15. Robust processing of intracranial CT angiograms for 3D volume rendering

    International Nuclear Information System (INIS)

    Moore, E.A.; Grieve, J.P.; Jaeger, H.R.; Univ. Dept. of Neurosurgery, London

    2001-01-01

    The goal of this study was to develop a robust and simple technique for processing of cranial CT angiograms (CTA) in the clinical setting. The method described in this paper involves segmentation of the bone, then dilation of the skull by adding three or four layers of voxels. This dilated skull is subtracted from the vessels object on a voxel-by-voxel basis, allowing segmentation and subsequent display of the vessels only. For evaluation of the technique, three groups of operators processed one CTA, and the quality of the 3D views obtained and the times taken were compared. One group was given training by an expert and a "recipe" for guidance, the second was given only the "recipe," and the third group consisted of expert operators. All operators were able to produce good or acceptable shaded-surface displays when compared with digital subtraction angiography, within 10 min for experienced users, an average of 17 min for trained operators and 26 min for those using only the recipe sheet. Using a simple scoring system for the appearance of feeding vessels and draining veins, no significant differences were found between the three levels of training and experience. This technique simplifies the processing of CTAs and is quick enough to make such examinations part of a routine clinical service. (orig.)
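
    The pipeline described above — segment the bone, grow it by three or four voxel layers, and subtract the dilated skull from the vessel object voxel by voxel — maps directly onto standard binary morphology. The sketch below uses SciPy with assumed Hounsfield thresholds; the study's exact thresholds and segmentation details are not given in the abstract.

    ```python
    import numpy as np
    from scipy import ndimage

    def remove_skull_from_cta(volume_hu, bone_threshold=300, vessel_threshold=150,
                              dilation_layers=4):
        """Return a vessel mask for 3D rendering of a cranial CT angiogram.

        volume_hu : 3D array of CT values in Hounsfield units.
        The thresholds here are illustrative assumptions, not values from the paper.
        """
        bone = volume_hu > bone_threshold

        # Dilate the skull by a few voxel layers so partial-volume voxels at the
        # bone surface are removed together with the bone itself.
        dilated_skull = ndimage.binary_dilation(bone, iterations=dilation_layers)

        # Contrast-filled vessels are bright but (mostly) below bone density.
        vessels = volume_hu > vessel_threshold

        # Voxel-by-voxel subtraction of the dilated skull from the vessel object.
        return vessels & ~dilated_skull

    if __name__ == "__main__":
        fake_cta = np.random.normal(0, 200, size=(64, 64, 64))   # stand-in volume
        mask = remove_skull_from_cta(fake_cta)
        print(int(mask.sum()), "voxels kept for the shaded-surface display")
    ```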

  16. Iso-Surface Volume Rendering : speed and accuracy for medical applications

    NARCIS (Netherlands)

    Bosma, Marco

    2000-01-01

    This thesis describes the research on the accuracy and speed of different methods for the visualization of three-dimensional (3D) sets of (measured) data. In medical environments, these 3D datasets are generated by, for instance, CT and MRI scanners. The medical application makes special demands on the

  17. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy

    International Nuclear Information System (INIS)

    Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-01-01

    Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of the replacement of CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. A CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric value and fusion images. Results: Volume measurement, using 3D US, shows a 2.8 ± 1.5% error, 4.4 ± 3.0% error for CT, and 3.1 ± 2.0% error for MR. The results imply that volume measurement using the 3D US devices has a similar accuracy level to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used in monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing the CT-MR image fusion to the CT-3D US in radiotherapy treatment planning was verified.

  18. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signal. Sound algorithms appeared in the very infancy of computer. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  19. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  20. Adaptive proxy map server for efficient vector spatial data rendering

    Science.gov (United States)

    Sayar, Ahmet

    2013-01-01

    The rapid transmission of vector map data over the Internet is becoming a bottleneck of spatial data delivery and visualization in web-based environments because of increasing data volumes and limited network bandwidth. In order to improve both the transmission and rendering performance of vector spatial data over the Internet, we propose a proxy map server enabling parallel vector data fetching as well as caching to improve the performance of web-based map servers in a dynamic environment. The proxy map server is placed seamlessly anywhere between the client and the final services, intercepting users' requests. It employs an efficient parallelization technique based on spatial proximity and data density in case distributed replicas exist for the same spatial data. The effectiveness of the proposed technique is demonstrated at the end of the article by creating map images enriched with earthquake seismic data records.
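
    A minimal sketch of the two mechanisms named above — caching of previously fetched tiles and parallel fetching from replicated back-end services — is given below. The URL scheme, tile keys and replica selection are hypothetical; the article's proxy additionally partitions requests by spatial proximity and data density.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    import urllib.request

    class ProxyMapServer:
        """Toy proxy that caches vector tiles and fetches cache misses in parallel."""

        def __init__(self, backends, max_workers=8):
            self.backends = backends          # list of replicated map-server base URLs
            self.cache = {}                   # tile key -> raw bytes
            self.pool = ThreadPoolExecutor(max_workers=max_workers)

        def _fetch(self, key):
            # Simple hash-based choice of replica; a real proxy would pick by
            # spatial proximity, data density and current load.
            base = self.backends[hash(key) % len(self.backends)]
            with urllib.request.urlopen(f"{base}/tile/{key}") as resp:
                return resp.read()

        def get_tiles(self, keys):
            missing = [k for k in keys if k not in self.cache]
            # Fetch all cache misses concurrently.
            for key, data in zip(missing, self.pool.map(self._fetch, missing)):
                self.cache[key] = data
            return {k: self.cache[k] for k in keys}

    # Usage (hypothetical endpoints and tile keys):
    # proxy = ProxyMapServer(["http://replica-a.example", "http://replica-b.example"])
    # tiles = proxy.get_tiles(["10/545/361", "10/546/361"])
    ```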

  1. Development of a virtual speaking simulator using Image Based Rendering.

    Science.gov (United States)

    Lee, J M; Kim, H; Oh, M J; Ku, J H; Jang, D P; Kim, I Y; Kim, S I

    2002-01-01

    The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology has enabled the use of virtual reality (VR) for the treatment of the fear of public speaking. There are two techniques for building virtual environments for the treatment of this fear: a model-based and a movie-based method. Both methods have the weakness that they are unrealistic and not controllable individually. To address these disadvantages, this paper presents a virtual environment produced with Image Based Rendering (IBR) and a chroma-key simultaneously. IBR enables the creation of realistic virtual environments in which images taken with a digital camera are stitched panoramically. The use of chroma-keying puts virtual audience members under individual control in the environment. In addition, a real-time capture technique is used in constructing the virtual environments, enabling spoken interaction between the subject and a therapist or another subject.

  2. Artist rendering of dust grains colliding at low speeds

    Science.gov (United States)

    2003-01-01

    Clues to the formation of planets and planetary rings -- like Saturn's dazzling ring system -- may be found by studying how dust grains interact as they collide at low speeds. To study the question of low-speed dust collisions, NASA sponsored the COLLisions Into Dust Experiment (COLLIDE) at the University of Colorado. It was designed to spring-launch marble-size projectiles into trays of powder similar to space or lunar dust. COLLIDE-1 (1998) discovered that collisions below a certain energy threshold eject no material. COLLIDE-2 was designed to identify where the threshold is. In COLLIDE-2, scientists nudged small projectiles into dust beds and recorded how the dust splashed outward (video frame at top; artist's rendering at bottom). The slowest impactor ejected no material and stuck in the target. The faster impactors produced ejecta; some rebounded while others stuck in the target.

  3. Latency in Distributed Acquisition and Rendering for Telepresence Systems.

    Science.gov (United States)

    Ohl, Stephan; Willert, Malte; Staadt, Oliver

    2015-12-01

    Telepresence systems use 3D techniques to create a more natural human-centered communication over long distances. This work concentrates on the analysis of latency in telepresence systems where acquisition and rendering are distributed. Keeping latency low is important to immerse users in the virtual environment. To better understand latency problems and to identify the source of such latency, we focus on the decomposition of system latency into sub-latencies. We contribute a model of latency and show how it can be used to estimate latencies in a complex telepresence dataflow network. To compare the estimates with real latencies in our prototype, we modify two common latency measurement methods. This presented methodology enables the developer to optimize the design, find implementation issues and gain deeper knowledge about specific sources of latency.

  4. Modeling a color-rendering operator for high dynamic range images using a cone-response function

    Science.gov (United States)

    Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju

    2015-09-01

    Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that is a combination of tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapping resultant image is obtained using chromatic and achromatic colors to avoid well-known color distortions shown in the conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method covers the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.
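
    As a rough illustration of coupling a tone-mapping step with a cone-response function, the sketch below compresses HDR luminance in XYZ space with a Naka-Rushton-style response. The specific response curve, exponent and semi-saturation constant are assumptions for illustration; the paper's actual operator is not reproduced here.

    ```python
    import numpy as np

    def cone_response_tonemap(xyz, semi_saturation=None, exponent=0.73):
        """Compress an HDR image given in XYZ using a Naka-Rushton-style cone response.

        xyz : float array of shape (H, W, 3) with linear X, Y, Z values.
        The response form I^n / (I^n + sigma^n) and its constants are illustrative.
        """
        y = np.clip(xyz[..., 1], 1e-9, None)                 # luminance channel
        if semi_saturation is None:
            semi_saturation = np.exp(np.mean(np.log(y)))     # log-average luminance
        response = y**exponent / (y**exponent + semi_saturation**exponent)

        # Scale X and Z by the same ratio so chromaticity is approximately preserved.
        scale = (response / y)[..., None]
        return xyz * scale

    if __name__ == "__main__":
        hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(32, 32, 3))
        ldr = cone_response_tonemap(hdr)
        print(float(ldr[..., 1].max()))                      # mapped luminance stays below 1
    ```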

  5. 3D-shaded surface rendering of gadolinium-enhanced MR angiography in congenital heart disease

    International Nuclear Information System (INIS)

    Okuda, S.; Kikinis, R.; Dumanli, H.; Geva, T.; Powell, A.J.; Chung, T.

    2000-01-01

    Background. Gadolinium-enhanced three-dimensional (3D) MR angiography is a useful imaging technique for patients with congenital heart disease. Objective. This study sought to determine the added value of creating 3D shaded surface displays compared to standard maximal intensity projection (MIP) and multiplanar reformatting (MPR) techniques when analyzing 3D MR angiography data. Materials and methods. Seventeen patients (range, 3 months to 51 years old) with a variety of congenital cardiovascular defects underwent gadolinium-enhanced 3D MR angiography of the thorax. Color-coded 3D shaded surface models were rendered from the image data using manual segmentation and computer-based algorithms. Models could be rotated, translocated, or zoomed interactively by the viewer. Information available from the 3D models was compared to analysis based on viewing standard MIP/MPR displays. Results. Median postprocessing time for the 3D models was 6 h (range, 3-25 h) compared to approximately 20 min for MIP/MPR viewing. No additional diagnostic information was gained from 3D model analysis. All major findings with MIP/MPR postprocessing were also apparent on the 3D models. Qualitatively, the 3D models were more easily interpreted and enabled adjacent vessels to be distinguished more readily. Conclusion. Routine use of 3D shaded surface reconstructions for visualization of contrast enhanced MR angiography in congenital heart disease cannot be recommended. 3D surface rendering may be more useful for presenting complex anatomy to an audience unfamiliar with congenital heart disease and as an educational tool. (orig.)

  6. SeaWiFS technical report series. Volume 32: Level-3 SeaWiFS data products. Spatial and temporal binning algorithms

    Science.gov (United States)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Acker, James G. (Editor); Campbell, Janet W.; Blaisdell, John M.; Darzi, Michael

    1995-01-01

    The level-3 data products from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) are statistical data sets derived from level-2 data. Each data set will be based on a fixed global grid of equal-area bins that are approximately 9 x 9 sq km. Statistics available for each bin include the sum and sum of squares of the natural logarithm of derived level-2 geophysical variables where sums are accumulated over a binning period. Operationally, products with binning periods of 1 day, 8 days, 1 month, and 1 year will be produced and archived. From these accumulated values and for each bin, estimates of the mean, standard deviation, median, and mode may be derived for each geophysical variable. This report contains two major parts: the first (Section 2) is intended as a users' guide for level-3 SeaWiFS data products. It contains an overview of level-0 to level-3 data processing, a discussion of important statistical considerations when using level-3 data, and details of how to use the level-3 data. The second part (Section 3) presents a comparative statistical study of several binning algorithms based on CZCS and moored fluorometer data. The operational binning algorithms were selected based on the results of this study.
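
    Because each level-3 bin stores the sum and sum of squares of the ln-transformed level-2 values, per-bin statistics follow from the usual moment formulas. The sketch below shows that recovery step for a single bin; the variable names are illustrative and the operational weighting scheme is omitted.

    ```python
    import math

    def bin_statistics(sum_ln, sum_ln_sq, n):
        """Recover per-bin statistics of a geophysical variable from the
        accumulated sum and sum of squares of its natural logarithm."""
        mean_ln = sum_ln / n
        var_ln = max(sum_ln_sq / n - mean_ln**2, 0.0)     # population variance of ln(x)
        return {
            "geometric_mean": math.exp(mean_ln),          # exp of the mean of ln(x)
            "std_of_ln": math.sqrt(var_ln),
        }

    if __name__ == "__main__":
        # Example: 5 chlorophyll samples (mg m^-3) falling into one ~9 x 9 km bin.
        samples = [0.12, 0.15, 0.10, 0.22, 0.18]
        s = sum(math.log(x) for x in samples)
        s2 = sum(math.log(x)**2 for x in samples)
        print(bin_statistics(s, s2, len(samples)))
    ```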

  7. Efficient Unbiased Rendering using Enlightened Local Path Sampling

    DEFF Research Database (Denmark)

    Kristensen, Anders Wang

    measurements, which are the solution to the adjoint light transport problem. The second is a representation of the distribution of radiance and importance in the scene. We also derive a new method of particle sampling, which is advantageous compared to existing methods. Together we call the resulting algorithm....... The downside to using these algorithms is that they can be slow to converge. Due to the nature of Monte Carlo methods, the results are random variables subject to variance. This manifests itself as noise in the images, which can only be reduced by generating more samples. The reason these methods are slow...... is because of a lack of eeffective methods of importance sampling. Most global illumination algorithms are based on local path sampling, which is essentially a recipe for constructing random walks. Using this procedure paths are built based on information given explicitly as part of scene description...

  8. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  9. Rendering Future Vegetation Change across Large Regions of the US

    Science.gov (United States)

    Sant'Anna Dias, Felipe; Gu, Yuting; Agarwalla, Yashika; Cheng, Yiwei; Patil, Sopan; Stieglitz, Marc; Turk, Greg

    2015-04-01

    We use two Machine Learning techniques, Decision Trees (DT) and Neural Networks (NN), to provide classified images and photorealistic renderings of future vegetation cover at three large regions in the US. The training data used to generate current vegetation cover include Landsat surface reflectance images, USGS Land Cover maps, 50 years of mean annual temperature and precipitation for the period 1950 - 2000, elevation, aspect and slope data. Present vegetation cover was generated on a 100m grid. Future vegetation cover for the period 2061- 2080 was predicted using the 1 km resolution bias corrected data from the NASA Goddard Institute for Space Studies Global Climate Model E simulation. The three test regions encompass a wide range of climatic gradients, topographic variation, and vegetation cover. The central Oregon site covers 19,182 square km and includes the Ochoco and Malheur National Forest. Vegetation cover is 50% evergreen forest and 50% shrubs and scrubland. The northwest Washington site covers 14,182 square km. Vegetation cover is 60% evergreen forest, 14% scrubs, 7% grassland, and 7% barren land. The remainder of the area includes deciduous forest, perennial snow cover, and wetlands. The third site, the Jemez mountain region of north central New Mexico, covers 5,500 square km. Vegetation cover is 47% evergreen forest, 31% shrubs, 13% grasses, and 3% deciduous forest. The remainder of the area includes developed and cultivated areas and wetlands. Using the above mentioned data sets we first trained our DT and NN models to reproduce current vegetation. The land cover classified images were compared directly to the USGS land cover data. The photorealistic generated vegetation images were compared directly to the remotely sensed surface reflectance maps. For all three sites, similarity between generated and observed vegetation cover was quite remarkable. The three trained models were then used to explore what the equilibrium vegetation would look like for

  10. Calibration of a dedicated software for 3D rendering

    International Nuclear Information System (INIS)

    Abrantes, Marcos E.S.; Felix, Warley F.; Veloso, Maria Auxiliadora F.; Universidade Federal de Minas Gerais

    2017-01-01

    With the increasing use of 3D reconstruction techniques to assist in diagnosis, dedicated programs are being widely used. For this, they must be calibrated in order to obtain the values of the real volumes of the human tissues. The purpose of this work is to indicate correction and calibration values for true volumes read in a dedicated 3D reconstruction system, using DICOM images from computed tomography. This work utilized a PMMA thorax phantom associated with the DICOM image and the volume found by a tomograph's program. The physical volume of the PMMA phantom was found to be 10359.0 cm³. For the volumes found according to the structures of interest, the values are 11005.5 cm³, 10249.3 cm³ and 10205.1 cm³, and the correction values are -6.2%, +1.1% and +1.5%, respectively, for lung, bone and soft tissue. The procedure performed can be used for calibration of other 3D reconstruction programs, observing the necessary corrections and the methodology used. (author)

  11. Calibration of a dedicated software for 3D rendering

    Energy Technology Data Exchange (ETDEWEB)

    Abrantes, Marcos E.S.; Felix, Warley F.; Veloso, Maria Auxiliadora F., E-mail: marcos.nuclear@yahoo.com.br, E-mail: warleyferreirafelix@gmail.com, E-mail: mdora@nuclear.ufmg.br [Faculdade Ciencias Medicas de Minas Gerais (FCMMG), Belo Horizonte, MG (Brazil); Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2017-11-01

    With the increasing use of 3D reconstruction techniques to assist in diagnosis, dedicated programs are being widely used. For this, they must be calibrated in order to obtain the values of the real volumes of the human tissues. The purpose of this work is to indicate correction and calibration values for true volumes read in a dedicated 3D reconstruction system, using DICOM images from computed tomography. This work utilized a PMMA thorax phantom associated with the DICOM image and the volume found by a tomograph's program. The physical volume of the PMMA phantom was found to be 10359.0 cm³. For the volumes found according to the structures of interest, the values are 11005.5 cm³, 10249.3 cm³ and 10205.1 cm³, and the correction values are -6.2%, +1.1% and +1.5%, respectively, for lung, bone and soft tissue. The procedure performed can be used for calibration of other 3D reconstruction programs, observing the necessary corrections and the methodology used. (author)
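
    The correction values quoted in these two records are consistent with comparing each reconstructed volume against the physically measured phantom volume; the sketch below reproduces them under that assumption (the exact formula is not stated in the abstract).

    ```python
    def correction_percent(physical_cm3, measured_cm3):
        """Correction needed to bring a reconstructed volume back to the physical one."""
        return (physical_cm3 - measured_cm3) / physical_cm3 * 100.0

    physical = 10359.0                                   # PMMA thorax phantom, cm^3
    measured = {"lung": 11005.5, "bone": 10249.3, "soft tissue": 10205.1}

    for tissue, volume in measured.items():
        print(f"{tissue}: {correction_percent(physical, volume):+.1f} %")
    # lung: -6.2 %, bone: +1.1 %, soft tissue: +1.5 %
    ```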

  12. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  13. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  14. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  15. Artist Material BRDF Database for Computer Graphics Rendering

    Science.gov (United States)

    Ashbaugh, Justin C.

    The primary goal of this thesis was to create a physical library of artist material samples. This collection provides necessary data for the development of a gonio-imaging system for use in museums to more accurately document their collections. A sample set was produced consisting of 25 panels and containing nearly 600 unique samples. Selected materials are representative of those commonly used by artists both past and present. These take into account the variability in visual appearance resulting from the materials and application techniques used. Five attributes of variability were identified including medium, color, substrate, application technique and overcoat. Combinations of these attributes were selected based on those commonly observed in museum collections and suggested by surveying experts in the field. For each sample material, image data is collected and used to measure an average bi-directional reflectance distribution function (BRDF). The results are available as a public-domain image and optical database of artist materials at art-si.org. Additionally, the database includes specifications for each sample along with other information useful for computer graphics rendering such as the rectified sample images and normal maps.

  16. Brain ageing changes proteoglycan sulfation, rendering perineuronal nets more inhibitory.

    Science.gov (United States)

    Foscarin, Simona; Raha-Chowdhury, Ruma; Fawcett, James W; Kwok, Jessica C F

    2017-06-28

    Chondroitin sulfate (CS) proteoglycans in perineuronal nets (PNNs) from the central nervous system (CNS) are involved in the control of plasticity and memory. Removing PNNs reactivates plasticity and restores memory in models of Alzheimer's disease and ageing. Their actions depend on the glycosaminoglycan (GAG) chains of CS proteoglycans, which are mainly sulfated in the 4 (C4S) or 6 (C6S) positions. While C4S is inhibitory, C6S is more permissive to axon growth, regeneration and plasticity. C6S decreases during critical period closure. We asked whether there is a late change in CS-GAG sulfation associated with memory loss in aged rats. Immunohistochemistry revealed a progressive increase in C4S and decrease in C6S from 3 to 18 months. GAGs extracted from brain PNNs showed a large reduction in C6S at 12 and 18 months, increasing the C4S/C6S ratio. There was no significant change in mRNA levels of the chondroitin sulfotransferases. PNN GAGs were more inhibitory to axon growth than those from the diffuse extracellular matrix. The 18-month PNN GAGs were more inhibitory than 3-month PNN GAGs. We suggest that the change in PNN GAG sulfation in aged brains renders the PNNs more inhibitory, which leads to a decrease in plasticity and adversely affects memory.

  17. Moisture Transfer through Facades Covered with Organic Binder Renders

    Directory of Open Access Journals (Sweden)

    Carmen DICO

    2013-07-01

    Full Text Available Year after year we witness the negative effect of water on buildings, caused by direct or indirect actions. This situation is obvious in the case of old, historical buildings, subjected to this aggression for a long period of time, but new buildings are also affected. Moisture in building materials not only causes structural damage, but also reduces the thermal insulation capacity of building components. Materials like plasters or paints have been used historically for a long period of time, fulfilling two basic functions: decoration and protection. The most acute demands are made on exterior plasters, as they, besides being an important decorative element for the facade, must perform two different functions simultaneously: protect the substrate against weathering and moisture without sealing it, providing it a certain ability to “breathe” (Heilen, 2005). In order to accomplish this aim, the first step is to understand the hygrothermal behavior of coating and substrate and define the fundamental principles of moisture transfer. According to Künzel's facade protection theory, two material properties play the most important role: water absorption and vapor permeability. In the context of the recent adoption (2009) of the “harmonized” European standard EN 15824 – „Specifications for external renders and internal plasters based on organic binders”, this paper deals extensively with the interaction of the two above-mentioned properties for the coating materials covered by EN 15824.

  18. Age, Health and Attractiveness Perception of Virtual (Rendered) Human Hair.

    Science.gov (United States)

    Fink, Bernhard; Hufschmidt, Carla; Hirn, Thomas; Will, Susanne; McKelvey, Graham; Lankhof, John

    2016-01-01

    The social significance of physical appearance and beauty has been documented in many studies. It is known that even subtle manipulations of facial morphology and skin condition can alter people's perception of a person's age, health and attractiveness. While the variation in facial morphology and skin condition cues has been studied quite extensively, comparably little is known on the effect of hair on social perception. This has been partly caused by the technical difficulty of creating appropriate stimuli for investigations of people's response to systematic variation of certain hair characteristics, such as color and style, while keeping other features constant. Here, we present a modeling approach to the investigation of human hair perception using computer-generated, virtual (rendered) human hair. In three experiments, we manipulated hair diameter (Experiment 1), hair density (Experiment 2), and hair style (Experiment 3) of human (female) head hair and studied perceptions of age, health and attractiveness. Our results show that even subtle changes in these features have an impact on hair perception. We discuss our findings with reference to previous studies on condition-dependent quality cues in women that influence human social perception, thereby suggesting that hair is a salient feature of human physical appearance, which contributes to the perception of beauty.

  19. Time varying, multivariate volume data reduction

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, James P [Los Alamos National Laboratory; Fout, Nathaniel [UC DAVIS; Ma, Kwan - Liu [UC DAVIS

    2010-01-01

    Large-scale supercomputing is revolutionizing the way science is conducted. A growing challenge, however, is understanding the massive quantities of data produced by large-scale simulations. The data, typically time-varying, multivariate, and volumetric, can occupy from hundreds of gigabytes to several terabytes of storage space. Transferring and processing volume data of such sizes is prohibitively expensive and resource intensive. Although it may not be possible to entirely alleviate these problems, data compression should be considered as part of a viable solution, especially when the primary means of data analysis is volume rendering. In this paper we present our study of multivariate compression, which exploits correlations among related variables, for volume rendering. Two configurations for multidimensional compression based on vector quantization are examined. We emphasize quality reconstruction and interactive rendering, which leads us to a solution using graphics hardware to perform on-the-fly decompression during rendering. In this paper we present a solution which addresses the need for data reduction in large supercomputing environments where data resulting from simulations occupies tremendous amounts of storage. Our solution employs a lossy encoding scheme to achieve data reduction with several options in terms of rate-distortion behavior. We focus on encoding of multiple variables together, with optional compression in space and time. The compressed volumes can be rendered directly with commodity graphics cards at interactive frame rates and rendering quality similar to that of static volume renderers. Compression results using a multivariate time-varying data set indicate that encoding multiple variables results in acceptable performance in the case of spatial and temporal encoding as compared to independent compression of variables. The relative performance of spatial vs. temporal compression is data dependent, although temporal compression has the
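
    A minimal vector-quantization encoder in the spirit described above — grouping the values of several variables at each voxel into one vector and replacing it with the index of the nearest codebook entry — is sketched below. The codebook size and the simple Lloyd training loop are illustrative assumptions; the paper's scheme additionally encodes blocks in space and time and decompresses on the GPU during rendering.

    ```python
    import numpy as np

    def train_codebook(vectors, codebook_size=64, iterations=10, seed=0):
        """Simple Lloyd/k-means training of a VQ codebook for multivariate voxels."""
        rng = np.random.default_rng(seed)
        codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)]
        for _ in range(iterations):
            # Assign each multivariate voxel to its nearest codeword.
            d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # Move each codeword to the mean of its assigned voxels.
            for k in range(codebook_size):
                members = vectors[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
        return codebook

    def encode(vectors, codebook):
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        return d.argmin(axis=1).astype(np.uint8)          # one byte per voxel

    if __name__ == "__main__":
        # 32^3 voxels, 3 correlated variables per voxel (e.g. density, temperature, pressure).
        voxels = np.random.rand(32 * 32 * 32, 3).astype(np.float32)
        cb = train_codebook(voxels)
        indices = encode(voxels, cb)
        print(indices.nbytes, "bytes instead of", voxels.nbytes)
    ```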

  20. Volume Visualization and Compositing on Large-Scale Displays Using Handheld Touchscreen Interaction

    KAUST Repository

    Gastelum, Cristhopper Jacobo Armenta

    2011-07-27

    Advances in the physical sciences have progressively delivered ever increasing, already extremely large data sets to be analyzed. High performance volume rendering has become critical for scientists to gain a better understanding of the massive amounts of data to be visualized. Cluster-based rendering systems have become the baseline for achieving the power and flexibility required to perform such a task. Furthermore, display arrays have become the most suitable solution to display these data sets at their natural size and resolution, which can be critical for human perception and evaluation. The work in this thesis aims at improving the scalability and usability of volume rendering systems that target visualization on display arrays. The first part deals with improving the performance by introducing the implementations of two parallel compositing algorithms for volume rendering: direct send and binary swap. The High quality Volume Rendering (HVR) framework has been extended to accommodate parallel compositing where previously only serial compositing was possible. The preliminary results show improvements in the compositing times for direct send even for a small number of processors. Unfortunately, the results of binary swap exhibit a negative behavior. This is due to the naive use of the graphics hardware blending mechanism. The expensive transfers account for the lengthy compositing times. The second part targets the development of scalable and intuitive interaction mechanisms. It introduces the development of a new client application for multitouch tablet devices, like the Apple iPad. The main goal is to provide the HVR framework, which has been extended to use tiled displays, with a more intuitive and portable interaction mechanism that can take advantage of the new environment. The previous client is a PC application for the typical desktop settings that use a mouse and keyboard as sources of interaction. The current implementation of the client lets the user steer and
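
    For readers unfamiliar with the two compositing strategies mentioned in this record, the sketch below simulates binary-swap compositing for a power-of-two number of renderers: in each round, paired processes exchange complementary halves of their current image region and composite the half they keep, so the final image is assembled in log2(N) rounds. This is a serial simulation of the communication pattern with an assumed front-to-back ordering by rank, not the framework's GPU implementation.

    ```python
    import numpy as np

    def over(front, back):
        """Front-to-back 'over' operator on premultiplied RGBA images."""
        alpha = front[..., 3:4]
        return front + (1.0 - alpha) * back

    def binary_swap(images):
        """Serial simulation of binary-swap compositing.

        images : list of premultiplied RGBA arrays of shape (H, W, 4), one per
                 process, assumed already sorted front to back by process rank.
        """
        n = len(images)
        assert n & (n - 1) == 0, "binary swap needs a power-of-two process count"
        h = images[0].shape[0]
        regions = [(0, h) for _ in range(n)]              # each rank owns rows [lo, hi)

        for r in range(n.bit_length() - 1):               # log2(n) rounds
            new_images, new_regions = [None] * n, [None] * n
            for rank in range(n):
                partner = rank ^ (1 << r)
                lo, hi = regions[rank]
                mid = (lo + hi) // 2
                keep = (lo, mid) if rank < partner else (mid, hi)
                a, b = images[rank], images[partner]
                front, back = (a, b) if rank < partner else (b, a)
                new_images[rank] = over(front[keep[0]:keep[1]], back[keep[0]:keep[1]])
                new_regions[rank] = keep
            # Store the kept slices back into full-height buffers for the next round.
            for rank in range(n):
                buf = np.zeros_like(images[rank])
                lo, hi = new_regions[rank]
                buf[lo:hi] = new_images[rank]
                images[rank], regions[rank] = buf, (lo, hi)

        # After the last round every rank holds one disjoint, fully composited strip.
        final = np.zeros_like(images[0])
        for rank in range(n):
            lo, hi = regions[rank]
            final[lo:hi] = images[rank][lo:hi]
        return final

    if __name__ == "__main__":
        parts = [np.random.rand(64, 64, 4).astype(np.float32) * 0.25 for _ in range(4)]
        print(binary_swap(parts).shape)                   # (64, 64, 4)
    ```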

  1. A risk stratification algorithm using non-invasive respiratory volume monitoring to improve safety when using post-operative opioids in the PACU.

    Science.gov (United States)

    Voscopoulos, Christopher; Theos, Kimberly; Tillmann Hein, H A; George, Edward

    2017-04-01

    Late detection of respiratory depression in non-intubated patients compromises patient safety. SpO2 is a lagging indicator of respiratory depression and EtCO2 has proven to be unreliable in non-intubated patients. A decline in minute ventilation (MV) is the earliest sign of respiratory depression. A non-invasive respiratory volume monitor (RVM) that provides accurate, continuous MV measurements enables clinicians to predict and quantify respiratory compromise. For this observational study, practitioners were blinded to the RVM measurements and pain management followed the usual routine. Patients were stratified by their MV on PACU admission and classified as "At-Risk" or "Not-At-Risk," with progression to "Low MV" status following opioids assessed for each category. The purpose was to determine if stratifying based on MV on PACU arrival could identify patients at higher risk for respiratory depression. The ability to identify in advance patients at higher risk for respiratory depression following standard opioid dosing would drive changes in pain management and improve patient care. RVM and opioid administration data from 150 PACU patients following elective joint-replacement surgery were collected in an observational study. "Predicted" MV (MVpred) and "Percent Predicted" (MVmeasured/MVpred × 100%) were calculated for each patient using standard formulas. Prior to opioid administration, patients were classified as either "Not-At-Risk" (MV ≥ 80% MVpred) or "At-Risk" (MV < 80% MVpred) ... safety across the continuum of care.
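
    A minimal sketch of the stratification arithmetic described above (the 80% cutoff and the Percent Predicted formula come from the abstract; the function names and the example minute-ventilation values are illustrative only and have no clinical standing):

      def percent_predicted(mv_measured_lpm, mv_predicted_lpm):
          # "Percent Predicted" minute ventilation: MVmeasured / MVpred * 100.
          return 100.0 * mv_measured_lpm / mv_predicted_lpm

      def stratify(mv_measured_lpm, mv_predicted_lpm, threshold_pct=80.0):
          # Classify a patient on PACU arrival, before opioid dosing, per the 80% cutoff.
          pct = percent_predicted(mv_measured_lpm, mv_predicted_lpm)
          return ("Not-At-Risk" if pct >= threshold_pct else "At-Risk"), pct

      # Hypothetical patient: predicted MV of 7.0 L/min, measured MV of 4.9 L/min.
      label, pct = stratify(4.9, 7.0)
      print(label, f"({pct:.0f}% of predicted)")   # At-Risk (70% of predicted)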

  2. New light field camera based on physical based rendering tracing

    Science.gov (United States)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitation imposed by the computation technology. With the rapid advancement of computer technology over the last decade, the limitation has been lifted and light field technology has quickly returned to the research spotlight. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitation of the traditional optical simulation approach to studying light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distribution but typically lacks the capability to present the pictures in realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the light field data calculation potential in creating realistic photos. Furthermore, we integrated the optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided us with a way to establish a link between the virtual scene and the real measurement results. Several images developed based on the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It will be shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. Detailed operational constraints, performance metrics, computation resources needed, etc. associated with this newly developed light field camera technique were presented in detail.

  3. Digital tomosynthesis rendering of joint margins for arthritis assessment

    Science.gov (United States)

    Duryea, Jeffrey W.; Neumann, Gesa; Yoshioka, Hiroshi; Dobbins, James T., III

    2004-05-01

    PURPOSE: Rheumatoid arthritis (RA) of the hand is a significant healthcare problem. Techniques to accurately quantify the structural changes from RA are crucial for the development and prescription of therapies. Analysis of radiographic joint space width (JSW) is widely used and has demonstrated promise. However, radiography presents a 2D view of the joint. In this study we performed tomosynthesis reconstructions of proximal interphalangeal (PIP) and metacarpophalangeal (MCP) joints to measure the 3D joint structure. METHODS: We performed a reader study using simulated radiographs of 12 MCP and 12 PIP joints from skeletal specimens imaged with micro-CT. The tomosynthesis technique provided images of reconstructed planes with 0.75 mm spacing, which were presented to 2 readers with a computer tool. The readers were instructed to delineate the joint surfaces on tomosynthetic slices where they could visualize the margins. We performed a quantitative analysis of 5 slices surrounding the central portion of each joint. Reader-determined JSW was compared to a gold standard. As a figure of merit we calculated the average root-mean-square deviation (RMSD). RESULTS: RMSD was 0.22 mm for the two joint types combined. For the individual joints, RMSD was 0.18 mm (MCP) and 0.26 mm (PIP). The reduced performance for the smaller PIP joints suggests that a slice spacing less than 0.75 mm may be more appropriate. CONCLUSIONS: We have demonstrated the capability of limited 3D rendering of joint surfaces using digital tomosynthesis. This technique promises to provide an improved method to visualize the structural changes of RA.
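
    For reference, a small sketch of the figure of merit used above, the root-mean-square deviation between reader-determined and gold-standard joint space width (the numeric values are invented for illustration and are not study data):

      import numpy as np

      def rmsd(reader_jsw_mm, gold_jsw_mm):
          # Root-mean-square deviation between reader and gold-standard JSW values.
          d = np.asarray(reader_jsw_mm, float) - np.asarray(gold_jsw_mm, float)
          return float(np.sqrt(np.mean(d ** 2)))

      # Illustrative joint space widths in mm (not data from the study).
      print(rmsd([1.45, 1.62, 1.10, 1.38], [1.50, 1.55, 1.02, 1.47]))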

  4. Clinical Recommendations on Emergency Medical Care Rendering to Children with Acute Intoxication

    Directory of Open Access Journals (Sweden)

    A. A. Baranov

    2015-01-01

    Full Text Available The article is dedicated to the issue of intoxication in children. Acute accidental intoxication appears to be especially relevant for pediatric practice. Drugs, various chemicals frequently used in everyday life and in farming, as well as animal poisons, including snake poisons, may have a toxic effect on children. Specialists of the professional associations of physicians "Russian Society of Emergency Medicine" and pediatricians "Union of Pediatricians of Russia" formulated and briefly described the main causes of acute intoxication in children, clinical manifestations and the most significant laboratory indicators of toxic manifestations for various substances, as well as therapy principles and algorithms for such conditions in compliance with the principles of evidence-based medicine. The article presents pathognomonic symptoms and peculiarities of drug intoxication, provides a description of mediator symptoms of intoxication with various substances, as well as the symptoms that may indicate a toxic effect. The article contains a description of principles of correction of vital body functions, measures for removing toxic substances from the body and information on the main antidotes. Special attention is given to the most frequent types of intoxication (with organic acids, lye, naphazoline, paracetamol, and snake poisons [viper bite]). The article lists the stages of medical care rendering to children suffering from acute intoxication and presents the prognosis and further management of pediatric patients suffering from such conditions.

  5. Heterogeneous Deformable Modeling of Bio-Tissues and Haptic Force Rendering for Bio-Object Modeling

    Science.gov (United States)

    Lin, Shiyong; Lee, Yuan-Shin; Narayan, Roger J.

    This paper presents a novel technique for modeling soft biological tissues as well as the development of an innovative interface for bio-manufacturing and medical applications. Heterogeneous deformable models may be used to represent the actual internal structures of deformable biological objects, which possess multiple components and nonuniform material properties. Both heterogeneous deformable object modeling and accurate haptic rendering can greatly enhance the realism and fidelity of virtual reality environments. In this paper, a tri-ray node snapping algorithm is proposed to generate a volumetric heterogeneous deformable model from a set of object interface surfaces between different materials. A constrained local static integration method is presented for simulating deformation and accurate force feedback based on the material properties of a heterogeneous structure. Biological soft tissue modeling is used as an example to demonstrate the proposed techniques. By integrating the heterogeneous deformable model into a virtual environment, users can both observe different materials inside a deformable object as well as interact with it by touching the deformable object using a haptic device. The presented techniques can be used for surgical simulation, bio-product design, bio-manufacturing, and medical applications.

  6. Automatic exposure control at single- and dual-heartbeat CTCA on a 320-MDCT volume scanner: effect of heart rate, exposure phase window setting, and reconstruction algorithm.

    Science.gov (United States)

    Funama, Yoshinori; Utsunomiya, Daisuke; Taguchi, Katsuyuki; Oda, Seitaro; Shimonobo, Toshiaki; Yamashita, Yasuyuki

    2014-05-01

    To investigate whether electrocardiogram (ECG)-gated single- and dual-heartbeat computed tomography coronary angiography (CTCA) with automatic exposure control (AEC) yields images with uniform image noise at reduced radiation doses. Using an anthropomorphic chest CT phantom we prospectively performed ECG-gated single- and dual-heartbeat CTCA on a second-generation 320-multidetector CT volume scanner. The exposure phase window was set at 75%, 70-80%, 40-80%, and 0-100% and the heart rate at 60, 80, or corr80 bpm; images were reconstructed with filtered back projection (FBP) or iterative reconstruction (IR, adaptive iterative dose reduction 3D). We applied AEC and set the image noise level to 20 or 25 HU. For each technique we determined the image noise and the radiation dose to the phantom center. With half-scan reconstruction at 60 bpm, a 70-80% phase window and a 20-HU standard deviation (SD) setting, the image noise level and its variation along the z axis followed similar curves with FBP and IR. With half-scan reconstruction, the radiation dose to the phantom center with a 70-80% phase window was 18.89 and 12.34 mGy for FBP and 4.61 and 3.10 mGy for IR at SD settings of 20 and 25 HU, respectively. At 80 bpm with two-segment reconstruction the dose was approximately twice that at 60 bpm at both SD settings. However, the increase in radiation dose at corr80 bpm was limited to 1.39 times that at 60 bpm. AEC at ECG-gated single- and dual-heartbeat CTCA controls the image noise at different radiation doses. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. An adaptive occlusion culling algorithm for use in large VEs

    DEFF Research Database (Denmark)

    Bormann, Karsten

    2000-01-01

    The Hierarchical Occlusion Map algorithm is combined with Frustum Slicing to give a simpler occlusion-culling algorithm that more adequately caters to large, open VEs. The algorithm adapts to the level of visual congestion and is well suited for use with large, complex models with long mean free line of sight ('the great outdoors'), models for which it is not feasible to construct, or search, a database of occluders to be rendered each frame.

  8. Influence of rendering methods on yield and quality of chicken fat recovered from broiler skin

    Directory of Open Access Journals (Sweden)

    Liang-Kun Lin

    2017-06-01

    Full Text Available Objective In order to utilize fat from broiler byproducts efficiently, it is necessary to develop an appropriate rendering procedure and establish quality information for the rendered fat. A study was therefore undertaken to evaluate the influence of rendering methods on the amounts and general properties of the fat recovered from broiler skin. Methods The yield and quality of the broiler skin fat rendered through high and low energy microwave rendering (3.6 W/g for 10 min and 2.4 W/g for 10 min for high power microwave rendering (HPMR) and low power microwave rendering (LPMR), respectively), oven baking (OB, at 180°C for 40 min), and water cooking (WC, boiling for 40 min) were compared. Results Microwave-rendered skin exhibited the highest yields and fat recovery rates, followed by OB and WC fats (p<0.05). HPMR fat had the highest L*, a*, and b* values, whereas WC fat had the highest moisture content, acid values, and thiobarbituric acid (TBA) values (p<0.05). There was no significant difference in the acid value, peroxide value, and TBA values between HPMR and LPMR fats. Conclusion Microwave rendering at a power level of 3.6 W/g for 10 min is suggested based on the yield and quality of chicken fat.

  9. Particle-based non-photorealistic volume visualization

    NARCIS (Netherlands)

    Busking, S.; Vilanova, A.; Van Wijk, J.J.

    2007-01-01

    Non-photorealistic techniques are usually applied to produce stylistic renderings. In visualization, these techniques are often able to simplify data, producing clearer images than traditional visualization methods. We investigate the use of particle systems for visualizing volume datasets using

  10. Particle-based non-photorealistic volume visualization

    NARCIS (Netherlands)

    Busking, S.; Vilanova, A.; Wijk, van J.J.

    2008-01-01

    Non-photorealistic techniques are usually applied to produce stylistic renderings. In visualization, these techniques are often able to simplify data, producing clearer images than traditional visualization methods. We investigate the use of particle systems for visualizing volume datasets using

  11. Probability of failure of the watershed algorithm for peak detection in comprehensive two-dimensional chromatography

    NARCIS (Netherlands)

    Vivó-Truyols, G.; Janssen, H.-G.

    2010-01-01

    The watershed algorithm is the most common method used for peak detection and integration in two-dimensional chromatography. However, the retention time variability in the second dimension may cause the algorithm to fail. A study calculating the probabilities of failure of the watershed algorithm was
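
    To make the failure mode concrete, here is a toy watershed-style peak detector for a 2D chromatogram: every cell is assigned to the local maximum reached by steepest ascent, so retention-time drift in the second dimension can merge or split peaks. This sketch (NumPy only, synthetic Gaussian peaks) is not the authors' implementation or their failure-probability calculation.

      import numpy as np

      def watershed_peaks(z):
          # Assign every cell of a 2D signal to the local maximum reached by steepest
          # ascent; returns a label image and the number of detected peaks.
          rows, cols = z.shape
          labels = -np.ones(z.shape, dtype=int)
          n_peaks = 0

          def climb(r, c):
              nonlocal n_peaks
              path = []
              while labels[r, c] < 0:
                  path.append((r, c))
                  best, best_val = (r, c), z[r, c]      # steepest uphill 8-neighbour
                  for dr in (-1, 0, 1):
                      for dc in (-1, 0, 1):
                          rr, cc = r + dr, c + dc
                          if 0 <= rr < rows and 0 <= cc < cols and z[rr, cc] > best_val:
                              best, best_val = (rr, cc), z[rr, cc]
                  if best == (r, c):                    # local maximum: new peak label
                      labels[r, c] = n_peaks
                      n_peaks += 1
                      path.pop()
                      break
                  r, c = best
              for rc in path:                           # whole ascent path joins that peak
                  labels[rc] = labels[r, c]

          for r in range(rows):
              for c in range(cols):
                  if labels[r, c] < 0:
                      climb(r, c)
          return labels, n_peaks

      # Toy chromatogram: two Gaussian blobs on a 40x40 retention-time grid.
      y, x = np.mgrid[0:40, 0:40]
      z = (np.exp(-((x - 12) ** 2 + (y - 20) ** 2) / 30.0)
           + 0.8 * np.exp(-((x - 28) ** 2 + (y - 15) ** 2) / 20.0))
      labels, n = watershed_peaks(z)
      print("detected peaks:", n)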

  12. High-fidelity haptic and visual rendering for patient-specific simulation of temporal bone surgery.

    Science.gov (United States)

    Chan, Sonny; Li, Peter; Locketz, Garrett; Salisbury, Kenneth; Blevins, Nikolas H

    2016-12-01

    Medical imaging techniques provide a wealth of information for surgical preparation, but it is still often the case that surgeons are examining three-dimensional pre-operative image data as a series of two-dimensional images. With recent advances in visual computing and interactive technologies, there is much opportunity to provide surgeons an ability to actively manipulate and interpret digital image data in a surgically meaningful way. This article describes the design and initial evaluation of a virtual surgical environment that supports patient-specific simulation of temporal bone surgery using pre-operative medical image data. Computational methods are presented that enable six degree-of-freedom haptic feedback during manipulation, and that simulate virtual dissection according to the mechanical principles of orthogonal cutting and abrasive wear. A highly efficient direct volume renderer simultaneously provides high-fidelity visual feedback during surgical manipulation of the virtual anatomy. The resulting virtual surgical environment was assessed by evaluating its ability to replicate findings in the operating room, using pre-operative imaging of the same patient. Correspondences between surgical exposure, anatomical features, and the locations of pathology were readily observed when comparing intra-operative video with the simulation, indicating the predictive ability of the virtual surgical environment.

  13. Positive Wigner functions render classical simulation of quantum computation efficient.

    Science.gov (United States)

    Mari, A; Eisert, J

    2012-12-07

    We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.

  14. Fast Physically Accurate Rendering of Multimodal Signatures of Distributed Fracture in Heterogeneous Materials.

    Science.gov (United States)

    Visell, Yon

    2015-04-01

    This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
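
    For background, a small sketch of the inverse transform method mentioned above, used here to draw the waiting times of a point process (the exponential rate and the event interpretation are illustrative; the paper's fracture-specific distributions are not reproduced):

      import numpy as np

      def inverse_transform_sample(inv_cdf, n, seed=0):
          # If U ~ Uniform(0, 1), then inv_cdf(U) follows the distribution whose
          # inverse CDF is inv_cdf -- the inverse transform method.
          u = np.random.default_rng(seed).random(n)
          return inv_cdf(u)

      # Example: exponential waiting times between events at rate lam, so
      # F(t) = 1 - exp(-lam * t) and F^-1(u) = -ln(1 - u) / lam.
      lam = 50.0                                    # events per second (illustrative)
      waits = inverse_transform_sample(lambda u: -np.log1p(-u) / lam, 10_000)
      event_times = np.cumsum(waits)                # skeleton of a time-domain jump process
      print(f"mean wait {waits.mean() * 1e3:.2f} ms, expected {1e3 / lam:.2f} ms")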

  15. 31 CFR 515.548 - Services rendered by Cuba to United States aircraft.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 3 2010-07-01 Services rendered by Cuba to United... REGULATIONS Licenses, Authorizations, and Statements of Licensing Policy § 515.548 Services rendered by Cuba to United States aircraft. Specific licenses are issued for payment to Cuba of charges for services...

  16. Method and system for rendering and interacting with an adaptable computing environment

    Science.gov (United States)

    Osbourn, Gordon Cecil [Albuquerque, NM; Bouchard, Ann Marie [Albuquerque, NM

    2012-06-12

    An adaptable computing environment is implemented with software entities termed "s-machines", which self-assemble into hierarchical data structures capable of rendering and interacting with the computing environment. A hierarchical data structure includes a first hierarchical s-machine bound to a second hierarchical s-machine. The first hierarchical s-machine is associated with a first layer of a rendering region on a display screen and the second hierarchical s-machine is associated with a second layer of the rendering region overlaying at least a portion of the first layer. A screen element s-machine is linked to the first hierarchical s-machine. The screen element s-machine manages data associated with a screen element rendered to the display screen within the rendering region at the first layer.

  17. DNA rendering of polyhedral meshes at the nanoscale

    Science.gov (United States)

    Benson, Erik; Mohammed, Abdulmelik; Gardell, Johan; Masich, Sergej; Czeizler, Eugen; Orponen, Pekka; Högberg, Björn

    2015-07-01

    It was suggested more than thirty years ago that Watson-Crick base pairing might be used for the rational design of nanometre-scale structures from nucleic acids. Since then, and especially since the introduction of the origami technique, DNA nanotechnology has enabled increasingly more complex structures. But although general approaches for creating DNA origami polygonal meshes and design software are available, there are still important constraints arising from DNA geometry and sense/antisense pairing, necessitating some manual adjustment during the design process. Here we present a general method of folding arbitrary polygonal digital meshes in DNA that readily produces structures that would be very difficult to realize using previous approaches. The design process is highly automated, using a routeing algorithm based on graph theory and a relaxation simulation that traces scaffold strands through the target structures. Moreover, unlike conventional origami designs built from close-packed helices, our structures have a more open conformation with one helix per edge and are therefore stable under the ionic conditions usually used in biological assays.

  18. Planar graphs theory and algorithms

    CERN Document Server

    Nishizeki, T

    1988-01-01

    Collected in this volume are most of the important theorems and algorithms currently known for planar graphs, together with constructive proofs for the theorems. Many of the algorithms are written in Pidgin PASCAL, and are the best-known ones; the complexities are linear or O(n log n). The first two chapters provide the foundations of graph theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independence set, subgraph listing, planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows. Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.

  19. State-of-the-Art in GPU-Based Large-Scale Volume Visualization

    KAUST Repository

    Beyer, Johanna

    2015-05-01

    This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. 'output-sensitive' algorithms and system designs. This leads to recent output-sensitive approaches that are 'ray-guided', 'visualization-driven' or 'display-aware'. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks-the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context-the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey. © 2015 The Eurographics Association and John Wiley & Sons Ltd.
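
    A toy sketch of the address-translation framing used in this survey: a page-table-like map translates virtual brick coordinates into slots of a small resident brick pool, so only the current working set stays resident (NumPy arrays stand in for GPU textures; the brick size, pool size, and LRU policy are illustrative choices, not prescribed by the survey):

      import numpy as np
      from collections import OrderedDict

      class BrickCache:
          # Maps virtual brick coordinates -> slots of a small resident pool,
          # evicting the least-recently-used brick when the working set overflows.
          def __init__(self, volume, brick=32, slots=8):
              self.volume, self.brick, self.slots = volume, brick, slots
              self.pool = np.zeros((slots, brick, brick, brick), volume.dtype)
              self.page_table = OrderedDict()        # (bx, by, bz) -> slot index

          def resident(self, key):
              # Return the slot holding brick `key`, uploading it on a miss.
              if key in self.page_table:             # hit: refresh LRU order
                  self.page_table.move_to_end(key)
                  return self.page_table[key]
              if len(self.page_table) >= self.slots: # miss + full pool: evict LRU brick
                  _, slot = self.page_table.popitem(last=False)
              else:
                  slot = len(self.page_table)
              bx, by, bz = key
              b = self.brick
              self.pool[slot] = self.volume[bx*b:(bx+1)*b, by*b:(by+1)*b, bz*b:(bz+1)*b]
              self.page_table[key] = slot
              return slot

          def sample(self, x, y, z):
              # Nearest-neighbour lookup through the page table (a ray caster would
              # call something like this per sample, or defer missing bricks).
              b = self.brick
              slot = self.resident((x // b, y // b, z // b))
              return self.pool[slot, x % b, y % b, z % b]

      vol = np.random.rand(128, 128, 128).astype(np.float32)
      cache = BrickCache(vol, brick=32, slots=8)
      print(cache.sample(5, 70, 100), "resident bricks:", len(cache.page_table))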

  20. State-of-the-Art in GPU-Based Large-Scale Volume Visualization

    KAUST Repository

    Beyer, Johanna; Hadwiger, Markus; Pfister, Hanspeter

    2015-01-01

    This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. 'output-sensitive' algorithms and system designs. This leads to recent output-sensitive approaches that are 'ray-guided', 'visualization-driven' or 'display-aware'. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks-the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context-the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey. © 2015 The Eurographics Association and John Wiley & Sons Ltd.

  1. Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine

    Science.gov (United States)

    Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.

    2017-12-01

    Spatio-temporal dynamic visualization is more vivid than static visualization. It is important to use dynamic visualization techniques to reveal the variation process and trend of geographical phenomena vividly and comprehensively. Dealing with the challenges of dynamically visualizing both 2D and 3D spatial targets, across different spatial data types, requires a high-performance GIS dynamic objects rendering engine. The main approach for improving a rendering engine that handles vast numbers of dynamic targets relies on key technologies of high-performance GIS, including in-memory computing, parallel computing, GPU computing and high-performance algorithms. In this study, a high-performance GIS dynamic objects rendering engine is designed and implemented to solve this problem using hybrid acceleration techniques. The engine combines GPU computing, OpenGL technology, and high-performance algorithms with the advantages of 64-bit in-memory computing. It processes 2D and 3D dynamic target data efficiently and runs smoothly with vast amounts of dynamic target data. A prototype of the high-performance GIS dynamic objects rendering engine is developed based on SuperMap GIS iObjects. The experiments, designed for large-scale spatial data visualization, showed that the engine delivers high performance: rendering two-dimensional and three-dimensional dynamic objects is about 20 times faster on the GPU than on the CPU.

  2. COMPANION ANIMALS SYMPOSIUM: Rendered ingredients significantly influence sustainability, quality, and safety of pet food.

    Science.gov (United States)

    Meeker, D L; Meisinger, J L

    2015-03-01

    The rendering industry collects and safely processes approximately 25 million t of animal byproducts each year in the United States. Rendering plants process a variety of raw materials from food animal production, principally offal from slaughterhouses, but include whole animals that die on farms or in transit and other materials such as bone, feathers, and blood. By recycling these byproducts into various protein, fat, and mineral products, including meat and bone meal, hydrolyzed feather meal, blood meal, and various types of animal fats and greases, the sustainability of food animal production is greatly enhanced. The rendering industry is conscious of its role in the prevention of disease and microbiological control and providing safe feed ingredients for livestock, poultry, aquaculture, and pets. The processing of otherwise low-value OM from the livestock production and meat processing industries through rendering drastically reduces the amount of waste. If not rendered, biological materials would be deposited in landfills, burned, buried, or inappropriately dumped with large amounts of carbon dioxide, ammonia, and other compounds polluting air and water. The majority of rendered protein products are used as animal feed. Rendered products are especially valuable to the livestock and pet food industries because of their high protein content, digestible AA levels (especially lysine), mineral availability (especially calcium and phosphorous), and relatively low cost in relation to their nutrient value. The use of these reclaimed and recycled materials in pet food is a much more sustainable model than using human food for pets.

  3. Rendering Large-Scale Terrain Models and Positioning Objects in Relation to 3D Terrain

    National Research Council Canada - National Science Library

    Hittner, Brian

    2003-01-01

    .... Rendering large scale landscapes based on 3D geometry generally did not occur because the scenes generated tended to use up too much system memory and overburden 3D graphics cards with too many polygons...

  4. Flight-appropriate 3D Terrain-rendering Toolkit for Synthetic Vision, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The TerraBlocksTM 3D terrain data format and terrain-block-rendering methodology provides an enabling basis for successful commercial deployment of...

  5. Processing-in-Memory Enabled Graphics Processors for 3D Rendering

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Chenhao; Song, Shuaiwen; Wang, Jing; Zhang, Weigong; Fu, Xin

    2017-02-06

    The performance of 3D rendering on the Graphics Processing Unit, which converts a 3D vector stream into a 2D frame with 3D image effects, significantly impacts users’ gaming experience on modern computer systems. Due to the high texture throughput in 3D rendering, main memory bandwidth becomes a critical obstacle for improving the overall rendering performance. 3D stacked memory systems such as Hybrid Memory Cube (HMC) provide opportunities to significantly overcome the memory wall by directly connecting logic controllers to DRAM dies. Based on the observation that texel fetches significantly impact off-chip memory traffic, we propose two architectural designs to enable Processing-In-Memory-based GPUs for efficient 3D rendering.

  6. Flight-appropriate 3D Terrain-rendering Toolkit for Synthetic Vision, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — TerraMetrics proposes an SBIR Phase I R/R&D effort to develop a key 3D terrain-rendering technology that provides the basis for successful commercial deployment...

  7. Physically-Based Rendering of Particle-Based Fluids with Light Transport Effects

    Science.gov (United States)

    Beddiaf, Ali; Babahenini, Mohamed Chaouki

    2018-03-01

    Recent interactive rendering approaches aim to efficiently produce images. However, time constraints deeply affect their output accuracy and realism (many light phenomena are poorly or not supported at all). To remedy this issue, in this paper, we propose a physically-based fluid rendering approach. First, while state-of-the-art methods focus on isosurface rendering with only two refractions, our proposal (1) considers the fluid as a heterogeneous participating medium with refractive boundaries, and (2) supports both multiple refractions and scattering. Second, the proposed solution is fully particle-based in the sense that no particles transformation into a grid is required. This interesting feature makes it able to handle many particle types (water, bubble, foam, and sand). On top of that, a medium with different fluids (color, phase function, etc.) can also be rendered.

  8. Major Deficiencies Preventing Auditors From Rendering Audit Opinions on DOD General Fund Financial Statements

    National Research Council Canada - National Science Library

    Rauu, Russell

    1995-01-01

    .... We plan to issue a similar report each year. The audit objective was to identify and summarize the major deficiencies that prevented auditors from rendering audit opinions, other than disclaimers, on Army and Air Force general fund financial...

  9. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  10. Named Entity Linking Algorithm

    Directory of Open Access Journals (Sweden)

    M. F. Panteleev

    2017-01-01

    Full Text Available In natural language processing, Named Entity Linking (NEL) is the task of identifying an entity mentioned in a text and linking it to an entity in a knowledge base (for example, DBpedia). There is currently a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine-learning-based ones. An algorithm combining graph-based and machine-learning approaches is proposed, following stated assumptions about the interrelations of named entities within a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to a metric that characterizes the distance between these entities in a graph built on a knowledge base. Because of processing-power limitations, solving this task directly is infeasible, so a modification is proposed. A purely machine-learning solution cannot be built because of the small volume of training data relevant to the NEL task; however, machine learning can still contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed to obtain a measure of the compatibility of attributes of different entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally. A test dataset was independently generated, and on its basis the performance of the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The prototype based on the proposed algorithm was slower than DBpedia Spotlight but showed higher accuracy, which indicates the prospects for further work in this direction. The main directions for development were proposed in order to increase the accuracy and performance of the system.

  11. Genetic algorithm approach

    African Journals Online (AJOL)

    Structure / acentric factor relationship of alcohols and phenols: genetic ... descriptors of geometrical type selected by genetic algorithm, among more than 1600 ..... Practical handbook of genetic algorithms: Applications Volume I; CRC Press.

  12. Specification and time required for the application of a lime-based render inside historic buildings

    Directory of Open Access Journals (Sweden)

    Vasco Peixoto de Freitas

    2008-01-01

    Full Text Available Intervention in ancient buildings with historical and architectural value requires traditional techniques, such as the use of lime mortars for internal and external wall renderings. In order to ensure the desired performance, these rendering mortars must be rigorously specified and quality controls have to be performed during application. The choice of mortar composition should take account of factors such as compatibility with the substrate, mechanical requirements and water behaviour. The construction schedule, which used to be considered a second order variable, nowadays plays a decisive role in the selection of the rendering technique, given its effects upon costs. How should lime-based mortars be specified? How much time is required for the application and curing of a lime-based render? This paper reflects upon the feasibility of using traditional lime mortars in three-layer renders inside churches and monasteries under adverse hygrothermal conditions and when time is critical. A case study is presented in which internal lime mortar renderings were applied in a church in Northern Portugal, where the very high relative humidity meant that several months were necessary before the drying process was complete.

  13. On-the-fly generation and rendering of infinite cities on the GPU

    KAUST Repository

    Steinberger, Markus

    2014-05-01

    In this paper, we present a new approach for shape-grammar-based generation and rendering of huge cities in real-time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the geometry produced as a preprocessing step. During rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view in real-time directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame-to-frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  14. On-the-fly generation and rendering of infinite cities on the GPU

    KAUST Repository

    Steinberger, Markus; Kenzel, Michael; Kainz, Bernhard K.; Wonka, Peter; Schmalstieg, Dieter

    2014-01-01

    In this paper, we present a new approach for shape-grammar-based generation and rendering of huge cities in real-time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the geometry produced as a preprocessing step. During rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view in real-time directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame-to-frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  15. North American Rendering: processing high quality protein and fats for feed North American Rendering: processamento de proteínas e gorduras de alta qualidade para alimentos para animais

    Directory of Open Access Journals (Sweden)

    David L. Meeker

    2009-07-01

    Full Text Available One third to one half of each animal produced for meat, milk, eggs, and fiber is not consumed by humans. These raw materials are subjected to rendering processes resulting in many useful products. Meat and bone meal, meat meal, poultry meal, hydrolyzed feather meal, blood meal, fish meal, and animal fats are the primary products resulting from the rendering process. The most important and valuable use for these animal by-products is as feed ingredients for livestock, poultry, aquaculture, and companion animals. There are volumes of scientific references validating the nutritional qualities of these products, and there are no scientific reasons for altering the practice of feeding rendered products to animals. Government agencies regulate the processing of food and feed, and the rendering industry is scrutinized often. In addition, industry programs include good manufacturing practices, HACCP, Codes of Practice, and third-party certification. The rendering industry clearly understands its role in the safe and nutritious production of animal feed ingredients and has done it very effectively for over 100 years. The availability of rendered products for animal feeds in the future depends on regulation and the market. Regulatory agencies will determine whether certain raw materials can be used for animal feed. The National Renderers Association (NRA supports the use of science as the basis for regulation while aesthetics, product specifications, and quality differences should be left to the market place. Without the rendering industry, the accumulation of unprocessed animal by-products would impede the meat industries and pose a serious potential hazard to animal and human health.De um terço a metade da produção animal para carne, leite, ovos e fibra, não são consumidos pelos seres humanos. Estes materiais não consumidos são sujeitos a processamento em graxarias e indústrias de alimentos de origem animal, resultando em uma série de produtos

  16. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
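
    A kinematic-only sketch of the idea (a bicycle-model approximation in which, at low speed, the kinematic and dynamic centers of rotation coincide; the wheelbase values are illustrative and the dynamic feedback correction described above is omitted):

      import math

      def steering_angles(road_radius_m, wheelbase_m=2.8, cg_to_front_m=1.3, four_wheel=True):
          # Kinematic steering so the instantaneous center of rotation (ICR) sits at the
          # road curvature center, for a left turn of the given radius.
          # 4WS: ICR placed abeam of the CG -> front and rear wheels counter-steer.
          # TWS: ICR constrained to the rear-axle line -> only the front wheels steer.
          a = cg_to_front_m
          b = wheelbase_m - cg_to_front_m
          R = road_radius_m
          if four_wheel:
              delta_front = math.atan2(a, R)
              delta_rear = -math.atan2(b, R)
          else:
              delta_front = math.atan2(wheelbase_m, R)   # ICR on the rear-axle line
              delta_rear = 0.0
          return math.degrees(delta_front), math.degrees(delta_rear)

      for R in (20.0, 100.0, 500.0):
          f4, r4 = steering_angles(R, four_wheel=True)
          f2, _ = steering_angles(R, four_wheel=False)
          print(f"R={R:5.0f} m  4WS: front {f4:5.2f}°, rear {r4:5.2f}°   TWS: front {f2:5.2f}°")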

  17. Antireflective sub-wavelength structures for improvement of the extraction efficiency and color rendering index of monolithic white light-emitting diode

    DEFF Research Database (Denmark)

    Ou, Yiyu; Corell, Dennis Dan; Dam-Hansen, Carsten

    2011-01-01

    We have theoretically investigated the influence of antireflective sub-wavelength structures on a monolithic white light-emitting diode (LED). The simulation is based on the rigorous coupled wave analysis (RCWA) algorithm, and both cylinder and moth-eye structures have been studied in the work. Our simulation results show that a moth-eye structure enhances the light extraction efficiency over the entire visible light range with an extraction efficiency enhancement of up to 26 %. Also, for the first time to the best of our knowledge, the influence of sub-wavelength structures on both the color rendering index

  18. Diastolic chamber properties of the left ventricle assessed by global fitting of pressure-volume data: improving the gold standard of diastolic function.

    Science.gov (United States)

    Bermejo, Javier; Yotti, Raquel; Pérez del Villar, Candelas; del Álamo, Juan C; Rodríguez-Pérez, Daniel; Martínez-Legazpi, Pablo; Benito, Yolanda; Antoranz, J Carlos; Desco, M Mar; González-Mansilla, Ana; Barrio, Alicia; Elízaga, Jaime; Fernández-Avilés, Francisco

    2013-08-15

    In cardiovascular research, relaxation and stiffness are calculated from pressure-volume (PV) curves by separately fitting the data during the isovolumic and end-diastolic phases (end-diastolic PV relationship), respectively. This method is limited because it assumes uncoupled active and passive properties during these phases, it penalizes statistical power, and it cannot account for elastic restoring forces. We aimed to improve this analysis by implementing a method based on global optimization of all PV diastolic data. In 1,000 Monte Carlo experiments, the optimization algorithm recovered entered parameters of diastolic properties below and above the equilibrium volume (intraclass correlation coefficients = 0.99). Inotropic modulation experiments in 26 pigs modified passive pressure generated by restoring forces due to changes in the operative and/or equilibrium volumes. Volume overload and coronary microembolization caused incomplete relaxation at end diastole (active pressure > 0.5 mmHg), rendering the end-diastolic PV relationship method ill-posed. In 28 patients undergoing PV cardiac catheterization, the new algorithm reduced the confidence intervals of stiffness parameters by one-fifth. The Jacobian matrix allowed visualizing the contribution of each property to instantaneous diastolic pressure on a per-patient basis. The algorithm allowed estimating stiffness from single-beat PV data (derivative of left ventricular pressure with respect to volume at end-diastolic volume: intraclass correlation coefficient = 0.65, error = 0.07 ± 0.24 mmHg/ml). Thus, in clinical and preclinical research, global optimization algorithms provide the most complete, accurate, and reproducible assessment of global left ventricular diastolic chamber properties from PV data. Using global optimization, we were able to fully uncouple relaxation and passive PV curves for the first time in the intact heart.
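
    For illustration, a global-fit sketch in SciPy that fits all diastolic samples at once with a mono-exponential relaxation term plus an exponential passive curve (a stand-in model with synthetic data; the authors' actual constitutive model, parameterization, and protocol are not reproduced):

      import numpy as np
      from scipy.optimize import least_squares

      def model_pressure(params, t, v):
          # Total diastolic pressure = decaying active (relaxation) part
          # + passive (stiffness) part, fitted to *all* PV samples jointly.
          p0, tau, alpha, beta, v0 = params
          active = p0 * np.exp(-t / tau)                     # isovolumic relaxation decay
          passive = alpha * (np.exp(beta * (v - v0)) - 1.0)  # exponential stiffness curve
          return active + passive

      def fit_diastole(t, v, p, x0=(20.0, 0.05, 1.0, 0.05, 80.0)):
          res = least_squares(lambda x: model_pressure(x, t, v) - p, x0,
                              bounds=([0, 1e-3, 0, 0, 0], np.inf))
          return res.x

      # Synthetic single-beat diastolic data (t in s, v in ml, p in mmHg).
      t = np.linspace(0.0, 0.5, 60)
      v = 70 + 60 * (1 - np.exp(-t / 0.15))                  # toy filling curve
      true = (18.0, 0.045, 0.9, 0.04, 75.0)
      p = model_pressure(true, t, v) + np.random.default_rng(0).normal(0, 0.2, t.size)
      est = fit_diastole(t, v, p)
      print("tau (s):", round(est[1], 3),
            "dP/dV at end-diastole (mmHg/ml):",
            round(est[2] * est[3] * np.exp(est[3] * (v[-1] - est[4])), 3))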

  19. Real-Time Location-Based Rendering of Urban Underground Pipelines

    Directory of Open Access Journals (Sweden)

    Wei Li

    2018-01-01

    Full Text Available The concealment and complex spatial relationships of urban underground pipelines present challenges in managing them. Recently, augmented reality (AR has been a hot topic around the world, because it can enhance our perception of reality by overlaying information about the environment and its objects onto the real world. Using AR, underground pipelines can be displayed accurately, intuitively, and in real time. We analyzed the characteristics of AR and their application in underground pipeline management. We mainly focused on the AR pipeline rendering procedure based on the BeiDou Navigation Satellite System (BDS and simultaneous localization and mapping (SLAM technology. First, in aiming to improve the spatial accuracy of pipeline rendering, we used differential corrections received from the Ground-Based Augmentation System to compute the precise coordinates of users in real time, which helped us accurately retrieve and draw pipelines near the users, and by scene recognition the accuracy can be further improved. Second, in terms of pipeline rendering, we used Visual-Inertial Odometry (VIO to track the rendered objects and made some improvements to visual effects, which can provide steady dynamic tracking of pipelines even in relatively markerless environments and outdoors. Finally, we used the occlusion method based on real-time 3D reconstruction to realistically express the immersion effect of underground pipelines. We compared our methods to the existing methods and concluded that the method proposed in this research improves the spatial accuracy of pipeline rendering and the portability of the equipment. Moreover, the updating of our rendering procedure corresponded with the moving of the user’s location, thus we achieved a dynamic rendering of pipelines in the real environment.

  20. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  1. The PHMC algorithm for simulations of dynamical fermions; 1, description and properties

    CERN Document Server

    Frezzotti, R

    1999-01-01

    We give a detailed description of the so-called Polynomial Hybrid Monte Carlo (PHMC) algorithm. The effects of the correction factor, which is introduced to render the algorithm exact, are discussed, stressing their relevance for the statistical fluctuations and (almost) zero mode contributions to physical observables. We also investigate rounding-error effects and propose several ways to reduce memory requirements.

  2. 6. Algorithms for Sorting and Searching

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 2, Issue 3. Algorithms - Algorithms for Sorting and Searching. R K Shyamasundar. Series Article. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.

  3. ClipCard: Sharable, Searchable Visual Metadata Summaries on the Cloud to Render Big Data Actionable

    Science.gov (United States)

    Saripalli, P.; Davis, D.; Cunningham, R.

    2013-12-01

    Research firm IDC estimates that approximately 90 percent of the Enterprise Big Data goes un-analyzed, as 'dark data' - an enormous corpus of undiscovered, untagged information residing on data warehouses, servers and Storage Area Networks (SAN). In the geosciences, these data range from unpublished model runs to vast survey data assets to raw sensor data. Many of these are now being collected instantaneously, at a greater volume and in new data formats. Not all of these data can be analyzed, nor processed in real time, and their features may not be well described at the time of collection. These dark data are a serious data management problem for science organizations of all types, especially ones with mandated or required data reporting and compliance requirements. Additionally, data curators and scientists are encouraged to quantify the impact of their data holdings as a way to measure research success. Deriving actionable insights is the foremost goal of Big Data Analytics (BDA), which is especially true with geoscience, given its direct impact on most of the pressing global issues. Clearly, there is a pressing need for innovative approaches to making dark data discoverable, measurable, and actionable. We report on ClipCard, a Cloud-based SaaS analytic platform for instant summarization, quick search, visualization and easy sharing of metadata summaries from the dark data at hierarchical levels of detail, thus rendering it 'white', i.e., actionable. We present a use case of the ClipCard platform, a cloud-based application which helps generate (abstracted) visual metadata summaries and meta-analytics for environmental data at hierarchical scales within and across big data containers. These summaries and analyses provide important new tools for managing big data and simplifying collaboration through easy to deploy sharing APIs. The ClipCard application solves a growing data management bottleneck by helping enterprises and large organizations to summarize, search

  4. Evaluation and Improvement of the CIE Metameric and Colour Rendering Index

    Directory of Open Access Journals (Sweden)

    Radovan Slavuj

    2015-12-01

    Full Text Available Artificial light sources are intended to simulate daylight and its colour rendering and colour discrimination properties. Two indices defined by the CIE are used to quantify the quality of artificial light sources: the Colour Rendering Index, which quantifies the ability of a light source to render colours, and the Metamerism Index, which describes the metamerism potential of a given light source. The calculation of both indices is defined by the CIE and has been a subject of discussion and change in the past. This work specifically addresses and evaluates the problem of the number and type of samples used in the calculation. It is observed that both indices depend on the choice and number of samples and that these should be determined based on the application.
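
    For reference, a small sketch of the final step of the CIE test-sample method: special indices Ri follow from the colour differences ΔEi of the test samples between the test source and the reference illuminant, and Ra is the mean of the first eight. The full spectral and chromatic-adaptation pipeline that produces the ΔEi values is omitted here, and the example ΔE values are invented.

      def special_indices(delta_e):
          # CIE special colour rendering indices: Ri = 100 - 4.6 * dE_i.
          return [100.0 - 4.6 * de for de in delta_e]

      def general_cri(delta_e_first8):
          # General index Ra: arithmetic mean of R1..R8.
          r = special_indices(delta_e_first8)
          return sum(r) / len(r)

      # Invented colour differences for the 8 CIE test samples.
      delta_e = [3.1, 2.4, 4.0, 3.6, 2.9, 3.3, 2.7, 4.4]
      print("Ri:", [round(x, 1) for x in special_indices(delta_e)])
      print("Ra:", round(general_cri(delta_e), 1))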

  5. The Visualization Toolkit (VTK): Rewriting the rendering code for modern graphics cards

    Science.gov (United States)

    Hanwell, Marcus D.; Martin, Kenneth M.; Chaudhary, Aashish; Avila, Lisa S.

    2015-09-01

    The Visualization Toolkit (VTK) is an open source, permissively licensed, cross-platform toolkit for scientific data processing, visualization, and data analysis. It is over two decades old, originally developed for a very different graphics card architecture. Modern graphics cards feature fully programmable, highly parallelized architectures with large core counts. VTK's rendering code was rewritten to take advantage of modern graphics cards, maintaining most of the toolkit's programming interfaces. This offers the opportunity to compare the performance of old and new rendering code on the same systems/cards. Significant improvements in rendering speeds and memory footprints mean that scientific data can be visualized in greater detail than ever before. The widespread use of VTK means that these improvements will reap significant benefits.
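
    For context, a minimal VTK volume-rendering script in Python, using the built-in synthetic vtkRTAnalyticSource so it is self-contained (the transfer-function breakpoints are arbitrary); recent VTK releases use the rewritten OpenGL2 backend by default, so this path exercises the new rendering code:

      import vtk

      # Synthetic image data (the "wavelet" source), so no external file is needed.
      source = vtk.vtkRTAnalyticSource()
      source.SetWholeExtent(-31, 32, -31, 32, -31, 32)

      mapper = vtk.vtkSmartVolumeMapper()          # picks GPU ray casting when available
      mapper.SetInputConnection(source.GetOutputPort())

      color = vtk.vtkColorTransferFunction()       # scalar value -> RGB
      color.AddRGBPoint(40.0, 0.0, 0.0, 1.0)
      color.AddRGBPoint(160.0, 0.0, 1.0, 0.0)
      color.AddRGBPoint(280.0, 1.0, 0.0, 0.0)

      opacity = vtk.vtkPiecewiseFunction()         # scalar value -> opacity
      opacity.AddPoint(40.0, 0.0)
      opacity.AddPoint(280.0, 0.25)

      prop = vtk.vtkVolumeProperty()
      prop.SetColor(color)
      prop.SetScalarOpacity(opacity)
      prop.SetInterpolationTypeToLinear()
      prop.ShadeOn()

      volume = vtk.vtkVolume()
      volume.SetMapper(mapper)
      volume.SetProperty(prop)

      renderer = vtk.vtkRenderer()
      renderer.AddVolume(volume)
      window = vtk.vtkRenderWindow()
      window.AddRenderer(renderer)
      interactor = vtk.vtkRenderWindowInteractor()
      interactor.SetRenderWindow(window)

      interactor.Initialize()
      window.Render()
      interactor.Start()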

  6. Reflection curves—new computation and rendering techniques

    Directory of Open Access Journals (Sweden)

    Dan-Eugen Ulmet

    2004-05-01

    Full Text Available Reflection curves on surfaces are important tools for free-form surface interrogation. They are essential for industrial 3D CAD/CAM systems and for rendering purposes. In this note, new approaches regarding the computation and rendering of reflection curves on surfaces are introduced. These approaches are designed to take advantage of the graphics libraries of recent releases of commercial systems such as the OpenInventor toolkit (developed by Silicon Graphics) or Matlab (developed by The Math Works). A new relation between reflection curves and contour curves is derived; this theoretical result is used for a straightforward Matlab implementation of reflection curves. A new type of reflection curves is also generated using the OpenInventor texture and environment mapping implementations. This allows the computation, rendering, and animation of reflection curves at interactive rates, which makes it particularly useful for industrial applications.

  7. Cement-Based Renders Manufactured with Phase-Change Materials: Applications and Feasibility

    Directory of Open Access Journals (Sweden)

    Luigi Coppola

    2016-01-01

    Full Text Available The paper focuses on the evaluation of the rheological and mechanical performance of cement-based renders manufactured with phase-change materials (PCM) in the form of microencapsulated paraffin for innovative and eco-friendly residential buildings. Specifically, cement-based renders were manufactured by incorporating different amounts of paraffin microcapsules, ranging from 5% to 20% by weight with respect to the binder. Specific mass, entrained or entrapped air, and setting time were evaluated on fresh mortars. Compressive strength was measured over time to evaluate the effect of the PCM addition on the hydration kinetics of cement. Drying shrinkage was also evaluated. Experimental results confirmed that the compressive strength decreases as the amount of PCM increases. Furthermore, the higher the PCM content, the higher the drying shrinkage. The results confirm the possibility of manufacturing cement-based renders containing up to 20% by weight of PCM microcapsules with respect to the binder.

  8. Local intelligent electronic device (IED) rendering templates over limited bandwidth communication link to manage remote IED

    Science.gov (United States)

    Bradetich, Ryan; Dearien, Jason A; Grussling, Barry Jakob; Remaley, Gavin

    2013-11-05

    The present disclosure provides systems and methods for remote device management. According to various embodiments, a local intelligent electronic device (IED) may be in communication with a remote IED via a limited bandwidth communication link, such as a serial link. The limited bandwidth communication link may not support traditional remote management interfaces. According to one embodiment, a local IED may present an operator with a management interface for a remote IED by rendering locally stored templates. The local IED may render the locally stored templates using sparse data obtained from the remote IED. According to various embodiments, the management interface may be a web client interface and/or an HTML interface. The bandwidth required to present a remote management interface may be significantly reduced by rendering locally stored templates rather than requesting an entire management interface from the remote IED. According to various embodiments, an IED may comprise an encryption transceiver.

  9. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    "…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  10. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real...... variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
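
    The record mentions routines for rearranging a triangle that is packed by columns into n(n+1)/2 values. The Python sketch below illustrates only that column-packed storage layout and its inverse rearrangement; it is a toy illustration, not the block hybrid format or the Fortran kernels of Algorithm 865.

        import numpy as np

        def pack_lower(A):
            """Pack the lower triangle of a symmetric matrix, column by column, into n(n+1)/2 values."""
            n = A.shape[0]
            return np.concatenate([A[j:, j] for j in range(n)])

        def packed_index(i, j, n):
            """Position of element (i, j), with i >= j, inside the column-packed lower triangle."""
            return j * n - j * (j - 1) // 2 + (i - j)

        def unpack_lower(packed, n):
            """Rebuild the full lower triangle from the packed vector (the inverse rearrangement)."""
            L = np.zeros((n, n))
            for j in range(n):
                for i in range(j, n):
                    L[i, j] = packed[packed_index(i, j, n)]
            return L

        n = 5
        M = np.random.rand(n, n)
        A = M @ M.T + n * np.eye(n)              # small symmetric positive-definite test matrix
        packed = pack_lower(A)
        assert packed.size == n * (n + 1) // 2   # n(n+1)/2 values, as stated in the abstract
        assert np.allclose(np.tril(A), unpack_lower(packed, n))
        L_ref = np.linalg.cholesky(A)            # dense reference factorization (full storage, n*n values)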

  11. ACCELERATION RENDERING METHOD ON RAY TRACING WITH ANGLE COMPARISON AND DISTANCE COMPARISON

    Directory of Open Access Journals (Sweden)

    Liliana liliana

    2007-01-01

    Full Text Available In computer graphics applications, ray tracing is a method often used to produce realistic images. Ray tracing models not only local illumination but also global illumination. Local illumination accounts only for ambient, diffuse and specular effects, whereas global illumination also accounts for mirroring and transparency; local illumination considers light arriving from the lamp(s) only, whereas global illumination also considers light arriving from other objects. The objects usually modelled are primitive objects and mesh objects. The advantage of mesh modelling is its varied, interesting and realistic shapes. A mesh contains many primitive objects, usually triangles and occasionally quadrilaterals. A problem in mesh object modelling is long rendering time, because every ray must be checked against a large number of the mesh's triangles; together with the rays spawned from other objects, the number of rays to be traced grows, which increases rendering time further. To solve this problem, new methods are developed in this research to make the rendering of mesh objects faster. The new methods are angle comparison and distance comparison. They are used to reduce the number of ray checks: rays predicted not to intersect the mesh are not tested for intersection against it. With angle comparison, using a small comparison angle makes rendering fast, but this has a disadvantage: if the triangles are large, some triangles are corrupted. If the comparison angle is larger, mesh corruption can be avoided but the rendering time becomes longer than without comparison. With distance comparison, the rendering time is less than without comparison, and no triangles are corrupted.
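
    To make the idea of skipping ray-triangle checks concrete, the sketch below shows one simplified reading of the angle-comparison method (an assumption of this note, not the authors' code): triangles whose centroids lie outside a small cone around the ray are culled before the usual Moller-Trumbore intersection test. As the abstract notes, too small a comparison angle can wrongly cull large triangles.

        import numpy as np

        def ray_triangle_intersect(origin, direction, tri):
            """Moller-Trumbore ray/triangle intersection; returns hit distance t or None."""
            v0, v1, v2 = tri
            e1, e2 = v1 - v0, v2 - v0
            p = np.cross(direction, e2)
            det = np.dot(e1, p)
            if abs(det) < 1e-9:
                return None
            inv = 1.0 / det
            s = origin - v0
            u = np.dot(s, p) * inv
            if u < 0.0 or u > 1.0:
                return None
            q = np.cross(s, e1)
            v = np.dot(direction, q) * inv
            if v < 0.0 or u + v > 1.0:
                return None
            t = np.dot(e2, q) * inv
            return t if t > 1e-9 else None

        def trace_with_angle_culling(origin, direction, triangles, max_angle_deg=10.0):
            """Skip triangles whose centroid lies outside a cone around the ray direction."""
            cos_limit = np.cos(np.radians(max_angle_deg))
            closest = None
            for tri in triangles:
                centroid = tri.mean(axis=0)
                to_centroid = centroid - origin
                norm = np.linalg.norm(to_centroid)
                if norm > 1e-9 and np.dot(to_centroid / norm, direction) < cos_limit:
                    continue  # outside the comparison angle: assume the ray cannot hit it
                t = ray_triangle_intersect(origin, direction, tri)
                if t is not None and (closest is None or t < closest):
                    closest = t
            return closest

        # Example: one triangle straight ahead of the ray and one far off to the side (culled).
        tris = [np.array([[0.0, -1.0, 5.0], [1.0, 1.0, 5.0], [-1.0, 1.0, 5.0]]),
                np.array([[50.0, 0.0, 5.0], [51.0, 1.0, 5.0], [49.0, 1.0, 5.0]])]
        print(trace_with_angle_culling(np.zeros(3), np.array([0.0, 0.0, 1.0]), tris))  # 5.0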

  12. Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.

    Science.gov (United States)

    Friston, Sebastian; Steed, Anthony; Tilbury, Simon; Gaydadjiev, Georgi

    2016-04-01

    Latency - the delay between a user's action and the response to this action - is known to be detrimental to virtual reality. Latency is typically considered to be a discrete value characterising a delay, constant in time and space - but this characterisation is incomplete. Latency changes across the display during scan-out, and how it does so is dependent on the rendering approach used. In this study, we present an ultra-low latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of ~1 ms from 'tracker to pixel'. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan-beam. This is in contrast to frame-based systems such as those using typical GPUs, for which the latency increases as scan-out proceeds. Using a series of high and low speed videos of our system in use, we confirm its latency of ~1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2. We contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system - one with a zero latency tracker, renderer and display running at 1 kHz. Finally, we examine the results of these quality measures, and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with a lower average latency, can more faithfully draw what the ideal virtual reality system would. Further, we find that with low display persistence, the sensitivity to velocity of both systems is lowered, but that it is much lower for ours.

  13. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties such as invariance under Euclidean transformations or invariance under reparameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
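
    As a hedged illustration of the kind of smoothness energy discussed above, the sketch below fits a noisy 1D curve with a generic second-difference stabilizer (not the paper's circle-preserving stabilizers). This simple energy tends to flatten curved shapes, which is exactly the systematic curvature underestimation the paper sets out to avoid.

        import numpy as np

        def smoothness_energy(x):
            """Discrete second-difference smoothness energy of a sampled curve."""
            d2 = x[2:] - 2 * x[1:-1] + x[:-2]
            return np.sum(d2 ** 2)

        def fit_regularized(data, lam=5.0, iters=2000, lr=0.01):
            """Minimize ||x - data||^2 + lam * smoothness_energy(x) by gradient descent."""
            x = data.copy()
            for _ in range(iters):
                grad = 2.0 * (x - data)            # data-fidelity term
                d2 = x[2:] - 2 * x[1:-1] + x[:-2]
                grad[2:] += lam * 2.0 * d2         # derivative w.r.t. x[i+1]
                grad[1:-1] += lam * -4.0 * d2      # derivative w.r.t. x[i]
                grad[:-2] += lam * 2.0 * d2        # derivative w.r.t. x[i-1]
                x -= lr * grad
            return x

        t = np.linspace(0, 2 * np.pi, 200)
        noisy = np.sin(t) + 0.1 * np.random.randn(t.size)
        smoothed = fit_regularized(noisy)
        print(smoothness_energy(noisy), smoothness_energy(smoothed))   # the energy drops after fitting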

  14. USER EVALUATION OF EIGHT LED LIGHT SOURCES WITH DIFFERENT SPECIAL COLOUR RENDERING INDICES R9

    DEFF Research Database (Denmark)

    Markvart, Jakob; Iversen, Anne; Logadóttir, Ásta

    2013-01-01

    In this study we evaluated the influence of the special colour rendering index R9 on subjective red colour perception and Caucasian skin appearance among untrained test subjects. The light sources tested are commercially available LED based light sources with similar correlated colour temperature...... and general colour rendering index, but with varying R9. It was found that the test subjects in general are more positive towards light sources with higher R9. The shift from a majority of negative responses to a majority of positive responses is found to occur at R9 values of ~20....

  15. Mastering Mental Ray Rendering Techniques for 3D and CAD Professionals

    CERN Document Server

    O'Connor, Jennifer

    2010-01-01

    Proven techniques for using mental ray effectively. If you're a busy artist seeking high-end results for your 3D, design, or architecture renders using mental ray, this is the perfect book for you. It distills the highly technical nature of rendering into easy-to-follow steps and tutorials that you can apply immediately to your own projects. The book uses 3ds Max and 3ds Max Design to show the integration with mental ray, but users of any 3D or CAD software can learn valuable techniques for incorporating mental ray into their pipelines: Takes you under the hood of mental ray, a stand-alone or

  16. Towards the Availability of the Distributed Cluster Rendering System: Automatic Modeling and Verification

    DEFF Research Database (Denmark)

    Wang, Kemin; Jiang, Zhengtao; Wang, Yongbin

    2012-01-01

    In this study, we proposed a Continuous Time Markov Chain model for the availability of n-node clusters of the Distributed Rendering System. Since the model is an infinite one, we formalized it and, based on the model, implemented a software tool that automatically builds the model in the PRISM language. With the tool, whenever the number of nodes n and the related parameters vary, we can create the PRISM model file rapidly and then use the PRISM model checker to verify related system properties. At the end of this study, we analyzed and verified the availability distributions of the Distributed Cluster Rendering System.
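
    The PRISM model itself is not reproduced in the record. As a loose companion sketch of the same idea, the Python snippet below solves a finite birth-death CTMC for the steady-state availability of an n-node cluster; the failure and repair rates, the single-repair-unit assumption and the finite state space are all assumptions of this sketch, whereas the paper's model is infinite and expressed in the PRISM language.

        import numpy as np

        def cluster_availability(n, fail_rate, repair_rate, min_working):
            """Steady-state probability that at least `min_working` of n nodes are up,
            for a birth-death CTMC with independent failures and a single repair unit."""
            # State k = number of working nodes.  Build the generator matrix Q.
            Q = np.zeros((n + 1, n + 1))
            for k in range(n + 1):
                if k > 0:                      # a working node fails: k -> k-1
                    Q[k, k - 1] = k * fail_rate
                if k < n:                      # one failed node is repaired: k -> k+1
                    Q[k, k + 1] = repair_rate
                Q[k, k] = -Q[k].sum()
            # Solve pi @ Q = 0 together with sum(pi) = 1.
            A = np.vstack([Q.T, np.ones(n + 1)])
            b = np.zeros(n + 2)
            b[-1] = 1.0
            pi, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pi[min_working:].sum()

        # Assumed rates: probability that at least 6 of 8 render nodes are available.
        print(cluster_availability(n=8, fail_rate=0.01, repair_rate=0.5, min_working=6))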

  17. Parametric model of the scala tympani for haptic-rendered cochlear implantation.

    Science.gov (United States)

    Todd, Catherine; Naghdy, Fazel

    2005-01-01

    A parametric model of the human scala tympani has been designed for use in a haptic-rendered computer simulation of cochlear implant surgery. It will be the first surgical simulator of this kind. A geometric model of the Scala Tympani has been derived from measured data for this purpose. The model is compared with two existing descriptions of the cochlear spiral. A first approximation of the basilar membrane is also produced. The structures are imported into a force-rendering software application for system development.

  18. Deep Exemplar 2D-3D Detection by Adapting from Real to Rendered Views

    OpenAIRE

    Massa, Francisco; Russell, Bryan; Aubry, Mathieu

    2015-01-01

    This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefi...

  19. Hardware-accelerated Point Generation and Rendering of Point-based Impostors

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas

    2005-01-01

    This paper presents a novel scheme for generating points from triangle models. The method is fast and lends itself well to implementation using graphics hardware. The triangle to point conversion is done by rendering the models, and the rendering may be performed procedurally or by a black box API....... I describe the technique in detail and discuss how the generated point sets can easily be used as impostors for the original triangle models used to create the points. Since the points reside solely in GPU memory, these impostors are fairly efficient. Source code is available online....

  20. Leveraging Disturbance Observer Based Torque Control for Improved Impedance Rendering with Series Elastic Actuators

    Science.gov (United States)

    Mehling, Joshua S.; Holley, James; O'Malley, Marcia K.

    2015-01-01

    The fidelity with which series elastic actuators (SEAs) render desired impedances is important. Numerous approaches to SEA impedance control have been developed under the premise that high-precision actuator torque control is a prerequisite. Indeed, the design of an inner torque compensator has a significant impact on actuator impedance rendering. The disturbance observer (DOB) based torque control implemented in NASA's Valkyrie robot is considered here and a mathematical model of this torque control, cascaded with an outer impedance compensator, is constructed. While previous work has examined the impact a disturbance observer has on torque control performance, little has been done regarding DOBs and impedance rendering accuracy. Both simulation and a series of experiments are used to demonstrate the significant improvements possible in an SEA's ability to render desired dynamic behaviors when utilizing a DOB. Actuator transparency at low impedances is improved, closed loop hysteresis is reduced, and the actuator's dynamic response to both commands and interaction torques more faithfully matches that of the desired model. All of this is achieved by leveraging DOB based control rather than increasing compensator gains, thus making improved SEA impedance control easier to achieve in practice.
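
    As a hedged toy illustration of the disturbance-observer idea (a one-degree-of-freedom sketch with assumed inertias, signal frequencies and a first-order Q-filter; it is not the Valkyrie controller), the snippet below estimates the lumped disturbance from the nominal model and subtracts it from the torque command, so the measured response tracks the nominal model more closely.

        import numpy as np

        def simulate(dob_on, steps=4000, dt=1e-3):
            """Toy 1-DOF actuator J*domega/dt = tau + d.  A disturbance observer (DOB)
            estimates the lumped disturbance d (interaction torque plus model mismatch)
            and cancels it, so the response follows the nominal model more faithfully."""
            J_true, J_nom = 0.012, 0.010          # true vs. nominal inertia (assumed values)
            cutoff = 50.0                          # rad/s, first-order Q-filter bandwidth
            alpha = dt * cutoff / (1.0 + dt * cutoff)
            omega, omega_prev, d_hat, err = 0.0, 0.0, 0.0, 0.0
            for k in range(steps):
                t = k * dt
                tau_des = 0.2 * np.sin(2 * np.pi * 1.0 * t)      # commanded torque
                d_true = 0.05 * np.sin(2 * np.pi * 3.0 * t)      # external interaction torque
                tau_cmd = tau_des - d_hat if dob_on else tau_des
                omega += (tau_cmd + d_true) / J_true * dt        # integrate the true plant
                accel_meas = (omega - omega_prev) / dt
                omega_prev = omega
                # DOB: nominal-model torque minus the commanded torque, low-pass filtered.
                d_hat = (1 - alpha) * d_hat + alpha * (J_nom * accel_meas - tau_cmd)
                err += abs(accel_meas - tau_des / J_nom) * dt    # deviation from the nominal behaviour
            return err

        print(simulate(dob_on=False), simulate(dob_on=True))     # the integrated error drops with the DOB on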

  1. Photometric and Colorimetric Comparison of HDR and Spectrally Resolved Rendering Images

    DEFF Research Database (Denmark)

    Amdemeskel, Mekbib Wubishet; Soreze, Thierry Silvio Claude; Thorseth, Anders

    2017-01-01

    In this paper, we will demonstrate a comparison between measured colorimetric images, and simulated images from a physics based rendering engine. The colorimetric images are high dynamic range (HDR) and taken with a luminance and colour camera mounted on a goniometer. For the comparison, we have ...

  2. Advanced Audiovisual Rendering, Gesture-Based Interaction and Distributed Delivery for Immersive and Interactive Media Services

    NARCIS (Netherlands)

    Niamut, O.A.; Kochale, A.; Ruiz Hidalgo, J.; Macq, J-F.; Kienast, G.

    2011-01-01

    The media industry is currently being pulled in the often-opposing directions of increased realism (high resolution, stereoscopic, large screen) and personalisation (selection and control of content, availability on many devices). A capture, production, delivery and rendering system capable of

  3. An economic analysis of localized pollution: rendering emissions in a residential setting

    Science.gov (United States)

    J. Michael Bowker; H.F. MacDonald

    1991-01-01

    The contingent value method is employed to estimate economic damages to households resulting from rendering plant emissions in a small town. Household willingness to accept (WTA) and willingness to pay (WTP) are estimated individually and in aggregate. The influence of household characteristics on WTP and WTA is examined via regression models. The perception of health...

  4. 3D-TV Rendering on a Multiprocessor System on a Chip

    NARCIS (Netherlands)

    Van Eijndhoven, J.T.J.; Li, X.

    2006-01-01

    This thesis focuses on the issue of mapping 3D-TV rendering applications to a multiprocessor platform. The target platform aims to address tomorrow's multi-media consumer market. The prototype chip, called Wasabi, contains a set of TriMedia processors that communicate via a shared memory, fast

  5. User evaluation of eight led light sources with different special colour rendering indices R9

    DEFF Research Database (Denmark)

    Markvart, Jakob; Iversen, Anne; Logadottir, Asta

    2013-01-01

    In this study we evaluated the influence of the special colour rendering index R9 on subjective red colour perception and Caucasian skin appearance among untrained test subjects. The light sources tested are commercially available LED based light sources with similar correlated colour temperature...

  6. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    NARCIS (Netherlands)

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic

  7. Methodology, Algorithms, and Emerging Tool for Automated Design of Intelligent Integrated Multi-Sensor Systems

    Directory of Open Access Journals (Sweden)

    Andreas König

    2009-11-01

    Full Text Available The emergence of novel sensing elements, computing nodes, wireless communication and integration technology provides unprecedented possibilities for the design and application of intelligent systems. Each new application system must be designed from scratch, employing sophisticated methods ranging from conventional signal processing to computational intelligence. Currently, a significant part of this overall algorithmic chain of the computational system model still has to be assembled manually by experienced designers in a time- and labor-consuming process. In this research work, this challenge is picked up and a methodology and algorithms for the automated design of intelligent, integrated and resource-aware multi-sensor systems employing multi-objective evolutionary computation are introduced. The proposed methodology tackles the challenge of rapid prototyping of such systems under realization constraints and, additionally, includes features of system-instance-specific self-correction for sustained operation in large volumes and in a dynamically changing environment. The extension of these concepts to the reconfigurable hardware platform renders so-called self-x sensor systems, which stands, e.g., for self-monitoring, -calibrating, -trimming, and -repairing/-healing systems. Selected experimental results prove the applicability and effectiveness of our proposed methodology and emerging tool. By our approach, competitive results were achieved with regard to classification accuracy, flexibility, and design speed under additional design constraints.

  8. Validation of Thermal Lethality against Salmonella enterica in Poultry Offal during Rendering.

    Science.gov (United States)

    Jones-Ibarra, Amie-Marie; Acuff, Gary R; Alvarado, Christine Z; Taylor, T Matthew

    2017-09-01

    Recent outbreaks of human disease following contact with companion animal foods cross-contaminated with enteric pathogens, such as Salmonella enterica, have resulted in increased concern regarding the microbiological safety of animal foods. Additionally, the U.S. Food and Drug Administration Food Safety Modernization Act and its implementing rules have stipulated the implementation of current good manufacturing practices and food safety preventive controls for livestock and companion animal foods. Animal foods and feeds are sometimes formulated to include thermally rendered animal by-product meals. The objective of this research was to determine the thermal inactivation of S. enterica in poultry offal during rendering at differing temperatures. Raw poultry offal was obtained from a commercial renderer and inoculated with a mixture of Salmonella serovars Senftenberg, Enteritidis, and Gallinarum (an avian pathogen) prior to being subjected to heating at 150, 155, or 160°F (65.5, 68.3, or 71.1°C) for up to 15 min. Following heat application, surviving Salmonella bacteria were enumerated. Mean D-values for the Salmonella cocktail at 150, 155, and 160°F were 0.254 ± 0.045, 0.172 ± 0.012, and 0.086 ± 0.004 min, respectively, indicative of increasing susceptibility to increased application of heat during processing. The mean thermal process constant (z-value) was 21.948 ± 3.87°F. Results indicate that a 7.0-log-cycle inactivation of Salmonella may be obtained from the cumulative lethality encountered during the heating come-up period and subsequent rendering of raw poultry offal at temperatures not less than 150°F. Current poultry rendering procedures are anticipated to be effective for achieving necessary pathogen control when completed under sanitary conditions.
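
    To make the quoted figures easier to reuse, here is a back-of-the-envelope sketch that plugs the mean D- and z-values reported above into the standard thermal-death-time relations; it is an illustrative calculation only, not the authors' validation protocol.

        def time_for_log_reduction(d_value_min, log_cycles):
            """Holding time (min) for the desired log10 reduction at the D-value's temperature."""
            return d_value_min * log_cycles

        def d_at_temperature(d_ref_min, t_ref_f, t_f, z_value_f):
            """Classic thermal-death-time adjustment: D changes tenfold for every z degrees."""
            return d_ref_min * 10 ** ((t_ref_f - t_f) / z_value_f)

        d_150 = 0.254          # min, mean D-value at 150 degrees F from the abstract
        z = 21.948             # degrees F, mean z-value from the abstract

        print(time_for_log_reduction(d_150, 7.0))         # ~1.78 min for a 7-log reduction at 150 degrees F
        print(d_at_temperature(d_150, 150.0, 160.0, z))   # ~0.089 min, in line with the reported 0.086 min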

  9. VIDEO ANIMASI 3D PENGENALAN RUMAH ADAT DAN ALAT MUSIK KEPRI DENGAN MENGUNAKAN TEKNIK RENDER CEL-SHADING

    OpenAIRE

    Jianfranco Irfian Asnawi; Afdhol Dzikri

    2016-01-01

    This animation, entitled "3D animated video of the traditional houses and musical instruments of the Riau Islands using the cel-shading rendering technique", is a video intended to introduce the musical instruments originating from the Riau Islands; the animation is produced using the cel-shading rendering technique. Cel-shading is a rendering technique that displays 3D graphics resembling hand-drawn images, such as comics and cartoons. The technique has also been applied in 3D games, where it has attracted many ...

  10. Efficient algorithms of multidimensional γ-ray spectra compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    2006-01-01

    The efficient algorithms to compress multidimensional γ-ray events are presented. Two alternative kinds of compression algorithms based on both the adaptive orthogonal and randomizing transforms are proposed. In both algorithms we employ the reduction of data volume due to the symmetry of the γ-ray spectra

  11. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  12. A parallel coordinates style interface for exploratory volume visualization.

    Science.gov (United States)

    Tory, Melanie; Potts, Simeon; Möller, Torsten

    2005-01-01

    We present a user interface, based on parallel coordinates, that facilitates exploration of volume data. By explicitly representing the visualization parameter space, the interface provides an overview of rendering options and enables users to easily explore different parameters. Rendered images are stored in an integrated history bar that facilitates backtracking to previous visualization options. Initial usability testing showed clear agreement between users and experts of various backgrounds (usability, graphic design, volume visualization, and medical physics) that the proposed user interface is a valuable data exploration tool.

  13. Fast algorithms for transport models. Final report

    International Nuclear Information System (INIS)

    Manteuffel, T.A.

    1994-01-01

    This project has developed a multigrid in space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state-of-the-art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. Upon first glance, the shifted transport sweep has limited parallelism. Once the right-hand side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial-value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M))
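
    Once discretized, each of those uncoupled initial-value problems is, roughly speaking, a first-order recurrence x[i+1] = a[i]*x[i] + b[i]. The sketch below shows a parallel-prefix formulation of such a recurrence, written serially in Python; it is related in spirit to the cyclic reduction mentioned above (log-depth composition of affine maps), not the author's algorithm.

        def scan_recurrence(a, b, x0):
            """Solve x[i+1] = a[i]*x[i] + b[i] by composing affine maps with an
            inclusive prefix-scan pattern, giving logarithmic parallel depth."""
            pairs = list(zip(a, b))
            n = len(pairs)
            step = 1
            while step < n:
                new = list(pairs)
                for i in range(step, n):
                    a2, b2 = pairs[i]
                    a1, b1 = pairs[i - step]
                    new[i] = (a2 * a1, a2 * b1 + b2)   # compose the two affine maps
                pairs = new
                step *= 2
            # pairs[i] now maps x0 directly to x[i+1]
            return [x0] + [ai * x0 + bi for ai, bi in pairs]

        # Quick check against the plain sequential sweep.
        a = [0.9, 1.1, 0.8, 1.05]
        b = [0.1, -0.2, 0.05, 0.3]
        xs = scan_recurrence(a, b, x0=1.0)
        x = 1.0
        for ai, bi in zip(a, b):
            x = ai * x + bi
        assert abs(xs[-1] - x) < 1e-12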

  14. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  15. Design and implementation of three-dimension texture mapping algorithm for panoramic system based on smart platform

    Science.gov (United States)

    Liu, Zhi; Zhou, Baotong; Zhang, Changnian

    2017-03-01

    A vehicle-mounted panoramic system is important safety-assistance equipment for driving. However, traditional systems only render a fixed top-down perspective view with a limited field of view, which may pose a potential safety hazard. In this paper, a texture mapping algorithm for a 3D vehicle-mounted panoramic system is introduced, and an implementation of the algorithm utilizing the OpenGL ES library on the Android smart platform is presented. Initial experimental results show that the proposed algorithm can render a good 3D panorama and has the ability to change the viewpoint freely.

  16. Custom OpenStreetMap Rendering – OpenTrackMap Experience

    Directory of Open Access Journals (Sweden)

    Radek Bartoň

    2010-02-01

    Full Text Available After 5 years of existence, OpenStreetMap [1] is becoming an important and valuable source of geographic data for people all over the world. Although initially targeted at providing a map of cities for routing services, it can be exploited for other, often unexpected, purposes. One such use is the effort to map the network of hiking tracks of the Czech Tourist Club [2]. To support this endeavour, the OpenTrackMap [3] project was started. Its primary aim is to provide a customized rendering style for the Mapnik renderer which emphasizes map features important to tourists and displays a layer with hiking tracks. This article presents the obstacles such a project must face and can be used as a tutorial for other projects of a similar type.

  17. Rendering LGBTQ+ Visible in Nursing: Embodying the Philosophy of Caring Science.

    Science.gov (United States)

    Goldberg, Lisa; Rosenburg, Neal; Watson, Jean

    2017-06-01

    Although health care institutions continue to address the importance of diversity initiatives, the standard(s) for treatment remain historically and institutionally grounded in a sociocultural privileging of heterosexuality. As a result, lesbian, gay, bisexual, transgender, and queer (LGBTQ+) communities in health care remain largely invisible. This marked invisibility serves as a call to action, a renaissance of thinking within redefined boundaries and limitations. We must therefore refocus our habits of attention on the wholeness of persons and the diversity of their storied experiences as embodied through contemporary society. By rethinking current understandings of LGBTQ+ identities through innovative representation(s) of the media, music industry, and pop culture within a caring science philosophy, nurses have a transformative opportunity to render LGBTQ+ visible and in turn render a transformative opportunity for themselves.

  18. Moisture transport properties of brick – comparison of exposed, impregnated and rendered brick

    DEFF Research Database (Denmark)

    Hansen, Tessa Kvist; Bjarløv, Søren Peter; Peuhkuri, Ruut

    2016-01-01

    With regard to the internal insulation of preservation-worthy brick façades, external moisture sources, such as wind-driven rain, inevitably have an impact on moisture conditions within the masonry construction. Surface treatments, such as hydrophobation or render, may remedy the impacts...... of external moisture. In the present paper the surface absorption of liquid water on masonry façades of untreated, hydrophobated and rendered brick is determined experimentally and compared. The experimental work focuses on methods that can be applied on site: Karsten tube measurements. These measurements...... are supplemented with results from laboratory measurements of the water absorption coefficient by partial immersion. Based on the obtained measurement results, simulations are made with external liquid water loads to determine moisture conditions within the masonry for the different surface treatments. Experimental...

  19. Waqf as a Tool for Rendering Social Welfare Services in the Social Entrepreneurship Context

    Directory of Open Access Journals (Sweden)

    Md. Mahmudul Alam

    2018-01-01

    Full Text Available The concept of Islamic entrepreneurship centers on ensuring community well-being as the priority, which is one of the important objectives (Maqasid) of the Islamic Shari'ah. Historically, waqf played a significant role in the Islamic economic system, particularly in rendering exemplary welfare services in the areas of healthcare, education, social welfare, environmental protection, and other community-based programs. However, only a few success stories in recent history have institutionally utilized the properties of waqf under proper management to achieve its substantial objectives. This study uses a literature review as the basis for analyzing the reasons behind the successful utilization of waqf as an effective tool for ensuring social welfare services in the past, as well as how this model can be replicated in current contexts. This study will assist Islamic value-centric entrepreneurs, regulatory authorities, investors, and researchers to gain an overall insight into the potential of waqf as a tool for rendering commendable social welfare services.

  20. Apparatus for rendering at least a portion of a device inoperable and related methods

    Energy Technology Data Exchange (ETDEWEB)

    Daniels, Michael A.; Steffler, Eric D.; Hartenstein, Steven D.; Wallace, Ronald S.

    2016-11-08

    Apparatus for rendering at least a portion of a device inoperable may include a containment structure having a first compartment that is configured to receive a device therein and a movable member configured to receive a cartridge having reactant material therein. The movable member is configured to be inserted into the first compartment of the containment structure and to ignite the reactant material within the cartridge. Methods of rendering at least a portion of a device inoperable may include disposing the device into the first compartment of the containment structure, inserting the movable member into the first compartment of the containment structure, igniting the reactant material in the cartridge, and expelling molten metal onto the device.

  1. Subsurface Scattering-Based Object Rendering Techniques for Real-Time Smartphone Games

    Directory of Open Access Journals (Sweden)

    Won-Sun Lee

    2014-01-01

    Full Text Available Subsurface scattering, which simulates the path of light through material in a scene, is one of the advanced rendering techniques in the field of computer graphics. Since it requires a large number of time-consuming operations, it cannot easily be implemented in real-time smartphone games. In this paper, we propose a subsurface scattering-based object rendering technique that is optimized for smartphone games. We employ our subsurface scattering method in a real-time smartphone game, and an example game is designed to validate that the proposed method can operate seamlessly in real time. Finally, we show comparison results between a bidirectional reflectance distribution function, a bidirectional scattering distribution function, and our proposed subsurface scattering method in a smartphone game.

  2. A Semi-automated Approach to Improve the Efficiency of Medical Imaging Segmentation for Haptic Rendering.

    Science.gov (United States)

    Banerjee, Pat; Hu, Mengqi; Kannan, Rahul; Krishnaswamy, Srinivasan

    2017-08-01

    The Sensimmer platform represents our ongoing research on simultaneous haptics and graphics rendering of 3D models. For simulation of medical and surgical procedures using Sensimmer, 3D models must be obtained from medical imaging data, such as magnetic resonance imaging (MRI) or computed tomography (CT). Image segmentation techniques are used to determine the anatomies of interest from the images. 3D models are obtained from segmentation and their triangle reduction is required for graphics and haptics rendering. This paper focuses on creating 3D models by automating the segmentation of CT images based on the pixel contrast for integrating the interface between Sensimmer and medical imaging devices, using the volumetric approach, Hough transform method, and manual centering method. Hence, automating the process has reduced the segmentation time by 56.35% while maintaining the same accuracy of the output at ±2 voxels.

  3. A Practical Framework for Sharing and Rendering Real-World Bidirectional Scattering Distribution Functions

    Energy Technology Data Exchange (ETDEWEB)

    Ward, Greg [Anywhere Software, Albany, CA (United States); Kurt, Murat [International Computer Institute, Ege University (Turkey); Bonneel, Nicolas [Harvard Univ., Cambridge, MA (United States)

    2012-09-30

    The utilization of real-world materials has been hindered by a lack of standards for sharing and interpreting measured data. This paper presents an XML representation and an Open Source C library to support bidirectional scattering distribution functions (BSDFs) in data-driven lighting simulation and rendering applications. The library provides for the efficient representation, query, and Monte Carlo sampling of arbitrary BSDFs in a model-free framework. Currently, we support two BSDF data representations: one using a fixed subdivision of the hemisphere, and one with adaptive density. The fixed type has advantages for certain matrix operations, while the adaptive type can more accurately represent highly peaked data. We discuss advanced methods for data-driven BSDF rendering for both types, including the proxy of detailed geometry to enhance appearance and accuracy. We also present an advanced interpolation method to reduce measured data into these standard representations. We end with our plan for future extensions and sharing of BSDF data.

  4. LOD 1 VS. LOD 2 - Preliminary Investigations Into Differences in Mobile Rendering Performance

    Science.gov (United States)

    Ellul, C.; Altenbuchner, J.

    2013-09-01

    The increasing availability, size and detail of 3D City Model datasets has led to a challenge when rendering such data on mobile devices. Understanding the limitations to the usability of such models on these devices is particularly important given the broadening range of applications - such as pollution or noise modelling, tourism, planning, solar potential - for which these datasets and resulting visualisations can be utilized. Much 3D City Model data is created by extrusion of 2D topographic datasets, resulting in what is known as Level of Detail (LoD) 1 buildings - with flat roofs. However, in the UK the National Mapping Agency (the Ordnance Survey, OS) is now releasing test datasets to Level of Detail (LoD) 2 - i.e. including roof structures. These datasets are designed to integrate with the LoD 1 datasets provided by the OS, and provide additional detail in particular on larger buildings and in town centres. The availability of such integrated datasets at two different Levels of Detail permits investigation into the impact of the additional roof structures (and hence the display of a more realistic 3D City Model) on rendering performance on a mobile device. This paper describes preliminary work carried out to investigate this issue, for the test area of the city of Sheffield (in the UK Midlands). The data is stored in a 3D spatial database as triangles and then extracted and served as a web-based data stream which is queried by an App developed on the mobile device (using the Android environment, Java and OpenGL for graphics). Initial tests have been carried out on two dataset sizes, for the city centre and a larger area, rendering the data onto a tablet to compare results. Results of 52 seconds for rendering LoD 1 data, and 72 seconds for LoD 1 mixed with LoD 2 data, show that the impact of LoD 2 is significant.

  5. An image-based approach to the rendering of crowds in real-time

    OpenAIRE

    Tecchia, Franco

    2007-01-01

    The wide use of computer graphics in games, entertainment, medical, architectural and cultural applications, has led it to becoming a prevalent area of research. Games and entertainment in general have become one of the driving forces of the real-time computer graphics industry, bringing reasonably realistic, complex and appealing virtual worlds to the mass-market. At the current stage of technology, an user can interactively navigate through complex, polygon-based scenes rendered with sophis...

  6. Unconscious neural processing differs with method used to render stimuli invisible

    Directory of Open Access Journals (Sweden)

    Sergey Victor Fogelson

    2014-06-01

    Full Text Available Visual stimuli can be kept from awareness using various methods. The extent of processing that a given stimulus receives in the absence of awareness is typically used to make claims about the role of consciousness more generally. The neural processing elicited by a stimulus, however, may also depend on the method used to keep it from awareness, and not only on whether the stimulus reaches awareness. Here we report that the method used to render an image invisible has a dramatic effect on how category information about the unseen stimulus is encoded across the human brain. We collected fMRI data while subjects viewed images of faces and tools that were rendered invisible using either continuous flash suppression (CFS) or chromatic flicker fusion (CFF). In a third condition, we presented the same images under normal fully visible viewing conditions. We found that category information about visible images could be extracted from patterns of fMRI responses throughout areas of neocortex known to be involved in face or tool processing. However, category information about stimuli kept from awareness using CFS could be recovered exclusively within occipital cortex, whereas information about stimuli kept from awareness using CFF was also decodable within temporal and frontal regions. We conclude that unconsciously presented objects are processed differently depending on how they are rendered subjectively invisible. Caution should therefore be used in making generalizations on the basis of any one method about the neural basis of consciousness or the extent of information processing without consciousness.

  7. Unconscious neural processing differs with method used to render stimuli invisible.

    Science.gov (United States)

    Fogelson, Sergey V; Kohler, Peter J; Miller, Kevin J; Granger, Richard; Tse, Peter U

    2014-01-01

    Visual stimuli can be kept from awareness using various methods. The extent of processing that a given stimulus receives in the absence of awareness is typically used to make claims about the role of consciousness more generally. The neural processing elicited by a stimulus, however, may also depend on the method used to keep it from awareness, and not only on whether the stimulus reaches awareness. Here we report that the method used to render an image invisible has a dramatic effect on how category information about the unseen stimulus is encoded across the human brain. We collected fMRI data while subjects viewed images of faces and tools, that were rendered invisible using either continuous flash suppression (CFS) or chromatic flicker fusion (CFF). In a third condition, we presented the same images under normal fully visible viewing conditions. We found that category information about visible images could be extracted from patterns of fMRI responses throughout areas of neocortex known to be involved in face or tool processing. However, category information about stimuli kept from awareness using CFS could be recovered exclusively within occipital cortex, whereas information about stimuli kept from awareness using CFF was also decodable within temporal and frontal regions. We conclude that unconsciously presented objects are processed differently depending on how they are rendered subjectively invisible. Caution should therefore be used in making generalizations on the basis of any one method about the neural basis of consciousness or the extent of information processing without consciousness.

  8. One-Dimensional Haptic Rendering Using Audio Speaker with Displacement Determined by Inductance

    Directory of Open Access Journals (Sweden)

    Avin Khera

    2016-03-01

    Full Text Available We report overall design considerations and preliminary results for a new haptic rendering device based on an audio loudspeaker. Our application models tissue properties during microsurgery. For example, the device could respond to the tip of a tool by simulating a particular tissue, displaying a desired compressibility and viscosity, giving way as the tissue is disrupted, or exhibiting independent motion, such as that caused by pulsations in blood pressure. Although limited to one degree of freedom and with a relatively small range of displacement compared to other available haptic rendering devices, our design exhibits high bandwidth, low friction, low hysteresis, and low mass. These features are consistent with modeling interactions with delicate tissues during microsurgery. In addition, our haptic rendering device is designed to be simple and inexpensive to manufacture, in part through an innovative method of measuring displacement by existing variations in the speaker’s inductance as the voice coil moves over the permanent magnet. Low latency and jitter are achieved by running the real-time simulation models on a dedicated microprocessor, while maintaining bidirectional communication with a standard laptop computer for user controls and data logging.

  9. A Single Swede Midge (Diptera: Cecidomyiidae) Larva Can Render Cauliflower Unmarketable.

    Science.gov (United States)

    Stratton, Chase A; Hodgdon, Elisabeth A; Zuckerman, Samuel G; Shelton, Anthony M; Chen, Yolanda H

    2018-05-01

    Swede midge, Contarinia nasturtii Kieffer (Diptera: Cecidomyiidae), is an invasive pest causing significant damage on Brassica crops in the Northeastern United States and Eastern Canada. Heading brassicas, like cauliflower, appear to be particularly susceptible. Swede midge is difficult to control because larvae feed concealed inside meristematic tissues of the plant. In order to develop damage and marketability thresholds necessary for integrated pest management, it is important to determine how many larvae render plants unmarketable and whether the timing of infestation affects the severity of damage. We manipulated larval density (0, 1, 3, 5, 10, or 20) per plant and the timing of infestation (30, 55, and 80 d after seeding) on cauliflower in the lab and field to answer the following questions: 1) What is the swede midge damage threshold? 2) How many swede midge larvae can render cauliflower crowns unmarketable? and 3) Does the age of cauliflower at infestation influence the severity of damage and marketability? We found that even a single larva can cause mild twisting and scarring in the crown rendering cauliflower unmarketable 52% of the time, with more larvae causing more severe damage and additional losses, regardless of cauliflower age at infestation.

  10. Generalized-ensemble molecular dynamics and Monte Carlo algorithms beyond the limit of the multicanonical algorithm

    International Nuclear Information System (INIS)

    Okumura, Hisashi

    2010-01-01

    I review two new generalized-ensemble algorithms for molecular dynamics and Monte Carlo simulations of biomolecules, that is, the multibaric–multithermal algorithm and the partial multicanonical algorithm. In the multibaric–multithermal algorithm, two-dimensional random walks not only in the potential-energy space but also in the volume space are realized. One can discuss the temperature dependence and pressure dependence of biomolecules with this algorithm. The partial multicanonical simulation samples a wide range of only an important part of potential energy, so that one can concentrate the effort to determine a multicanonical weight factor only on the important energy terms. This algorithm has higher sampling efficiency than the multicanonical and canonical algorithms. (review)
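
    As a hedged toy sketch of the multicanonical idea underlying both algorithms, the snippet below runs a one-dimensional Metropolis walk in which the canonical Boltzmann factor is replaced by a weight factor W(E); the double-well potential and the fixed weight are assumptions of this sketch, and in practice W(E) is estimated iteratively rather than fixed in advance.

        import numpy as np

        rng = np.random.default_rng(0)

        def multicanonical_step(x, energy, weight, step=0.5):
            """One Metropolis update that uses a multicanonical weight W(E) in place of the
            canonical Boltzmann factor, so the walk covers a wide range of energies."""
            x_new = x + rng.uniform(-step, step)
            dW = weight(energy(x_new)) - weight(energy(x))
            if dW >= 0 or rng.random() < np.exp(dW):
                return x_new
            return x

        energy = lambda x: (x ** 2 - 1.0) ** 2   # toy double-well potential
        weight = lambda e: -0.5 * e              # stands in for -ln n(E); normally estimated iteratively
        x, samples = 0.0, []
        for _ in range(10000):
            x = multicanonical_step(x, energy, weight)
            samples.append(x)
        print(min(samples), max(samples))        # the walk crosses the barrier and visits both wells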

  11. The lime renderings from Plaza de la Corredera, Córdoba

    Directory of Open Access Journals (Sweden)

    González, T.

    2002-09-01

    Full Text Available The causes of the pathologies found in the lime renderings of the Plaza de la Corredera façades are analysed in this study. For this purpose, mineralogical and chemical analyses of the building materials - brickwork and rendering mortar - have been carried out, and their physical, hydric and mechanical properties have been determined. The results obtained from both unaltered and altered materials, together with the analysis of the rendering's raw materials, have allowed us to establish that the deterioration of the rendering is connected with the presence of saline compounds (gypsum, halite) which, present in the brickwork substratum, have migrated outwards as the brickwork became saturated with water. The precipitation of these salts (halite, hexahydrite, epsomite) on their way towards the external surface has been the main cause of the alteration forms - efflorescence, crusts, granular disintegration, bulging, flaking - found on the renderings.


  12. Modern algorithms for large sparse eigenvalue problems

    International Nuclear Information System (INIS)

    Meyer, A.

    1987-01-01

    The volume is written for mathematicians interested in (numerical) linear algebra and in the solution of large sparse eigenvalue problems, as well as for specialists in engineering who use the considered algorithms in the investigation of eigenoscillations of structures, in reactor physics, etc. Some variants of the algorithms based on the idea of a gradient-type direction of movement are presented and their convergence properties are discussed. From this, a general strategy for the direct use of preconditionings for the eigenvalue problem is derived. In this new approach the necessity of solving large linear systems is entirely avoided. Hence, these methods represent a new alternative to some other modern eigenvalue algorithms, as they show slightly slower convergence on the one hand but pose essentially fewer numerical and data-processing problems on the other. A brief description and comparison of some well-known methods (i.e. simultaneous iteration, the Lanczos algorithm) completes this volume. (author)

  13. Predicting the long-term durability of hemp-lime renders in inland and coastal areas using Mediterranean, Tropical and Semi-arid climatic simulations.

    Science.gov (United States)

    Arizzi, Anna; Viles, Heather; Martín-Sanchez, Inés; Cultrone, Giuseppe

    2016-01-15

    Hemp-based composites are eco-friendly building materials as they improve energy efficiency in buildings and entail low waste production and pollutant emissions during their manufacturing process. Nevertheless, the organic nature of hemp enhances the bio-receptivity of the material, with likely negative consequences for its long-term performance in the building. The main purpose of this study was to investigate the response at macro- and micro-scale of hemp-lime renders subjected to weathering simulations in an environmental cabinet (one year was condensed into twelve days), so as to predict their long-term durability in coastal and inland areas with Mediterranean, Tropical and Semi-arid climates, also in relation to the lime type used. The simulated climatic conditions caused almost unnoticeable mass, volume and colour changes in hemp-lime renders. No efflorescence or physical breakdown was detected in samples subjected to NaCl, because the salt mainly precipitates on the surface of the samples and is washed away by the rain. Although there was no visible microbial colonisation, alkaliphilic fungi (mainly Penicillium and Aspergillus) and bacteria (mainly Bacillus and Micrococcus) were isolated in all samples. Microbial growth and diversification were higher under the Tropical climate, due to heavier rainfall. The influence of bacterial activity on the hardening of the samples has also been discussed here and related to the formation and stabilisation of vaterite in hemp-lime mixes. This study has demonstrated that hemp-lime renders show good durability under a wide range of environmental conditions and factors. However, it might be useful to take some specific preventive and maintenance measures to reduce the bio-receptivity of this material, thus ensuring longer durability on site.

  14. A Hierarchical Volumetric Shadow Algorithm for Single Scattering

    OpenAIRE

    Baran, Ilya; Chen, Jiawen; Ragan-Kelley, Jonathan Millar; Durand, Fredo; Lehtinen, Jaakko

    2010-01-01

    Volumetric effects such as beams of light through participating media are an important component in the appearance of the natural world. Many such effects can be faithfully modeled by a single scattering medium. In the presence of shadows, rendering these effects can be prohibitively expensive: current algorithms are based on ray marching, i.e., integrating the illumination scattered towards the camera along each view ray, modulated by visibility to the light source at each sample. Visibility...
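
    For readers unfamiliar with the baseline the abstract refers to, the sketch below implements brute-force single-scattering ray marching for one view ray (a homogeneous medium, an isotropic phase function, a unit-intensity point light and a user-supplied shadow query are all assumptions of this sketch); this is the slow reference computation, not the paper's hierarchical acceleration.

        import numpy as np

        def single_scatter_ray_march(ray_origin, ray_dir, light_pos, visibility,
                                     sigma_t=0.1, sigma_s=0.05, t_max=50.0, steps=256):
            """Accumulate in-scattered light at each sample along the view ray, attenuated by
            transmittance toward the camera and toward the light, and gated by a shadow query."""
            dt = t_max / steps
            radiance = 0.0
            transmittance = 1.0
            for i in range(steps):
                p = ray_origin + (i + 0.5) * dt * ray_dir
                d_light = np.linalg.norm(light_pos - p)
                light_transmittance = np.exp(-sigma_t * d_light)    # homogeneous medium
                if visibility(p, light_pos):                        # shadow-ray test (user supplied)
                    phase = 1.0 / (4.0 * np.pi)                     # isotropic phase function
                    radiance += (transmittance * sigma_s * phase *
                                 light_transmittance / (d_light ** 2) * dt)
                transmittance *= np.exp(-sigma_t * dt)
            return radiance

        # Example: an unoccluded medium (the shadow query always reports visible).
        L = single_scatter_ray_march(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                                     np.array([0.0, 5.0, 10.0]), lambda p, l: True)
        print(L)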

  15. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  16. Algorithms, architectures and information systems security

    CERN Document Server

    Sur-Kolay, Susmita; Nandy, Subhas C; Bagchi, Aditya

    2008-01-01

    This volume contains articles written by leading researchers in the fields of algorithms, architectures, and information systems security. The first five chapters address several challenging geometric problems and related algorithms. These topics have major applications in pattern recognition, image analysis, digital geometry, surface reconstruction, computer vision and in robotics. The next five chapters focus on various optimization issues in VLSI design and test architectures, and in wireless networks. The last six chapters comprise scholarly articles on information systems security coverin

  17. The algorithms and principles of non-photorealistic graphics

    CERN Document Server

    Geng, Weidong

    2011-01-01

    "The Algorithms and Principles of Non-photorealistic Graphics: Artistic Rendering and Cartoon Animation" provides a conceptual framework for and comprehensive and up-to-date coverage of research on non-photorealistic computer graphics including methodologies, algorithms and software tools dedicated to generating artistic and meaningful images and animations. This book mainly discusses how to create art from a blank canvas, how to convert the source images into pictures with the desired visual effects, how to generate artistic renditions from 3D models, how to synthesize expressive pictures f

  18. VIDEO ANIMASI 3D PENGENALAN RUMAH ADAT DAN ALAT MUSIK KEPRI DENGAN MENGUNAKAN TEKNIK RENDER CEL-SHADING

    Directory of Open Access Journals (Sweden)

    Jianfranco Irfian Asnawi

    2016-11-01

    This animation, titled "3D animation video of the traditional houses and musical instruments of the Riau Islands using the cel-shading rendering technique", is a video intended to introduce the musical instruments that originate from the Riau Islands, and it is produced using the cel-shading rendering technique. Cel-shading is a rendering technique that makes 3D graphics look hand-drawn, like comic and cartoon images. The technique has also been applied in 3D games, where it has attracted considerable interest. Here it is applied to the 3D animation "animation video of the traditional houses and musical instruments of the Riau Islands using the cel-shading rendering technique". The animation was designed from a scenario and storyboard and then implemented in the Autodesk Maya 3D software using the cel-shading rendering technique. After implementation, the success of cel-shading was assessed against global-illumination rendering in terms of rendering speed and the colour brightness of the video. Keywords: animation, 3D games, cel-shading.

  19. Parallel algorithms for numerical linear algebra

    CERN Document Server

    van der Vorst, H

    1990-01-01

    This is the first in a new series of books presenting research results and developments concerning the theory and applications of parallel computers, including vector, pipeline, array, fifth/future generation computers, and neural computers.All aspects of high-speed computing fall within the scope of the series, e.g. algorithm design, applications, software engineering, networking, taxonomy, models and architectural trends, performance, peripheral devices.Papers in Volume One cover the main streams of parallel linear algebra: systolic array algorithms, message-passing systems, algorithms for p

  20. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion.

    Science.gov (United States)

    Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao; Lu, Jun-Ying; Zeng, Yan-Hong; Meng, Fan-Jie; Cao, Bin; Zi, Xue-Rong; Han, Shu-Ming; Zhang, Yu-Huan

    2013-09-01

    Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, the precise volume of pleural effusions still involves many challenges and currently does not have a recognized, accurate measurement method. The aim was to explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the pleural effusion. The 64-slice CT volume-rendering technique was used for measurement and analysis in three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume. No significant differences were found, P = 0.836. For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences, P = 0.989. The following linear regression equation was used to compare the pleural effusion volume (V) (measured by the CT volume-rendering technique) with the pleural effusion greatest depth (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression was used to compare the volume with the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0.000). The 64-slice CT volume-rendering technique can
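
    The two regression relations quoted above translate directly into a small helper for rough volume estimates. The coefficients are copied verbatim from the abstract; the function names and the assumption that the diameters are entered in centimetres and the volume returned in millilitres are mine, since the abstract does not state the units.

        def effusion_volume_from_depth(d):
            """V from the greatest depth d of the effusion: V = 158.16*d - 116.01."""
            return 158.16 * d - 116.01

        def effusion_volume_from_diameters(l, h, d):
            """V from the product of the three effusion diameters: V = 0.56*(l*h*d) + 39.44."""
            return 0.56 * (l * h * d) + 39.44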

  1. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion

    International Nuclear Information System (INIS)

    Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao

    2013-01-01

    Background: Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, the precise volume of pleural effusions still involves many challenges and currently does not have a recognized, accurate measurement method. Purpose: To explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the pleural effusion. Material and Methods: The 64-slice CT volume-rendering technique was used for measurement and analysis in three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. Results: After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume. No significant differences were found, P = 0.836. For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences, P = 0.989. The following linear regression equation was used to compare the pleural effusion volume (V) (measured by the CT volume-rendering technique) with the pleural effusion greatest depth (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression was used to compare the volume with the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0

  2. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Zhi-Jun [Dept. of Radiology, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China)], e-mail: Gzj3@163.com; Lin, Qiang [Dept. of Oncology, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China)]; Liu, Hai-Tao [Dept. of General Surgery, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China)] [and others]

    2013-09-15

    Background: Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, the precise volume of pleural effusions still involves many challenges and currently does not have a recognized, accurate measurement method. Purpose: To explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the pleural effusion. Material and Methods: The 64-slice CT volume-rendering technique was used for measurement and analysis in three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. Results: After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume. No significant differences were found, P = 0.836. For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences, P = 0.989. The following linear regression equation was used to compare the pleural effusion volume (V) (measured by the CT volume-rendering technique) with the pleural effusion greatest depth (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression was used to compare the volume with the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0

  3. A Case Study of a Hybrid Parallel 3D Surface Rendering Graphics Architecture

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik; Madsen, Jan; Pedersen, Steen

    1997-01-01

    This paper presents a case study in the design strategy used in building a graphics computer for drawing very complex 3D geometric surfaces. The goal is to build a PC-based computer system capable of handling surfaces built from about 2 million triangles, and to be able to render a perspective view...... of these on a computer display at interactive frame rates, i.e. processing around 50 million triangles per second. The paper presents a hardware/software architecture called HPGA (Hybrid Parallel Graphics Architecture) which is likely to be able to carry out this task. The case study focuses on techniques to increase...

  4. Three-dimensional rendering of otolith growth using phase contrast synchrotron tomography.

    Science.gov (United States)

    Mapp, J J I; Fisher, M H; Atwood, R C; Bell, G D; Greco, M K; Songer, S; Hunter, E

    2016-05-01

    A three-dimensional computer reconstruction of a plaice Pleuronectes platessa otolith is presented from data acquired at the Diamond Light Source synchrotron (beamline I12), a high-energy (53-150 keV) X-ray source particularly well suited to the study of dense objects. The data allowed non-destructive rendering of otolith structure and, for the first time, allows otolith annuli (internal ring structures) to be analysed in X-ray tomographic images. © 2016 The Fisheries Society of the British Isles.

  5. Experimental and rendering-based investigation of laser radar cross sections of small unmanned aerial vehicles

    Science.gov (United States)

    Laurenzis, Martin; Bacher, Emmanuel; Christnacher, Frank

    2017-12-01

    Laser imaging systems are prominent candidates for detection and tracking of small unmanned aerial vehicles (UAVs) in current and future security scenarios. Laser reflection characteristics for laser imaging (e.g., laser gated viewing) of small UAVs are investigated to determine their laser radar cross section (LRCS) by analyzing the intensity distribution of laser reflection in high-resolution images. For the first time, LRCSs are determined in a combined experimental and computational approach by high-resolution laser gated viewing and three-dimensional rendering. An optimized simple surface model is calculated taking into account diffuse and specular reflectance properties, based on the Oren-Nayar and Cook-Torrance reflectance models, respectively.
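
    For orientation, the qualitative Oren-Nayar diffuse term named above can be evaluated as in the generic sketch below (only the Oren-Nayar part is shown, not the Cook-Torrance specular lobe). This is a textbook formulation, not the calibrated surface model fitted in the paper; the roughness sigma and albedo rho are free parameters chosen purely for illustration.

        import numpy as np

        def oren_nayar(n, l, v, sigma=0.3, rho=1.0):
            """Qualitative Oren-Nayar diffuse reflectance for unit vectors n (normal), l (light), v (view)."""
            n, l, v = (np.asarray(x, float) for x in (n, l, v))
            cos_i = float(np.clip(np.dot(n, l), 0.0, 1.0))
            cos_r = float(np.clip(np.dot(n, v), 0.0, 1.0))
            if cos_i <= 0.0 or cos_r <= 0.0:
                return 0.0
            theta_i, theta_r = np.arccos(cos_i), np.arccos(cos_r)
            # azimuthal difference: project l and v onto the tangent plane of n
            lp, vp = l - cos_i * n, v - cos_r * n
            norm = np.linalg.norm(lp) * np.linalg.norm(vp)
            cos_phi = float(np.dot(lp, vp) / norm) if norm > 1e-9 else 0.0
            s2 = sigma * sigma
            A = 1.0 - 0.5 * s2 / (s2 + 0.33)
            B = 0.45 * s2 / (s2 + 0.09)
            alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)
            return (rho / np.pi) * cos_i * (A + B * max(0.0, cos_phi) * np.sin(alpha) * np.tan(beta))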

  6. Peeling tests for assessing the cohesion and consolidation characteristics of mortar and render surfaces

    Czech Academy of Sciences Publication Activity Database

    Drdácký, Miloš; Lesák, Jaroslav; Niedoba, Krzysztof; Valach, Jaroslav

    2015-01-01

    Roč. 48, č. 6 (2015), s. 1947-1963 ISSN 1359-5997 R&D Projects: GA ČR(CZ) GBP105/12/G059; GA MŠk(CZ) ED1.1.00/02.0060 Institutional support: RVO:68378297 Keywords : peeling test * rendered surface * surface consolidation * cohesion * non-destructive testing Subject RIV: AL - Art, Architecture, Cultural Heritage Impact factor: 2.453, year: 2015 http://link.springer.com/article/10.1617/s11527-014-0285-8

  7. Semiconductive 3-D haloplumbate framework hybrids with high color rendering index white-light emission.

    Science.gov (United States)

    Wang, Guan-E; Xu, Gang; Wang, Ming-Sheng; Cai, Li-Zhen; Li, Wen-Hua; Guo, Guo-Cong

    2015-12-01

    Single-component white light materials may create great opportunities for novel conventional lighting applications and display systems; however, their reported color rendering index (CRI) values, one of the key parameters for lighting, are less than 90, which does not satisfy the demand of color-critical upmarket applications, such as photography, cinematography, and art galleries. In this work, two semiconductive chloroplumbate (chloride anion of lead(ii)) hybrids, obtained using a new inorganic-organic hybrid strategy, show unprecedented 3-D inorganic framework structures and white-light-emitting properties with high CRI values around 90, one of which shows the highest value to date.

  8. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    OpenAIRE

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic illumination pattern, i.e., the light pattern of a single LED, and the setting of LED illumination levels, respectively. Thereafter, we propose to use the relative mean squared error as the cost function ...
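
    The cost function mentioned at the end of the abstract, the relative mean squared error between the rendered illuminance and the desired uniform level, is easy to state concretely. In the sketch below the single-LED footprint is a Gaussian placeholder for the paper's basic illumination pattern, and the array geometry and dimming levels are arbitrary assumptions.

        import numpy as np

        def render_pattern(dim_levels, centers, grid, footprint_sigma=0.6):
            """Superpose identical single-LED footprints (here: Gaussians) scaled by their dimming levels."""
            X, Y = grid
            E = np.zeros_like(X)
            for d, (cx, cy) in zip(dim_levels, centers):
                E += d * np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * footprint_sigma ** 2))
            return E

        def relative_mse(E, target):
            """Relative mean squared error of the rendered illuminance w.r.t. a uniform target level."""
            return float(np.mean((E - target) ** 2) / target ** 2)

        # 4 x 4 regular LED array over a 2 m x 2 m ceiling patch, all LEDs at half power
        xs = np.linspace(0.25, 1.75, 4)
        centers = [(x, y) for x in xs for y in xs]
        grid = np.meshgrid(np.linspace(0, 2, 81), np.linspace(0, 2, 81))
        E = render_pattern([0.5] * 16, centers, grid)
        print(relative_mse(E, target=E.mean()))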

  9. Predicting the long-term durability of hemp–lime renders in inland and coastal areas using Mediterranean, Tropical and Semi-arid climatic simulations

    International Nuclear Information System (INIS)

    Arizzi, Anna; Viles, Heather; Martín-Sanchez, Inés; Cultrone, Giuseppe

    2016-01-01

    Hemp-based composites are eco-friendly building materials as they improve energy efficiency in buildings and entail low waste production and pollutant emissions during their manufacturing process. Nevertheless, the organic nature of hemp enhances the bio-receptivity of the material, with likely negative consequences for its long-term performance in the building. The main purpose of this study was to investigate the response at macro- and micro-scale of hemp–lime renders subjected to weathering simulations in an environmental cabinet (one year was condensed into twelve days), so as to predict their long-term durability in coastal and inland areas with Mediterranean, Tropical and Semi-arid climates, also in relation to the lime type used. The simulated climatic conditions caused almost unnoticeable mass, volume and colour changes in hemp–lime renders. No efflorescence or physical breakdown was detected in samples subjected to NaCl, because the salt mainly precipitates on the surface of samples and is washed away by the rain. Although there was no visible microbial colonisation, alkaliphilic fungi (mainly Penicillium and Aspergillus) and bacteria (mainly Bacillus and Micrococcus) were isolated in all samples. Microbial growth and diversification were higher under the Tropical climate, due to heavier rainfall. The influence of the bacterial activity on the hardening of samples has also been discussed here and related to the formation and stabilisation of vaterite in hemp–lime mixes. This study has demonstrated that hemp–lime renders show good durability towards a wide range of environmental conditions and factors. However, it might be useful to take some specific preventive and maintenance measures to reduce the bio-receptivity of this material, thus ensuring a longer durability on site. - Highlights: • Realistic simulations in the cabinet of one-year exposure to environmental conditions • Influence of the lime type on the durability of hemp–lime renders

  10. Predicting the long-term durability of hemp–lime renders in inland and coastal areas using Mediterranean, Tropical and Semi-arid climatic simulations

    Energy Technology Data Exchange (ETDEWEB)

    Arizzi, Anna, E-mail: anna.arizzi@ouce.ox.ac.uk [School of Geography and the Environment, University of Oxford, Dyson Perrins Building, South Parks Road, Oxford OX1 3QY (United Kingdom); Viles, Heather [School of Geography and the Environment, University of Oxford, Dyson Perrins Building, South Parks Road, Oxford OX1 3QY (United Kingdom); Martín-Sanchez, Inés [Departamento de Microbiología, Universidad de Granada, Avda. Fuentenueva s/n, 18002 Granada (Spain); Cultrone, Giuseppe [Departamento de Mineralogía y Petrología, Universidad de Granada, Avda. Fuentenueva s/n, 18002 Granada (Spain)

    2016-01-15

    Hemp-based composites are eco-friendly building materials as they improve energy efficiency in buildings and entail low waste production and pollutant emissions during their manufacturing process. Nevertheless, the organic nature of hemp enhances the bio-receptivity of the material, with likely negative consequences for its long-term performance in the building. The main purpose of this study was to investigate the response at macro- and micro-scale of hemp–lime renders subjected to weathering simulations in an environmental cabinet (one year was condensed into twelve days), so as to predict their long-term durability in coastal and inland areas with Mediterranean, Tropical and Semi-arid climates, also in relation to the lime type used. The simulated climatic conditions caused almost unnoticeable mass, volume and colour changes in hemp–lime renders. No efflorescence or physical breakdown was detected in samples subjected to NaCl, because the salt mainly precipitates on the surface of samples and is washed away by the rain. Although there was no visible microbial colonisation, alkaliphilic fungi (mainly Penicillium and Aspergillus) and bacteria (mainly Bacillus and Micrococcus) were isolated in all samples. Microbial growth and diversification were higher under the Tropical climate, due to heavier rainfall. The influence of the bacterial activity on the hardening of samples has also been discussed here and related to the formation and stabilisation of vaterite in hemp–lime mixes. This study has demonstrated that hemp–lime renders show good durability towards a wide range of environmental conditions and factors. However, it might be useful to take some specific preventive and maintenance measures to reduce the bio-receptivity of this material, thus ensuring a longer durability on site. - Highlights: • Realistic simulations in the cabinet of one-year exposure to environmental conditions • Influence of the lime type on the durability of hemp–lime renders

  11. The theory of hybrid stochastic algorithms

    International Nuclear Information System (INIS)

    Kennedy, A.D.

    1989-01-01

    These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs
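
    The leapfrog integration step at the heart of the Hybrid (Monte Carlo) algorithm discussed in these lectures has the familiar kick-drift-kick form. The sketch below assumes a separable Hamiltonian H(p, q) = p·p/2 + S(q) with a user-supplied action gradient grad_S, and omits the Metropolis accept/reject step that completes Hybrid Monte Carlo.

        import numpy as np

        def leapfrog(q, p, grad_S, eps, n_steps):
            """Leapfrog trajectory for H(p, q) = p.p/2 + S(q); returns the proposed (q, p)."""
            q, p = np.array(q, float), np.array(p, float)
            p -= 0.5 * eps * grad_S(q)            # initial half kick
            for _ in range(n_steps - 1):
                q += eps * p                      # drift
                p -= eps * grad_S(q)              # full kick
            q += eps * p
            p -= 0.5 * eps * grad_S(q)            # final half kick
            return q, p

        # free-field-like toy example: S(q) = 0.5 * q.q, so grad_S(q) = q
        q1, p1 = leapfrog(q=[1.0], p=[0.0], grad_S=lambda q: q, eps=0.1, n_steps=100)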

  12. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  13. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  14. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  15. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...

  16. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  17. Application of Rectangle-Partition-Based Partial Rendering in Wireless Image Communication

    Institute of Scientific and Technical Information of China (English)

    刘德胜

    2012-01-01

    To improve image rendering performance and reduce the amount of transmitted data in wireless image communication, an optimisation algorithm is proposed that introduces rectangle-partition-based partial rendering into wireless image communication, shrinking the region that has to be re-rendered and the data that have to be transmitted for each frame, with the aim of saving CPU resources while lowering power consumption and bandwidth dependence. Experiments show that when the image changes relatively little between frames, the algorithm can improve rendering performance by up to a factor of two, and that the amount of transmitted data is roughly proportional to the number of individual objects that need to be re-rendered. The results demonstrate that the algorithm has a specific range of applicability: when the image is relatively stable, it improves computational performance by more than 30% on average and reduces data transmission by more than 50%.
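
    A generic reconstruction of the dirty-rectangle idea behind this scheme, finding the bounding rectangle of the pixels that changed between two frames so that only that region needs to be re-rendered and transmitted, is sketched below; it is not the author's implementation.

        import numpy as np

        def dirty_rect(prev_frame, next_frame):
            """Bounding rectangle (x0, y0, x1, y1) of changed pixels, or None if the frames are identical."""
            if prev_frame.ndim == 3:                      # colour image: any channel changed
                diff = np.any(prev_frame != next_frame, axis=-1)
            else:                                         # greyscale image
                diff = prev_frame != next_frame
            ys, xs = np.nonzero(diff)
            if len(xs) == 0:
                return None
            return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

        def changed_region(prev_frame, next_frame):
            """Return the rectangle and sub-image that actually need to be re-rendered/transmitted."""
            r = dirty_rect(prev_frame, next_frame)
            if r is None:
                return None, None
            x0, y0, x1, y1 = r
            return r, next_frame[y0:y1, x0:x1]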

  18. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  19. Facilitating the design of multidimensional and local transfer functions for volume visualization

    NARCIS (Netherlands)

    Sereda, P.

    2007-01-01

    The importance of volume visualization is increasing since the sizes of the datasets that need to be inspected grow with every new version of medical scanners (e.g., CT and MR). Direct volume rendering is a 3D visualization technique that has, in many cases, clear benefits over 2D views. It is able

  20. Evaluating Approaches to Rendering Braille Text on a High-Density Pin Display.

    Science.gov (United States)

    Morash, Valerie S; Russomanno, Alexander; Gillespie, R Brent; OModhrain, Sile

    2017-10-13

    Refreshable displays for tactile graphics are typically composed of pins that have smaller diameters and spacing than standard braille dots. We investigated configurations of high-density pins to form braille text on such displays using non-refreshable stimuli produced with a 3D printer. Normal dot braille (diameter 1.5 mm) was compared to high-density dot braille (diameter 0.75 mm) wherein each normal dot was rendered by high-density simulated pins alone or in a cluster of pins configured in a diamond, X, or square; and to "blobs" that could result from covering normal braille and high-density multi-pin configurations with a thin membrane. Twelve blind participants read MNREAD sentences displayed in these conditions. For high-density simulated pins, single pins were as quickly and easily read as normal braille, but diamond, X, and square multi-pin configurations were slower and/or harder to read than normal braille. We therefore conclude that as long as center-to-center dot spacing and dot placement is maintained, the dot diameter may be open to variability for rendering braille on a high density tactile display.

  1. Mesophilic and thermophilic anaerobic co-digestion of rendering plant and slaughterhouse wastes.

    Science.gov (United States)

    Bayr, Suvi; Rantanen, Marianne; Kaparaju, Prasad; Rintala, Jukka

    2012-01-01

    Co-digestion of rendering and slaughterhouse wastes was studied in laboratory-scale, semi-continuously fed continuously stirred tank reactors (CSTRs) at 35 and 55 °C. All in all, 10 different rendering plant and slaughterhouse waste fractions were characterised, showing high contents of lipids and proteins and methane potentials of 262-572 dm³ CH₄/kg volatile solids (VS) added. In the mesophilic CSTR, methane yields of ca. 720 dm³ CH₄/kg VS fed were obtained with organic loading rates (OLR) of 1.0 and 1.5 kg VS/m³ d and a hydraulic retention time (HRT) of 50 d. The thermophilic process, at the lowest studied OLR of 1.5 kg VS/m³ d, turned out to be unstable after operation of 1.5 HRT, due to accumulating ammonia, volatile fatty acids (VFAs) and probably also long chain fatty acids (LCFAs). In conclusion, the mesophilic process was found to be more feasible for co-digestion than the thermophilic process, methane yields being higher and the process more stable under mesophilic conditions. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. INCREASING SAVING BEHAVIOR THROUGH AGE-PROGRESSED RENDERINGS OF THE FUTURE SELF

    Science.gov (United States)

    HERSHFIELD, HAL E.; GOLDSTEIN, DANIEL G.; SHARPE, WILLIAM F.; FOX, JESSE; YEYKELIS, LEO; CARSTENSEN, LAURA L.; BAILENSON, JEREMY N.

    2014-01-01

    Many people fail to save what they need to for retirement (Munnell, Webb, and Golub-Sass 2009). Research on excessive discounting of the future suggests that removing the lure of immediate rewards by pre-committing to decisions, or elaborating the value of future rewards can both make decisions more future-oriented. In this article, we explore a third and complementary route, one that deals not with present and future rewards, but with present and future selves. In line with thinkers who have suggested that people may fail, through a lack of belief or imagination, to identify with their future selves (Parfit 1971; Schelling 1984), we propose that allowing people to interact with age-progressed renderings of themselves will cause them to allocate more resources toward the future. In four studies, participants interacted with realistic computer renderings of their future selves using immersive virtual reality hardware and interactive decision aids. In all cases, those who interacted with virtual future selves exhibited an increased tendency to accept later monetary rewards over immediate ones. PMID:24634544

  3. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    CERN Document Server

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...
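
    At the core of DIBR is a 3D warp that converts per-pixel depth into a disparity for the virtual viewpoint. The sketch below uses the simplified rectified-camera relation disparity = focal_px * baseline / depth and a naive forward warp with no hole filling (hole filling and occlusion handling are among the challenges the book addresses), so it only illustrates the principle.

        import numpy as np

        def dibr_forward_warp(color, depth, focal_px, baseline_m):
            """Warp a texture+depth image to a horizontally shifted virtual view.

            color: (H, W, 3) array, depth: (H, W) metric depth in metres.
            Occlusions are resolved by keeping the nearest sample; holes stay black.
            """
            h, w = depth.shape
            out = np.zeros_like(color)
            zbuf = np.full((h, w), np.inf)
            disparity = focal_px * baseline_m / np.maximum(depth, 1e-6)   # in pixels
            for y in range(h):
                for x in range(w):
                    xv = int(round(x - disparity[y, x]))                  # shift into the virtual view
                    if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                        zbuf[y, xv] = depth[y, x]
                        out[y, xv] = color[y, x]
            return out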

  4. INCREASING SAVING BEHAVIOR THROUGH AGE-PROGRESSED RENDERINGS OF THE FUTURE SELF.

    Science.gov (United States)

    Hershfield, Hal E; Goldstein, Daniel G; Sharpe, William F; Fox, Jesse; Yeykelis, Leo; Carstensen, Laura L; Bailenson, Jeremy N

    2011-11-01

    Many people fail to save what they need to for retirement (Munnell, Webb, and Golub-Sass 2009). Research on excessive discounting of the future suggests that removing the lure of immediate rewards by pre-committing to decisions, or elaborating the value of future rewards can both make decisions more future-oriented. In this article, we explore a third and complementary route, one that deals not with present and future rewards, but with present and future selves. In line with thinkers who have suggested that people may fail, through a lack of belief or imagination, to identify with their future selves (Parfit 1971; Schelling 1984), we propose that allowing people to interact with age-progressed renderings of themselves will cause them to allocate more resources toward the future. In four studies, participants interacted with realistic computer renderings of their future selves using immersive virtual reality hardware and interactive decision aids. In all cases, those who interacted with virtual future selves exhibited an increased tendency to accept later monetary rewards over immediate ones.

  5. Radionuclide cisternography: SPECT and 3D-rendering. Radionuklidzisternographie: SPECT- und 3D-Technik

    Energy Technology Data Exchange (ETDEWEB)

    Henkes, H; Huber, G; Piepgras, U [Universitaet des Saarlandes, Homburg/Saar (Germany, F.R.). Abt. fuer Neuroradiologie; Hierholzer, J [Freie Univ. Berlin (Germany, F.R.). Strahlenklinik und Poliklinik; Cordes, M [British Columbia Univ., Vancouver, BC (Canada). Belzberg Lab. of Neuroscience

    1991-10-01

    Radionuclide cisternography is indicated in the clinical work-up for hydrocephalus, when searching for CSF leaks, and when testing whether or not intracranial cystic lesions are communicating with the adjacent subarachnoid space. This paper demonstrates the feasibility and diagnostic value of SPECT and subsequent 3D surface rendering in addition to conventional rectilinear CSF imaging in eight patients. Planar images allowed the evaluation of CSF circulation and the detection of CSF fistula. They were advantageous in examinations 48 h after application of {sup 111}In-DTPA. SPECT scans, generated 4-24 h after tracer application, were superior in the delineation of basal cisterns, especially in early scans; this was helpful in patients with pooling due to CSF fistula and in cystic lesions near the skull base. A major drawback was the limited image quality of delayed scans, when the SPECT data were degraded by a low count rate. 3D surface rendering was easily feasible from SPECT data and yielded high quality images. The presentation of the spatial distribution of nuclide-contaminated CSF proved especially helpful in the area of the basal cisterns. (orig.).

  6. Ethylene signaling renders the jasmonate response of Arabidopsis insensitive to future suppression by salicylic Acid.

    Science.gov (United States)

    Leon-Reyes, Antonio; Du, Yujuan; Koornneef, Annemart; Proietti, Silvia; Körbes, Ana P; Memelink, Johan; Pieterse, Corné M J; Ritsema, Tita

    2010-02-01

    Cross-talk between jasmonate (JA), ethylene (ET), and Salicylic acid (SA) signaling is thought to operate as a mechanism to fine-tune induced defenses that are activated in response to multiple attackers. Here, 43 Arabidopsis genotypes impaired in hormone signaling or defense-related processes were screened for their ability to express SA-mediated suppression of JA-responsive gene expression. Mutant cev1, which displays constitutive expression of JA and ET responses, appeared to be insensitive to SA-mediated suppression of the JA-responsive marker genes PDF1.2 and VSP2. Accordingly, strong activation of JA and ET responses by the necrotrophic pathogens Botrytis cinerea and Alternaria brassicicola prior to SA treatment counteracted the ability of SA to suppress the JA response. Pharmacological assays, mutant analysis, and studies with the ET-signaling inhibitor 1-methylcyclopropene revealed that ET signaling renders the JA response insensitive to subsequent suppression by SA. The APETALA2/ETHYLENE RESPONSE FACTOR transcription factor ORA59, which regulates JA/ET-responsive genes such as PDF1.2, emerged as a potential mediator in this process. Collectively, our results point to a model in which simultaneous induction of the JA and ET pathway renders the plant insensitive to future SA-mediated suppression of JA-dependent defenses, which may prioritize the JA/ET pathway over the SA pathway during multi-attacker interactions.

  7. Complex adaptation-based LDR image rendering for 3D image reconstruction

    Science.gov (United States)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  8. Sophisticated visualization algorithms for analysis of multidimensional experimental nuclear spectra

    International Nuclear Information System (INIS)

    Morhac, M.; Kliman, J.; Matousek, V.; Turzo, I.

    2004-01-01

    This paper describes graphical models for the visualization of 2-, 3- and 4-dimensional scalar data used in the nuclear data acquisition, processing and visualization system developed at the Institute of Physics, Slovak Academy of Sciences. It focuses on the presentation of nuclear spectra (histograms); however, it can also be successfully applied to the visualization of arrays of other data types. In the paper we present conventional as well as newly developed surface and volume rendering visualization techniques used (Authors)

  9. Method and apparatus for imaging volume data

    International Nuclear Information System (INIS)

    Drebin, R.; Carpenter, L.C.

    1987-01-01

    An imaging system projects a two dimensional representation of three dimensional volumes where surface boundaries and objects internal to the volumes are readily shown, and hidden surfaces and the surface boundaries themselves are accurately rendered by determining volume elements or voxels. An image volume representing a volume object or data structure is written into memory. A color and opacity is assigned to each voxel within the volume and stored as a red (R), green (G), blue (B), and opacity (A) component, three dimensional data volume. The RGBA assignment for each voxel is determined based on the percentage component composition of the materials represented in the volume, and thus, the percentage of color and transparency associated with those materials. The voxels in the RGBA volume are used as mathematical filters such that each successive voxel filter is overlayed over a prior background voxel filter. Through a linear interpolation, a new background filter is determined and generated. The interpolation is successively performed for all voxels up to the front most voxel for the plane of view. The method is repeated until all display voxels are determined for the plane of view. (author)
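
    The per-voxel filtering described in the abstract, where each classified RGBA voxel is laid over the accumulated background along the viewing direction, is the familiar back-to-front "over" compositing. A minimal sketch of that compositing step alone, omitting the classification and interpolation stages of the patent, is:

        def composite_ray_back_to_front(samples):
            """samples: list of (r, g, b, a) values along one ray, ordered back to front.

            Each sample acts as a filter laid over the accumulated background:
            C_new = C_sample * a + C_background * (1 - a).
            """
            acc_r = acc_g = acc_b = 0.0
            for r, g, b, a in samples:
                acc_r = r * a + acc_r * (1.0 - a)
                acc_g = g * a + acc_g * (1.0 - a)
                acc_b = b * a + acc_b * (1.0 - a)
            return acc_r, acc_g, acc_b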

  10. Long-term thermophilic mono-digestion of rendering wastes and co-digestion with potato pulp

    International Nuclear Information System (INIS)

    Bayr, S.; Ojanperä, M.; Kaparaju, P.; Rintala, J.

    2014-01-01

    Highlights: • Rendering wastes’ mono-digestion and co-digestion with potato pulp were studied. • CSTR process with OLR of 1.5 kg VS/m³ d, HRT of 50 d was unstable in mono-digestion. • Free NH₃ inhibited mono-digestion of rendering wastes. • CSTR process with OLR of 1.5 kg VS/m³ d, HRT of 50 d was stable in co-digestion. • Co-digestion increased methane yield somewhat compared to mono-digestion. - Abstract: In this study, mono-digestion of rendering wastes and co-digestion of rendering wastes with potato pulp were studied for the first time in continuous stirred tank reactor (CSTR) experiments at 55 °C. Rendering wastes have high protein and lipid contents and are considered good substrates for methane production. However, accumulation of digestion intermediate products viz., volatile fatty acids (VFAs), long chain fatty acids (LCFAs) and ammonia nitrogen (NH₄-N and/or free NH₃) can cause process imbalance during the digestion. Mono-digestion of rendering wastes at an organic loading rate (OLR) of 1.5 kg volatile solids (VS)/m³ d and hydraulic retention time (HRT) of 50 d was unstable and resulted in methane yields of 450 dm³/kg VS fed. On the other hand, co-digestion of rendering wastes with potato pulp (60% wet weight, WW) at the same OLR and HRT improved the process stability and increased methane yields (500–680 dm³/kg VS fed). Thus, it can be concluded that co-digestion of rendering wastes with potato pulp could improve the process stability and methane yields from these difficult to treat industrial waste materials.

  11. Long-term thermophilic mono-digestion of rendering wastes and co-digestion with potato pulp

    Energy Technology Data Exchange (ETDEWEB)

    Bayr, S., E-mail: suvi.bayr@jyu.fi; Ojanperä, M.; Kaparaju, P.; Rintala, J.

    2014-10-15

    Highlights: • Rendering wastes’ mono-digestion and co-digestion with potato pulp were studied. • CSTR process with OLR of 1.5 kg VS/m{sup 3} d, HRT of 50 d was unstable in mono-digestion. • Free NH{sub 3} inhibited mono-digestion of rendering wastes. • CSTR process with OLR of 1.5 kg VS/m{sup 3} d, HRT of 50 d was stable in co-digestion. • Co-digestion increased methane yield somewhat compared to mono-digestion. - Abstract: In this study, mono-digestion of rendering wastes and co-digestion of rendering wastes with potato pulp were studied for the first time in continuous stirred tank reactor (CSTR) experiments at 55 °C. Rendering wastes have high protein and lipid contents and are considered good substrates for methane production. However, accumulation of digestion intermediate products viz., volatile fatty acids (VFAs), long chain fatty acids (LCFAs) and ammonia nitrogen (NH{sub 4}-N and/or free NH{sub 3}) can cause process imbalance during the digestion. Mono-digestion of rendering wastes at an organic loading rate (OLR) of 1.5 kg volatile solids (VS)/m{sup 3} d and hydraulic retention time (HRT) of 50 d was unstable and resulted in methane yields of 450 dm{sup 3}/kg VS{sub fed}. On the other hand, co-digestion of rendering wastes with potato pulp (60% wet weight, WW) at the same OLR and HRT improved the process stability and increased methane yields (500–680 dm{sup 3}/kg VS{sub fed}). Thus, it can be concluded that co-digestion of rendering wastes with potato pulp could improve the process stability and methane yields from these difficult to treat industrial waste materials.

  12. Generalized Partial Volume

    DEFF Research Database (Denmark)

    Darkner, Sune; Sporring, Jon

    2011-01-01

    Mutual Information (MI) and normalized mutual information (NMI) are popular choices as similarity measure for multimodal image registration. Presently, one of two approaches is often used for estimating these measures: The Parzen Window (PW) and the Generalized Partial Volume (GPV). Their theoret...... of view as well as w.r.t. computational complexity. Finally, we present algorithms for both approaches for NMI which is comparable in speed to Sum of Squared Differences (SSD), and we illustrate the differences between PW and GPV on a number of registration examples....
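
    For reference, the quantities being estimated, MI and NMI from a joint intensity histogram, can be written down as below. This is the plain histogram estimator, i.e. neither the Parzen Window nor the Generalized Partial Volume weighting compared in the paper; it only fixes the definitions, and the bin count is an arbitrary choice.

        import numpy as np

        def mi_nmi(img_a, img_b, bins=64):
            """Mutual information and normalised mutual information of two equally sized images."""
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            p_ab = joint / joint.sum()
            p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)

            def entropy(p):
                p = p[p > 0]
                return -np.sum(p * np.log(p))

            h_a, h_b, h_ab = entropy(p_a), entropy(p_b), entropy(p_ab)
            mi = h_a + h_b - h_ab
            nmi = (h_a + h_b) / h_ab
            return mi, nmi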

  13. Algorithms and Models for the Web Graph

    NARCIS (Netherlands)

    Gleich, David F.; Komjathy, Julia; Litvak, Nelli

    2015-01-01

    This volume contains the papers presented at WAW2015, the 12th Workshop on Algorithms and Models for the Web-Graph held during December 10–11, 2015, in Eindhoven. There were 24 submissions. Each submission was reviewed by at least one, and on average two, Program Committee members. The committee

  14. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
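
    The exact amplitude recursion mentioned at the end of the abstract is easy to reproduce for the special case of one marked item among N states: writing a for the amplitude of the marked element and b for each unmarked one, a Grover iteration (oracle followed by inversion about the mean) gives the update used in the sketch below.

        from math import sqrt

        def grover_amplitudes(N, iterations):
            """Track the (marked, unmarked) amplitudes for one marked item among N states."""
            a = b = 1.0 / sqrt(N)                      # uniform initial superposition
            history = [(a, b)]
            for _ in range(iterations):
                na = ((N - 2) / N) * a + (2 * (N - 1) / N) * b   # new marked amplitude
                nb = ((N - 2) / N) * b - (2 / N) * a             # new unmarked amplitude
                a, b = na, nb
                history.append((a, b))
            return history

        # probability of measuring the marked item after t iterations is a_t ** 2
        hist = grover_amplitudes(N=64, iterations=6)
        print([round(a * a, 3) for a, _ in hist])

    For N = 64 the success probability a² peaks after roughly (π/4)·√N ≈ 6 iterations, which the printed sequence shows.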

  15. The design of 3D scaffold for tissue engineering using automated scaffold design algorithm.

    Science.gov (United States)

    Mahmoud, Shahenda; Eldeib, Ayman; Samy, Sherif

    2015-06-01

    Several advances have been introduced in the field of bone regenerative medicine, and a new term, tissue engineering (TE), was created. In TE, a highly porous artificial extracellular matrix or scaffold is required to accommodate cells and guide their growth in three dimensions. The design of scaffolds with desirable internal and external structure represents a challenge for TE. In this paper, we introduce a new method known as automated scaffold design (ASD) for designing a 3D scaffold with minimal mismatches in its geometrical parameters. The method makes use of the k-means clustering algorithm to separate the different tissues and hence identify the defective bone portions. The segmented portions of different slices are registered to construct the 3D volume for the data. It also uses an isosurface rendering technique for 3D visualization of the scaffold and bones. It provides the ability to visualize the transplanted as well as the normal bone portions. The proposed system shows good performance in both the segmentation results and the visualization aspects.
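
    As an illustration of the first step described above, separating tissue classes in a CT slice by k-means clustering of voxel intensities, a tiny NumPy-only version is given below. It is not the ASD pipeline itself (registration and isosurface rendering are omitted), and the choice of three clusters is an assumption.

        import numpy as np

        def kmeans_1d(values, k=3, iters=50, seed=0):
            """Cluster scalar intensities into k classes; returns per-value labels and cluster centres."""
            rng = np.random.default_rng(seed)
            centres = rng.choice(values, size=k, replace=False).astype(float)
            for _ in range(iters):
                labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
                for c in range(k):
                    if np.any(labels == c):
                        centres[c] = values[labels == c].mean()
            return labels, centres

        def segment_slice(ct_slice, k=3):
            """Label every pixel of a 2D CT slice with its intensity cluster (e.g. air / soft tissue / bone)."""
            flat = ct_slice.astype(float).ravel()
            labels, _ = kmeans_1d(flat, k=k)
            return labels.reshape(ct_slice.shape)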

  16. Rendering Intelligence at Physical Layer for Smart Addressing and Multiple Access

    DEFF Research Database (Denmark)

    Sanyal, Rajarshi; Prasad, Ramjee; Cianca, Ernestina

    2010-01-01

    addressing of a node. For a typical closed-user-group type of network, we propose a multiple access mechanism and network topology which will not only eliminate the need for intelligent core network equipment in the network area, but also use this intelligent physical layer to directly reach any node over......The primary objective of this work is to propose a technique of wireless communication, where we render intelligence to the physical layer. We aim to realize a physical layer that can take part in some processes which are otherwise confined to higher-layer signalling activities, like for example...... the fundamentals behind the proposed multiple access scheme and draws out the benefits compared to the existing multiple access processes based on the cellular approach.

  17. The Role of the Patrimonial Result Account in Rendering Performance in the Secondary Educational Institutions

    Directory of Open Access Journals (Sweden)

    Daniela Vitan

    2016-01-01

    Through its tools, in particular the patrimonial result account, accounting provides information about performance in secondary educational institutions and beyond. The work ”Informational valence regarding the role of the patrimonial result account in rendering performance at secondary educational institutions” presents a model for analysing the performance of pre-university education institutions based on economic-financial indicators. These indicators imply an analysis of the dynamics and structure of revenues, costs and results, and make it possible to assess how resources are managed in order to cover expenses from revenue, how they evolve and what their overall balance is. The analysis was based on hypothetical data over a period of two years, after which it was determined that the institution had managed to maintain its level of efficiency in resource management.

  18. Scene reassembly after multimodal digitization and pipeline evaluation using photorealistic rendering

    DEFF Research Database (Denmark)

    Stets, Jonathan Dyssel; Dal Corso, Alessandro; Nielsen, Jannik Boll

    2017-01-01

    of the lighting environment. This enables pixelwise comparison of photographs of the real scene with renderings of the digital version of the scene. Such quantitative evaluation is useful for verifying acquired material appearance and reconstructed surface geometry, which is an important aspect of digital content......Transparent objects require acquisition modalities that are very different from the ones used for objects with more diffuse reflectance properties. Digitizing a scene where objects must be acquired with different modalities requires scene reassembly after reconstruction of the object surfaces....... This reassembly of a scene that was picked apart for scanning seems unexplored. We contribute with a multimodal digitization pipeline for scenes that require this step of reassembly. Our pipeline includes measurement of bidirectional reflectance distribution functions and high dynamic range imaging...

  19. Color design model of high color rendering index white-light LED module.

    Science.gov (United States)

    Ying, Shang-Ping; Fu, Han-Kuei; Hsieh, Hsin-Hsin; Hsieh, Kun-Yang

    2017-05-10

    The traditional white-light light-emitting diode (LED) is packaged with a single chip and a single phosphor but has a poor color rendering index (CRI). The next-generation package comprises two chips and a single phosphor, has a high CRI, and retains high luminous efficacy. This study employs two chips and two phosphors to improve the diode's color tunability with various proportions of two phosphors and various densities of phosphor in the silicone used. A color design model is established for color fine-tuning of the white-light LED module. The maximum difference between the measured and color-design-model simulated CIE 1931 color coordinates is approximately 0.0063 around a correlated color temperature (CCT) of 2500 K. This study provides a rapid method to obtain the color fine-tuning of a white-light LED module with a high CRI and luminous efficacy.

  20. Three-dimensional range data compression using computer graphics rendering pipeline.

    Science.gov (United States)

    Zhang, Song

    2012-06-20

    This paper presents the idea of naturally encoding three-dimensional (3D) range data into regular two-dimensional (2D) images utilizing computer graphics rendering pipeline. The computer graphics pipeline provides a means to sample 3D geometry data into regular 2D images, and also to retrieve the depth information for each sampled pixel. The depth information for each pixel is further encoded into red, green, and blue color channels of regular 2D images. The 2D images can further be compressed with existing 2D image compression techniques. By this novel means, 3D geometry data obtained by 3D range scanners can be instantaneously compressed into 2D images, providing a novel way of storing 3D range data into its 2D counterparts. We will present experimental results to verify the performance of this proposed technique.
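
    The core idea, sampling geometry through the graphics pipeline and packing each pixel's depth into the red, green and blue channels of an ordinary 2D image, can be made concrete with the naive 24-bit packing below. The published method uses an encoding designed to survive lossy 2D image compression; this linear packing is only meant to illustrate the principle and would not.

        import numpy as np

        def encode_depth_to_rgb(depth, z_near, z_far):
            """Pack normalised depth into 24 bits spread over the R, G, B channels of a uint8 image."""
            z = np.clip((depth - z_near) / (z_far - z_near), 0.0, 1.0)
            q = np.round(z * (2 ** 24 - 1)).astype(np.uint32)
            r = (q >> 16) & 0xFF
            g = (q >> 8) & 0xFF
            b = q & 0xFF
            return np.stack([r, g, b], axis=-1).astype(np.uint8)

        def decode_rgb_to_depth(rgb, z_near, z_far):
            """Invert the packing above to recover the depth values."""
            rgb = rgb.astype(np.uint32)
            q = (rgb[..., 0] << 16) | (rgb[..., 1] << 8) | rgb[..., 2]
            return z_near + (q / (2 ** 24 - 1)) * (z_far - z_near)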

  1. Can thiolation render a low molecular weight polymer of just 20-kDa mucoadhesive?

    Science.gov (United States)

    Mahmood, Arshad; Bonengel, Sonja; Laffleur, Flavia; Ijaz, Muhammad; Idrees, Muneeb Ahmad; Hussain, Shah; Huck, Christian W; Matuszczak, Barbara; Bernkop-Schnürch, Andreas

    2016-01-01

    The objective was to investigate whether even low-molecular weight polymers (LMWPs) can be rendered mucoadhesive due to thiolation. Interceded by the double catalytic system carbodiimide/N-hydroxysuccinimide, cysteamine was covalently attached to a copolymer, poly(4-styrenesulfonic acid-co-maleic acid) (PSSA-MA) exhibiting a molecular weight of just 20 kDa. Depending on the amount of added N-hydroxysuccinimide and cysteamine, the resulting PSSA-MA-cysteamine (PC) conjugates exhibited increasing degree of thiolation, highest being "PC 2300" exhibiting 2300.16 ± 149.86 μmol thiol groups per gram of polymer (mean ± SD; n = 3). This newly developed thiolated polymer was evaluated regarding mucoadhesive, rheological and drug release properties as well from the toxicological point of view. Swelling behavior in 100 mM phosphate buffer pH 6.8 was improved up to 180-fold. Furthermore, due to thiolation, the mucoadhesive properties of the polymer were 240-fold improved. Rheological measurements of polymer/mucus mixtures confirmed results obtained by mucoadhesion studies. In comparison to unmodified polymer, PC 2300 showed 2.3-, 2.3- and 2.4-fold increase in dynamic viscosity, elastic modulus and viscous modulus, respectively. Sustained release of the model drug codeine HCl out of the thiomer was provided for 2.5 h (p polymer. Moreover, the thiomer was found non-toxic over Caco-2 cells for a period of 6- and 24-h exposure. Findings of the present study provide evidence that due to thiolation LMWPs can be rendered highly mucoadhesive as well as cohesive and that a controlled drug release out of such polymers can be provided.

  2. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  3. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.

  4. Medical review practices for driver licensing volume 2: case studies of medical referrals and licensing outcomes in Maine, Ohio, Oregon, Texas, Washington, and Wisconsin.

    Science.gov (United States)

    2017-03-01

    This is the second of three reports examining driver medical review practices in the United States and how they fulfill the basic functions of identifying, assessing, and rendering licensing decisions on medically at-risk drivers. This volume pre...

  5. Effects of MRI Protocol Parameters, Preload Injection Dose, Fractionation Strategies, and Leakage Correction Algorithms on the Fidelity of Dynamic-Susceptibility Contrast MRI Estimates of Relative Cerebral Blood Volume in Gliomas.

    Science.gov (United States)

    Leu, K; Boxerman, J L; Ellingson, B M

    2017-03-01

    DSC perfusion MR imaging assumes that the contrast agent remains intravascular; thus, disruptions in the blood-brain barrier common in brain tumors can lead to errors in the estimation of relative CBV. Acquisition strategies, including the choice of flip angle, TE, TR, and preload dose and incubation time, along with post hoc leakage-correction algorithms, have been proposed as means for combating these leakage effects. In the current study, we used DSC-MR imaging simulations to examine the influence of these various acquisition parameters and leakage-correction strategies on the faithful estimation of CBV. DSC-MR imaging simulations were performed in 250 tumors with perfusion characteristics randomly generated from the distributions of real tumor population data, and comparison of leakage-corrected CBV was performed with a theoretic curve with no permeability. Optimal strategies were determined by protocol with the lowest mean error. The following acquisition strategies (flip angle/TE/TR and contrast dose allocation for preload and bolus) produced high CBV fidelity, as measured by the percentage difference from a hypothetic tumor with no leakage: 1) 35°/35 ms/1.5 seconds with no preload and full dose for DSC-MR imaging, 2) 35°/25 ms/1.5 seconds with ¼ dose preload and ¾ dose bolus, 3) 60°/35 ms/2.0 seconds with ½ dose preload and ½ dose bolus, and 4) 60°/35 ms/1.0 second with 1 dose preload and 1 dose bolus. Results suggest that a variety of strategies can yield similarly high fidelity in CBV estimation, namely those that balance T1- and T2*-relaxation effects due to contrast agent extravasation. © 2017 by American Journal of Neuroradiology.

  6. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,
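
    As a concrete illustration of what the verified algorithm computes, here is a minimal, unverified Python sketch of textbook LLL reduction with the usual parameter delta = 3/4, using exact rational arithmetic; it recomputes the Gram-Schmidt data at every outer step for clarity rather than efficiency, and the example basis is arbitrary.

        from fractions import Fraction

        def dot(u, v):
            return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

        def gram_schmidt(B):
            """Gram-Schmidt orthogonalization in exact rational arithmetic."""
            n = len(B)
            Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
            for i in range(n):
                v = [Fraction(x) for x in B[i]]
                for j in range(i):
                    mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                    v = [vi - mu[i][j] * bj for vi, bj in zip(v, Bs[j])]
                Bs.append(v)
            return Bs, mu

        def lll(B, delta=Fraction(3, 4)):
            """Return a delta-LLL-reduced basis of the integer lattice spanned by the rows of B."""
            B = [list(map(int, row)) for row in B]
            n, k = len(B), 1
            while k < n:
                Bs, mu = gram_schmidt(B)
                # Size-reduce b_k against b_{k-1}, ..., b_0, updating mu as we go.
                for j in range(k - 1, -1, -1):
                    q = round(mu[k][j])
                    if q:
                        B[k] = [a - q * b for a, b in zip(B[k], B[j])]
                        for j2 in range(j):
                            mu[k][j2] -= q * mu[j][j2]
                        mu[k][j] -= q
                # Lovasz condition: either advance k or swap and step back.
                if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
                    k += 1
                else:
                    B[k], B[k - 1] = B[k - 1], B[k]
                    k = max(k - 1, 1)
            return B

        print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))  # small example basis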

  7. Organ volume estimation using SPECT

    CERN Document Server

    Zaidi, H

    1996-01-01

    Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual-window method was used for scatter subtraction. We used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed and the influence of 1) fixed thresholding, 2) automatic thresholding, 3) attenuation, 4) scatter, and 5) reconstruction filter was investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are perform...
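
    The voxel-summation step described above amounts to thresholding the reconstructed volume and counting the voxels inside the resulting contour; a minimal numpy sketch is given below, where the fixed-threshold fraction and the voxel size are placeholder values rather than those used in the study.

        import numpy as np

        def estimate_volume(spect, voxel_size_ml, threshold_fraction=0.5):
            """Estimate organ volume by counting voxels above a fixed fraction of the maximum.

            spect              -- 3D numpy array of reconstructed counts (corrected beforehand)
            voxel_size_ml      -- volume of one voxel in millilitres
            threshold_fraction -- fixed-threshold segmentation level (placeholder value)
            """
            threshold = threshold_fraction * spect.max()
            n_voxels = int(np.count_nonzero(spect >= threshold))
            return n_voxels * voxel_size_ml

        # Example with a synthetic spherical "phantom" on a 64^3 grid of 0.05 ml voxels.
        z, y, x = np.mgrid[-32:32, -32:32, -32:32]
        phantom = (x**2 + y**2 + z**2 <= 10**2).astype(float)
        print(estimate_volume(phantom, voxel_size_ml=0.05))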

  8. Aperiodic Volume Optics

    Science.gov (United States)

    Gerke, Tim D.

    Presented in this thesis is an investigation into aperiodic volume optical devices. The three main topics of research and discussion are the aperiodic volume optical devices that we call computer-generated volume holograms (CGVH), defects within periodic 3D photonic crystals, and non-periodic, but ordered 3D quasicrystals. The first of these devices, CGVHs, are designed and investigated numerically and experimentally. We study the performance of multi-layered amplitude computer-generated volume holograms in terms of efficiency and angular/frequency selectivity. Simulation results show that such aperiodic devices can increase diffraction efficiency relative to periodic amplitude volume holograms while maintaining angular and wavelength selectivity. CGVHs are also designed as voxelated volumes using a new projection optimization algorithm. They are investigated using a volumetric diffraction simulation and a standard 3D beam propagation technique as well as experimentally. Both simulation and experiment verify that the structures function according to their design. These represent the first diffractive structures that have the capacity for generating arbitrary transmission and reflection wave fronts and that provide the ability for multiplexing arbitrary functionality given different illumination conditions. Also investigated and discussed in this thesis are 3D photonic crystals and quasicrystals. We demonstrate that these devices can be fabricated using a femtosecond laser direct writing system that is particularly appropriate for fabrication of such arbitrary 3D structures. We also show that these devices can provide 3D partial bandgaps which could become complete bandgaps if fabricated using high index materials or by coating lower index materials with high index metals. Our fabrication method is particularly suited to the fabrication of engineered defects within the periodic or quasi-periodic systems. We demonstrate the potential for fabricating defects within

  9. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
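
    As a small, self-contained example of the kind of algorithm the book surveys, the following Python sketch implements a basic particle swarm optimizer; the inertia and acceleration coefficients are common defaults and are not taken from the book.

        import numpy as np

        def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimize f over a box using a basic particle swarm."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, size=(n_particles, dim))      # positions
            v = np.zeros_like(x)                                  # velocities
            pbest = x.copy()                                      # personal bests
            pbest_val = np.apply_along_axis(f, 1, x)
            g = pbest[pbest_val.argmin()].copy()                  # global best
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                vals = np.apply_along_axis(f, 1, x)
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                g = pbest[pbest_val.argmin()].copy()
            return g, float(pbest_val.min())

        # Example: minimize the sphere function in 5 dimensions.
        best, val = pso(lambda p: float(np.sum(p**2)), dim=5, bounds=(-5.0, 5.0))
        print(best, val)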

  10. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
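
    The calculation described in the thesis is the standard PageRank power iteration; a compact Python sketch follows, in which the damping factor of 0.85 and the toy link structure are illustrative choices.

        def pagerank(links, damping=0.85, tol=1e-9, max_iter=100):
            """links: dict mapping each page to the list of pages it links to."""
            pages = list(links)
            n = len(pages)
            rank = {p: 1.0 / n for p in pages}
            for _ in range(max_iter):
                new = {p: (1.0 - damping) / n for p in pages}
                for p, outs in links.items():
                    if outs:
                        share = damping * rank[p] / len(outs)
                        for q in outs:
                            new[q] += share
                    else:  # dangling page: spread its rank over all pages
                        for q in pages:
                            new[q] += damping * rank[p] / n
                if sum(abs(new[p] - rank[p]) for p in pages) < tol:
                    return new
                rank = new
            return rank

        print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}))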

  11. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream multiple data stream) computer, in which the whole sequence to be sorted can fit in the
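
    One of the simplest algorithms in this family, odd-even transposition sort for a linear array of processors, can be modelled sequentially as below; each pass of the inner loop stands in for one parallel compare-exchange phase.

        def odd_even_transposition_sort(a):
            """Sort list a; phase p compares pairs (i, i+1) with i of matching parity."""
            a = list(a)
            n = len(a)
            for phase in range(n):
                start = phase % 2
                # On a real linear array, all of these compare-exchanges run concurrently.
                for i in range(start, n - 1, 2):
                    if a[i] > a[i + 1]:
                        a[i], a[i + 1] = a[i + 1], a[i]
            return a

        print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0, 2]))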

  12. Pure JavaScript Storyline Layout Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    2017-10-02

    This is a JavaScript library for a storyline layout algorithm. Storylines are adept at communicating complex change by encoding time on the x-axis and using the proximity of lines in the y direction to represent interaction between entities. The library in this disclosure takes as input a list of objects containing an id, time, and state. The output is a data structure that can be used to conveniently render a storyline visualization. Most importantly, the library computes y-coordinates for the entities over time that reduce layout artifacts, including crossings, wiggles, and whitespace. This is accomplished by solving a multi-objective, multi-stage optimization problem, where the output of one stage produces the input and constraints for the next stage.
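
    The library itself is not reproduced here, but its input/output contract (a list of id/time/state records in, per-entity y-coordinates out) can be illustrated with a deliberately naive greedy layout in Python; the real library additionally runs the multi-stage optimization that reduces crossings, wiggles, and whitespace.

        from collections import defaultdict

        def toy_storyline_layout(events):
            """events: list of dicts with keys 'id', 'time', 'state'.

            Returns {entity_id: [(time, y), ...]} -- a naive stand-in for the
            optimized coordinates the actual library would compute.
            """
            by_time = defaultdict(list)
            for e in sorted(events, key=lambda e: (e["time"], e["state"], e["id"])):
                by_time[e["time"]].append(e)
            coords = defaultdict(list)
            for t, evs in sorted(by_time.items()):
                y = 0
                last_state = None
                for e in evs:
                    if last_state is not None and e["state"] != last_state:
                        y += 1          # leave a whitespace gap between state groups
                    coords[e["id"]].append((t, y))
                    last_state = e["state"]
                    y += 1
            return dict(coords)

        events = [
            {"id": "alice", "time": 0, "state": "home"},
            {"id": "bob",   "time": 0, "state": "home"},
            {"id": "alice", "time": 1, "state": "work"},
            {"id": "bob",   "time": 1, "state": "home"},
        ]
        print(toy_storyline_layout(events))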

  13. Linear programming algorithms and applications

    CERN Document Server

    Vajda, S

    1981-01-01

    This text is based on a course of about 16 hours of lectures to students of mathematics, statistics, and/or operational research. It is intended to introduce readers to the very wide range of applicability of linear programming, covering problems of management, administration, transportation and a number of other uses which are mentioned in their context. The emphasis is on numerical algorithms, which are illustrated by examples of such modest size that the solutions can be obtained using pen and paper. It is clear that these methods, if applied to larger problems, can also be carried out on automatic (electronic) computers. Commercially available computer packages are, in fact, mainly based on algorithms explained in this book. The author is convinced that the user of these algorithms ought to be knowledgeable about the underlying theory. Therefore this volume is not merely addressed to the practitioner, but also to the mathematician who is interested in relatively new developments in algebraic theory and in...
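
    The small pen-and-paper examples the book favours can also be handed directly to a modern solver; for instance, with SciPy's linprog (the problem data below are arbitrary).

        from scipy.optimize import linprog

        # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
        # linprog minimizes, so the objective is negated.
        c = [-3.0, -2.0]
        A_ub = [[1.0, 1.0],
                [1.0, 3.0]]
        b_ub = [4.0, 6.0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print(res.x, -res.fun)   # optimal point and maximized objective value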

  14. Optimisation of Hidden Markov Model using Baum–Welch algorithm

    Indian Academy of Sciences (India)

    Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya. J C Joshi, Tankeshwar Kumar, Sunita Srivastava, Divya Sachdeva. Journal of Earth System Science, Volume 126, Issue 1, February 2017.
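
    For readers unfamiliar with the method named in the title, the following Python sketch shows standard Baum-Welch re-estimation for a discrete-observation HMM; it is not the temperature model from the paper, and the observation sequence is a made-up placeholder.

        import numpy as np

        def baum_welch(obs, n_states, n_symbols, n_iter=50, seed=0):
            """Re-estimate HMM parameters (pi, A, B) from one discrete observation sequence."""
            rng = np.random.default_rng(seed)
            A = rng.random((n_states, n_states))
            A /= A.sum(axis=1, keepdims=True)          # random row-stochastic transitions
            B = rng.random((n_states, n_symbols))
            B /= B.sum(axis=1, keepdims=True)          # random emission probabilities
            pi = np.full(n_states, 1.0 / n_states)
            obs = np.asarray(obs)
            T = len(obs)
            for _ in range(n_iter):
                # Scaled forward pass.
                alpha = np.zeros((T, n_states))
                scale = np.zeros(T)
                alpha[0] = pi * B[:, obs[0]]
                scale[0] = alpha[0].sum()
                alpha[0] /= scale[0]
                for t in range(1, T):
                    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
                    scale[t] = alpha[t].sum()
                    alpha[t] /= scale[t]
                # Scaled backward pass.
                beta = np.zeros((T, n_states))
                beta[-1] = 1.0
                for t in range(T - 2, -1, -1):
                    beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
                # Expected state occupancies and transition counts.
                gamma = alpha * beta
                gamma /= gamma.sum(axis=1, keepdims=True)
                xi = np.zeros((n_states, n_states))
                for t in range(T - 1):
                    x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
                    xi += x / x.sum()
                # M-step re-estimation.
                pi = gamma[0]
                A = xi / gamma[:-1].sum(axis=0)[:, None]
                for k in range(n_symbols):
                    B[:, k] = gamma[obs == k].sum(axis=0)
                B /= B.sum(axis=1, keepdims=True)
            return pi, A, B

        obs = [0, 1, 1, 0, 2, 2, 1, 0, 0, 2]   # made-up discretized temperature classes
        pi, A, B = baum_welch(obs, n_states=2, n_symbols=3)
        print(np.round(A, 3))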

  15. Vertex shading of the three-dimensional model based on ray-tracing algorithm

    Science.gov (United States)

    Hu, Xiaoming; Sang, Xinzhu; Xing, Shujun; Yan, Binbin; Wang, Kuiru; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    The ray tracing algorithm is one of the research hotspots in photorealistic graphics and an important light-and-shadow technique in many industries that deal with three-dimensional (3D) structure, such as aerospace, gaming, and video. Unlike the traditional method of pixel shading based on ray tracing, a novel ray tracing algorithm is presented to color and render the vertices of the 3D model directly. Rendering results depend on the degree of subdivision of the 3D model. A good light-and-shade effect is achieved by using a quad-tree data structure to adaptively subdivide a triangle according to the brightness difference of its vertices. A uniform grid algorithm is adopted to improve rendering efficiency. Besides, the rendering time is independent of the screen resolution. In theory, as long as the model is subdivided finely enough, effects equal to those of pixel shading will be obtained. In practice, a compromise can be struck between efficiency and effectiveness.
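
    A stripped-down Python sketch of the per-vertex idea is given below: each vertex receives a Lambertian term, and a single shadow ray is traced against a plain triangle list. The uniform grid acceleration structure and the quad-tree adaptive subdivision described in the paper are omitted, so this illustrates the shading step only.

        import numpy as np

        def ray_hits_triangle(origin, direction, tri, max_t, eps=1e-7):
            """Moller-Trumbore test: does the ray hit triangle tri = (v0, v1, v2) before max_t?"""
            v0, v1, v2 = tri
            e1, e2 = v1 - v0, v2 - v0
            p = np.cross(direction, e2)
            det = np.dot(e1, p)
            if abs(det) < eps:
                return False                      # ray parallel to the triangle plane
            inv = 1.0 / det
            s = origin - v0
            u = np.dot(s, p) * inv
            if u < 0.0 or u > 1.0:
                return False
            q = np.cross(s, e1)
            v = np.dot(direction, q) * inv
            if v < 0.0 or u + v > 1.0:
                return False
            t = np.dot(e2, q) * inv
            return eps < t < max_t

        def shade_vertices(vertices, normals, triangles, light_pos, base_color):
            """One RGB value per vertex: Lambertian term, zeroed when the shadow ray is blocked."""
            colors = []
            for vert, n in zip(vertices, normals):
                to_light = light_pos - vert
                dist = float(np.linalg.norm(to_light))
                d = to_light / dist
                origin = vert + 1e-4 * n          # offset to avoid self-intersection
                blocked = any(ray_hits_triangle(origin, d, tri, dist) for tri in triangles)
                lambert = 0.0 if blocked else max(float(np.dot(n, d)), 0.0)
                colors.append(base_color * lambert)
            return np.array(colors)

    In a full renderer, the per-vertex colours produced this way would then be interpolated across triangles by the rasterizer.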

  16. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
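
    A Python sketch consistent with the description above is shown below: the weight update uses a three-level quantization of the input vector, while the error is computed from the full-precision input. The step size, clipping threshold, and system-identification example are placeholder choices, not values from the paper.

        import numpy as np

        def clip3(x, threshold):
            """Three-level quantizer: -1, 0, or +1 depending on the threshold."""
            return np.where(np.abs(x) <= threshold, 0.0, np.sign(x))

        def mclms_step(w, x, d, mu=0.01, threshold=0.1):
            """One clipped-input LMS update for weights w, input vector x, desired sample d."""
            e = d - np.dot(w, x)                 # a-priori error with the full-precision input
            return w + mu * e * clip3(x, threshold), e

        # Identify a 4-tap FIR system from noisy data.
        rng = np.random.default_rng(0)
        true_w = np.array([0.5, -0.3, 0.2, 0.1])
        w = np.zeros(4)
        buf = np.zeros(4)
        for _ in range(5000):
            buf = np.roll(buf, 1)
            buf[0] = rng.standard_normal()
            d = np.dot(true_w, buf) + 0.01 * rng.standard_normal()
            w, _ = mclms_step(w, buf, d)
        print(np.round(w, 3))                    # should approach true_w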

  17. Optimized Data Indexing Algorithms for OLAP Systems

    Directory of Open Access Journals (Sweden)

    Lucian BORNAZ

    2010-12-01

    The need to process and analyze large data volumes, as well as to convey the information contained therein to decision makers, naturally led to the development of OLAP systems. Like relational database management systems, OLAP systems must ensure optimum access to the storage environment. Although there are several ways to optimize database systems, implementing a correct data indexing solution is the most effective and least costly. Thus, OLAP uses indexing algorithms for relational data and for n-dimensional summarized data stored in cubes. Today's database systems implement derived indexing algorithms based on the well-known Tree, Bitmap and Hash indexing algorithms, because no single indexing algorithm provides the best performance in every situation (data type, structure, volume, application). This paper presents a new n-dimensional cube indexing algorithm, derived from the well-known B-Tree index, which indexes data stored in data warehouses while taking into consideration their multi-dimensional nature, and provides better performance than the already implemented Tree-like index types.
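
    The paper's cube index is not reproduced here, but the underlying idea of ordering n-dimensional cube cells by a composite dimension key so that prefix range scans stay cheap (as in a B-Tree) can be illustrated with a toy Python class; a sorted list plus bisect stands in for the B-Tree pages.

        import bisect

        class CubeIndex:
            """Toy composite-key index over (dim1, ..., dimN) -> measure, kept sorted
            so that prefix range scans are logarithmic-plus-output, as in a B-Tree."""

            def __init__(self):
                self._keys = []    # sorted list of dimension tuples
                self._values = []  # measures aligned with _keys

            def insert(self, key, value):
                i = bisect.bisect_left(self._keys, key)
                self._keys.insert(i, key)
                self._values.insert(i, value)

            def range_scan(self, lo, hi):
                """Return (key, value) pairs for lo <= key <= hi in dimension order."""
                i = bisect.bisect_left(self._keys, lo)
                j = bisect.bisect_right(self._keys, hi)
                return list(zip(self._keys[i:j], self._values[i:j]))

        idx = CubeIndex()
        idx.insert(("2010", "EU", "widgets"), 120.0)
        idx.insert(("2010", "US", "widgets"), 300.0)
        idx.insert(("2011", "EU", "gadgets"), 80.0)
        # All 2010 cells, regardless of region and product (high sentinel for the upper bound):
        print(idx.range_scan(("2010",), ("2010", "\uffff")))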

  18. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  19. Cloud-based Monte Carlo modelling of BSSRDF for the rendering of human skin appearance (Conference Presentation)

    Science.gov (United States)

    Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.

    2016-03-01

    We present a new Monte Carlo based approach to modelling the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations in both skin tissue structure and the major chromophores are taken into account for different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. Simulated human skin reflectance spectra, the corresponding skin colours, and examples of 3D face rendering are presented and compared with the results of phantom studies.
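
    The authors' GPU/HTML5 implementation is not shown here; the Python sketch below only illustrates the underlying Monte Carlo ingredient, a photon random walk with Henyey-Greenstein scattering that estimates the diffuse reflectance of a semi-infinite medium, using rough, skin-like optical coefficients chosen purely for illustration.

        import math
        import random

        def diffuse_reflectance(mu_a, mu_s, g=0.8, n_photons=5000, seed=1):
            """Estimate diffuse reflectance of a semi-infinite medium by a photon random walk."""
            rng = random.Random(seed)
            mu_t = mu_a + mu_s
            albedo = mu_s / mu_t
            reflected = 0.0
            for _ in range(n_photons):
                z, uz, weight = 0.0, 1.0, 1.0        # launch at the surface, heading straight down
                while weight > 1e-4:                 # drop tiny weights (no Russian roulette)
                    step = -math.log(rng.random() or 1e-12) / mu_t
                    z += uz * step
                    if z <= 0.0:                     # escaped back through the surface
                        reflected += weight
                        break
                    weight *= albedo                 # implicit absorption at each interaction
                    # Sample the scattering polar angle from Henyey-Greenstein.
                    if g == 0.0:
                        cos_t = 2.0 * rng.random() - 1.0
                    else:
                        s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
                        cos_t = (1.0 + g * g - s * s) / (2.0 * g)
                    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
                    phi = 2.0 * math.pi * rng.random()
                    # Only the z direction cosine matters in a laterally infinite medium.
                    uz = uz * cos_t + math.sqrt(max(0.0, 1.0 - uz * uz)) * sin_t * math.cos(phi)
            return reflected / n_photons

        # Rough, epidermis-like optical properties at a single wavelength (illustrative only).
        print(diffuse_reflectance(mu_a=0.15, mu_s=10.0))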

  20. A Study of Layout, Rendering, and Interaction Methods for Immersive Graph Visualization.

    Science.gov (United States)

    Kwon, Oh-Hyun; Muelder, Chris; Lee, Kyungwon; Ma, Kwan-Liu

    2016-07-01

    Information visualization has traditionally limited itself to 2D representations, primarily due to the prevalence of 2D displays and report formats. However, there has been a recent surge in the popularity of consumer-grade 3D displays and immersive head-mounted displays (HMDs). The ubiquity of such displays enables the possibility of immersive, stereoscopic visualization environments. While techniques that utilize such immersive environments have been explored extensively for spatial and scientific visualizations, comparatively little has been explored for information visualization. In this paper, we present our considerations of layout, rendering, and interaction methods for visualizing graphs in an immersive environment. We conducted a user study to evaluate our techniques compared to traditional 2D graph visualization. The results show that participants answered significantly faster and with fewer interactions using our techniques, especially for the more difficult tasks. While the overall correctness rates are not significantly different, we found that participants gave significantly more correct answers with our techniques for larger graphs.