WorldWideScience

Sample records for models imaging simulation

  1. Photometric Modeling of Simulated Surface-Resolved Bennu Images

    Science.gov (United States)

    Golish, D.; DellaGiustina, D. N.; Clark, B.; Li, J. Y.; Zou, X. D.; Bennett, C. A.; Lauretta, D. S.

    2017-12-01

    The Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) is a NASA mission to study and return a sample of asteroid (101955) Bennu. Imaging data from the mission will be used to develop empirical surface-resolved photometric models of Bennu at a series of wavelengths. These models will be used to photometrically correct panchromatic and color base maps of Bennu, compensating for variations due to shadows and photometric angle differences, thereby minimizing seams in mosaicked images. Well-corrected mosaics are critical to the generation of a global hazard map and a global 1064-nm reflectance map that predicts LIDAR response. These data products directly feed into the selection of a site from which to safely acquire a sample. We also require photometric correction for the creation of color ratio maps of Bennu. Color ratio maps provide insight into the composition and geological history of the surface and allow for comparison to other Solar System small bodies. In advance of OSIRIS-REx's arrival at Bennu, we use simulated images to judge the efficacy of both the photometric modeling software and the mission observation plan. Our simulation software is based on USGS's Integrated Software for Imagers and Spectrometers (ISIS) and uses a synthetic shape model, a camera model, and an empirical photometric model to generate simulated images. This approach gives us the flexibility to create simulated images of Bennu based on analog surfaces from other small Solar System bodies and to test our modeling software under those conditions. Our photometric modeling software fits image data to several conventional empirical photometric models and produces the best-fit model parameters. The process is largely automated, which is crucial to the efficient production of data products during proximity operations. The software also produces several metrics on the quality of the observations themselves, such as surface coverage and the
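    The fitting step described above can be illustrated with a hedged sketch: a Minnaert model, one of the conventional empirical photometric models, fit to synthetic reflectance samples by log-linear least squares. The model choice, parameter values, and angle sampling below are illustrative assumptions, not the mission's actual pipeline.

```python
import numpy as np

def fit_minnaert(mu0, mu, r):
    """Fit the Minnaert model r = A * mu0**k * mu**(k-1) by log-linear
    least squares, where mu0 = cos(incidence) and mu = cos(emission)."""
    y = np.log(r * mu)            # log(r*mu) = log A + k*log(mu0*mu)
    x = np.log(mu0 * mu)
    k, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a), k

# Synthetic "image" samples with known parameters (illustrative values)
rng = np.random.default_rng(0)
mu0 = rng.uniform(0.1, 1.0, 500)          # cos(incidence angle)
mu = rng.uniform(0.1, 1.0, 500)           # cos(emission angle)
r = 0.12 * mu0**0.6 * mu**(0.6 - 1)       # noiseless Minnaert reflectance

A, k = fit_minnaert(mu0, mu, r)
print(round(A, 3), round(k, 3))  # recovers A=0.12, k=0.6
```

    Because log(r·μ) is linear in log(μ0·μ), the fit reduces to a first-degree polynomial fit; real image data with noise and shadowed pixels would call for weighted or robust fitting instead.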

  2. Mammogram synthesis using a 3D simulation. I. Breast tissue model and image acquisition simulation

    International Nuclear Information System (INIS)

    Bakic, Predrag R.; Albert, Michael; Brzakovic, Dragana; Maidment, Andrew D. A.

    2002-01-01

    A method is proposed for generating synthetic mammograms based upon simulations of breast tissue and the mammographic imaging process. A computer breast model has been designed with a realistic distribution of large and medium scale tissue structures. Parameters controlling the size and placement of simulated structures (adipose compartments and ducts) provide a method for consistently modeling images of the same simulated breast with modified position or acquisition parameters. The mammographic imaging process is simulated using a compression model and a model of the x-ray image acquisition process. The compression model estimates breast deformation using tissue elasticity parameters found in the literature and clinical force values. The synthetic mammograms were generated by a mammogram acquisition model using a monoenergetic parallel beam approximation applied to the synthetically compressed breast phantom
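    The x-ray acquisition step (a monoenergetic parallel-beam approximation) reduces to Beer-Lambert line integrals through the compressed phantom. The sketch below assumes a toy voxel phantom with illustrative attenuation coefficients; it is not the authors' breast model.

```python
import numpy as np

# Toy compressed-breast phantom: a 3-D grid of linear attenuation
# coefficients mu (1/cm); the values are illustrative, not clinical.
nz, ny, nx = 8, 64, 64
mu = np.full((nz, ny, nx), 0.05)     # adipose-like background
mu[:, 24:40, 24:40] = 0.09           # denser glandular-like block

dz = 0.5                             # voxel thickness along the beam (cm)
I0 = 1.0                             # incident monoenergetic intensity

# Parallel beam along z: each detector pixel sees one line integral
path = mu.sum(axis=0) * dz           # integral of mu along the ray
image = I0 * np.exp(-path)           # Beer-Lambert attenuation
```

    The denser block attenuates more, so pixels behind it come out darker than the background; a polyenergetic beam model would replace the single exponential with a spectrum-weighted sum.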

  3. Imaging infrared: Scene simulation, modeling, and real image tracking; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    Science.gov (United States)

    Triplett, Milton J.; Wolverton, James R.; Hubert, August J.

    1989-09-01

    Various papers on scene simulation, modeling, and real image tracking using IR imaging are presented. Individual topics addressed include: tactical IR scene generator, dynamic FLIR simulation in flight training research, high-speed dynamic scene simulation in UV to IR spectra, development of an IR sensor calibration facility, IR celestial background scene description, transmission measurement of optical components at cryogenic temperatures, diffraction model for a point-source generator, silhouette-based tracking for tactical IR systems, use of knowledge in electrooptical trackers, detection and classification of target formations in IR image sequences, SMPRAD: simplified three-dimensional cloud radiance model, IR target generator, recent advances in testing of thermal imagers, generic IR system models with dynamic image generation, modeling realistic target acquisition using IR sensors in multiple-observer scenarios, and novel concept of scene generation and comprehensive dynamic sensor test.

  4. Efficient scatter model for simulation of ultrasound images from computed tomography data

    Science.gov (United States)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is growing interest in the use of this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering-map generation was revised, with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe some quality and performance metrics to validate these results, where a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in their distributions.
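    A minimal sketch of the scattering approach the abstract describes (multiplicative noise followed by convolution with a PSF). The Rayleigh scatterer model, Gaussian PSF, and toy echogenicity map are stand-in assumptions, not the paper's tailored PSFs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Echogenicity map (derived from CT in the paper; here a toy gradient).
tissue = np.tile(np.linspace(0.2, 1.0, 128), (128, 1))

# Multiplicative scatterer noise (Rayleigh is a common speckle assumption),
# followed by convolution with a separable Gaussian stand-in for the PSF.
speckle = tissue * rng.rayleigh(scale=1.0, size=tissue.shape)

t = np.arange(-8, 9)
psf_1d = np.exp(-t**2 / (2 * 2.0**2))
psf_1d /= psf_1d.sum()

# Separable 2-D convolution done as two 1-D passes (rows, then columns)
blur = np.apply_along_axis(lambda r: np.convolve(r, psf_1d, mode="same"), 1, speckle)
image = np.apply_along_axis(lambda c: np.convolve(c, psf_1d, mode="same"), 0, blur)
```

    A separable PSF keeps the per-frame cost low, which is the property that makes this kind of scatter model viable at interactive frame rates.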

  5. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    Science.gov (United States)

    Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming

    2017-12-01

    Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical workflow to build the training image and
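    The core MPS idea, simulating each cell by borrowing from training-image locations whose neighborhoods match the already-simulated values, can be illustrated with a toy direct-sampling sketch. The 2-D layered training image and all parameters are hypothetical; the study's simulations are 3-D and vastly larger.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny binary training image (sand=1 / clay=0) with horizontal layering,
# a 2-D stand-in for the 3-D voxel training image described above.
ti = np.zeros((30, 30), dtype=int)
ti[10:14, :] = 1
ti[22:25, :] = 1

def mps_direct_sample(ti, shape, n_cond=4):
    """Toy direct-sampling MPS: visit cells in random order and copy the
    value from a training-image location whose neighborhood matches the
    already-simulated neighbors (falling back to an unconditional draw)."""
    sim = np.full(shape, -1)
    cells = [(i, j) for i in range(shape[0]) for j in range(shape[1])]
    rng.shuffle(cells)
    for (i, j) in cells:
        # gather up to n_cond informed neighbors (relative offset + value)
        conditioning = [(di, dj, sim[i + di, j + dj])
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if (di, dj) != (0, 0)
                        and 0 <= i + di < shape[0] and 0 <= j + dj < shape[1]
                        and sim[i + di, j + dj] != -1][:n_cond]
        # random scan of the training image for a matching pattern
        for _ in range(500):
            ti_i = rng.integers(1, ti.shape[0] - 1)
            ti_j = rng.integers(1, ti.shape[1] - 1)
            if all(ti[ti_i + di, ti_j + dj] == v for di, dj, v in conditioning):
                sim[i, j] = ti[ti_i, ti_j]
                break
        else:
            sim[i, j] = ti[ti_i, ti_j]   # unconditional fallback draw
    return sim

realization = mps_direct_sample(ti, (20, 20))
```

    Even this toy makes the study's point visible: small edits to `ti` (layer thickness, number of layers) noticeably change the simulated realizations.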

  6. Can quantum imaging be classically simulated?

    OpenAIRE

    D'Angelo, Milena; Shih, Yanhua

    2003-01-01

    Quantum imaging has been demonstrated since 1995 by using entangled photon pairs. The physics community named these experiments "ghost image", "quantum crypto-FAX", "ghost interference", etc. Recently, Bennink et al. simulated the "ghost" imaging experiment by two co-rotating k-vector correlated lasers. Did the classical simulation simulate the quantum aspect of the "ghost" image? We wish to provide an answer. In fact, the simulation is very similar to a historical model of local realism. The...

  7. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    Directory of Open Access Journals (Sweden)

    A.-S. Høyer

    2017-12-01

    Full Text Available Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical

  8. Automobile simulation model and its identification. Behavior measuring by image processing; Jidosha simulation model to dotei jikken. Gazo kaiseki ni yoru undo no keisoku

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, H; Morita, S; Matsuura, Y [Osaka Sangyo University, Osaka (Japan)

    1997-10-01

    Model simulation technology is important for automobile development. In particular, for investigations concerning ABS, TRC, VDC, and so on, the model should simulate not only the overall behavior of the automobile but also internal information such as the torque, acceleration, and velocity of each drive shaft. From this point of view, a 4-wheel simulation model that can simulate more than 50 items was made. On the other hand, a 3-D image processing technique using 2 video cameras was adopted to identify the model. Considerably good agreement was found between the simulated and measured values. 3 refs., 7 figs., 2 tabs.

  9. TH-CD-207A-08: Simulated Real-Time Image Guidance for Lung SBRT Patients Using Scatter Imaging

    International Nuclear Information System (INIS)

    Redler, G; Cifter, G; Templeton, A; Lee, C; Bernard, D; Liao, Y; Zhen, H; Turian, J; Chu, J

    2016-01-01

    Purpose: To develop a comprehensive Monte Carlo-based model for the acquisition of scatter images of patient anatomy in real-time, during lung SBRT treatment. Methods: During SBRT treatment, images of patient anatomy can be acquired from scattered radiation. To rigorously examine the utility of scatter images for image guidance, a model is developed using MCNP code to simulate scatter images of phantoms and lung cancer patients. The model is validated by comparing experimental and simulated images of phantoms of different complexity. The differentiation between tissue types is investigated by imaging objects of known compositions (water, lung, and bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is used to investigate image noise properties for various quantities of delivered radiation (monitor units (MU)). Patient scatter images are simulated using the validated simulation model. 4DCT patient data are converted to an MCNP input geometry accounting for different tissue compositions and densities. Lung tumor phantom images acquired with decreasing imaging time (decreasing MU) are used to model the expected noise amplitude in patient scatter images, producing realistic simulated patient scatter images with varying temporal resolution. Results: Image intensity in simulated and experimental scatter images of tissue-equivalent objects (water, lung, bone) matches within the uncertainty (∼3%). Lung tumor phantom images agree as well. Specifically, tumor-to-lung contrast matches within the uncertainty. The addition of random noise approximating quantum noise in experimental images to simulated patient images shows that scatter imaging of lung tumors can provide images in as little as 0.5 seconds with CNR∼2.7. Conclusions: A scatter imaging simulation model is developed and validated using experimental phantom scatter images. Following validation, lung cancer patient scatter images are simulated. These simulated
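    The contrast-to-noise trade-off the abstract quantifies can be sketched as follows. The tumor contrast, noise scaling with monitor units, and geometry are illustrative assumptions, not the authors' MCNP model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy scatter image: lung-like background with a brighter tumor disc.
ny, nx = 64, 64
y, x = np.mgrid[:ny, :nx]
tumor = (y - 32) ** 2 + (x - 32) ** 2 < 8 ** 2

def scatter_image(mu_fraction):
    base = np.where(tumor, 1.3, 1.0)          # tumor-to-lung contrast
    sigma = 0.1 / np.sqrt(mu_fraction)        # fewer MU -> noisier image
    return base + rng.normal(0.0, sigma, base.shape)

def cnr(img):
    """Contrast-to-noise ratio between tumor and lung regions."""
    lung = img[~tumor]
    return abs(img[tumor].mean() - lung.mean()) / lung.std()

cnr_full = cnr(scatter_image(1.0))     # full imaging time
cnr_fast = cnr(scatter_image(0.25))    # a quarter of the monitor units
```

    Shorter acquisitions (fewer MU per frame) raise the noise floor and lower the CNR, mirroring the 0.5-second / CNR∼2.7 operating point reported above.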

  10. TH-CD-207A-08: Simulated Real-Time Image Guidance for Lung SBRT Patients Using Scatter Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Redler, G; Cifter, G; Templeton, A; Lee, C; Bernard, D; Liao, Y; Zhen, H; Turian, J; Chu, J [Rush University Medical Center, Chicago, IL (United States)

    2016-06-15

    Purpose: To develop a comprehensive Monte Carlo-based model for the acquisition of scatter images of patient anatomy in real-time, during lung SBRT treatment. Methods: During SBRT treatment, images of patient anatomy can be acquired from scattered radiation. To rigorously examine the utility of scatter images for image guidance, a model is developed using MCNP code to simulate scatter images of phantoms and lung cancer patients. The model is validated by comparing experimental and simulated images of phantoms of different complexity. The differentiation between tissue types is investigated by imaging objects of known compositions (water, lung, and bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is used to investigate image noise properties for various quantities of delivered radiation (monitor units (MU)). Patient scatter images are simulated using the validated simulation model. 4DCT patient data are converted to an MCNP input geometry accounting for different tissue compositions and densities. Lung tumor phantom images acquired with decreasing imaging time (decreasing MU) are used to model the expected noise amplitude in patient scatter images, producing realistic simulated patient scatter images with varying temporal resolution. Results: Image intensity in simulated and experimental scatter images of tissue-equivalent objects (water, lung, bone) matches within the uncertainty (∼3%). Lung tumor phantom images agree as well. Specifically, tumor-to-lung contrast matches within the uncertainty. The addition of random noise approximating quantum noise in experimental images to simulated patient images shows that scatter imaging of lung tumors can provide images in as little as 0.5 seconds with CNR∼2.7. Conclusions: A scatter imaging simulation model is developed and validated using experimental phantom scatter images. Following validation, lung cancer patient scatter images are simulated. These simulated

  11. Discrete Event Simulation Model of the Polaris 2.1 Gamma Ray Imaging Radiation Detection Device

    Science.gov (United States)

    2016-06-01

    Master’s thesis by Andres T..., June 2016; approved for public release, distribution is unlimited. The platform, Simkit, was utilized to create a discrete event simulation (DES) model of the Polaris. After carefully constructing the DES

  12. Bio-imaging and visualization for patient-customized simulations

    CERN Document Server

    Luo, Xiongbiao; Li, Shuo

    2014-01-01

    This book contains the full papers presented at the MICCAI 2013 workshop Bio-Imaging and Visualization for Patient-Customized Simulations (MWBIVPCS 2013). MWBIVPCS 2013 brought together researchers representing several fields, such as Biomechanics, Engineering, Medicine, Mathematics, Physics and Statistics. The contributions included in this book present and discuss new trends in those fields, using several methods and techniques, including the finite element method, similarity metrics, optimization processes, graphs, hidden Markov models, sensor calibration, fuzzy logic, data mining, cellular automata, active shape models, template matching and level sets. These serve as tools to address more efficiently different and timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image-based modelling, simulation and surgical planning, image-guided robot-assisted surgery, and image-based diagnosis. This boo...

  13. Image based SAR product simulation for analysis

    Science.gov (United States)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR image as input; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and their advantages and disadvantages are discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit; the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.
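    The gray-value prediction step can be sketched in one dimension: compute the local incidence angle from the DEM slope and evaluate a backscatter curve for each look direction. The terrain profile, look angle, and Lambertian-like cosine curve are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy 1-D terrain profile (m) sampled every 25 m along a range line.
dem = 50 * np.sin(np.linspace(0, 4 * np.pi, 128))
slope = np.arctan(np.gradient(dem, 25.0))        # local slope (rad)

def simulate(slope_sign, look_deg=35.0):
    """Predict gray values from the local incidence angle; slope_sign
    selects which side the sensor illuminates from (ascending vs
    descending pass in this simplified 1-D geometry)."""
    theta = np.radians(look_deg) + slope_sign * slope
    return np.clip(np.cos(theta), 0.0, None)     # cosine backscatter curve

ascending = simulate(-1)    # slopes facing the ascending pass brighten
descending = simulate(+1)   # the same slopes darken in the opposite pass
```

    The same terrain yields different gray values per look direction, which is exactly the ascending-to-descending conversion the SIR-B verification exercised.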

  14. Simulations, Imaging, and Modeling: A Unique Theme for an Undergraduate Research Program in Biomechanics.

    Science.gov (United States)

    George, Stephanie M; Domire, Zachary J

    2017-07-01

    As the reliance on computational models to inform experiments and evaluate medical devices grows, the demand for students with modeling experience will grow. In this paper, we report on the 3-yr experience of a National Science Foundation (NSF) funded Research Experiences for Undergraduates (REU) based on the theme simulations, imaging, and modeling in biomechanics. While directly applicable to REU sites, our findings also apply to those creating other types of summer undergraduate research programs. The objective of the paper is to examine if a theme of simulations, imaging, and modeling will improve students' understanding of the important topic of modeling, provide an overall positive research experience, and provide an interdisciplinary experience. The structure of the program and the evaluation plan are described. We report on the results from 25 students over three summers from 2014 to 2016. Overall, students reported significant gains in the knowledge of modeling, research process, and graduate school based on self-reported mastery levels and open-ended qualitative responses. This theme provides students with a skill set that is adaptable to other applications illustrating the interdisciplinary nature of modeling in biomechanics. Another advantage is that students may also be able to continue working on their project following the summer experience through network connections. In conclusion, we have described the successful implementation of the theme simulation, imaging, and modeling for an REU site and the overall positive response of the student participants.

  15. Development of digital phantoms based on a finite element model to simulate low-attenuation areas in CT imaging for pulmonary emphysema quantification.

    Science.gov (United States)

    Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo

    2017-09-01

    To develop an innovative finite element (FE) model of lung parenchyma which simulates pulmonary emphysema on CT imaging. The model aims to generate a set of digital phantoms of low-attenuation area (LAA) images with different grades of emphysema severity. Four individual parameter configurations simulating different grades of emphysema severity were utilized to generate 40 FE models using ten randomizations for each setting. We compared two measures of emphysema severity (relative area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes) between the simulated LAA images and those computed directly on the models' output (considered as reference). The LAA images obtained from our model output can simulate CT-LAA images in subjects with different grades of emphysema severity. Both RA and D computed on simulated LAA images were underestimated as compared to those calculated on the models' output, suggesting that measurements in CT imaging may not be accurate in the assessment of real emphysema extent. Our model is able to mimic the cluster size distribution of LAA on CT imaging of subjects with pulmonary emphysema. The model could be useful to generate standard test images and to design physical phantoms of LAA images for the assessment of the accuracy of indexes for the radiologic quantitation of emphysema.
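    The two severity indexes named above can be computed as sketched below: RA is the fraction of pixels under the low-attenuation threshold, and D is the power-law slope of the cumulative cluster-size distribution. The -950 HU threshold is the conventional one for emphysema; the synthetic slice and patch placement are illustrative assumptions.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(4)

# Toy CT slice (HU): normal lung around -860 HU with emphysema-like
# pockets pushed below the -950 HU low-attenuation threshold.
hu = rng.normal(-860, 30, (80, 80))
for _ in range(25):
    i, j = rng.integers(5, 75, 2)
    hu[i - 2:i + 3, j - 2:j + 3] = -980

laa = hu < -950                      # low-attenuation area mask
ra = laa.mean()                      # RA: relative area of emphysema

def cluster_sizes(mask):
    """Sizes of 4-connected LAA clusters via BFS (no SciPy needed)."""
    seen = np.zeros_like(mask, dtype=bool)
    sizes = []
    for si in range(mask.shape[0]):
        for sj in range(mask.shape[1]):
            if mask[si, sj] and not seen[si, sj]:
                queue, n = deque([(si, sj)]), 0
                seen[si, sj] = True
                while queue:
                    i, j = queue.popleft()
                    n += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        a, b = i + di, j + dj
                        if (0 <= a < mask.shape[0] and 0 <= b < mask.shape[1]
                                and mask[a, b] and not seen[a, b]):
                            seen[a, b] = True
                            queue.append((a, b))
                sizes.append(n)
    return np.array(sizes)

sizes = cluster_sizes(laa)
# D: negative slope of log cumulative count vs log cluster size
s = np.sort(sizes)
cum = np.arange(len(s), 0, -1)       # number of clusters with size >= s
D = -np.polyfit(np.log(s), np.log(cum), 1)[0]
```

    A larger D indicates that small clusters dominate; the paper's comparison is between these indexes computed on simulated LAA images versus directly on the FE model output.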

  16. Medical Image Registration and Surgery Simulation

    DEFF Research Database (Denmark)

    Bro-Nielsen, Morten

    1996-01-01

    This thesis explores the application of physical models in medical image registration and surgery simulation. The continuum models of elasticity and viscous fluids are described in detail, and this knowledge is used as a basis for most of the methods described here. Real-time deformable models......, and the use of selective matrix vector multiplication. Fluid medical image registration A new and faster algorithm for non-rigid registration using viscous fluid models is presented. This algorithm replaces the core part of the original algorithm with multi-resolution convolution using a new filter, which...... growth is also presented. Using medical knowledge about the growth processes of the mandibular bone, a registration algorithm for time sequence images of the mandible is developed. Since this registration algorithm models the actual development of the mandible, it is possible to simulate the development...

  17. OntoVIP: an ontology for the annotation of object models used for medical image simulation.

    Science.gov (United States)

    Gibaud, Bernard; Forestier, Germain; Benoit-Cattin, Hugues; Cervenansky, Frédéric; Clarysse, Patrick; Friboulet, Denis; Gaignard, Alban; Hugonnard, Patrick; Lartizien, Carole; Liebgott, Hervé; Montagnat, Johan; Tabary, Joachim; Glatard, Tristan

    2014-12-01

    This paper describes the creation of a comprehensive conceptualization of object models used in medical image simulation, suitable for major imaging modalities and simulators. The goal is to create an application ontology that can be used to annotate the models in a repository integrated in the Virtual Imaging Platform (VIP), to facilitate their sharing and reuse. Annotations make the anatomical, physiological and pathophysiological content of the object models explicit. In such an interdisciplinary context we chose to rely on a common integration framework provided by a foundational ontology, that facilitates the consistent integration of the various modules extracted from several existing ontologies, i.e. FMA, PATO, MPATH, RadLex and ChEBI. Emphasis is put on methodology for achieving this extraction and integration. The most salient aspects of the ontology are presented, especially the organization in model layers, as well as its use to browse and query the model repository. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies

    Energy Technology Data Exchange (ETDEWEB)

    Häggström, Ida, E-mail: haeggsti@mskcc.org [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 and Department of Radiation Sciences, Umeå University, Umeå 90187 (Sweden); Beattie, Bradley J.; Schmidtlein, C. Ross [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States)

    2016-06-15

    Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3 percentage points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region-of-interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models it may not be suitable for
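    The per-voxel time-activity-curve step can be sketched with a one-tissue compartment model: an exponential impulse response convolved with a plasma input function, plus frame-wise noise. The input function, rate constants, and noise level are illustrative assumptions; dPETSTEP's own MATLAB modules are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

t = np.linspace(0, 60, 121)            # frame mid-times (minutes)
dt = t[1] - t[0]
cp = 10 * t * np.exp(-t / 4)           # toy plasma input function

def tac(k1, k2):
    """One-tissue-compartment TAC: C(t) = K1*exp(-k2*t) convolved with Cp(t)."""
    irf = k1 * np.exp(-k2 * t)
    return np.convolve(cp, irf)[:len(t)] * dt

# Noiseless kinetics for one voxel, then frame-wise Gaussian noise as a
# simple stand-in for the counting-noise model applied per frame.
clean = tac(k1=0.1, k2=0.05)
noisy = clean + rng.normal(0, 0.05 * clean.max(), clean.shape)
```

    Repeating this for every voxel of a parametric (K1, k2) image, then blurring, noising, and reconstructing each frame, is the pipeline the abstract outlines.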

  19. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies

    International Nuclear Information System (INIS)

    Häggström, Ida; Beattie, Bradley J.; Schmidtlein, C. Ross

    2016-01-01

    Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3 percentage points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region-of-interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models it may not be suitable for

  20. Noise simulation in cone beam CT imaging with parallel computing

    International Nuclear Information System (INIS)

    Tu, S.-J.; Shaw, Chris C; Chen, Lingyun

    2006-01-01

    We developed a computer noise simulation model for cone beam computed tomography imaging using a general purpose PC cluster. This model uses a mono-energetic x-ray approximation and allows us to investigate three primary performance components, specifically quantum noise, detector blurring and additive system noise. A parallel random number generator based on the Weyl sequence was implemented in the noise simulation and a visualization technique was accordingly developed to validate the quality of the parallel random number generator. In our computer simulation model, three-dimensional (3D) phantoms were mathematically modelled and used to create 450 analytical projections, which were then sampled into digital image data. Quantum noise was simulated and added to the analytical projection image data, which were then filtered to incorporate flat panel detector blurring. Additive system noise was generated and added to form the final projection images. The Feldkamp algorithm was implemented and used to reconstruct the 3D images of the phantoms. A 24 dual-Xeon PC cluster was used to compute the projections and reconstructed images in parallel with each CPU processing 10 projection views for a total of 450 views. Based on this computer simulation system, simulated cone beam CT images were generated for various phantoms and technique settings. Noise power spectra for the flat panel x-ray detector and reconstructed images were then computed to characterize the noise properties. As an example among the potential applications of our noise simulation model, we showed that images of low contrast objects can be produced and used for image quality evaluation
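    The Weyl-sequence generator mentioned above can be sketched as follows: the sequence x_n = frac(n·α) with irrational α is equidistributed on [0, 1), and distinct (start, step) pairs give separate streams for different CPUs. This is a simplified illustration, not the authors' parallel implementation.

```python
import math

def weyl_stream(alpha=math.sqrt(2), start=0, step=1):
    """Weyl sequence x_n = frac(n * alpha). Distinct (start, step)
    pairs yield non-overlapping streams for parallel workers."""
    n = start
    while True:
        n += step
        yield (n * alpha) % 1.0

stream = weyl_stream()
xs = [next(stream) for _ in range(10000)]
mean = sum(xs) / len(xs)
print(round(mean, 2))  # ≈ 0.5 for an equidistributed sequence on [0, 1)
```

    Visualizing such streams (e.g., as 2-D scatter plots of consecutive pairs) is one way to validate generator quality, in the spirit of the visualization check the abstract describes.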

  1. Research on simulated infrared image utility evaluation using deep representation

    Science.gov (United States)

    Zhang, Ruiheng; Mu, Chengpo; Yang, Yu; Xu, Lixin

    2018-01-01

    Infrared (IR) image simulation is an important data source for various target recognition systems. However, whether simulated IR images can be used as training data for classifiers depends on their fidelity and authenticity. For the evaluation of IR image features, a deep-representation-based algorithm is proposed. Unlike conventional methods, which usually adopt a priori knowledge or manually designed features, the proposed method can extract essential features and quantitatively evaluate the utility of simulated IR images. First, for data preparation, we employ our IR image simulation system to generate large amounts of IR images. Then, we present the evaluation model of simulated IR images, for which an end-to-end IR feature extraction and target detection model based on a deep convolutional neural network is designed. Finally, experiments illustrate that our proposed method outperforms other verification algorithms in evaluating simulated IR images. Cross-validation, variable-proportion mixed-data validation, and simulation process contrast experiments are carried out to evaluate the utility and objectivity of the images generated by our simulation system. The optimum mixing ratio between simulated and real data is 0.2 ≤ γ ≤ 0.3, which is an effective data augmentation method for real IR images.
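The reported optimum mixing ratio can be read as a simple augmentation rule: add simulated images amounting to a fraction γ of the real training set. A minimal sketch of that rule, with hypothetical dataset sizes (not the paper's pipeline):

```python
import random

def mix_training_set(real, simulated, gamma=0.25, seed=0):
    """Augment real IR samples with simulated ones at mixing ratio gamma.

    gamma is the number of simulated images as a fraction of the real
    set, chosen here inside the 0.2-0.3 range the study reports as
    optimal. Inputs are any sequences of image samples.
    """
    k = int(round(gamma * len(real)))
    rng = random.Random(seed)
    return list(real) + rng.sample(list(simulated), k)

# Toy stand-ins: 100 "real" and 1000 "simulated" samples.
train = mix_training_set(range(100), range(1000), gamma=0.25)
```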

  2. SU-F-J-178: A Computer Simulation Model Observer for Task-Based Image Quality Assessment in Radiation Therapy

    International Nuclear Information System (INIS)

    Dolly, S; Mutic, S; Anastasio, M; Li, H; Yu, L

    2016-01-01

    Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a singular value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02%, at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose). Framework flexibility allows for incorporation
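The AUTOC figure of merit used above is simply the area under the TOC curve (tumour control probability plotted against normal tissue complication probability as the prescribed dose is scaled). A small sketch with made-up sample points, not the paper's treatment-planning pipeline:

```python
import numpy as np

def area_under_toc(tcp, ntcp):
    """Area under a therapeutic operating characteristic (TOC) curve.

    tcp  : tumour control probabilities at sampled dose scalings
    ntcp : normal tissue complication probabilities at the same points
    Trapezoidal integration after sorting by NTCP; a perfect observer
    (TCP -> 1 at negligible NTCP) approaches AUTOC = 1.
    """
    x = np.asarray(ntcp, float)
    y = np.asarray(tcp, float)
    order = np.argsort(x)
    x, y = x[order], y[order]
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Hypothetical TOC sample points for illustration only.
autoc = area_under_toc(tcp=[0.0, 0.7, 0.9, 1.0], ntcp=[0.0, 0.1, 0.3, 1.0])
```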

  3. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies.

    Science.gov (United States)

    Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross

    2016-06-01

    To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
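The frame-wise noising idea described above (generate a noise-free time activity curve per voxel, then apply counting noise per frame) can be sketched as below. This is an illustration only, not dPETSTEP itself: the one-tissue compartment model, plasma input function, and sensitivity factor are assumed toy values, and system blurring, scatters, randoms, attenuation, and reconstruction are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_tissue_tac(t, k1, k2, cp):
    """Noise-free tissue TAC from a one-tissue compartment model,
    C(t) = K1 * exp(-k2 t) convolved with Cp(t), via discrete convolution."""
    dt = t[1] - t[0]
    return k1 * np.convolve(cp, np.exp(-k2 * t))[: len(t)] * dt

def noisy_frames(tac, frame_dur, sensitivity=50.0):
    """Counting noise per frame: expected counts scale with activity and
    frame duration; Poisson-noised counts are divided back to activity."""
    expected = tac * frame_dur * sensitivity
    return rng.poisson(np.maximum(expected, 0)) / (frame_dur * sensitivity)

t = np.arange(0.0, 60.0, 1.0)       # frame mid-times, minutes
cp = np.exp(-0.3 * t) * t           # toy plasma input function
tac = one_tissue_tac(t, k1=0.1, k2=0.05, cp=cp)
measured = noisy_frames(tac, frame_dur=np.full_like(t, 1.0))
```

In the tool itself this per-voxel loop is followed by projection, noising in sinogram space, and per-frame reconstruction.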

  4. Tolerance analysis through computational imaging simulations

    Science.gov (United States)

    Birch, Gabriel C.; LaCasse, Charles F.; Stubbs, Jaclynn J.; Dagel, Amber L.; Bradley, Jon

    2017-11-01

    The modeling and simulation of non-traditional imaging systems require holistic consideration of the end-to-end system. We demonstrate this approach through a tolerance analysis of a random scattering lensless imaging system.

  5. Simulation of ultrasound backscatter images from fish

    DEFF Research Database (Denmark)

    Pham, An Hoai

    2011-01-01

    The objective of this work is to investigate ultrasound (US) backscatter in the MHz range from fish to develop a realistic and reliable simulation model. The long-term objective of the work is to develop the needed signal processing for fish species differentiation using US. In in-vitro experiments, a cod (Gadus morhua) was scanned with both a BK Medical ProFocus 2202 ultrasound scanner and a Toshiba Aquilion ONE computed tomography (CT) scanner. The US images of the fish were compared with US images created using the ultrasound simulation program Field II. The center frequency of the transducer is 10 MHz and the Full Width at Half Maximum (FWHM) at the focus point is 0.54 mm in the lateral direction. The transducer model in Field II was calibrated using a wire phantom to validate the simulated point spread function. The inputs to the simulation were the CT image data of the fish converted…

  6. Design and simulation of a totally digital image system for medical image applications

    International Nuclear Information System (INIS)

    Archwamety, C.

    1987-01-01

    The Totally Digital Imaging System (TDIS) is based on system requirements information from the Radiology Department, University of Arizona Health Science Center. This dissertation presents the design of this complex system: the TDIS specification, the system performance requirements, and the evaluation of the system using computer-simulation programs. Discrete-event simulation models were developed for the TDIS subsystems, including an image network, imaging equipment, a storage migration algorithm, a database archive system, and a control and management network. The simulation system uses empirical data generation and retrieval rates measured at the University Medical Center hospital. The entire TDIS system was simulated in Simscript II.5 on a VAX 8600 computer system. Simulation results show the fiber-optic image network to be suitable; however, the optical-disk storage system represents a performance bottleneck.

  7. Monte Carlo simulations in small animal PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Branco, Susana [Universidade de Lisboa, Faculdade de Ciencias, Instituto de Biofisica e Engenharia Biomedica, Lisbon (Portugal)], E-mail: susana.silva@fc.ul.pt; Jan, Sebastien [Service Hospitalier Frederic Joliot, CEA/DSV/DRM, Orsay (France); Almeida, Pedro [Universidade de Lisboa, Faculdade de Ciencias, Instituto de Biofisica e Engenharia Biomedica, Lisbon (Portugal)

    2007-10-01

    This work is based on the use of an implemented Positron Emission Tomography (PET) simulation system dedicated to small animal PET imaging. Geant4 Application for Tomographic Emission (GATE), a Monte Carlo simulation platform based on the Geant4 libraries, is well suited for modeling the microPET FOCUS system and for implementing realistic phantoms, such as the MOBY phantom, and data maps from real examinations. The microPET FOCUS simulation model built with GATE has been validated for spatial resolution, counting rate performance, imaging contrast recovery and quantitative analysis. Results from realistic studies of the mouse body using {sup 18}F{sup -} and [{sup 18}F]FDG imaging protocols are presented. These simulations include the injection of realistic doses into the animal and realistic time framing. The results have shown that it is possible to simulate small animal PET acquisitions under realistic conditions, and are expected to be useful for improving the quantitative analysis in PET mouse body studies.

  8. Hyperspectral imaging simulation of object under sea-sky background

    Science.gov (United States)

    Wang, Biao; Lin, Jia-xuan; Gao, Wei; Yue, Hui

    2016-10-01

    Remote sensing image simulation plays an important role in spaceborne/airborne payload demonstration and algorithm development. Hyperspectral imaging is valuable in marine monitoring and search and rescue. To meet the demand for spectral imaging of objects in complex sea scenes, a physics-based method for simulating the spectral image of an object under a sea scene is proposed. By developing an imaging simulation model that accounts for the object, background, atmospheric conditions and sensor, it is possible to examine the influence of wind speed, atmospheric conditions and other environmental factors on spectral image quality in complex sea scenes. First, the sea scattering model is established based on the Phillips sea spectrum model, rough-surface scattering theory and the volume scattering characteristics of water. Measured bidirectional reflectance distribution function (BRDF) data of objects are fitted to a statistical model. MODTRAN software is used to obtain the solar illumination on the sea, the sky brightness, the atmospheric transmittance from sea to sensor and the atmosphere-backscattered radiance, and a Monte Carlo ray tracing method is used to calculate the composite scattering of the object on the sea surface and the spectral image. Finally, the object spectrum is acquired by space transformation, radiometric degradation and the addition of noise. The model connects the spectral image with the environmental parameters, the object parameters, and the sensor parameters, providing a tool for payload demonstration and algorithm development.

  9. Radar Echo Scattering Modeling and Image Simulations of Full-scale Convex Rough Targets at Terahertz Frequencies

    Directory of Open Access Journals (Sweden)

    Gao Jingkun

    2018-02-01

    Echo simulation is a precondition for developing radar imaging systems, algorithms, and subsequent applications. Electromagnetic scattering modeling of the target is key to echo simulation. At terahertz (THz) frequencies, targets are usually of ultra-large electrical size, which makes applying classical electromagnetic calculation methods impractical. At the same time, the short wavelength makes the surface roughness of targets a factor that cannot be ignored, rendering traditional echo simulation methods based on the point-scattering hypothesis inapplicable. Modeling the scattering characteristics of targets and efficiently generating their radar echoes in THz bands has therefore become a problem that must be solved. In this paper, a hierarchical semi-deterministic modeling method is proposed. A full-wave algorithm for rough surfaces is used to calculate the scattered field of each facet. The scattered fields of all facets are then transformed into the target coordinate system and coherently summed. Finally, the radar echo containing phase information is obtained. Using small-scale rough models, our method is compared with a standard high-frequency numerical method, which verifies its effectiveness. Imaging results for a full-scale cone-shaped target are presented, and the scattering modeling and echo generation problem for full-scale convex targets with rough surfaces in THz bands is preliminarily solved; this lays the foundation for future research on imaging regimes and algorithms.
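The coherent summation step (transforming per-facet scattered fields to a common coordinate system and summing them with their two-way propagation phases) can be sketched as follows. The facet fields and ranges are made-up numbers; the full-wave rough-surface solve that produces them is out of scope here.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def coherent_echo(facet_fields, ranges, freq=300e9):
    """Coherently sum per-facet scattered fields into one echo sample.

    facet_fields : complex scattered field of each facet (given here)
    ranges       : radar-to-facet distances in metres
    Each facet is delayed by its two-way phase 2kR before summation,
    so the result retains phase information.
    """
    k = 2 * np.pi * freq / C                       # wavenumber at THz carrier
    phases = np.exp(-1j * 2 * k * np.asarray(ranges))
    return np.sum(np.asarray(facet_fields) * phases)

echo = coherent_echo([1.0, 0.5, 0.2], [10.0, 10.0003, 10.0007])
```

Sweeping `freq` over the transmitted band and repeating this sum yields the frequency-domain echo used for imaging.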

  10. Image-based model of the spectrin cytoskeleton for red blood cell simulation.

    Science.gov (United States)

    Fai, Thomas G; Leo-Macias, Alejandra; Stokes, David L; Peskin, Charles S

    2017-10-01

    We simulate deformable red blood cells in the microcirculation using the immersed boundary method with a cytoskeletal model that incorporates structural details revealed by tomographic images. The elasticity of red blood cells is known to be supplied by both their lipid bilayer membranes, which resist bending and local changes in area, and their cytoskeletons, which resist in-plane shear. The cytoskeleton consists of spectrin tetramers that are tethered to the lipid bilayer by ankyrin and by actin-based junctional complexes. We model the cytoskeleton as a random geometric graph, with nodes corresponding to junctional complexes and with edges corresponding to spectrin tetramers such that the edge lengths are given by the end-to-end distances between nodes. The statistical properties of this graph are based on distributions gathered from three-dimensional tomographic images of the cytoskeleton by a segmentation algorithm. We show that the elastic response of our model cytoskeleton, in which the spectrin polymers are treated as entropic springs, is in good agreement with the experimentally measured shear modulus. By simulating red blood cells in flow with the immersed boundary method, we compare this discrete cytoskeletal model to an existing continuum model and predict the extent to which dynamic spectrin network connectivity can protect against failure in the case of a red cell subjected to an applied strain. The methods presented here could form the basis of disease- and patient-specific computational studies of hereditary diseases affecting the red cell cytoskeleton.
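The random-geometric-graph construction above (nodes = junctional complexes, edges = spectrin tetramers, edge lengths = end-to-end distances) can be sketched as below. The uniform node placement and fixed distance cutoff are placeholder choices; the paper instead draws these statistics from distributions segmented out of tomographic images.

```python
import numpy as np

rng = np.random.default_rng(2)

def spectrin_graph(n_nodes=100, box=1.0, cutoff=0.2):
    """Build a toy random geometric graph for a cytoskeleton patch.

    Nodes are junctional complexes placed uniformly in a cube; an edge
    (spectrin tetramer) joins any pair closer than `cutoff`, and its
    length is the end-to-end distance between the two nodes.
    """
    nodes = rng.uniform(0.0, box, size=(n_nodes, 3))
    diffs = nodes[:, None, :] - nodes[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1)
    i, j = np.where(np.triu(dist < cutoff, k=1))   # upper triangle: each pair once
    return nodes, list(zip(i, j)), dist[i, j]

nodes, edges, lengths = spectrin_graph()
```

In a mechanical model each edge would then carry an entropic (e.g. worm-like-chain) spring force as a function of its length.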

  11. AUTOMATIC INTERPRETATION OF HIGH RESOLUTION SAR IMAGES: FIRST RESULTS OF SAR IMAGE SIMULATION FOR SINGLE BUILDINGS

    Directory of Open Access Journals (Sweden)

    J. Tao

    2012-09-01

    Due to its all-weather data acquisition capability, high-resolution spaceborne Synthetic Aperture Radar (SAR) plays an important role in remote sensing applications such as change detection. However, because of the complex geometric mapping of buildings in urban areas, SAR images are often hard to interpret. SAR simulation techniques ease the visual interpretation of SAR images, while fully automatic interpretation remains a challenge. This paper presents a method for supporting the interpretation of high-resolution SAR images with simulated radar images using a LiDAR digital surface model (DSM). Line features are extracted from the simulated and real SAR images and used for matching. A single building model is generated from the DSM and used for building recognition in the SAR image. The concept is demonstrated for the city centre of Munich, where comparison of the simulation with TerraSAR-X data shows good similarity. Based on the results of simulation and matching, special features (e.g., double-bounce lines, shadow areas) can be automatically indicated in the SAR image.

  12. Research on hyperspectral dynamic scene and image sequence simulation

    Science.gov (United States)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithms. Because of its high spectral resolution, strong band continuity, anti-interference properties and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment at lower development cost and with a shorter development period. Meanwhile, visual simulation can produce large amounts of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a generation method for digital scenes. By building multiple sensor models for different bands and different bandwidths, hyperspectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, have been simulated. The final dynamic scenes are highly realistic and run in real time, at frame rates up to 100 Hz. By saving all scene grey-level data from the same viewpoint, an image sequence is obtained. The analysis results show that, whether in the infrared or the visible band, the grey-scale variations of the simulated hyperspectral images are consistent with the theoretical analysis.
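The per-band sensor sampling step (reducing a high-resolution spectrum to a handful of bands of a chosen bandwidth) can be sketched with Gaussian spectral responses. The band centers and FWHM values here are illustrative, not the paper's sensor models:

```python
import numpy as np

def resample_to_bands(wl, spectrum, centers, fwhm):
    """Sample a high-resolution spectrum onto sensor bands.

    wl       : wavelength grid (um)
    spectrum : spectral values on that grid
    centers  : band center wavelengths (um)
    fwhm     : band width, e.g. 0.01-0.1 um as in the abstract
    Each band value is the response-weighted average of the spectrum
    under a Gaussian spectral response of the given FWHM.
    """
    sigma = fwhm / 2.3548            # FWHM -> Gaussian sigma
    out = []
    for c in centers:
        w = np.exp(-0.5 * ((wl - c) / sigma) ** 2)
        out.append(np.sum(w * spectrum) / np.sum(w))
    return np.array(out)

wl = np.arange(0.4, 1.0, 0.001)      # visible/NIR grid, um
flat = np.ones_like(wl)              # a flat spectrum should stay flat
bands = resample_to_bands(wl, flat, centers=[0.5, 0.6, 0.7], fwhm=0.05)
```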

  13. Satellite image simulations for model-supervised, dynamic retrieval of crop type and land use intensity

    Science.gov (United States)

    Bach, H.; Klug, P.; Ruf, T.; Migdall, S.; Schlenz, F.; Hank, T.; Mauser, W.

    2015-04-01

    To support food security, information products about the actual cropping area per crop type, the current status of agricultural production and estimated yields, as well as the sustainability of the agricultural management are necessary. Based on this information, well-targeted land management decisions can be made. Remote sensing is in a unique position to contribute to this task, as it is globally available and provides a plethora of information about current crop status. M4Land is a comprehensive system in which a crop growth model (PROMET) and a reflectance model (SLC) are coupled in order to provide these information products by analyzing multi-temporal satellite images. SLC uses modelled surface state parameters from PROMET, such as leaf area index or phenology of different crops, to simulate spatially distributed surface reflectance spectra. This is the basis for generating artificial satellite images considering sensor-specific configurations (spectral bands, solar and observation geometries). Ensembles of model runs are used to represent different crop types, fertilization status, soil colour and soil moisture. By multi-temporal comparison of simulated and real satellite images, the land cover/crop type can be classified in a dynamic, model-supervised way, without in-situ training data. The method is demonstrated in an agricultural test site in Bavaria. Its transferability is studied by analysing PROMET model results for the rest of Germany. In particular, the simulated phenological development can be verified on this scale in order to understand whether PROMET is able to adequately simulate spatial as well as temporal (intra- and inter-season) crop growth conditions, a prerequisite for the model-supervised approach. This sophisticated new technology allows monitoring of management decisions on the field level using high-resolution optical data (presently RapidEye and Landsat). The M4Land analysis system is designed to integrate multi-mission data and is

  14. Simulated annealing image reconstruction for positron emission tomography

    International Nuclear Information System (INIS)

    Sundermann, E.; Lemahieu, I.; Desmedt, P.

    1994-01-01

    In Positron Emission Tomography (PET), images have to be reconstructed from noisy projection data. The noise on the PET data can be modeled by a Poisson distribution. In this paper, we present the results of using the simulated annealing technique to reconstruct PET images. Various parameter settings of the simulated annealing algorithm are discussed and optimized. The reconstructed images are of good quality and high contrast, in comparison to other reconstruction techniques. (authors)
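The approach can be sketched as a Metropolis-style search over image values: perturb one pixel, recompute the projection misfit, and accept worse states with probability exp(-Δ/T) under a cooling schedule. The sketch below uses a toy system matrix and a squared-error cost rather than the Poisson likelihood the abstract implies, so it only illustrates the annealing loop itself:

```python
import numpy as np

rng = np.random.default_rng(3)

def anneal_reconstruct(A, proj, n_iter=20000, t0=1.0, cooling=0.9995):
    """Toy simulated-annealing reconstruction.

    A    : system matrix mapping image values to projection data
    proj : measured projection data
    One pixel is perturbed per iteration; moves that worsen the misfit
    are accepted with Metropolis probability exp(-delta/T) while the
    temperature T is cooled geometrically.
    """
    x = np.zeros(A.shape[1])
    cost = np.sum((A @ x - proj) ** 2)
    t = t0
    for _ in range(n_iter):
        cand = x.copy()
        k = rng.integers(len(x))
        cand[k] = max(0.0, cand[k] + rng.normal(0, 0.1))  # keep activity nonnegative
        c = np.sum((A @ cand - proj) ** 2)
        if c < cost or rng.random() < np.exp((cost - c) / t):
            x, cost = cand, c
        t *= cooling
    return x, cost

A = rng.uniform(0, 1, (12, 6))                      # toy system matrix
truth = np.array([1.0, 0.0, 2.0, 0.5, 0.0, 1.5])    # toy "image"
x_hat, final_cost = anneal_reconstruct(A, A @ truth)
```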

  15. Realistic simulation of reduced-dose CT with noise modeling and sinogram synthesis using DICOM CT images

    International Nuclear Information System (INIS)

    Won Kim, Chang; Kim, Jong Hyo

    2014-01-01

    Purpose: Reducing the patient dose while maintaining the diagnostic image quality during CT exams is the subject of a growing number of studies, in which simulations of reduced-dose CT with patient data have been used as an effective technique when exploring the potential of various dose reduction techniques. Difficulties in accessing raw sinogram data, however, have restricted the use of this technique to a limited number of institutions. Here, we present a novel reduced-dose CT simulation technique which provides realistic low-dose images without the requirement of raw sinogram data. Methods: Two key characteristics of CT systems, the noise equivalent quanta (NEQ) and the algorithmic modulation transfer function (MTF), were measured for various combinations of object attenuation and tube currents by analyzing the noise power spectrum (NPS) of CT images obtained with a set of phantoms. Those measurements were used to develop a comprehensive CT noise model covering the reduced x-ray photon flux, object attenuation, system noise, and bow-tie filter, which was then employed to generate a simulated noise sinogram for the reduced-dose condition with the use of a synthetic sinogram generated from a reference CT image. The simulated noise sinogram was filtered with the algorithmic MTF and back-projected to create a noise CT image, which was then added to the reference CT image, finally providing a simulated reduced-dose CT image. The simulation performance was evaluated in terms of the degree of NPS similarity, the noise magnitude, the bow-tie filter effect, and the streak noise pattern at photon starvation sites with the set of phantom images. Results: The simulation results showed good agreement with actual low-dose CT images in terms of their visual appearance and in a quantitative evaluation test. The magnitude and shape of the NPS curves of the simulated low-dose images agreed well with those of real low-dose images, showing discrepancies of less than ±3.2% in
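The core sinogram-domain idea can be sketched directly: for a line integral p with detected counts N = I0·exp(-p), the variance of p is approximately 1/N, so the *additional* variance a lower tube output would cause is 1/N_low − 1/N_ref. The sketch below injects that extra noise into a toy sinogram, leaving out the NEQ, MTF, and bow-tie refinements that make the paper's simulation realistic; the tube outputs are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)

def reduced_dose_sinogram(sino, i0_ref, i0_low):
    """Inject the extra quantum noise of a reduced-dose acquisition.

    sino   : line-integral sinogram from the reference (full-dose) scan
    i0_ref : incident photons per ray at the reference dose
    i0_low : incident photons per ray at the simulated reduced dose
    Gaussian noise with variance 1/N_low - 1/N_ref is added per ray,
    so attenuating paths (large p) correctly receive more noise.
    """
    n_ref = i0_ref * np.exp(-sino)
    n_low = i0_low * np.exp(-sino)
    extra_var = 1.0 / n_low - 1.0 / n_ref
    return sino + rng.normal(0.0, np.sqrt(extra_var), sino.shape)

sino = np.full((180, 128), 2.0)     # uniform toy sinogram, p = 2
low = reduced_dose_sinogram(sino, i0_ref=1e5, i0_low=2e4)
```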

  16. Images created in a model eye during simulated cataract surgery can be the basis for images perceived by patients during cataract surgery

    Science.gov (United States)

    Inoue, M; Uchida, A; Shinoda, K; Taira, Y; Noda, T; Ohnuma, K; Bissen-Miyajima, H; Hirakata, A

    2014-01-01

    Purpose To evaluate the images created in a model eye during simulated cataract surgery. Patients and methods This study was conducted as a laboratory investigation and interventional case series. An artificial opaque lens, a clear intraocular lens (IOL), or an irrigation/aspiration (I/A) tip was inserted into the 'anterior chamber' of a model eye with the frosted posterior surface corresponding to the retina. Video images were recorded of the posterior surface of the model eye from the rear during simulated cataract surgery. The video clips were shown to 20 patients before cataract surgery, and the similarity of their visual perceptions to these images was evaluated postoperatively. Results The images of the moving lens fragments and I/A tip and the insertion of the IOL were seen from the rear. The image through the opaque lens and the IOL without moving objects was the light of the surgical microscope from the rear. However, when the microscope light was turned off after IOL insertion, the images of the microscope and operating room were observed by the room illumination from the rear. Seventy percent of the patients answered that the visual perceptions of moving lens fragments were similar to the video clips and 55% reported similarity with the IOL insertion. Eighty percent of the patients recommended that patients watch the video clip before their scheduled cataract surgery. Conclusions The patients' visual perceptions during cataract surgery can be reproduced in the model eye. Watching the video images preoperatively may help relax the patients during surgery. PMID:24788007

  17. Simulated annealing image reconstruction for positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Sundermann, E; Lemahieu, I; Desmedt, P [Department of Electronics and Information Systems, University of Ghent, St. Pietersnieuwstraat 41, B-9000 Ghent, Belgium (Belgium)

    1994-12-31

    In Positron Emission Tomography (PET), images have to be reconstructed from noisy projection data. The noise on the PET data can be modeled by a Poisson distribution. In this paper, we present the results of using the simulated annealing technique to reconstruct PET images. Various parameter settings of the simulated annealing algorithm are discussed and optimized. The reconstructed images are of good quality and high contrast, in comparison to other reconstruction techniques. (authors). 11 refs., 2 figs.

  18. Validation of the GATE Monte Carlo simulation platform for modelling a CsI(Tl) scintillation camera dedicated to small-animal imaging

    International Nuclear Information System (INIS)

    Lazaro, D; Buvat, I; Loudos, G; Strul, D; Santin, G; Giokaris, N; Donnarieix, D; Maigne, L; Spanoudaki, V; Styliaris, S; Staelens, S; Breton, V

    2004-01-01

    Monte Carlo simulations are increasingly used in scintigraphic imaging to model imaging systems and to develop and assess tomographic reconstruction algorithms and correction methods for improved image quantitation. GATE (GEANT4 application for tomographic emission) is a new Monte Carlo simulation platform based on GEANT4 dedicated to nuclear imaging applications. This paper describes the GATE simulation of a prototype of scintillation camera dedicated to small-animal imaging and consisting of a CsI(Tl) crystal array coupled to a position-sensitive photomultiplier tube. The relevance of GATE to model the camera prototype was assessed by comparing simulated 99mTc point spread functions, energy spectra, sensitivities, scatter fractions and an image of a capillary phantom with the corresponding experimental measurements. Results showed an excellent agreement between simulated and experimental data: experimental spatial resolutions were predicted with an error less than 100 μm. The difference between experimental and simulated system sensitivities for different source-to-collimator distances was within 2%. Simulated and experimental scatter fractions in a [98-182 keV] energy window differed by less than 2% for sources located in water. Simulated and experimental energy spectra agreed very well between 40 and 180 keV. These results demonstrate the ability and flexibility of GATE for simulating original detector designs. The main weakness of GATE concerns the long computation time it requires: this issue is currently under investigation by the GEANT4 and the GATE collaborations

  19. Automatic construction of 3D-ASM intensity models by simulating image acquisition: application to myocardial gated SPECT studies.

    Science.gov (United States)

    Tobon-Gomez, Catalina; Butakoff, Constantine; Aguade, Santiago; Sukno, Federico; Moragas, Gloria; Frangi, Alejandro F

    2008-11-01

    Active shape models hold great promise for model-based medical image analysis. Their practical use, though, is undermined by the need to train such models on large image databases. Automatic building of point distribution models (PDMs) has been successfully addressed and a number of autolandmarking techniques are currently available. However, the need for strategies to automatically build intensity models around each landmark has been largely overlooked in the literature. This work demonstrates the potential of creating intensity models automatically by simulating image generation. We show that it is possible to reuse a 3D PDM built from computed tomography (CT) to segment gated single photon emission computed tomography (gSPECT) studies. Training is performed on a realistic virtual population in which image acquisition and formation have been modeled using the SIMIND Monte Carlo simulator and the ASPIRE image reconstruction software, respectively. The dataset comprised 208 digital phantoms (4D-NCAT) and 20 clinical studies. The evaluation is accomplished by comparing point-to-surface and volume errors against a proper gold standard. Results show that gSPECT studies can be successfully segmented by models trained under this scheme with subvoxel accuracy. The accuracy in estimated LV function parameters, such as end-diastolic volume, end-systolic volume, and ejection fraction, ranged from 90.0% to 94.5% for the virtual population and from 87.0% to 89.5% for the clinical population.

  20. Signal and image processing systems performance evaluation, simulation, and modeling; Proceedings of the Meeting, Orlando, FL, Apr. 4, 5, 1991

    Science.gov (United States)

    Nasr, Hatem N.; Bazakos, Michael E.

    The various aspects of the evaluation and modeling problems in algorithms, sensors, and systems are addressed. Consideration is given to a generic modular imaging IR signal processor, real-time architecture based on the image-processing module family, application of the Proto Ware simulation testbed to the design and evaluation of advanced avionics, development of a fire-and-forget imaging infrared seeker missile simulation, an adaptive morphological filter for image processing, laboratory development of a nonlinear optical tracking filter, a dynamic end-to-end model testbed for IR detection algorithms, wind tunnel model aircraft attitude and motion analysis, an information-theoretic approach to optimal quantization, parametric analysis of target/decoy performance, neural networks for automated target recognition parameters adaptation, performance evaluation of a texture-based segmentation algorithm, evaluation of image tracker algorithms, and multisensor fusion methodologies. (No individual items are abstracted in this volume)

  1. Model-based microwave image reconstruction: simulations and experiments

    International Nuclear Information System (INIS)

    Ciocan, Razvan; Jiang Huabei

    2004-01-01

    We describe an integrated microwave imaging system that can provide spatial maps of the dielectric properties of heterogeneous media from tomographically collected data. The hardware system (800-1200 MHz) was built around a lock-in amplifier with 16 fixed antennas. The reconstruction algorithm was implemented using a Newton iterative method with combined Marquardt-Tikhonov regularization. System performance was evaluated using heterogeneous media mimicking human breast tissue. The finite element method, coupled with the Bayliss-Turkel radiation boundary conditions, was applied to compute the electric field distribution in the heterogeneous media of interest. The results show that inclusions embedded in a 76 mm-diameter background medium can be quantitatively reconstructed from both simulated and experimental data. Quantitative analysis of the microwave images obtained suggests that an inclusion of 14 mm in diameter is the smallest object that can currently be fully characterized using experimental data, while objects as small as 10 mm in diameter can be quantitatively resolved with simulated data.
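The regularized Newton update at the heart of such reconstructions, δ = (JᵀJ + λI)⁻¹ Jᵀ r, can be sketched on a toy linear problem. The Jacobian here is a random matrix, not a microwave forward model, so this only illustrates the damped update itself:

```python
import numpy as np

def newton_step(jac, residual, lam=1e-2):
    """One Marquardt-Tikhonov-regularised Newton update.

    jac      : Jacobian of the forward model at the current estimate
    residual : data misfit r = measured - predicted
    lam      : regularisation weight (the Marquardt/Tikhonov damping)
    Returns delta = (J^T J + lam I)^{-1} J^T r.
    """
    jtj = jac.T @ jac
    return np.linalg.solve(jtj + lam * np.eye(jtj.shape[0]), jac.T @ residual)

# Tiny linear test problem: recover x_true from y = J x in damped steps.
rng = np.random.default_rng(5)
J = rng.normal(size=(20, 5))
x_true = rng.normal(size=5)
x = np.zeros(5)
for _ in range(50):
    x = x + newton_step(J, J @ x_true - J @ x, lam=1e-2)
```

In the imaging problem J is recomputed each iteration from the FE forward solve, and λ is tuned to trade resolution against noise amplification.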

  2. Finite-element modeling of compression and gravity on a population of breast phantoms for multimodality imaging simulation.

    Science.gov (United States)

    Sturgeon, Gregory M; Kiarashi, Nooshin; Lo, Joseph Y; Samei, E; Segars, W P

    2016-05-01

The authors are developing a series of computational breast phantoms based on breast CT data for imaging research. In this work, the authors develop a program that allows a user to alter the phantoms to simulate the effects of gravity and compression of the breast (craniocaudal or mediolateral oblique), making the phantoms applicable to multimodality imaging. This application utilizes a template finite-element (FE) breast model that can be applied to their presegmented voxelized breast phantoms. The FE model is automatically fit to the geometry of a given breast phantom, and the material properties of each element are set based on the segmented voxels contained within the element. The loading and boundary conditions, which include gravity, are then assigned based on a user-defined position and compression. The effect of applying these loads to the breast is computed using a multistage contact analysis in FEBio, a freely available and well-validated FE software package specifically designed for biomedical applications. The resulting deformation of the breast is then applied to a boundary mesh representation of the phantom that can be used for simulating medical images. An efficient script performs the above actions seamlessly; the user only needs to specify which voxelized breast phantom to use, the compressed thickness, and the orientation of the breast. The authors utilized their FE application to simulate compressed states of the breast indicative of mammography and tomosynthesis. Gravity and compression were simulated on example phantoms and used to generate mammograms in the craniocaudal or mediolateral oblique views. The simulated mammograms show a high degree of realism, illustrating the utility of the FE method in simulating imaging data of repositioned and compressed breasts. The breast phantoms and the compression software can become a useful resource to the breast imaging research community. These phantoms can then be used to evaluate and compare imaging
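The step "material properties of each element are set based on the segmented voxels contained within the element" can be illustrated with a simple majority-vote rule. The tissue labels and moduli below are placeholder assumptions, not values from the paper:

```python
from collections import Counter

# Hypothetical label -> elastic modulus table (kPa); illustrative only.
MODULUS_KPA = {"fat": 1.0, "gland": 10.0, "skin": 88.0}

def element_material(voxel_labels):
    """Assign one element the tissue of the majority of its enclosed voxels."""
    label, _ = Counter(voxel_labels).most_common(1)[0]
    return label, MODULUS_KPA[label]

print(element_material(["fat", "fat", "gland", "fat"]))  # -> ('fat', 1.0)
```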

  3. A general approach to flaw simulation in castings by superimposing projections of 3D models onto real X-ray images

    International Nuclear Information System (INIS)

    Hahn, D.; Mery, D.

    2003-01-01

In order to evaluate the sensitivity of defect inspection systems, it is convenient to examine simulated data. This makes it possible to tune the parameters of the inspection method and to test the performance of the system in critical cases. In this paper, a practical method for the simulation of defects in radioscopic images of aluminium castings is presented. The approach simulates only the flaws and not the whole radioscopic image of the object under test. A 3D mesh is used to model a flaw with complex geometry, which is projected and superimposed onto real radioscopic images of a homogeneous object according to the exponential attenuation law for X-rays. The new grey value of a pixel onto which the 3D flaw is projected depends on only four parameters: (a) the grey value of the original X-ray image without the flaw; (b) the linear absorption coefficient of the examined material; (c) the maximal thickness observable in the radioscopic image; and (d) the length of the intersection of the 3D flaw with the modelled X-ray beam that is projected into the pixel. A simulation of a complex flaw modelled as a 3D mesh can be performed at any position in the casting by using the algorithm described in this paper. This allows the evaluation of the performance of defect inspection systems in cases where detection is known to be difficult. In this paper, we show experimental results on real X-ray images of aluminium wheels, in which 3D flaws like blowholes, cracks and inclusions are simulated
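The grey-value update can be sketched from the exponential attenuation law alone. Assuming (this is a simplification, not the paper's exact formula) that the pixel value is proportional to transmitted intensity I = I₀·exp(-μx), a void-type flaw of intersection length d removes material from the beam path, so the pixel brightens by exp(μd), capped at the value corresponding to the thinnest observable section:

```python
import math

def superimpose_flaw(grey, mu, d, grey_max):
    """grey: original pixel value; mu: linear absorption coefficient [1/mm];
    d: flaw/beam intersection length [mm]; grey_max: cap for the maximal
    observable transmission (parameter (c) in the abstract)."""
    return min(grey * math.exp(mu * d), grey_max)

# Example with made-up numbers: mu = 0.1 /mm, a 5 mm blowhole
print(round(superimpose_flaw(100.0, 0.1, 5.0, 255.0), 1))  # -> 164.9
```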

  4. Development of a simplified simulation model for performance characterization of a pixellated CdZnTe multimodality imaging system

    Energy Technology Data Exchange (ETDEWEB)

    Guerra, P; Santos, A [Departamento de IngenierIa Electronica, Universidad Politecnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid (Spain); Darambara, D G [Joint Department of Physics, Royal Marsden NHS Foundation Trust and The Institute of Cancer Research, Fulham Road, London SW3 6JJ (United Kingdom)], E-mail: pguerra@die.um.es

    2008-02-21

Current requirements of molecular imaging lead to the complete integration of complementary modalities in a single hybrid imaging system to correlate function and structure. Among the various existing detector technologies that can be implemented to integrate nuclear modalities (PET and/or single-photon emission computed tomography) with x-rays (CT) and most probably with MR, pixellated wide-bandgap room-temperature semiconductor detectors, such as CdZnTe and/or CdTe, are promising candidates. This paper deals with the development of a simplified simulation model for pixellated semiconductor radiation detectors as a first step towards the performance characterization of a multimodality imaging system based on CdZnTe. In particular, this work presents a simple computational model, based on a 1D approximate solution of the Shockley-Ramo theorem, and its integration into the Geant4 application for tomographic emission (GATE) platform in order to perform accurate and, therefore, improved simulations of pixellated detectors in different configurations with simultaneous cathode and anode pixel readout. The model presented here is successfully validated against an existing detailed finite element simulator, the multi-geometry simulation code, with respect to the charge induced at the anode, taking into consideration interpixel charge sharing and crosstalk, and to the detector charge induction efficiency. As a final point, the model provides estimated energy spectra and time resolution for ⁵⁷Co and ¹⁸F sources obtained with the GATE code after the incorporation of the proposed model.
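The 1D Shockley-Ramo picture can be sketched in a few lines. This is a toy planar-electrode version, not the paper's pixel model: the linear weighting potential φ_w(z) = z/L is the simplest assumption, while a real pixellated anode has a strongly non-linear weighting potential (the small-pixel effect):

```python
# Shockley-Ramo: charge induced on an electrode by a carrier moving from
# z0 to z1 is Q = q * (phi_w(z1) - phi_w(z0)), phi_w = weighting potential.
def induced_charge(q, z0, z1, L):
    phi = lambda z: z / L          # planar weighting potential (assumption)
    return q * (phi(z1) - phi(z0))

# An electron (q = -1 in elementary-charge units) drifting the full detector
# thickness toward the anode induces unit signal-charge magnitude:
print(induced_charge(-1.0, z0=1.0, z1=0.0, L=1.0))  # -> 1.0
```

A carrier trapped mid-drift (z1 between z0 and the electrode) induces only a fraction of the charge, which is exactly the charge-induction-efficiency loss the abstract refers to.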

  5. New developments in simulating X-ray phase contrast imaging

    International Nuclear Information System (INIS)

    Peterzol, A.; Berthier, J.; Duvauchelle, P.; Babot, D.; Ferrero, C.

    2007-01-01

A deterministic algorithm simulating phase contrast (PC) x-ray images for complex 3-dimensional (3D) objects is presented. This algorithm has been implemented in a simulation code named VXI (Virtual X-ray Imaging). The physical model chosen to account for the PC technique is based on the Fresnel-Kirchhoff diffraction theory. The algorithm consists mainly of two parts. The first exploits the VXI ray-tracing approach to compute the object transmission function. The second simulates the PC image due to the wave-front distortion introduced by the sample. In the first part, the use of computer-aided drawing (CAD) models enables simulations to be carried out with complex 3D objects. Unlike the original VXI version, which describes objects via triangular facets, the new code requires a more sophisticated object representation based on Non-Uniform Rational B-Splines (NURBS). As a first step we produce a spatially high-resolution image using a point, monochromatic source and an ideal detector. To simulate the polychromatic case, the intensity image is integrated over the considered x-ray energy spectrum. Then, to account for the spatial resolution properties of the system, the high-resolution image (mono- or polychromatic) is convolved with the total point spread function of the imaging system under consideration. The results supplied by the presented algorithm are examined with the help of some relevant examples. (authors)
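The two-part scheme (transmission function, then diffraction) can be sketched in 1D under simplifying assumptions: the object is reduced to a complex transmission function T(x), and free-space propagation over distance z uses the angular-spectrum form of Fresnel diffraction. The geometry and sample below are made up for illustration:

```python
import numpy as np

def propagate(u, wavelength, z, dx):
    """Fresnel free-space propagation via the angular-spectrum method (1D)."""
    fx = np.fft.fftfreq(u.size, d=dx)
    H = np.exp(-1j * np.pi * wavelength * z * fx**2)  # Fresnel transfer function
    return np.fft.ifft(np.fft.fft(u) * H)

n, dx, lam, z = 1024, 1e-6, 1e-10, 0.5     # 1 um pixels, 0.1 nm X-rays, 0.5 m
x = (np.arange(n) - n // 2) * dx
T = np.exp(-1j * 0.5 * np.exp(-(x / 50e-6)**2))   # weak pure-phase object
u1 = propagate(T, lam, z, dx)
intensity = np.abs(u1)**2                  # PC image: edge enhancement appears
# |H| = 1, so free-space propagation conserves total energy:
print(round(float(np.sum(np.abs(u1)**2) / np.sum(np.abs(T)**2)), 6))  # -> 1.0
```

Integrating such monochromatic intensities over the source spectrum and convolving with the system point spread function reproduces the remaining two steps the abstract describes.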

  6. Simulating Galaxies and Active Galactic Nuclei in the LSST Image Simulation Effort

    NARCIS (Netherlands)

    Pizagno II, Jim; Ahmad, Z.; Bankert, J.; Bard, D.; Connolly, A.; Chang, C.; Gibson, R. R.; Gilmore, K.; Grace, E.; Hannel, M.; Jernigan, J. G.; Jones, L.; Kahn, S. M.; Krughoff, S. K.; Lorenz, S.; Marshall, S.; Shmakova, S. M.; Sylvestri, N.; Todd, N.; Young, M.

We present an extragalactic source catalog, which includes galaxies and Active Galactic Nuclei, that is used for the Large Synoptic Survey Telescope Imaging Simulation effort. The galaxies are taken from the De Lucia et al. (2006) semi-analytic modeling (SAM) of the Millennium Simulation. The LSST

7. Imaging Simulations for the Korean VLBI Network (KVN)

    Directory of Open Access Journals (Sweden)

    Tae-Hyun Jung

    2005-03-01

The Korean VLBI Network (KVN) will open a new field of research in astronomy, geodesy and earth science using three new 21-m radio telescopes, expanding our ability to look at the Universe in the millimeter regime. The imaging capability of radio interferometry depends strongly on the antenna configuration and on the size, declination and shape of the target source. In this paper, imaging simulations are carried out with the KVN system configuration. Five test images were used: a point source, multiple point sources, a uniform sphere at two different sizes relative to the synthesized beam of the KVN, and a Very Large Array (VLA) image of Cygnus A. The declination for the full-time simulation was set to +60 degrees and the observation time range was -6 to +6 hours around transit. Simulations were done at 22 GHz, one of the KVN observing frequencies. All simulations and data reductions were run with the Astronomical Image Processing System (AIPS) software package. The KVN array has a resolution of about 6 mas (milliarcseconds) at 22 GHz; when the model source is approximately the beam size or smaller, the ratio of peak intensity to RMS is about 10000:1 and 5000:1. When the model source is larger than the beam size, this ratio drops to about 115:1 and 34:1, owing to the lack of short baselines and the small number of antennas. We compare the coordinates of the model images with those of the cleaned images; the correspondence is nearly perfect except in the case of the 12-mas uniform sphere. The main astronomical targets for the KVN will therefore be compact sources, for which the KVN will provide excellent astrometric performance.
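The quoted figures of merit are image dynamic ranges: the peak intensity of the cleaned map divided by the RMS of an off-source (residual) region. A minimal illustration, with made-up residual values chosen to reproduce the 10000:1 case:

```python
import math

def dynamic_range(peak, residuals):
    """Peak intensity over off-source RMS, the ratio quoted in the abstract."""
    rms = math.sqrt(sum(r * r for r in residuals) / len(residuals))
    return peak / rms

# A 1 Jy/beam peak over ~0.1 mJy/beam residual noise (illustrative numbers):
print(round(dynamic_range(1.0, [1e-4, -1e-4, 1e-4, -1e-4])))  # -> 10000
```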

  8. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    Science.gov (United States)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

In orthognathic surgery, 3D surgical planning that considers the anteroposterior balance and symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthognathic surgery has been developed to visualize changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating a physical tooth-model manipulation portion, used to determine the optimum occlusal position, with a 3D-CT skeletal image display portion updated simultaneously in real time, the mandibular position and posture that best improve skeletal morphology and the occlusal condition can be determined. The realistic operation of the physical model combined with the virtual 3D image display enabled the construction of a surgical simulation system based on augmented reality.

  9. Construction of multi-functional open modulized Matlab simulation toolbox for imaging ladar system

    Science.gov (United States)

    Wu, Long; Zhao, Yuan; Tang, Meng; He, Jiang; Zhang, Yong

    2011-06-01

Ladar system simulation uses computer models of a ladar system to predict its performance. This paper reviews developments in imaging ladar simulation in domestic and international studies with different application requirements; the LadarSim and FOI-LadarSIM simulation facilities of Utah State University and the Swedish Defence Research Agency are introduced in detail. Domestic research on imaging ladar simulation has so far been limited in scale and non-unified in design, mostly achieving simple function simulation based on the ranging equations of ladar systems. A laser imaging radar simulation with an open and modularized structure is therefore proposed, with unified modules for the ladar system, laser emitter, atmosphere models, target models, signal receiver, parameter settings and system controller. A unified Matlab toolbox and standard control modules have been built with regulated function inputs and outputs and defined communication protocols between hardware modules. As a demonstration, a simulation of an ICCD gain-modulated imaging ladar system observing a space shuttle was performed with the toolbox. The result shows that the models and parameter settings of the Matlab toolbox can simulate the actual detection process precisely. The unified control module and pre-defined parameter settings simplify the simulation of imaging ladar detection, the open structure enables the toolbox to be modified for specialized requests, and the modularization makes simulations flexible.
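The design idea of regulated module inputs and outputs can be sketched outside Matlab. In this Python sketch (module names and numbers are invented placeholders), every module consumes and produces one standardized signal record, so emitter, atmosphere, target and receiver models can be swapped freely by the system controller:

```python
# Each module has the same interface: dict in, dict out.
def emitter(sig):    sig["power"] = 1.0; return sig            # toy source power
def atmosphere(sig): sig["power"] *= 0.5; return sig           # toy path loss
def target(sig):     sig["power"] *= 0.1; return sig           # toy reflectivity
def receiver(sig):   sig["detected"] = sig["power"] > 0.01; return sig

PIPELINE = [emitter, atmosphere, target, receiver]             # system controller

def run(pipeline):
    sig = {}
    for module in pipeline:
        sig = module(sig)
    return sig

print(run(PIPELINE))  # -> {'power': 0.05, 'detected': True}
```

Replacing any one stage (say, a different atmosphere model) requires no change to the others, which is the point of the unified interface.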

  10. Medical imaging informatics simulators: a tutorial.

    Science.gov (United States)

    Huang, H K; Deshpande, Ruchi; Documet, Jorge; Le, Anh H; Lee, Jasper; Ma, Kevin; Liu, Brent J

    2014-05-01

A medical imaging informatics infrastructure (MIII) platform is an organized method of selecting tools and synthesizing data from HIS/RIS/PACS/ePR systems with the aim of developing an imaging-based diagnosis or treatment system. Evaluation and analysis of these systems can be made more efficient by designing and implementing imaging informatics simulators. This tutorial introduces the MIII platform and provides the definition of treatment/diagnosis systems, while primarily focusing on the development of the related simulators. A medical imaging informatics (MII) simulator in this context is defined as a system integration of many selected imaging and data components from the MIII platform and clinical treatment protocols, which can be used to simulate patient workflow and data flow starting from diagnostic procedures to the completion of treatment. In these processes, DICOM and HL7 standards, IHE workflow profiles, and Web-based tools are emphasized. From the information collected in the database of a specific simulator, evidence-based medicine can be hypothesized to choose and integrate optimal clinical decision support components. Other relevant, selected clinical resources in addition to data and tools from the HIS/RIS/PACS and ePR platforms may also be tailored to develop the simulator. These resources can include image content indexing, 3D rendering with visualization, data grid and cloud computing, computer-aided diagnosis (CAD) methods, and specialized image-assisted surgery and radiation therapy technologies. Five simulators will be discussed in this tutorial. The PACS-ePR simulator with image distribution is the cradle of the other simulators. It supplies the necessary PACS-based ingredients and data security for the development of four other simulators: the data grid simulator for molecular imaging, CAD-PACS, the radiation therapy simulator, and the image-assisted surgery simulator. The purpose and benefits of each simulator with respect to its clinical relevance

  11. Monte-Carlo simulations and image reconstruction for novel imaging scenarios in emission tomography

    International Nuclear Information System (INIS)

    Gillam, John E.; Rafecas, Magdalena

    2016-01-01

    Emission imaging incorporates both the development of dedicated devices for data acquisition as well as algorithms for recovering images from that data. Emission tomography is an indirect approach to imaging. The effect of device modification on the final image can be understood through both the way in which data are gathered, using simulation, and the way in which the image is formed from that data, or image reconstruction. When developing novel devices, systems and imaging tasks, accurate simulation and image reconstruction allow performance to be estimated, and in some cases optimized, using computational methods before or during the process of physical construction. However, there are a vast range of approaches, algorithms and pre-existing computational tools that can be exploited and the choices made will affect the accuracy of the in silico results and quality of the reconstructed images. On the one hand, should important physical effects be neglected in either the simulation or reconstruction steps, specific enhancements provided by novel devices may not be represented in the results. On the other hand, over-modeling of device characteristics in either step leads to large computational overheads that can confound timely results. Here, a range of simulation methodologies and toolkits are discussed, as well as reconstruction algorithms that may be employed in emission imaging. The relative advantages and disadvantages of a range of options are highlighted using specific examples from current research scenarios.
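Among the reconstruction algorithms the abstract alludes to, maximum-likelihood expectation maximization (MLEM) is one widely used choice for emission tomography (the authors do not single it out; it serves here as a representative example). Each iteration applies the multiplicative update x ← x · Aᵀ(y / Ax) / Aᵀ1, with A the system matrix that simulation is used to build or validate:

```python
import numpy as np

# Toy 3-detector / 3-voxel system matrix (invented for illustration).
A = np.array([[1.0, 0.2, 0.0],
              [0.0, 0.5, 1.0],
              [0.3, 0.8, 0.2]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true                         # noiseless "measured" projections

x = np.ones(3)                         # uniform, strictly positive start
sens = A.T @ np.ones(A.shape[0])       # sensitivity image A^T 1
for _ in range(1000):
    x *= (A.T @ (y / (A @ x))) / sens  # multiplicative MLEM update

print(np.round(x, 2))
```

The multiplicative form automatically preserves non-negativity, one reason MLEM suits Poisson-distributed emission data.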

  12. Monte-Carlo simulations and image reconstruction for novel imaging scenarios in emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Gillam, John E. [The University of Sydney, Faculty of Health Sciences and The Brain and Mind Centre, Camperdown (Australia); Rafecas, Magdalena, E-mail: rafecas@imt.uni-luebeck.de [University of Lubeck, Institute of Medical Engineering, Ratzeburger Allee 160, 23538 Lübeck (Germany)

    2016-02-11

    Emission imaging incorporates both the development of dedicated devices for data acquisition as well as algorithms for recovering images from that data. Emission tomography is an indirect approach to imaging. The effect of device modification on the final image can be understood through both the way in which data are gathered, using simulation, and the way in which the image is formed from that data, or image reconstruction. When developing novel devices, systems and imaging tasks, accurate simulation and image reconstruction allow performance to be estimated, and in some cases optimized, using computational methods before or during the process of physical construction. However, there are a vast range of approaches, algorithms and pre-existing computational tools that can be exploited and the choices made will affect the accuracy of the in silico results and quality of the reconstructed images. On the one hand, should important physical effects be neglected in either the simulation or reconstruction steps, specific enhancements provided by novel devices may not be represented in the results. On the other hand, over-modeling of device characteristics in either step leads to large computational overheads that can confound timely results. Here, a range of simulation methodologies and toolkits are discussed, as well as reconstruction algorithms that may be employed in emission imaging. The relative advantages and disadvantages of a range of options are highlighted using specific examples from current research scenarios.

  13. Fast simulation of ultrasound images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Nikolov, Svetoslav

    2000-01-01

    , and a whole image can take a full day. Simulating 3D images and 3D flow takes even more time. A 3D image of 64 by 64 lines can take 21 days, which is not practical for iterative work. This paper presents a new fast simulation method based on the Field II program. In imaging the same spatial impulse response...

  14. Simulation of photon and charge transport in X-ray imaging semiconductor sensors

    CERN Document Server

    Nilsson, H E; Hjelm, M; Bertilsson, K

    2002-01-01

    A fully stochastic model for the imaging properties of X-ray silicon pixel detectors is presented. Both integrating and photon counting configurations have been considered, as well as scintillator-coated structures. The model is based on three levels of Monte Carlo simulations; photon transport and absorption using MCNP, full band Monte Carlo simulation of charge transport and system level Monte Carlo simulation of the imaging performance of the detector system. In the case of scintillator-coated detectors, the light scattering in the detector layers has been simulated using a Monte Carlo method. The image resolution was found to be much lower in scintillator-coated systems due to large light spread in thick scintillator layers. A comparison between integrating and photon counting readout methods shows that the image resolution can be slightly enhanced using a photon-counting readout. In addition, the proposed model has been used to study charge-sharing effects on the energy resolution in photon counting dete...

  15. Simulations of multi-contrast x-ray imaging using near-field speckles

    Energy Technology Data Exchange (ETDEWEB)

    Zdora, Marie-Christine [Lehrstuhl für Biomedizinische Physik, Physik-Department & Institut für Medizintechnik, Technische Universität München, 85748 Garching (Germany); Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom and Department of Physics & Astronomy, University College London, London, WC1E 6BT (United Kingdom); Thibault, Pierre [Department of Physics & Astronomy, University College London, London, WC1E 6BT (United Kingdom); Herzen, Julia; Pfeiffer, Franz [Lehrstuhl für Biomedizinische Physik, Physik-Department & Institut für Medizintechnik, Technische Universität München, 85748 Garching (Germany); Zanette, Irene [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE (United Kingdom); Lehrstuhl für Biomedizinische Physik, Physik-Department & Institut für Medizintechnik, Technische Universität München, 85748 Garching (Germany)

    2016-01-28

X-ray dark-field and phase-contrast imaging using near-field speckles is a novel technique that overcomes a limitation inherent in conventional absorption x-ray imaging, namely poor contrast for features of similar density. Speckle-based imaging yields a wealth of information with a simple setup that is tolerant to polychromatic and divergent beams, and with simple data acquisition and analysis procedures. Here, we present simulation software used to model image formation with the speckle-based technique, and we compare simulated results on a phantom sample with experimental synchrotron data. Thorough simulation of a speckle-based imaging experiment will help to better understand and optimise the technique itself.
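The core of speckle-based phase retrieval can be illustrated in 1D: the sample locally shifts the reference speckle pattern, and the shift, recovered here by circular cross-correlation, is proportional to the refraction angle and hence the phase gradient. The pattern and shift below are synthetic stand-ins, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.random(256)            # reference speckle trace (no sample)
shift = 7
sample = np.roll(ref, shift)     # with sample: speckles displaced by 7 pixels

# Circular cross-correlation via FFT; its peak sits at the displacement.
xc = np.fft.ifft(np.fft.fft(sample) * np.conj(np.fft.fft(ref))).real
print(int(np.argmax(xc)))        # -> 7 (recovered displacement, pixels)
```

In the real 2D technique this tracking is done per analysis window, and a reduction of local speckle contrast yields the dark-field signal.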

  16. Simulations of the flipping images and microparameters of molecular orientations in liquids according to the molecule string model

    International Nuclear Information System (INIS)

    Wang Li-Na; Zhao Xing-Yu; Zhang Li-Li; Huang Yi-Neng

    2012-01-01

    The relaxation dynamics of liquids is one of the fundamental problems in liquid physics, and it is also one of the key issues to understand the glass transition mechanism. It will undoubtedly provide enlightenment on understanding and calculating the relaxation dynamics if the molecular orientation flipping images and relevant microparameters of liquids are studied. In this paper, we first give five microparameters to describe the individual molecular string (MS) relaxation based on the dynamical Hamiltonian of the MS model, and then simulate the images of individual MS ensemble, and at the same time calculate the parameters of the equilibrium state. The results show that the main molecular orientation flipping image in liquids (including supercooled liquid) is similar to the random walk. In addition, two pairs of the parameters are equal, and one can be ignored compared with the other. This conclusion will effectively reduce the difficulties in calculating the individual MS relaxation based on the single-molecule orientation flipping rate of the general Glauber type, and the computer simulation time of interaction MS relaxation. Moreover, the conclusion is of reference significance for solving and simulating the multi-state MS model. (condensed matter: structural, mechanical, and thermal properties)

  17. Assessment of COTS IR image simulation tools for ATR development

    Science.gov (United States)

    Seidel, Heiko; Stahl, Christoph; Bjerkeli, Frode; Skaaren-Fystro, Paal

    2005-05-01

    Following the tendency of increased use of imaging sensors in military aircraft, future fighter pilots will need onboard artificial intelligence e.g. ATR for aiding them in image interpretation and target designation. The European Aeronautic Defence and Space Company (EADS) in Germany has developed an advanced method for automatic target recognition (ATR) which is based on adaptive neural networks. This ATR method can assist the crew of military aircraft like the Eurofighter in sensor image monitoring and thereby reduce the workload in the cockpit and increase the mission efficiency. The EADS ATR approach can be adapted for imagery of visual, infrared and SAR sensors because of the training-based classifiers of the ATR method. For the optimal adaptation of these classifiers they have to be trained with appropriate and sufficient image data. The training images must show the target objects from different aspect angles, ranges, environmental conditions, etc. Incomplete training sets lead to a degradation of classifier performance. Additionally, ground truth information i.e. scenario conditions like class type and position of targets is necessary for the optimal adaptation of the ATR method. In Summer 2003, EADS started a cooperation with Kongsberg Defence & Aerospace (KDA) from Norway. The EADS/KDA approach is to provide additional image data sets for training-based ATR through IR image simulation. The joint study aims to investigate the benefits of enhancing incomplete training sets for classifier adaptation by simulated synthetic imagery. EADS/KDA identified the requirements of a commercial-off-the-shelf IR simulation tool capable of delivering appropriate synthetic imagery for ATR development. A market study of available IR simulation tools and suppliers was performed. After that the most promising tool was benchmarked according to several criteria e.g. thermal emission model, sensor model, targets model, non-radiometric image features etc., resulting in a

  18. Modeling digital breast tomosynthesis imaging systems for optimization studies

    Science.gov (United States)

    Lau, Beverly Amy

Digital breast tomosynthesis (DBT) is a new imaging modality for breast imaging. In tomosynthesis, multiple images of the compressed breast are acquired at different angles, and the projection view images are reconstructed to yield images of slices through the breast. One of the main problems to be addressed in the development of DBT is finding the optimal parameter settings to obtain images ideal for detection of cancer. Since it would be unethical to irradiate women multiple times to explore potentially optimal geometries for tomosynthesis, it is ideal to use a computer simulation to generate projection images. Existing tomosynthesis models have modeled scatter and detector without accounting for the oblique angles of incidence that tomosynthesis introduces. Moreover, these models frequently use geometry-specific physical factors measured from real systems, which severely limits the robustness of their algorithms for optimization. The goal of this dissertation was to design the framework for a computer simulation of tomosynthesis that would produce images sensitive to changes in acquisition parameters, so that an optimization study would be feasible. A computer physics simulation of the tomosynthesis system was developed. The x-ray source was modeled as a polychromatic spectrum based on published spectral data, and the inverse-square law was applied. Scatter was applied using a convolution method with angle-dependent scatter point spread functions (sPSFs), followed by scaling using an angle-dependent scatter-to-primary ratio (SPR). Monte Carlo simulations were used to generate sPSFs for a 5-cm breast with a 1-cm air gap. Detector effects were included through geometric propagation of the image onto layers of the detector, which were blurred using depth-dependent detector point response functions (PRFs). Depth-dependent PRFs were calculated every 5 microns through a 200-micron-thick CsI detector using Monte Carlo simulations. Electronic noise was added as Gaussian noise as a
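The convolution-plus-SPR scatter step can be sketched in 1D. The kernel shape and SPR value below are invented placeholders, not the dissertation's Monte Carlo results; the point is the mechanics: convolve the primary image with a scatter PSF, then rescale so the scatter-to-primary ratio matches the target:

```python
import numpy as np

def add_scatter(primary, spsf, spr):
    """Add scatter as primary convolved with an sPSF, scaled to a target SPR."""
    scatter = np.convolve(primary, spsf, mode="same")
    scatter *= spr * primary.sum() / scatter.sum()   # enforce the target SPR
    return primary + scatter

primary = np.zeros(101); primary[50] = 1000.0        # point-like primary signal
spsf = np.exp(-np.abs(np.arange(-10, 11)) / 3.0)     # broad made-up scatter kernel
total = add_scatter(primary, spsf, spr=0.2)
print(round(float(total.sum() / primary.sum()), 3))  # -> 1.2 (primary + 20% scatter)
```

Making both the sPSF and the SPR functions of projection angle, as described above, is what lets the model respond to the oblique incidence that tomosynthesis introduces.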

  19. Software for Simulation of Hyperspectral Images

    Science.gov (United States)

    Richtsmeier, Steven C.; Singer-Berk, Alexander; Bernstein, Lawrence S.

    2002-01-01

    A package of software generates simulated hyperspectral images for use in validating algorithms that generate estimates of Earth-surface spectral reflectance from hyperspectral images acquired by airborne and spaceborne instruments. This software is based on a direct simulation Monte Carlo approach for modeling three-dimensional atmospheric radiative transport as well as surfaces characterized by spatially inhomogeneous bidirectional reflectance distribution functions. In this approach, 'ground truth' is accurately known through input specification of surface and atmospheric properties, and it is practical to consider wide variations of these properties. The software can treat both land and ocean surfaces and the effects of finite clouds with surface shadowing. The spectral/spatial data cubes computed by use of this software can serve both as a substitute for and a supplement to field validation data.

  20. MULTISCALE SPARSE APPEARANCE MODELING AND SIMULATION OF PATHOLOGICAL DEFORMATIONS

    Directory of Open Access Journals (Sweden)

    Rami Zewail

    2017-08-01

Machine learning and statistical modeling techniques have drawn much interest within the medical imaging research community. However, clinically relevant modeling of anatomical structures continues to be a challenging task. This paper presents a novel method for multiscale sparse appearance modeling in medical images, with application to the simulation of pathological deformations in X-ray images of the human spine. The proposed appearance model benefits from the non-linear approximation power of Contourlets and their ability to capture higher-order singularities, achieving a sparse representation while preserving the accuracy of the statistical model. Independent Component Analysis is used to extract statistically independent modes of variation from the sparse Contourlet-based domain. The new model is then used to simulate clinically relevant pathological deformations in radiographic images.

  1. CALIBRATED ULTRA FAST IMAGE SIMULATIONS FOR THE DARK ENERGY SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Bruderer, Claudio; Chang, Chihway; Refregier, Alexandre; Amara, Adam; Bergé, Joel; Gamper, Lukas, E-mail: claudio.bruderer@phys.ethz.ch [Institute for Astronomy, Department of Physics, ETH Zurich, Wolfgang-Pauli-Strasse 27, 8093 Zürich (Switzerland)

    2016-01-20

    Image simulations are becoming increasingly important in understanding the measurement process of the shapes of galaxies for weak lensing and the associated systematic effects. For this purpose we present the first implementation of the Monte Carlo Control Loops (MCCL), a coherent framework for studying systematic effects in weak lensing. It allows us to model and calibrate the shear measurement process using image simulations from the Ultra Fast Image Generator (UFig) and the image analysis software SExtractor. We apply this framework to a subset of the data taken during the Science Verification period (SV) of the Dark Energy Survey (DES). We calibrate the UFig simulations to be statistically consistent with one of the SV images, which covers ∼0.5 square degrees. We then perform tolerance analyses by perturbing six simulation parameters and study their impact on the shear measurement at the one-point level. This allows us to determine the relative importance of different parameters. For spatially constant systematic errors and point-spread function, the calibration of the simulation reaches the weak lensing precision needed for the DES SV survey area. Furthermore, we find a sensitivity of the shear measurement to the intrinsic ellipticity distribution, and an interplay between the magnitude-size and the pixel value diagnostics in constraining the noise model. This work is the first application of the MCCL framework to data and shows how it can be used to methodically study the impact of systematics on the cosmic shear measurement.
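The tolerance analysis described above, perturbing simulation parameters one at a time and recording the induced one-point shear bias, can be sketched with a toy stand-in for the simulate-and-measure loop (the response function, parameter names and numbers below are invented, not the MCCL pipeline):

```python
# Toy "measured shear" as a function of two simulation parameters:
# PSF dilution plus a noise-dependent additive bias (both assumptions).
def measured_shear(psf_fwhm, noise_sigma, g_true=0.03):
    return g_true / (1.0 + psf_fwhm**2) + 1e-3 * noise_sigma

fiducial = dict(psf_fwhm=0.1, noise_sigma=1.0)
g0 = measured_shear(**fiducial)

# Perturb each parameter and record the induced shear bias:
for name, delta in [("psf_fwhm", 0.01), ("noise_sigma", 0.1)]:
    pert = dict(fiducial)
    pert[name] += delta
    print(name, round(measured_shear(**pert) - g0, 6))
```

Ranking the biases against the survey's shear-accuracy requirement is what determines the relative importance of the parameters, as in the six-parameter study above.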

  2. Simulating Dynamic Stall in a 2D VAWT: Modeling strategy, verification and validation with Particle Image Velocimetry data

    International Nuclear Information System (INIS)

    Ferreira, C J Simao; Bijl, H; Bussel, G van; Kuik, G van

    2007-01-01

    The implementation of wind energy conversion systems in the built environment has renewed interest and research in Vertical Axis Wind Turbines (VAWT), which in this application present several advantages over Horizontal Axis Wind Turbines (HAWT). The VAWT has inherently unsteady aerodynamic behavior due to the variation of angle of attack with the angle of rotation, perceived velocity and, consequently, Reynolds number. The phenomenon of dynamic stall is thus an intrinsic effect of the operation of a VAWT at low tip speed ratios, with a significant impact on both loads and power. The complexity of the unsteady aerodynamics of the VAWT makes it extremely attractive to analyze using Computational Fluid Dynamics (CFD) models, in which an approximation of the continuity and momentum equations of the Navier-Stokes set is solved. The complexity of the problem and the need for new design approaches for VAWTs in the built environment have driven the authors to focus their CFD modeling research on: comparing the results of commonly used turbulence models, namely URANS (Spalart-Allmaras and k-ε) and large eddy models (Large Eddy Simulation and Detached Eddy Simulation); verifying the sensitivity of the model to grid refinement (in space and time); and evaluating the suitability of Particle Image Velocimetry (PIV) experimental data for model validation. The 2D model created represents the middle section of a single-bladed VAWT with infinite aspect ratio, and simulates the experimental flow field measurements by Particle Image Velocimetry of Simao Ferreira et al for a single-bladed VAWT. The results show the suitability of the PIV data for validating the model, the need for accurate simulation of the large eddies, and the sensitivity of the model to grid refinement.

  3. Simulation of seagrass bed mapping by satellite images based on the radiative transfer model

    Science.gov (United States)

    Sagawa, Tatsuyuki; Komatsu, Teruhisa

    2015-06-01

    Seagrass and seaweed beds play important roles in coastal marine ecosystems. They are food sources and habitats for many marine organisms, influence the physical, chemical, and biological environment, and are sensitive to human impacts such as reclamation and pollution. Their management and preservation are therefore necessary for a healthy coastal environment. Satellite remote sensing is a useful tool for mapping and monitoring seagrass beds. The efficiency of seagrass mapping, seagrass bed classification in particular, has been evaluated by mapping accuracy using an error matrix. However, mapping accuracies are influenced by coastal environmental conditions such as seawater transparency, bathymetry, and substrate type. Coastal management requires sufficient accuracy and an understanding of mapping limitations for monitoring coastal habitats, including seagrass beds. Previous studies are mainly based on case studies in specific regions and seasons, and the extensive data required to generalise their assessments of classification accuracy have proven difficult to obtain. This study aims to build a simulator based on a radiative transfer model to produce modelled satellite images and assess the visual detectability of seagrass beds under different transparencies and seagrass coverages, and to examine mapping limitations and classification accuracy. Our simulations yielded a model relating water transparency to the depth limits of mapping, and indicated the possibility of seagrass density mapping under certain ideal conditions. The results show that modelled satellite images are useful for evaluating classification accuracy and for establishing remote sensing as a reliable tool for seagrass bed monitoring.
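    The kind of radiative-transfer reasoning described above can be illustrated with a minimal two-flow model in which bottom-reflected radiance decays exponentially with depth. The function below is a standard shallow-water approximation; the albedo and attenuation values are illustrative assumptions, not the paper's calibrated parameters:

    ```python
    import numpy as np

    def water_leaving_reflectance(r_bottom, r_deep, K, z):
        """Reflectance over a bottom of albedo r_bottom at depth z (m); K is the
        effective diffuse attenuation coefficient (1/m), doubled to account for
        the down- and up-welling path."""
        return r_deep + (r_bottom - r_deep) * np.exp(-2.0 * K * z)

    r_sand, r_seagrass, r_deep = 0.30, 0.05, 0.02   # illustrative albedos
    for K in (0.05, 0.2):                           # clear vs. turbid water
        z = np.arange(0.0, 21.0, 5.0)
        contrast = (water_leaving_reflectance(r_sand, r_deep, K, z)
                    - water_leaving_reflectance(r_seagrass, r_deep, K, z))
        print(f"K={K}/m, sand-seagrass contrast at z={z}: {np.round(contrast, 4)}")
    ```

    The contrast between substrates shrinks exponentially with depth and attenuation, which is exactly the mapping depth limit the simulator is built to quantify.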

  4. Simulation of Hyperspectral Images

    Science.gov (United States)

    Richsmeier, Steven C.; Singer-Berk, Alexander; Bernstein, Lawrence S.

    2004-01-01

    A software package generates simulated hyperspectral imagery for use in validating algorithms that generate estimates of Earth-surface spectral reflectance from hyperspectral images acquired by airborne and spaceborne instruments. This software is based on a direct simulation Monte Carlo approach for modeling three-dimensional atmospheric radiative transport, as well as reflections from surfaces characterized by spatially inhomogeneous bidirectional reflectance distribution functions. In this approach, "ground truth" is accurately known through input specification of surface and atmospheric properties, and it is practical to consider wide variations of these properties. The software can treat both land and ocean surfaces, as well as the effects of finite clouds with surface shadowing. The spectral/spatial data cubes computed by use of this software can serve both as a substitute for, and a supplement to, field validation data.

  5. Computer-simulated images of icosahedral, pentagonal and decagonal clusters of atoms

    International Nuclear Information System (INIS)

    Peng JuLin; Bursill, L.A.

    1989-01-01

    The aim of this work was to assess, by computer simulation, the sensitivity of high-resolution electron microscopy (HREM) images for a set of icosahedral and decagonal clusters containing 50-400 atoms. An experimental study of both crystalline and quasi-crystalline alloys of Al(Si)Mn is presented, in which carefully-chosen electron optical conditions were established by computer simulation and then used to obtain high-quality images. It was concluded that while there is a very significant degree of model sensitivity available, direct inversion from image to structure is not a realistic possibility. A reasonable procedure would be to record experimental images of known complex icosahedral alloys in a crystalline phase, then use the computer simulations to identify fingerprint imaging conditions whereby certain structural elements could be identified in images of quasi-crystalline or amorphous specimens. 27 refs., 12 figs., 1 tab

  6. Monte Carlo modeling of human tooth optical coherence tomography imaging

    International Nuclear Information System (INIS)

    Shi, Boya; Meng, Zhuo; Wang, Longzhi; Liu, Tiegen

    2013-01-01

    We present a Monte Carlo model for optical coherence tomography (OCT) imaging of human tooth. The model is implemented by combining the simulation of a Gaussian beam with simulation for photon propagation in a two-layer human tooth model with non-parallel surfaces through a Monte Carlo method. The geometry and the optical parameters of the human tooth model are chosen on the basis of the experimental OCT images. The results show that the simulated OCT images are qualitatively consistent with the experimental ones. Using the model, we demonstrate the following: firstly, two types of photons contribute to the information of morphological features and noise in the OCT image of a human tooth, respectively. Secondly, the critical imaging depth of the tooth model is obtained, and it is found to decrease significantly with increasing mineral loss, simulated as different enamel scattering coefficients. Finally, the best focus position is located below and close to the dental surface by analysis of the effect of focus positions on the OCT signal and critical imaging depth. We anticipate that this modeling will become a powerful and accurate tool for a preliminary numerical study of the OCT technique on diseases of dental hard tissue in human teeth. (paper)

  7. Parametric uncertainty in optical image modeling

    Science.gov (United States)

    Potzick, James; Marx, Egon; Davidson, Mark

    2006-10-01

    Optical photomask feature metrology and wafer exposure process simulation both rely on optical image modeling for accurate results. While it is fair to question the accuracies of the available models, model results also depend on several input parameters describing the object and imaging system. Errors in these parameter values can lead to significant errors in the modeled image. These parameters include wavelength, illumination and objective NA's, magnification, focus, etc. for the optical system, and topography, complex index of refraction n and k, etc. for the object. In this paper each input parameter is varied over a range about its nominal value and the corresponding images simulated. Second order parameter interactions are not explored. Using the scenario of the optical measurement of photomask features, these parametric sensitivities are quantified by calculating the apparent change of the measured linewidth for a small change in the relevant parameter. Then, using reasonable values for the estimated uncertainties of these parameters, the parametric linewidth uncertainties can be calculated and combined to give a lower limit to the linewidth measurement uncertainty for those parameter uncertainties.
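    The procedure described, varying each input parameter about its nominal value, converting the apparent linewidth change into a sensitivity, and combining the parametric uncertainties, can be sketched as follows. The toy linewidth model and all numbers are hypothetical stand-ins for a real optical image simulation:

    ```python
    import numpy as np

    def modeled_linewidth(wavelength_nm, na, focus_um):
        # Toy imaging model, NOT a real optical simulation: linear terms in
        # wavelength and NA, quadratic in focus.
        return 100.0 + 0.05 * (wavelength_nm - 193.0) - 20.0 * (na - 0.9) + 1.5 * focus_um**2

    nominal = {"wavelength_nm": 193.0, "na": 0.9, "focus_um": 0.0}
    uncert  = {"wavelength_nm": 0.1,   "na": 0.005, "focus_um": 0.05}  # assumed 1-sigma
    step    = {"wavelength_nm": 0.01,  "na": 0.001, "focus_um": 0.01}  # finite-diff step

    contrib = {}
    for p in nominal:
        hi = dict(nominal); hi[p] += step[p]
        lo = dict(nominal); lo[p] -= step[p]
        sens = (modeled_linewidth(**hi) - modeled_linewidth(**lo)) / (2 * step[p])
        contrib[p] = sens * uncert[p]          # linewidth error from this parameter

    total = np.sqrt(sum(c**2 for c in contrib.values()))
    print({p: round(c, 4) for p, c in contrib.items()}, "combined (quadrature):", round(total, 4))
    ```

    Combining the per-parameter contributions in quadrature, as here, gives the lower limit on the measurement uncertainty that the abstract refers to.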

  8. Cortical imaging on a head template: a simulation study using a resistor mesh model (RMM).

    Science.gov (United States)

    Chauveau, Nicolas; Franceries, Xavier; Aubry, Florent; Celsis, Pierre; Rigaud, Bernard

    2008-09-01

    The T1 head template model used in Statistical Parametric Mapping Version 2000 (SPM2), was segmented into five layers (scalp, skull, CSF, grey and white matter) and implemented in 2 mm voxels. We designed a resistor mesh model (RMM), based on the finite volume method (FVM) to simulate the electrical properties of this head model along the three axes for each voxel. Then, we introduced four dipoles of high eccentricity (about 0.8) in this RMM, separately and simultaneously, to compute the potentials for two sets of conductivities. We used the direct cortical imaging technique (CIT) to recover the simulated dipoles, using 60 or 107 electrodes and with or without addition of Gaussian white noise (GWN). The use of realistic conductivities gave better CIT results than standard conductivities, lowering the blurring effect on scalp potentials and displaying more accurate position areas when CIT was applied to single dipoles. Simultaneous dipoles were less accurately localized, but good qualitative and stable quantitative results were obtained up to 5% noise level for 107 electrodes and up to 10% noise level for 60 electrodes, showing that a compromise must be found to optimize both the number of electrodes and the noise level. With the RMM defined in 2 mm voxels, the standard 128-electrode cap and 5% noise appears to be the upper limit providing reliable source positions when direct CIT is used. The admittance matrix defining the RMM is easy to modify so as to adapt to different conductivities. The next step will be the adaptation of individual real head T2 images to the RMM template and the introduction of anisotropy using diffusion imaging (DI).

  9. Laser bistatic two-dimensional scattering imaging simulation of lambert cone

    Science.gov (United States)

    Gong, Yanjun; Zhu, Chongyue; Wang, Mingjun; Gong, Lei

    2015-11-01

    This paper deals with laser bistatic two-dimensional scattering imaging simulation of a Lambert cone. Two-dimensional imaging, also called planar imaging, can reflect the shape of the target and its material properties, and is therefore important for target recognition. The expression for the bistatic laser scattering intensity of a Lambert cone is obtained from the laser radar equation. The scattering intensity of a surface element on the target can then be obtained; it is related to the local angle of incidence, the local angle of scattering, and the area of the element on the cone. From the incident direction of the laser, the scattering direction, and the normal of the surface element, the local incidence and scattering angles can be calculated. Through surface integration and the introduction of a rectangular function, we obtain the intensity of each imaging unit on the imaging surface, and hence a bistatic laser two-dimensional scattering imaging simulation model of the Lambert cone. We analyze the effect of distinguishability, incident direction, observation direction and target size on the imaging. The results confirm the correctness of the bistatic laser scattering imaging simulation of the Lambert cone.
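    A minimal facet-level version of the scattering sum described above, in which each surface element contributes in proportion to cos(theta_i), cos(theta_s) and its area, might look as follows. The cone mesh, directions and normalization are illustrative assumptions:

    ```python
    import numpy as np

    def lambert_bistatic_intensity(normals, areas, inc_dir, obs_dir):
        """Sum Lambertian contributions cos(theta_i)*cos(theta_s)*dA over the
        facets that are both illuminated and visible."""
        cos_i = normals @ (-inc_dir)        # cosine of local incidence angle
        cos_s = normals @ obs_dir           # cosine of local scattering angle
        lit = (cos_i > 0) & (cos_s > 0)
        return float(np.sum(cos_i[lit] * cos_s[lit] * areas[lit]))

    # Crude mesh of a cone's side surface: unit normals tilted 30 deg above
    # the horizontal, swept around the axis; total facet area normalized to 1.
    phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    tilt = np.deg2rad(30.0)
    normals = np.stack([np.cos(tilt) * np.cos(phi),
                        np.cos(tilt) * np.sin(phi),
                        np.sin(tilt) * np.ones_like(phi)], axis=1)
    areas = np.full(phi.shape, 1.0 / phi.size)

    inc = np.array([0.0, 0.0, -1.0])        # illumination straight down
    obs = np.array([0.0, 0.0, 1.0])         # observation straight up
    intensity = lambert_bistatic_intensity(normals, areas, inc, obs)
    print(f"relative received intensity: {intensity:.4f}")
    ```

    For this symmetric geometry every facet sees cos(theta_i) = cos(theta_s) = sin(30°) = 0.5, so the summed intensity is 0.25 of the unit area, a useful sanity check before binning facets into imaging units.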

  10. POLARIZATION IMAGING AND SCATTERING MODEL OF CANCEROUS LIVER TISSUES

    Directory of Open Access Journals (Sweden)

    DONGZHI LI

    2013-07-01

    Full Text Available We apply different polarization imaging techniques to cancerous liver tissues, and compare the relative contrasts of difference polarization imaging (DPI), degree of polarization imaging (DOPI) and rotating linear polarization imaging (RLPI). Experimental results show that a number of polarization imaging parameters are capable of differentiating cancerous cells in isotropic liver tissues. To analyze the contrast mechanism of the cancer-sensitive polarization imaging parameters, we propose a scattering model containing two types of spherical scatterers and carry out Monte Carlo simulations based on this bi-component model. Both the experimental and Monte Carlo simulated results show that the RLPI technique can provide a good imaging contrast of cancerous tissues. The bi-component scattering model provides a useful tool to analyze the contrast mechanism of polarization imaging of cancerous tissues.
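    The polarization imaging parameters named above (DPI and DOP) have standard per-pixel definitions in terms of co/cross-polarized intensities and Stokes vectors, sketched below on synthetic values; no tissue model is implied:

    ```python
    import numpy as np

    def degree_of_polarization(S):
        """DOP image: sqrt(S1^2 + S2^2 + S3^2) / S0, computed per pixel."""
        S0, S1, S2, S3 = S
        return np.sqrt(S1**2 + S2**2 + S3**2) / S0

    def difference_polarization(I_par, I_perp):
        """DPI contrast: normalized difference of co- and cross-polarized images."""
        return (I_par - I_perp) / (I_par + I_perp)

    # Two synthetic pixel types: strongly depolarizing (left column) and
    # polarization-maintaining (right column).
    S = np.array([[[1.0, 1.0], [1.0, 1.0]],    # S0
                  [[0.1, 0.8], [0.1, 0.8]],    # S1
                  [[0.0, 0.0], [0.0, 0.0]],    # S2
                  [[0.0, 0.0], [0.0, 0.0]]])   # S3
    dop = degree_of_polarization(S)
    dpi = difference_polarization(np.array([0.55, 0.9]), np.array([0.45, 0.1]))
    print("DOP image:\n", dop)
    print("DPI contrast:", dpi)
    ```

    Differences in how strongly a region depolarizes incident light are what these parameters turn into image contrast.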

  11. MEGACELL: A nanocrystal model construction software for HRTEM multislice simulation

    International Nuclear Information System (INIS)

    Stroppa, Daniel G.; Righetto, Ricardo D.; Montoro, Luciano A.; Ramirez, Antonio J.

    2011-01-01

    Image simulation has an invaluable importance for the accurate analysis of High Resolution Transmission Electron Microscope (HRTEM) results, especially due to its non-linear image formation mechanism. Because the as-obtained images cannot be interpreted in a straightforward fashion, the retrieval of both qualitative and quantitative information from HRTEM micrographs requires an iterative process that includes the simulation of a nanocrystal model and its comparison with experimental images. However, most of the available image simulation software requires atom-by-atom coordinates as input for the calculations, which can be prohibitive for large finite crystals and/or low-symmetry systems and zone axis orientations. This paper presents an open source citation-ware tool named MEGACELL, which was developed to assist in the construction of nanocrystal models. It allows the user to build nanocrystals with virtually any convex polyhedral geometry and to retrieve their atomic positions either as a plain text file or as an output compatible with the EMS (Electron Microscopy Software) input protocol. In addition to the description of this tool's features, some construction examples and its application to scientific studies are presented. These studies show MEGACELL to be a handy tool, which allows easier construction of complex nanocrystal models and improves quantitative information extraction from HRTEM images. -- Highlights: → Software to support the HRTEM image simulation of nanocrystals at actual size. → MEGACELL allows the construction of complex nanocrystal models for multislice image simulation. → Some examples of improved nanocrystalline system characterization are presented, including the analysis of 3D morphology and growth behavior.
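    The core construction that such a tool automates, filling a convex polyhedron with lattice sites, can be sketched as half-space clipping of a bulk lattice. The FCC lattice and cubic facets below are illustrative choices and do not reflect MEGACELL's actual input format:

    ```python
    import numpy as np

    def fcc_positions(a, n_cells):
        """FCC lattice points for an n_cells^3 block with lattice constant a."""
        basis = np.array([[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]])
        cells = np.stack(np.meshgrid(*[np.arange(n_cells)] * 3, indexing="ij"),
                         axis=-1).reshape(-1, 3)
        return a * (cells[:, None, :] + basis[None, :, :]).reshape(-1, 3)

    def clip_to_polyhedron(points, facets):
        """Keep points satisfying n . r <= d for every (n, d) half-space facet."""
        keep = np.ones(len(points), dtype=bool)
        for n, d in facets:
            keep &= points @ np.asarray(n, dtype=float) <= d
        return points[keep]

    atoms = fcc_positions(a=4.05, n_cells=6)     # ~Al lattice constant, in angstrom
    cube = [((1, 0, 0), 12.0), ((-1, 0, 0), 0.0),
            ((0, 1, 0), 12.0), ((0, -1, 0), 0.0),
            ((0, 0, 1), 12.0), ((0, 0, -1), 0.0)]
    crystal = clip_to_polyhedron(atoms, cube)
    print(len(atoms), "->", len(crystal), "atoms after clipping")
    ```

    Replacing the cube facets with other half-space sets yields arbitrary convex polyhedral morphologies, and the surviving coordinates can be written out for a multislice calculation.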

  12. A computer code to simulate X-ray imaging techniques

    International Nuclear Information System (INIS)

    Duvauchelle, Philippe; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-01-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests
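    The two ingredients named in the abstract, the deterministic X-ray attenuation law along a ray and a CNR prediction from the Poisson variance of each pixel, can be sketched as follows; the materials, thicknesses and counts are illustrative assumptions, not taken from the described code:

    ```python
    import numpy as np

    def transmitted_intensity(I0, segments):
        """Beer-Lambert law along one ray: I = I0 * exp(-sum(mu_i * l_i)),
        where segments is a list of (mu in 1/cm, path length in cm) pairs."""
        return I0 * np.exp(-sum(mu * l for mu, l in segments))

    I0 = 1e5                                                      # unattenuated counts/pixel
    background = transmitted_intensity(I0, [(0.5, 2.0)])          # 2 cm of material
    defect = transmitted_intensity(I0, [(0.5, 1.9), (0.0, 0.1)])  # same ray crossing a 1 mm void

    # The deterministic image has no photon noise, but the Poisson variance of
    # each pixel can be predicted as the mean count itself, giving a CNR value.
    cnr = abs(defect - background) / np.sqrt(background)
    print(f"background={background:.1f}, defect={defect:.1f}, CNR={cnr:.2f}")
    ```

    Repeating this per detector pixel yields the CNR maps used to predict defect detectability before any experiment is run.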

  13. A computer code to simulate X-ray imaging techniques

    Energy Technology Data Exchange (ETDEWEB)

    Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-09-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.

  14. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Directory of Open Access Journals (Sweden)

    Saeed Seyyedi

    2013-01-01

    Full Text Available Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for detecting breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections; iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed, and recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.

  15. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    Science.gov (United States)

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for detecting breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections; iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed, and recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
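    As a sketch of the iterative reconstruction named above, here is a Kaczmarz-style ART update applied to a tiny linear system A x = b (no TV term and no real DBT geometry; the 2x2 "image" and ray set are illustrative):

    ```python
    import numpy as np

    def art(A, b, n_sweeps=500, relax=1.0):
        """Kaczmarz-style ART: cyclically project the estimate onto the
        hyperplane of each measurement row i, x += relax*(b_i - a_i.x)/|a_i|^2 * a_i."""
        x = np.zeros(A.shape[1])
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                a = A[i]
                x += relax * (b[i] - a @ x) / (a @ a) * a
        return x

    # 2x2 "image" probed by six rays: row sums, column sums, diagonal sums.
    x_true = np.array([1.0, 0.0, 0.5, 0.25])
    A = np.array([[1, 1, 0, 0],
                  [0, 0, 1, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 0, 1],
                  [0, 1, 1, 0]], dtype=float)
    b = A @ x_true
    x_rec = art(A, b)
    print("reconstruction:", np.round(x_rec, 4))
    ```

    With only the four row/column sums the system is underdetermined, which is why the limited-angle DBT problem motivates regularized variants such as ART+TV; the two diagonal rays here are what make this toy system uniquely solvable.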

  16. SU-E-J-82: Intra-Fraction Proton Beam-Range Verification with PET Imaging: Feasibility Studies with Monte Carlo Simulations and Statistical Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Lou, K [U.T M.D. Anderson Cancer Center, Houston, TX (United States); Rice University, Houston, TX (United States); Mirkovic, D; Sun, X; Zhu, X; Poenisch, F; Grosshans, D; Shao, Y [U.T M.D. Anderson Cancer Center, Houston, TX (United States); Clark, J [Rice University, Houston, TX (United States)

    2014-06-01

    Purpose: To study the feasibility of intra-fraction proton beam-range verification with PET imaging. Methods: Two homogeneous cylindrical PMMA phantoms (290 mm axial length; 38 mm and 200 mm diameter, respectively) were studied using PET imaging: the small phantom with a mouse-sized PET scanner (61 mm diameter field of view (FOV)) and the larger phantom with a human brain-sized PET scanner (300 mm FOV). Monte Carlo (MC) simulations (MCNPX and GATE) were used to simulate 179.2 MeV proton pencil beams irradiating the two phantoms and their imaging by the two PET systems. A total of 50 simulations were conducted to generate 50 positron activity distributions and, correspondingly, 50 measured activity-ranges. The accuracy and precision of these activity-ranges were calculated under different conditions (including count statistics and other factors, such as crystal cross-section). Separately from the MC simulations, an activity distribution measured from a simulated PET image was modeled as a noiseless positron activity distribution corrupted by Poisson counting noise. The results from these two approaches were compared to assess the impact of count statistics on the accuracy and precision of activity-range calculations. Results: MC simulations show that the accuracy and precision of an activity-range are dominated by the number (N) of coincidence events in the reconstructed image. They improve with N, the uncertainty decreasing in proportion to 1/sqrt(N), which can be understood from the statistical modeling. MC simulations also indicate that the coincidence events acquired within the first 60 seconds with 10{sup 9} protons (small phantom) and 10{sup 10} protons (large phantom) are sufficient to achieve both sub-millimeter accuracy and precision. Conclusion: Under the current MC simulation conditions, the initial study indicates that the accuracy and precision of beam-range verification are dominated by count statistics, and intra-fraction PET image-based beam-range verification is
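    The statistical model described, a noiseless activity profile corrupted by Poisson counting noise, can be used to check the 1/sqrt(N) behavior numerically. The profile shape, range definition and count levels below are simplified stand-ins, not the study's actual distributions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    z = np.linspace(0.0, 200.0, 401)                     # depth (mm), 0.5 mm bins
    profile = 1.0 / (1.0 + np.exp((z - 150.0) / 2.0))    # plateau falling off near 150 mm

    def measured_range(total_counts):
        """Depth where a smoothed, Poisson-noisy copy of the profile last
        exceeds half of its plateau level (a crude activity-range estimator)."""
        expected = profile / profile.sum() * total_counts
        noisy = rng.poisson(expected).astype(float)
        smooth = np.convolve(noisy, np.ones(15) / 15.0, mode="same")
        half = 0.5 * smooth[50:150].mean()               # plateau estimate, away from edges
        return z[np.nonzero(smooth > half)[0].max()]

    spreads = {}
    for n_counts in (1e4, 1e6):
        ranges = [measured_range(n_counts) for _ in range(50)]
        spreads[n_counts] = np.std(ranges)
        print(f"counts={n_counts:.0e}: range spread (std) = {spreads[n_counts]:.3f} mm")
    ```

    Raising the count level by a factor of 100 should shrink the range spread by roughly a factor of 10, the 1/sqrt(N) scaling the abstract attributes to counting statistics.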

  17. Optical Imaging and Radiometric Modeling and Simulation

    Science.gov (United States)

    Ha, Kong Q.; Fitzmaurice, Michael W.; Moiser, Gary E.; Howard, Joseph M.; Le, Chi M.

    2010-01-01

    OPTOOL software is a general-purpose optical systems analysis tool that was developed to offer a solution to problems associated with computational programs written for the James Webb Space Telescope optical system. It integrates existing routines into coherent processes, and provides a structure with reusable capabilities that allow additional processes to be quickly developed and integrated. It has an extensive graphical user interface, which makes the tool more intuitive and friendly. OPTOOL is implemented using MATLAB with a Fourier optics-based approach for point spread function (PSF) calculations. It features parametric and Monte Carlo simulation capabilities, and uses a direct integration calculation to permit high spatial sampling of the PSF. Exit pupil optical path difference (OPD) maps can be generated using combinations of Zernike polynomials or shaped power spectral densities. The graphical user interface allows rapid creation of arbitrary pupil geometries, and entry of all other modeling parameters to support basic imaging and radiometric analyses. OPTOOL provides the capability to generate wavefront-error (WFE) maps for arbitrary grid sizes. These maps are 2D arrays containing digital sampled versions of functions ranging from Zernike polynomials to combination of sinusoidal wave functions in 2D, to functions generated from a spatial frequency power spectral distribution (PSD). It also can generate optical transfer functions (OTFs), which are incorporated into the PSF calculation. The user can specify radiometrics for the target and sky background, and key performance parameters for the instrument's focal plane array (FPA). This radiometric and detector model setup is fairly extensive, and includes parameters such as zodiacal background, thermal emission noise, read noise, and dark current. The setup also includes target spectral energy distribution as a function of wavelength for polychromatic sources, detector pixel size, and the FPA's charge
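    The Fourier-optics PSF calculation at the heart of such a tool can be sketched in a few lines: the PSF is the squared magnitude of the Fourier transform of the pupil function with a phase term from the OPD map. The circular pupil and defocus-shaped OPD below are illustrative assumptions; no JWST geometry is implied:

    ```python
    import numpy as np

    n = 256
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] / (n // 4)  # pupil spans half the grid
    r2 = x**2 + y**2
    pupil = (r2 <= 1.0).astype(float)

    # Defocus-shaped OPD map, expressed in waves (the Zernike term 2r^2 - 1).
    opd = 0.1 * (2.0 * r2 - 1.0) * pupil

    def psf_from_pupil(pupil, opd_waves):
        """PSF = |FFT(pupil * exp(2*pi*i*OPD/lambda))|^2, normalized to unit sum."""
        field = pupil * np.exp(2j * np.pi * opd_waves)
        psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))**2
        return psf / psf.sum()

    psf_perfect = psf_from_pupil(pupil, np.zeros_like(opd))
    psf_defocus = psf_from_pupil(pupil, opd)
    strehl = psf_defocus.max() / psf_perfect.max()
    print(f"Strehl ratio with the 0.1-wave defocus term: {strehl:.3f}")
    ```

    Swapping in other Zernike combinations or a PSD-generated WFE map for `opd` reproduces the kind of parametric PSF studies the abstract describes; the grid oversamples the pupil by a factor of two so the PSF is adequately sampled.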

  18. Application of image simulation in weapon system development

    CSIR Research Space (South Africa)

    Willers, CJ

    2007-09-01

    Full Text Available systems. Index Terms: image simulation, scene modelling, weapon evaluation, infrared. I. INTRODUCTION. Simulation is used increasingly to support military system development throughout all the product life cycle phases, from concept analysis... the theoretical models. The signature [figure residue removed: plot of atmospheric transmittance vs. wavelength (0-14 μm) for a 10 000 m path; sub-arctic summer, 14 °C ambient, 75% RH, Navy maritime aerosol, 23 km visibility; very high humidity, 35 °C...]

  19. GPU-Based Simulation of Ultrasound Imaging Artifacts for Cryosurgery Training

    Science.gov (United States)

    Keelan, Robert; Shimada, Kenji

    2016-01-01

    This study presents an efficient computational technique for the simulation of ultrasound imaging artifacts associated with cryosurgery based on nonlinear ray tracing. This study is part of an ongoing effort to develop computerized training tools for cryosurgery, with prostate cryosurgery as a development model. The capability of performing virtual cryosurgical procedures on a variety of test cases is essential for effective surgical training. Simulated ultrasound imaging artifacts include reverberation and reflection of the cryoprobes in the unfrozen tissue, reflections caused by the freezing front, shadowing caused by the frozen region, and tissue property changes in repeated freeze–thaw cycles procedures. The simulated artifacts appear to preserve the key features observed in a clinical setting. This study displays an example of how training may benefit from toggling between the undisturbed ultrasound image, the simulated temperature field, the simulated imaging artifacts, and an augmented hybrid presentation of the temperature field superimposed on the ultrasound image. The proposed method is demonstrated on a graphic processing unit at 100 frames per second, on a mid-range personal workstation, at two orders of magnitude faster than a typical cryoprocedure. This performance is based on computation with C++ accelerated massive parallelism and its interoperability with the DirectX-rendering application programming interface. PMID:26818026

  20. Intelligent medical image processing by simulated annealing

    International Nuclear Information System (INIS)

    Ohyama, Nagaaki

    1992-01-01

    Image processing is widely used in the medical field and has already become very important, especially for image reconstruction. In this paper, it is shown that image processing can be classified into four categories: passive, active, intelligent and visual image processing. These four classes are first explained through several examples, which show that passive image processing does not give better results than the others. Intelligent image processing is then addressed, and the simulated annealing method is introduced. Owing to the flexibility of simulated annealing, formulated intelligence can easily be introduced into an image reconstruction problem. As a practical example, 3D blood vessel reconstruction from a small number of projections, insufficient for conventional methods to give a good reconstruction, is proposed, and computer simulation clearly shows the effectiveness of the simulated annealing method. Prior to the conclusion, it is pointed out that medical file systems such as IS and C (Image Save and Carry) have potential for formulating knowledge, which is indispensable for intelligent image processing. This paper concludes by summarizing the advantages of simulated annealing. (author)
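    A bare-bones version of the simulated annealing idea, minimizing a data-fidelity term plus a smoothness prior (the "formulated intelligence") by Metropolis-accepted pixel flips under a cooling schedule, is sketched below on a toy binary image; the energy function and all parameters are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    truth = np.zeros((16, 16))
    truth[4:12, 4:12] = 1.0                        # a square "vessel" region
    noisy = truth + rng.normal(0.0, 0.4, truth.shape)

    def energy(img):
        """Data fidelity to the noisy observation plus a smoothness prior."""
        data = np.sum((img - noisy) ** 2)
        smooth = np.sum(np.abs(np.diff(img, axis=0))) + np.sum(np.abs(np.diff(img, axis=1)))
        return data + 0.8 * smooth

    start = (noisy > 0.5).astype(float)            # naive thresholded reconstruction
    img = start.copy()
    T = 1.0
    for sweep in range(60):
        for _ in range(img.size):
            i, j = rng.integers(16, size=2)
            e0 = energy(img)
            img[i, j] = 1.0 - img[i, j]            # propose a single-pixel flip
            dE = energy(img) - e0
            if dE > 0 and rng.random() >= np.exp(-dE / T):
                img[i, j] = 1.0 - img[i, j]        # reject: undo the flip
        T *= 0.9                                   # geometric cooling schedule

    print("pixel errors: start", int(np.sum(start != truth)),
          "-> annealed", int(np.sum(img != truth)))
    ```

    Accepting some uphill moves at high temperature lets the search escape poor local minima that a purely greedy reconstruction would get stuck in, which is the flexibility the abstract highlights.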

  1. SIMULATION OF SHIP GENERATED TURBULENT AND VORTICAL WAKE IMAGING BY SAR

    Institute of Scientific and Technical Information of China (English)

    Wang Aiming; Zhu Minhui

    2004-01-01

    Synthetic Aperture Radar (SAR) imaging of ocean surface features is studied through simulation of the turbulent and vortical wakes generated by a moving ship and of SAR imaging of these wakes. The turbulent wake, which damps ocean surface capillary waves, may be partially responsible for the suppression of surface waves near the ship track. The vortex pair, which generates a change in the lateral flow field behind the ship, may be partially responsible for an enhancement of the waves near the edges of the smooth area. These hydrodynamic phenomena, as well as the changes in radar backscatter generated by the turbulence and vortex, are simulated. An SAR imaging model is then applied to these ocean surface features to produce SAR images. Comparison of the simulated SAR images of two ships shows that the wake features differ with ship parameters.

  2. Modelling of AlAs/GaAs interfacial structures using high-angle annular dark field (HAADF) image simulations.

    Science.gov (United States)

    Robb, Paul D; Finnie, Michael; Craven, Alan J

    2012-07-01

    High angle annular dark field (HAADF) image simulations were performed on a series of AlAs/GaAs interfacial models using the frozen-phonon multislice method. Three general types of models were considered: perfect, vicinal/sawtooth and diffusion. These were chosen to demonstrate how HAADF image measurements are influenced by different interfacial structures in this technologically important III-V semiconductor system. For each model, interfacial sharpness was calculated as a function of depth and compared to aberration-corrected HAADF experiments on two types of AlAs/GaAs interfaces. The results show that, for complex interfacial structures, the sharpness measured from HAADF imaging changes in a complicated manner with thickness. For vicinal structures, the type of material that the probe projects through first was revealed to have a significant effect on the measured sharpness. An increase in the vicinal angle was also shown to generate a wider interface in the random step model. The Moison diffusion model produced an increase in interface width with depth that closely matched the experimental results for the AlAs-on-GaAs interface. In contrast, the interface width decreased as a function of depth in the linear diffusion model. Only in the case of the perfect model was it possible to ascertain the underlying structure directly from HAADF image analysis. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Two-dimensional pixel image lag simulation and optimization in a 4-T CMOS image sensor

    Energy Technology Data Exchange (ETDEWEB)

    Yu Junting; Li Binqiao; Yu Pingping; Xu Jiangtao [School of Electronics Information Engineering, Tianjin University, Tianjin 300072 (China); Mou Cun, E-mail: xujiangtao@tju.edu.c [Logistics Management Office, Hebei University of Technology, Tianjin 300130 (China)

    2010-09-15

    Pixel image lag in a 4-T CMOS image sensor is analyzed and simulated in a two-dimensional model. Strategies for reducing image lag are discussed in terms of transfer gate channel threshold-voltage doping adjustment, PPD n-type doping dose/implant tilt adjustment, and transfer gate operation voltage adjustment for signal electron transfer. With the computer analysis tool ISE-TCAD, simulation results show that minimum image lag can be obtained at a pinned photodiode n-type doping dose of 7.0 × 10^12 cm^-2, an implant tilt of -2°, a transfer gate channel doping dose of 3.0 × 10^12 cm^-2 and an operation voltage of 3.4 V. The conclusions of this theoretical analysis can serve as a guideline for pixel design to improve the performance of 4-T CMOS image sensors. (semiconductor devices)

  4. Modeling and simulation of gamma camera

    International Nuclear Information System (INIS)

    Singh, B.; Kataria, S.K.; Samuel, A.M.

    2002-08-01

    Simulation techniques play a vital role in the design of sophisticated instruments and in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from external counting of a gamma-emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position-sensitive detector yields the coordinates of the gamma-ray interaction with the detector, which are used to estimate the point of gamma-ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of algorithms for coordinate computing, estimation of the point of emission, generation of the image and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulation of this processing leads to an understanding of the basic camera design problems. This report describes a PC-based package for the design and simulation of a gamma camera, along with options for simulating data acquisition and quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various sizes of crystal detector, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for computation of coordinates and spatial distortion removal are allowed, in addition to simulation of the energy correction circuit. The user can simulate different static, dynamic, MUGA and SPECT studies. The acquired/simulated data are processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances. Also the variations in performance parameters can be assessed due to the induced
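    The coordinate-computing step mentioned in this record is classically done with Anger logic, a signal-weighted centroid over the photomultiplier outputs. A hedged sketch (the PMT layout and signal values are hypothetical, and this is not claimed to be SIMCAM's exact algorithm):

```python
def anger_position(pmt_xy, pmt_signals):
    """Signal-weighted centroid of photomultiplier outputs (Anger logic)."""
    total = sum(pmt_signals)
    x = sum(px * s for (px, _), s in zip(pmt_xy, pmt_signals)) / total
    y = sum(py * s for (_, py), s in zip(pmt_xy, pmt_signals)) / total
    return x, y

# three hypothetical PMTs; the brightest one pulls the estimate toward it
pos = anger_position([(-1.0, 0.0), (1.0, 0.0), (0.0, 1.0)], [1.0, 1.0, 2.0])
# pos == (0.0, 0.5)
```

    Real cameras follow this with energy correction and spatial-distortion removal, as the abstract notes.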

  5. Simulation of imaging in tapping-mode atomic-force microscopy: a comparison amongst a variety of approaches

    Energy Technology Data Exchange (ETDEWEB)

    Pishkenari, H N; Mahboobi, S H; Meghdari, A, E-mail: mahboobi@sharif.edu [Center of Excellence in Design, Robotics and Automation (CEDRA), School of Mechanical Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of)

    2011-02-23

    Models capable of accurately simulating microcantilever dynamics coupled with complex tip-sample interactions are essential for interpreting and predicting the imaging results of amplitude modulation or tapping-mode atomic-force microscopy (AM-AFM or TM-AFM). In this paper, four approaches based on combinations of lumped and finite element methods for modelling the cantilever dynamics, and van der Waals and molecular dynamics for modelling the tip-sample interactions, are used to simulate precise imaging by AM-AFM. Based on the simulated imaging and force determination, the efficiency of the different modelling schemes is evaluated. This comparison considers their agreement with the realistic behaviour of AM-AFM in imaging of nanoscale features. In the conducted simulations, a diamond tip is used to scan a C60 molecule adsorbed on a graphite substrate. The effects of amplitude set-point, cantilever stiffness and quality factor on the accuracy of the different modelling approaches are studied.
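    The simplest of the model combinations named above, a lumped (point-mass) cantilever with a van der Waals tip-sample force, can be sketched as a driven, damped oscillator integrated in time. All parameter values below are illustrative assumptions, not the paper's:

```python
import math

# Illustrative parameters for a stiff tapping-mode lever (assumed values)
k, Q, f0 = 40.0, 400.0, 300e3          # stiffness [N/m], quality factor, resonance [Hz]
w0 = 2.0 * math.pi * f0
m = k / w0 ** 2                         # effective point mass
H, R, a0 = 1e-19, 10e-9, 0.165e-9       # Hamaker constant, tip radius, cutoff distance
zc = 20e-9                              # rest tip-sample separation [m]
F_drive = k * 1e-9 / Q                  # drive force giving ~1 nm free amplitude

def f_ts(d):
    """Attractive van der Waals sphere-plane tip-sample force (non-contact)."""
    return -H * R / (6.0 * max(d, a0) ** 2)

# semi-implicit Euler time stepping, 200 steps per oscillation cycle
dt = 1.0 / (f0 * 200)
z = v = 0.0
z_min = z_max = 0.0
for i in range(200 * 200):              # 200 drive cycles
    t = i * dt
    a = (-k * z - (m * w0 / Q) * v + F_drive * math.cos(w0 * t)
         + f_ts(zc + z)) / m
    v += a * dt
    z += v * dt
    if i > 150 * 200:                   # record extremes once near steady state
        z_min, z_max = min(z_min, z), max(z_max, z)

amplitude = (z_max - z_min) / 2.0       # oscillation amplitude [m]
```

    With a high Q the envelope is still building after 200 cycles, which is itself one of the practical difficulties of tapping-mode simulation that finer schemes must handle.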

  6. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    Science.gov (United States)

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.
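    The statistics-based side of such atom-counting methods models a histogram of per-column scattered intensities as a Gaussian mixture, one component per atom count. A minimal hand-rolled 1D EM fit on invented data (not the authors' estimator, which also folds in simulation priors):

```python
import math

def fit_two_gaussians(data, iters=50):
    """EM for a two-component 1D Gaussian mixture with a shared variance."""
    mu = [min(data), max(data)]                   # spread-out initialisation
    w, var = [0.5, 0.5], 1.0
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var)) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: update weights, means and the shared variance
        n = [sum(r[k] for r in resp) for k in range(2)]
        w = [n[k] / len(data) for k in range(2)]
        mu = [sum(r[k] * x for r, x in zip(resp, data)) / n[k] for k in range(2)]
        var = sum(r[k] * (x - mu[k]) ** 2
                  for r, x in zip(resp, data) for k in range(2)) / len(data)
    return mu, var

# two well-separated intensity clusters, as for columns of 1 vs 2 atoms
data = [0.9, 1.0, 1.1, 1.05, 0.95, 2.9, 3.0, 3.1, 3.05, 2.95]
mu, var = fit_two_gaussians(data)
```

    The hybrid method described above constrains where these component means may lie using image simulations, which is what stabilises the fit at low dose.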

  7. The Application of the Technology of 3D Satellite Cloud Imaging in Virtual Reality Simulation

    Directory of Open Access Journals (Sweden)

    Xiao-fang Xie

    2007-05-01

    Using satellite cloud images to simulate clouds is one of the new visual simulation technologies in Virtual Reality (VR). Taking the original data of satellite cloud images as the source, this paper describes the technology of 3D satellite cloud imaging through coordinate transformation and projection, the creation of a DEM (Digital Elevation Model) of the cloud image, and 3D simulation. A Mercator projection was introduced to create the cloud-image DEM, solutions of geodetic problems were introduced to calculate distances, and the exterior ballistics of rockets was introduced to obtain cloud elevations. For demonstration, we report a computer program that simulates 3D satellite cloud images.
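    The Mercator step in the record above maps latitude/longitude to planar coordinates. A standard spherical-Earth sketch (the Earth-radius value is an assumption of this sketch, not taken from the paper):

```python
import math

R_EARTH = 6371000.0  # mean Earth radius in metres (spherical approximation)

def mercator(lat_deg, lon_deg):
    """Spherical Mercator: x from longitude, y = R ln tan(pi/4 + lat/2)."""
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)   # note: diverges as latitude approaches +/-90
    x = R_EARTH * lam
    y = R_EARTH * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y

x, y = mercator(0.0, 90.0)   # a point on the equator
# y is ~0 on the equator; x is a quarter of the Earth's circumference
```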

  8. Hybrid statistics-simulations based method for atom-counting from ADF STEM images

    Energy Technology Data Exchange (ETDEWEB)

    De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)

    2017-06-15

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. - Highlights: • A hybrid method for atom-counting from ADF STEM images is introduced. • Image simulations are incorporated into a statistical framework in a reliable manner. • Limits of the existing methods for atom-counting are far exceeded. • Reliable counting results from an experimental low dose image are obtained. • Progress towards reliable quantitative analysis of beam-sensitive materials is made.

  9. Medical image archive node simulation and architecture

    Science.gov (United States)

    Chiang, Ted T.; Tang, Yau-Kuo

    1996-05-01

    It is a well-known fact that managed care and new treatment technologies are revolutionizing the health care provider world. Community Health Information Network and Computer-based Patient Record projects are underway throughout the United States. More and more hospitals are installing digital, 'filmless' radiology (and other imagery) systems. These generate a staggering amount of information around the clock. For example, a typical 500-bed hospital might accumulate more than 5 terabytes of image data over a period of 30 years for conventional x-ray images and digital images such as Magnetic Resonance Imaging and Computed Tomography images. With several hospitals contributing to the archive, the storage required will be in the hundreds of terabytes. Systems for reliable, secure, and inexpensive storage and retrieval of digital medical information do not exist today. In this paper, we present a Medical Image Archive and Distribution Service (MIADS) concept. MIADS is a system shared by individual and community hospitals, laboratories, and doctors' offices that need to store and retrieve medical images. Due to the large volume and complexity of the data, as well as the diversified user access requirements, implementation of the MIADS will be a complex procedure. One of the key challenges in implementing a MIADS is to select a cost-effective, scalable system architecture that meets the ingest/retrieval performance requirements. We have performed an in-depth system engineering study, and developed a sophisticated simulation model to address this key challenge. This paper describes the overall system architecture based on our system engineering study and simulation results. In particular, we emphasize system scalability and upgradability issues. Furthermore, we discuss our simulation results in detail. The simulations study the ingest/retrieval performance requirements based on different system configurations and architectures for variables such as workload, tape
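    An ingest/retrieval performance study of the kind described reduces, at its simplest, to a discrete-event queueing model of requests competing for a pool of tape drives. A toy sketch, with all rates and drive counts invented for illustration:

```python
import heapq
import random

def simulate_archive(n_drives=4, n_requests=2000, mean_interarrival=30.0,
                     mean_service=100.0, seed=1):
    """Mean wait time for requests served FIFO by a pool of tape drives."""
    rng = random.Random(seed)
    drive_free = [0.0] * n_drives        # time at which each drive becomes free
    heapq.heapify(drive_free)
    t, total_wait = 0.0, 0.0
    for _ in range(n_requests):
        t += rng.expovariate(1.0 / mean_interarrival)   # Poisson arrivals
        free_at = heapq.heappop(drive_free)             # earliest-free drive
        start = max(t, free_at)
        total_wait += start - t                         # queueing delay
        heapq.heappush(drive_free, start + rng.expovariate(1.0 / mean_service))
    return total_wait / n_requests

avg_wait = simulate_archive()
```

    Sweeping the drive count and request rates in such a loop is the cheap first pass before a detailed architecture-level simulation.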

  10. Featured Image: Simulating Planetary Gaps

    Science.gov (United States)

    Kohler, Susanna

    2017-03-01

    The authors' model of how the above disk would look as we observe it in a scattered-light image. The morphology of the gap can be used to estimate the mass of the planet that caused it. [Dong & Fung 2017] The above image from a computer simulation reveals the dust structure of a protoplanetary disk (with the star obscured in the center) as a newly formed planet orbits within it. A recent study by Ruobing Dong (Steward Observatory, University of Arizona) and Jeffrey Fung (University of California, Berkeley) examines how we can determine the mass of such a planet based on our observations of the gap that the planet opens in the disk as it orbits. The authors' models help us to better understand how our observations of gaps might change if the disk is inclined relative to our line of sight, and how we can still constrain the mass of the gap-opening planet and the viscosity of the disk from the scattered-light images we have recently begun to obtain of distant protoplanetary disks. For more information, check out the paper below! Citation: Ruobing Dong and Jeffrey Fung 2017 ApJ 835 146. doi:10.3847/1538-4357/835/2/146

  11. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state of the art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from remodeling of bone tissue under mechanical stimulus, to optimizing the performance of sports equipment, through patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling of hip prostheses, image-based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease. The book will be of interest to researchers, PhD students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, and experimental analysis.

  12. Digitalization and networking of analog simulators and portal images.

    Science.gov (United States)

    Pesznyák, Csilla; Zaránd, Pál; Mayer, Arpád

    2007-03-01

    Many departments have analog simulators and irradiation facilities (especially cobalt units) without electronic portal imaging. Import of the images into the R&V (Record & Verify) system is required. Simulator images are frame-grabbed, while portal films are scanned with a laser scanner, and both are converted into DICOM RT (Digital Imaging and Communications in Medicine Radiotherapy) images. The image-intensifier output of a simulator and portal films are converted to DICOM RT images and used in clinical practice. The simulator software was developed in cooperation at the authors' hospital. The digitalization of analog simulators is a valuable update in clinical use, replacing the screen-film technique. Film scanning and digitalization permit the electronic archiving of films. Conversion into DICOM RT images is a precondition for importing into the R&V system.

  13. Simulation of Sentinel-3 images by four stream surface atmosphere radiative transfer modeling in the optical and thermal domains

    NARCIS (Netherlands)

    Verhoef, W.; Bach, H.

    2012-01-01

    Simulation of future satellite images can be applied in order to validate the general mission concept and to test the performance of advanced multi-sensor algorithms for the retrieval of surface parameters. This paper describes the radiative transfer modeling part of a so-called Land Scene Generator

  14. A computational model to generate simulated three-dimensional breast masses

    Energy Technology Data Exchange (ETDEWEB)

    Sisternes, Luis de; Brankov, Jovan G.; Zysk, Adam M.; Wernick, Miles N., E-mail: wernick@iit.edu [Medical Imaging Research Center, Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, Illinois 60616 (United States); Schmidt, Robert A. [Kurt Rossmann Laboratories for Radiologic Image Research, Department of Radiology, The University of Chicago, Chicago, Illinois 60637 (United States); Nishikawa, Robert M. [Department of Radiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15213 (United States)

    2015-02-15

    Purpose: To develop algorithms for creating realistic three-dimensional (3D) simulated breast masses and embedding them within actual clinical mammograms. The proposed techniques yield high-resolution simulated breast masses having randomized shapes, with user-defined mass type, size, location, and shape characteristics. Methods: The authors describe a method of producing 3D digital simulations of breast masses and a technique for embedding these simulated masses within actual digitized mammograms. Simulated 3D breast masses were generated by using a modified stochastic Gaussian random sphere model to generate a central tumor mass, and an iterative fractal branching algorithm to add complex spicule structures. The simulated masses were embedded within actual digitized mammograms. The authors evaluated the realism of the resulting hybrid phantoms by generating corresponding left- and right-breast image pairs, consisting of one breast image containing a real mass, and the opposite breast image of the same patient containing a similar simulated mass. The authors then used computer-aided diagnosis (CAD) methods and expert radiologist readers to determine whether significant differences can be observed between the real and hybrid images. Results: The authors found no statistically significant difference between the CAD features obtained from the real and simulated images of masses with either spiculated or nonspiculated margins. Likewise, the authors found that expert human readers performed very poorly in discriminating their hybrid images from real mammograms. Conclusions: The authors’ proposed method permits the realistic simulation of 3D breast masses having user-defined characteristics, enabling the creation of a large set of hybrid breast images containing a well-characterized mass, embedded within real breast background. The computational nature of the model makes it suitable for detectability studies, evaluation of computer aided diagnosis algorithms, and
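    The central-mass step of the method above can be illustrated in a 2D contour analogue of the Gaussian random sphere: a circle's radius is perturbed by a Gaussian random series with a decaying mode spectrum and log-normal radius statistics. The spectrum parameters below are invented for the sketch and are not the authors' model values:

```python
import math
import random

def random_blob_radii(n_points=360, n_modes=12, sigma=0.15, seed=3):
    """Radius vs angle of a randomly perturbed circle (2D random-sphere analogue)."""
    rng = random.Random(seed)
    # Gaussian coefficients with power decaying in mode number l,
    # which controls the correlation length of the boundary
    coeffs = [(rng.gauss(0.0, sigma / l), rng.gauss(0.0, sigma / l))
              for l in range(2, n_modes)]
    radii = []
    for i in range(n_points):
        theta = 2 * math.pi * i / n_points
        s = sum(a * math.cos(l * theta) + b * math.sin(l * theta)
                for (a, b), l in zip(coeffs, range(2, n_modes)))
        radii.append(math.exp(s - sigma ** 2 / 2))  # log-normal radius statistics
    return radii

radii = random_blob_radii()
```

    The 3D version expands the log-radius field in spherical harmonics instead of a Fourier series, and the fractal spicule branching is added as a separate step.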

  15. A computational model to generate simulated three-dimensional breast masses

    International Nuclear Information System (INIS)

    Sisternes, Luis de; Brankov, Jovan G.; Zysk, Adam M.; Wernick, Miles N.; Schmidt, Robert A.; Nishikawa, Robert M.

    2015-01-01

    Purpose: To develop algorithms for creating realistic three-dimensional (3D) simulated breast masses and embedding them within actual clinical mammograms. The proposed techniques yield high-resolution simulated breast masses having randomized shapes, with user-defined mass type, size, location, and shape characteristics. Methods: The authors describe a method of producing 3D digital simulations of breast masses and a technique for embedding these simulated masses within actual digitized mammograms. Simulated 3D breast masses were generated by using a modified stochastic Gaussian random sphere model to generate a central tumor mass, and an iterative fractal branching algorithm to add complex spicule structures. The simulated masses were embedded within actual digitized mammograms. The authors evaluated the realism of the resulting hybrid phantoms by generating corresponding left- and right-breast image pairs, consisting of one breast image containing a real mass, and the opposite breast image of the same patient containing a similar simulated mass. The authors then used computer-aided diagnosis (CAD) methods and expert radiologist readers to determine whether significant differences can be observed between the real and hybrid images. Results: The authors found no statistically significant difference between the CAD features obtained from the real and simulated images of masses with either spiculated or nonspiculated margins. Likewise, the authors found that expert human readers performed very poorly in discriminating their hybrid images from real mammograms. Conclusions: The authors’ proposed method permits the realistic simulation of 3D breast masses having user-defined characteristics, enabling the creation of a large set of hybrid breast images containing a well-characterized mass, embedded within real breast background. The computational nature of the model makes it suitable for detectability studies, evaluation of computer aided diagnosis algorithms, and

  16. Image based EFIT simulation for nondestructive ultrasonic testing of austenitic steel

    International Nuclear Information System (INIS)

    Nakahata, Kazuyuki; Hirose, Sohichi; Schubert, Frank; Koehler, Bernd

    2009-01-01

    The ultrasonic testing (UT) of an austenitic steel with welds is difficult due to the acoustic anisotropy and local heterogeneity. The ultrasonic wave in the austenitic steel is skewed along crystallographic directions and scattered by weld boundaries. For reliable UT, a straightforward simulation tool to predict the wave propagation is desired. Here a combined method of elastodynamic finite integration technique (EFIT) and digital image processing is developed as a wave simulation tool for UT. The EFIT is a grid-based explicit numerical method and easily treats different boundary conditions which are essential to model wave propagation in heterogeneous materials. In this study, the EFIT formulation in anisotropic and heterogeneous materials is briefly described and an example of a two dimensional simulation of a phased array UT in an austenitic steel bar is demonstrated. In our simulation, a picture of the surface of the steel bar with a V-groove weld is scanned and fed into the image based EFIT modeling. (author)
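    The grid-based update at the heart of a finite integration scheme like EFIT alternates stress and velocity updates on a staggered grid. A 1D sketch with steel-like material values (illustrative, not the paper's weld data; 2D/3D adds more stress components and the image-based material assignment):

```python
import math

nx, nt = 400, 800
dx = 1e-3                                    # grid spacing [m]
rho = [7800.0] * nx                          # density [kg/m^3]
c = [5800.0] * (nx // 2) + [5200.0] * (nx - nx // 2)   # slower "weld" half
lam = [rho[i] * c[i] ** 2 for i in range(nx)]          # 1D stiffness
dt = 0.4 * dx / max(c)                       # CFL-stable time step

v = [0.0] * nx                               # particle velocity at grid nodes
s = [0.0] * (nx - 1)                         # stress on the staggered half-grid
for n in range(nt):
    for i in range(nx - 1):                  # stress from velocity gradient
        s[i] += dt * lam[i] * (v[i + 1] - v[i]) / dx
    for i in range(1, nx - 1):               # velocity from stress gradient
        v[i] += dt * (s[i] - s[i - 1]) / (rho[i] * dx)
    t = n * dt
    v[20] += math.exp(-((t - 2e-6) / 5e-7) ** 2)   # soft Gaussian source pulse
```

    In the image-based variant, `rho` and `lam` are simply filled per pixel from the scanned and segmented picture of the weld.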

  17. Supervised Gaussian mixture model based remote sensing image ...

    African Journals Online (AJOL)

    Using the supervised classification technique, both simulated and empirical satellite remote sensing data are used to train and test the Gaussian mixture model algorithm. For the purpose of validating the experiment, the resulting classified satellite image is compared with the ground truth data. For the simulated modelling, ...
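    The supervised step described above amounts to fitting class-conditional Gaussians from labelled training pixels and assigning each new pixel to the most likely class. A hedged 1D sketch (a real remote-sensing pipeline works on multi-band pixel vectors with full covariances; the class names and values here are invented):

```python
import math

def train_gaussians(samples_by_class):
    """Fit one 1D Gaussian (mean, variance) per labelled class."""
    params = {}
    for label, xs in samples_by_class.items():
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        params[label] = (mu, max(var, 1e-9))   # floor avoids zero variance
    return params

def classify(x, params):
    """Assign x to the class with the highest Gaussian log-likelihood."""
    def log_lik(p):
        mu, var = p
        return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
    return max(params, key=lambda label: log_lik(params[label]))

# toy training data: spectral values for 'water' vs 'vegetation' pixels
params = train_gaussians({"water": [0.1, 0.12, 0.09, 0.11],
                          "vegetation": [0.55, 0.6, 0.58, 0.62]})
label = classify(0.57, params)
# label == "vegetation"
```

    Validation against ground truth, as the record describes, is then a matter of comparing such per-pixel labels with the reference map.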

  18. Early orthognathic surgery with three-dimensional image simulation during presurgical orthodontics in adults.

    Science.gov (United States)

    Kang, Sang-Hoon; Kim, Moon-Key; Park, Sun-Yeon; Lee, Ji-Yeon; Park, Wonse; Lee, Sang-Hwy

    2011-03-01

    To correct dentofacial deformities, three-dimensional skeletal analysis and computerized orthognathic surgery simulation are used to facilitate accurate diagnoses and surgical plans. Computed tomography imaging of dental occlusion can inform three-dimensional facial analyses and orthognathic surgical simulations. Furthermore, three-dimensional laser scans of a cast model of the predetermined postoperative dental occlusion can be used to increase the accuracy of the preoperative surgical simulation. In this study, we prepared cast models of planned postoperative dental occlusions from 12 patients diagnosed with skeletal class III malocclusions with mandibular prognathism and facial asymmetry that had planned to undergo bimaxillary orthognathic surgery during preoperative orthodontic treatment. The data from three-dimensional laser scans of the cast models were used in three-dimensional surgical simulations. Early orthognathic surgeries were performed based on three-dimensional image simulations using the cast images in several presurgical orthodontic states in which teeth alignment, leveling, and space closure were incomplete. After postoperative orthodontic treatments, intraoral examinations revealed that no patient had a posterior open bite or space. The two-dimensional and three-dimensional skeletal analyses showed that no mandibular deviations occurred between the immediate and final postoperative states of orthodontic treatment. These results showed that early orthognathic surgery with three-dimensional computerized simulations based on cast models of predetermined postoperative dental occlusions could provide early correction of facial deformities and improved efficacy of preoperative orthodontic treatment. This approach can reduce the decompensation treatment period of the presurgical orthodontics and contribute to efficient postoperative orthodontic treatments.

  19. Quantitative Image Simulation and Analysis of Nanoparticles

    DEFF Research Database (Denmark)

    Madsen, Jacob; Hansen, Thomas Willum

    High-resolution transmission electron microscopy (HRTEM) has become a routine analysis tool for structural characterization at atomic resolution, and with the recent development of in-situ TEMs, it is now possible to study catalytic nanoparticles under reaction conditions. However, the connection between an experimental image and the underlying physical phenomena or structure is not always straightforward. The aim of this thesis is to use image simulation to better understand observations from HRTEM images. Surface strain is known to be important for the performance of nanoparticles. Using simulation, we estimate the precision and accuracy of strain measurements from TEM images, and investigate the stability of these measurements with respect to microscope parameters. This is followed by our efforts toward simulating metal nanoparticles on a metal-oxide support using the Charge Optimized Many Body (COMB) interatomic potential. The simulated interface

  20. A Monte Carlo-based model for simulation of digital chest tomo-synthesis

    International Nuclear Information System (INIS)

    Ullman, G.; Dance, D. R.; Sandborg, M.; Carlsson, G. A.; Svalkvist, A.; Baath, M.

    2010-01-01

    The aim of this work was to calculate synthetic digital chest tomo-synthesis projections using a computer simulation model based on the Monte Carlo method. An anthropomorphic chest phantom was scanned in a computed tomography scanner, segmented and included in the computer model to allow for simulation of realistic high-resolution X-ray images. The input parameters to the model were adapted to correspond to the VolumeRAD chest tomo-synthesis system from GE Healthcare. Sixty tomo-synthesis projections were calculated with projection angles ranging from + 15 to -15 deg. The images from primary photons were calculated using an analytical model of the anti-scatter grid and a pre-calculated detector response function. The contributions from scattered photons were calculated using an in-house Monte Carlo-based model employing a number of variance reduction techniques such as the collision density estimator. Tomographic section images were reconstructed by transferring the simulated projections into the VolumeRAD system. The reconstruction was performed for three types of images using: (i) noise-free primary projections, (ii) primary projections including contributions from scattered photons and (iii) projections as in (ii) with added correlated noise. The simulated section images were compared with corresponding section images from projections taken with the real, anthropomorphic phantom from which the digital voxel phantom was originally created. The present article describes a work in progress aiming towards developing a model intended for optimisation of chest tomo-synthesis, allowing for simulation of both existing and future chest tomo-synthesis systems. (authors)
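    The primary-photon part of such a Monte Carlo model reduces to sampling exponential free paths through the attenuating phantom. A toy single-material check against the analytic Beer-Lambert law (the attenuation coefficient and slab thickness are invented; the real model adds the voxel phantom, scatter, grid and detector response):

```python
import math
import random

def transmitted_fraction(mu, thickness, n_photons=200000, seed=7):
    """Fraction of photons crossing a slab without interacting (no scatter)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_photons):
        path = -math.log(1.0 - rng.random()) / mu   # exponential free path
        if path > thickness:
            hits += 1
    return hits / n_photons

mu, L = 0.2, 10.0                # attenuation coefficient [1/cm], slab [cm]
mc = transmitted_fraction(mu, L)
analytic = math.exp(-mu * L)     # Beer-Lambert transmission
```

    Variance-reduction techniques such as the collision density estimator mentioned above exist precisely because brute-force scoring like this converges slowly for the scattered component.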

  1. Application of digital image processing for the generation of voxels phantoms for Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Boia, L.S.; Menezes, A.F.; Cardoso, M.A.C. [Programa de Engenharia Nuclear/COPPE (Brazil); Rosa, L.A.R. da [Instituto de Radioprotecao e Dosimetria-IRD, Av. Salvador Allende, s/no Recreio dos Bandeirantes, CP 37760, CEP 22780-160 Rio de Janeiro, RJ (Brazil); Batista, D.V.S. [Instituto de Radioprotecao e Dosimetria-IRD, Av. Salvador Allende, s/no Recreio dos Bandeirantes, CP 37760, CEP 22780-160 Rio de Janeiro, RJ (Brazil); Instituto Nacional de Cancer-Secao de Fisica Medica, Praca Cruz Vermelha, 23-Centro, 20230-130 Rio de Janeiro, RJ (Brazil); Cardoso, S.C. [Departamento de Fisica Nuclear, Instituto de Fisica, Universidade Federal do Rio de Janeiro, Bloco A-Sala 307, CP 68528, CEP 21941-972 Rio de Janeiro, RJ (Brazil); Silva, A.X., E-mail: ademir@con.ufrj.br [Programa de Engenharia Nuclear/COPPE (Brazil); Departamento de Engenharia Nuclear/Escola Politecnica, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970 Rio de Janeiro, RJ (Brazil); Facure, A. [Comissao Nacional de Energia Nuclear, R. Gal. Severiano 90, sala 409, 22294-900 Rio de Janeiro, RJ (Brazil)

    2012-01-15

    This paper presents the application of a computational methodology for optimizing the conversion of medical tomographic images into voxel anthropomorphic models for simulation of radiation transport using the MCNP code. A computational system was developed for digital image processing that compresses the information from the DICOM medical image before it is converted to the Scan2MCNP software input file for optimization of the image data. In order to validate the computational methodology, a radiosurgery treatment simulation was performed using the Alderson Rando phantom and the acquisition of DICOM images was performed. The simulation results were compared with data obtained with the BrainLab planning system. The comparison showed good agreement for three orthogonal treatment beams of ⁶⁰Co gamma radiation. The percentage differences were 3.07%, 0.77% and 6.15% for the axial, coronal and sagittal projections, respectively. - Highlights: • We use a method to optimize the CT image conversion in voxel model for MCNP simulation. • We present a methodology to compress a DICOM image before conversion to input file. • To validate this study an idealized radiosurgery applied to the Alderson phantom was used.
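    The compression idea described above, reducing DICOM grey values to a handful of material identifiers before writing a voxel lattice, can be sketched with invented Hounsfield thresholds (the paper's actual mapping and file format are not reproduced here):

```python
def hu_to_material(hu):
    """Map a Hounsfield value to a coarse material ID (thresholds illustrative)."""
    if hu < -400:
        return 0      # air / lung
    if hu < 100:
        return 1      # soft tissue
    return 2          # bone

def compress_slice(hu_rows):
    """Run-length encode a slice of material IDs, a lattice-style compression."""
    flat = [hu_to_material(hu) for row in hu_rows for hu in row]
    runs, prev, count = [], flat[0], 0
    for m in flat:
        if m == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = m, 1
    runs.append((prev, count))
    return runs

runs = compress_slice([[-1000, -1000, 50], [50, 300, 300]])
# runs == [(0, 2), (1, 2), (2, 2)]
```

    Run-length encoding is effective here because anatomical voxel models contain long homogeneous stretches of a single material.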

  2. Multi-scale imaging and elastic simulation of carbonates

    Science.gov (United States)

    Faisal, Titly Farhana; Awedalkarim, Ahmed; Jouini, Mohamed Soufiane; Jouiad, Mustapha; Chevalier, Sylvie; Sassi, Mohamed

    2016-05-01

    Digital Rock Physics (DRP) is an emerging technology that can be used to generate high-quality, fast and cost-effective special core analysis (SCAL) properties compared to conventional experimental and modeling techniques. The primary workflow of DRP consists of three elements: 1) image the rock sample using high-resolution 3D scanning techniques (e.g. micro-CT, FIB/SEM), 2) process and digitize the images by segmenting the pore and matrix phases, and 3) simulate the desired physical properties of the rock, such as elastic moduli and wave propagation velocities. A Finite Element Method based algorithm developed by Garboczi and Day [1], which discretizes the basic Hooke's law equation of linear elasticity and solves it numerically using a fast conjugate gradient solver, is used for the mechanical and elastic property simulations. This elastic algorithm works directly on the digital images by treating each pixel as an element. The images are assumed to have a periodic constant-strain boundary condition. The bulk and shear moduli of the different phases are required inputs. For standard 1.5" diameter cores, however, the micro-CT scanning resolution (around 40 μm) does not resolve the smaller micro- and nano-scale pores. This results in an unresolved "microporous" phase whose moduli are uncertain. Knackstedt et al. [2] assigned effective elastic moduli to the microporous phase based on self-consistent theory (which gives good estimates of velocities for well-cemented granular media). Jouini et al. [3] segmented the core plug CT image into three phases and assumed that the microporous phase is represented by a sub-extracted micro plug (which was also scanned using micro-CT). Currently, elastic numerical simulations based on CT images alone largely overpredict the bulk, shear and Young's moduli when compared to laboratory acoustic tests of the same rocks. For greater accuracy of the numerical predictions, better estimates of the moduli inputs are needed.
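    The segmentation and phase-moduli assignment at the heart of this workflow can be sketched in a few lines; all thresholds and moduli below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical sketch of the DRP segmentation step: a grayscale micro-CT
# volume is split into pore, "microporous", and solid matrix phases by two
# intensity thresholds, and each voxel is assigned the bulk/shear moduli
# (in GPa) that an elastic FEM solver would take as input.

rng = np.random.default_rng(0)
volume = rng.integers(0, 256, size=(32, 32, 32))   # stand-in CT volume

T_PORE, T_MICRO = 60, 120                          # assumed thresholds
phases = np.digitize(volume, [T_PORE, T_MICRO])    # 0=pore, 1=microporous, 2=matrix

MODULI = np.array([[0.0, 0.0],     # pore: fluid/void, no stiffness
                   [12.0, 6.0],    # microporous phase: the uncertain input
                   [37.0, 44.0]])  # calcite-like matrix (assumed values)

K = MODULI[phases, 0]              # per-voxel bulk modulus field
G = MODULI[phases, 1]              # per-voxel shear modulus field
porosity = float(np.mean(phases == 0))
print(K.shape, round(porosity, 3))
```

    The resulting per-voxel moduli fields are exactly the phase inputs the elastic algorithm requires; the uncertainty discussed above lives entirely in the second row of the moduli table.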

  3. Digitalization and networking of analog simulators and portal images

    Energy Technology Data Exchange (ETDEWEB)

    Pesznyak, C.; Zarand, P.; Mayer, A. [Uzsoki Hospital, Budapest (Hungary). Inst. of Oncoradiology

    2007-03-15

    Background: Many departments have analog simulators and irradiation facilities (especially cobalt units) without electronic portal imaging, so import of the images into the R and V (Record and Verify) system is required. Material and Methods: Simulator images are frame-grabbed, while portal films are scanned using a laser scanner; both are converted into DICOM RT (Digital Imaging and Communications in Medicine Radiotherapy) images. Results: The image intensifier output of a simulator and portal films are converted to DICOM RT images and used in clinical practice. The simulator software was developed cooperatively at the authors' hospital. Conclusion: The digitalization of analog simulators is a valuable upgrade in clinical use, replacing the screen-film technique. Film scanning and digitalization permit the electronic archiving of films, and conversion into DICOM RT images is a precondition for importing them into the R and V system. (orig.)

  4. Fault-Tolerant Robot Programming through Simulation with Realistic Sensor Models

    Directory of Open Access Journals (Sweden)

    Axel Waggershauser

    2008-11-01

    Full Text Available We introduce a simulation system for mobile robots that allows a realistic interaction of multiple robots in a common environment. The simulated robots are closely modeled after robots from the EyeBot family and have an identical application programmer interface. The simulation supports driving commands at two levels of abstraction as well as numerous sensors such as shaft encoders, infrared distance sensors, and compass. Simulation of on-board digital cameras via synthetic images allows the use of image processing routines for robot control within the simulation. Specific error models for actuators, distance sensors, camera sensor, and wireless communication have been implemented. Progressively increasing error levels for an application program allows for testing and improving its robustness and fault-tolerance.
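    A minimal sketch of such a configurable sensor error model (not the actual EyeBot API) for an infrared distance sensor, with an error level that can be raised progressively to probe fault tolerance:

```python
import random

# Illustrative error model: Gaussian noise plus occasional dropouts, both
# scaled by an error "level" (0 = ideal sensor). Probabilities and noise
# magnitudes are assumptions for demonstration only.

def ir_distance(true_mm, level, rng):
    """Return a simulated IR distance reading for a given error level."""
    if rng.random() < 0.02 * level:          # dropout chance grows with level
        return None                          # sensor returns no echo
    noisy = rng.gauss(true_mm, 5.0 * level)  # noise sigma grows with level
    return max(0.0, noisy)

rng = random.Random(42)
for level in (0, 1, 3):
    readings = [ir_distance(250.0, level, rng) for _ in range(200)]
    valid = [r for r in readings if r is not None]
    print(level, len(valid), round(sum(valid) / len(valid), 1))
```

    Running an application program against level 0, then progressively higher levels, is one way to realize the robustness testing the abstract describes.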

  5. SAR image classification based on CNN in real and simulation datasets

    Science.gov (United States)

    Peng, Lijiang; Liu, Ming; Liu, Xiaohua; Dong, Liquan; Hui, Mei; Zhao, Yuejin

    2018-04-01

    Convolutional neural networks (CNNs) have achieved great success in image classification tasks. Even in the field of synthetic aperture radar automatic target recognition (SAR-ATR), state-of-the-art results have been obtained by learning deep feature representations on the MSTAR benchmark. However, the raw MSTAR data have a shortcoming for training a SAR-ATR model: the backgrounds of the SAR images within each class are highly similar, which suggests that a CNN would learn hierarchies of features of the backgrounds as well as of the targets. To validate the influence of the background, additional SAR image datasets were constructed containing simulated SAR images of 10 manufactured targets, such as tanks and fighter aircraft, with backgrounds sampled from the original MSTAR data. The simulated datasets include one in which the backgrounds of each class correspond to a single kind of MSTAR background or clutter, and one in which each image receives a random background drawn from all MSTAR targets or clutter. In addition, mixed datasets of MSTAR and simulated data were made for use in the experiments. The CNN architecture proposed in this paper is trained on all the datasets mentioned above. The experimental results show that the architecture achieves high performance on all datasets even when the backgrounds are miscellaneous, indicating that it learns a good representation of the targets despite drastic changes in background.
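    The core dataset-construction idea, pasting a target chip onto a randomly chosen background so a classifier cannot exploit background similarity, can be sketched as follows (stand-in arrays, not the MSTAR data):

```python
import numpy as np

# Illustrative sketch (assumed, not the authors' code): composite a bright
# target chip onto a randomly selected clutter background at a random
# position, so each class no longer shares a characteristic background.

rng = np.random.default_rng(0)
backgrounds = rng.uniform(0.0, 0.3, size=(20, 128, 128))  # stand-in clutter pool
target_chip = np.ones((32, 32))                            # stand-in bright target

def composite(chip, bg_pool, rng):
    bg = bg_pool[rng.integers(len(bg_pool))].copy()
    h, w = chip.shape
    y = rng.integers(0, bg.shape[0] - h)
    x = rng.integers(0, bg.shape[1] - w)
    bg[y:y + h, x:x + w] = np.maximum(bg[y:y + h, x:x + w], chip)
    return bg

sample = composite(target_chip, backgrounds, rng)
print(sample.shape, float(sample.max()))
```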

  6. Monte Carlo simulation of PET and SPECT imaging of {sup 90}Y

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Akihiko, E-mail: takahsr@hs.med.kyushu-u.ac.jp; Sasaki, Masayuki [Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582 (Japan); Himuro, Kazuhiko; Yamashita, Yasuo; Komiya, Isao [Division of Radiology, Department of Medical Technology, Kyushu University Hospital, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582 (Japan); Baba, Shingo [Department of Clinical Radiology, Kyushu University Hospital, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582 (Japan)

    2015-04-15

    Purpose: Yttrium-90 ({sup 90}Y) is traditionally thought of as a pure beta emitter and is used in targeted radionuclide therapy, with imaging performed using bremsstrahlung single-photon emission computed tomography (SPECT). However, because {sup 90}Y also emits positrons through internal pair production with a very small branching ratio, positron emission tomography (PET) imaging is also available. Because of the insufficient image quality of {sup 90}Y bremsstrahlung SPECT, PET imaging has been suggested as an alternative. In this paper, the authors present a Monte Carlo-based simulation-reconstruction framework for {sup 90}Y to comprehensively analyze the PET and SPECT imaging techniques and to quantitatively consider their disadvantages. Methods: Our PET and SPECT simulation modules were developed using Monte Carlo simulation of Electrons and Photons (MCEP), developed by Dr. S. Uehara. The PET code (MCEP-PET) generates a sinogram and reconstructs the tomographic image using a time-of-flight ordered-subset expectation maximization (TOF-OSEM) algorithm with attenuation compensation. To evaluate MCEP-PET, simulated {sup 18}F PET imaging results were compared with experimental results, confirming that MCEP-PET reproduces the experiments very well. The SPECT code (MCEP-SPECT) models the collimator and NaI detector system and generates the projection images and projection data. To save computation time, the authors adopted prerecorded {sup 90}Y bremsstrahlung photon data calculated by MCEP. The projection data are also reconstructed using the OSEM algorithm. The authors simulated PET and SPECT images of a water phantom containing six hot spheres filled with different concentrations of {sup 90}Y without background activity. The total activity was 163 MBq, with an acquisition time of 40 min. Results: The simulated {sup 90}Y-PET image accurately reproduced the experimental results. PET image is visually
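    A back-of-the-envelope calculation shows why {sup 90}Y PET is photon-starved; the branching ratio below is an approximate literature value, not taken from the paper:

```python
# Rough sketch: positrons from 90Y arise only through internal pair
# production, with a branching ratio of roughly 32 per million decays
# (assumed approximate value), so even a large activity yields few
# coincidence events compared with a conventional PET tracer.

BRANCHING_RATIO = 3.2e-5    # assumed e+e- branching ratio for 90Y

def positron_emissions(activity_bq, acq_s):
    """Expected positron count during an acquisition (decay neglected,
    a fair approximation since the acquisition is << the 64 h half-life)."""
    return activity_bq * acq_s * BRANCHING_RATIO

decays = 163e6 * 40 * 60                     # 163 MBq over 40 min
positrons = positron_emissions(163e6, 40 * 60)
print(f"{decays:.3e} decays, {positrons:.3e} positrons")
```

    Roughly ten million positrons from nearly 4 x 10^11 decays illustrates why the quantitative comparison between PET and bremsstrahlung SPECT is worth a dedicated simulation framework.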

  7. SimVascular 2.0: an Integrated Open Source Pipeline for Image-Based Cardiovascular Modeling and Simulation

    Science.gov (United States)

    Lan, Hongzhi; Merkow, Jameson; Updegrove, Adam; Schiavazzi, Daniele; Wilson, Nathan; Shadden, Shawn; Marsden, Alison

    2015-11-01

    SimVascular (www.simvascular.org) is currently the only fully open-source software package that provides a complete pipeline from medical image-based modeling to patient-specific blood flow simulation and analysis. It was initially released in 2007 and has contributed to numerous advances in fundamental hemodynamics research, surgical planning, and medical device design. However, early versions had several major barriers preventing wider adoption by new users, large-scale application in clinical and research studies, and educational access. In recent years, SimVascular 2.0 has made significant progress by integrating open-source alternatives to the expensive commercial libraries previously required for anatomic modeling, mesh generation and the linear solver. In addition, it has simplified the cross-platform compilation process, improved the graphical user interface and launched a comprehensive documentation website. Many enhancements and new features have been incorporated throughout the pipeline, such as 3-D segmentation, Boolean operations for discrete triangulated surfaces, and multi-scale coupling for closed-loop boundary conditions. In this presentation we will briefly overview the modeling/simulation pipeline and the advances of the new SimVascular 2.0.

  8. [Preparation of simulated craniocerebral models via three-dimensional printing technique].

    Science.gov (United States)

    Lan, Q; Chen, A L; Zhang, T; Zhu, Q; Xu, T

    2016-08-09

    Three-dimensional (3D) printing was used to prepare simulated craniocerebral models, which were applied to preoperative planning and surgical simulation. The image data were collected from a PACS system. Image data of skull bone, brain tissue and tumors, cerebral arteries and aneurysms, and functional regions and related neural tracts of the brain were extracted from thin-slice computed tomography (CT, slice thickness 0.5 mm), magnetic resonance imaging (MRI, slice thickness 1 mm), computed tomography angiography (CTA), and functional magnetic resonance imaging (fMRI) data, respectively. MIMICS software was applied to reconstruct colored virtual models by identifying and differentiating tissues according to their gray scales. The colored virtual models were then submitted to a 3D printer, which produced life-sized craniocerebral models for surgical planning and simulation. The 3D-printed craniocerebral models allowed neurosurgeons to rehearse complex procedures in specific clinical cases through detailed surgical planning. They offered great convenience for evaluating the size of the spatial fissure of the sellar region before surgery, which helped to optimize surgical approach planning. These 3D models also provided detailed information about the location of aneurysms and their parent arteries, which helped surgeons choose appropriate aneurysm clips as well as perform surgical simulation. The models further gave clear indications of the depth and extent of tumors and their relationship to eloquent cortical areas and adjacent neural tracts, helping to avoid surgical damage to important neural structures. As a novel and promising technique, the application of 3D-printed craniocerebral models can improve surgical planning by converting virtual visualization into real life-sized models. It also contributes to functional anatomy study.

  9. Computationally-optimized bone mechanical modeling from high-resolution structural images.

    Directory of Open Access Journals (Sweden)

    Jeremy F Magland

    Full Text Available Image-based mechanical modeling of the complex micro-structure of human bone has shown promise as a non-invasive method for characterizing bone strength and fracture risk in vivo. In particular, elastic moduli obtained from image-derived micro-finite element (μFE) simulations have been shown to correlate well with results obtained by mechanical testing of cadaveric bone. However, most existing large-scale finite-element simulation programs require significant computing resources, which hamper their use in common laboratory and clinical environments. In this work, we theoretically derive and computationally evaluate the resources needed to perform such simulations (in terms of computer memory and computation time), which depend on the number of finite elements in the image-derived bone model. A detailed description of our approach is provided, which is specifically optimized for μFE modeling of the complex three-dimensional architecture of trabecular bone. Our implementation includes domain decomposition for parallel computing, a novel stopping criterion, and a system for speeding up convergence by pre-iterating on coarser grids. The performance of the system is demonstrated on a machine with dual quad-core 3.16 GHz Xeon CPUs and 40 GB of RAM. A distal-tibia model derived from 3D in vivo MR images of a patient, comprising 200,000 elements, required less than 30 seconds (and 40 MB of RAM) to converge. To illustrate the system's potential for large-scale μFE simulations, axial stiffness was estimated from high-resolution micro-CT images of the human proximal femur, a voxel array of 90 million elements, in seven hours of CPU time. In conclusion, the system described should enable image-based finite-element bone simulations in practical computation times on high-end desktop computers, with applications to laboratory studies and clinical imaging.
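    The iterative solver at the core of such μFE codes is a conjugate gradient loop with a residual-based stopping criterion; the sketch below is illustrative only, and the paper's implementation adds domain decomposition and coarse-grid pre-iteration on top of this basic loop:

```python
import numpy as np

# Minimal conjugate-gradient solver for a symmetric positive definite
# system A x = b, the kind of solve a voxel-based stiffness matrix needs.

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:        # stopping criterion on residual norm
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD test system standing in for a uFE stiffness matrix
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)            # symmetric positive definite
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(float(np.linalg.norm(A @ x - b)))
```

    Because only matrix-vector products are needed, a real μFE code never assembles A explicitly: the product is evaluated element-by-element directly on the voxel grid, which is what keeps the memory footprint as small as the abstract reports.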

  10. Design and development of a computer based simulator to support learning of radiographic image quality

    Energy Technology Data Exchange (ETDEWEB)

    Costaridou, L; Pitoura, T; Panayiotakis, G; Pallikarakis, N [Department of Medical Physics, School of Medicine, University of Patras, 265 00 Patras (Greece); Hatzis, K [Institute of Biomedical Technology, Ellinos Stratiotou 50A, 264 41 Patras (Greece)

    1994-12-31

    A training simulator has been developed to offer a structured and functional approach to radiographic imaging procedures and a comprehensive understanding of the interrelations between the physical and technical input parameters of a radiographic imaging system and the characteristics of image quality. The system addresses the training needs of radiographers and radiology clinicians. The simulator is based on procedural simulation enhanced by a hypertextual model of information organization. It is supported by an image database, which supplies and enriches the simulator. The simulation is controlled by a browsing facility which corresponds to several hierarchical levels of use of the underlying multimodal database, organized as imaging tasks. Representative tasks are: production of a single radiograph, or production of functional sets of radiographs exhibiting parameter effects on image characteristics. System parameters such as patient positioning, focus-to-patient distance, magnification, field dimensions, focal spot size, tube voltage, tube current and exposure time are under user control. (authors). 7 refs, 2 figs.

  11. Design and development of a computer based simulator to support learning of radiographic image quality

    International Nuclear Information System (INIS)

    Costaridou, L.; Pitoura, T.; Panayiotakis, G.; Pallikarakis, N.; Hatzis, K.

    1994-01-01

    A training simulator has been developed to offer a structured and functional approach to radiographic imaging procedures and a comprehensive understanding of the interrelations between the physical and technical input parameters of a radiographic imaging system and the characteristics of image quality. The system addresses the training needs of radiographers and radiology clinicians. The simulator is based on procedural simulation enhanced by a hypertextual model of information organization. It is supported by an image database, which supplies and enriches the simulator. The simulation is controlled by a browsing facility which corresponds to several hierarchical levels of use of the underlying multimodal database, organized as imaging tasks. Representative tasks are: production of a single radiograph, or production of functional sets of radiographs exhibiting parameter effects on image characteristics. System parameters such as patient positioning, focus-to-patient distance, magnification, field dimensions, focal spot size, tube voltage, tube current and exposure time are under user control. (authors)

  12. Real-time image-based B-mode ultrasound image simulation of needles using tensor-product interpolation.

    Science.gov (United States)

    Zhu, Mengchen; Salcudean, Septimiu E

    2011-07-01

    In this paper, we propose an interpolation-based method for simulating rigid needles in B-mode ultrasound images in real time. We parameterize the needle B-mode image as a function of needle position and orientation. We collect needle images under various spatial configurations in a water-tank using a needle guidance robot. Then we use multidimensional tensor-product interpolation to simulate images of needles with arbitrary poses and positions using collected images. After further processing, the interpolated needle and seed images are superimposed on top of phantom or tissue image backgrounds. The similarity between the simulated and the real images is measured using a correlation metric. A comparison is also performed with in vivo images obtained during prostate brachytherapy. Our results, carried out for both the convex (transverse plane) and linear (sagittal/para-sagittal plane) arrays of a trans-rectal transducer indicate that our interpolation method produces good results while requiring modest computing resources. The needle simulation method we present can be extended to the simulation of ultrasound images of other wire-like objects. In particular, we have shown that the proposed approach can be used to simulate brachytherapy seeds.
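    The tensor-product interpolation idea can be illustrated in two pose dimensions with bilinear weights; the grid values and "images" below are synthetic stand-ins, and the real system interpolates over more dimensions using collected B-mode data:

```python
import numpy as np

# Sketch of tensor-product interpolation between images collected on a
# regular grid of needle poses (two assumed parameters: depth and angle).

depths = np.array([0.0, 10.0, 20.0])        # grid of collected depths (mm)
angles = np.array([-10.0, 0.0, 10.0])       # grid of collected angles (deg)
# library[i, j] = "image" collected at (depths[i], angles[j])
library = np.fromfunction(
    lambda i, j, y, x: i * 100 + j * 10 + 0 * (y + x), (3, 3, 8, 8))

def interp_image(d, a):
    i = int(np.clip(np.searchsorted(depths, d) - 1, 0, len(depths) - 2))
    j = int(np.clip(np.searchsorted(angles, a) - 1, 0, len(angles) - 2))
    td = (d - depths[i]) / (depths[i + 1] - depths[i])
    ta = (a - angles[j]) / (angles[j + 1] - angles[j])
    # tensor-product weights: outer product of the 1-D linear weights
    return ((1 - td) * (1 - ta) * library[i, j]
            + td * (1 - ta) * library[i + 1, j]
            + (1 - td) * ta * library[i, j + 1]
            + td * ta * library[i + 1, j + 1])

img = interp_image(5.0, 5.0)    # pose halfway between grid points
print(img.shape, float(img[0, 0]))
```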

  13. Virtual X-ray imaging techniques in an immersive casting simulation environment

    International Nuclear Information System (INIS)

    Li, Ning; Kim, Sung-Hee; Suh, Ji-Hyun; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee

    2007-01-01

    A computer code was developed to simulate radiographs of complex casting products in a CAVE™-like environment. The simulation is based on deterministic algorithms and ray tracing techniques. The aim of this study is to examine CAD/CAE/CAM models at the design stage, to optimize the design, and to inspect predicted defective regions quickly, accurately, and at small numerical expense. The present work discusses the algorithms for radiography simulation of the CAD/CAM model and proposes algorithmic solutions adapted from the ray-box intersection algorithm and the octree data structure specifically for radiographic simulation of the CAE model. The stereoscopic visualization of the full-size product in the immersive casting simulation environment, together with the virtual X-ray images of castings, provides an effective tool for the design and evaluation of foundry processes by engineers and metallurgists.
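    A sketch of the classic "slab" ray-box intersection test that such radiographic ray tracers build on (illustrative, not the paper's code); the chord length t_far - t_near through each box is what an attenuation integral accumulates:

```python
# Slab method: each axis-aligned slab clips the ray's parametric interval,
# and the ray hits the box only if the interval remains non-empty.

def ray_box_intersect(origin, direction, box_min, box_max):
    """Return (t_near, t_far) along the ray, or None on a miss."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if not lo <= o <= hi:
                return None          # parallel to this slab and outside it
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:
            return None              # slab intervals do not overlap
    return t_near, t_far

# A ray along +x through a unit cube centred at the origin
hit = ray_box_intersect((-2.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                        (-0.5, -0.5, -0.5), (0.5, 0.5, 0.5))
print(hit)
```

    An octree accelerates this by testing the ray against coarse bounding boxes first and descending only into the child cells the ray actually crosses.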

  14. Image simulation for HardWare In the Loop simulation in EO domain

    Science.gov (United States)

    Cathala, Thierry; Latger, Jean

    2015-10-01

    An infrared camera used as a weapon subsystem for automatic guidance is a key component of military carriers such as missiles. The associated image processing, which controls the navigation, needs to be intensively assessed. Experimentation in the real world is very expensive, which is the main reason why hybrid simulation, also called HardWare In the Loop (HWIL), is increasingly required nowadays. In that field, IR projectors are able to cast IR photon fluxes directly onto the IR camera of a given weapon system, typically a missile seeker head. In the laboratory, the missile is thus stimulated exactly as in the real world, provided a realistic simulation tool can generate the synthetic images to be displayed by the IR projectors. The key technical challenge is to render the synthetic images at the required frequency. This paper focuses on OKTAL-SE's experience in this domain through its product SE-FAST-HWIL. It presents the methodology and lessons learned at OKTAL-SE. Examples are given in the frame of the SE-Workbench. The presentation focuses on trials on real operational complex 3D cases. In particular, three important topics that are very sensitive with regard to image-generator performance are detailed: first, 3D sea surface representation; then particle-system rendering, especially to simulate flares; and finally sensor effects modelling. Beyond the "projection mode", some information is given on the new SE-FAST-HWIL capabilities dedicated to the "injection mode".

  15. Validation of a power-law noise model for simulating small-scale breast tissue

    International Nuclear Information System (INIS)

    Reiser, I; Edwards, A; Nishikawa, R M

    2013-01-01

    We have validated a small-scale breast tissue model based on power-law noise. A set of 110 patient images served as truth. The statistical model parameters were determined by matching the radially averaged power spectrum of the projected simulated tissue with that of the central tomosynthesis patient breast projections. Observer performance in a signal-known-exactly detection task in simulated and actual breast backgrounds was compared. Observers included human readers, a pre-whitening observer model, and a channelized Hotelling observer model. For all observers, good agreement between performance in the simulated and actual backgrounds was found, both in the tomosynthesis central projections and in the reconstructed images. This tissue model can be used for breast x-ray imaging system optimization. The complete statistical description of the model is provided. (paper)
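    Power-law noise backgrounds of this kind can be generated by shaping white Gaussian noise in the Fourier domain; the exponent β = 3 below is a commonly assumed value for mammographic backgrounds, not necessarily the parameter fitted in this paper:

```python
import numpy as np

# Generate a 2-D "1/f^beta" noise field: weight the spectrum of white
# noise so the power spectrum falls as f^-beta (amplitude ~ f^(-beta/2)).

def power_law_noise(n, beta, rng):
    fx = np.fft.fftfreq(n)
    f = np.hypot(*np.meshgrid(fx, fx))      # radial spatial frequency
    f[0, 0] = f[0, 1]                       # avoid division by zero at DC
    amplitude = f ** (-beta / 2.0)
    spectrum = amplitude * np.fft.fft2(rng.standard_normal((n, n)))
    return np.real(np.fft.ifft2(spectrum))

bg = power_law_noise(128, 3.0, np.random.default_rng(0))
print(bg.shape, round(float(bg.std()), 3))
```

    Matching the radially averaged power spectrum of such fields to patient projections is the fitting step the abstract describes.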

  16. Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility

    Science.gov (United States)

    Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.

    2017-12-01

    The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. The simulations are also used here for active-learning purposes: students are helped to understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor setup based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models derived from dense image matching. The final textured 4D model allows one to revisit a completed experiment at any time in a dynamic and interactive mode. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements in the image sequence obtained from the camera in front of the simulation facility. While the 4D model may provide a qualitative description and documentation of the running experiment, DIC analysis outputs quantitative information, such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.

  17. IMAGE-BASED RECONSTRUCTION AND ANALYSIS OF DYNAMIC SCENES IN A LANDSLIDE SIMULATION FACILITY

    Directory of Open Access Journals (Sweden)

    M. Scaioni

    2017-12-01

    Full Text Available The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. The simulations are also used here for active-learning purposes: students are helped to understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor setup based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models derived from dense image matching. The final textured 4D model allows one to revisit a completed experiment at any time in a dynamic and interactive mode. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements in the image sequence obtained from the camera in front of the simulation facility. While the 4D model may provide a qualitative description and documentation of the running experiment, DIC analysis outputs quantitative information, such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.
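    The integer-pixel core of DIC, locating a template's shift between two frames by maximizing normalized cross-correlation, can be sketched as follows (pure numpy; real DIC adds sub-pixel refinement):

```python
import numpy as np

# Track a small surface patch between two frames by exhaustive search
# over integer shifts, scoring each candidate with normalized
# cross-correlation (NCC). Frames here are synthetic stand-ins.

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, (3, -2), axis=(0, 1))   # simulated surface motion

ty, tx, t = 20, 20, 16                           # template corner and size
template = frame0[ty:ty + t, tx:tx + t]

best = max(((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
           key=lambda s: ncc(template,
                             frame1[ty + s[0]:ty + s[0] + t,
                                    tx + s[1]:tx + s[1] + t]))
print(best)  # recovered displacement (dy, dx)
```

    Dividing the recovered displacement by the frame interval gives the local surface velocity that the abstract relates to the physical process.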

  18. Joint model of motion and anatomy for PET image reconstruction

    International Nuclear Information System (INIS)

    Qiao Feng; Pan Tinsu; Clark, John W. Jr.; Mawlawi, Osama

    2007-01-01

    Anatomy-based positron emission tomography (PET) image enhancement techniques have been shown to have the potential to improve PET image quality. However, these techniques assume an accurate alignment between the anatomical and functional images, which is not always valid when imaging the chest due to respiratory motion. In this article, we present a joint model of both motion and anatomical information by integrating a motion-incorporated PET imaging system model with an anatomy-based maximum a posteriori image reconstruction algorithm. The mismatched anatomical information due to motion can thus be effectively utilized through this joint model. A computer simulation and a phantom study were conducted to assess the efficacy of the joint model, whereby motion and anatomical information were either modeled separately or combined. The reconstructed images in each case were compared for quantitative accuracy to corresponding reference images obtained using a maximum a posteriori reconstruction algorithm with a quadratic image prior. Results of these studies indicated that while modeling anatomical information or motion alone improved PET image quantitation accuracy, a larger improvement was achieved with the joint model. In the computer simulation study, at similar image noise levels, the improvement in quantitation accuracy compared to the reference images was 5.3% when using anatomical information alone, 19.8% when using motion information alone, and 35.5% when using the joint model. In the phantom study, these results were 5.6%, 5.8%, and 19.8%, respectively. These results suggest that motion compensation is important in order to effectively utilize anatomical information in chest imaging using PET. The joint motion-anatomy model presented in this paper provides a promising solution to this problem.

  19. Validation of a simultaneous PET/MR system model for PET simulation using GATE

    International Nuclear Information System (INIS)

    Monnier, Florian; Fayad, Hadi; Bert, Julien; Schmidt, Holger; Visvikis, Dimitris

    2015-01-01

    Simultaneous PET/MR acquisition shows promise in a range of applications. Simulation using GATE is an essential tool that provides the ground truth for such acquisitions, thereby helping in the development and validation of innovative processing methods such as PET image reconstruction, attenuation correction and motion correction. The purpose of this work is to validate the GATE simulation of the Siemens Biograph mMR PET/MR system. A model of the Siemens Biograph mMR was developed. This model includes the geometry and spatial positioning of the crystals inside the scanner and the characteristics of the detection process. The accuracy of the model was tested, in a real physical phantom study, by comparing GATE simulated results to PET images reconstructed from measurements on a Siemens Biograph mMR system. Parameters such as the acquisition time and the phantom position inside the scanner were matched in our simulations. List-mode outputs were recovered in both cases and reconstructed using the OPL-EM algorithm. The two reconstructed images were compared using profiles, signal-to-noise ratio and activity contrast analysis. Finally, patient-acquired MR images were segmented and used for the simulation of corresponding PET images. The simulated and acquired sets of reconstructed phantom images showed close emission values in regions of interest, with relative differences lower than 5%. The scatter fractions agreed to within 3%. Close matching of profiles and contrast indices was obtained between simulated and corresponding acquired PET images. Our results indicate that the developed GATE Biograph mMR model accurately reproduces the real scanner's performance and can be used for evaluating innovative processing methods for applications in clinical PET/MR protocols.

  20. GRMHD Simulations of Visibility Amplitude Variability for Event Horizon Telescope Images of Sgr A*

    Science.gov (United States)

    Medeiros, Lia; Chan, Chi-kwan; Özel, Feryal; Psaltis, Dimitrios; Kim, Junhan; Marrone, Daniel P.; Sądowski, Aleksander

    2018-04-01

    The Event Horizon Telescope will generate horizon scale images of the black hole in the center of the Milky Way, Sgr A*. Image reconstruction using interferometric visibilities rests on the assumption of a stationary image. We explore the limitations of this assumption using high-cadence disk- and jet-dominated GRMHD simulations of Sgr A*. We also employ analytic models that capture the basic characteristics of the images to understand the origin of the variability in the simulated visibility amplitudes. We find that, in all simulations, the visibility amplitudes for baselines oriented parallel and perpendicular to the spin axis of the black hole follow general trends that do not depend strongly on accretion-flow properties. This suggests that fitting Event Horizon Telescope observations with simple geometric models may lead to a reasonably accurate determination of the orientation of the black hole on the plane of the sky. However, in the disk-dominated models, the locations and depths of the minima in the visibility amplitudes are highly variable and are not related simply to the size of the black hole shadow. This suggests that using time-independent models to infer additional black hole parameters, such as the shadow size or the spin magnitude, will be severely affected by the variability of the accretion flow.
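    The link between an image and its visibility amplitudes, which underlies this analysis, is that the visibility is the 2-D Fourier transform of the sky intensity sampled at the (u, v) point of each baseline; in the sketch below a crude ring stands in for a black hole shadow image, and many real-world effects are omitted:

```python
import numpy as np

# Build a ring "image", Fourier transform it, and read off normalized
# visibility amplitudes along a baseline track parallel to one axis.
# The deep minima in the amplitude curve are the features whose
# variability the simulations probe.

n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r = np.hypot(x, y)
image = ((r > 20) & (r < 26)).astype(float)      # crude ring "shadow" image

vis = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
amp = np.abs(vis) / np.abs(vis).max()            # normalized visibility amplitude

# amplitudes along a track of increasing baseline length
track = amp[n // 2, n // 2:n // 2 + 40]
print(float(track[0]), float(track.min()))
```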

  1. Volumetric BOLD fMRI simulation: from neurovascular coupling to multivoxel imaging

    International Nuclear Information System (INIS)

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    The blood oxygenation level-dependent (BOLD) functional magnetic resonance imaging (fMRI) modality has been numerically simulated by calculating single-voxel signals. However, observation of single-voxel signals cannot provide information about the spatial distribution of the signals. Specifically, a single BOLD voxel signal simulation cannot answer the fundamental question: is the magnetic resonance (MR) image a replica of its underlying magnetic susceptibility source? In this paper, we address this problem by proposing a multivoxel volumetric BOLD fMRI simulation model and a susceptibility expression formula for a linear neurovascular coupling process, which allow us to examine the BOLD fMRI procedure from neurovascular coupling to MR image formation. Since MRI technology senses only the magnetism property, we represent a linear neurovascular-coupled BOLD state by a magnetic susceptibility expression formula, which accounts for the parameters of cortical vasculature, intravascular blood oxygenation level, and local neuroactivity. From the susceptibility expression of a BOLD state, we carry out volumetric BOLD fMRI simulation by calculating the fieldmap (established by susceptibility magnetization) and the complex multivoxel MR image (by intravoxel dephasing). Given the predefined susceptibility source and the calculated complex MR image, we compare the MR magnitude (respectively, phase) image with the predefined susceptibility source (respectively, the calculated fieldmap) by spatial correlation. The spatial correlation between the MR magnitude image and the magnetic susceptibility source is about 0.90 for the settings TE = 30 ms, B0 = 3 T, voxel size = 100 microns, vessel radius = 3 microns, and blood volume fraction = 2%. Using these parameter values, the spatial correlation between the MR phase image and the susceptibility-induced fieldmap is close to 1.00. Our simulation results show that the MR magnitude image is not an exact replica of the magnetic susceptibility source.
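    The intravoxel dephasing step can be illustrated with a toy calculation: sub-voxel field offsets accumulate phase over the echo time, and the complex voxel signal is the mean of the sub-voxel phasors, so its magnitude falls as the intra-voxel field spread grows (all numbers below are illustrative, not the paper's settings):

```python
import numpy as np

# Toy intravoxel dephasing: average the phasors of sub-voxel spins whose
# field offsets would come from a susceptibility-induced fieldmap.

GAMMA = 2 * np.pi * 42.58e6      # proton gyromagnetic ratio (rad/s/T)
TE = 30e-3                       # echo time (s)

def voxel_signal(field_offsets_tesla):
    phases = GAMMA * field_offsets_tesla * TE
    return np.mean(np.exp(1j * phases))

rng = np.random.default_rng(0)
homogeneous = np.zeros(1000)                 # no intra-voxel field spread
perturbed = rng.normal(0.0, 5e-8, 1000)      # ~50 nT field spread

print(abs(voxel_signal(homogeneous)), abs(voxel_signal(perturbed)))
```

    The magnitude loss depends on the field spread within the voxel rather than on the susceptibility value itself, which is one way to see why the magnitude image cannot be an exact replica of the susceptibility source.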

  2. A Fast Visible-Infrared Imaging Radiometer Suite Simulator for Cloudy Atmospheres

    Science.gov (United States)

    Liu, Chao; Yang, Ping; Nasiri, Shaima L.; Platnick, Steven; Meyer, Kerry G.; Wang, Chen Xi; Ding, Shouguo

    2015-01-01

    A fast instrument simulator is developed to simulate the observations made in cloudy atmospheres by the Visible Infrared Imaging Radiometer Suite (VIIRS). The correlated k-distribution (CKD) technique is used to compute the transmissivity of absorbing atmospheric gases. The bulk scattering properties of ice clouds used in this study are based on the ice model used for the MODIS Collection 6 ice cloud products. Two fast radiative transfer models based on pre-computed ice cloud look-up tables are used for the VIIRS solar and infrared channels. The accuracy and efficiency of the fast simulator are quantified in comparison with a combination of the rigorous line-by-line (LBLRTM) and discrete ordinate radiative transfer (DISORT) models. Relative errors are less than 2% for simulated TOA reflectances for the solar channels, and the brightness temperature differences for the infrared channels are less than 0.2 K. The simulator is over three orders of magnitude faster than the benchmark LBLRTM+DISORT model. Furthermore, the cloudy atmosphere reflectances and brightness temperatures from the fast VIIRS simulator compare favorably with those from VIIRS observations.
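
The core CKD idea referenced above replaces a line-by-line spectral integral with a short weighted sum over absorption-coefficient bins. A minimal sketch, with invented k values and quadrature weights (not the simulator's tables):

```python
import numpy as np

# Hypothetical 4-point k-distribution for one band: absorption coefficients
# (m^2/kg) with quadrature weights that sum to 1, and an absorber path amount.
k = np.array([1e-3, 1e-2, 1e-1, 1.0])
w = np.array([0.4, 0.3, 0.2, 0.1])
u = 5.0                                   # absorber amount along the path (kg/m^2)

# Band-mean transmissivity: weighted sum of Beer-Lambert terms, one per k bin,
# instead of integrating exp(-k(nu) u) over thousands of spectral lines.
T_ckd = np.sum(w * np.exp(-k * u))
```

With realistic gas tables the same four-term sum stands in for a full line-by-line calculation, which is where the three-orders-of-magnitude speedup comes from.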

  3. Simulation of an image network in a medical image information system

    International Nuclear Information System (INIS)

    Massar, A.D.A.; De Valk, J.P.J.; Reijns, G.L.; Bakker, A.R.

    1985-01-01

    The desirability of an integrated (digital) communication system for medical images is widely accepted. In the USA and in Europe several experimental projects are in progress to realize (a part of) such a system. Among these is the IMAGIS project in the Netherlands. From the conclusions of the preliminary studies performed, some requirements can be formulated that such a system should meet in order to be accepted by its users. For example, the storage resolution of the images should match the maximum resolution of the presently acquired digital images. This determines the amount of data and therefore the storage requirements. Further, the desired images should be available when needed. This time constraint determines the speed requirements to be imposed on the system. As compared to current standards, very large storage capacities and very fast communication media are needed to meet these requirements. By employing caching techniques and suitable data compression schemes for storage, and by carefully choosing the network protocols, raw capacity demands can be alleviated. A communication network is needed to make the imaging system available over a larger area. As the network is very likely to become a major bottleneck for system performance, the effects of varying its attributes have to be carefully studied and analysed. After interesting (although preliminary) results had been obtained using a simulation model for a layered storage structure, it was decided to apply simulation to this problem as well. Effects of network topology, access protocols and buffering strategies will be tested. Changes in performance resulting from changes in various network parameters will be studied. Results of this study at its present state are presented.
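
The kind of network-bottleneck study described here is usually done by discrete-event simulation. As a hedged illustration (a toy M/M/1 queue, not the IMAGIS model; all rates are invented), image-transfer requests arrive at a shared link and the mean waiting time is estimated and can be checked against queueing theory:

```python
import random

def mm1_mean_wait(arrival_rate, service_rate, n=50_000, seed=1):
    """Mean wait before transmission for Poisson arrivals / exponential service."""
    rng = random.Random(seed)
    t, server_free_at, total_wait = 0.0, 0.0, 0.0
    for _ in range(n):
        t += rng.expovariate(arrival_rate)      # next image request arrives
        start = max(t, server_free_at)          # wait if the link is busy
        total_wait += start - t
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n

sim = mm1_mean_wait(arrival_rate=0.8, service_rate=1.0)
theory = 0.8 / (1.0 * (1.0 - 0.8))              # M/M/1: Wq = lambda / (mu*(mu-lambda))
```

Varying topology, protocols, or buffering then amounts to swapping in different service disciplines and measuring the same statistic.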

  4. Image formation simulation for computer-aided inspection planning of machine vision systems

    Science.gov (United States)

    Irgenfried, Stephan; Bergmann, Stephan; Mohammadikaji, Mahsa; Beyerer, Jürgen; Dachsbacher, Carsten; Wörn, Heinz

    2017-06-01

    In this work, a simulation toolset for Computer Aided Inspection Planning (CAIP) of systems for automated optical inspection (AOI) is presented, along with a versatile two-robot setup for verification of simulation and system planning results. The toolset helps to narrow down the large design space of optical inspection systems in interaction with a system expert. The image formation taking place in optical inspection systems is simulated using GPU-based real-time graphics and high-quality off-line rendering. The simulation pipeline allows a stepwise optimization of the system, from fast evaluation of surface patch visibility based on real-time graphics up to evaluation of image processing results based on off-line global illumination calculation. A focus of this work is the dependency of simulation quality on measuring, modeling and parameterizing the optical surface properties of the object to be inspected. The applicability to real-world problems is demonstrated using the example of planning a 3D laser scanner application. Qualitative and quantitative comparisons of synthetic and real images are presented.

  5. Simulating the x-ray image contrast to setup techniques with desired flaw detectability

    Science.gov (United States)

    Koshti, Ajay M.

    2015-04-01

    The paper provides simulation data extending previous work by the author on a model for estimating the detectability of crack-like flaws in radiography. The methodology was developed to support implementation of the NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing the detector resolution; the applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper further describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source size, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack, and demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide the desired flaw detectability. The method is applicable to film radiography, computed radiography, and digital radiography.
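
The contrast part of such a model reduces, at its simplest, to Beer-Lambert attenuation: a crack of depth d along the beam removes material, so the transmitted intensity behind the crack rises relative to sound material. A back-of-envelope sketch with illustrative values (not the calculator's model, which also folds in geometry, source size, and detector response):

```python
import numpy as np

mu = 0.5          # linear attenuation coefficient of the part (1/cm), assumed
t = 2.0           # part thickness along the beam (cm), assumed
d = 0.1           # crack depth along the beam (cm), assumed

I_bg = np.exp(-mu * t)               # intensity behind sound material
I_crack = np.exp(-mu * (t - d))      # the beam through the crack sees less metal
contrast = (I_crack - I_bg) / I_bg   # subject contrast = exp(mu*d) - 1
```

Sweeping d, t, and the beam angle over grids of values is what produces the 3D contrast surfaces the record describes.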

  6. 3D segmentation of scintigraphic images with validation on realistic GATE simulations

    International Nuclear Information System (INIS)

    Burg, Samuel

    2011-01-01

    The objective of this thesis was to propose a new 3D segmentation method for scintigraphic imaging. The first part of the work was to simulate 3D volumes with known ground truth in order to validate one segmentation method against another. Monte Carlo simulations were performed using the GATE software (Geant4 Application for Tomographic Emission). For this, we characterized and modeled the 'γ Imager' gamma camera (Biospace™) by comparing each measurement from a simulated acquisition to its real equivalent. The 'low level' segmentation tool that we developed is based on modeling the levels of the image by probabilistic mixtures. Parameter estimation is done by an SEM algorithm (Stochastic Expectation Maximization). The 3D volume segmentation is achieved by an ICM algorithm (Iterated Conditional Modes). We compared segmentation based on Gaussian and Poisson mixtures to segmentation by thresholding on the simulated volumes. This showed the relevance of the segmentations obtained using probabilistic mixtures, especially those obtained with Poisson mixtures. The latter were used to segment real 18FDG PET images of the brain and to compute descriptive statistics of the different tissues. In order to obtain a 'high level' segmentation method that finds anatomical structures (the necrotic or active part of a tumor, for example), we proposed a process based on the point-process formalism. A feasibility study yielded very encouraging results. (author) [fr
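
A plain EM fit of a two-component Poisson mixture conveys the estimation idea behind the thesis's SEM approach, in simplified form (no stochastic step, 1D synthetic counts rather than a 3D volume, no spatial ICM regularization):

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)
# Synthetic counts: a "background" and a "hot" Poisson population.
x = np.concatenate([rng.poisson(3, 500), rng.poisson(12, 500)])

def pois_logpmf(x, lam):
    return x * np.log(lam) - lam - np.array([lgamma(v + 1) for v in x])

lam = np.array([2.0, 10.0])         # initial Poisson means
pi = np.array([0.5, 0.5])           # initial mixing weights
for _ in range(50):
    # E-step: responsibilities of each component for each count
    logp = np.stack([np.log(pi[k]) + pois_logpmf(x, lam[k]) for k in range(2)])
    r = np.exp(logp - logp.max(0))
    r /= r.sum(0)
    # M-step: update weights and component means
    pi = r.mean(1)
    lam = (r @ x) / r.sum(1)
```

In the stochastic (SEM) variant, the E-step responsibilities are replaced by a random class draw per voxel, which helps avoid poor local maxima.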

  7. Image simulation for automatic license plate recognition

    Science.gov (United States)

    Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José

    2012-01-01

    Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.
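
The second step of the framework (modeling capture transformations and distortions) can be sketched with numpy alone. This is a hypothetical degradation chain with invented parameters, standing in for the paper's estimated distortion models: optical blur, global gain/offset change, and sensor noise applied to a clean synthetic plate image.

```python
import numpy as np

rng = np.random.default_rng(42)
plate = np.zeros((40, 120))
plate[10:30, 20:100] = 1.0          # stand-in for rendered plate "characters"

def capture(img, blur_sigma=1.2, gain=0.8, offset=0.1, noise=0.02):
    """Apply a separable Gaussian blur, then gain/offset, then Gaussian noise."""
    r = int(3 * blur_sigma)
    xs = np.arange(-r, r + 1)
    g = np.exp(-xs**2 / (2 * blur_sigma**2))
    g /= g.sum()
    blurred = np.apply_along_axis(lambda v: np.convolve(v, g, "same"), 0, img)
    blurred = np.apply_along_axis(lambda v: np.convolve(v, g, "same"), 1, blurred)
    degraded = gain * blurred + offset + rng.normal(0, noise, img.shape)
    return np.clip(degraded, 0.0, 1.0)

sim = capture(plate)
```

In the paper's setting, the blur, gain, and noise parameters would be fitted to measurements of real plate images rather than chosen by hand.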

  8. Image simulation using LOCUS

    International Nuclear Information System (INIS)

    Strachan, J.D.; Roberts, J.A.

    1989-09-01

    The LOCUS data base program has been used to simulate images and to solve simple equations. This has been accomplished by making each record (which would normally represent a data entry) represent sequenced or random number pairs.

  9. Application of Monte Carlo method in forward simulation of azimuthal gamma imaging while drilling

    International Nuclear Information System (INIS)

    Yuan Chao; Zhou Cancan; Zhang Feng; Chen Zhi

    2014-01-01

    Monte Carlo simulation is one of the most important numerical simulation methods in nuclear logging. Formation models can be conveniently built with the MCNP code, which provides a simple and effective approach for fundamental studies of nuclear logging. The Monte Carlo method is employed to set up formation models under logging-while-drilling conditions, and the characteristics of azimuthal gamma imaging are simulated. The results show that the azimuthal gamma image exhibits a sinusoidal curve feature. The image can be used to accurately calculate the relative dip angle of the borehole and the thickness of the radioactive formation. A larger relative dip angle of the borehole and a thicker radioactive formation lead to a larger amplitude of the sinusoidal curve in the image. The borehole size has no effect on the calculation of the relative dip angle, but largely affects the determination of formation thickness. The standoff of the logging tool has a great influence on the calculation of the relative dip angle and formation thickness. If the gamma-ray counts meet the demands of counting statistics in nuclear logging, the effect of borehole fluid on the image can be ignored. (authors)
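
The sinusoidal signature can be inverted with a linear least-squares fit: under the usual geometric picture, the bed-crossing depth along the borehole varies as z(φ) = z0 + r·tan(θ)·cos(φ − φ0), so the fitted amplitude A gives the relative dip via tan(θ) = A/r. A sketch with synthetic, noise-free values (the borehole radius and dip are assumed, not from the record):

```python
import numpy as np

r = 0.1                                    # borehole radius (m), assumed
theta_true = np.deg2rad(30.0)              # relative dip angle, assumed
phi = np.linspace(0, 2 * np.pi, 16, endpoint=False)   # azimuthal sectors
z = 2.0 + r * np.tan(theta_true) * np.cos(phi - 0.7)  # picked boundary depths

# Fit z = c0 + c1*cos(phi) + c2*sin(phi) by linear least squares.
A_mat = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
c = np.linalg.lstsq(A_mat, z, rcond=None)[0]
amplitude = np.hypot(c[1], c[2])           # sinusoid amplitude A
theta_est = np.arctan2(amplitude, r)       # tan(theta) = A / r
```

With counting noise on the picked depths, the same fit averages it out over all azimuthal sectors.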

  10. Optical image reconstruction using DC data: simulations and experiments

    International Nuclear Information System (INIS)

    Huabei Jiang; Paulsen, K.D.; Oesterberg, U.L.

    1996-01-01

    In this paper, we explore optical image formation using a diffusion approximation of light propagation in tissue which is modelled with a finite-element method for optically heterogeneous media. We demonstrate successful image reconstruction based on absolute experimental DC data obtained with a continuous wave 633 nm He-Ne laser system and a 751 nm diode laser system in laboratory phantoms having two optically distinct regions. The experimental systems used exploit a tomographic type of data collection scheme that provides information from which a spatially variable optical property map is deduced. Reconstruction of scattering coefficient only and simultaneous reconstruction of both scattering and absorption profiles in tissue-like phantoms are obtained from measured and simulated data. Images with different contrast levels between the heterogeneity and the background are also reported and the results show that although it is possible to obtain qualitative visual information on the location and size of a heterogeneity, it may not be possible to quantitatively resolve contrast levels or optical properties using reconstructions from DC data only. Sensitivity of image reconstruction to noise in the measurement data is investigated through simulations. The application of boundary constraints has also been addressed. (author)

  11. The establishment of Digital Image Capture System(DICS) using conventional simulator

    International Nuclear Information System (INIS)

    Oh, Tae Sung; Park, Jong Il; Byun, Young Sik; Shin, Hyun Kyoh

    2004-01-01

    The simulator is used to define the patient field and to ensure that the treatment field encompasses the required anatomy during normal patient movement, such as breathing. The latest simulators provide real-time display of still, fluoroscopic and digitized images, but conventional simulators do not. The purpose of this study is to introduce a digital image capture system (DICS) using a conventional simulator, and to present clinical cases using digitally captured still and fluoroscopic images. We connect the video signal cable to the video terminal at the back of the simulator monitor, and connect the video jack to an A/D converter. After connecting the converter jack to a computer, we can acquire still images and record fluoroscopic images with an image capture program. The data created with this system can be used in patient treatment and modified for verification using image processing software (e.g., Photoshop, PaintShop). DICS could be established easily and economically. DICS images were helpful for simulation and proved a powerful tool in the evaluation of department-specific patient positioning. Because commercial simulators based on digital capture are very expensive, it is not easy to establish a digital simulator in most hospitals. A DICS built on a conventional simulator makes it possible to obtain images of practical use comparable to those of a high-cost digital simulator, and to study many clinical cases when combined with other software programs.

  12. Dosimetric control of radiotherapy treatments by Monte Carlo simulation of transmitted portal dose image

    International Nuclear Information System (INIS)

    Badel, Jean-Noel

    2009-01-01

    This research thesis addresses the dosimetric control of radiotherapy treatments using amorphous silicon digital portal imaging. In the first part, the author reports an analysis of the dosimetric capabilities of the imager (iViewGT) used in the radiotherapy department. The stability of the imager response over the short and long term was studied. A relationship between the image grey level and the dose was established for a reference irradiation field. The influence of irradiation parameters on the grey-level variation with respect to dose was assessed. The results show that this system can be used for dosimetry provided that a precise calibration is performed, taking the most influential irradiation parameters into account, i.e. photon beam nominal energy, field size, and patient thickness. The author then reports the development of a Monte Carlo simulation to model the imager response. It models the accelerator head by a generalized point source; space and energy distributions of photons are calculated. This modelling can also be applied to the calculation of dose distributions within a patient, or to the study of physical interactions in the accelerator head. Finally, the author explores a new approach to portal dose image prediction within the frame of in vivo dosimetric control: the image transmitted through the patient is computed by Monte Carlo simulation, and the portal image of the irradiation field without the patient is measured. Validation experiments are reported, and remaining problems are highlighted (computation time, improvement of the collimator simulation) [fr
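
The grey-level-to-dose relationship established in the first part amounts to a calibration fit for the reference field. A minimal sketch, assuming a linear imager response with invented measurement pairs (not the iViewGT data):

```python
import numpy as np

# Hypothetical calibration measurements for a reference field:
dose = np.array([0.2, 0.5, 1.0, 1.5, 2.0])                 # delivered dose (Gy)
grey = np.array([410.0, 1010.0, 2000.0, 3010.0, 3990.0])   # mean EPID pixel value

# Fit grey = slope * dose + intercept, then invert it to read dose from images.
slope, intercept = np.polyfit(dose, grey, 1)
predicted_dose = (grey - intercept) / slope
```

In practice, separate calibrations (or correction factors) are needed per beam energy, field size, and patient thickness, as the record notes.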

  13. Multispectral simulation environment for modeling low-light-level sensor systems

    Science.gov (United States)

    Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.

    1998-11-01

    Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, which is a first-principles-based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions.
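
Each stage of such a sensor model is essentially "blur by the stage MTF, amplify, add noise, clip". A hedged single-stage sketch with invented gains and cutoffs (a Gaussian MTF stand-in, not the paper's measured curves):

```python
import numpy as np

rng = np.random.default_rng(7)
radiance = rng.uniform(0.0, 1.0, (64, 64))     # stand-in for a DIRSIG radiance field

def stage(img, mtf_sigma=0.15, gain=200.0, full_well=300.0):
    """One sensor stage: Fourier-domain MTF, photon shot noise, saturation."""
    fy, fx = np.meshgrid(np.fft.fftfreq(img.shape[0]),
                         np.fft.fftfreq(img.shape[1]), indexing="ij")
    mtf = np.exp(-(fx**2 + fy**2) / (2 * mtf_sigma**2))    # assumed Gaussian MTF
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * mtf))
    photons = rng.poisson(np.clip(blurred, 0, None) * gain)  # shot noise
    return np.minimum(photons, full_well)                    # saturation clip

frame = stage(radiance)
```

Chaining several such stages with stage-specific MTFs and noise sources reproduces the multi-stage chain (and, via the saturation clip, artifacts like blooming precursors) described above.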

  14. Monte Carlo simulation of PET images for injection dose optimization

    Czech Academy of Sciences Publication Activity Database

    Boldyš, Jiří; Dvořák, Jiří; Skopalová, M.; Bělohlávek, O.

    2013-01-01

    Roč. 29, č. 9 (2013), s. 988-999 ISSN 2040-7939 R&D Projects: GA MŠk 1M0572 Institutional support: RVO:67985556 Keywords : positron emission tomography * Monte Carlo simulation * biological system modeling * image quality Subject RIV: FD - Oncology ; Hematology Impact factor: 1.542, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/boldys-0397175.pdf

  15. Comparison of Model Predictions of Image Quality with Results of Clinical Trials in Chest and Lumbar Spine Screen-film Imaging

    International Nuclear Information System (INIS)

    Sandborg, M.; McVey, G.; Dance, D.R.; Carlsson, G.A.

    2000-01-01

    The ability to predict image quality from known physical and technical parameters is a prerequisite for successful dose optimisation. In this study, imaging systems were simulated using a Monte Carlo model that includes a voxelised human anatomy and quantifies image quality in terms of contrast and signal-to-noise ratio for 5-6 anatomical details included in the anatomy. The imaging systems used in clinical trials were simulated, and the rankings of the systems by the model and by radiologists were compared. For chest PA, both the model and the results of the trial show that using a high maximum optical density was significantly better than using a low one. The model predicts that a good system is characterised by a large dynamic range and a high contrast of the blood vessels in the retrocardiac area. The rankings by the radiologists and the model agreed for the lumbar spine AP. (author)

  16. Epp: A C++ EGSnrc user code for x-ray imaging and scattering simulations

    International Nuclear Information System (INIS)

    Lippuner, Jonas; Elbakri, Idris A.; Cui Congwu; Ingleby, Harry R.

    2011-01-01

    Purpose: Easy particle propagation (Epp) is a user code for the EGSnrc code package based on the C++ class library egspp. A main feature of egspp (and Epp) is the ability to use analytical objects to construct simulation geometries. The authors developed Epp to facilitate the simulation of x-ray imaging geometries, especially in the case of scatter studies. While direct use of egspp requires knowledge of C++, Epp requires no programming experience. Methods: Epp's features include calculation of dose deposited in a voxelized phantom and photon propagation to a user-defined imaging plane. Projection images of primary, single Rayleigh scattered, single Compton scattered, and multiple scattered photons may be generated. Epp input files can be nested, allowing for the construction of complex simulation geometries from more basic components. To demonstrate the imaging features of Epp, the authors simulate 38 keV x rays from a point source propagating through a water cylinder 12 cm in diameter, using both analytical and voxelized representations of the cylinder. The simulation generates projection images of primary and scattered photons at a user-defined imaging plane. The authors also simulate dose scoring in the voxelized version of the phantom in both Epp and DOSXYZnrc and examine the accuracy of Epp using the Kawrakow-Fippel test. Results: The results of the imaging simulations with Epp using voxelized and analytical descriptions of the water cylinder agree within 1%. The results of the Kawrakow-Fippel test suggest good agreement between Epp and DOSXYZnrc. Conclusions: Epp provides the user with useful features, including the ability to build complex geometries from simpler ones and the ability to generate images of scattered and primary photons. There are no inherent computational time savings arising from Epp, except for those arising from egspp's ability to use analytical representations of simulation geometries. 
Epp agrees with DOSXYZnrc in dose calculation, since

  17. Simulated Thin-Film Growth and Imaging

    Science.gov (United States)

    Schillaci, Michael

    2001-06-01

    Thin films have become the cornerstone of the electronics, telecommunications, and broadband markets. A list of potential products includes: computer boards and chips, satellites, cell phones, fuel cells, superconductors, flat panel displays, optical waveguides, building and automotive windows, food and beverage plastic containers, metal foils, pipe plating, vision ware, manufacturing equipment and turbine engines. For all of these reasons a basic understanding of the physical processes involved in both growing and imaging thin films can provide a wonderful research project for advanced undergraduate and first-year graduate students. After rudimentary two- and three-dimensional thin-film models incorporating ballistic deposition and nearest-neighbor Coulomb-type interactions are produced, the quantum-mechanical tunneling equations are used to produce simulated scanning tunneling microscope (SSTM) images of the films. A discussion of computational platforms, languages, and software packages that may be used to accomplish similar results is also given.
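
The growth half of such a project can start from the classic 1+1-dimensional ballistic deposition model: particles fall onto random columns and stick at the height of the tallest neighbouring surface, producing a rough growing film. A minimal sketch (lattice size and particle count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 200, 20_000                 # substrate width, particles to deposit
h = np.zeros(L, dtype=int)         # film height profile

for _ in range(N):
    i = rng.integers(L)                            # random landing column
    left, right = h[(i - 1) % L], h[(i + 1) % L]   # periodic boundaries
    h[i] = max(h[i] + 1, left, right)              # stick to the tallest neighbour

roughness = h.std()                # interface width after growth
```

Tracking how the interface width grows with the number of deposited layers is the standard student exercise; the sticking rule is also what creates the overhangs and voids characteristic of ballistic films.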

  18. Monte Carlo simulation of grating-based neutron phase contrast imaging at CPHS

    International Nuclear Information System (INIS)

    Zhang Ran; Chen Zhiqiang; Huang Zhifeng; Xiao Yongshun; Wang Xuewu; Wie Jie; Loong, C.-K.

    2011-01-01

    Since the launching of the Compact Pulsed Hadron Source (CPHS) project of Tsinghua University in 2009, work has begun on the design and engineering of an imaging/radiography instrument for the neutron source provided by CPHS. The instrument will perform basic tasks such as transmission imaging and computerized tomography. Additionally, we include in the design the utilization of coded-aperture and grating-based phase contrast methodology, as well as the options of prompt gamma-ray analysis and neutron-energy selective imaging. Previously, we had implemented the hardware and data-analysis software for grating-based X-ray phase contrast imaging. Here, we investigate Geant4-based Monte Carlo simulations of neutron refraction phenomena and then model the grating-based neutron phase contrast imaging system according to the classic-optics-based method. The simulated results of retrieving the phase-shift gradient information with a five-step phase-stepping approach indicate the feasibility of grating-based neutron phase contrast imaging as an option for the cold neutron imaging instrument at the CPHS.
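
The five-step phase-stepping retrieval mentioned above reduces, per detector pixel, to a one-harmonic Fourier fit: stepping the grating through one period gives intensities I_k = a + b·cos(2πk/5 + φ), and the differential phase φ is the angle of the first Fourier component. A sketch with synthetic fringe parameters:

```python
import numpy as np

steps = np.arange(5)                       # five grating positions over one period
a, b, phi_true = 100.0, 20.0, 0.8          # mean, fringe amplitude, phase (synthetic)
I = a + b * np.cos(2 * np.pi * steps / 5 + phi_true)

c = np.sum(I * np.exp(-2j * np.pi * steps / 5))   # first Fourier component
phi_ret = np.angle(c)                              # retrieved differential phase
visibility = 2 * np.abs(c) / np.sum(I)             # retrieved fringe visibility b/a
```

Because the five steps span exactly one period, the DC term and higher harmonics cancel in the sum, so the retrieval is exact for an ideal sinusoidal fringe.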

  19. Kinetic Simulation and Energetic Neutral Atom Imaging of the Magnetosphere

    Science.gov (United States)

    Fok, Mei-Ching H.

    2011-01-01

    Advanced simulation tools and measurement techniques have been developed to study the dynamic magnetosphere and its response to drivers in the solar wind. The Comprehensive Ring Current Model (CRCM) is a kinetic code that solves for the 3D spatial, energy, and pitch-angle distributions of energetic ions and electrons. Energetic Neutral Atom (ENA) imagers have been carried on past and current satellite missions, and the global morphology of energetic ions has been revealed by the observed ENA images. We have combined simulation and ENA analysis techniques to study the development of ring current ions during magnetic storms and substorms. We identify the timing and location of particle injection and loss, and examine the evolution of ion energy and pitch-angle distributions during different phases of a storm. In this talk we will discuss the findings from our ring current studies and how our simulation and ENA analysis tools can be applied to the upcoming TRIO-CINEMA mission.

  20. Synthetic aperture radar imaging simulator for pulse envelope evaluation

    Science.gov (United States)

    Balster, Eric J.; Scarpino, Frank A.; Kordik, Andrew M.; Hill, Kerry L.

    2017-10-01

    A simulator for spotlight synthetic aperture radar (SAR) image formation is presented. The simulator produces radar returns from a virtual radar positioned at an arbitrary distance and altitude. The radar returns are produced from a source image, where the return is a weighted summation of linear frequency-modulated (LFM) pulse signals delayed by the distance of each pixel in the image to the radar. The imagery is resampled into polar format to ensure consistent range profiles to the position of the radar. The SAR simulator provides a capability enabling the objective analysis of formed SAR imagery, comparing it to an original source image. This capability allows for analysis of various SAR signal processing techniques previously evaluated by impulse response function (IPF) analysis. The results suggest that IPF analysis provides results that may not be directly related to formed SAR image quality. Instead, the SAR simulator uses image quality metrics, such as peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), for formed SAR image quality analysis. To showcase the capability of the SAR simulator, it is used to investigate the performance of various envelopes applied to LFM pulses. A power-raised cosine window with a power p=0.35 and roll-off factor of β=0.15 is shown to maximize the quality of the formed SAR images, improving PSNR by 0.84 dB and SSIM by 0.06 on average relative to images formed using a rectangular pulse.
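
The pulse-envelope comparison can be sketched at the single-pulse level: shape an LFM chirp with a raised-cosine window taken to a power p, then compare matched-filter (autocorrelation) sidelobes against the rectangular pulse. The window definition below is an assumed Tukey-like form raised to a power; the paper's exact definition may differ, though its parameters are p = 0.35 and β = 0.15.

```python
import numpy as np

def power_raised_cosine(n, beta=0.15, p=0.35):
    """Raised-cosine (Tukey-style) taper on the outer beta fraction, raised to p."""
    t = np.linspace(-0.5, 0.5, n)
    w = np.ones(n)
    edge = np.abs(t) > (1 - beta) / 2
    w[edge] = 0.5 * (1 + np.cos(np.pi / beta * (np.abs(t[edge]) - (1 - beta) / 2)))
    return w ** p

n = 1024
t = np.linspace(-0.5, 0.5, n)
lfm = np.exp(1j * np.pi * 200 * t**2)          # LFM chirp, arbitrary rate
pulse = power_raised_cosine(n) * lfm           # windowed transmit pulse

def peak_sidelobe_db(x):
    """Peak sidelobe of the matched-filter output, relative to the mainlobe."""
    ac = np.abs(np.correlate(x, x, "full"))
    peak = ac.max()
    main = ac.argmax()
    ac[max(0, main - 8):main + 9] = 0          # mask the mainlobe region
    return 20 * np.log10(ac.max() / peak)

psl_win = peak_sidelobe_db(pulse)
psl_rect = peak_sidelobe_db(lfm)
```

The simulator's point, though, is that such IPF-style numbers do not necessarily track formed-image PSNR/SSIM, which is why the window was judged on full image formation.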

  1. Image and Dose Simulation in Support of New Imaging Modalities

    International Nuclear Information System (INIS)

    Kuruvilla Verghese

    2002-01-01

    This report summarizes the highlights of the research performed under the 2-year NEER grant from the Department of Energy. The primary outcome of the work was a new Monte Carlo code, MCMIS-DS (Monte Carlo for Mammography Image Simulation with Differential Sampling). The code was written to generate simulated images and dose distributions for two new digital x-ray imaging modalities, namely, synchrotron imaging (SI) and a slot-geometry digital mammography system called the Fisher Senoscan. A differential sampling scheme was added to the code to generate, in a single execution, multiple images that include variations in the parameters of the measurement system and the object. The code serves multiple purposes: (1) to answer questions regarding the contribution of scattered photons to images, (2) to support design optimization studies, and (3) to perform up to second-order perturbation studies assessing the effects of variations in design parameters and/or physical parameters of the object (the breast) without having to re-run the code for each set of varied parameters. The accuracy and fidelity of the code were validated by a large variety of benchmark studies using published data and experimental results from mammography phantoms on both imaging modalities.

  2. MO-G-17A-04: Internal Dosimetric Calculations for Pediatric Nuclear Imaging Applications, Using Monte Carlo Simulations and High-Resolution Pediatric Computational Models

    Energy Technology Data Exchange (ETDEWEB)

    Papadimitroulas, P; Kagadis, GC [University of Patras, Rion, Ahaia (Greece); Loudos, G [Technical Educational Institute of Athens, Aigaleo, Attiki (Greece)

    2014-06-15

    Purpose: Our purpose is to evaluate the absorbed dose administered in pediatric nuclear imaging studies. Monte Carlo simulations incorporating pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the “IT'IS Foundation”. The series of phantoms used in our work includes 6 models in the range of 5-14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms into GATE simulations. The resolution of the phantoms was set to 2 mm³. The most important organ densities were simulated according to the GATE “Materials Database”. Several radiopharmaceuticals used in SPECT and PET applications are tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept below 5%. The S-factors for each target organ are calculated in Gy/(MBq*sec), with the highest dose being absorbed in kidneys and pancreas (9.29*10^10 and 0.15*10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and
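
The S-factor formalism used here reduces organ dose to D = Ã·S: the time-integrated activity in a source region times a precomputed dose-per-decay factor for each target organ. A back-of-envelope sketch with placeholder numbers (the administered activity and S-factor below are invented, not the paper's simulated values), assuming physical decay only with no biological washout:

```python
import math

A0 = 100.0e6                        # administered activity (Bq), hypothetical
half_life_s = 6.0 * 3600.0          # Tc-99m physical half-life (s)

# Time-integrated activity for pure physical decay: A_tilde = A0 * T1/2 / ln(2)
A_tilde = A0 * half_life_s / math.log(2)

S_kidney = 2.0e-16                  # Gy/(Bq*s), invented S-factor for illustration
dose_kidney_mGy = A_tilde * S_kidney * 1e3
```

The Monte Carlo work in the record is what supplies the S-factors themselves; real biodistributions replace the pure-decay Ã with organ-specific time-activity integrals.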

  3. MO-G-17A-04: Internal Dosimetric Calculations for Pediatric Nuclear Imaging Applications, Using Monte Carlo Simulations and High-Resolution Pediatric Computational Models

    International Nuclear Information System (INIS)

    Papadimitroulas, P; Kagadis, GC; Loudos, G

    2014-01-01

    Purpose: Our purpose is to evaluate the administered absorbed dose in pediatric nuclear imaging studies. Monte Carlo simulations incorporating pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the “IT'IS Foundation”. The series of phantoms used in our work includes 6 models in the range of 5–14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms into GATE simulations. The resolution of the phantoms was set to 2 mm³. The most important organ densities were simulated according to the GATE “Materials Database”. Several radiopharmaceuticals used in SPECT and PET applications are being tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas, and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept below 5%. The S-factors for each target organ are calculated in Gy/(MBq·s), with the highest dose absorbed in the kidneys and pancreas (9.29×10^10 and 0.15×10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children's computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and evaluating the
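The S-factors reported above feed the standard MIRD dose equation, D(target) = Σ_source Ã(source) · S(target ← source), where Ã is the cumulated activity in the source organ. A minimal sketch in Python, with entirely hypothetical numbers (the abstract's actual GATE-derived S-factors and activity maps are not reproduced here):

```python
# MIRD-formalism organ dose: D(target) = sum over source organs of
# cumulated activity [MBq*s] times S(target <- source) [Gy/(MBq*s)].
# All numbers below are illustrative placeholders, not study values.

s_factors = {  # Gy/(MBq*s), hypothetical
    ("kidneys", "kidneys"): 1.0e-4,
    ("kidneys", "liver"): 2.0e-6,
}
cumulated_activity = {  # MBq*s, hypothetical
    "kidneys": 5.0e4,
    "liver": 8.0e4,
}

def absorbed_dose(target, s_factors, cumulated_activity):
    """Absorbed dose (Gy) to a target organ from all source organs."""
    return sum(a_tilde * s_factors.get((target, src), 0.0)
               for src, a_tilde in cumulated_activity.items())

dose_gy = absorbed_dose("kidneys", s_factors, cumulated_activity)
```

In the study itself the S-factors come out of the Monte Carlo transport rather than a hand-entered table; the summation step is the same.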

  4. Modeling laser speckle imaging of perfusion in the skin (Conference Presentation)

    Science.gov (United States)

    Regan, Caitlin; Hayakawa, Carole K.; Choi, Bernard

    2016-02-01

    Laser speckle imaging (LSI) enables visualization of relative blood flow and perfusion in the skin. It is frequently applied to monitor treatment of vascular malformations such as port wine stain birthmarks, and to measure changes in perfusion due to peripheral vascular disease. We developed a computational Monte Carlo simulation of laser speckle contrast imaging to quantify how tissue optical properties, blood vessel depths and speeds, and tissue perfusion affect speckle contrast values originating from coherent excitation. The simulated tissue geometry consisted of multiple layers to simulate the skin, or incorporated an inclusion such as a vessel or tumor at different depths. Our simulation used a 30 × 30 mm uniform flat light source to optically excite the region of interest in our sample, to better mimic wide-field imaging. We used our model to simulate how dynamically scattered photons from a buried blood vessel affect speckle contrast at different lateral distances (0–1 mm) away from the vessel, and how these speckle contrast changes vary with depth (0–1 mm) and flow speed (0–10 mm/s). We applied the model to simulate perfusion in the skin, and observed how different optical properties, such as epidermal melanin concentration (1–50%), affected speckle contrast. We simulated perfusion during a systolic forearm occlusion and found that contrast decreased by 35% (exposure time = 10 ms). Monte Carlo simulations of laser speckle contrast give us a tool to quantify which regions of the skin are probed with laser speckle imaging, and to measure how the tissue optical properties and blood flow affect the resulting images.
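The contrast values such a simulation produces are the usual local speckle contrast, K = σ/⟨I⟩, computed over a small sliding window of the raw speckle image; faster flow blurs the speckle over the exposure and lowers K. A minimal NumPy sketch (window size and test image are illustrative, not from the paper):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(intensity, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window,
    the quantity an LSI perfusion map displays."""
    patches = sliding_window_view(intensity, (window, window))
    mean = patches.mean(axis=(-1, -2))
    std = patches.std(axis=(-1, -2))
    return std / np.maximum(mean, 1e-12)

rng = np.random.default_rng(0)
# Fully developed static speckle has a negative-exponential intensity
# distribution, for which K is close to 1.
static = rng.exponential(scale=1.0, size=(64, 64))
K = speckle_contrast(static)
```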

  5. Advanced modeling in positron emission tomography using Monte Carlo simulations for improving reconstruction and quantification

    International Nuclear Information System (INIS)

    Stute, Simon

    2010-01-01

    Positron Emission Tomography (PET) is a medical imaging technique that plays a major role in oncology, especially using 18F-Fluoro-Deoxyglucose. However, PET images suffer from modest spatial resolution and high noise. As a result, there is still no consensus on how tumor metabolically active volume and tumor uptake should be characterized. In the meantime, research groups keep producing new methods for such characterizations that need to be assessed. A Monte Carlo simulation based method has been developed to produce simulated PET images of patients suffering from cancer, indistinguishable from clinical images, and for which all parameters are known. The method uses high resolution PET images from patient acquisitions, from which the physiological heterogeneous activity distribution can be modeled. It was shown that the performance of quantification methods on such highly realistic simulated images is significantly lower and more variable than on simple phantom studies. Fourteen different quantification methods were also compared in realistic conditions using a group of such simulated patients. In addition, the proposed method was extended to simulate serial PET scans in the context of patient monitoring, including a modeling of the tumor changes, as well as the variability over time of the non-tumoral physiological activity distribution. Monte Carlo simulations were also used to study the detection probability inside the crystals of the tomograph. A model of the crystal response was derived and included in the system matrix involved in tomographic reconstruction. The resulting reconstruction method was compared with other sophisticated methods for modeling the detector response in image space proposed in the literature. We demonstrated the superiority of the proposed method over equivalent approaches on simulated data, and illustrated its robustness on clinical data. For a same noise level, it is possible to reconstruct PET images offering a
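Including the crystal response in the system matrix presupposes the standard iterative reconstruction machinery, in which the system matrix A is applied forward and backward each iteration. A toy MLEM sketch with a small dense matrix (not the authors' detector model; a real A would be sparse and enormous):

```python
import numpy as np

def mlem(system_matrix, measured, n_iter=50):
    """Generic MLEM update: x <- x * A^T(y / Ax) / A^T 1.
    The abstract's method refines A with a crystal-response model derived
    from Monte Carlo; here A is just a tiny invertible toy matrix."""
    A = system_matrix
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)  # A^T 1, sensitivity per voxel
    for _ in range(n_iter):
        proj = A @ x
        ratio = measured / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

A = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.2],
              [0.2, 0.0, 1.0]])
true_x = np.array([2.0, 1.0, 3.0])
y = A @ true_x                      # noise-free projections
est = mlem(A, y, n_iter=2000)       # converges to the true activity
```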

  6. Meteosat third generation imager: simulation of the flexible combined imager instrument chain

    Science.gov (United States)

    Just, Dieter; Gutiérrez, Rebeca; Roveda, Fausto; Steenbergen, Theo

    2014-10-01

    The Meteosat Third Generation (MTG) Programme is the next generation of European geostationary meteorological systems. The first MTG satellite, MTG-I1, which is scheduled for launch at the end of 2018, will host two imaging instruments: the Flexible Combined Imager (FCI) and the Lightning Imager. The FCI will provide continuation of the SEVIRI imager operations on the current Meteosat Second Generation satellites (MSG), but with an improved spatial, temporal and spectral resolution, not dissimilar to GOES-R (of NASA/NOAA). Unlike SEVIRI on the spinning MSG spacecraft, the FCI will be mounted on a 3-axis stabilised platform and a 2-axis tapered scan will provide a full coverage of the Earth in 10 minute repeat cycles. Alternatively, a rapid scanning mode can cover smaller areas, but with a better temporal resolution of up to 2.5 minutes. In order to assess some of the data acquisition and processing aspects which will apply to the FCI, a simplified end-to-end imaging chain prototype was set up. The simulation prototype consists of four different functional blocks:
    - A function for the generation of FCI-like reference images
    - An image acquisition simulation function for the FCI Line-of-Sight calculation and swath generation
    - A processing function that reverses the swath generation process by rectifying the swath data
    - An evaluation function for assessing the quality of the processed data with respect to the reference images
    This paper presents an overview of the FCI instrument chain prototype, covering instrument characteristics, reference image generation, image acquisition simulation, and processing aspects. In particular, it provides a detailed description of the generation of the reference images, highlighting innovative features, but also limitations. This is followed by a description of the image acquisition simulation process, and the rectification and evaluation function. The latter two are described in more detail in a separate paper. Finally, results

  7. Comparison of image deconvolution algorithms on simulated and laboratory infrared images

    Energy Technology Data Exchange (ETDEWEB)

    Proctor, D. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.
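Of the compared algorithms, (unaccelerated) Lucy-Richardson has the most compact form: the estimate is multiplicatively corrected by the back-projected ratio of the data to the re-blurred estimate. A 1-D sketch, with an illustrative test scene and iteration count (not the paper's settings):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=100):
    """Classic Richardson-Lucy deconvolution, 1-D for brevity:
    est <- est * conv(blurred / conv(est, psf), psf_flipped)."""
    def conv(a, b):
        return np.convolve(a, b, mode="same")
    psf_flipped = psf[::-1]
    est = np.full_like(blurred, blurred.mean())  # flat positive start
    for _ in range(n_iter):
        ratio = blurred / np.maximum(conv(est, psf), 1e-12)
        est *= conv(ratio, psf_flipped)
    return est

psf = np.array([0.25, 0.5, 0.25])               # normalized blur kernel
truth = np.zeros(32)
truth[10], truth[20] = 4.0, 2.0                 # two point sources
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=300)
```

The multiplicative update keeps the estimate nonnegative, which is one reason Lucy-Richardson is popular for photon-limited infrared data.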

  8. The Research of Optical Turbulence Model in Underwater Imaging System

    Directory of Open Access Journals (Sweden)

    Liying Sun

    2014-01-01

    Full Text Available In order to research the effect of turbulence on underwater imaging systems and image restoration, an underwater turbulence model is simulated by computational fluid dynamics. The model is obtained at different underwater turbulence intensities and contains the pressure data that influence the refractive-index distribution. When the pressure values are converted to refractive index with the refraction formula, the refractive-index distribution is obtained. At a given turbulence intensity, the refractive-index distribution presents a gradient over the whole region, with disorder and abrupt changes in local regions. As the turbulence intensity increases, the overall variation of the refractive index across the image grows larger, and the refractive index changes more violently in local regions. All of the above is illustrated by simulation results using the ray tracing method and the turbulent refractive-index model. Analysis at different turbulence intensities proves that turbulence causes image distortion and increases noise.

  9. Texture Based Quality Analysis of Simulated Synthetic Ultrasound Images Using Local Binary Patterns †

    Directory of Open Access Journals (Sweden)

    Prerna Singh

    2017-12-01

    Full Text Available Speckle noise reduction is an important area of research in the field of ultrasound image processing. Several algorithms for speckle noise characterization and analysis have been recently proposed in the area. Synthetic ultrasound images can play a key role in noise evaluation methods as they can be used to generate a variety of speckle noise models under different interpolation and sampling schemes, and can also provide valuable ground truth data for estimating the accuracy of the chosen methods. However, not much work has been done in the area of modeling synthetic ultrasound images, and in simulating speckle noise generation to get images that are as close as possible to real ultrasound images. An important aspect of simulated synthetic ultrasound images is the requirement for extensive quality assessment for ensuring that they have the texture characteristics and gray-tone features of real images. This paper presents texture feature analysis of synthetic ultrasound images using local binary patterns (LBP and demonstrates the usefulness of a set of LBP features for image quality assessment. Experimental results presented in the paper clearly show how these features could provide an accurate quality metric that correlates very well with subjective evaluations performed by clinical experts.
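The LBP features used for such quality assessment derive from the basic 8-neighbour operator: each pixel is encoded by thresholding its neighbours against the centre value, and the histogram of codes serves as the texture descriptor. A minimal sketch of the plain (non-uniform, non-rotation-invariant) variant:

```python
import numpy as np

def lbp_8neighbors(img):
    """Basic 8-neighbour local binary pattern: each interior pixel gets an
    8-bit code, one bit per neighbour, set when neighbour >= centre."""
    c = img[1:-1, 1:-1]
    neighbors = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                 img[1:-1, 2:],  img[2:, 2:],    img[2:, 1:-1],
                 img[2:, 0:-2],  img[1:-1, 0:-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbors):
        code |= (n >= c).astype(np.uint8) << bit
    return code

# On a monotonic ramp every interior pixel sees the same neighbour pattern,
# so all codes are identical -- a handy sanity check.
img = np.arange(25, dtype=float).reshape(5, 5)
codes = lbp_8neighbors(img)
hist = np.bincount(codes.ravel(), minlength=256)  # LBP texture histogram
```

Comparing such histograms between simulated and real ultrasound images is one way to quantify the texture similarity the abstract describes.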

  10. Image based Monte Carlo modeling for computational phantom

    International Nuclear Information System (INIS)

    Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.

    2013-01-01

    Full text of the publication follows. Evaluating the effects of ionizing radiation and the risk of radiation exposure on the human body has become one of the most important issues in the radiation protection and radiotherapy fields, helping to avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct a more realistic computational phantom. However, manual description and verification of models for Monte Carlo (MC) simulation are tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female called Rad-HUMAN was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, faithfully representing the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in the Treatment Plan System (TPS), as well as radiation exposure for the human body in radiation protection. (authors)

  11. Validation of a low dose simulation technique for computed tomography images.

    Directory of Open Access Journals (Sweden)

    Daniela Muenzel

    Full Text Available PURPOSE: Evaluation of a new software tool for the generation of simulated low-dose computed tomography (CT) images from an original higher-dose scan. MATERIALS AND METHODS: Original CT scan data (100 mAs, 80 mAs, 60 mAs, 40 mAs, 20 mAs, 10 mAs; 100 kV) of a swine were acquired (approved by the regional governmental commission for animal protection). Simulations of CT acquisition at a lower dose (simulated 10-80 mAs) were calculated using a low-dose simulation algorithm. The simulations were compared to the originals of the same dose level with regard to density values and image noise. Four radiologists assessed the realistic visual appearance of the simulated images. RESULTS: Image characteristics of the simulated low-dose scans were similar to the originals. The mean overall discrepancy of image noise and CT values was -1.2% (range -9% to 3.2%) and -0.2% (range -8.2% to 3.2%), respectively (p>0.05). Confidence intervals of the discrepancies ranged between 0.9-10.2 HU (noise) and 1.9-13.4 HU (CT values), without significant differences (p>0.05). Subjective observer evaluation of image appearance showed no visually detectable difference. CONCLUSION: Simulated low-dose images showed excellent agreement with the originals concerning image noise, CT density values, and subjective assessment of visual appearance. An authentic low-dose simulation opens up opportunities for staff education, protocol optimization and the introduction of new techniques.
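The premise behind such tools is that quantum noise variance scales roughly inversely with the tube current-time product (mAs). The validated algorithm works on raw projection data; the sketch below shows only the crude image-domain version of that idea, with hypothetical numbers:

```python
import numpy as np

def simulate_low_dose(image_hu, mAs_orig, mAs_target, sigma_orig, rng):
    """Toy image-domain low-dose simulation: noise variance ~ 1/mAs, so we
    add zero-mean Gaussian noise whose variance raises sigma_orig^2 to
    sigma_orig^2 * (mAs_orig / mAs_target). Real low-dose simulators
    (like the validated tool in the abstract) inject noise into the
    projection data instead; this is only the first-order approximation."""
    var_extra = sigma_orig**2 * (mAs_orig / mAs_target - 1.0)
    noise = rng.normal(0.0, np.sqrt(var_extra), size=image_hu.shape)
    return image_hu + noise

rng = np.random.default_rng(1)
img = np.zeros((128, 128))  # uniform water phantom, 0 HU, noise-free here
low = simulate_low_dose(img, mAs_orig=100, mAs_target=20,
                        sigma_orig=5.0, rng=rng)
# 100 -> 20 mAs quintuples the variance: total sigma should be ~10 HU.
```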

  12. Dynamic 99mTc-MAG3 renography: images for quality control obtained by combining pharmacokinetic modelling, an anthropomorphic computer phantom and Monte Carlo simulated scintillation camera imaging

    Science.gov (United States)

    Brolin, Gustav; Sjögreen Gleisner, Katarina; Ljungberg, Michael

    2013-05-01

    In dynamic renal scintigraphy, the main interest is the radiopharmaceutical redistribution as a function of time. Quality control (QC) of renal procedures often relies on phantom experiments to compare image-based results with the measurement setup. A phantom with a realistic anatomy and time-varying activity distribution is therefore desirable. This work describes a pharmacokinetic (PK) compartment model for 99mTc-MAG3, used for defining a dynamic whole-body activity distribution within a digital phantom (XCAT) for accurate Monte Carlo (MC)-based images for QC. Each phantom structure is assigned a time-activity curve provided by the PK model, employing parameter values consistent with MAG3 pharmacokinetics. This approach ensures that the total amount of tracer in the phantom is preserved between time points, and it allows for modifications of the pharmacokinetics in a controlled fashion. By adjusting parameter values in the PK model, different clinically realistic scenarios can be mimicked, regarding, e.g., the relative renal uptake and renal transit time. Using the MC code SIMIND, a complete set of renography images including effects of photon attenuation, scattering, limited spatial resolution and noise, are simulated. The obtained image data can be used to evaluate quantitative techniques and computer software in clinical renography.
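A compartment model of this kind reduces to a small system of linear ODEs whose analytic solutions supply each phantom structure's time-activity curve, with total tracer conserved between time points. A toy two-compartment chain in that spirit (the rate constants are hypothetical, not fitted MAG3 values):

```python
import numpy as np

def mag3_like_kinetics(t, k_pb, k_bu, a0=1.0):
    """Minimal plasma -> kidney/bladder chain with first-order kinetics:
    dP/dt = -k_pb * P ;  dB/dt = k_pb * P - k_bu * B.
    Analytic solution of the linear system; U is cumulative excretion,
    defined so that P + B + U = a0 at every time point (conservation)."""
    P = a0 * np.exp(-k_pb * t)
    B = a0 * k_pb / (k_bu - k_pb) * (np.exp(-k_pb * t) - np.exp(-k_bu * t))
    U = a0 - P - B
    return P, B, U

t = np.linspace(0.0, 30.0, 301)   # minutes
P, B, U = mag3_like_kinetics(t, k_pb=0.3, k_bu=0.1)
# B rises to a peak and washes out -- the classic renogram shape.
```

Changing k_pb or k_bu mimics the "clinically realistic scenarios" the abstract mentions (altered uptake or transit time) while conservation still holds.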

  13. A Monte-Carlo simulation framework for joint optimisation of image quality and patient dose in digital paediatric radiography

    International Nuclear Information System (INIS)

    Menser, Bernd; Manke, Dirk; Mentrup, Detlef; Neitzel, Ulrich

    2016-01-01

    In paediatric radiography, according to the as low as reasonably achievable (ALARA) principle, the imaging task should be performed with the lowest possible radiation dose. This paper describes a Monte-Carlo simulation framework for dose optimisation of imaging parameters in digital paediatric radiography. Patient models with high spatial resolution and organ segmentation enable the simultaneous evaluation of image quality and patient dose on the same simulated radiographic examination. The accuracy of the image simulation is analysed by comparing simulated and acquired images of technical phantoms. As a first application example, the framework is applied to optimise tube voltage and pre-filtration in newborn chest radiography. At equal patient dose, the highest CNR is obtained with low-kV settings in combination with copper filtration. (authors)
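The figure of merit in such an optimisation is the contrast-to-noise ratio between a detail and its background, evaluated on the simulated radiograph at equal patient dose. A minimal sketch (the ROIs here are synthetic Gaussian samples, not simulated radiographs):

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean difference| over background noise."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

rng = np.random.default_rng(2)
bg = rng.normal(100.0, 5.0, size=10000)   # background ROI pixel values
sig = rng.normal(120.0, 5.0, size=10000)  # detail ROI pixel values
value = cnr(sig, bg)                      # expected around (120-100)/5 = 4
```

Sweeping tube voltage and filtration in the simulation and reading off CNR at fixed dose is the optimisation loop the framework enables.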

  14. Integrating Satellite Image Identification and River Routing Simulation into the Groundwater Simulation of Chou-Shui Basin

    Science.gov (United States)

    Yao, Y.; Yang, S.; Chen, Y.; Chang, L.; Chiang, C.; Huang, C.; Chen, J.

    2012-12-01

    Many groundwater simulation models have been developed for the Chou-Shui River alluvial fan, one of the most important groundwater areas in Taiwan. However, the exchange between the Chou-Shui River, the major river in this area, and the groundwater system itself is seldom studied. In this study, the exchange is evaluated using the river package (RIV) in the groundwater simulation model MODFLOW 2000. Several critical parameters and variables used in RIV, such as the wet area and river level for each cell below the Chou-Shui River, are determined by satellite image identification and HEC-RAS simulation, respectively. The monthly averages of river levels obtained during 2008 from four stations, Chang-Yun Bridge, Xi-Bin Bridge, Chi-Chiang Bridge and Si-Jou Bridge, and the river cross-sections measured in December 2007 are used in the construction of the HEC-RAS model. Four FORMOSAT multispectral satellite images, acquired in January 2008, April 2008, July 2008, and November 2008, are used to identify the wet area of the Chou-Shui River during different seasons. The simulated levels provided by HEC-RAS and the identification results are used as the input of RIV. First, based on the simulation results of HEC-RAS, the water level differences between the flooding period and the drought period are 1.4 m and 2.0 m for the Xi-Bin Bridge station (downstream) and the Chang-Yun Bridge station (upstream), respectively. Second, based on the identified results, the wet areas for the four seasons are 24, 24, 40 and 12 km², respectively. The variation over 2008 is large: the winter area is just 30% of the summer area. Third, based on the simulation with MODFLOW 2000 and RIV, the exchange between the river and the groundwater system during 2008 is 414 million cubic meters, comprising 526 million cubic meters of recharge to the river and 112 million cubic meters of discharge from the river. The total recharge, including the river exchange and recharge from non-river areas, is 2023 million cubic meters.
The
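The RIV package referred to above computes a head-dependent exchange flux, Q = C·(H_riv − h) while the aquifer head h sits above the riverbed bottom, switching to Q = C·(H_riv − R_bot) when the river is perched above the water table; the riverbed conductance C is where the satellite-identified wet area and the HEC-RAS stage enter. A sketch with hypothetical values:

```python
def conductance(k_bed, wet_area, bed_thickness):
    """Riverbed conductance C = K * A_wet / thickness; A_wet is the
    satellite-identified wet area assigned to the cell (hypothetical here)."""
    return k_bed * wet_area / bed_thickness

def riv_exchange(c, h_river, h_aquifer, r_bot):
    """MODFLOW RIV-style flux, positive = river leaks into the aquifer."""
    if h_aquifer > r_bot:
        return c * (h_river - h_aquifer)     # connected: head difference
    return c * (h_river - r_bot)             # perched: limited by riverbed

c = conductance(k_bed=0.5, wet_area=1000.0, bed_thickness=1.0)  # m^2/day
q_gaining = riv_exchange(c, h_river=10.0, h_aquifer=12.0, r_bot=8.0)
q_losing = riv_exchange(c, h_river=10.0, h_aquifer=6.0, r_bot=8.0)
# q_gaining < 0: the aquifer discharges to the river (gaining reach);
# q_losing > 0: the river recharges the aquifer (losing reach).
```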

  15. An Example-Based Brain MRI Simulation Framework.

    Science.gov (United States)

    He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L

    2015-02-21

    The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition, these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on a statistical model of the training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.

  16. Modelling, simulation and visualisation for electromagnetic non-destructive testing

    International Nuclear Information System (INIS)

    Ilham Mukriz Zainal Abidin; Abdul Razak Hamzah

    2010-01-01

    This paper reviews the state of the art and recent developments in modelling, simulation and visualisation for the eddy current Non-Destructive Testing (NDT) technique. Simulation and visualisation have aided the design and development of electromagnetic sensors, imaging techniques and systems for Electromagnetic Non-Destructive Testing (ENDT), as well as feature extraction and inverse problems for Quantitative Non-Destructive Testing (QNDT). After reviewing the state of the art of electromagnetic modelling and simulation, case studies of research and development in the eddy current NDT technique via magnetic field mapping and thermography for eddy current distribution are discussed. (author)

  17. A framework for simulating ultrasound imaging based on first order nonlinear pressure–velocity relations

    DEFF Research Database (Denmark)

    Du, Yigang; Fan, Rui; Li, Yong

    2016-01-01

    An ultrasound imaging framework modeled with the first order nonlinear pressure–velocity relations (NPVR) based simulation and implemented by a half-time staggered solution and pseudospectral method is presented in this paper. The framework is capable of simulating linear and nonlinear ultrasound propagation and reflections in a heterogeneous medium with different sound speeds and densities. It can be initialized with arbitrary focus, excitation and apodization for multiple individual channels in both 2D and 3D spatial fields. The simulated channel data can be generated using this framework, and an ultrasound image can be obtained by beamforming the simulated channel data. Various results simulated by different algorithms are illustrated for comparisons. The root mean square (RMS) errors for each compared pulse are calculated. The linear propagation is validated by an angular spectrum approach (ASA

  18. Inferring the photometric and size evolution of galaxies from image simulations. I. Method

    Science.gov (United States)

    Carassou, Sébastien; de Lapparent, Valérie; Bertin, Emmanuel; Le Borgne, Damien

    2017-09-01

    Context. Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in nontrivial ways, and that can lead to contradictory predictions if not taken into account carefully. Aims: To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. Methods: We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Results: Using synthetic data matching most of the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolution for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Conclusions: Our pipeline efficiently infers the luminosity and size distribution and evolution parameters with a very limited number of observables (three photometric bands). When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from

  19. [Bone drilling simulation by three-dimensional imaging].

    Science.gov (United States)

    Suto, Y; Furuhata, K; Kojima, T; Kurokawa, T; Kobayashi, M

    1989-06-01

    The three-dimensional display technique has a wide range of medical applications. Pre-operative planning is one typical application: in orthopedic surgery, three-dimensional image processing has been used very successfully. We have employed this technique in pre-operative planning for orthopedic surgery, and have developed a simulation system for bone-drilling. Positive results were obtained by pre-operative rehearsal; when a region of interest is indicated by means of a mouse on the three-dimensional image displayed on the CRT, the corresponding region appears on the slice image which is displayed simultaneously. Consequently, the status of the bone-drilling is constantly monitored. In developing this system, we have placed emphasis on the quality of the reconstructed three-dimensional images, on fast processing, and on the easy operation of the surgical planning simulation.

  20. Fast and Automatic Ultrasound Simulation from CT Images

    Directory of Open Access Journals (Sweden)

    Weijian Cong

    2013-01-01

    Full Text Available Ultrasound is currently widely used in clinical diagnosis because of its fast and safe imaging principles. As the anatomical structures present in an ultrasound image are not as clear as in CT or MRI, physicians usually need advanced clinical knowledge and experience to distinguish diseased tissues. Fast simulation of ultrasound provides a cost-effective way for training and for correlating ultrasound with the anatomic structures. In this paper, a novel method is proposed for fast simulation of ultrasound from a CT image. A multiscale method is developed to enhance tubular structures so as to simulate the blood flow. The acoustic response of common tissues is generated by weighted integration of adjacent regions on the ultrasound propagation path in the CT image, from which parameters including attenuation, reflection, scattering, and noise are estimated simultaneously. The thin-plate spline interpolation method is employed to transform the simulation image between polar and rectangular coordinate systems. The Kaiser window function is utilized to produce integration and radial blurring effects of multiple transducer elements. Experimental results show that the developed method is fast and effective, allowing realistic ultrasound images to be generated rapidly. Given that the developed method is fully automatic, it can be utilized for ultrasound-guided navigation in clinical practice and for training purposes.

  1. [A Method to Reconstruct Surface Reflectance Spectrum from Multispectral Image Based on Canopy Radiation Transfer Model].

    Science.gov (United States)

    Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li

    2015-07-01

    Due to the lack of enough spectral bands in a multi-spectral sensor, it is difficult to reconstruct a surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking full account of the heterogeneity of pixels in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiation transfer model. This method first assumes that mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from the multi-spectral data based on Look-Up Table (LUT) technology. Then all the parameters, combined with the soil ratio factor, were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectra revealed different feature information for different surface types. To test the performance of this method, the simulated reflectance spectra were convolved with the Landsat ETM+ spectral response curves and the Moderate Resolution Imaging Spectroradiometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated and observed bands, indicating that the reflectance spectrum was well simulated and reliable.
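LUT-based retrieval of the kind described reduces to a nearest-neighbour search: simulate band reflectances for many candidate parameter sets, then pick the set that minimises the misfit to the observation. A minimal sketch with a hypothetical 4-band table (the SLC model itself is not reproduced; the spectra below are invented placeholders):

```python
import numpy as np

def lut_retrieve(observed, lut_params, lut_spectra):
    """Look-Up Table inversion: return the candidate parameter set whose
    simulated band reflectances are closest (RMSE) to the observation."""
    rmse = np.sqrt(((lut_spectra - observed) ** 2).mean(axis=1))
    best = int(np.argmin(rmse))
    return lut_params[best], float(rmse[best])

# Hypothetical 4-band LUT: one row of simulated reflectances per candidate.
lut_params = [{"lai": 1.0, "soil_frac": 0.6},
              {"lai": 3.0, "soil_frac": 0.2},
              {"lai": 5.0, "soil_frac": 0.1}]
lut_spectra = np.array([[0.10, 0.15, 0.30, 0.25],
                        [0.05, 0.08, 0.45, 0.40],
                        [0.03, 0.05, 0.55, 0.50]])
obs = np.array([0.05, 0.09, 0.44, 0.41])
params, err = lut_retrieve(obs, lut_params, lut_spectra)
```

Once the best-fitting parameters are found, running them back through the forward model at full spectral resolution yields the reconstructed 400-2400 nm spectrum.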

  2. A Novel Temporal Bone Simulation Model Using 3D Printing Techniques.

    Science.gov (United States)

    Mowry, Sarah E; Jammal, Hachem; Myer, Charles; Solares, Clementino Arturo; Weinberger, Paul

    2015-09-01

    An inexpensive temporal bone model for use in a temporal bone dissection laboratory setting can be made using a commercially available, consumer-grade 3D printer. Several models of a simulated temporal bone have been described, but they use commercial-grade printers and materials. The goal of this project was to produce a plastic simulated temporal bone on an inexpensive 3D printer that recreates the visual and haptic experience associated with drilling a human temporal bone. Images from a high-resolution CT of a normal temporal bone were converted into stereolithography files via commercially available software, with image conversion and print settings adjusted to achieve optimal print quality. The temporal bone model was printed using acrylonitrile butadiene styrene (ABS) plastic filament on a MakerBot 2x 3D printer. Simulated temporal bones were drilled by seven expert temporal bone surgeons, who assessed the fidelity of the model as compared with a human cadaveric temporal bone. Using a four-point scale, the simulated bones were assessed for haptic experience and recreation of the temporal bone anatomy. The created model was felt to be an accurate representation of a human temporal bone. All raters felt strongly that this would be a good training model for junior residents or for simulating difficult surgical anatomy. The material cost for each model was $1.92. A realistic, inexpensive, and easily reproducible temporal bone model can be created on a consumer-grade desktop 3D printer.

  3. Three-Dimensional Neutral Transport Simulations of Gas Puff Imaging Experiments

    International Nuclear Information System (INIS)

    Stotler, D.P.; DIppolito, D.A.; LeBlanc, B.; Maqueda, R.J.; Myra, J.R.; Sabbagh, S.A.; Zweben, S.J.

    2003-01-01

    Gas Puff Imaging (GPI) experiments are designed to isolate the structure of plasma turbulence in the plane perpendicular to the magnetic field. Three-dimensional aspects of this diagnostic technique as used on the National Spherical Torus eXperiment (NSTX) are examined via Monte Carlo neutral transport simulations. The radial width of the simulated GPI images is in rough agreement with observations. However, the simulated emission clouds are angled approximately 15 degrees with respect to the experimental images. The simulations indicate that the finite extent of the gas puff along the viewing direction does not significantly degrade the radial resolution of the diagnostic. These simulations also yield effective neutral density data that can be used in an approximate attempt to infer two-dimensional electron density and temperature profiles from the experimental images.

  4. MODIS-derived daily PAR simulation from cloud-free images and its validation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Liangfu; Gu, Xingfa; Tian, Guoliang [State Key Laboratory of Remote Sensing Science, Jointly Sponsored by Institute of Remote Sensing Applications of Chinese Academy of Sciences and Beijing Normal University, Beijing 100101 (China); The Center for National Spaceborne Demonstration, Beijing 100101 (China); Gao, Yanhua [State Key Laboratory of Remote Sensing Science, Jointly Sponsored by Institute of Remote Sensing Applications of Chinese Academy of Sciences and Beijing Normal University, Beijing 100101 (China); Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101 (China); Yang, Lei [State Key Laboratory of Remote Sensing Science, Jointly Sponsored by Institute of Remote Sensing Applications of Chinese Academy of Sciences and Beijing Normal University, Beijing 100101 (China); Jilin University, Changchun 130026 (China); Liu, Qinhuo [State Key Laboratory of Remote Sensing Science, Jointly Sponsored by Institute of Remote Sensing Applications of Chinese Academy of Sciences and Beijing Normal University, Beijing 100101 (China)

    2008-06-15

    In this paper, a MODIS-derived daily PAR (photosynthetically active radiation) simulation model for cloud-free images over land surfaces is developed based on Bird and Riordan's model. In this model, the total downwelling spectral surface irradiance is divided into two parts: beam irradiance and diffuse irradiance. The attenuation of the solar beam comprises scattering by the gas mixture, absorption by ozone, the gas mixture, and water vapor, and scattering and absorption by aerosols. The diffuse irradiance is radiation scattered out of the direct beam and towards the surface; multiple ground-air interactions are taken into account in the diffuse irradiance model. The parameters needed by the model are atmospheric water vapor content, aerosol optical thickness, and spectral albedo from 400 nm to 700 nm, all retrieved from MODIS data. The instantaneous photosynthetically available radiation (IPAR) is then computed as a weighted sum over the visible MODIS wavebands, and daily PAR is derived by integrating IPAR over the day. To validate the MODIS-derived PAR model, we compared field PAR measurements from 2003 and 2004, made at the Qianyanzhou ecological experimental station of the Chinese Ecosystem Research Network, against the simulated PAR. A total of 54 days of cloud-free MODIS L1B level images were used for the PAR simulation. Our results show that the simulated PAR is consistent with the field measurements, with a linear-regression correlation coefficient of 0.93396 between calculated and measured PAR. However, some uncertainties remain in comparing 1 km pixel PAR with the tower flux stand measurements. (author)
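
    The IPAR weighting and daily integration can be sketched as below; all band widths, irradiances, and the sinusoidal diurnal course are assumptions for illustration, not the values retrieved from MODIS in the paper.

```python
import numpy as np

# Illustrative sketch (not the authors' exact scheme): IPAR as a weighted
# sum of spectral irradiance over visible MODIS-like bands, then daily PAR
# by integrating an assumed sinusoidal diurnal course anchored at an
# assumed noon overpass. All numeric values below are made up.

band_widths = np.array([20.0, 20.0, 50.0])  # nm, weights of the sum
irradiance  = np.array([1.1, 1.3, 1.2])     # W m^-2 nm^-1 at overpass

ipar = float(np.sum(irradiance * band_widths))  # W m^-2, instantaneous PAR

daylight_hours = 12.0
t = np.linspace(0.0, daylight_hours, 1001)    # hours since sunrise
diurnal = np.sin(np.pi * t / daylight_hours)  # 1.0 at solar noon
daily_par = ipar * diurnal.mean() * daylight_hours * 3600.0  # J m^-2 day^-1
print(ipar, daily_par > 0.0)
```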

  5. X-ray strain tensor imaging: FEM simulation and experiments with a micro-CT.

    Science.gov (United States)

    Kim, Jae G; Park, So E; Lee, Soo Y

    2014-01-01

    In tissue elasticity imaging, measuring the strain tensor components is necessary to solve the inverse problem. However, it is impractical to measure all the tensor components in ultrasound or MRI elastography because of their anisotropic spatial resolution. The objective of this study is to compute 3D strain tensor maps from 3D CT images of a tissue-mimicking phantom. We took 3D micro-CT images of the phantom twice, applying a different mechanical compression each time. By applying a 3D image correlation technique to the CT images under the two compressions, we computed 3D displacement vectors and strain tensors at every pixel. To evaluate the accuracy of the strain tensor maps, we built a 3D FEM model of the phantom and computed strain tensor maps through FEM simulation. The experimentally obtained strain tensor maps showed patterns similar to the FEM-simulated ones on visual inspection. The correlation between the strain tensor maps obtained from the experiment and the FEM simulation ranges from 0.03 to 0.93. Even though the strain tensor maps suffer from a high noise level, we expect that x-ray strain tensor imaging may find biomedical applications such as malignant tissue characterization and stress analysis inside tissues.
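
    The strain-tensor step can be sketched as follows, assuming the displacement field is already known (in the study it comes from 3D image correlation of the two CT volumes); here a synthetic uniaxial compression stands in for real data.

```python
import numpy as np

# Sketch of the strain-tensor step: given a voxel-wise displacement field
# u (here a synthetic 2% uniaxial compression), compute the infinitesimal
# strain tensor e_ij = 0.5 * (du_i/dx_j + du_j/dx_i) at every voxel.

shape = (16, 16, 16)
z = np.arange(shape[0], dtype=float).reshape(-1, 1, 1) * np.ones(shape)

# Displacement components (u_z, u_y, u_x): compression along z only.
u = np.stack([-0.02 * z, np.zeros(shape), np.zeros(shape)])

# grad[i, j] = du_i/dx_j, computed with central differences.
grad = np.stack([np.stack(np.gradient(u[i]), axis=0) for i in range(3)])
strain = 0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))

print(np.allclose(strain[0, 0], -0.02))  # e_zz recovers the compression
```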

  6. Correlation of breast image alignment using biomechanical modelling

    Science.gov (United States)

    Lee, Angela; Rajagopal, Vijay; Bier, Peter; Nielsen, Poul M. F.; Nash, Martyn P.

    2009-02-01

    Breast cancer is one of the most common causes of cancer death among women around the world. Researchers have found that a combination of imaging modalities (such as x-ray mammography, magnetic resonance, and ultrasound) leads to more effective diagnosis and management of breast cancers, because each imaging modality displays different information about the breast tissues. In order to aid clinicians in interpreting breast images from different modalities, we have developed a computational framework for generating individual-specific, 3D, finite element (FE) models of the breast. Medical images are embedded into this model, which is subsequently used to simulate the large deformations that the breasts undergo during different imaging procedures, thus warping the medical images to the deformed views of the breast in the different modalities. In this way, medical images of the breast taken in different geometric configurations (compression, gravity, etc.) can be aligned according to physically feasible transformations. To analyse the accuracy of the biomechanical model predictions, squared normalised cross correlation (NCC2) was used to provide both local and global comparisons of the model-warped images with clinical images of the breast subject to different gravity-loaded states. The local comparison results were helpful in indicating areas for improvement in the biomechanical model. To improve the modelling accuracy, we will need to investigate incorporating breast tissue heterogeneity into the model and altering the boundary conditions of the breast model. A biomechanical image registration tool of this kind will help radiologists to provide more reliable diagnosis and localisation of breast cancer.
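
    The NCC2 metric mentioned above can be sketched as below; computed over the whole image it gives the global comparison, and the same expression evaluated on sub-windows gives local maps. The test images are synthetic.

```python
import numpy as np

# Sketch of squared normalised cross-correlation (NCC2): mean-subtract both
# images, correlate, normalise by the energies, and square. Evaluating the
# same expression on sub-windows yields local comparison maps.

def ncc2(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float(((a * b).sum() / denom) ** 2)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(ncc2(img, img) > 0.999)        # identical images: NCC2 ~ 1
print(ncc2(img, 1.0 - img) > 0.999)  # inverted images: also ~ 1 (squared)
```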

  7. A Simulation Model Of A Picture Archival And Communication System

    Science.gov (United States)

    D'Silva, Vijay; Perros, Harry; Stockbridge, Chris

    1988-06-01

    A PACS architecture was simulated to quantify its performance. The model consisted of reading stations, acquisition nodes, communication links, a database management system, and a two-level storage system: a high-speed magnetic disk system for short-term storage and optical disk jukeboxes for long-term storage. The communications link was a single bus via which image data were requested and delivered. Real input data for the simulation model were obtained from surveys of radiology procedures (Bowman Gray School of Medicine). From these, the following inputs were calculated: the size of short-term storage necessary, the amount of long-term storage required, the frequency of access of each store, and the distribution of the number of films requested per diagnosis. The performance measures obtained were the mean retrieval time for an image, mean queue lengths, and the utilization of each device. Parametric analysis was done for the bus speed, the packet size for the communications link, the record size on the magnetic disk, the compression ratio, the influx of new images, the DBMS time, and diagnosis think times. Plots give the optimum values of input speed and device performance sufficient to achieve subsecond image retrieval times.
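
    The single-bus bottleneck at the heart of this model can be sketched with a toy discrete-event simulation; the arrival rate and per-image service time below are illustrative assumptions, not the survey-derived inputs used in the study.

```python
import random

# Toy discrete-event sketch of the single shared communications bus: image
# requests queue FIFO for the bus, as in the single-bus architecture
# modelled above. Rates and sizes here are illustrative assumptions.

random.seed(1)
ARRIVAL_RATE = 0.5   # image requests per second (assumed)
SERVICE_TIME = 1.2   # seconds to move one image over the bus (assumed)

t, bus_free_at, waits = 0.0, 0.0, []
for _ in range(10000):
    t += random.expovariate(ARRIVAL_RATE)   # Poisson arrivals
    start = max(t, bus_free_at)             # wait while the bus is busy
    waits.append(start - t)
    bus_free_at = start + SERVICE_TIME

mean_wait = sum(waits) / len(waits)
print(mean_wait > 0.0)  # M/D/1 theory predicts ~0.9 s at 60% utilization
```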

  8. Improving SAR Automatic Target Recognition Models with Transfer Learning from Simulated Data

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Kusk, Anders; Dall, Jørgen

    2017-01-01

    SAR images. The simulated data set is obtained by adding a simulated object radar reflectivity to a terrain model of individual point scatters, prior to focusing. Our results show that a Convolutional Neural Network (Convnet) pretrained on simulated data has a great advantage over a Convnet trained...

  9. A computer simulation study comparing lesion detection accuracy with digital mammography, breast tomosynthesis, and cone-beam CT breast imaging

    International Nuclear Information System (INIS)

    Gong Xing; Glick, Stephen J.; Liu, Bob; Vedula, Aruna A.; Thacker, Samta

    2006-01-01

    Although conventional mammography is currently the best modality to detect early breast cancer, it is limited in that the recorded image represents the superposition of a three-dimensional (3D) object onto a 2D plane. Recently, two promising approaches for 3D volumetric breast imaging have been proposed, breast tomosynthesis (BT) and CT breast imaging (CTBI). To investigate possible improvements in lesion detection accuracy with either breast tomosynthesis or CT breast imaging as compared to digital mammography (DM), a computer simulation study was conducted using simulated lesions embedded into a structured 3D breast model. The computer simulation realistically modeled x-ray transport through a breast model, as well as the signal and noise propagation through a CsI based flat-panel imager. Polyenergetic x-ray spectra of Mo/Mo 28 kVp for digital mammography, Mo/Rh 28 kVp for BT, and W/Ce 50 kVp for CTBI were modeled. For the CTBI simulation, the intensity of the x-ray spectra for each projection view was determined so as to provide a total average glandular dose of 4 mGy, which is approximately equivalent to that given in conventional two-view screening mammography. The same total dose was modeled for both the DM and BT simulations. Irregular lesions were simulated by using a stochastic growth algorithm providing lesions with an effective diameter of 5 mm. Breast tissue was simulated by generating an ensemble of backgrounds with a power law spectrum, with the composition of 50% fibroglandular and 50% adipose tissue. To evaluate lesion detection accuracy, a receiver operating characteristic (ROC) study was performed with five observers reading an ensemble of images for each case. The average area under the ROC curves (Az) was 0.76 for DM, 0.93 for BT, and 0.94 for CTBI. Results indicated that for the same dose, a 5 mm lesion embedded in a structured breast phantom was detected by the two volumetric breast imaging systems, BT and CTBI, with statistically
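
    The ROC figure of merit used above, the area under the curve (Az), equals the Mann-Whitney probability that a randomly chosen lesion-present rating exceeds a randomly chosen lesion-absent rating; a minimal sketch with made-up observer scores:

```python
# Sketch of the figure of merit: Az equals the probability that a random
# lesion-present score exceeds a random lesion-absent score (ties count
# half). The observer scores below are made up for illustration.

def roc_area(present, absent):
    pairs = [(p, a) for p in present for a in absent]
    wins = sum(1.0 if p > a else 0.5 if p == a else 0.0 for p, a in pairs)
    return wins / len(pairs)

absent_scores  = [0.1, 0.2, 0.35, 0.4, 0.55]  # ratings, lesion absent
present_scores = [0.3, 0.5, 0.6, 0.8, 0.9]    # ratings, lesion present
print(roc_area(present_scores, absent_scores))  # → 0.84
```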

  10. Development of computational small animal models and their applications in preclinical imaging and therapy research.

    Science.gov (United States)

    Xie, Tianwu; Zaidi, Habib

    2016-01-01

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.

  11. Development of computational small animal models and their applications in preclinical imaging and therapy research

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Tianwu [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211 (Switzerland); Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211 (Switzerland); Geneva Neuroscience Center, Geneva University, Geneva CH-1205 (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen 9700 RB (Netherlands)

    2016-01-15

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.

  12. Development of computational small animal models and their applications in preclinical imaging and therapy research

    International Nuclear Information System (INIS)

    Xie, Tianwu; Zaidi, Habib

    2016-01-01

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.

  13. Efficient fully 3D list-mode TOF PET image reconstruction using a factorized system matrix with an image domain resolution model

    International Nuclear Information System (INIS)

    Zhou, Jian; Qi, Jinyi

    2014-01-01

    A factorized system matrix utilizing an image domain resolution model is attractive in fully 3D time-of-flight PET image reconstruction using list-mode data. In this paper, we study a factored model based on sparse matrix factorization that is composed primarily of a simplified geometrical projection matrix and an image blurring matrix. Besides the commonly used Siddon ray-tracer, we propose a further simplified geometrical projector based on the Bresenham ray-tracer, which reduces the computational cost. We discuss in general how to obtain an image blurring matrix associated with a geometrical projector, and provide theoretical analysis that can be used to inspect the efficiency of the model factorization. In simulation studies, we investigate the performance of the proposed sparse factorization model in terms of spatial resolution, noise properties, and computational cost. The quantitative results reveal that the factorization model can be as efficient as a non-factored model, while its computational cost can be much lower. In addition, we conduct Monte Carlo simulations to identify the conditions under which the image resolution model becomes more efficient in terms of image contrast recovery. We verify our observations using the provided theoretical analysis. The results offer a general guide to achieving optimal reconstruction performance with a sparse factorization model that includes an image domain resolution model. (paper)
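
    The difference between the two geometrical projectors can be illustrated with a Bresenham-style traversal, which visits one voxel per step along the dominant axis with implicit unit weights (unlike Siddon's exact intersection lengths); this 2D sketch is illustrative only, not the paper's implementation.

```python
# Sketch contrasting the two projectors: a Bresenham-style tracer visits
# one voxel per step along the dominant axis (cheap, implicit unit
# weights), whereas Siddon computes exact intersection lengths. 2D only.

def bresenham_voxels(x0, y0, x1, y1):
    """Integer-grid line traversal; returns the voxels the ray visits."""
    voxels = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        voxels.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return voxels

path = bresenham_voxels(0, 0, 5, 2)
print(path[0], path[-1], len(path))  # endpoints included, 6 voxels visited
```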

  14. Measurement with microscopic MRI and simulation of flow in different aneurysm models

    Energy Technology Data Exchange (ETDEWEB)

    Edelhoff, Daniel, E-mail: daniel.edelhoff@tu-dortmund.de; Frank, Frauke; Heil, Marvin; Suter, Dieter [Experimental Physics III, TU Dortmund University, Otto-Hahn-Street 4, Dortmund 44227 (Germany); Walczak, Lars; Weichert, Frank [Computer Science VII, TU Dortmund University, Otto-Hahn-Street 16, Dortmund 44227 (Germany); Schmitz, Inge [Institute for Pathology, Ruhr Universität Bochum, Bürkle-de-la-Camp-Platz 1, Bochum 44789 (Germany)

    2015-10-15

    Purpose: The impact and the development of aneurysms depend to a significant degree on the exchange of liquid between the regular vessel and the pathological extension. A better understanding of this process will lead to improved prediction capabilities. The aim of the current study was to investigate fluid-exchange in aneurysm models of different complexities by combining microscopic magnetic resonance measurements with numerical simulations. In order to evaluate the accuracy and applicability of these methods, the fluid-exchange process between the unaltered vessel lumen and the aneurysm phantoms was analyzed quantitatively using high spatial resolution. Methods: Magnetic resonance flow imaging was used to visualize fluid-exchange in two different models produced with a 3D printer. One model of an aneurysm was based on histological findings. The flow distribution in the different models was measured on a microscopic scale using time of flight magnetic resonance imaging. The whole experiment was simulated using fast graphics processing unit-based numerical simulations. The obtained simulation results were compared qualitatively and quantitatively with the magnetic resonance imaging measurements, taking into account flow and spin–lattice relaxation. Results: The two methods agreed well for the aneurysm models and flow distributions used. The fluid-exchange analysis showed comparable characteristics in measurement and simulation, and similar symmetry behavior was observed. Based on these results, the amount of fluid-exchange was calculated. Depending on the geometry of the models, 7% to 45% of the liquid was exchanged per second. Conclusions: The result of the numerical simulations coincides well with the experimentally determined velocity field. The rate of fluid-exchange between vessel and aneurysm was well-predicted. Hence, the results obtained by simulation could be validated by the experiment. The

  15. Lévy-based modelling in brain imaging

    DEFF Research Database (Denmark)

    Jónsdóttir, Kristjana Ýr; Rønn-Nielsen, Anders; Mouridsen, Kim

    2013-01-01

    example of magnetic resonance imaging scans that are non-Gaussian. For these data, simulations under the fitted models show that traditional methods based on Gaussian random field theory may leave small, but significant changes in signal level undetected, while these changes are detectable under a non...

  16. THE POWER OF IMAGING: CONSTRAINING THE PLASMA PROPERTIES OF GRMHD SIMULATIONS USING EHT OBSERVATIONS OF Sgr A*

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Chi-Kwan; Psaltis, Dimitrios; Özel, Feryal [Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721 (United States); Narayan, Ramesh [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Sadowski, Aleksander, E-mail: chanc@email.arizona.edu [MIT Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)

    2015-01-20

    Recent advances in general relativistic magnetohydrodynamic simulations have expanded and improved our understanding of the dynamics of black-hole accretion disks. However, current simulations do not capture the thermodynamics of electrons in the low density accreting plasma. This poses a significant challenge in predicting accretion flow images and spectra from first principles. Because of this, simplified emission models have often been used, with widely different configurations (e.g., disk- versus jet-dominated emission), and were able to account for the observed spectral properties of accreting black holes. Exploring the large parameter space introduced by such models, however, requires significant computational power that exceeds conventional computational facilities. In this paper, we use GRay, a fast graphics processing unit (GPU) based ray-tracing algorithm, on the GPU cluster El Gato, to compute images and spectra for a set of six general relativistic magnetohydrodynamic simulations with different magnetic field configurations and black-hole spins. We also employ two different parametric models for the plasma thermodynamics in each of the simulations. We show that, if only the spectral properties of Sgr A* are used, all 12 models tested here can fit the spectra equally well. However, when combined with the measurement of the image size of the emission using the Event Horizon Telescope, current observations rule out all models with strong funnel emission, because the funnels are typically very extended. Our study shows that images of accretion flows with horizon-scale resolution offer a powerful tool in understanding accretion flows around black holes and their thermodynamic properties.

  17. Computational simulation of breast compression based on segmented breast and fibroglandular tissues on magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Shih, Tzu-Ching [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, 40402, Taiwan (China); Chen, Jeon-Hor; Nie Ke; Lin Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying [Tu and Yuen Center for Functional Onco-Imaging and Radiological Sciences, University of California, Irvine, CA 92697 (United States); Liu Dongxu; Sun Lizhi, E-mail: shih@mail.cmu.edu.t [Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697 (United States)

    2010-07-21

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes by using the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities, such as MRI, mammography, whole breast ultrasound and molecular imaging, that are performed using different body positions and under

  18. WE-DE-BRA-05: Monte Carlo Simulation of a Novel Multi-Layer MV Imager

    Energy Technology Data Exchange (ETDEWEB)

    Myronakis, M; Rottmann, J; Berbeco, R [Brigham and Women’s Hospital, Boston, MA (United States); Hu, Y [Dana Farber Cancer Institute, Boston, MA (United States); Wang, A; Shedlock, D; Star-Lack, J [Varian Medical Systems, Palo Alto, CA (United States); Morf, D [Varian Medical Systems, Dattwil, Aargau (Switzerland)

    2016-06-15

    Purpose: To develop and validate a Monte Carlo (MC) model of a novel multi-layer imager (MLI) for megavolt (MV) energy beams. The MC model will enable performance optimization of the MLI design for clinical applications including patient setup verification, tumor tracking, and MVCBCT. Methods: The MLI is composed of four layers of converter, scintillator, and light detector, each layer similar to the current clinical AS1200 detector (Varian Medical Systems, Inc.). The MLI model was constructed using the Geant4 Application for Tomographic Emission (GATE v7.1) and includes interactions for x-ray photons, charged particles, and optical photons. Computational experiments were performed to assess the Modulation Transfer Function (MTF), Detective Quantum Efficiency (DQE), and Noise Power Spectrum normalized by photon fluence and average detector signal (qNNPS), and the results were compared with experimental measurements. The current work incorporates in one model the complete chain of events occurring in the imager, from x-ray interaction through charged-particle transport and energy deposition to the subsequent generation, interaction, and detection of optical photons. Results: There is good agreement between measured and simulated MTF, qNNPS, and DQE values. The normalized root mean squared error (NRMSE) between measured and simulated values over all four layers was 2.18%, 2.43%, and 6.05% for MTF, qNNPS, and DQE, respectively. The relative difference between simulated and measured values was 1.68% for qNNPS(0) and 1.57% for DQE(0). Current results were obtained using a 6MV Varian Truebeam™ spectrum. Conclusion: A comprehensive Monte Carlo model of the MLI prototype was developed to allow optimization of detector components. The model was assessed in terms of imaging performance using standard metrics (i.e. MTF, qNNPS, DQE). Good agreement was found between simulated and measured values. The model will be used to assess alternative detector constructions to facilitate advanced
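
    The NRMSE figure reported above can be sketched as below; the toy MTF curves and the normalization by the measured range are assumptions, since the abstract does not state the exact normalization used.

```python
import numpy as np

# Sketch of the comparison metric: normalized root mean squared error
# between measured and simulated curves (e.g. an MTF sampled at a set of
# spatial frequencies). The exponential MTFs and the range normalization
# are illustrative assumptions.

def nrmse(measured, simulated):
    measured = np.asarray(measured, float)
    simulated = np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return rmse / (measured.max() - measured.min())

f = np.linspace(0.0, 0.5, 20)        # spatial frequency axis (illustrative)
mtf_measured  = np.exp(-4.0 * f)     # toy measured MTF
mtf_simulated = np.exp(-4.2 * f)     # toy simulated MTF
print(f"NRMSE = {100.0 * nrmse(mtf_measured, mtf_simulated):.2f}%")
```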

  19. Dose-image quality study in digital chest radiography using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Correa, S.C.A.; Souza, E.M.; Silva, A.X.; Lopes, R.T.; Yoriyaz, H.

    2008-01-01

    One of the main preoccupations of diagnostic radiology is to guarantee a good image while sparing dose to the patient. In the present study, Monte Carlo simulations with the MCNPX code, coupled with an adult female voxel model (FAX), were performed to investigate how image quality and dose in digital chest radiography vary with tube voltage (80-150 kV) using the air-gap technique and a computed radiography system. Calculated quantities were normalized to a fixed entrance skin exposure (ESE) of 0.0136 R. The results of the present analysis show that, for chest radiography with an imaging plate, image quality is improved and dose is reduced at lower tube voltages.

  20. Voxel-based Monte Carlo simulation of X-ray imaging and spectroscopy experiments

    International Nuclear Information System (INIS)

    Bottigli, U.; Brunetti, A.; Golosio, B.; Oliva, P.; Stumbo, S.; Vincze, L.; Randaccio, P.; Bleuet, P.; Simionovici, A.; Somogyi, A.

    2004-01-01

    A Monte Carlo code for the simulation of X-ray imaging and spectroscopy experiments in heterogeneous samples is presented. The energy spectrum, polarization and profile of the incident beam can be defined so that X-ray tube systems as well as synchrotron sources can be simulated. The sample is modeled as a 3D regular grid. The chemical composition and density is given at each point of the grid. Photoelectric absorption, fluorescent emission, elastic and inelastic scattering are included in the simulation. The core of the simulation is a fast routine for the calculation of the path lengths of the photon trajectory intersections with the grid voxels. The voxel representation is particularly useful for samples that cannot be well described by a small set of polyhedra. This is the case of most naturally occurring samples. In such cases, voxel-based simulations are much less expensive in terms of computational cost than simulations on a polygonal representation. The efficient scheme used for calculating the path lengths in the voxels and the use of variance reduction techniques make the code suitable for the detailed simulation of complex experiments on generic samples in a relatively short time. Examples of applications to X-ray imaging and spectroscopy experiments are discussed
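The "fast routine for the calculation of the path lengths" is the heart of any voxel-based transport code. A common way to implement it (a Siddon-style sketch, not necessarily the authors' routine) is to collect the parametric values at which the ray crosses the grid planes and convert each interval into a per-voxel path length:

```python
import numpy as np

def voxel_path_lengths(p0, p1, grid_min, voxel_size, n_voxels):
    """Path length of the segment p0->p1 inside each voxel of a regular
    grid, via parametric crossings of the grid planes (Siddon-style)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    # Parametric values (0..1 along the segment) where grid planes are crossed.
    alphas = [0.0, 1.0]
    for ax in range(len(p0)):
        if d[ax] != 0.0:
            planes = grid_min[ax] + voxel_size[ax] * np.arange(n_voxels[ax] + 1)
            a = (planes - p0[ax]) / d[ax]
            alphas.extend(a[(a > 0.0) & (a < 1.0)])
    alphas = np.unique(np.clip(alphas, 0.0, 1.0))
    lengths = {}
    seg_len = np.linalg.norm(d)
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d          # midpoint identifies the voxel
        idx = tuple(int((mid[ax] - grid_min[ax]) // voxel_size[ax])
                    for ax in range(len(p0)))
        if all(0 <= idx[ax] < n_voxels[ax] for ax in range(len(p0))):
            lengths[idx] = lengths.get(idx, 0.0) + (a1 - a0) * seg_len
    return lengths

# A horizontal ray across a 4x4 grid of unit voxels crosses four
# voxels with unit path length each.
L = voxel_path_lengths([0.0, 0.5], [4.0, 0.5], [0.0, 0.0], [1.0, 1.0], [4, 4])
print(L)   # → {(0, 0): 1.0, (1, 0): 1.0, (2, 0): 1.0, (3, 0): 1.0}
```

The same traversal drives both attenuation (summing μ·length over crossed voxels) and the sampling of interaction sites, which is why its speed dominates the cost of a voxel-based simulation.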

  2. Simulated annealing in adaptive optics for imaging the eye retina

    International Nuclear Information System (INIS)

    Zommer, S.; Adler, J.; Lipson, S. G.; Ribak, E.

    2004-01-01

Full Text: Adaptive optics is a method designed to correct deformed images in real time. Once the distorted wavefront is known, a deformable mirror is used to compensate for the aberrations and return the wavefront to a plane wave. This study concentrates on methods that omit wavefront sensing from the reconstruction process. Such methods use stochastic algorithms to find the extremum of a certain sharpness function, thereby correcting the image without any information about the wavefront. Theoretical work [1] has shown that the optical problem can be mapped onto a model of crystal roughening. The main algorithm applied is simulated annealing. We present a first hardware realization of this algorithm in an adaptive optics system designed to image the retina of the human eye
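The sensorless correction loop described above can be sketched in a few lines: anneal the actuator values while observing only a sharpness metric of the image. Everything below is a toy stand-in (the Gaussian-spot "optics", the three-mode aberration, and all parameters are hypothetical), not the hardware system of the record:

```python
import math, random

def sharpness(image):
    """Sharpness metric: sum of squared intensities, maximal when the
    light is most concentrated (Muller-Buffington style)."""
    return sum(v * v for v in image)

def system_response(actuators, aberration):
    """Hypothetical stand-in for the optics: the residual wavefront
    error widens a Gaussian focal spot of unit total intensity."""
    residual = sum((a - b) ** 2 for a, b in zip(actuators, aberration))
    sigma = 1.0 + residual                      # spot width grows with error
    vals = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-10, 11)]
    total = sum(vals)
    return [v / total for v in vals]

def anneal(aberration, steps=20000, t0=0.1, seed=0):
    """Maximize image sharpness with no wavefront sensing: only the
    sharpness value is fed back to the algorithm."""
    rng = random.Random(seed)
    x = [0.0] * len(aberration)
    s_cur = sharpness(system_response(x, aberration))
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9       # linear cooling schedule
        i = rng.randrange(len(x))
        trial = list(x)
        # Perturb one actuator, clamped to a finite mirror stroke.
        trial[i] = max(-1.0, min(1.0, trial[i] + rng.gauss(0.0, 0.1)))
        s_new = sharpness(system_response(trial, aberration))
        # Metropolis rule: always accept improvements; accept worse
        # trials with probability exp(ds/t) to escape local maxima.
        if s_new >= s_cur or rng.random() < math.exp((s_new - s_cur) / t):
            x, s_cur = trial, s_new
    return x, s_cur

# Hypothetical static aberration over three mirror modes.
x, s = anneal([0.3, -0.2, 0.5])
print(round(s, 3))   # approaches the unaberrated value ~0.282
```

In hardware the `system_response` call is replaced by physically setting the mirror and reading the camera, which is exactly why stochastic, derivative-free methods like annealing are attractive here.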

  3. Simulation of image detectors in radiology for determination of scatter-to-primary ratios using Monte Carlo radiation transport code MCNP/MCNPX.

    Science.gov (United States)

    Smans, Kristien; Zoetelief, Johannes; Verbrugge, Beatrijs; Haeck, Wim; Struelens, Lara; Vanhavere, Filip; Bosmans, Hilde

    2010-05-01

    The purpose of this study was to compare and validate three methods to simulate radiographic image detectors with the Monte Carlo software MCNP/MCNPX in a time efficient way. The first detector model was the standard semideterministic radiography tally, which has been used in previous image simulation studies. Next to the radiography tally two alternative stochastic detector models were developed: A perfect energy integrating detector and a detector based on the energy absorbed in the detector material. Validation of three image detector models was performed by comparing calculated scatter-to-primary ratios (SPRs) with the published and experimentally acquired SPR values. For mammographic applications, SPRs computed with the radiography tally were up to 44% larger than the published results, while the SPRs computed with the perfect energy integrating detectors and the blur-free absorbed energy detector model were, on the average, 0.3% (ranging from -3% to 3%) and 0.4% (ranging from -5% to 5%) lower, respectively. For general radiography applications, the radiography tally overestimated the measured SPR by as much as 46%. The SPRs calculated with the perfect energy integrating detectors were, on the average, 4.7% (ranging from -5.3% to -4%) lower than the measured SPRs, whereas for the blur-free absorbed energy detector model, the calculated SPRs were, on the average, 1.3% (ranging from -0.1% to 2.4%) larger than the measured SPRs. For mammographic applications, both the perfect energy integrating detector model and the blur-free energy absorbing detector model can be used to simulate image detectors, whereas for conventional x-ray imaging using higher energies, the blur-free energy absorbing detector model is the most appropriate image detector model. The radiography tally overestimates the scattered part and should therefore not be used to simulate radiographic image detectors.
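For the energy-integrating detector model described above, the SPR reduces to a ratio of energy-weighted tallies over detected photons. A minimal sketch, assuming a photon list tagged with scatter counts (the tally format here is hypothetical, not MCNP output):

```python
def scatter_to_primary_ratio(photons):
    """SPR for a perfect energy-integrating detector: each detected
    photon contributes its full energy to the signal, and photons that
    underwent at least one scatter event count as scatter."""
    primary = sum(e for e, n_scatters in photons if n_scatters == 0)
    scatter = sum(e for e, n_scatters in photons if n_scatters > 0)
    if primary == 0:
        raise ValueError("no primary signal recorded")
    return scatter / primary

# Hypothetical tally: (deposited energy in keV, number of scatter events).
photons = [(20.0, 0), (20.0, 0), (18.0, 1), (12.0, 2), (20.0, 0)]
print(scatter_to_primary_ratio(photons))   # → 0.5
```

The blur-free absorbed-energy model differs only in what `e` means (energy absorbed in the detector material rather than incident energy), which is the distinction the study found to matter at higher beam energies.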

  4. Comparison of 3D Echocardiogram-Derived 3D Printed Valve Models to Molded Models for Simulated Repair of Pediatric Atrioventricular Valves.

    Science.gov (United States)

    Scanlan, Adam B; Nguyen, Alex V; Ilina, Anna; Lasso, Andras; Cripe, Linnea; Jegatheeswaran, Anusha; Silvestro, Elizabeth; McGowan, Francis X; Mascio, Christopher E; Fuller, Stephanie; Spray, Thomas L; Cohen, Meryl S; Fichtinger, Gabor; Jolley, Matthew A

    2018-03-01

Mastering the technical skills required to perform pediatric cardiac valve surgery is challenging, in part due to limited opportunity for practice. Transformation of 3D echocardiographic (echo) images of congenitally abnormal heart valves into realistic physical models could allow patient-specific simulation of surgical valve repair. We compared materials, processes, and costs for 3D printing and molding of patient-specific models for visualization and surgical simulation of congenitally abnormal heart valves. Pediatric atrioventricular valves (mitral, tricuspid, and common atrioventricular valve) were modeled from transthoracic 3D echo images using semi-automated methods implemented as custom modules in 3D Slicer. Valve models were then both 3D printed in soft materials and molded in silicone using 3D printed "negative" molds. Using pre-defined assessment criteria, valve models were evaluated by congenital cardiac surgeons to determine suitability for simulation. Surgeon assessment indicated that the molded valves had superior material properties for the purposes of simulation compared to directly printed valves. 3D echo-derived molded valves are a step toward realistic simulation of complex valve repairs but require more time and labor to create than directly printed models. Patient-specific simulation of valve repair in children using such models may be useful for surgical training and simulation of complex congenital cases.

  5. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images

    Science.gov (United States)

    McClelland, Jamie R.; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; O' Connell, Dylan; Low, Daniel A.; Kaza, Evangelia; Collins, David J.; Leach, Martin O.; Hawkes, David J.

    2017-06-01

    Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.
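The correspondence-model half of the framework can be illustrated with the simplest common choice: a linear model relating registered motion to the surrogate signal, fitted by least squares. This is a sketch of that conventional two-step fit, not the authors' unified registration-plus-fitting optimization:

```python
import numpy as np

def fit_correspondence_model(motions, surrogates):
    """Least-squares fit of a linear correspondence model
    m(t) ≈ C @ [s(t), 1], relating motion estimates (e.g. from image
    registration) to a respiratory surrogate signal."""
    S = np.column_stack([surrogates, np.ones(len(surrogates))])  # (T, 2)
    C, *_ = np.linalg.lstsq(S, motions, rcond=None)              # (2, D)
    return C

def predict_motion(C, surrogate):
    """Predict motion for a new surrogate value."""
    return np.array([surrogate, 1.0]) @ C

# Synthetic example: 1-D motion that is exactly linear in the surrogate.
t = np.linspace(0, 2 * np.pi, 50)
s = np.sin(t)                          # surrogate (e.g. skin-surface height)
m = (3.0 * s + 1.0)[:, None]           # "registered" motion, mm
C = fit_correspondence_model(m, s)
print(predict_motion(C, 0.5))          # → [2.5]
```

In the unified framework of the paper the motion is not available per frame (only partial data such as single slices or projections), so the model parameters are instead optimized directly against the partial data; the linear form above is one of the correspondence models such a fit can use.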

  6. Simulation of bonding effects in HRTEM images of light element materials

    Directory of Open Access Journals (Sweden)

    Simon Kurasch

    2011-07-01

Full Text Available The accuracy of multislice high-resolution transmission electron microscopy (HRTEM) simulation can be improved by calculating the scattering potential using density functional theory (DFT). This approach accounts for the fact that electrons in the specimen are redistributed according to their local chemical environment, which influences the scattering process and alters the absolute and relative contrast in the final image. For light-element materials with well-defined geometry, such as graphene and hexagonal boron nitride monolayers, the DFT-based simulation scheme turns out to be necessary to prevent misinterpretation of weak signals, such as in the identification of nitrogen substitutions in a graphene network. This also implies that the HRTEM image contains not only structural information (atom positions and atomic numbers): information on the electron charge distribution can be gained in addition. In order to produce meaningful results, the new input parameters need to be chosen carefully. Here we present details of the simulation process and discuss the influence of the main parameters on the final result. Furthermore, we apply the simulation scheme to three model systems: a single-atom boron substitution in graphene, a single-atom oxygen substitution in graphene, and an oxygen adatom on graphene.

  7. Simulation of Thermal Processes in Metamaterial MM-to-IR Converter for MM-wave Imager

    International Nuclear Information System (INIS)

    Zagubisalo, Peter S; Paulish, Andrey G; Kuznetsov, Sergey A

    2014-01-01

The main characteristics of a MM-wave image detector were simulated by means of accurate numerical modelling of the thermophysical processes in a metamaterial MM-to-IR converter. The converter is a multilayer structure consisting of an ultra-thin resonant metamaterial absorber and a highly emissive layer. The absorber consists of a self-supporting dielectric film metallized on both sides, with a micro-pattern fabricated on one side. Resonant absorption of the MM waves heats the converter, which enhances the IR emission from the emissive layer; this IR emission is detected by an IR camera. In this contribution, an accurate numerical model for simulating the thermal processes in the converter structure was created using COMSOL Multiphysics software. The simulation results are in good agreement with experimental results, which validates the model. The simulation shows that real-time operation is achieved for converter thicknesses below 3 micrometers, and that the time response can be improved by decreasing the converter thickness further. The energy conversion efficiency of MM waves into IR radiation is over 80%. The converter temperature increase is a linear function of the MM-wave radiation power over three orders of magnitude of dynamic range. The blooming effect and ways of reducing it are also discussed. The model allows us to choose ways of optimizing the converter structure and improving the image detector parameters
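The two headline behaviours, a temperature rise linear in absorbed power and a response time that improves with thinner converters, already follow from a lumped-capacitance sketch of the converter (all parameter values below are hypothetical, not taken from the record):

```python
import math

def temperature_rise(power, r_th, c_th, t):
    """Lumped-capacitance step response of the converter:
    dT(t) = P * R_th * (1 - exp(-t / tau)), with tau = R_th * C_th.
    The steady-state rise is linear in absorbed power P, and the time
    constant tau shrinks with the heat capacity C_th, i.e. with the
    converter thickness."""
    tau = r_th * c_th
    return power * r_th * (1.0 - math.exp(-t / tau))

r_th, c_th = 2.0e3, 5.0e-6          # K/W and J/K (illustrative only)
dT1 = temperature_rise(1e-6, r_th, c_th, 0.1)   # 1 uW of absorbed MM power
dT2 = temperature_rise(2e-6, r_th, c_th, 0.1)   # doubled power
print(dT2 / dT1)                                # → 2.0 (linear in power)
```

The full COMSOL model resolves the spatial temperature distribution (and hence blooming), but the thickness-vs-speed trade-off it reports is the tau = R_th·C_th dependence in miniature.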

  8. Intraocular Telescopic System Design: Optical and Visual Simulation in a Human Eye Model

    OpenAIRE

    Zoulinakis, Georgios; Ferrer-Blasco, Teresa

    2017-01-01

    Purpose. To design an intraocular telescopic system (ITS) for magnifying retinal image and to simulate its optical and visual performance after implantation in a human eye model. Methods. Design and simulation were carried out with a ray-tracing and optical design software. Two different ITS were designed, and their visual performance was simulated using the Liou-Brennan eye model. The difference between the ITS was their lenses’ placement in the eye model and their powers. Ray tracing in bot...

  9. Computer simulation of radiographic images sharpness in several system of image record

    International Nuclear Information System (INIS)

    Silva, Marcia Aparecida; Schiable, Homero; Frere, Annie France; Marques, Paulo M.A.; Oliveira, Henrique J.Q. de; Alves, Fatima F.R.; Medeiros, Regina B.

    1996-01-01

A method to predict, by computer simulation, the influence of the recording system on radiographic image sharpness is studied. The method is intended to show in advance the image that would be obtained for each type of film or screen-film combination used during the exposure

  10. Modelling of classical ghost images obtained using scattered light

    International Nuclear Information System (INIS)

    Crosby, S; Castelletto, S; Aruldoss, C; Scholten, R E; Roberts, A

    2007-01-01

    The images obtained in ghost imaging with pseudo-thermal light sources are highly dependent on the spatial coherence properties of the incident light. Pseudo-thermal light is often created by reducing the coherence length of a coherent source by passing it through a turbid mixture of scattering spheres. We describe a model for simulating ghost images obtained with such partially coherent light, using a wave-transport model to calculate the influence of the scattering on initially coherent light. The model is able to predict important properties of the pseudo-thermal source, such as the coherence length and the amplitude of the residual unscattered component of the light which influence the resolution and visibility of the final ghost image. We show that the residual ballistic component introduces an additional background in the reconstructed image, and the spatial resolution obtainable depends on the size of the scattering spheres
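The reconstruction step that the modelled source properties feed into is the standard intensity-correlation estimate: the ghost image is the covariance between the bucket signal and the spatially resolved reference intensity. A sketch of that correlation (idealized independent speckle, not the authors' wave-transport source model):

```python
import numpy as np

def ghost_image(patterns, bucket):
    """Classical ghost image from correlating the reference-arm
    intensity with the single-pixel 'bucket' signal:
    G(x) = <B * I(x)> - <B><I(x)>."""
    patterns = np.asarray(patterns, float)      # (n_shots, n_pixels)
    bucket = np.asarray(bucket, float)          # (n_shots,)
    return bucket @ patterns / len(bucket) - bucket.mean() * patterns.mean(axis=0)

# Simulated pseudo-thermal speckle illuminating a 1-D transmissive object.
rng = np.random.default_rng(1)
obj = np.array([0, 0, 1, 1, 0, 1, 0, 0], float)
patterns = rng.random((20000, obj.size))        # independent speckle shots
bucket = patterns @ obj                         # total transmitted light
g = ghost_image(patterns, bucket)
print((g > g.max() / 2).astype(int))            # → [0 0 1 1 0 1 0 0]
```

A residual ballistic (unscattered) component adds a shot-to-shot common term to every pattern, which raises ⟨B⟩⟨I(x)⟩ uniformly: exactly the background offset in the reconstructed image that the model predicts.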

  12. Modelling of microcracks image treated with fluorescent dye

    Science.gov (United States)

    Glebov, Victor; Lashmanov, Oleg U.

    2015-06-01

The main reasons for catastrophes and accidents are a high level of equipment wear and violation of production technology. Methods of nondestructive testing are designed to find defects in time and to prevent breakdown of aggregates; they allow determining the compliance of object parameters with technical requirements without destroying the object. This work discusses dye penetrant inspection, or liquid penetrant inspection (DPI or LPI), methods and a computer model of microcrack images treated with fluorescent dye. Usually, cracks in an image look like broken extended lines of small width (about 1 to 10 pixels) with ragged edges. The inspection method used allows the detection of microcracks with depths of about 10 micrometers or more. In the course of this work, a mathematical model of an image of randomly located microcracks treated with fluorescent dye was created in the MATLAB environment. Background noise and distortions introduced by the optical system are considered in the model. The factors that influence the image are listed below: 1. Background noise. Background noise is caused by bright light from external sources and reduces contrast at object edges. 2. Noise in the image sensor. Digital noise manifests itself in the form of randomly located points differing in brightness and color. 3. Distortions caused by aberrations of the optical system. After passing through a real optical system, the homocentricity of the bundle of rays is violated, or homocentricity remains but the rays intersect at a point that does not coincide with the point of the ideal image. The stronger the influence of the above-listed factors, the worse the image quality and, therefore, the more difficult the analysis of the image for inspection of the item. The mathematical model is created using the following algorithm: at the beginning, the number of cracks to be modeled is entered from the keyboard. Then a point with random position is chosen on the matrix whose size is
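The generative model described, random crack traces plus background light and sensor noise, can be sketched compactly (a minimal Python analogue of the MATLAB model; crack geometry, noise levels and the background gradient are all hypothetical parameters):

```python
import numpy as np

def synthetic_crack_image(size=128, n_cracks=3, noise_sigma=0.05, seed=0):
    """Toy version of the model above: random-walk polylines a few
    pixels long stand in for dye-filled cracks, with a smooth background
    gradient for external stray light and additive sensor noise."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    for _ in range(n_cracks):
        r, c = rng.integers(0, size, 2)          # random start point
        for _ in range(size):
            img[r % size, c % size] = 1.0        # bright fluorescent trace
            r += rng.integers(-1, 2)             # ragged, randomly wandering edge
            c += 1                               # mostly horizontal crack
            if c >= size:
                break
    # Factor 1: background gradient; factor 2: per-pixel sensor noise.
    background = np.linspace(0.0, 0.2, size)[:, None]
    img = img + background + rng.normal(0.0, noise_sigma, img.shape)
    return img

img = synthetic_crack_image()
print(img.shape)   # → (128, 128)
```

Optical aberrations (factor 3) would be added by convolving `img` with a point-spread function before the noise step; detection algorithms can then be benchmarked against the known ground-truth crack positions.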

  13. Construction of anthropomorphic hybrid, dual-lattice voxel models for optimizing image quality and dose in radiography

    Science.gov (United States)

    Petoussi-Henss, Nina; Becker, Janine; Greiter, Matthias; Schlattl, Helmut; Zankl, Maria; Hoeschen, Christoph

    2014-03-01

In radiography there is generally a conflict between the best image quality and the lowest possible patient dose. A proven method of dosimetry is the simulation of radiation transport in virtual human models (i.e. phantoms). However, while the resolution of these voxel models is adequate for most dosimetric purposes, they cannot provide the organ fine structures necessary for assessing imaging quality. The aim of this work is to develop hybrid, dual-lattice voxel models (also called phantoms) as well as simulation methods by which patient dose and image quality for typical radiographic procedures can be determined. The results will provide a basis for investigating, by means of simulations, the relationships between patient dose and image quality for various imaging parameters, and for developing methods for their optimization. A hybrid model, based on NURBS (Non-Uniform Rational B-Spline) and PM (Polygon Mesh) surfaces, was constructed from an existing voxel model of a female patient. The organs of the hybrid model can then be scaled and deformed in a non-uniform way, i.e. organ by organ; they can thus be adapted to patient characteristics without losing their anatomical realism. Furthermore, the left lobe of the lung was substituted by a high-resolution lung voxel model, resulting in a dual-lattice geometry model. "Dual lattice" means in this context the combination of voxel models with different resolutions. Monte Carlo simulations of radiographic imaging were performed with the EGS4nrc code, modified to perform dual-lattice transport. Results are presented for a thorax examination.

  14. Textured digital elevation model formation from low-cost UAV LADAR/digital image data

    Science.gov (United States)

    Bybee, Taylor C.; Budge, Scott E.

    2015-05-01

Textured digital elevation models (TDEMs) have valuable uses in precision agriculture, situational awareness, and disaster response. However, scientific-quality models are expensive to obtain using conventional aircraft-based methods. The cost of creating an accurate textured terrain model can be reduced by using a low-cost UAV-based texel camera, which fuses the LADAR and digital image data in a processing step and enables both 2D- and 3D-image registration techniques to be used. This paper describes the formation of TDEMs using simulated data from a small UAV gathering swaths of texel images of the terrain below. Being a low-cost UAV, only coarse knowledge of position and attitude is available, and thus both 2D- and 3D-image registration techniques must be used to register adjacent swaths of texel imagery to create a TDEM. The process of creating an aggregate texel image (a TDEM) from many smaller texel image swaths is described. The algorithm is seeded with the rough estimate of the position and attitude of each capture. Details such as the required amount of texel image overlap, registration models, simulated flight patterns (level and turbulent), and texture image formation are presented. In addition, examples of such TDEMs are shown and analyzed for accuracy.
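One standard choice for the 2D-registration step when stitching overlapping swaths is phase correlation, which recovers the translation between two images from the phase of their cross-power spectrum. This is a generic sketch of that technique (integer shifts only), not necessarily the registration model used in the paper:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation taking image b to image a via
    the phase of the cross-power spectrum: the inverse FFT of the
    normalized spectrum peaks at the shift."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12              # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts.
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

# Synthetic terrain texture and a copy shifted by (5, -3) pixels.
rng = np.random.default_rng(0)
terrain = rng.random((64, 64))
shifted = np.roll(terrain, (5, -3), axis=(0, 1))
print(phase_correlation_shift(shifted, terrain))   # → (5, -3)
```

In a swath-stitching pipeline, the coarse GPS/attitude estimate seeds the search and a registration step like this refines the overlap; the co-registered LADAR points then supply the 3D constraint.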

  15. Algorithms of CT value correction for reconstructing a radiotherapy simulation image through axial CT images

    International Nuclear Information System (INIS)

    Ogino, Takashi; Egawa, Sunao

    1991-01-01

New algorithms of CT value correction for reconstructing a radiotherapy simulation image from axial CT images were developed. One, designated the plane weighting method, corrects the CT value in proportion to the position of the beam element passing through the voxel. The other, designated the solid weighting method, corrects the CT value in proportion to the length of the beam element passing through the voxel and the volume of the voxel. Phantom experiments showed fair spatial resolution in the transverse direction. In the longitudinal direction, however, spatial resolution finer than the slice thickness could not be obtained. Contrast resolution was equivalent for both methods. In patient studies, the reconstructed radiotherapy simulation image was close, in visually perceived density resolution, to a simulation film taken with an X-ray simulator. (author)

  16. An Image-Based Finite Element Approach for Simulating Viscoelastic Response of Asphalt Mixture

    Directory of Open Access Journals (Sweden)

    Wenke Huang

    2016-01-01

Full Text Available This paper presents an image-based micromechanical modeling approach to predicting the viscoelastic behavior of asphalt mixture. An improved image analysis technique based on the Otsu thresholding operation was employed to reduce the beam-hardening effect in X-ray CT images. We developed a voxel-based 3D digital reconstruction model of the asphalt mixture from the processed CT images. In this 3D model, the aggregate phase and air voids were treated as elastic materials, while the asphalt mastic phase was treated as a linear viscoelastic material. The viscoelastic constitutive model of the asphalt mastic was implemented in a finite element code using the ABAQUS user material subroutine (UMAT). An experimental procedure for determining the parameters of the viscoelastic constitutive model at a given temperature was proposed. To examine the capability of the model and the accuracy of the parameters, the numerical predictions were compared with observed laboratory results of bending and compression tests. Finally, the verified digital sample of the asphalt mixture was used to predict its viscoelastic behavior under dynamic loading and creep-recovery loading. Simulation results showed that the presented image-based digital sample may be appropriate for predicting the mechanical behavior of asphalt mixture once the mechanical properties of all the phases become available.
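The Otsu thresholding step that segments the CT slices picks the gray level that maximizes the between-class variance of the histogram. A self-contained sketch (the bimodal "CT slice" below is synthetic, not the paper's data):

```python
import numpy as np

def otsu_threshold(gray, n_bins=256):
    """Otsu's method: choose the threshold maximizing the between-class
    variance of the normalized intensity histogram."""
    hist, edges = np.histogram(gray, bins=n_bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # class-0 (below threshold) weight
    mu = np.cumsum(p * centers)            # cumulative mean
    mu_t = mu[-1]                          # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    # Between-class variance for every candidate split point.
    sigma_b = np.zeros(n_bins)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# Synthetic bimodal slice: dark mastic mode around 60, bright aggregate
# mode around 180 (arbitrary gray-level units).
rng = np.random.default_rng(0)
slice_ = np.concatenate([rng.normal(60, 5, 5000), rng.normal(180, 10, 5000)])
t = otsu_threshold(slice_)
print(60 < t < 180)    # → True: threshold falls between the two modes
```

Beam hardening shifts the histogram locally, which is why the paper applies an improved, Otsu-based analysis rather than a single global threshold; the core split criterion is the one above.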

  17. Data simulation for the Associated Particle Imaging system

    International Nuclear Information System (INIS)

    Tunnell, L.N.

    1994-01-01

    A data simulation procedure for the Associated Particle Imaging (API) system has been developed by postprocessing output from the Monte Carlo Neutron Photon (MCNP) code. This paper compares the simulated results to our experimental data

  18. Operative simulation of anterior clinoidectomy using a rapid prototyping model molded by a three-dimensional printer.

    Science.gov (United States)

    Okonogi, Shinichi; Kondo, Kosuke; Harada, Naoyuki; Masuda, Hiroyuki; Nemoto, Masaaki; Sugo, Nobuo

    2017-09-01

As the anatomical three-dimensional (3D) positional relationships around the anterior clinoid process (ACP) are complex, experience from many surgeries is necessary to understand anterior clinoidectomy (AC). We prepared a 3D synthetic image from computed tomographic angiography (CTA) and magnetic resonance imaging (MRI) data, and a rapid prototyping (RP) model from the imaging data using a 3D printer. The objective of this study was to evaluate the anatomical reproduction of the 3D synthetic image and of the intraosseous region after AC in the RP model. In addition, the usefulness of the RP model for operative simulation was investigated. The subjects were 51 patients examined by CTA and MRI before surgery. The size of the ACP, the thickness and length of the optic nerve and artery, and the intraosseous length after AC were measured in the 3D synthetic image and the RP model, and the reproducibility of the RP model was evaluated. In addition, 10 neurosurgeons performed AC on the completed RP models to investigate their usefulness for operative simulation. The RP model reproduced the region in the vicinity of the ACP in the 3D synthetic image, including the intraosseous region, with high accuracy. In addition, drilling of the RP model was a useful method of operative simulation for AC. The RP model of the vicinity of the ACP, prepared using a 3D printer, showed favorable anatomical reproducibility, including reproduction of the intraosseous region. It was concluded that this RP model is useful as a surgical education tool for drilling.

  19. Correlation of simulated TEM images with irradiation induced damage

    International Nuclear Information System (INIS)

    Schaeublin, R.; Almeida, P. de; Almazouzi, A.; Victoria, M.

    2000-01-01

Crystal damage induced by irradiation is investigated using transmission electron microscopy (TEM) coupled with molecular dynamics (MD) calculations. Displacement cascades are simulated for energies ranging from 10 to 50 keV in Al, Ni and Cu, and for times of up to a few tens of picoseconds. The samples are then used to perform simulations of the TEM images that one could observe experimentally. Diffraction contrast is simulated using a method based on the multislice technique. It appears that cascade-induced damage in Al imaged in weak beam exhibits little contrast, too low to be experimentally visible, while in Ni and Cu good contrast is observed. The number of visible clusters is always lower than the actual number. Conversely, high-resolution TEM (HRTEM) imaging allows most of the defects contained in the sample to be observed, although experimental difficulties arise from the low contrast intensity of the smallest defects. Single point defects give rise in HRTEM to a contrast similar to that of cavities. TEM imaging of the defects is discussed in relation to the actual size of the defects and to the number of clusters deduced from the MD simulations

  20. Monte Carlo modeling of neutron and gamma-ray imaging systems

    International Nuclear Information System (INIS)

    Hall, J.

    1996-04-01

Detailed numerical prototypes are essential to the design of efficient and cost-effective neutron and gamma-ray imaging systems. We have exploited the unique capabilities of an LLNL-developed radiation transport code (COG) to develop code modules capable of simulating the performance of neutron and gamma-ray imaging systems over a wide range of source energies. COG allows us to simulate complex, energy-, angle-, and time-dependent radiation sources, model 3-dimensional system geometries with ''real world'' complexity, specify detailed elemental and isotopic distributions and predict the responses of various types of imaging detectors with full Monte Carlo accuracy. COG references detailed, evaluated nuclear interaction databases, allowing users to account for multiple scattering, energy straggling, and secondary particle production phenomena which may significantly affect the performance of an imaging system but may be difficult or even impossible to estimate using simple analytical models. This work presents examples illustrating the use of these routines in the analysis of industrial radiographic systems for thick target inspection, nonintrusive luggage and cargo scanning systems, and international treaty verification

  1. Application of Simulated Three Dimensional CT Image in Orthognathic Surgery

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyun Don; Park, Chang Seo [Dept. of Dental Radiology, College of Dentistry, Yensei University, Seoul (Korea, Republic of); Yoo, Sun Kook; Lee, Kyoung Sang [Dept. of Medical Engineering, College of Medicine, Yensei University, Seoul (Korea, Republic of)

    1998-08-15

In orthodontics and orthognathic surgery, the cephalogram has been routine practice in the diagnosis and treatment evaluation of craniofacial deformity. But its inherent distortion of actual lengths and angles, introduced when a three-dimensional object is projected onto a two-dimensional plane, can cause errors in the quantitative analysis of shape and size. It is therefore desirable that a three-dimensional object be diagnosed and evaluated three-dimensionally, and a three-dimensional CT image is best for three-dimensional analysis. Clinical progress necessitates evaluation of treatment results and comparison before and after surgery. A patient who was diagnosed and planned by three-dimensional computed tomography before surgery should ideally be evaluated by three-dimensional computed tomography after surgery, too. But because there are no standardized normal values in three dimensions at present, and because three-dimensional computed tomography needs expensive equipment and entails considerable cost and radiation exposure, limitations remain in its application to routine practice. If a postoperative three-dimensional image could be constructed from pre- and postoperative lateral and postero-anterior cephalograms together with a preoperative three-dimensional computed tomogram, pre- and postoperative images could be compared and evaluated three-dimensionally without postoperative three-dimensional computed tomography, and that would contribute to standardizing normal values in three dimensions. This study introduces a new method in which a computer-simulated three-dimensional image was constructed from a preoperative three-dimensional computed tomogram and pre- and postoperative lateral and postero-anterior cephalograms. For validation of the new method, in four dry-skull cases in which the position of the mandible was displaced and in four orthognathic surgery patients, the computer-simulated three-dimensional image and the actual postoperative three-dimensional image were compared. The results were as follows. 1.
In four cases of

  2. Application of Simulated Three Dimensional CT Image in Orthognathic Surgery

    International Nuclear Information System (INIS)

    Kim, Hyun Don; Park, Chang Seo; Yoo, Sun Kook; Lee, Kyoung Sang

    1998-01-01

    In orthodontics and orthognathic surgery, cephalograms are routinely used in the diagnosis of craniofacial deformity and the evaluation of treatment. However, the distortion of actual lengths and angles inherent in projecting a three-dimensional object onto a two-dimensional plane can introduce errors into quantitative analyses of shape and size. It is therefore desirable to diagnose and evaluate a three-dimensional object three-dimensionally, and three-dimensional CT images are best suited to such analysis. Clinical development requires evaluating treatment outcomes and comparing pre- and postoperative states; ideally, a patient diagnosed and planned with three-dimensional computed tomography before surgery would also be evaluated with three-dimensional computed tomography after surgery. However, standardized three-dimensional normal values do not yet exist, and because three-dimensional computed tomography requires expensive equipment and entails considerable cost and radiation exposure, its application in routine practice remains limited. If a postoperative three-dimensional image could be constructed from pre- and postoperative lateral and postero-anterior cephalograms together with a preoperative three-dimensional computed tomogram, pre- and postoperative images could be compared and evaluated three-dimensionally without postoperative three-dimensional computed tomography, which would also contribute to standardizing three-dimensional normal values. This study introduces a new method in which a computer-simulated three-dimensional image is constructed from a preoperative three-dimensional computed tomogram and pre- and postoperative lateral and postero-anterior cephalograms. To validate the new method, computer-simulated and actual postoperative three-dimensional images were compared in four dry skulls with displaced mandibular positions and in four orthognathic surgery patients. The results were as follows. 1.
In four cases of

  3. Numerical simulation for neutron pinhole imaging in ICF

    International Nuclear Information System (INIS)

    Chen Faxin; Yang Jianlun; Wen Shuhuai

    2005-01-01

    Pinhole imaging of neutron production in laser-driven inertial confinement fusion experiments can provide important information about the performance of various capsule designs. To obtain good experimental results, the performance of candidate pinhole designs must be judged qualitatively or quantitatively before the experiment. The imaging calculation can be separated simply into pinhole imaging and image spectral analysis. This paper discusses pinhole imaging; codes for neutron pinhole imaging and image display were programmed. The codes can provide a theoretical foundation for pinhole design and simulated data for image analysis. (authors)
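    As a rough illustration of the first stage of such a calculation (not the authors' code; all geometry parameters below are hypothetical), a pinhole image can be simulated by tracing straight neutron rays from a source distribution through an idealized circular aperture onto a detector plane:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical geometry (lengths in cm)
L1, L2 = 10.0, 100.0          # source-to-pinhole / pinhole-to-detector distances
r_pin = 0.005                 # pinhole radius
n_rays = 200_000

# Gaussian source spot in the plane z = 0
src = rng.normal(0.0, 0.005, size=(n_rays, 2))

# Each ray passes through a uniformly sampled point of the aperture (z = L1)
theta = rng.uniform(0.0, 2.0 * np.pi, n_rays)
rho = r_pin * np.sqrt(rng.uniform(0.0, 1.0, n_rays))
pin = np.column_stack([rho * np.cos(theta), rho * np.sin(theta)])

# Straight-line propagation to the detector plane (z = L1 + L2):
# magnification L2/L1 plus the geometric penumbra of the finite aperture
det = pin + (pin - src) * (L2 / L1)

# Histogram the arrival positions to form the image
img, _, _ = np.histogram2d(det[:, 0], det[:, 1], bins=128,
                           range=[[-0.3, 0.3], [-0.3, 0.3]])
print(img.sum() / n_rays)     # fraction of rays landing within the detector window
```

A real code would add aperture penetration, scattering, and detector response on top of this purely geometric stage.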

  4. vECTlab-A fully integrated multi-modality Monte Carlo simulation framework for the radiological imaging sciences

    International Nuclear Information System (INIS)

    Peter, Joerg; Semmler, Wolfhard

    2007-01-01

    Alongside and in part motivated by recent advances in molecular diagnostics, the development of dual-modality instruments for patient and dedicated small-animal imaging has gained attention from diverse research groups. The desire for such systems is high, not only to link molecular or functional information with anatomical structures, but also to detect multiple molecular events simultaneously at shorter total acquisition times. While PET and SPECT have been integrated successfully with X-ray CT, the advance of optical imaging approaches (OT) and their integration into existing modalities carry high application potential, particularly for imaging small animals. A multi-modality Monte Carlo (MC) simulation approach has been developed that is able to trace high-energy (keV) as well as optical (eV) photons concurrently within identical phantom representation models. We show that the two ray-tracing approaches for keV and eV photons can be integrated into a single simulation framework which enables both photon classes to be propagated through various geometry models representing both phantoms and scanners. The main advantage of such an integrated framework for our specific application is the investigation of novel tomographic multi-modality instrumentation intended for in vivo small-animal imaging through time-resolved MC simulation upon identical phantom geometries. Design examples are provided for recently proposed SPECT-OT and PET-OT imaging systems.
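    The kernel shared by both photon classes in any such MC framework is free-path sampling against an attenuation (or extinction) coefficient. A minimal sketch, assuming a homogeneous slab and hypothetical coefficients (not the vECTlab implementation), might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

def transmitted_fraction(mu, thickness, n_photons=100_000):
    """Fraction of photons crossing a homogeneous slab without interacting.

    The same exponential free-path sampling applies whether mu is a
    keV-photon attenuation coefficient or an eV-photon extinction
    coefficient; only the coefficient values differ between modalities.
    """
    s = -np.log(1.0 - rng.random(n_photons)) / mu  # sampled free paths
    return np.mean(s > thickness)                  # analytic answer: exp(-mu * t)

# Hypothetical coefficients (1/cm) for the two photon classes
print(transmitted_fraction(mu=0.2, thickness=5.0))   # ~ exp(-1) ≈ 0.37
print(transmitted_fraction(mu=10.0, thickness=0.1))  # ~ exp(-1) ≈ 0.37
```

The full framework would dispatch to different interaction physics (photoelectric/Compton versus absorption/Mie scattering) after each sampled free path, but the propagation loop itself can be common to both.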

  5. Evaluation of patient dose using a virtual CT scanner: Applications to 4DCT simulation and Kilovoltage cone-beam imaging

    International Nuclear Information System (INIS)

    DeMarco, J J; Agazaryan, N; McNitt-Gray, M F; Cagnon, C H; Angel, E; Zankl, M

    2008-01-01

    This work evaluates the effects of patient size on radiation dose from simulation imaging studies such as four-dimensional computed tomography (4DCT) and kilovoltage cone-beam computed tomography (kV-CBCT). 4DCT studies are scans that include temporal information, frequently incorporating highly over-sampled imaging series necessary for retrospective sorting as a function of respiratory phase. This type of imaging study can result in a significant dose increase to the patient due to the slower table speed as compared with a conventional axial or helical scan protocol. Kilovoltage cone-beam imaging is a relatively new imaging technique that requires an on-board kilovoltage x-ray tube and a flat-panel detector. Instead of porting individual reference fields, the kV tube and flat-panel detector are rotated about the patient, producing a cone-beam CT data set (kV-CBCT). To perform these investigations, we used Monte Carlo simulation methods with detailed models of adult patients and virtual source models of multidetector computed tomography (MDCT) scanners. The GSF family of three-dimensional, voxelized patient models was implemented as input files using the Monte Carlo code MCNPX. The adult patient models represent a range of patient sizes and have all radiosensitive organs previously identified and segmented. Simulated 4DCT scans of each voxelized patient model were performed using a multi-detector CT source model that includes scanner-specific spectra, bow-tie filtration, and helical source path. Standard MCNPX tally functions were applied to each model to estimate absolute organ dose based upon an air-kerma normalization measurement for nominal scanner operating parameters.
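    The final air-kerma normalization step can be illustrated with hypothetical numbers (the tally values and measured kerma below are invented for illustration, not taken from the paper): the MC tally gives dose per simulated source particle, and a measured free-in-air kerma for the nominal scanner settings fixes the absolute scale.

```python
# Hypothetical Monte Carlo tallies (Gy per simulated source particle)
tally_organ = 4.0e-14        # organ dose tally
tally_air = 2.5e-14          # air-kerma tally at the reference point

# Measured free-in-air kerma for the scan protocol (Gy)
measured_air_kerma = 0.012

# The measurement implies how many "source particles" the real scan delivered,
# which converts the per-particle organ tally into an absolute organ dose.
particles_per_scan = measured_air_kerma / tally_air
organ_dose = tally_organ * particles_per_scan        # absolute organ dose (Gy)
print(round(organ_dose, 4))  # 0.0192
```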

  6. Ionospheric Simulation System for Satellite Observations and Global Assimilative Model Experiments - ISOGAME

    Science.gov (United States)

    Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga; Stephens, Philip; Iijima, Bryron A.

    2013-01-01

    Modeling and imaging the Earth's ionosphere, as well as understanding its structures, inhomogeneities, and disturbances, is a key part of NASA's Heliophysics Directorate science roadmap. This invention provides a design tool for scientific missions focused on the ionosphere. Quantitatively assessing the impact of a new observation system on our capability to image and model the ionosphere is a scientifically important and technologically challenging task. This question arises whenever a new satellite system is proposed, a new type of data emerges, or a new modeling technique is developed. The proposed constellation would be part of a new observation system with more low-Earth orbiters tracking more radio occultation signals broadcast by the Global Navigation Satellite System (GNSS) than those offered by the current GPS and COSMIC observation system. A simulation system was developed to fulfill this task. The system is composed of a suite of software that combines the Global Assimilative Ionospheric Model (GAIM), including first-principles and empirical ionospheric models, a multiple-dipole geomagnetic field model, data assimilation modules, an observation simulator, visualization software, and orbit design, simulation, and optimization software.

  7. A Virtual Reality System for PTCD Simulation Using Direct Visuo-Haptic Rendering of Partially Segmented Image Data.

    Science.gov (United States)

    Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz

    2016-01-01

    This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in Cuda and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.

  8. Simulation of Optical and Synthetic Imaging using Microwave Reflectometry

    International Nuclear Information System (INIS)

    Kramer, G.J.; Nazikian, R.; Valeo, E.

    2004-01-01

    Two-dimensional full-wave time-dependent simulations in full plasma geometry are presented which show that conventional reflectometry (without a lens) can be used to synthetically image density fluctuations in fusion plasmas under conditions where the parallel correlation length greatly exceeds the poloidal correlation length of the turbulence. The advantage of synthetic imaging is that the image can be produced without the need for a large lens of high optical quality, and each frequency that is launched can be independently imaged. A particularly simple arrangement, consisting of a single receiver located at the midpoint of a microwave beam propagating along the plasma midplane, is shown to suffice for imaging purposes. However, as the ratio of the parallel to poloidal correlation length decreases, a poloidal array of receivers needs to be used to synthesize the image with high accuracy. Simulations using DIII-D relevant parameters show the similarity of synthetic and optical imaging in present-day experiments.

  9. Simulation of Optical and Synthetic Imaging using Microwave Reflectometry

    Energy Technology Data Exchange (ETDEWEB)

    G.J. Kramer; R. Nazikian; E. Valeo

    2004-01-16

    Two-dimensional full-wave time-dependent simulations in full plasma geometry are presented which show that conventional reflectometry (without a lens) can be used to synthetically image density fluctuations in fusion plasmas under conditions where the parallel correlation length greatly exceeds the poloidal correlation length of the turbulence. The advantage of synthetic imaging is that the image can be produced without the need for a large lens of high optical quality, and each frequency that is launched can be independently imaged. A particularly simple arrangement, consisting of a single receiver located at the midpoint of a microwave beam propagating along the plasma midplane is shown to suffice for imaging purposes. However, as the ratio of the parallel to poloidal correlation length decreases, a poloidal array of receivers needs to be used to synthesize the image with high accuracy. Simulations using DIII-D relevant parameters show the similarity of synthetic and optical imaging in present-day experiments.

  10. Image reconstruction using Monte Carlo simulation and artificial neural networks

    International Nuclear Information System (INIS)

    Emert, F.; Missimner, J.; Blass, W.; Rodriguez, A.

    1997-01-01

    PET data sets are subject to two types of distortions during acquisition: the imperfect response of the scanner and attenuation and scattering in the active distribution. In addition, the reconstruction of voxel images from the line projections composing a data set can introduce artifacts. Monte Carlo simulation provides a means for modeling the distortions and artificial neural networks a method for correcting for them as well as minimizing artifacts. (author) figs., tab., refs
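    A minimal sketch of the idea, with a single linear layer standing in for the paper's neural network and a known blur standing in for the Monte Carlo-modeled scanner distortions (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical training data: Monte Carlo simulation would supply pairs of
# (distorted, true) projections; here a known tridiagonal blur plus noise
# stands in for the scanner response.
n, dim = 2000, 32
true = rng.random((n, dim))
blur = np.eye(dim) * 0.8 + np.eye(dim, k=1) * 0.1 + np.eye(dim, k=-1) * 0.1
distorted = true @ blur.T + 0.01 * rng.standard_normal((n, dim))

# A minimal single-layer "network" trained by gradient descent to undo the
# distortion (a stand-in for the multi-layer networks used in practice).
W = np.eye(dim)
lr = 0.1
losses = []
for _ in range(500):
    pred = distorted @ W.T
    err = pred - true
    losses.append(np.mean(err ** 2))
    W -= lr * (err.T @ distorted) / n   # gradient step on the squared error

print(losses[-1] < losses[0])  # True: the network learns the correction
```

The design point is that the network never needs an explicit system model at reconstruction time; the Monte Carlo simulation bakes the distortion physics into the training pairs.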

  11. FY1995 fundamental study of imaging simulator for diagnostics and therapeutics using light; 1995 nendo hikari wo riyosuru shindan chiryoyo gazo simulator no kiso kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    Medical application of lasers is rapidly expanding in accordance with the development of laser technology. However, it is difficult to predict how light propagates through and is absorbed by living bodies because biological tissues scatter light strongly; the determination of light dose has therefore been based on experience. This fundamental study aims to develop an imaging simulator that can predict the propagation of light and its effectiveness in medical diagnostics and therapeutics. Theoretical models of light propagation in biological tissues were constructed, and experiments were conducted to validate the theoretical calculations. For the theoretical calculation, a three-dimensional model simulating a human head with five layers of different tissue types was constructed. Numerical calculations using the finite element method simulated the propagation of ultrashort light pulses, which was visualized by a computer graphics technique for the first time in the world. For the experiment, a solid phantom that anatomically and optically simulates a human head, based on MRI images, was fabricated using optical prototyping technology, also for the first time in the world. We compared the experimental measurements of light transmitted through the solid phantom with the theoretical results and succeeded in reconstructing tomographic images of the optical properties. (NEDO)

  12. Coupling of Large Eddy Simulations with Meteorological Models to simulate Methane Leaks from Natural Gas Storage Facilities

    Science.gov (United States)

    Prasad, K.

    2017-12-01

    Atmospheric transport is usually performed with weather models, e.g., the Weather Research and Forecasting (WRF) model, which employs a parameterized turbulence model and does not resolve the fine-scale dynamics generated by the flow around the buildings and features comprising a large city. The NIST Fire Dynamics Simulator (FDS) is a computational fluid dynamics model that utilizes large eddy simulation methods to model flow around buildings at length scales much smaller than is practical with models like WRF. FDS has the potential to evaluate the impact of complex topography on near-field dispersion and mixing that is difficult to simulate with a mesoscale atmospheric model. A methodology has been developed to couple the FDS model with WRF mesoscale transport models. The coupling is based on nudging the FDS flow field towards that computed by WRF, and is currently limited to one-way coupling performed in an off-line mode. This approach allows the FDS model to operate as a sub-grid scale model within a WRF simulation. To test and validate the coupled FDS-WRF model, the methane leak from the Aliso Canyon underground storage facility was simulated. Large eddy simulations were performed over the complex topography of several natural gas storage facilities, including Aliso Canyon, Honor Rancho and MacDonald Island, at 10 m horizontal and vertical resolution. The goals of these simulations included improving and validating transport models as well as testing leak hypotheses. Forward simulation results were compared with aircraft- and tower-based in-situ measurements as well as methane plumes observed using the NASA Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) and the next-generation instrument AVIRIS-NG. Comparison of simulation results with measurement data demonstrates the capability of the coupled FDS-WRF models to accurately simulate the transport and dispersion of methane plumes over urban domains.
Simulated integrated methane enhancements will be presented and
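    The nudging scheme described above can be sketched as a simple relaxation of the LES field toward the interpolated mesoscale field; the coefficient and fields below are hypothetical illustrations, not the NIST implementation:

```python
import numpy as np

def nudge(u_les, u_wrf, alpha=0.1):
    """One relaxation step pulling the LES field toward the WRF field.

    u_les, u_wrf : velocity components on a common grid (after interpolating
    the coarse WRF field onto the fine LES grid); alpha is a hypothetical
    nudging coefficient (0 = no coupling, 1 = replace LES with WRF).
    """
    return u_les + alpha * (u_wrf - u_les)

u_les = np.array([1.0, 2.0, 3.0])   # toy LES velocities
u_wrf = np.array([2.0, 2.0, 2.0])   # toy interpolated WRF velocities
for _ in range(50):                 # repeated nudging relaxes toward WRF
    u_les = nudge(u_les, u_wrf)
print(np.allclose(u_les, u_wrf, atol=1e-2))  # True
```

Because the coupling is one-way and off-line, the LES retains its own fine-scale turbulence while the large scales are constrained by the mesoscale solution.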

  13. Calibration and Validation of a Detailed Architectural Canopy Model Reconstruction for the Simulation of Synthetic Hemispherical Images and Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Magnus Bremer

    2017-02-01

    Full Text Available Canopy density measures such as the Leaf Area Index (LAI) have become standardized mapping products derived from airborne and terrestrial Light Detection And Ranging (aLiDAR and tLiDAR, respectively) data. A specific application of LiDAR point clouds is their integration into radiative transfer models (RTM) of varying complexity. Using, e.g., ray tracing, this allows flexible simulations of sub-canopy light conditions and the simulation of various sensors, such as virtual hemispherical images or waveform LiDAR, on a virtual forest plot. However, the direct use of LiDAR data in RTMs shows some limitations in the handling of noise, the derivation of surface areas per LiDAR point and the discrimination of solid and porous canopy elements. In order to address these issues, a strategy upgrading tLiDAR data and Digital Hemispherical Photographs (DHP) into plausible 3D architectural canopy models is suggested. The presented reconstruction workflow creates an almost unbiased virtual 3D representation of branch and leaf surface distributions, minimizing systematic errors due to the object–sensor relationship. The models are calibrated and validated using DHPs. Using the 3D models for simulations, their capabilities for describing leaf density distributions and simulating aLiDAR and DHP signatures are shown. At an experimental test site, the suitability of the models for systematically simulating and evaluating aLiDAR-based LAI predictions under various scan settings is proven. This strategy makes it possible to show the importance not only of laser point sampling density but also of the diversity of scan angles and their quantitative effect on error margins.

  14. Integrative computational models of cardiac arrhythmias -- simulating the structurally realistic heart

    Science.gov (United States)

    Trayanova, Natalia A; Tice, Brock M

    2009-01-01

    Simulation of cardiac electrical function, and specifically, simulation aimed at understanding the mechanisms of cardiac rhythm disorders, represents an example of a successful integrative multiscale modeling approach, uncovering emergent behavior at the successive scales in the hierarchy of structural complexity. The goal of this article is to present a review of the integrative multiscale models of realistic ventricular structure used in the quest to understand and treat ventricular arrhythmias. It concludes with the new advances in image-based modeling of the heart and the promise it holds for the development of individualized models of ventricular function in health and disease. PMID:20628585

  15. Satellite image time series simulation for environmental monitoring

    Science.gov (United States)

    Guo, Tao

    2014-11-01

    The performance of environmental monitoring depends heavily on the availability of consecutive observation data, and there is an increasing demand in the remote sensing community for satellite image data of sufficient resolution in both the spatial and temporal dimensions; these requirements tend to conflict, making tradeoffs hard to tune. Multiple constellations could be a solution if cost were of no concern, so it is interesting but very challenging to develop a method that can simultaneously improve both spatial and temporal detail. There have been research efforts to address the problem from various aspects. One class of approaches enhances the spatial resolution using techniques such as super-resolution and pan-sharpening, which can produce good visual effects but mostly cannot preserve spectral signatures, losing analytical value. Another class fills temporal gaps by time interpolation, which does not actually add informative content at all. In this paper we present a novel method to generate satellite images with higher spatial and temporal detail, which further enables satellite image time series simulation. Our method starts with a pair of high-low resolution data sets, and a spatial registration is performed by introducing an LDA model to map high- and low-resolution pixels to each other. Afterwards, temporal change information is captured through a comparison of low-resolution time series data; the temporal change is then projected onto the high-resolution data plane and assigned to each high-resolution pixel, referring to the predefined temporal change patterns of each type of ground object, to generate a simulated high-resolution image. A preliminary experiment shows that our method can simulate high-resolution data with good accuracy.
We consider the contribution of our method is to enable timely monitoring of temporal changes through analysis of low resolution images time series only, and usage of
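    The projection of coarse-resolution temporal change onto the high-resolution grid can be sketched as follows (a toy illustration with nearest-neighbour upsampling standing in for the paper's LDA-based pixel mapping; all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

def upsample(img, factor):
    """Nearest-neighbour upsampling of a coarse image (illustrative only)."""
    return np.kron(img, np.ones((factor, factor)))

# Synthetic scenes: a high/low resolution pair at t0, low resolution only at t1
factor = 4
high_t0 = rng.random((32, 32))
low_t0 = high_t0.reshape(8, factor, 8, factor).mean(axis=(1, 3))
low_t1 = low_t0 + 0.1                      # a uniform temporal change

# Project the coarse temporal change onto the fine grid and apply it
change = upsample(low_t1 - low_t0, factor)
high_t1_sim = high_t0 + change

print(np.allclose(high_t1_sim, high_t0 + 0.1))  # True for this uniform change
```

For spatially varying change, the per-class temporal patterns mentioned in the abstract would decide how each high-resolution pixel shares the coarse-pixel change.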

  16. Multi-scale simulations of field ion microscopy images—Image compression with and without the tip shank

    International Nuclear Information System (INIS)

    NiewieczerzaŁ, Daniel; Oleksy, CzesŁaw; Szczepkowicz, Andrzej

    2012-01-01

    Multi-scale simulations of field ion microscopy images of faceted and hemispherical samples are performed using a 3D model. It is shown that faceted crystals have compressed images even in cases with no shank. The presence of the shank increases the compression of images of faceted crystals quantitatively in the same way as for hemispherical samples. It is hereby proven that the shank does not influence significantly the local, relative variations of the magnification caused by the atomic-scale structure of the sample. -- Highlights: ► Multi-scale simulations of field ion microscopy images. ► Faceted and hemispherical samples with and without shank. ► Shank causes overall compression, but does not influence local magnification effects. ► Image compression linearly increases with the shank angle. ► Shank changes compression of image of faceted tip in the same way as for smooth sample.

  17. GPU-based simulation of optical propagation through turbulence for active and passive imaging

    Science.gov (United States)

    Monnier, Goulven; Duval, François-Régis; Amram, Solène

    2014-10-01

    IMOTEP is a GPU-based (Graphical Processing Units) software relying on a fast parallel implementation of Fresnel diffraction through successive phase screens. Its applications include active imaging, laser telemetry and passive imaging through turbulence with anisoplanatic spatial and temporal fluctuations. Thanks to parallel implementation on GPU, speedups ranging from 40X to 70X are achieved. The present paper gives a brief overview of IMOTEP models, algorithms, implementation and user interface. It then focuses on major improvements recently brought to the anisoplanatic imaging simulation method. Previously, we took advantage of the computational power offered by the GPU to develop a simulation method based on large series of deterministic realisations of the PSF distorted by turbulence. The phase screen propagation algorithm, by reproducing higher moments of the incident wavefront distortion, provides realistic PSFs. However, we first used a coarse Gaussian model to fit the numerical PSFs and characterise their spatial statistics through only 3 parameters (two-dimensional displacements of the centroid and width). As a result, this approach was unable to reproduce the effects related to the details of the PSF structure, especially the "speckles" leading to prominent high-frequency content in short-exposure images. To overcome this limitation, we recently implemented a new empirical model of the PSF, based on Principal Components Analysis (PCA), intended to capture most of the PSF complexity. The GPU implementation allows estimating and handling efficiently the numerous (up to several hundred) principal components typically required under the strong turbulence regime. A first, computationally demanding step involves PCA, phase screen propagation and covariance estimates. In a second step, realistic instantaneous images, fully accounting for anisoplanatic effects, are quickly generated. Preliminary results are presented.
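    The PCA step can be sketched with a stack of synthetic PSFs decomposed by singular value decomposition (an illustration only; in the real code the PSFs would come from phase-screen propagation runs and the decomposition would run on the GPU):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a series of turbulence-distorted PSFs (flattened 16x16 images)
n_psf, side = 500, 16
base = np.exp(-((np.arange(side) - side / 2) ** 2) / 8.0)
psf0 = np.outer(base, base).ravel()
stack = psf0 + 0.02 * rng.standard_normal((n_psf, side * side))

# PCA via SVD of the mean-subtracted stack
mean_psf = stack.mean(axis=0)
u, s, vt = np.linalg.svd(stack - mean_psf, full_matrices=False)

# Reconstruct each PSF from its first k principal components
k = 50
coeffs = u[:, :k] * s[:k]          # per-PSF component coefficients
recon = mean_psf + coeffs @ vt[:k]

err = np.linalg.norm(recon - stack) / np.linalg.norm(stack)
print(err < 0.15)  # most of the stack's energy is captured by 50 components
```

Once the components and the statistics of the coefficients are estimated, new instantaneous PSFs can be generated cheaply by drawing coefficients, which is what makes the second, fast image-generation step possible.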

  18. Numerical Simulation on Hydromechanical Coupling in Porous Media Adopting Three-Dimensional Pore-Scale Model

    Science.gov (United States)

    Liu, Jianjun; Song, Rui; Cui, Mengmeng

    2014-01-01

    A novel approach to simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore and confining pressures, are tested at laboratory scale. A micro-CT scanner is employed to scan the samples for three-dimensional images, used as input to construct the model. Accordingly, four physical models possessing the same pore and rock-matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platforms. The Navier-Stokes equation and the elastic constitutive equation are used as the mathematical model for simulation. A hydromechanical coupling analysis in a pore-scale finite element model of porous media is simulated with the ANSYS and CFX software. In this way, the permeability of sandstone samples under different pore and confining pressures has been predicted. The simulation results agree well with the benchmark data. By reproducing the rock's underground stress state, the accuracy of pore-scale predictions of porous rock permeability is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view. PMID:24955384

  19. TH-A-BRF-11: Image Intensity Non-Uniformities Between MRI Simulation and Diagnostic MRI

    International Nuclear Information System (INIS)

    Paulson, E

    2014-01-01

    Purpose: MRI simulation for MRI-based radiotherapy demands that patients be setup in treatment position, which frequently involves use of alternative radiofrequency (RF) coil configurations to accommodate immobilized patients. However, alternative RF coil geometries may exacerbate image intensity non-uniformities (IINU) beyond those observed in diagnostic MRI, which may challenge image segmentation and registration accuracy as well as confound studies assessing radiotherapy response when MR simulation images are used as baselines for evaluation. The goal of this work was to determine whether differences in IINU exist between MR simulation and diagnostic MR images. Methods: ACR-MRI phantom images were acquired at 3T using a spin-echo sequence (TE/TR:20/500ms, rBW:62.5kHz, TH/skip:5/5mm). MR simulation images were obtained by wrapping two flexible phased-array RF coils around the phantom. Diagnostic MR images were obtained by placing the phantom into a commercial phased-array head coil. Pre-scan normalization was enabled in both cases. Images were transferred offline and corrected for IINU using the MNI N3 algorithm. Coefficients of variation (CV=σ/μ) were calculated for each slice. Wilcoxon matched-pairs and Mann-Whitney tests compared CV values between original and N3 images and between MR simulation and diagnostic MR images. Results: Significant differences in CV were detected between original and N3 images in both MRI simulation and diagnostic MRI groups (p=0.010, p=0.010). In addition, significant differences in CV were detected between original MR simulation and original and N3 diagnostic MR images (p=0.0256, p=0.0016). However, no significant differences in CV were detected between N3 MR simulation images and original or N3 diagnostic MR images, demonstrating the importance of correcting MR simulation images beyond pre-scan normalization prior to use in radiotherapy. Conclusions: Alternative RF coil configurations used in MRI simulation can result in
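    The per-slice uniformity metric above is simply the coefficient of variation CV = σ/μ over the phantom voxels, which any successful bias-field correction should reduce. A toy sketch with a synthetic shading field (not the MNI N3 algorithm; all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

def slice_cv(slice_img, mask):
    """Coefficient of variation CV = sigma/mu over the phantom voxels."""
    vals = slice_img[mask]
    return vals.std() / vals.mean()

# Synthetic phantom slice: uniform signal of 100 with 2% noise,
# multiplied by a smooth bias field mimicking coil-induced IINU.
side = 64
_, xx = np.mgrid[0:side, 0:side]
bias = 1.0 + 0.2 * (xx / side)                 # 20% left-right shading
img = 100.0 * bias * (1 + 0.02 * rng.standard_normal((side, side)))
mask = np.ones((side, side), dtype=bool)

cv_biased = slice_cv(img, mask)
cv_corrected = slice_cv(img / bias, mask)      # idealized bias-field correction
print(cv_biased > cv_corrected)  # True: correction lowers the CV
```

N3 works by estimating the bias field from the image itself; the sketch cheats by dividing out the known field, which is why it is only an illustration of the metric, not of the correction.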

  20. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond the simplified covariance-based a priori models typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions 'learned' from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...
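    The core of the frequency matching method, comparing pattern frequency distributions of a training image and a candidate model, can be sketched as follows (a toy binary example with synthetic images; the real method embeds this distance in an inversion objective):

```python
import numpy as np
from collections import Counter

def pattern_histogram(img, size=2):
    """Frequency distribution of all size-x-size patterns in a 2D image."""
    h = Counter()
    for i in range(img.shape[0] - size + 1):
        for j in range(img.shape[1] - size + 1):
            h[tuple(img[i:i + size, j:j + size].ravel())] += 1
    total = sum(h.values())
    return {p: c / total for p, c in h.items()}

def histogram_distance(h1, h2):
    """L1 distance between two pattern frequency distributions."""
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(k, 0.0) - h2.get(k, 0.0)) for k in keys)

rng = np.random.default_rng(5)
training = (rng.random((40, 40)) < 0.3).astype(int)   # toy training image
model_a = (rng.random((40, 40)) < 0.3).astype(int)    # same statistics
model_b = (rng.random((40, 40)) < 0.8).astype(int)    # different statistics

ht = pattern_histogram(training)
print(histogram_distance(ht, pattern_histogram(model_a)) <
      histogram_distance(ht, pattern_histogram(model_b)))  # True
```

A model whose pattern histogram matches the training image's is, in this sense, statistically consistent with the prior, which is exactly what the frequency matching objective rewards.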

  1. Simulation and measurement of total ionizing dose radiation induced image lag increase in pinned photodiode CMOS image sensors

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Jing [School of Materials Science and Engineering, Xiangtan University, Hunan (China); State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China); Chen, Wei, E-mail: chenwei@nint.ac.cn [State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China); Wang, Zujun, E-mail: wangzujun@nint.ac.cn [State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China); Xue, Yuanyuan; Yao, Zhibin; He, Baoping; Ma, Wuying; Jin, Junshan; Sheng, Jiangkun; Dong, Guantao [State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China)

    2017-06-01

    This paper presents an investigation of total ionizing dose (TID) induced image lag sources in pinned photodiode (PPD) CMOS image sensors based on radiation experiments and TCAD simulation. The radiation experiments were carried out at a cobalt-60 gamma-ray source. The experimental results show that image lag degradation becomes increasingly severe with increasing TID. Combined with the TCAD simulation results, we can confirm that the junction of the PPD and the transfer gate (TG) is an important region for image lag formation during irradiation. The simulations demonstrate that TID can generate a potential pocket leading to incomplete charge transfer.

  2. Simulation of single grid-based phase-contrast x-ray imaging (g-PCXI)

    Energy Technology Data Exchange (ETDEWEB)

    Lim, H.W.; Lee, H.W. [Department of Radiation Convergence Engineering, iTOMO Group, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon-do 26493 (Korea, Republic of); Cho, H.S., E-mail: hscho1@yonsei.ac.kr [Department of Radiation Convergence Engineering, iTOMO Group, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon-do 26493 (Korea, Republic of); Je, U.K.; Park, C.K.; Kim, K.S.; Kim, G.A.; Park, S.Y.; Lee, D.Y.; Park, Y.O.; Woo, T.H. [Department of Radiation Convergence Engineering, iTOMO Group, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon-do 26493 (Korea, Republic of); Lee, S.H.; Chung, W.H.; Kim, J.W.; Kim, J.G. [R& D Center, JPI Healthcare Co., Ltd., Ansan 425-833 (Korea, Republic of)

    2017-04-01

    The single grid-based phase-contrast x-ray imaging (g-PCXI) technique, recently proposed by Wen et al. to retrieve absorption, scattering, and phase-gradient images from a single raw image of the examined object, is a practical method for phase-contrast imaging with great simplicity and minimal requirements on setup alignment. In this work, we developed a simulation platform for g-PCXI and performed a simulation to demonstrate its viability. We also established a table-top setup for g-PCXI, consisting of a focused-linear grid (200-lines/in strip density), an x-ray tube (100-μm focal spot size), and a flat-panel detector (48-μm pixel size), and performed a preliminary experiment with several samples to assess the performance of the simulation platform. We successfully obtained phase-contrast x-ray images of much enhanced contrast from both the simulation and the experiment, and the simulated contrast was similar to the experimental contrast, which demonstrates the performance of the developed simulation platform. We expect that the simulation platform will be useful for designing an optimal g-PCXI system. - Highlights: • A simulation platform for the single grid-based phase-contrast x-ray imaging (g-PCXI) technique is proposed. • A numerical simulation code was implemented. • A preliminary experiment with several samples was performed for comparison. • The platform is expected to be useful for designing an optimal g-PCXI system.

  3. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    Science.gov (United States)

    Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long

    2012-01-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749

  4. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    Energy Technology Data Exchange (ETDEWEB)

    Yin, Youbing, E-mail: youbing-yin@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Choi, Jiwoong, E-mail: jiwoong-choi@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Hoffman, Eric A., E-mail: eric-hoffman@uiowa.edu [Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Internal Medicine, The University of Iowa, Iowa City, IA 52242 (United States); Tawhai, Merryn H., E-mail: m.tawhai@auckland.ac.nz [Auckland Bioengineering Institute, The University of Auckland, Auckland (New Zealand); Lin, Ching-Long, E-mail: ching-long-lin@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2013-07-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung.
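The C1-continuous construction of time-varying quantities from a few imaged lung volumes can be sketched with piecewise cubic Hermite interpolation: sharing one slope per knot between adjacent segments is what gives C1 continuity. This is an illustrative sketch, not the authors' code; the function names and the finite-difference slope choice are assumptions.

```python
import numpy as np

def hermite_segment(t, t0, t1, p0, p1, m0, m1):
    """Cubic Hermite polynomial on [t0, t1] with endpoint values p0, p1
    and endpoint slopes m0, m1."""
    s = (t - t0) / (t1 - t0)
    h = t1 - t0
    return ((2 * s**3 - 3 * s**2 + 1) * p0 + (s**3 - 2 * s**2 + s) * h * m0
            + (-2 * s**3 + 3 * s**2) * p1 + (s**3 - s**2) * h * m1)

def c1_volume_curve(times, volumes, ts):
    """Interpolate lung volume through the imaged states; because adjacent
    segments share the slope at each knot, the curve is C1-continuous."""
    times = np.asarray(times, float)
    volumes = np.asarray(volumes, float)
    m = np.gradient(volumes, times)  # one slope per knot, shared -> C1
    out = np.empty(len(ts), dtype=float)
    for j, t in enumerate(ts):
        i = int(np.searchsorted(times, t, side='right')) - 1
        i = max(0, min(i, len(times) - 2))
        out[j] = hermite_segment(t, times[i], times[i + 1],
                                 volumes[i], volumes[i + 1], m[i], m[i + 1])
    return out
```

The interpolant passes through the imaged volumes exactly, with continuous first derivatives across the knots; with only one or two images such a curve cannot capture the non-linear features noted above.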

  5. Image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, i.e. an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, like perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models combines learning, classification, and analogy with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into a model-based knowledge representation. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows the creation of intelligent computer vision systems for design and manufacturing.

  6. Safety Assessment of Advanced Imaging Sequences II: Simulations

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2016-01-01

    …Mechanical Index (MI) and Ispta.3 as required by the FDA. The method is performed on four different imaging schemes and compared to measurements conducted using the SARUS experimental scanner. The sequences include focused emissions with an F-number of 2 with 64 elements that generate highly non-linear fields. The simulation time is between 0.67 ms and 2.8 ms per emission and imaging point, making it possible to simulate even complex emission sequences in less than 1 s for a single spatial position. The linear simulations yield a relative accuracy on MI between -12.1% and 52.3% and for Ispta.3 between -38.6% and 62.6%, when using the impulse response of the probe estimated from an independent measurement. The accuracy is increased to between -22% and 24.5% for MI and between -33.2% and 27.0% for Ispta.3, when using the pressure response measured at a single point to scale the simulation. The spatial distribution of MI…

  7. STEM image simulation with hybrid CPU/GPU programming

    International Nuclear Information System (INIS)

    Yao, Y.; Ge, B.H.; Shen, X.; Wang, Y.G.; Yu, R.C.

    2016-01-01

    STEM image simulation is achieved via hybrid CPU/GPU programming under a parallel algorithm architecture to speed up calculation on a personal computer (PC). To fully utilize the calculation power of a PC, the simulation is performed on the GPU core and multiple CPU cores at the same time, significantly improving efficiency. GaSb and an artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. - Highlights: • STEM image simulation is achieved by hybrid CPU/GPU programming under a parallel algorithm architecture to speed up the calculation on a personal computer (PC). • To fully utilize the calculation power of the PC, the simulation is performed by the GPU core and multiple CPU cores at the same time, so efficiency is improved significantly. • GaSb and an artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. The results reveal some unintuitive phenomena about the variation of contrast with atom number.

  8. STEM image simulation with hybrid CPU/GPU programming

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Y., E-mail: yaoyuan@iphy.ac.cn; Ge, B.H.; Shen, X.; Wang, Y.G.; Yu, R.C.

    2016-07-15

    STEM image simulation is achieved via hybrid CPU/GPU programming under a parallel algorithm architecture to speed up calculation on a personal computer (PC). To fully utilize the calculation power of a PC, the simulation is performed on the GPU core and multiple CPU cores at the same time, significantly improving efficiency. GaSb and an artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. - Highlights: • STEM image simulation is achieved by hybrid CPU/GPU programming under a parallel algorithm architecture to speed up the calculation on a personal computer (PC). • To fully utilize the calculation power of the PC, the simulation is performed by the GPU core and multiple CPU cores at the same time, so efficiency is improved significantly. • GaSb and an artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. The results reveal some unintuitive phenomena about the variation of contrast with atom number.

  9. Conceptual Design of Simulation Models in an Early Development Phase of Lunar Spacecraft Simulator Using SMP2 Standard

    Science.gov (United States)

    Lee, Hoon Hee; Koo, Cheol Hea; Moon, Sung Tae; Han, Sang Hyuck; Ju, Gwang Hyeok

    2013-08-01

    The conceptual study for a Korean lunar orbiter/lander prototype has been performed at the Korea Aerospace Research Institute (KARI). Across diverse space programs in European countries, a variety of simulation applications have been developed using the SMP2 (Simulation Model Portability 2) standard, which concerns the portability and reuse of simulation models among model users. KARI has not only first-hand experience in developing an SMP-compatible simulation environment but also an ongoing study applying the SMP2 development process for simulation models to a simulator development project for lunar missions. KARI has tried to extend the coverage of the development domain based on the SMP2 standard across the whole simulation model life-cycle, from software design to validation, through a lunar exploration project. Figure 1 shows a snapshot from a visualization tool for the simulation of lunar lander motion. In reality, the demonstrator prototype on the right-hand side of the image was built and tested in 2012. In an early phase of simulator development, prior to a kick-off in the near future, the target hardware to be modelled was investigated and identified at the end of 2012. The architectural breakdown of the lunar simulator at system level was performed, and an architecture with a hierarchical tree of models, from the system down to parts at the lower level, was established. Finally, SMP documents such as Catalogue, Assembly, Schedule and so on were converted using an XML (eXtensible Mark-up Language) converter. To obtain the benefits of the approaches and design mechanisms suggested in the SMP2 standard as far as possible, object-oriented and component-based design concepts were strictly applied throughout the whole model development process.

  10. Superresolution Interferometric Imaging with Sparse Modeling Using Total Squared Variation: Application to Imaging the Black Hole Shadow

    Science.gov (United States)

    Kuramochi, Kazuki; Akiyama, Kazunori; Ikeda, Shiro; Tazaki, Fumie; Fish, Vincent L.; Pu, Hung-Yi; Asada, Keiichi; Honma, Mareki

    2018-05-01

    We propose a new imaging technique for interferometry using sparse modeling, utilizing two regularization terms: the ℓ1-norm and a new function named total squared variation (TSV) of the brightness distribution. First, we demonstrate that our technique may achieve a superresolution of ∼30% compared with the traditional CLEAN beam size using synthetic observations of two point sources. Second, we present simulated observations of three physically motivated static models of Sgr A* with the Event Horizon Telescope (EHT) to show the performance of the proposed techniques in greater detail. Remarkably, in both the image and gradient domains, the optimal beam size minimizing root-mean-squared errors is ≲10% of the traditional CLEAN beam size for ℓ1+TSV regularization, and non-convolved reconstructed images have smaller errors than beam-convolved reconstructed images. This indicates that TSV is well matched to the expected physical properties of astronomical images and that the traditional post-processing technique of Gaussian convolution may not be required in interferometric imaging. We also propose a feature-extraction method to detect circular features in the image of a black hole shadow and use it to evaluate the performance of the image reconstruction. With this method and the reconstructed images, the EHT can constrain the radius of the black hole shadow with an accuracy of ∼10%–20% in the present simulations for Sgr A*, suggesting that the EHT would be able to provide useful independent measurements of the masses of the supermassive black holes in Sgr A* and the other primary target, M87.
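The TSV regularizer itself is simple to state: the sum of squared differences between adjacent pixels of the image. A minimal sketch of the regularized objective follows (illustrative only; `chi2`, the weights `lam_l1`/`lam_tsv`, and the function names are assumptions, and the paper's actual solver is not reproduced here):

```python
import numpy as np

def total_squared_variation(img):
    """TSV: sum of squared differences between horizontally and
    vertically adjacent pixels of the brightness distribution."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return float(np.sum(dx**2) + np.sum(dy**2))

def objective(img, chi2, lam_l1, lam_tsv):
    """Sparse-modeling cost: data misfit plus l1 and TSV penalties."""
    return (chi2(img) + lam_l1 * float(np.sum(np.abs(img)))
            + lam_tsv * total_squared_variation(img))
```

Unlike total variation, which penalizes the absolute value of neighboring differences and favors piecewise-flat images, the squared penalty favors smooth brightness distributions, which is the property the paper argues matches astronomical images.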

  11. Simulation of FIB-SEM images for analysis of porous microstructures.

    Science.gov (United States)

    Prill, Torben; Schladitz, Katja

    2013-01-01

    Focused ion beam-scanning electron microscopy (FIB-SEM) tomography yields high-quality three-dimensional images of material microstructures at the nanometer scale by combining serial sectioning with a focused ion beam and SEM imaging. However, FIB-SEM tomography of highly porous media leads to shine-through artifacts that prevent automatic segmentation of the solid component. We simulate the SEM process in order to generate synthetic FIB-SEM image data for developing and validating segmentation methods. Monte Carlo techniques yield accurate results, but are too slow for the simulation of FIB-SEM tomography, which requires hundreds of SEM images for a single dataset alone. Nevertheless, a quasi-analytic description of the specimen and various acceleration techniques, including a track compression algorithm and an acceleration of the simulation of secondary electrons, cut the computing time by orders of magnitude, allowing FIB-SEM tomography to be simulated for the first time. © Wiley Periodicals, Inc.

  12. Modeling And Simulation Of Multimedia Communication Networks

    Science.gov (United States)

    Vallee, Richard; Orozco-Barbosa, Luis; Georganas, Nicolas D.

    1989-05-01

    In this paper, we present a simulation study of a browsing system involving radiological image servers. The proposed IEEE 802.6 DQDB MAN standard is designated as the computer network to transfer radiological images from file servers to medical workstations, and to simultaneously support real time voice communications. Storage and transmission of original raster scanned images and images compressed according to pyramid data structures are considered. Different types of browsing as well as various image sizes and bit rates in the DQDB MAN are also compared. The elapsed time, measured from the time an image request is issued until the image is displayed on the monitor, is the parameter considered to evaluate the system performance. Simulation results show that image browsing can be supported by the DQDB MAN.

  13. Multiscale image-based modeling and simulation of gas flow and particle transport in the human lungs

    Science.gov (United States)

    Tawhai, Merryn H; Hoffman, Eric A

    2013-01-01

    Improved understanding of structure and function relationships in the human lungs in individuals and sub-populations is fundamentally important to the future of pulmonary medicine. Image-based measures of the lungs can provide sensitive indicators of localized features, however to provide a better prediction of lung response to disease, treatment and environment, it is desirable to integrate quantifiable regional features from imaging with associated value-added high-level modeling. With this objective in mind, recent advances in computational fluid dynamics (CFD) of the bronchial airways - from a single bifurcation symmetric model to a multiscale image-based subject-specific lung model - will be reviewed. The interaction of CFD models with local parenchymal tissue expansion - assessed by image registration - allows new understanding of the interplay between environment, hot spots where inhaled aerosols could accumulate, and inflammation. To bridge ventilation function with image-derived central airway structure in CFD, an airway geometrical modeling method that spans from the model ‘entrance’ to the terminal bronchioles will be introduced. Finally, the effects of turbulent flows and CFD turbulence models on aerosol transport and deposition will be discussed. CFD simulation of airflow and particle transport in the human lung has been pursued by a number of research groups, whose interest has been in studying flow physics and airways resistance, improving drug delivery, or investigating which populations are most susceptible to inhaled pollutants. The three most important factors that need to be considered in airway CFD studies are lung structure, regional lung function, and flow characteristics. Their correct treatment is important because the transport of therapeutic or pollutant particles is dependent on the characteristics of the flow by which they are transported; and the airflow in the lungs is dependent on the geometry of the airways and how ventilation

  14. Multi-material 3D Models for Temporal Bone Surgical Simulation.

    Science.gov (United States)

    Rose, Austin S; Kimbell, Julia S; Webster, Caroline E; Harrysson, Ola L A; Formeister, Eric J; Buchman, Craig A

    2015-07-01

    A simulated, multicolor, multi-material temporal bone model can be created using 3-dimensional (3D) printing that will prove both safe and beneficial in training for actual temporal bone surgical cases. As the process of additive manufacturing, or 3D printing, has become more practical and affordable, a number of applications for the technology in the field of Otolaryngology-Head and Neck Surgery have been considered. One area of promise is temporal bone surgical simulation. Three-dimensional representations of human temporal bones were created from temporal bone computed tomography (CT) scans using biomedical image processing software. Multi-material models were then printed and dissected in a temporal bone laboratory by attending and resident otolaryngologists. A 5-point Likert scale was used to grade the models for their anatomical accuracy and suitability as a simulation of cadaveric and operative temporal bone drilling. The models produced for this study demonstrate significant anatomic detail and a likeness to human cadaver specimens for drilling and dissection. Simulated temporal bones created by this process have potential benefit in surgical training, preoperative simulation for challenging otologic cases, and the standardized testing of temporal bone surgical skills. © The Author(s) 2015.

  15. Accuracy of finite-difference modeling of seismic waves : Simulation versus laboratory measurements

    Science.gov (United States)

    Arntsen, B.

    2017-12-01

    The finite-difference technique for numerical modeling of seismic waves is still important and, in some areas, extensively used. For exploration purposes, finite-difference simulation is at the core of both traditional imaging techniques, such as reverse-time migration, and more elaborate full-waveform inversion techniques. The accuracy and fidelity of finite-difference simulation of seismic waves are hard to quantify, and meaningful error analysis is really only available for simplistic media. A possible alternative to theoretical error analysis is provided by comparing finite-difference simulated data with laboratory data created using a scale model. The advantage of this approach is the accurate knowledge of the model, within measurement precision, and of the locations of sources and receivers. We use a model made of PVC immersed in water, containing horizontal and tilted interfaces together with several spherical objects, to generate ultrasonic pressure reflection measurements. The physical dimension of the model is of the order of a meter, which after scaling represents a model with dimensions of the order of 10 kilometers and frequencies in the range of one to thirty hertz. We find that for plane horizontal interfaces the laboratory data can be reproduced by the finite-difference scheme with relatively small error, but for steeply tilted interfaces the error increases. For spherical interfaces the discrepancy between laboratory data and simulated data is sometimes much more severe, to the extent that it is not possible to simulate reflections from parts of highly curved bodies. The results are important in view of the fact that finite-difference modeling is often at the core of imaging and inversion algorithms tackling complicated geological areas with highly curved interfaces.
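For readers unfamiliar with the technique under evaluation, a minimal 1D sketch of an explicit second-order finite-difference scheme for the acoustic wave equation follows (illustrative only; the study concerns multi-dimensional modeling, and all names here are assumptions). Stability requires the Courant number c·dt/dx ≤ 1.

```python
import numpy as np

def fd_wave_1d(c, dx, dt, nt, src, src_pos):
    """Explicit second-order finite differences for u_tt = c^2 u_xx on a
    1D grid with fixed (u = 0) ends; returns one snapshot per time step."""
    n = len(c)
    u_prev, u, u_next = np.zeros(n), np.zeros(n), np.zeros(n)
    r2 = (np.asarray(c, float) * dt / dx) ** 2  # squared Courant number
    snapshots = []
    for it in range(nt):
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2[1:-1] * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_next[src_pos] += dt**2 * src[it]  # point-source injection
        u_prev, u, u_next = u, u_next, u_prev  # rotate time levels
        snapshots.append(u.copy())
    return snapshots
```

Even in this toy setting the scheme's error depends on grid spacing relative to wavelength; the laboratory comparison above probes exactly the regimes (steep dips, curved interfaces) where such discretization error grows.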

  16. Discrete imaging models for three-dimensional optoacoustic tomography using radially symmetric expansion functions.

    Science.gov (United States)

    Wang, Kun; Schoonover, Robert W; Su, Richard; Oraevsky, Alexander; Anastasio, Mark A

    2014-05-01

    Optoacoustic tomography (OAT), also known as photoacoustic tomography, is an emerging computed biomedical imaging modality that exploits optical contrast and ultrasonic detection principles. Iterative image reconstruction algorithms that are based on discrete imaging models are actively being developed for OAT due to their ability to improve image quality by incorporating accurate models of the imaging physics, instrument response, and measurement noise. In this work, we investigate the use of discrete imaging models based on Kaiser-Bessel window functions for iterative image reconstruction in OAT. A closed-form expression for the pressure produced by a Kaiser-Bessel function is calculated, which facilitates accurate computation of the system matrix. Computer-simulation and experimental studies are employed to demonstrate the potential advantages of Kaiser-Bessel function-based iterative image reconstruction in OAT.
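The radially symmetric expansion function at the heart of this approach has, for order zero, the closed form b(r) = I0(α·sqrt(1 − (r/a)²))/I0(α) for r ≤ a and 0 outside. A minimal numerical sketch (illustrative; the parameter values are assumptions, and NumPy's `np.i0` supplies the modified Bessel function I0):

```python
import numpy as np

def kaiser_bessel(r, a=1.0, alpha=10.4):
    """Order-0 radially symmetric Kaiser-Bessel window of support radius a.
    np.i0 is the modified Bessel function I0; the window is 0 for r > a."""
    r = np.abs(np.atleast_1d(np.asarray(r, dtype=float)))
    inside = r <= a
    arg = np.zeros_like(r)
    arg[inside] = np.sqrt(1.0 - (r[inside] / a) ** 2)
    return np.where(inside, np.i0(alpha * arg) / np.i0(alpha), 0.0)
```

The smooth, compactly supported profile is what makes a closed-form pressure expression tractable and the system matrix accurate to compute.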

  17. Remote Ultra-low Light Imaging (RULLI) For Space Situational Awareness (SSA): Modeling And Simulation Results For Passive And Active SSA

    International Nuclear Information System (INIS)

    Thompson, David C.; Shirey, Robert L.; Roggemann, Michael C; Gudimetla, Rao

    2008-01-01

    Remote Ultra-Low Light Imaging (RULLI) detectors are photon-limited detectors developed at Los Alamos National Laboratory. RULLI detectors provide a very high degree of temporal resolution for the arrival times of detected photo-events, but saturate at a photo-detection rate of about 10⁶ photo-events per second. Rather than recording a conventional image, such as that output by a charge-coupled device (CCD) camera, the RULLI detector outputs a data stream consisting of the two-dimensional location and time of arrival of each detected photo-electron. Hence, there is no need to select a specific exposure time to accumulate photo-events prior to data collection; with a RULLI detector this quantity can be optimized in post-processing. RULLI detectors have lower peak quantum efficiency (from as low as 5% to perhaps as much as 40% with modern photocathode technology) than back-illuminated CCDs (80% or higher). As a result of these factors, and the associated analyses of signal and noise, we have found that RULLI detectors can play two key new roles in SSA: passive imaging of exceedingly dim objects, and three-dimensional imaging of objects illuminated with an appropriate pulsed laser. In this paper we describe the RULLI detection model, compare it to a conventional CCD detection model, and present analytic and simulation results to show the limits of performance of RULLI detectors used for SSA applications at the AMOS field site.

  18. Airflow in Tracheobronchial Tree of Subjects with Tracheal Bronchus Simulated Using CT Image Based Models and CFD Method.

    Science.gov (United States)

    Qi, Shouliang; Zhang, Baihua; Yue, Yong; Shen, Jing; Teng, Yueyang; Qian, Wei; Wu, Jianlin

    2018-03-01

    Tracheal bronchus (TB) is a rare congenital anomaly characterized by the presence of an abnormal bronchus originating from the trachea or main bronchi and directed toward the upper lobe. The airflow pattern in the tracheobronchial trees of TB subjects is critical, but has not been systematically studied. This study simulates the airflow using CT image based models and the computational fluid dynamics (CFD) method. Six TB subjects and three healthy controls (HC) are included. After the geometric model of the tracheobronchial tree is extracted from CT images, the spatial distributions of velocity, wall pressure, and wall shear stress (WSS) are obtained through CFD simulation, and the lobar distribution of air, the flow pattern, and the global pressure drop are investigated. Compared with HC subjects, the main bronchus angle of TB subjects and the variation of volume are large, while the cross-sectional growth rate is small. High airflow velocity, wall pressure, and WSS are observed locally at the tracheal bronchus, but the global patterns of these measures are still similar to those of HC subjects. The airflow into the tracheal bronchus accounts for 6.6-15.6% of the inhaled airflow, decreasing the ratio to the right upper lobe from 15.7-21.4% (HC) to 4.9-13.6%. The air entering the tracheal bronchus originates from the right dorsal near-wall region of the trachea. The tracheal bronchus does not change the global pressure drop, which depends on multiple variables. Though the tracheobronchial trees of TB subjects present individualized features, several commonalities in their structural and airflow characteristics can be revealed. The observed local alterations might provide new insight into the reasons for recurrent local infections, cough, and acute respiratory distress related to TB.

  19. SU-E-J-89: Comparative Analysis of MIM and Velocity’s Image Deformation Algorithm Using Simulated KV-CBCT Images for Quality Assurance

    Energy Technology Data Exchange (ETDEWEB)

    Cline, K; Narayanasamy, G; Obediat, M; Stanley, D; Stathakis, S; Kirby, N [University of Texas Health Science Center at San Antonio, Cancer Therapy and Research Center, San Antonio, TX (United States); Kim, H [University of California San Francisco, San Francisco, CA (United States)

    2015-06-15

    Purpose: Deformable image registration (DIR) is used routinely in the clinic without a formalized quality assurance (QA) process. Using simulated deformations to digitally deform images in a known way and comparing them to DIR algorithm predictions is a powerful technique for DIR QA. This technique must also simulate realistic image noise and artifacts, especially between modalities. This study developed an algorithm to create simulated daily kV cone-beam computed-tomography (CBCT) images from CT images for DIR QA between these modalities. Methods: A Catphan and a physical head-and-neck phantom, with known deformations, were used. CT and kV-CBCT images of the Catphan were utilized to characterize the changes in Hounsfield units, noise, and image cupping that occur between these imaging modalities. The algorithm then imprinted these changes onto a CT image of the deformed head-and-neck phantom, thereby creating a simulated-CBCT image. CT and kV-CBCT images of the undeformed and deformed head-and-neck phantom were also acquired. The Velocity and MIM DIR algorithms were applied between the undeformed CT image and each of the deformed CT, CBCT, and simulated-CBCT images to obtain predicted deformations. The error between the known and predicted deformations was used as a metric to evaluate the quality of the simulated-CBCT image. Ideally, the simulated-CBCT image registration would produce the same accuracy as the deformed CBCT image registration. Results: For Velocity, the mean error was 1.4 mm for the CT-CT registration, 1.7 mm for the CT-CBCT registration, and 1.4 mm for the CT-simulated-CBCT registration. These same numbers were 1.5, 4.5, and 5.9 mm, respectively, for MIM. Conclusion: All cases produced similar accuracy for Velocity. MIM produced similar accuracy for CT-CT registration, but was not as accurate for CT-CBCT registrations. The MIM simulated-CBCT registration followed this same trend, but overestimated MIM DIR errors relative to the CT
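The QA metric used above, the mean error between known and predicted deformations, can be sketched in a few lines (illustrative only; the array layout and names are assumptions):

```python
import numpy as np

def mean_registration_error(known_dvf, predicted_dvf):
    """Mean Euclidean distance (e.g. in mm) between a known, digitally
    applied deformation vector field and the DIR algorithm's prediction.
    Both fields have shape (..., 3): one displacement vector per voxel."""
    diff = np.asarray(known_dvf, float) - np.asarray(predicted_dvf, float)
    return float(np.mean(np.linalg.norm(diff, axis=-1)))
```

Because the digital deformation is known exactly at every voxel, this metric needs no landmark identification, which is what makes the simulated-image approach attractive for routine DIR QA.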

  20. SU-E-J-89: Comparative Analysis of MIM and Velocity’s Image Deformation Algorithm Using Simulated kV-CBCT Images for Quality Assurance

    International Nuclear Information System (INIS)

    Cline, K; Narayanasamy, G; Obediat, M; Stanley, D; Stathakis, S; Kirby, N; Kim, H

    2015-01-01

    Purpose: Deformable image registration (DIR) is used routinely in the clinic without a formalized quality assurance (QA) process. Digitally deforming images in a known way with simulated deformations and comparing those known deformations to a DIR algorithm's predictions is a powerful technique for DIR QA. This technique must also simulate realistic image noise and artifacts, especially between modalities. This study developed an algorithm to create simulated daily kV cone-beam computed-tomography (CBCT) images from CT images for DIR QA between these modalities. Methods: A Catphan and a physical head-and-neck phantom, with known deformations, were used. CT and kV-CBCT images of the Catphan were utilized to characterize the changes in Hounsfield units, noise, and image cupping that occur between these imaging modalities. The algorithm then imprinted these changes onto a CT image of the deformed head-and-neck phantom, thereby creating a simulated-CBCT image. CT and kV-CBCT images of the undeformed and deformed head-and-neck phantom were also acquired. The Velocity and MIM DIR algorithms were applied between the undeformed CT image and each of the deformed CT, CBCT, and simulated-CBCT images to obtain predicted deformations. The error between the known and predicted deformations was used as a metric to evaluate the quality of the simulated-CBCT image. Ideally, the simulated-CBCT image registration would produce the same accuracy as the deformed CBCT image registration. Results: For Velocity, the mean error was 1.4 mm for the CT-CT registration, 1.7 mm for the CT-CBCT registration, and 1.4 mm for the CT-simulated-CBCT registration. These same errors were 1.5, 4.5, and 5.9 mm, respectively, for MIM. Conclusion: All cases produced similar accuracy for Velocity. MIM produced similar accuracy for CT-CT registration, but was not as accurate for CT-CBCT registrations. The MIM simulated-CBCT registration followed this same trend, but overestimated MIM DIR errors relative to the CT
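
The QA metric above, the error between known and predicted deformations, reduces to a mean Euclidean distance over the deformation vector field. A minimal NumPy sketch (the array shapes and toy fields are illustrative assumptions, not data from the study):

```python
import numpy as np

def dir_error(known_dvf, predicted_dvf):
    """Mean Euclidean error between two deformation vector fields.

    Both fields have shape (nx, ny, nz, 3): a 3-component displacement
    (in mm) per voxel.
    """
    per_voxel = np.linalg.norm(known_dvf - predicted_dvf, axis=-1)
    return per_voxel.mean()

# Toy check: a prediction that is uniformly 1 mm off in x gives a 1 mm error.
known = np.zeros((4, 4, 4, 3))
predicted = known.copy()
predicted[..., 0] += 1.0
print(dir_error(known, predicted))  # 1.0
```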

  1. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    Science.gov (United States)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations, prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  2. Simulation of high-resolution X-ray microscopic images for improved alignment

    International Nuclear Information System (INIS)

    Song Xiangxia; Zhang Xiaobo; Liu Gang; Cheng Xianchao; Li Wenjie; Guan Yong; Liu Ying; Xiong Ying; Tian Yangchao

    2011-01-01

    The introduction of precision optical elements to X-ray microscopes necessitates fine realignment to achieve optimal high-resolution imaging. In this paper, we demonstrate a numerical method for simulating image formation that facilitates alignment of the source, condenser, objective lens, and CCD camera. This algorithm, based on ray-tracing and Rayleigh-Sommerfeld diffraction theory, is applied to simulate the X-ray microscope beamline U7A of the National Synchrotron Radiation Laboratory (NSRL). The simulations and imaging experiments show that the algorithm is useful for guiding experimental adjustments. Our alignment simulation method is an essential tool for the transmission X-ray microscope (TXM) with optical elements and may also be useful for the alignment of optical components in other modes of microscopy.
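
The diffraction part of such a simulation is commonly implemented as an angular-spectrum propagation, whose transfer function is the frequency-domain form of the Rayleigh-Sommerfeld solution. A generic sketch under simplifying assumptions (square grid, scalar field, vacuum propagation; this is not the authors' code):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex 2-D scalar field a distance z (all units metres).

    The transfer function exp(i*2*pi*z*sqrt(1/lambda^2 - fx^2 - fy^2))
    is the frequency-domain (angular spectrum) form of the
    Rayleigh-Sommerfeld solution; evanescent components are suppressed.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

A uniform plane wave should keep unit amplitude under propagation, which makes a convenient sanity check for the grid setup.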

  3. FDTD-based optical simulations methodology for CMOS image sensors pixels architecture and process optimization

    Science.gov (United States)

    Hirigoyen, Flavien; Crocherie, Axel; Vaillant, Jérôme M.; Cazaux, Yvon

    2008-02-01

    This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors, taking diffraction effects into account. Following market trends and industrialization constraints, CMOS image sensors must be easily embedded into ever smaller packages, which are now equipped with auto-focus and, soon, zoom systems. Due to miniaturization, the ray-tracing models used to evaluate pixel optical performance are no longer accurate enough to describe the light propagation inside the sensor, because of diffraction effects. We therefore adopt a more fundamental description to take these diffraction effects into account: we model the propagation of light with Maxwell's equations and solve them with a Finite Difference Time Domain (FDTD) engine. We present in this article the complete methodology of this modeling: on the one hand, incoherent plane waves are propagated to approximate the diffuse-like source of product use; on the other hand, periodic boundary conditions limit the size of the simulated model and hence both memory and computation time. After presenting the correlation of the model with measurements, we illustrate its use in the optimization of a 1.75 μm pixel.
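
The FDTD engine itself is a leapfrog update of electric and magnetic fields on staggered grids. A one-dimensional vacuum sketch in normalized units (a real sensor simulation is 3-D with material models; this only illustrates the core update loop):

```python
import numpy as np

def fdtd_1d(steps=200, n=400, src=100, courant=0.5):
    """Minimal 1-D FDTD (Yee) loop in vacuum, normalized units.

    Ez and Hy live on staggered grids and leapfrog in time; `courant`
    (= c*dt/dx) must be <= 1 for stability. A soft Gaussian source is
    added at cell `src`, launching pulses in both directions.
    """
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(steps):
        hy[:-1] -= courant * (ez[1:] - ez[:-1])  # dHy/dt ~ -dEz/dx
        ez[1:] -= courant * (hy[1:] - hy[:-1])   # dEz/dt ~ -dHy/dx
        ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)
    return ez
```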

  4. Uterus models for use in virtual reality hysteroscopy simulators.

    Science.gov (United States)

    Niederer, Peter; Weiss, Stephan; Caduff, Rosmarie; Bajka, Michael; Szekély, Gabor; Harders, Matthias

    2009-05-01

    Virtual reality models of human organs are needed in surgery simulators which are developed for educational and training purposes. A simulation can only be useful, however, if the mechanical performance of the system in terms of force-feedback for the user as well as the visual representation is realistic. We therefore aim at developing a mechanical computer model of the organ in question which yields realistic force-deformation behavior under virtual instrument-tissue interactions and which, in particular, runs in real time. The modeling of the human uterus is described as it is to be implemented in a simulator for minimally invasive gynecological procedures. To this end, anatomical information which was obtained from specially designed computed tomography and magnetic resonance imaging procedures as well as constitutive tissue properties recorded from mechanical testing were used. In order to achieve real-time performance, the combination of mechanically realistic numerical uterus models of various levels of complexity with a statistical deformation approach is suggested. In view of mechanical accuracy of such models, anatomical characteristics including the fiber architecture along with the mechanical deformation properties are outlined. In addition, an approach to make this numerical representation potentially usable in an interactive simulation is discussed. The numerical simulation of hydrometra is shown in this communication. The results were validated experimentally. In order to meet the real-time requirements and to accommodate the large biological variability associated with the uterus, a statistical modeling approach is demonstrated to be useful.

  5. Imaging cerebral haemorrhage with magnetic induction tomography: numerical modelling.

    Science.gov (United States)

    Zolgharni, M; Ledger, P D; Armitage, D W; Holder, D S; Griffiths, H

    2009-06-01

    Magnetic induction tomography (MIT) is a new electromagnetic imaging modality which has the potential to image changes in the electrical conductivity of the brain due to different pathologies. In this study the feasibility of detecting haemorrhagic cerebral stroke with a 16-channel MIT system operating at 10 MHz was investigated. The finite-element method, combined with a realistic, multi-layer head model comprising 12 different tissues, was used for the simulations in the commercial FE package Comsol Multiphysics. The eddy-current problem was solved and the MIT signals computed for strokes of different volumes occurring at different locations in the brain. The results revealed that a large, peripheral stroke (volume 49 cm³) produced phase changes that would be detectable with our currently achievable instrumentation phase noise level (17 millidegrees) in 70 (27%) of the 256 exciter/sensor channel combinations. However, reconstructed images showed that a lower noise level of 1 millidegree was necessary to obtain good visualization of the strokes. The simulated MIT measurements were compared with those from an independent transmission-line-matrix model in order to give confidence in the results.

  6. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Cai, W; Rottmann, J; Williams, C; Wagar, M; Berbeco, R; Lewis, J H; Mishra, P; Li, R; Ionascu, D

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed an initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of the 3D fluoroscopic images by comparison to ground-truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shifts and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. (paper)
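
A common way to build such patient-specific motion models is principal component analysis (PCA) of the phase-to-reference displacement vector fields (DVFs); the fluoroscopic estimate is then a search over the low-dimensional mode weights. The PCA part can be sketched as follows (a generic illustration; the paper's model building and matching to 2D kV projections involve more machinery):

```python
import numpy as np

def build_motion_model(dvfs, n_modes=2):
    """PCA motion model from phase-to-reference displacement fields.

    dvfs: (n_phases, n_dof) array, one flattened DVF per breathing phase.
    Returns the mean field and the first n_modes principal modes.
    """
    mean = dvfs.mean(axis=0)
    _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    return mean, vt[:n_modes]

def project(dvf, mean, modes):
    """Weights of a DVF in the model subspace."""
    return modes @ (dvf - mean)

def reconstruct(weights, mean, modes):
    """DVF represented by a set of mode weights."""
    return mean + weights @ modes
```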

  7. Monte Carlo modeling of neutron imaging at the SINQ spallation source

    International Nuclear Information System (INIS)

    Lebenhaft, J.R.; Lehmann, E.H.; Pitcher, E.J.; McKinney, G.W.

    2003-01-01

    Modeling of the Swiss Spallation Neutron Source (SINQ) has been used to demonstrate the neutron radiography capability of the newly released MPI-version of the MCNPX Monte Carlo code. A detailed MCNPX model was developed of SINQ and its associated neutron transmission radiography (NEUTRA) facility. Preliminary validation of the model was performed by comparing the calculated and measured neutron fluxes in the NEUTRA beam line, and a simulated radiography image was generated for a sample consisting of steel tubes containing different materials. This paper describes the SINQ facility, provides details of the MCNPX model, and presents preliminary results of the neutron imaging. (authors)
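
At its simplest, transmission radiography is a Beer-Lambert line integral through a voxelized object; a Monte Carlo code such as MCNPX adds source spectra, scatter, and detector response on top of this primary-beam picture. A purely illustrative sketch of the primary-beam image:

```python
import numpy as np

def radiograph(mu, dz):
    """Primary-beam transmission image of a voxelized object.

    mu: linear attenuation coefficients (1/cm) on an (nx, ny, nz) grid,
    with the beam along z; dz: voxel depth (cm). Returns the
    transmitted fraction I/I0 per detector pixel (Beer-Lambert;
    scatter and detector blur ignored).
    """
    return np.exp(-mu.sum(axis=2) * dz)
```

A uniform 1 cm slab with mu = 1/cm transmits exp(-1) of the beam, a quick check of the geometry conventions.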

  8. Generation of synthetic Kinect depth images based on empirical noise model

    DEFF Research Database (Denmark)

    Iversen, Thorbjørn Mosekjær; Kraft, Dirk

    2017-01-01

    The development, training and evaluation of computer vision algorithms rely on the availability of a large number of images. The acquisition of these images can be time-consuming if they are recorded using real sensors. An alternative is to rely on synthetic images, which can be generated rapidly. This Letter describes a novel method for the simulation of Kinect v1 depth images. The method is based on an existing empirical noise model from the literature. The authors show that their relatively simple method is able to provide depth images which have a high similarity with real depth images.
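
The core of such a simulation is adding depth-dependent noise to a clean synthetic depth map. As a stand-in for the Letter's empirical model, the sketch below uses an axial-noise fit for Kinect v1 reported elsewhere in the literature (Nguyen et al., 2012); the functional form and coefficients are from that fit, not from the Letter:

```python
import numpy as np

def simulate_kinect_depth(clean_depth_m, rng=None):
    """Add depth-dependent Gaussian noise to a clean synthetic depth map.

    sigma_z(z) = 0.0012 + 0.0019 * (z - 0.4)**2  [metres] is an axial
    noise fit for Kinect v1 from Nguyen et al. (2012), used here as an
    illustrative stand-in for the Letter's own empirical model.
    """
    rng = np.random.default_rng(rng)
    z = np.asarray(clean_depth_m, dtype=float)
    sigma = 0.0012 + 0.0019 * (z - 0.4) ** 2
    return z + rng.normal(0.0, sigma)
```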

  9. Image simulation of high-speed imaging by high-pressure gas ionization detector

    International Nuclear Information System (INIS)

    Miao Jichen; Liu Ximing; Wu Zhifang

    2005-01-01

    In a freight train inspection system, the signals of neighboring pixels accumulate because the data read-out time is shorter than the ion drift time. This paper analyzes the correlation between neighboring pixels and designs a computer simulation method to generate simulated images, such as an indicator image. The result indicates that the high-pressure gas ionization detector can be used in the high-speed digital radiography field. (authors)

  10. Correlating TEM images of damage in irradiated materials to molecular dynamics simulations

    International Nuclear Information System (INIS)

    Schaeublin, R.; Caturla, M.-J.; Wall, M.; Felter, T.; Fluss, M.; Wirth, B.D.; Diaz de la Rubia, T.; Victoria, M.

    2002-01-01

    TEM image simulations are used to couple the results from molecular dynamics (MD) simulations to experimental TEM images. In particular we apply this methodology to the study of defects produced during irradiation. MD simulations have shown that irradiation of FCC metals results in a population of vacancies and interstitials forming clusters. The limitation of these simulations is the short time scales available, on the order of hundreds of picoseconds. Extrapolating the results from these short times to the time scales of the laboratory has been difficult. We address this problem with two methods. First, we perform TEM image simulations of MD cascade simulations with an improved technique, to relate defects produced at short time scales to those observed experimentally at much longer time scales. Second, we perform in situ TEM experiments on Au irradiated at liquid-nitrogen temperature, and study the evolution of the produced damage as the temperature is increased to room temperature. We find that some of the defects observed in the MD simulations at short time scales, examined using the TEM image simulation technique, have features that resemble those observed in laboratory TEM images of irradiated samples. In situ TEM shows that stacking fault tetrahedra are present at the lowest temperatures and are stable during annealing up to room temperature, while other defect clusters migrate one-dimensionally above -100 °C. Results are presented here

  11. Model simulations of line-of-sight effects in airglow imaging of acoustic and fast gravity waves from ground and space

    Science.gov (United States)

    Aguilar Guerrero, J.; Snively, J. B.

    2017-12-01

    Acoustic waves (AWs) have been predicted to be detectable by imaging systems for the OH airglow layer [Snively, GRL, 40, 2013], and have been identified in spectrometer data [Pilger et al., JASP, 104, 2013]. AWs are weak in the mesopause region, but can attain large amplitudes in the F region [Garcia et al., GRL, 40, 2013] and have local impacts on the thermosphere and ionosphere. Similarly, fast GWs, with phase speeds over 100 m/s, may propagate to the thermosphere and impart significant local body forcing [Vadas and Fritts, JASTP, 66, 2004]. Both have been clearly identified in ionospheric total electron content (TEC), such as following the 2013 Moore, OK, EF5 tornado [Nishioka et al., GRL, 40, 2013] and following the 2011 Tohoku-Oki tsunami [e.g., Galvan et al., RS, 47, 2012, and references therein], but AWs have yet to be unambiguously imaged in MLT data and fast GWs have low amplitudes near the threshold of detection; nevertheless, recent imaging systems have sufficient spatial and temporal resolution and sensitivity to detect both AWs and fast GWs with short periods [e.g., Pautet et al., AO, 53, 2014]. The associated detectability challenges are related to the transient nature of their signatures and to systematic challenges due to line-of-sight (LOS) effects such as enhancements and cancelations due to integration along aligned or oblique wavefronts and geometric intensity enhancements. We employ a simulated airglow imager framework that incorporates 2D and 3D emission rate data and performs the necessary LOS integrations for synthetic imaging from ground- and space-based platforms to assess relative intensity and temperature perturbations. We simulate acoustic and fast gravity wave perturbations to the hydroxyl layer from a nonlinear, compressible model [e.g., Snively, 2013] for different idealized and realistic test cases. 
The results show clear signal enhancements when acoustic waves are imaged off-zenith or off-nadir and the temporal evolution of these

  12. Normal and Pathological NCAT Image and Phantom Data Based on Physiologically Realistic Left Ventricle Finite-Element Models

    International Nuclear Information System (INIS)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui, Benjamin M.W.; Gullberg, Grant T.

    2006-01-01

    The 4D NURBS-based Cardiac-Torso (NCAT) phantom, which provides a realistic model of the normal human anatomy and cardiac and respiratory motions, is used in medical imaging research to evaluate and improve imaging devices and techniques, especially for dynamic cardiac applications. One limitation of the phantom is that it lacks the ability to accurately simulate altered functions of the heart that result from cardiac pathologies such as coronary artery disease (CAD). The goal of this work was to enhance the 4D NCAT phantom by incorporating a physiologically based, finite-element (FE) mechanical model of the left ventricle (LV) to simulate both normal and abnormal cardiac motions. The geometry of the FE mechanical model was based on gated high-resolution x-ray multi-slice computed tomography (MSCT) data of a healthy male subject. The myocardial wall was represented as a transversely isotropic hyperelastic material, with the fiber angle varying from -90 degrees at the epicardial surface, through 0 degrees at the mid-wall, to 90 degrees at the endocardial surface. A time-varying elastance model was used to simulate fiber contraction, and physiological intraventricular systolic pressure-time curves were applied to simulate the cardiac motion over the entire cardiac cycle. To demonstrate the ability of the FE mechanical model to accurately simulate the normal cardiac motion as well as abnormal motions indicative of CAD, a normal case and two pathologic cases were simulated and analyzed. In the first pathologic model, a subendocardial anterior ischemic region was defined. A second model was created with a transmural ischemic region defined in the same location. The FE-based deformations were incorporated into the 4D NCAT cardiac model through the control points that define the cardiac structures in the phantom, which were set to move according to the predictions of the mechanical model. A simulation study was performed using the FE-NCAT combination to investigate how the

  13. Interactive virtual simulation using a 3D computer graphics model for microvascular decompression surgery.

    Science.gov (United States)

    Oishi, Makoto; Fukuda, Masafumi; Hiraishi, Tetsuya; Yajima, Naoki; Sato, Yosuke; Fujii, Yukihiko

    2012-09-01

    The purpose of this paper is to report on the authors' advanced presurgical interactive virtual simulation technique using a 3D computer graphics model for microvascular decompression (MVD) surgery. The authors performed interactive virtual simulation prior to surgery in 26 patients with trigeminal neuralgia or hemifacial spasm. The 3D computer graphics models for interactive virtual simulation were composed of the brainstem, cerebellum, cranial nerves, vessels, and skull, individually created by image analysis, including segmentation, surface rendering, and data fusion, for data collected by 3-T MRI and 64-row multidetector CT systems. Interactive virtual simulation was performed by employing novel computer-aided design software with manipulation of a haptic device to imitate the surgical procedures of bone drilling and retraction of the cerebellum. The findings were compared with intraoperative findings. In all patients, interactive virtual simulation provided detailed and realistic surgical perspectives, of sufficient quality, representing the lateral suboccipital route. The causes of trigeminal neuralgia or hemifacial spasm determined by observing 3D computer graphics models were concordant with those identified intraoperatively in 25 (96%) of 26 patients, a significantly higher rate than the 73% concordance rate (concordance in 19 of 26 patients) obtained by review of 2D images only. The 3D computer graphics model provided a realistic environment for performing virtual simulations prior to MVD surgery and enabled us to ascertain complex microsurgical anatomy.

  14. Coupling biomechanics to a cellular level model: an approach to patient-specific image driven multi-scale and multi-physics tumor simulation.

    Science.gov (United States)

    May, Christian P; Kolokotroni, Eleni; Stamatakos, Georgios S; Büchler, Philippe

    2011-10-01

    Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning.

  15. Adaptive wiener filter based on Gaussian mixture distribution model for denoising chest X-ray CT image

    International Nuclear Information System (INIS)

    Tabuchi, Motohiro; Yamane, Nobumoto; Morikawa, Yoshitaka

    2008-01-01

    In recent decades, X-ray CT imaging has become more important as a result of its high-resolution performance. However, it is well known that the X-ray dose is insufficient in the techniques that use low-dose imaging in health screening or thin-slice imaging in work-up. Therefore, the degradation of CT images caused by the streak artifact frequently becomes problematic. In this study, we applied a Wiener filter (WF) using the universal Gaussian mixture distribution model (UNI-GMM) as a statistical model to remove streak artifact. In designing the WF, it is necessary to estimate the statistical model and the precise co-variances of the original image. In the proposed method, we obtained a variety of chest X-ray CT images using a phantom simulating a chest organ, and we estimated the statistical information using the images for training. The results of simulation showed that it is possible to fit the UNI-GMM to the chest X-ray CT images and reduce the specific noise. (author)
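
For reference, the classical frequency-domain Wiener filter underlying such methods attenuates each frequency according to the signal-to-noise power ratio; the paper adapts this locally with a Gaussian-mixture image model, while the sketch below shows only the global principle (a known signal power spectrum is assumed):

```python
import numpy as np

def wiener_denoise(noisy, signal_psd, noise_power):
    """Global frequency-domain Wiener filter.

    H = Pss / (Pss + Pnn) keeps frequencies where the (assumed known)
    signal power spectrum dominates the per-bin noise power and
    attenuates the rest.
    """
    H = signal_psd / (signal_psd + noise_power)
    return np.real(np.fft.ifft2(np.fft.fft2(noisy) * H))
```

On a narrow-band test image the filtered result should land closer to the clean image than the noisy input does, which is the defining property of the filter.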

  16. Lithographic image simulation for the 21st century with 19th-century tools

    Science.gov (United States)

    Gordon, Ronald L.; Rosenbluth, Alan E.

    2004-01-01

    Simulation of lithographic processes in semiconductor manufacturing has gone from a crude learning tool 20 years ago to a critical part of yield enhancement strategy today. Although many disparate models, championed by equally disparate communities, exist to describe various photoresist development phenomena, these communities would all agree that the one piece of the simulation picture that can, and must, be computed accurately is the image intensity in the photoresist. The imaging of a photomask onto a thin-film stack is one of the few phenomena in the lithographic process that is described fully by well-known, definitive physical laws. Although many approximations are made in the derivation of the Fourier transform relations between the mask object, the pupil, and the image, these and their impacts are well understood and need little further investigation. The imaging process in optical lithography is modeled as a partially coherent system with Köhler illumination. As Hopkins has shown, we can separate the computation into two pieces: one that takes information about the illumination source, the projection lens pupil, the resist stack, and the mask size or pitch, and another that only needs the details of the mask structure. As the latter piece of the calculation can be expressed as a fast Fourier transform, it is the first piece that dominates. This piece involves computation of a potentially large number of quantities called transmission cross-coefficients (TCCs), which are correlations of the pupil function weighted with the illumination intensity distribution. The advantage of performing the image calculations this way is that the computation of these TCCs represents an up-front cost, not to be repeated if one is only interested in changing the mask features, which is the case in Model-Based Optical Proximity Correction (MBOPC).
The down side, however, is that the number of these expensive double integrals that must be performed increases as the square of the mask
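
In discrete form, the up-front TCC computation is a source-weighted correlation of the pupil function with itself. A brute-force 1-D sketch (real systems use 2-D frequency grids and exploit symmetry to tame the cost):

```python
import numpy as np

def tccs(source, pupil):
    """Brute-force transmission cross-coefficients on a 1-D grid.

    TCC(f1, f2) = sum_f S(f) P(f + f1) conj(P(f + f2)),
    with the frequency offsets f1, f2 centered on the grid. This is
    the expensive up-front computation described above; the
    mask-dependent work then reduces to Fourier transforms.
    """
    n = len(pupil)
    c = n // 2
    tcc = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            acc = 0j
            for f in range(n):
                a, b = f + i - c, f + j - c
                if 0 <= a < n and 0 <= b < n:
                    acc += source[f] * pupil[a] * np.conj(pupil[b])
            tcc[i, j] = acc
    return tcc
```

For any real, non-negative source the resulting TCC matrix is Hermitian, a useful correctness check on the indexing.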

  17. Intraocular Telescopic System Design: Optical and Visual Simulation in a Human Eye Model.

    Science.gov (United States)

    Zoulinakis, Georgios; Ferrer-Blasco, Teresa

    2017-01-01

    Purpose. To design an intraocular telescopic system (ITS) for magnifying the retinal image and to simulate its optical and visual performance after implantation in a human eye model. Methods. Design and simulation were carried out with ray-tracing and optical design software. Two different ITS were designed, and their visual performance was simulated using the Liou-Brennan eye model. The two ITS differed in the placement of their lenses in the eye model and in their powers. Ray tracing in both centered and decentered situations was carried out for both ITS, while the visual Strehl ratio (VSOTF) was computed using custom-made MATLAB code. Results. The results show that between 0.4 and 0.8 mm of decentration, the VSOTF changes little for either far or near target distances. The image projection for these decentrations is in the parafoveal zone, and the quality of the projected image is quite similar. Conclusion. Both systems display similar quality while differing in size; therefore, the choice between them would need to take into account specific parameters of the patient's eye. Quality changes little between 0.4 and 0.8 mm of decentration for either system, which gives the clinician flexibility to adjust decentration to avoid areas of retinal damage.

  18. Automated numerical simulation of biological pattern formation based on visual feedback simulation framework.

    Science.gov (United States)

    Sun, Mingzhu; Xu, Hui; Zeng, Xingjuan; Zhao, Xin

    2017-01-01

    Biological pattern formation exhibits a variety of fascinating phenomena. Mathematical modeling using reaction-diffusion partial differential equation systems is employed to study the mechanisms of pattern formation. However, model parameter selection is both difficult and time consuming. In this paper, a visual feedback simulation framework is proposed to calculate the parameters of a mathematical model automatically, based on the basic principle of feedback control. In the simulation framework, the simulation results are visualized and image features are extracted as the system feedback. The unknown model parameters are then obtained by comparing the image features of the simulated image with those of the target biological pattern. Considering two typical applications, the visual feedback simulation framework is applied to pattern formation simulations for vascular mesenchymal cells and lung development. In the framework, the spot, stripe, and labyrinthine patterns of vascular mesenchymal cells, as well as the normal lung branching pattern and the branching pattern lacking side branching, are obtained in a finite number of iterations. The simulation results indicate that the simulation targets are easy to achieve, especially when the simulated patterns are sensitive to the model parameters. Moreover, this simulation framework can be extended to other types of biological pattern formation.
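
The feedback principle (simulate, extract an image feature, compare with the target feature, correct the parameter) can be illustrated with a one-parameter toy model in which the extracted feature is the rms width of a diffusing spot; the paper's reaction-diffusion systems have several coupled parameters and richer features:

```python
import numpy as np

def spot_width(profile, x):
    """Image feature: rms width of a 1-D intensity profile."""
    p = profile / profile.sum()
    mu = (x * p).sum()
    return np.sqrt((((x - mu) ** 2) * p).sum())

def simulate(D, x, t=1.0):
    """Forward model: intensity of a point source after diffusing for time t."""
    return np.exp(-x ** 2 / (4.0 * D * t))

def fit_D(target_width, x, iters=60):
    """Feedback loop: compare the simulated feature with the target
    feature and correct the unknown parameter (bisection works because
    the width grows monotonically with D)."""
    lo, hi = 1e-3, 10.0
    for _ in range(iters):
        D = 0.5 * (lo + hi)
        if spot_width(simulate(D, x), x) < target_width:
            lo = D
        else:
            hi = D
    return D
```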

  19. Deterministic simulation of first-order scattering in virtual X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. E-mail: nicolas.freud@insa-lyon.fr; Duvauchelle, P.; Pistrui-Maximean, S.A.; Letang, J.-M.; Babot, D

    2004-07-01

    A deterministic algorithm is proposed to compute the contribution of first-order Compton- and Rayleigh-scattered radiation in X-ray imaging. This algorithm has been implemented in a simulation code named virtual X-ray imaging. The physical models chosen to account for photon scattering are the well-known form factor and incoherent scattering function approximations, which are recalled in this paper and whose limits of validity are briefly discussed. The proposed algorithm, based on a voxel discretization of the inspected object, is presented in detail, as well as its results in simple configurations, which are shown to converge when the sampling steps are chosen sufficiently small. Simple criteria for choosing correct sampling steps (voxel and pixel size) are established. The order of magnitude of the computation time necessary to simulate first-order scattering images amounts to hours with a PC architecture and can even be decreased down to minutes, if only a profile is computed (along a linear detector). Finally, the results obtained with the proposed algorithm are compared to the ones given by the Monte Carlo code Geant4 and found to be in excellent agreement, which constitutes a validation of our algorithm. The advantages and drawbacks of the proposed deterministic method versus the Monte Carlo method are briefly discussed.

  20. Monte-Carlo simulation of spatial resolution of an image intensifier in a saturation mode

    Science.gov (United States)

    Xie, Yuntao; Wang, Xi; Zhang, Yujun; Sun, Xiaoquan

    2018-04-01

    In order to investigate the spatial resolution of an image intensifier irradiated by a high-energy pulsed laser, a three-dimensional electron avalanche model was built and the cascade process of the electrons was numerically simulated. The influence of positive wall charges, which arise when charges extracted from the channel during the avalanche are not replenished, was considered by calculating their static electric field with a particle-in-cell (PIC) method. By tracing the trajectories of electrons throughout the image intensifier, the energy of the electrons at the output of the microchannel plate and the electron distribution at the phosphor screen were numerically calculated. The simulated energy distribution of the output electrons is in good agreement with experimental data from previous studies. In addition, the FWHM extent of the electron spot at the phosphor screen was calculated as a function of the number of incident electrons. The results demonstrate that the spot size increases significantly with the number of incident electrons. Furthermore, we obtained the MTFs of the image intensifier by Fourier transform of the point spread function at the phosphor screen. Comparison between the MTFs in our model and those from an analytic method shows that the spatial resolution of the image intensifier decreases significantly as the number of incident electrons increases, particularly when the number of incident electrons is greater than 100.
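
The final step above (the MTF as the Fourier transform of the point spread function at the phosphor screen) can be sketched with a Gaussian stand-in for the simulated electron spot; widening the spot, as happens at high incident-electron numbers, visibly lowers the MTF:

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Centered Gaussian stand-in for the electron spot at the screen."""
    y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def mtf_from_psf(psf):
    """MTF as the normalized magnitude of the PSF's Fourier transform."""
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    return np.abs(otf) / np.abs(otf).max()

mtf_small_spot = mtf_from_psf(gaussian_psf(128, 2.0))   # few incident electrons
mtf_large_spot = mtf_from_psf(gaussian_psf(128, 6.0))   # many incident electrons
```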

  1. Phase contrast image simulations for electron holography of magnetic and electric fields

    DEFF Research Database (Denmark)

    Beleggia, Marco; Pozzi, Giulio

    2013-01-01

    …representation of the magnetic vector potential, that enables us to simulate realistic phase images of fluxons. The aim of this paper is to review the main ideas underpinning our computational framework and the results we have obtained throughout the collaboration. Furthermore, we outline how to generalize the approach to model other samples and structures of interest, in particular thin ferromagnetic films, ferromagnetic nanoparticles and p–n junctions.

  2. Simulation modeling and arena

    CERN Document Server

    Rossetti, Manuel D

    2015-01-01

    Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problem sets, and software applications. With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage on how simulation works and why it matters, the Second Edition expands coverage on static simulation and the applications of spreadsheets to perform simulation. The new edition als

  3. Relationships of virtual reality neuroendoscopic simulations to actual imaging.

    Science.gov (United States)

    Riegel, T; Alberti, O; Retsch, R; Shiratori, V; Hellwig, D; Bertalanffy, H

    2000-12-01

    Advances in computer technology have permitted virtual reality images of the ventricular system. To determine the relevance of these images, we compared virtual reality simulations of the ventricular system with endoscopic findings in three patients. The virtual fly-through can be simulated after definition of waypoints; in flight, objects of interest can be viewed from all sides. Important drawbacks are that filigree structures may be missed and blood vessels cannot be distinguished clearly. However, virtual endoscopy can presently be used as a planning tool or for training, and has future potential for neurosurgery.

  4. Implementation of angular response function modeling in SPECT simulations with GATE

    International Nuclear Information System (INIS)

    Descourt, P; Visvikis, D; Carlier, T; Bardies, M; Du, Y; Song, X; Frey, E C; Tsui, B M W; Buvat, I

    2010-01-01

    Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy. (note)
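
The ARF idea can be sketched as follows: photons are no longer tracked through the collimator; instead, when one reaches the collimator face, its detection weight is looked up in a precomputed table indexed by incidence angle. The table below is a toy exponential fall-off; real ARF tables are produced by full Monte Carlo simulation and are binned in photon energy as well.

```python
import numpy as np

# Toy precomputed table: detection probability vs. incidence angle (degrees).
theta_bins = np.linspace(0.0, 5.0, 50)
arf_table = np.exp(-theta_bins / 0.8)     # illustrative fall-off, not real data

def arf_weight(direction, detector_normal):
    """Detection weight for a photon arriving at the collimator face,
    replacing explicit photon tracking through the collimator holes."""
    cos_t = abs(float(np.dot(direction, detector_normal)))
    theta = np.degrees(np.arccos(np.clip(cos_t, 0.0, 1.0)))
    return float(np.interp(theta, theta_bins, arf_table, right=0.0))
```

In a simulation loop this weight is accumulated into the projection bin where the photon's path intersects the detection plane, which is what yields the large acceleration factors reported.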

  5. Implementation of angular response function modeling in SPECT simulations with GATE

    Energy Technology Data Exchange (ETDEWEB)

    Descourt, P; Visvikis, D [INSERM, U650, LaTIM, IFR SclnBioS, Universite de Brest, CHU Brest, Brest, F-29200 (France); Carlier, T; Bardies, M [CRCNA INSERM U892, Nantes (France); Du, Y; Song, X; Frey, E C; Tsui, B M W [Department of Radiology, J Hopkins University, Baltimore, MD (United States); Buvat, I, E-mail: dimitris@univ-brest.f [IMNC-UMR 8165 CNRS Universites Paris 7 et Paris 11, Orsay (France)

    2010-05-07

    Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy. (note)

  6. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    Science.gov (United States)

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used for routine clinical practice, and comparing the measured value with the true value (a known density of object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied with resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
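
The core procedure — blur a known object function with the measured PSF, resample at the clinical spacing, then compare an ROI measurement with the known true density — can be illustrated in 1D (the paper works in 3D, and every number below is illustrative):

```python
import numpy as np

def simulate_profile(object_profile, psf, sample_step):
    """Blur a known 1D object function with the scanner PSF, then resample
    at the clinical pixel/slice spacing."""
    blurred = np.convolve(object_profile, psf / psf.sum(), mode="same")
    return blurred[::sample_step]

# A small "nodule" with true density 100 HU above background: the ROI peak
# on the simulated image underestimates the true value for small diameters.
x = np.arange(256)
nodule = np.where(np.abs(x - 128) < 4, 100.0, 0.0)
psf = np.exp(-0.5 * (np.arange(-15, 16) / 3.0) ** 2)
measured_peak = simulate_profile(nodule, psf, sample_step=2).max()
```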

  7. Noise simulation and rejection for the DELPHI Barrel Ring Imaging Cherenkov detector

    International Nuclear Information System (INIS)

    Bloch, D.

    1996-01-01

    The performance of Ring Imaging Cherenkov detectors is severely affected by background noise due to the necessity of detecting single electrons. Furthermore, in the majority of existing RICHs, the charged particles to be identified also cross the sensitive area of the apparatus, creating secondary effects. The different noise sources and the background behaviour have been studied for the DELPHI RICH in order to efficiently clean the Cherenkov rings of background while preserving the majority of the signal. Particular care has been taken to optimize the parameters of the Cherenkov image "cleaning" for the gas and liquid radiators separately. For Z⁰ hadronic decays, 70% background rejection has been achieved while 85% of the signal has been retained. This paper also presents a simulation of the noise-producing mechanisms, in which ionization electrons, δ-rays, feedback electrons created during avalanches, and electronic noise are modeled according to the measured parameters. Good agreement between data and simulation has been achieved. (orig.)

  8. Simulation model for transcervical laryngeal injection providing real-time feedback.

    Science.gov (United States)

    Ainsworth, Tiffiny A; Kobler, James B; Loan, Gregory J; Burns, James A

    2014-12-01

    This study aimed to develop and evaluate a model for teaching transcervical laryngeal injections. A 3-dimensional printer was used to create a laryngotracheal framework based on de-identified computed tomography images of a human larynx. The arytenoid cartilages and intrinsic laryngeal musculature were created in silicone from clay casts and thermoplastic molds. The thyroarytenoid (TA) muscle was created with electrically conductive silicone using metallic filaments embedded in silicone. Wires connected TA muscles to an electrical circuit incorporating a cell phone and speaker. A needle electrode completed the circuit when inserted in the TA during simulated injection, providing real-time feedback of successful needle placement by producing an audible sound. Face validation by the senior author confirmed appropriate tactile feedback and anatomical realism. Otolaryngologists pilot tested the model and completed presimulation and postsimulation questionnaires. The high-fidelity simulation model provided tactile and audio feedback during needle placement, simulating transcervical vocal fold injections. Otolaryngology residents demonstrated higher comfort levels with transcervical thyroarytenoid injection on postsimulation questionnaires. This is the first study to describe a simulator for developing transcervical vocal fold injection skills. The model provides real-time tactile and auditory feedback that aids in skill acquisition. Otolaryngologists reported increased confidence with transcervical injection after using the simulator. © The Author(s) 2014.

  9. Modeling of skin cancer dermatoscopy images

    Science.gov (United States)

    Iralieva, Malica B.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.

    2018-04-01

    A cancer identified early is more likely to respond effectively to treatment, and its treatment is less expensive as well. Dermatoscopy is one of the standard diagnostic techniques for early detection of skin cancer, allowing in vivo evaluation of colors and microstructures of skin lesions. Digital phantoms with known properties are required during the development of new instruments so that a sample's features can be compared with data from the instrument. An algorithm for modeling skin cancer images is proposed in this paper. The steps of the algorithm are: setting the shape, generating the texture, adding the texture, and setting a normal-skin background. A Gaussian represents the shape; texture generation based on a fractal noise algorithm is responsible for the spatial chromophore distribution, while the colormap applied to the values corresponds to the spectral properties. Finally, a normal skin image simulated by a mixed Monte Carlo method using a special online tool is added as the background. Varying the Asymmetry, Borders, Colors, and Diameter settings is shown to match the ABCD clinical recognition algorithm. Asymmetry is specified by setting different standard deviations of the Gaussian in different parts of the image; the noise amplitude is increased to raise the irregular-borders score; the standard deviation determines the size of the lesion; and colors are set by changing the colormap. Simulating further structural elements would be required to match other recognition algorithms.
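
A toy version of the shape and texture steps (Gaussian shape, noise-roughened border, threshold to a lesion mask) might look like this; the paper's fractal-noise texture, colormap, and Monte Carlo skin background are only crudely approximated here by a multi-octave random field:

```python
import numpy as np

rng = np.random.default_rng(3)

def lesion_mask(n=128, sx=18.0, sy=12.0, noise_amp=0.15):
    """Gaussian lesion shape with multi-octave noise roughening the border.
    Different sigmas per quadrant would model Asymmetry; a larger noise_amp
    raises the irregular-Borders score; sx/sy set the Diameter."""
    y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
    shape = np.exp(-(x**2 / (2.0 * sx**2) + y**2 / (2.0 * sy**2)))
    noise = np.zeros((n, n))
    for octave in (4, 8, 16):                     # cheap "fractal" noise
        coarse = rng.standard_normal((octave, octave))
        noise += np.kron(coarse, np.ones((n // octave, n // octave))) / octave
    return (shape + noise_amp * noise) > 0.5
```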

  10. Data of NODDI diffusion metrics in the brain and computer simulation of hybrid diffusion imaging (HYDI) acquisition scheme

    Directory of Open Access Journals (Sweden)

    Chandana Kodiweera

    2016-06-01

    This article provides NODDI diffusion metrics in the brains of 52 healthy participants and computer simulation data to support the compatibility of the hybrid diffusion imaging (HYDI) acquisition scheme [1] with fitting the neurite orientation dispersion and density imaging (NODDI) model [2]. HYDI is an extremely versatile diffusion magnetic resonance imaging (dMRI) technique that enables various analysis methods using a single diffusion dataset. One such analysis method is the NODDI computation, which models brain tissue with three compartments: fast isotropic diffusion (e.g., cerebrospinal fluid), anisotropic hindered diffusion (e.g., extracellular space), and anisotropic restricted diffusion (e.g., intracellular space). The NODDI model produces microstructural metrics in the developing brain, the aging brain, and the brain in neurologic disorders. The first dataset provided here consists of the means and standard deviations of NODDI metrics in 48 white matter regions of interest (ROIs), averaged across the 52 healthy participants. The second dataset is a computer simulation with initial conditions guided by the first dataset as inputs and as the gold standard for model fitting. The simulation data provide a direct comparison of NODDI indices computed from the HYDI acquisition [1] with NODDI indices computed from the originally proposed acquisition [2]. These data are related to the accompanying research article "Age Effects and Sex Differences in Human Brain White Matter of Young to Middle-Aged Adults: A DTI, NODDI, and q-Space Study" [3].
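
For orientation, the three-compartment idea can be written down in a drastically simplified form, ignoring NODDI's Watson orientation dispersion and taking the diffusion gradient along the fiber axis; the diffusivity values are typical literature assumptions (diffusivities in mm²/s, b in s/mm²):

```python
import numpy as np

def three_compartment_signal(b, f_iso, f_ic, d_par=1.7e-3, d_iso=3.0e-3):
    """Simplified NODDI-like signal per unit S0: isotropic CSF + restricted
    intracellular 'stick' + hindered extracellular compartment whose
    diffusivity is reduced by a tortuosity factor."""
    csf = np.exp(-b * d_iso)                    # fast isotropic diffusion
    intra = np.exp(-b * d_par)                  # restricted, along the fiber
    extra = np.exp(-b * d_par * (1.0 - f_ic))   # hindered, tortuosity-scaled
    return f_iso * csf + (1.0 - f_iso) * (f_ic * intra + (1.0 - f_ic) * extra)
```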

  11. Proceedings of the 6. IASTED conference on modelling, simulation, and optimization

    Energy Technology Data Exchange (ETDEWEB)

    Nyongesa, H. [Botswana Univ., Gaborone (Botswana). Dept. of Computer Science] (ed.)

    2006-07-01

    This conference presented a variety of new optimization and simulation tools for use in several scientific fields. Neural network-based simulation tools were presented, as well as new approaches to optimizing artificial intelligence simulation models. Approaches to image compression were discussed. Control strategies and systems analysis methodologies were presented. Other topics included Gaussian mixture models, helical transformation, fault diagnosis, and stochastic dynamics in economic applications. Decision support system models were also discussed, in addition to recursive approaches to virtualization and intelligent designs for the provision of HIV treatments in Africa. The conference was divided into 8 sessions: (1) scientific applications; (2) system design; (3) environmental applications; (4) economic and financial applications; (5) modelling techniques; (6) general methods; (7) special session; and (8) additional papers. The conference featured 56 presentations, of which 5 have been catalogued separately for inclusion in this database. refs., tabs., figs.

  12. Model-based crosstalk compensation for simultaneous 99mTc/123I dual-isotope brain SPECT imaging.

    Science.gov (United States)

    Du, Yong; Tsui, Benjamin M W; Frey, Eric C

    2007-09-01

    In this work, we developed a model-based method to estimate and compensate for the crosstalk contamination in simultaneous 123I and 99mTc dual isotope brain single photon emission computed tomography imaging. The model-based crosstalk compensation (MBCC) includes detailed modeling of photon interactions inside both the object and the detector system. In the method, scatter in the object is modeled using the effective source scatter estimation technique, including contributions from all the photon emissions. The effects of the collimator-detector response, including the penetration and scatter components due to high-energy 123I photons, are modeled using precalculated tables of Monte Carlo simulated point-source response functions obtained from sources in air at various distances from the face of the collimator. The model-based crosstalk estimation method was combined with iterative reconstruction based compensation to reduce contamination due to crosstalk. The MBCC method was evaluated using Monte Carlo simulated and physical phantom experimentally acquired simultaneous dual-isotope data. Results showed that, for both experimental and simulation studies, the model-based method provided crosstalk estimates that were in good agreement with the true crosstalk. Compensation using MBCC improved image contrast and removed the artifacts for both Monte Carlo simulated and experimentally acquired data. The results were in good agreement with images acquired without any crosstalk contamination.

  13. Model-based crosstalk compensation for simultaneous 99mTc/123I dual-isotope brain SPECT imaging.

    Science.gov (United States)

    Du, Yong; Tsui, Benjamin M W; Frey, Eric C

    2007-09-01

    In this work, we developed a model-based method to estimate and compensate for the crosstalk contamination in simultaneous 123I and 99mTc dual isotope brain single photon emission computed tomography imaging. The model-based crosstalk compensation (MBCC) includes detailed modeling of photon interactions inside both the object and the detector system. In the method, scatter in the object is modeled using the effective source scatter estimation technique, including contributions from all the photon emissions. The effects of the collimator-detector response, including the penetration and scatter components due to high-energy 123I photons, are modeled using pre-calculated tables of Monte Carlo simulated point-source response functions obtained from sources in air at various distances from the face of the collimator. The model-based crosstalk estimation method was combined with iterative reconstruction based compensation to reduce contamination due to crosstalk. The MBCC method was evaluated using Monte Carlo simulated and physical phantom experimentally acquired simultaneous dual-isotope data. Results showed that, for both experimental and simulation studies, the model-based method provided crosstalk estimates that were in good agreement with the true crosstalk. Compensation using MBCC improved image contrast and removed the artifacts for both Monte Carlo simulated and experimentally acquired data. The results were in good agreement with images acquired without any crosstalk contamination. © 2007 American Association of Physicists in Medicine.

  14. Performance modeling & simulation of complex systems (A systems engineering design & analysis approach)

    Science.gov (United States)

    Hall, Laverne

    1995-01-01

    Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis affected the MIPS design process by having a supportive and informative impact.

  15. An Image-based Micro-continuum Pore-scale Model for Gas Transport in Organic-rich Shale

    Science.gov (United States)

    Guo, B.; Tchelepi, H.

    2017-12-01

    Gas production from unconventional source rocks, such as ultra-tight shales, has increased significantly over the past decade. However, due to the extremely small pores (~1-100 nm) and the strong material heterogeneity, gas flow in shale is still not well understood and poses challenges for predictive field-scale simulations. In recent years, digital rock analysis has been applied to understand shale gas transport at the pore scale. An issue with rock images (e.g. FIB-SEM, nano-/micro-CT images) is the so-called "cutoff length": pores and heterogeneities below the resolution cannot be resolved, which leads to two length scales (resolved features and unresolved sub-resolution features) that are challenging for flow simulations. Here we develop a micro-continuum model, modified from the classic Darcy-Brinkman-Stokes framework, that can naturally couple the resolved pores and the unresolved nano-porous regions. In the resolved pores, gas flow is modeled with the Stokes equation. In the unresolved regions, where the pore sizes are below the image resolution, we develop an apparent permeability model considering non-Darcy flow at the nanoscale, including slip flow, Knudsen diffusion, adsorption/desorption, surface diffusion, and real gas effects. The end result is a micro-continuum pore-scale model that can simulate gas transport in 3D reconstructed shale images. The model has been implemented in the open-source simulation platform OpenFOAM. In this paper, we present case studies to demonstrate the applicability of the model, where we use 3D segmented FIB-SEM and nano-CT shale images that include four material constituents: organic matter, clay, granular mineral, and pore. In addition to the pore structure and the distribution of the material constituents, we populate the model with experimental measurements (e.g. size distribution of the sub-resolution pores from nitrogen adsorption) and parameters from the literature and identify the relative importance of different
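
As a sketch of the apparent-permeability idea for the sub-resolution regions, here is a Knudsen-number-based correction of Beskok-Karniadakis form with Civan's fitted rarefaction coefficient; the molecular diameter, the temperature, and the fit constants are assumptions for illustration, and the paper's model additionally includes adsorption/desorption, surface diffusion, and real-gas effects:

```python
import math

def apparent_permeability(k_darcy, pore_radius, pressure, temperature=350.0):
    """Apparent permeability of Beskok-Karniadakis form,
    k_app = k_D * (1 + alpha*Kn) * (1 + 4*Kn / (1 + Kn)),
    with a Civan-style fitted rarefaction coefficient alpha(Kn)."""
    kb = 1.380649e-23                      # Boltzmann constant (J/K)
    d_molecule = 0.38e-9                   # ~CH4 kinetic diameter (assumed)
    mfp = kb * temperature / (math.sqrt(2.0) * math.pi * d_molecule**2 * pressure)
    kn = mfp / (2.0 * pore_radius)         # Knudsen number
    alpha = 1.358 / (1.0 + 0.170 * kn ** -0.4348)
    return k_darcy * (1.0 + alpha * kn) * (1.0 + 4.0 * kn / (1.0 + kn))
```

At high pressures the correction is modest, but in nanometer pores at low pressure it can raise the apparent permeability several-fold, which is why Darcy flow alone underpredicts shale gas transport.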

  16. Numerical Simulation of Partially-Coherent Broadband Optical Imaging Using the FDTD Method

    Science.gov (United States)

    Çapoğlu, İlker R.; White, Craig A.; Rogers, Jeremy D.; Subramanian, Hariharan; Taflove, Allen; Backman, Vadim

    2012-01-01

    Rigorous numerical modeling of optical systems has attracted interest in diverse research areas ranging from biophotonics to photolithography. We report the full-vector electromagnetic numerical simulation of a broadband optical imaging system with partially-coherent and unpolarized illumination. The scattering of light from the sample is calculated using the finite-difference time-domain (FDTD) numerical method. Geometrical optics principles are applied to the scattered light to obtain the intensity distribution at the image plane. Multilayered object spaces are also supported by our algorithm. For the first time, numerical FDTD calculations are directly compared to and shown to agree well with broadband experimental microscopy results. PMID:21540939
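
As a minimal illustration of the FDTD core, here is a 1D free-space Yee update at the magic time step (c·Δt = Δx) with a soft Gaussian source; the paper's solver is full-vector 3D and is coupled to a geometrical-optics image-formation step not attempted here:

```python
import numpy as np

def fdtd_1d(steps=400, n=200, src=50):
    """Leapfrog update of interleaved E and H fields on a 1D Yee grid
    (unit update coefficients correspond to a Courant number of 1)."""
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(steps):
        hy[:-1] += ez[1:] - ez[:-1]                   # H update (half step)
        ez[1:] += hy[1:] - hy[:-1]                    # E update (half step)
        ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
    return ez
```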

  17. Adaptable three-dimensional Monte Carlo modeling of imaged blood vessels in skin

    Science.gov (United States)

    Pfefer, T. Joshua; Barton, Jennifer K.; Chan, Eric K.; Ducros, Mathieu G.; Sorg, Brian S.; Milner, Thomas E.; Nelson, J. Stuart; Welch, Ashley J.

    1997-06-01

    In order to reach a higher level of accuracy in simulation of port wine stain treatment, we propose to discard the typical layered geometry and cylindrical blood vessel assumptions made in optical models and use imaging techniques to define actual tissue geometry. Two main additions to the typical 3D, weighted photon, variable step size Monte Carlo routine were necessary to achieve this goal. First, optical low coherence reflectometry (OLCR) images of rat skin were used to specify a 3D material array, with each entry assigned a label to represent the type of tissue in that particular voxel. Second, the Monte Carlo algorithm was altered so that when a photon crosses into a new voxel, the remaining path length is recalculated using the new optical properties, as specified by the material array. The model has shown good agreement with data from the literature. Monte Carlo simulations using OLCR images of asymmetrically curved blood vessels show various effects such as shading, scattering-induced peaks at vessel surfaces, and directionality-induced gradients in energy deposition. In conclusion, this augmentation of the Monte Carlo method can accurately simulate light transport for a wide variety of nonhomogeneous tissue geometries.
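
The key modification described — recalculating the remaining path length whenever the photon enters a voxel with different optical properties — can be sketched in 1D by spending the sampled optical depth voxel by voxel (the actual model is 3D with weighted photons):

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(mu_t_grid, pos=0.0, voxel=1.0):
    """Advance a photon to its next interaction site in a 1D voxel array,
    spending the sampled optical depth across voxels so the geometric step
    adapts to each voxel's total attenuation coefficient mu_t."""
    s = -np.log(rng.random())                 # sampled optical depth
    while True:
        i = int(pos // voxel)
        if i >= len(mu_t_grid):
            return pos                        # photon exits the volume
        to_boundary = (i + 1) * voxel - pos
        if s <= mu_t_grid[i] * to_boundary:
            return pos + s / mu_t_grid[i]     # interaction inside voxel i
        s -= mu_t_grid[i] * to_boundary       # cross into the next voxel
        pos = (i + 1) * voxel

# Sanity check: in a homogeneous medium the mean free path is 1 / mu_t.
mean_depth = float(np.mean([propagate([2.0] * 50) for _ in range(4000)]))
```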

  18. Hybrid simulation using mixed reality for interventional ultrasound imaging training.

    Science.gov (United States)

    Freschi, C; Parrini, S; Dinelli, N; Ferrari, M; Ferrari, V

    2015-07-01

    Ultrasound (US) imaging offers advantages over other imaging modalities and has become the most widespread modality for many diagnostic and interventional procedures. However, traditional 2D US requires a long training period, especially to learn how to manipulate the probe. A hybrid interactive system based on mixed reality was designed, implemented and tested for hand-eye coordination training in diagnostic and interventional US. A hybrid simulator was developed integrating a physical US phantom and a software application with a 3D virtual scene. In this scene, a 3D model of the probe with its relative scan plane is coherently displayed with a 3D representation of the phantom internal structures. An evaluation study of the diagnostic module was performed by recruiting thirty-six novices and four experts. Performance with the hybrid (HG) versus the physical (PG) simulator was compared. After the training session, each novice was required to visualize a particular target structure. The four experts completed a 5-point Likert scale questionnaire. Seventy-eight percent of the HG novices successfully visualized the target structure, whereas only 45% of the PG reached this goal. The mean scores from the questionnaires were 5.00 for usefulness, 4.25 for ease of use, 4.75 for 3D perception, and 3.25 for phantom realism. The hybrid US training simulator provides ease of use and is effective as a hand-eye coordination teaching tool. Mixed reality can improve US probe manipulation training.

  19. Restoration of polarimetric SAR images using simulated annealing

    DEFF Research Database (Denmark)

    Schou, Jesper; Skriver, Henning

    2001-01-01

    Filtering synthetic aperture radar (SAR) images ideally results in better estimates of the parameters characterizing the distributed targets in the images while preserving the structures of the nondistributed targets. However, these objectives are normally conflicting, often leading to a filtering approach favoring one of the objectives. An algorithm for estimating the radar cross-section (RCS) for intensity SAR images has previously been proposed in the literature based on Markov random fields and the stochastic optimization method simulated annealing. A new version of the algorithm is presented…
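
The general scheme — minimizing a Markov-random-field energy (data term plus smoothness prior) by simulated annealing with Metropolis acceptance — can be sketched on a 1D signal. The energy, proposal scale, and cooling schedule here are generic illustrations; the actual algorithm uses speckle statistics appropriate to SAR intensity data and structure-preserving terms.

```python
import math
import random

def anneal_mrf(data, beta=2.0, t0=1.0, cooling=0.995, steps=20000):
    """Simulated annealing on E(x) = sum_i (x_i - y_i)^2
    + beta * sum_i (x_i - x_{i+1})^2 via random single-site moves."""
    x = list(data)
    t = t0
    for _ in range(steps):
        i = random.randrange(len(x))
        cand = x[i] + random.gauss(0.0, 0.1)
        def local(v):                        # energy terms touching site i
            e = (v - data[i]) ** 2
            if i > 0:
                e += beta * (v - x[i - 1]) ** 2
            if i + 1 < len(x):
                e += beta * (v - x[i + 1]) ** 2
            return e
        d_e = local(cand) - local(x[i])
        if d_e < 0 or random.random() < math.exp(-d_e / t):
            x[i] = cand                      # Metropolis acceptance
        t = max(t * cooling, 1e-9)           # geometric cooling schedule
    return x
```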

  20. Microcomputer simulation of nuclear magnetic resonance imaging contrasts

    International Nuclear Information System (INIS)

    Le Bihan, D.

    1985-01-01

    The high information content of magnetic resonance images is due to the multiplicity of their parameters. However, this advantage complicates interpretation of the contrast: an image changes markedly according to the parameters visualised. The author proposes a microcomputer simulation program. After recalling the main intrinsic and extrinsic parameters, he describes how the program works and its value as a pedagogical tool and as an aid for optimising image contrast as a function of the suspected pathology [fr]
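
The parameter interplay being simulated can be reduced to the textbook spin-echo signal equation, where PD, T1, and T2 are the intrinsic tissue parameters and TR and TE the extrinsic sequence parameters; the tissue values below are rough 1.5 T figures used only for illustration:

```python
import math

def spin_echo_signal(pd, t1, t2, tr, te):
    """Classic spin-echo signal model:
    S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

# White matter vs. CSF (approximate values, times in ms) on a T1-weighted
# sequence (short TR/TE): white matter appears brighter than CSF.
wm  = spin_echo_signal(pd=0.7, t1=600.0,  t2=80.0,   tr=500.0, te=15.0)
csf = spin_echo_signal(pd=1.0, t1=4000.0, t2=2000.0, tr=500.0, te=15.0)
```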

  1. SU-E-J-234: Application of a Breathing Motion Model to ViewRay Cine MR Images

    International Nuclear Information System (INIS)

    O’Connell, D. P.; Thomas, D. H.; Dou, T. H.; Lamb, J. M.; Yang, L.; Low, D. A.

    2015-01-01

    Purpose: A respiratory motion model previously used to generate breathing-gated CT images was used with cine MR images. Accuracy and predictive ability of the in-plane models were evaluated. Methods: Sagittal-plane cine MR images of a patient undergoing treatment on a ViewRay MRI/radiotherapy system were acquired before and during treatment. Images were acquired at 4 frames/second with 3.5 × 3.5 mm resolution and a slice thickness of 5 mm. The first cine frame was deformably registered to the following frames. The superior/inferior component of the tumor centroid position was used as a breathing surrogate. Deformation vectors and surrogate measurements were used to determine motion model parameters. Model error was evaluated, and subsequent treatment cines were predicted from breathing surrogate data. A simulated CT cine was created by generating breathing-gated volumetric images at 0.25 second intervals along the measured breathing trace, selecting a sagittal slice, and downsampling to the resolution of the MR cines. A motion model was built using the first half of the simulated cine data. Model accuracy and error in predicting the remaining frames of the cine were evaluated. Results: The mean difference between model-predicted and deformably registered lung tissue positions for the 28 second preview MR cine acquired before treatment was 0.81 ± 0.30 mm. The model was used to predict two minutes of the subsequent treatment cine with a mean accuracy of 1.59 ± 0.63 mm. Conclusion: In-plane motion models were built using MR cine images and evaluated for accuracy and for the ability to predict future respiratory motion from breathing surrogate measurements. Examination of long-term predictive ability is ongoing. The technique was applied to simulated CT cines for further validation, and the authors are currently investigating the use of in-plane models to update pre-existing volumetric motion models used for the generation of breathing-gated CT planning images.
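
The surrogate-driven model can be sketched as a per-element linear least-squares fit of displacement against the breathing surrogate (published 5D-style models also include a surrogate-rate term, omitted here):

```python
import numpy as np

def fit_motion_model(displacements, surrogate):
    """Least-squares fit of d(t) = d0 + a * v(t) per tissue element;
    displacements is (n_frames, n_elements), surrogate is (n_frames,)."""
    A = np.column_stack([np.ones_like(surrogate), surrogate])
    coeffs, *_ = np.linalg.lstsq(A, displacements, rcond=None)
    return coeffs                       # row 0: d0, row 1: a

def predict_displacement(coeffs, v):
    """Predict element displacements from a new surrogate measurement."""
    return coeffs[0] + coeffs[1] * v
```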

  2. Projection model for flame chemiluminescence tomography based on lens imaging

    Science.gov (United States)

    Wan, Minggang; Zhuang, Jihui

    2018-04-01

    For flame chemiluminescence tomography (FCT) based on lens imaging, the projection model is essential because it formulates the mathematical relation between the flame projections captured by cameras and the chemiluminescence field, and, through this relation, the field is reconstructed. This work proposed the blurry-spot (BS) model, which makes more general assumptions and achieves higher accuracy than the widely applied line-of-sight model. By combining the geometrical camera model and the thin-lens equation, the BS model takes into account the perspective effect of the camera lens; by combining a ray-tracing technique and Monte Carlo simulation, it also considers the inhomogeneous distribution of captured radiance on the image plane. The performance of the two models in FCT was numerically compared, and the results showed that the BS model yields better reconstruction quality over a wider range of applications.
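    The defocus blur the BS model accounts for follows from the thin-lens equation. A small illustrative sketch (not the authors' code) of how a point off the focus plane maps to a finite blur spot on the sensor:

```python
import numpy as np

def thin_lens_image_distance(f, d_obj):
    """Thin-lens equation 1/f = 1/d_obj + 1/d_img, solved for the image distance."""
    return 1.0 / (1.0 / f - 1.0 / d_obj)

def blur_spot_diameter(f, d_obj, d_focus, aperture):
    """Diameter of the blur circle on the sensor for a point at distance d_obj,
    when the lens (aperture diameter given) is focused at d_focus.
    Derived from similar triangles between the focused and actual image planes."""
    d_img = thin_lens_image_distance(f, d_obj)      # where this point focuses
    d_sensor = thin_lens_image_distance(f, d_focus) # where the sensor sits
    return aperture * abs(d_sensor - d_img) / d_img
```

A point exactly on the focus plane yields a zero-diameter spot; points in front of or behind it spread over several pixels, which is what a line-of-sight model ignores.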

  3. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model.

    Science.gov (United States)

    Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D

    2010-08-01

    Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models. Copyright 2010 Elsevier B.V. All rights reserved.

  4. Imaging of structures in the high-latitude ionosphere: model comparisons

    Directory of Open Access Journals (Sweden)

    D. W. Idenden

    Full Text Available The tomographic reconstruction technique generates a two-dimensional latitude versus height electron density distribution from sets of slant total electron content (TEC) measurements along ray paths between beacon satellites and ground-based radio receivers. In this note, the technique is applied to TEC values obtained from data simulated by the Sheffield/UCL/SEL Coupled Thermosphere/Ionosphere Model (CTIM). A comparison of the resulting reconstructed image with the 'input' modelled data allows for verification of the reconstruction technique. All the features of the high-latitude ionosphere in the model data are reproduced well in the tomographic image. Reconstructed vertical TEC values follow closely the modelled values, with the F-layer maximum density (NmF2) agreeing generally within about 10%. The method has also successfully reproduced underlying auroral-E ionisation over a restricted latitudinal range in part of the image. The height of the F2 peak is generally in agreement to within about the vertical image resolution (25 km).

    Key words. Ionosphere (modelling and forecasting; polar ionosphere) · Radio Science (instruments and techniques)
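    At its core, tomographic reconstruction from slant TEC solves a linear system in which each measurement is a weighted sum of pixel electron densities along a ray. A minimal algebraic reconstruction (Kaczmarz/ART) sketch, with an illustrative geometry matrix rather than a real satellite-receiver geometry:

```python
import numpy as np

def art_reconstruct(G, tec, n_iter=100, relax=0.5):
    """Kaczmarz (ART) solver for G @ x = tec.

    G:   (M, N) geometry matrix, path length of ray i through pixel j
    tec: (M,) slant TEC measurements
    Returns x: (N,) reconstructed electron density per pixel.
    """
    x = np.zeros(G.shape[1])
    row_norm2 = (G * G).sum(axis=1)
    for _ in range(n_iter):
        for i in range(len(tec)):          # one relaxed projection per ray
            if row_norm2[i] == 0:
                continue
            residual = tec[i] - G[i] @ x
            x += relax * residual / row_norm2[i] * G[i]
    return x
```

Real ionospheric tomography adds regularization and background models because the ray geometry is sparse and ill-conditioned; this sketch only shows the core iteration.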

  5. Extending simulation modeling to activity-based costing for clinical procedures.

    Science.gov (United States)

    Glick, N D; Blackmore, C C; Zelman, W N

    2000-04-01

    A simulation model was developed to measure costs in an Emergency Department setting for patients presenting with possible cervical-spine injury who needed radiological imaging. Simulation, a tool widely used to account for process variability but typically focused on utilization and throughput analysis, is being introduced here as a realistic means to perform an activity-based-costing (ABC) analysis, because traditional ABC methods have difficulty coping with process variation in healthcare. Though the study model has a very specific application, it can be generalized to other settings simply by changing the input parameters. In essence, simulation was found to be an accurate and viable means to conduct an ABC analysis; in fact, the output provides more complete information than could be achieved through other conventional analyses, which gives management more leverage with which to negotiate contractual reimbursements.
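    The core idea, costing each patient by sampling variable activity durations rather than applying fixed averages, can be sketched as follows; the activities, cost rates, and triangular distributions below are hypothetical placeholders, not the study's parameters:

```python
import random

# Hypothetical activities: cost rate ($/min) and triangular duration (lo, mode, hi) in min.
ACTIVITIES = {
    "triage":        (2.0, (3, 5, 10)),
    "c-spine x-ray": (6.5, (10, 15, 30)),
    "physician":     (8.0, (5, 10, 20)),
}

def simulate_visit_cost(rng):
    """Activity-based cost of one visit: rate * sampled duration, summed over activities."""
    return sum(rate * rng.triangular(lo, hi, mode)
               for rate, (lo, mode, hi) in ACTIVITIES.values())

def mean_cost(n_patients, seed=0):
    """Monte Carlo estimate of the expected cost per patient."""
    rng = random.Random(seed)
    return sum(simulate_visit_cost(rng) for _ in range(n_patients)) / n_patients
```

Because durations are sampled per patient, the output is a cost distribution rather than a single figure, which is the extra information the abstract credits simulation-based ABC with providing.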

  6. Intraocular Telescopic System Design: Optical and Visual Simulation in a Human Eye Model

    Directory of Open Access Journals (Sweden)

    Georgios Zoulinakis

    2017-01-01

    Full Text Available Purpose. To design an intraocular telescopic system (ITS) for magnifying the retinal image and to simulate its optical and visual performance after implantation in a human eye model. Methods. Design and simulation were carried out with ray-tracing and optical design software. Two different ITS were designed, and their visual performance was simulated using the Liou-Brennan eye model. The two ITS differed in their lenses' placement in the eye model and in their powers. Ray tracing in both centered and decentered situations was carried out for both ITS, while the visual Strehl ratio (VSOTF) was computed using custom-made MATLAB code. Results. The results show that between 0.4 and 0.8 mm of decentration, the VSOTF changes little for either far or near target distances. The image projection for these decentrations is in the parafoveal zone, and the quality of the projected image is quite similar. Conclusion. Both systems display similar quality while they differ in size; therefore, the choice between them would need to take into account specific parameters of the patient's eye. Quality changes little between 0.4 and 0.8 mm of decentration for either system, which gives the clinician flexibility to adjust decentration to avoid areas of retinal damage.

  7. Aviation Safety Simulation Model

    Science.gov (United States)

    Houser, Scott; Yackovetsky, Robert (Technical Monitor)

    2001-01-01

    The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.
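    The proximity check at the heart of such a model reduces to comparing path altitude against terrain elevation at each waypoint. A minimal sketch (the data structures and the 500 ft minimum are illustrative assumptions, not the tool's actual interface):

```python
def clearance_violations(path, terrain, minimum_ft=500.0):
    """Flag path points whose height above terrain is below the accepted minimum.

    path:    iterable of (x, y, altitude_ft) waypoints
    terrain: mapping (x, y) -> ground elevation_ft
    Returns a list of (x, y, clearance_ft) for each violation.
    """
    violations = []
    for x, y, alt in path:
        clearance = alt - terrain[(x, y)]
        if clearance < minimum_ft:
            violations.append((x, y, clearance))
    return violations
```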

  8. Cognitive models embedded in system simulation models

    International Nuclear Information System (INIS)

    Siegel, A.I.; Wolf, J.J.

    1982-01-01

    If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition; (2) What is a model. Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context

  9. Image simulation and a model of noise power spectra across a range of mammographic beam qualities

    Energy Technology Data Exchange (ETDEWEB)

    Mackenzie, Alistair, E-mail: alistairmackenzie@nhs.net; Dance, David R.; Young, Kenneth C. [National Coordinating Centre for the Physics of Mammography, Royal Surrey County Hospital, Guildford GU2 7XX, United Kingdom and Department of Physics, University of Surrey, Guildford GU2 7XH (United Kingdom); Diaz, Oliver [Centre for Vision, Speech and Signal Processing, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH, United Kingdom and Computer Vision and Robotics Research Institute, University of Girona, Girona 17071 (Spain)

    2014-12-15

    Purpose: The aim of this work is to create a model to predict the noise power spectra (NPS) for a range of mammographic radiographic factors. The noise model was necessary to degrade images acquired on one system to match the image quality of different systems for a range of beam qualities. Methods: Five detectors and x-ray systems [Hologic Selenia (ASEh), Carestream computed radiography CR900 (CRc), GE Essential (CSI), Carestream NIP (NIPc), and Siemens Inspiration (ASEs)] were characterized for this study. The signal transfer property was measured as the pixel value against absorbed energy per unit area (E) at a reference beam quality of 28 kV, Mo/Mo or 29 kV, W/Rh with 45 mm polymethyl methacrylate (PMMA) at the tube head. The contributions of the three noise sources (electronic, quantum, and structure) to the NPS were calculated by fitting a quadratic at each spatial frequency of the NPS against E. A quantum noise correction factor dependent on beam quality was quantified using a set of images acquired over a range of radiographic factors with different thicknesses of PMMA. The noise model was tested for images acquired at 26 kV, Mo/Mo with 20 mm PMMA and 34 kV, Mo/Rh with 70 mm PMMA for three detectors (ASEh, CRc, and CSI) over a range of exposures. The NPS were modeled with and without the noise correction factor and compared with the measured NPS. A previous method for adapting an image to appear as if acquired on a different system was modified to allow the reference beam quality to be different from the beam quality of the image. The method was validated by adapting the ASEh flat-field images with two thicknesses of PMMA (20 and 70 mm) to appear with the imaging characteristics of the CSI and CRc systems. Results: The quantum noise correction factor rises with higher beam qualities, except for CR systems at high spatial frequencies, where a flat response was found against mean photon energy. This is due to the dominance of secondary quantum noise
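    The separation of the three noise sources rests on their different dependence on absorbed energy E: electronic noise is independent of E, quantum noise scales linearly with E, and structure noise scales with E². A sketch of the quadratic fit at each spatial frequency (array shapes are illustrative):

```python
import numpy as np

def fit_nps_components(E, nps):
    """Separate NPS into electronic, quantum, and structure noise by fitting
    NPS(f, E) = e(f) + q(f)*E + s(f)*E**2 at each spatial frequency f.

    E:   (K,) absorbed energy per unit area for K flat-field exposures
    nps: (K, F) measured NPS at F spatial frequencies for each exposure
    Returns (e, q, s), each of shape (F,).
    """
    A = np.column_stack([np.ones_like(E), E, E**2])   # [1, E, E^2] design matrix
    coef, *_ = np.linalg.lstsq(A, nps, rcond=None)    # fit all frequencies at once
    return coef[0], coef[1], coef[2]
```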

  10. 3D Rapid Prototyping for Otolaryngology-Head and Neck Surgery: Applications in Image-Guidance, Surgical Simulation and Patient-Specific Modeling.

    Science.gov (United States)

    Chan, Harley H L; Siewerdsen, Jeffrey H; Vescan, Allan; Daly, Michael J; Prisman, Eitan; Irish, Jonathan C

    2015-01-01

    The aim of this study was to demonstrate the role of advanced fabrication technology across a broad spectrum of head and neck surgical procedures, including applications in endoscopic sinus surgery, skull base surgery, and maxillofacial reconstruction. The initial case studies demonstrated three applications of rapid prototyping technology in head and neck surgery: i) a mono-material paranasal sinus phantom for endoscopy training; ii) a multi-material skull base simulator; and iii) 3D patient-specific mandible templates. Digital processing of these phantoms is based on real patient or cadaveric 3D images such as CT or MRI data. Three endoscopic sinus surgeons examined the realism of the endoscopist training phantom. One experienced endoscopic skull base surgeon conducted advanced sinus procedures on the high-fidelity multi-material skull base simulator. Ten patients participated in a prospective clinical study examining patient-specific modeling for mandibular reconstructive surgery. Qualitative feedback to assess the realism of the endoscopy training phantom and the high-fidelity multi-material phantom was acquired. Conformance comparisons using assessments from the blinded reconstructive surgeons measured the geometric performance between intra-operative and pre-operative reconstruction mandible plates. Both the endoscopy training phantom and the high-fidelity multi-material phantom received positive feedback on the realism of the phantom models' structure. Results suggested that further improvement of the phantom models' soft tissue structure is necessary. In the patient-specific mandible template study, the pre-operative plates were judged by two blinded surgeons as providing optimal conformance in 7 out of 10 cases. No statistical differences were found in plate fabrication time and conformance, with pre-operative plating providing the advantage of reducing time spent in the operating room. The applicability of common model design and fabrication techniques

  11. Radar image enhancement and simulation as an aid to interpretation and training

    Science.gov (United States)

    Frost, V. S.; Stiles, J. A.; Holtzman, J. C.; Dellwig, L. F.; Held, D. N.

    1980-01-01

    Greatly increased activity in the field of radar image applications in the coming years demands that techniques of radar image analysis, enhancement, and simulation be developed now. Since the statistical nature of radar imagery differs from that of photographic imagery, one finds that the required digital image processing algorithms (e.g., for improved viewing and feature extraction) differ from those currently existing. This paper addresses these problems and discusses work at the Remote Sensing Laboratory in image simulation and processing, especially for systems comparable to the formerly operational SEASAT synthetic aperture radar.

  12. Identification of a Common Binding Mode for Imaging Agents to Amyloid Fibrils from Molecular Dynamics Simulations

    DEFF Research Database (Denmark)

    Skeby, Katrine Kirkeby; Sørensen, Jesper; Schiøtt, Birgit

    2013-01-01

    experimentally due to the insoluble nature of amyloid fibrils. This study uses molecular dynamics simulations to investigate the interactions between 13 aromatic amyloid imaging agents, entailing 4 different organic scaffolds, and a model of an amyloid fibril. Clustering analysis combined with free energy...

  13. Normal and Pathological NCAT Image and PhantomData Based onPhysiologically Realistic Left Ventricle Finite-Element Models

    Energy Technology Data Exchange (ETDEWEB)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui,Benjamin M.W.; Gullberg, Grant T.

    2006-08-02

    The 4D NURBS-based Cardiac-Torso (NCAT) phantom, which provides a realistic model of the normal human anatomy and cardiac and respiratory motions, is used in medical imaging research to evaluate and improve imaging devices and techniques, especially dynamic cardiac applications. One limitation of the phantom is that it lacks the ability to accurately simulate altered functions of the heart that result from cardiac pathologies such as coronary artery disease (CAD). The goal of this work was to enhance the 4D NCAT phantom by incorporating a physiologically based, finite-element (FE) mechanical model of the left ventricle (LV) to simulate both normal and abnormal cardiac motions. The geometry of the FE mechanical model was based on gated high-resolution x-ray multi-slice computed tomography (MSCT) data of a healthy male subject. The myocardial wall was represented as transversely isotropic hyperelastic material, with the fiber angle varying from -90 degrees at the epicardial surface, through 0 degrees at the mid-wall, to 90 degrees at the endocardial surface. A time varying elastance model was used to simulate fiber contraction, and physiological intraventricular systolic pressure-time curves were applied to simulate the cardiac motion over the entire cardiac cycle. To demonstrate the ability of the FE mechanical model to accurately simulate the normal cardiac motion as well as abnormal motions indicative of CAD, a normal case and two pathologic cases were simulated and analyzed. In the first pathologic model, a subendocardial anterior ischemic region was defined. A second model was created with a transmural ischemic region defined in the same location. The FE based deformations were incorporated into the 4D NCAT cardiac model through the control points that define the cardiac structures in the phantom, which were set to move according to the predictions of the mechanical model. A simulation study was performed using the FE-NCAT combination to investigate how the differences in contractile function

  14. A multicenter evaluation of seven commercial ML-EM algorithms for SPECT image reconstruction using simulation data

    International Nuclear Information System (INIS)

    Matsumoto, Keiichi; Ohnishi, Hideo; Niida, Hideharu; Nishimura, Yoshihiro; Wada, Yasuhiro; Kida, Tetsuo

    2003-01-01

    The maximum likelihood expectation maximization (ML-EM) algorithm has become available as an alternative to filtered back projection in SPECT. The actual physical performance may differ depending on the manufacturer and model because of differences in computational details. The purpose of this study was to investigate the characteristics of seven different types of ML-EM algorithms using simple simulation data. Seven ML-EM algorithm programs were used: Genie (GE), esoft (Siemens), HARP-III (Hitachi), GMS-5500UI (Toshiba), Pegasys (ADAC), ODYSSEY-FX (Marconi), and Windows-PC (original software). Projection data of a 2-pixel-wide line source in the center of the field of view were simulated without attenuation or scatter. Images were reconstructed with ML-EM by changing the number of iterations from 1 to 45 for each algorithm. Image quality was evaluated after reconstruction using the full width at half maximum (FWHM), full width at tenth maximum (FWTM), and the total counts of the reconstructed images. At the maximum number of iterations, the difference in the FWHM value was up to 1.5 pixels, and that in the FWTM no less than 2.0 pixels. The total counts of the reconstructed images in the initial few iterations were larger or smaller than the converged value depending on the initial values. Our results for the simplest simulation data suggest that each ML-EM implementation produces its own characteristic image. We should keep in mind which algorithm is being used and its computational details when physical and clinical usefulness are compared. (author)
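    The ML-EM update itself is standard; vendor implementations differ precisely in computational details such as initialization, masking, and numerical safeguards. A generic sketch of the update for a system matrix A (not any vendor's implementation):

```python
import numpy as np

def mlem(A, y, n_iter=20):
    """Basic ML-EM reconstruction: x <- x / sens * A.T @ (y / (A @ x)).

    A: (M, N) system matrix mapping image x to projections
    y: (M,) measured projection counts (non-negative)
    Returns a non-negative image estimate x of shape (N,).
    """
    x = np.ones(A.shape[1])                 # choice of initial image matters early on
    sens = A.T @ np.ones(A.shape[0])        # sensitivity (backprojection of ones)
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.where(proj > 0, y / np.maximum(proj, 1e-12), 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The multiplicative form keeps the estimate non-negative; as the abstract observes, early iterations can over- or undershoot the converged total counts depending on the initial image.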

  15. Imaging system models for small-bore DOI-PET scanners

    International Nuclear Information System (INIS)

    Takahashi, Hisashi; Kobayashi, Tetsuya; Yamaya, Taiga; Murayama, Hideo; Kitamura, Keishi; Hasegawa, Tomoyuki; Suga, Mikio

    2006-01-01

    Depth-of-interaction (DOI) information, which improves resolution uniformity in the field of view (FOV), is expected to lead to high-sensitivity PET scanners with small-bore detector rings. We are developing small-bore PET scanners with DOI detectors arranged in hexagonal or overlapped tetragonal patterns for small animal imaging or mammography. It is necessary to optimize the imaging system model because these scanners exhibit irregular detector sampling. In this work, we compared two imaging system models: (a) a parallel sub-LOR model in which the detector response functions (DRFs) are assumed to be uniform along the line of responses (LORs) and (b) a sub-crystal model in which each crystal is divided into a set of smaller volumes. These two models were applied to the overlapped tetragonal scanner (FOV 38.1 mm in diameter) and the hexagonal scanner (FOV 85.2 mm in diameter) simulated by GATE. We showed that the resolution non-uniformity of system model (b) was improved by 40% compared with that of system model (a) in the overlapped tetragonal scanner and that the resolution non-uniformity of system model (a) was improved by 18% compared with that of system model (b) in the hexagonal scanner. These results indicate that system model (b) should be applied to the overlapped tetragonal scanner and system model (a) should be applied to the hexagonal scanner. (author)

  16. WRF-Chem Model Simulations of Arizona Dust Storms

    Science.gov (United States)

    Mohebbi, A.; Chang, H. I.; Hondula, D.

    2017-12-01

    The online Weather Research and Forecasting model with coupled chemistry module (WRF-Chem) is applied to simulate the transport, deposition, and emission of dust aerosols in an intense dust outbreak event that took place on July 5th, 2011 over Arizona. Goddard Chemistry Aerosol Radiation and Transport (GOCART), Air Force Weather Agency (AFWA), and University of Cologne (UoC) parameterization schemes for dust emission were evaluated. The model was found to simulate well the synoptic meteorological conditions, which are also widely documented in previous studies. The chemistry module's performance in reproducing the atmospheric desert dust load was evaluated using the horizontal field of Aerosol Optical Depth (AOD) from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Terra/Aqua satellites, employing the standard Dark Target (DT) and Deep Blue (DB) algorithms, and from the ground-based Aerosol Robotic Network (AERONET). To assess the temporal variability of the dust storm, particulate matter mass concentration data (PM10 and PM2.5) from Arizona Department of Environmental Quality (AZDEQ) ground-based air quality stations were used. The promising performance of WRF-Chem indicates that the model is capable of simulating the right timing and loading of a dust event in the planetary boundary layer (PBL), which can be used to forecast approaching severe dust events and to communicate an effective early warning.

  17. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for execution of agent-based models -from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard

  18. Uncertainty quantification of cinematic imaging for development of predictive simulations of turbulent combustion.

    Energy Technology Data Exchange (ETDEWEB)

    Lawson, Matthew; Debusschere, Bert J.; Najm, Habib N.; Sargsyan, Khachik; Frank, Jonathan H.

    2010-09-01

    Recent advances in high frame rate complementary metal-oxide-semiconductor (CMOS) cameras coupled with high repetition rate lasers have enabled laser-based imaging measurements of the temporal evolution of turbulent reacting flows. This measurement capability provides new opportunities for understanding the dynamics of turbulence-chemistry interactions, which is necessary for developing predictive simulations of turbulent combustion. However, quantitative imaging measurements using high frame rate CMOS cameras require careful characterization of their noise, non-linear response, and variations in this response from pixel to pixel. We develop a noise model and calibration tools to mitigate these problems and to enable quantitative use of CMOS cameras. We have demonstrated proof of principle for image de-noising using both wavelet methods and Bayesian inference. The results offer new approaches for quantitative interpretation of imaging measurements from noisy data acquired with non-linear detectors. These approaches are potentially useful in many areas of scientific research that rely on quantitative imaging measurements.
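    A common way to characterize such detector noise is the photon-transfer method: for a linear detector, the variance of repeated exposures grows linearly with the mean signal, with slope set by the gain and intercept by the read noise. A sketch of that fit (a standard technique, not necessarily the authors' exact model):

```python
import numpy as np

def photon_transfer_fit(means, variances):
    """Fit the linear photon-transfer model  variance = read_var + gain * mean.

    means, variances: (K,) mean signal and temporal variance at K exposure levels,
    for one pixel or pooled over a region.
    Returns (gain, read_var): the slope estimates the system gain and the
    intercept the read-noise variance.
    """
    A = np.column_stack([means, np.ones_like(means)])
    (gain, read_var), *_ = np.linalg.lstsq(A, variances, rcond=None)
    return gain, read_var
```

Repeating the fit pixel by pixel exposes the pixel-to-pixel response variations the abstract highlights; departures from the linear trend flag the non-linear response regime.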

  19. Registration of eye reflection and scene images using an aspherical eye model.

    Science.gov (United States)

    Nakazawa, Atsushi; Nitschke, Christian; Nishida, Toyoaki

    2016-11-01

    This paper introduces an image registration algorithm between an eye reflection and a scene image. Although there are currently a large number of image registration algorithms, this task remains difficult due to nonlinear distortions at the eye surface and large amounts of noise, such as iris texture, eyelids, eyelashes, and their shadows. To overcome this issue, we developed an image registration method combining an aspherical eye model that simulates nonlinear distortions considering eye geometry and a two-step iterative registration strategy that obtains dense correspondence of the feature points to achieve accurate image registrations for the entire image region. We obtained a database of eye reflection and scene images featuring four subjects in indoor and outdoor scenes and compared the registration performance with different asphericity conditions. Results showed that the proposed approach can perform accurate registration with an average accuracy of 1.05 deg by using the aspherical cornea model. This work is relevant for eye image analysis in general, enabling novel applications and scenarios.

  20. A new method to calibrate Lagrangian model with ASAR images for oil slick trajectory.

    Science.gov (United States)

    Tian, Siyu; Huang, Xiaoxia; Li, Hongga

    2017-03-15

    Since Lagrangian model coefficients vary with conditions, it is necessary to calibrate the model to obtain the optimal coefficient combination for a specific oil spill accident. This paper focuses on proposing a new method to calibrate a Lagrangian model with a time series of Envisat ASAR images. Oil slicks extracted from the time series of images form a detected trajectory of the oil slick. The Lagrangian model is calibrated by minimizing the difference between the simulated trajectory and the detected trajectory. Mean center position distance difference (MCPD) and rotation difference (RD) of the oil slicks' or particles' standard deviational ellipses (SDEs) are calculated as two evaluation metrics and used to evaluate the performance of the Lagrangian transport model with different coefficient combinations. This method is applied to the Penglai 19-3 oil spill accident. The simulation result with the calibrated model agrees well with related satellite observations. It is suggested that the new method is effective for calibrating Lagrangian models. Copyright © 2016 Elsevier Ltd. All rights reserved.
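    The two evaluation quantities can be computed from the mean centers and orientations of the standard deviational ellipses of the observed and simulated point sets. A sketch under the usual covariance-based SDE definition (details of the authors' implementation may differ):

```python
import numpy as np

def sde(points):
    """Mean center and major-axis orientation (radians, in [0, pi)) of the
    standard deviational ellipse of a 2D point set."""
    pts = np.asarray(points, float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)
    vals, vecs = np.linalg.eigh(cov)
    major = vecs[:, np.argmax(vals)]            # major-axis direction
    angle = np.arctan2(major[1], major[0]) % np.pi
    return center, angle

def mcpd_rd(observed, simulated):
    """Mean center position distance (MCPD) and rotation difference (RD)
    between the SDEs of observed and simulated slick positions."""
    (c1, a1), (c2, a2) = sde(observed), sde(simulated)
    mcpd = np.linalg.norm(c1 - c2)
    rd = min(abs(a1 - a2), np.pi - abs(a1 - a2))  # axes have a 180-degree ambiguity
    return mcpd, rd
```

Calibration then amounts to searching the coefficient space for the combination minimizing these two differences against the ASAR-detected trajectory.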

  1. Hybrid 3D pregnant woman and fetus modeling from medical imaging for dosimetry studies

    Energy Technology Data Exchange (ETDEWEB)

    Bibin, Lazar; Anquez, Jeremie; Angelini, Elsa; Bloch, Isabelle [Telecom ParisTech, CNRS UMR 5141 LTCI, Institut TELECOM, Paris (France)

    2010-01-15

    Numerical simulations studying the interactions between radiations and biological tissues require the use of three-dimensional models of the human anatomy at various ages and in various positions. Several detailed and flexible models exist for adults and children and have been extensively used for dosimetry. On the other hand, progress of simulation studies focusing on pregnant women and the fetus have been limited by the fact that only a small number of models exist with rather coarse anatomical details and a poor representation of the anatomical variability of the fetus shape and its position over the entire gestation. In this paper, we propose a new computational framework to generate 3D hybrid models of pregnant women, composed of fetus shapes segmented from medical images and a generic maternal body envelope representing a synthetic woman scaled to the dimension of the uterus. The computational framework includes the following tasks: image segmentation, contour regularization, mesh-based surface reconstruction, and model integration. A series of models was created to represent pregnant women at different gestational stages and with the fetus in different positions, all including detailed tissues of the fetus and the utero-fetal unit, which play an important role in dosimetry. These models were anatomically validated by clinical obstetricians and radiologists who verified the accuracy and representativeness of the anatomical details, and the positioning of the fetus inside the maternal body. The computational framework enables the creation of detailed, realistic, and representative fetus models from medical images, directly exploitable for dosimetry simulations. (orig.)

  3. Multiple Constraints Based Robust Matching of Poor-Texture Close-Range Images for Monitoring a Simulated Landslide

    Directory of Open Access Journals (Sweden)

    Gang Qiao

    2016-05-01

    constraints, followed by a refinement step with a similarity constraint and robust checking. A series of temporal Single-Lens Reflex (SLR) and High-Speed Camera (HSC) stereo images captured during the simulated landslide experiment performed on the campus of Tongji University, Shanghai, were employed to illustrate the proposed method, and dense and reliable image matching results were obtained. Finally, a series of temporal Digital Surface Models (DSMs) of the landslide process were constructed using the close-range photogrammetry technique, followed by a discussion of the landslide volume changes and surface elevation changes during the simulation experiment.

  4. Diffuse scattering and image contrast of tweed in superconducting oxides: A simulation and interpretation

    International Nuclear Information System (INIS)

    Zhu, Yimei; Cai, Zhi-Xiong.

    1993-01-01

    Monte Carlo simulations were performed with a lattice gas model which represents the interactions between oxygen atoms in the YBa2(Cu1-xMx)3O7+δ (M = Fe, Co, or Al, 0.03 < x < 0.1) system. The amplitudes of the concentration waves/displacement waves obtained from these simulations were then used to calculate the intensity of the diffuse scattering of tweed seen in the electron diffraction pattern. The characteristic features of the tweed image were produced by calculation, using a model based on the contrast originating from structures with displacive modulation, stacked on top of each other. Both calculations agree well with the TEM observations and provide a useful basis for better insight into the origin of the tweed structure

  5. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    International Nuclear Information System (INIS)

    Michail, C M; Fountos, G P; Kalyvas, N I; Valais, I G; Kandarakis, I S; Karpetas, G E; Martini, Niki; Koukou, Vaia

    2015-01-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated by a layer of silica gel on aluminum (Al) foil substrates, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper)parameter values. MTF values were found to increase up to the 12th iteration and to remain almost constant thereafter. MTF improves with lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations. (paper)
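
    The MTF estimation step can be illustrated in miniature: the MTF is the modulus of the Fourier transform of the system's line spread function (LSF), normalized to unity at zero frequency. This is a generic sketch on synthetic Gaussian profiles, not the STIR/GATE workflow used in the study.

```python
import numpy as np

def mtf_from_lsf(lsf):
    """MTF = |FFT(LSF)|, normalized to 1 at zero spatial frequency."""
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]

# Two Gaussian LSFs: a wider LSF means worse resolution,
# so its MTF must roll off faster with spatial frequency.
x = np.arange(-32, 32)
narrow = np.exp(-x**2 / (2 * 1.0**2))
wide = np.exp(-x**2 / (2 * 4.0**2))
mtf_n = mtf_from_lsf(narrow)
mtf_w = mtf_from_lsf(wide)
```

    In the study the LSF comes from a profile across the reconstructed plane-source image, but the transform-and-normalize step is the same.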

  6. Superresolving Black Hole Images with Full-Closure Sparse Modeling

    Science.gov (United States)

    Crowley, Chelsea; Akiyama, Kazunori; Fish, Vincent

    2018-01-01

    It is believed that almost all galaxies have black holes at their centers. Imaging a black hole is a primary objective to answer scientific questions relating to relativistic accretion and jet formation. The Event Horizon Telescope (EHT) is set to capture images of two nearby black holes: Sagittarius A* at the center of the Milky Way galaxy, roughly 26,000 light years away, and M87, at the heart of Virgo A, a large elliptical galaxy 50 million light years away. Sparse imaging techniques have shown great promise for reconstructing high-fidelity superresolved images of black holes from simulated data. Previous work has included the effects of atmospheric phase errors and thermal noise, but not systematic amplitude errors that arise due to miscalibration. We explore a full-closure imaging technique with sparse modeling that uses closure amplitudes and closure phases to improve the imaging process. This new technique can successfully handle data with systematic amplitude errors. Applying our technique to synthetic EHT data of M87, we find that full-closure sparse modeling can reconstruct images better than traditional methods and recover key structural information on the source, such as the shape and size of the predicted photon ring. These results suggest that our new approach will provide superior imaging performance for data from the EHT and other interferometric arrays.
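
    Closure quantities are the reason such techniques are robust to calibration errors: around any triangle of baselines, the phase of the bispectrum is unchanged by per-station phase offsets, because each station's error enters once with each sign and cancels. A small numerical sketch of that cancellation (illustrative only, not the authors' code):

```python
import numpy as np

def closure_phase(v12, v23, v31):
    """Closure phase (radians) around a triangle of baselines:
    the phase of the bispectrum v12 * v23 * v31."""
    return float(np.angle(v12 * v23 * v31))

# True visibilities on the three baselines, with arbitrary phases.
rng = np.random.default_rng(1)
v12, v23, v31 = (np.exp(1j * p) for p in rng.uniform(-np.pi, np.pi, 3))

# Corrupt each visibility with station-based phase errors e1, e2, e3
# (baseline ij picks up e_i * conj(e_j)).
e = np.exp(1j * rng.uniform(-np.pi, np.pi, 3))
obs12 = v12 * e[0] * np.conj(e[1])
obs23 = v23 * e[1] * np.conj(e[2])
obs31 = v31 * e[2] * np.conj(e[0])
```

    The corrupted and true closure phases agree exactly, which is why fitting closure phases and closure amplitudes sidesteps systematic station miscalibration.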

  7. Image Simulation and Assessment of the Colour and Spatial Capabilities of the Colour and Stereo Surface Imaging System (CaSSIS) on the ExoMars Trace Gas Orbiter

    Science.gov (United States)

    Tornabene, Livio L.; Seelos, Frank P.; Pommerol, Antoine; Thomas, Nicholas; Caudill, C. M.; Becerra, Patricio; Bridges, John C.; Byrne, Shane; Cardinale, Marco; Chojnacki, Matthew; Conway, Susan J.; Cremonese, Gabriele; Dundas, Colin M.; El-Maarry, M. R.; Fernando, Jennifer; Hansen, Candice J.; Hansen, Kayle; Harrison, Tanya N.; Henson, Rachel; Marinangeli, Lucia; McEwen, Alfred S.; Pajola, Maurizio; Sutton, Sarah S.; Wray, James J.

    2018-02-01

    This study aims to assess the spatial and visible/near-infrared (VNIR) colour/spectral capabilities of the 4-band Colour and Stereo Surface Imaging System (CaSSIS) aboard the ExoMars 2016 Trace Gas Orbiter (TGO). The instrument response functions for the CaSSIS imager were used to resample spectral libraries and modelled spectra, and to construct spectrally ( i.e., in I/F space) and spatially consistent simulated CaSSIS image cubes of various key sites of interest for ongoing scientific investigations on Mars. Coordinated datasets from the Mars Reconnaissance Orbiter (MRO) are ideal, and were specifically used for simulating CaSSIS. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) provides colour information, while the Context Imager (CTX), and in a few cases the High-Resolution Imaging Science Experiment (HiRISE), provides the complementary spatial information at the resampled CaSSIS unbinned/unsummed pixel resolution (4.6 m/pixel from a 400-km altitude). The methodology used herein employs a Gram-Schmidt spectral sharpening algorithm to combine the ˜18-36 m/pixel CRISM-derived CaSSIS colours with I/F images primarily derived from oversampled CTX images. One hundred and eighty-one simulated CaSSIS 4-colour image cubes (at 18-36 m/pixel) were generated (including one of Phobos) based on CRISM data. From these, thirty-three "fully" simulated image cubes of thirty unique locations on Mars ( i.e., with 4 colour bands at 4.6 m/pixel) were made. All simulated image cubes were used to test the colour capabilities of CaSSIS by producing standard colour RGB images, colour band ratio composites (CBRCs) and spectral parameters. Simulated CaSSIS CBRCs demonstrated that CaSSIS will be able to readily isolate signatures related to ferrous (Fe2+) iron- and ferric (Fe3+) iron-bearing deposits on the surface of Mars, ices and atmospheric phenomena. Despite the lower spatial resolution of CaSSIS when compared to HiRISE, the results of this work demonstrate that Ca
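
    Resampling a library spectrum through an instrument response function, the first step in building simulated bands like these, amounts to a response-weighted average over wavelength. A toy sketch with a boxcar response and a synthetic linear spectrum (the numbers are hypothetical, not CaSSIS's actual band profiles):

```python
import numpy as np

def band_average(spectrum, response):
    """Response-weighted band-average reflectance for one instrument band.
    Assumes spectrum and response are sampled on the same wavelength grid."""
    w = np.asarray(response, dtype=float)
    return float((w * spectrum).sum() / w.sum())

# Toy spectrum rising linearly from 0 at 400 nm to 1 at 700 nm,
# seen through a hypothetical boxcar band covering 500-600 nm.
wl = np.arange(400, 701, 10.0)
spec = (wl - 400.0) / 300.0
resp = ((wl >= 500) & (wl <= 600)).astype(float)
r = band_average(spec, resp)   # band-average reflectance
```

    Real response functions are smooth curves rather than boxcars, but the weighted-average operation is the same one applied per CaSSIS band before the Gram-Schmidt sharpening step.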

  8. Simulation in Complex Modelling

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin

    2017-01-01

    This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex...... performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples for integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated...... Restraint developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim for this paper is to discuss overarching strategies for working with design integrated simulation....

  9. Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis

    Science.gov (United States)

    Bradley, James R.

    2012-01-01

    This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required for development of that model is then determined; we conclude that six hours of training are required to teach the skills to MBA students. The simulation presented here contains all the fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue, therefore, that industry workers with the same technical skill set as students who have completed one year in an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
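
    The fundamental functionality such a model needs is small. As an illustration of the bookkeeping involved (in Python rather than the paper's Excel, and using the Lindley-style recursion for a single-server FIFO queue rather than the paper's supply-chain model):

```python
def simulate_queue(arrivals, service_times):
    """Single-server FIFO queue: each job starts at the later of its
    arrival time and the moment the server frees up; returns departures."""
    departures = []
    free_at = 0.0
    for arr, svc in zip(arrivals, service_times):
        start = max(arr, free_at)   # wait if the server is busy
        free_at = start + svc
        departures.append(free_at)
    return departures

# Three jobs arriving at t=0, 1, 2, each needing 2 time units of service.
deps = simulate_queue([0.0, 1.0, 2.0], [2.0, 2.0, 2.0])  # [2.0, 4.0, 6.0]
```

    A spreadsheet version is the same recursion written row by row, which is why the skill set transfers so quickly.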

  10. Simulating Visible/Infrared Imager Radiometer Suite Normalized Difference Vegetation Index Data Using Hyperion and MODIS

    Science.gov (United States)

    Ross, Kenton W.; Russell, Jeffrey; Ryan, Robert E.

    2006-01-01

    The success of MODIS (the Moderate Resolution Imaging Spectroradiometer) in creating unprecedented, timely, high-quality data for vegetation and other studies has created great anticipation for data from VIIRS (the Visible/Infrared Imager Radiometer Suite). VIIRS will be carried onboard the NPP (NPOESS Preparatory Project), a joint NASA/Department of Defense/National Oceanic and Atmospheric Administration mission under the National Polar-orbiting Operational Environmental Satellite System (NPOESS). Because the VIIRS instruments will have lower spatial resolution than the current MODIS instruments (400 m versus 250 m at nadir for the channels used to generate Normalized Difference Vegetation Index data), scientists need the answer to this question: how will the change in resolution affect vegetation studies? By using simulated VIIRS measurements, this question may be answered before the VIIRS instruments are deployed in space. Using simulated VIIRS products, the U.S. Department of Agriculture and other operational agencies can then modify their decision support systems appropriately in preparation for receipt of actual VIIRS data. VIIRS simulations and validations will be based on the ART (Application Research Toolbox), an integrated set of algorithms and models developed in MATLAB® that enables users to perform a suite of simulations and statistical trade studies on remote sensing systems. Specifically, the ART provides the capability to generate simulated multispectral image products, at various scales, from high-spatial-resolution hyperspectral and/or multispectral image products. The ART uses acquired (real) or synthetic datasets, along with sensor specifications, to create simulated datasets. For existing multispectral sensor systems, the simulated data products are used for comparison, verification, and validation against the actual system's products. VIIRS simulations will be performed using Hyperion and MODIS datasets.
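
    The core of such a resolution trade study, simulating coarser pixels from finer data and comparing the resulting NDVI, can be sketched as follows. Block-averaging by an integer factor is a deliberate simplification here: the real 250 m to 400 m ratio is not an integer, and the ART uses full sensor models. All values below are synthetic.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance."""
    return (nir - red) / (nir + red)

def downsample(img, f):
    """Block-average an image by an integer factor f (coarser sensor pixels)."""
    h, w = img.shape
    img = img[:h - h % f, :w - w % f]          # crop to a multiple of f
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

# Uniform toy reflectances: red 0.1, NIR 0.5 everywhere.
red = np.full((4, 4), 0.1)
nir = np.full((4, 4), 0.5)
fine = ndvi(red, nir)                                  # 4x4 NDVI map
coarse = ndvi(downsample(red, 2), downsample(nir, 2))  # 2x2 NDVI map
```

    On a uniform scene the two agree, but in general the NDVI of averaged reflectances is not the average of the fine-scale NDVI, which is exactly why the resolution question matters for heterogeneous vegetation.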
The hyperspectral and hyperspatial properties of Hyperion

  11. Optimization of accelerator target and detector for portal imaging using Monte Carlo simulation and experiment

    International Nuclear Information System (INIS)

    Flampouri, S.; Evans, P.M.; Partridge, M.; Nahum, A.E.; Verhaegen, A.E.; Spezi, E.

    2002-01-01

    Megavoltage portal images suffer from poor quality compared to those produced with kilovoltage x-rays. Several authors have shown that image quality can be improved by modifying the linear accelerator to generate more low-energy photons. This work uses Monte Carlo simulation and experiment to optimize the beam and detector combination so as to maximize image quality for a given patient thickness. A simple model of the whole imaging chain was developed to investigate the effect of the target parameters on image quality. The optimum targets (6 mm thick aluminium and 1.6 mm copper) were installed in an Elekta SL25 accelerator; the first beam will be referred to as Al6 and the second as Cu1.6. A tissue-equivalent contrast phantom was imaged with the 6 MV standard photon beam and with the experimental beams, using standard radiotherapy and mammography film/screen systems. The arrangement with a thin Al target and mammography system improved the contrast of 1.4 cm bone in 5 cm water to 19%, compared with 2% for the standard arrangement of a thick, high-Z target and radiotherapy verification system. The linac/phantom/detector system was simulated with the BEAM/EGS4 Monte Carlo code. Contrast calculated from the predicted images was in good agreement with experiment (to within 2.5%). The use of MC techniques to predict images accurately, taking into account the whole imaging system, is a powerful new method for portal imaging system design optimization. (author)

  12. A Monte Carlo Simulation Framework for Testing Cosmological Models

    Directory of Open Access Journals (Sweden)

    Heymann Y.

    2014-10-01

    Full Text Available We tested alternative cosmologies using Monte Carlo simulations based on the sampling method of the zCosmos galactic survey. The survey encompasses a collection of observable galaxies with respective redshifts that have been obtained for a given spectroscopic area of the sky. Using a cosmological model, we can convert the redshifts into light-travel times and, by slicing the survey into small redshift buckets, compute a curve of galactic density over time. Because foreground galaxies obstruct the images of more distant galaxies, we simulated the theoretical galactic density curve using an average galactic radius. By comparing the galactic density curves of the simulations with that of the survey, we could assess the cosmologies. We applied the test to the expanding-universe cosmology of de Sitter and to a dichotomous cosmology.

  13. Simulation and modeling for the stand-off radiation detection system (SORDS) using GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Hoover, Andrew S [Los Alamos National Laboratory; Wallace, Mark [Los Alamos National Laboratory; Galassi, Mark [Los Alamos National Laboratory; Mocko, Michal [Los Alamos National Laboratory; Palmer, David [Los Alamos National Laboratory; Schultz, Larry [Los Alamos National Laboratory; Tornga, Shawn [Los Alamos National Laboratory

    2009-01-01

    A Stand-Off Radiation Detection System (SORDS) is being developed through a joint effort by Raytheon, Los Alamos National Laboratory, Bubble Technology Industries, Radiation Monitoring Devices, and the Massachusetts Institute of Technology, for the Domestic Nuclear Detection Office (DNDO). The system is a mobile truck-based platform performing detection, imaging, and spectroscopic identification of gamma-ray sources. A Tri-Modal Imaging (TMI) approach combines active-mask coded aperture imaging, Compton imaging, and shadow imaging techniques. Monte Carlo simulation and modeling using the GEANT4 toolkit was used to generate realistic data for the development of imaging algorithms and associated software code.

  14. Compton scatter imaging: A promising modality for image guidance in lung stereotactic body radiation therapy.

    Science.gov (United States)

    Redler, Gage; Jones, Kevin C; Templeton, Alistair; Bernard, Damian; Turian, Julius; Chu, James C H

    2018-03-01

    Lung stereotactic body radiation therapy (SBRT) requires delivering large radiation doses with millimeter accuracy, making image guidance essential. An approach to forming images of patient anatomy from Compton-scattered photons during lung SBRT is presented. To investigate the potential of scatter imaging, a pinhole collimator and flat-panel detector are used for spatial localization and detection of photons scattered during external beam therapy using lung SBRT treatment conditions (6 MV FFF beam). MCNP Monte Carlo software is used to develop a model to simulate scatter images. This model is validated by comparing experimental and simulated phantom images. Patient scatter images are then simulated from 4DCT data. Experimental lung tumor phantom images have sufficient contrast-to-noise to visualize the tumor with as few as 10 MU (0.5 s temporal resolution). The relative signal intensity from objects of different composition as well as lung tumor contrast for simulated phantom images agree quantitatively with experimental images, thus validating the Monte Carlo model. Scatter images are shown to display high contrast between different materials (lung, water, bone). Simulated patient images show superior (~double) tumor contrast compared to MV transmission images. Compton scatter imaging is a promising modality for directly imaging patient anatomy during treatment without additional radiation, and it has the potential to complement existing technologies and aid tumor tracking and lung SBRT image guidance. © 2018 American Association of Physicists in Medicine.
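
    The contrast-to-noise figure of merit quoted for the phantom images has a simple standard form: the absolute difference of the mean signals in two regions of interest, divided by the noise (standard deviation) of the background region. A generic sketch on synthetic data, not the authors' analysis code:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |difference of ROI means| / background std."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

# Synthetic ROIs: background at 100 counts, tumor region at 120,
# both with Gaussian noise of standard deviation 5.
rng = np.random.default_rng(2)
bg = rng.normal(100.0, 5.0, 10000)
sig = rng.normal(120.0, 5.0, 10000)
value = cnr(sig, bg)   # expected near (120 - 100) / 5 = 4
```

    In scatter imaging, increasing the monitor units (MU) per frame raises the photon count and hence the CNR, which is the trade-off behind the 10 MU / 0.5 s figure above.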

  15. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance.

    Science.gov (United States)

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-12-01

    Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load between dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis, which facilitate the setup of computational experiments via hybridmantis. The visualization tools visualmantis and webmantis enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users execute different experiments in parallel. The output consists of the point response, pulse-height spectrum, and optical transport statistics generated by hybridmantis. Users can download the output images and statistics as a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. The visualization tools visualmantis and webmantis provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual

  16. A simulation study of high-resolution x-ray computed tomography imaging using irregular sampling with a photon-counting detector

    International Nuclear Information System (INIS)

    Lee, Seungwan; Choi, Yu-Na; Kim, Hee-Joung

    2013-01-01

    The purpose of this study was to improve the spatial resolution of x-ray computed tomography (CT) imaging with a photon-counting detector using an irregular sampling method. A geometric shift model of the detector was proposed to produce the irregular sampling pattern and increase the number of samplings in the radial direction. The conventional micro-x-ray CT system and the novel system with the geometric shift model of the detector were simulated using analytic and Monte Carlo simulations. The projections were reconstructed using filtered back-projection (FBP), algebraic reconstruction technique (ART), and total variation (TV) minimization algorithms, and the reconstructed images were compared in terms of normalized root-mean-square error (NRMSE), full-width at half-maximum (FWHM), and coefficient-of-variation (COV). The results showed that image quality improved in the novel system with the geometric shift model of the detector, and that the NRMSE, FWHM, and COV were lower for the images reconstructed using the TV minimization technique in that system. The irregular sampling method produced by the geometric shift model of the detector can improve the spatial resolution and reduce artifacts and noise in reconstructed images obtained from an x-ray CT system with a photon-counting detector. -- Highlights: • We proposed a novel sampling method based on a spiral pattern to improve the spatial resolution. • The novel sampling method increased the number of samplings in the radial direction. • The spatial resolution was improved by the novel sampling method
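
    The comparison metrics above are standard and easy to state precisely. A minimal sketch of NRMSE (here normalized by the reference dynamic range; other normalizations exist) and a threshold-crossing FWHM estimate, demonstrated on a synthetic Gaussian profile rather than the paper's reconstructions:

```python
import numpy as np

def nrmse(recon, ref):
    """Root-mean-square error normalized by the reference dynamic range."""
    return float(np.sqrt(np.mean((recon - ref) ** 2)) / (ref.max() - ref.min()))

def fwhm(profile, dx=1.0):
    """Full-width at half-maximum of a 1D peak, by counting samples
    at or above half the peak value (a coarse, sampling-limited estimate)."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return float((above[-1] - above[0]) * dx)

# Gaussian peak with sigma = 5 samples; analytic FWHM = 2*sqrt(2 ln 2)*sigma
# ~ 11.8, but unit sampling makes the threshold estimate coarser.
x = np.arange(-50, 51, dtype=float)
gauss = np.exp(-x**2 / (2 * 5.0**2))
```

    In practice one interpolates the half-maximum crossings (or fits a model) to get a sub-sample FWHM; the coarse version is kept here for brevity.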

  17. Overview of IMAGE 2.0. An integrated model of climate change and the global environment

    International Nuclear Information System (INIS)

    Alcamo, J.; Battjes, C.; Van den Born, G.J.; Bouwman, A.F.; De Haan, B.J.; Klein Goldewijk, K.; Klepper, O.; Kreileman, G.J.J.; Krol, M.; Leemans, R.; Van Minnen, J.G.; Olivier, J.G.J.; De Vries, H.J.M.; Toet, A.M.C.; Van den Wijngaart, R.A.; Van der Woerd, H.J.; Zuidema, G.

    1995-01-01

    The IMAGE 2.0 model is a multi-disciplinary, integrated model designed to simulate the dynamics of the global society-biosphere-climate system. This paper focuses on the scientific aspects of the model, while another paper in this volume emphasizes its political aspects. The objectives of IMAGE 2.0 are to investigate linkages and feedbacks in the global system, and to evaluate consequences of climate policies. Dynamic calculations are performed to the year 2100, with a spatial scale ranging from grid cells (0.5°x0.5° latitude-longitude) to world political regions, depending on the sub-model. A total of 13 sub-models make up IMAGE 2.0, organized into three fully linked sub-systems: Energy-Industry, Terrestrial Environment, and Atmosphere-Ocean. The fully linked model has been tested against data from 1970 to 1990, and after calibration it can reproduce the following observed trends: regional energy consumption and energy-related emissions, terrestrial flux of carbon dioxide and emissions of greenhouse gases, concentrations of greenhouse gases in the atmosphere, and transformation of land cover. The model can also simulate current zonal average surface and vertical temperatures. 1 fig., 10 refs

  18. Tracking boundary movement and exterior shape modelling in lung EIT imaging

    International Nuclear Information System (INIS)

    Biguri, A; Soleimani, M; Grychtol, B; Adler, A

    2015-01-01

    Electrical impedance tomography (EIT) has shown significant promise for lung imaging. One key challenge for EIT in this application is the movement of electrodes during breathing, which introduces artefacts in reconstructed images. Various approaches have been proposed to compensate for electrode movement, but no comparison of these approaches is available. This paper analyses boundary model mismatch and electrode movement in lung EIT. The aim is to evaluate the extent to which various algorithms tolerate movement, and to determine if a patient specific model is required for EIT lung imaging. Movement data are simulated from a CT-based model, and image analysis is performed using quantitative figures of merit. The electrode movement is modelled based on expected values of chest movement and an extended Jacobian method is proposed to make use of exterior boundary tracking. Results show that a dynamical boundary tracking is the most robust method against any movement, but is computationally more expensive. Simultaneous electrode movement and conductivity reconstruction algorithms show increased robustness compared to only conductivity reconstruction. The results of this comparative study can help develop a better understanding of the impact of shape model mismatch and electrode movement in lung EIT. (paper)

  19. Tracking boundary movement and exterior shape modelling in lung EIT imaging.

    Science.gov (United States)

    Biguri, A; Grychtol, B; Adler, A; Soleimani, M

    2015-06-01

    Electrical impedance tomography (EIT) has shown significant promise for lung imaging. One key challenge for EIT in this application is the movement of electrodes during breathing, which introduces artefacts in reconstructed images. Various approaches have been proposed to compensate for electrode movement, but no comparison of these approaches is available. This paper analyses boundary model mismatch and electrode movement in lung EIT. The aim is to evaluate the extent to which various algorithms tolerate movement, and to determine if a patient specific model is required for EIT lung imaging. Movement data are simulated from a CT-based model, and image analysis is performed using quantitative figures of merit. The electrode movement is modelled based on expected values of chest movement and an extended Jacobian method is proposed to make use of exterior boundary tracking. Results show that a dynamical boundary tracking is the most robust method against any movement, but is computationally more expensive. Simultaneous electrode movement and conductivity reconstruction algorithms show increased robustness compared to only conductivity reconstruction. The results of this comparative study can help develop a better understanding of the impact of shape model mismatch and electrode movement in lung EIT.

  20. Constructing a Computer Model of the Human Eye Based on Tissue Slice Images

    OpenAIRE

    Dai, Peishan; Wang, Boliang; Bao, Chunbo; Ju, Ying

    2010-01-01

    Computer simulation of biomechanics and biological heat transfer in ophthalmology relies greatly on having a reliable computer model of the human eye. This paper proposes a novel method for the construction of a geometric model of the human eye based on tissue slice images. Slice images were obtained from an in vitro Chinese human eye through embryo specimen processing methods. A level set algorithm was used to extract contour points of eye tissues while a principal component analysi...

  1. Hybrid model based unified scheme for endoscopic Cerenkov and radio-luminescence tomography: Simulation demonstration

    Science.gov (United States)

    Wang, Lin; Cao, Xin; Ren, Qingyun; Chen, Xueli; He, Xiaowei

    2018-05-01

    Cerenkov luminescence imaging (CLI) is an imaging method that uses an optical imaging scheme to probe a radioactive tracer. Application of CLI with clinically approved radioactive tracers has opened an opportunity for translating optical imaging from preclinical to clinical applications. Such translation was further improved by the development of an endoscopic CLI system. However, two-dimensional endoscopic imaging cannot identify accurate depth or obtain quantitative information. Here, we present an imaging scheme to retrieve depth and quantitative information from endoscopic Cerenkov luminescence tomography, which can also be applied to endoscopic radio-luminescence tomography. In the scheme, we first constructed a physical model for image collection, and then a mathematical model for characterizing the propagation of luminescent light from the tracer to the endoscopic detector. The mathematical model is a hybrid light transport model combining the third-order simplified spherical harmonics approximation, diffusion, and radiosity equations to ensure both accuracy and speed. The mathematical model integrates finite element discretization, regularization, and primal-dual interior-point optimization to retrieve the depth and quantitative information of the tracer. A heterogeneous-geometry-based numerical simulation was used to explore the feasibility of the unified scheme, and demonstrated that it can provide a satisfactory balance between imaging accuracy and computational burden.

  2. Time domain SAR raw data simulation using CST and image focusing of 3D objects

    Science.gov (United States)

    Saeed, Adnan; Hellwich, Olaf

    2017-10-01

    This paper presents the use of a general-purpose electromagnetic simulator, CST, to simulate realistic synthetic aperture radar (SAR) raw data of three-dimensional objects. The raw data are later focused in MATLAB using the range-Doppler algorithm. Within CST Microwave Studio, a replica of the TerraSAR-X chirp signal is made incident upon a modelled corner reflector (CR) whose design and material properties are identical to those of the real one. With the mesh and other settings appropriately defined, the reflected wave is measured at several distant points along a line parallel to the viewing direction. This is analogous to an array antenna and is synthesized to create a long aperture for SAR processing. The time domain solver in CST is based on the solution of the differential form of Maxwell's equations. Data exported from CST are arranged into a 2D matrix with range and azimuth axes. A Hilbert transform is applied to convert the real signal into complex data with phase information. Range compression, range cell migration correction (RCMC), and azimuth compression are applied in the time domain to obtain the final SAR image. This simulation can provide valuable information to clarify which real-world objects produce images suitable for high-accuracy identification in SAR images.
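
    The range-compression step of the range-Doppler algorithm is a matched filter: the echo is correlated with the transmitted chirp, collapsing each target's long chirp return into a sharp peak at its delay. A minimal FFT-based sketch with a synthetic linear-FM chirp and one point target (illustrative parameters, not TerraSAR-X's actual chirp):

```python
import numpy as np

def range_compress(echo, chirp):
    """Matched-filter range compression: cross-correlate echo with the
    chirp via FFTs (multiply by the conjugate chirp spectrum)."""
    n = len(echo) + len(chirp) - 1
    E = np.fft.fft(echo, n)
    C = np.fft.fft(chirp, n)
    return np.fft.ifft(E * np.conj(C))

# Linear-FM chirp and a single point target delayed by 40 samples.
t = np.arange(128)
chirp = np.exp(1j * np.pi * 0.002 * t**2)
echo = np.zeros(300, dtype=complex)
echo[40:40 + 128] = chirp
compressed = np.abs(range_compress(echo, chirp))
peak = int(np.argmax(compressed))   # expected at the 40-sample delay
```

    Azimuth compression applies the same matched-filter idea along the synthetic aperture, after RCMC has aligned each target's range history.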

  3. Image contrast enhancement based on a local standard deviation model

    International Nuclear Information System (INIS)

    Chang, Dah-Chung; Wu, Wen-Rong

    1996-01-01

    The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method, which needs a contrast gain to adjust the high-frequency components of an image. In the literature, the gain is usually either inversely proportional to the local standard deviation (LSD) or a constant. Both choices cause problems in practical applications: noise overenhancement and ringing artifacts, respectively. In this paper a new gain is developed, based on Hunt's Gaussian image model, to prevent these two defects. The new gain is a nonlinear function of the LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images and the simulations show the effectiveness of the proposed algorithm.
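
    The structure of an ACE algorithm can be sketched in a few lines: each pixel is rewritten as the local mean plus a gain times its deviation from that mean, with the gain a function of the local standard deviation. The capped inverse gain below is a generic skeleton with hypothetical parameters, not the specific nonlinear gain derived from Hunt's model in the paper; it simply shows where such a gain function plugs in.

```python
import numpy as np

def ace(image, win=3, target=30.0, max_gain=3.0):
    """Adaptive contrast enhancement with an LSD-dependent gain.
    gain = min(target / LSD, max_gain): a plain inverse-LSD gain would
    blow up in flat (noise-only) regions, so it is capped."""
    h, w = image.shape
    out = image.astype(float).copy()
    r = win // 2
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = image[i - r:i + r + 1, j - r:j + r + 1].astype(float)
            m, sd = patch.mean(), patch.std()
            gain = min(target / sd, max_gain) if sd > 0 else 1.0
            out[i, j] = m + gain * (image[i, j] - m)
    return out

# Low-contrast vertical edge: enhancement deepens the dark side locally
# while perfectly flat regions (LSD = 0) are left untouched.
img = np.zeros((7, 7))
img[:, 4:] = 10.0
res = ace(img)
```

    Replacing the capped inverse with a smooth nonlinear function of the LSD, as the paper proposes, keeps the detail-emphasizing behavior while avoiding both the noise and ringing failure modes.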

  4. Digital design and fabrication of simulation model for measuring orthodontic force.

    Science.gov (United States)

    Liu, Yun-Feng; Zhang, Peng-Yuan; Zhang, Qiao-Fang; Zhang, Jian-Xing; Chen, Jie

    2014-01-01

    Three-dimensional (3D) forces are the key factors determining the movement of teeth during orthodontic treatment. Designing precise forces and torques on a tooth before treatment can result in accurate tooth movements, but this is difficult to realize. In orthodontic biomechanical systems, the periodontal tissues, including bones, teeth, and periodontal ligaments (PDL), are affected by braces, and measuring the forces applied to the teeth by braces should be based on a simulated model composed of these three types of tissues. This study explores the design and fabrication of a simulated oral model for 3D orthodontic force measurements. Based on medical image processing, tissue reconstruction, 3D printing, and PDL simulation and testing, a model for measuring force was designed and fabricated, which can potentially be used for force prediction, design of treatment plans, and precise clinical operation. The experiment showed that bi-component silicones with a 2:8 ratio had mechanical properties similar to the PDL, and that with a positioning guide the teeth were assembled accurately in the mandible sockets; a customized oral model for 3D orthodontic force measurement was thus created.

  5. A synthetic study on constraining a 2D density-dependent saltwater intrusion model using electrical imaging data

    DEFF Research Database (Denmark)

    Antonsson, Arni Valur; Nguyen, Frederic; Engesgaard, Peter Knudegaard

    of the synthetic model, basically a salinity distribution in the coastal aquifer, was converted to resistivity distribution by assuming a certain petrophysical relation between water salinity and electrical conductivity. The obtained resistivity distribution was then used when electrical data acquisition...... was simulated. By applying an advanced inversion approach, electrical images of resistivity were obtained and based on the assumed petrophysical model the salinity distribution was derived. A number of different intrusion simulations were conducted with the aim of assessing the applicability of the method under....... Compared to conventional methods, which only give (few) point information, electrical images can give data over large spatial distances but that can be of great value for groundwater modeling purposes. The aim of this study is to investigate in a synthetic way, the applicability of using electrical images...

  6. ANALYSIS OF SPECTRAL CHARACTERISTICS AMONG DIFFERENT SENSORS BY USE OF SIMULATED RS IMAGES

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Using an RS image-simulation method, this research simulated apparent-reflectance images at sensor level and ground-reflectance images for the corresponding bands of SPOT-HRV, CBERS-CCD, Landsat-TM and NOAA14-AVHRR. These images were used to analyze inter-sensor differences caused by spectral sensitivity and atmospheric effects. The differences were analyzed in terms of the Normalized Difference Vegetation Index (NDVI). The results showed that differences in the sensors' spectral characteristics cause changes in their NDVI and reflectance values; when data from multiple sensors are used in digital analysis, this error should be taken into account. Atmospheric effects make NDVI smaller, and atmospheric correction tends to increase NDVI values. The reflectances and NDVIs of the different sensors can be used to analyze the differences among sensor features. The spectral analysis method based on simulated RS images can provide a new way to design the spectral characteristics of new sensors.
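    The NDVI comparison above is simple band arithmetic; a minimal sketch (all reflectance values are hypothetical) shows the index and the direction of the atmospheric effect described in the abstract:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Atmospheric scattering typically raises red reflectance more than NIR,
# so the uncorrected (apparent) NDVI is lower than the ground NDVI.
ground = ndvi(nir=0.40, red=0.05)
apparent = ndvi(nir=0.42, red=0.10)
assert apparent < ground
```

    The same function applied to whole band images (2D arrays) gives a per-pixel NDVI map, which is how inter-sensor NDVI differences would be computed in practice.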

  7. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  8. Software for simulation of a computed tomography imaging spectrometer using optical design software

    Science.gov (United States)

    Spuhler, Peter T.; Willer, Mark R.; Volin, Curtis E.; Descour, Michael R.; Dereniak, Eustace L.

    2000-11-01

    Our Imaging Spectrometer Simulation Software known under the name Eikon should improve and speed up the design of a Computed Tomography Imaging Spectrometer (CTIS). Eikon uses existing raytracing software to simulate a virtual instrument. Eikon enables designers to virtually run through the design, calibration and data acquisition, saving significant cost and time when designing an instrument. We anticipate that Eikon simulations will improve future designs of CTIS by allowing engineers to explore more instrument options.

  9. Simulations of Aperture Synthesis Imaging Radar for the EISCAT_3D Project

    Science.gov (United States)

    La Hoz, C.; Belyey, V.

    2012-12-01

    EISCAT_3D is a project to build the next generation of incoherent scatter radars endowed with multiple 3-dimensional capabilities that will replace the current EISCAT radars in Northern Scandinavia. Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in three dimensions that include sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Naturally Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. To demonstrate the feasibility of the antenna configurations and the imaging inversion algorithms, a simulation of synthetic incoherent scattering data has been performed. The simulation algorithm incorporates the ability to control the background plasma parameters with non-homogeneous, non-stationary components over an extended 3-dimensional space. Control over the positions of a number of separated receiving antennas, their signal-to-noise ratios and arriving phases allows realistic simulation of a multi-baseline interferometric imaging radar system. The resulting simulated data are fed into various inversion algorithms. This simulation package is a powerful tool to evaluate various antenna configurations and inversion algorithms. Results applied to realistic design alternatives of EISCAT_3D will be described.
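    The multi-baseline interferometric idea can be sketched numerically: sample complex visibilities of a toy sky at a set of antenna-pair baselines, then back-project them into a "dirty image". The antenna layout, source positions and fluxes below are all hypothetical illustrations, not EISCAT_3D design parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sky: two point sources, (l, m) direction cosines and fluxes.
sources = [((0.00, 0.00), 1.0), ((0.02, -0.01), 0.5)]

# Random antenna positions (in wavelengths) -> all pairwise baselines.
ants = rng.uniform(-50, 50, size=(8, 2))
baselines = np.array([ants[i] - ants[j]
                      for i in range(len(ants)) for j in range(i + 1, len(ants))])

def visibility(uv, sources):
    """Complex visibility of the point-source sky at baseline (u, v)."""
    u, v = uv
    return sum(flux * np.exp(-2j * np.pi * (u * l + v * m))
               for (l, m), flux in sources)

vis = np.array([visibility(uv, sources) for uv in baselines])

# Dirty image: direct inverse Fourier sum of the sampled visibilities.
grid = np.linspace(-0.05, 0.05, 64)
L, M = np.meshgrid(grid, grid)
dirty = np.zeros_like(L)
for (u, v), V in zip(baselines, vis):
    dirty += (V * np.exp(2j * np.pi * (u * L + v * M))).real
dirty /= len(baselines)
```

    The dirty image peaks near the true source positions; a real inversion algorithm (e.g. a deconvolution step) would then remove the sidelobe pattern imposed by the sparse baseline sampling.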

  10. Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: Creation of an oncology database

    International Nuclear Information System (INIS)

    Papadimitroulas, Panagiotis; Efthimiou, Nikos; Nikiforidis, George C.; Kagadis, George C.; Loudos, George; Le Maitre, Amandine; Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris

    2013-01-01

    Purpose: The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating heterogeneous tumor activity distributions. The reconstructed patient images include noise from the acquisition process, reflect the imaging system's performance restrictions, and have limited spatial resolution. For these reasons, the measured intensity cannot simply be introduced into GATE simulations to reproduce clinical data. The heterogeneity distribution within tumors was investigated by applying partial volume correction (PVC) algorithms. The purpose of the present study was to create a simulated oncology database based on clinical data with realistic intratumor uptake heterogeneity properties. Methods: PET/CT data of seven oncology patients were used to create a realistic tumor database investigating the heterogeneous activity distribution of the simulated tumors. The anthropomorphic models (NURBS-based cardiac torso and Zubal phantoms) were adapted to the CT data of each patient, and the activity distribution was extracted from the respective PET data. The patient-specific models were simulated with the Geant4 Application for Tomographic Emission (GATE) at three different levels for each case: (a) using homogeneous activity within the tumor, (b) using a heterogeneous activity distribution in every voxel within the tumor as extracted from the PET image, and (c) using a heterogeneous activity distribution corresponding to the clinical image following PVC. The three different types of simulated data in each case were reconstructed with two iterations and filtered with a 3D Gaussian postfilter, in order to simulate the intratumor heterogeneous uptake. Heterogeneity in all generated images was quantified using textural-feature-derived parameters in 3D according to the ground truth of the simulation, and compared to clinical measurements. Finally, profiles were plotted in central slices of the tumors, across lines with

  11. Image simulation and assessment of the colour and spatial capabilities of the Colour and Stereo Surface Imaging System (CaSSIS) on the ExoMars Trace Gas Orbiter

    Science.gov (United States)

    Tornabene, Livio L.; Seelos, Frank P.; Pommerol, Antoine; Thomas, Nicolas; Caudill, Christy M.; Becerra, Patricio; Bridges, John C.; Byrne, Shane; Cardinale, Marco; Chojnacki, Matthew; Conway, Susan J.; Cremonese, Gabriele; Dundas, Colin M.; El-Maarry, M. R.; Fernando, Jennifer; Hansen, Candice J.; Hansen, Kayle; Harrison, Tanya N.; Henson, Rachel; Marinangeli, Lucia; McEwen, Alfred S.; Pajola, Maurizio; Sutton, Sarah S.; Wray, James J.

    2018-01-01

    This study aims to assess the spatial and visible/near-infrared (VNIR) colour/spectral capabilities of the 4-band Colour and Stereo Surface Imaging System (CaSSIS) aboard the ExoMars 2016 Trace Gas Orbiter (TGO). The instrument response functions of the CaSSIS imager were used to resample spectral libraries and modelled spectra, and to construct spectrally (i.e., in I/F space) and spatially consistent simulated CaSSIS image cubes of various key sites of interest for ongoing scientific investigations on Mars. Coordinated datasets from the Mars Reconnaissance Orbiter (MRO) are ideal for this purpose and were specifically used for simulating CaSSIS. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) provides colour information, while the Context Camera (CTX), and in a few cases the High-Resolution Imaging Science Experiment (HiRISE), provides the complementary spatial information at the resampled CaSSIS unbinned/unsummed pixel resolution (4.6 m/pixel from a 400-km altitude). The methodology employs a Gram-Schmidt spectral sharpening algorithm to combine the ∼18–36 m/pixel CRISM-derived CaSSIS colours with I/F images primarily derived from oversampled CTX images. One hundred and eighty-one simulated CaSSIS 4-colour image cubes (at 18–36 m/pixel) were generated (including one of Phobos) based on CRISM data. From these, thirty-three "fully" simulated image cubes of thirty unique locations on Mars (i.e., with 4 colour bands at 4.6 m/pixel) were made. All simulated image cubes were used to test the colour capabilities of CaSSIS by producing standard colour RGB images, colour band ratio composites (CBRCs) and spectral parameters. Simulated CaSSIS CBRCs demonstrated that CaSSIS will be able to readily isolate signatures related to ferrous (Fe2+) and ferric (Fe3+) iron-bearing deposits on the surface of Mars, ices and atmospheric phenomena. Despite the lower spatial resolution of CaSSIS when compared to HiRISE, the results of this work demonstrate that
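    The colour band ratio composites (CBRCs) mentioned above are per-pixel ratios of two bands of the image cube. A minimal sketch (the band order and all pixel values below are hypothetical, not actual CaSSIS data):

```python
import numpy as np

def band_ratio_composite(cube, num, den, eps=1e-12):
    """Per-pixel ratio of two bands of an image cube shaped (bands, rows, cols)."""
    cube = np.asarray(cube, dtype=float)
    return cube[num] / (cube[den] + eps)

# Toy 4-band, 2x2 I/F cube; the band order (BLU, PAN, RED, NIR) is an
# assumption made for illustration.
cube = np.array([
    [[0.10, 0.10], [0.10, 0.10]],   # BLU
    [[0.20, 0.25], [0.20, 0.20]],   # PAN
    [[0.30, 0.45], [0.25, 0.30]],   # RED
    [[0.35, 0.50], [0.30, 0.35]],   # NIR
])

# Reddened (e.g. ferric-iron-bearing) pixels stand out as high RED/BLU values.
red_over_blu = band_ratio_composite(cube, num=2, den=0)
```

    Several such ratios can be assigned to the R, G and B channels of a display image to build a composite that separates spectrally distinct surface units.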

  12. Using Dynamic Contrast-Enhanced Magnetic Resonance Imaging Data to Constrain a Positron Emission Tomography Kinetic Model: Theory and Simulations

    Directory of Open Access Journals (Sweden)

    Jacob U. Fluckiger

    2013-01-01

    We show how dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data can constrain a compartmental model for analyzing dynamic positron emission tomography (PET) data. We first develop the theory that enables the use of DCE-MRI data to separate whole-tissue time activity curves (TACs), available from dynamic PET data, into individual TACs associated with the blood space, the extravascular-extracellular space (EES), and the extravascular-intracellular space (EIS). We then simulate whole-tissue TACs over a range of physiologically relevant kinetic parameter values and show that appropriate DCE-MRI data can separate the PET TAC into the three components with an accuracy that is noise dependent. The simulations show that accurate blood, EES, and EIS TACs can be obtained, as evidenced by concordance correlation coefficients >0.9 between the true and estimated TACs. Additionally, provided that the estimated DCE-MRI parameters are within 10% of their true values, the errors in the PET kinetic parameters are within approximately 20% of their true values. The parameters returned by this approach may provide new information on the transport of a tracer in a variety of dynamic PET studies.
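    The forward model behind this separation is linear: the whole-tissue TAC is a volume-fraction-weighted sum of the blood, EES and EIS activity curves. A toy, noise-free sketch (all kinetic shapes and fractions below are hypothetical, not the paper's simulation settings) composes such a TAC and recovers the fractions by least squares as a consistency check:

```python
import numpy as np

t = np.linspace(0, 60, 121)                       # minutes

# Hypothetical component activity curves.
C_blood = 5.0 * t * np.exp(-t / 4.0)              # bolus-like blood input
C_ees = 1.0 - np.exp(-t / 10.0)                   # extravascular-extracellular
C_eis = 0.6 * (1.0 - np.exp(-t / 25.0))           # extravascular-intracellular

# Volume fractions (vb, ve, vi), as DCE-MRI would provide.
v = np.array([0.05, 0.35, 0.60])
whole = v[0] * C_blood + v[1] * C_ees + v[2] * C_eis

# Given the component curves, the fractions are recoverable by linear
# least squares on the whole-tissue TAC.
A = np.stack([C_blood, C_ees, C_eis], axis=1)
v_hat, *_ = np.linalg.lstsq(A, whole, rcond=None)
```

    In the paper's actual setting the problem runs the other way (DCE-MRI supplies the fractions, and the component TACs are estimated from the PET data under a compartment model, with noise), but the linear mixing structure is the same.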

  13. Optimisation of the imaging and dosimetric characteristics of an electronic portal imaging device employing plastic scintillating fibres using Monte Carlo simulations.

    Science.gov (United States)

    Blake, S J; McNamara, A L; Vial, P; Holloway, L; Kuncic, Z

    2014-11-21

    A Monte Carlo model of a novel electronic portal imaging device (EPID) has been developed using Geant4 and its performance for imaging and dosimetry applications in radiotherapy has been characterised. The EPID geometry is based on a physical prototype under ongoing investigation and comprises an array of plastic scintillating fibres in place of the metal plate/phosphor screen in standard EPIDs. Geometrical and optical transport parameters were varied to investigate their impact on imaging and dosimetry performance. Detection efficiency was most sensitive to variations in fibre length, achieving a peak value of 36% at 50 mm using 400 keV x-rays for the lengths considered. Increases in efficiency for longer fibres were partially offset by reductions in sensitivity. Removing the extra-mural absorber surrounding individual fibres severely decreased the modulation transfer function (MTF), highlighting its importance in maximising spatial resolution. Field size response and relative dose profile simulations demonstrated a water-equivalent dose response and thus the prototype's suitability for dosimetry applications. Element-to-element mismatch between scintillating fibres and underlying photodiode pixels resulted in a reduced MTF for high spatial frequencies and quasi-periodic variations in dose profile response. This effect is eliminated when fibres are precisely matched to underlying pixels. Simulations strongly suggest that with further optimisation, this prototype EPID may be capable of simultaneous imaging and dosimetry in radiotherapy.

  14. Optimisation of the imaging and dosimetric characteristics of an electronic portal imaging device employing plastic scintillating fibres using Monte Carlo simulations

    Science.gov (United States)

    Blake, S. J.; McNamara, A. L.; Vial, P.; Holloway, L.; Kuncic, Z.

    2014-11-01

    A Monte Carlo model of a novel electronic portal imaging device (EPID) has been developed using Geant4 and its performance for imaging and dosimetry applications in radiotherapy has been characterised. The EPID geometry is based on a physical prototype under ongoing investigation and comprises an array of plastic scintillating fibres in place of the metal plate/phosphor screen in standard EPIDs. Geometrical and optical transport parameters were varied to investigate their impact on imaging and dosimetry performance. Detection efficiency was most sensitive to variations in fibre length, achieving a peak value of 36% at 50 mm using 400 keV x-rays for the lengths considered. Increases in efficiency for longer fibres were partially offset by reductions in sensitivity. Removing the extra-mural absorber surrounding individual fibres severely decreased the modulation transfer function (MTF), highlighting its importance in maximising spatial resolution. Field size response and relative dose profile simulations demonstrated a water-equivalent dose response and thus the prototype’s suitability for dosimetry applications. Element-to-element mismatch between scintillating fibres and underlying photodiode pixels resulted in a reduced MTF for high spatial frequencies and quasi-periodic variations in dose profile response. This effect is eliminated when fibres are precisely matched to underlying pixels. Simulations strongly suggest that with further optimisation, this prototype EPID may be capable of simultaneous imaging and dosimetry in radiotherapy.

  15. Comprehensive fluence model for absolute portal dose image prediction

    International Nuclear Information System (INIS)

    Chytyk, K.; McCurdy, B. M. C.

    2009-01-01

    Amorphous silicon (a-Si) electronic portal imaging devices (EPIDs) continue to be investigated as treatment verification tools, with a particular focus on intensity modulated radiation therapy (IMRT). This verification could be accomplished through a comparison of measured portal images to predicted portal dose images. A general fluence determination tailored to portal dose image prediction would be a great asset in modeling the complex modulation of IMRT. A proposed physics-based, parameterized fluence model was commissioned by matching predicted EPID images to corresponding measured EPID images of multileaf collimator (MLC) defined fields. The two-source fluence model was composed of a focal Gaussian and an extrafocal Gaussian-like source. Specific aspects of the MLC and secondary collimators were also modeled (e.g., jaw and MLC transmission factors, MLC rounded leaf tips, tongue-and-groove effect, interleaf leakage, and leaf offsets). Several unique aspects of the model were developed based on the results of detailed Monte Carlo simulations of the linear accelerator, including (1) use of a non-Gaussian extrafocal fluence source function, (2) separate energy spectra for focal and extrafocal fluence, and (3) different off-axis energy spectrum softening for focal and extrafocal fluences. The predicted energy fluence was then convolved with Monte Carlo generated, EPID-specific dose kernels to convert incident fluence to dose delivered to the EPID. Measured EPID data were obtained with an a-Si EPID for various MLC-defined fields (from 1×1 to 20×20 cm²) over a range of source-to-detector distances. These measured profiles were used to determine the fluence model parameters in a process analogous to the commissioning of a treatment planning system. The resulting model was tested on 20 clinical IMRT plans, including ten prostate and ten oropharyngeal cases. 
The model predicted the open-field profiles within 2%, 2 mm, while a mean of 96.6% of pixels over all
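    The two-source idea can be illustrated in 1D: the field aperture is blurred by a narrow focal source and a broad extrafocal source, and the resulting fluence is convolved with a dose-deposition kernel. All widths and weights below are illustrative choices, not the paper's commissioned parameters (which also include MLC transmission, leaf-tip and spectral effects omitted here):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 401)            # off-axis position (cm)

def unit_gaussian(x, sigma):
    """Gaussian on the grid, normalized to unit area (unit sum)."""
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

aperture = (np.abs(x) <= 5.0).astype(float)  # 10 cm open field

# Two-source fluence: narrow focal spot + broad, weak extrafocal source.
focal = unit_gaussian(x, 0.1)
extrafocal = unit_gaussian(x, 2.0)
fluence = (0.95 * np.convolve(aperture, focal, mode="same")
           + 0.05 * np.convolve(aperture, extrafocal, mode="same"))

# Convert incident fluence to predicted EPID dose with a dose kernel.
kernel = unit_gaussian(x, 0.3)
dose = np.convolve(fluence, kernel, mode="same")
```

    The extrafocal term is what produces the long, low-amplitude tails outside the field edge that a single focal Gaussian cannot reproduce.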

  16. Research for correction pre-operative MRI images of brain during operation using particle method simulation

    International Nuclear Information System (INIS)

    Shino, Ryosaku; Koshizuka, Seiichi; Sakai, Mikio; Ito, Hirotaka; Iseki, Hiroshi; Muragaki, Yoshihiro

    2010-01-01

    In neurosurgical procedures, the surgeon formulates a surgery plan based on pre-operative images such as MRI. However, the brain deforms when the affected area is removed. In this paper, we propose a method for reconstructing pre-operative images to reflect this deformation using physical simulation. First, the domain of the brain is identified in the pre-operative images. Second, we create particles for the physical simulation. Then, we carry out a linear elastic simulation taking gravity into account. Finally, we reconstruct the pre-operative images with the deformation according to the movement of the particles. We show the effectiveness of this method by reconstructing a pre-operative image actually taken before surgery. (author)

  17. 3D Rapid Prototyping for Otolaryngology-Head and Neck Surgery: Applications in Image-Guidance, Surgical Simulation and Patient-Specific Modeling.

    Directory of Open Access Journals (Sweden)

    Harley H L Chan

    The aim of this study was to demonstrate the role of advanced fabrication technology across a broad spectrum of head and neck surgical procedures, including applications in endoscopic sinus surgery, skull base surgery, and maxillofacial reconstruction. The initial case studies demonstrated three applications of rapid prototyping technology in head and neck surgery: (i) a mono-material paranasal sinus phantom for endoscopy training, (ii) a multi-material skull base simulator, and (iii) 3D patient-specific mandible templates. Digital processing of these phantoms is based on real patient or cadaveric 3D images such as CT or MRI data. Three endoscopic sinus surgeons examined the realism of the endoscopy training phantom. One experienced endoscopic skull base surgeon conducted advanced sinus procedures on the high-fidelity multi-material skull base simulator. Ten patients participated in a prospective clinical study examining patient-specific modeling for mandibular reconstructive surgery. Qualitative feedback was acquired to assess the realism of the endoscopy training phantom and the high-fidelity multi-material phantom. Conformance comparisons, using assessments from the blinded reconstructive surgeons, measured the geometric performance between intra-operative and pre-operative reconstruction mandible plates. Both the endoscopy training phantom and the high-fidelity multi-material phantom received positive feedback on the realistic structure of the phantom models. Results suggested that further improvement of the soft tissue structure of the phantom models is necessary. In the patient-specific mandible template study, the pre-operative plates were judged by two blinded surgeons as providing optimal conformance in 7 out of 10 cases. No statistical differences were found in plate fabrication time and conformance, with pre-operative plating providing the advantage of reducing time spent in the operating room. The applicability of common model design and

  18. 3D Rapid Prototyping for Otolaryngology—Head and Neck Surgery: Applications in Image-Guidance, Surgical Simulation and Patient-Specific Modeling

    Science.gov (United States)

    Chan, Harley H. L.; Siewerdsen, Jeffrey H.; Vescan, Allan; Daly, Michael J.; Prisman, Eitan; Irish, Jonathan C.

    2015-01-01

    The aim of this study was to demonstrate the role of advanced fabrication technology across a broad spectrum of head and neck surgical procedures, including applications in endoscopic sinus surgery, skull base surgery, and maxillofacial reconstruction. The initial case studies demonstrated three applications of rapid prototyping technology in head and neck surgery: (i) a mono-material paranasal sinus phantom for endoscopy training, (ii) a multi-material skull base simulator, and (iii) 3D patient-specific mandible templates. Digital processing of these phantoms is based on real patient or cadaveric 3D images such as CT or MRI data. Three endoscopic sinus surgeons examined the realism of the endoscopy training phantom. One experienced endoscopic skull base surgeon conducted advanced sinus procedures on the high-fidelity multi-material skull base simulator. Ten patients participated in a prospective clinical study examining patient-specific modeling for mandibular reconstructive surgery. Qualitative feedback was acquired to assess the realism of the endoscopy training phantom and the high-fidelity multi-material phantom. Conformance comparisons, using assessments from the blinded reconstructive surgeons, measured the geometric performance between intra-operative and pre-operative reconstruction mandible plates. Both the endoscopy training phantom and the high-fidelity multi-material phantom received positive feedback on the realistic structure of the phantom models. Results suggested that further improvement of the soft tissue structure of the phantom models is necessary. In the patient-specific mandible template study, the pre-operative plates were judged by two blinded surgeons as providing optimal conformance in 7 out of 10 cases. No statistical differences were found in plate fabrication time and conformance, with pre-operative plating providing the advantage of reducing time spent in the operating room. The applicability of common model design and fabrication techniques

  19. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Saifullah

    2002-07-01

    This dissertation deals with (1) experimental measurements of petrophysical, reservoir engineering and morphological properties of Ekofisk chalk, (2) numerical simulation of core flood experiments to analyze and improve relative permeability data, (3) stochastic reconstruction of chalk samples from limited morphological information, (4) extraction of pore-space parameters from the reconstructed samples, development of a network model using the pore-space information, and computation of petrophysical and reservoir engineering properties from the network model, and (5) development of 2D and 3D idealized fractured reservoir models and verification of the applicability of several widely used conventional upscaling techniques in fractured reservoir simulation. Experiments were conducted on eight Ekofisk chalk samples, and porosity, absolute permeability, formation factor, oil-water relative permeability, capillary pressure and resistivity index were measured at laboratory conditions. Mercury porosimetry data and backscatter scanning electron microscope images were also acquired for the samples. A numerical simulation technique involving history matching of the production profiles is employed to improve the relative permeability curves and to analyze hysteresis of the Ekofisk chalk samples. The technique was found to be a powerful tool for supplementing uncertain experimental measurements. Porosity and correlation statistics obtained from backscatter scanning electron microscope images are used to reconstruct microstructures of chalk and particulate media. The reconstruction technique involves a simulated annealing algorithm, which can be constrained by an arbitrary number of morphological parameters. This flexibility of the algorithm is exploited to successfully reconstruct particulate media and chalk samples using more than one correlation function. A technique based on conditional simulated annealing has been introduced for exact reproduction of vuggy

  20. The simulation of an imaging gamma-ray Compton backscattering device using GEANT4

    International Nuclear Information System (INIS)

    Flechas, D.; Cristancho, F.; Sarmiento, L.G.; Fajardo, E.

    2014-01-01

    A gamma-backscattering imaging device dubbed the Compton Camera, developed at GSI (Darmstadt, Germany) and modified and studied at the Nuclear Physics Group of the National University of Colombia in Bogota, uses the back-to-back emission of two gamma rays in positron annihilation to construct a two-dimensional image that represents the distribution of matter in the field of view of the camera. This imaging capability can be used in a host of different situations, for example, to identify and study deposition and structural defects, or to help locate concealed objects, to name just two cases. In order to increase the understanding of the response of the Compton Camera and, in particular, of its image formation process, and to assist in the data analysis, a simulation of the camera was developed using the GEANT4 simulation toolkit. In this work, the images resulting from different experimental conditions are shown. The simulated images and their comparison with the experimental ones already suggest methods to improve the present experimental device. (author)

  1. Electronic structure and simulated STM images of non-honeycomb phosphorene allotropes

    Science.gov (United States)

    Kaur, Sumandeep; Kumar, Ashok; Srivastava, Sunita; Tankeshwar, K.

    2018-04-01

    We have investigated the electronic structure and simulated STM images of various non-honeycomb allotropes of phosphorene, namely ε-P, ζ-P, η-P and θ-P, within a combined density functional theory and Tersoff-Hamann approach. All these allotropes are found to be energetically stable and electronically semiconducting, with bandgaps ranging between 0.5 and 1.2 eV. The simulated STM images show distinctly different topographic features. Different maxima in the distance-height profiles indicate differences in the buckling of atoms in these allotropes. The distinctly different images obtained in this study can serve as fingerprints to identify the various allotropes during the synthesis of phosphorene.

  2. Image-based modeling of flow and reactive transport in porous media

    Science.gov (United States)

    Qin, Chao-Zhong; Hoang, Tuong; Verhoosel, Clemens V.; Harald van Brummelen, E.; Wijshoff, Herman M. A.

    2017-04-01

    Due to the availability of powerful computational resources and high-resolution acquisition of material structures, image-based modeling has become an important tool for studying pore-scale flow and transport processes in porous media [Scheibe et al., 2015]. It also plays an important role in upscaling studies for developing macroscale porous media models. Usually, the pore structure of a porous medium is discretized directly by the voxels obtained from visualization techniques (e.g., micro-CT scanning), which avoids complex mesh generation. However, this discretization may considerably overestimate the interfacial areas between solid walls and pore spaces, which can affect numerical predictions of reactive transport and immiscible two-phase flow. In this work, two types of image-based models are used to study single-phase flow and reactive transport in a porous medium of sintered glass beads. One model is a well-established voxel-based simulation tool. The other is based on the mixed isogeometric finite cell method [Hoang et al., 2016], which has been implemented in the open-source Nutils (http://www.nutils.org). The finite cell method can be used in combination with isogeometric analysis to enable higher-order discretization of problems on complex volumetric domains. A particularly interesting application of this immersed simulation technique is image-based analysis, where the geometry is smoothly approximated by segmentation of a B-spline level-set approximation of scan data [Verhoosel et al., 2015]. Through a number of case studies with the two models, we show the advantages and disadvantages of each in modeling single-phase flow and reactive transport in porous media. In particular, we highlight the importance of preserving high-resolution interfaces between solid walls and pore spaces in image-based modeling of porous media. References: Hoang, T., C. V. Verhoosel, F. Auricchio, E. H. van
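    The interfacial-area overestimate from raw voxel discretization is easy to demonstrate on a shape with a known surface area: counting the exposed voxel faces of a voxelized sphere gives roughly 1.5 times the smooth analytic area, because the staircase surface sums the axis-aligned projections rather than the true surface. A small sketch (grid size and radius are arbitrary choices):

```python
import numpy as np

n, r = 64, 25.0
c = (n - 1) / 2.0

# Voxelize a sphere of radius r centered in an n^3 grid (voxel size = 1).
z, y, x = np.ogrid[:n, :n, :n]
solid = (x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2 <= r ** 2

# Each solid/void transition along an axis is one exposed voxel face;
# the sphere is interior to the grid, so no boundary faces are missed.
faces = sum(np.count_nonzero(np.diff(solid.astype(np.int8), axis=a))
            for a in range(3))

voxel_area = float(faces)
smooth_area = 4.0 * np.pi * r ** 2
ratio = voxel_area / smooth_area   # approaches 1.5 for a voxelized sphere
```

    A smooth geometry reconstruction, such as the B-spline level-set segmentation used in the finite cell approach, removes this systematic bias, which matters for reactive transport because reaction rates scale with the solid-fluid interfacial area.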

  3. Imaging properties of the light sword optical element used as a contact lens in a presbyopic eye model.

    Science.gov (United States)

    Petelczyc, K; Bará, S; Lopez, A Ciro; Jaroszewicz, Z; Kakarenko, K; Kolodziejczyk, A; Sypek, M

    2011-12-05

    This paper analyzes the imaging properties of the light sword optical element (LSOE) applied as a contact lens to the presbyopic human eye. We performed our studies with a human eye model based on the Gullstrand parameterization. To make the discussion of imaging with extended depth of focus quantitative, we introduced parameters characterizing the output images of optotypes obtained in numerical simulations. The quality of the images formed by the LSOE was compared with those created by a presbyopic human eye, reading glasses and a quartic inverse axicon. We then complemented the numerical results with an experiment in which a 3D scene was imaged by means of the refractive LSOE correcting an artificial eye based on the Gullstrand model. According to the performed simulations and experiments, the LSOE exhibits abilities for presbyopia correction over a wide range of functional vision distances.

  4. Utilizing native fluorescence imaging, modeling and simulation to examine pharmacokinetics and therapeutic regimen of a novel anticancer prodrug

    International Nuclear Information System (INIS)

    Wang, Jing-Hung; Endsley, Aaron N.; Green, Carol E.; Matin, A. C.

    2016-01-01

    Success of cancer prodrugs relying on a foreign gene requires specific delivery of the gene to the cancer, as well as improvements such as higher-level gene transfer and expression. Attaining these objectives will be facilitated in preclinical studies using our newly discovered CNOB-GDEPT, consisting of the prodrug 6-chloro-9-nitro-5-oxo-5H-benzo-(a)-phenoxazine (CNOB) and its activating enzyme ChrR6, which generates the cytotoxic product 9-amino-6-chloro-5H-benzo[a]phenoxazine-5-one (MCHB). MCHB is fluorescent and can be noninvasively imaged in mice, and here we investigated whether MCHB fluorescence quantitatively reflects its concentration, as this would enhance its reporter value in further development of the CNOB-GDEPT therapeutic regimen. PK parameters were estimated and used to predict more effective CNOB administration schedules. CNOB (3.3 mg/kg) was injected iv in mice implanted with humanized ChrR6 (HChrR6)-expressing 4T1 tumors. Fluorescence was imaged in live mice using the IVIS Spectrum and quantified with Living Image 3.2 software. MCHB and CNOB were also quantified by LC/MS/MS analysis. We used a non-compartmental model to estimate PK parameters, and Phoenix WinNonlin software for simulations to predict a more effective CNOB dosage regimen. CNOB administration significantly prolonged mouse survival. MCHB fluorescence quantitatively reflected its exposure levels in the tumor and the plasma, as verified by LC/MS/MS analysis at various time points, including at a low concentration of 2 ng/g tumor. The LC/MS/MS data were used to estimate peak plasma concentrations, exposure (AUC 0-24), volume of distribution, clearance and half-life in plasma and the tumor. Simulations suggested that CNOB-GDEPT can be a successful therapy without large increases in the prodrug dosage. MCHB fluorescence quantifies this drug, and CNOB can be effective at relatively low doses. MCHB fluorescence characteristics will expedite further development of CNOB-GDEPT by, for example
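    The non-compartmental parameters named above (AUC, terminal half-life, clearance) follow from the concentration-time data by standard formulas: AUC by the trapezoidal rule, the terminal rate constant from a log-linear fit of the last points, and clearance as dose over AUC(0-inf). A sketch on synthetic mono-exponential data (all numbers hypothetical, not the MCHB/CNOB measurements):

```python
import numpy as np

t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])  # h
C = 10.0 * np.exp(-0.20 * t)                               # ng/mL

# AUC over the sampled interval by the linear trapezoidal rule.
auc_obs = float(np.sum(np.diff(t) * (C[:-1] + C[1:]) / 2.0))

# Terminal rate constant lambda_z: log-linear fit of the last four points.
slope, intercept = np.polyfit(t[-4:], np.log(C[-4:]), 1)
lam_z = -slope
t_half = np.log(2) / lam_z

# Extrapolate the tail to infinity and derive clearance.
auc_inf = auc_obs + C[-1] / lam_z
dose = 100.0                    # hypothetical iv dose (same mass units as C*V)
clearance = dose / auc_inf
```

    On real sparse data the trapezoidal AUC carries discretization error over wide sampling gaps, which is one reason tools like Phoenix WinNonlin also report the extrapolated fraction of AUC(0-inf).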

  5. Simulation of spatiotemporal CT data sets using a 4D MRI-based lung motion model.

    Science.gov (United States)

    Marx, Mirko; Ehrhardt, Jan; Werner, René; Schlemmer, Heinz-Peter; Handels, Heinz

    2014-05-01

    Four-dimensional CT imaging is widely used to account for motion-related effects during radiotherapy planning of lung cancer patients. However, 4D CT often contains motion artifacts, cannot be used to measure motion variability, and leads to higher dose exposure. In this article, we propose using 4D MRI to acquire motion information for the radiotherapy planning process. From the 4D MRI images, we derive a time-continuous model of the average patient-specific respiratory motion, which is then applied to simulate 4D CT data based on a static 3D CT. The idea of the motion model is to represent the average lung motion over a respiratory cycle by cyclic B-spline curves. The model generation consists of motion field estimation in the 4D MRI data by nonlinear registration, assigning respiratory phases to the motion fields, and applying a B-spline approximation on a voxel-by-voxel basis to describe the average voxel motion over a breathing cycle. To simulate a patient-specific 4D CT based on a static CT of the patient, a multi-modal registration strategy is introduced to transfer the motion model from MRI to the static CT coordinates. Differences between model-estimated and measured motion vectors average 1.39 mm for amplitude-based binning of the 4D MRI data of three patients. In addition, the MRI-to-CT registration strategy is shown to be suitable for the model transformation. The application of our 4D MRI-based motion model for simulating 4D CT images provides advantages over standard 4D CT (fewer motion artifacts, radiation-free). This makes it interesting for radiotherapy planning.
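    The per-voxel cyclic model can be illustrated with a simplified stand-in: instead of the cyclic B-spline fit described above, the sketch below interpolates phase-sampled displacements periodically over one breathing cycle. The sample values are invented toy data, not the authors' implementation.

    ```python
    import bisect

    def cyclic_motion_model(phases, displ):
        """Build a time-continuous, periodic model of voxel displacement over
        one breathing cycle from (phase, displacement) samples, phase in [0, 1).
        Periodic linear interpolation stands in for the cyclic B-spline fit."""
        samples = sorted(zip(phases, displ))
        ps = [p for p, _ in samples]
        ds = [d for _, d in samples]

        def model(phase):
            phase %= 1.0                      # wrap into one respiratory cycle
            i = bisect.bisect_right(ps, phase)
            p0, d0 = samples[i - 1] if i > 0 else (ps[-1] - 1.0, ds[-1])
            p1, d1 = samples[i] if i < len(samples) else (ps[0] + 1.0, ds[0])
            w = (phase - p0) / (p1 - p0)
            return d0 + w * (d1 - d0)

        return model

    # displacement samples (mm) of one voxel at five respiratory phases
    model = cyclic_motion_model([0.0, 0.2, 0.4, 0.6, 0.8],
                                [0.0, 3.0, 5.0, 3.5, 1.0])
    ```

    A real implementation would fit smooth periodic B-splines to many noisy phase samples per voxel; the periodic wrap-around handling at the cycle boundary is the part this sketch shares with that approach.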

  6. Evaluation of 3D modality-independent elastography for breast imaging: a simulation study

    International Nuclear Information System (INIS)

    Ou, J J; Ong, R E; Yankeelov, T E; Miga, M I

    2008-01-01

    This paper reports on the development and preliminary testing of a three-dimensional implementation of an inverse problem technique for extracting soft-tissue elasticity information via non-rigid model-based image registration. The modality-independent elastography (MIE) algorithm adjusts the elastic properties of a biomechanical model to achieve maximal similarity between images acquired under different states of static loading. A series of simulation experiments with clinical image sets of human breasts were performed to test the ability of the method to identify and characterize a radiographically occult stiff lesion. Because boundary conditions are a critical input to the algorithm, a comparison of three methods for semi-automated surface point correspondence was conducted in the context of systematic and randomized noise processes. The results illustrate that 3D MIE was able to successfully reconstruct elasticity images using data obtained from both magnetic resonance and x-ray computed tomography systems. The lesion was localized correctly in all cases and its relative elasticity found to be reasonably close to the true values (3.5% with the use of spatial priors and 11.6% without). In addition, the inaccuracies of surface registration performed with thin-plate spline interpolation did not exceed empiric thresholds of unacceptable boundary condition error

  7. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trait, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  8. Minimizing EIT image artefacts from mesh variability in finite element models.

    Science.gov (United States)

    Adler, Andy; Lionheart, William R B

    2011-07-01

    Electrical impedance tomography (EIT) solves an inverse problem to estimate the conductivity distribution within a body from electrical stimulation and measurements at the body surface, where the inverse problem is based on a solution of Laplace's equation in the body. Most commonly, a finite element model (FEM) is used, largely because of its ability to describe irregular body shapes. In this paper, we show that simulated variations in the positions of internal nodes within a FEM can result in serious image artefacts in the reconstructed images. Such variations occur when designing FEM meshes to conform to conductivity targets, but the effects may also be seen in other applications of absolute and difference EIT. We explore the hypothesis that these artefacts result from changes in the projection of the anisotropic conductivity tensor onto the FEM system matrix, which introduces anisotropic components into the simulated voltages; these cannot be reconstructed onto an isotropic image and appear as artefacts. The magnitude of the anisotropic effect is analysed for a small regular FEM, and shown to be proportional to the relative node movement as a fraction of element size. In order to address this problem, we show that it is possible to incorporate a FEM node movement component into the formulation of the inverse problem. These results suggest that it is important to consider artefacts due to FEM mesh geometry in EIT image reconstruction.

  9. Mammogram synthesis using a three-dimensional simulation. III. Modeling and evaluation of the breast ductal network

    International Nuclear Information System (INIS)

    Bakic, Predrag R.; Albert, Michael; Brzakovic, Dragana; Maidment, Andrew D. A.

    2003-01-01

    A method is proposed for realistic simulation of the breast ductal network as part of a computer three-dimensional (3-D) breast phantom. The ductal network is simulated using tree models. Synthetic trees are generated based upon a description of ductal branching by ramification matrices (R matrices), whose elements represent the probabilities of branching at various levels of a tree. We simulated the ductal network of the breast, consisting of multiple lobes, by random binary trees (RBT). Each lobe extends from the ampulla and consists of branching ductal segments of decreasing size, and the associated terminal ductal-lobular units. The lobes follow curved paths that project from the nipple toward the chest wall. We have evaluated the RBT model by comparing manually traced ductal networks from 25 projections of ductal lobes in clinical galactograms and manually traced networks from 23 projections of synthetic RBTs. A root-mean-square (rms) fractional error of 41%, between the R-matrix elements corresponding to clinical and synthetic images, was computed. This difference was influenced by projection and segmentation artifacts and by the limited number of available images. In addition, we analyzed 23 synthetic trees generated using R matrices computed from clinical images. A comparison of these synthetic and clinical images yielded a rms fractional error of 11%, suggesting the possibility that a more appropriate model of the ductal branching morphology may be developed. Rejection of the RBT model also suggests the existence of a relationship between ductal branching morphology and the state of mammary development and pathology.
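    The random-binary-tree idea can be illustrated with a toy generator: grow a tree in which each segment branches with a level-dependent probability, then estimate per-level branching probabilities back from the generated segments. This is a hypothetical sketch; the paper's R-matrix formulation over branch orders is richer than this level-indexed simplification.

    ```python
    import random

    def grow_rbt(p_branch, max_level, rng):
        """Grow a random binary tree: each segment at a given level branches
        into two children with probability p_branch[level]. Returns the
        number of segments generated at each level."""
        counts = [0] * (max_level + 1)
        stack = [0]                      # levels of segments still to process
        while stack:
            level = stack.pop()
            counts[level] += 1
            if level < max_level and rng.random() < p_branch[level]:
                stack.extend([level + 1, level + 1])
        return counts

    def branching_probabilities(counts):
        """Estimate the per-level branching probability from segment counts:
        each branching at level k produces two segments at level k+1."""
        return [counts[k + 1] / (2.0 * counts[k]) if counts[k] else 0.0
                for k in range(len(counts) - 1)]

    rng = random.Random(7)
    # aggregate many synthetic trees so the estimates are stable
    totals = [0] * 6
    for _ in range(2000):
        for lvl, c in enumerate(grow_rbt([0.9, 0.8, 0.6, 0.4, 0.2], 5, rng)):
            totals[lvl] += c
    probs = branching_probabilities(totals)
    ```

    Estimating such probabilities from traced clinical galactograms, and comparing them with the probabilities recovered from synthetic trees, is the kind of round trip the rms fractional errors above quantify.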

  10. Notes on modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Redondo, Antonio [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    These notes present a high-level overview of how modeling and simulation are carried out by practitioners. The discussion is of a general nature; no specific techniques are examined but the activities associated with all modeling and simulation approaches are briefly addressed. There is also a discussion of validation and verification and, at the end, a section on why modeling and simulation are useful.

  11. Thermal unit availability modeling in a regional simulation model

    International Nuclear Information System (INIS)

    Yamayee, Z.A.; Port, J.; Robinett, W.

    1983-01-01

    The System Analysis Model (SAM), developed under the umbrella of PNUCC's System Analysis Committee, is capable of simulating the operation of a given load/resource scenario. This model employs Monte Carlo simulation to incorporate uncertainties. Among the uncertainties modeled is thermal unit availability, both for energy simulations (seasonal) and capacity simulations (hourly). This paper presents the availability modeling in the capacity and energy models. The use of regional and national data in deriving the two availability models, the interaction between the two, and the modifications made to the capacity model to reflect regional practices are presented. A sample problem is presented to show the modification process. Results for modeling a nuclear unit using NERC-GADS are presented.

  12. The simulation of 3D mass models in 2D digital mammography and breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Shaheen, Eman, E-mail: eman.shaheen@uzleuven.be; De Keyzer, Frederik; Bosmans, Hilde; Ongeval, Chantal Van [Department of Radiology, University Hospitals Leuven, Herestraat 49, 3000 Leuven (Belgium); Dance, David R.; Young, Kenneth C. [National Coordinating Centre for the Physics of Mammography, Royal Surrey County Hospital, Guildford GU2 7XX, United Kingdom and Department of Physics, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH (United Kingdom)

    2014-08-15

    Purpose: This work proposes a new method of building 3D breast mass models with different morphological shapes and describes the validation of the realism of their appearance after simulation into 2D digital mammograms and breast tomosynthesis images. Methods: Twenty-five contrast enhanced MRI breast lesions were collected and each mass was manually segmented in the three orthogonal views: sagittal, coronal, and transversal. The segmented models were combined, resampled to have isotropic voxel sizes, triangularly meshed, and scaled to different sizes. These masses were referred to as nonspiculated masses and were then used as nuclei onto which spicules were grown with an iterative branching algorithm forming a total of 30 spiculated masses. These 55 mass models were projected into 2D projection images to obtain mammograms after image processing and into tomographic sequences of projection images, which were then reconstructed to form 3D tomosynthesis datasets. The realism of the appearance of these mass models was assessed by five radiologists via receiver operating characteristic (ROC) analysis when compared to 54 real masses. All lesions were also given a breast imaging reporting and data system (BIRADS) score. The data sets of 2D mammography and tomosynthesis were read separately. The Kendall's coefficient of concordance was used for the interrater observer agreement assessment for the BIRADS scores per modality. Further paired analysis, using the Wilcoxon signed rank test, of the BIRADS assessment between 2D and tomosynthesis was separately performed for the real masses and for the simulated masses. Results: The area under the ROC curves, averaged over all observers, was 0.54 (95% confidence interval [0.50, 0.66]) for the 2D study, and 0.67 (95% confidence interval [0.55, 0.79]) for the tomosynthesis study. According to the BIRADS scores, the nonspiculated and the spiculated masses varied in their degrees of malignancy from normal (BIRADS 1) to highly

  14. The simulation of 3D mass models in 2D digital mammography and breast tomosynthesis.

    Science.gov (United States)

    Shaheen, Eman; De Keyzer, Frederik; Bosmans, Hilde; Dance, David R; Young, Kenneth C; Van Ongeval, Chantal

    2014-08-01

    This work proposes a new method of building 3D breast mass models with different morphological shapes and describes the validation of the realism of their appearance after simulation into 2D digital mammograms and breast tomosynthesis images. Twenty-five contrast enhanced MRI breast lesions were collected and each mass was manually segmented in the three orthogonal views: sagittal, coronal, and transversal. The segmented models were combined, resampled to have isotropic voxel sizes, triangularly meshed, and scaled to different sizes. These masses were referred to as nonspiculated masses and were then used as nuclei onto which spicules were grown with an iterative branching algorithm forming a total of 30 spiculated masses. These 55 mass models were projected into 2D projection images to obtain mammograms after image processing and into tomographic sequences of projection images, which were then reconstructed to form 3D tomosynthesis datasets. The realism of the appearance of these mass models was assessed by five radiologists via receiver operating characteristic (ROC) analysis when compared to 54 real masses. All lesions were also given a breast imaging reporting and data system (BIRADS) score. The data sets of 2D mammography and tomosynthesis were read separately. The Kendall's coefficient of concordance was used for the interrater observer agreement assessment for the BIRADS scores per modality. Further paired analysis, using the Wilcoxon signed rank test, of the BIRADS assessment between 2D and tomosynthesis was separately performed for the real masses and for the simulated masses. The area under the ROC curves, averaged over all observers, was 0.54 (95% confidence interval [0.50, 0.66]) for the 2D study, and 0.67 (95% confidence interval [0.55, 0.79]) for the tomosynthesis study. According to the BIRADS scores, the nonspiculated and the spiculated masses varied in their degrees of malignancy from normal (BIRADS 1) to highly suggestive for malignancy (BIRADS 5).
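    For a single reader, the area under the ROC curve reduces to the Mann-Whitney statistic: the probability that a real mass is scored above a simulated one, with ties counted as one half. A minimal sketch with invented reader scores (not the study's data):

    ```python
    def auc_mann_whitney(scores_pos, scores_neg):
        """Area under the ROC curve via the Mann-Whitney U statistic:
        the fraction of (positive, negative) pairs ranked correctly,
        with ties counted as 0.5."""
        wins = 0.0
        for sp in scores_pos:
            for sn in scores_neg:
                if sp > sn:
                    wins += 1.0
                elif sp == sn:
                    wins += 0.5
        return wins / (len(scores_pos) * len(scores_neg))

    # toy 5-point realism scores; an AUC near 0.5 means the reader cannot
    # tell real from simulated masses, i.e. the simulation looks realistic
    real = [3, 4, 2, 5, 3]
    simulated = [4, 3, 2, 5, 3]
    auc = auc_mann_whitney(real, simulated)
    ```

    In the study the reported AUCs of 0.54 (2D) and 0.67 (tomosynthesis) are averaged over five readers, with confidence intervals from the observer variability.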

  15. CIMI simulations with recently developed multi-parameter chorus and plasmaspheric hiss models

    Science.gov (United States)

    Aryan, Homayon; Sibeck, David; Kang, Suk-bin; Balikhin, Michael; Fok, Mei-ching

    2017-04-01

    Simulation studies of the Earth's radiation belts are very useful in understanding the acceleration and loss of energetic particles. The Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model considers the effects of the ring current and plasmasphere on the radiation belts. CIMI was formed by merging the Comprehensive Ring Current Model (CRCM) and the Radiation Belt Environment (RBE) model, and solves for many essential quantities in the inner magnetosphere, including radiation belt enhancements and dropouts. It incorporates chorus and plasmaspheric hiss wave diffusion of energetic electrons in energy, pitch angle, and cross terms. Usually the chorus and plasmaspheric hiss models used in CIMI are based on a single-parameter geomagnetic index (AE). Here we integrate recently developed multi-parameter chorus and plasmaspheric hiss wave models based on the geomagnetic index and solar wind parameters. We then perform CIMI simulations for different storms and compare the results with data from the Van Allen Probes and the Two Wide-angle Imaging Neutral-atom Spectrometers and Akebono satellites. We find that the CIMI simulations with multi-parameter chorus and plasmaspheric hiss wave models agree more closely with the data than those with the single-parameter wave models.

  16. Reconstruction of implanted marker trajectories from cone-beam CT projection images using interdimensional correlation modeling

    International Nuclear Information System (INIS)

    Chung, Hyekyun; Poulsen, Per Rugaard; Keall, Paul J.; Cho, Seungryong; Cho, Byungchul

    2016-01-01

    Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-square estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior
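    The simple linear interdimensional model described above can be sketched as an ordinary least-squares fit relating the readily observed superior-inferior (SI) motion to, say, the anterior-posterior (AP) motion. The trajectory values below are synthetic illustration data, not patient data.

    ```python
    def fit_linear_correlation(si, ap):
        """Least-squares fit of the simple linear interdimensional model
        AP(t) = a * SI(t) + b, relating anterior-posterior to
        superior-inferior target motion."""
        n = len(si)
        sbar = sum(si) / n
        abar = sum(ap) / n
        a = sum((s - y0) * (y - abar) for s, y, y0 in
                ((s, y, sbar) for s, y in zip(si, ap))) / \
            sum((s - sbar) ** 2 for s in si)
        b = abar - a * sbar
        return a, b

    # synthetic trajectory: AP moves at 40% of SI amplitude plus a 1 mm offset
    si = [0.0, 2.0, 5.0, 8.0, 10.0, 7.0, 3.0, 1.0]
    ap = [0.4 * s + 1.0 for s in si]
    a, b = fit_linear_correlation(si, ap)
    ```

    In the paper the parameters are instead estimated by minimizing the 2D error between actual and estimated *projected* target positions as the gantry rotates, and the state-augmented model adds further terms; this direct 3D fit is only the simplest member of that family.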

  18. Simulation and Modeling Application in Agricultural Mechanization

    Directory of Open Access Journals (Sweden)

    R. M. Hudzari

    2012-01-01

    Full Text Available This experiment was conducted to determine the equations relating the Hue digital values of the fruit surface of the oil palm to the maturity stage of the fruit in the plantation. The FFB images were zoomed and captured using a Nikon digital camera, and the Hue was calculated using the highest-frequency value of the R, G, and B color components from histogram analysis software. A new procedure for monitoring the image pixel values of the oil palm fruit surface color during real-time growth to maturity was developed. The predicted day of harvest was calculated from the developed model relating Hue values to mesocarp oil content. The regression-based simulation model predicts the day of harvest, or the number of days before harvest, of the FFB. The mesocarp oil content results can be used for real-time oil content determination with the MPOB color meter. A graph for determining the day of harvest of the FFB is presented in this research. The oil was found to start developing in the mesocarp 65 days before the fruit reaches the ripe maturity stage of 75% oil to dry mesocarp.
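    Deriving a Hue value from the most frequent R, G, and B histogram values can be done with the standard RGB-to-HSV conversion; Python's stdlib `colorsys` is used here. The peak values are assumed for illustration, not taken from the paper.

    ```python
    import colorsys

    def hue_from_histogram_peaks(r_peak, g_peak, b_peak):
        """Hue (in degrees) of the dominant fruit-surface colour, taking the
        most frequent R, G and B values from the image histograms (0-255)."""
        h, _s, _v = colorsys.rgb_to_hsv(r_peak / 255.0,
                                        g_peak / 255.0,
                                        b_peak / 255.0)
        return h * 360.0

    # a ripe-looking orange-red surface (assumed peak values, for illustration)
    hue = hue_from_histogram_peaks(200, 80, 40)
    ```

    Note that taking each channel's histogram peak independently, as the abstract describes, assumes the three peaks come from the same dominant surface colour; computing hue per pixel and then taking the modal hue would avoid that assumption.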

  19. Are Live Ultrasound Models Replaceable? Traditional vs. Simulated Education Module for FAST

    Directory of Open Access Journals (Sweden)

    Suzanne Bentley

    2015-10-01

    Full Text Available Introduction: The focused assessment with sonography for trauma (FAST) is a commonly used and life-saving tool in the initial assessment of trauma patients. The recommended emergency medicine (EM) curriculum includes ultrasound, and studies show the additional utility of ultrasound training for medical students. EM clerkships vary and often do not contain formal ultrasound instruction. Time constraints for facilitating lectures and hands-on learning of ultrasound are challenging. Limitations on didactics call for the development and inclusion of novel educational strategies, such as simulation. The objective of this study was to compare the test results, survey responses, and ultrasound performance of medical students trained on an ultrasound simulator versus those trained via the traditional, hands-on patient format. Methods: This was a prospective, blinded, controlled educational study focused on EM clerkship medical students. After all received a standardized lecture with pictorial demonstration of image acquisition, students were randomized into two groups: a control group receiving the traditional training method via practice on a human model and an intervention group training via practice on an ultrasound simulator. Participants were tested and surveyed on indications and interpretation of FAST and on training and confidence with image interpretation and acquisition before and after this educational activity. Evaluation of FAST skills was performed on a human model to emulate patient care, and practical skills were scored via objective structured clinical examination (OSCE) with a critical action checklist. Results: There was no significant difference between the control group (N=54) and the intervention group (N=39) on pretest scores, prior ultrasound training/education, or ultrasound comfort level in general or on FAST. All students (N=93) showed significant improvement from pre- to post-test scores and significant improvement in comfort level using ultrasound in general and on FAST

  20. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    Monte Carlo simulation can be defined as a representation of real life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and field trials to investigate the effect of alternative conditions or actions on a specific system. Nonetheless, field trials are expensive and sometimes not possible to conduct, as in the case of foot-and-mouth disease (FMD). Instead, simulation models can be a good and cheap substitute for experiments and field trials. However, if simulation models are to be used, good quality input data must be available. To model FMD, several disease spread models are available. For this project, we chose three simulation models: Davis Animal Disease Spread (DADS), which has been upgraded to DTU-DADS, and InterSpread Plus (ISP...
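    The Monte Carlo idea can be illustrated with a toy disease-spread model: repeat a stochastic outbreak many times and summarize the distribution of outcomes. The sketch below uses a Reed-Frost chain binomial, a deliberately simple stand-in and not any of the models named above; herd size and transmission probability are invented.

    ```python
    import random

    def reed_frost_outbreak(n, i0, p, rng):
        """One Monte Carlo run of a Reed-Frost chain-binomial epidemic:
        each susceptible escapes infection by each infectious animal with
        probability (1 - p) per time step. Returns the final outbreak size."""
        s, i, total = n - i0, i0, i0
        while i > 0 and s > 0:
            p_inf = 1.0 - (1.0 - p) ** i
            new_i = sum(1 for _ in range(s) if rng.random() < p_inf)
            s -= new_i
            total += new_i
            i = new_i
        return total

    rng = random.Random(42)
    # many replicates give the distribution of outbreak sizes, not one outcome
    sizes = [reed_frost_outbreak(n=100, i0=1, p=0.02, rng=rng)
             for _ in range(500)]
    mean_size = sum(sizes) / len(sizes)
    ```

    Full FMD models such as DTU-DADS or ISP add herd locations, contact networks, and control measures on top of this same replicate-and-summarize structure, which is why they need good quality input data.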

  1. Evaluating visibility of age spot and freckle based on simulated spectral reflectance distribution and facial color image

    Science.gov (United States)

    Hirose, Misa; Toyota, Saori; Tsumura, Norimichi

    2018-02-01

    In this research, we evaluate the visibility of age spots and freckles while varying the blood volume, based on simulated spectral reflectance distributions and actual facial color images, and compare the results. First, we generate three types of spatial distributions of age spots and freckles in patch-like images based on the simulated spectral reflectance. The spectral reflectance is simulated using Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image while varying the blood volume. We acquire the concentration distributions of the melanin, hemoglobin, and shading components by applying independent component analysis to a facial color image. We reproduce images using the obtained melanin and shading concentrations and the changed hemoglobin concentration. Finally, we evaluate the visibility of the pigmentations using the simulated spectral reflectance distributions and the facial color images. For the simulated spectral reflectance distributions, we found that visibility decreases as blood volume increases. For the actual facial color images, however, a specific blood volume reduces the visibility of the pigmentations.

  2. High-resolution subject-specific mitral valve imaging and modeling: experimental and computational methods.

    Science.gov (United States)

    Toma, Milan; Bloodworth, Charles H; Einstein, Daniel R; Pierce, Eric L; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S

    2016-12-01

    The diversity of mitral valve (MV) geometries and multitude of surgical options for correction of MV diseases necessitates the use of computational modeling. Numerical simulations of the MV would allow surgeons and engineers to evaluate repairs, devices, procedures, and concepts before performing them and before moving on to more costly testing modalities. Constructing, tuning, and validating these models rely upon extensive in vitro characterization of valve structure, function, and response to change due to diseases. Micro-computed tomography (μCT) allows for unmatched spatial resolution for soft tissue imaging. However, it is still technically challenging to obtain an accurate geometry of the diastolic MV. We discuss here the development of a novel technique for treating MV specimens with glutaraldehyde fixative in order to minimize geometric distortions in preparation for μCT scanning. The technique provides a resulting MV geometry which is significantly more detailed in chordal structure, accurate in leaflet shape, and closer to its physiological diastolic geometry. In this paper, computational fluid-structure interaction (FSI) simulations are used to show the importance of more detailed subject-specific MV geometry with 3D chordal structure to simulate a proper closure validated against μCT images of the closed valve. Two computational models, before and after use of the aforementioned technique, are used to simulate closure of the MV.

  3. Calibrating a hydraulic model using water levels derived from time series high-resolution Radarsat-2 synthetic aperture radar images and elevation data

    Science.gov (United States)

    Trudel, M.; Desrochers, N.; Leconte, R.

    2017-12-01

    Knowledge of the water extent (WE) and water level (WL) of rivers is necessary to calibrate and validate hydraulic models and thus to better simulate and forecast floods. Synthetic aperture radar (SAR) has demonstrated its potential for delineating water bodies, as the backscattering of water is much lower than that of other natural surfaces. The ability of SAR to obtain information despite cloud cover makes it an interesting tool for temporal monitoring of water bodies. The delineation of WE combined with a high-resolution digital terrain model (DTM) allows WL to be extracted. However, most research using SAR data to calibrate hydraulic models has been carried out using one or two images. The objective of this study is to use WL derived from a time series of high-resolution Radarsat-2 SAR images for the calibration of a 1-D hydraulic model (HEC-RAS). Twenty high-resolution (5 m) Radarsat-2 images were acquired over a 40 km reach of the Athabasca River, in northern Alberta, Canada, between 2012 and 2016, covering both low and high flow regimes. A high-resolution (2 m) DTM was generated by combining information from LIDAR data and bathymetry acquired between 2008 and 2016 by boat surveying. The HEC-RAS model was implemented on the Athabasca River to simulate WL using cross-sections spaced 100 m apart. An image histogram thresholding method was applied to each Radarsat-2 image to derive WE. The WE was then compared against each cross-section to identify those where the bank slope is not too abrupt and which are therefore amenable to extracting WL. 139 WL observations at different locations along the river reach, together with streamflow measurements, were used to calibrate the HEC-RAS model. The RMSE between SAR-derived and simulated WL is under 0.35 m. Validation was performed using in situ observations of WL measured in 2008, 2012 and 2016. The RMSE between the simulated water levels calibrated with SAR images and in situ observations is less than 0.20 m. In addition, a critical success index (CSI) was
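    Histogram thresholding of a SAR backscatter image is commonly done with Otsu's method, which picks the threshold maximizing the between-class variance; the abstract does not name its exact thresholding method, so this is one plausible instance. The histogram below is a toy bimodal example, not Radarsat-2 data.

    ```python
    def otsu_threshold(histogram):
        """Otsu's method: choose the grey-level threshold that maximises the
        between-class variance of the image histogram. Low-backscatter pixels
        (at or below the threshold) are classified as water."""
        total = sum(histogram)
        total_sum = sum(i * h for i, h in enumerate(histogram))
        best_t, best_var = 0, -1.0
        w0 = sum0 = 0
        for t, h in enumerate(histogram):
            w0 += h                      # pixels in the "water" class
            if w0 == 0 or w0 == total:
                continue
            sum0 += t * h
            m0 = sum0 / w0               # class means
            m1 = (total_sum - sum0) / (total - w0)
            var_between = w0 * (total - w0) * (m0 - m1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t

    # bimodal toy histogram: dark water pixels around level 2, land around 12
    hist = [0, 5, 30, 10, 2, 0, 0, 0, 0, 1, 8, 25, 40, 12, 3, 0]
    t = otsu_threshold(hist)
    ```

    On real SAR amplitude images the water/land modes overlap more, so the threshold is often computed per image (as the time series here requires) rather than fixed once.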

  4. Growth Simulation and Discrimination of Botrytis cinerea, Rhizopus stolonifer and Colletotrichum acutatum Using Hyperspectral Reflectance Imaging.

    Directory of Open Access Journals (Sweden)

    Ye Sun

    Full Text Available This research aimed to develop a rapid and nondestructive method to model the growth and discrimination of spoilage fungi, such as Botrytis cinerea, Rhizopus stolonifer and Colletotrichum acutatum, based on a hyperspectral imaging system (HIS). A hyperspectral imaging system was used to measure the spectral response of fungi inoculated on potato dextrose agar plates and stored at 28°C and 85% RH. The fungi were analyzed every 12 h over two days during growth, and optimal simulation models were built based on HIS parameters. The results showed that the coefficients of determination (R²) of the simulation models for the testing datasets were 0.7223 to 0.9914, and the sum square error (SSE) and root mean square error (RMSE) were in the ranges of 2.03-53.40×10⁻⁴ and 0.011-0.756, respectively. The correlation coefficients between the HIS parameters and colony forming units of fungi were high, ranging from 0.887 to 0.957. In addition, fungal species were discriminated by partial least squares discriminant analysis (PLSDA), with a classification accuracy of 97.5% for the test dataset at 36 h. The application of this method to real food was addressed through the analysis of Botrytis cinerea, Rhizopus stolonifer and Colletotrichum acutatum inoculated in peaches, demonstrating that the HIS technique is effective for simulating fungal infection in real food. This paper supplies a new technique and useful information for further study into modeling the growth of fungi and detecting fruit spoilage caused by fungi based on HIS.

  5. EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.

    Science.gov (United States)

    Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos

    2015-01-01

    Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other Multi and Single Objective optimization methods. A significant performance enhancement from traditional techniques can be inferred from the results.
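The abstract does not spell out the algorithm, but any multi-objective simulated annealing scheme combines two ingredients: a Pareto-dominance test between objective vectors (here, e.g., data misfit and the regularization term) and a temperature-controlled acceptance rule. A generic illustrative sketch, not the authors' formulation:

```python
import math
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def accept(current, candidate, temperature, rng):
    """Always accept a dominating candidate; otherwise accept with a
    Boltzmann probability on the aggregate objective increase."""
    if dominates(candidate, current):
        return True
    delta = max(sum(candidate) - sum(current), 0.0)
    return rng.random() < math.exp(-delta / temperature)
```

Because dominance needs no a priori weighting of the objectives, such a scheme sidesteps exactly the unknown regularization weight the abstract mentions.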

  6. Imaging Performance Analysis of Simbol-X with Simulations

    Science.gov (United States)

    Chauvin, M.; Roques, J. P.

    2009-05-01

    Simbol-X is an X-ray telescope operating in formation flight. This means that its optical performance will strongly depend on the drift of the two spacecraft and on the ability to measure these drifts for image reconstruction. We built a dynamical ray-tracing code to study the impact of these parameters on the optical performance of Simbol-X (see Chauvin et al., these proceedings). Using this simulation tool, we have conducted detailed analyses of the impact of different parameters on the imaging performance of the Simbol-X telescope.

  7. Imaging Performance Analysis of Simbol-X with Simulations

    International Nuclear Information System (INIS)

    Chauvin, M.; Roques, J. P.

    2009-01-01

    Simbol-X is an X-ray telescope operating in formation flight. This means that its optical performance will strongly depend on the drift of the two spacecraft and on the ability to measure these drifts for image reconstruction. We built a dynamical ray-tracing code to study the impact of these parameters on the optical performance of Simbol-X (see Chauvin et al., these proceedings). Using this simulation tool, we have conducted detailed analyses of the impact of different parameters on the imaging performance of the Simbol-X telescope.

  8. Development of a Novel Ultrasound-guided Peritonsillar Abscess Model for Simulation Training.

    Science.gov (United States)

    Ng, Vivienne; Plitt, Jennifer; Biffar, David

    2018-01-01

    Peritonsillar abscess (PTA) is the most common deep space infection of the head and neck presenting to emergency departments [1]. No commercial PTA task trainer exists for simulation training. Thus, resident physicians often perform their first PTA needle aspiration in the clinical setting, knowing that carotid artery puncture and hemorrhage are serious and devastating complications. While several low-fidelity PTA task trainers have been previously described, none allow for ultrasound image acquisition [6-9]. We sought to create a cost-effective and realistic task trainer that allows trainees to acquire both diagnostic ultrasound and needle aspiration skills while draining a peritonsillar abscess. We built the task trainer with low-cost, replaceable, and easily cleanable materials. A damaged airway head skin was repurposed to build the model. A mesh wire cylinder attached to a wooden base was fashioned to provide infrastructure. PTAs were simulated with a water and lotion solution inside a water balloon that was glued to the bottom of a paper cup. The balloon was fully submerged in ordnance gelatin to facilitate ultrasound image acquisition, and an asymmetric soft palate and deviated uvula were painted on top after setting. PTA cups were replaced after use. We spent eight hours constructing three task trainers and used 50 PTA cups for a total cost <$110. Forty-six emergency medicine (EM) residents performed PTA needle aspirations using the task trainers and were asked to rate ultrasound image realism, task trainer realism, and trainer ease of use on a five-point visual analog scale, with five being very realistic and easy. Sixteen of 46 (35%) residents completed the survey and reported that ultrasound images were representative of real PTAs (mean 3.41). They found the model realistic (mean 3.73) and easy to use (mean 4.08). Residents rated their comfort with the drainage procedure as 2.07 before and 3.64 after practicing on the trainer. This low-cost, easy

  9. Development of a Novel Ultrasound-guided Peritonsillar Abscess Model for Simulation Training

    Directory of Open Access Journals (Sweden)

    Vivienne Ng

    2017-12-01

    Full Text Available Introduction Peritonsillar abscess (PTA) is the most common deep space infection of the head and neck presenting to emergency departments [1]. No commercial PTA task trainer exists for simulation training. Thus, resident physicians often perform their first PTA needle aspiration in the clinical setting, knowing that carotid artery puncture and hemorrhage are serious and devastating complications. While several low-fidelity PTA task trainers have been previously described, none allow for ultrasound image acquisition [6-9]. We sought to create a cost-effective and realistic task trainer that allows trainees to acquire both diagnostic ultrasound and needle aspiration skills while draining a peritonsillar abscess. Methods We built the task trainer with low-cost, replaceable, and easily cleanable materials. A damaged airway head skin was repurposed to build the model. A mesh wire cylinder attached to a wooden base was fashioned to provide infrastructure. PTAs were simulated with a water and lotion solution inside a water balloon that was glued to the bottom of a paper cup. The balloon was fully submerged in ordnance gelatin to facilitate ultrasound image acquisition, and an asymmetric soft palate and deviated uvula were painted on top after setting. PTA cups were replaced after use. We spent eight hours constructing three task trainers and used 50 PTA cups for a total cost <$110. Results Forty-six emergency medicine (EM) residents performed PTA needle aspirations using the task trainers and were asked to rate ultrasound image realism, task trainer realism, and trainer ease of use on a five-point visual analog scale, with five being very realistic and easy. Sixteen of 46 (35%) residents completed the survey and reported that ultrasound images were representative of real PTAs (mean 3.41). They found the model realistic (mean 3.73) and easy to use (mean 4.08). Residents rated their comfort with the drainage procedure as 2.07 before and 3.64 after practicing

  10. Global options for biofuels from plantations according to IMAGE simulations

    International Nuclear Information System (INIS)

    Battjes, J.J.

    1994-07-01

    In this report, the contribution of biofuels to the renewable energy supply and the transition towards it are discussed for the energy crops miscanthus, eucalyptus, poplar, wheat and sugar cane. Bio-electricity appears to be the most suitable option with regard to energetic and financial aspects and in terms of avoided CO2 emissions. The IMAGE 2.0 model is a multi-disciplinary, integrated model designed to simulate the dynamics of the global society-biosphere-climate system, and is mainly used here to make more realistic estimates. Dynamic calculations are performed to the year 2100. An IMAGE 2.0-based 'Conventional Wisdom' scenario simulates, among other things, future energy demand and supply, future food production, future land cover patterns and future greenhouse gas emissions. Two biofuel scenarios are described in this report. The first consists of growing energy crops on set-asides. According to the 'Conventional Wisdom' scenario, Canada, the U.S. and Europe, and to a lesser extent Latin America, will experience set-asides due to declining demand for agricultural land. The second biofuel scenario consists of growing energy crops on set-asides and on 10% of the agricultural area in the developing countries. Growing energy crops on all of the areas listed above leads to an energy production that amounts to about 12% of total non-renewable energy use in 2050, according to the 'Conventional Wisdom' scenario. Furthermore, energy-related CO2 emissions are reduced by about 15% in 2050, compared to the Conventional Wisdom scenario. Financial aspects will have a great influence on the success of growing energy crops. However, energy generated from biomass derived from plantations is currently more expensive than energy generated from traditional fuels. Levying taxes on CO2 emissions and giving subsidies to biofuels will reduce the cost price difference between fossil fuels and biofuels

  11. ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL

    OpenAIRE

    Климак, М.С.; Войтко, С.В.

    2016-01-01

    This paper considers theoretical and applied aspects of the development of simulation models to predict the optimal development of production systems that create tangible products and services. It is shown that the process of inventory control requires economic and mathematical modeling in view of the complexity of theoretical studies. A simulation model of stock control is proposed that supports management decisions in production logistics.

  12. Simulation of range imaging-based estimation of respiratory lung motion. Influence of noise, signal dimensionality and sampling patterns.

    Science.gov (United States)

    Wilms, M; Werner, R; Blendowski, M; Ortmüller, J; Handels, H

    2014-01-01

    A major problem associated with the irradiation of thoracic and abdominal tumors is respiratory motion. In clinical practice, motion compensation approaches are frequently steered by low-dimensional breathing signals (e.g., spirometry) and patient-specific correspondence models, which are used to estimate the sought internal motion given a signal measurement. Recently, the use of multidimensional signals derived from range images of the moving skin surface has been proposed to better account for complex motion patterns. In this work, a simulation study is carried out to investigate the motion estimation accuracy of such multidimensional signals and the influence of noise, the signal dimensionality, and different sampling patterns (points, lines, regions). A diffeomorphic correspondence modeling framework is employed to relate multidimensional breathing signals derived from simulated range images to internal motion patterns represented by diffeomorphic non-linear transformations. Furthermore, an automatic approach for the selection of optimal signal combinations/patterns within this framework is presented. This simulation study focuses on lung motion estimation and is based on 28 4D CT data sets. The results show that the use of multidimensional signals instead of one-dimensional signals significantly improves the motion estimation accuracy, which is, however, highly affected by noise. Only small differences exist between different multidimensional sampling patterns (lines and regions). Automatically determined optimal combinations of points and lines do not lead to accuracy improvements compared to results obtained by using all points or lines. Our results show the potential of multidimensional breathing signals derived from range images for the model-based estimation of respiratory motion in radiation therapy.

  13. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    Science.gov (United States)

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  14. Development and validation of real-time simulation of X-ray imaging with respiratory motion.

    Science.gov (United States)

    Vidal, Franck P; Villard, Pierre-Frédéric

    2016-04-01

    We present a framework that combines evolutionary optimisation, soft tissue modelling and ray tracing on GPU to simultaneously compute the respiratory motion and X-ray imaging in real-time. Our aim is to provide validated building blocks with high fidelity to closely match both the human physiology and the physics of X-rays. A CPU-based set of algorithms is presented to model organ behaviours during respiration. Soft tissue deformation is computed with an extension of the Chain Mail method. Rigid elements move according to kinematic laws. A GPU-based surface rendering method is proposed to compute the X-ray image using the Beer-Lambert law. It is provided as an open-source library. A quantitative validation study is provided to objectively assess the accuracy of both components: (i) the respiration against anatomical data, and (ii) the X-ray against the Beer-Lambert law and the results of Monte Carlo simulations. Our implementation can be used in various applications, such as interactive medical virtual environment to train percutaneous transhepatic cholangiography in interventional radiology, 2D/3D registration, computation of digitally reconstructed radiograph, simulation of 4D sinograms to test tomography reconstruction tools. Copyright © 2015 Elsevier Ltd. All rights reserved.
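The GPU renderer described above attenuates each ray with the Beer-Lambert law, multiplying the incident intensity by the exponential of the summed attenuation along the path. A one-ray sketch; the attenuation coefficients and path lengths are made-up example values, not from the paper:

```python
import numpy as np

def transmitted_intensity(i0, mu, d):
    """Beer-Lambert law: I = I0 * exp(-sum_i mu_i * d_i), where mu_i is the
    linear attenuation coefficient (cm^-1) and d_i the path length (cm)
    of the ray through material i."""
    return i0 * np.exp(-np.dot(np.asarray(mu), np.asarray(d)))

# Ray crossing two materials: 2 cm at 0.2 cm^-1, then 1 cm at 0.5 cm^-1
i_out = transmitted_intensity(1000.0, mu=[0.2, 0.5], d=[2.0, 1.0])
```

Evaluating this per detector pixel, with path lengths supplied by the ray tracer through the deformed anatomy, yields the simulated radiograph.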

  15. Conversion of a Surface Model of a Structure of Interest into a Volume Model for Medical Image Retrieval

    Directory of Open Access Journals (Sweden)

    Sarmad ISTEPHAN

    2015-06-01

    Full Text Available Volumetric medical image datasets contain vital information for noninvasive diagnosis, treatment planning and prognosis. However, direct and unlimited query of such datasets is hindered by the unstructured nature of the imaging data. This study is a step towards the unlimited query of medical image datasets by focusing on specific Structures of Interest (SOI). A requirement for achieving this objective is having both the surface and volume models of the SOI. However, typically only the surface model is available. Therefore, this study focuses on creating a fast method to convert a surface model to a volume model. Three methods (1D, 2D and 3D) are proposed and evaluated using simulated and real data of the Deep Perisylvian Area (DPSA) within the human brain. The 1D method takes 80 msec for the DPSA model, about 4 times faster than the 2D method and 7.4-fold faster than the 3D method, with over 97% accuracy. The proposed 1D method is feasible for surface-to-volume conversion in computer-aided diagnosis, treatment planning and prognosis systems containing large amounts of unstructured medical images.

  16. Whole-building Hygrothermal Simulation Model

    DEFF Research Database (Denmark)

    Rode, Carsten; Grau, Karl

    2003-01-01

    An existing integrated simulation tool for the dynamic thermal simulation of buildings was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with comparison against measurements in an outdoor test cell furnished with single...... materials. Almost quasi-steady, cyclic experiments were used to compare the indoor humidity variation and the numerical results of the integrated simulation tool with the new moisture model. Except for the case with chipboard as furnishing, the predictions of indoor humidity with the detailed model were......

  17. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    Science.gov (United States)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

    Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis non-optimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community, and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy of the resulting textures.
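Fully developed speckle is commonly simulated as the envelope of a complex Gaussian scattering field, whose amplitude is Rayleigh-distributed; tissue-specific speckle models of the kind the phantom supports then modify this baseline. A minimal sketch of the baseline pattern (image size and seed are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (128, 128)

# Fully developed speckle: in-phase/quadrature components are i.i.d. Gaussian,
# so the envelope amplitude follows a Rayleigh distribution.
in_phase = rng.normal(0.0, 1.0, shape)
quadrature = rng.normal(0.0, 1.0, shape)
speckle = np.hypot(in_phase, quadrature)

# Statistical check of realism: Rayleigh(sigma=1) has mean sqrt(pi/2) ~ 1.253
mean_amplitude = speckle.mean()
```

Comparing such first-order statistics of the simulated texture against theory is exactly the kind of quantitative realism measure the abstract mentions.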

  18. LBM-EP: Lattice-Boltzmann method for fast cardiac electrophysiology simulation from 3D images.

    Science.gov (United States)

    Rapaka, S; Mansi, T; Georgescu, B; Pop, M; Wright, G A; Kamen, A; Comaniciu, Dorin

    2012-01-01

    Current treatments of heart rhythm disorders require careful planning and guidance for optimal outcomes. Computational models of cardiac electrophysiology are being proposed for therapy planning, but current approaches are either too simplified or too computationally intensive for patient-specific simulations in clinical practice. This paper presents a novel approach, LBM-EP, to solve any type of mono-domain cardiac electrophysiology model in near real time that is especially tailored for patient-specific simulations. The domain is discretized on a Cartesian grid with a level-set representation of the patient's heart geometry, previously estimated from images automatically. The cell model is calculated node-wise, while the transmembrane potential is diffused using the Lattice-Boltzmann method within the domain defined by the level-set. Experiments on synthetic cases, on a data set from CESC'10 and on one patient with myocardium scar showed that LBM-EP provides results comparable to an FEM implementation, while being 10-45 times faster. Fast, accurate, scalable and requiring no specific meshing, LBM-EP paves the way to efficient and detailed models of cardiac electrophysiology for therapy planning.

  19. Correction of electrode modelling errors in multi-frequency EIT imaging.

    Science.gov (United States)

    Jehl, Markus; Holder, David

    2016-06-01

    The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.

  20. Pore-scale Simulation and Imaging of Multi-phase Flow and Transport in Porous Media (Invited)

    Science.gov (United States)

    Crawshaw, J.; Welch, N.; Daher, I.; Yang, J.; Shah, S.; Grey, F.; Boek, E.

    2013-12-01

    -NMR experiments. We then use our preferred multi-phase model to directly calculate flow in pore space images of two different sandstones and observe excellent agreement with experimental relative permeabilities. We also calculate cluster size distributions in good agreement with experimental studies. Our analysis shows that the simulations are able to predict both multi-phase flow and transport properties directly on large 3D pore space images of real rocks. [Figure: pore space images (left) and velocity distributions (right); Yang and Boek, 2013]

  1. Geant4 simulation of the response of phosphor screens for X-ray imaging

    International Nuclear Information System (INIS)

    Pistrui-Maximean, S.A.; Freud, N.; Letang, J.M.; Koch, A.; Munier, B.; Walenta, A.H.; Montarou, G.; Babot, D.

    2006-01-01

    In order to predict and optimize the response of phosphor screens, it is important to understand the role played by the different physical processes inside the scintillator layer. A simulation model based on the Monte Carlo code Geant4 was developed to determine the Modulation Transfer Function (MTF) of phosphor screens for energies used in X-ray medical imaging and nondestructive testing applications. The visualization of the dose distribution inside the phosphor layer gives an insight into how the MTF is progressively degraded by X-ray and electron transport. The simulation model makes it possible to study the influence of physical and technological parameters on detector performance, as well as to design and optimize new detector configurations. Preliminary MTF measurements have been carried out, and agreement with experimental data has been found in the case of a commercial screen (Kodak Lanex Fine) at an X-ray tube potential of 100 kV. Further validation with other screens (transparent or granular) at different energies is under way
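Once a simulation (or measurement) yields the line-spread function of the screen, the MTF is its Fourier magnitude normalized to unity at zero spatial frequency. A sketch with an assumed Gaussian blur, for which the analytic MTF is exp(-2·pi²·sigma²·f²) (the blur width and sampling are illustrative):

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 512)          # position across the slit image (mm)
sigma = 0.5                              # assumed Gaussian blur width (mm)
lsf = np.exp(-x**2 / (2.0 * sigma**2))   # line-spread function

# MTF = |FFT(LSF)| normalised to 1 at zero spatial frequency
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])  # cycles/mm
```

Comparing such a curve between simulated and measured edge or slit images is how agreement of the kind reported for the Kodak Lanex Fine screen is quantified.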

  2. Geant4 simulation of the response of phosphor screens for X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Pistrui-Maximean, S.A. [Laboratory of Nondestructive Testing using Ionizing Radiation, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint Exupery, 69621 Villeurbanne Cedex (France)]. E-mail: simona.pistrui@insa-lyon.fr; Freud, N. [Laboratory of Nondestructive Testing using Ionizing Radiation, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint Exupery, 69621 Villeurbanne Cedex (France); Letang, J.M. [Laboratory of Nondestructive Testing using Ionizing Radiation, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint Exupery, 69621 Villeurbanne Cedex (France); Koch, A. [Thales Electron Devices, 38430 Moirans (France); Munier, B. [Thales Electron Devices, 38430 Moirans (France); Walenta, A.H. [Department of Detectors and Electronics, FB Physik, University of Siegen, 57068 Siegen (Germany); Montarou, G. [Corpuscular Physics Laboratory, Blaise Pascal University, 63177 Aubiere Cedex (France); Babot, D. [Laboratory of Nondestructive Testing using Ionizing Radiation, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint Exupery, 69621 Villeurbanne Cedex (France)

    2006-07-01

    In order to predict and optimize the response of phosphor screens, it is important to understand the role played by the different physical processes inside the scintillator layer. A simulation model based on the Monte Carlo code Geant4 was developed to determine the Modulation Transfer Function (MTF) of phosphor screens for energies used in X-ray medical imaging and nondestructive testing applications. The visualization of the dose distribution inside the phosphor layer gives an insight into how the MTF is progressively degraded by X-ray and electron transport. The simulation model makes it possible to study the influence of physical and technological parameters on detector performance, as well as to design and optimize new detector configurations. Preliminary MTF measurements have been carried out, and agreement with experimental data has been found in the case of a commercial screen (Kodak Lanex Fine) at an X-ray tube potential of 100 kV. Further validation with other screens (transparent or granular) at different energies is under way.

  3. Multilaboratory particle image velocimetry analysis of the FDA benchmark nozzle model to support validation of computational fluid dynamics simulations.

    Science.gov (United States)

    Hariharan, Prasanna; Giarra, Matthew; Reddy, Varun; Day, Steven W; Manning, Keefe B; Deutsch, Steven; Stewart, Sandy F C; Myers, Matthew R; Berman, Michael R; Burgreen, Greg W; Paterson, Eric G; Malinauskas, Richard A

    2011-04-01

    This study is part of an FDA-sponsored project to evaluate the use and limitations of computational fluid dynamics (CFD) in assessing blood flow parameters related to medical device safety. In an interlaboratory study, fluid velocities and pressures were measured in a nozzle model to provide experimental validation for a companion round-robin CFD study. The simple benchmark nozzle model, which mimicked the flow fields in several medical devices, consisted of a gradual flow constriction, a narrow throat region, and a sudden expansion region where a fluid jet exited the center of the nozzle with recirculation zones near the model walls. Measurements of mean velocity and turbulent flow quantities were made in the benchmark device at three independent laboratories using particle image velocimetry (PIV). Flow measurements were performed over a range of nozzle throat Reynolds numbers (Re(throat)) from 500 to 6500, covering the laminar, transitional, and turbulent flow regimes. A standard operating procedure was developed for performing experiments under controlled temperature and flow conditions and for minimizing systematic errors during PIV image acquisition and processing. For laminar (Re(throat)=500) and turbulent flow conditions (Re(throat)≥3500), the velocities measured by the three laboratories were similar with an interlaboratory uncertainty of ∼10% at most of the locations. However, for the transitional flow case (Re(throat)=2000), the uncertainty in the size and the velocity of the jet at the nozzle exit increased to ∼60% and was very sensitive to the flow conditions. An error analysis showed that by minimizing the variability in the experimental parameters such as flow rate and fluid viscosity to less than 5% and by matching the inlet turbulence level between the laboratories, the uncertainties in the velocities of the transitional flow case could be reduced to ∼15%. The experimental procedure and flow results from this interlaboratory study (available
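The throat Reynolds number that parameterizes these flow regimes is the standard Re = ρvD/μ. A trivial helper with illustrative blood-analog values; the fluid properties and throat diameter are assumptions, not the study's actual test conditions:

```python
def reynolds_number(rho, velocity, diameter, viscosity):
    """Re = rho * v * D / mu, with density in kg/m^3, mean throat velocity
    in m/s, throat diameter in m, and dynamic viscosity in Pa*s."""
    return rho * velocity * diameter / viscosity

# Assumed example: a blood-analog fluid through a ~4 mm nozzle throat
re_throat = reynolds_number(rho=1056.0, velocity=1.2, diameter=0.004,
                            viscosity=0.0035)
```

Holding Re(throat) fixed across laboratories, rather than velocity alone, is what makes runs with slightly different fluids comparable.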

  4. Modeling LCD Displays with Local Backlight Dimming for Image Quality Assessment

    DEFF Research Database (Denmark)

    Korhonen, Jari; Burini, Nino; Forchhammer, Søren

    2011-01-01

    for evaluating the signal quality distortion related directly to digital signal processing, such as compression. However, the physical characteristics of the display device also pose a significant impact on the overall perception. In order to facilitate image quality assessment on modern liquid crystal displays...... (LCD) using light emitting diode (LED) backlight with local dimming, we present the essential considerations and guidelines for modeling the characteristics of displays with high dynamic range (HDR) and locally adjustable backlight segments. The representation of the image generated by the model can...... be assessed using the traditional objective metrics, and therefore the proposed approach is useful for assessing the performance of different backlight dimming algorithms in terms of resulting quality and power consumption in a simulated environment. We have implemented the proposed model in C++ and compared

  5. Improvement in visibility of simulated lung nodules on computed radiography (CR) chest images by use of temporal subtraction technique

    International Nuclear Information System (INIS)

    Oda, Nobuhiro; Fujimoto, Keiji; Murakami, Seiichi; Katsuragawa, Shigehiko; Doi, Kunio; Nakata, Hajime

    1999-01-01

    A temporal subtraction image, obtained by subtracting a previous image from a current one, can enhance interval change on chest images. In this study, we compared the visibility of simulated lung nodules on CR images with and without temporal subtraction. Chest phantom images without and with simulated nodules were obtained as previous and current images, respectively, by a CR system. Then, subtraction images were produced with an iterative image warping technique. Twelve simulated nodules were attached at various locations on the chest phantom. The diameter of the nodules, which had a CT number of 47, ranged from 3 mm to 10 mm. Seven radiologists subjectively evaluated the visibility of simulated nodules on CR images with and without temporal subtraction using a three-point rating scale (0: invisible, +1: questionable, +2: visible). The minimum diameter of simulated nodules visible at a frequency greater than 50% was 4 mm on the CR images with temporal subtraction and 6 mm on those without. Our results indicate that the subtraction images clearly improved the visibility of simulated nodules. (author)
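After the previous image has been warped onto the current one, the subtraction itself is a pixel-wise difference in which unchanged anatomy cancels and interval change remains. A toy sketch on already-registered synthetic images; sizes, intensities, and the nodule location are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
previous = rng.normal(100.0, 5.0, (64, 64))  # registered "previous" chest image
current = previous.copy()
current[20:28, 30:38] += 40.0                # a nodule appearing in the interval

diff = current - previous                    # temporal subtraction image
# unchanged anatomy cancels to zero; only the new nodule stands out
```

In practice the cancellation is imperfect, which is why the iterative warping step that aligns ribs and vessels beforehand is essential.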

  6. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov [Division of Imaging, Diagnostics, and Software Reliability, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland 20993 (United States)

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of the point response, pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. 
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying
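
    As a rough illustration of the kind of output hybridMANTIS produces, a pulse-height spectrum for an indirect detector can be mocked up by sampling optical photon generation and collection; the yield, energy, and efficiency values here are arbitrary assumptions, not the package's physics:

```python
import numpy as np

rng = np.random.default_rng(42)

def pulse_height_spectrum(n_xrays=10000, yield_per_kev=50,
                          e_kev=20.0, collection_eff=0.3):
    """Optical photons generated per absorbed x ray (Poisson), thinned by
    the collection efficiency (binomial); the histogram of collected
    photons per event is the pulse-height spectrum."""
    generated = rng.poisson(yield_per_kev * e_kev, size=n_xrays)
    collected = rng.binomial(generated, collection_eff)
    hist, edges = np.histogram(collected, bins=50)
    return collected, hist, edges
```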

  7. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    International Nuclear Information System (INIS)

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-01-01

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of the point response, pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. 
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying

  8. Progress in modeling and simulation.

    Science.gov (United States)

    Kindler, E

    1998-01-01

    For modeling systems, computers are used more and more, while the other "media" (including the human intellect) that carry models are abandoned. For modeling knowledge, i.e. more or less general concepts (possibly used to model systems composed of instances of such concepts), object-oriented programming is nowadays widely used. For modeling processes that exist and develop in time, computer simulation is used, the results of which are often presented by means of animation (graphical pictures moving and changing in time). Unfortunately, object-oriented programming tools are commonly not designed to be of great use for simulation, while programming tools for simulation do not enable their users to apply the advantages of object-oriented programming. Nevertheless, there are exceptions that enable general concepts represented in a computer to be used for constructing simulation models and for modifying them easily. They are described in the present paper, together with precise definitions of modeling, simulation and object-oriented programming (including cases that do not satisfy the definitions but are liable to introduce misunderstanding), an outline of their applications and of their further development. Since computing systems are being introduced as control components into a large spectrum of (technological, social and biological) systems, attention is directed to models of systems containing modeling components.
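
    The combination the paper argues for, object-oriented concepts driving a discrete-event simulation, can be sketched in a few lines; the class and method names are mine, not Kindler's:

```python
import heapq

class Process:
    """A general concept (class); its instances are simulated processes."""
    def __init__(self, name, interval):
        self.name, self.interval = name, interval
        self.fired = 0

    def event(self, sim, time):
        self.fired += 1                           # the model's behavior
        sim.schedule(time + self.interval, self)  # reschedule itself

class Simulation:
    """Minimal event-list simulator driving Process instances."""
    def __init__(self):
        self.queue, self.now = [], 0.0

    def schedule(self, time, proc):
        # id(proc) breaks ties so heap never compares Process objects
        heapq.heappush(self.queue, (time, id(proc), proc))

    def run(self, until):
        while self.queue and self.queue[0][0] <= until:
            self.now, _, proc = heapq.heappop(self.queue)
            proc.event(self, self.now)
```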

  9. Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models

    CSIR Research Space (South Africa)

    Kruger, FJ

    1985-03-01

    Full Text Available -animal interactions. An additional two models, which expand aspects of the FYNBOS model, are described: a model for simulating canopy processes; and a Fire Recovery Simulator. The canopy process model will simulate ecophysiological processes in more detail than FYNBOS...

  10. Small animal positron emission tomography with gas detectors. Simulations, prototyping, and quantitative image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Vernekohl, Don

    2014-04-15

    plain surfaces, predicted by simulations, was observed. Third, as the production of photon converters is time consuming and expensive, it was investigated whether or not thin gas detectors with single-lead-layer converters would be an alternative to the HIDAC converter design. According to simulations, these concepts potentially offer impressive coincidence sensitivities of up to 24% for plain lead foils and up to 40% for perforated lead foils. Fourth, compared to other PET scanner systems, the HIDAC concept suffers from missing energy information. Consequently, a substantial amount of scatter events can be found within the measured data. On the basis of image reconstruction and correction techniques, the influence of random and scatter events and their characteristics on several simulated phantoms was presented. It was validated with the HIDAC simulator that the applied correction technique results in perfectly corrected images. Moreover, it was shown that the simulator is a credible tool to provide quantitatively improved images. Fifth, a new model for the non-collinearity of the positronium annihilation was developed, since it was observed that the model implemented in the GATE simulator does not correspond to the measured observation. The input parameter of the new model was tuned to match a point-source measurement. The influence of both models on the spatial resolution was studied with three different reconstruction methods. Furthermore, it was demonstrated that the reduction of converter depth, proposed for increased sensitivity, also benefits the spatial resolution, and that a reduction of the FOV from 17 cm to 4 cm (with only 2 detector heads) results in a remarkable sensitivity increase of 150% and a substantial increase in spatial resolution. 
The presented simulations for the spatial resolution analysis used an intrinsic detector resolution of 0.125 × 0.125 × 3.2 mm³ and were able to reach fair resolutions down to 0.9-0.5 mm, which is an
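
    The non-collinearity effect can be illustrated by sampling the deviation of the two annihilation photons from 180°; the ~0.5° FWHM Gaussian used here is the commonly quoted approximation, not the thesis's fitted model:

```python
import numpy as np

rng = np.random.default_rng(7)

def noncollinearity_angles(n, fwhm_deg=0.5):
    """Sample acollinearity deviations (degrees) from back-to-back emission
    as a zero-mean Gaussian with the given FWHM."""
    sigma = fwhm_deg / 2.3548200450309493  # FWHM = 2*sqrt(2*ln 2) * sigma
    return rng.normal(0.0, sigma, size=n)
```

    The resulting angular blur, combined with the detector-ring radius, sets a floor on the achievable spatial resolution, which is why the model choice matters for the sub-millimeter resolutions quoted above.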

  11. Simulation-based evaluation of the designs of the Advanced Gamma-ray Imaging System (AGIS)

    Science.gov (United States)

    Bugaev, Slava; Buckley, James; Digel, Seth; Funk, Stephen; Konopelko, Alex; Krawczynski, Henric; Lebohec, Stephan; Maier, Gernot; Vassiliev, Vladimir

    2009-05-01

    The AGIS project, currently under design study, is a large array of imaging atmospheric Cherenkov telescopes for gamma-ray astronomy between 40 GeV and 100 TeV. In this paper we present the ongoing simulation effort to model the considered design approaches as a function of the main parameters, such as array geometry, telescope optics and camera design, in such a way that the gamma-ray observation capabilities can be optimized against the overall project cost.

  12. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body.

  13. Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation.

    Science.gov (United States)

    Mangado, Nerea; Ceresa, Mario; Duchateau, Nicolas; Kjer, Hans Martin; Vera, Sergio; Dejea Velardo, Hector; Mistrik, Pavel; Paulsen, Rasmus R; Fagertun, Jens; Noailly, Jérôme; Piella, Gemma; González Ballester, Miguel Ángel

    2016-08-01

    Recent developments in computational modeling of cochlear implantation make it promising to study in silico the performance of the implant before surgery. However, creating a complete computational model of the patient's anatomy while including an external device geometry remains challenging. To address such a challenge, we propose an automatic framework for the generation of patient-specific meshes for finite element modeling of the implanted cochlea. First, a statistical shape model is constructed from high-resolution anatomical μCT images. Then, by fitting the statistical model to a patient's CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns constitutive parameters to all components of the finite element model. This model can then be used to study in silico the effects of the electrical stimulation of the cochlear implant. Results are shown on a total of 25 models of patients. In all cases, a final mesh suitable for finite element simulations was obtained, in an average time of 94 s. The framework has proven to be fast and robust, and is promising for a detailed prognosis of the cochlear implantation surgery.
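
    The statistical shape model step can be sketched as plain PCA over concatenated landmark coordinates; this is a generic stand-in for the μCT-based model in the paper, and all names are assumptions:

```python
import numpy as np

def build_ssm(shapes):
    """PCA shape model from an (n_samples, n_coords) landmark matrix.
    Returns the mean shape, variation modes, and per-mode std devs."""
    mean = shapes.mean(axis=0)
    _, sv, modes = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, modes, sv / np.sqrt(len(shapes) - 1)

def fit_ssm(mean, modes, target, n_modes=2):
    """Project a target shape onto the first n_modes variation modes,
    i.e. the best model instance explaining the target."""
    b = modes[:n_modes] @ (target - mean)
    return mean + b @ modes[:n_modes]
```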

  14. Charge-Spot Model for Electrostatic Forces in Simulation of Fine Particulates

    Science.gov (United States)

    Walton, Otis R.; Johnson, Scott M.

    2010-01-01

    The charge-spot technique for modeling the electrostatic forces acting between charged fine particles entails treating electric charges on individual particles as small sets of discrete point charges, located near their surfaces. This is in contrast to existing models, which assume a single charge per particle. The charge-spot technique more accurately describes the forces, torques, and moments that act on triboelectrically charged particles, especially image-charge forces acting near conducting surfaces. The discrete element method (DEM) simulation uses a truncation range to limit the number of near-neighbor charge spots via a shifted-and-truncated Coulomb potential. The model can be readily adapted to account for induced dipoles in uncharged particles (and thus dielectrophoretic forces) by allowing two charge spots of opposite signs to be created in response to an external electric field. To account for virtual overlap during contacts, the model can be set to automatically scale down the effective charge in proportion to the amount of virtual overlap of the charge spots. This can be accomplished by mimicking the behavior of two real overlapping spherical charge clouds, or with other approximate forms. The charge-spot method much more closely resembles real non-uniform surface charge distributions that result from tribocharging than simpler approaches, which just assign a single total charge to a particle. With the charge-spot model, a single particle may have a zero net charge, but still have both positive and negative charge spots, which could produce substantial forces on the particle when it is close to other charges, when it is in an external electric field, or when near a conducting surface. Since the charge-spot model can contain any number of charge spots per particle, it can be used with only one or two charge spots per particle for simulating charging from solar wind bombardment, or with several charge spots for simulating triboelectric charging
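
    The shifted-and-truncated Coulomb interaction mentioned above can be written down directly; this is the standard shifted-force form, offered here as an assumed reading of the paper's scheme:

```python
import numpy as np

K_E = 8.9875e9  # Coulomb constant, N*m^2/C^2

def charge_spot_force(q1, q2, r_vec, r_cut):
    """Force on charge spot 1 due to charge spot 2, shifted so the
    magnitude falls continuously to zero at r_cut and is exactly zero
    beyond it (so distant spots can be skipped in the DEM loop)."""
    r = np.linalg.norm(r_vec)
    if r >= r_cut:
        return np.zeros(3)
    magnitude = K_E * q1 * q2 * (1.0 / r**2 - 1.0 / r_cut**2)
    return magnitude * (r_vec / r)
```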

  15. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. It introduces the concept of discrete-event Monte Carlo simulation, the most commonly used methodology for modeli...
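
    In the same discrete-event spirit, though in Python rather than Arena, the mean time in system of an M/M/1 queue can be estimated with the Lindley recursion; the parameters are illustrative:

```python
import random

def mm1_mean_sojourn(lam=0.8, mu=1.0, n_customers=50000, seed=1):
    """Mean time in system for an M/M/1 queue.
    Queueing theory predicts 1 / (mu - lam) at steady state."""
    rng = random.Random(seed)
    t_arrive = 0.0
    server_free = 0.0
    total = 0.0
    for _ in range(n_customers):
        t_arrive += rng.expovariate(lam)           # Poisson arrivals
        start = max(t_arrive, server_free)         # wait if server is busy
        server_free = start + rng.expovariate(mu)  # exponential service
        total += server_free - t_arrive
    return total / n_customers
```

    Comparing the estimate against the analytic value 1/(mu - lam) is exactly the kind of model-validation exercise the book advocates.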

  16. Registration error between PET and CT images confirmed by a water model

    International Nuclear Information System (INIS)

    Chen Yangchun; Fan Mingwu; Xu Hao; Chen Ping; Zhang Chunlin

    2012-01-01

    The registration error between the PET and CT imaging systems was confirmed by a water model simulating clinical cases. A barrel of 6750 mL was filled with 59.2 MBq [18F]-FDG and scanned after 80 min by PET/CT in 2-dimensional mode. The CT images were used for attenuation correction of the PET images. The CT/PET images were obtained by image morphological processing analyses without the barrel wall. The relationship of the water image centroids of the CT and PET images was established by linear regression analysis, and the registration error between the PET and CT images could be computed slice by slice. The alignment program was run 4 times following the protocol given by GE Healthcare. Compared with the centroids of the water CT images, the centroids of the PET images were shifted along the X-axis by (0.011×slice+0.63) mm and along the Y-axis by (0.022×slice+1.35) mm. To match the CT images, the PET images should be translated along the X-axis by (-2.69±0.15) mm, the Y-axis by (0.43±0.11) mm, and the Z-axis by (0.86±0.23) mm, and rotated about the X-axis by (0.06±0.07)°, the Y-axis by (-0.01±0.08)°, and the Z-axis by (0.11±0.07)°. Thus, the systematic registration error was not affected by the load and its distribution. By finding the registration error between PET and CT images for the coordinate-rotation random error, the water model could confirm the registration results of a PET-CT system corrected by alignment parameters. (authors)
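
    The per-slice centroid comparison behind fits such as (0.022×slice+1.35) mm can be sketched as follows; the helper names are assumptions:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (x, y) of a 2D slice."""
    ys, xs = np.nonzero(img)
    w = img[ys, xs].astype(float)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

def fit_shift_vs_slice(shifts):
    """Least-squares line shift = a*slice + b across the slice stack,
    i.e. the form of the reported X- and Y-axis regressions."""
    slices = np.arange(len(shifts))
    a, b = np.polyfit(slices, shifts, 1)
    return a, b
```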

  17. Assessment of the impact of modeling axial compression on PET image reconstruction.

    Science.gov (United States)

    Belzunce, Martin A; Reader, Andrew J

    2017-10-01

    To comprehensively evaluate both the acceleration and image-quality impacts of axial compression and its degree of modeling in fully 3D PET image reconstruction. Despite being used since the very dawn of 3D PET reconstruction, there are still no extensive studies on the impact of axial compression and its degree of modeling during reconstruction on the end-point reconstructed image quality. In this work, an evaluation of the impact of axial compression on the image quality is performed by extensively simulating data with span values from 1 to 121. In addition, two methods for modeling the axial compression in the reconstruction were evaluated. The first method models the axial compression in the system matrix, while the second method uses an unmatched projector/backprojector, where the axial compression is modeled only in the forward projector. The different system matrices were analyzed by computing their singular values and the point response functions for small subregions of the FOV. The two methods were evaluated with simulated and real data for the Biograph mMR scanner. For the simulated data, the axial compression with span values lower than 7 did not show a decrease in the contrast of the reconstructed images. For span 11, the standard sinogram size of the mMR scanner, losses of contrast in the range of 5-10 percentage points were observed when measured for a hot lesion. For higher span values, the spatial resolution was degraded considerably. However, impressively, for all span values of 21 and lower, modeling the axial compression in the system matrix compensated for the spatial resolution degradation and obtained similar contrast values as the span 1 reconstructions. Such approaches have the same processing times as span 1 reconstructions, but they permit significant reduction in storage requirements for the fully 3D sinograms. For higher span values, the system has a large condition number and it is therefore difficult to recover accurately the higher
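
    Axial compression ("span") groups michelogram ring differences into segments; the standard grouping convention can be sketched as follows (assumed here, not taken from the mMR's documentation):

```python
import numpy as np

def span_groups(max_ring_diff, span):
    """Map each ring difference to its segment index for a given span.
    Segment 0 covers ring differences in [-(span//2), span//2]; each
    further segment mashes the next span-wide band together."""
    half = span // 2
    groups = {}
    for rd in range(-max_ring_diff, max_ring_diff + 1):
        seg = int(np.sign(rd)) * ((abs(rd) + half) // span)
        groups.setdefault(seg, []).append(rd)
    return groups
```

    Summing the sinogram planes within each group is what shrinks storage relative to span 1, at the cost of the axial blurring the study quantifies.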

  18. Multi-object segmentation framework using deformable models for medical imaging analysis.

    Science.gov (United States)

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures on medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing to select a suitable combination in different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed
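
    The control-module idea, several deformable models co-evolving under one coordinator, can be caricatured as follows; the "physics" here is a one-number stand-in, since a real snake evolves a whole contour under image forces:

```python
class DeformableModel:
    """One active contour, reduced to a scalar state that relaxes toward
    a target (a stand-in for energy-minimizing contour evolution)."""
    def __init__(self, name, state, target):
        self.name, self.state, self.target = name, state, target

    def step(self):
        self.state += 0.5 * (self.target - self.state)

    def converged(self, tol=1e-3):
        return abs(self.target - self.state) < tol

class DeformableModelArray:
    """Control module: co-evolves several models until all converge."""
    def __init__(self, models):
        self.models = models

    def run(self, max_iters=100):
        for it in range(max_iters):
            for m in self.models:
                if not m.converged():
                    m.step()
            if all(m.converged() for m in self.models):
                return it + 1   # iterations needed
        return max_iters
```

    In the real framework, the coordinator would also arbitrate interactions between contours (e.g. preventing overlap), which is what makes multi-object segmentation more than running the snakes independently.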

  19. Development of the Transport Class Model (TCM) Aircraft Simulation From a Sub-Scale Generic Transport Model (GTM) Simulation

    Science.gov (United States)

    Hueschen, Richard M.

    2011-01-01

    A six degree-of-freedom, flat-earth dynamics, non-linear, and non-proprietary aircraft simulation was developed that is representative of a generic mid-sized twin-jet transport aircraft. The simulation was developed from a non-proprietary, publicly available, subscale twin-jet transport aircraft simulation using scaling relationships and a modified aerodynamic database. The simulation has an extended aerodynamics database with aero data outside the normal transport-operating envelope (large angle-of-attack and sideslip values). The simulation has representative transport aircraft surface actuator models with variable rate-limits and generally fixed position limits. The simulation contains a generic 40,000 lb sea level thrust engine model. The engine model is a first order dynamic model with a variable time constant that changes according to simulation conditions. The simulation provides a means for interfacing a flight control system to use the simulation sensor variables and to command the surface actuators and throttle position of the engine model.
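
    The engine description amounts to a first-order lag whose time constant depends on the simulation state; a sketch with an invented thrust-dependent schedule (the actual constants are not given in this abstract):

```python
def time_constant(thrust, max_thrust=40000.0):
    """Illustrative variable time constant: slower spool-up at low thrust."""
    return 1.5 if thrust < 0.5 * max_thrust else 0.8

def engine_step(thrust, cmd, dt):
    """Explicit-Euler step of the first-order lag dT/dt = (cmd - T) / tau(T)."""
    return thrust + dt * (cmd - thrust) / time_constant(thrust)
```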

  20. 3D modeling of satellite spectral images, radiation budget and energy budget of urban landscapes

    Science.gov (United States)

    Gastellu-Etchegorry, J. P.

    2008-12-01

    DART EB is a model that is being developed for simulating the 3D (3 dimensional) energy budget of urban and natural scenes, possibly with topography and atmosphere. It simulates all non radiative energy mechanisms (heat conduction, turbulent momentum and heat fluxes, water reservoir evolution, etc.). It uses the DART (Discrete Anisotropic Radiative Transfer) model for simulating radiative mechanisms: 3D radiative budget of 3D scenes and their remote sensing images expressed in terms of reflectance or brightness temperature values, for any atmosphere, wavelength, sun/view direction, altitude and spatial resolution. It uses an innovative multispectral approach (ray tracing, exact kernel, discrete ordinate techniques) over the whole optical domain. This paper presents two major and recent improvements of DART for adapting it to urban canopies. (1) Simulation of the geometry and optical characteristics of urban elements (houses, etc.). (2) Modeling of thermal infrared emission by vegetation and urban elements. The new DART version was used in the context of the CAPITOUL project. For that, districts of the Toulouse urban data base (Autocad format) were translated into DART scenes. This allowed us to simulate visible, near infrared and thermal infrared satellite images of Toulouse districts. Moreover, the 3D radiation budget was used by DART EB for simulating the time evolution of a number of geophysical quantities of various surface elements (roads, walls, roofs). Results were successfully compared with ground measurements of the CAPITOUL project.
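
    The thermal-infrared part rests on Planck's law and its inverse (brightness temperature), which can be written down directly:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(T, wl):
    """Blackbody spectral radiance at temperature T (K), wavelength wl (m)."""
    return 2 * H * C**2 / wl**5 / np.expm1(H * C / (wl * KB * T))

def brightness_temperature(L, wl):
    """Temperature of the blackbody that would emit radiance L at wl;
    the inverse of planck_radiance."""
    return H * C / (wl * KB) / np.log1p(2 * H * C**2 / (wl**5 * L))
```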

  1. Thick tissue diffusion model with binding to optimize topical staining in fluorescence breast cancer margin imaging

    Science.gov (United States)

    Xu, Xiaochun; Kang, Soyoung; Navarro-Comes, Eric; Wang, Yu; Liu, Jonathan T. C.; Tichauer, Kenneth M.

    2018-03-01

    Intraoperative tumor/surgical margin assessment is required to achieve a higher tumor resection rate in breast-conserving surgery. Though current histology provides unmatched accuracy in margin assessment, thin tissue sectioning and the limited field of view of microscopy make histology too time-consuming for intraoperative applications. If thick-tissue, wide-field imaging can provide an acceptable assessment of tumor cells at the surface of resected tissues, an intraoperative protocol can be developed to guide the surgery and provide immediate feedback for surgeons. Topical staining of margins with cancer-targeted molecular imaging agents has the potential to provide the sensitivity needed to see microscopic cancer on a wide-field image; however, diffusion and nonspecific retention of imaging agents in thick tissue can significantly diminish tumor contrast with conventional methods. Here, we present a mathematical model to accurately simulate nonspecific retention, binding, and diffusion of imaging agents in thick tissue topical staining to guide and optimize future thick tissue staining and imaging protocols. In order to verify the accuracy and applicability of the model, diffusion profiles of cancer-targeted and untargeted (control) nanoparticles at different staining times in A431 tumor xenografts were acquired for model comparison and tuning. The initial findings suggest the existence of nonspecific retention in the tissue, especially at the tissue surface. The simulator can be used to compare the effects of nonspecific retention, receptor binding and diffusion under various conditions (tissue type, imaging agent) and provides optimal staining and imaging protocols for targeted and control imaging agents.
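
    A minimal 1D diffusion-plus-binding staining model can be written as an explicit finite-difference scheme; all parameters are arbitrary and dimensionless, and the authors' model additionally separates out nonspecific retention:

```python
import numpy as np

def stain_profile(n=100, steps=500, dt=0.01, dx=0.1, D=0.4, k_bind=0.2,
                  c_surface=1.0):
    """Topical stain held at the surface diffuses inward; a first-order
    binding term irreversibly traps agent along the way. Returns free and
    bound concentration vs depth. Stable while D*dt/dx**2 <= 0.5."""
    free = np.zeros(n)
    bound = np.zeros(n)
    for _ in range(steps):
        free[0] = c_surface   # stain bath held at the tissue surface
        lap = np.zeros(n)
        lap[1:-1] = (free[2:] - 2.0 * free[1:-1] + free[:-2]) / dx**2
        binding = k_bind * free
        free += dt * (D * lap - binding)
        bound += dt * binding
    return free, bound
```

    The bound profile's steep fall-off with depth is the effect the paper exploits: surface tumor cells accumulate contrast faster than agent can accumulate nonspecifically at depth.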

  2. Simulation of Specular Surface Imaging Based on Computer Graphics: Application on a Vision Inspection System

    Directory of Open Access Journals (Sweden)

    Seulin Ralph

    2002-01-01

    Full Text Available This work aims at detecting surface defects on reflective industrial parts. A machine vision system, performing the detection of geometric-aspect surface defects, is completely described. Defects are revealed by a dedicated lighting device, which has been carefully designed to ensure the imaging of defects. The lighting system greatly simplifies the image processing for defect segmentation, making real-time inspection of reflective products possible. To aid in the design of imaging conditions, a complete simulation is proposed. The simulation, based on computer graphics, enables the rendering of realistic images. Simulation provides a far more efficient way to perform tests than numerous manual experiments.
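
    The geometric core of rendering specular surfaces is the mirror-reflection direction; a minimal helper (names are mine):

```python
import numpy as np

def reflect(incident, normal):
    """Reflect an incident direction about a surface normal:
    r = d - 2 (d . n) n, with n normalized. On a specular part, a pixel
    is bright only if this direction points at the light source."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = np.asarray(incident, dtype=float)
    return d - 2.0 * np.dot(d, n) * n
```

    A surface defect perturbs the local normal, so the reflected ray misses the light source and the defect shows up dark, which is what the simulated renderings help to verify before building the lighting rig.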

  3. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    Science.gov (United States)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit
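
    The autocorrelation check used to vet the time-averaged statistics can be reproduced with a standard FFT-based (Wiener-Khinchin) estimator:

```python
import numpy as np

def autocorrelation(x):
    """Biased autocorrelation estimate of a signal via FFT, normalized so
    acf[0] == 1. Zero-padding to length 2n avoids circular wrap-around."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    spec = np.fft.rfft(x, 2 * n)
    acf = np.fft.irfft(spec * np.conj(spec))[:n]
    return acf / acf[0]
```

    The lag at which the autocorrelation decays indicates how many independent samples the time series really contains, which is what determines whether the hybrid simulation's statistics are converged.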

  4. 3D element imaging using NSECT for the detection of renal cancer: a simulation study in MCNP

    Science.gov (United States)

    Viana, R. S.; Agasthya, G. A.; Yoriyaz, H.; Kapadia, A. J.

    2013-09-01

    This work describes a simulation study investigating the application of neutron stimulated emission computed tomography (NSECT) for noninvasive 3D imaging of renal cancer in vivo. Using MCNP5 simulations, we describe a method of diagnosing renal cancer in the body by mapping the 3D distribution of elements present in tumors using the NSECT technique. A human phantom containing the kidneys and other major organs was modeled in MCNP5. The element composition of each organ was based on values reported in literature. The two kidneys were modeled to contain elements reported in renal cell carcinoma (RCC) and healthy kidney tissue. Simulated NSECT scans were executed to determine the 3D element distribution of the phantom body. Elements specific to RCC and healthy kidney tissue were then analyzed to identify the locations of the diseased and healthy kidneys and generate tomographic images of the tumor. The extent of the RCC lesion inside the kidney was determined using 3D volume rendering. A similar procedure was used to generate images of each individual organ in the body. Six isotopes were studied in this work: (32)S, (12)C, (23)Na, (14)N, (31)P and (39)K. The results demonstrated that through a single NSECT scan performed in vivo, it is possible to identify the location of the kidneys and other organs within the body, determine the extent of the tumor within the organ, and to quantify the differences between cancer and healthy tissue-related isotopes with p ≤ 0.05. All of the images demonstrated appropriate concentration changes between the organs, with some discrepancy observed in (31)P, (39)K and (23)Na. The discrepancies were likely due to the low concentration of the elements in the tissue that were below the current detection sensitivity of the NSECT technique.
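
    Classifying a voxel from its reconstructed element concentrations could look like the following sketch; the concentration values are invented for illustration, not the literature values used in the study:

```python
import numpy as np

# Hypothetical element concentrations (arbitrary units), illustration only.
HEALTHY = {"32S": 1.0, "12C": 10.0, "23Na": 0.8, "31P": 1.2}
RCC     = {"32S": 1.4, "12C":  9.0, "23Na": 1.1, "31P": 1.6}

def classify_voxel(measured):
    """Assign a voxel to whichever reference composition it matches more
    closely, by relative root-mean-square deviation over the elements."""
    def rms(ref):
        return np.sqrt(np.mean([((measured[e] - v) / v) ** 2
                                for e, v in ref.items()]))
    return "RCC" if rms(RCC) < rms(HEALTHY) else "healthy"
```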

  6. Fast Monte Carlo-simulator with full collimator and detector response modelling for SPECT

    International Nuclear Information System (INIS)

    Sohlberg, A.O.; Kajaste, M.T.

    2012-01-01

    Monte Carlo (MC) simulations have proved to be a valuable tool in studying single photon emission computed tomography (SPECT) reconstruction algorithms. Despite their popularity, the use of MC simulations is still often limited by their large computational demand. This is especially true in situations where full collimator and detector modelling with septal penetration, scatter and X-ray fluorescence needs to be included. This paper presents a rapid and simple MC simulator, which can effectively reduce the computation times. The simulator was built on the convolution-based forced detection principle, which can markedly lower the number of simulated photons. Full collimator and detector response look-up tables are pre-simulated and then used in the actual MC simulations to model the system response. The developed simulator was validated by comparing it against 123I point source measurements made with a clinical gamma camera system and against 99mTc software phantom simulations made with the SIMIND MC package. The results showed good agreement between the new simulator, the measurements and the SIMIND package. The new simulator provided near noise-free projection data in approximately 1.5 min per projection with 99mTc, which was less than one-tenth of SIMIND's time. The developed MC simulator can markedly decrease the simulation time without sacrificing image quality. (author)
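The convolution-based forced detection idea, applying a pre-simulated response look-up table by convolution instead of tracking every photon through the collimator, can be sketched as follows. The distance-dependent Gaussian kernels here stand in for the full Monte Carlo collimator/detector responses and are purely illustrative.

```python
import numpy as np

def gaussian_kernel(sigma, size=21):
    """Normalized 1D Gaussian kernel (illustrative detector response)."""
    x = np.arange(size) - size // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

# Pre-simulated response look-up table: one PSF per source-to-collimator
# distance. In a real simulator these kernels would come from full MC runs
# including septal penetration and scatter; here they are hypothetical.
distances = np.array([5.0, 10.0, 15.0, 20.0])          # cm
lut = {d: gaussian_kernel(sigma=0.5 + 0.1 * d) for d in distances}

def forced_detection_projection(activity_line, distance):
    """Convolve an ideal 1D projection with the pre-simulated response."""
    d = distances[np.argmin(np.abs(distances - distance))]  # nearest entry
    return np.convolve(activity_line, lut[d], mode="same")

activity = np.zeros(64)
activity[32] = 1.0                      # point source on the projection line
proj_near = forced_detection_projection(activity, 5.0)
proj_far = forced_detection_projection(activity, 20.0)
```

The far-source projection is wider and lower-peaked, mimicking distance-dependent collimator blur, while total counts are preserved by the normalized kernels.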

  7. Image sequence analysis in nuclear medicine: (1) Parametric imaging using statistical modelling

    International Nuclear Information System (INIS)

    Liehn, J.C.; Hannequin, P.; Valeyre, J.

    1989-01-01

    This is a review of parametric imaging methods in Nuclear Medicine. A Parametric Image is an image in which each pixel value is a function of the values of the same pixel across an image sequence. The Local Model Method fits each pixel's time-activity curve with a model whose parameter values form the Parametric Images. The Global Model Method models the changes between two images and is applied to image comparison. For both methods, the different models, the identification criterion, the optimization methods and the statistical properties of the images are discussed. The analysis of one or more Parametric Images is performed using 1D or 2D histograms. Statistically significant Parametric Images (images of significant variances, amplitudes and differences) are also proposed.
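The Local Model Method amounts to fitting each pixel's time-activity curve and mapping the fitted parameters back into images. A minimal noise-free sketch with a hypothetical mono-exponential model (the model choice and names are illustrative, not from the review):

```python
import numpy as np

# Local Model Method sketch: fit each pixel's time-activity curve with
# A*exp(-k*t); the fitted parameters form the Parametric Images.
t = np.linspace(0.0, 10.0, 20)                    # frame times (min)
ny, nx = 4, 4
true_k = np.linspace(0.1, 0.5, ny * nx).reshape(ny, nx)
sequence = np.exp(-true_k[..., None] * t)         # noise-free image sequence

# Linearize: log(y) = log(A) - k*t, then solve all pixels by least squares
log_y = np.log(sequence)
design = np.vstack([np.ones_like(t), -t]).T       # columns: log(A), k
coef, *_ = np.linalg.lstsq(design, log_y.reshape(-1, len(t)).T, rcond=None)
k_image = coef[1].reshape(ny, nx)                 # Parametric Image of k
```

With noisy data the per-pixel fits would feed the 1D/2D histogram analysis mentioned above.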

  8. Analysis and modeling of electronic portal imaging exit dose measurements

    International Nuclear Information System (INIS)

    Pistorius, S.; Yeboah, C.

    1995-01-01

    In spite of the technical advances in treatment planning and delivery in recent years, it is still unclear whether the recommended accuracy in dose delivery is being achieved. Electronic portal imaging devices, now in routine use in many centres, have the potential for quantitative dosimetry. As part of a project which aims to develop an expert-system based On-line Dosimetric Verification (ODV) system, we have investigated and modelled the dose deposited in the detector of a video-based portal imaging system. Monte Carlo techniques were used to simulate gamma and x-ray beams in homogeneous slab phantom geometries. Exit doses and energy spectra were scored as a function of (i) slab thickness, (ii) field size and (iii) the air gap between the exit surface and the detector. The results confirm that in order to accurately calculate the dose in the high atomic number Gd2O2S detector for a range of air gaps, field sizes and slab thicknesses, both the magnitude of the primary and scattered components and their effective energy need to be considered. An analytic, convolution-based model which attempts to do this is proposed. The results of the simulation and the ability of the model to represent these data will be presented and discussed. This model is used to show that, after training, a back-propagation feed-forward cascade correlation neural network has the ability to identify and recognise the cause of significant dosimetric errors.

  9. RADIATIVE MODELS OF SGR A* FROM GRMHD SIMULATIONS

    International Nuclear Information System (INIS)

    Moscibrodzka, Monika; Gammie, Charles F.; Dolence, Joshua C.; Shiokawa, Hotaka; Leung, Po Kin

    2009-01-01

    Using flow models based on axisymmetric general relativistic magnetohydrodynamics simulations, we construct radiative models for Sgr A*. Spectral energy distributions (SEDs) that include the effects of thermal synchrotron emission and absorption, and Compton scattering, are calculated using a Monte Carlo technique. Images are calculated using a ray-tracing scheme. All models are scaled so that the 230 GHz flux density is 3.4 Jy. The key model parameters are the dimensionless black hole spin a*, the inclination i, and the ion-to-electron temperature ratio Ti/Te. We find that (1) models with Ti/Te = 1 are inconsistent with the observed submillimeter spectral slope; (2) the X-ray flux is a strongly increasing function of a*; (3) the X-ray flux is a strongly increasing function of i; (4) 230 GHz image size is a complicated function of i, a*, and Ti/Te, but the Ti/Te = 10 models are generally large and at most marginally consistent with the 230 GHz very long baseline interferometry (VLBI) data; (5) for models with Ti/Te = 10 and i = 85 deg. the event horizon is cloaked behind a synchrotron photosphere at 230 GHz and will not be seen by VLBI, but these models overproduce near-infrared and X-ray flux; (6) in all models whose SEDs are consistent with observations, the event horizon is uncloaked at 230 GHz; (7) the models that are most consistent with the observations have a* ∼ 0.9. We finish with a discussion of the limitations of our model and prospects for future improvements.

  10. [Simulation and data analysis of stereological modeling based on virtual slices].

    Science.gov (United States)

    Wang, Hao; Shen, Hong; Bai, Xiao-yan

    2008-05-01

    To establish a computer-assisted stereological model for simulating the process of slice sectioning and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically and implemented as Win32 software based on MFC, using Microsoft Visual Studio as the IDE, to simulate the infinite process of sectioning and to analyze the data derived from the model. The linearity of the fitting of the model was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high pass rates (>94.5% and 92%) in homogeneity and independence tests. The density, shape and size data of the sections were tested to conform to a normal distribution. The output of the model and that from the image analysis system showed statistical correlation and consistency. The algorithm we described can be used for evaluating the stereological parameters of the structure of tissue slices.
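The core of such a virtual-slice model, randomly placed particles cut by a section plane, can be sketched briefly. The sphere radius, particle count, and the classical N_A = N_V · 2R check for spheres are illustrative choices, not the paper's algorithm; edge effects at the cube faces are ignored.

```python
import numpy as np

# Spheres of known radius R distributed randomly in a unit cube are cut by
# the plane z = z0; each hit sphere leaves a circular profile of radius
# r = sqrt(R^2 - d^2), where d is the center's distance to the plane.
rng = np.random.default_rng(42)
R = 0.05
centers = rng.uniform(0.0, 1.0, size=(5000, 3))
z0 = 0.5                                           # section plane position
d = np.abs(centers[:, 2] - z0)
hit = d < R
profile_radii = np.sqrt(R**2 - d[hit] ** 2)

# Classical stereology for spheres: profiles per unit area N_A relate to
# particles per unit volume N_V via N_A = N_V * 2R.
N_A = hit.sum() / 1.0                              # unit-area section
N_V = len(centers) / 1.0                           # unit-volume cube
expected_N_A = N_V * 2 * R
```

Repeating the sectioning many times (the "infinite process" above) yields the distributions of profile density, shape and size that the paper tests for normality.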

  11. Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: Creation of an oncology database

    Energy Technology Data Exchange (ETDEWEB)

    Papadimitroulas, Panagiotis; Efthimiou, Nikos; Nikiforidis, George C.; Kagadis, George C. [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 265 04 (Greece); Loudos, George [Department of Biomedical Engineering, Technological Educational Institute of Athens, Ag. Spyridonos Street, Egaleo GR 122 10, Athens (Greece); Le Maitre, Amandine; Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris [Medical Information Processing Laboratory (LaTIM), National Institute of Health and Medical Research (INSERM), 29609 Brest (France)

    2013-11-15

    Purpose: The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating tumor heterogeneous activity distributions. The reconstructed patient images include noise from the acquisition process and the imaging system's performance restrictions, and have limited spatial resolution. For those reasons, the measured intensity cannot be simply introduced in GATE simulations to reproduce clinical data. The heterogeneity distribution within tumors was investigated by applying partial volume correction (PVC) algorithms. The purpose of the present study was to create a simulated oncology database based on clinical data with realistic intratumor uptake heterogeneity properties. Methods: PET/CT data of seven oncology patients were used in order to create a realistic tumor database investigating the heterogeneity activity distribution of the simulated tumors. The anthropomorphic models (NURBS-based cardiac torso and Zubal phantoms) were adapted to the CT data of each patient, and the activity distribution was extracted from the respective PET data. The patient-specific models were simulated with the Monte Carlo Geant4 application for tomography emission (GATE) at three different levels for each case: (a) using homogeneous activity within the tumor, (b) using heterogeneous activity distribution in every voxel within the tumor as it was extracted from the PET image, and (c) using heterogeneous activity distribution corresponding to the clinical image following PVC. The three different types of simulated data in each case were reconstructed with two iterations and filtered with a 3D Gaussian postfilter, in order to simulate the intratumor heterogeneous uptake. Heterogeneity in all generated images was quantified using textural-feature-derived parameters in 3D according to the ground truth of the simulation, and compared to clinical measurements. Finally, profiles were plotted in central slices of the tumors, across lines

  12. Development and validation of a combined phased acoustical radiosity and image source model for predicting sound fields in rooms

    DEFF Research Database (Denmark)

    Marbjerg, Gerd Høy; Brunskog, Jonas; Jeong, Cheol-Ho

    2015-01-01

    A model, combining acoustical radiosity and the image source method, including phase shifts on reflection, has been developed. The model is denoted Phased Acoustical Radiosity and Image Source Method (PARISM), and it has been developed in order to be able to model both specular and diffuse...... radiosity by regarding the model as being stochastic. Three methods of implementation are proposed and investigated, and finally, recommendations are made for their use. Validation of the image source method is done by comparison with finite element simulations of a rectangular room with a porous absorber...
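For readers unfamiliar with the image source half of PARISM, a first-order image source sketch for a rectangular room is shown below. The geometry, the single real reflection coefficient, and the angle-independent reflection are illustrative simplifications; PARISM itself additionally includes phase shifts on reflection and the acoustical radiosity component.

```python
import numpy as np

# First-order image source method: each wall mirrors the source, and the
# receiver collects delayed, distance-attenuated contributions.
c = 343.0                                       # speed of sound (m/s)
room = np.array([5.0, 4.0, 3.0])                # room dimensions (m)
src = np.array([1.0, 1.5, 1.2])                 # source position
rec = np.array([3.5, 2.0, 1.5])                 # receiver position
refl = 0.8                                      # illustrative wall reflection

sources = [(src, 1.0)]                          # direct sound
for axis in range(3):
    for wall in (0.0, room[axis]):
        img = src.copy()
        img[axis] = 2 * wall - img[axis]        # mirror across the wall
        sources.append((img, refl))

# (arrival time, amplitude ~ refl / distance) for each contribution
arrivals = [(np.linalg.norm(rec - p) / c, a / np.linalg.norm(rec - p))
            for p, a in sources]
direct_delay = min(t for t, _ in arrivals)
```

Higher reflection orders mirror the image sources again; phased variants multiply each contribution by a complex, frequency-dependent reflection factor instead of the real coefficient used here.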

  13. In silico imaging: Definition, possibilities and challenges

    International Nuclear Information System (INIS)

    Badano, Aldo

    2011-01-01

    The capability to simulate the imaging performance of new detector concepts is crucial to develop the next generation of medical imaging systems. Proper modeling tools allow for optimal designs that maximize image quality while minimizing patient and occupational radiation doses. In this context, in silico imaging has become an emerging field of imaging research. This paper reviews current progress and challenges in the simulation of imaging systems with a focus on Monte Carlo approaches to X-ray detector modeling, acceleration approaches, and validation strategies.

  14. Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image

    Science.gov (United States)

    He, Xingwu; You, Junchen

    2018-03-01

    Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to examine the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge, compared with the traditional image restoration method. Even with an inaccurate small initial PSF, the results show that blind deconvolution improves the overall image quality of ultrasound images, with better SNR and image resolution; the time consumption of these methods shows no significant increase on a GPU platform.
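Blind deconvolution in this spirit can be sketched by alternating Richardson-Lucy (RL) updates of the image and of the PSF, starting from an inaccurate small initial PSF. The phantom, kernels, and iteration counts below are illustrative, not the paper's setup.

```python
import numpy as np

def cconv(a, b):
    """Circular convolution via FFT (arrays share the image shape)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def ccorr(a, b):
    """Circular correlation (convolution with the flipped kernel)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

def rl_update(est, kernel, observed, iters=5):
    """Multiplicative Richardson-Lucy updates of `est`, `kernel` fixed."""
    for _ in range(iters):
        ratio = observed / np.maximum(cconv(est, kernel), 1e-12)
        est = est * ccorr(ratio, kernel)
    return est

true_img = np.zeros((32, 32))
true_img[12:20, 12:20] = 1.0                   # simple bright inclusion
psf0 = np.zeros((32, 32))
psf0[:7, :7] = np.outer(np.hanning(7), np.hanning(7))
true_psf = np.roll(psf0 / psf0.sum(), (-3, -3), axis=(0, 1))
blurred = cconv(true_img, true_psf)

img = np.full((32, 32), blurred.mean())        # flat initial image
psf = np.zeros((32, 32))
psf[:5, :5] = 1.0 / 25.0                       # inaccurate small initial PSF
psf = np.roll(psf, (-2, -2), axis=(0, 1))

fit_before = np.abs(cconv(img, psf) - blurred).mean()
for _ in range(10):
    img = rl_update(img, psf, blurred)         # image step, PSF fixed
    psf = rl_update(psf, img, blurred)         # PSF step, image fixed
    psf = np.maximum(psf, 0.0)
    psf /= psf.sum()
fit_after = np.abs(cconv(img, psf) - blurred).mean()
```

The data-consistency error (how well the estimated image reblurred by the estimated PSF reproduces the observation) drops over the alternating iterations, which is the sense in which the method works without precise prior knowledge of the PSF.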

  15. Image reconstruction method for electrical capacitance tomography based on the combined series and parallel normalization model

    International Nuclear Information System (INIS)

    Dong, Xiangyuan; Guo, Shuqing

    2008-01-01

    In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on the combined series and parallel model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem. Also, the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that it can produce higher quality images when compared to the algorithms based on the parallel or series models for the cases tested in this paper. It provides a new algorithm for ECT applications.
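The series/parallel normalization combination can be written down compactly. The fixed combining coefficient below is illustrative; in the paper the adaptive coefficient is deduced by numerical optimization.

```python
import numpy as np

# Hedged sketch of capacitance normalization in ECT. `c_low`/`c_high` are
# the calibration capacitances for the empty (low-permittivity) and full
# (high-permittivity) sensor; `alpha` blends the two normalization models.
def normalize_parallel(c, c_low, c_high):
    return (c - c_low) / (c_high - c_low)

def normalize_series(c, c_low, c_high):
    return (1.0 / c - 1.0 / c_low) / (1.0 / c_high - 1.0 / c_low)

def normalize_combined(c, c_low, c_high, alpha):
    return (alpha * normalize_parallel(c, c_low, c_high)
            + (1.0 - alpha) * normalize_series(c, c_low, c_high))

c_low, c_high = 1.0, 2.0                        # illustrative calibration
c = np.array([1.0, 1.5, 2.0])                   # measured capacitances
lam = normalize_combined(c, c_low, c_high, alpha=0.5)
```

Both endpoint measurements map to 0 and 1 for any alpha; only the interior of the normalization curve changes, which is what the adaptive coefficient tunes.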

  16. An Updated Geophysical Model for AMSR-E and SSMIS Brightness Temperature Simulations over Oceans

    Directory of Open Access Journals (Sweden)

    Elizaveta Zabolotskikh

    2014-03-01

    Full Text Available In this study, we considered the geophysical model for microwave brightness temperature (BT) simulation for the Atmosphere-Ocean System under non-precipitating conditions. The model is presented as a combination of atmospheric absorption and ocean emission models. We validated this model for two satellite instruments: the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) onboard the Aqua satellite, and the Special Sensor Microwave Imager/Sounder (SSMIS) onboard the F16 satellite of the Defense Meteorological Satellite Program (DMSP) series. We compared simulated BT values with satellite BT measurements for different combinations of various water vapor and oxygen absorption models and wind-induced ocean emission models. A dataset of clear-sky atmospheric and oceanic parameters, collocated in time and space with the satellite measurements, was used for the comparison. We found the best model combination, providing the smallest root mean square error between calculations and measurements. A single combination of models ensured the best results for all considered radiometric channels. We also obtained adjustments to the simulated BT values, as averaged differences between the model simulations and satellite measurements. These adjustments can be used in any research based on modeling data for removing model/calibration inconsistencies. We demonstrated the application of the model by developing a new algorithm for sea surface wind speed retrieval from AMSR-E data.

  17. MIDA: A Multimodal Imaging-Based Detailed Anatomical Model of the Human Head and Neck.

    Directory of Open Access Journals (Sweden)

    Maria Ida Iacono

    Full Text Available Computational modeling and simulations are increasingly being used to complement experimental testing for analysis of safety and efficacy of medical devices. Multiple voxel- and surface-based whole- and partial-body models have been proposed in the literature, typically with spatial resolution in the range of 1-2 mm and with 10-50 different tissue types resolved. We have developed a multimodal imaging-based detailed anatomical model of the human head and neck, named "MIDA". The model was obtained by integrating three different magnetic resonance imaging (MRI) modalities, the parameters of which were tailored to enhance the signals of specific tissues: (i) structural T1- and T2-weighted MRIs, including a specific heavily T2-weighted MRI slab with high nerve contrast optimized to enhance the structures of the ear and eye; (ii) magnetic resonance angiography (MRA) data to image the vasculature; and (iii) diffusion tensor imaging (DTI) to obtain information on anisotropy and fiber orientation. The unique multimodal high-resolution approach allowed resolving 153 structures, including several distinct muscles, bones and skull layers, arteries and veins, and nerves, as well as salivary glands. The model also offers a detailed characterization of eyes, ears, and deep brain structures. A special automatic atlas-based segmentation procedure was adopted to include a detailed map of the nuclei of the thalamus and midbrain into the head model. The suitability of the model for simulations involving different numerical methods, discretization approaches, as well as DTI-based tensorial electrical conductivity, was examined in a case study, in which the electric field was generated by transcranial alternating current stimulation. The voxel- and the surface-based versions of the models are freely available to the scientific community.

  18. 2D imaging simulations of a small animal PET scanner with DOI measurement. jPET-RD

    International Nuclear Information System (INIS)

    Yamaya, Taiga; Hagiwara, Naoki

    2005-01-01

    We present a preliminary study on the design of a high sensitivity small animal depth of interaction (DOI)-PET scanner: jPET-RD (for Rodents with DOI detectors), which will contribute to molecular imaging. The 4-layer DOI block detector for the jPET-RD, which consists of scintillation crystals (1.4 mm x 1.4 mm x 4.5 mm) and a flat panel position-sensitive photomultiplier tube (52 mm x 52 mm), was previously proposed. In this paper, we investigate the imaging performance of the jPET-RD through numerical simulations. The scanner has a hexagonal geometry with a small diameter and a large axial aperture; DOI information is therefore expected to improve resolution uniformity in the whole field of view (FOV). We simulate the scanner for various numbers of DOI channels and crystal lengths. Simulated data are reconstructed using maximum likelihood expectation maximization with accurate system modeling. The trade-off results between background noise and spatial resolution show that shortening the crystal length alone does not improve the trade-off at all, and that 4-layer DOI information improves the uniformity of spatial resolution in the whole FOV. Excellent performance of the jPET-RD can be expected based on the numerical simulation results. (author)

  19. Comparison of Large Eddy Simulations and κ-ε Modelling of Fluid Velocity and Tracer Concentration in Impinging Jet Mixers

    Directory of Open Access Journals (Sweden)

    Wojtas Krzysztof

    2015-06-01

    Full Text Available Simulations of turbulent mixing in two types of jet mixers were carried out using two CFD models, large eddy simulation and κ-ε model. Modelling approaches were compared with experimental data obtained by the application of particle image velocimetry and planar laser-induced fluorescence methods. Measured local microstructures of fluid velocity and inert tracer concentration can be used for direct validation of numerical simulations. Presented results show that for higher tested values of jet Reynolds number both models are in good agreement with the experiments. Differences between models were observed for lower Reynolds numbers when the effects of large scale inhomogeneity are important.

  20. AEGIS geologic simulation model

    International Nuclear Information System (INIS)

    Foley, M.G.

    1982-01-01

    The Geologic Simulation Model (GSM) is used by the AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) program at the Pacific Northwest Laboratory to simulate the dynamic geology and hydrology of a geologic nuclear waste repository site over a million-year period following repository closure. The GSM helps to organize geologic/hydrologic data; to focus attention on active natural processes by requiring their simulation; and, through interactive simulation and calibration, to reduce subjective evaluations of the geologic system. During each computer run, the GSM produces a million-year geologic history that is possible for the region and the repository site. In addition, the GSM records in permanent history files everything that occurred during that time span. Statistical analyses of data in the history files of several hundred simulations are used to classify typical evolutionary paths, to establish the probabilities associated with deviations from the typical paths, and to determine which types of perturbations of the geologic/hydrologic system, if any, are most likely to occur. These simulations will be evaluated by geologists familiar with the repository region to determine validity of the results. Perturbed systems that are determined to be the most realistic, within whatever probability limits are established, will be used for the analyses that involve radionuclide transport and dose models. The GSM is designed to be continuously refined and updated. Simulation models are site specific, and, although the submodels may have limited general applicability, the input data requirements necessitate detailed characterization of each site before application.

  1. Modelling of scintillator based flat-panel detectors with Monte-Carlo simulations

    International Nuclear Information System (INIS)

    Reims, N; Sukowski, F; Uhlmann, N

    2011-01-01

    Scintillator based flat panel detectors are state of the art in the field of industrial X-ray imaging applications. Choosing the proper system and setup parameters for the vast range of different applications can be a time consuming task, especially when developing new detector systems. Since the system behaviour cannot always be foreseen easily, Monte-Carlo (MC) simulations are key to gaining further knowledge of system components and their behaviour under different imaging conditions. In this work we used two Monte-Carlo based models to examine an indirect converting flat panel detector, specifically the Hamamatsu C9312SK. We focused on the signal generation in the scintillation layer and its influence on the spatial resolution of the whole system. The models differ significantly in their level of complexity. The first model gives a global description of the detector based on different parameters characterizing the spatial resolution. With relatively little effort, a simulation model can be developed which matches the real detector in terms of signal transfer. The second model allows a more detailed insight into the system. It is based on the well established cascade theory, i.e. describing the detector as a cascade of elemental gain and scattering stages, which represent the built-in components and their signal transfer behaviour. In comparison to the first model, the influence of single components, especially the important light spread behaviour in the scintillator, can be analysed in a more differentiated way. Although the implementation of the second model is more time consuming, both models have in common that a relatively small number of system manufacturer parameters are needed. The results of both models were in good agreement with the measured parameters of the real system.
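The cascade description used by the second model propagates the mean signal and noise power spectrum (NPS) through elemental gain and stochastic blur stages. A sketch with hypothetical stage parameters (not those of the C9312SK):

```python
import numpy as np

def gain_stage(q_mean, nps, g_mean, g_var):
    """Propagate mean signal and NPS through a quantum gain stage."""
    return g_mean * q_mean, (g_mean ** 2) * nps + g_var * q_mean

def blur_stage(q_mean, nps, mtf):
    """Propagate through a stochastic blur (scatter) stage with given MTF."""
    return q_mean, nps * mtf ** 2 + q_mean * (1.0 - mtf ** 2)

f = np.linspace(0.0, 5.0, 50)                  # spatial frequency (cycles/mm)
q0 = 1000.0                                    # absorbed X-ray quanta / area
nps = np.full_like(f, q0)                      # Poisson input: NPS = q0

q, nps = gain_stage(q0, nps, g_mean=500.0, g_var=2.5e4)   # light generation
mtf = np.exp(-0.5 * (f / 1.5) ** 2)                        # scintillator blur
q, nps = blur_stage(q, nps, mtf)

# Detective quantum efficiency: output SNR^2 over input SNR^2
dqe = (q * mtf) ** 2 / (nps * q0)
```

Chaining more stages (optical coupling, photodiode sampling) follows the same two transfer rules, which is why only a handful of manufacturer parameters are needed.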

  2. Rapid dual-tracer PTSM+ATSM PET imaging of tumour blood flow and hypoxia: a simulation study

    International Nuclear Information System (INIS)

    Rust, T C; Kadrmas, D J

    2006-01-01

    Blood flow and hypoxia are interrelated aspects of physiology that affect cancer treatment and response. Cu-PTSM and Cu-ATSM are related PET tracers for blood flow and hypoxia, and the ability to rapidly image both tracers in a single scan would bring several advantages over conventional single-tracer techniques. Using dynamic imaging with staggered injections, overlapping signals for multiple PET tracers may be recovered utilizing information from kinetics and radioactive decay. In this work, rapid dual-tracer PTSM+ATSM PET was simulated and tested as a function of injection delay, order and relative dose for several copper isotopes, and the results were compared relative to separate single-tracer data. Time-activity curves representing a broad range of tumour blood flow and hypoxia levels were simulated, and parallel dual-tracer compartment modelling was used to recover the signals for each tracer. The main results were tested further using a torso phantom simulation of PET tumour imaging. Using scans as short as 30 minutes, the dual-tracer method provided measures of blood flow and hypoxia similar to single-tracer imaging. The best performance was obtained by injecting PTSM first and using a somewhat higher dose for ATSM. Comparable results for different copper isotopes suggest that tracer kinetics with staggered injections play a more important role than radioactive decay in the signal separation process. Rapid PTSM+ATSM PET has excellent potential for characterizing both tumour blood flow and hypoxia in a single, fast scan, provided that technological hurdles related to algorithm development and routine use can be overcome
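The signal-separation idea can be illustrated in a stripped-down form: if each tracer's kinetic shape is known, staggered injections make the summed curve a linear combination that can be unmixed. Real dual-tracer PET uses full parallel compartment models plus radioactive decay; the mono-exponential shapes and amplitudes here are hypothetical.

```python
import numpy as np

# Two staggered injections; each tracer contributes a known kinetic shape
# scaled by an unknown amplitude, recovered by linear least squares.
t = np.linspace(0.0, 30.0, 120)                     # scan time (min)
delay = 10.0                                        # injection stagger

def tracer_curve(t, t0, k):
    c = np.exp(-k * (t - t0))
    c[t < t0] = 0.0                                 # nothing before injection
    return c

basis_ptsm = tracer_curve(t, 0.0, 0.05)             # PTSM injected first
basis_atsm = tracer_curve(t, delay, 0.10)           # ATSM injected second
true_amps = np.array([3.0, 5.0])
total = true_amps[0] * basis_ptsm + true_amps[1] * basis_atsm

A = np.vstack([basis_ptsm, basis_atsm]).T
recovered, *_ = np.linalg.lstsq(A, total, rcond=None)
```

In the noise-free linear case the separation is exact; the paper's contribution is showing that the full nonlinear, noisy problem remains well conditioned for suitable injection delays, orders and dose ratios.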

  3. A comparative study on generating simulated Landsat NDVI images using data fusion and regression method-the case of the Korean Peninsula.

    Science.gov (United States)

    Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee

    2017-07-01

    Landsat optical images have enough spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor often degrade image quality, which limits the availability of usable images for time-series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and the multilinear regression analysis method have been tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that the images were available within 1 month before and after the target date. The STARFM method gives good results when the input image date is close to the target date. Careful regional and seasonal consideration is required in selecting input images. During the summer season, due to clouds, it is very difficult to get images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not so close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
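The weighted average method reduces to interpolating the nearest earlier and later clear NDVI images with weights inversely proportional to their temporal distance from the target date, e.g.:

```python
import numpy as np

# Simulated NDVI for a target date as a temporal-distance-weighted average
# of the nearest clear images before and after it. Values are illustrative.
def weighted_average_ndvi(img_before, day_before, img_after, day_after,
                          target_day):
    span = day_after - day_before
    w_before = (day_after - target_day) / span     # closer image weighs more
    w_after = (target_day - day_before) / span
    return w_before * img_before + w_after * img_after

img_before = np.full((3, 3), 0.40)                 # clear NDVI, 20 days early
img_after = np.full((3, 3), 0.60)                  # clear NDVI, 10 days late
simulated = weighted_average_ndvi(img_before, -20, img_after, 10, 0)
```

Here the later image, being temporally closer, receives twice the weight of the earlier one.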

  4. Simulation of Profiles Data For Computed Tomography Using Object Images

    International Nuclear Information System (INIS)

    Srisatit, Somyot

    2007-08-01

    Full text: A scanning system is necessary to obtain the profile data for computed tomographic images. Good profile data give good contrast and resolution, but the scanning system requires high-efficiency, high-priced radiation equipment. Therefore, simulated profile data can be used for demonstration purposes to obtain CT image quality as good as that of the real data.

  5. Quantitative comparison of hemodynamics in simulated and 3D angiography models of cerebral aneurysms by use of computational fluid dynamics.

    Science.gov (United States)

    Saho, Tatsunori; Onishi, Hideo

    2015-07-01

    In this study, we evaluated hemodynamics using simulated models and determined how cerebral aneurysms develop in simulated and patient-specific models based on medical images. Computational fluid dynamics (CFD) was analyzed by use of OpenFOAM software. Flow velocity, stream line, and wall shear stress (WSS) were evaluated in a simulated model aneurysm with known geometry and in a three-dimensional angiographic model. The ratio of WSS at the aneurysm compared with that at the basilar artery was 1:10 in simulated model aneurysms with a diameter of 10 mm and 1:18 in the angiographic model, indicating similar tendencies. Vortex flow occurred in both model aneurysms, and the WSS decreased in larger model aneurysms. The angiographic model provided accurate CFD information, and the tendencies of simulated and angiographic models were similar. These findings indicate that hemodynamic effects are involved in the development of aneurysms.
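As a reminder of the quantity being compared, wall shear stress is the fluid viscosity times the wall-normal velocity gradient evaluated at the wall. A one-line estimate from a near-wall velocity profile (illustrative values, not the study's data):

```python
import numpy as np

# WSS = mu * du/dy at the wall, from the first off-wall velocity sample.
mu = 3.5e-3                                     # blood viscosity (Pa*s)
y = np.array([0.0, 1e-4, 2e-4])                 # distance from wall (m)
u = np.array([0.0, 0.02, 0.04])                 # tangential velocity (m/s)
wss = mu * (u[1] - u[0]) / (y[1] - y[0])        # Pa
```

CFD solvers evaluate this gradient from the resolved velocity field on every wall face, which is how the aneurysm-to-basilar-artery WSS ratios above are obtained.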

  6. FDTD Modeling of Nano- and Bio-Photonic Imaging

    DEFF Research Database (Denmark)

    Tanev, Stoyan; Tuchin, Valery; Pond, James

    2010-01-01

    to address newly emerging problems and not so much on its mathematical formulation. We will first discuss the application of a traditional formulation of the FDTD approach to the modeling of sub-wavelength photonics structures. Next, a modified total/scattered field FDTD approach will be applied...... to the modeling of biophotonics applications including Optical Phase Contrast Microscope (OPCM) imaging of cells containing gold nanoparticles (NPs) as well as its potential application as a modality for in vivo flow cytometry configurations.......In this paper we focus on the discussion of two recent unique applications of the Finite-Difference Time-Domain (FDTD) simulation method to the design and modeling of advanced nano- and bio-photonic problems. The approach that is adopted here focuses on the potential of the FDTD methodology...

  7. Simulating Deformations of MR Brain Images for Validation of Atlas-based Segmentation and Registration Algorithms

    OpenAIRE

    Xue, Zhong; Shen, Dinggang; Karacali, Bilge; Stern, Joshua; Rottenberg, David; Davatzikos, Christos

    2006-01-01

    Simulated deformations and images can act as the gold standard for evaluating various template-based image segmentation and registration algorithms. Traditional deformable simulation methods, such as the use of analytic deformation fields or the displacement of landmarks followed by some form of interpolation, are often unable to construct rich (complex) and/or realistic deformations of anatomical organs. This paper presents new methods aiming to automatically simulate realistic inter- and in...

  8. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-01-01

    Purpose: 3D motion models derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1), the tumor localization errors and intensity differences for the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2), the tumor localization error and intensity difference were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT
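The PCA motion-model step described above can be sketched as follows; the array layout, function names, and toy sinusoidal motion are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_pca_motion_model(dvfs, n_components=2):
    """Fit a PCA model to a stack of displacement vector fields (DVFs).

    dvfs: (n_phases, n_voxels) array, each row a flattened DVF from
    deformable registration of one phase to a reference phase.
    """
    mean = dvfs.mean(axis=0)
    centered = dvfs - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]           # principal motion modes
    coeffs = centered @ components.T         # per-phase coefficients
    return mean, components, coeffs

def reconstruct(mean, components, coeff):
    """Rebuild a DVF from (optimized) PCA coefficients."""
    return mean + coeff @ components

# Toy example: 5 phases of rank-1 sinusoidal motion on 100 voxels;
# a single component reproduces any phase exactly.
phases = np.linspace(0.0, 2.0 * np.pi, 5, endpoint=False)
x = np.linspace(0.0, 1.0, 100)
dvfs = np.array([np.sin(p) * x for p in phases])
mean, comps, coeffs = fit_pca_motion_model(dvfs, n_components=1)
approx = reconstruct(mean, comps, coeffs[1])
```

In the full method, the coefficients would instead be optimized so that projections of the reconstructed volume match the measured cone-beam projections.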

  9. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera that can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about plenoptic camera imaging, we present a wavefront analysis of the plenoptic camera from the standpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated using scalar diffraction theory, and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the difference between imaging analysis methods based on geometric optics and physical optics is also shown in the simulations. (paper)

  10. Medical images of patients in voxel structures in high resolution for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Boia, Leonardo S.; Menezes, Artur F.; Silva, Ademir X.

    2011-01-01

    This work presents a computational process for converting tomographic and MRI medical images of patients into voxel structures that serve as input files for a Monte Carlo simulation code used in radiotherapy treatment of tumors. The patient-specific problem scenario is simulated by this process, using the volume element (voxel) as the unit of computational tracking. The head voxel-structure geometry contains millions of voxels with volumetric dimensions of about 1 mm 3 , which supports a realistic simulation and reduces the need for digital image-processing techniques for adjustments and equalizations. With these additional data for the code, a more critical analysis can be developed to determine the volume of the tumor and the required protection. The patients' medical images were provided by Clinicas Oncologicas Integradas (COI/RJ), together with the previously performed planning. To execute this computational process, the SAPDI computational system is used for digital image processing and data optimization; the conversion program Scan2MCNP manipulates, processes, and converts the medical images into voxel-structure input files; and the graphic visualizer Moritz is used to verify the placement of the image geometry. (author)

  11. Geological terrain models

    Science.gov (United States)

    Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.

    1981-01-01

    The initial phase of a program to determine the best interpretation strategy and sensor configuration for a radar remote sensing system for geologic applications is discussed. In this phase, terrain modeling and radar image simulation were used to perform parametric sensitivity studies. A relatively simple computer-generated terrain model is presented, and the data base, backscatter file, and transfer function for digital image simulation are described. Sets of images are presented that simulate the results obtained with an X-band radar from an altitude of 800 km and at three different terrain-illumination angles. The simulations include power maps, slant-range images, ground-range images, and ground-range images with statistical noise incorporated. It is concluded that digital image simulation and computer modeling provide cost-effective methods for evaluating terrain variations and sensor parameter changes, for predicting results, and for defining optimum sensor parameters.

  12. Edge Detection on Images of Pseudoimpedance Section Supported by Context and Adaptive Transformation Model Images

    Directory of Open Access Journals (Sweden)

    Kawalec-Latała Ewa

    2014-03-01

    Most underground hydrocarbon storage facilities are located in depleted natural gas reservoirs. Seismic surveying is the most economical source of detailed subsurface information, and inversion of a seismic section to obtain a pseudoacoustic impedance section makes it possible to extract that detailed information. The parameters of the seismic wavelet and the presence of noise influence the resolution; low signal parameters, especially long signal duration, and noise decrease the pseudoimpedance resolution. Approximating the distribution of acoustic pseudoimpedance from measured or modelled seismic data leads to visualisations and images useful for identifying stratum homogeneity. In this paper, the minimum entropy deconvolution method is applied before inversion to improve the resolution of the geologic section image. The author proposes context and adaptive transformation of images and edge detection methods as a way to increase the effectiveness of correct interpretation of simulated images. Edge detection algorithms using the Sobel, Prewitt, Roberts, and Canny operators, as well as the Laplacian of Gaussian method, are emphasised. Wiener filtering of the transformed images improves the interpretation of rock-section structure by mapping the pseudoimpedance matrix onto the acoustic pseudoimpedance values corresponding to the selected geologic stratum. The goal of the study is to develop applications of image transformation tools for inhomogeneity detection in salt deposits.
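As an illustration of the edge-detection step, a minimal Sobel gradient-magnitude filter applied to a synthetic two-stratum "pseudoimpedance section" might look like this (pure NumPy sketch; the function name and toy section are assumptions, not the paper's code):

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map using the 3x3 Sobel operator."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# Synthetic two-stratum "pseudoimpedance section": a sharp horizontal
# boundary between rows 3 and 4 is the only edge the filter should find.
section = np.zeros((8, 8))
section[4:, :] = 10.0
edges = sobel_edges(section)
```

The Prewitt and Roberts operators differ only in the kernels used; Canny adds smoothing, non-maximum suppression, and hysteresis thresholding on top of such gradients.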

  13. Towards the development of a comprehensive model of an electronic portal imaging device using Geant4

    International Nuclear Information System (INIS)

    Blake, S.; Kuncic, Z.; Vial, P.; Holloway, L.

    2010-01-01

    Full text: This work represents the first stage of an ongoing study to investigate the physical processes occurring within electronic portal imaging devices (EPIDs), including the effects of optical scattering on image quality and dosimetry. The objective of this work was to develop an initial Monte Carlo model of a linear accelerator (linac) beam and an EPID. The ability to simulate the radiation transport of both high-energy and optical photons in a single Monte Carlo model was tested. Data from the phase-space database for external beam radiotherapy (International Atomic Energy Agency, IAEA) were used with the Geant4 toolkit to construct a model of a Siemens Primus linac 6 MV photon source. Dose profiles and percent depth dose (PDD) curves were extracted from simulations of dose in water and compared to experimental measurements. A preliminary EPID model was developed to incorporate both high-energy radiation and optical photon transport. Agreement in dose profiles inside the open beam was within 1.6%. Mean agreement in PDD curves beyond the depth of dose maximum was within 6.1% (local percent difference). The radiation transport of both high-energy and optical photons was simulated and visualized in the EPID model. Further work is required to experimentally validate the EPID model. The comparison of simulated dose in water with measurements indicates that the IAEA phase-space data may represent an accurate model of a linac source. We have demonstrated the feasibility of developing a comprehensive EPID model incorporating both high-energy and optical physics in Geant4. (author)

  14. Accurate study of FosPeg® distribution in a mouse model using fluorescence imaging technique and fluorescence white monte carlo simulations

    DEFF Research Database (Denmark)

    Xie, Haiyan; Liu, Haichun; Svenmarker, Pontus

    2010-01-01

    Fluorescence imaging is used for quantitative in vivo assessment of drug concentration. Light attenuation in tissue is compensated for through Monte Carlo simulations. The intrinsic fluorescence intensity, directly proportional to the drug concentration, could thereby be obtained.

  15. THE MARK I BUSINESS SYSTEM SIMULATION MODEL

    Science.gov (United States)

    This program investigated the use of a large-scale business simulation model as a vehicle for doing research in management controls. The major results of the program were the development of the Mark I business simulation model and the Simulation Package (SIMPAC). SIMPAC is a method and set of programs facilitating the construction of large simulation models. The object of this document is to describe the Mark I Corporation model, state why parts of the business were modeled as they were, and indicate the research applications of the model. (Author)

  16. Lunar photometric modelling with SMART-1/AMIE imaging data

    International Nuclear Information System (INIS)

    Wilkman, O.; Muinonen, K.; Videen, G.; Josset, J.-L.; Souchon, A.

    2014-01-01

    We investigate the light-scattering properties of the lunar mare areas. A large photometric dataset was extracted from images taken by the AMIE camera on board the SMART-1 spacecraft. Inter-particle shadowing effects in the regolith are modelled using ray-tracing simulations, and then a phase function is fit to the data using Bayesian techniques and Markov chain Monte Carlo. Additionally, the data are fit with phase functions computed from radiative-transfer coherent-backscatter (RT-CB) simulations. The results indicate that the lunar photometry, including both the opposition effect and azimuthal effects, can be explained well with a combination of inter-particle shadowing and coherent backscattering. Our results produce loose constraints on the mare physical properties. The RT-CB results indicate that the scattering volume element is optically thick. In both the Bayesian analysis and the RT-CB fit, models with lower packing density and/or higher surface roughness always produce better fits to the data than densely packed, smoother ones.

  17. New Parametric Imaging Algorithm for Quantification of Binding Parameter in non-reversible compartment model: MLAIR

    International Nuclear Information System (INIS)

    Kim, Su Jin; Lee, Jae Sung; Kim, Yu Kyeong; Lee, Dong Soo

    2007-01-01

    Parametric imaging allows analysis of the entire brain or body image. Graphical approaches are commonly employed to generate parametric images through linear or multilinear regression; however, such linear regression methods have limited accuracy due to bias at high noise levels. Several methods have been proposed to reduce bias in linear regression estimation, especially for reversible models. In this study, we focus on generating parametric images of the net accumulation rate (Ki), which is related to the binding parameter in brain receptor studies, in an irreversible compartment model using multiple linear analysis. The reliability of the newly developed multiple linear analysis method (MLAIR) was assessed through Monte Carlo simulation, and we applied it to a [11C]MeNTI PET study of the opioid receptor
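MLAIR's exact formulation is not given in this record; the closely related Patlak graphical analysis, which also estimates Ki for irreversible tracers by a linear fit, can be sketched as follows (function names and the synthetic input curve are hypothetical):

```python
import numpy as np

def cum_trapz(y, t):
    """Cumulative trapezoidal integral of y(t), starting at 0."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))

def patlak_ki(t, ct, cp):
    """Estimate Ki as the late-time slope of CT/Cp vs. integral(Cp)/Cp."""
    x = cum_trapz(cp, t) / cp
    y = ct / cp
    half = len(t) // 2                  # assume linearity at late times
    slope, _ = np.polyfit(x[half:], y[half:], 1)
    return slope

# Synthetic irreversible-tracer data built from the Patlak relation
# CT(t) = Ki * integral(Cp) + V * Cp, with Ki = 0.05 and V = 0.3 (hypothetical).
t = np.linspace(0.1, 60.0, 120)
cp = np.exp(-0.05 * t) + 0.1
ct = 0.05 * cum_trapz(cp, t) + 0.3 * cp
ki = patlak_ki(t, ct, cp)
```

Applied voxel-by-voxel, such a fit yields a Ki parametric image; MLAIR reformulates the estimation as a multiple linear model precisely to reduce the noise-induced bias this simple regression suffers from.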

  18. From 4D Medical Images (CT, MRI, and Ultrasound) to 4D Structured Mesh Models of the Left Ventricular Endocardium for Patient-Specific Simulations

    Directory of Open Access Journals (Sweden)

    Federico Canè

    2018-01-01

    With cardiovascular disease (CVD) remaining the primary cause of death worldwide, early detection of CVDs becomes essential. The intracardiac flow is an important component of ventricular function, motion kinetics, wash-out of the ventricular chambers, and ventricular energetics. Coupling Computational Fluid Dynamics (CFD) simulations with medical images can play a fundamental role in patient-specific diagnostic tools. From a technical perspective, CFD simulations with moving boundaries can easily lead to negative-volume errors and sudden failure of the simulation. The generation of high-quality 4D meshes (3D in space + time) with 1-to-1 vertex correspondence therefore becomes essential for performing a CFD simulation with moving boundaries. In this context, we developed a semiautomatic morphing tool able to create 4D high-quality structured meshes starting from a segmented 4D dataset. To prove its versatility and efficiency, the method was tested on three different 4D datasets (Ultrasound, MRI, and CT) by evaluating the quality and accuracy of the resulting 4D meshes. Furthermore, an estimation of some physiological quantities is accomplished for the 4D CT reconstruction. Future research will aim at extending the region of interest, further automating the meshing algorithm, and generating structured hexahedral mesh models for both the blood and myocardial volumes.

  19. Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model

    Science.gov (United States)

    Li, X. L.; Zhao, Q. H.; Li, Y.

    2017-09-01

    Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are initialized randomly on the image, and the image domain is divided into many sub-regions using the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the pixel intensity is assumed to follow a Gamma mixture model with parameters corresponding to the cluster to which the pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster, and the regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in that region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to the Voronoi sub-regions, and the regional objective function is established within the framework of fuzzy clustering. The optimal segmentation results are obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of the segmentation results on simulated and real SAR images.
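The regional dissimilarity measure described above (summed negative log Gamma likelihoods over a sub-region) can be sketched as follows; the function names and toy data are illustrative assumptions:

```python
import numpy as np
from math import lgamma

def gamma_neglog(x, shape, scale):
    """Elementwise -log pdf of a Gamma(shape, scale) distribution."""
    x = np.asarray(x, dtype=float)
    return -((shape - 1.0) * np.log(x) - x / scale
             - shape * np.log(scale) - lgamma(shape))

def region_dissimilarity(pixels, shape, scale):
    """Regional dissimilarity: sum of per-pixel -log Gamma likelihoods."""
    return gamma_neglog(pixels, shape, scale).sum()

# A bright homogeneous sub-region is far less dissimilar to a Gamma
# cluster whose mean (shape * scale) matches it than to a dark cluster.
rng = np.random.default_rng(0)
region = rng.gamma(shape=9.0, scale=10.0, size=500)   # mean ~ 90
d_match = region_dissimilarity(region, 9.0, 10.0)
d_dark = region_dissimilarity(region, 9.0, 1.0)       # mean ~ 9
```

In the full algorithm this measure enters the fuzzy-clustering objective, weighted by membership degrees and an MRF prior over the Voronoi sub-regions.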

  20. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors are also reviewed for completeness.
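A standard algorithm of the kind the report describes generates samples of a stationary Gaussian process by factoring its covariance matrix and coloring white noise with the factor; this sketch (squared-exponential kernel, names assumed) is one such example:

```python
import numpy as np

def sample_gaussian_process(x, corr_length, sigma, n_samples, seed=0):
    """Draw samples of a stationary Gaussian process at locations x.

    Standard recipe: build the covariance matrix from a kernel (here
    squared-exponential), Cholesky-factor it, and color white noise.
    """
    d = np.subtract.outer(x, x)
    cov = sigma**2 * np.exp(-0.5 * (d / corr_length) ** 2)
    # small diagonal jitter keeps the factorization numerically stable
    L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(x)))
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n_samples, len(x))) @ L.T

x = np.linspace(0.0, 1.0, 50)
samples = sample_gaussian_process(x, corr_length=0.2, sigma=1.0,
                                  n_samples=2000)
```

Each row is one independent realization of the process; such realizations can then feed a deterministic simulation code as random inputs or boundary conditions.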

  1. Elastic models application for thorax image registration

    International Nuclear Information System (INIS)

    Correa Prado, Lorena S; Diaz, E Andres Valdez; Romo, Raul

    2007-01-01

    This work consists of the implementation and evaluation of elastic alignment algorithms for biomedical images, taken at the thorax level and simulated with the 4D NCAT digital phantom. Radial Basis Function (RBF) spatial transformations, a kind of spline that allows carrying out not only global rigid deformations but also local elastic ones, were applied using a point-matching method. The functions applied were Thin Plate Spline (TPS), Multiquadric (MQ), Gaussian, and B-Spline, which were evaluated and compared by calculating the Target Registration Error and similarity measures between the registered images (the sum of squared intensity differences (SSD) and the correlation coefficient (CC)). To assess the user error incurred in the point-matching and segmentation tasks, two algorithms were also designed to calculate the Fiducial Localization Error. TPS and MQ demonstrated better performance than the others. It was shown that RBFs represent an adequate model for approximating the deformable behaviour of the thorax. Validation algorithms showed the user error was not significant
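A thin plate spline point-matching warp of the kind evaluated here can be sketched in pure NumPy; this is a generic 2D TPS interpolant (radial kernel U(r) = r^2 log r plus affine terms), not the authors' implementation, and the landmark coordinates are invented:

```python
import numpy as np

def _u(d):
    """TPS radial kernel U(r) = r^2 log r, with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(d > 0.0, d * d * np.log(d), 0.0)

def tps_fit(src, dst):
    """Solve for TPS parameters mapping src control points onto dst."""
    n = len(src)
    K = _u(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)   # n warping weights + 3 affine terms per axis

def tps_apply(params, src, pts):
    """Warp arbitrary points with the fitted TPS transformation."""
    U = _u(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ params[:len(src)] + P @ params[len(src):]

# Matched landmarks on a "thorax slice": an exact interpolant must map
# each source landmark exactly onto its matched target landmark.
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], dtype=float)
dst = src + np.array([[0.05, 0], [0, 0.1], [-0.05, 0],
                      [0, -0.1], [0.02, 0.03]])
params = tps_fit(src, dst)
warped = tps_apply(params, src, src)
```

The same `tps_apply` call evaluated on a full pixel grid produces the elastic deformation field used to resample the moving image.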

  2. Frequency-difference MIT imaging of cerebral haemorrhage with a hemispherical coil array: numerical modelling.

    Science.gov (United States)

    Zolgharni, M; Griffiths, H; Ledger, P D

    2010-08-01

    The feasibility of detecting a cerebral haemorrhage with a hemispherical MIT coil array consisting of 56 exciter/sensor coils of 10 mm radius and operating at 1 and 10 MHz was investigated. A finite difference method combined with an anatomically realistic head model comprising 12 tissue types was used to simulate the strokes. Frequency-difference images were reconstructed from the modelled data with different levels of added phase noise and two types of a priori boundary errors: a displacement of the head and a size scaling error. The results revealed that a noise level of 3 millidegrees (standard deviation) was adequate for obtaining good visualization of a peripheral stroke (volume approximately 49 ml). The simulations further showed that the displacement error had to be within 3-4 mm and the scaling error within 3-4% so as not to cause unacceptably large artefacts on the images.

  3. Dark Energy Studies with LSST Image Simulations, Final Report

    International Nuclear Information System (INIS)

    Peterson, John Russell

    2016-01-01

    This grant funded the development and dissemination of the Photon Simulator (PhoSim) for the purpose of studying dark energy at high precision with the upcoming Large Synoptic Survey Telescope (LSST) astronomical survey. The work was in collaboration with the LSST Dark Energy Science Collaboration (DESC). Several detailed physics improvements were made in the optics, atmosphere, and sensor, a number of validation studies were performed, and a significant number of usability features were implemented. Future work in DESC will use PhoSim as the image simulation tool for data challenges used by the analysis groups.

  4. Simulation Model of a Transient

    DEFF Research Database (Denmark)

    Jauch, Clemens; Sørensen, Poul; Bak-Jensen, Birgitte

    2005-01-01

    This paper describes the simulation model of a controller that enables an active-stall wind turbine to ride through transient faults. The simulated wind turbine is connected to a simple model of a power system. Certain fault scenarios are specified, and the turbine shall be able to sustain operation...

  5. An open, object-based modeling approach for simulating subsurface heterogeneity

    Science.gov (United States)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.

    2017-12-01

    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.
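The abstract notes that HYVR is written in Python; a heavily simplified object-based simulator in the same spirit, with ellipsoidal "lenses" overwriting a background facies (all names and parameter ranges invented for illustration), might look like:

```python
import numpy as np

def simulate_lenses(grid_shape, n_objects, seed=0):
    """Drop random ellipsoidal 'lenses' into a background facies (code 0).

    Later objects overwrite earlier ones, mimicking depositional order;
    real object-based simulators add trends, dips, anisotropy fields and
    nested architectural-element hierarchies on top of this idea.
    """
    nz, ny, nx = grid_shape
    z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx),
                          indexing="ij")
    facies = np.zeros(grid_shape, dtype=int)
    rng = np.random.default_rng(seed)
    for code in range(1, n_objects + 1):
        cz, cy, cx = rng.uniform(0, nz), rng.uniform(0, ny), rng.uniform(0, nx)
        az, ay, ax_ = rng.uniform(1, 3), rng.uniform(3, 8), rng.uniform(3, 8)
        inside = (((z - cz) / az) ** 2 + ((y - cy) / ay) ** 2
                  + ((x - cx) / ax_) ** 2) <= 1.0
        facies[inside] = code
    return facies

facies = simulate_lenses((10, 20, 20), n_objects=5)
```

Mapping each facies code to hydraulic-conductivity statistics would turn such a grid into a distributed-parameter field for flow and transport simulation, or into a training image for multiple-point geostatistics.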

  6. A prospective gating method to acquire a diverse set of free-breathing CT images for model-based 4DCT

    Science.gov (United States)

    O'Connell, D.; Ruan, D.; Thomas, D. H.; Dou, T. H.; Lewis, J. H.; Santhanam, A.; Lee, P.; Low, D. A.

    2018-02-01

    Breathing motion modeling requires observation of tissues at sufficiently distinct respiratory states for proper 4D characterization. This work proposes a method to improve sampling of the breathing cycle with limited imaging dose. We designed and tested a prospective free-breathing acquisition protocol with a simulation using datasets from five patients imaged with a model-based 4DCT technique. Each dataset contained 25 free-breathing fast helical CT scans with simultaneous breathing surrogate measurements. Tissue displacements were measured using deformable image registration. A correspondence model related tissue displacement to the surrogate. Model residual was computed by comparing predicted displacements to image registration results. To determine a stopping criteria for the prospective protocol, i.e. when the breathing cycle had been sufficiently sampled, subsets of N scans where 5  ⩽  N  ⩽  9 were used to fit reduced models for each patient. A previously published metric was employed to describe the phase coverage, or ‘spread’, of the respiratory trajectories of each subset. Minimum phase coverage necessary to achieve mean model residual within 0.5 mm of the full 25-scan model was determined and used as the stopping criteria. Using the patient breathing traces, a prospective acquisition protocol was simulated. In all patients, phase coverage greater than the threshold necessary for model accuracy within 0.5 mm of the 25 scan model was achieved in six or fewer scans. The prospectively selected respiratory trajectories ranked in the (97.5  ±  4.2)th percentile among subsets of the originally sampled scans on average. Simulation results suggest that the proposed prospective method provides an effective means to sample the breathing cycle with limited free-breathing scans. One application of the method is to reduce the imaging dose of a previously published model-based 4DCT protocol to 25% of its original value while

  7. A VRLA battery simulation model

    International Nuclear Information System (INIS)

    Pascoe, Phillip E.; Anbuky, Adnan H.

    2004-01-01

    A valve regulated lead acid (VRLA) battery simulation model is an invaluable tool for the standby power system engineer. The obvious use for such a model is to allow the assessment of battery performance. This may involve determining the influence of cells suffering from state of health (SOH) degradation on the performance of the entire string, or the running of test scenarios to ascertain the most suitable battery size for the application. In addition, it enables the engineer to assess the performance of the overall power system. This includes, for example, running test scenarios to determine the benefits of various load shedding schemes. It also allows the assessment of other power system components, either for determining their requirements and/or vulnerabilities. Finally, a VRLA battery simulation model is vital as a stand-alone tool for educational purposes. Despite the fundamentals of the VRLA battery having been established for over 100 years, its operating behaviour is often poorly understood. An accurate simulation model enables the engineer to gain a better understanding of VRLA battery behaviour. A system level multipurpose VRLA battery simulation model is presented. It allows an arbitrary battery (capacity, SOH, number of cells and number of strings) to be simulated under arbitrary operating conditions (discharge rate, ambient temperature, end voltage, charge rate and initial state of charge). The model accurately reflects the VRLA battery discharge and recharge behaviour. This includes the complex start of discharge region known as the coup de fouet.
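The record does not give the model's equations; one classic empirical ingredient of lead-acid discharge behaviour, often used in system-level models of this kind, is Peukert's law (sketch below; the parameter values are illustrative, not from this paper):

```python
def peukert_runtime(capacity_ah, rated_hours, current_a, k=1.2):
    """Discharge time (hours) from Peukert's law: t = H * (C / (I*H))**k.

    C is the rated capacity at the H-hour rate; the Peukert exponent k
    (~1.1-1.3 for lead-acid) captures the capacity lost at high rates.
    """
    return rated_hours * (capacity_ah / (current_a * rated_hours)) ** k

# A 100 Ah battery rated at the 20-hour rate lasts 20 h at 5 A...
t_rated = peukert_runtime(100.0, 20.0, 5.0)
# ...but noticeably less than the naive 10 h when discharged at 10 A.
t_high = peukert_runtime(100.0, 20.0, 10.0)
```

A full VRLA model layers temperature, end-voltage, SOH, and recharge effects (including the coup de fouet region) on top of such rate-dependent capacity relations.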

  8. Pre-operative simulation of periacetabular osteotomy via a three-dimensional model constructed from salt

    Directory of Open Access Journals (Sweden)

    Fukushima Kensuke

    2017-01-01

    Introduction: Periacetabular osteotomy (PAO) is an effective joint-preserving procedure for young adults with developmental dysplasia of the hip. Although PAO provides excellent radiographic and clinical results, it is a technically demanding procedure with a distinct learning curve that requires careful 3D planning and, above all, has a number of potential complications. We therefore developed a pre-operative simulation method for PAO via creation of a new full-scale model. Methods: The model was prepared from the patient's Digital Imaging and Communications in Medicine (DICOM) formatted computed tomography (CT) data, for construction and assembly using 3D printing technology. A major feature of our model is that it is constructed from salt. In contrast to conventional models, our model provides a more accurate representation, at a lower manufacturing cost, and requires a shorter production time. Furthermore, our model permits a realistic simulated operation using a chisel and drill without easy breakage or fissuring. We were able to easily simulate the line of osteotomy and confirm acetabular version and coverage after moving the osteotomized fragment. Additionally, this model allowed a dynamic assessment that avoided anterior impingement following the osteotomy. Results: Our models clearly reflected the anatomical shape of the patient's hip and allowed for surgical simulation, making realistic use of the chisel and drill. Our method of pre-operative simulation for PAO allowed for assessment of an accurate osteotomy line, determination of the position of the osteotomized fragment, and prevention of anterior impingement after the operation. Conclusion: Our method of pre-operative simulation might improve the safety, accuracy, and results of PAO.

  9. Simbol-X Formation Flight and Image Reconstruction

    Science.gov (United States)

    Civitani, M.; Djalal, S.; Le Duigou, J. M.; La Marle, O.; Chipaux, R.

    2009-05-01

    Simbol-X is the first operational mission relying on two satellites flying in formation. The dynamics of the telescope, due to the formation flight concept, raises a variety of problems, such as image reconstruction, that can be better evaluated via simulation tools. We present here the first results obtained with Simulos, a simulation tool aimed at studying the relative navigation of the spacecraft and the weight of the different parameters in image reconstruction and telescope performance evaluation. The simulation relies on models of the attitude and formation flight sensors, the formation flight dynamics and control, the mirror, and the focal plane, while the image reconstruction is based on the Line of Sight (LOS) concept.

  10. Computer simulation of orthognathic surgery with video imaging

    Science.gov (United States)

    Sader, Robert; Zeilhofer, Hans-Florian U.; Horch, Hans-Henning

    1994-04-01

    Patients with extreme jaw imbalance must often undergo operative correction. The goal of therapy is to harmonize the stomatognathic system and aesthetically correct the facial profile. A new procedure is presented that supports the maxillofacial surgeon in planning the operation and presents the patient with the expected result of the treatment via video images. Once an x-ray has been digitized, it is possible to produce individualized cephalometric analyses. Using a ceph on screen, all current orthognathic operations can be simulated, whereby the bony segments are moved according to given parameters and a new soft-tissue profile is calculated. The profile of the patient is fed into the computer by way of a video system and correlated to the ceph. Using the simulated operation, the computer calculates a new video image of the patient that presents the expected postoperative appearance. In studies of patients treated between 1987 and 1991, 76 out of 121 patients could be evaluated. The deviation in profile change varied between 0.0 and 1.6 mm. A side effect of the practical applications was an increase in patient compliance.

  11. Improving Conductivity Image Quality Using Block Matrix-based Multiple Regularization (BMMR) Technique in EIT: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-06-01

    A Block Matrix based Multiple Regularization (BMMR) technique is proposed for improving conductivity image quality in EIT. The response matrix (JTJ) is partitioned into several sub-block matrices, and the largest eigenvalue of each sub-block matrix is chosen as the regularization parameter for the nodes contained in that sub-block. Simulated boundary data are generated for a circular domain with a circular inhomogeneity, and the conductivity images are reconstructed with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm. Conductivity images are reconstructed with the BMMR technique and the results are compared with the Single-step Tikhonov Regularization (STR) and modified Levenberg-Marquardt Regularization (LMR) methods. It is observed that the BMMR technique reduces the projection error and solution error and improves the conductivity reconstruction in EIT. Results show that the BMMR method also improves the image contrast and the inhomogeneity conductivity profile, and hence the reconstructed image quality is enhanced. doi:10.5617/jeb.170 J Electr Bioimp, vol. 2, pp. 33-47, 2011
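The block-wise choice of regularization parameter described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, block size, and the toy response matrix are all assumptions made for the example.

```python
import numpy as np

def block_regularization_params(JtJ, block_size):
    """BMMR-style heuristic: for each diagonal sub-block of the
    response matrix J^T J, use its largest eigenvalue as the
    regularization parameter for the nodes in that block."""
    n = JtJ.shape[0]
    lambdas = np.empty(n)
    for start in range(0, n, block_size):
        stop = min(start + block_size, n)
        sub = JtJ[start:stop, start:stop]
        # eigvalsh: eigenvalues of a symmetric matrix, ascending order
        lambdas[start:stop] = np.linalg.eigvalsh(sub).max()
    return lambdas

# Toy example: a small symmetric positive-definite "response matrix".
rng = np.random.default_rng(0)
J = rng.standard_normal((12, 8))
JtJ = J.T @ J
lam = block_regularization_params(JtJ, block_size=4)
```

Each node then gets the regularization strength of its own sub-block, rather than a single global Tikhonov parameter.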

  12. Improvement of Fuzzy Image Contrast Enhancement Using Simulated Ergodic Fuzzy Markov Chains

    Directory of Open Access Journals (Sweden)

    Behrouz Fathi-Vajargah

    2014-01-01

    This paper presents a novel fuzzy enhancement technique using simulated ergodic fuzzy Markov chains for low-contrast brain magnetic resonance imaging (MRI). The fuzzy image contrast enhancement is based on the weighted fuzzy expected value. The membership values are then modified to enhance the image using ergodic fuzzy Markov chains. The qualitative performance of the proposed method is compared to that of a method in which ergodic fuzzy Markov chains are not considered. The proposed method produces a better quality image.

  13. Anthropomorphic thorax phantom for cardio-respiratory motion simulation in tomographic imaging

    Science.gov (United States)

    Bolwin, Konstantin; Czekalla, Björn; Frohwein, Lynn J.; Büther, Florian; Schäfers, Klaus P.

    2018-02-01

    Patient motion during medical imaging using techniques such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), or single photon emission computed tomography (SPECT) is well known to degrade images, leading to blurring effects or severe artifacts. Motion correction methods try to overcome these degrading effects. However, they need to be validated under realistic conditions. In this work, a sophisticated anthropomorphic thorax phantom is presented that combines several aspects of a simulator for cardio-respiratory motion. The phantom allows us to simulate various types of cardio-respiratory motion inside a human-like thorax, including features such as inflatable lungs, beating left ventricular myocardium, respiration-induced motion of the left ventricle, moving lung lesions, and moving coronary artery plaques. The phantom is constructed to be MR-compatible. This means that we can not only perform studies in PET, SPECT and CT, but also inside an MRI system. The technical features of the anthropomorphic thorax phantom Wilhelm are presented with regard to simulating motion effects in hybrid emission tomography and radiotherapy. This is supplemented by a study on the detectability of small coronary plaque lesions in PET/CT under the influence of cardio-respiratory motion, and a study on the accuracy of left ventricular blood volumes.

  14. A cost effective and high fidelity fluoroscopy simulator using the Image-Guided Surgery Toolkit (IGSTK)

    Science.gov (United States)

    Gong, Ren Hui; Jenkins, Brad; Sze, Raymond W.; Yaniv, Ziv

    2014-03-01

    The skills required for obtaining informative x-ray fluoroscopy images are currently acquired while trainees provide clinical care. As a consequence, trainees and patients are exposed to higher doses of radiation. Use of simulation has the potential to reduce this radiation exposure by enabling trainees to improve their skills in a safe environment prior to treating patients. We describe a low cost, high fidelity, fluoroscopy simulation system. Our system enables operators to practice their skills using the clinical device and simulated x-rays of a virtual patient. The patient is represented using a set of temporal Computed Tomography (CT) images, corresponding to the underlying dynamic processes. Simulated x-ray images, digitally reconstructed radiographs (DRRs), are generated from the CTs using ray-casting with customizable machine specific imaging parameters. To establish the spatial relationship between the CT and the fluoroscopy device, the CT is virtually attached to a patient phantom and a web camera is used to track the phantom's pose. The camera is mounted on the fluoroscope's intensifier and the relationship between it and the x-ray source is obtained via calibration. To control image acquisition the operator moves the fluoroscope as in normal operation mode. Control of zoom, collimation and image save is done using a keypad mounted alongside the device's control panel. Implementation is based on the Image-Guided Surgery Toolkit (IGSTK), and the use of the graphics processing unit (GPU) for accelerated image generation. Our system was evaluated by 11 clinicians and was found to be sufficiently realistic for training purposes.
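The digitally reconstructed radiographs mentioned above are line integrals of CT attenuation values. As a rough illustration of the idea, the sketch below computes a parallel-beam projection of a toy attenuation volume via the Beer-Lambert law; the actual simulator casts divergent rays from a calibrated x-ray source with machine-specific imaging parameters, and every name and value here is a hypothetical simplification.

```python
import numpy as np

def drr_parallel(ct_mu, axis=0, pixel_size=1.0):
    """Simplified DRR: for parallel rays along one volume axis, the
    detected intensity follows Beer-Lambert attenuation,
    I = I0 * exp(-sum(mu * dl)), with I0 normalized to 1."""
    line_integrals = ct_mu.sum(axis=axis) * pixel_size
    return np.exp(-line_integrals)

# Toy CT volume: water-like background with a denser spherical insert.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
ct = np.full((32, 32, 32), 0.02)          # background mu per voxel
ct[(z - 16)**2 + (y - 16)**2 + (x - 16)**2 < 8**2] = 0.05  # dense sphere
drr = drr_parallel(ct, axis=0)
```

Rays passing through the dense insert are attenuated more strongly, so the sphere appears darker in the simulated radiograph.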

  15. IR characteristic simulation of city scenes based on radiosity model

    Science.gov (United States)

    Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu

    2013-09-01

    Reliable modeling of the thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between objects. A method based on a radiosity model, which describes these complex effects, has been developed to enable an accurate simulation of the radiance distribution of city scenes. Firstly, the physical processes affecting the IR characteristics of city scenes are described. Secondly, heat balance equations are formed by combining the atmospheric conditions, shadow maps and the geometry of the scene. Finally, a finite difference method is used to calculate the kinetic temperature of each object surface. A radiosity model is introduced to describe the scattering of radiation between surface elements in the scene. By synthesizing the radiance distribution of the objects in the infrared range, we obtain the IR characteristics of the scene. Real infrared images and model predictions are shown and compared. The results demonstrate that this method can realistically simulate the IR characteristics of city scenes. It effectively displays the infrared shadow effects and the radiative interactions between objects in city scenes.
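The radiosity step can be illustrated with the classical linear system B = E + diag(rho) F B, which couples the radiosity of every surface element to every other through the form factors. The three-element scene, reflectances and form factors below are toy assumptions for the sketch, not values from the paper.

```python
import numpy as np

def solve_radiosity(emission, reflectance, form_factors):
    """Solve B = E + diag(rho) @ F @ B for the radiosity B of each
    surface element, i.e. (I - diag(rho) F) B = E."""
    n = len(emission)
    A = np.eye(n) - reflectance[:, None] * form_factors
    return np.linalg.solve(A, emission)

# Toy scene: three surface elements exchanging radiation.
E = np.array([10.0, 0.0, 0.0])     # only element 0 emits
rho = np.array([0.1, 0.5, 0.5])    # surface reflectances
F = np.array([[0.0, 0.3, 0.3],     # form factors (each row sums to <= 1)
              [0.3, 0.0, 0.3],
              [0.3, 0.3, 0.0]])
B = solve_radiosity(E, rho, F)
```

Even the non-emitting elements end up with positive radiosity, which is exactly the object-to-object radiation exchange the abstract says individual-entity models miss.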

  16. Hemispherical reflectance model for passive images in an outdoor environment.

    Science.gov (United States)

    Kim, Charles C; Thai, Bea; Yamaoka, Neil; Aboutalib, Omar

    2015-05-01

    We present a hemispherical reflectance model for simulating passive images in an outdoor environment where illumination is provided by natural sources such as the sun and the clouds. While the bidirectional reflectance distribution function (BRDF) accurately produces radiance from any objects after the illumination, using the BRDF in calculating radiance requires double integration. Replacing the BRDF by hemispherical reflectance under the natural sources transforms the double integration into a multiplication. This reduces both storage space and computation time. We present the formalism for the radiance of the scene using hemispherical reflectance instead of BRDF. This enables us to generate passive images in an outdoor environment taking advantage of the computational and storage efficiencies. We show some examples for illustration.
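The computational saving described above can be checked numerically in the simplest case: for a Lambertian surface (BRDF f_r = rho/pi) under a uniform sky of radiance L_sky, the double integral over incident directions collapses to the single multiplication L_o = rho * L_sky. The values of rho and L_sky below are arbitrary illustration choices.

```python
import numpy as np

rho, L_sky = 0.4, 100.0

# Midpoint-rule quadrature of L_o = ∫∫ f_r * L_sky * cos(theta) dOmega,
# with dOmega = sin(theta) dtheta dphi; the phi integral contributes 2*pi.
n = 2000
theta = (np.arange(n) + 0.5) * (np.pi / 2) / n
dtheta = (np.pi / 2) / n
L_integrated = (2 * np.pi
                * np.sum((rho / np.pi) * L_sky * np.cos(theta) * np.sin(theta))
                * dtheta)

# Hemispherical-reflectance shortcut: a single multiplication.
L_multiplied = rho * L_sky
```

Both routes give the same outgoing radiance, but the second needs no integration at render time, which is the storage and computation saving the abstract describes.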

  17. Image processing of globular clusters - Simulation for deconvolution tests (GlencoeSim)

    Science.gov (United States)

    Blazek, Martin; Pata, Petr

    2016-10-01

    This paper presents an algorithmic approach for efficiency tests of deconvolution algorithms in astronomical image processing. Due to the existence of noise in astronomical data, there is no certainty that a mathematically exact result of stellar deconvolution exists, and iterative or other methods such as aperture or PSF-fitting photometry are commonly used. Iterative methods are important particularly in the case of crowded fields (e.g., globular clusters). To test the efficiency of these iterative methods on various stellar fields, information about the real fluxes of the sources is essential. For this purpose, a simulator of artificial images with crowded stellar fields provides initial information on source fluxes for a robust statistical comparison of various deconvolution methods. The GlencoeSim simulator and the algorithms presented in this paper consider various settings of point-spread functions, noise types and spatial distributions, with the aim of producing as realistic an astronomical optical stellar image as possible.
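A minimal sketch of this kind of artificial-image generator, assuming a Gaussian PSF, a uniform sky background and Poisson photon noise (GlencoeSim's actual PSF models and noise options are richer; every name and parameter here is illustrative):

```python
import numpy as np

def simulate_star_field(size=64, n_stars=20, fwhm=3.0, sky=50.0, seed=0):
    """Render point sources with a Gaussian PSF on a sky background,
    then apply Poisson photon noise. Returns the noisy image, the
    noiseless image, and the true source fluxes (the ground truth
    needed to score a deconvolution method)."""
    rng = np.random.default_rng(seed)
    sigma = fwhm / 2.3548              # FWHM -> Gaussian sigma
    yy, xx = np.mgrid[0:size, 0:size]
    ideal = np.full((size, size), sky)
    xs, ys = rng.uniform(0, size, (2, n_stars))
    fluxes = rng.uniform(500, 5000, n_stars)
    for x0, y0, f in zip(xs, ys, fluxes):
        psf = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
        ideal += f * psf / (2 * np.pi * sigma**2)  # unit-flux normalization
    noisy = rng.poisson(ideal).astype(float)
    return noisy, ideal, fluxes

img, ideal, true_fluxes = simulate_star_field()
```

Because the true fluxes are known by construction, photometry recovered after deconvolution can be compared against them statistically, which is the point of the simulator.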

  18. Medical images of patients in voxel structures in high resolution for Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Boia, Leonardo S.; Menezes, Artur F.; Silva, Ademir X., E-mail: lboia@con.ufrj.b, E-mail: ademir@con.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear; Salmon Junior, Helio A. [Clinicas Oncologicas Integradas (COI), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    This work aims to present a computational process for converting tomographic and MRI medical images of patients into voxel structures and then into an input file to be manipulated by a Monte Carlo simulation code for the radiotherapy treatment of tumors. The scenario inherent to the patient is simulated by this process, using the volume element (voxel) as the unit of computational tracking. The voxel structure of the head has voxels with volumes of around 1 mm³ and a population of millions, which supports a realistic simulation and reduces the need for digital image processing techniques for adjustments and equalizations. With such additional data from the code, a more critical analysis can be developed to determine the volume of the tumor and the protection. The patients' medical images were provided by Clinicas Oncologicas Integradas (COI/RJ), together with the previously performed planning. To execute this computational process, the SAPDI computational system is used for digital image processing and data optimization; the conversion program Scan2MCNP manipulates, processes, and converts the medical images into voxel structures in input files; and the graphic visualizer Moritz is used to verify the placement of the image geometry. (author)

  19. Magnetosphere Modeling: From Cartoons to Simulations

    Science.gov (United States)

    Gombosi, T. I.

    2017-12-01

    Over the last half century, physics-based global computer simulations have become a bridge between experiment and basic theory, and now represent the "third pillar" of geospace research. Today, many of our scientific publications utilize large-scale simulations to interpret observations, test new ideas, plan campaigns, or design new instruments. Realistic simulations of the complex Sun-Earth system have been made possible by the dramatically increased power of both computing hardware and numerical algorithms. Early magnetosphere models were based on simple E&M concepts (like the Chapman-Ferraro cavity) and hydrodynamic analogies (bow shock). At the beginning of the space age, current-system models were developed, culminating in the sophisticated Tsyganenko-type description of the magnetic configuration. The first 3D MHD simulations of the magnetosphere were published in the early 1980s. A decade later there were several competing global models that were able to reproduce many fundamental properties of the magnetosphere. The leading models included the impact of the ionosphere by using a height-integrated electric potential description. Dynamic coupling of global and regional models started in the early 2000s by integrating a ring current and a global magnetosphere model. It has been recognized for quite some time that plasma kinetic effects play an important role. Presently, global hybrid simulations of the dynamic magnetosphere are expected to be possible on exascale supercomputers, while fully kinetic simulations with realistic mass ratios are still decades away. In the 2010s several groups started to experiment with PIC simulations embedded in large-scale 3D MHD models. Presently this integrated MHD-PIC approach is at the forefront of magnetosphere simulations, and this technique is expected to lead to some important advances in our understanding of magnetospheric physics. This talk will review the evolution of magnetosphere modeling from cartoons to current systems.

  20. Stochastic modeling analysis and simulation

    CERN Document Server

    Nelson, Barry L

    1995-01-01

    A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se

  1. Phase contrast image simulations for electron holography of magnetic and electric fields.

    Science.gov (United States)

    Beleggia, Marco; Pozzi, Giulio

    2013-06-01

    The research on flux line lattices and pancake vortices in superconducting materials, carried out within a long and fruitful collaboration with Akira Tonomura and his group at the Hitachi Advanced Research Laboratory, led us to develop a mathematical framework, based on the reciprocal representation of the magnetic vector potential, that enables us to simulate realistic phase images of fluxons. The aim of this paper is to review the main ideas underpinning our computational framework and the results we have obtained throughout the collaboration. Furthermore, we outline how to generalize the approach to model other samples and structures of interest, in particular thin ferromagnetic films, ferromagnetic nanoparticles and p-n junctions.

  2. SEIR model simulation for Hepatitis B

    Science.gov (United States)

    Side, Syafruddin; Irwan, Mulbar, Usman; Sanusi, Wahidah

    2017-09-01

    Mathematical modelling and simulation of Hepatitis B are discussed in this paper. The population is divided into four compartments, namely Susceptible, Exposed, Infected and Recovered (SEIR). Several factors affecting the population in this model are vaccination, immigration and emigration occurring in the population. The SEIR model yields a non-linear 4-D system of Ordinary Differential Equations (ODEs), which is then reduced to 3-D. A simulation of the SEIR model is undertaken to predict the number of Hepatitis B cases. The results of the simulation indicate that the number of Hepatitis B cases will increase and then decrease over several months. The simulation using the number of cases in Makassar also found a basic reproduction number less than one, which means the city of Makassar is not an endemic area for Hepatitis B.
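A minimal sketch of the basic SEIR compartment dynamics is given below. It omits the vaccination, immigration and emigration terms of the paper's full model, and all parameter values are illustrative assumptions, not fitted Makassar data; the point is only to show the compartmental ODE structure and the role of the basic reproduction number.

```python
import numpy as np

def seir_simulate(beta, sigma, gamma, S0, E0, I0, Rec0, days, dt=0.1):
    """Forward-Euler integration of the basic SEIR ODE system:
    dS/dt = -beta*S*I/N, dE/dt = beta*S*I/N - sigma*E,
    dI/dt = sigma*E - gamma*I, dR/dt = gamma*I."""
    N = S0 + E0 + I0 + Rec0
    S, E, I, R = float(S0), float(E0), float(I0), float(Rec0)
    traj = []
    for _ in range(int(round(days / dt))):
        new_exposed = beta * S * I / N
        dS = -new_exposed
        dE = new_exposed - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt
        traj.append((S, E, I, R))
    return np.array(traj)

# Illustrative parameters: for this simplified system R0 = beta/gamma
# = 0.6 < 1, so the outbreak should die out, mirroring the paper's
# finding of a basic reproduction number below one.
traj = seir_simulate(beta=0.3, sigma=0.2, gamma=0.5,
                     S0=9990, E0=10, I0=0, Rec0=0, days=365)
basic_reproduction_number = 0.3 / 0.5
```

Note that the four derivatives sum to zero, so the total population is conserved by the integration, a useful sanity check on any compartment model.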

  3. Using simulated fluorescence cell micrographs for the evaluation of cell image segmentation algorithms.

    Science.gov (United States)

    Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas

    2017-03-18

    Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability which influence the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) An expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations. (2) An automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces segmentation performances of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data is suited to evaluate image segmentation pipelines more efficiently and reproducibly than it is possible on manually annotated real micrographs.

  4. Climate simulations for 1880-2003 with GISS modelE

    International Nuclear Information System (INIS)

    Hansen, J.; Lacis, A.; Miller, R.; Schmidt, G.A.; Russell, G.; Canuto, V.; Del Genio, A.; Hall, T.; Hansen, J.; Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Ruedy, R.; Lo, K.; Cheng, Y.; Lacis, A.; Schmidt, G.A.; Del Genio, A.; Miller, R.; Cairns, B.; Hall, T.; Baum, E.; Cohen, A.; Fleming, E.; Jackman, C.; Friend, A.; Kelley, M.

    2007-01-01

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side the simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed, well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. The greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. (authors)

  5. FASTBUS simulation models in VHDL

    International Nuclear Information System (INIS)

    Appelquist, G.

    1992-11-01

    Four hardware simulation models implementing the FASTBUS protocol are described. The models are written in the VHDL hardware description language to obtain portability, i.e. without relations to any specific simulator. They include two complete FASTBUS devices, a full-duplex segment interconnect and ancillary logic for the segment. In addition, master and slave models using a high level interface to describe FASTBUS operations, are presented. With these models different configurations of FASTBUS systems can be evaluated and the FASTBUS transactions of new devices can be verified. (au)

  6. The model of illumination-transillumination for image enhancement of X-ray images

    Energy Technology Data Exchange (ETDEWEB)

    Lyu, Kwang Yeul [Shingu College, Sungnam (Korea, Republic of); Rhee, Sang Min [Kangwon National Univ., Chuncheon (Korea, Republic of)

    2001-06-01

    In digital image processing, the homomorphic filtering approach is derived from an illumination-reflectance model of the image. It can also be used with an illumination-transillumination model of X-ray film. Several X-ray images were enhanced with histogram equalization and with a homomorphic filter based on an illumination-transillumination model. The homomorphic filter confirmed the theoretical claims of image density range compression and balanced contrast enhancement, and was also found to be a valuable tool for processing analog X-ray images into digital images.
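The classic homomorphic filter behind this approach can be sketched as follows: take the logarithm (turning the multiplicative illumination-transmission model into a sum), attenuate low frequencies and boost high frequencies in the Fourier domain, and exponentiate back. The gain values, cutoff and toy "radiograph" below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, d0=15.0):
    """Log-domain Gaussian high-frequency-emphasis filtering:
    low frequencies scaled toward gamma_l (compressing the density
    range), high frequencies toward gamma_h (boosting contrast)."""
    rows, cols = img.shape
    z = np.log1p(img.astype(float))                 # multiplicative -> additive
    Z = np.fft.fftshift(np.fft.fft2(z))
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None]**2 + v[None, :]**2              # squared distance from DC
    H = (gamma_h - gamma_l) * (1 - np.exp(-D2 / (2 * d0**2))) + gamma_l
    out = np.fft.ifft2(np.fft.ifftshift(H * Z)).real
    return np.expm1(out)

# Toy "radiograph": a slowly varying exposure gradient times fine detail.
x = np.linspace(0, 1, 128)
illum = 0.2 + 0.8 * x[None, :]                      # illumination ramp
detail = 1.0 + 0.1 * np.sin(40 * np.pi * x)[:, None]
img = 100.0 * illum * detail
result = homomorphic_filter(img)
```

After filtering, the large-scale exposure gradient is flattened while the fine sinusoidal detail is emphasized, which is the density-range compression plus contrast enhancement the abstract refers to.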

  7. Use of an object model in three dimensional image reconstruction. Application in medical imaging

    International Nuclear Information System (INIS)

    Delageniere-Guillot, S.

    1993-02-01

    Three-dimensional image reconstruction from projections corresponds to a set of techniques that give information on the inner structure of the studied object. These techniques are mainly used in medical imaging or in non-destructive evaluation. Image reconstruction is an ill-posed problem, so the inversion has to be regularized. This thesis deals with the introduction of a priori information into the reconstruction algorithm. The knowledge is introduced through an object model. The proposed scheme is applied to the medical domain for cone-beam geometry. We address two specific problems. First, we study the reconstruction of high-contrast objects. This can be applied to bony morphology (bone/soft tissue) or to angiography (vascular structures opacified by injection of a contrast agent). With noisy projections, the filtering steps of standard methods tend to smooth the natural transitions of the investigated object. In order to regularize the reconstruction while preserving contrast, we introduce a model of classes based on Markov random field theory. We develop a reconstruction scheme: analytic reconstruction-reprojection. Then, we address the case of an object changing during the acquisition. This can be applied to angiography when the contrast agent is moving through the vascular tree. The problem is then stated as a dynamic reconstruction. We define an autoregressive (AR) evolution model and we use an algebraic reconstruction method. We represent the object at a particular moment as an intermediate state between the states of the object at the beginning and at the end of the acquisition. We test both methods on simulated and real data, and we show how the use of an a priori model can improve the results. (author)

  8. Development of computational small animal models and their applications in preclinical imaging and therapy research

    NARCIS (Netherlands)

    Xie, Tianwu; Zaidi, Habib

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal

  9. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    Science.gov (United States)

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
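Poisson resampling amounts to binomial thinning of the measured counts, which for Poisson-distributed input yields an exactly Poisson output with the reduced mean; direct redrawing treats the noisy measurement as the true mean and so inflates the variance. A minimal numeric sketch of this contrast (in Python rather than the comment's Matlab; all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def poisson_resample(counts, fraction, rng):
    """Binomial thinning: keep each detected event independently with
    probability `fraction`. Thinning a Poisson variable gives an
    exactly Poisson result with the reduced mean."""
    return rng.binomial(counts, fraction)

def poisson_redraw(counts, fraction, rng):
    """Direct redrawing: treat the measured counts as the true mean
    of a new Poisson draw (adds the measurement noise a second time)."""
    return rng.poisson(counts * fraction)

full = rng.poisson(100.0, size=100_000)     # simulated full-count flood image
half_resampled = poisson_resample(full, 0.5, rng)
half_redrawn = poisson_redraw(full, 0.5, rng)
```

Here the resampled pixels should have mean and variance both near 50, as a true half-count Poisson image would, while the redrawn pixels keep the mean but carry excess variance (about 75), which is why resampling is the method of choice.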

  10. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    International Nuclear Information System (INIS)

    Setiani, Tia Dwi; Suprijadi; Haryanto, Freddy

    2016-01-01

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of the radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulation times on the GPU were significantly shorter than on the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with 10^8 or more photon histories and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.

  11. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Suprijadi [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Nuclear Physics and Biophysics Reaserch Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Haryanto, Freddy [Nuclear Physics and Biophysics Reaserch Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia)

    2016-03-11

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of the radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulation times on the GPU were significantly shorter than on the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with 10^8 or more photon histories and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.

  12. Scientific Modeling and simulations

    CERN Document Server

    Diaz de la Rubia, Tomás

    2009-01-01

    Showcases the conceptual advantages of modeling which, coupled with the unprecedented computing power of simulations, allows scientists to tackle the formidable problems of our society, such as the search for hydrocarbons, understanding the structure of a virus, or the intersection between simulations and real data in extreme environments.

  13. MO-F-CAMPUS-I-03: GPU Accelerated Monte Carlo Technique for Fast Concurrent Image and Dose Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Becchetti, M; Tian, X; Segars, P; Samei, E [Clinical Imaging Physics Group, Department of Radiology, Duke University Me, Durham, NC (United States)

    2015-06-15

    Purpose: To develop an accurate and fast Monte Carlo (MC) method of simulating CT that is capable of correlating dose with image quality using voxelized phantoms. Methods: A realistic voxelized phantom based on patient CT data, XCAT, was used with a GPU accelerated MC code for helical MDCT. Simulations were done with both uniform density organs and with textured organs. The organ doses were validated using previous experimentally validated simulations of the same phantom under the same conditions. Images acquired by tracking photons through the phantom with MC require lengthy computation times due to the large number of photon histories necessary for accurate representation of noise. A substantial speed up of the process was attained by using a low number of photon histories with kernel denoising of the projections from the scattered photons. These FBP reconstructed images were validated against those that were acquired in simulations using many photon histories by ensuring a minimal normalized root mean square error. Results: Organ doses simulated in the XCAT phantom are within 10% of the reference values. Corresponding images attained using projection kernel smoothing were attained with 3 orders of magnitude less computation time compared to a reference simulation using many photon histories. Conclusion: Combining GPU acceleration with kernel denoising of scattered photon projections in MC simulations allows organ dose and corresponding image quality to be attained with reasonable accuracy and substantially reduced computation time than is possible with standard simulation approaches.

  14. MO-F-CAMPUS-I-03: GPU Accelerated Monte Carlo Technique for Fast Concurrent Image and Dose Simulation

    International Nuclear Information System (INIS)

    Becchetti, M; Tian, X; Segars, P; Samei, E

    2015-01-01

    Purpose: To develop an accurate and fast Monte Carlo (MC) method of simulating CT that is capable of correlating dose with image quality using voxelized phantoms. Methods: A realistic voxelized phantom based on patient CT data, XCAT, was used with a GPU accelerated MC code for helical MDCT. Simulations were done with both uniform density organs and with textured organs. The organ doses were validated using previous experimentally validated simulations of the same phantom under the same conditions. Images acquired by tracking photons through the phantom with MC require lengthy computation times due to the large number of photon histories necessary for accurate representation of noise. A substantial speed up of the process was attained by using a low number of photon histories with kernel denoising of the projections from the scattered photons. These FBP reconstructed images were validated against those that were acquired in simulations using many photon histories by ensuring a minimal normalized root mean square error. Results: Organ doses simulated in the XCAT phantom are within 10% of the reference values. Corresponding images attained using projection kernel smoothing were attained with 3 orders of magnitude less computation time compared to a reference simulation using many photon histories. Conclusion: Combining GPU acceleration with kernel denoising of scattered photon projections in MC simulations allows organ dose and corresponding image quality to be attained with reasonable accuracy and substantially reduced computation time than is possible with standard simulation approaches.

  15. Network Modeling and Simulation A Practical Perspective

    CERN Document Server

    Guizani, Mohsen; Khan, Bilal

    2010-01-01

    Network Modeling and Simulation is a practical guide to using modeling and simulation to solve real-life problems. The authors give a comprehensive exposition of the core concepts in modeling and simulation, and then systematically address the many practical considerations faced by developers in modeling complex large-scale systems. The authors provide examples from computer and telecommunication networks and use these to illustrate the process of mapping generic simulation concepts to domain-specific problems in different industries and disciplines. Key features: Provides the tools and strate

  16. Automated Registration Of Images From Multiple Sensors

    Science.gov (United States)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.; Pang, Shirley S. N.

    1994-01-01

    Images of terrain scanned in common by multiple Earth-orbiting remote sensors registered automatically with each other and, where possible, on geographic coordinate grid. Simulated image of terrain viewed by sensor computed from ancillary data, viewing geometry, and mathematical model of physics of imaging. In proposed registration algorithm, simulated and actual sensor images matched by area-correlation technique.
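
    The area-correlation matching mentioned in the abstract can be illustrated with a brute-force normalized cross-correlation search. This is a sketch only; `best_shift` and its exhaustive search are illustrative, not the authors' registration algorithm.

```python
import numpy as np

def best_shift(reference, template):
    """Find the (row, col) offset where `template` best matches `reference`,
    scoring every valid placement by normalized cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_score = (0, 0), -np.inf
    H, W = reference.shape
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            win = reference[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * tnorm
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best = score, (r, c)
    return best, best_score

rng = np.random.default_rng(1)
scene = rng.random((40, 40))          # stands in for the actual sensor image
patch = scene[12:22, 7:17]            # stands in for the simulated image
offset, score = best_shift(scene, patch)
```

    In practice such searches are done with FFT-based correlation for speed, but the scoring function is the same.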

  17. A hybrid approach to simulate multiple photon scattering in X-ray imaging

    International Nuclear Information System (INIS)

    Freud, N.; Letang, J.-M.; Babot, D.

    2005-01-01

    A hybrid simulation approach is proposed to compute the contribution of scattered radiation in X- or γ-ray imaging. This approach takes advantage of the complementarity between deterministic and probabilistic simulation methods. The proposed hybrid method consists of two stages. Firstly, a set of scattering events occurring in the inspected object is determined by means of classical Monte Carlo simulation. Secondly, this set of scattering events is used as a starting point to compute the energy imparted to the detector, with a deterministic algorithm based on a 'forced detection' scheme. For each scattering event, the probability for the scattered photon to reach each pixel of the detector is calculated using well-known physical models (form factor and incoherent scattering function approximations, in the case of Rayleigh and Compton scattering, respectively). The results of the proposed hybrid approach are compared to those obtained with the Monte Carlo method alone (Geant4 code) and found to be in excellent agreement. The convergence of the results as the number of scattering events increases is studied. The proposed hybrid approach makes it possible to simulate the contribution of each type (Compton or Rayleigh) and order of scattering, separately or together, with a single PC, within reasonable computation times (from minutes to hours, depending on the number of pixels of the detector). This constitutes a substantial benefit compared to classical simulation methods (Monte Carlo or deterministic approaches), which usually require a parallel computing architecture to obtain comparable results.
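
    The 'forced detection' weighting described above can be caricatured as a solid-angle term multiplied by an attenuation term per scattering event. This is a deliberately simplified sketch: the real scheme uses the form-factor and incoherent-scattering-function angular models named in the abstract, which are omitted here (an isotropic approximation is assumed instead).

```python
import numpy as np

def forced_detection_weight(event_pos, pixel_pos, pixel_area, mu, path_length):
    """Probability-like weight for a photon scattered at `event_pos` to deposit
    in a detector pixel at `pixel_pos`.  Isotropic emission is assumed; a real
    implementation would replace the 1/(4*pi) factor with the angular
    distribution (form factor / incoherent scattering function)."""
    d = np.linalg.norm(np.asarray(pixel_pos) - np.asarray(event_pos))
    solid_angle = pixel_area / (4.0 * np.pi * d ** 2)   # small-pixel approximation
    transmission = np.exp(-mu * path_length)            # Beer-Lambert attenuation
    return solid_angle * transmission

# Hypothetical geometry: event at the origin, pixel 100 mm away on the axis.
w = forced_detection_weight((0, 0, 0), (0, 0, 100.0), pixel_area=1.0,
                            mu=0.02, path_length=10.0)
```

    Summing such weights over all events and all pixels yields the deterministic scatter image, which is why far fewer Monte Carlo histories are needed.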

  18. A hybrid approach to simulate multiple photon scattering in X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France)]. E-mail: nicolas.freud@insa-lyon.fr; Letang, J.-M. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Babot, D. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France)

    2005-01-01

    A hybrid simulation approach is proposed to compute the contribution of scattered radiation in X- or γ-ray imaging. This approach takes advantage of the complementarity between deterministic and probabilistic simulation methods. The proposed hybrid method consists of two stages. Firstly, a set of scattering events occurring in the inspected object is determined by means of classical Monte Carlo simulation. Secondly, this set of scattering events is used as a starting point to compute the energy imparted to the detector, with a deterministic algorithm based on a 'forced detection' scheme. For each scattering event, the probability for the scattered photon to reach each pixel of the detector is calculated using well-known physical models (form factor and incoherent scattering function approximations, in the case of Rayleigh and Compton scattering, respectively). The results of the proposed hybrid approach are compared to those obtained with the Monte Carlo method alone (Geant4 code) and found to be in excellent agreement. The convergence of the results as the number of scattering events increases is studied. The proposed hybrid approach makes it possible to simulate the contribution of each type (Compton or Rayleigh) and order of scattering, separately or together, with a single PC, within reasonable computation times (from minutes to hours, depending on the number of pixels of the detector). This constitutes a substantial benefit compared to classical simulation methods (Monte Carlo or deterministic approaches), which usually require a parallel computing architecture to obtain comparable results.

  19. Model reduction for circuit simulation

    CERN Document Server

    Hinze, Michael; Maten, E Jan W Ter

    2011-01-01

    Simulation based on mathematical models plays a major role in computer aided design of integrated circuits (ICs). Decreasing structure sizes, increasing packing densities and driving frequencies require the use of refined mathematical models, and to take into account secondary, parasitic effects. This leads to very high dimensional problems which nowadays require simulation times too large for the short time-to-market demands in industry. Modern Model Order Reduction (MOR) techniques present a way out of this dilemma in providing surrogate models which keep the main characteristics of the devi

  20. Irrigant flow in the root canal: experimental validation of an unsteady Computational Fluid Dynamics model using high-speed imaging.

    Science.gov (United States)

    Boutsioukis, C; Verhaagen, B; Versluis, M; Kastrinakis, E; van der Sluis, L W M

    2010-05-01

    To compare the results of a Computational Fluid Dynamics (CFD) simulation of the irrigant flow within a prepared root canal, during final irrigation with a syringe and a needle, with experimental high-speed visualizations and theoretical calculations of an identical geometry, and to evaluate the effect of off-centre positioning of the needle inside the root canal. A CFD model was created to simulate irrigant flow from a side-vented needle inside a prepared root canal. Calculations were carried out for four different positions of the needle inside a prepared root canal. An identical root canal model was made from poly-dimethyl-siloxane (PDMS). High-speed imaging of the particle-seeded flow and Particle Image Velocimetry (PIV) were combined to obtain the velocity field inside the root canal experimentally. Computational, theoretical and experimental results were compared to assess the validity of the computational model. Comparison between CFD computations and experiments revealed good agreement in velocity magnitude and in vortex location and size. Small lateral displacements of the needle inside the canal had a limited effect on the flow field. High-speed imaging experiments together with PIV of the flow inside a simulated root canal showed good agreement with the CFD model, even though the flow was unsteady. Therefore, the CFD model is able to reliably predict the flow in similar domains.

  1. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  2. Understanding Emergency Care Delivery Through Computer Simulation Modeling.

    Science.gov (United States)

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2018-02-01

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
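
    Of the four approaches listed, discrete-event simulation is perhaps the simplest to sketch. The toy model below treats an emergency department as a single-provider queue with exponential arrivals and service (an M/M/1 queue); all parameter values are invented for illustration and are not from the article.

```python
import random

def simulate_ed(n_patients=1000, arrival_rate=1/10, service_rate=1/8, seed=42):
    """Minimal discrete-event simulation of a single-provider emergency
    department: exponential interarrival and service times (minutes).
    Returns the mean time patients spend waiting for the provider."""
    rng = random.Random(seed)
    t, free_at, waits = 0.0, 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)   # next patient arrives
        start = max(t, free_at)              # wait if the provider is busy
        waits.append(start - t)
        free_at = start + rng.expovariate(service_rate)
    return sum(waits) / len(waits)

mean_wait = simulate_ed()
```

    Even this tiny model exhibits the core behavior such studies exploit: at 80% utilization the mean wait is several times the mean service time, and small changes to staffing or arrival rates move it sharply.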

  3. Simulation of scintillating fiber gamma ray detectors for medical imaging

    International Nuclear Information System (INIS)

    Chaney, R.C.; Fenyves, E.J.; Antich, P.P.

    1990-01-01

    This paper reports on plastic scintillating fibers, which have been shown to provide high spatial and time resolution for gamma rays. They may be expected to significantly improve the resolution of current medical imaging systems such as PET and SPECT. Monte Carlo simulation of imaging systems using these detectors provides a means to optimize their performance in this application, as well as to demonstrate their resolution and efficiency. Monte Carlo results are presented for PET and SPECT systems constructed using these detectors

  4. Using a method based on Potts Model to segment a micro-CT image stack of trabecular bones of femoral region

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Pedro H.A. de; Cabral, Manuela O.M., E-mail: andrade.pha@gmail.com [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Engenharia Nuclear; Vieira, Jose W.; Correia, Filipe L. de B., E-mail: jose.wilson59@uol.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil); Lima, Fernando R. De A., E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (brazil)

    2015-07-01

    Exposure Computational Models are composed basically of an anthropomorphic phantom, a Monte Carlo (MC) code, and an algorithm simulating the radioactive source. Tomographic phantoms are developed from medical images and must be pre-processed and segmented before being coupled to a MC code (which simulates the interaction of radiation with matter). This work presents a methodology used for the treatment of a micro-CT image stack of a femur, obtained from a 30 year old female skeleton provided by the Imaging Laboratory for Anthropology of the University of Bristol, UK. These images have a resolution of 60 micrometers; from them, a block of 160 x 60 x 160 pixels containing only trabecular tissue and bone marrow was cut and saved as a ⁎.sgi file (header + ⁎.raw file). The Grupo de Dosimetria Numerica (Recife-PE, Brazil) developed a software named Digital Image Processing (DIP), in which a method for segmentation based on a physical model for particle interaction known as the Potts Model (or q-Ising) was implemented. This model analyzes the statistical dependence between sites in a network. In the Potts Model, when the values of the spin variables at neighboring sites are identical, an 'energy of interaction' is assigned between them. Otherwise, it is said that the sites do not interact. Making an analogy between network sites and the pixels of a digital image and, moreover, between the spin variables and the intensity of the gray scale, it was possible to apply this model to obtain texture descriptors and segment the image. (author)
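
    The Potts interaction energy described above can be written down directly for a small label image: neighboring pixels contribute an interaction term only when their "spins" (labels) are identical. This is a minimal sketch; the actual DIP implementation and its texture descriptors are not reproduced here, and the coupling constant `J` is an arbitrary choice.

```python
import numpy as np

def potts_energy(labels, J=1.0):
    """Total Potts energy of a 2-D label image with 4-neighbor interactions:
    each pair of equal-valued neighbors contributes -J."""
    same_h = labels[:, :-1] == labels[:, 1:]   # horizontal neighbor pairs
    same_v = labels[:-1, :] == labels[1:, :]   # vertical neighbor pairs
    return -J * float(same_h.sum() + same_v.sum())

uniform = np.zeros((4, 4), dtype=int)               # all sites interact
checker = np.indices((4, 4)).sum(axis=0) % 2        # no equal neighbors
e_uniform = potts_energy(uniform)
e_checker = potts_energy(checker)
```

    Segmentation schemes built on this model search for label configurations with low energy, i.e. large homogeneous regions, which is what makes it useful for separating trabecular bone from marrow.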

  5. Using a method based on Potts Model to segment a micro-CT image stack of trabecular bones of femoral region

    International Nuclear Information System (INIS)

    Andrade, Pedro H.A. de; Cabral, Manuela O.M.; Lima, Fernando R. De A.

    2015-01-01

    Exposure Computational Models are composed basically of an anthropomorphic phantom, a Monte Carlo (MC) code, and an algorithm simulating the radioactive source. Tomographic phantoms are developed from medical images and must be pre-processed and segmented before being coupled to a MC code (which simulates the interaction of radiation with matter). This work presents a methodology used for the treatment of a micro-CT image stack of a femur, obtained from a 30 year old female skeleton provided by the Imaging Laboratory for Anthropology of the University of Bristol, UK. These images have a resolution of 60 micrometers; from them, a block of 160 x 60 x 160 pixels containing only trabecular tissue and bone marrow was cut and saved as a ⁎.sgi file (header + ⁎.raw file). The Grupo de Dosimetria Numerica (Recife-PE, Brazil) developed a software named Digital Image Processing (DIP), in which a method for segmentation based on a physical model for particle interaction known as the Potts Model (or q-Ising) was implemented. This model analyzes the statistical dependence between sites in a network. In the Potts Model, when the values of the spin variables at neighboring sites are identical, an 'energy of interaction' is assigned between them. Otherwise, it is said that the sites do not interact. Making an analogy between network sites and the pixels of a digital image and, moreover, between the spin variables and the intensity of the gray scale, it was possible to apply this model to obtain texture descriptors and segment the image. (author)

  6. Performance simulation of a MRPC-based PET imaging system

    Science.gov (United States)

    Roy, A.; Banerjee, A.; Biswas, S.; Chattopadhyay, S.; Das, G.; Saha, S.

    2014-10-01

    The inexpensive, high-resolution Multi-gap Resistive Plate Chamber (MRPC) opens up the possibility of an efficient alternative detector for Time-of-Flight (TOF) based Positron Emission Tomography, where the sensitivity of the system depends largely on the time resolution of the detector. In a layered structure, suitable converters can be used to increase the photon detection efficiency. In this work, we perform a detailed GEANT4 simulation to optimize the converter thickness towards improving the efficiency of photon conversion. A Monte Carlo based procedure has been developed to simulate the time resolution of the MRPC-based system, making it possible to simulate its response for PET imaging applications. The results of a test of a six-gap MRPC, operating in avalanche mode, with a 22Na source are discussed.
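
    The Monte Carlo treatment of timing can be sketched by sampling Gaussian jitter in two detectors and measuring the FWHM of the coincidence time difference. The 60 ps single-detector jitter below is an assumed value for illustration, not a result from the paper.

```python
import math
import random

def coincidence_fwhm(sigma_det_ps=60.0, n=200_000, seed=7):
    """Monte Carlo estimate of the coincidence timing FWHM of two identical
    detectors, each with Gaussian single-detector jitter `sigma_det_ps`."""
    rng = random.Random(seed)
    diffs = [rng.gauss(0, sigma_det_ps) - rng.gauss(0, sigma_det_ps)
             for _ in range(n)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / n
    return 2.355 * math.sqrt(var)    # FWHM = 2*sqrt(2 ln 2) * sigma

fwhm = coincidence_fwhm()   # analytically: 2.355 * 60 * sqrt(2) ~ 200 ps
```

    The sqrt(2) inflation from single-detector jitter to coincidence width is exactly the quantity that sets the TOF localization uncertainty along the line of response.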

  7. SU-C-209-05: Monte Carlo Model of a Prototype Backscatter X-Ray (BSX) Imager for Projective and Selective Object-Plane Imaging

    International Nuclear Information System (INIS)

    Rolison, L; Samant, S; Baciak, J; Jordan, K

    2016-01-01

    Purpose: To develop a Monte Carlo N-Particle (MCNP) model for the validation of a prototype backscatter x-ray (BSX) imager, and optimization of BSX technology for medical applications, including selective object-plane imaging. Methods: BSX is an emerging technology that represents an alternative to conventional computed tomography (CT) and projective digital radiography (DR). It employs detectors located on the same side as the incident x-ray source, making use of backscatter and avoiding a ring geometry enclosing the imaged object. Current BSX imagers suffer from low spatial resolution. An MCNP model was designed to replicate a BSX prototype used for flaw detection in industrial materials. This prototype consisted of a 1.5mm diameter 60kVp pencil beam surrounded by a ring of four 5.0cm diameter NaI scintillation detectors. The imaging phantom consisted of a 2.9cm thick aluminum plate with five 0.6cm diameter holes drilled halfway. The experimental image was created using a raster scanning motion (in 1.5mm increments). Results: A qualitative comparison between the physical and simulated images showed very good agreement, with 1.5mm spatial resolution in the plane perpendicular to the incident x-ray beam. The MCNP model developed the concept of radiography by selective plane detection (RSPD) for BSX, whereby specific object planes can be imaged by varying kVp. 10keV increments in mean x-ray energy yielded 4mm thick slice resolution in the phantom. Image resolution in the MCNP model can be further increased by increasing the number of detectors and decreasing the raster step size. Conclusion: MCNP modelling was used to validate a prototype BSX imager and introduce the RSPD concept, allowing for selective object-plane imaging. There was very good visual agreement between the experimental and MCNP imaging. Beyond optimizing system parameters for the existing prototype, new geometries can be investigated for volumetric image acquisition in medical applications. This material is

  8. SU-C-209-05: Monte Carlo Model of a Prototype Backscatter X-Ray (BSX) Imager for Projective and Selective Object-Plane Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Rolison, L; Samant, S; Baciak, J; Jordan, K [University of Florida, Gainesville, FL (United States)

    2016-06-15

    Purpose: To develop a Monte Carlo N-Particle (MCNP) model for the validation of a prototype backscatter x-ray (BSX) imager, and optimization of BSX technology for medical applications, including selective object-plane imaging. Methods: BSX is an emerging technology that represents an alternative to conventional computed tomography (CT) and projective digital radiography (DR). It employs detectors located on the same side as the incident x-ray source, making use of backscatter and avoiding a ring geometry enclosing the imaged object. Current BSX imagers suffer from low spatial resolution. An MCNP model was designed to replicate a BSX prototype used for flaw detection in industrial materials. This prototype consisted of a 1.5mm diameter 60kVp pencil beam surrounded by a ring of four 5.0cm diameter NaI scintillation detectors. The imaging phantom consisted of a 2.9cm thick aluminum plate with five 0.6cm diameter holes drilled halfway. The experimental image was created using a raster scanning motion (in 1.5mm increments). Results: A qualitative comparison between the physical and simulated images showed very good agreement, with 1.5mm spatial resolution in the plane perpendicular to the incident x-ray beam. The MCNP model developed the concept of radiography by selective plane detection (RSPD) for BSX, whereby specific object planes can be imaged by varying kVp. 10keV increments in mean x-ray energy yielded 4mm thick slice resolution in the phantom. Image resolution in the MCNP model can be further increased by increasing the number of detectors and decreasing the raster step size. Conclusion: MCNP modelling was used to validate a prototype BSX imager and introduce the RSPD concept, allowing for selective object-plane imaging. There was very good visual agreement between the experimental and MCNP imaging. Beyond optimizing system parameters for the existing prototype, new geometries can be investigated for volumetric image acquisition in medical applications. This material is

  9. [Accuracy of morphological simulation for orthognathic surgery. Assessment of a 3D image fusion software].

    Science.gov (United States)

    Terzic, A; Schouman, T; Scolozzi, P

    2013-08-06

    The CT/CBCT data allows for 3D reconstruction of the skeletal and untextured soft tissue volume. 3D stereophotogrammetry technology has strongly improved the quality of facial soft tissue surface texture. The combination of these two technologies allows for an accurate and complete reconstruction. The 3D virtual head may be used for orthognathic surgical planning, virtual surgery, and morphological simulation obtained with software dedicated to the fusion of 3D photogrammetric and radiological images. The imaging material includes: a multi-slice CT scan or broad-field CBCT scan, and a 3D photogrammetric camera. The operative image processing protocol includes the following steps: 1) pre- and postoperative CT/CBCT scan and 3D photogrammetric image acquisition; 2) 3D image segmentation and fusion of the untextured CT/CBCT skin with the preoperative textured facial soft tissue surface of the 3D photogrammetric scan; 3) image fusion of the pre- and postoperative CT/CBCT data sets, virtual osteotomies, and 3D photogrammetric soft tissue virtual simulation; 4) fusion of the virtual simulated 3D photogrammetric and real postoperative images, and assessment of accuracy using a color-coded scale to measure the differences between the two surfaces. Copyright © 2013. Published by Elsevier Masson SAS.

  10. A GPU based high-resolution multilevel biomechanical head and neck model for validating deformable image registration

    Energy Technology Data Exchange (ETDEWEB)

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Qi, X.; Sheng, K.; Low, D. A.; Kupelian, P.; Santhanam, A. [Department of Radiation Oncology, University of California Los Angeles, 200 Medical Plaza, #B265, Los Angeles, California 90095 (United States); Staton, R.; Pukala, J.; Manon, R. [Department of Radiation Oncology, M.D. Anderson Cancer Center, Orlando, 1440 South Orange Avenue, Orlando, Florida 32808 (United States)

    2015-01-15

    Purpose: Validating the usage of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. Methods: The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as a rigid body, the muscle and soft tissue structures were modeled as mass-spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft tissue deformation for a given skeletal actuation was performed using an implicit Euler integration with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly. Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may
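
    The mass-spring-damper update can be sketched for a single voxel. For brevity, the step below uses semi-implicit (symplectic) Euler rather than the authors' implicit, two-substep GPU scheme, and all parameter values are illustrative.

```python
def spring_damper_step(x, v, k, c, m, x_rest, dt):
    """One semi-implicit Euler step for a mass-spring-damper element:
    update velocity from the current force, then position from the
    updated velocity (a cheap, stable stand-in for implicit stepping)."""
    f = -k * (x - x_rest) - c * v   # spring restoring force + damping
    v = v + dt * f / m
    x = x + dt * v
    return x, v

# Release a displaced element and let it settle back to its rest position.
x, v = 1.0, 0.0
for _ in range(2000):
    x, v = spring_damper_step(x, v, k=10.0, c=2.0, m=1.0, x_rest=0.0, dt=0.01)
```

    A full model couples many such elements through shared rest lengths and skeletal attachment points, so articulating the skeleton drags the soft tissue toward a new equilibrium, which is the posture-change mechanism the abstract describes.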

  11. A GPU based high-resolution multilevel biomechanical head and neck model for validating deformable image registration

    International Nuclear Information System (INIS)

    Neylon, J.; Qi, X.; Sheng, K.; Low, D. A.; Kupelian, P.; Santhanam, A.; Staton, R.; Pukala, J.; Manon, R.

    2015-01-01

    Purpose: Validating the usage of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. Methods: The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as a rigid body, the muscle and soft tissue structures were modeled as mass-spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft tissue deformation for a given skeletal actuation was performed using an implicit Euler integration with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly. Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may

  12. Fast scattering simulation tool for multi-energy x-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Sossin, A., E-mail: artur.sossin@cea.fr [CEA-LETI MINATEC Grenoble, F-38054 Grenoble (France); Tabary, J.; Rebuffel, V. [CEA-LETI MINATEC Grenoble, F-38054 Grenoble (France); Létang, J.M.; Freud, N. [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Claude Bernard Lyon 1, Centre Léon Bérard (France); Verger, L. [CEA-LETI MINATEC Grenoble, F-38054 Grenoble (France)

    2015-12-01

    A combination of Monte Carlo (MC) and deterministic approaches was employed to create a simulation tool capable of providing energy-resolved x-ray primary and scatter images within a reasonable time. Libraries of Sindbad, a previously developed x-ray simulation software, were used in the development. The scatter simulation capabilities of the tool were validated against GATE simulations and against measurements with a spectrometric CdTe detector. A simple cylindrical phantom with cavities and an aluminum insert was used. Cross-validation with GATE showed good agreement, with a global spatial error of 1.5% and a maximum scatter spectrum error of around 6%. Experimental validation also supported the accuracy of the simulations obtained from the developed software, with a global spatial error of 1.8% and a maximum error of around 8.5% in the scatter spectra.
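
    The percentage-error figures quoted above imply a normalized error metric. One common choice, RMSE normalized by the reference dynamic range, is sketched below; the paper's exact definition is not given, so this formula is an assumption.

```python
import numpy as np

def nrmse(simulated, reference):
    """Normalized root-mean-square error between a simulated and a reference
    profile, expressed as a percentage of the reference dynamic range."""
    simulated = np.asarray(simulated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((simulated - reference) ** 2))
    span = reference.max() - reference.min()
    return 100.0 * rmse / span

ref = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
sim = ref + np.array([0.1, -0.1, 0.1, -0.1, 0.1, -0.1])
err = nrmse(sim, ref)   # 100 * 0.1 / 10 = 1.0 (%)
```

    Normalizing by the dynamic range makes errors comparable across profiles with very different absolute scatter levels.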

  13. Recent developments in imaging system assessment methodology, FROC analysis and the search model.

    Science.gov (United States)

    Chakraborty, Dev P

    2011-08-21

    A frequent problem in imaging is assessing whether a new imaging system is an improvement over an existing standard. Observer performance methods, in particular the receiver operating characteristic (ROC) paradigm, are widely used in this context. In ROC analysis lesion location information is not used and consequently scoring ambiguities can arise in tasks, such as nodule detection, involving finding localized lesions. This paper reviews progress in the free-response ROC (FROC) paradigm in which the observer marks and rates suspicious regions and the location information is used to determine whether lesions were correctly localized. Reviewed are FROC data analysis, a search model for simulating FROC data, predictions of the model and a method for estimating the parameters. The search model parameters are physically meaningful quantities that can guide system optimization.

  14. Recent developments in imaging system assessment methodology, FROC analysis and the search model

    International Nuclear Information System (INIS)

    Chakraborty, Dev P.

    2011-01-01

    A frequent problem in imaging is assessing whether a new imaging system is an improvement over an existing standard. Observer performance methods, in particular the receiver operating characteristic (ROC) paradigm, are widely used in this context. In ROC analysis lesion location information is not used and consequently scoring ambiguities can arise in tasks, such as nodule detection, involving finding localized lesions. This paper reviews progress in the free-response ROC (FROC) paradigm in which the observer marks and rates suspicious regions and the location information is used to determine whether lesions were correctly localized. Reviewed are FROC data analysis, a search model for simulating FROC data, predictions of the model and a method for estimating the parameters. The search model parameters are physically meaningful quantities that can guide system optimization.

  15. William, a voxel model of child anatomy from tomographic images for Monte Carlo dosimetry calculations

    International Nuclear Information System (INIS)

    Caon, M.

    2010-01-01

    Full text: Medical imaging provides two-dimensional pictures of the human internal anatomy from which may be constructed a three-dimensional model of organs and tissues suitable for calculation of dose from radiation. Diagnostic CT provides the greatest exposure to radiation per examination and the frequency of CT examination is high. Estimates of dose from diagnostic radiography are still determined from data derived from geometric models (rather than anatomical models), models scaled from adult bodies (rather than bodies of children) and CT scanner hardware that is no longer used. The aim of anatomical modelling is to produce a mathematical representation of internal anatomy that has organs of realistic size, shape and positioning. The organs and tissues are represented by a great many cuboidal volumes (voxels). The conversion of medical images to voxels is called segmentation and on completion every pixel in an image is assigned to a tissue or organ. Segmentation is time consuming. An image processing package is used to identify organ boundaries in each image. Thirty to forty tomographic voxel models of anatomy have been reported in the literature. Each model is of an individual, or a composite from several individuals. Images of children are particularly scarce, so there remains a need for more paediatric anatomical models. I am working on segmenting 'William', who is 368 PET-CT images from head to toe of a seven-year-old boy. William will be used for Monte Carlo calculations of dose from CT examination using a simulated modern CT scanner.
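
    The segmentation step, assigning every pixel to a tissue or organ, can be caricatured by Hounsfield-unit thresholding. This is a toy sketch: the thresholds are illustrative, and real segmentation, as the abstract notes, requires an image processing package and manual identification of organ boundaries.

```python
import numpy as np

def segment_by_hu(slice_hu, bone_hu=300, soft_hu=-200):
    """Assign every pixel of a CT slice (in Hounsfield units) a tissue label:
    0 = air, 1 = soft tissue, 2 = bone.  Threshold values are assumptions."""
    labels = np.zeros(slice_hu.shape, dtype=np.uint8)
    labels[slice_hu >= soft_hu] = 1     # soft tissue and above
    labels[slice_hu >= bone_hu] = 2     # bone overrides soft tissue
    return labels

slice_hu = np.array([[-1000, -50],
                     [   40, 700]])    # air, fat, muscle, bone
labels = segment_by_hu(slice_hu)
```

    Stacking such labeled slices produces exactly the voxelized tissue map a Monte Carlo dose code consumes.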

  16. AOD trends during 2001-2010 from observations and model simulations

    Science.gov (United States)

    Pozzer, Andrea; de Meij, Alexander; Yoon, Jongmin; Astitha, Marina

    2016-04-01

    The trend of aerosol optical depth (AOD) between 2001 and 2010 is estimated globally and regionally from remotely sensed observations by the MODIS (Moderate Resolution Imaging Spectroradiometer), MISR (Multi-angle Imaging SpectroRadiometer) and SeaWiFS (Sea-viewing Wide Field-of-view Sensor) satellite sensors. The resulting trends have been compared to model results from the EMAC (ECHAM5/MESSy Atmospheric Chemistry) model {[1]}. Although interannual variability is applied only to anthropogenic and biomass-burning emissions, the model quantitatively reproduces the AOD trends observed by MODIS, while some discrepancies are found when compared to MISR and SeaWiFS. An additional numerical simulation with the same model was performed, neglecting any temporal change in the emissions, i.e. with no interannual variability for any emission source. It is shown that the decreasing AOD trends over the US and Europe are due to the decrease in (anthropogenic) emissions. In contrast, over the Sahara Desert and the Middle East region, meteorological/dynamical changes in the last decade play a major role in driving the AOD trends. Further, over Southeast Asia, changes in meteorology and in emissions are equally important in defining AOD trends {[2]}. Finally, decomposing the regional AOD trends into individual aerosol components reveals that the soluble components are the dominant contributors to the total AOD, as their influence on the total AOD is enhanced by the aerosol water content. {[1]}: Jöckel, P., Kerkweg, A., Pozzer, A., Sander, R., Tost, H., Riede, H., Baumgaertner, A., Gromov, S., and Kern, B.: Development cycle 2 of the Modular Earth Submodel System (MESSy2), Geosci. Model Dev., 3, 717-752, doi:10.5194/gmd-3-717-2010, 2010. {[2]}: Pozzer, A., de Meij, A., Yoon, J., Tost, H., Georgoulias, A. K., and Astitha, M.: AOD trends during 2001-2010 from observations and model simulations, Atmos. Chem. Phys., 15, 5521-5535, doi:10.5194/acp-15-5521-2015, 2015.
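A decadal AOD trend of the kind estimated above is, at its core, a least-squares linear fit to an annual-mean time series for each region or grid cell. A minimal sketch with synthetic data (the series and the -0.005 AOD/yr decline are illustrative, not values from the study):

```python
import numpy as np

def aod_trend(years, aod):
    """Least-squares linear trend (AOD units per year) of an annual-mean
    AOD time series, as commonly used for decadal trend estimates."""
    slope, _intercept = np.polyfit(years, aod, 1)
    return slope

years = np.arange(2001, 2011)
# Illustrative series with a built-in decline of 0.005 AOD/yr
# (qualitatively like the decrease seen over the US and Europe)
aod = 0.25 - 0.005 * (years - 2001)
print(aod_trend(years, aod))  # ~ -0.005
```

Applying the same fit per grid cell to both the observed and the simulated fields is what makes the model-versus-satellite trend comparison possible.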

  17. Improved identification of cranial nerves using paired-agent imaging: topical staining protocol optimization through experimentation and simulation

    Science.gov (United States)

    Torres, Veronica C.; Wilson, Todd; Staneviciute, Austeja; Byrne, Richard W.; Tichauer, Kenneth M.

    2018-03-01

    Skull base tumors are particularly difficult to visualize and access for surgeons because of the crowded environment and close proximity of vital structures, such as cranial nerves. As a result, accidental nerve damage is a significant concern and the likelihood of tumor recurrence is increased because of more conservative resections that attempt to avoid injuring these structures. In this study, a paired-agent imaging method with direct administration of fluorophores is applied to enhance cranial nerve identification. Here, a control imaging agent (ICG) accounts for non-specific uptake of the nerve-targeting agent (Oxazine 4), and ratiometric data analysis is employed to approximate binding potential (BP, a surrogate of targeted biomolecule concentration). For clinical relevance, animal experiments and simulations were conducted to identify parameters for an optimized stain and rinse protocol using the developed paired-agent method. Numerical methods were used to model the diffusive and kinetic behavior of the imaging agents in tissue, and simulation results revealed that there are various combinations of stain time and rinse number that provide improved contrast of cranial nerves, as suggested by optimal measures of BP and contrast-to-noise ratio.
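The ratiometric analysis mentioned above can be illustrated with the simple paired-agent approximation BP ≈ I_targeted / I_control - 1, which assumes both agents share the same non-specific uptake and delivery kinetics. This is a sketch of the general idea, not the study's actual pipeline; the array values are invented:

```python
import numpy as np

def binding_potential(targeted, control, eps=1e-9):
    """Ratiometric estimate of binding potential from paired-agent images:
    BP ~ I_targeted / I_control - 1, assuming the control agent captures
    all non-specific uptake of the targeted agent."""
    targeted = np.asarray(targeted, dtype=float)
    control = np.asarray(control, dtype=float)
    return targeted / np.maximum(control, eps) - 1.0

# Simulated pixel intensities: nerve pixels retain more of the
# nerve-targeting agent (Oxazine 4) than the control agent (ICG)
oxazine = np.array([2.0, 2.2, 1.0])  # nerve, nerve, background
icg = np.array([1.0, 1.1, 1.0])
print(binding_potential(oxazine, icg))  # [1. 1. 0.]
```

Pixels with BP near zero indicate purely non-specific uptake, so thresholding the BP map is one way to delineate nerve from background tissue.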

  18. Modelling the transport of optical photons in scintillation detectors for diagnostic and radiotherapy imaging

    Science.gov (United States)

    Roncali, Emilie; Mosleh-Shirazi, Mohammad Amin; Badano, Aldo

    2017-10-01

    Computational modelling of radiation transport can enhance the understanding of the relative importance of individual processes involved in imaging systems. Modelling is a powerful tool for improving detector designs in ways that are impractical or impossible to achieve through experimental measurements. Modelling of light transport in scintillation detectors used in radiology and radiotherapy imaging that rely on the detection of visible light plays an increasingly important role in detector design. Historically, researchers have invested heavily in modelling the transport of ionizing radiation while light transport is often ignored or coarsely modelled. Due to the complexity of existing light transport simulation tools and the breadth of custom codes developed by users, light transport studies are seldom fully exploited and have not reached their full potential. This topical review aims at providing an overview of the methods employed in freely available and other described optical Monte Carlo packages and analytical models and discussing their respective advantages and limitations. In particular, applications of optical transport modelling in nuclear medicine, diagnostic and radiotherapy imaging are described. A discussion on the evolution of these modelling tools into future developments and applications is presented. The authors declare equal leadership and contribution regarding this review.
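The core of the optical Monte Carlo packages surveyed above is a photon random walk: free paths are sampled from an exponential distribution, and at each interaction the photon is either absorbed or scattered. A deliberately minimal sketch (homogeneous medium, isotropic scattering, no boundaries or surface reflections, invented coefficients), far simpler than any of the full packages discussed:

```python
import math
import random

def photon_path_length(mu_abs=0.01, mu_scat=0.1, n_photons=10000, seed=1):
    """Minimal Monte Carlo sketch of optical photon transport: sample free
    paths from an exponential distribution and terminate each photon on
    absorption. Returns the mean total path length travelled before
    absorption (analytically 1/mu_abs for this simple model)."""
    rng = random.Random(seed)
    mu_total = mu_abs + mu_scat
    total = 0.0
    for _ in range(n_photons):
        path = 0.0
        absorbed = False
        while not absorbed:
            # Exponentially distributed free path (1 - u avoids log(0))
            path += -math.log(1.0 - rng.random()) / mu_total
            # Interaction type: absorption with probability mu_abs/mu_total
            absorbed = rng.random() < mu_abs / mu_total
        total += path
    return total / n_photons

# With mu_abs = 0.01, the mean path before absorption is 1/mu_abs = 100
print(photon_path_length())
```

Real scintillator codes add exactly the ingredients this sketch omits: anisotropic scattering phase functions, wavelength dependence, and refraction and reflection at crystal surfaces, which dominate light collection in practice.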

  19. Thermography During Thermal Test of the Gaia Deployable Sunshield Assembly Qualification Model in the ESTEC Large Space Simulator

    Science.gov (United States)

    Simpson, R.; Broussely, M.; Edwards, G.; Robinson, D.; Cozzani, A.; Casarosa, G.

    2012-07-01

    The National Physical Laboratory (NPL) and the European Space Research and Technology Centre (ESTEC) have, for the first time, performed successful surface temperature measurements using infrared thermal imaging in the ESTEC Large Space Simulator (LSS) under vacuum, with the Sun Simulator (SUSI) switched on, during thermal qualification tests of the Gaia Deployable Sunshield Assembly (DSA). The thermal imager temperature measurements, with radiosity model corrections, show good agreement with thermocouple readings on well-characterised regions of the spacecraft. In addition, the thermal imaging measurements identified potentially misleading thermocouple temperature readings and provided qualitative real-time observations of the thermal and spatial evolution of surface structure changes and heat dissipation during hot test loadings, which may yield additional thermal and physical measurement information through further research.
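The simplest form of the radiometric correction underlying such measurements treats the surface as a greybody: the imager sees emitted plus reflected radiance, and the true temperature is recovered by subtracting the reflected background term. This broadband T^4 sketch is an assumption-laden simplification (real imagers work with band radiance, and NPL's radiosity model is far more detailed):

```python
# Greybody correction sketch: apparent imager temperature -> surface temperature
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def true_temperature(t_apparent_k, emissivity, t_reflected_k):
    """Correct an apparent (blackbody-equivalent) temperature for surface
    emissivity and reflected background radiation, assuming a greybody
    surface and broadband detection."""
    measured = SIGMA * t_apparent_k ** 4                        # what the imager sees
    reflected = (1.0 - emissivity) * SIGMA * t_reflected_k ** 4  # background term
    return ((measured - reflected) / (emissivity * SIGMA)) ** 0.25

# Example: a 300 K apparent reading on an emissivity-0.9 surface facing a
# 100 K cold background (e.g. a vacuum-chamber shroud) underestimates the
# true surface temperature by several kelvin
print(true_temperature(300.0, 0.9, 100.0))
```

In a cold vacuum chamber the reflected term is small, so the correction is dominated by the 1/emissivity factor; uncertainty in emissivity is then the leading error source.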

  20. Model-based T{sub 2} relaxometry using undersampled magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Sumpf, Tilman

    2013-11-01

    T{sub 2} relaxometry refers to the quantitative determination of spin-spin relaxation times in magnetic resonance imaging (MRI). Particularly in clinical diagnostics, the method provides important information about tissue structures and respective pathologic alterations. Unfortunately, it also requires comparatively long measurement times, which preclude widespread practical application. To overcome this limitation, a so-called model-based reconstruction concept has recently been proposed. The method allows for the estimation of spin-density and T{sub 2} parameter maps from only a fraction of the usually required data. So far, promising results have been reported for a radial data acquisition scheme. However, for technical reasons, radial imaging is available on only a limited number of MRI systems. The present work deals with the realization and evaluation of different model-based T{sub 2} reconstruction methods that are applicable to the most widely available Cartesian (rectilinear) acquisition scheme. The initial implementation is based on the conventional assumption of a mono-exponential T{sub 2} signal decay. A suitable sampling scheme as well as an automatic scaling procedure are developed, which remove the necessity of manual parameter tuning. As demonstrated for human brain MRI data, the technique allows for a more than 5-fold acceleration of the underlying data acquisition. Furthermore, general limitations and specific error sources are identified, and suitable simulation programs are developed for their detailed analysis. In addition to phase variations in image space, the simulations reveal truncation effects as a relevant cause of reconstruction artifacts. To reduce the latter, an alternative model formulation is developed and tested. For noise-free simulated data, the method yields an almost complete suppression of associated artifacts. 
Residual problems in the reconstruction of experimental MRI data point to the predominant influence of other