WorldWideScience

Sample records for benchmark dose method

  1. Introduction to benchmark dose methods and U.S. EPA's benchmark dose software (BMDS) version 2.1.1

    International Nuclear Information System (INIS)

    Davis, J. Allen; Gift, Jeffrey S.; Zhao, Q. Jay

    2011-01-01

    Traditionally, the No-Observed-Adverse-Effect-Level (NOAEL) approach has been used to determine the point of departure (POD) from animal toxicology data for use in human health risk assessments. However, this approach is subject to substantial limitations that have been well defined, such as strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. Also, the NOAEL approach fails to take into consideration the shape of the dose-response curve and other related information. The benchmark dose (BMD) method, originally proposed as an alternative to the NOAEL methodology in the 1980s, addresses many of the limitations of the NOAEL method. It is less dependent on dose selection and spacing, and it takes into account the shape of the dose-response curve. In addition, the estimation of a 95% lower confidence limit on the BMD (the BMDL) results in a POD that appropriately accounts for study quality (i.e., sample size). With the recent advent of user-friendly BMD software programs, including the U.S. Environmental Protection Agency's (U.S. EPA) Benchmark Dose Software (BMDS), BMD has become the method of choice for many health organizations worldwide. This paper discusses the BMD methods and corresponding software (i.e., BMDS version 2.1.1) that have been developed by the U.S. EPA, and includes a comparison with recently released European Food Safety Authority (EFSA) BMD guidance.
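
    To make the contrast with the NOAEL approach concrete, the following Python sketch fits a logistic model to hypothetical quantal data and computes the BMD for 10% extra risk, using a parametric bootstrap as a stand-in for the profile-likelihood BMDL that BMDS reports; the data, the model choice, and the bootstrap shortcut are all illustrative assumptions, not EPA's implementation.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit, logit

      doses  = np.array([0.0, 10.0, 30.0, 100.0])   # hypothetical quantal study
      n      = np.array([50, 50, 50, 50])            # animals per dose group
      events = np.array([2, 5, 14, 30])              # animals responding

      def nll(theta, e):                 # binomial negative log-likelihood
          a, b = theta
          p = np.clip(expit(a + b * doses), 1e-9, 1 - 1e-9)
          return -np.sum(e * np.log(p) + (n - e) * np.log(1 - p))

      def bmd(theta, bmr=0.10):          # dose giving 10% extra risk
          a, b = theta
          p0 = expit(a)
          return (logit(p0 + bmr * (1 - p0)) - a) / b

      fit = minimize(nll, [-3.0, 0.02], args=(events,), method="Nelder-Mead")
      print("BMD10:", bmd(fit.x))

      rng, bmds = np.random.default_rng(1), []
      p_hat = expit(fit.x[0] + fit.x[1] * doses)
      for _ in range(500):               # parametric bootstrap for the BMDL
          y = rng.binomial(n, p_hat)
          bmds.append(bmd(minimize(nll, fit.x, args=(y,), method="Nelder-Mead").x))
      print("BMDL (5th percentile):", np.percentile(bmds, 5))

    Because the BMDL tightens as sample size grows, the point of departure automatically rewards better-powered studies, which is the property the abstract highlights.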

  2. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods in EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...

  3. Effects of exposure imprecision on estimation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2004-01-01

    In regression analysis failure to adjust for imprecision in the exposure variable is likely to lead to underestimation of the exposure effect. However, the consequences of exposure error for determination of safe doses of toxic substances have so far not received much attention. The benchmark...... approach is one of the most widely used methods for development of exposure limits. An important advantage of this approach is that it can be applied to observational data. However, in this type of data, exposure markers are seldom measured without error. It is shown that, if the exposure error is ignored......, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated in data from an epidemiological study...
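
    The attenuation effect described above is easy to reproduce. This toy Python simulation (assumed linear dose-effect model and error variances, not the authors' data) shows classical measurement error flattening the estimated slope and inflating the benchmark dose toward less protective values:

      import numpy as np

      rng = np.random.default_rng(0)
      n, beta, bmr = 5000, 0.8, 1.0            # true slope; BMR = 1-unit response change
      x = rng.uniform(0, 10, n)                # true exposure
      y = beta * x + rng.normal(0, 1, n)       # response
      w = x + rng.normal(0, 2, n)              # error-prone exposure marker

      slope_true = np.polyfit(x, y, 1)[0]
      slope_obs  = np.polyfit(w, y, 1)[0]      # attenuated by var(x)/(var(x)+var(error))
      print("BMD from true exposure:     ", bmr / slope_true)   # ~1.25
      print("BMD ignoring exposure error:", bmr / slope_obs)    # biased upward (~1.85)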

  4. Benchmark dose (BMD) modeling: current practice, issues, and challenges.

    Science.gov (United States)

    Haber, Lynne T; Dourson, Michael L; Allen, Bruce C; Hertzberg, Richard C; Parker, Ann; Vincent, Melissa J; Maier, Andrew; Boobis, Alan R

    2018-03-08

    Benchmark dose (BMD) modeling is now the state of the science for determining the point of departure for risk assessment. Key advantages include the fact that the modeling takes account of all of the data for a particular effect from a particular experiment, increased consistency, and better accounting for statistical uncertainties. Despite these strong advantages, disagreements remain as to several specific aspects of the modeling, including differences in the recommendations of the US Environmental Protection Agency (US EPA) and the European Food Safety Authority (EFSA). Differences exist in the choice of the benchmark response (BMR) for continuous data, the use of unrestricted models, and the mathematical models used; these can lead to differences in the final BMDL. It is important to take confidence in the model into account in choosing the BMDL, rather than simply choosing the lowest value. The field is moving in the direction of model averaging, which will avoid many of the challenges of choosing a single best model when the underlying biology does not suggest one, but additional research would be useful into methods of incorporating biological considerations into the weights used in the averaging. Additional research is also needed regarding the interplay between the BMR and the UF to ensure appropriate use for studies supporting a lower BMR than default values, such as for epidemiology data. Addressing these issues will aid in harmonizing methods and moving the field of risk assessment forward.

  5. Benchmark Dose Calculations for Methylmercury-Associated Delays on Evoked Potential Latencies in Two Cohorts of Children

    DEFF Research Database (Denmark)

    Murata, Katsuyuki; Budtz-Jørgensen, Esben; Grandjean, Philippe

    2002-01-01

    Methylmercury; benchmark dose; brainstem auditory evoked potentials; neurotoxicity; human health risk assessment...

  6. A Web-Based System for Bayesian Benchmark Dose Estimation.

    Science.gov (United States)

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

    Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations. The BBMD system is a useful alternative tool for estimating BMD, with additional functionalities for BMD analysis based on the most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend toward probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
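
    The core mechanics can be illustrated with a hand-rolled Metropolis sampler on hypothetical quantal data: every posterior draw of the model parameters is transformed into a BMD, and the BMDL is read off the resulting distribution. The BBMD system itself uses different models, priors, and samplers; this is only a sketch of the idea.

      import numpy as np
      from scipy.special import expit, logit

      doses  = np.array([0.0, 10.0, 30.0, 100.0])   # hypothetical study
      n      = np.array([50, 50, 50, 50])
      events = np.array([2, 5, 14, 30])

      def log_post(theta):
          a, b = theta
          if b <= 0:                               # enforce an increasing dose-response
              return -np.inf
          p = np.clip(expit(a + b * doses), 1e-9, 1 - 1e-9)
          loglik = np.sum(events * np.log(p) + (n - events) * np.log(1 - p))
          return loglik - 0.5 * (a**2 + b**2) / 100   # vague normal priors

      rng, theta, chain = np.random.default_rng(2), np.array([-3.0, 0.02]), []
      lp = log_post(theta)
      for _ in range(20000):                       # Metropolis random walk
          prop = theta + rng.normal(0, [0.1, 0.002])
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          chain.append(theta)

      def bmd(a, b, bmr=0.10):                     # extra-risk BMD, logistic model
          p0 = expit(a)
          return (logit(p0 + bmr * (1 - p0)) - a) / b

      draws = np.array(chain[5000:])               # discard burn-in
      bmd_draws = bmd(draws[:, 0], draws[:, 1])
      print("posterior median BMD:", np.median(bmd_draws))
      print("BMDL (5th percentile):", np.percentile(bmd_draws, 5))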

  7. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model was carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed a complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal), attenuation and density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high-fidelity anisotropic modelling was performed using a state-of-the-art anisotropic anelastic modelling code, the coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events, with three-component seismograms recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10,500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect (both horizontal and vertical), the inaccuracy of the amplitude of isotropic S-wave velocity variation, and the difficulty of retrieving the magnitude of azimuthal anisotropy.

  8. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...... continuous and quantal data, facilitating benchmark dose estimation in general for a wide range of candidate models commonly used in toxicology. Moreover, the proposed framework provides a convenient means for extending benchmark dose concepts through the use of model averaging and random effects modeling...... provides slightly conservative, yet useful, estimates of benchmark dose lower limit under realistic scenarios....
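
    A minimal instance of the model-averaging part of the framework, on hypothetical continuous data (the authors' estimator is more general and also covers random effects): fit two candidate mean functions, define the BMD as the dose producing a 10% change in the background mean, and combine the per-model BMDs with AIC weights.

      import numpy as np
      from scipy.optimize import curve_fit

      dose = np.repeat([0.0, 1.0, 3.0, 10.0], 10)       # hypothetical design
      rng = np.random.default_rng(3)
      resp = 100 * np.exp(0.03 * dose) + rng.normal(0, 4, dose.size)

      def f_lin(d, a, b): return a * (1 + b * d)
      def f_exp(d, a, b): return a * np.exp(b * d)

      def fit(f):                                       # least squares + AIC
          popt, _ = curve_fit(f, dose, resp, p0=[100.0, 0.01])
          rss = np.sum((resp - f(dose, *popt)) ** 2)
          return popt, dose.size * np.log(rss / dose.size) + 2 * 2

      (pl, aic_lin), (pe, aic_exp) = fit(f_lin), fit(f_exp)
      bmd_lin = 0.10 / pl[1]                            # a(1 + b*BMD) = 1.1a
      bmd_exp = np.log(1.10) / pe[1]                    # a*exp(b*BMD) = 1.1a
      delta = np.array([aic_lin, aic_exp])
      w = np.exp(-(delta - delta.min()) / 2); w /= w.sum()
      print("model-averaged BMD:", w[0] * bmd_lin + w[1] * bmd_exp)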

  9. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These, along with regulatory experience, suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly.

  10. A web application for evaluating Phase I methods using a non-parametric optimal benchmark.

    Science.gov (United States)

    Wages, Nolan A; Varhegyi, Nikole

    2017-10-01

    In evaluating the performance of Phase I dose-finding designs, simulation studies are typically conducted to assess how often a method correctly selects the true maximum tolerated dose under a set of assumed dose-toxicity curves. A necessary component of the evaluation process is to have some concept for how well a design can possibly perform. The notion of an upper bound on the accuracy of maximum tolerated dose selection is often omitted from the simulation study, and the aim of this work is to provide researchers with accessible software to quickly evaluate the operating characteristics of Phase I methods using a benchmark. The non-parametric optimal benchmark is a useful theoretical tool for simulations that can serve as an upper limit for the accuracy of maximum tolerated dose identification based on a binary toxicity endpoint. It offers researchers a sense of the plausibility of a Phase I method's operating characteristics in simulation. We have developed an R shiny web application for simulating the benchmark. The web application has the ability to quickly provide simulation results for the benchmark and requires no programming knowledge. The application is free to access and use on any device with an Internet browser. The application provides the percentage of correct selection of the maximum tolerated dose and an accuracy index, operating characteristics typically used in evaluating the accuracy of dose-finding designs. We hope this software will facilitate the use of the non-parametric optimal benchmark as an evaluation tool in dose-finding simulation.
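
    A compact version of the benchmark itself can be simulated directly. Each virtual patient carries a latent uniform "tolerance", so under complete information their toxicity outcome is known at every dose level; the benchmark then selects the dose whose toxicity estimate is closest to the target. The scenario below is an arbitrary assumption, not from the paper.

      import numpy as np

      true_tox = np.array([0.05, 0.12, 0.25, 0.40, 0.55])   # assumed dose-toxicity curve
      target, n, trials = 0.25, 24, 10000
      mtd = np.argmin(np.abs(true_tox - target))             # true MTD (index 2 here)

      rng, correct = np.random.default_rng(4), 0
      for _ in range(trials):
          u = rng.uniform(size=n)                            # latent patient tolerances
          p_hat = (u[:, None] <= true_tox).mean(axis=0)      # outcome known at every dose
          correct += np.argmin(np.abs(p_hat - target)) == mtd
      print("benchmark percent correct selection:", correct / trials)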

  11. BMDExpress: a software tool for the benchmark dose analyses of genomic data

    Directory of Open Access Journals (Sweden)

    Thomas Russell S

    2007-10-01

    Background: Dose-dependent processes are common within biological systems and include phenotypic changes following exposures to both endogenous and xenobiotic molecules. The use of microarray technology to explore the molecular signals that underlie these dose-dependent processes has become increasingly common; however, the number of software tools for quantitatively analyzing and interpreting dose-response microarray data has been limited. Results: We have developed BMDExpress, a Java application that combines traditional benchmark dose methods with gene ontology classification in the analysis of dose-response data from microarray experiments. The software application is designed to perform a stepwise analysis beginning with a one-way analysis of variance to identify the subset of genes that demonstrate significant dose-response behavior. The second step of the analysis involves fitting the gene expression data to a selection of standard statistical models (linear, 2° polynomial, 3° polynomial, and power models) and selecting the model that best describes the data with the least amount of complexity. The model is then used to estimate the benchmark dose at which the expression of the gene significantly deviates from that observed in control animals. Finally, the software application summarizes the statistical modeling results by matching each gene to its corresponding gene ontology categories and calculating summary values that characterize the dose-dependent behavior for each biological process and molecular function. As a result, the summary values represent the dose levels at which genes in the corresponding cellular process show transcriptional changes. Conclusion: The application of microarray technology together with the BMDExpress software tool represents a useful combination in characterizing dose-dependent transcriptional changes in biological systems. The software allows users to efficiently analyze large dose-response microarray studies...
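
    A condensed sketch of that stepwise flow for a single hypothetical gene (ANOVA prefilter, a straight-line fit standing in for the full model family, and a BMD at a one-standard-deviation departure from control; model selection and the gene-ontology summaries are omitted):

      import numpy as np
      from scipy.stats import f_oneway

      doses = np.array([0.0, 0.0, 1.0, 1.0, 3.0, 3.0, 10.0, 10.0])
      expr  = np.array([5.1, 4.9, 5.3, 5.2, 5.8, 5.6, 6.7, 6.9])  # log2 expression

      groups = [expr[doses == d] for d in np.unique(doses)]
      if f_oneway(*groups).pvalue < 0.05:          # step 1: dose-response filter
          b, a = np.polyfit(doses, expr, 1)        # step 2: fit the linear model
          resid_sd = np.std(expr - (a + b * doses), ddof=2)
          print("gene passes; BMD for a 1-SD change:", resid_sd / abs(b))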

  12. Quality Assurance Testing of Version 1.3 of U.S. EPA Benchmark Dose Software (Presentation)

    Science.gov (United States)

    EPA's benchmark dose software (BMDS) is used to evaluate chemical dose-response data in support of Agency risk assessments, and must therefore be dependable. Quality assurance testing methods developed for BMDS were designed to assess model dependability with respect to curve-fitt...

  13. BENCHMARK DOSES FOR CHEMICAL MIXTURES: EVALUATION OF A MIXTURE OF 18 PHAHS.

    Science.gov (United States)

    Benchmark doses (BMDs), defined as doses of a substance that are expected to result in a pre-specified level of "benchmark" response (BMR), have been used for quantifying the risk associated with exposure to environmental hazards. The lower confidence limit of the BMD is used as...
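
    The BMR can be specified as "added" or "extra" risk, and the two definitions give different BMDs. A small Python illustration with an assumed (not the paper's) fitted dose-response curve:

      import numpy as np
      from scipy.optimize import brentq

      def P(d):                          # hypothetical fitted quantal model
          return 0.05 + 0.95 * (1 - np.exp(-0.02 * d))

      bmr, p0 = 0.10, P(0.0)
      bmd_added = brentq(lambda d: (P(d) - p0) - bmr, 1e-9, 1e4)
      bmd_extra = brentq(lambda d: (P(d) - p0) / (1 - p0) - bmr, 1e-9, 1e4)
      print(bmd_added, bmd_extra)        # extra risk reaches the BMR at a lower dose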

  14. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process count. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than that of matrix-matrix multiplication. In fact, the FMM can reduce the required byte/flop ratio to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning for certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and are therefore strongly implementation- and hardware-dependent.

  15. Benchmarking HRA methods against different NPP simulator data

    International Nuclear Information System (INIS)

    Petkov, Gueorgui; Filipov, Kalin; Velev, Vladimir; Grigorov, Alexander; Popov, Dimiter; Lazarov, Lazar; Stoichev, Kosta

    2008-01-01

    The paper presents both international and Bulgarian experience in assessing HRA methods and the underlying models and approaches for their validation and verification by benchmarking HRA methods against different NPP simulator data. The organization, status, methodology and outlooks of the studies are described.

  16. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Roč. 8, č. 5 (2015), s. 2240-2248 ISSN 1939-1404 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation Subject RIV: BD - Theory of Information Impact factor: 2.145, year: 2015 http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  17. Issues in benchmarking human reliability analysis methods: A literature review

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Hendrickson, Stacey M.L.; Forester, John A.; Tran, Tuan Q.; Lois, Erasmia

    2010-01-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  18. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  19. Standardizing Benchmark Dose Calculations to Improve Science-Based Decisions in Human Health Assessments

    Science.gov (United States)

    Wignall, Jessica A.; Shapiro, Andrew J.; Wright, Fred A.; Woodruff, Tracey J.; Chiu, Weihsueh A.; Guyton, Kathryn Z.

    2014-01-01

    Background: Benchmark dose (BMD) modeling computes the dose associated with a prespecified response level. While offering advantages over traditional points of departure (PODs), such as no-observed-adverse-effect-levels (NOAELs), BMD methods have lacked consistency and transparency in application, interpretation, and reporting in human health assessments of chemicals. Objectives: We aimed to apply a standardized process for conducting BMD modeling to reduce inconsistencies in model fitting and selection. Methods: We evaluated 880 dose–response data sets for 352 environmental chemicals with existing human health assessments. We calculated benchmark doses and their lower limits [10% extra risk, or change in the mean equal to 1 SD (BMD/L10/1SD)] for each chemical in a standardized way with prespecified criteria for model fit acceptance. We identified study design features associated with acceptable model fits. Results: We derived values for 255 (72%) of the chemicals. Batch-calculated BMD/L10/1SD values were significantly and highly correlated (R2 of 0.95 and 0.83, respectively, n = 42) with PODs previously used in human health assessments, with values similar to reported NOAELs. Specifically, the median ratio of BMDs10/1SD:NOAELs was 1.96, and the median ratio of BMDLs10/1SD:NOAELs was 0.89. We also observed a significant trend of increasing model viability with increasing number of dose groups. Conclusions: BMD/L10/1SD values can be calculated in a standardized way for use in health assessments on a large number of chemicals and critical effects. This facilitates the exploration of health effects across multiple studies of a given chemical, or comparisons among chemicals, with greater transparency and efficiency than current approaches. Citation: Wignall JA, Shapiro AJ, Wright FA, Woodruff TJ, Chiu WA, Guyton KZ, Rusyn I. 2014. Standardizing benchmark dose calculations to improve science-based decisions in human health assessments. Environ Health
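
    The essence of the standardized batch workflow is that every data set gets the same model family, the same BMR, and the same prespecified acceptance rule, with failures recorded rather than hand-tuned. A sketch under those assumptions (hypothetical data sets, a single logistic model, and a Pearson chi-square fit test in place of the full BMDS criteria):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit, logit
      from scipy.stats import chi2

      datasets = {   # name: (doses, group sizes, responders) -- hypothetical
          "chem_A": (np.array([0., 10., 30., 100.]), np.array([50] * 4), np.array([1, 4, 12, 28])),
          "chem_B": (np.array([0., 5., 25., 75.]), np.array([40] * 4), np.array([2, 3, 10, 22])),
      }

      def fit_bmd(doses, n, events, bmr=0.10):
          def nll(t):
              p = np.clip(expit(t[0] + t[1] * doses), 1e-9, 1 - 1e-9)
              return -np.sum(events * np.log(p) + (n - events) * np.log(1 - p))
          t = minimize(nll, [-3.0, 0.02], method="Nelder-Mead").x
          p = expit(t[0] + t[1] * doses)
          pearson = np.sum((events - n * p) ** 2 / (n * p * (1 - p)))
          gof_p = chi2.sf(pearson, df=len(doses) - 2)      # fit-acceptance statistic
          p0 = expit(t[0])
          return (logit(p0 + bmr * (1 - p0)) - t[0]) / t[1], gof_p

      for name, (d, n, e) in datasets.items():
          bmd, gof = fit_bmd(d, n, e)
          print(name, f"BMD10 = {bmd:.1f}" if gof > 0.1 else "rejected (poor fit)")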

  20. Benchmarking: a method for continuous quality improvement in health.

    Science.gov (United States)

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  1. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  2. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Background: We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring application of their methods to biology. Results: Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular and cellular to tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods, and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion: This online benchmark will facilitate integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.

  3. Benchmarking of methods for genomic taxonomy

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Cosentino, Salvatore; Lukjancenko, Oksana

    2014-01-01

    ; (ii) Reads2Type that searches for species-specific 50-mers in either the 16S rRNA gene or the gyrB gene (for the Enterobacteriaceae family); (iii) the ribosomal multilocus sequence typing (rMLST) method that samples up to 53 ribosomal genes; (iv) TaxonomyFinder, which is based on species......-specific functional protein domain profiles; and finally (v) KmerFinder, which examines the number of co-occurring k-mers (substrings of k nucleotides in DNA sequence data). The performances of the methods were subsequently evaluated on three data sets of short sequence reads or draft genomes from public databases....... In total, the evaluation sets constituted sequence data from more than 11,000 isolates covering 159 genera and 243 species. Our results indicate that methods that sample only chromosomal, core genes have difficulties in distinguishing closely related species which only recently diverged. The Kmer...

  4. Synthetic Dataset To Benchmark Global Tomographic Methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul

    2006-11-01

    A new set of global synthetic seismograms calculated in a three-dimensional (3-D), heterogeneous, anisotropic, anelastic model of the Earth using the spectral element method has been released by the European network SPICE (Seismic Wave Propagation and Imaging in Complex Media: a European Network). The set consists of 7424 three-component records with a minimum period of 32 seconds, a sampling rate of one second, and a duration of 10,500 seconds. The aim of this synthetic data set is to conduct a blind test of existing global tomographic methods based on long-period data, in order to test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage.

  5. [Impacts of experimental design on the benchmark dose for acetylcholinesterase inhibition by dimethoate].

    Science.gov (United States)

    He, Xiansong; Li, Tingting; Yi, Nannan; Wu, Hui; Zhao, Minxian; Yao, Xinya; Wang, Cannan

    2013-11-01

    The aim was to assess the impact of experimental design on the benchmark dose (BMD), and to test against experimental data the conclusion from Slob's computer simulations that the optimal design for BMD calculation, for a fixed sample size, adds dose groups while reducing the number of animals per group, so that this approach can be applied more widely. Eighty adult female SD rats were given dimethoate by gavage at 0.5, 1, 2, 4, 8, 16 and 32 mg/kg for 21 d. Rats were then sacrificed, and acetylcholinesterase (AChE) activity in the hippocampus, cerebral cortex and serum was determined. The software package PROAST 28.1 was applied to calculate the BMD. Designs with four dose groups of 10 animals (4 x 10) and with eight dose groups of 5 animals (8 x 5) were drawn from the full 8 x 10 design to study the impact of experimental design on the BMD. Compared with the normal control, a significant decline of AChE in the hippocampus was observed in the 2, 4, 8, 16 and 32 mg/kg groups (P < 0.05). Taking the 8 x 10 design as the standard, the confidence intervals of the BMD calculated from both the 4 x 10 and the 8 x 5 designs covered the BMD from the 8 x 10 design. The confidence intervals of the BMD calculated from design schemes 1, 2, 3, 4 and 6 of the 4 x 10 design were wider than that of the 8 x 5 design, while scheme 5 was narrower. Adding dose groups for a fixed total sample size was therefore the optimal approach for calculating the BMD, even though it differs from the conventional toxicity study design (e.g., four groups: control, low-dose, moderate-dose and high-dose).

  6. Immunotoxicity of perfluorinated alkylates: calculation of benchmark doses based on serum concentrations in children

    DEFF Research Database (Denmark)

    Grandjean, Philippe; Budtz-Joergensen, Esben

    2013-01-01

    follow-up of a Faroese birth cohort were used. Serum-PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at age 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children...

  7. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study

    Energy Technology Data Exchange (ETDEWEB)

    Bondt, Timo de; Parizel, Paul M. [Antwerp University Hospital and University of Antwerp, Department of Radiology, Antwerp (Belgium); Mulkens, Tom [H. Hart Hospital, Department of Radiology, Lier (Belgium); Zanca, Federica [GE Healthcare, DoseWatch, Buc (France); KU Leuven, Imaging and Pathology Department, Leuven (Belgium); Pyfferoen, Lotte; Casselman, Jan W. [AZ St. Jan Brugge-Oostende AV Hospital, Department of Radiology, Brugge (Belgium)

    2017-02-15

    To benchmark regional standard practice for paediatric cranial CT-procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT-data were retrospectively collected during a 1-year period, in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, amount of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT-procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference level (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. The hospital with lowest dose levels showed smallest dose variability and used age-stratified protocols for standardizing paediatric head exams. Erroneous selection of adult protocols for children still occurred, mostly in the oldest age-group. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. (orig.)

  8. BENCHMARKING UPGRADED HOTSPOT DOSE CALCULATIONS AGAINST MACCS2 RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Brotherton, Kevin

    2009-04-30

    The radiological consequence of interest for a documented safety analysis (DSA) is the centerline Total Effective Dose Equivalent (TEDE) incurred by the Maximally Exposed Offsite Individual (MOI) evaluated at the 95th percentile consequence level. An upgraded version of HotSpot (Version 2.07) has been developed with the capabilities to read site meteorological data and perform the necessary statistical calculations to determine the 95th percentile consequence result. These capabilities should allow HotSpot to join MACCS2 (Version 1.13.1) and GENII (Version 1.485) as radiological consequence toolbox codes in the Department of Energy (DOE) Safety Software Central Registry. Using the same meteorological data file, scenarios involving a one curie release of 239Pu were modeled in both HotSpot and MACCS2. Several sets of release conditions were modeled, and the results compared. In each case, input parameter specifications for each code were chosen to match one another as much as the codes would allow. The results from the two codes are in excellent agreement. Slight differences observed in results are explained by algorithm differences.

  9. Simulation of remanent dose rates and benchmark measurements at the CERN-EU high energy reference field facility

    CERN Document Server

    Roesler, S; Donjoux, Y; Mitaroff, Angela

    2003-01-01

    A new approach is presented for the calculation of remanent dose rates from induced radioactivity with the FLUKA Monte-Carlo code. It is based on an explicit calculation of isotope production followed by the transport of photons, positrons, and electrons from the radioactive decay to the point of interest. The approach is benchmarked with a measurement in which samples of different materials were irradiated by the stray radiation field produced by interactions of high-energy hadrons in a copper target. Remanent dose rates were measured at different cooling times with a NaI scintillator-based survey instrument. The results of the simulations are generally in good agreement with the measurements. The method is applied to the prediction of remanent dose rates around the beam cleaning insertions of the LHC. 10 Refs.

  10. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study.

    Science.gov (United States)

    De Bondt, Timo; Mulkens, Tom; Zanca, Federica; Pyfferoen, Lotte; Casselman, Jan W; Parizel, Paul M

    2017-02-01

    To benchmark regional standard practice for paediatric cranial CT-procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT-data were retrospectively collected during a 1-year period, in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, amount of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT-procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference level (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. • Significant differences were observed in the delivered dose between age-groups and hospitals. • Using age-adapted scanning protocols gives a nearly linear dose increase. • Sharing dose-data can be a trigger for hospitals to reduce dose levels.

  11. Influence of Distribution of Animals between Dose Groups on Estimated Benchmark Dose and Animal Welfare for Continuous Effects.

    Science.gov (United States)

    Ringblom, Joakim; Kalantari, Fereshteh; Johanson, Gunnar; Öberg, Mattias

    2017-10-30

    The benchmark dose (BMD) approach is increasingly used as a preferred approach for dose-effect analysis, but standard experimental designs are generally not optimized for BMD analysis. The aim of this study was to evaluate how the use of unequally sized dose groups affects the quality of BMD estimates in toxicity testing, with special consideration of the total burden of animal distress. We generated continuous dose-effect data by Monte Carlo simulation using two dose-effect curves based on endpoints with different shape parameters. Eighty-five designs, each with four dose groups of unequal size, were examined in scenarios ranging from low- to high-dose placements and with a total number of animals set to 40, 80, or 200. For each simulation, a BMD value was estimated and compared with the "true" BMD. In general, redistribution of animals from higher to lower dose groups resulted in an improved precision of the calculated BMD value as long as dose placements were high enough to detect a significant trend in the dose-effect data with sufficient power. The improved BMD precision and the associated reduction of the number of animals exposed to the highest dose, where chemically induced distress is most likely to occur, are favorable for the reduction and refinement principles. The results thereby strengthen BMD-aligned design of experiments as a means for more accurate hazard characterization along with animal welfare improvements. © 2017 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
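
    The design question can be explored with a toy Monte Carlo along the same lines (assumed linear dose-effect curve and a 1-SD benchmark response, not the authors' simulation models): compare the spread of BMD estimates when 40 animals are split equally versus shifted toward the lower dose groups.

      import numpy as np

      rng = np.random.default_rng(5)
      doses = np.array([0.0, 1.0, 3.0, 10.0])
      beta, sd = 0.5, 1.0                       # true BMD for a 1-SD change: sd/beta = 2.0

      def sim_bmd(group_sizes, reps=2000):
          est = []
          for _ in range(reps):
              d = np.repeat(doses, group_sizes)
              y = beta * d + rng.normal(0, sd, d.size)
              est.append(sd / np.polyfit(d, y, 1)[0])   # BMD estimate from fitted slope
          return np.array(est)

      for sizes in ([10, 10, 10, 10], [16, 12, 8, 4]):
          e = sim_bmd(sizes)
          print(sizes, "5th-95th percentile of BMD:", np.percentile(e, [5, 95]).round(2))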

  12. A SPICE synthetic dataset to benchmark global tomographic methods

    Science.gov (United States)

    Qin, Y.; Capdeville, Y.; Maupin, V.; Montagner, J.

    2005-12-01

    The different existing global tomographic methods result in different models of the Earth. Within SPICE (Seismic wave Propagation and Imaging in Complex media: a European network), we have decided to perform a benchmark experiment of global tomographic techniques. A global model has been constructed. It includes 3D heterogeneities in velocity, anisotropy and attenuation, as well as topography of discontinuities. Simplified versions of the model will also be used. Synthetic seismograms will be generated at low frequency by the Spectral Element Method, for a realistic distribution of sources and stations. The synthetic seismograms will be made available to the scientific community at the SPICE website www.spice-rtn.org. Any group wishing to test his tomographic algorithm is encouraged to download the synthetic data.

  13. Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis

    Directory of Open Access Journals (Sweden)

    Julius Hannink

    2017-08-01

    Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
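
    The trajectory-reconstruction half of such a pipeline reduces to double integration with drift control. A minimal sketch over one synthetic stride, assuming orientation estimation has already rotated the accelerations into the world frame and removed gravity, with a linear zero-velocity-update correction:

      import numpy as np

      fs = 200.0                                   # sampling rate in Hz (assumed)
      t = np.arange(0, 0.8, 1 / fs)                # one synthetic stride
      acc = np.zeros((t.size, 3))                  # gravity-free world-frame acceleration
      acc[:, 0] = 8 * np.sin(2 * np.pi * t / 0.8)  # toy forward acceleration profile

      vel = np.cumsum(acc, axis=0) / fs            # first integration
      vel -= np.outer(t / t[-1], vel[-1])          # ZUPT: remove residual end velocity linearly
      pos = np.cumsum(vel, axis=0) / fs            # second integration
      print("stride length ~", np.linalg.norm(pos[-1, :2]), "m")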

  14. Benchmarking the minimum Electron Beam (eBeam) dose required for the sterilization of space foods

    Science.gov (United States)

    Bhatia, Sohini S.; Wall, Kayley R.; Kerth, Chris R.; Pillai, Suresh D.

    2018-02-01

    As manned space missions extend in length, the safety, nutrition, acceptability, and shelf life of space foods are of paramount importance to NASA. Since food and mealtimes play a key role in reducing the stress and boredom of prolonged missions, the quality of food in terms of appearance, flavor, texture, and aroma can have significant psychological ramifications for astronaut performance. The FDA, which oversees space foods, currently requires a minimum dose of 44 kGy for irradiated space foods. The underlying hypothesis was that commercial sterility of space foods could be achieved at a significantly lower dose, and this lowered dose would positively affect the shelf life of the product. Electron beam processed beef fajitas were used as an example NASA space food to benchmark the minimum eBeam dose required for sterility. A 15 kGy dose was able to achieve an approximately 10 log reduction in Shiga-toxin-producing Escherichia coli bacteria, and a 5 log reduction in Clostridium sporogenes spores. Furthermore, accelerated shelf life testing (ASLT) to determine sensory and quality characteristics under various conditions was conducted. Using multidimensional gas-chromatography-olfactometry-mass spectrometry (MDGC-O-MS), numerous volatiles were shown to be dependent on the dose applied to the product. Furthermore, concentrations of off-flavor aroma compounds such as dimethyl sulfide were decreased at the reduced 15 kGy dose. The results suggest that the combination of conventional cooking with eBeam processing (15 kGy) can achieve the safety and shelf-life objectives needed for long-duration space foods.
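
    The dose-reduction arithmetic behind such a benchmark is simple: the expected log10 reduction is the delivered dose divided by the organism's D10 value (the dose for one decade of kill). The D10 values below are assumptions chosen to be consistent with the reductions reported above, not measured values from the study.

      # log reduction = dose / D10
      d10_stec, d10_spores = 1.5, 3.0   # kGy, assumed D10 values
      dose = 15.0                       # kGy, the benchmarked eBeam dose
      print("STEC log reduction:         ", dose / d10_stec)    # ~10 logs
      print("C. sporogenes log reduction:", dose / d10_spores)  # ~5 logs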

  15. Benchmarking burnup reconstruction methods for dynamically operated research reactors

    Energy Technology Data Exchange (ETDEWEB)

    Sternat, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Charlton, William S. [Univ. of Nebraska, Lincoln, NE (United States). National Strategic Research Institute; Nichols, Theodore F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-03-01

    The burnup of an HEU fueled dynamically operated research reactor, the Oak Ridge Research Reactor, was experimentally reconstructed using two different analytic methodologies and a suite of signature isotopes to evaluate techniques for estimating burnup for research reactor fuel. The methods studied include using individual signature isotopes and the complete mass spectrometry spectrum to recover the sample's burnup. The individual, or sets of, isotopes include 148Nd, 137Cs+137Ba, 139La, and 145Nd+146Nd. The storage documentation from the analyzed fuel material provided two different measures of burnup: burnup percentage and the total power generated from the assembly in MWd. When normalized to conventional units, these two references differed by 7.8% (395.42 GWd/MTHM and 426.27 GWd/MTHM) in the resulting burnup for the spent fuel element used in the benchmark. Among all methods being evaluated, the results were within 11.3% of either reference burnup. The results were mixed in closeness to both reference burnups; however, consistent results were achieved from all three experimental samples.
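
    For reference, the classic 148Nd burnup indicator works as follows (in the spirit of ASTM E321, without its capture and interference corrections; the atom counts and the effective fission yield are hypothetical, chosen only to land near the reference burnups quoted above):

      y_nd148 = 0.0167     # assumed effective 148Nd cumulative fission yield
      n_nd148 = 1.2e17     # measured 148Nd atoms in the sample (hypothetical)
      n_hm    = 1.0e19     # residual heavy-metal atoms (hypothetical)

      fissions = n_nd148 / y_nd148                  # each fission yields y_nd148 atoms of 148Nd
      fima = 100 * fissions / (n_hm + fissions)     # % fissions per initial metal atom
      print(f"burnup ~ {fima:.1f}% FIMA ~ {fima * 9.6:.0f} GWd/MTHM")  # ~9.6 GWd/MTHM per %FIMA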

  16. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  17. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  18. Relative performance of two-stage continual reassessment method in contrast to an optimal benchmark

    Science.gov (United States)

    Wages, Nolan A.; Conaway, Mark R.; O’Quigley, John

    2015-01-01

    Background: The two-stage, likelihood-based continual reassessment method (CRM-L; O’Quigley and Shen [1]) entails the specification of a set of design parameters prior to the beginning of its use in a study. The impression of clinicians is that the success of model-based designs, such as CRM-L, depends upon some of the choices made in regard to these specifications, such as the choice of parametric dose-toxicity model and the initial guess of toxicity probabilities. Purpose: In studying the efficiency and comparative performance of competing dose-finding designs for finite (typically small) samples, the non-parametric optimal benchmark (O’Quigley, Paoletti and Maccario [2]) is a useful tool. When comparing a dose-finding design to the optimal design, we are able to assess how much room there is for potential improvement. Methods: The optimal method, based only on an assumption of monotonicity of the dose-toxicity function, is a valuable theoretical construct serving as a benchmark in theoretical studies, similar to that of a Cramer-Rao bound. We consider the performance of CRM-L under various design specifications and how it compares to the optimal design across a range of practical situations. Results: Using simple recommendations for design specifications, the CRM-L will produce performances, in terms of identifying doses at and around the MTD, that are close to the optimal method on average over a broad group of dose-toxicity scenarios. Limitations: Although the simulation settings vary the number of doses considered, the target toxicity rate and the sample size, the results here are presented for a small, though widely-used, set of two-stage CRM designs. Conclusions: Based on simulations here, and many others not shown, CRM-L is almost as accurate, in many scenarios, as the (unknown and unavailable) optimal design. On average, there appears to be very little margin for improvement. Even if a finely tuned skeleton [3] offers some improvement over a simple skeleton...

  19. Netherlands contribution to the EC project: Benchmark exercise on dose estimation in a regulatory context

    International Nuclear Information System (INIS)

    Stolk, D.J.

    1987-04-01

    On request of the Netherlands government, FEL-TNO is developing a decision support system with the acronym RAMBOS for the assessment of the off-site consequences of an accident with hazardous materials. This is a user-friendly interactive computer program, which uses very sophisticated graphical means. RAMBOS supports the emergency planning organization in two ways. Firstly, the risk to the residents in the surroundings of the accident is quantified in terms of severity and magnitude (number of casualties, etc.). Secondly, the consequences of countermeasures, such as sheltering and evacuation, are predicted. By evaluating several countermeasures the user can determine an optimum policy to reduce the impact of the accident. Within the framework of the EC project 'Benchmark exercise on dose estimation in a regulatory context', calculations were carried out with the RAMBOS system at the request of the Ministry of Housing, Physical Planning and Environment. This report contains the results of these calculations. 3 refs.; 2 figs.; 10 tabs

  20. Study on the shipboard radar reconnaissance equipment azimuth benchmark method

    Science.gov (United States)

    Liu, Zhenxing; Jiang, Ning; Ma, Qian; Liu, Songtao; Wang, Longtao

    2015-10-01

    The future naval battle will take place in a complex electromagnetic environment, so seizing electromagnetic superiority has become a major task of the navy. Radar reconnaissance equipment is an important part of the system used to obtain and master battlefield information on electromagnetic radiation sources. Azimuth measurement is one of the main functions of radar reconnaissance equipment. Whether the direction-finding accuracy meets requirements determines whether a vessel can successfully carry out active jamming, passive jamming, guided-missile attack and other combat missions, and it therefore has a direct bearing on the vessel's combat capability. How to test the performance of radar reconnaissance equipment while interfering with operational tasks as little as possible is a problem. This paper, based on a radar signal simulator and GPS positioning equipment, researches and experiments on a new method that provides the azimuth benchmark required by direction-finding precision tests, anytime and anywhere, for ships at the jetty to test the direction-finding performance of radar reconnaissance equipment. It provides a powerful means for the daily maintenance and repair of naval radar reconnaissance equipment [1].

  1. Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William

    2012-01-01

    Highlights: ► Fuzzy linear regression method is used for developing benchmarking systems. ► The systems can be used to benchmark energy efficiency of commercial buildings. ► The resulting benchmarking model can be used by public users. ► The resulting benchmarking model can capture the fuzzy nature of input–output data. -- Abstract: Benchmarking systems from a sample of reference buildings need to be developed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a distribution of possibility. Thus, the previous work may not be adequate to handle such fuzzy input–output data. Consequently, a number of fuzzy structures cannot be fully captured by statistical regression analysis. This present paper proposes the use of fuzzy linear regression analysis to develop a benchmarking process, the resulting model of which can be used by public users. An illustrative example is given as well.
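
    One common formulation of fuzzy linear regression (Tanaka's possibilistic model, which may differ in detail from the paper's variant) reduces to a linear program: choose triangular fuzzy coefficients (center a_j, spread c_j) so that every observation falls inside the model's h-level band while the total spread is minimized. A sketch with hypothetical building data:

      import numpy as np
      from scipy.optimize import linprog

      # columns: [1, floor area in 1000 m2, occupancy hours / 100] -- hypothetical
      X = np.array([[1, 2.0, 5.0], [1, 3.5, 6.0], [1, 1.2, 4.0], [1, 4.1, 7.5]])
      y = np.array([310.0, 450.0, 220.0, 560.0])   # annual energy use, hypothetical
      h = 0.5                                      # credibility level
      n, k = X.shape
      Xa = np.abs(X)

      # decision vector z = [a (k centers, free), c (k spreads, nonnegative)]
      cost = np.concatenate([np.zeros(k), Xa.sum(axis=0)])   # minimize total spread
      A_ub = np.vstack([np.hstack([-X, -(1 - h) * Xa]),      # a.x_i + (1-h) c.|x_i| >= y_i
                        np.hstack([ X, -(1 - h) * Xa])])     # a.x_i - (1-h) c.|x_i| <= y_i
      b_ub = np.concatenate([-y, y])
      bounds = [(None, None)] * k + [(0, None)] * k
      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
      a, c = res.x[:k], res.x[k:]
      print("centers:", a.round(1), "spreads:", c.round(1))

    The spreads then express the fuzziness of the input-output data directly, which is the feature the abstract argues ordinary statistical regression cannot capture.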

  2. Three anisotropic benchmark problems for adaptive finite element methods

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Čertík, O.; Korous, L.

    2013-01-01

    Roč. 219, č. 13 (2013), s. 7286-7295 ISSN 0096-3003 R&D Projects: GA AV ČR IAA100760702 Institutional support: RVO:61388998 Keywords : benchmark problem * anisotropic solution * boundary layer Subject RIV: BA - General Mathematics Impact factor: 1.600, year: 2013

  3. Review of California and National Methods for Energy PerformanceBenchmarking of Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Matson, Nance E.; Piette, Mary Ann

    2005-09-05

    This benchmarking review has been developed to support benchmarking planning and tool development under discussion by the California Energy Commission (CEC), Lawrence Berkeley National Laboratory (LBNL) and others in response to the Governor's Executive Order S-20-04 (2004). The Executive Order sets a goal of benchmarking and improving the energy efficiency of California's existing commercial building stock. The Executive Order requires the CEC to propose "a simple building efficiency benchmarking system for all commercial buildings in the state". This report summarizes and compares two currently available commercial building energy-benchmarking tools. One tool is the U.S. Environmental Protection Agency's Energy Star National Energy Performance Rating System, which is a national regression-based benchmarking model (referred to in this report as Energy Star). The second is Lawrence Berkeley National Laboratory's Cal-Arch, which is a California-based distributional model (referred to as Cal-Arch). Prior to the time Cal-Arch was developed in 2002, there were several other benchmarking tools available to California consumers, but none that were based solely on California data. The Energy Star and Cal-Arch benchmarking tools both provide California with unique and useful methods to benchmark the energy performance of California's buildings. Rather than determine which model is "better", the purpose of this report is to understand and compare the underlying data, information systems, assumptions, and outcomes of each model.

  4. Method of preparing radionuclide doses

    International Nuclear Information System (INIS)

    Kuperus, J.H.

    1987-01-01

    A method is described of preparing aliquot doses of a tracer material useful in diagnostic nuclear medicine, comprising: storing discrete quantities of a lyophilized radionuclide carrier in separate tubular containers from which air and moisture are excluded; selecting from the tubular containers a container in which is stored a carrier appropriate for the nuclear diagnostic test to be performed; interposing the selected container between the needle and the barrel of a hypodermic syringe; and drawing a predetermined amount of a liquid containing a radionuclide tracer in known concentration into the hypodermic syringe barrel through the hypodermic needle and through the selected container, to dissolve the discrete quantity of lyophilized carrier therein and combine the carrier with the radionuclide tracer to form an aliquot dose of nuclear diagnostic tracer material, as needed

  5. Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate the ligand enrichment of VS approaches in prospective (i.e. real-world) efforts. However, intrinsic differences between benchmarking sets and real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations for three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs. Copyright © 2014 Elsevier Inc. All rights reserved.
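
    Enrichment on such sets is commonly reported as the area under the ROC curve. As a minimal sketch (invented scores and labels, not the paper's HDAC data), the AUC can be computed as:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical VS scores: label 1 = active ligand, 0 = decoy.
labels = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.65, 0.6, 0.55, 0.4, 0.3, 0.2, 0.1]

# AUC of 0.5 means random ranking; 1.0 means perfect enrichment of actives.
print("AUC:", roc_auc_score(labels, scores))
```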

  6. The Global Benchmarking as a Method of Countering the Intellectual Migration in Ukraine

    Directory of Open Access Journals (Sweden)

    Striy Lyubov A.

    2017-05-01

    Full Text Available The publication is aimed at studying global benchmarking as a method of countering intellectual migration in Ukraine. The article explores the process of intellectual migration in Ukraine; analyzes the current status of the country in the light of the crisis and the problems that have arisen; provides statistical data on the migration process and determines a method of countering it; considers the types of benchmarking; analyzes the benchmarking method as a way of achieving the objective; and identifies the benefits to be derived from this method, as well as «bottlenecks» in the State regulation of migratory flows, which should not only be called attention to but also corrected.

  7. Radiation transport benchmarks for simple geometries with void regions using the spherical harmonics method

    International Nuclear Information System (INIS)

    Kobayashi, K.

    2009-01-01

    In 2001, an international cooperation on 3D radiation transport benchmarks for simple geometries with void regions was carried out under the leadership of E. Sartori of OECD/NEA. There were contributions from eight institutions, of which six used the discrete ordinates method and only two the spherical harmonics method. The 3D spherical harmonics program FFT3, based on the finite Fourier transformation method, has been improved for this presentation, and benchmark solutions for 2D and 3D simple geometries with void regions computed by FFT2 and FFT3 are given, showing fairly good accuracy. (authors)

  8. Benchmark calculations for evaluation methods of gas volumetric leakage rate

    International Nuclear Information System (INIS)

    Asano, R.; Aritomi, M.; Matsuzaki, M.

    1998-01-01

    A containment function of radioactive material transport casks is essential for safe transportation, to prevent the radioactive materials from being released into the environment. Regulations such as the IAEA standard set limits on the radioactivity that may be released. Since it is not practical for leakage tests to measure the radioactivity release from a package directly, gas volumetric leakage rates are proposed in the ANSI N14.5 and ISO standards. In our previous works, gas volumetric leakage rates for several kinds of gas from various leaks were measured, and two evaluation methods, 'a simple evaluation method' and 'a strict evaluation method', were proposed based on the results. The simple evaluation method considers the friction loss of laminar flow with the expansion effect. The strict evaluation method considers an exit loss in addition to the friction loss. In this study, four worked examples were completed for an assumed large spent fuel transport cask (Type B Package) with wet or dry capacity and at three transport conditions: normal transport with intact fuels or failed fuels, and an accident in transport. The standard leakage rates and criteria for two kinds of leak test were calculated for each example by each evaluation method. The following observations are made based upon the calculations and evaluations: the choked flow model of the ANSI method greatly overestimates the criteria for tests; the laminar flow models of both the ANSI and ISO methods slightly overestimate the criteria for tests; the above two results are within the design margin for ordinary transport conditions, and all methods are useful for the evaluation; for severe conditions such as failed fuel transportation, care should be taken when applying the choked flow model of the ANSI method. (authors)
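
    Both evaluation methods build on laminar capillary flow. A generic isothermal Hagen-Poiseuille sketch is shown below; the constants and geometry are illustrative, and the full ANSI/ISO correlations add the expansion and exit-loss terms the abstract describes:

```python
import math

def laminar_leak_rate(d, length, mu, p_up, p_down):
    """Volumetric flow (m^3/s, referenced to the mean pressure) through a
    capillary leak: isothermal laminar flow of a compressible gas,
    Q = pi d^4 (p_up^2 - p_down^2) / (256 mu L p_mean)."""
    p_mean = 0.5 * (p_up + p_down)
    return math.pi * d**4 * (p_up**2 - p_down**2) / (256.0 * mu * length * p_mean)

# Example: 10 micron diameter, 5 mm long leak, air at ~1.8e-5 Pa.s,
# 2 atm inside, 1 atm outside (all values illustrative).
print(laminar_leak_rate(10e-6, 5e-3, 1.8e-5, 2.0e5, 1.0e5))
```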

  9. Solution of the neutronics code dynamic benchmark by finite element method

    Science.gov (United States)

    Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.

    2016-10-01

    The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes the asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.
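
    The "matrix spectral problem" mentioned here is a generalized eigenvalue problem K φ = λ M φ. As a minimal, self-contained analogue (a 1D Laplacian with linear finite elements, solved with scipy rather than the SLEPc toolkit the authors use):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# 1D eigenproblem -u'' = lambda u on (0,1), u(0)=u(1)=0, discretized
# with linear finite elements on a uniform mesh of n interior nodes.
n = 200
h = 1.0 / (n + 1)
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h        # stiffness
M = diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n)) * (h / 6.0)  # mass

# Smallest eigenpairs of K phi = lambda M phi; SLEPc solves the same
# kind of generalized problem at scale in the paper's software stack.
vals, vecs = eigsh(K.tocsc(), k=4, M=M.tocsc(), sigma=0.0)
print(vals)  # should approximate (k*pi)^2: 9.87, 39.5, 88.8, 157.9
```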

  10. A SPICE Blind Test to Benchmark Global Tomographic Methods

    Science.gov (United States)

    Qin, Y.; Capdeville, Y.; Maupin, V.; Montagner, J.

    2006-12-01

    The different existing global tomographic methods result in different models of the Earth. In order to test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, we are undertaking a blind test of global inversion algorithms using complete 3D synthetic seismograms within SPICE (Seismic wave Propagation and Imaging in Complex media: a European network). First, a complex global anisotropic anelastic model has been constructed by summing the 1D reference model, deterministic and random anomalies and anisotropic crystal. This model includes 3D heterogeneities in velocity, anisotropy and attenuation at different scales in the whole mantle, as well as topography and crustal structure. In addition, rotation and ellipticity are also included. Synthetic seismograms were generated using the Coupled Spectral Element Method with a minimum period of 32 s, for a realistic distribution of 29 events and 256 stations. The synthetic seismograms have been made available to the scientific community worldwide at the IPGP website http://www.ipgp.jussieu.fr/~qyl/. Any group willing to test its tomographic technique is encouraged to download the synthetic dataset.

  11. Benchmarking lattice physics data and methods for boiling water reactor analysis

    International Nuclear Information System (INIS)

    Cacciapouti, R.J.; Edenius, M.; Harris, D.R.; Hebert, M.J.; Kapitz, D.M.; Pilat, E.E.; VerPlanck, D.M.

    1983-01-01

    The objective of the work reported was to verify the adequacy of lattice physics modeling for the analysis of the Vermont Yankee BWR using a multigroup, two-dimensional transport theory code. The BWR lattice physics methods have been benchmarked against reactor physics experiments, higher order calculations, and actual operating data

  12. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators...

  13. Dose estimation by biological methods

    International Nuclear Information System (INIS)

    Guerrero C, C.; David C, L.; Serment G, J.; Brena V, M.

    1997-01-01

    The human being is exposed to strong artificial radiation sources in mainly two ways: the first concerns occupationally exposed personnel (POE) and the second, persons that require radiological treatment. A third, less common way is by accidents. In all these conditions it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric analysis. The present work is part of research to validate the fluorescence in situ hybridization (FISH) technique, which allows the analysis of chromosome aberrations. (Author)

  14. Simulation Methods for High-Cycle Fatigue-Driven Delamination using Cohesive Zone Models - Fundamental Behavior and Benchmark Studies

    DEFF Research Database (Denmark)

    Bak, Brian Lau Verndal; Lindgaard, Esben; Turon, A.

    2015-01-01

    A novel computational method for simulating fatigue-driven delamination cracks in composite laminated structures under cyclic loading based on a cohesive zone model [2] and new benchmark studies with four other comparable methods [3-6] are presented. The benchmark studies describe and compare the...

  15. Benchmark measurements and simulations of dose perturbations due to metallic spheres in proton beams

    International Nuclear Information System (INIS)

    Newhauser, Wayne D.; Rechner, Laura; Mirkovic, Dragan; Yepes, Pablo; Koch, Nicholas C.; Titt, Uwe; Fontenot, Jonas D.; Zhang, Rui

    2013-01-01

    Monte Carlo simulations are increasingly used for dose calculations in proton therapy due to their inherent accuracy. However, dosimetric deviations have been found using Monte Carlo codes when high density materials are present in the proton beamline. The purpose of this work was to quantify the magnitude of dose perturbation caused by metal objects. We did this by comparing measurements and Monte Carlo predictions of dose perturbations caused by the presence of small metal spheres in several clinical proton therapy beams as functions of proton beam range and drift space. The Monte Carlo codes MCNPX, GEANT4 and Fast Dose Calculator (FDC) were used. Generally good agreement was found between measurements and Monte Carlo predictions, with the average difference within 5% and the maximum difference within 17%. The modification of the multiple Coulomb scattering model in the MCNPX code yielded improved accuracy and provided the best overall agreement with measurements. Our results confirmed that Monte Carlo codes are well suited for predicting multiple Coulomb scattering in proton therapy beams when short drift spaces are involved. - Highlights: • We compared measurements and Monte Carlo predictions of dose perturbations caused by metal objects in proton beams. • Different Monte Carlo codes were used, including MCNPX, GEANT4 and Fast Dose Calculator. • Good agreement was found between measurements and Monte Carlo simulations. • The modification of the multiple Coulomb scattering model in MCNPX yielded improved accuracy. • Our results confirmed that Monte Carlo codes are well suited for predicting multiple Coulomb scattering in proton therapy

  16. Estimate of safe human exposure levels for lunar dust based on comparative benchmark dose modeling.

    Science.gov (United States)

    James, John T; Lam, Chiu-Wing; Santana, Patricia A; Scully, Robert R

    2013-04-01

    Brief exposures of Apollo astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. The United States and other spacefaring nations intend to return to the moon for extensive exploration within a few decades. In the meantime, habitats for that exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. Herein we estimate safe exposure limits for lunar dust collected during the Apollo 14 mission. We instilled three respirable-sized (∼2 μm mass median diameter) lunar dusts (two ground and one unground) and two standard dusts of widely different toxicities (quartz and TiO₂) into the respiratory system of rats. Rats in groups of six were given 0, 1, 2.5 or 7.5 mg of the test dust in a saline-Survanta® vehicle, and biochemical and cellular biomarkers of toxicity in lung lavage fluid were assayed 1 week and 1 month after instillation. By comparing the dose-response curves of sensitive biomarkers, we estimated safe exposure levels for astronauts and concluded that unground lunar dust and dust ground by two different methods were not toxicologically distinguishable. The safe exposure estimates were 1.3 ± 0.4 mg/m³ (jet-milled dust), 1.0 ± 0.5 mg/m³ (ball-milled dust) and 0.9 ± 0.3 mg/m³ (unground, natural dust). We estimate that 0.5-1 mg/m³ of lunar dust is safe for periodic human exposures during long stays in habitats on the lunar surface.
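
    Comparative BMD estimation of this kind amounts to fitting a dose-response model and inverting it at a chosen benchmark response. A minimal sketch with invented dose-response values (not the study's lavage data), assuming an exponential model, a 10% BMR, and a crude residual bootstrap for the lower bound:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data (mg instilled vs. biomarker level);
# values are illustrative only, not the study's measurements.
dose = np.array([0.0, 1.0, 2.5, 7.5])
resp = np.array([1.0, 1.35, 1.9, 3.4])

def exp_model(d, a, b):
    """Exponential dose-response: f(d) = a * exp(b * d)."""
    return a * np.exp(b * d)

popt, _ = curve_fit(exp_model, dose, resp, p0=(1.0, 0.1))

def bmd(a, b, bmr=0.10):
    """Dose where the response exceeds control (d=0) by a relative BMR."""
    return np.log(1.0 + bmr) / b

print("BMD10:", bmd(*popt))

# Crude bootstrap BMDL: refit on resampled residuals, take the 5th percentile.
rng = np.random.default_rng(0)
fitted = exp_model(dose, *popt)
resid = resp - fitted
bmds = []
for _ in range(2000):
    boot = fitted + rng.choice(resid, size=resid.size, replace=True)
    try:
        p, _ = curve_fit(exp_model, dose, boot, p0=popt, maxfev=2000)
        bmds.append(bmd(*p))
    except RuntimeError:
        continue
print("BMDL (5th percentile):", np.percentile(bmds, 5))
```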

  17. What is a food and what is a medicinal product in the European Union? Use of the benchmark dose (BMD) methodology to define a threshold for "pharmacological action".

    Science.gov (United States)

    Lachenmeier, Dirk W; Steffen, Christian; el-Atma, Oliver; Maixner, Sibylle; Löbell-Behrends, Sigrid; Kohl-Himmelseher, Matthias

    2012-11-01

    The decision criterion for the demarcation between foods and medicinal products in the EU is significant "pharmacological action". Based on six examples of substances with ambivalent status, the benchmark dose (BMD) method is evaluated to provide a threshold for pharmacological action. Using significant dose-response models from literature clinical trial data or epidemiology, the BMD values were 63 mg/day for caffeine, 5 g/day for alcohol, 6 mg/day for lovastatin, 769 mg/day for glucosamine sulfate, 151 mg/day for Ginkgo biloba extract, and 0.4 mg/day for melatonin. The examples for caffeine and alcohol validate the approach, because intake above the BMD clearly exhibits pharmacological action. Nevertheless, due to uncertainties in dose-response modelling, as well as the need for additional uncertainty factors to consider differences in sensitivity within the human population, a "borderline range" on the dose-response curve remains. "Pharmacological action" has proven to be not very well suited as a binary decision criterion between foods and medicinal products. The European legislator should rethink the definition of medicinal products, as the current situation based on complicated case-by-case decisions on pharmacological action leads to an unregulated market flooded with potentially illegal food supplements. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  19. Methods of bone marrow dose calculation

    International Nuclear Information System (INIS)

    Taboaco, R.C.

    1982-02-01

    Several methods of bone marrow dose calculation for photon irradiation were analysed. After a critical analysis, the author proposes the adoption, by the Instituto de Radioprotecao e Dosimetria/CNEN, of Rosenstein's method for dose calculations in radiodiagnostic examinations and Kramer's method in the case of occupational irradiation. It was verified by Eckerman and Simpson that, for monoenergetic gamma emitters uniformly distributed within the bone mineral of the skeleton, the dose at the bone surface can be several times higher than the dose in the skeleton. Accordingly, the calculation of tissue-air ratios for bone surfaces in some irradiation geometries and photon energies is also proposed for inclusion in Rosenstein's method for organ dose calculation in radiodiagnostic examinations. (Author) [pt

  20. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1988-01-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed.

  1. Gamma dose estimation with the thermoluminescence method

    International Nuclear Information System (INIS)

    Kumamoto, Yoshikazu

    1994-01-01

    Absorbed dose in radiation accidents can be estimated with the aid of materials which record dose and which were exposed during the accident. Quartz in the bricks and tiles used to construct buildings has thermoluminescent properties: such materials, once exposed to radiation, emit light when heated. Quartz and ruby have been used for the estimation of dose. The requirements for such dosemeters include: (1) a kiln temperature high enough that all thermoluminescent energy accrued from natural radiation is erased; (2) negligible fading of the thermoluminescent energy after exposure to radiation; (3) determination of the dose from natural radiation accumulated after the making of the materials; (4) knowledge of the geometry of the place at which the materials are collected. Bricks or tiles are crushed in a mortar, sieved into size fractions, washed with HF, HCl, alcohol, acetone and water, and given a known calibration dose. The pre-dose method and the high-temperature method are used. In the former, glow curves with and without the calibration dose are recorded. In the latter, glow peaks at 110 °C with and without the calibration dose are recorded after heating the quartz up to 500 °C. In this report, the method of sample preparation, the measurement procedures and the results of dose estimation for the atomic bombing, iridium-192 and the Chernobyl accident are described. (author)

  2. Consortial Benchmarking: a method of academic-practitioner collaborative research and its application in a B2B environment

    NARCIS (Netherlands)

    Schiele, Holger; Krummaker, Stefan

    2010-01-01

    Purpose of the paper and literature addressed: development of a new method for academic-practitioner collaboration, addressing the literature on collaborative research. Research method: model elaboration and test with an in-depth case study. Research findings: in consortial benchmarking, practitioners

  3. Methods of assessing total doses integrated across pathways

    Energy Technology Data Exchange (ETDEWEB)

    Grzechnik, M.; Camplin, W.; Clyne, F. [Centre for Environment, Fisheries and Aquaculture Science, Lowestoft (United Kingdom); Allott, R. [Environment Agency, London (United Kingdom); Webbe-Wood, D. [Food Standards Agency, London (United Kingdom)

    2006-07-01

    years. C) Construct: individuals with high rates of consumption or occupancy across all pathways are used to derive rates for each pathway; these are applied in future years. D) Top-Two: high and average consumption and occupancy rates are derived for each pathway; doses can be calculated for all combinations in which two pathways are taken at high rates and the remainder at average rates. E) Profiling: a profile is derived by calculating consumption and occupancy rates for each pathway for individuals who exhibit high rates for a single pathway; other profiles may be built by repeating this for other pathways. Total dose is the highest dose for any profile, and that profile becomes known as the critical group. Method A was used as a benchmark, with methods B-E compared according to the previously specified criteria. Overall, the profiling method of total dose calculation was adopted, due to its favourable overall comparison with the individual method and the homogeneity of the critical group selected. (authors)

  4. Mechanism-based risk assessment strategy for drug-induced cholestasis using the transcriptional benchmark dose derived by toxicogenomics.

    Science.gov (United States)

    Kawamoto, Taisuke; Ito, Yuichi; Morita, Osamu; Honda, Hiroshi

    2017-01-01

    Cholestasis is one of the major causes of drug-induced liver injury (DILI), which can result in withdrawal of approved drugs from the market. Early identification of cholestatic drugs is difficult due to the complex mechanisms involved. In order to develop a strategy for mechanism-based risk assessment of cholestatic drugs, we analyzed gene expression data obtained from the livers of rats that had been orally administered 12 known cholestatic compounds repeatedly for 28 days at three dose levels. Qualitative analyses were performed using two statistical approaches (hierarchical clustering and principal component analysis), in addition to pathway analysis. The transcriptional benchmark dose (tBMD) and tBMD 95% lower limit (tBMDL) were used for quantitative analyses, which revealed three compound sub-groups that produced different types of differential gene expression; these groups of genes were mainly involved in inflammation, cholesterol biosynthesis, and oxidative stress. Furthermore, the tBMDL values for each test compound were in good agreement with the relevant no observed adverse effect level. These results indicate that our novel strategy for drug safety evaluation using mechanism-based classification and tBMDL would facilitate the application of toxicogenomics for risk assessment of cholestatic DILI.

  5. Experimental depth dose curves of a 67.5 MeV proton beam for benchmarking and validation of Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Faddegon, Bruce A., E-mail: bfaddegon@radonc.ucsf.edu; Ramos-Méndez, José; Daftari, Inder K. [Department of Radiation Oncology, University of California San Francisco, 1600 Divisadero Street, Suite H1031, San Francisco, California 94143 (United States); Shin, Jungwook [St. Jude Children’s Research Hospital, 252 Danny Thomas Place, Memphis, Tennessee 38105 (United States); Castenada, Carlos M. [Crocker Nuclear Laboratory, University of California Davis, 1 Shields Avenue, Davis, California 95616 (United States)

    2015-07-15

    Purpose: To measure depth dose curves for a 67.5 ± 0.1 MeV proton beam for benchmarking and validation of Monte Carlo simulation. Methods: Depth dose curves were measured in 2 beam lines. Protons in the raw beam line traversed a Ta scattering foil, 0.1016 or 0.381 mm thick, a secondary emission monitor comprised of thin Al foils, and a thin Kapton exit window. The beam energy and peak width and the composition and density of material traversed by the beam were known with sufficient accuracy to permit benchmark quality measurements. Diodes for charged particle dosimetry from two different manufacturers were used to scan the depth dose curves with 0.003 mm depth reproducibility in a water tank placed 300 mm from the exit window. Depth in water was determined with an uncertainty of 0.15 mm, including the uncertainty in the water equivalent depth of the sensitive volume of the detector. Parallel-plate chambers were used to verify the accuracy of the shape of the Bragg peak and the peak-to-plateau ratio measured with the diodes. The uncertainty in the measured peak-to-plateau ratio was 4%. Depth dose curves were also measured with a diode for a Bragg curve and treatment beam spread out Bragg peak (SOBP) on the beam line used for eye treatment. The measurements were compared to Monte Carlo simulation done with GEANT4 using TOPAS. Results: The 80% dose at the distal side of the Bragg peak for the thinner foil was at 37.47 ± 0.11 mm (average of measurement with diodes from two different manufacturers), compared to the simulated value of 37.20 mm. The 80% dose for the thicker foil was at 35.08 ± 0.15 mm, compared to the simulated value of 34.90 mm. The measured peak-to-plateau ratio was within one standard deviation experimental uncertainty of the simulated result for the thinnest foil and two standard deviations for the thickest foil. It was necessary to include the collimation in the simulation, which had a more pronounced effect on the peak-to-plateau ratio for the

  6. Benchmarking residual dose rates in a NuMI-like environment

    Energy Technology Data Exchange (ETDEWEB)

    Igor L. Rakhno et al.

    2001-11-02

    Activation of various structural and shielding materials is an important issue for many applications. A model developed recently to calculate residual activity of arbitrary composite materials for arbitrary irradiation and cooling times is presented in the paper. Measurements have been performed at the Fermi National Accelerator Laboratory using a 120 GeV proton beam to study induced radioactivation of materials used for beam line components and shielding. The calculated residual dose rates for the samples studied behind the target and outside of the thick shielding are presented and compared with the measured ones. Effects of energy spectra, sample material and dimensions, their distance from the shielding, and gaps between the shielding modules and walls as well as between the modules themselves were studied in detail.

  7. Determining the sensitivity of Data Envelopment Analysis method used in airport benchmarking

    Directory of Open Access Journals (Sweden)

    Mircea BOSCOIANU

    2013-03-01

    Full Text Available In the last decade there have been some important changes in the airport industry, caused by the liberalization of the air transportation market. Until recently, airports were considered infrastructure elements and were evaluated only by traffic values or their maximum capacity. A gradual orientation towards commercial operation led to the need for other, more efficiency-oriented ways of evaluation. The existing methods for assessing the efficiency of other production units were not suitable for airports due to the specific features and high complexity of airport operations. In recent years several papers have proposed Data Envelopment Analysis as a method for assessing operational efficiency in order to conduct benchmarking. This method offers the possibility of dealing with a large number of variables of different types, which represents its main advantage and also recommends it as a good benchmarking tool for airport management. The goal of this paper is to determine the sensitivity of this method with respect to its inputs and outputs. A Data Envelopment Analysis is conducted for 128 airports worldwide, in both input- and output-oriented measures, and the results are analysed against variations in some inputs and outputs. Possible weaknesses of using DEA for assessing airport performance are revealed and analysed against the method's advantages.
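
    The input-oriented CCR formulation of DEA, a common starting point for such airport studies, is a small linear program per unit. A minimal sketch with invented airport inputs and outputs (not the paper's 128-airport dataset):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0 (envelopment form):
    min theta  s.t.  X lam <= theta * x0,  Y lam >= y0,  lam >= 0."""
    n = X.shape[1]                        # number of units
    c = np.r_[1.0, np.zeros(n)]           # variables: [theta, lam_1..lam_n]
    A_ub = np.vstack([
        np.hstack([-X[:, [j0]], X]),                   # X lam - theta x0 <= 0
        np.hstack([np.zeros((Y.shape[0], 1)), -Y]),    # -Y lam <= -y0
    ])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, j0]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical airports: rows of X = inputs (runways, terminal area),
# rows of Y = outputs (passengers, cargo); columns = units. Values invented.
X = np.array([[2.0, 3.0, 4.0, 2.0], [50.0, 90.0, 120.0, 60.0]])
Y = np.array([[5.0, 8.0, 9.0, 6.0], [1.0, 2.5, 2.0, 1.5]])
for j in range(X.shape[1]):
    print(f"airport {j}: efficiency = {dea_ccr_input(X, Y, j):.3f}")
```

    Sensitivity of the kind the paper studies can then be probed by perturbing individual rows of X or Y and re-running the loop.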

  8. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4

    International Nuclear Information System (INIS)

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-01-01

    The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a cylinder 30 cm in diameter and 20 cm in length. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst ±5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately ±2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV

  9. Benchmarking of epithermal methods in the lattice-physics code EPRI-CELL

    International Nuclear Information System (INIS)

    Williams, M.L.; Wright, R.Q.; Barhen, J.; Rothenstein, W.; Toney, B.

    1982-01-01

    The epithermal cross section shielding methods used in the lattice physics code EPRI-CELL (E-C) have been extensively studied to determine its major approximations and to examine the sensitivity of computed results to these approximations. The study has resulted in several improvements in the original methodology. These include: treatment of the external moderator source with intermediate resonance (IR) theory, development of a new Dancoff factor expression to account for clad interactions, development of a new method for treating resonance interference, and application of a generalized least squares method to compute best-estimate values for the Bell factor and group-dependent IR parameters. The modified E-C code with its new ENDF/B-V cross section library is tested for several numerical benchmark problems. Integral parameters computed by E-C are compared with those obtained with point cross-section Monte Carlo calculations, and E-C fine-group cross sections are benchmarked against point cross-section discrete ordinates calculations. It is found that the code modifications improve agreement between E-C and the more sophisticated methods. E-C shows excellent agreement on the integral parameters and usually agrees within a few percent on fine-group, shielded cross sections

  10. Piping benchmark problems. Volume 1. Dynamic analysis uniform support motion response spectrum method

    Energy Technology Data Exchange (ETDEWEB)

    Bezler, P.; Hartzman, M.; Reich, M.

    1980-08-01

    A set of benchmark problems and solutions have been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components and internal force and moment components. Solutions to associated anchor point motion static problems are not included.

  11. Piping benchmark problems: dynamic analysis independent support motion response spectrum method

    International Nuclear Information System (INIS)

    Bezler, P.; Subudhi, M.; Hartzman, M.

    1985-08-01

    Four benchmark problems and solutions were developed for verifying the adequacy of computer programs used for the dynamic analysis and design of elastic piping systems by the independent support motion, response spectrum method. The dynamic loading is represented by distinct sets of support excitation spectra assumed to be induced by non-uniform excitation in three spatial directions. Complete input descriptions for each problem are provided and the solutions include predicted natural frequencies, participation factors, nodal displacements and element forces for independent support excitation and also for uniform envelope spectrum excitation. Solutions to the associated anchor point pseudo-static displacements are not included

  12. In silico toxicology: comprehensive benchmarking of multi-label classification methods applied to chemical toxicity data

    KAUST Repository

    Raies, Arwa B.

    2017-12-05

    One goal of toxicity testing, among others, is identifying harmful effects of chemicals. Given the high demand for toxicity tests, it is necessary to conduct these tests for multiple toxicity endpoints for the same compound. Current computational toxicology methods aim at developing models mainly to predict a single toxicity endpoint. When chemicals cause several toxicity effects, one model is generated to predict toxicity for each endpoint, which can be labor and computationally intensive when the number of toxicity endpoints is large. Additionally, this approach does not take into consideration possible correlation between the endpoints. Therefore, there has been a recent shift in computational toxicity studies toward generating predictive models able to predict several toxicity endpoints by utilizing correlations between these endpoints. Applying such correlations jointly with compounds' features may improve model's performance and reduce the number of required models. This can be achieved through multi-label classification methods. These methods have not undergone comprehensive benchmarking in the domain of predictive toxicology. Therefore, we performed extensive benchmarking and analysis of over 19,000 multi-label classification models generated using combinations of the state-of-the-art methods. The methods have been evaluated from different perspectives using various metrics to assess their effectiveness. We were able to illustrate variability in the performance of the methods under several conditions. This review will help researchers to select the most suitable method for the problem at hand and provide a baseline for evaluating new approaches. Based on this analysis, we provided recommendations for potential future directions in this area.
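
    One of the simplest multi-label baselines in such a benchmark is a single tree ensemble predicting all endpoints jointly. A minimal sketch on synthetic data (not a toxicity dataset; the model and metrics are illustrative choices, not the study's 19,000-model protocol):

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, hamming_loss
from sklearn.model_selection import train_test_split

# Toy stand-in for a chemical-descriptor matrix with several toxicity
# endpoints per compound (synthetic data, values invented).
X, Y = make_multilabel_classification(n_samples=500, n_features=40,
                                      n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# A single random forest predicts all endpoints jointly, so label
# correlations can be exploited instead of fitting one model per endpoint.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, Y_tr)
pred = clf.predict(X_te)
print("Hamming loss:", hamming_loss(Y_te, pred))
print("micro-F1:", f1_score(Y_te, pred, average="micro"))
```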

  13. Reliable B cell epitope predictions: impacts of method development and improved benchmarking

    DEFF Research Database (Denmark)

    Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole

    2012-01-01

    … biomedical applications such as: rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive, making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping of B-cell epitopes has been moderate. Several issues regarding the evaluation data sets may however have led to the performance values being underestimated: rarely have all potential epitopes been mapped on an antigen, and antibodies are generally raised against the antigen in a given biological … evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that, given proper benchmark definitions, B-cell epitope prediction methods achieve highly significant predictive performances, suggesting these tools to be a powerful asset in rational epitope discovery. The updated version …

  14. Comprehensive benchmarking of Markov chain Monte Carlo methods for dynamical systems.

    Science.gov (United States)

    Ballnus, Benjamin; Hug, Sabine; Hatz, Kathrin; Görlitz, Linus; Hasenauer, Jan; Theis, Fabian J

    2017-06-24

    In quantitative biology, mathematical models are used to describe and analyze biological processes. The parameters of these models are usually unknown and need to be estimated from experimental data using statistical methods. In particular, Markov chain Monte Carlo (MCMC) methods have become increasingly popular as they allow for a rigorous analysis of parameter and prediction uncertainties without the need for assuming parameter identifiability or removing non-identifiable parameters. A broad spectrum of MCMC algorithms have been proposed, including single- and multi-chain approaches. However, selecting and tuning sampling algorithms suited for a given problem remains challenging and a comprehensive comparison of different methods is so far not available. We present the results of a thorough benchmarking of state-of-the-art single- and multi-chain sampling methods, including Adaptive Metropolis, Delayed Rejection Adaptive Metropolis, Metropolis adjusted Langevin algorithm, Parallel Tempering and Parallel Hierarchical Sampling. Different initialization and adaptation schemes are considered. To ensure a comprehensive and fair comparison, we consider problems with a range of features such as bifurcations, periodical orbits, multistability of steady-state solutions and chaotic regimes. These problem properties give rise to various posterior distributions including uni- and multi-modal distributions and non-normally distributed mode tails. For an objective comparison, we developed a pipeline for the semi-automatic comparison of sampling results. The comparison of MCMC algorithms, initialization and adaptation schemes revealed that overall multi-chain algorithms perform better than single-chain algorithms. In some cases this performance can be further increased by using a preceding multi-start local optimization scheme. These results can inform the selection of sampling methods and the benchmark collection can serve for the evaluation of new algorithms. Furthermore, our
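
    The single-chain baseline in such comparisons is random-walk Metropolis. A minimal sketch on an invented bimodal posterior, which illustrates why multi-chain or tempered samplers are needed for multistable problems:

```python
import numpy as np

def random_walk_metropolis(log_post, x0, n_steps, step=0.5, seed=0):
    """Single-chain random-walk Metropolis, the baseline against which
    adaptive and multi-chain samplers are typically compared."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Bimodal toy posterior (mixture of two Gaussians) mimicking the
# multistability that makes the benchmark problems hard to sample.
log_post = lambda x: np.logaddexp(-0.5 * np.sum((x - 2) ** 2),
                                  -0.5 * np.sum((x + 2) ** 2))
samples = random_walk_metropolis(log_post, x0=[0.0], n_steps=20000)
print("mean:", samples.mean(), "fraction in right mode:", (samples > 0).mean())
```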

  15. A Benchmark for Comparing Precision Medicine Methods in Thyroid Cancer Diagnosis using Tissue Microarrays.

    Science.gov (United States)

    Wang, Ching-Wei; Lee, Yu-Ching; Calista, Evelyne; Zhou, Fan; Zhu, Hongtu; Suzuki, Ryohei; Komura, Daisuke; Ishikawa, Shumpei; Cheng, Shih-Ping

    2017-12-23

    The aim of precision medicine is to harness new knowledge and technology to optimize the timing and targeting of interventions for maximal therapeutic benefit. This study explores the possibility of building AI models without precise pixel-level annotation for prediction of tumor size, extrathyroidal extension, lymph node metastasis, cancer stage and BRAF mutation in thyroid cancer diagnosis, given the patients' background information and histopathological and immunohistochemical tissue images. A novel framework for objective evaluation of automatic patient diagnosis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2017 - A Grand Challenge for Tissue Microarray Analysis in Thyroid Cancer Diagnosis. Here, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the data repository of tissue microarrays, the creation of the clinical diagnosis classification data repository of thyroid cancer, and the definition of an objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, three automatic methods for prediction of the five clinical outcomes have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic patient diagnosis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cvmi/ISBI2017/). cweiwang@mail.ntust.edu.tw. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  16. Benchmarking of methods for identification of antimicrobial resistance genes in bacterial whole genome data.

    Science.gov (United States)

    Clausen, Philip T L C; Zankari, Ea; Aarestrup, Frank M; Lund, Ole

    2016-09-01

    Next generation sequencing (NGS) may be an alternative to phenotypic susceptibility testing for surveillance and clinical diagnosis. However, current bioinformatics methods may be associated with false positives and negatives. In this study, a novel mapping method was developed and benchmarked against two methods in current use for identification of antibiotic resistance genes in bacterial WGS data. The novel method, KmerResistance, examines the co-occurrence of k-mers between the WGS data and a database of resistance genes. The performance of this method was compared with two previously described methods, ResFinder and SRST2, which use an assembly/BLAST method and BWA, respectively, using two datasets with a total of 339 isolates, covering five species, originating from the Oxford University Hospitals NHS Trust and Danish pig farms. The predicted resistance was compared with the observed phenotypes for all isolates. To further challenge the sensitivity of the in silico methods, the datasets were also down-sampled to 1% of the reads and reanalysed. The best results were obtained by identifying resistance genes by mapping directly against the raw reads, indicating that information might be lost during assembly. KmerResistance performed significantly better than the other methods when data were contaminated or contained only few sequence reads. Read mapping is superior to assembly-based methods, and the new KmerResistance seemingly outperforms currently available methods, particularly on datasets with few reads. © The Author 2016. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
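
    The core idea of matching k-mers between reads and a resistance-gene database can be sketched in a few lines; the sequences below are invented, and real tools score hits far more carefully (coverage, depth and expected k-mer counts):

```python
def kmers(seq, k=16):
    """Set of all overlapping k-mers of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_coverage(reads, gene, k=16):
    """Fraction of a resistance gene's k-mers observed in the reads --
    a toy version of the k-mer co-occurrence idea behind KmerResistance."""
    seen = set()
    for read in reads:
        seen |= kmers(read, k)
    gene_kmers = kmers(gene, k)
    return len(gene_kmers & seen) / len(gene_kmers)

# Illustrative sequences only (real genes are hundreds of bases long).
gene = "ATGGCTAAAGGTCCATTGGCAACGTTACGCTTAGGCATCGATCGTAGCTAGCTAGGACT"
reads = [gene[5:40], gene[20:55], "TTTTTTTTTTTTTTTTTTTTTTTT"]
print(f"gene k-mer coverage: {kmer_coverage(reads, gene):.2f}")
```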

  17. Benchmarking of London Dispersion-Accounting Density Functional Theory Methods on Very Large Molecular Complexes.

    Science.gov (United States)

    Risthaus, Tobias; Grimme, Stefan

    2013-03-12

    A new test set (S12L) containing 12 supramolecular noncovalently bound complexes is presented and used to evaluate seven different methods to account for dispersion in DFT (DFT-D3, DFT-D2, DFT-NL, XDM, dDsC, TS-vdW, M06-L) at different basis set levels against experimental, back-corrected reference energies. This allows conclusions about the performance of each method in an explorative research setting on "real-life" problems. Most DFT methods show satisfactory performance but, due to the size of the complexes, almost always require an explicit correction for the nonadditive Axilrod-Teller-Muto three-body dispersion interaction to get accurate results. The necessity of using a method capable of accounting for dispersion is clearly demonstrated in that the two-body dispersion contributions are on the order of 20-150% of the total interaction energy. MP2 and some variants thereof are shown to be insufficient for this, while a few tested D3-corrected semiempirical MO methods perform reasonably well. Overall, we suggest the use of this benchmark set as a "sanity check" against overfitting to too small molecular cases.

  18. Evaluating the Resilience of the Bottom-up Method used to Detect and Benchmark the Smartness of University Campuses

    NARCIS (Netherlands)

    Giovannella, Carlo; Andone, Diana; Dascalu, Mihai; Popescu, Elvira; Rehm, Matthias; Mealha, Oscar

    2017-01-01

    A new method to perform a bottom-up extraction and benchmark of the perceived multilevel smartness of complex ecosystems has been recently described and applied to territories and learning ecosystems like university campuses and schools. In this paper we study the resilience of our method

  19. MOLECULAR LINE EMISSION FROM MULTIFLUID SHOCK WAVES. I. NUMERICAL METHODS AND BENCHMARK TESTS

    International Nuclear Information System (INIS)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-01-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.

  20. Molecular Line Emission from Multifluid Shock Waves. I. Numerical Methods and Benchmark Tests

    Science.gov (United States)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-05-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.

  1. A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature.

    Directory of Open Access Journals (Sweden)

    Domonkos Tikk

    Full Text Available The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein-protein interactions (PPIs reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study

  2. A Benchmark of Lidar-Based Single Tree Detection Methods Using Heterogeneous Forest Data from the Alpine Space

    Directory of Open Access Journals (Sweden)

    Lothar Eysn

    2015-05-01

    Full Text Available In this study, eight airborne laser scanning (ALS-based single tree detection methods are benchmarked and investigated. The methods were applied to a unique dataset originating from different regions of the Alpine Space covering different study areas, forest types, and structures. This is the first benchmark ever performed for different forests within the Alps. The evaluation of the detection results was carried out in a reproducible way by automatically matching them to precise in situ forest inventory data using a restricted nearest neighbor detection approach. Quantitative statistical parameters such as percentages of correctly matched trees and omission and commission errors are presented. The proposed automated matching procedure presented herein shows an overall accuracy of 97%. Method based analysis, investigations per forest type, and an overall benchmark performance are presented. The best matching rate was obtained for single-layered coniferous forests. Dominated trees were challenging for all methods. The overall performance shows a matching rate of 47%, which is comparable to results of other benchmarks performed in the past. The study provides new insight regarding the potential and limits of tree detection with ALS and underlines some key aspects regarding the choice of method when performing single tree detection for the various forest types encountered in alpine regions.

  3. Survey of methods used to asses human reliability in the human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1988-01-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim to assess the state-of-the-art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participate in the HF-RBE, which is organised around two study cases: (1) analysis of routine functional test and maintenance procedures, with the aim to assess the probability of test-induced failures, the probability of failures to remain unrevealed, and the potential to initiate transients because of errors performed in the test; and (2) analysis of human actions during an operational transient, with the aim to assess the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. The paper briefly reports how the HF-RBE was structured and gives an overview of the methods that have been used for predicting human reliability in both study cases. The experience in applying these methods is discussed and the results obtained are compared. (author)

  4. A method of transferring G.T.S. benchmark value to survey area using electronic total station

    Digital Repository Service at National Institute of Oceanography (India)

    Ganesan, P.

    may be at a far away distance from the area to be surveyed. In these cases, the most common traditional method of transferring the benchmark value using an automatic level instrument is a difficult task, consuming an enormous amount of time and labour...

  5. Anomaly detection in OECD Benchmark data using co-variance methods

    International Nuclear Information System (INIS)

    Srinivasan, G.S.; Krinizs, K.; Por, G.

    1993-02-01

    OECD Benchmark data distributed for the SMORN VI Specialists Meeting in Reactor Noise were investigated for anomaly detection in artificially generated reactor noise benchmark analysis. It was observed that statistical features extracted from covariance matrix of frequency components are very sensitive in terms of the anomaly detection level. It is possible to create well defined alarm levels. (R.P.) 5 refs.; 23 figs.; 1 tab

  6. Combining and benchmarking methods of foetal ECG extraction without maternal or scalp electrode data

    International Nuclear Information System (INIS)

    Behar, Joachim; Oster, Julien; Clifford, Gari D

    2014-01-01

    Despite significant advances in adult clinical electrocardiography (ECG) signal processing techniques and the power of digital processors, the analysis of non-invasive foetal ECG (NI-FECG) is still in its infancy. The Physionet/Computing in Cardiology Challenge 2013 addresses some of these limitations by making a set of FECG data publicly available to the scientific community for evaluation of signal processing techniques. The abdominal ECG signals were first preprocessed with a band-pass filter in order to remove higher frequencies and baseline wander. A notch filter to remove power interferences at 50 Hz or 60 Hz was applied if required. The signals were then normalized before applying various source separation techniques to cancel the maternal ECG. These techniques included: template subtraction, principal/independent component analysis, extended Kalman filter and a combination of a subset of these methods (FUSE method). Foetal QRS detection was performed on all residuals using a Pan and Tompkins QRS detector and the residual channel with the smoothest foetal heart rate time series was selected. The FUSE algorithm performed better than all the individual methods on the training data set. On the validation and test sets, the best Challenge scores obtained were E1 = 179.44, E2 = 20.79, E3 = 153.07, E4 = 29.62 and E5 = 4.67 for events 1–5 respectively using the FUSE method. These were the best Challenge scores for E1 and E2 and third and second best Challenge scores for E3, E4 and E5 out of the 53 international teams that entered the Challenge. The results demonstrated that existing standard approaches for foetal heart rate estimation can be improved by fusing estimators together. We provide open source code to enable benchmarking for each of the standard approaches described. (paper)
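
    The preprocessing stage described here (band-pass plus mains notch, then normalization) can be sketched with standard filters; the cut-offs, Q factor and sampling rate below are assumptions, not the Challenge's specification:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess_abdominal_ecg(sig, fs=1000.0, band=(3.0, 100.0), mains=50.0):
    """Band-pass to remove baseline wander and high-frequency noise,
    then notch out mains interference, then normalize."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    sig = filtfilt(b, a, sig)
    bn, an = iirnotch(mains, Q=30.0, fs=fs)
    sig = filtfilt(bn, an, sig)
    return (sig - sig.mean()) / sig.std()    # normalization

# Synthetic test signal: 10 s of noise plus 50 Hz mains interference.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
raw = (np.sin(2 * np.pi * 50 * t)
       + 0.1 * np.random.default_rng(0).standard_normal(t.size))
clean = preprocess_abdominal_ecg(raw, fs=fs)
print(clean.std())
```

    Maternal ECG cancellation (template subtraction, ICA, Kalman filtering) and foetal QRS detection would then operate on the cleaned residual.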

  7. Derivation of the critical effect size/benchmark response for the dose-response analysis of the uptake of radioactive iodine in the human thyroid.

    Science.gov (United States)

    Weterings, Peter J J M; Loftus, Christine; Lewandowski, Thomas A

    2016-08-22

    Potential adverse effects of chemical substances on thyroid function are usually examined by measuring serum levels of thyroid-related hormones. Instead, recent risk assessments for thyroid-active chemicals have focussed on iodine uptake inhibition, an upstream event that by itself is not necessarily adverse. Establishing the extent of uptake inhibition that can be considered de minimis, the chosen benchmark response (BMR), is therefore critical. The BMR values selected by two international advisory bodies were 5% and 50%, a difference that had correspondingly large impacts on the estimated risks and health-based guidance values that were established. Potential treatment-related inhibition of thyroidal iodine uptake is usually determined by comparing thyroidal uptake of radioactive iodine (RAIU) during treatment with a single pre-treatment RAIU value. In the present study it is demonstrated that the physiological intra-individual variation in iodine uptake is much larger than 5%. Consequently, in-treatment RAIU values, expressed as a percentage of the pre-treatment value, have an inherent variation that needs to be considered when conducting dose-response analyses. Based on statistical and biological considerations, a BMR of 20% is proposed for benchmark dose analysis of human thyroidal iodine uptake data, to take the inherent variation in relative RAIU data into account. Implications for the tolerated daily intakes for perchlorate and chlorate, recently established by the European Food Safety Authority (EFSA), are discussed. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.

  8. BENCHMARK OF MACHINE LEARNING METHODS FOR CLASSIFICATION OF A SENTINEL-2 IMAGE

    Directory of Open Access Journals (Sweden)

    F. Pirotti

    2016-06-01

    Full Text Available Thanks mainly to ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue, since land cover of a specific class may present large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi-layered perceptron, multi-layered perceptron ensemble, ctree, boosting, and logarithmic regression. The validation is carried out using a control dataset, which consists of an independent classification into 11 land-cover classes of an area of about 60 km², obtained by manual visual interpretation of high-resolution images (20 cm ground sampling distance) by experts. In this study five out of the eleven classes are used, since the others have too few samples (pixels) for the testing and validating subsets. The classes used are the following: (i) urban, (ii) sowable areas, (iii) water, (iv) tree plantations, and (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold), and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and the control values over three sets of data: the training dataset (train), the whole control dataset (full), and with k-fold cross-validation (kfold) with ten folds. Results from validation of predictions of the whole dataset (full) show the
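
    The validation protocol described above (several classifiers scored with ten-fold cross-validation, with training speed recorded) can be sketched as follows. The synthetic feature matrix stands in for Sentinel-2 band values, and the classifier list is a small subset of the nine methods benchmarked, so this is an illustration of the protocol rather than a reproduction of the study.

```python
# Sketch of k-fold benchmarking of several classifiers for accuracy and
# speed; data and the classifier subset are illustrative stand-ins.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# 10 spectral features, 5 land-cover classes (stand-ins for Sentinel-2 bands).
X, y = make_classification(n_samples=3000, n_features=10, n_informative=8,
                           n_classes=5, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", gamma="scale"),
}

for name, model in models.items():
    start = time.perf_counter()
    scores = cross_val_score(model, X, y, cv=10)   # ten folds, as in the study
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f} "
          f"({elapsed:.1f} s)")
```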

  9. Characterization of the dynamic friction of woven fabrics: Experimental methods and benchmark results

    NARCIS (Netherlands)

    Sachs, Ulrich; Akkerman, Remko; Fetfatsidis, K.; Vidal-Sallé, E.; Schumacher, J.; Ziegmann, G.; Allaoui, S.; Hivet, G.; Maron, B.; Vanclooster, K.; Lomov, S.V.

    2014-01-01

    A benchmark exercise was conducted to compare various friction test set-ups with respect to the measured coefficients of friction. The friction was determined between Twintex®PP, a fabric of commingled yarns of glass and polypropylene filaments, and a metal surface. The same material was supplied to

  10. Characterization of mechanical behavior of woven fabrics: experimental methods and benchmark results

    NARCIS (Netherlands)

    Cao, J.; Akkerman, Remko; Boisse, P.; Chen, J.; Cheng, H.S.; de Graaf, E.F.; Gorczyca, J.L.; Harrison, P.

    2008-01-01

    Textile composites made of woven fabrics have demonstrated excellent mechanical properties for the production of high specific-strength products. Research efforts in woven fabric sheet forming are currently at a point where benchmarking will lead to major advances in understanding both the

  11. Two-dimensional free-surface flow under gravity: A new benchmark case for SPH method

    Science.gov (United States)

    Wu, J. Z.; Fang, L.

    2018-02-01

    Currently there are few free-surface benchmark cases with analytical results for Smoothed Particle Hydrodynamics (SPH) simulations. In the present contribution we introduce a two-dimensional free-surface flow under gravity, and obtain an analytical expression for the surface height difference and a theoretical estimate of the surface fractal dimension. These are preliminarily validated and supported by SPH calculations.

  12. Control volume method for the thermal convection problem in a rotating spherical shell: test on the benchmark solution

    Czech Academy of Sciences Publication Activity Database

    Hejda, Pavel; Reshetnyak, M.

    2004-01-01

    Roč. 48, č. 4 (2004), s. 741-746 ISSN 0039-3169 R&D Projects: GA AV ČR KSK3012103 Grant - others:RFFR(RU) 03-05-64074; EC(XE) HPRI-CT-1999-00026 Institutional research plan: CEZ:AV0Z3012916 Keywords : liquid core * dynamo benchmark * finite volume method Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.447, year: 2004

  13. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), termed holistic quality management in Indonesian, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes, in order to obtain information that can help the organization improve its performance.

  14. Application of benchmark dose modeling to protein expression data in the development and analysis of mode of action/adverse outcome pathways for testicular toxicity.

    Science.gov (United States)

    Chepelev, Nikolai L; Meek, M E Bette; Yauk, Carole Lyn

    2014-11-01

    Reliable quantification of gene and protein expression has potential to contribute significantly to the characterization of hypothesized modes of action (MOA) or adverse outcome pathways for critical effects of toxicants. Quantitative analysis of gene expression by benchmark dose (BMD) modeling has been facilitated by the development of effective software tools. In contrast, protein expression is still generally quantified by a less robust effect level (no or lowest [adverse] effect levels) approach, which minimizes its potential utility in the consideration of dose-response and temporal concordance for key events in hypothesized MOAs. BMD modeling is applied here to toxicological data on testicular toxicity to investigate its potential utility in analyzing protein expression relevant to the proposed MOA to inform human health risk assessment. The results illustrate how the BMD analysis of protein expression in animal tissues in response to toxicant exposure: (1) complements other toxicity data, and (2) contributes to consideration of the empirical concordance of dose-response relationships, as part of the weight of evidence for hypothesized MOAs to facilitate consideration and application in regulatory risk assessment. Lack of BMD analysis in proteomics has likely limited its use for these purposes. This paper illustrates the added value of BMD modeling to support and strengthen hypothetical MOAs as a basis to facilitate the translation and uptake of the results of proteomic research into risk assessment. Copyright © 2014 Her Majesty the Queen in Right of Canada. Journal of Applied Toxicology © 2014 John Wiley & Sons, Ltd.
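
    As a minimal illustration of what BMD modeling of continuous data (such as protein expression) involves, the sketch below fits a Hill model and solves for the dose at a one-standard-deviation benchmark response. The data, the single model, and the BMR convention are assumptions for illustration; tools such as the U.S. EPA's BMDS fit a suite of models and also report the BMDL.

```python
# Minimal benchmark dose (BMD) estimation for continuous data: fit a Hill
# model, then solve for the dose giving a response one control SD above
# background. Data and BMR definition are illustrative assumptions only.
import numpy as np
from scipy.optimize import brentq, curve_fit

dose = np.array([0, 0, 0, 2, 2, 2, 4, 4, 4, 8, 8, 8], dtype=float)
resp = np.array([1.0, 1.1, 0.9, 1.4, 1.5, 1.3, 2.1, 2.0, 2.2, 2.6, 2.8, 2.7])

def hill(d, background, vmax, kd, n):
    return background + vmax * d**n / (kd**n + d**n)

# Bounds keep kd and n positive so the model stays finite at dose zero.
popt, _ = curve_fit(hill, dose, resp, p0=[1.0, 2.0, 3.0, 1.5],
                    bounds=([0, 0, 1e-3, 0.5], [10, 10, 50, 8]))

control_sd = resp[dose == 0].std(ddof=1)
target = hill(0.0, *popt) + control_sd          # BMR: 1 SD above background

bmd = brentq(lambda d: hill(d, *popt) - target, 1e-6, dose.max())
print(f"BMD (1 SD benchmark response): {bmd:.2f} mg/kg/day")
```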

  15. Evaluating the Resilience of the Bottom-up Method used to Detect and Benchmark the Smartness of University Campuses

    DEFF Research Database (Denmark)

    Giovannella, Carlo; Andone, Diana; Dascalu, Mihai

    2016-01-01

    A new method to perform a bottom-up extraction and benchmark of the perceived multilevel smartness of complex ecosystems has been recently described and applied to territories and learning ecosystems like university campuses and schools. In this paper we study the resilience of our method...... by comparing and integrating the data collected in several European Campuses during two different academic years, 2014-15 and 2015-16. The overall results are: a) a more adequate and robust definition of the orthogonal multidimensional space of representation of the smartness, and b) the definition...

  16. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks, which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...

  17. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    International Nuclear Information System (INIS)

    Norris, Edward T.; Liu, Xin; Hsieh, Jiang

    2015-01-01

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating the absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo method was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low-dose region). Simulations with quadrature set 8 and first-order Legendre polynomial expansions proved to be the most efficient computation method in the authors’ study. The single-thread computation time of the deterministic simulation with quadrature set 8 and first-order Legendre polynomial expansions was 21 min on a personal computer
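
    Denovo solves the linear Boltzmann equation in three dimensions with Legendre-expanded scattering. As a much-reduced, hedged analogue, the sketch below applies the same discrete ordinates idea to a one-group, one-dimensional slab with isotropic scattering, using diamond-difference sweeps and source iteration; all cross sections here are invented.

```python
# Toy one-group, 1-D slab discrete ordinates (S_N) solver: isotropic
# scattering, diamond-difference spatial sweeps, source iteration. A
# pedagogical analogue of the method, not Denovo; cross sections are made up.
import numpy as np

nx, L = 200, 10.0                 # cells, slab thickness (cm)
dx = L / nx
sig_t, sig_s, q = 1.0, 0.5, 1.0   # total, scattering, source (assumed)

mu, w = np.polynomial.legendre.leggauss(8)   # S_8 angular quadrature
phi = np.zeros(nx)                            # scalar flux

for it in range(500):
    src = 0.5 * (sig_s * phi + q)             # isotropic emission density
    phi_new = np.zeros(nx)
    for m, wm in zip(mu, w):
        psi_in = 0.0                          # vacuum boundary
        cells = range(nx) if m > 0 else range(nx - 1, -1, -1)
        for i in cells:
            # diamond-difference cell balance for direction m
            psi_c = (src[i] + 2 * abs(m) / dx * psi_in) / (sig_t + 2 * abs(m) / dx)
            psi_out = 2 * psi_c - psi_in
            phi_new[i] += wm * psi_c
            psi_in = psi_out
    diff = np.max(np.abs(phi_new - phi))
    phi = phi_new
    if diff < 1e-8:
        break

print(f"converged in {it} iterations; midplane flux = {phi[nx // 2]:.4f}")
```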

  18. A convolution method for predicting mean treatment dose including organ motion at imaging

    International Nuclear Information System (INIS)

    Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA

    2000-01-01

    Full text: The random treatment delivery errors (organ motion and set-up error) can be incorporated into the treatment planning software using a convolution method. Mean treatment dose is computed as the convolution of a static dose distribution with a variation kernel. Typically this variation kernel is Gaussian with variance equal to the sum of the organ motion and set-up error variances. We propose a novel variation kernel for the convolution technique that additionally considers the position of the mobile organ in the planning CT image. The systematic error of organ position in the planning CT image can be considered random for each patient over a population. Thus the variance of the variation kernel will equal the sum of treatment delivery variance and organ motion variance at planning for the population of treatments. The kernel is extended to deal with multiple pre-treatment CT scans to improve tumour localisation for planning. Mean treatment doses calculated with the convolution technique are compared to benchmark Monte Carlo (MC) computations. Calculations of mean treatment dose using the convolution technique agreed with MC results for all cases to better than ± 1 Gy in the planning treatment volume for a prescribed 60 Gy treatment. Convolution provides a quick method of incorporating random organ motion (captured in the planning CT image and during treatment delivery) and random set-up errors directly into the dose distribution. Copyright (2000) Australasian College of Physical Scientists and Engineers in Medicine
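
    In its simplest form, the convolution idea above reduces to blurring the static dose distribution with a Gaussian whose variance is the sum of the organ-motion and set-up variances. The sketch below does exactly that on a toy 2-D dose grid; the grid spacing, sigmas, and square field are assumptions, not the authors' clinical data.

```python
# Sketch of the convolution method: blur a static planned dose with a
# Gaussian kernel whose variance is the sum of the organ-motion and set-up
# variances. Grid, sigmas, and the dose model are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

grid = 2.0                                   # mm per voxel (assumed)
static_dose = np.zeros((100, 100))
static_dose[35:65, 35:65] = 60.0             # idealised 60 Gy square field

sigma_motion, sigma_setup = 4.0, 3.0         # mm, assumed standard deviations
sigma_total = np.hypot(sigma_motion, sigma_setup)  # variances add in quadrature

mean_dose = gaussian_filter(static_dose, sigma=sigma_total / grid)
print(f"centre: {mean_dose[50, 50]:.1f} Gy, field edge: {mean_dose[35, 50]:.1f} Gy")
```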

  19. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  20. Impact of Genomics Platform and Statistical Filtering on Transcriptional Benchmark Doses (BMD) and Multiple Approaches for Selection of Chemical Point of Departure (PoD).

    Directory of Open Access Journals (Sweden)

    A Francina Webster

    Full Text Available Many regulatory agencies are exploring ways to integrate toxicogenomic data into their chemical risk assessments. The major challenge lies in determining how to distill the complex data produced by high-content, multi-dose gene expression studies into quantitative information. It has been proposed that benchmark dose (BMD) values derived from toxicogenomics data be used as point of departure (PoD) values in chemical risk assessments. However, there is limited information regarding which genomics platforms are most suitable and how to select appropriate PoD values. In this study, we compared BMD values modeled from RNA sequencing-, microarray-, and qPCR-derived gene expression data from a single study, and explored multiple approaches for selecting a single PoD from these data. The strategies evaluated include several that do not require prior mechanistic knowledge of the compound for selection of the PoD, thus providing approaches for assessing data-poor chemicals. We used RNA extracted from the livers of female mice exposed to non-carcinogenic (0, 2 mg/kg/day; mkd) and carcinogenic (4, 8 mkd) doses of furan for 21 days. We show that transcriptional BMD values were consistent across technologies and highly predictive of the two-year cancer bioassay-based PoD. We also demonstrate that filtering data based on statistically significant changes in gene expression prior to BMD modeling creates more conservative BMD values. Taken together, this case study on mice exposed to furan demonstrates that high-content toxicogenomics studies produce robust data for BMD modelling that are minimally affected by inter-technology variability and highly predictive of cancer-based PoD doses.

  1. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    -tailed hawk, osprey) (scientific names for both the mammalian and avian species are presented in Appendix B). [In this document, NOAEL refers to both dose (mg contaminant per kg animal body weight per day) and concentration (mg contaminant per kg of food or L of drinking water)]. The 20 wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at U.S. Department of Energy (DOE) waste sites. The NOAEL-based benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species; LOAEL-based benchmarks represent threshold levels at which adverse effects are likely to become evident. These benchmarks consider contaminant exposure through oral ingestion of contaminated media only. Exposure through inhalation and/or direct dermal exposure are not considered in this report.

  2. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  3. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  4. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...

  5. A systematic benchmark method for analysis and comparison of IMRT treatment planning algorithms.

    Science.gov (United States)

    Mayo, Charles S; Urie, Marcia M

    2003-01-01

    Tools and procedures for evaluating and comparing different intensity-modulated radiation therapy (IMRT) systems are presented. IMRT is increasingly in demand and there are numerous systems available commercially. These programs confront dosimetrists and physicists with software significantly different from conventional planning systems, and the options often seem initially overwhelmingly complex to the user. By creating geometric target volumes and critical normal tissues, the characteristics of the algorithms may be investigated, and the influence of the different parameters explored. Overall optimization strategies of the algorithm may be characterized by treating a square target volume (TV) with 2 perpendicular beams, with and without heterogeneities. A half-donut (hemi-annulus) TV with a "donut hole" (central cylinder) critical normal tissue (CNT) on a CT of a simulated quality assurance phantom is suggested as a good geometry to explore the IMRT algorithm parameters. Using this geometry, an order of varying parameters is suggested. The first step is to determine the effects of the number of stratifications of optimized intensity fluence on the resulting dose distribution, and to select a fixed number of stratifications for further studies. To characterize the dose distributions, a dose-homogeneity index (DHI) is defined as the ratio of the dose received by 90% of the volume to the minimum dose received by the "hottest" 10% of the volume. The next step is to explore the effects of priority and penalty on both the TV and the CNT. Then, choosing and fixing these parameters, the effects of varying the number of beams can be examined. As well as evaluating the dose distributions (and DHI), the number of subfields and the number of monitor units required for different numbers of stratifications and beams can be evaluated.
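
    The DHI defined in this record can be computed directly from a voxel dose array. In the sketch below, the dose received by 90% of the volume and the minimum dose to the hottest 10% are taken as the 10th and 90th percentiles of the voxel doses, which is an assumed reading of the dose-volume convention; the dose array is a made-up stand-in for a TV dose grid.

```python
# Dose-homogeneity index (DHI) per the definition above: D90 (dose that 90%
# of the volume receives at least) over the minimum dose of the hottest 10%.
import numpy as np

rng = np.random.default_rng(0)
tv_dose = rng.normal(loc=60.0, scale=2.0, size=50_000)  # Gy, illustrative

def dhi(dose):
    d90 = np.percentile(dose, 10)   # 90% of voxels receive at least this
    d10 = np.percentile(dose, 90)   # hottest 10% receive at least this
    return d90 / d10

print(f"DHI = {dhi(tv_dose):.3f}")   # 1.0 would be perfectly homogeneous
```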

  6. Improvement of dose evaluation method for employees at severe accident

    International Nuclear Information System (INIS)

    Onda, Takashi; Yoshida, Yoshitaka; Kudo, Seiichi; Nishimura, Kazuya

    2003-01-01

    It is expected that the selection of access routes for employees who engage in emergency work during a severe accident in a nuclear power plant makes a difference in their radiation dose values. In order to examine how much difference in dose arises from the selection of access routes, in the case of a severe accident in a pressurized water reactor plant, we improved the method of obtaining the dose for employees and expanded the analysis system. Through the expansion of the system and the improvement of the method, we have realized the following: (1) dose evaluation is possible over the whole plant area; (2) calculation efficiency is increased by reducing the number of radiation sources; and (3) functionality is improved by introducing a skyshine calculation for the highest floor. The improved system clarifies the following: (1) the doses change with the selected access route, and the system can quantify the difference; and (2) choosing the most suitable access route is effective in suppressing the dose to employees. (author)

  7. Method for dose calculation in intracavitary irradiation of endometrical carcinoma

    International Nuclear Information System (INIS)

    Zevrieva, I.F.; Ivashchenko, N.T.; Musapirova, N.A.; Fel'dman, S.Z.; Sajbekov, T.S.

    1979-01-01

    A method for dose calculation under the conditions of intracavitary gamma therapy of endometrial carcinoma using spherical and linear 60 Co sources was elaborated. Calculations of dose rates for different numbers and orientations of spherical radiation sources and for different planes were made with the aid of a BEhSM-4M computer. A dosimetric study of the dose fields was made using a phantom imitating the real conditions of irradiation. Discrepancies between experimental and calculated values are within the limits of the experimental accuracy

  8. Benchmarking of methods for identification of antimicrobial resistance genes in bacterial whole genome data

    DEFF Research Database (Denmark)

    Clausen, Philip T. L. C.; Zankari, Ea; Aarestrup, Frank Møller

    2016-01-01

    A novel method, KmerResistance, which examines the co-occurrence of k-mers between the WGS data and a database of resistance genes, was developed. Its performance was compared to two different methods in current use for identification of antibiotic resistance genes in bacterial WGS data: the two previously described methods ResFinder and SRST2, which use an assembly/BLAST method and BWA, respectively. The comparison used two datasets with a total of 339 isolates, covering five species, originating from the Oxford University Hospitals NHS Trust and Danish pig farms. The predicted resistance was compared with the observed phenotypes for all isolates. To challenge further the sensitivity of the in silico methods, the datasets were also down-sampled to 1% of the reads and reanalysed. The best results were obtained by identification of resistance genes by mapping directly against the raw reads...
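
    As a toy illustration of the k-mer co-occurrence idea underlying KmerResistance (not its actual implementation, which scores coverage and identity against the raw reads and accounts for the host species), the sketch below counts how many k-mers of each database gene occur in a set of reads; the sequences are invented, not real resistance genes.

```python
# Minimal k-mer co-occurrence between reads and a gene database: count how
# many k-mers of each gene appear in the reads. Toy sequences only.
from collections import Counter

def kmers(seq, k=16):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

gene_db = {
    "blaTEM_like": "ATGAGTATTCAACATTTCCGTGTCGCCCTTATTCCCTTTTTTGCGGCATT",
    "tetA_like":   "ATGAAACCCAACCCGAATCTGAGGGTCACTGACGGACTTGGGTTGATTGG",
}

reads = [
    "TTCAACATTTCCGTGTCGCCCTTATTCCC",   # overlaps blaTEM_like
    "GGGTCACTGACGGACTTGGGTTGATT",      # overlaps tetA_like
    "ACGTACGTACGTACGTACGTACGTACGT",    # background
]

read_kmers = Counter()
for r in reads:
    read_kmers.update(kmers(r))

for gene, seq in gene_db.items():
    gk = kmers(seq)
    hits = sum(1 for km in gk if km in read_kmers)
    print(f"{gene}: {hits}/{len(gk)} k-mers found in reads")
```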

  9. Benchmarking and application of the state-of-the-art uncertainty analysis methods XSUSA and SHARK-X

    International Nuclear Information System (INIS)

    Aures, A.; Bostelmann, F.; Hursin, M.; Leray, O.

    2017-01-01

    Highlights: • Application of the uncertainty analysis methods XSUSA and SHARK-X. • Propagation of nuclear data uncertainty through PWR pin cell depletion calculation. • Uncertainty quantification of eigenvalue, nuclide densities and Doppler coefficient. • Top contributor to overall output uncertainty by sensitivity analysis. • Comparison with SAMPLER and TSUNAMI of the SCALE code package. - Abstract: This study presents collaborative work performed between GRS and PSI on benchmarking and application of the state-of-the-art uncertainty analysis methods XSUSA and SHARK-X. Applied to a PWR pin cell depletion calculation, both methods propagate input uncertainty from nuclear data to output uncertainty. The uncertainty of the multiplication factors, nuclide densities, and fuel temperature coefficients derived by both methods are compared at various burnup steps. Comparisons of these quantities are furthermore performed with the SAMPLER module of SCALE 6.2. The perturbation theory based TSUNAMI module of both SCALE 6.1 and SCALE 6.2 is additionally applied for comparisons of the reactivity coefficient.

  10. Application of the hybrid approach to the benchmark dose of urinary cadmium as the reference level for renal effects in cadmium polluted and non-polluted areas in Japan

    International Nuclear Information System (INIS)

    Suwazono, Yasushi; Nogawa, Kazuhiro; Uetani, Mirei; Nakada, Satoru; Kido, Teruhiko; Nakagawa, Hideaki

    2011-01-01

    Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and β2-microglobulin (β2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for β2-MG was 3.5 μg/g creatinine in men and 3.7 μg/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.
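
    A minimal sketch of the hybrid approach, under stated assumptions (a log-normal β2-MG marker, a linear model for the log-marker mean versus U-Cd, a 5% background risk, and a 5% additional-risk BMR as in the study), is given below on synthetic data. A real analysis would also derive the BMDL from confidence limits.

```python
# Hybrid-approach sketch: regress log(beta2-MG) on urinary Cd, set the
# "abnormal" cut-off so background risk at zero exposure is 5%, then solve
# for the dose where additional risk reaches the 5% BMR. Synthetic data.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

rng = np.random.default_rng(1)
ucd = rng.uniform(0.5, 15.0, 500)                 # urinary Cd, ug/g creatinine
log_b2mg = 4.0 + 0.12 * ucd + rng.normal(0, 1.0, 500)

# Linear model for the mean of the log-marker, with residual SD.
slope, intercept = np.polyfit(ucd, log_b2mg, 1)
sigma = np.std(log_b2mg - (intercept + slope * ucd), ddof=2)

p0 = 0.05                                          # background risk at dose 0
cutoff = norm.ppf(1 - p0, loc=intercept, scale=sigma)

def risk(d):
    """Probability of exceeding the cut-off at dose d."""
    return norm.sf(cutoff, loc=intercept + slope * d, scale=sigma)

bmr = 0.05                                         # additional risk
bmd = brentq(lambda d: risk(d) - (p0 + bmr), 0.0, 50.0)
print(f"hybrid-approach BMD: {bmd:.1f} ug/g creatinine")
```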

  11. Application of the hybrid approach to the benchmark dose of urinary cadmium as the reference level for renal effects in cadmium polluted and non-polluted areas in Japan

    Energy Technology Data Exchange (ETDEWEB)

    Suwazono, Yasushi, E-mail: suwa@faculty.chiba-u.jp [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nogawa, Kazuhiro; Uetani, Mirei [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nakada, Satoru [Safety and Health Organization, Chiba University, 1-33 Yayoicho, Inageku, Chiba 263-8522 (Japan); Kido, Teruhiko [Department of Community Health Nursing, Kanazawa University School of Health Sciences, 5-11-80 Kodatsuno, Kanazawa, Ishikawa 920-0942 (Japan); Nakagawa, Hideaki [Department of Epidemiology and Public Health, Kanazawa Medical University, 1-1 Daigaku, Uchnada, Ishikawa 920-0293 (Japan)

    2011-02-15

    Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and β2-microglobulin (β2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for β2-MG was 3.5 μg/g creatinine in men and 3.7 μg/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.

  12. Benchmarking pKa prediction methods for Lys115 in acetoacetate decarboxylase.

    Science.gov (United States)

    Liu, Yuli; Patel, Anand H G; Burger, Steven K; Ayers, Paul W

    2017-05-01

    Three different pKa prediction methods were used to calculate the pKa of Lys115 in acetoacetate decarboxylase (AADase): the empirical method PROPKA, the multiconformation continuum electrostatics (MCCE) method, and the molecular dynamics/thermodynamic integration (MD/TI) method with implicit solvent. As expected, accurate pKa prediction of Lys115 depends on the protonation patterns of other ionizable groups, especially the nearby Glu76. However, since the prediction methods do not explicitly sample the protonation patterns of nearby residues, this must be done manually. When Glu76 is deprotonated, all three methods give an incorrect pKa value for Lys115. If protonated Glu76 is used in an MD/TI calculation, the pKa of Lys115 is predicted to be 5.3, which agrees well with the experimental value of 5.9. This result agrees with previous site-directed mutagenesis studies, where the mutation of Glu76 (negative charge when deprotonated) to Gln (neutral) causes no change in Km, suggesting that Glu76 has no effect on the pKa shift of Lys115. Thus, we postulate that the pKa of Glu76 is also shifted so that Glu76 is protonated (neutral) in AADase. Graphical abstract: simulated abundances of protonated species as pH is varied.

  13. Calculation method for gamma-dose rates from spherical puffs

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.; Deme, S.; Lang, E.

    1993-05-01

    The Lagrangian puff-models are widely used for calculation of the dispersion of atmospheric releases. Basic outputs from such models are concentrations of material in the air and on the ground. The simplest method for calculating the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is, however, only applicable for points far away from the release point. The exact calculation of the cloud dose using the volume integral requires significant computer time. The volume integral for the gamma dose could be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but usually the accuracy is poor due to the fact that the same correction factors are used for all isotopes. The authors describe a more elaborate correction method. This method uses precalculated values of the gamma-dose rate as a function of the puff dispersion parameter (δp) and the distance from the puff centre for four energy groups. The release of energy for each radionuclide in each energy group has been calculated and tabulated. Based on these tables and a suitable interpolation procedure, the calculation of gamma doses takes very little time and is almost independent of the number of radionuclides. (au) (7 tabs., 7 ills., 12 refs.)
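
    The precalculated-table idea can be sketched as interpolation over a (δp, distance) grid per energy group, summed with the per-group energy release. Every number in the sketch below is a placeholder, not a value from the report.

```python
# Sketch of table lookup for puff gamma dose: dose rates precalculated on a
# grid of dispersion parameter and distance, one table per energy group,
# interpolated at run time. All numbers are placeholders.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

sigma_p = np.array([10.0, 30.0, 100.0, 300.0])     # m, dispersion parameter
distance = np.array([0.0, 50.0, 200.0, 1000.0])    # m from puff centre

# One fake normalised dose-rate table per energy group (4 groups).
tables = [RegularGridInterpolator(
              (sigma_p, distance),
              np.outer(1.0 / sigma_p, np.exp(-distance / (300.0 * (g + 1)))))
          for g in range(4)]

# Energy released per disintegration in each group for one nuclide (assumed).
group_energy = np.array([0.1, 0.3, 0.6, 1.2])      # MeV, placeholder

def dose_rate(sig, dist, activity):
    """Interpolated dose rate for a puff of given dispersion and activity."""
    per_group = np.array([float(tab([sig, dist])) for tab in tables])
    return activity * np.sum(group_energy * per_group)

print(f"dose rate = {dose_rate(50.0, 120.0, 1e10):.3e} (arbitrary units)")
```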

  14. Two gamma dose evaluation methods for silicon semiconductor detector

    International Nuclear Information System (INIS)

    Chen Faguo; Jin Gen; Yang Yapeng; Xu Yuan

    2011-01-01

    Silicon PIN diodes have been widely used as personal and area dosimeters because of their small volume, simplicity, and real-time operation. However, because silicon is neither a tissue-equivalent nor an air-equivalent material, an intrinsic disadvantage of silicon dosimeters is a significant over-response in the low-energy region, especially below 200 keV. Using an energy compensation filter to flatten the energy response is one method of overcoming this disadvantage. With such a compensation method, however, the estimated dose depends only on the number of detector pulses. A weight function method was therefore introduced to evaluate the gamma dose, which depends on the pulse amplitudes as well as the pulse count. (authors)
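
    The contrast between pulse counting and the weight-function method can be sketched in a few lines: each pulse contributes according to a weight that depends on its amplitude, instead of a fixed dose per count. The amplitude distribution and the weight function below are invented for illustration, not the authors' calibration.

```python
# Sketch of the weight-function idea: dose is a sum of amplitude-dependent
# weights over pulses rather than a count times a fixed factor. The
# amplitudes and the weight function are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
pulse_amplitudes = rng.exponential(scale=200.0, size=5000)  # keV-equivalent

def weight(amplitude_kev):
    # Hypothetical weights suppressing the silicon over-response below
    # ~200 keV and approaching a constant dose-per-pulse at high energy.
    return 1.0 / (1.0 + (200.0 / np.maximum(amplitude_kev, 1.0)) ** 2)

dose_counting = pulse_amplitudes.size * 1.0          # pure pulse counting
dose_weighted = weight(pulse_amplitudes).sum()       # weight-function method
print(f"counting: {dose_counting:.0f}, weighted: {dose_weighted:.0f} (a.u.)")
```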

  15. Benchmarking the DFT+U method for thermochemical calculations of uranium molecular compounds and solids.

    Science.gov (United States)

    Beridze, George; Kowalski, Piotr M

    2014-12-18

    The ability to perform feasible and reliable computations of the thermochemical properties of chemically complex actinide-bearing materials would be of great importance for nuclear engineering. Unfortunately, density functional theory (DFT), which in many instances is the only affordable ab initio method, often fails for actinides. Among various shortcomings, it leads to wrong estimates of the enthalpies of reactions between actinide-bearing compounds, putting the applicability of the DFT approach to the modeling of thermochemical properties of actinide-bearing materials into question. Here we test the performance of the DFT+U method, a computationally affordable extension of DFT that explicitly accounts for the correlations between f-electrons, for the prediction of the thermochemical properties of simple uranium-bearing molecular compounds and solids. We demonstrate that the DFT+U approach significantly improves the description of reaction enthalpies for uranium-bearing gas-phase molecular compounds and solids, and that the deviations from the experimental values are comparable to those obtained with much more computationally demanding methods. Good results are obtained with Hubbard U parameter values derived using the linear response method of Cococcioni and de Gironcoli. We found that the value of the Coulomb on-site repulsion, represented by the Hubbard U parameter, strongly depends on the oxidation state of the uranium atom. Last, but not least, we demonstrate that the thermochemistry data can be successfully used to estimate the value of the Hubbard U parameter needed for DFT+U calculations.

  16. Beyond the hype: deep neural networks outperform established methods using a ChEMBL bioactivity benchmark set.

    Science.gov (United States)

    Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P

    2017-08-14

    The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks are the top-performing classifiers, highlighting the added value of Deep Neural Networks over other more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, Multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around the mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi
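
    The random-versus-temporal validation contrast in this record can be sketched as follows. The synthetic data (with an artificial drift term) and the single Random Forest classifier are illustrative assumptions, scored with the Matthews Correlation Coefficient as in the study.

```python
# Random split vs temporal split (train on early records, test on later),
# scored with MCC. Synthetic stand-ins for ChEMBL bioactivity records; the
# drift term mimics why temporal validation is the harsher benchmark.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
year = np.sort(rng.integers(2000, 2016, n))
X = rng.normal(size=(n, 20))
X[:, 0] += 0.002 * (year - 2000) ** 2          # mild temporal drift
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Random split.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
mcc_random = matthews_corrcoef(yte, clf.fit(Xtr, ytr).predict(Xte))

# Temporal split: train on records up to 2011, test on later ones.
early = year <= 2011
mcc_temporal = matthews_corrcoef(
    y[~early], clf.fit(X[early], y[early]).predict(X[~early]))

print(f"MCC random: {mcc_random:.3f}, temporal: {mcc_temporal:.3f}")
```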

  17. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    Directory of Open Access Journals (Sweden)

    Wahlberg Malin

    2006-01-01

    Full Text Available The purpose of this paper is to demonstrate the use of the invariant embedding method in a few model transport problems for which it is also possible to obtain an analytical solution. The use of the method is demonstrated in three different areas. The first is the calculation of the energy spectrum of sputtered particles from a scattering medium without absorption, where the multiplication (particle cascade) is generated by recoil production. Both constant and energy dependent cross-sections with a power law dependence were treated. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and in a half-space are interrelated through embedding-like integral equations, by the solution of which the flux reflected from a half-space can be reconstructed from solutions in an infinite medium or vice versa. In all cases, the invariant embedding method proved to be robust, fast, and monotonically converging to the exact solutions.

  18. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    International Nuclear Information System (INIS)

    Malin, Wahlberg; Imre, Pazsit

    2005-01-01

    The purpose of this paper is to demonstrate the use of the invariant embedding method in a series of model transport problems, for which it is also possible to obtain an analytical solution. Due to the non-linear character of the embedding equations, their solution can only be obtained numerically. However, this can be done via a robust and effective iteration scheme. In return, the domain of applicability is far wider than the model problems investigated in this paper. The use of the invariant embedding method is demonstrated in three different areas. The first is the calculation of the energy spectrum of reflected (sputtered) particles from a multiplying medium, where the multiplication arises from recoil production. Both constant and energy dependent cross sections with a power law dependence were used in the calculations. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel and unexpected application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and a half-space are interrelated through embedding-like integral equations, by the solution of which the reflected flux from a half-space can be reconstructed from solutions in an infinite medium or vice versa. In all cases the invariant embedding method proved to be robust, fast and monotonically converging to the exact solutions. (authors)

  19. Comparison of organ dosimetry methods and effective dose calculation methods for paediatric CT.

    Science.gov (United States)

    Brady, Z; Cain, T M; Johnston, P N

    2012-06-01

    Computed tomography (CT) is the single biggest ionising radiation risk from anthropogenic exposure. Reducing unnecessary carcinogenic risks from this source requires the determination of organ and tissue absorbed doses to estimate detrimental stochastic effects. In addition, effective dose can be used to assess comparative risk between exposure situations and facilitate dose reduction through optimisation. Children are at the highest risk from radiation induced carcinogenesis and therefore dosimetry for paediatric CT recipients is essential in addressing the ionising radiation health risks of CT scanning. However, there is no well-defined method in the clinical environment for routinely and reliably performing paediatric CT organ dosimetry and there are numerous methods utilised for estimating paediatric CT effective dose. Therefore, in this study, eleven computational methods for organ dosimetry and/or effective dose calculation were investigated and compared with absorbed doses measured using thermoluminescent dosemeters placed in a physical anthropomorphic phantom representing a 10 year old child. Three common clinical paediatric CT protocols including brain, chest and abdomen/pelvis examinations were evaluated. Overall, computed absorbed doses to organs and tissues fully and directly irradiated demonstrated better agreement (within approximately 50 %) with the measured absorbed doses than absorbed doses to distributed organs or to those located on the periphery of the scan volume, which showed up to a 15-fold dose variation. The disparities predominantly arose from differences in the phantoms used. While the ability to estimate CT dose is essential for risk assessment and radiation protection, identifying a simple, practical dosimetry method remains challenging.
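
    Although this record compares eleven computational methods, the common final step, effective dose as a tissue-weighted sum of organ equivalent doses, is easy to illustrate. The weights below are the ICRP Publication 103 tissue weighting factors (with the remainder handled crudely as a single entry), and the organ doses are invented stand-ins for measured or computed paediatric CT doses.

```python
# Effective dose as the tissue-weighted sum E = sum_T w_T * H_T over organ
# equivalent doses. Weights: ICRP 103 tissue weighting factors; the organ
# doses are made-up chest-CT-like numbers for illustration only.
ICRP103_WT = {
    "red bone marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
    "breast": 0.12, "remainder": 0.12, "gonads": 0.08, "bladder": 0.04,
    "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
    "bone surface": 0.01, "brain": 0.01, "salivary glands": 0.01, "skin": 0.01,
}

organ_dose_mSv = {                # illustrative equivalent doses, mSv
    "lung": 5.0, "breast": 4.5, "stomach": 2.0, "liver": 2.5, "colon": 0.5,
    "red bone marrow": 1.5, "thyroid": 3.0, "oesophagus": 3.5,
    "remainder": 1.0,             # mean over the remainder tissues
}

# Organs outside the scan volume default to zero dose here.
effective_dose = sum(w * organ_dose_mSv.get(t, 0.0)
                     for t, w in ICRP103_WT.items())
print(f"effective dose = {effective_dose:.2f} mSv")
```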

  20. Calculation method for gamma dose rates from Gaussian puffs

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.; Deme, S.; Lang, E.

    1995-06-01

    The Lagrangian puff models are widely used for calculation of the dispersion of releases to the atmosphere. Basic output from such models is concentration of material in the air and on the ground. The most simple method for calculation of the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is however only applicable for puffs with large dispersion parameters, i.e. for receptors far away from the release point. The exact calculation of the cloud dose using the volume integral requires large computer time, usually exceeding what is available for real-time calculations. The volume integral for gamma doses could be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but usually the accuracy is poor because only a few of the relevant parameters are considered. A multi-parameter method for calculation of gamma doses is described here. This method uses precalculated values of the gamma dose rates as a function of Eγ, σy, the asymmetry factor σy/σz, the height of the puff centre H, and the distance from the puff centre Rxy. To accelerate the calculations, the release energy for each significant radionuclide in each energy group has been calculated and tabulated. Based on the precalculated values and a suitable interpolation procedure, the calculation of gamma doses needs only short computing time and is almost independent of the number of radionuclides considered. (au) 2 tabs., 15 ills., 12 refs

  1. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    Lee, A.G.; Wilkin, G.B.

    1996-03-01

    During the 'Workshop on R and D needs' at the 3rd Meeting of the International Group on Research Reactors (IGORR-III), the participants agreed that it would be useful to compile a survey of the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods various organizations use to verify and validate their codes and libraries. Five organizations, Atomic Energy of Canada Limited (AECL, Canada), China Institute of Atomic Energy (CIAE, People's Republic of China), Japan Atomic Energy Research Institute (JAERI, Japan), Oak Ridge National Laboratories (ORNL, USA), and Siemens (Germany) responded to the survey. The results of the survey are compiled in this report. (author) 36 refs., 3 tabs

  2. What is the best practice for benchmark regulation of electricity distribution? Comparison of DEA, SFA and StoNED methods

    International Nuclear Information System (INIS)

    Kuosmanen, Timo; Saastamoinen, Antti; Sipiläinen, Timo

    2013-01-01

    Electricity distribution is a natural local monopoly. In many countries, the regulators of this sector apply frontier methods such as data envelopment analysis (DEA) or stochastic frontier analysis (SFA) to estimate the efficient cost of operation. In Finland, a new StoNED method was adopted in 2012. This paper compares DEA, SFA and StoNED in the context of regulating electricity distribution. Using data from Finland, we compare the impacts of methodological choices on cost efficiency estimates and acceptable cost. While the efficiency estimates are highly correlated, the cost targets reveal major differences. In addition, we examine performance of the methods by Monte Carlo simulations. We calibrate the data generation process (DGP) to closely match the empirical data and the model specification of the regulator. We find that the StoNED estimator yields a root mean squared error (RMSE) of 4% with the sample size 100. Precision improves as the sample size increases. The DEA estimator yields an RMSE of approximately 10%, but performance deteriorates as the sample size increases. The SFA estimator has an RMSE of 144%. The poor performance of SFA is due to the wrong functional form and multicollinearity. - Highlights: • We compare DEA, SFA and StoNED methods in the context of regulation of electricity distribution. • Both empirical comparisons and Monte Carlo simulations are presented. • Choice of benchmarking method has a significant economic impact on the regulatory outcomes. • StoNED yields the most precise results in the Monte Carlo simulations. • Five lessons concerning heterogeneity, noise, frontier, simulations, and implementation

  3. Method to account for dose fractionation in analysis of IMRT plans: Modified equivalent uniform dose

    International Nuclear Information System (INIS)

    Park, Clinton S.; Kim, Yongbok; Lee, Nancy; Bucci, Kara M.; Quivey, Jeanne M.; Verhey, Lynn J.; Xia Ping

    2005-01-01

    Purpose: To propose a modified equivalent uniform dose (mEUD) that accounts for dose fractionation using the biologically effective dose, without losing the advantages of the generalized equivalent uniform dose (gEUD), and to report the calculated mEUD and gEUD in clinically used intensity-modulated radiotherapy (IMRT) plans. Methods and Materials: The proposed mEUD replaces the dose to each voxel in the gEUD formulation by a biologically effective dose with a normalization factor. We propose to use the term mEUD_Do/no, which includes the total dose (Do) and the number of fractions (no), and the term mEUDo, which includes the same total dose but a standard fraction size of 2 Gy. A total of 41 IMRT plans for patients with nasopharyngeal cancer treated at our institution between October 1997 and March 2002 were selected for the study. The gEUD and mEUD were calculated for the planning gross tumor volume (pGTV), planning clinical tumor volume (pCTV), parotid glands, and spinal cord. The prescription dose for these patients was 70 Gy to >95% of the pGTV and 59.4 Gy to >95% of the pCTV in 33 fractions. Results: The calculated average gEUD was 72.2 ± 2.4 Gy for the pGTV, 54.2 ± 7.1 Gy for the pCTV, 26.7 ± 4.2 Gy for the parotid glands, and 34.1 ± 6.8 Gy for the spinal cord. The calculated average mEUD_Do/no using 33 fractions was 71.7 ± 3.5 Gy for mEUD_70/33 of the pGTV, 49.9 ± 7.9 Gy for mEUD_59.5/33 of the pCTV, 27.6 ± 4.8 Gy for mEUD_26/33 of the parotid glands, and 32.7 ± 7.8 Gy for mEUD_45/33 of the spinal cord. Conclusion: The proposed mEUD, combining the gEUD with the biologically effective dose, preserves all advantages of the gEUD while reflecting the fractionation effects and linear-quadratic survival characteristics
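
    The gEUD has the standard power-law form, and the record's mEUD replaces each voxel dose with a biologically effective dose plus a normalization factor. The sketch below implements the standard gEUD and one plausible BED-based normalization (dividing by the relative effectiveness of a reference 2 Gy/fraction scheme); that normalization is an assumption for illustration, not necessarily the paper's exact factor.

```python
# gEUD and a BED-based modified EUD. The gEUD formula is the standard
# power-law form; the mEUD normalisation here is an assumed reading of
# "biologically effective dose with a normalization factor".
import numpy as np

def geud(doses, a):
    """Generalized EUD: (mean(d_i^a))^(1/a)."""
    return np.mean(doses ** a) ** (1.0 / a)

def meud(doses, a, n_frac, ab_ratio, ref_frac=2.0):
    """gEUD of per-voxel BEDs, renormalised to the reference fraction size."""
    d_per_frac = doses / n_frac
    bed = doses * (1.0 + d_per_frac / ab_ratio)       # linear-quadratic BED
    return geud(bed, a) / (1.0 + ref_frac / ab_ratio)

rng = np.random.default_rng(0)
voxel_doses = rng.normal(70.0, 2.5, 10_000)           # Gy, illustrative pGTV

print(f"gEUD (a=-10, tumour-like): {geud(voxel_doses, -10):.1f} Gy")
print(f"mEUD (33 fractions, a/b=10): {meud(voxel_doses, -10, 33, 10.0):.1f} Gy")
```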

  4. Computerized simulation methods for dose reduction, in radiodiagnosis

    International Nuclear Information System (INIS)

    Brochi, M.A.C.

    1990-01-01

    This work presents computational methods that allow the simulation of any situation encountered in diagnostic radiology. Parameters of radiographic techniques that yield a previously chosen standard radiographic image are studied, so that the radiation doses absorbed by the patient can be compared. Initially the method was tested on a simple system composed of 5.0 cm of water and 1.0 mm of aluminium and, after its validity was verified experimentally, it was applied to breast and arm-fracture radiographs. It was observed that the choice of filter material is not an important factor, because aluminium, iron, copper, gadolinium, and other filters presented analogous behaviours. A method of comparing materials based on spectral matching is shown. Both the results given by this simulation method and the experimental measurements indicate an equivalence of brass and copper, both being more efficient than aluminium in terms of exposure time, but not of dose. (author)

  5. Variable selection in near-infrared spectroscopy: benchmarking of feature selection methods on biodiesel data.

    Science.gov (United States)

    Balabin, Roman M; Smirnov, Sergey V

    2011-04-29

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm-1) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic

  6. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    International Nuclear Information System (INIS)

    Balabin, Roman M.; Smirnov, Sergey V.

    2011-01-01

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm(-1)) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic

  7. Comparing Methods to Denote Treatment Outcome in Clinical Research and Benchmarking Mental Health Care.

    Science.gov (United States)

    de Beurs, Edwin; Barendregt, Marko; de Heer, Arco; van Duijn, Erik; Goeree, Bob; Kloos, Margot; Kooiman, Kees; Lionarons, Helen; Merks, Andre

    2016-07-01

    Approaches based on continuous indicators (the size of the pre-to-post-test change; effect size or ΔT) and on categorical indicators (Percentage Improvement and the Jacobson-Truax approach to Clinical Significance) are evaluated to determine which has the best methodological and statistical characteristics, and optimal performance, in comparing the outcomes of treatment providers. Performance is compared in two datasets from providers using the Brief Symptom Inventory or the Outcome Questionnaire. The concordance of the methods and their suitability for ranking providers are assessed. Outcome indicators tend to converge and lead to a similar ranking of institutes within each dataset. Statistically and conceptually, continuous outcome indicators are superior to categorical outcomes, as change scores have more statistical power and allow for a ranking of providers at first glance. However, the Jacobson-Truax approach can complement the change score approach, as it presents outcome information in a clinically meaningful manner. When comparing various indicators of treatment outcome, statistical considerations designate continuous outcomes, such as the size of the pre-to-post change (effect size or ΔT), as the optimal choice. Expressing outcome in proportions of recovered, changed, unchanged or deteriorated patients has supplementary value, as it is more easily interpreted and appreciated by clinicians, managerial staff and, last but not least, by patients. If categorical outcomes are used with small datasets, true differences in institutional performance may be obscured due to diminished power to detect differences. With sufficient data, outcomes according to continuous and categorical indicators converge and lead to similar rankings of institutes' performance. Copyright © 2015 John Wiley & Sons, Ltd.
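
    Both families of indicators compared in this record are straightforward to compute. The sketch below assumes a symptom inventory where lower scores are better and uses the conventional 1.96 threshold on the reliable change index; the clinical cutoff and test-retest reliability would come from the instrument's norms.

        import numpy as np

        def effect_size(pre, post):
            """Pre-to-post change in pre-treatment SD units."""
            pre, post = np.asarray(pre, float), np.asarray(post, float)
            return (pre.mean() - post.mean()) / pre.std(ddof=1)

        def jacobson_truax(pre, post, cutoff, reliability):
            """Per-patient classification via the reliable change index (RCI)
            combined with a clinical cutoff (Jacobson & Truax, 1991)."""
            pre, post = np.asarray(pre, float), np.asarray(post, float)
            se_diff = pre.std(ddof=1) * np.sqrt(2.0 * (1.0 - reliability))
            rci = (post - pre) / se_diff      # < -1.96: reliable improvement
            improved = rci < -1.96
            recovered = improved & (post < cutoff)
            deteriorated = rci > 1.96
            return recovered, improved, deteriorated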

  8. Comparison of different dose calculation methods for irregular photon fields

    International Nuclear Information System (INIS)

    Zakaria, G.A.; Schuette, W.

    2000-01-01

    In this work, four calculation methods (the Wrede method, the Clarkson method of sector integration, the beam-zone method of Quast, and the pencil-beam method of Ahnesjoe) are introduced to calculate point doses in different irregular photon fields. The calculations cover a typical mantle field, an inverted Y-field, and different blocked fields for 4 and 10 MV photon energies. The results are compared to measurements in a water phantom. The Clarkson and pencil-beam methods proved to be of equal standard with respect to accuracy. Both methods are distinguished by minimal deviations and are applied in our clinical routine work. The Wrede and beam-zone methods deliver useful results on the central axis but show larger deviations when calculating points away from the central axis. (orig.) [de]

  9. Robust EM Continual Reassessment Method in Oncology Dose Finding

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2012-01-01

    The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the MTD selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities. PMID:22375092
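
    For readers unfamiliar with the CRM that the EM variant extends, a minimal one-parameter power-model CRM is sketched below: the prespecified skeleton gives the prior toxicity probabilities, a grid posterior over the model parameter is updated after each cohort, and the next dose is the one whose estimated toxicity probability is closest to the target. The prior width and grid bounds are illustrative, not the authors' settings.

        import numpy as np

        def crm_next_dose(skeleton, levels, tox, target=0.25):
            """One-parameter power model: p_i = skeleton_i ** exp(a).
            levels/tox record the dose level and toxicity (0/1) per patient."""
            a = np.linspace(-3.0, 3.0, 601)
            post = np.exp(-a**2 / (2.0 * 1.34**2))     # vague normal prior
            for lvl, y in zip(levels, tox):
                p = skeleton[lvl] ** np.exp(a)
                post *= p**y * (1.0 - p)**(1 - y)      # likelihood update
            post /= post.sum()
            p_hat = np.array([(s ** np.exp(a) * post).sum() for s in skeleton])
            return int(np.argmin(np.abs(p_hat - target)))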

  10. The D1 method: career dose estimation from a combination of historical monitoring data and a single year's dose data

    International Nuclear Information System (INIS)

    Sont, W.N.

    1995-01-01

    A method is introduced to estimate career doses from a combination of historical monitoring data and a single year's dose data. This method, called D1, eliminates the bias arising from incorporating historical dose data from times when occupational doses were generally much higher than they are today. Doses calculated by this method are still conditional on the preservation of the status quo in the effectiveness of radiation protection. The method takes into account the variation of the annual dose, and of the probability of being monitored, with the time elapsed since the start of a career. It also allows for the calculation of a standard error of the projected career dose. Results from recent Canadian dose data are presented. (author)

  11. Comparison between calculation methods of dose rates in gynecologic brachytherapy

    International Nuclear Information System (INIS)

    Vianello, E.A.; Biaggio, M.F.; D R, M.F.; Almeida, C.E. de

    1998-01-01

    In radiation treatments of gynecologic tumors it is necessary to evaluate the quality of the results obtained by different methods of calculating the dose rates at the points of clinical interest (point A, rectum, bladder). The present work compares the results obtained by two methods: the three-dimensional Manual Calibration Method (MCM) (Vianello et al., 1998), which uses orthogonal radiographs taken for each patient under treatment, and the Theraplan/TP-11 planning system (Theratronics International Limited, 1990), the latter verified experimentally (Vianello et al., 1996). The results show that the MCM can be used in physical-clinical practice, with percentage differences comparable to those of the computerized programs. (Author)

  12. Comparing different methods for estimating radiation dose to the conceptus

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Rendon, X.; Dedulle, A. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); Walgraeve, M.S.; Woussen, S.; Zhang, G. [University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Bosmans, H. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Zanca, F. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); GE Healthcare, Buc (France)

    2017-02-15

    To compare different methods available in the literature for estimating radiation dose to the conceptus (D{sub conceptus}) against a patient-specific Monte Carlo (MC) simulation and a commercial software package (CSP). Eight voxel models from abdominopelvic CT exams of pregnant patients were generated. D{sub conceptus} was calculated with an MC framework including patient-specific longitudinal tube current modulation (TCM). For the same patients, dose to the uterus, D{sub uterus}, was calculated as an alternative for D{sub conceptus}, with a CSP that uses a standard-size, non-pregnant phantom and a generic TCM curve. The percentage error between D{sub uterus} and D{sub conceptus} was studied. Dose to the conceptus and percent error with respect to D{sub conceptus} was also estimated for three methods in the literature. The percentage error ranged from -15.9% to 40.0% when comparing MC to CSP. When comparing the TCM profiles with the generic TCM profile from the CSP, differences were observed due to patient habitus and conceptus position. For the other methods, the percentage error ranged from -30.1% to 13.5% but applicability was limited. Estimating an accurate D{sub conceptus} requires a patient-specific approach that the CSP investigated cannot provide. Available methods in the literature can provide a better estimation if applicable to patient-specific cases. (orig.)

  13. Fully automated treatment planning for head and neck radiotherapy using a voxel-based dose prediction and dose mimicking method

    Science.gov (United States)

    McIntosh, Chris; Welch, Mattea; McNiven, Andrea; Jaffray, David A.; Purdie, Thomas G.

    2017-08-01

    Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present a probabilistic, atlas-based approach which predicts the dose for novel patients using a set of automatically selected most-similar patients (atlases). The output is a spatial dose objective, which specifies the desired dose per voxel, and therefore replaces the need to specify and tune dose-volume objectives. Voxel-based dose mimicking optimization then converts the predicted dose distribution to a complete treatment plan with dose calculation using a collapsed cone convolution dose engine. In this study, we investigated automated planning for right-sided oropharynx head and neck patients treated with IMRT and VMAT. We compare four versions of our dose prediction pipeline using a database of 54 training and 12 independent testing patients by evaluating 14 clinical dose evaluation criteria. Our preliminary results are promising and demonstrate that automated methods can generate dose distributions comparable to clinical plans. Overall, automated plans achieved an average of 0.6% higher dose for target coverage evaluation criteria, and 2.4% lower dose at the organ-at-risk criteria levels evaluated, compared with clinical plans. There was no statistically significant difference detected in high-dose conformity between automated and clinical plans as measured by the conformation number. Automated plans achieved nine more unique criteria than clinical plans across the 12 patients tested; automated plans scored a significantly higher dose at the evaluation limit for two high-risk target coverage criteria and a significantly lower dose in one critical organ maximum dose. The novel dose prediction method with dose mimicking can generate complete treatment plans in 12-13 min without user interaction. It is a promising approach for fully automated treatment planning.

  14. Basis for dose rate to curie assay method

    Energy Technology Data Exchange (ETDEWEB)

    Gedeon, S.R.

    1996-10-31

    Disposition of low-level solid waste packages at the Hanford Site requires quantifying the radioactive contents of each container. This study generated conversion factors to apply to the results of contact surveys that are performed with standard dose rate survey instruments by field health physics technicians. This study determined the accuracy of this method, and identified the major sources of uncertainty. It is concluded that the dominant error is associated with the possibility that the radioactive source is not homogeneously distributed.

  15. Toward an organ based dose prescription method for the improved accuracy of murine dose in orthovoltage x-ray irradiators

    International Nuclear Information System (INIS)

    Belley, Matthew D.; Wang, Chu; Nguyen, Giao; Gunasingha, Rathnayaka; Chao, Nelson J.; Chen, Benny J.; Dewhirst, Mark W.; Yoshizumi, Terry T.

    2014-01-01

    Purpose: Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Methods: Organ doses were simulated in the Geant4 application for tomographic emission toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Results: Average doses in soft-tissue organs were found to vary by as much as 23%–32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. Conclusions: This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution; however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs.

  16. Benchmarking a computational design method for the incorporation of metal ion-binding sites at symmetric protein interfaces.

    Science.gov (United States)

    Hansen, William A; Khare, Sagar D

    2017-08-01

    The design of novel metal-ion binding sites along symmetric axes in protein oligomers could provide new avenues for metalloenzyme design, construction of protein-based nanomaterials and novel ion transport systems. Here, we describe a computational design method, symmetric protein recursive ion-cofactor sampling (SyPRIS), for locating constellations of backbone positions within oligomeric protein structures that are capable of supporting desired symmetrically coordinated metal ion(s) chelated by sidechains (chelant model). Using SyPRIS on a curated benchmark set of protein structures with symmetric metal binding sites, we found high recovery of native metal coordinating rotamers: in 65 of the 67 (97.0%) cases, native rotamers featured in the best scoring model while in the remaining cases native rotamers were found within the top three scoring models. In a second test, chelant models were crossmatched against protein structures with identical cyclic symmetry. In addition to recovering all native placements, 10.4% (8939/86013) of the non-native placements, had acceptable geometric compatibility scores. Discrimination between native and non-native metal site placements was further enhanced upon constrained energy minimization using the Rosetta energy function. Upon sequence design of the surrounding first-shell residues, we found further stabilization of native placements and a small but significant (1.7%) number of non-native placement-based sites with favorable Rosetta energies, indicating their designability in existing protein interfaces. The generality of the SyPRIS approach allows design of novel symmetric metal sites including with non-natural amino acid sidechains, and should enable the predictive incorporation of a variety of metal-containing cofactors at symmetric protein interfaces. © 2017 The Protein Society.

  17. Method of simulation of low dose rate for total dose effect in 0.18 {mu}m CMOS technology

    Energy Technology Data Exchange (ETDEWEB)

    He Baoping; Yao Zhibin; Guo Hongxia; Luo Yinhong; Zhang Fengqi; Wang Yuanming; Zhang Keying, E-mail: baopinghe@126.co [Northwest Institute of Nuclear Technology, Xi' an 710613 (China)

    2009-07-15

    Three methods for simulating low dose rate irradiation are presented and experimentally verified using 0.18 {mu}m CMOS transistors. The results show that the best approach is to use a series of high dose rate irradiations, with 100 {sup 0}C annealing steps in between irradiation steps, to simulate a continuous low dose rate irradiation. This approach can reduce the low dose rate testing time by as much as a factor of 45 with respect to an actual 0.5 rad(Si)/s dose rate irradiation. The procedure also provides detailed information on the behavior of the test devices in a low dose rate environment.

  18. Radiological environmental dose assessment methods and compliance dose results for 2015 operations at the Savannah River Site

    Energy Technology Data Exchange (ETDEWEB)

    Jannik, G. T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Dixon, K. L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-09-01

    This report presents the environmental dose assessment methods and the estimated potential doses to the offsite public from 2015 Savannah River Site (SRS) atmospheric and liquid radioactive releases. Also documented are potential doses from special-case exposure scenarios - such as the consumption of deer meat, fish, and goat milk.

  19. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance (how well the firm performs in the actual market environment), given the basic characteristics of firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, or work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve their company's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, consequently determine which performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark performance.

  20. Determination of dialysis dose: a clinical comparison of methods.

    Science.gov (United States)

    Ahrenholz, Peter; Taborsky, Petr; Bohling, Margot; Rawer, Peter; Ibrahim, Noureddin; Gajdos, Martin; Machek, Petr; Sagova, Michaela; Gruber, Hans; Moucka, Pavel; Rychlik, Ivan; Leimenstoll, Gerd; Vyskocil, Pavel; Toenne, Gunter; Possnickerova, Jindriska; Woggan, Joerg; Riegel, Werner; Schneider, Helmut; Wojke, Ralf

    2011-01-01

    Guidelines recommend regular measurements of the delivered hemodialysis dose Kt/V. Nowadays, automatic non-invasive online measurements are available as alternatives to the conventional method with blood sampling, laboratory analysis, and calculation. In a prospective clinical trial, three different methods determining dialysis dose were simultaneously applied: Kt/V(Dau) (conventional method with Daugirdas' formula), Kt/V(OCM) [online clearance measurement (OCM) with urea distribution volume V based on anthropometric estimate], and Kt/V(BCM) [OCM measurement with V measured by bioimpedance analysis (Body Composition Monitor)]. 1,076 hemodialysis patients were analyzed. The dialysis dose was measured as Kt/V(Dau) = 1.74 ± 0.45, Kt/V(OCM) = 1.47 ± 0.34, and Kt/V(BCM) = 1.65 ± 0.42. The difference between Kt/V(OCM) and Kt/V(BCM) was due to the difference between the anthropometrically estimated V(Watson) and the measured V(BCM). Compared to Kt/V(Dau), Kt/V(OCM) was 15% lower and Kt/V(BCM) 5% lower. Kt/V(Dau) was incidentally prone to falsely high values due to operative errors, whereas in these cases the OCM-based measurements Kt/V(OCM) and Kt/V(BCM) delivered realistic values. The automated OCM Kt/V(OCM) with anthropometric estimation of urea distribution volume was the easiest method to use, but Kt/V(BCM) with measured urea distribution volume was closer to the conventional method. Copyright © 2011 S. Karger AG, Basel.
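
    The conventional Kt/V(Dau) above is Daugirdas' second-generation single-pool formula, which needs only the pre- and post-dialysis urea values, the session length, the ultrafiltration volume and the post-dialysis weight:

        import math

        def ktv_daugirdas(pre_urea, post_urea, hours, uf_litres, weight_kg):
            """Kt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF/W, with
            R = post/pre urea, t in hours, UF in litres, W in kg."""
            r = post_urea / pre_urea
            return -math.log(r - 0.008 * hours) \
                   + (4.0 - 3.5 * r) * uf_litres / weight_kg

        # e.g. ktv_daugirdas(160, 50, 4.0, 2.5, 70) gives about 1.37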

  1. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
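
    The first-differencing technique recommended for the coal production series belongs to the Denton family of benchmarking methods: perturb the high-frequency series as little as possible, in a first-difference sense, while forcing its period sums onto the benchmarks. A sketch of the additive first-difference variant follows, assuming a monthly series spanning whole benchmark years; EIA's staged procedure adds further steps on top of this core.

        import numpy as np

        def denton_additive(series, benchmarks, agg=12):
            """Minimize the sum of squared first differences of (x - s)
            subject to the aggregation constraints C @ x = benchmarks,
            solved as a single KKT system."""
            s = np.asarray(series, float)
            b = np.asarray(benchmarks, float)
            n, m = s.size, b.size
            assert n == m * agg, "series must span whole benchmark periods"
            D = (np.eye(n) - np.eye(n, k=-1))[1:]      # first differences
            C = np.kron(np.eye(m), np.ones(agg))       # period sums
            K = np.block([[2.0 * D.T @ D, C.T],
                          [C, np.zeros((m, m))]])
            rhs = np.concatenate([2.0 * D.T @ D @ s, b])
            return np.linalg.solve(K, rhs)[:n]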

  2. Intercomparison of the finite difference and nodal discrete ordinates and surface flux transport methods for a LWR pool-reactor benchmark problem in X-Y geometry

    International Nuclear Information System (INIS)

    O'Dell, R.D.; Stepanek, J.; Wagner, M.R.

    1983-01-01

    The aim of the present work is to compare and discuss three of the most advanced two-dimensional transport methods: the finite difference and nodal discrete ordinates methods and the surface flux method, incorporated into the transport codes TWODANT, TWOTRAN-NODAL, MULTIMEDIUM and SURCU. For the intercomparison, the eigenvalue and the neutron flux distribution are calculated using these codes for the LWR pool reactor benchmark problem. Additionally, the results are compared with some results obtained by the French collision probability transport codes MARSYAS and TRIDENT. Because the transport solution of this benchmark problem is close to its diffusion solution, some results obtained by the finite element diffusion code FINELM and the finite difference diffusion code DIFF-2D are included.

  3. [Evaluation of methods to calculate dialysis dose in daily hemodialysis].

    Science.gov (United States)

    Maduell, F; Gutiérrez, E; Navarro, V; Torregrosa, E; Martínez, A; Rius, A

    2003-01-01

    Daily dialysis has shown excellent clinical results because a higher frequency of dialysis is more physiological. Different methods have been described to calculate dialysis dose taking the change in frequency into consideration. The aim of this study was to calculate all dialysis dose possibilities and evaluate the best and most practical options. Eight patients, 6 males and 2 females, on standard 4 to 5 hour thrice-weekly on-line hemodiafiltration (S-OL-HDF) were switched to daily on-line hemodiafiltration (D-OL-HDF), 2 to 2.5 hours six times per week. Dialysis parameters were identical during both periods; only the frequency and dialysis time of each session were changed. Time average concentration (TAC), time average deviation (TAD), normalized protein catabolic rate (nPCR), Kt/V, equilibrated Kt/V (eKt/V), equivalent renal urea clearance (EKR), standard Kt/V (stdKt/V), urea reduction ratio (URR), hemodialysis product and time off dialysis were measured. Daily on-line hemodiafiltration was well accepted and tolerated. Patients maintained the same TAC, although TAD decreased from 9.7 ± 2 mg/dl at baseline to 6.2 ± 2 mg/dl after six months, and time off dialysis was reduced to half. Dialysis frequency is an important urea kinetic parameter that must be taken into consideration. It is necessary to use EKR, stdKt/V or weekly URR to calculate dialysis dose for an adequate comparison between dialysis schedules of different frequency.

  4. Dosing method of physical activity in aerobics classes for students

    Directory of Open Access Journals (Sweden)

    Yu.I. Beliak

    2014-10-01

    Full Text Available Purpose: to substantiate a method of dosing physical load in aerobics classes for students, based on evaluating the metabolic cost of the exercises used. Material: the experiment involved assessing the pulse response of students to complexes of classical and step aerobics (n = 47, age 20-23 years). The complexes varied several intensity-regulating factors: combinations of basic steps, involvement of arm movements, holding 1-kg dumbbells, an increased tempo of the musical accompaniment, and varying step-platform heights. Results: on the basis of the relationship between heart rate and oxygen consumption, the energy cost of each intensity-regulating factor was determined. This indicator was then used to justify the intensity, duration and frequency of aerobics classes corresponding to students' level of physical condition and motor activity deficit. Conclusions: the computational component of this dosing method makes it convenient for use in automated computer programs, and it can easily be modified to dose the load in other types of recreational fitness.
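
    The metabolic-cost calculation underlying such dosing can be sketched as follows, assuming an individually calibrated linear heart-rate-to-VO2 relation and the usual ~5 kcal per litre of oxygen consumed; the coefficients shown are illustrative, not values from the study.

        import numpy as np

        def energy_cost_kcal(hr, a, b, minutes):
            """Energy cost of an exercise bout from recorded heart rates,
            using VO2 (L/min) = a + b * HR calibrated in a graded test."""
            vo2 = a + b * np.asarray(hr, float)
            return float(vo2.mean() * 5.0 * minutes)   # ~5 kcal per litre O2

        # e.g. a=-2.0, b=0.025: HR 150 gives VO2 1.75 L/min, ~8.8 kcal/min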

  5. A new method for dosing uranium in biological media

    International Nuclear Information System (INIS)

    Henry, Ph.; Kobisch, Ch.

    1964-01-01

    This report describes a new method for dosing uranium in biological media based on the measurement of alpha activity. After treatment of the sample with a mineral acid, the uranium is reduced to the tetravalent state by trivalent titanium and is precipitated as phosphate in acid solution. The uranium is then separated from the titanium by precipitation as UF4 with lanthanum as carrier. A slight modification, unnecessary in the case of routine analyses, makes it possible to eliminate other possible alpha emitters (thorium and transuranic elements). (authors) [fr]

  6. Dose rate reduction method for NMCA applied BWR plants

    International Nuclear Information System (INIS)

    Nagase, Makoto; Aizawa, Motohiro; Ito, Tsuyoshi; Hosokawa, Hideyuki; Varela, Juan; Caine, Thomas

    2012-09-01

    BRAC (BWR Radiation Assessment and Control) dose rate is used as an indicator of the incorporation of activated corrosion by-products into BWR recirculation piping, which is known to be a significant contributor to the dose received by workers during refueling outages. In order to reduce the radiation exposure of workers during an outage, it is desirable to keep BRAC dose rates as low as possible. After HWC was adopted to reduce IGSCC, a BRAC dose rate increase was observed in many plants. As a countermeasure to these rapid dose rate increases under HWC conditions, Zn injection was widely adopted in the United States and Europe, resulting in a reduction of BRAC dose rates. However, BRAC dose rates in several plants remain high, prompting the industry to continue to investigate methods to achieve further reductions. In recent years a large portion of the BWR fleet has adopted NMCA (NobleChem(TM)) to enhance the hydrogen injection effect to suppress SCC. After NMCA, especially OLNC (On-Line NobleChem(TM)), BRAC dose rates were observed to decrease. In some OLNC-applied BWR plants this reduction was observed year after year, reaching a new, reduced equilibrium level. These dose rate reduction trends suggest that further dose reduction might be obtained by the combination of Pt and Zn injection. Laboratory experiments and in-plant tests were therefore carried out to evaluate the effect of Pt and Zn on Co-60 deposition behaviour. First, laboratory experiments were conducted to study the effect of noble metal deposition on Co deposition on stainless steel surfaces. Polished type 316 stainless steel coupons were prepared, some of which were OLNC-treated in the test loop before the Co deposition test. Water chemistry conditions simulating HWC were as follows: dissolved oxygen, hydrogen and hydrogen peroxide were below 5 ppb, 100 ppb and 0 ppb (no addition), respectively. Zn was injected to target a concentration of 5 ppb. The test was conducted for up to 1500 hours at 553 K.

  7. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality (4 to 5 places of accuracy) results.

  8. Benchmark Energetic Data in a Model System for Grubbs II Metathesis Catalysis and Their Use for the Development, Assessment, and Validation of Electronic Structure Methods

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Yan; Truhlar, Donald G.

    2009-01-31

    We present benchmark relative energetics in the catalytic cycle of a model system for Grubbs second-generation olefin metathesis catalysts. The benchmark data were determined by a composite approach based on CCSD(T) calculations, and they were used as a training set to develop a new spin-component-scaled MP2 method optimized for catalysis, which is called SCSC-MP2. The SCSC-MP2 method has improved performance for modeling Grubbs II olefin metathesis catalysts as compared to canonical MP2 or SCS-MP2. We also employed the benchmark data to test 17 WFT methods and 39 density functionals. Among the tested density functionals, M06 is the best performing functional. M06/TZQS gives an MUE of only 1.06 kcal/mol, and it is a much more affordable method than the SCSC-MP2 method or any other correlated WFT methods. The best performing meta-GGA is M06-L, and M06-L/DZQ gives an MUE of 1.77 kcal/mol. PBEh is the best performing hybrid GGA, with an MUE of 3.01 kcal/mol; however, it does not perform well for the larger, real Grubbs II catalyst. B3LYP and many other functionals containing the LYP correlation functional perform poorly, and B3LYP underestimates the stability of stationary points for the cis-pathway of the model system by a large margin. From the assessments, we recommend the M06, M06-L, and MPW1B95 functionals for modeling Grubbs II olefin metathesis catalysts. The local M06-L method is especially efficient for calculations on large systems.

  9. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    International Nuclear Information System (INIS)

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H.

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining the optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of the dose response is also assumed. The optimum spacing and number of points necessary for minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal spacing is shown to be a concentration of points near the zero-dose value, with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.
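
    The design question in this record can be checked numerically: for a fixed number of spectra, which placement of applied doses minimizes the spread of the estimated accrued dose, i.e. the magnitude of the fitted line's x-intercept? A Monte Carlo sketch with illustrative regression parameters:

        import numpy as np

        rng = np.random.default_rng(0)

        def dose_estimate_sd(doses, n_per_dose, b0=1.0, b1=0.5, sigma=0.1,
                             n_mc=5000):
            """Spread of the additive-dose estimate b0/b1 for a given design."""
            D = np.repeat(doses, n_per_dose)
            est = np.empty(n_mc)
            for k in range(n_mc):
                y = b0 + b1 * D + rng.normal(0.0, sigma, D.size)
                slope, intercept = np.polyfit(D, y, 1)
                est[k] = intercept / slope      # estimated accrued dose
            return est.std()

        # Clustering points near zero dose plus one high dose tends to beat
        # even spacing: compare dose_estimate_sd([0, 0, 0, 10], [3, 3, 3, 3])
        # with dose_estimate_sd([0, 3.3, 6.7, 10], [3, 3, 3, 3]).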

  10. A track length estimator method for dose calculations in low-energy X-ray irradiations. Implementation, properties and performance

    Energy Technology Data Exchange (ETDEWEB)

    Baldacci, F.; Delaire, F.; Letang, J.M.; Sarrut, D.; Smekens, F.; Freud, N. [Lyon-1 Univ. - CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Centre Leon Berard (France); Mittone, A.; Coan, P. [LMU Munich (Germany). Dept. of Physics; LMU Munich (Germany). Faculty of Medicine; Bravin, A.; Ferrero, C. [European Synchrotron Radiation Facility, Grenoble (France); Gasilov, S. [LMU Munich (Germany). Dept. of Physics

    2015-05-01

    The track length estimator (TLE) method, an 'on-the-fly' fluence tally in Monte Carlo (MC) simulations, recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 10^3, with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams.
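
    The core of the TLE is a kerma-approximation tally: every voxel a photon track crosses scores dose in proportion to the track length, the photon energy and the local mass energy-absorption coefficient, instead of scoring only at discrete interaction sites. A minimal sketch, with the ray traversal and the mu_en/rho lookup assumed to be supplied by the caller:

        def tle_deposit(dose, path_voxels, lengths_cm, energy_mev,
                        mu_en_rho, voxel_volume_cm3, weight=1.0):
            """Score one photon track. mu_en_rho[v] is the mass
            energy-absorption coefficient (cm^2/g) of voxel v's material
            at this photon energy; dose accumulates in MeV/g."""
            for v, l in zip(path_voxels, lengths_cm):
                # track-length fluence estimate: phi = l / V, and
                # dose = weight * phi * E * (mu_en / rho)
                dose[v] += (weight * energy_mev * mu_en_rho[v]
                            * l / voxel_volume_cm3)
            return dose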

  11. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with ''typical'' and ''best-practice'' benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the

  12. Absorbed dose determination in photon fields using the tandem method

    International Nuclear Information System (INIS)

    Marques Pachas, J.F.

    1999-01-01

    The purpose of this work is to develop an alternative method to determine the absorbed dose and the effective energy of photons with unknown spectral distributions. It includes a 'tandem' system that consists of two thermoluminescent dosemeters with different energy dependence. LiF:Mg,Ti and CaF2:Dy thermoluminescent dosemeters and a Harshaw 3500 reading system are employed. The dosemeters are characterized with 90Sr-90Y, calibrated at the energy of 60Co, and irradiated with seven different qualities of X-ray beams, suggested by ANSI No. 13 and ISO 4037. The responses of each type of dosemeter are fitted to a function that depends on the effective energy of the photons. The fit is carried out by means of the Rosenbrock minimization algorithm. The mathematical model used for this function includes five parameters and comprises a Gaussian and a straight line. Results show that the analytical functions reproduce the experimental response data with a margin of error of less than 5%. The ratio of the responses of CaF2:Dy and LiF:Mg,Ti as a function of the radiation energy allows us to establish the effective energy of the photons and the absorbed dose, with margins of error of less than 10% and 20%, respectively.
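
    The final step of the tandem method, inverting the CaF2:Dy/LiF:Mg,Ti response ratio to an effective energy, can be sketched as an interpolation on a calibration curve; the calibration arrays (response per unit air kerma at each of the seven beam qualities) are assumed inputs.

        import numpy as np

        def tandem_effective_energy(r_caf2, r_lif, energies, cal_caf2, cal_lif):
            """Invert the measured CaF2/LiF response ratio by interpolating
            the calibration ratio curve (assumed monotonic in energy)."""
            ratio = np.asarray(cal_caf2, float) / np.asarray(cal_lif, float)
            order = np.argsort(ratio)            # np.interp needs ascending x
            return float(np.interp(r_caf2 / r_lif, ratio[order],
                                   np.asarray(energies, float)[order]))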

  13. Experimental method research on neutron equal dose-equivalent detection

    International Nuclear Information System (INIS)

    Ji Changsong

    1995-10-01

    The design principles of a neutron dose-equivalent meter for neutron biological equi-effect detection are studied. Two traditional principles, the 'absorption net principle' and the 'multi-detector principle', are discussed, and on this basis a new theoretical principle for neutron biological equi-effect detection, the 'absorption stick principle', is put forward, with the aim of both increasing the neutron sensitivity of this type of meter and overcoming the shortcomings of the two traditional methods. In accordance with this new principle a brand-new model of neutron dose-equivalent meter, BH3105, has been developed. Its neutron sensitivity reaches 10 cps/(μSv·h^-1), 18∼40 times higher than that of comparable meters available today at home and abroad (0.23∼0.56 cps/(μSv·h^-1)), and its specifications reach or surpass the levels of meters of the same kind. The new theoretical principle of neutron biological equi-effect detection, the 'absorption stick principle', is therefore proved by experiment to be scientific, advanced and useful. (3 refs., 3 figs., 2 tabs.)

  14. Repeated dose titration versus age-based method in electroconvulsive therapy: a pilot study

    NARCIS (Netherlands)

    Aten, J.J.; Oudega, M.L.; van Exel, E.; Stek, M.L.; van Waarde, J.A.

    2015-01-01

    In electroconvulsive therapy (ECT), a dose titration method (DTM) was suggested to be more individualized and therefore more accurate than formula-based dosing methods. A repeated DTM (every sixth session and dose adjustment accordingly) was compared to an age-based method (ABM) regarding treatment

  15. Risk and dose assessment methods in gamma knife QA

    International Nuclear Information System (INIS)

    Banks, W.W.; Jones, E.D.; Rathbun, P.

    1992-10-01

    Traditional methods used in assessing risk in nuclear power plants may be inappropriate for assessing medical radiation risks. The typical philosophy used in assessing nuclear reactor risks is machine dominated, with only secondary attention paid to the human component, and only after critical machine failure events have been identified. In assessing the risk of a misadministered radiation dose to patients, the primary source of failures seems to stem overwhelmingly from the actions of people and only secondarily from machine failure modes. In essence, certain medical misadministrations are dominated by human events, not machine failures. Radiological medical devices such as the Leksell Gamma Knife are very simple in design, have few moving parts, and are relatively free from the risks of wear when compared with a nuclear power plant. Since there are major technical differences between a gamma knife and a nuclear power plant, one must select a risk assessment method which is sensitive to these system differences and tailored to the unique medical aspects of the phenomena under study. These differences also drive major shifts in the philosophy and assumptions underlying the risk assessment method (machine-centered vs. person-centered). We were prompted by these basic differences to develop a person-centered approach to risk assessment which would reflect these basic philosophical and technological differences, have the necessary resolution in its metrics, and be highly reliable (repeatable). The risk approach chosen by the Livermore investigative team has been called the ''Relative Risk Profile Method'' and has been described in detail by Banks and Paramore (1983)

  16. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  17. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  18. Method for simulating dose reduction in digital mammography using the Anscombe transformation

    OpenAIRE

    Borges, Lucas R.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.

    2016-01-01

    Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the d...
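
    The basic scheme described here, scaling the standard-dose image and adding signal-dependent noise, can be sketched directly in the image domain for a linear detector; the published method works in the variance-stabilized Anscombe domain and also handles spatially varying gain, which this sketch omits.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_lower_dose(img, r, gain=1.0, sigma_e=0.0):
            """Simulate a dose reduced by factor r (< 1). Assumes pixel
            variance = gain * signal + sigma_e**2 at full dose; the added
            Gaussian noise makes the scaled image match the target-dose
            mean (r * signal) and variance (gain * r * signal + sigma_e**2)."""
            img = np.asarray(img, float)
            extra_var = (r - r**2) * gain * img + (1.0 - r**2) * sigma_e**2
            return r * img + rng.normal(0.0, np.sqrt(np.maximum(extra_var, 0.0)))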

  19. A phantom based method for deriving typical patient doses from measurements of dose-area product on populations of patients

    International Nuclear Information System (INIS)

    Chapple, C.-L.; Broadhead, D.A.

    1995-01-01

    One of the chief sources of uncertainty in the comparison of patient dosimetry data is the influence of patient size on dose. Dose has been shown to relate closely to the equivalent diameter of the patient. This concept has been used to derive a prospective, phantom based method for determining size correction factors for measurements of dose-area product. The derivation of the size correction factor has been demonstrated mathematically, and the appropriate factor determined for a number of different X-ray sets. The use of phantom measurements enables the effect of patient size to be isolated from other factors influencing patient dose. The derived factors agree well with those determined retrospectively from patient dose survey data. Size correction factors have been applied to the results of a large scale patient dose survey, and this approach has been compared with the method of selecting patients according to their weight. For large samples of data, mean dose-area product values are independent of the analysis method used. The chief advantage of using size correction factors is that it allows all patient data to be included in a survey, whereas patient selection has been shown to exclude approximately half of all patients. (author)

  20. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  1. Manual method for dose calculation in gynecologic brachytherapy; Metodo manual para o calculo de doses em braquiterapia ginecologica

    Energy Technology Data Exchange (ETDEWEB)

    Vianello, Elizabeth A.; Almeida, Carlos E. de [Instituto Nacional do Cancer, Rio de Janeiro, RJ (Brazil); Biaggio, Maria F. de [Universidade do Estado, Rio de Janeiro, RJ (Brazil)

    1998-09-01

    This paper describes a manual method for dose calculation in brachytherapy of gynecological tumors, which allows the calculation of the doses at any plane or point of clinical interest. The method uses basic principles of vector algebra and the simulation orthogonal films taken of the patient with the applicators and dummy sources in place. The results obtained with the method were compared with the values calculated with the treatment planning system Theraplan, and the agreement was better than 5% in most cases. The critical points associated with the final accuracy of the proposed method are related to the quality of the image and the appropriate selection of the magnification factors. This method is strongly recommended for radiation oncology centers where no treatment planning systems are available and dose calculations are done manually. (author) 10 refs., 5 figs.

  2. Multicentre evaluation of a novel vaginal dose reporting method in 153 cervical cancer patients

    NARCIS (Netherlands)

    Westerveld, Henrike; de Leeuw, Astrid; Kirchheiner, Kathrin; Dankulchai, Pittaya; Oosterveld, Bernard; Oinam, Arun; Hudej, Robert; Swamidas, Jamema; Lindegaard, Jacob; Tanderup, Kari; Pötter, Richard; Kirisits, Christian

    2016-01-01

    Recently, a vaginal dose reporting method for combined EBRT and BT in cervical cancer patients was proposed. The aim of the current study was to evaluate vaginal doses with this method in a multicentre setting, wherein different applicators, dose rates and protocols were used. In a subset of patients from the

  3. Computational shielding benchmarks

    International Nuclear Information System (INIS)

    The American Nuclear Society Standards Committee 6.2.1 is engaged in the documentation of radiation transport problems and their solutions. The primary objective of this effort is to test computational methods used within the international shielding community. Dissemination of benchmarks will, it is hoped, accomplish several goals: (1) Focus attention on problems whose solutions represent state-of-the-art methodology for representative transport problems of generic interest; (2) Specification of standard problems makes comparisons of alternate computational methods, including use of approximate vs. ''exact'' computer codes, more meaningful; (3) Comparison with experimental data may suggest improvements in computer codes and/or associated data sets; (4) Test reliability of new methods as they are introduced for the solution of specific problems; (5) Verify user ability to apply a given computational method; and (6) Verify status of a computer program being converted for use on a different computer (e.g., CDC vs IBM) or facility

  4. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as

  5. A "quality-control-based correction method" for displayed dose indices on CT scanner consoles in patient dose surveys.

    Science.gov (United States)

    Parsi, Masoumeh; Sohrabi, Mehdi; Mianji, Fereidoun; Paydar, Reza

    2017-06-01

    A new quality-control-based (QC-based) method is introduced to obtain correction factors to be applied to the patient dose indices (CTDIvol and DLP) displayed on CT scanner consoles, in order to verify improvement of dose surveys for diagnostic reference level (DRL) determination. An available database of QC documents and reports for 57 CT scanners in Tehran, Iran was used to estimate CTDIvol, DLP and the relevant correction factors for three CT examination types: head, chest and abdomen/pelvis. The correction factor is the ratio of the QC-based estimated dose to the displayed dose. A dose survey was performed by applying the on-site data collection method and the correction factors obtained, in order to select CT scanners in three modes for determination of CT DRLs: (a) all CT scanners before displayed dose indices were corrected (57), (b) only CT scanners calibrated by QC experts (41), and (c) all CT scanners after displayed dose indices were corrected (57). For the 41 CT scanners, the correction factors of the three examination types obtained in this study are within the acceptance tolerance of IAEA HHS-19. The correction factors range from 0.45 to 1.7 (average of 3 examinations), which is due to the change in the calibrated value of CTDIvol over extended time. The DRL differences in the three modes are within ±1.0% for CTDIvol and ±12.4% for DLP. The QC-based correction method applied to mode (c) improved the DRLs obtained by the other two modes. This method is a strong alternative to direct dose measurement, with simplicity and cost effectiveness. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  6. A proposal for benchmarking learning objects

    OpenAIRE

    Rita Falcão; Alfredo Soeiro

    2007-01-01

    This article proposes a methodology for benchmarking learning objects. It aims to deal with two problems related to e-learning: the validation of learning using this method and the return on investment of the process of development and use: effectiveness and efficiency. This paper describes a proposal for evaluating learning objects (LOs) through benchmarking, based on the Learning Object Metadata Standard and on an adaptation of the main tools of the BENVIC project. The Benchmarking of Learning O...

  7. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  8. Benchmarking of hospital information systems – a comparative analysis of benchmarking clusters in German-speaking countries

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  9. Application of Jacobian-free Newton–Krylov method in implicitly solving two-fluid six-equation two-phase flow problems: Implementation, validation and benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Ling, E-mail: ling.zou@inl.gov; Zhao, Haihua; Zhang, Hongbin

    2016-04-15

    Highlights: • High-order spatial and fully implicit temporal numerical schemes for solving the two-phase six-equation model. • The Jacobian-free Newton–Krylov method was used to solve the discretized nonlinear equations. • Realistic flow regimes and closure correlations were used. • Extensive code validation against experimental data, and benchmarking against RELAP5-3D. - Abstract: This work represents a first-of-its-kind successful application of advanced numerical methods to solving realistic two-phase flow problems with the two-fluid six-equation two-phase flow model. These advanced numerical methods include a high-resolution, high-order spatial discretization scheme on staggered grids, fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated against existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. This in turn provides the possibility of utilizing more sophisticated flow regime maps in the future to further improve simulation accuracy.
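
    To make the JFNK idea concrete, here is a minimal, hedged sketch using SciPy's newton_krylov solver, which approximates Jacobian-vector products by finite differences of the residual so that the Jacobian is never formed explicitly; the toy residual stands in for the (far more complex) discretized six-equation model:

        import numpy as np
        from scipy.optimize import newton_krylov

        def residual(u):
            # Toy nonlinear residual F(u) = 0: a 1D diffusion equation with an
            # exponential source, discretized by central differences.
            r = np.empty_like(u)
            r[0] = u[0] - 1.0                                     # left boundary
            r[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2] + 0.1 * np.exp(u[1:-1])
            r[-1] = u[-1]                                         # right boundary
            return r

        u0 = np.zeros(50)                                         # initial guess
        u = newton_krylov(residual, u0, method='lgmres', f_tol=1e-10)
        print("max |F(u)| =", np.abs(residual(u)).max())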

  10. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the author's experience of active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  11. Multicentre evaluation of a novel vaginal dose reporting method in 153 cervical cancer patients

    DEFF Research Database (Denmark)

    Westerveld, Henrike; de Leeuw, Astrid; Kirchheiner, Kathrin

    2016-01-01

    Background and purpose: Recently, a vaginal dose reporting method for combined EBRT and BT in cervical cancer patients was proposed. The aim of the current study was to evaluate vaginal doses with this method in a multicentre setting, wherein different applicators, dose rates and protocols were used. Material and methods: In a subset of patients from the EMBRACE study, vaginal doses were evaluated. Doses at the applicator surface left/right and anterior/posterior and at 5 mm depth were measured. In addition, the dose at the Posterior–Inferior Border of Symphysis (PIBS) vaginal dose point and PIBS ±2 cm, corresponding to the mid and lower vagina, was measured. Results: 153 patients from seven institutions were included. Large dose variations expressed in EQD2 with α/β = 3 Gy were seen between patients, in particular at the top left and right vaginal wall (median 195 (range 61–947) Gy/178 (61–980) Gy...

  12. Code intercomparison and benchmark for muon fluence and absorbed dose induced by an 18-GeV electron beam after massive iron shielding

    CERN Document Server

    Fassò, Alberto; Ferrari, Anna; Mokhov, Nikolai V.; Müller, Stefan E.; Nelson, Walter Ralph; Roesler, Stefan; Sanami, Toshiya; Striganov, Sergei I.; Versaci, Roberto

    2015-01-01

    In 1974, Nelson, Kase, and Svensson published an experimental investigation on muon shielding using the SLAC high energy LINAC. They measured muon fluence and absorbed dose induced by an 18 GeV electron beam hitting a copper/water beam dump and attenuated in a thick steel shielding. In their paper, they compared the results with the theoretical models available at the time. In order to compare their experimental results with present model calculations, we use the modern transport Monte Carlo codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results will then be compared between the codes, and with the SLAC data.

  13. Update on the Code Intercomparison and Benchmark for Muon Fluence and Absorbed Dose Induced by an 18 GeV Electron Beam After Massive Iron Shielding

    Energy Technology Data Exchange (ETDEWEB)

    Fasso, A. [SLAC]; Ferrari, A. [CERN]; Ferrari, A. [HZDR, Dresden]; Mokhov, N. V. [Fermilab]; Mueller, S. E. [HZDR, Dresden]; Nelson, W. R. [SLAC]; Roesler, S. [CERN]; Sanami, T.; Striganov, S. I. [Fermilab]; Versaci, R. [Unlisted, CZ]

    2016-12-01

    In 1974, Nelson, Kase and Svensson published an experimental investigation on muon shielding around SLAC high-energy electron accelerators [1]. They measured muon fluence and absorbed dose induced by 14 and 18 GeV electron beams hitting a copper/water beam dump and attenuated in a thick steel shielding. In their paper, they compared the results with the theoretical models available at that time. In order to compare their experimental results with present model calculations, we use the modern transport Monte Carlo codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results are then compared between the codes, and with the SLAC data.

  14. Multicentre evaluation of a novel vaginal dose reporting method in 153 cervical cancer patients.

    Science.gov (United States)

    Westerveld, Henrike; de Leeuw, Astrid; Kirchheiner, Kathrin; Dankulchai, Pittaya; Oosterveld, Bernard; Oinam, Arun; Hudej, Robert; Swamidas, Jamema; Lindegaard, Jacob; Tanderup, Kari; Pötter, Richard; Kirisits, Christian

    2016-09-01

    Recently, a vaginal dose reporting method for combined EBRT and BT in cervical cancer patients was proposed. The aim of the current study was to evaluate vaginal doses with this method in a multicentre setting, wherein different applicators, dose rates and protocols were used. In a subset of patients from the EMBRACE study, vaginal doses were evaluated. Doses at the applicator surface left/right and anterior/posterior and at 5 mm depth were measured. In addition, the dose at the Posterior-Inferior Border of Symphysis (PIBS) vaginal dose point and PIBS ±2 cm, corresponding to the mid and lower vagina, was measured. 153 patients from seven institutions were included. Large dose variations expressed in EQD2 with α/β = 3 Gy were seen between patients, in particular at the top left and right vaginal wall (median 195 (range 61–947) Gy/178 (61–980) Gy, respectively). At 5 mm depth, doses were 98 (55–212) Gy/91 (54–227) Gy left/right, and 71 (51–145) Gy/67 (49–189) Gy anterior/posterior, respectively. The dose at PIBS and PIBS ±2 cm was 41 (3–81) Gy, 54 (32–109) Gy and 5 (1–51) Gy, respectively. At PIBS +2 cm (mid vagina) the dose variation came from BT, while the variation at PIBS −2 cm (lower vagina) depended mainly on the EBRT field border location. This novel method for reporting vaginal doses from EBRT and BT through well-defined dose points gives a robust representation of the dose along the vaginal axis and allows comparison of vaginal dose between patients from different centres. The doses at the PIBS points represent the doses at the mid and lower parts of the vagina. Large variations in dose throughout the vagina were observed between patients and centres. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
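
    The EQD2 figures quoted above follow the standard linear-quadratic conversion, EQD2 = D(d + α/β)/(2 + α/β), where D is the total dose and d the dose per fraction. A small sketch (Python; the fractionation numbers are illustrative, not from the study):

        def eqd2(total_dose, dose_per_fraction, alpha_beta=3.0):
            # Linear-quadratic equivalent dose in 2 Gy fractions.
            return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

        # Example: an EBRT course of 45 Gy in 1.8 Gy fractions plus a BT
        # contribution of 28 Gy at 7 Gy per fraction to some dose point.
        combined = eqd2(45.0, 1.8) + eqd2(28.0, 7.0)
        print(f"combined EQD2 (alpha/beta = 3 Gy): {combined:.1f} Gy")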

  15. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology, and atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. The verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  16. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology, and atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. The verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  17. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art available for LWR technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  18. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Bria M.; Brady, Samuel L., E-mail: samuel.brady@stjude.org; Kaufman, Robert A. [Department of Radiological Sciences, St Jude Children's Research Hospital, Memphis, Tennessee 38105 (United States); Mirro, Amy E. [Department of Biomedical Engineering, Washington University, St Louis, Missouri 63130 (United States)

    2014-07-15

    Purpose: To investigate the correlation of size-specific dose estimate (SSDE) with absorbed organ dose, and to develop a simple methodology for estimating patient organ dose in a pediatric population (5–55 kg). Methods: Four physical anthropomorphic phantoms representing a range of pediatric body habitus were scanned with metal oxide semiconductor field effect transistor (MOSFET) dosimeters placed at 23 organ locations to determine absolute organ dose. Phantom absolute organ dose was divided by phantom SSDE to determine the correlation between organ dose and SSDE. Organ dose correlation factors (CF_SSDE^organ) were then multiplied by patient-specific SSDE to estimate patient organ dose. The CF_SSDE^organ were used to retrospectively estimate individual organ doses from 352 chest and 241 abdominopelvic pediatric CT examinations, where mean patient weight was 22 kg ± 15 (range 5–55 kg), and mean patient age was 6 yrs ± 5 (range 4 months to 23 yrs). Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm, thus showing appropriate scalability of the phantoms across the entire pediatric population in this study. Individual CF_SSDE^organ were determined for a total of 23 organs in the chest and abdominopelvic region across nine weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7–1.4) and abdominopelvic region (average 0.9; range 0.7–1.3) was near unity. For organs/tissues that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range 0.1–0.4) for both the chest and abdominopelvic regions, respectively. A means to estimate patient organ dose was demonstrated. Calculated patient organ dose, using patient SSDE and CF_SSDE^organ, was compared to
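
    The estimation step itself is a one-line multiplication; a minimal sketch (Python) with hypothetical correlation factors; the real CF_SSDE^organ values are tabulated in the paper per organ and weight subcategory:

        # Patient organ dose ~ patient-specific SSDE x organ correlation factor.
        # Factor values below are invented for illustration only.
        CF_SSDE_ORGAN = {
            "liver": 1.1,
            "lung": 1.2,
            "skin": 0.3,   # organs beyond the scan volume correlate poorly
        }

        def organ_dose(ssde_mgy, organ):
            return ssde_mgy * CF_SSDE_ORGAN[organ]

        print(f"estimated liver dose: {organ_dose(8.5, 'liver'):.1f} mGy")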

  19. SU-E-T-148: Benchmarks and Pre-Treatment Reviews: A Study of Quality Assurance Effectiveness

    Energy Technology Data Exchange (ETDEWEB)

    Lowenstein, J; Nguyen, H; Roll, J; Walsh, A; Tailor, A; Followill, D [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: To determine the impact benchmarks and pre-treatment reviews have on improving the quality of submitted clinical trial data. Methods: Benchmarks are used to evaluate a site's ability to develop a treatment plan that meets a specific protocol's treatment guidelines before the site places its first patient on the protocol. A pre-treatment review covers an actual patient placed on the protocol, for whom the dosimetry and contour volumes are evaluated against the protocol guidelines before treatment is allowed to begin. A key component of these QA mechanisms is that sites receive timely feedback that educates them on how to plan per the protocol and prevents protocol deviations for patients accrued to the protocol. For both benchmarks and pre-treatment reviews, a dose volume analysis (DVA) was performed using MIM software. For pre-treatment reviews, a volume contour evaluation was also performed. Results: IROC Houston performed a QA effectiveness analysis of a protocol which required both benchmarks and pre-treatment reviews. In 70 percent of the patient cases submitted, the benchmark played an effective role in assuring that the pre-treatment review of the case met protocol requirements. The 35 percent of sites failing the benchmark subsequently modified their planning technique to pass the benchmark before being allowed to submit a patient for pre-treatment review. However, 30 percent of the submitted cases failed the pre-treatment review, with the majority (71 percent) failing the DVA. Twenty percent of sites submitting patients failed to correct the dose volume discrepancies indicated by the benchmark case. Conclusion: Benchmark cases and pre-treatment reviews can be an effective QA tool to educate sites on protocol guidelines and to minimize deviations. Without the benchmark cases, it is possible that 65 percent of the cases undergoing a pre-treatment review would have failed to meet the protocol's requirements. Support: U24-CA-180803.

  20. Dose analysis in Brjansk region during the restoration period of nuclear accident and effects of dose reduction methods in Chernobyl

    International Nuclear Information System (INIS)

    Ramzaev, V.; Kovalenko, V.; Krivonsov, S.

    1999-01-01

    The exposure pathways to people in this area were analysed, and several decontamination methods and techniques are explained. The spatial dose rate, whole-body dose and external exposure of several population groups (pensioners, jobless persons, outdoor laborers, indoor laborers and children) were measured. The new whole-body counter used can reduce the influence of external dose on the 661 keV γ-ray measurement. The correlation coefficient between the soil contamination level and the external exposure was 0.99, whereas that between the cesium-137 content in soil and the internal exposure was -0.2, showing no correlation. The main source of cesium-137 in the body was milk from private cows in each village. The radioactive cesium concentration exceeded 370 Bq/l in 40% of milk samples, and more than 75% of mushroom and strawberry samples showed 600 Bq/kg or more; other foods contained less cesium than these. Decontamination of roofs and gardens, filtration of milk and improved manuring of grassland were carried out in Smajalch; the most effective measure appeared to be the filtration of milk. Each method contributed to reducing the average annual dose to 1 mSv by the next year. (S.Y.)

  1. Recommended environmental dose calculation methods and Hanford-specific parameters

    Energy Technology Data Exchange (ETDEWEB)

    Schreckhise, R.G.; Rhoads, K.; Napier, B.A.; Ramsdell, J.V. (Pacific Northwest Lab., Richland, WA (United States)); Davis, J.S. (Westinghouse Hanford Co., Richland, WA (United States))

    1993-03-01

    This document was developed to support the Hanford Environmental Dose Overview Panel (HEDOP). The Panel is responsible for reviewing all assessments of potential doses received by humans and other biota resulting from the actual or possible environmental releases of radioactive and other hazardous materials from facilities and/or operations belonging to the US Department of Energy on the Hanford Site in south-central Washington. This document serves as a guide to be used for developing estimates of potential radiation doses, or other measures of risk or health impacts, to people and other biota in the environs on and around the Hanford Site. It provides information to develop technically sound estimates of exposure (i.e., potential or actual) to humans or other biotic receptors that could result from the environmental transport of potentially harmful materials that have been, or could be, released from Hanford operations or facilities. Parameter values and information that are specific to the Hanford environs, as well as other supporting material, are included in this document.

  2. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  3. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as impo...

  4. Different intensity extension methods and their impact on entrance dose in breast radiotherapy: A study

    Directory of Open Access Journals (Sweden)

    Sankar A

    2009-01-01

    In breast radiotherapy, skin flashing of treatment fields is important to account for intrafraction movement and setup errors. This study compares two different intensity extension methods, namely the virtual bolus method and the skin flash tool method, for providing skin flashing in intensity-modulated treatment fields. The impact of these two intensity extension methods on skin dose was studied by measuring the entrance dose of the treatment fields using semiconductor diode detectors. We found no significant difference in entrance dose between the two intensity extension methods. However, in the skin flash tool method, selection of appropriate parameters is important to obtain optimum fluence extension.

  5. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  6. A method for describing the doses delivered by transmission x-ray computed tomography

    International Nuclear Information System (INIS)

    Shope, T.B.; Gagne, R.M.; Johnson, G.C.

    1981-01-01

    A method for describing the absorbed dose delivered by x-ray transmission computed tomography (CT) is proposed which provides a means to characterize the dose resulting from CT procedures consisting of a series of adjacent scans. The dose descriptor chosen is the average dose at several locations in the imaged volume of the central scan of the series. It is shown that this average dose can be obtained, for locations in the central scan of the series, from the integral of the single-scan dose profile perpendicular to the scan plane at those same locations. This method for estimating the average dose from a CT procedure has been evaluated as a function of the number of scans in the multiple-scan procedure and of location in the dosimetry phantom, using single-scan dose profiles obtained from five different types of CT systems. For the higher dose regions in the phantoms, the multiple-scan dose descriptor derived from the single-scan dose profiles overestimates the multiple-scan average dose by no more than 10%, provided the procedure consists of at least eight scans.
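
    Numerically, this descriptor amounts to integrating the single-scan dose profile D(z) and dividing by the scan spacing (the standard multiple-scan average dose construction); a sketch (Python) with a Gaussian-like stand-in for a measured profile:

        import numpy as np

        z = np.linspace(-100.0, 100.0, 2001)               # mm along the scan axis
        profile = 10.0 * np.exp(-0.5 * (z / 6.0) ** 2)     # mGy, single-scan profile

        spacing = 10.0                                     # mm between adjacent scans
        average_dose = np.trapz(profile, z) / spacing      # multiple-scan average
        print(f"multiple-scan average dose ~ {average_dose:.1f} mGy")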

  7. Comparison of different methods of calculating CT radiation effective dose in children.

    Science.gov (United States)

    Newman, Beverley; Ganguly, Arundhuti; Kim, Jee-Eun; Robinson, Terry

    2012-08-01

    CT radiation dose is a subject of intense interest and concern, especially in children. Effective dose, a summation of whole-body exposure weighted by specific organ sensitivities, is most often used to compute and compare radiation dose; however, there is little standardization, and there are numerous different methods of calculating effective dose. This study compares five such methods in a group of children undergoing routine chest CT and explores their advantages and pitfalls. Patient data from 120 pediatric chest CT examinations were retrospectively used to calculate effective dose: two scanner dose-length product (DLP) methods using published sets of conversion factors by Shrimpton and Deak, the Imaging Performance Assessment of CT (ImPACT) calculator method, the Alessio online calculator, and the Huda method. The Huda method mean effective dose (4.4 ± 2.2 mSv) and the Alessio online calculator (5.2 ± 2.8 mSv) yielded higher mean effective doses than both DLP calculations (Shrimpton, 3.65 ± 1.8 mSv, and Deak, 3.2 ± 1.5 mSv) as well as the ImPACT calculator effective dose (3.4 ± 1.7 mSv). Mean differences ranged from 10.2% ± 10.1% lower to 28% ± 37.3% higher than the Shrimpton method (used as the standard for comparison). Differences were more marked at 120 kVp than at 80 or 100 kVp and varied at different ages. Concordance coefficients relative to the Shrimpton DLP method were Deak DLP, 0.907; Alessio online calculator, 0.735; ImPACT calculator, 0.926; and Huda, 0.777. Different methods of computing effective dose for pediatric CT produce varying results. The method used must be clearly described to allay confusion about documenting and communicating dose for archiving as well as comparative research purposes.
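
    For reference, the two DLP-based methods reduce to multiplying the scanner-reported dose-length product by an age- and region-specific conversion coefficient k, i.e., E = k × DLP. The sketch below illustrates the mechanics with placeholder chest k values (approximate, not the published Shrimpton or Deak coefficients):

        # Hypothetical chest k factors (mSv per mGy*cm) keyed by minimum age (years).
        K_CHEST = {0: 0.039, 1: 0.026, 5: 0.018, 10: 0.013, 15: 0.014}

        def effective_dose_from_dlp(dlp_mgy_cm, age_years):
            age_bin = max(a for a in K_CHEST if a <= age_years)
            return K_CHEST[age_bin] * dlp_mgy_cm

        print(f"E ~ {effective_dose_from_dlp(150.0, 6):.1f} mSv")  # 150 mGy*cm, age 6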

  8. SU-E-T-86: A Systematic Method for GammaKnife SRS Fetal Dose Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Geneser, S; Paulsson, A; Sneed, P; Braunstein, S; Ma, L [UCSF Comprehensive Cancer Center, San Francisco, CA (United States)

    2015-06-15

    Purpose: Estimating fetal dose is critical to the decision-making process when radiation treatment is indicated during pregnancy. Fetal doses of less than 5 cGy confer no measurable non-cancer developmental risks but can produce a threefold increase in the risk of developing childhood cancer. In this study, we estimate fetal dose for a patient receiving Gamma Knife stereotactic radiosurgery (GKSRS) and develop a method to estimate dose directly from plan details. Methods: A patient underwent GKSRS on a Perfexion unit for eight brain metastases (two infratentorial and one in the brainstem). Dose measurements were performed using a CC13 chamber, a head phantom, and solid water. Superficial doses to the thyroid, sternum, and pelvis were measured using MOSFETs during treatment. Because the fetal dose was too low to measure accurately, we obtained measurements proximal to the isocenter, fitted them to an exponential function, and extrapolated the dose to the fundus of the uterus, the uterine midpoint, and the pubic symphysis for both the preliminary and delivered plans. Results: The R-squared value for the fit to the delivered doses was 0.995. The estimated fetal doses for the 72-minute preliminary and 138-minute delivered plans ranged from 0.0014 to 0.028 cGy and 0.07 to 0.38 cGy, respectively. MOSFET readings during treatment were just above background for the thyroid and negligible for all inferior positions. The method for estimating fetal dose from plan shot information was within 0.2 cGy of the measured values at 14 cm cranial to the fetal location. Conclusion: Estimated fetal doses for both the preliminary and delivered plans were well below the 5 cGy recommended limit. Due to Perfexion shielding, internal dose is primarily governed by attenuation and drops off exponentially. This is the first work that reports fetal dose for a GK Perfexion unit. Although multiple lesions were treated and the duration of treatment was long, the estimated fetal dose remained very low.
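
    The extrapolation step can be reproduced with a simple log-linear fit, D(x) = A·exp(-b·x); the sketch below (Python, with invented measurement values) mirrors the idea of fitting doses measured near the isocenter and evaluating the fit at fetal locations:

        import numpy as np

        x = np.array([10.0, 20.0, 30.0, 40.0])      # cm from isocenter (hypothetical)
        d = np.array([5.0, 1.4, 0.40, 0.11])        # cGy measured in phantom

        slope, log_a = np.polyfit(x, np.log(d), 1)  # linear fit in log space
        a = np.exp(log_a)

        def dose_at(distance_cm):
            return a * np.exp(slope * distance_cm)  # slope is negative

        for label, dist in [("uterine fundus", 55.0), ("pubic symphysis", 70.0)]:
            print(f"{label}: {dose_at(dist):.4f} cGy")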

  9. SU-E-T-280: Reconstructed Rectal Wall Dose Map-Based Verification of Rectal Dose Sparing Effect According to Rectum Definition Methods and Dose Perturbation by Air Cavity in Endo-Rectal Balloon

    International Nuclear Information System (INIS)

    Park, J; Park, H; Lee, J; Kang, S; Lee, M; Suh, T; Lee, B

    2014-01-01

    Purpose: Dosimetric effects and discrepancies arising from different rectum definition methods, and dose perturbation by the air cavity in an endo-rectal balloon (ERB), were verified using rectal-wall (Rwall) dose maps, considering systematic errors in dose optimization and calculation accuracy in intensity-modulated radiation treatment (IMRT) for prostate cancer patients. Methods: With an inflated ERB of average diameter 4.5 cm and air volume 100 cc in place, Rwall doses were predicted by pencil-beam convolution (PBC), the anisotropic analytic algorithm (AAA), and AcurosXB (AXB) with its material assignment function. The errors in dose optimization and calculation introduced by separating the air cavity from the whole rectum (Rwhole) were verified against measured rectal doses. The Rwall doses affected by the dose perturbation of the air cavity were evaluated using a dedicated rectal phantom allowing insertion of rolled-up Gafchromic films and glass rod detectors placed along the rectum perimeter. Inner and outer Rwall doses were verified with reconstructed predicted rectal wall dose maps. Dose errors and their extent at various dose levels were evaluated with respect to estimated rectal toxicity. Results: While AXB showed an insignificant difference in target dose coverage, Rwall doses were underestimated by up to 20% when dose optimization was performed for the Rwhole rather than the Rwall, at all dose ranges except the maximum dose. When dose optimization was applied to the Rwall, the Rwall doses showed dose errors of less than 3% between dose calculation algorithms, except for an overestimation of the maximum rectal dose of up to 5% with PBC. Dose optimization for the Rwhole caused dose differences in the Rwall, especially at intermediate doses. Conclusion: Dose optimization for the Rwall can be suggested for more accurate prediction of rectal wall dose and of the dose perturbation effect of the air cavity in IMRT for prostate cancer. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea

  10. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Energy Technology Data Exchange (ETDEWEB)

    Rohée, E. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Coulon, R., E-mail: romain.coulon@cea.fr [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Carrel, F. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Dautremer, T.; Barat, E.; Montagu, T. [CEA, LIST, Laboratoire de Modélisation et Simulation des Systèmes, F-91191 Gif-sur-Yvette (France); Normand, S. [CEA, DAM, Le Ponant, DPN/STXN, F-75015 Paris (France); Jammes, C. [CEA, DEN, Cadarache, DER/SPEx/LDCI, F-13108 Saint-Paul-lez-Durance (France)

    2016-11-11

    Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, some difficulties remain in the analysis when full-energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study compares a conventional analysis based on the “iterative peak fitting deconvolution” method with a “nonparametric Bayesian deconvolution” approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex spectra are studied from IAEA benchmark protocol tests and from measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method, without any expert parameter fine-tuning.

  11. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    International Nuclear Information System (INIS)

    Rohée, E.; Coulon, R.; Carrel, F.; Dautremer, T.; Barat, E.; Montagu, T.; Normand, S.; Jammes, C.

    2016-01-01

    Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, some difficulties remain in the analysis when full-energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study compares a conventional analysis based on the “iterative peak fitting deconvolution” method with a “nonparametric Bayesian deconvolution” approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex spectra are studied from IAEA benchmark protocol tests and from measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method, without any expert parameter fine-tuning.

  12. Pan-specific MHC class I predictors: A benchmark of HLA class I pan-specific prediction methods

    DEFF Research Database (Denmark)

    Zhang, Hao; Lundegaard, Claus; Nielsen, Morten

    2009-01-01

    emerging pathogens. Methods have recently been published that are able to predict peptide binding to any human MHC class I molecule. In contrast to conventional allele-specific methods, these methods do allow for extrapolation to un-characterized MHC molecules. These pan-specific HLA predictors have...... not previously been compared using independent evaluation sets. Results: A diverse set of quantitative peptide binding affinity measurements was collected from IEDB, together with a large set of HLA class I ligands from the SYFPEITHI database. Based on these data sets, three different pan-specific HLA web...

  13. A logistic dose-ranging method for phase I clinical investigations trials.

    Science.gov (United States)

    Murphy, J R; Hall, D L

    1997-11-01

    This paper describes an alternative to the continual reassessment method (CRM) for phase I trials. The logistic dose-ranging strategy (LDRS) uses logistic regression and a dose allocation scheme similar to the CRM. It can easily be implemented with any logistic regression program. The LDRS can be a stand-alone dose allocation scheme, or it can be incorporated into standard "three on a dose" strategies to indicate when escalation can proceed more rapidly. Finally, the effect of covariates such as age or comorbid conditions on the toxicity expected at the dose selected for a phase II trial can be examined.
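
    A rough sketch of the allocation idea (Python with scikit-learn; a generic illustration rather than the authors' exact algorithm): fit a logistic model of toxicity versus dose to the data accumulated so far, then give the next cohort the dose whose predicted toxicity is closest to the target rate:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        doses = np.array([10.0, 10.0, 20.0, 20.0, 30.0, 30.0])  # doses given so far
        tox = np.array([0, 0, 0, 1, 1, 1])                      # observed toxicities

        model = LogisticRegression(C=1e6)                       # ~unpenalized fit
        model.fit(doses.reshape(-1, 1), tox)

        levels = np.array([10.0, 20.0, 30.0, 40.0])             # candidate levels
        p_tox = model.predict_proba(levels.reshape(-1, 1))[:, 1]
        target = 0.30                                           # target toxicity rate
        next_dose = levels[np.argmin(np.abs(p_tox - target))]
        print("predicted toxicity:", np.round(p_tox, 2), "-> next dose:", next_dose)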

  14. A method to adjust radiation dose-response relationships for clinical risk factors

    DEFF Research Database (Denmark)

    Appelt, Ane Lindegaard; Vogelius, Ivan R

    2012-01-01

    Several clinical risk factors for radiation induced toxicity have been identified in the literature. Here, we present a method to quantify the effect of clinical risk factors on radiation dose-response curves and apply the method to adjust the dose-response for radiation pneumonitis for patients...
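
    Since the abstract is truncated, the following is only a generic illustration of one common way to fold a clinical risk factor into a dose-response curve: a dose-modifying factor (DMF) that shifts the D50 of a logistic model (all numbers invented):

        import math

        def ntcp(dose, d50, gamma50):
            # Logistic dose-response: NTCP = 1 / (1 + exp(4*gamma50*(1 - D/D50))).
            return 1.0 / (1.0 + math.exp(4.0 * gamma50 * (1.0 - dose / d50)))

        d50, gamma50, dmf = 30.0, 1.0, 0.8   # risk factor lowers tolerance (DMF < 1)
        for d in (20.0, 30.0, 40.0):
            print(d, round(ntcp(d, d50, gamma50), 2),        # baseline patients
                     round(ntcp(d, d50 * dmf, gamma50), 2))  # patients with the factor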

  15. Method for simulating dose reduction in digital mammography using the Anscombe transformation.

    Science.gov (United States)

    Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C

    2016-06-01

    This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. The method consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant to digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower radiation dose. The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures at each dose; 256 nonoverlapping ROIs were extracted from each image, and uniform images were also used. The authors simulated lower-dose images and compared these with the real images by evaluating the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated and real images acquired at the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images; the relative average error for the local variance was smaller than 1%. A new method is thus proposed for simulating dose reduction in clinical mammograms, in which the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise
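
    The central trick is easy to sketch: in the Anscombe domain, Poisson-like quantum noise becomes approximately unit-variance Gaussian, so the extra noise needed for a simulated lower dose can be injected as simple additive Gaussian noise. The toy version below omits the paper's offset and gain maps and its DQE correction:

        import numpy as np

        def anscombe(x):
            return 2.0 * np.sqrt(x + 3.0 / 8.0)

        def inverse_anscombe(y):
            return (y / 2.0) ** 2 - 3.0 / 8.0

        def simulate_lower_dose(image, dose_fraction, rng=None):
            # Assumes the input already carries standard-dose Poisson-like noise,
            # so after scaling its Anscombe-domain variance is ~dose_fraction.
            rng = rng or np.random.default_rng(0)
            scaled = image * dose_fraction           # fewer quanta at lower dose
            y = anscombe(scaled)
            extra_var = 1.0 - dose_fraction          # top up to unit variance
            y += rng.normal(0.0, np.sqrt(extra_var), size=y.shape)
            return inverse_anscombe(y)

        flat = np.full((4, 4), 1000.0)               # flat "standard dose" region
        print(simulate_lower_dose(flat, 0.5).round(1))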

  16. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  17. Benchmark risk analysis models

    NARCIS (Netherlands)

    Ale BJM; Golbach GAM; Goos D; Ham K; Janssen LAM; Shield SR; LSO

    2002-01-01

    A so-called benchmark exercise was initiated in which the results of five sets of tools available in the Netherlands would be compared. In the benchmark exercise, a quantified risk analysis was performed on a hypothetical, non-existing hazardous establishment at a randomly chosen location in

  18. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  19. Internet Based Benchmarking

    OpenAIRE

    Bogetoft, Peter; Nielsen, Kurt

    2002-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as non-parametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore alternative improvement strategies. An implementation of both a parametric and a non-parametric model is presented.

  20. The MSRC Ab Initio Methods Benchmark Suite: A measurement of hardware and software performance in the area of electronic structure methods

    Energy Technology Data Exchange (ETDEWEB)

    Feller, D.F.

    1993-07-01

    This collection of benchmark timings represents a snapshot of the hardware and software capabilities available for ab initio quantum chemical calculations at Pacific Northwest Laboratory's Molecular Science Research Center in late 1992 and early 1993. The "snapshot" nature of these results should not be underestimated, because of the speed with which both hardware and software are changing. Even during the brief period of this study, we were presented with newer, faster versions of several of the codes. However, the deadline for completing this edition of the benchmarks precluded updating all the relevant entries in the tables. As will be discussed below, a similar situation occurred with the hardware. The timing data included in this report are subject to all the normal failures, omissions, and errors that accompany any human activity. In an attempt to mimic the manner in which calculations are typically performed, we have run the calculations with the maximum number of defaults provided by each program and a near minimum amount of memory. This approach may not produce the fastest performance that a particular code can deliver. It is not known to what extent improved timings could be obtained for each code by varying the run parameters. If sufficient interest exists, it might be possible to compile a second list of timing data corresponding to the fastest observed performance from each application, using an unrestricted set of input parameters. Improvements in I/O might have been possible by fine tuning the Unix kernel, but we resisted the temptation to make changes to the operating system. Due to the large number of possible variations in levels of operating system, compilers, speed of disks and memory, versions of applications, etc., readers of this report may not be able to exactly reproduce the times indicated. Copies of the output files from individual runs are available if questions arise about a particular set of timings.

  1. Dose calculation using a numerical method based on Haar wavelets integration

    Energy Technology Data Exchange (ETDEWEB)

    Belkadhi, K., E-mail: khaled.belkadhi@ult-tunisie.com [Unité de Recherche de Physique Nucléaire et des Hautes Énergies, Faculté des Sciences de Tunis, Université Tunis El-Manar (Tunisia); Manai, K. [Unité de Recherche de Physique Nucléaire et des Hautes Énergies, Faculté des Sciences de Tunis, Université Tunis El-Manar (Tunisia); College of Science and Arts, University of Bisha, Bisha (Saudi Arabia)

    2016-03-11

    This paper deals with the calculation of the absorbed dose in a gamma-ray irradiation cell. Direct measurement and full simulation have both proved expensive and time consuming. An alternative to these two operations is numerical methods: a quick and efficient way to furnish an estimate of the absorbed dose by approximating the photon flux at a specific point in space. To validate the numerical integration method based on Haar wavelets for absorbed dose estimation, a study with many configurations was performed. The results obtained with the Haar wavelet method showed very good agreement with the simulation, highlighting good efficacy and acceptable accuracy. - Highlights: • A numerical integration method using Haar wavelets is detailed. • Absorbed dose is estimated with the Haar wavelets method. • The absorbed dose calculated using Haar wavelets is compared with Monte Carlo simulation using Geant4.
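
    As a concrete illustration, a Haar-wavelet quadrature evaluates the integrand at the standard Haar collocation points x_k = (k - 0.5)/(2m), which on [0, 1] reduces to a midpoint-type rule; a minimal sketch with a toy integrand (not a photon flux):

        import math

        def haar_integrate(f, m):
            # Integral of f over [0, 1] from values at the 2m Haar collocation points.
            n = 2 * m
            return sum(f((k - 0.5) / n) for k in range(1, n + 1)) / n

        approx = haar_integrate(lambda x: math.exp(-x), m=64)
        exact = 1.0 - math.exp(-1.0)
        print(f"approx = {approx:.6f}, exact = {exact:.6f}")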

  2. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1987-01-01

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  3. Evaluation of radiation dose to patients in intraoral dental radiography using Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Park, Il; Kim, Kyeong Ho; Oh, Seung Chul; Song, Ji Young [Dept. of Nuclear Engineering, Kyung Hee University, Yongin (Korea, Republic of)

    2016-11-15

    The use of dental radiographic examinations is common, although the radiation dose resulting from dental radiography is relatively small. It is nevertheless necessary to evaluate the radiation dose from dental radiography for radiation safety purposes. The objectives of the present study were to develop a dosimetry method for intraoral dental radiography using a Monte Carlo radiation transport code and to calculate organ doses and effective doses to patients from different types of intraoral radiographies. The radiological properties of the dental radiography equipment were characterized for the evaluation of patient radiation dose; the properties, including the x-ray energy spectrum, were simulated using the MCNP code. Organ doses and effective doses to patients were calculated by MCNP simulation with computational adult phantoms. At the typical equipment settings (60 kVp, 7 mA, and 0.12 sec), the entrance air kerma was 1.79 mGy and the measured half value layer was 1.82 mm. The half value layer calculated by MCNP simulation agreed well with the measured value. Effective doses from intraoral radiographies ranged from 1 μSv for the maxilla premolar to 3 μSv for the maxilla incisor. The oral cavity layer (23–82 μSv) and salivary glands (10–68 μSv) received relatively high radiation doses, and the thyroid also received a high radiation dose (3–47 μSv). The dosimetry method developed and the radiation doses evaluated in this study can be utilized for policy making, patient dose management, and the development of low-dose equipment, and can ultimately contribute to decreasing radiation dose to patients for radiation safety.
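
    The final step of such a calculation is the tissue-weighted sum E = Σ_T w_T·H_T over organ equivalent doses. A sketch (Python) with a small, simplified subset of tissue weighting factors and invented organ doses:

        # Illustrative weights loosely patterned on ICRP recommendations; the
        # remainder-tissue handling is simplified and the doses are invented.
        W_T = {"thyroid": 0.04, "salivary_glands": 0.01,
               "oral_mucosa": 0.01, "bone_marrow": 0.12}

        organ_dose_usv = {"thyroid": 20.0, "salivary_glands": 40.0,
                          "oral_mucosa": 60.0, "bone_marrow": 1.0}

        effective_dose = sum(W_T[t] * organ_dose_usv[t] for t in W_T)
        print(f"partial effective dose ~ {effective_dose:.2f} uSv")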

  4. Benchmark of multi-phase method for the computation of fast ion distributions in a tokamak plasma in the presence of low-amplitude resonant MHD activity

    Science.gov (United States)

    Bierwage, A.; Todo, Y.

    2017-11-01

    The transport of fast ions in a beam-driven JT-60U tokamak plasma subject to resonant magnetohydrodynamic (MHD) mode activity is simulated using the so-called multi-phase method, where 4 ms intervals of classical Monte-Carlo simulations (without MHD) are interlaced with 1 ms intervals of hybrid simulations (with MHD). The multi-phase simulation results are compared to results obtained with continuous hybrid simulations, which were recently validated against experimental data (Bierwage et al., 2017). It is shown that the multi-phase method, in spite of causing significant overshoots in the MHD fluctuation amplitudes, accurately reproduces the frequencies and positions of the dominant resonant modes, as well as the spatial profile and velocity distribution of the fast ions, while consuming only a fraction of the computation time required by the continuous hybrid simulation. The present paper is limited to low-amplitude fluctuations consisting of a few long-wavelength modes that interact only weakly with each other. The success of this benchmark study paves the way for applying the multi-phase method to the simulation of Abrupt Large-amplitude Events (ALE), which were seen in the same JT-60U experiments but at larger time intervals. Possible implications for the construction of reduced models for fast ion transport are discussed.

  5. Comparing the accuracy of high-dimensional neural network potentials and the systematic molecular fragmentation method: A benchmark study for all-trans alkanes

    International Nuclear Information System (INIS)

    Gastegger, Michael; Kauffmann, Clemens; Marquetand, Philipp; Behler, Jörg

    2016-01-01

    Many approaches developed to express the potential energy of large systems exploit the locality of the atomic interactions. A prominent example is fragmentation methods, in which the quantum chemical calculations are carried out for overlapping small fragments of a given molecule and are then combined in a second step to yield the system's total energy. Here we compare the accuracy of the systematic molecular fragmentation approach with the performance of high-dimensional neural network (HDNN) potentials introduced by Behler and Parrinello. HDNN potentials are similar in spirit to the fragmentation approach in that the total energy is constructed as a sum of environment-dependent atomic energies, which are derived indirectly from electronic structure calculations. As a benchmark set, we use all-trans alkanes containing up to eleven carbon atoms at the coupled cluster level of theory. These molecules were chosen because they allow reliable reference energies to be extrapolated for very long chains, enabling an assessment of the energies obtained by both methods for alkanes including up to 10 000 carbon atoms. We find that both methods predict high-quality energies, with the HDNN potentials yielding smaller errors with respect to the coupled cluster reference.

  6. Modeling of tube current modulation methods in computed tomography dose calculations for adult and pregnant patients

    International Nuclear Information System (INIS)

    Caracappa, Peter F.; Xu, X. George; Gu, Jianwei

    2011-01-01

    The comparatively high dose and increasing frequency of computed tomography (CT) examinations have spurred the development of techniques for reducing radiation dose to imaging patients. Among these is the application of tube current modulation (TCM), which can be applied longitudinally along the body, rotationally around the body, or both. Existing computational models for calculating dose from CT examinations do not include TCM techniques. Dose calculations using Monte Carlo methods have been previously prepared for constant-current rotational exposures at various positions along the body and for the principal exposure projections for several sets of computational phantoms, including adult male and female and pregnant patients. Dose calculations for CT scans with TCM are prepared by appropriately weighting the existing dose data. Longitudinal TCM doses can be obtained by weighting the dose at each z-axis scan position by the relative tube current at that position. Rotational TCM doses are weighted using the relative organ doses from the principal projections as a function of the current at each rotational angle. Significant dose reductions of 15% to 25% to fetal tissues are found from simulations of longitudinal TCM schemes for pregnant patients of different gestational ages. Weighting factors for each organ in rotational TCM schemes applied to adult male and female patients have also been found. As the application of TCM techniques becomes more prevalent, the need for including TCM in CT dose estimates will necessarily increase. (author)
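
    The longitudinal weighting step described above reduces to scaling a precomputed constant-current dose profile by the relative tube current at each table position. A minimal sketch, with made-up numbers:

    # Sketch: longitudinal tube-current-modulation (TCM) weighting.
    # dose_tcm(z) = dose_constant(z) * I(z) / I_ref, where dose_constant(z) is
    # a precomputed organ dose per rotation at constant current. All values
    # below are made up for illustration.

    z_positions_cm = [0, 10, 20, 30, 40]        # table positions along the scan
    dose_const_mgy = [1.2, 1.5, 1.8, 1.5, 1.1]  # Monte Carlo doses at reference current
    tube_current_ma = [150, 220, 300, 220, 140] # modulated current at each position
    I_REF_MA = 300.0                            # reference (maximum) tube current

    dose_tcm = [d * i / I_REF_MA for d, i in zip(dose_const_mgy, tube_current_ma)]
    print(f"TCM-weighted dose: {sum(dose_tcm):.2f} mGy "
          f"vs constant-current {sum(dose_const_mgy):.2f} mGy")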

  7. Simulation of sound waves using the Lattice Boltzmann Method for fluid flow: Benchmark cases for outdoor sound propagation

    NARCIS (Netherlands)

    Salomons, E.M.; Lohman, W.J.A.; Zhou, H.

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind.

  8. Design study on dose evaluation method for employees at severe accident

    International Nuclear Information System (INIS)

    Yoshida, Yoshitaka; Irie, Takashi; Kohriyama, Tamio; Kudo, Seiichi; Nishimura, Kazuya

    2001-01-01

    If a severe accident is assumed in a nuclear power plant, the radiation dose rate distribution (or map) in the plant and estimated dose values are needed for in-plant rescue activity, accident management, repair work on failed parts, and evaluation of employee exposure. However, it may be difficult to obtain these accurately as the accident progresses, because radiation monitors are not always installed in the areas where accident management or repair work on safety-related equipment is planned. In this work, we analyzed the diffusion of radioactive materials in the case of a severe accident in a pressurized water reactor plant, investigated a method to obtain in-plant radiation dose rates from the estimated radioactive sources, built a prototype analysis system by modeling a specific set of components and buildings in the plant based on this design study on dose evaluation for employees at severe accident, and then evaluated its usefulness. As a result, we obtained the following: (1) A new dose evaluation method was established to predict the radiation dose rate at any point in the plant during a severe accident scenario. (2) The evaluation of total dose, including the route and time required for accident management and repair work, is useful for estimating the radiation dose limits for these employee actions. (3) The radiation dose rate map is effective for identifying high-radiation areas and for choosing a route with a lower radiation dose rate. (author)

  9. A benchmark server using high resolution protein structure data, and benchmark results for membrane helix predictions.

    Science.gov (United States)

    Rath, Emma M; Tessier, Dominique; Campbell, Alexander A; Lee, Hong Ching; Werner, Tim; Salam, Noeris K; Lee, Lawrence K; Church, W Bret

    2013-03-27

    Helical membrane proteins are vital for the interaction of cells with their environment. Predicting the location of membrane helices in protein amino acid sequences provides substantial understanding of their structure and function and identifies membrane proteins in sequenced genomes. Currently there is no comprehensive benchmark tool for evaluating prediction methods, and there is no publication comparing all available prediction tools. Current benchmark literature is outdated, as recently determined membrane protein structures are not included. Current literature is also limited to global assessments, as specialised benchmarks for predicting specific classes of membrane proteins were not previously carried out. We present a benchmark server at http://sydney.edu.au/pharmacy/sbio/software/TMH_benchmark.shtml that uses recent high resolution protein structural data to provide a comprehensive assessment of the accuracy of existing membrane helix prediction methods. The server further allows a user to compare uploaded predictions generated by novel methods, permitting the comparison of these novel methods against all existing methods compared by the server. Benchmark metrics include sensitivity and specificity of predictions for membrane helix location and orientation, and many others. The server allows for customised evaluations such as assessing prediction method performances for specific helical membrane protein subtypes. We report results for custom benchmarks which illustrate how the server may be used for specialised benchmarks. Which prediction method performs best depends on which measure is being benchmarked. The OCTOPUS membrane helix prediction method is consistently one of the highest performing methods across all measures in the benchmarks that we performed. The benchmark server allows general and specialised assessment of existing and novel membrane helix prediction methods. Users can employ this benchmark server to determine the most suitable prediction method for their task.

  10. Calculation of dose conversion coefficients for the radionuclides in soil using the Monte Carlo method

    International Nuclear Information System (INIS)

    Balos, Y.; Timurtuerkan, E. B.; Yorulmaz, N.; Bozkurt, A.

    2009-01-01

    In determining the radiation background of a region, it is important to carry out environmental radioactivity measurements in soil, water, and air, to determine their contribution to the dose rate in air. This study aims to determine the dose conversion coefficients (in (nGy/h)/(Bq/kg)) that are used to convert radionuclide activity concentration in soil (in Bq/kg) to dose rate in air (in nGy/h) using the Monte Carlo method. An isotropic source emitting monoenergetic photons is assumed to be uniformly distributed in the soil. The doses deposited by photons in the organs and tissues of a mathematical phantom are determined with the Monte Carlo package MCNP. The organ doses are then used, together with radiation weighting factors and organ weighting factors, to obtain effective doses for the energy range of 100 keV–3 MeV, which in turn are used to determine the dose rates in air per unit of specific activity.
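
    The end product of such a calculation is a set of coefficients that turn measured soil activity concentrations directly into a dose rate in air. A sketch of that final conversion step; the coefficients and activities below are illustrative placeholders, not results of the study:

    # Sketch: converting soil activity concentrations (Bq/kg) to dose rate in
    # air (nGy/h) with nuclide-specific conversion coefficients in units of
    # (nGy/h)/(Bq/kg). All numbers are illustrative placeholders.

    conversion = {"K-40": 0.042, "Ra-226": 0.46, "Th-232": 0.60}  # (nGy/h)/(Bq/kg)
    activity = {"K-40": 400.0, "Ra-226": 35.0, "Th-232": 30.0}    # Bq/kg, hypothetical

    dose_rate_ngy_h = sum(conversion[n] * activity[n] for n in activity)
    print(f"Dose rate in air: {dose_rate_ngy_h:.1f} nGy/h")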

  11. Bacterial whole genome-based phylogeny: construction of a new benchmarking dataset and assessment of some existing methods

    DEFF Research Database (Denmark)

    Ahrenfeldt, Johanne; Skaarup, Carina; Hasman, Henrik

    2017-01-01

    for sequencing. The result is a data set consisting of 101 whole genome sequences with known phylogenetic relationship. Among the sequenced samples, 51 correspond to internal nodes in the phylogeny because they are ancestral, while the remaining 50 correspond to leaves. We also used the newly created data set...... sequences are placed as leaves, even though some of them are in fact ancestral. We therefore devised a method for post-processing the inferred trees by collapsing short branches (thus relocating some leaves to internal nodes), and also present two new measures of tree similarity that take into account...... the identity of both internal and leaf nodes. Conclusions: Based on this analysis we find that, among the investigated methods, CSI Phylogeny had the best performance, correctly identifying 73% of all branches in the tree and 71% of all clades. We have made all data from this experiment (raw sequencing reads...

  12. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  13. Applications of the Space-Time Conservation Element and Solution Element (CE/SE) Method to Computational Aeroacoustic Benchmark Problems

    Science.gov (United States)

    Wang, Xiao-Yen; Himansu, Ananda; Chang, Sin-Chung; Jorgenson, Philip C. E.

    2000-01-01

    The internal propagation problems, fan noise problem, and turbomachinery noise problems are solved using the space-time conservation element and solution element (CE/SE) method. The internal propagation problems address the propagation of sound waves through a nozzle. Both the nonlinear and linear quasi-1D Euler equations are solved. Numerical solutions are presented and compared with the analytical solution. The fan noise problem concerns the effect of the sweep angle on the acoustic field generated by the interaction of a convected gust with a cascade of 3D flat plates. A parallel version of the 3D CE/SE Euler solver is developed and employed to obtain numerical solutions for a family of swept flat plates. Numerical solutions for sweep angles of 0, 5, 10, and 15 deg are presented. The turbomachinery problems describe the interaction of a 2D vortical gust with a cascade of flat-plate airfoils with/without a downstream moving grid. The 2D nonlinear Euler equations are solved and the converged numerical solutions are presented and compared with the corresponding analytical solution. All the comparisons demonstrate that the CE/SE method is capable of solving aeroacoustic problems with/without shock waves in a simple and efficient manner. Furthermore, the simple non-reflecting boundary condition used in the CE/SE method, which is not based on characteristic theory, works very well in 1D, 2D, and 3D problems.

  14. X-ray tube output based calculation of patient entrance surface dose: validation of the method

    Energy Technology Data Exchange (ETDEWEB)

    Harju, O.; Toivonen, M.; Tapiovaara, M.; Parviainen, T. [Radiation and Nuclear Safety Authority, Helsinki (Finland)

    2003-06-01

    X-ray departments need methods to monitor the doses delivered to patients in order to be able to compare their dose levels to established reference levels. For this purpose, patient dose per radiograph is described in terms of the entrance surface dose (ESD) or dose-area product (DAP). The actual measurement is often made by using a DAP meter or thermoluminescent dosimeters (TLD). The third possibility, the calculation of ESD from the examination technique factors, is likely to be a common method for x-ray departments that do not have the other methods at their disposal or for examinations where the dose may be too low to be measured by the other means (e.g. chest radiography). We have developed a program for the determination of ESD by the calculation method and analysed the accuracy that can be achieved by this indirect method. The program calculates the ESD from the current-time product, x-ray tube voltage, beam filtration, and focus-to-skin distance (FSD). Additionally, for calibrating the dose calculation method and thereby improving the accuracy of the calculation, the x-ray tube output should be measured for at least one x-ray tube voltage value in each x-ray unit. The aim of the present work is to point out the restrictions of the method and details of its practical application. The first experiences from the use of the method will be summarised. (orig.)
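
    The technique factors listed combine in the usual indirect-ESD relation, ESD ≈ Y(kV) × mAs × (d_ref/FSD)² × BSF, where Y is the measured tube output at a reference distance and BSF is a backscatter factor. A minimal sketch under assumed calibration values:

    # Sketch: indirect entrance surface dose (ESD) estimate from technique
    # factors:
    #   ESD = Y(kV) * mAs * (d_ref / FSD)**2 * BSF
    # where Y(kV) is the measured tube output (mGy/mAs at distance d_ref) and
    # BSF a backscatter factor. All numbers below are assumed, for illustration.

    def esd_mgy(output_mgy_per_mas, mas, fsd_cm, d_ref_cm=100.0, bsf=1.35):
        """Inverse-square-corrected output times mAs times backscatter factor."""
        return output_mgy_per_mas * mas * (d_ref_cm / fsd_cm) ** 2 * bsf

    # e.g. output 0.05 mGy/mAs at 1 m for the chosen kV, 20 mAs, FSD 90 cm
    print(f"ESD = {esd_mgy(0.05, 20.0, 90.0):.2f} mGy")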

  15. Benchmarking Quantum Mechanics/Molecular Mechanics (QM/MM) Methods on the Thymidylate Synthase-Catalyzed Hydride Transfer.

    Science.gov (United States)

    Świderek, Katarzyna; Arafet, Kemel; Kohen, Amnon; Moliner, Vicent

    2017-03-14

    Given the ubiquity of hydride-transfer reactions in enzyme-catalyzed processes, identifying the appropriate computational method for evaluating such biological reactions is crucial to performing theoretical studies of these processes. In this paper, the hydride-transfer step catalyzed by thymidylate synthase (TSase) is studied by examining hybrid quantum mechanics/molecular mechanics (QM/MM) potentials via multiple semiempirical methods and the M06-2X hybrid density functional. Calculations of protium and tritium transfer in these reactions across a range of temperatures allowed determination of the temperature dependence of kinetic isotope effects (KIE). Dynamics and quantum-tunneling effects are revealed to have little effect on the reaction rate, but are significant in determining the KIEs and their temperature dependence. Good agreement with experiments is found, especially for the RM1/MM simulations. The small temperature dependence of the quantum-tunneling corrections and the quasiclassical contribution term cancel each other, while the recrossing transmission coefficient seems to be temperature-independent over the interval of 5-40 °C.

  16. A new method for dosing rhodamine B in natural water

    International Nuclear Information System (INIS)

    Marichal, M.; Benoit, R.

    1961-01-01

    A simple and sensitive method well adapted to hydrological research. The dye is first extracted from the water sample by isoamyl alcohol and then the fluorescence of the alcoholic solution, after excitation by ultraviolet radiation, is measured spectrophotometrically. The sensitivity of the method is about 10⁻¹², that is, a millionth of a milligram of dye per litre. Reprint of a paper published in 'Chimie Analytique', No. 2, Feb 1962, p. 70-72. [fr]

  17. Evaluation of a post-analysis method for cumulative dose distribution in stereotactic body radiotherapy

    International Nuclear Information System (INIS)

    Imae, Toshikazu; Takenaka, Shigeharu; Saotome, Naoya

    2016-01-01

    The purpose of this study was to evaluate a post-analysis method for cumulative dose distribution in stereotactic body radiotherapy (SBRT) using volumetric modulated arc therapy (VMAT). VMAT is capable of acquiring respiratory signals derived from projection images and machine parameters based on machine logs during VMAT delivery. Dose distributions were reconstructed from the respiratory signals and machine parameters for conditions in which the respiratory signals were used without division or were divided into 4 or 10 phases. The dose distribution of each respiratory phase was calculated on the planned four-dimensional CT (4DCT). Summation of the dose distributions was carried out using deformable image registration (DIR), and the cumulative dose distributions were compared with those of the corresponding plans. Without division, dose differences between the cumulative distribution and the plan were not significant. When the respiratory signals were divided, dose differences were observed, with overdose in the cranial region and underdose in the caudal region of the planning target volume (PTV). Differences between 4 and 10 phases were not significant. The present method was feasible for evaluating cumulative dose distribution in VMAT-SBRT using 4DCT and DIR. (author)
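
    Conceptually, the post-analysis reduces to computing a dose per respiratory phase, mapping each phase dose onto a reference anatomy with the DIR, and summing. A schematic of the accumulation step; the arrays are toy one-dimensional grids and the deform step is a stand-in for the actual DIR:

    # Sketch: cumulative dose by summing phase doses mapped onto a reference
    # phase with deformable image registration (DIR). Arrays are toy 1-D "dose
    # grids"; deform() is a placeholder for applying the real DIR vector field.

    import numpy as np

    def deform(dose, shift):
        """Placeholder DIR: shift the dose grid by an integer number of voxels."""
        return np.roll(dose, shift)

    phase_doses = [np.array([0.0, 1.0, 2.0, 1.0, 0.0]) for _ in range(4)]  # Gy/phase
    phase_shifts = [0, 1, 2, 1]   # toy motion of each phase relative to reference

    cumulative = np.zeros(5)
    for dose, shift in zip(phase_doses, phase_shifts):
        cumulative += deform(dose, -shift)   # map each phase back to the reference

    print("Cumulative dose grid (Gy):", cumulative)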

  18. A comprehensive method for calculating patient effective dose and other dosimetric quantities from CT DICOM images.

    Science.gov (United States)

    Tsalafoutas, Ioannis A; Thalassinou, Stella; Efstathopoulos, Efstathios P

    2012-07-01

    The purpose of this article is to present a method for the calculation of effective dose using the DICOM header information of CT images. Using specialized software, the DICOM data were automatically extracted into a spreadsheet containing embedded functions for calculating effective dose. These data were used to calculate the dose-length product (DLP) fraction that corresponds to each image, and the respective effective dose was obtained by multiplying the image DLP by a conversion coefficient that was automatically selected depending on the CT scanner, the tube potential, and the anatomic position to which each image corresponded. The total effective dose was calculated as the sum of effective doses of all images plus the contribution of overscan. The conversion coefficient tables were derived using dosimetry calculator software for both the International Commission on Radiological Protection (ICRP) 60 and ICRP 103 organ-weighting schemes. This method was applied for 90 chest, abdomen-pelvis, and chest-abdomen-pelvis examinations performed in three different MDCT scanners. The DLP values calculated with this method were in good agreement with those calculated by the CT scanners' software. The effective dose values calculated using the ICRP 103 conversion coefficient compared with those calculated using the ICRP 60 conversion coefficient were roughly equal for the chest-abdomen-pelvis examinations, smaller for the abdomen-pelvis examinations, and larger for the chest examinations. The applicability of this method for estimating organ doses was also explored. With this method, all patient dose-related quantities, such as the DLP, effective dose, and individual organ doses, can be calculated.
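
    The per-image bookkeeping described above amounts to: split the total DLP over the images, look up a conversion coefficient k for the anatomic region and tube potential of each image, and sum. A simplified sketch with assumed k values (the actual coefficients were derived per scanner and per ICRP weighting scheme):

    # Sketch: effective dose from per-image DLP fractions,
    #   E = sum_images DLP_image * k(region, kVp),
    # with k in mSv/(mGy*cm). The k values and DLPs below are assumed
    # placeholders, not the coefficients derived in the study.

    K_COEFF = {("chest", 120): 0.014, ("abdomen_pelvis", 120): 0.015}  # mSv/(mGy*cm)

    images = [                        # (region, kVp, DLP fraction of this image)
        ("chest", 120, 150.0),
        ("chest", 120, 150.0),
        ("abdomen_pelvis", 120, 200.0),
    ]

    effective_dose_msv = sum(dlp * K_COEFF[(region, kvp)] for region, kvp, dlp in images)
    print(f"Effective dose: {effective_dose_msv:.2f} mSv")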

  19. Benchmarking sample preparation/digestion protocols reveals tube-gel being a fast and repeatable method for quantitative proteomics.

    Science.gov (United States)

    Muller, Leslie; Fornecker, Luc; Van Dorsselaer, Alain; Cianférani, Sarah; Carapito, Christine

    2016-12-01

    Sample preparation, typically by in-solution or in-gel approaches, has a strong influence on the accuracy and robustness of quantitative proteomics workflows. The major benefit of in-gel procedures is their compatibility with detergents (such as SDS) for protein solubilization. However, SDS-PAGE is a time-consuming approach. Tube-gel (TG) preparation circumvents this drawback as it involves directly trapping the sample in a polyacrylamide gel matrix without electrophoresis. We report here the first global label-free quantitative comparison between TG, stacking gel (SG), and basic liquid digestion (LD). A series of UPS1 standard mixtures (at 0.5, 1, 2.5, 5, 10, and 25 fmol) were spiked in a complex yeast lysate background. TG preparation allowed more yeast proteins to be identified than did the SG and LD approaches, with mean numbers of 1979, 1788, and 1323 proteins identified, respectively. Furthermore, the TG method proved equivalent to SG and superior to LD in terms of the repeatability of the subsequent experiments, with mean CV for yeast protein label-free quantifications of 7, 9, and 10%. Finally, known variant UPS1 proteins were successfully detected in the TG-prepared sample within a complex background with high sensitivity. All the data from this study are accessible on ProteomeXchange (PXD003841). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
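
    The excess sound level proposed here is simply the level difference relative to free field, L_excess = 20·log10(p_rms/p_rms,free), so the largely common numerical dissipation cancels in the ratio. A sketch with synthetic stand-ins for LBM output:

    # Sketch: excess sound level = sound level minus free-field sound level,
    #   L_excess = 20*log10(p_rms / p_rms_freefield).
    # The pressure signals below are synthetic stand-ins for LBM output.

    import numpy as np

    def excess_level_db(p, p_free):
        """Level difference in dB between a signal and its free-field reference."""
        rms = np.sqrt(np.mean(np.square(p)))
        rms_free = np.sqrt(np.mean(np.square(p_free)))
        return 20.0 * np.log10(rms / rms_free)

    t = np.linspace(0.0, 1.0, 1000)
    p_free = np.sin(2 * np.pi * 100 * t)     # free-field signal
    p = 0.5 * np.sin(2 * np.pi * 100 * t)    # attenuated signal, e.g. behind a barrier

    print(f"Excess sound level: {excess_level_db(p, p_free):.1f} dB")  # about -6 dB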

  1. Computational benchmark problem for deep penetration in iron

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Carter, L.L.

    1980-01-01

    A calculational benchmark problem which is simple to model and easy to interpret is described. The benchmark consists of monoenergetic 2-, 4-, or 40-MeV neutrons normally incident upon a 3-m-thick pure iron slab. Currents, fluxes, and radiation doses are tabulated throughout the slab

  2. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    International Nuclear Information System (INIS)

    Randriantsizafy, R.D.; Ramanandraibe, M.J.; Raboanary, R.

    2007-01-01

    Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. Treatment time calculation for the prescribed dose is made manually. A Monte Carlo method Python library written at the Madagascar INSTN is experimentally used to calculate the dose distribution in the tumour and around it. The first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs is set up with a listener patch running on each PC. The library will be used to model the dose distribution on patient CT images for individualized and more accurate treatment time calculation for a prescribed dose.

  3. VVER-1000 MOX core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  4. Developing a novel technique for absolute measurements of the principal- and second-shock Hugoniots: a benchmark for the impedance-match methods

    Science.gov (United States)

    Gu, Yunjun; Zheng, Jun; Chen, Qifeng; Li, Chengjun; Li, Jiangtao; Chen, Zhiyun

    2017-06-01

    A novel diagnostics configuration was presented for performing absolute measurements of the principal- and second-shock Hugoniots of dense gaseous H2+D2 mixtures under multi-shock compression and probing their thermodynamic properties by a joint diagnostic of a multi-channel optical pyrometer (MCOP), a Doppler pin system (DPS), and a streak camera. This technique allowed the time-resolved optical radiation histories, interface velocity profiles, and time-resolved spectrum of the multiply compressed sample to be simultaneously measured in a single shot. The shock wave velocities and particle velocities under the first two shock compressions can be directly determined with the help of the above multiple diagnostics instead of the impedance-match methods. Thus, absolute measurements of the principal- and second-shock Hugoniots for pre-compressed dense gaseous H2+D2 mixtures under multi-shock compression can be achieved, which provides a benchmark for the impedance-match measurement technique. Furthermore, the combination of multiple diagnostics also allows different experimental observables to be cross-checked, which reinforces the reliability of the experimental measurements.

  5. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings.

  6. Design study on dose evaluation method for employees at severe accident

    International Nuclear Information System (INIS)

    Yoshida, Yoshitaka; Irie, Takashi; Kohriyama, Tamio; Kudo, Seiichi; Nishimura, Kazuya

    2002-01-01

    If a severe accident occurs in a pressurized water reactor plant, it is required to estimate the dose values of operators engaged in emergency activities such as accident management and repair of failed parts. However, it might be difficult to measure the radiation dose rate during the progress of an accident, because radiation monitors are not always installed in areas where the emergency activities are required. In this study, we analyzed the transport of radioactive materials in the case of a severe accident, investigated a method to obtain radiation dose rates in the plant from estimated radioactive sources, built a prototype analysis system from this design study, and then evaluated its usefulness. As a result, we obtained the following: (1) A new dose evaluation method was established to predict the radiation dose rate at any point in the plant during a severe accident scenario. (2) The evaluation of total dose, including the access route and time required for emergency activities, is useful for estimating the radiation dose limits for these employee actions. (3) The radiation dose rate map is effective for identifying high-radiation areas and for choosing a route with a lower radiation dose rate. (author)

  7. Maximum skin dose assessment in interventional cardiology: large area detectors and calculation methods

    International Nuclear Information System (INIS)

    Quail, E.; Petersol, A.

    2002-01-01

    Advances in imaging technology have facilitated the development of increasingly complex radiological procedures for interventional radiology. Such interventional procedures can involve significant patient exposure, although they often represent alternatives to more hazardous surgery or are the sole method of treatment. Interventional radiology is already an established part of mainstream medicine and is likely to expand further with the continuing development and adoption of new procedures. Among all medical exposures, interventional radiology tops the list of radiological practices in terms of effective dose per examination, with a mean value of 20 mSv. Currently, interventional radiology contributes 4% of the annual collective dose, in spite of contributing only 0.3% of the total annual frequency; considering the prospects of this method, a large expansion of this value can be expected. In IR procedures, the potential for deterministic effects on the skin is a risk to be taken into account together with the long-term stochastic risk. Indeed, the International Commission on Radiological Protection (ICRP), in its publication No. 85, affirms that the patient dose of priority concern is the absorbed dose in the area of skin that receives the maximum dose during an interventional procedure. For the reasons mentioned, in IR it is important to give practitioners information on the dose received by the skin of the patient during the procedure. In this paper, the absorbed dose in the area of skin receiving the maximum dose during an interventional procedure is termed the maximum local skin dose (MSD).

  8. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  9. Imaging method for monitoring delivery of high dose rate brachytherapy

    Science.gov (United States)

    Weisenberger, Andrew G; Majewski, Stanislaw

    2012-10-23

    A method for in-situ monitoring of both the balloon/cavity and the radioactive source in brachytherapy treatment, using at least one pair of miniature gamma cameras to acquire separate images of: 1) the radioactive source as it is moved in the tumor volume during brachytherapy; and 2) a relatively low intensity radiation source produced either by an injected radiopharmaceutical rendering cancerous tissue visible or by a radioactive solution filling a balloon surgically implanted into the cavity formed by the surgical resection of a tumor.

  10. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 'Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core' problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated

  11. Syringe-feeding as a novel delivery method for accurate individual dosing of probiotics in rats

    DEFF Research Database (Denmark)

    Tillmann, Sandra; Wegener, Gregers

    2017-01-01

    Probiotic administration to rodents is typically achieved using oral gavage or water bottles, but both approaches may compromise animal welfare, bacterial viability, dosing accuracy, or ease of administration. Oral gavage dosing may induce stress, especially when given daily over several weeks... leftovers or clogging of the bottle further threaten the reliability of this method. To date, no method has been described that can provide non-stressful precise dosing of probiotics or prebiotics in individual rats. In accordance with the 3R principles (replace, reduce, refine), we propose syringe......-feeding as a refinement method for simple yet accurate administration of probiotics. Animals hereby voluntarily consume the solution directly from a syringe held in their home cage, thereby enabling controlled dosing of individual animals. This method requires a short training phase of approximately 3 days, but is very...

  12. Noninvasive Non-Dose Method for Risk Stratification of Breast Diseases

    Directory of Open Access Journals (Sweden)

    I. A. Apollonova

    2014-01-01

    The article concerns a relevant issue: the development of a noninvasive method for screening diagnostics and risk stratification of breast diseases. The developed method and its embodiment use both the analysis of onco-epidemiologic tests and iridoglyphical research. Widely used onco-epidemiologic tests only reflect the patient's subjective perception of her own life history and sickness. Therefore, to confirm the revealed factors, modern objective and safe methods are necessary. Iridoglyphical research may be considered one of those methods, since it allows changes in zones of the iris to be revealed in real time. As these zones are functionally linked with internal organs and systems, in this case the mammary glands, changes of the iris zones may be used to assess risk groups for mammary gland disorders. The article presents results of research conducted using a prototype of the hardware-software complex for screening diagnostics and risk stratification of mammary gland disorders. Research was conducted using verified materials provided by the Biomedical Engineering Faculty and the Scientific Biometry Research and Development Centre of Bauman Moscow State Technical University, the City of Moscow's GUZ Clinical and Diagnostic Centre No. 4 of the Western Administrative District, and the First Mammology (Breast Care) Centre of the Russian Federation's Ministry of Health and Social Development. The information obtained as a result of the onco-epidemiological tests and iridoglyphical research was used to develop a procedure for quantitative diagnostics aimed at assessing mammary gland cancer risk groups. The procedure is based on Bayes conditional probability. The task of quantitative diagnostics may be formally divided into the differential assessment of three states. The first, D1, is the norm, which corresponds to the population group with a lack of risk factors or changes of the mammary glands. The second, D2, is the population group
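
    The quantitative step rests on Bayes' rule: posterior probabilities of the states D1, D2, ... given the observed test and iris findings. A toy sketch with invented priors and likelihoods (the study's actual feature set and probabilities are not reproduced here), assuming conditionally independent findings:

    # Sketch: Bayes' rule for assigning a patient to one of three states
    # (D1 = norm, D2/D3 = risk groups) from binary findings. The priors and
    # likelihoods below are invented for illustration only.

    PRIORS = {"D1": 0.85, "D2": 0.10, "D3": 0.05}
    # P(finding present | state), findings assumed conditionally independent
    LIKELIHOOD = {
        "epidemiologic_risk_factor": {"D1": 0.10, "D2": 0.55, "D3": 0.80},
        "iris_zone_change":          {"D1": 0.05, "D2": 0.40, "D3": 0.75},
    }

    def posterior(findings):
        """Posterior over states given a dict {finding_name: bool}."""
        post = dict(PRIORS)
        for name, present in findings.items():
            for state in post:
                p = LIKELIHOOD[name][state]
                post[state] *= p if present else (1.0 - p)
        norm = sum(post.values())
        return {s: v / norm for s, v in post.items()}

    print(posterior({"epidemiologic_risk_factor": True, "iris_zone_change": True}))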

  13. Method for simulating dose reduction in digital mammography using the Anscombe transformation

    International Nuclear Information System (INIS)

    Borges, Lucas R.; Oliveira, Helder C. R. de; Nunes, Polyana F.; Vieira, Marcelo A. C.; Bakic, Predrag R.; Maidment, Andrew D. A.

    2016-01-01

    Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose, 256 nonoverlapping ROIs extracted from each image, and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation.
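
    A stripped-down version of the central idea, assuming pure Poisson statistics and omitting the offset, gain-map, and DQE corrections that the actual method models: scale the image to the target dose, move to the Anscombe domain where Poisson noise has approximately unit variance, add the missing noise, and invert.

    # Sketch: simulate a reduced-dose image from a full-dose one via the
    # Anscombe transformation A(x) = 2*sqrt(x + 3/8), in whose domain Poisson
    # noise is approximately unit-variance Gaussian. For a dose fraction lam,
    # the scaled image lam*x carries noise of std ~sqrt(lam) in the Anscombe
    # domain, so Gaussian noise of std sqrt(1 - lam) is added to reach the
    # unit std of a true low-dose acquisition. Detector offset, gain maps,
    # and DQE changes are deliberately ignored here.

    import numpy as np

    rng = np.random.default_rng(0)

    def anscombe(x):
        return 2.0 * np.sqrt(x + 3.0 / 8.0)

    def inverse_anscombe(y):
        return (y / 2.0) ** 2 - 3.0 / 8.0

    def simulate_low_dose(image, lam):
        """image: full-dose counts (Poisson statistics); lam: dose fraction < 1."""
        y = anscombe(lam * image)
        y += rng.normal(0.0, np.sqrt(1.0 - lam), image.shape)
        return np.maximum(inverse_anscombe(y), 0.0)

    full_dose = rng.poisson(1000.0, size=(64, 64)).astype(float)
    half_dose = simulate_low_dose(full_dose, 0.5)
    print(half_dose.var(), (0.5 * full_dose).var())  # simulated image is noisier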

  14. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  15. Digital radiography of scoliosis with a scanning method: radiation dose optimization

    Energy Technology Data Exchange (ETDEWEB)

    Geijer, Haakan; Andersson, Torbjoern [Department of Radiology, Oerebro University Hospital, 701 85 Oerebro (Sweden); Verdonck, Bert [Philips Medical Systems, P.O. Box 10,000, 5680 Best (Netherlands); Beckman, Karl-Wilhelm; Persliden, Jan [Department of Medical Physics, Oerebro University Hospital, 701 85 Oerebro (Sweden)

    2003-03-01

    The aim of this study was optimization of the radiation dose-image quality relationship for a digital scanning method of scoliosis radiography. The examination is performed as a digital multi-image translation scan that is reconstructed to a single image in a workstation. Entrance dose was recorded with thermoluminescent dosimeters placed dorsally on an Alderson phantom. At the same time, kerma area product (KAP) values were recorded. A Monte Carlo calculation of effective dose was also made. Image quality was evaluated with a contrast-detail phantom and Visual Grading. The radiation dose was reduced by lowering the image intensifier entrance dose request, adjusting pulse frequency and scan speed, and by raising tube voltage. The calculated effective dose was reduced from 0.15 to 0.05 mSv with reduction of KAP from 1.07 to 0.25 Gy cm{sup 2} and entrance dose from 0.90 to 0.21 mGy. The image quality was reduced with the Image Quality Figure going from 52 to 62 and a corresponding reduction in image quality as assessed with Visual Grading. The optimization resulted in a dose reduction to 31% of the original effective dose with an acceptable reduction in image quality considering the intended use of the images for angle measurements. (orig.)

  16. Real-time dose compensation methods for scanned ion beam therapy of moving tumors

    International Nuclear Information System (INIS)

    Luechtenborg, Robert

    2012-01-01

    Scanned ion beam therapy provides highly tumor-conformal treatments. So far, only tumors showing no considerable motion during therapy have been treated, as tumor motion and dynamic beam delivery interfere, causing dose deterioration. One proposed technique to mitigate these deteriorations is beam tracking (BT), which adapts the beam position to the moving tumor. Despite application of BT, dose deviations can occur in the case of non-translational motion. In this work, real-time dose compensation combined with beam tracking (RDBT) has been implemented into the control system to compensate for these dose changes by adaptation of the nominal particle numbers during irradiation. Compared to BT, significantly reduced dose deviations were measured using RDBT. Treatment planning studies for lung cancer patients, including the increased biological effectiveness of ions, revealed a significantly reduced over-dose level (3/5 patients) as well as significantly improved dose homogeneity (4/5 patients) for RDBT. Based on these findings, real-time dose compensated re-scanning (RDRS) has been proposed, which potentially supersedes the technically complex fast energy adaptation necessary for BT and RDBT. Significantly improved conformity compared to re-scanning, i.e., averaging of dose deviations by repeated irradiation, was measured in film irradiations. Simulations comparing RDRS to BT revealed reduced under- and overdoses for the former method.

  17. Closed-Loop Neuromorphic Benchmarks

    Directory of Open Access Journals (Sweden)

    Terrence C Stewart

    2015-12-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of minimal simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled.

  18. A simple method to back-project isocenter dose of radiotherapy treatments using EPID transit dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Silveira, T.B.; Cerbaro, B.Q.; Rosa, L.A.R. da, E-mail: thiago.fisimed@gmail.com, E-mail: tbsilveira@inca.gov.br [Instituto de Radioproteção e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro - RJ (Brazil)

    2017-07-01

    The aim of this work was to implement a simple algorithm to evaluate the isocenter dose in a phantom using the back-projected transmitted dose acquired with an Electronic Portal Imaging Device (EPID) available on a Varian Trilogy accelerator with nominal 6 and 10 MV photon beams. The algorithm was developed in MATLAB to calibrate the EPID measured dose in absolute dose, using a deconvolution process, and to incorporate all scattering and attenuation contributions due to photon interactions with the phantom. The modeling process was simplified by using empirical curve fits to describe the contribution of scattering and attenuation effects. The implemented algorithm and method were validated employing 19 patient treatment plans with 104 clinical irradiation fields projected on the phantom used. Results for the EPID absolute dose calibration by deconvolution showed percent deviations lower than 1%. The final method validation presented average percent deviations between isocenter doses calculated by back-projection and isocenter doses determined with an ionization chamber of 1.86% (SD of 1.00%) and -0.94% (SD of 0.61%) for 6 and 10 MV, respectively. Normalized field-by-field analysis showed deviations smaller than 2% for 89% of all data for the 6 MV beams and 94% for the 10 MV beams. It was concluded that the proposed algorithm possesses sufficient accuracy to be used for in vivo dosimetry, being sensitive enough to detect dose delivery errors larger than 3-4% for conformal and intensity modulated radiation therapy techniques. (author)
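
    In essence, the back-projection chain converts the EPID reading to dose, corrects it to the isocenter distance with the inverse-square law, and divides out the transmission from the isocenter plane to the detector. A bare-bones sketch with assumed geometry and an assumed exponential transmission fit; the deconvolution kernels and empirical scatter corrections of the full algorithm are omitted:

    # Sketch: back-projecting an EPID transit reading to isocenter dose,
    #   D_iso = D_epid * (SDD / SAD)**2 / T,
    # where T is the transmission from the isocenter plane (phantom midline)
    # to the detector. Calibration numbers are assumed, for illustration.

    import math

    SAD_CM = 100.0   # source-axis distance
    SDD_CM = 150.0   # source-detector distance
    MU_EFF = 0.05    # effective linear attenuation (1/cm), beam-specific fit

    def transmission(thickness_cm):
        """Empirical transmission through the phantom (exponential fit)."""
        return math.exp(-MU_EFF * thickness_cm)

    def isocenter_dose(d_epid_cgy, phantom_thickness_cm):
        # attenuation acts from the phantom midline (isocenter) to the EPID,
        # i.e. through the downstream half of the phantom
        return d_epid_cgy * (SDD_CM / SAD_CM) ** 2 \
               / transmission(phantom_thickness_cm / 2.0)

    # EPID dose 40 cGy behind 20 cm of phantom
    print(f"Back-projected isocenter dose: {isocenter_dose(40.0, 20.0):.1f} cGy")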

  19. Lens exposure during brain scans using multidetector row CT scanners: methods for estimation of lens dose.

    Science.gov (United States)

    Suzuki, S; Furui, S; Ishitake, T; Abe, T; Machida, H; Takei, R; Ibukuro, K; Watanabe, A; Kidouchi, T; Nakano, Y

    2010-05-01

    Some recent studies on radiation lens injuries have indicated much lower dose thresholds than specified by the current radiation protection guidelines. The purpose of this research was to measure the lens dose during brain CT scans with multidetector row CT and to assess methods for estimating the lens dose. With 8 types of multidetector row CT scanners, both axial and helical scans were obtained for the head part of a human-shaped phantom by using normal clinical settings with the orbitomeatal line as the baseline. We measured the doses on both eyelids by using an RPLGD during whole-brain scans including the orbit with the starting point at the level of the inferior orbital rim. To assess the effect of the starting points on the lens doses, we measured the lens doses by using 2 other starting points for scanning (the orbitomeatal line and the superior orbital rim). The CTDIvols and the lens doses during whole-brain CT including the orbit were 50.9-113.3 mGy and 42.6-103.5 mGy, respectively. The ratios of lens dose to CTDIvol were 80.6%-103.4%. The lens doses decreased as the starting points were set more superiorly. The lens doses during scans from the superior orbital rim were 11.8%-20.9% of the doses during the scans from the inferior orbital rim. CTDIvol can be used to estimate the lens dose during whole-brain CT when the orbit is included in the scanning range.

  20. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Tourism development has an irreplaceable role in the regional policy of almost all countries. This is due to its undeniable benefits for the local population with regard to the economic, social, and environmental spheres. Tourist destinations compete for visitors in the tourism market and subsequently enter into a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process, which lies in the search for suitable referencing partners. The partners are consequently selected to meet general requirements to ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia, and Great Britain. In this way it validates the selected criteria in the frame of the international environment. Hence, it makes it possible to find strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  1. Determination of electron clinical spectra from percentage depth dose (PDD) curves by classical simulated annealing method

    International Nuclear Information System (INIS)

    Visbal, Jorge H. Wilches; Costa, Alessandro M.

    2016-01-01

    The percentage depth dose (PDD) of electron beams represents an important item of data in radiation therapy, since it describes their dosimetric properties. Accurate transport theory, and the Monte Carlo method, have shown obvious differences between the dose distribution of the electron beams of a clinical accelerator in a water phantom and the dose distribution of monoenergetic electrons of the accelerator's nominal energy in water. In radiotherapy, the electron spectrum should be considered to improve the accuracy of dose calculation, since the shape of the PDD curve depends on the way the radiation particles deposit their energy in the patient/phantom, that is, on the spectrum. There are three principal approaches to obtaining electron energy spectra from the central-axis PDD: the Monte Carlo method, direct measurement, and inverse reconstruction. In this work, the simulated annealing method is presented as a practical, reliable, and simple approach to inverse reconstruction and an attractive alternative to the other options. (author)
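
    A skeletal version of the inverse reconstruction: represent the spectrum as non-negative weights over a set of monoenergetic depth-dose curves and anneal the weights until their weighted sum matches the measured PDD. The basis curves and the "measured" curve below are synthetic placeholders; a real application would use Monte Carlo-generated monoenergetic PDDs.

    # Sketch: classical simulated annealing to unfold spectrum weights w_k >= 0
    # such that PDD(z) ~= sum_k w_k * PDD_k(z). Basis PDDs and the "measured"
    # curve are synthetic placeholders for Monte Carlo-generated data.

    import math
    import random

    random.seed(1)

    depths = [i * 0.2 for i in range(30)]                 # depths in cm

    def mono_pdd(energy_mev, z):
        """Toy monoenergetic depth-dose shape (illustrative only)."""
        r = energy_mev / 2.0
        return math.exp(-((z - 0.4 * r) ** 2) / (0.15 * r**2 + 0.05))

    energies = [6.0, 9.0, 12.0]                           # MeV basis
    basis = [[mono_pdd(e, z) for z in depths] for e in energies]
    true_w = [0.2, 0.5, 0.3]
    measured = [sum(w * b[i] for w, b in zip(true_w, basis)) for i in range(len(depths))]

    def cost(w):
        s = sum(w)
        wn = [x / s for x in w]                           # normalize weights
        return sum((measured[i] - sum(x * b[i] for x, b in zip(wn, basis))) ** 2
                   for i in range(len(depths)))

    w = [1.0, 1.0, 1.0]
    T = 1.0
    for step in range(20000):
        k = random.randrange(len(w))
        trial = w[:]
        trial[k] = max(1e-6, trial[k] + random.gauss(0.0, 0.1))
        dE = cost(trial) - cost(w)
        if dE < 0 or random.random() < math.exp(-dE / T):  # Metropolis criterion
            w = trial
        T *= 0.9997                                        # slow exponential cooling

    s = sum(w)
    print("Recovered weights:", [round(x / s, 3) for x in w])  # should approach ~[0.2, 0.5, 0.3]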

  2. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  3. Radiation Organ Doses Received by U.S. Radiologic Technologists: Estimation Methods and Findings.

    Science.gov (United States)

    Simon, Steven L; Weinstock, Robert M; Doody, Michele Morin; Preston, Dale L; Kwon, Deukwoo; Alexander, Bruce H; Miller, Jeremy S; Yoder, R Craig; Bhatti, Parveen; Sigurdson, Alice J; Linet, Martha S

    2010-05-17

    In this paper, we describe recent methodological enhancements and findings from the dose reconstruction component of a study of cancer risks among U.S. radiologic technologists. An earlier version of the dosimetry published in 2006 (Simon et al., Radiat. Res. 166, 174-192, 2006) used physical and statistical models, literature-reported exposure measurements for the years before 1960, and archival personnel monitoring badge data from cohort members through 1984. The data and models were used to estimate unknown occupational radiation doses for 90,000 radiological technologists, incorporating information about each individual's employment practices based on a survey conducted in the mid-1980s. The dosimetry methods presented here, while using many of the same methods as before, now estimate annual and cumulative occupational badge doses (personal dose equivalent) to about 110,000 technologists for each year worked from 1916 to 2006, but with numerous methodological improvements. This dosimetry, using much more comprehensive information on individual use of protection aprons, estimates radiation absorbed doses to 12 organs and tissues (red bone marrow, ovary, colon, brain, lung, heart, female breast, skin of trunk, skin of head and neck and arms, testes, thyroid and lens of the eye). Every technologist's annual dose is estimated as a probability density function (pdf) to account for shared and unshared uncertainties. Major improvements in the dosimetry methods include a substantial increase in the number of cohort member annual badge dose measurements, additional information on individual apron use obtained from surveys conducted in the 1990s and 2005, refined modeling to develop annual badge dose pdfs using Tobit regression, refinements of cohort-based annual badge pdfs to delineate exposures of highly and minimally exposed individuals and to assess minimal detectable limits more accurately, and extensive refinements in organ dose conversion coefficients to

  4. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  5. Simplified calculation method for radiation dose under normal condition of transport

    International Nuclear Information System (INIS)

    Watabe, N.; Ozaki, S.; Sato, K.; Sugahara, A.

    1993-01-01

    In order to estimate radiation doses during the transportation of radioactive materials, the following computer codes are available: RADTRAN, INTERTRAN, and J-TRAN. Because these codes consist of functions for estimating doses not only under normal conditions but also in the case of accidents, when radionuclides may leak and spread into the environment by atmospheric diffusion, the user needs special knowledge and experience. In this presentation, with a view to providing a method by which a person in charge of transportation can calculate doses under normal conditions, we describe how the main parameters upon which the dose depends were extracted and how the dose for a unit of transportation was estimated. (J.P.N.)
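
    For the normal-conditions case, a first-order estimate treats the package as a point gamma source: dose rate ≈ Γ·A/d², accumulated over the exposure time. A sketch of this simplification; the gamma-ray constant is an assumed placeholder and any package shielding is ignored, which makes the estimate conservative:

    # Sketch: first-order dose estimate for a person near a package under
    # normal transport conditions, modeling the package as an unshielded
    # point source:
    #   dose_rate(d) = Gamma * A / d**2,  dose = dose_rate * exposure_time.
    # The gamma-ray constant below is an illustrative placeholder.

    GAMMA_USV_M2_PER_GBQ_H = 0.35   # assumed gamma constant, uSv*m^2/(GBq*h)

    def dose_usv(activity_gbq, distance_m, hours):
        """Point-source, no-shielding estimate; conservative for a real package."""
        return GAMMA_USV_M2_PER_GBQ_H * activity_gbq * hours / distance_m**2

    # e.g. 2 GBq package, 2 m away, 5 h of transport
    print(f"Estimated dose: {dose_usv(2.0, 2.0, 5.0):.2f} uSv")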

  6. Independent dose verification method using TG43 parameters for HDR brachytherapy

    International Nuclear Information System (INIS)

    Kumar, Rajesh; Sharma, S.D.; Kannan, S.; Vijaykumar, C.

    2007-01-01

    High-dose-rate (HDR) brachytherapy has been proven an effective treatment in the definitive management of different types of cancer. In this mode of therapy, almost all treatment units use a single stepping 192Ir source which steps through precalculated treatment positions. Because manual calculation of the dose distribution, and hence the treatment time, is laborious, computerized treatment planning systems (TPS) are used to derive the dose distribution. Though a TPS facilitates dose optimization and treatment time calculation, it is challenging to verify the TPS-calculated dose. The importance of independently verifying the dosimetry prior to treatment delivery has been recognized in various works worldwide and is a requirement of various regulatory agencies. We describe here an independent method used for the sole purpose of verifying HDR dosimetry based on the AAPM TG43 formalism
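
    As one concrete instance of such a check, the sketch below evaluates the TG-43 point-source approximation, D(r) = S_K * Lambda * (r0/r)^2 * g_P(r) * phi_an(r) with r0 = 1 cm. The tabulated radial dose and anisotropy values are illustrative placeholders, not consensus data; an actual verification must use the published TG-43 parameters for the specific source model.

```python
import numpy as np

# Illustrative (NOT consensus) radial dose function and anisotropy factor
# for an Ir-192 HDR source; real checks need published TG-43 parameters.
r_tab   = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 8.0])      # cm
g_tab   = np.array([0.99, 1.00, 1.00, 0.99, 0.97, 0.92])
phi_tab = np.array([0.97, 0.97, 0.98, 0.98, 0.98, 0.98])

def dose_rate_cGy_h(sk_U, r_cm, Lambda=1.11):
    """TG-43 point-source approximation:
    D(r) = S_K * Lambda * (r0 / r)^2 * g_P(r) * phi_an(r), r0 = 1 cm.
    sk_U: air-kerma strength in U (1 U = 1 cGy cm^2 / h)."""
    g   = np.interp(r_cm, r_tab, g_tab)
    phi = np.interp(r_cm, r_tab, phi_tab)
    return sk_U * Lambda * (1.0 / r_cm) ** 2 * g * phi

# Example: S_K = 40700 U (a 10 Ci-class source), dose rate at r = 2 cm.
print(dose_rate_cGy_h(40700.0, 2.0))  # cGy/h
```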

  7. SU-E-J-96: Multi-Axis Dose Accumulation of Noninvasive Image-Guided Breast Brachytherapy Through Biomechanical Modeling of Tissue Deformation Using the Finite Element Method

    Energy Technology Data Exchange (ETDEWEB)

    Rivard, MJ [Tufts University School of Medicine, Boston, MA (United States); Ghadyani, HR [SUNY Farmingdale State College, Farmingdale, NY (United States); Bastien, AD; Lutz, NN [Univeristy Massachusetts Lowell, Lowell, MA (United States); Hepel, JT [Rhode Island Hospital, Providence, RI (United States)

    2015-06-15

    Purpose: Noninvasive image-guided breast brachytherapy delivers conformal HDR Ir-192 brachytherapy treatments with the breast compressed, and treated in the cranial-caudal and medial-lateral directions. This technique subjects breast tissue to extreme deformations not observed for other disease sites. Given that commercially available software for deformable image registration cannot accurately co-register image sets obtained in these two states, a finite element analysis based on a biomechanical model was developed to deform dose distributions for each compression circumstance for dose summation. Methods: The model assumed the breast was under plane stress with values of 30 kPa for Young’s modulus and 0.3 for Poisson’s ratio. Dose distributions from round and skin-dose optimized applicators in cranial-caudal and medial-lateral compressions were deformed using 0.1 cm planar resolution. Dose distributions, skin doses, and dose-volume histograms were generated. Results were examined as a function of breast thickness, applicator size, target size, and offset distance from the center. Results: Over the range of examined thicknesses, target size increased several millimeters as compression thickness decreased. This trend increased with increasing offset distances. Applicator size minimally affected target coverage until applicator size was less than the compressed target size. In all cases with an applicator larger than or equal to the compressed target size, > 90% of the target was covered by > 90% of the prescription dose. In all cases, dose coverage became less uniform as offset distance increased and average dose increased. This effect was more pronounced for smaller target-applicator combinations. Conclusions: The model exhibited skin dose trends that matched MC-generated benchmarking results and clinical measurements within 2% over a similar range of breast thicknesses and target sizes. The model provided quantitative insight on dosimetric treatment variables over
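
    The quoted mechanical assumptions fix the plane-stress constitutive matrix at the heart of such a model. A minimal sketch using the abstract's E = 30 kPa and nu = 0.3 follows; mesh generation, assembly, and the solve itself are beyond this sketch.

```python
import numpy as np

def plane_stress_C(E, nu):
    """Plane-stress constitutive matrix mapping strains
    [eps_x, eps_y, gamma_xy] to stresses [sig_x, sig_y, tau_xy]."""
    c = E / (1.0 - nu ** 2)
    return c * np.array([[1.0, nu,  0.0],
                         [nu,  1.0, 0.0],
                         [0.0, 0.0, (1.0 - nu) / 2.0]])

# Values quoted in the abstract: E = 30 kPa, nu = 0.3.
print(plane_stress_C(30e3, 0.3))  # Pa
```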

  8. SU-E-J-96: Multi-Axis Dose Accumulation of Noninvasive Image-Guided Breast Brachytherapy Through Biomechanical Modeling of Tissue Deformation Using the Finite Element Method

    International Nuclear Information System (INIS)

    Rivard, MJ; Ghadyani, HR; Bastien, AD; Lutz, NN; Hepel, JT

    2015-01-01

    Purpose: Noninvasive image-guided breast brachytherapy delivers conformal HDR Ir-192 brachytherapy treatments with the breast compressed, and treated in the cranial-caudal and medial-lateral directions. This technique subjects breast tissue to extreme deformations not observed for other disease sites. Given that commercially available software for deformable image registration cannot accurately co-register image sets obtained in these two states, a finite element analysis based on a biomechanical model was developed to deform dose distributions for each compression circumstance for dose summation. Methods: The model assumed the breast was under plane stress with values of 30 kPa for Young’s modulus and 0.3 for Poisson’s ratio. Dose distributions from round and skin-dose optimized applicators in cranial-caudal and medial-lateral compressions were deformed using 0.1 cm planar resolution. Dose distributions, skin doses, and dose-volume histograms were generated. Results were examined as a function of breast thickness, applicator size, target size, and offset distance from the center. Results: Over the range of examined thicknesses, target size increased several millimeters as compression thickness decreased. This trend increased with increasing offset distances. Applicator size minimally affected target coverage until applicator size was less than the compressed target size. In all cases with an applicator larger than or equal to the compressed target size, > 90% of the target was covered by > 90% of the prescription dose. In all cases, dose coverage became less uniform as offset distance increased and average dose increased. This effect was more pronounced for smaller target-applicator combinations. Conclusions: The model exhibited skin dose trends that matched MC-generated benchmarking results and clinical measurements within 2% over a similar range of breast thicknesses and target sizes. The model provided quantitative insight on dosimetric treatment variables over

  9. Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Renner, Franziska [Physikalisch-Technische Bundesanstalt (PTB), Braunschweig (Germany)

    2016-11-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.

  10. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  11. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm...... survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  12. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm...... survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...

  13. Dose computation in conformal radiation therapy including geometric uncertainties: Methods and clinical implications

    Science.gov (United States)

    Rosu, Mihaela

    The aim of any radiotherapy is to tailor the tumoricidal radiation dose to the target volume and to deliver as little radiation dose as possible to all other normal tissues. However, the motion and deformation induced in human tissue by ventilatory motion is a major issue, as standard practice usually uses only one computed tomography (CT) scan (and hence one instance of the patient's anatomy) for treatment planning. The intrafraction movement that occurs due to physiological processes over time scales shorter than the delivery of one treatment fraction leads to differences between the planned and delivered dose distributions. Because these differences affect tumors and normal tissues, the tumor control probabilities and normal tissue complication probabilities are likely to be affected by organ motion. In this thesis we apply several methods to compute dose distributions that include the effects of the treatment geometric uncertainties by using the time-varying anatomical information as an alternative to the conventional Planning Target Volume (PTV) approach. The proposed methods depend on the model used to describe the patient's anatomy. The dose and fluence convolution approaches for rigid organ motion are discussed first, with application to liver tumors and the rigid component of lung tumor movements. For non-rigid behavior, a dose reconstruction method that allows the accumulation of dose to the deforming anatomy is introduced and applied to lung tumor treatments. Furthermore, we apply the cumulative dose approach to investigate how much information regarding the deforming patient anatomy is needed at the time of treatment planning for tumors located in the thorax. The results are evaluated from a clinical perspective. All dose calculations are performed using a Monte Carlo based algorithm to ensure more realistic and more accurate handling of tissue heterogeneities, which is of particular importance in lung cancer treatment planning.

  14. A method to acquire CT organ dose map using OSL dosimeters and ATOM anthropomorphic phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Da; Li, Xinhua; Liu, Bob [Division of Diagnostic Imaging Physics and Webster Center for Advanced Research and Education in Radiation, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Gao, Yiming; Xu, X. George [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States)

    2013-08-15

    Purpose: To present the design and procedure of an experimental method for acquiring densely sampled organ dose map for CT applications, based on optically stimulated luminescence (OSL) dosimeters “nanoDots” and standard ATOM anthropomorphic phantoms; and to provide the results of applying the method—a dose data set with good statistics for the comparison with Monte Carlo simulation result in the future.Methods: A standard ATOM phantom has densely located holes (in 3 × 3 cm or 1.5 × 1.5 cm grids), which are too small (5 mm in diameter) to host many types of dosimeters, including the nanoDots. The authors modified the conventional way in which nanoDots are used, by removing the OSL disks from the holders before inserting them inside a standard ATOM phantom for dose measurements. The authors solved three technical difficulties introduced by this modification: (1) energy dependent dose calibration for raw OSL readings; (2) influence of the brief background exposure of OSL disks to dimmed room light; (3) correct pairing between the dose readings and measurement locations. The authors acquired 100 dose measurements at various positions in the phantom, which was scanned using a clinical chest protocol with both angular and z-axis tube current modulations.Results: Dose calibration was performed according to the beam qualities inside the phantom as determined from an established Monte Carlo model of the scanner. The influence of the brief exposure to dimmed room light was evaluated and deemed negligible. Pairing between the OSL readings and measurement locations was ensured by the experimental design. The organ doses measured for a routine adult chest scan protocol ranged from 9.4 to 18.8 mGy, depending on the composition, location, and surrounding anatomy of the organs. The dose distribution across different slices of the phantom strongly depended on the z-axis mA modulation. In the same slice, doses to the soft tissues other than the spinal cord demonstrated

  15. Dose comparison using deformed image registration method on breast cancer radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Won; Kim, Jung Hoon [Dept. of Radiation Oncology, KonYang University Hospital, Daejeon (Korea, Republic of); Won, Young Jin [Dept. of Radiation Oncology, InJe University Ilsan Paik Hospital, Goyang (Korea, Republic of)

    2017-03-15

    The purpose of this study is to reconstruct treatment plans by applying CBCT and deformable image registration (DIR) to capture dose changes arising from changes in patient motion and breast shape in patients with large breasts, and to compare the doses obtained with TWF, FIF and IMRT. DIR between CT and CBCT was performed with MIM6 to create the DIRCT, and each treatment plan was generated. Patients underwent computed tomography simulation in both the prone and supine positions. The homogeneity index (HI), conformity index (CI) and coverage index (CVI) were determined for the left breast as the planning target volume (PTV), and the doses to the lung, heart, and right breast as organs at risk (OAR) were compared using dose-volume histograms and the characteristics of each organ. The HI of the PTV breast increased in all treatment planning methods using the DIRCT, while the CVI and CI decreased in the treatment planning methods using the DIRCT.

  16. ARN Training Course on Advance Methods for Internal Dose Assessment: Application of Ideas Guidelines

    International Nuclear Information System (INIS)

    Rojo, A.M.; Gomez Parada, I.; Puerta Yepes, N.; Gossio, S.

    2010-01-01

    Dose assessment in the case of internal exposure involves the estimation of committed effective dose based on the interpretation of bioassay measurements and on assumptions about the characteristics of the radioactive material, the time pattern, and the pathway of intake. The IDEAS Guidelines provide a method to harmonize dose evaluations using criteria and flow-chart procedures to be followed step by step. The EURADOS Working Group 7 'Internal Dosimetry', in collaboration with the IAEA and the Czech Technical University (CTU) in Prague, promoted the 'EURADOS/IAEA Regional Training Course on Advanced Methods for Internal Dose Assessment: Application of IDEAS Guidelines' to broaden and encourage the use of the IDEAS Guidelines; it took place in Prague (Czech Republic) from 2-6 February 2009. The ARN identified the relevance of this training and requested a place to participate in this activity. Subsequently, the first training course in Argentina took place from 24-28 August to train local internal dosimetry experts. (authors)

  17. Benchmark assessment of density functional methods on group II-VI MX (M = Zn, Cd; X = S, Se, Te) quantum dots

    NARCIS (Netherlands)

    Azpiroz, Jon M.; Ugalde, Jesus M.; Infante, Ivan

    2014-01-01

    In this work, we build a benchmark data set of geometrical parameters, vibrational normal modes, and low-lying excitation energies for MX quantum dots, with M = Cd, Zn, and X = S, Se, Te. The reference database has been constructed by ab initio resolution-of-identity second-order approximate coupled

  18. A simple method for conversion of airborne gamma-ray spectra to ground level doses

    DEFF Research Database (Denmark)

    Korsbech, Uffe C C; Bargholz, Kim

    1996-01-01

    A new and simple method for conversion of airborne NaI(Tl) gamma-ray spectra to dose rates at ground level has been developed. By weighting the channel count rates with the channel numbers a spectrum dose index (SDI) is calculated for each spectrum. Ground level dose rates then are determined...... by multiplying the SDI by an altitude dependent conversion factor. The conversion factors are determined from spectra based on Monte Carlo calculations. The results are compared with measurements in a laboratory calibration set-up. IT-NT-27. June 1996. 27 p....
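
    In code, the SDI reduces to a channel-weighted sum, with the altitude dependence folded into a tabulated conversion factor. A minimal sketch follows, with hypothetical calibration numbers standing in for the Monte Carlo derived factors:

```python
import numpy as np

def spectrum_dose_index(counts):
    """SDI: channel count rates weighted by channel number."""
    channels = np.arange(1, len(counts) + 1)
    return float(np.sum(channels * counts))

def ground_dose_rate(counts, altitude_m, alts, factors):
    """Ground-level dose rate = SDI * altitude-dependent conversion factor,
    interpolated linearly between tabulated altitudes."""
    return spectrum_dose_index(counts) * np.interp(altitude_m, alts, factors)

# Hypothetical calibration table; the real factors come from Monte Carlo.
alts    = np.array([30.0, 60.0, 90.0, 120.0])            # m
factors = np.array([1.0e-6, 1.4e-6, 2.0e-6, 2.9e-6])     # (uGy/h) per SDI unit
spectrum = np.random.poisson(5.0, size=256)              # stand-in NaI(Tl) spectrum
print(ground_dose_rate(spectrum, 75.0, alts, factors))
```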

  19. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area ...

  20. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
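
    To make the solver concrete, here is a small dense stand-in for HPCG's computational core: conjugate gradient preconditioned with symmetric Gauss-Seidel, M = (D + L) D^-1 (D + U). This sketches the numerical method only; the HPCG reference code works on a distributed sparse problem with strict rules about permitted transformations.

```python
import numpy as np
from scipy.linalg import solve_triangular

def sgs_preconditioner(A):
    """Apply M^-1 for the symmetric Gauss-Seidel preconditioner
    M = (D + L) D^-1 (D + U) of a dense SPD matrix A."""
    D  = np.diag(A)          # diagonal entries
    DL = np.tril(A)          # D + L
    DU = np.triu(A)          # D + U
    def apply(r):
        y = solve_triangular(DL, r, lower=True)
        return solve_triangular(DU, D * y, lower=False)
    return apply

def pcg(A, b, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradient."""
    Minv = sgs_preconditioner(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: 1-D Poisson matrix, the sparsity archetype HPCG mimics in 3-D.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
print(np.linalg.norm(A @ pcg(A, b) - b))  # residual near machine precision
```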

  1. A method to acquire CT organ dose map using OSL dosimeters and ATOM anthropomorphic phantoms

    OpenAIRE

    Zhang, Da; Li, Xinhua; Gao, Yiming; Xu, X. George; Liu, Bob

    2013-01-01

    Purpose: To present the design and procedure of an experimental method for acquiring densely sampled organ dose map for CT applications, based on optically stimulated luminescence (OSL) dosimeters “nanoDots” and standard ATOM anthropomorphic phantoms; and to provide the results of applying the method—a dose data set with good statistics for the comparison with Monte Carlo simulation result in the future.

  2. Jaws calibration method to get a homogeneous distribution of dose in the junction of hemi fields

    International Nuclear Information System (INIS)

    Cenizo de Castro, E.; Garcia Pareja, S.; Moreno Saiz, C.; Hernandez Rodriguez, R.; Bodineau Gil, C.; Martin-Viera Cueto, J. A.

    2011-01-01

    Hemi-field treatments are widely used in radiotherapy. Because the tolerance established for the positioning of each jaw is 1 mm, there may be cases of overlap or separation of up to 2 mm. This implies dose heterogeneities of up to 40% in the junction area. This paper presents an accurate jaw calibration method for obtaining homogeneous dose distributions when this type of treatment is used. (Author)
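
    The abstract does not detail the calibration procedure, but the effect it corrects is easy to reproduce: model each hemi-field edge as an error-function penumbra and sum the two abutting fields while the jaws are displaced. The 3 mm penumbra width below is a hypothetical value; sharper penumbras push the hot/cold spots toward the quoted 40%.

```python
import numpy as np
from scipy.special import erf

sigma = 3.0                          # mm, hypothetical penumbra width
x = np.linspace(-20.0, 20.0, 4001)   # mm, distance from the nominal junction

def junction_profile(jaw_offset_mm):
    """Summed relative dose of two matched hemi-fields whose jaws are each
    displaced by jaw_offset_mm (positive = overlap, negative = gap)."""
    left  = 0.5 * (1 + erf((jaw_offset_mm - x) / (np.sqrt(2) * sigma)))
    right = 0.5 * (1 + erf((x + jaw_offset_mm) / (np.sqrt(2) * sigma)))
    return left + right

for off in (-1.0, 0.0, 1.0):         # 1 mm per jaw -> up to 2 mm mismatch
    d = junction_profile(off)
    print(f"offset {off:+.1f} mm per jaw: min {d.min():.2f}, max {d.max():.2f}")
```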

  3. Implementation of a deterministic dose calculation method in targeted radionuclide therapy

    International Nuclear Information System (INIS)

    Reiner, D.

    2009-01-01

    Targeted Radionuclide Therapy (TRT) is a relatively new therapy form for the selective destruction of small malignant tumors and micro-metastases. The principle rests upon the administration of unsealed radioactive compounds coupled to a carrier vehicle. Through these tumor-seeking tracer molecules the radionuclides are deposited exactly at the target region, where they kill the diseased cells by irradiating them from inside. External beam therapy and brachytherapy employ established treatment planning systems for the accurate determination of the delivered dose in order to maximize the therapeutic benefit. No such systems exist for TRT to date, although different dose calculation methodologies have been pursued for over fifty years. In particular, state-of-the-art medical imaging techniques like combined PET/CT or SPECT/CT devices offer great potential for the development of modern therapy planning systems. In principle, two different calculation approaches exist for the determination of absorbed dose estimates. Stochastic methods like Monte Carlo calculations are proven to be very reliable, but they consume huge computation times for a single clinical scenario, from hours to days. Therefore only deterministic methods are feasible for daily clinical application, since physicians require a basis for fast decision-making. This work introduces a deterministic dose calculation method for TRT based on the convolution of the cumulated activity distribution matrix with the discrete dose kernel of the particular emitter. The convolution is accomplished by Fast Fourier Transform in order to speed up the calculation. The voxel model of a spherical tumor is assumed to be enriched with radiopharmaceuticals homogeneously and inhomogeneously, respectively. The same scenario has been implemented in MCNP5 in order to test the reliability of the convolution method. The comparison of the results shows that especially for short range radionuclides
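
    The convolution step itself is compact; a minimal sketch with a toy spherical uptake and a Gaussian placeholder standing in for a real dose point kernel (units and magnitudes are made up):

```python
import numpy as np

def dose_from_activity(cumulated_activity, dose_kernel):
    """FFT convolution of a cumulated-activity map (decays per voxel) with a
    dose point kernel (Gy per decay) sampled on the same voxel grid."""
    shape = [a + k - 1 for a, k in zip(cumulated_activity.shape, dose_kernel.shape)]
    A = np.fft.rfftn(cumulated_activity, shape)
    K = np.fft.rfftn(dose_kernel, shape)
    full = np.fft.irfftn(A * K, shape)
    # Crop the 'same'-sized region centred on the kernel origin.
    starts = [k // 2 for k in dose_kernel.shape]
    sl = tuple(slice(s, s + n) for s, n in zip(starts, cumulated_activity.shape))
    return full[sl]

# Toy case: uniform spherical uptake, Gaussian stand-in kernel.
grid = np.indices((41, 41, 41)) - 20
r2 = (grid ** 2).sum(axis=0)
activity = (r2 <= 8 ** 2).astype(float) * 1e6         # decays per voxel
kernel = np.exp(-r2 / (2 * 2.0 ** 2))
kernel *= 1e-11 / kernel.sum()                        # Gy per decay (made up)
print(dose_from_activity(activity, kernel).max())
```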

  4. Neutron fluence-to-dose equivalent conversion factors: a comparison of data sets and interpolation methods

    International Nuclear Information System (INIS)

    Sims, C.S.; Killough, G.G.

    1983-01-01

    Various segments of the health physics community advocate the use of different sets of neutron fluence-to-dose equivalent conversion factors as a function of energy and different methods of interpolation between discrete points in those data sets. The major data sets and interpolation methods are used to calculate the spectrum average fluence-to-dose equivalent conversion factors for five spectra associated with the various shielded conditions of the Health Physics Research Reactor. The results obtained by use of the different data sets and interpolation methods are compared and discussed. (author)
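
    As an illustration of what such a comparison computes, the sketch below forms the spectrum-averaged conversion factor <h> = integral(phi(E) h(E) dE) / integral(phi(E) dE) using one candidate interpolation scheme (log-log) on a hypothetical table; swapping the interpolation function or the data set reproduces the kind of differences the paper examines.

```python
import numpy as np

def interp_loglog(E, E_tab, h_tab):
    """Log-log interpolation of fluence-to-dose-equivalent factors."""
    return np.exp(np.interp(np.log(E), np.log(E_tab), np.log(h_tab)))

def spectrum_average(E, phi, E_tab, h_tab):
    """<h> = integral(phi * h dE) / integral(phi dE)."""
    h = interp_loglog(E, E_tab, h_tab)
    return np.trapz(phi * h, E) / np.trapz(phi, E)

# Hypothetical conversion-factor table and spectrum, for illustration only.
E_tab = np.array([1e-8, 1e-6, 1e-4, 1e-2, 1.0, 10.0])                     # MeV
h_tab = np.array([1.0e-11, 1.1e-11, 1.0e-11, 5.0e-11, 4.0e-10, 5.0e-10])  # Sv cm^2
E = np.logspace(-7, 0.7, 300)
phi = E ** -0.8 * np.exp(-E / 2.0)   # toy spectrum shape
print(spectrum_average(E, phi, E_tab, h_tab))
```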

  5. Method for pulse to pulse dose reproducibility applied to electron linear accelerators

    International Nuclear Information System (INIS)

    Ighigeanu, D.; Martin, D.; Oproiu, C.; Cirstea, E.; Craciun, G.

    2002-01-01

    An original method for obtaining programmed beam single shots and pulse trains with programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level and of the irradiation process, as well as in pulse radiolysis studies, single pulse dose measurements, and research experiments where pulse-to-pulse dose reproducibility is required. The method is applied to the electron linear accelerators ALIN-10 (6.23 MeV, 82 W) and ALID-7 (5.5 MeV, 670 W), built at NILPRP. In order to implement this method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A and 4 ms) and magnetron (45 kV, 100 A, and 4 ms). An accelerated electron beam exists only when the electron gun and magnetron pulses overlap. The method consists in controlling the overlapping of the pulses in order to deliver the beam in the desired sequence. This control is implemented by a discrete pulse-position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator's 'beam start' command, the ATS controls the electron gun and magnetron pulse overlap and the linac beam is generated. The pulse-to-pulse absorbed dose variation is thus considerably reduced. Programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in the electron linear accelerators' development for obtaining the pulse to pulse dose reproducibility: the method

  6. Optimization in radiotherapy treatment planning thanks to a fast dose calculation method

    International Nuclear Information System (INIS)

    Yang, Mingchao

    2014-01-01

    This thesis deals with radiotherapy treatment planning, which needs a fast and reliable treatment planning system (TPS). The TPS is composed of a dose calculation algorithm and an optimization method. The objective is to design a plan that delivers the dose to the tumor while preserving the surrounding healthy and sensitive tissues. Treatment planning aims to determine the radiation parameters best suited to each patient's treatment. In this thesis, the parameters of treatment with IMRT (intensity-modulated radiation therapy) are the beam angles and the beam intensities. The objective function is multi-criteria with linear constraints. The main objective of this thesis is to demonstrate the feasibility of a treatment planning optimization method based on a fast dose-calculation technique developed by Blanpain (2009). This technique computes the dose by segmenting the patient's phantom into homogeneous meshes. The dose computation is divided into two steps. The first step concerns the meshes: projections and weights are set according to physical and geometrical criteria. The second step concerns the voxels: the dose is computed by evaluating the functions previously associated with their mesh. A reformulation of this technique makes it possible to solve the optimization problem by a gradient descent algorithm. The main advantage of this method is that the beam angle parameters can be optimized continuously in 3 dimensions. The results obtained in this thesis offer many opportunities in the field of radiotherapy treatment planning optimization. (author)

  7. Technical Note: The impact of deformable image registration methods on dose warping.

    Science.gov (United States)

    Qin, An; Liang, Jian; Han, Xiao; O'Connell, Nicolette; Yan, Di

    2018-03-01

    The purpose of this study was to investigate the clinically relevant discrepancy between doses warped by pure image-based deformable image registration (IM-DIR) and by biomechanical-model-based DIR (BM-DIR) on intensity-homogeneous organs. Ten patients (5 head & neck, 5 prostate) were included. A research DIR tool (ADMRIE_v1.12) was utilized for IM-DIR. After IM-DIR, BM-DIR was carried out for organs (parotids, bladder, and rectum) which often encompass sharp dose gradients. Briefly, high-quality tetrahedron meshes were generated and deformable vector fields (DVF) from IM-DIR were interpolated to the surface nodes of the volume meshes as boundary conditions. Then, a FEM solver (ABAQUS_v6.14) was used to simulate the displacement of the internal nodes, which was then interpolated to image-voxel grids to get the more physically plausible DVF. Both the geometrical and the subsequent dose-warping discrepancies were quantified between the two DIR methods. Target registration discrepancy (TRD) was evaluated to show the geometric difference. The doses re-calculated on the second CT were warped to the pre-treatment CT via the two DIRs. Clinically relevant dose parameters and γ passing rates were compared between the two types of warped dose. The correlation between parotid shrinkage and TRD/dose discrepancy was evaluated. The parotid shrank to 75.7% ± 9% of its pre-treatment volume and the percentage of volume with TRD > 1.5 mm was 6.5% ± 4.7%. The normalized mean-dose difference (NMDD) of IM-DIR and BM-DIR was -0.8% ± 1.5%, with range (-4.7% to 1.5%). The 2 mm/2% passing rate was 99.0% ± 1.4%. A moderate correlation was found between parotid shrinkage and TRD and NMDD. The bladder had a NMDD of -9.9% ± 9.7%, with the BM-DIR warped dose systematically higher. Only minor deviation was observed for rectum NMDD (0.5% ± 1.1%). The impact of the DIR method on treatment dose warping is patient- and organ-specific. Generally, intensity-homogeneous organs, which undergo larger deformation/shrinkage during

  8. Development of fluorescent, oscillometric and photometric methods to determine absorbed dose in irradiated fruits and nuts

    International Nuclear Information System (INIS)

    Kovacs, A.; Foeldiak, G.; Hargittai, P.; Miller, S.D.

    2001-01-01

    To ensure suitable quality control of food irradiation technologies and for quarantine authorities, simple routine dosimetry methods are needed for absorbed dose control. Taking into account the requirements at quarantine locations, these methods should allow nondestructive analysis for repeated measurements. Different dosimetry systems with different analytical evaluation methods have been tested and/or developed for absorbed dose measurements in the dose range of 0.1-10 kGy. In order to use the well-accepted ethanol-monochlorobenzene dosimeter solution and the recently developed aqueous alanine solution in small-volume sealed vials, a new portable, digital, and programmable oscillometric reader was developed. To make use of the very sensitive fluorimetric evaluation method, liquid and solid inorganic and organic dosimetry systems were developed for dose control using a new routine, portable, and computer-controlled fluorimeter. Absorption or transmission photometric methods were also applied for dose measurements of solid- or liquid-phase dosimeter systems containing radiochromic dyes, which change colour upon irradiation. (author)

  9. The continual reassessment method: comparison of Bayesian stopping rules for dose-ranging studies.

    Science.gov (United States)

    Zohar, S; Chevret, S

    2001-10-15

    The continual reassessment method (CRM) provides a Bayesian estimation of the maximum tolerated dose (MTD) in phase I clinical trials and is also used to estimate the minimal efficacy dose (MED) in phase II clinical trials. In this paper we propose Bayesian stopping rules for the CRM, based on either posterior or predictive probability distributions, that can be applied sequentially during the trial. These rules aim at early detection of either a mis-choice of dose range or a prefixed gain in the point estimate or accuracy of the estimated probability of response associated with the MTD (or MED). They were compared through a simulation study under six situations that could represent the underlying unknown dose-response (either toxicity or failure) relationship, in terms of sample size, probability of correct selection and bias of the response probability associated with the MTD (or MED). Our results show that the stopping rules act correctly, with early stopping by the first two rules, based on the posterior distribution, when the actual underlying dose-response relationship is far from that initially supposed, while the rules based on predictive gain functions stop inclusions after 20 patients on average whatever the actual dose-response curve, that is, depending mostly on the accumulated data. The stopping rules were then applied to a data set from a dose-ranging phase II clinical trial aiming to estimate the MED of midazolam in the sedation of infants during cardiac catheterization. All these findings suggest the early use of the first two rules to detect a mis-choice of dose range, while they confirm the requirement of including at least 20 patients at the same dose to reach an accurate estimate of the MTD (MED). A two-stage design is under study. Copyright 2001 John Wiley & Sons, Ltd.
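
    For readers unfamiliar with the machinery the stopping rules build on, here is a minimal sketch of the CRM posterior computation for the common one-parameter power model; the skeleton, prior scale, and toy trial data are all hypothetical, and the paper's stopping rules would be layered on top of estimates like these.

```python
import numpy as np

def crm_update(skeleton, doses_given, tox_observed, sigma=1.34, target=0.25):
    """One-parameter power-model CRM: p_i(a) = skeleton_i ** exp(a),
    a ~ N(0, sigma^2). Returns posterior mean toxicity per dose level and
    the recommended level (posterior estimate closest to the target)."""
    a = np.linspace(-4.0, 4.0, 2001)
    prior = np.exp(-a ** 2 / (2 * sigma ** 2))
    p = np.asarray(skeleton)[:, None] ** np.exp(a)[None, :]  # (ndose, ngrid)
    like = np.ones_like(a)
    for d, y in zip(doses_given, tox_observed):
        like *= p[d] ** y * (1 - p[d]) ** (1 - y)
    post = prior * like
    post /= np.trapz(post, a)                 # normalize the posterior on the grid
    p_hat = np.trapz(p * post[None, :], a, axis=1)
    return p_hat, int(np.argmin(np.abs(p_hat - target)))

# Toy trial: 5 dose levels, 6 patients so far, one toxicity at level 2.
skeleton = [0.05, 0.10, 0.20, 0.35, 0.50]
p_hat, nxt = crm_update(skeleton, [0, 1, 1, 2, 2, 2], [0, 0, 0, 0, 1, 0])
print(np.round(p_hat, 3), "-> next dose level:", nxt)
```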

  10. Radiation dose to children in diagnostic radiology. Measurements and methods for clinical optimisation studies

    Energy Technology Data Exchange (ETDEWEB)

    Almen, A.J.

    1995-09-01

    A method for estimating the mean absorbed dose to different organs and tissues was developed for paediatric patients undergoing X-ray investigations. The absorbed dose distribution in water was measured for the specific X-ray beam used. Clinical images were studied to determine X-ray beam positions and field sizes. The size and position of organs in the patient were estimated using ORNL phantoms and complementary clinical information. Conversion factors between the mean absorbed dose to various organs and the entrance surface dose were calculated for five different body sizes. Direct measurements on patients estimating entrance surface dose and energy imparted were performed for common X-ray investigations. The examination technique for a number of paediatric X-ray investigations used in 19 Swedish hospitals was studied. For a simulated pelvis investigation of a 1-year-old child the entrance surface dose was measured and image quality was estimated using a contrast-detail phantom. Mean absorbed doses to organs and tissues in urography, lung, pelvis, thoracic spine, lumbar spine and scoliosis investigations were calculated. Calculations of effective dose were supplemented with risk calculations for specific organs, e.g. the female breast. The work shows that the examination technique in paediatric radiology is not yet optimised, and that the non-optimised procedures contribute to a considerable variation in radiation dose. In order to optimise paediatric radiology there is a need for more standardised methods in patient dosimetry. It is especially important to relate measured quantities to the size of the patient, using e.g. the patient's weight and length. 91 refs, 17 figs, 8 tabs.

  11. Radiation dose to children in diagnostic radiology. Measurements and methods for clinical optimisation studies

    International Nuclear Information System (INIS)

    Almen, A.J.

    1995-09-01

    A method for estimating the mean absorbed dose to different organs and tissues was developed for paediatric patients undergoing X-ray investigations. The absorbed dose distribution in water was measured for the specific X-ray beam used. Clinical images were studied to determine X-ray beam positions and field sizes. The size and position of organs in the patient were estimated using ORNL phantoms and complementary clinical information. Conversion factors between the mean absorbed dose to various organs and the entrance surface dose were calculated for five different body sizes. Direct measurements on patients estimating entrance surface dose and energy imparted were performed for common X-ray investigations. The examination technique for a number of paediatric X-ray investigations used in 19 Swedish hospitals was studied. For a simulated pelvis investigation of a 1-year-old child the entrance surface dose was measured and image quality was estimated using a contrast-detail phantom. Mean absorbed doses to organs and tissues in urography, lung, pelvis, thoracic spine, lumbar spine and scoliosis investigations were calculated. Calculations of effective dose were supplemented with risk calculations for specific organs, e.g. the female breast. The work shows that the examination technique in paediatric radiology is not yet optimised, and that the non-optimised procedures contribute to a considerable variation in radiation dose. In order to optimise paediatric radiology there is a need for more standardised methods in patient dosimetry. It is especially important to relate measured quantities to the size of the patient, using e.g. the patient's weight and length. 91 refs, 17 figs, 8 tabs.

  12. Development of a method to estimate organ doses for pediatric CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Papadakis, Antonios E., E-mail: apapadak@pagni.gr; Perisinakis, Kostas; Damilakis, John [Department of Medical Physics, University Hospital of Heraklion, Faculty of Medicine, University of Crete, P.O. Box 1352, Iraklion, Crete 71110 (Greece)

    2016-05-15

    Purpose: To develop a method for estimating doses to primarily exposed organs in pediatric CT by taking into account patient size and automatic tube current modulation (ATCM). Methods: A Monte Carlo CT dosimetry software package, which creates patient-specific voxelized phantoms, accurately simulates CT exposures, and generates dose images depicting the energy imparted on the exposed volume, was used. Routine head, thorax, and abdomen/pelvis CT examinations in 92 pediatric patients, ranging from 1 month to 14 yr old (49 boys and 43 girls), were simulated on a 64-slice CT scanner. Two sets of simulations were performed in each patient using (i) a fixed tube current (FTC) value over the entire examination length and (ii) the ATCM profile extracted from the DICOM header of the reconstructed images. CTDIvol-normalized organ dose was derived for all primarily irradiated radiosensitive organs. Normalized dose data were correlated to the patient's water-equivalent diameter using log-transformed linear regression analysis. Results: The maximum percent difference in normalized organ dose between FTC and ATCM acquisitions was 10% for eyes in head, 26% for thymus in thorax, and 76% for kidneys in abdomen/pelvis. In most of the organs, the correlation between dose and water-equivalent diameter was significantly improved in ATCM compared to FTC acquisitions (P < 0.001). Conclusions: The proposed method employs size-specific CTDIvol-normalized organ dose coefficients for ATCM-activated and FTC acquisitions in pediatric CT. These coefficients are substantially different between ATCM and FTC modes of operation and enable a more accurate assessment of patient-specific organ dose in the clinical setting.
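
    The fitting step described amounts to a log-transformed linear regression of CTDIvol-normalized organ dose against water-equivalent diameter, ln(D / CTDIvol) = a + b * Dw. A minimal sketch on synthetic data (the coefficients are made up, not the paper's fits):

```python
import numpy as np

def fit_size_coefficients(dw_cm, organ_dose_mGy, ctdi_vol_mGy):
    """Fit ln(D / CTDIvol) = a + b * Dw; returns (a, b)."""
    y = np.log(np.asarray(organ_dose_mGy) / np.asarray(ctdi_vol_mGy))
    b, a = np.polyfit(dw_cm, y, 1)   # polyfit returns highest degree first
    return a, b

def organ_dose(dw_cm, ctdi_vol_mGy, a, b):
    """Size-specific organ dose from the fitted coefficients."""
    return ctdi_vol_mGy * np.exp(a + b * dw_cm)

# Synthetic illustration only.
rng = np.random.default_rng(0)
dw = rng.uniform(10.0, 25.0, 50)                       # cm
ctdi = np.full(50, 5.0)                                # mGy
dose = ctdi * np.exp(1.2 - 0.05 * dw) * rng.lognormal(0.0, 0.05, 50)
a, b = fit_size_coefficients(dw, dose, ctdi)
print(a, b, organ_dose(18.0, 5.0, a, b))
```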

  13. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concepts of the benchmark, the paper covers the application...

  14. Benchmarking of workplace performance

    NARCIS (Netherlands)

    van der Voordt, Theo; Jensen, Per Anker

    2017-01-01

    This paper aims to present a process model of value adding corporate real estate and facilities management and to discuss which indicators can be used to measure and benchmark workplace performance.

    In order to add value to the organisation, the work environment has to provide value for

  15. Multiple methods for assessing the dose to skin exposed to radioactive contamination

    International Nuclear Information System (INIS)

    Dubeau, J.; Heinmiller, B.E.; Corrigan, M.

    2017-01-01

    There is the possibility for a worker at a nuclear installation, such as a nuclear power reactor, a fuel production facility or a medical facility, to come into contact with radioactive contaminants. When such an event occurs, the first order of business is to care for the worker by promptly initiating a decontamination process. Usually, the radiation protection personnel perform a G-M pancake probe measurement of the contamination in situ and collect part or all of the radioactive contamination for further laboratory analysis. The health physicist on duty must then perform, using the available information, a skin dose assessment that will go into the worker's permanent dose record. The contamination situations are often complex and the dose assessment can be laborious. This article compares five dose assessment methods that involve analysis, new technologies and new software. The five methods are applied to 13 actual contamination incidents consisting of direct skin contact, contamination on clothing, and contamination on clothing in the presence of an air gap between the clothing and the skin. This work shows that, for the cases studied, the methods provided dose estimates that were usually within 12% (1σ) of each other in those cases where absolute activity information for every radionuclide was available. One method, which relies simply on a G-M pancake probe measurement, appeared to be particularly useful in situations where a contamination sample could not be recovered for laboratory analysis. (authors)

  16. Fluoxetine Dose and Administration Method Differentially Affect Hippocampal Plasticity in Adult Female Rats

    Directory of Open Access Journals (Sweden)

    Jodi L. Pawluski

    2014-01-01

    Selective serotonin reuptake inhibitor (SSRI) medications are one of the most common treatments for mood disorders. In humans, these medications are taken orally, usually once per day. Unfortunately, administration of antidepressant medications in rodent models is often through injection, oral gavage, or minipump implant, all relatively stressful procedures. The aim of the present study was to investigate how administration of the commonly used SSRI fluoxetine via a wafer cookie compares to fluoxetine administration using an osmotic minipump, with regard to serum drug levels and hippocampal plasticity. For this experiment, adult female Sprague-Dawley rats were divided over the two administration methods, (1) cookie and (2) osmotic minipump, and three fluoxetine treatment doses: 0, 5, or 10 mg/kg/day. Results show that a fluoxetine dose of 5 mg/kg/day, but not 10 mg/kg/day, results in comparable serum levels of fluoxetine and its active metabolite norfluoxetine between the two administration methods. Furthermore, minipump administration of fluoxetine resulted in higher levels of cell proliferation in the granule cell layer (GCL) at a 5 mg dose compared to a 10 mg dose. Synaptophysin expression in the GCL, but not CA3, was significantly lower after fluoxetine treatment, regardless of administration method. These data suggest that the administration method and dose of fluoxetine can differentially affect hippocampal plasticity in the adult female rat.

  17. A simple method for estimating the effective dose in dental CT. Conversion factors and calculation for a clinical low-dose protocol

    International Nuclear Information System (INIS)

    Homolka, P.; Kudler, H.; Nowotny, R.; Gahleitner, A.; Wien Univ.

    2001-01-01

    An easily applicable method to estimate the effective dose from dental computed tomography, including in its definition the high radiosensitivity of the salivary glands, is presented. Effective doses were calculated for a markedly dose-reduced dental CT protocol as well as for standard settings. Data are compared with effective doses from the literature obtained with other modalities frequently used in dental care. Methods: Conversion factors based on the weighted Computed Tomography Dose Index were derived from published data to calculate effective dose values for various CT exposure settings. Results: The conversion factors determined can be used for clinically used kVp settings and prefiltrations. With reduced tube current an effective dose for a CT examination of the maxilla of 22 μSv can be achieved, which compares to values typically obtained with panoramic radiography (26 μSv). A CT scan of the mandible gives 123 μSv, comparable to a full-mouth survey with intraoral films (150 μSv). Conclusion: For standard CT scan protocols of the mandible, effective doses exceed 600 μSv. Hence, low-dose protocols for dental CT should be considered whenever feasible, especially for paediatric patients. If hard-tissue diagnosis is performed, the potential for dose reduction is significant despite the higher image noise levels, as readability is still adequate. (orig.)
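
    The abstract gives no numerical factors beyond the example doses, so the sketch below only illustrates the shape of such a conversion-factor calculation; the factor values, their normalization (per unit CTDIw times scan length), and the protocol numbers are all hypothetical placeholders.

```python
# Hypothetical conversion factors (uSv per mGy*cm); not the paper's data.
FACTORS = {
    ("maxilla", 120): 0.9,
    ("mandible", 120): 2.1,
}

def effective_dose_uSv(region, kvp, ctdi_w_mGy, scan_length_cm):
    """Effective dose ~ f(region, kVp) * CTDI_w * scan length."""
    return FACTORS[(region, kvp)] * ctdi_w_mGy * scan_length_cm

# Low-dose maxilla protocol, e.g. CTDI_w = 6 mGy over a 4 cm scan range.
print(effective_dose_uSv("maxilla", 120, 6.0, 4.0))  # uSv
```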

  18. Target volume uncertainty and a method to visualize its effect on the target dose prescription

    International Nuclear Information System (INIS)

    McCormick, Traci; Dink, Delal; Orcun, Seza; Pekny, Joseph; Rardin, Ron; Baxter, Larry; Thai, Van; Langer, Mark

    2004-01-01

    Purpose: To consider the uncertainty in the construction of target boundaries for optimization, and to demonstrate how the principles of mathematical programming can be applied to determine and display the effect on the tumor dose of making small changes to the target boundary. Methods: The effect on the achievable target dose of making successive small shifts to the target boundary within its range of uncertainty was found by constructing a mixed-integer linear program that automated the placement of the beam angles using the initial target volume. Results: The method was demonstrated using contours taken from a nasopharynx case, with dose limits placed on surrounding structures. In the illustrated case, enlarging the target anteriorly to provide greater assurance of disease coverage did not force a sacrifice in the minimum or mean tumor doses. However, enlarging the margin posteriorly, near a critical structure, dramatically changed the minimum, mean, and maximum tumor doses. Conclusion: Tradeoffs between the position of the target boundary and the minimum target dose can be developed using mixed-integer programming, and the results projected as a guide to contouring and plan selection
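
    The paper's mixed-integer program is not reproduced in the abstract, but its linear core, maximizing the minimum target dose for a fixed candidate beam set subject to normal-structure limits, can be sketched as follows. The dose-influence matrices are random stand-ins; beam-angle selection would add binary variables on top of this.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_beams = 8
A_target = rng.uniform(0.5, 1.0, (30, n_beams))  # Gy per unit weight, target voxels
A_oar    = rng.uniform(0.0, 0.4, (20, n_beams))  # Gy per unit weight, OAR voxels
oar_limit = 20.0                                 # Gy

# Variables: beam weights w >= 0 plus t = minimum target dose; maximize t.
c = np.zeros(n_beams + 1)
c[-1] = -1.0                                     # linprog minimizes, so use -t
A_ub = np.vstack([
    np.hstack([-A_target, np.ones((30, 1))]),    # t <= dose in every target voxel
    np.hstack([A_oar, np.zeros((20, 1))]),       # OAR voxel dose <= limit
])
b_ub = np.hstack([np.zeros(30), np.full(20, oar_limit)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n_beams + 1))
print("achievable minimum target dose:", res.x[-1])
```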

  19. Comparison between evaluating methods about the protocols of different dose distributions in radiotherapy

    International Nuclear Information System (INIS)

    Ju Yongjian; Chen Meihua; Sun Fuyin; Zhang Liang'an; Lei Chengzhi

    2004-01-01

    Objective: To study how the relationship between tumor control probability (TCP) or equivalent uniform dose (EUD) and the degree of dose heterogeneity changes with variable biological parameter values of the tumor. Methods: From the definitions of TCP and EUD, calculating equations were derived. The dose distributions in the tumor were assumed to be Gaussian. The volume of the tumor was divided into several voxels, and the absorbed doses of these voxels were simulated by Monte Carlo methods. Then, for different values of the radiosensitivity (α) and the potential doubling time of the clonogens (Tp), the relationships between TCP or EUD and the standard deviation of the dose (Sd) were evaluated. Results: The TCP-Sd curves were influenced by the variable α and Tp values, but the EUD-Sd curves showed little variation. Conclusion: When radiotherapy protocols with different dose distributions are compared, if the biological parameter values of the tumor are known exactly, it is better to use the TCP; otherwise the EUD is preferred
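
    A minimal sketch of the kind of simulation described, using the Poisson TCP model and the cell-survival-based EUD; proliferation (the Tp term) is omitted here, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
alpha = 0.35           # Gy^-1, illustrative radiosensitivity
n0_per_voxel = 1e5     # clonogens per voxel (illustrative)
n_voxels = 1000

def tcp(doses):
    """Poisson TCP, proliferation neglected:
    TCP = prod_i exp(-N0 * exp(-alpha * D_i))."""
    surviving = n0_per_voxel * np.exp(-alpha * doses)
    return float(np.exp(-surviving.sum()))

def eud(doses):
    """EUD for the exponential cell-survival model:
    EUD = -(1/alpha) * ln(mean(exp(-alpha * D_i)))."""
    return float(-np.log(np.mean(np.exp(-alpha * doses))) / alpha)

# Gaussian voxel-dose distributions of increasing heterogeneity (mean 60 Gy).
for sd in (0.0, 1.0, 3.0, 5.0):
    d = np.full(n_voxels, 60.0) if sd == 0 else rng.normal(60.0, sd, n_voxels)
    print(f"S_d = {sd:.0f} Gy: TCP = {tcp(d):.3f}, EUD = {eud(d):.1f} Gy")
```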

  20. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-03-01

    The purpose of this research is to provide a fundamental computational investigation into the possible integration of experimental activities with the Advanced Test Reactor Critical (ATR-C) facility with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of integral data for improving neutron cross sections. Further assessment of oscillation

  1. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-07-01

    The purpose of this document is to identify some suggested types of experiments that can be performed in the Advanced Test Reactor Critical (ATR-C) facility. A fundamental computational investigation is provided to demonstrate possible integration of experimental activities in the ATR-C with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of

  2. Radioactivity in food and the environment: calculations of UK radiation doses using integrated methods

    International Nuclear Information System (INIS)

    Allott, Rob

    2003-01-01

    Dear Sir: I read with interest the paper by W C Camplin, G P Brownless, G D Round, K Winpenny and G J Hunt from the Centre for Environment, Fisheries and Aquaculture Science (CEFAS) on 'Radioactivity in food and the environment: calculations of UK radiation doses using integrated methods' in the December 2002 issue of this journal (J. Radiol. Prot. 22 371-88). The Environment Agency has a keen interest in the development of a robust methodology for assessing total doses which have been received by members of the public from authorised discharges of radioactive substances to the environment. Total dose in this context means the dose received from all authorised discharges and all exposure pathways (e.g. inhalation, external irradiation from radionuclides in sediment/soil, direct radiation from operations on a nuclear site, consumption of food etc). I chair a 'total retrospective dose assessment' working group with representatives from the Scottish Environment Protection Agency (SEPA), Food Standards Agency (FSA), National Radiological Protection Board, CEFAS and BNFL which began discussing precisely this issue during 2002. This group is a sub-group of the National Dose Assessment Working Group which was set up in April 2002 (J. Radiol. Prot. 22 318-9). The Environment Agency, Food Standards Agency and the Nuclear Installations Inspectorate previously undertook joint research into the most appropriate methodology to use for total dose assessment (J J Hancox, S J Stansby and M C Thorne 2002 The Development of a Methodology to Assess Population Doses from Multiple Source and Exposure Pathways of Radioactivity (Environment Agency R and D Technical Report P3-070/TR). This work came to broadly the same conclusion as the work by CEFAS, that an individual dose method is probably the most appropriate method to use. This research and that undertaken by CEFAS will help the total retrospective dose assessment working group refine a set of principles and a methodology for the

  3. A simple method for dose measurements in a biological irradiation facility

    International Nuclear Information System (INIS)

    Zarand, P.

    1973-01-01

    Changes in dose rate caused by reactor poisoning and fuel burn-up were investigated in a biological irradiation facility. Measurements were made by a GM counter monitoring system previously described. The absorbed-dose rate in mice was calculated from the kerma rate. Absorbed neutron dose plotted versus effective neutron fluence gives a straight line, even with values measured using different filters in various core configurations. The curve representing the effect of reactor poisoning on the neutron dose rate shows a maximum, and the difference found after a four-day period does not exceed 5%. The calculations described permit more precise planning and intercomparison of experiments than either the activation technique or the ionization method. (B.A.)

  4. Development of a method to calculate organ doses for the upper gastrointestinal fluoroscopic examination

    International Nuclear Information System (INIS)

    Suleiman, O.H.

    1989-01-01

    A method was developed to quantitatively measure the upper gastrointestinal fluoroscopic examination in order to calculate organ doses. The dynamic examination was approximated with a set of discrete x-ray fields. Once the examination was segmented into discrete x-ray fields appropriate organ dose tables were generated using an existing computer program for organ dose calculations. This, along with knowledge of the radiation exposures associated with each of the fields, enabled the calculation of organ doses for the entire dynamic examination. The protocol involves videotaping the examination while fluoroscopic technique factors, tube current and tube potential, are simultaneously recorded on the audio tracks of the videotape. Subsequent analysis allows the dynamic examination to be segmented into a series of discrete x-ray fields uniquely defined by field size, projection, and anatomical region. The anatomical regions associated with the upper gastrointestinal examination were observed to be the upper, middle, and lower esophagus, the gastroesophageal junction, the stomach, and the duodenum
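
    As a worked illustration of the segmentation approach (field names, exposures and dose coefficients below are invented; the paper's actual organ-dose tables come from an existing organ-dose program), the total organ dose is the exposure-weighted sum over the discrete fields:

```python
# Hypothetical illustration of the discrete-field method: total organ dose is
# the sum over segmented x-ray fields of (field exposure) x (organ dose per
# unit exposure for that field's size, projection and anatomical region).

# Exposures in R, coefficients in mGy per R -- invented numbers for illustration.
fields = [
    {"region": "esophagus-upper", "exposure_R": 0.8, "stomach_mGy_per_R": 0.02},
    {"region": "stomach",         "exposure_R": 2.5, "stomach_mGy_per_R": 1.10},
    {"region": "duodenum",        "exposure_R": 1.2, "stomach_mGy_per_R": 0.45},
]

stomach_dose_mGy = sum(f["exposure_R"] * f["stomach_mGy_per_R"] for f in fields)
print(f"stomach dose for the whole examination: {stomach_dose_mGy:.2f} mGy")
```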

  5. Calibration and intercomparison methods of dose calibrators used in nuclear medicine facilities

    International Nuclear Information System (INIS)

    Costa, Alessandro Martins da

    1999-01-01

    Dose calibrators are used in most nuclear medicine facilities to determine the amount of radioactivity administered to a patient in a particular investigation or therapeutic procedure. It is therefore of vital importance that the equipment used performs well and is regularly calibrated at an authorized laboratory. This occurs if adequate quality assurance procedures are carried out. Such quality control tests should be performed at intervals ranging from daily to biannual or yearly, testing, for example, accuracy and precision, reproducibility and response linearity. In this work a commercial dose calibrator was calibrated with solutions of radionuclides used in nuclear medicine. Simple instrument tests, such as response linearity and the variation of response with source volume at constant activity concentration, were performed. This instrument can now be used as a working standard for the calibration of other dose calibrators. An intercomparison procedure was proposed as a method of quality control of dose calibrators used in nuclear medicine facilities. (author)
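
    One of the quality-control tests mentioned, response linearity, is commonly checked by following the decay of a short-lived source; a minimal sketch (the choice of Tc-99m and the 5% tolerance are illustrative assumptions, not the tests performed in this work):

```python
import math

# Decay-based linearity check for a dose calibrator: readings of a decaying
# source (here assumed to be Tc-99m, T1/2 = 6.01 h) are compared with the
# activity expected from the initial reading. The 5% tolerance is illustrative.

T_HALF_H = 6.01
readings = [(0.0, 400.0), (6.0, 201.5), (12.0, 99.0), (24.0, 25.6)]  # (h, MBq)

a0 = readings[0][1]
for t, measured in readings:
    expected = a0 * math.exp(-math.log(2) * t / T_HALF_H)
    dev = (measured - expected) / expected * 100.0
    flag = "OK" if abs(dev) <= 5.0 else "FAIL"
    print(f"t={t:5.1f} h  measured={measured:6.1f}  expected={expected:6.1f}  "
          f"dev={dev:+5.1f}%  {flag}")
```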

  6. Evaluation of Patient Radiation Dose during Cardiac Interventional Procedures: What Is the Most Effective Method?

    International Nuclear Information System (INIS)

    Chida, K.; Saito, H.; Ishibashi, T.; Zuguchi, M.; Kagaya, Y.; Takahashi, S.

    2009-01-01

    Cardiac interventional radiology carries lower risks than surgical procedures, despite the fact that radiation doses from cardiac intervention procedures are the highest of any commonly performed general X-ray examination. Maximum radiation skin doses (MSDs) should be determined to avoid radiation-associated skin injuries in patients undergoing cardiac intervention procedures. However, real-time evaluation of MSD is unavailable for many cardiac intervention procedures. This review describes methods of determining MSD during cardiac intervention procedures. Currently, in most cardiac intervention procedures, real-time measurement of MSD is not feasible. Thus, we recommend that physicians record the patient's total entrance skin dose, such as the dose at the interventional reference point when it can be monitored, in order to estimate MSD in intervention procedures.

  7. Methods to verify absorbed dose of irradiated containers and evaluation of dosimeters

    International Nuclear Information System (INIS)

    Gao Meixu; Wang Chuanyao; Tang Zhangxong; Li Shurong

    2001-01-01

    The dose distribution in irradiated food containers was studied and several methods to verify absorbed dose were evaluated. The minimum absorbed dose in the five treated orange containers occurred at the top of the highest or the bottom of the lowest container. The Dmax/Dmin ratio in this study was 1.45 for irradiation in a commercial 60Co facility. The density of the orange containers was about 0.391 g/cm3. The evaluation of dosimeters showed that the PMMA-YL and clear PMMA dosimeters have a linear dose response, and the word NOT in the STERIN-125 and STERIN-300 indicators was covered completely at doses of 125 and 300 Gy, respectively. (author)

  8. Repeated dose titration versus age-based method in electroconvulsive therapy: a pilot study.

    Science.gov (United States)

    Aten, Jan Jaap; Oudega, Mardien; van Exel, Eric; Stek, Max L; van Waarde, Jeroen A

    2015-06-01

    In electroconvulsive therapy (ECT), a dose titration method (DTM) was suggested to be more individualized and therefore more accurate than formula-based dosing methods. A repeated DTM (every sixth session, with dose adjustment accordingly) was compared to an age-based method (ABM) regarding treatment characteristics, clinical outcome, and cognitive functioning after ECT. Thirty-nine unipolar depressed patients dosed using repeated DTM and 40 matched patients treated with ABM were compared. Montgomery-Åsberg Depression Rating Scale (MADRS) and Mini-Mental State Examination (MMSE) scores were assessed at baseline and at the end of the index course, as was the total number of ECT sessions. Both groups were similar regarding age, sex, psychotic features, mean baseline MADRS, and median baseline MMSE. At the end of the index course, the two methods showed equal outcomes [mean end MADRS, 11.6 ± 8.3 in DTM and 9.5 ± 7.6 in ABM (P = 0.26); median end MMSE, 28 (25-29) and 28 (25-29.8), respectively (P = 0.81)]. However, the median number of ECT sessions differed [16 (11-22) in DTM versus 12 (10-14.8) in ABM; P = 0.02]. Using regression analysis, dosing method and age were independently associated with the total number of ECT sessions, with fewer sessions needed in ABM (P = 0.02) and in older patients (P = 0.001). In this comparative cohort study, ABM and DTM showed equal outcomes for depression and cognition. However, the median ECT course duration in repeated DTM appeared longer. Additionally, higher age was associated with shorter ECT courses regardless of the dosing method. Further prospective studies are needed to confirm these findings.

  9. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool-EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  10. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. Excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio obtained with the BUGLE-93 library for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used.

  11. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  12. Rapid radiological characterization method based on the use of dose coefficients

    International Nuclear Information System (INIS)

    Dulama, C.; Toma, Al.; Dobrin, R.; Valeca, M.

    2010-01-01

    Intervention actions in case of radiological emergencies and exploratory radiological surveys require rapid methods for evaluating the range and extent of contamination. When a simple and homogeneous radionuclide composition characterizes the radioactive contamination, surrogate measurements can be used to reduce the costs implied by laboratory analyses and to speed up decision support. A dose-rate-measurement-based methodology can be used in conjunction with adequate dose coefficients to assess radionuclide inventories and to calculate dose projections for various intervention scenarios. The paper presents the results obtained for dose coefficients in some particular exposure geometries and the methodology used for deriving dose-rate guidelines from the activity concentration upper levels specified as contamination limits. All calculations were performed using the commercial software MicroShield from Grove Software Inc. A test case was selected to meet the conditions from EPA Federal Guidance Report No. 12 (FGR 12) concerning the evaluation of dose coefficients for external exposure from contaminated soil, and the obtained results were compared to values given in the referred document. The geometries considered as test cases are: contaminated ground surface (infinite extended homogeneous surface contamination) and soil contaminated to a depth of 15 cm. As shown by the results, the values agree within 50% relative difference for most of the cases. The greatest discrepancies were observed for the depth-contamination simulation and for radionuclides with complicated gamma emission, owing to the different approaches of MicroShield and FGR 12. A case study is presented for validation of the methodology, where both dose-rate measurements and laboratory analyses were performed on an extended quasi-homogeneous NORM contamination. The dose rate estimations obtained by applying the dose coefficients to the radionuclide concentrations
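
    The core of such a rapid method is a linear mapping between activity concentration and dose rate; a minimal sketch (the coefficient values below are placeholders, not FGR 12 or MicroShield output):

```python
# Rapid screening: ambient dose rate predicted from measured activity
# concentrations via per-nuclide dose coefficients for a fixed exposure
# geometry (e.g. infinite ground-surface contamination). Coefficient values
# here are placeholders, not FGR 12 or MicroShield results.

dose_coeff_nSv_h_per_kBq_m2 = {"Cs-137": 2.1, "Co-60": 8.9}   # hypothetical
concentration_kBq_m2 = {"Cs-137": 50.0, "Co-60": 4.0}          # survey result

dose_rate = sum(concentration_kBq_m2[n] * dose_coeff_nSv_h_per_kBq_m2[n]
                for n in concentration_kBq_m2)
print(f"predicted dose rate: {dose_rate:.0f} nSv/h")

# Inverting the same relation turns an activity-concentration limit into the
# kind of dose-rate guideline the abstract describes.
limit_kBq_m2 = 100.0
guideline = limit_kBq_m2 * dose_coeff_nSv_h_per_kBq_m2["Cs-137"]
print(f"dose-rate guideline for a 100 kBq/m2 Cs-137 limit: {guideline:.0f} nSv/h")
```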

  13. Environmental dose rate assessment of ITER using the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Karimian Alireza

    2014-01-01

    Exposure to radiation is one of the main sources of risk to staff employed in reactor facilities. The staff of a tokamak are exposed to a wide range of neutrons and photons around the tokamak hall. The International Thermonuclear Experimental Reactor (ITER) is a nuclear fusion engineering project and the most advanced experimental tokamak in the world. From the radiobiological point of view, ITER dose rate assessment is particularly important. The aim of this study is the assessment of the amount of radiation in ITER during its normal operation in a radial direction from the plasma chamber to the tokamak hall. To achieve this goal, the ITER system and its components were simulated by the Monte Carlo method using the MCNPX 2.6.0 code. Furthermore, the equivalent dose rates of some radiosensitive organs of the human body were calculated using the medical internal radiation dose phantom. Our study is based on deuterium-tritium plasma burning with 14.1 MeV neutron production and also photon radiation due to neutron activation. As our results show, the total equivalent dose rate on the outside of the bioshield wall of the tokamak hall is about 1 mSv per year, which is less than the annual occupational dose rate limit during the normal operation of ITER. Also, the equivalent dose rates of radiosensitive organs show that the maximum dose rate belongs to the kidney. The data may help calculate how long the staff can stay in such an environment before the equivalent dose rates reach the whole-body dose limits.
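
    The closing remark about permissible stay times reduces to a simple quotient; a minimal sketch (only the roughly 1 mSv/yr figure is taken from the abstract; the limit and working-year values are common reference assumptions):

```python
# Rough stay-time estimate: hours of occupancy before an annual dose limit is
# reached at a given equivalent dose rate. The 20 mSv/yr limit and the 2000 h
# working year are common reference values, not figures from the study.

dose_rate_mSv_per_h = 1.0 / 2000.0   # ~1 mSv/yr spread over a 2000 h working year
annual_limit_mSv = 20.0

allowed_hours = annual_limit_mSv / dose_rate_mSv_per_h
print(f"occupancy before reaching the limit: {allowed_hours:.0f} h")  # 40000 h
```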

  14. Towards global benchmarking of food environments and policies to reduce obesity and diet-related non-communicable diseases: design and methods for nation-wide surveys.

    Science.gov (United States)

    Vandevijvere, Stefanie; Swinburn, Boyd

    2014-05-15

    Unhealthy diets are heavily driven by unhealthy food environments. The International Network for Food and Obesity/non-communicable diseases (NCDs) Research, Monitoring and Action Support (INFORMAS) has been established to reduce obesity, NCDs and their related inequalities globally. This paper describes the design and methods of the first-ever, comprehensive national survey on the healthiness of food environments and the public and private sector policies influencing them, as a first step towards global monitoring of food environments and policies. A package of 11 substudies has been identified: (1) food composition, labelling and promotion on food packages; (2) food prices, shelf space and placement of foods in different outlets (mainly supermarkets); (3) food provision in schools/early childhood education (ECE) services and outdoor food promotion around schools/ECE services; (4) density of and proximity to food outlets in communities; food promotion to children via (5) television, (6) magazines, (7) sport club sponsorships, and (8) internet and social media; (9) analysis of the impact of trade and investment agreements on food environments; (10) government policies and actions; and (11) private sector actions and practices. For the substudies on food prices, provision, promotion and retail, 'environmental equity' indicators have been developed to check progress towards reducing diet-related health inequalities. Indicators for these modules will be assessed by tertiles of area deprivation index or school deciles. International 'best practice benchmarks' will be identified, against which to compare progress of countries on improving the healthiness of their food environments and policies. This research is highly original due to the very 'upstream' approach being taken and its direct policy relevance. The detailed protocols will be offered to and adapted for countries of varying size and income in order to establish INFORMAS globally as a new monitoring initiative

  15. Evaluation of Deformable Image Registration Methods for Dose Monitoring in Head and Neck Radiotherapy

    Directory of Open Access Journals (Sweden)

    Bastien Rigaud

    2015-01-01

    In the context of head and neck cancer (HNC) adaptive radiation therapy (ART), the two purposes of the study were to compare the performance of multiple deformable image registration (DIR) methods and to quantify their impact on dose accumulation in healthy structures. Fifteen HNC patients had a planning computed tomography (CT0) and weekly CTs during the 7 weeks of intensity-modulated radiation therapy (IMRT). Ten DIR approaches using different registration methods (demons or B-spline free-form deformation (FFD)), preprocessing, and similarity metrics were tested. Two observers identified 14 landmarks (LM) on each CT scan to compute the LM registration error. The cumulated doses estimated by each method were compared. The two most effective DIR methods were the demons and the FFD, both with the mutual information (MI) metric and the filtered CTs. The corresponding LM registration accuracy (precision) was 2.44 mm (1.30 mm) and 2.54 mm (1.33 mm), respectively. The corresponding LM estimated cumulated dose accuracy (dose precision) was 0.85 Gy (0.93 Gy) and 0.88 Gy (0.95 Gy), respectively. The mean uncertainty (difference between maximal and minimal dose considering all the 10 methods) in estimating the cumulated mean dose to the parotid gland (PG) was 4.03 Gy (SD = 2.27 Gy; range: 1.06–8.91 Gy).

  16. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...

  17. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations.

    Science.gov (United States)

    Moore, Bria M; Brady, Samuel L; Mirro, Amy E; Kaufman, Robert A

    2014-07-01

    To investigate the correlation of size-specific dose estimate (SSDE) with absorbed organ dose, and to develop a simple methodology for estimating patient organ dose in a pediatric population (5-55 kg). Four physical anthropomorphic phantoms representing a range of pediatric body habitus were scanned with metal oxide semiconductor field effect transistor (MOSFET) dosimeters placed at 23 organ locations to determine absolute organ dose. Phantom absolute organ dose was divided by phantom SSDE to determine the correlation between organ dose and SSDE. Organ dose correlation factors, CF(organ,SSDE), were then multiplied by patient-specific SSDE to estimate patient organ dose. The CF(organ,SSDE) values were used to retrospectively estimate individual organ doses from 352 chest and 241 abdominopelvic pediatric CT examinations, where mean patient weight was 22 kg ± 15 (range 5-55 kg) and mean patient age was 6 yrs ± 5 (range 4 months to 23 yrs). Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Phantom effective diameters were matched with patient population effective diameters to within 4 cm, thus showing appropriate scalability of the phantoms across the entire pediatric population in this study. Individual CF(organ,SSDE) values were determined for a total of 23 organs in the chest and abdominopelvic region across nine weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7-1.4) and abdominopelvic region (average 0.9; range 0.7-1.3) was near unity. For organs/tissues that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range 0.1-0.4) in both the chest and abdominopelvic regions. A means to estimate patient organ dose was demonstrated. Calculated patient organ dose, using patient SSDE and CF(organ,SSDE), was compared to previously published pediatric patient doses that accounted for
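
    The estimation chain described reduces to two multiplications; a minimal sketch (the size-conversion function and CF(organ,SSDE) values below are invented placeholders standing in for the published lookup tables and the paper's measured factors):

```python
# Organ dose estimate from SSDE: CTDIvol is scaled to SSDE by a size-dependent
# conversion factor (a function of patient effective diameter), then multiplied
# by the organ-specific correlation factor. All numeric factors here are
# placeholders for the tabulated values.

def size_conversion_factor(effective_diameter_cm: float) -> float:
    # Hypothetical monotone-decreasing stand-in for the tabulated factors.
    return max(0.5, 2.0 - 0.05 * effective_diameter_cm)

cf_organ = {"liver": 1.1, "stomach": 0.9, "skin": 0.3}  # hypothetical CF(organ,SSDE)

ctdi_vol_mGy = 4.0
eff_diam_cm = 18.0
ssde = ctdi_vol_mGy * size_conversion_factor(eff_diam_cm)

for organ, cf in cf_organ.items():
    print(f"{organ:8s}: {cf * ssde:.2f} mGy")
```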

  18. Application of the Monte Carlo method to estimate doses in a radioactive waste drum environment

    International Nuclear Information System (INIS)

    Rodenas, J.; Garcia, T.; Burgos, M.C.; Felipe, A.; Sanchez-Mayoral, M.L.

    2002-01-01

    During refuelling operations in a nuclear power plant, filtration is used to remove non-soluble radionuclides from the reactor pool water. Filter cartridges accumulate high radioactivity, so they are usually placed into a drum. When the operation ends, the drum is filled with concrete and stored along with other drums containing radioactive wastes. Operators working in the refuelling plant near these radwaste drums can receive high dose rates. It is therefore convenient to estimate those doses and apply the ALARA criterion for dose reduction to workers. The Monte Carlo method has been applied, using the MCNP 4B code, to simulate the drum containing contaminated filters and estimate doses produced in the drum environment. In the paper, an analysis of the results obtained with the MCNP code is performed. The influence on the evaluated doses of the distance from the drum and of interposed shielding barriers has been studied. The source term has also been analysed to check the importance of the isotope composition. Two different geometric models have been considered in order to simplify calculations. Results have been compared with dose measurements in the plant in order to validate the calculation procedure. This work was developed at the Nuclear Engineering Department of the Polytechnic University of Valencia in collaboration with IBERINCO in the frame of an R&D project sponsored by IBERINCO

  19. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point...... solvers in IPOPT and FMINCON, and the sequential quadratic programming method in SNOPT, are benchmarked on the library using performance profiles. Whenever possible the methods are applied to both the nested and the Simultaneous Analysis and Design (SAND) formulations of the problem. The performance...... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of the exact Hessians in SAND formulations, generally produce designs with better objective function values. However, with the benchmarked implementations solving...

  20. Two non-parametric methods for derivation of constraints from radiotherapy dose-histogram data

    Science.gov (United States)

    Ebert, M. A.; Gulliford, S. L.; Buettner, F.; Foo, K.; Haworth, A.; Kennedy, A.; Joseph, D. J.; Denham, J. W.

    2014-07-01

    Dose constraints based on histograms provide a convenient and widely-used method for informing and guiding radiotherapy treatment planning. Methods of derivation of such constraints are often poorly described. Two non-parametric methods for derivation of constraints are described and investigated in the context of determination of dose-specific cut-points—values of the free parameter (e.g., percentage volume of the irradiated organ) which best reflect resulting changes in complication incidence. A method based on receiver operating characteristic (ROC) analysis and one based on a maximally-selected standardized rank sum are described and compared using rectal toxicity data from a prostate radiotherapy trial. Multiple test corrections are applied using a free step-down resampling algorithm, which accounts for the large number of tests undertaken to search for optimal cut-points and the inherent correlation between dose-histogram points. Both methods provide consistent significant cut-point values, with the rank sum method displaying some sensitivity to the underlying data. The ROC method is simple to implement and can utilize a complication atlas, though an advantage of the rank sum method is the ability to incorporate all complication grades without the need for grade dichotomization.
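
    A minimal sketch of the ROC-based cut-point search (synthetic data; the resampling-based multiple-testing correction described in the paper is omitted): each candidate value of the dose-histogram parameter is treated as a classifier threshold, and the value maximizing Youden's J is kept.

```python
import numpy as np

# ROC-style search for a dose-histogram cut-point: for each candidate
# threshold on e.g. "% rectal volume receiving >= 60 Gy", classify patients
# as above/below and score against observed toxicity. Youden's J = TPR - FPR
# selects the operating point. Data are synthetic.

rng = np.random.default_rng(0)
v60 = rng.uniform(10, 60, 200)                          # synthetic V60 values (%)
toxicity = (rng.random(200) < (v60 / 100)).astype(int)  # synthetic outcomes

best_j, best_cut = -1.0, None
for cut in np.unique(v60):
    pred = v60 >= cut
    tpr = pred[toxicity == 1].mean()   # sensitivity at this cut-point
    fpr = pred[toxicity == 0].mean()   # 1 - specificity at this cut-point
    j = tpr - fpr
    if j > best_j:
        best_j, best_cut = j, cut

print(f"optimal cut-point: V60 >= {best_cut:.1f}%  (Youden J = {best_j:.2f})")
```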

  1. Blind method of clustering for the evaluation of the dose received by personnel in two methods of administration of radiopharmaceuticals

    International Nuclear Information System (INIS)

    VerdeVelasco, J. M.; Gonzalez Gonzalez, M.; Montes Fuentes, C.; Verde Velasco, J.; Gonzalez Blanco, F. J.; Ramos Pacho, J. A.

    2013-01-01

    The difficulty of injecting drugs labelled with radioactive isotopes while the syringe is inside its lead shield means that in many cases staff choose to use the syringe outside the lead shield, thereby increasing the radiation dose they receive. Our service proposes a different methodology, cannulating a venous line through a catheter, which allows administration, in all cases, with the syringe inside the lead shield. We examine whether significant differences can be observed, both in the dose absorbed by the staff and in the time taken to administer the drug, between the proposed method and injection without the shield. (Author)

  2. Reevaluation of nasal swab method for dose estimation at nuclear emergency accident

    International Nuclear Information System (INIS)

    Yamada, Yuji; Fukutsu, Kumiko; Kurihara, Osamu; Akashi, Makoto

    2008-01-01

    The ICRP Publication 66 human respiratory tract model has been used extensively in exposure dose assessment. It is well known that the respiratory deposition efficiency of inhaled aerosol and its deposition region depend strongly on the particle size. In most exposure accidents, however, the size of the inhaled aerosol is unknown, and thus two default aerosol sizes, 5 μm AMAD for workers and 1 μm AMAD for the public, are given as representative in the ICRP model, but neither size is linked directly to the maximum dose. In this study, the most hazardous size for health effects and how to estimate the intake activity were discussed from the viewpoint of emergency medicine. In exposure accidents involving alpha emitters such as Pu-239, lung monitoring and bioassay measurements are not the best methods for rapid estimation with high sensitivity, so the applicability of the nasal swab method was investigated. The computer software LUDEP was used in the calculation of respiratory deposition. It showed that the effective dose per unit intake activity depends strongly on the inhaled aerosol size. In the case of Pu-239 dioxide aerosols, it was confirmed that the maximum of the dose conversion factor occurs around 0.01 μm, meaning that 0.01 μm is the most hazardous size in a Pu-239 exposure accident. From analysis of the relationship between AI and ET1 deposition, it was found that the dose conversion factor from the activity deposited in the ET1 region is also affected by the aerosol size. The use of the ICRP default size in the nasal swab method might cause obvious underestimation of the intake activity. Dose estimation based on the nasal swab method is possible from the safety side in a nuclear emergency, and its quantitative availability should be reevaluated for emergency medicine, considering chelating agent administration. (author)
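
    The underestimation issue can be made concrete with a back-of-the-envelope intake reconstruction (all parameter values below are hypothetical; real ET1 deposition fractions would come from the ICRP model or LUDEP):

```python
# Intake and committed dose reconstructed from a nasal swab: the swab activity
# is scaled up by the (assumed) swab recovery efficiency and the fraction of
# the intake deposited in the ET1 region. All values are illustrative, not
# ICRP defaults.

swab_activity_Bq = 0.5         # alpha activity recovered on the swab
swab_efficiency = 0.1          # fraction of the ET1 deposit picked up by the swab
et1_fraction = 0.17            # fraction of intake deposited in ET1 (size-dependent!)
dose_coeff_Sv_per_Bq = 1.6e-5  # committed effective dose per unit intake (assumed)

intake_Bq = swab_activity_Bq / (swab_efficiency * et1_fraction)
dose_mSv = intake_Bq * dose_coeff_Sv_per_Bq * 1e3
print(f"estimated intake: {intake_Bq:.1f} Bq -> committed dose ~ {dose_mSv:.1f} mSv")

# Because et1_fraction varies strongly with aerosol size, assuming a default
# size can underestimate the intake, which is the point made in the abstract.
```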

  3. Algebraic Multigrid Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    2017-08-01

    AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL, and is very similar to the AMG2013 benchmark with additional optimizations. The driver provided in the benchmark can build various test problems. The default problem is a Laplace-type problem with a 27-point stencil, which can be scaled up and is designed to solve a very large problem. A second problem simulates a time-dependent problem, in which successively various smaller systems are solved.

  4. Moving gantry method for electron beam dose profile measurement at extended source-to-surface distances.

    Science.gov (United States)

    Fekete, Gábor; Fodor, Emese; Pesznyák, Csilla

    2015-03-08

    A novel method has been put forward for very large electron beam profile measurement. With this method, absorbed dose profiles can be measured at any depth in a solid phantom for total skin electron therapy. Electron beam dose profiles were collected with two different methods. Profile measurements were performed at 0.2 and 1.2 cm depths with a parallel plate and a thimble chamber, respectively. Electron beams of 108 cm × 108 cm and 45 cm × 45 cm projected size were scanned by vertically moving the phantom and detector at 300 cm source-to-surface distance with 90° and 270° gantry angles. The profiles collected this way were used as reference. Afterwards, the phantom was fixed on the central axis and the gantry was rotated in certain angular steps. After applying corrections for the different source-to-detector distances and angles of incidence, the profiles measured in the two different setups were compared. A correction formalism has been developed. The agreement between the cross profiles taken at the depth of maximum dose with the 'classical' scanning and with the new moving gantry method was better than 0.5% in the measuring range from zero to 71.9 cm. Inverse square and attenuation corrections had to be applied. The profiles measured with the parallel plate chamber agree better than 1%, except for the penumbra region, where the maximum difference is 1.5%. With the moving gantry method, very large electron field profiles can be measured at any depth in a solid phantom with high accuracy and reproducibility and with much less time per step. No special instrumentation is needed. The method can be used for commissioning of very large electron beams for computer-assisted treatment planning, for designing beam modifiers to improve dose uniformity, and for verification of computed dose profiles.

  5. Mask Waves Benchmark

    Science.gov (United States)

    2007-10-01

    See Figure 22 for a comparison of measured waves, linear waves, and non-linear Stokes waves. Looking at the selected 16 runs from the trough-to-peak... As shown in Figure 23 for the benchmark data set, the relation of obtained frequency versus desired frequency is almost completely linear. The slight variation at

  6. Variability of dose predictions for cesium-137 and radium-226 using the PRISM method

    International Nuclear Information System (INIS)

    Bergstroem, U.; Andersson, K.; Roejder, B.

    1984-01-01

    The uncertainty associated with dose predictions for cesium-137 and radium-226 in a specific ecosystem has been studied. The method used is PRISM, a systematic method for determining the effect of parameter uncertainties on model predictions. The ecosystems studied are different types of lakes, for which the following transport processes are included: runoff of water in the lake, irrigation, and transport in soil, in groundwater and in sediment. The ecosystems are modelled on the compartment principle, using the BIOPATH code. Seven different internal exposure pathways are included. The total dose commitment for both nuclides varies by about two orders of magnitude. For cesium-137 the total dose and its uncertainty are dominated by the consumption of fish, and the most important contributor to the total uncertainty is the water-fish concentration factor. For radium-226 the largest contributions to the total dose come from the fish, milk and drinking-water pathways. Half of the uncertainty lies in the milk dose, which is dominated by the distribution factor for milk. (orig.)
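
    The parameter-uncertainty propagation described can be sketched with plain Monte Carlo sampling (a stand-in for the PRISM/BIOPATH machinery; the distribution and most parameter values are invented, and only the Cs-137 ingestion dose coefficient is a standard reference value):

```python
import numpy as np

# Toy propagation of parameter uncertainty to an ingestion dose, in the spirit
# of the PRISM study: the water-to-fish concentration factor dominates the
# Cs-137 uncertainty, so it is sampled from a (hypothetical) lognormal while
# the other factors stay fixed.

rng = np.random.default_rng(1)
n = 100_000

water_conc_Bq_L = 0.2                                          # lake water (fixed)
cf_fish = rng.lognormal(mean=np.log(2000), sigma=0.8, size=n)  # L/kg, assumed
fish_intake_kg_y = 20.0                                        # consumption (fixed)
dose_coeff_Sv_Bq = 1.3e-8                                      # Cs-137 ingestion

dose_mSv = water_conc_Bq_L * cf_fish * fish_intake_kg_y * dose_coeff_Sv_Bq * 1e3
p5, p50, p95 = np.percentile(dose_mSv, [5, 50, 95])
print(f"annual dose (mSv): median {p50:.3f}, 5-95% range {p5:.3f}-{p95:.3f}")
print(f"95%/5% ratio: {p95 / p5:.1f}")   # spread driven entirely by cf_fish here
```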

  7. Determination of the delivered hemodialysis dose using standard methods and on-line clearance monitoring

    Directory of Open Access Journals (Sweden)

    Vlatković Vlastimir

    2006-01-01

    Background/aim: The delivered dialysis dose has a cumulative effect and a significant influence on the adequacy of dialysis, quality of life and development of co-morbidity in patients on dialysis. Thus, great attention is given to the optimization of dialysis treatment. On-line clearance monitoring (OCM) allows a precise and continuous measurement of the delivered dialysis dose. The Kt/V index (K = dialyzer clearance of urea; t = dialysis time; V = patient's total body water), measured in real time, is used as a unit for expressing the dialysis dose. The aim of this research was to perform a comparative assessment of the delivered dialysis dose using standard measurement methods and a module for continuous clearance monitoring. Methods. The study encompassed 105 patients who had been on the chronic hemodialysis program for more than three months, three times a week. One treatment per patient was selected at random. All treatments were bicarbonate dialysis. The delivered dialysis dose was determined by calculation with mathematical models, the urea reduction ratio (URR) and single-pool Kt/V (spKt/V), and by the application of OCM. Results. The URR was the most sensitive parameter for the assessment and, at the same time, correlated most strongly with the other two indices, spKt/V and OCM. The values pointed to an adequate dialysis dose. The URR values were significantly higher in women than in men, p < 0.05. The other model applied for measuring the delivered dialysis dose was the Kt/V index. The obtained values showed that the dialysis dose was adequate and that, according to this parameter, the women had significantly better dialysis than the men, p < 0.05. According to the OCM, the average value was slightly lower than the adequate one. The women had a satisfactory dialysis according to this index as well, while the delivered dialysis dose was insufficient in men. The difference
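
    For reference, the two standard indices mentioned are easy to compute from pre- and post-dialysis urea; the sketch below uses the widely cited second-generation Daugirdas formula for spKt/V (the formula choice and all patient numbers are illustrative, not taken from this study):

```python
import math

# Delivered dialysis dose from pre/post urea: URR and single-pool Kt/V via the
# second-generation Daugirdas formula,
#   spKt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF / W
# with R = post/pre urea ratio, t = session length in hours, UF =
# ultrafiltration volume in litres, W = post-dialysis weight in kg.

pre_urea, post_urea = 24.0, 7.0   # mmol/L (invented patient values)
t_h, uf_L, weight_kg = 4.0, 2.5, 70.0

r = post_urea / pre_urea
urr = (1 - r) * 100.0
sp_ktv = -math.log(r - 0.008 * t_h) + (4 - 3.5 * r) * uf_L / weight_kg

print(f"URR    = {urr:.0f}%")     # adequacy is often benchmarked at >= 65%
print(f"spKt/V = {sp_ktv:.2f}")   # adequacy is often benchmarked at >= 1.2
```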

  8. Dosimetric methods for and influence of exposure parameters on the establishment of reference doses in mammography

    NARCIS (Netherlands)

    Zoetelief, J.; Fitzgerald, M.; Leitz, W.; Säbel, M.

    1998-01-01

    For the establishment of reference doses in mammography it is important to apply a dosimetric model relevant for risk assessment. Differences in dosimetric methods applied in mammography are related to the dosemeters used, e.g. thermoluminescent detectors and ionisation chambers, and the dosimetric

  9. A dose assessment method for arbitrary geometries with virtual reality in the nuclear facilities decommissioning

    Science.gov (United States)

    Chao, Nan; Liu, Yong-kuo; Xia, Hong; Ayodeji, Abiodun; Bai, Lu

    2018-03-01

    During the decommissioning of nuclear facilities, a large number of cutting and demolition activities are performed, which results in frequent changes in the structure and produces many irregular objects. In order to assess dose rates during the cutting and demolition process, a flexible dose assessment method for arbitrary geometries and radiation sources was proposed based on virtual reality technology and the Point-Kernel method. The initial geometry is designed with three-dimensional computer-aided design tools. An approximate model is built automatically in the process of geometric modeling via three procedures, namely space division, rough modeling of the body and fine modeling of the surface, in combination with the collision detection of virtual reality technology. Point kernels are then generated by sampling within the approximate model, and once the material and radiometric attributes are input, dose rates can be calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the geometric-progression fitting formula. The effectiveness and accuracy of the proposed method were verified by means of simulations using different geometries, and the dose rate results were compared with those derived from the CIDEC code, MCNP code and experimental measurements.
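
    A minimal sketch of the Point-Kernel summation at the heart of such a method (isotropic point sources in a single-material medium; the crude linear buildup factor and the flux-to-dose constant are placeholders for the geometric-progression fit and tabulated conversion factors a real code would use):

```python
import math

# Point-Kernel dose-rate estimate: the source volume is represented by point
# kernels and the uncollided-plus-buildup photon flux at the detector is
# summed over them,
#   phi = S * B(mu*r) * exp(-mu*r) / (4*pi*r^2),
# then converted to dose rate. Attenuation coefficient, buildup model and
# flux-to-dose constant below are illustrative placeholders.

MU_PER_CM = 0.06          # assumed linear attenuation coefficient (1/cm)
FLUX_TO_uGy_H = 1.0e-3    # assumed flux-to-dose-rate conversion factor

def buildup(mu_r: float) -> float:
    return 1.0 + mu_r      # placeholder; real codes fit a G-P formula

def dose_rate_uGy_h(kernels, detector):
    total = 0.0
    for (x, y, z, s_photons_per_s) in kernels:
        r = math.dist((x, y, z), detector)
        mu_r = MU_PER_CM * r
        flux = s_photons_per_s * buildup(mu_r) * math.exp(-mu_r) / (4 * math.pi * r**2)
        total += flux * FLUX_TO_uGy_H
    return total

kernels = [(0, 0, 0, 1e8), (10, 0, 0, 1e8)]   # sampled points in the object
print(f"{dose_rate_uGy_h(kernels, (100.0, 0.0, 0.0)):.3f} uGy/h")
```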

  10. Application of combined TLD and CR-39 PNTD method for measurement of total dose and dose equivalent on ISS

    Energy Technology Data Exchange (ETDEWEB)

    Benton, E.R. [Eril Research, Inc., Stillwater, Oklahoma (United States); Deme, S.; Apathy, I. [KFKI Atomic Energy Research Institute, Budapest (Hungary)

    2006-07-01

    To date, no single passive detector has been found that measures dose equivalent from ionizing radiation exposure in low-Earth orbit. We have developed the I.S.S. Passive Dosimetry System (P.D.S.), utilizing a combination of TLD in the form of the self-contained Pille TLD system and stacks of CR-39 plastic nuclear track detector (P.N.T.D.) oriented in three mutually orthogonal directions, to measure total dose and dose equivalent aboard the International Space Station (I.S.S.). The Pille TLD system, consisting of an on-board reader and a large number of CaSO4:Dy TLD cells, is used to measure absorbed dose. The Pille TLD cells are read out and annealed by the I.S.S. crew on orbit, such that dose information for any time period or condition, e.g. for E.V.A. or following a solar particle event, is immediately available. Near-tissue-equivalent CR-39 P.N.T.D. provides the LET spectrum, dose, and dose equivalent from charged particles of LET∞,H2O ≥ 10 keV/μm, including the secondaries produced in interactions with high-energy neutrons. Dose information from CR-39 P.N.T.D. is used to correct the absorbed dose component ≥ 10 keV/μm measured in TLD to obtain total dose. Dose equivalent from CR-39 P.N.T.D. is combined with the dose component < 10 keV/μm measured in TLD to obtain total dose equivalent. Dose rates ranging from 165 to 250 μGy/day and dose equivalent rates ranging from 340 to 450 μSv/day were measured aboard I.S.S. during the Expedition 2 mission in 2001. Results from the P.D.S. are consistent with those from other passive detectors tested as part of the ground-based I.C.C.H.I.B.A.N. intercomparison of space radiation dosimeters. (authors)

  11. Application of combined TLD and CR-39 PNTD method for measurement of total dose and dose equivalent on ISS

    International Nuclear Information System (INIS)

    Benton, E.R.; Deme, S.; Apathy, I.

    2006-01-01

    To date, no single passive detector has been found that measures dose equivalent from ionizing radiation exposure in low-Earth orbit. We have developed the I.S.S. Passive Dosimetry System (P.D.S.), utilizing a combination of TLD in the form of the self-contained Pille TLD system and stacks of CR-39 plastic nuclear track detector (P.N.T.D.) oriented in three mutually orthogonal directions, to measure total dose and dose equivalent aboard the International Space Station (I.S.S.). The Pille TLD system, consisting of an on-board reader and a large number of CaSO4:Dy TLD cells, is used to measure absorbed dose. The Pille TLD cells are read out and annealed by the I.S.S. crew on orbit, such that dose information for any time period or condition, e.g. for E.V.A. or following a solar particle event, is immediately available. Near-tissue-equivalent CR-39 P.N.T.D. provides the LET spectrum, dose, and dose equivalent from charged particles of LET∞,H2O ≥ 10 keV/μm, including the secondaries produced in interactions with high-energy neutrons. Dose information from CR-39 P.N.T.D. is used to correct the absorbed dose component ≥ 10 keV/μm measured in TLD to obtain total dose. Dose equivalent from CR-39 P.N.T.D. is combined with the dose component < 10 keV/μm measured in TLD to obtain total dose equivalent. Dose rates ranging from 165 to 250 μGy/day and dose equivalent rates ranging from 340 to 450 μSv/day were measured aboard I.S.S. during the Expedition 2 mission in 2001. Results from the P.D.S. are consistent with those from other passive detectors tested as part of the ground-based I.C.C.H.I.B.A.N. intercomparison of space radiation dosimeters. (authors)
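
    The way the two detectors are combined reduces to two lines of arithmetic; a sketch with invented readings (the partition at 10 keV/μm follows the abstract, and in practice the high-LET share of the TLD reading is itself derived from the CR-39 LET spectrum):

```python
# Combining TLD and CR-39 PNTD readings as described in the abstract: CR-39
# supplies the dose and dose equivalent of the high-LET component
# (>= 10 keV/um); TLD supplies the total absorbed dose but underresponds to
# high-LET particles, so its high-LET part is replaced by the CR-39 value.
# All readings below are invented.

d_tld_uGy_day = 200.0        # TLD absorbed dose rate
d_tld_high_uGy_day = 30.0    # high-LET part of the TLD reading (assumed given)
d_cr39_high_uGy_day = 45.0   # CR-39 absorbed dose, LET >= 10 keV/um
h_cr39_high_uSv_day = 220.0  # CR-39 dose equivalent, LET >= 10 keV/um

d_low = d_tld_uGy_day - d_tld_high_uGy_day      # low-LET component
total_dose = d_low + d_cr39_high_uGy_day        # corrected total absorbed dose
total_dose_equiv = d_low + h_cr39_high_uSv_day  # low-LET Q = 1, so uGy -> uSv 1:1

print(f"total dose:            {total_dose:.0f} uGy/day")
print(f"total dose equivalent: {total_dose_equiv:.0f} uSv/day")
```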

  12. Application of Monte Carlo method for dose calculation in thyroid follicle

    International Nuclear Information System (INIS)

    Silva, Frank Sinatra Gomes da

    2008-02-01

    The Monte Carlo method is an important tool to simulate the interaction of radioactive particles with biological media. The principal advantage of the method compared with deterministic methods is its ability to handle complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport and are able to simulate energy deposition in models of organs and/or tissues, as well as in models of cells of the human body. Thus, the calculation of the absorbed dose to thyroid follicles (composed of colloid and follicular cells) is of fundamental importance in dosimetry, because these cells are radiosensitive to ionizing radiation exposure, in particular exposure to radioisotopes of iodine, a great amount of which may be released into the environment in case of a nuclear accident. The goal of this work was to use the particle transport code MCNP4C to calculate absorbed doses from the Auger electrons, internal conversion electrons and beta particles of iodine-131 and the short-lived iodines (131, 132, 133, 134 and 135) in models of thyroid follicles with diameters varying from 30 to 500 μm. The results obtained from the simulation with the MCNP4C code showed that, on average, 25% of the total dose absorbed by the colloid was due to iodine-131 and 75% to the short-lived iodines. For the follicular cells, these percentages were 13% for iodine-131 and 87% for the short-lived iodines. The contributions from low-energy particles, like Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare doses obtained by the codes MCNP4C, EPOTRAN, EGS4 and by deterministic methods. (author)

  13. Benchmarking Cloud Resources for HEP

    Science.gov (United States)

    Alef, M.; Cordeiro, C.; De Salvo, A.; Di Girolamo, A.; Field, L.; Giordano, D.; Guerri, M.; Schiavi, F. C.; Wiebalck, A.

    2017-10-01

    In a commercial cloud environment, exhaustive resource profiling is beneficial to cope with the intrinsic variability of the virtualised environment, allowing prompt identification of performance degradation. In the context of its commercial cloud initiatives, CERN has acquired extensive experience in benchmarking commercial cloud resources. Ultimately, this activity provides information on the actual delivered performance of invoiced resources. In this report we discuss the experience acquired and the results collected using several fast benchmark applications adopted by the HEP community, spanning from open-source benchmarks to specific user applications and synthetic benchmarks. The workflow put in place to collect and analyse the performance metrics is also described.

  14. Determination method of minimal inactivating dose of gamma radiation for Salmonella typhimurium

    International Nuclear Information System (INIS)

    Araujo, E.S.; Campos, H. de; Silva, D.M.

    1979-01-01

    A method for determining the minimal inactivating dose (MID) of gamma radiation for Salmonella typhimurium is presented. This offers a more efficient way to improve irradiated vaccines. The MID found for S. typhimurium 6.616 by the binomial test was 0.55 MR. The method used allows a definite value of the MID to be obtained and requires less material, work and time in comparison with the usual procedure

  15. Retrospective methods of dose assessment of the Chernobyl 'liquidators'. A comparison

    International Nuclear Information System (INIS)

    Schmidt, M.; Ziggel, H.; Schmitz-Feuerhaake, I.; Dannheim, B.; Schikalov, V.; Usatyj, A.; Shevchenko, V.; Snigireva, G.; Serezhenkov, V.; Klevezal, G.

    1998-01-01

    A database of biomedical and dosimetric data on participants in the liquidation work at Chernobyl was set up. Dose profiles were created using suitable dose modelling. EPR spectrometric measurements of tooth enamel were performed as a routine method of retrospective dosimetry for radiation workers at medium to low exposures. Chromosome analyses were carried out in peripheral blood lymphocytes of a cohort of the liquidation workers. Fluorescence in-situ hybridization was also used. The number of workers volunteering to take part in the research, however, was too small to allow statistically relevant results to be obtained. (P.A.)

  16. A robustness analysis method with fast estimation of dose uncertainty distributions for carbon-ion therapy treatment planning.

    Science.gov (United States)

    Sakama, Makoto; Kanematsu, Nobuyuki; Inaniwa, Taku

    2016-08-07

    A simple and efficient approach is needed for robustness evaluation and optimization of treatment planning in routine clinical particle therapy. Here we propose a robustness analysis method that uses the dose standard deviation (SD) over possible scenarios as the robustness indicator, together with a fast dose warping method, i.e. deformation of dose distributions, taking into account the setup and range errors in carbon-ion therapy. The dose warping method is based on the nominal dose distribution and the water-equivalent path length obtained from planning computed tomography data with a clinically commissioned treatment planning system (TPS). We compared, in a limited number of scenarios at the extreme boundaries of the assumed error, the dose SD distributions obtained by the warping method with those obtained using the TPS dose recalculations. The accuracy of the warping method was examined by the standard-deviation-volume histograms (SDVHs) for varying degrees of setup and range errors for three different tumor sites. Furthermore, the influence of dose fractionation on the combined dose uncertainty, taking into consideration the correlation of setup and range errors between fractions, was evaluated with simple equations using the SDVHs and the mean value of SDs in the defined volume of interest. The results of the proposed method agreed well with those obtained with the dose recalculations in these comparisons, and the effectiveness of dose SD evaluations at the extreme boundaries of given errors was confirmed from the responsivity and DVH analysis of relative SD values for each error. The combined dose uncertainties depended heavily on the number of fractions, assumed errors and tumor sites. The typical computation time of the warping method is approximately 60 times less than that of the full dose calculation method using the TPS. The dose SD distributions and SDVHs with the fractionation effect will be useful indicators for robustness analysis in treatment planning, and the
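
    The robustness indicator itself is straightforward once per-scenario dose grids exist; a minimal numpy sketch of the voxelwise SD and an SDVH (the scenario dose grids are random placeholders for warped or TPS-recalculated distributions):

```python
import numpy as np

# Voxelwise dose standard deviation over error scenarios, and a simple
# standard-deviation-volume histogram (SDVH) for a volume of interest. The
# scenario dose grids here are random placeholders for distributions produced
# by dose warping or TPS recalculation.

rng = np.random.default_rng(2)
n_scenarios, shape = 8, (40, 40, 40)
doses = 2.0 + 0.1 * rng.standard_normal((n_scenarios, *shape))  # Gy per fraction

sd = doses.std(axis=0)                  # per-voxel SD across scenarios
voi = np.zeros(shape, dtype=bool)
voi[10:30, 10:30, 10:30] = True         # toy volume of interest

thresholds = np.linspace(0, sd[voi].max(), 50)
sdvh = [(sd[voi] >= t).mean() * 100 for t in thresholds]  # % volume with SD >= t

print(f"mean SD in VOI: {sd[voi].mean():.3f} Gy")
print(f"volume with SD >= 0.05 Gy: {(sd[voi] >= 0.05).mean() * 100:.1f}%")
```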

  17. The rationale and a computer evaluation of a gamma irradiation sterilization dose determination method for medical devices using a substerilization incremental dose sterility test protocol.

    Science.gov (United States)

    Davis, K W; Strawderman, W E; Whitby, J L

    1984-08-01

    The experimental procedure described is designed to allow calculation of the radiation sterilization dose for medical devices to any desired standard of sterility assurance. The procedure makes use of the results of a series of sterility tests on device samples exposed to doses of radiation from 0.2 to 1.8 Mrad in 0.2 Mrad increments. From the sterility test data a 10^-2 sterility level dose is determined. A formula is described that allows a value called DS Mrad to be calculated. This is an estimate of the effective radiation resistance of the heterogeneous microbial population remaining in the tail portion of the inactivation curve at the 10^-2 dose and above. DS Mrad is used as a D10 value and is applied, in conjunction with the 10^-2 sterility level dose, to an extrapolation factor to estimate a sufficient radiation sterilization dose. A computer simulation of the substerilization process has been carried out. This has allowed an extensive evaluation of the procedure, and the sterilization dose obtained from calculation to be compared with the actual dose required. Good agreement was obtained with most microbial populations examined, but examples of both overdosing and underdosing were found with microbial populations containing a proportion of organisms displaying pronounced shoulder inactivation kinetics. The method allows the radiation sterilization dose to be derived from the natural resistance of the microbial population to gamma sterilization.
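
    The extrapolation step can be expressed compactly (a sketch of the arithmetic only, with invented numbers; the paper's own formula for DS is not reproduced here):

```python
import math

# Extrapolating a substerilization dose to a sterilization dose: starting from
# the dose giving a 10^-2 sterility-test positive rate, each additional decade
# of inactivation costs one DS (the effective D10 of the resistant "tail"
# population). Numbers are illustrative.

dose_at_1e2_Mrad = 0.8    # dose giving the 10^-2 sterility level (assumed)
ds_Mrad = 0.25            # effective D10 of the tail population (assumed)
target_sal = 1e-6         # desired sterility assurance level

decades = math.log10(1e-2 / target_sal)    # 4 decades from 10^-2 to 10^-6
sterilization_dose = dose_at_1e2_Mrad + decades * ds_Mrad
print(f"estimated sterilization dose: {sterilization_dose:.2f} Mrad")  # 1.80 Mrad
```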

  18. Passive Rn dose meters - measuring methods appropriate for large measurement series

    International Nuclear Information System (INIS)

    Urban, M.; Kiefer, H.

    1985-01-01

    Passive integrating measuring methods can be classified into several groups by their functioning principle, e.g. spray chambers or open chambers with nuclear track detectors or TL detectors, open detectors, and activated carbon dose meters with or without TL detectors. Depending on the functioning principle, either radon only or radon and its decay products can be detected. The lecture gives a survey of the present state of development of passive Rn dose meters. Using the example of the Rn dose meter developed at Karlsruhe, which was used in survey measurements carried out in Germany, Switzerland, the Netherlands, Belgium and Austria, the etching technology, estimation of measurement uncertainties, reproducibility and fading behaviour are discussed. (orig./HP)

  19. A method for calculating Bayesian uncertainties on internal doses resulting from complex occupational exposures.

    Science.gov (United States)

    Puncher, M; Birchall, A; Bull, R K

    2012-08-01

    Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework to calculate these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented by the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses are calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and Q(0.025) and Q(0.975) quantiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 min on a fast workstation, whereas the MCMC method took around 12-72 h. The advantages and disadvantages of the method are discussed.
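
    The weighted-likelihood idea can be illustrated with a tiny importance-sampling sketch (single composite intake, lognormal measurement error; a cartoon of the general approach under invented assumptions, not the WeLMoS implementation):

```python
import numpy as np

# Cartoon of weighted-likelihood Monte Carlo sampling for an internal dose:
# sample a single (composite) intake from its prior, predict the bioassay
# quantity with a biokinetic retention function, weight each sample by the
# likelihood of the measurement, and read posterior dose statistics off the
# weighted sample. Retention function, error model and numbers are invented.

rng = np.random.default_rng(3)
n = 200_000

def predicted_urine_Bq_d(intake_Bq, t_days):
    return intake_Bq * 1e-3 * np.exp(-0.02 * t_days)   # toy retention function

intake = rng.lognormal(np.log(500.0), 1.5, n)          # prior on intake (Bq)
measured, t = 0.12, 100.0                              # urine sample at day 100
gsd = 1.8                                              # lognormal error (assumed)

resid = (np.log(measured) - np.log(predicted_urine_Bq_d(intake, t))) / np.log(gsd)
weights = np.exp(-0.5 * resid**2)
weights /= weights.sum()

dose_per_intake_mSv_Bq = 1.0e-4                        # assumed dose coefficient
dose = intake * dose_per_intake_mSv_Bq
mean = np.sum(weights * dose)
order = np.argsort(dose)
q = np.interp([0.025, 0.975], np.cumsum(weights[order]), dose[order])
print(f"posterior mean dose: {mean:.2f} mSv, 95% interval {q[0]:.2f}-{q[1]:.2f} mSv")
```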

  20. Absorbed dose calculations using mesh-based human phantoms and Monte Carlo methods

    International Nuclear Information System (INIS)

    Kramer, Richard

    2010-01-01

    Health risks attributable to ionizing radiation are considered to be a function of the absorbed dose to radiosensitive organs and tissues of the human body. However, as human tissue cannot express itself in terms of absorbed dose, exposure models have to be used to determine the distribution of absorbed dose throughout the human body. An exposure model, be it physical or virtual, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the absorbed dose to organ and tissues of interest. Female Adult meSH (FASH) and the Male Adult meSH (MASH) virtual phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools. Representing standing adults, FASH and MASH have organ and tissue masses, body height and mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which transports photons, electrons and positrons through arbitrary media. This presentation reports on the development of the FASH and the MASH phantoms and will show dosimetric applications for X-ray diagnosis and for prostate brachytherapy. (author)

  1. Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods

    International Nuclear Information System (INIS)

    Kramer, Richard

    2011-01-01

    Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.

  2. Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods

    Science.gov (United States)

    Kramer, Richard

    2011-08-01

    Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called a phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organs and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.

  3. Absorbed dose calculations using mesh-based human phantoms and Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, Richard [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil)

    2010-07-01

    Full text. Health risks attributable to ionizing radiation are considered to be a function of the absorbed dose to radiosensitive organs and tissues of the human body. However, as human tissue cannot express itself in terms of absorbed dose, exposure models have to be used to determine the distribution of absorbed dose throughout the human body. An exposure model, be it physical or virtual, consists of a representation of the human body, called a phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the absorbed dose to organs and tissues of interest. The Female Adult meSH (FASH) and the Male Adult meSH (MASH) virtual phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools. Representing standing adults, FASH and MASH have organ and tissue masses, body height and mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which transports photons, electrons and positrons through arbitrary media. This presentation reports on the development of the FASH and the MASH phantoms and shows dosimetric applications for X-ray diagnosis and for prostate brachytherapy. (author)

  4. Radiation dose determines the method for quantification of DNA double strand breaks

    Energy Technology Data Exchange (ETDEWEB)

    Bulat, Tanja; Keta, Olitija; Korićanac, Lela; Žakula, Jelena; Petrović, Ivan; Ristić-Fira, Aleksandra [University of Belgrade, Vinča Institute of Nuclear Sciences, Belgrade (Serbia); Todorović, Danijela, E-mail: dtodorovic@medf.kg.ac.rs [University of Kragujevac, Faculty of Medical Sciences, Kragujevac (Serbia)

    2016-03-15

    Ionizing radiation induces DNA double strand breaks (DSBs) that trigger phosphorylation of the histone protein H2AX (γH2AX). Immunofluorescent staining visualizes the formation of γH2AX foci, allowing their quantification. This method, as opposed to Western blot and flow cytometry assays, provides a more accurate analysis by showing the exact position and intensity of the fluorescent signal in each single cell. In practice, however, the quantification of γH2AX raises two issues, which this paper addresses: which technique should be applied for a given radiation dose, and how fluorescence microscopy images obtained with different microscopes should be analyzed. HTB140 melanoma cells were exposed to γ-rays in the dose range from 1 to 16 Gy. Radiation effects at the DNA level were analyzed at different time intervals after irradiation by Western blot analysis and immunofluorescence microscopy. Immunochemically stained cells were visualized with two types of microscopes: an AxioVision microscope (Zeiss, Germany) equipped with ApoTome software, and an AxioImagerA1 microscope (Zeiss, Germany). The results show that the level of γH2AX is time and dose dependent. Immunofluorescence microscopy provided better detection of DSBs at lower irradiation doses, while Western blot analysis was more reliable at higher irradiation doses. The AxioVision microscope with ApoTome software was more suitable for the detection of γH2AX foci. (author)

  5. Radiation dose determines the method for quantification of DNA double strand breaks

    International Nuclear Information System (INIS)

    Bulat, Tanja; Keta, Olitija; Korićanac, Lela; Žakula, Jelena; Petrović, Ivan; Ristić-Fira, Aleksandra; Todorović, Danijela

    2016-01-01

    Ionizing radiation induces DNA double strand breaks (DSBs) that trigger phosphorylation of the histone protein H2AX (γH2AX). Immunofluorescent staining visualizes the formation of γH2AX foci, allowing their quantification. This method, as opposed to Western blot and flow cytometry assays, provides a more accurate analysis by showing the exact position and intensity of the fluorescent signal in each single cell. In practice, however, the quantification of γH2AX raises two issues, which this paper addresses: which technique should be applied for a given radiation dose, and how fluorescence microscopy images obtained with different microscopes should be analyzed. HTB140 melanoma cells were exposed to γ-rays in the dose range from 1 to 16 Gy. Radiation effects at the DNA level were analyzed at different time intervals after irradiation by Western blot analysis and immunofluorescence microscopy. Immunochemically stained cells were visualized with two types of microscopes: an AxioVision microscope (Zeiss, Germany) equipped with ApoTome software, and an AxioImagerA1 microscope (Zeiss, Germany). The results show that the level of γH2AX is time and dose dependent. Immunofluorescence microscopy provided better detection of DSBs at lower irradiation doses, while Western blot analysis was more reliable at higher irradiation doses. The AxioVision microscope with ApoTome software was more suitable for the detection of γH2AX foci. (author)

  6. Radiation dose determines the method for quantification of DNA double strand breaks.

    Science.gov (United States)

    Bulat, Tanja; Keta, Otilija; Korićanac, Lela; Žakula, Jelena; Petrović, Ivan; Ristić-Fira, Aleksandra; Todorović, Danijela

    2016-03-01

    Ionizing radiation induces DNA double strand breaks (DSBs) that trigger phosphorylation of the histone protein H2AX (γH2AX). Immunofluorescent staining visualizes the formation of γH2AX foci, allowing their quantification. This method, as opposed to Western blot and flow cytometry assays, provides a more accurate analysis by showing the exact position and intensity of the fluorescent signal in each single cell. In practice, however, the quantification of γH2AX raises two issues, which this paper addresses: which technique should be applied for a given radiation dose, and how fluorescence microscopy images obtained with different microscopes should be analyzed. HTB140 melanoma cells were exposed to γ-rays in the dose range from 1 to 16 Gy. Radiation effects at the DNA level were analyzed at different time intervals after irradiation by Western blot analysis and immunofluorescence microscopy. Immunochemically stained cells were visualized with two types of microscopes: an AxioVision microscope (Zeiss, Germany) equipped with ApoTome software, and an AxioImagerA1 microscope (Zeiss, Germany). The results show that the level of γH2AX is time and dose dependent. Immunofluorescence microscopy provided better detection of DSBs at lower irradiation doses, while Western blot analysis was more reliable at higher irradiation doses. The AxioVision microscope with ApoTome software was more suitable for the detection of γH2AX foci.

  7. Radiation dose determines the method for quantification of DNA double strand breaks

    Directory of Open Access Journals (Sweden)

    TANJA BULAT

    2016-03-01

    Full Text Available ABSTRACT Ionizing radiation induces DNA double strand breaks (DSBs) that trigger phosphorylation of the histone protein H2AX (γH2AX). Immunofluorescent staining visualizes the formation of γH2AX foci, allowing their quantification. This method, as opposed to Western blot and flow cytometry assays, provides a more accurate analysis by showing the exact position and intensity of the fluorescent signal in each single cell. In practice, however, the quantification of γH2AX raises two issues, which this paper addresses: which technique should be applied for a given radiation dose, and how fluorescence microscopy images obtained with different microscopes should be analyzed. HTB140 melanoma cells were exposed to γ-rays in the dose range from 1 to 16 Gy. Radiation effects at the DNA level were analyzed at different time intervals after irradiation by Western blot analysis and immunofluorescence microscopy. Immunochemically stained cells were visualized with two types of microscopes: an AxioVision microscope (Zeiss, Germany) equipped with ApoTome software, and an AxioImagerA1 microscope (Zeiss, Germany). The results show that the level of γH2AX is time and dose dependent. Immunofluorescence microscopy provided better detection of DSBs at lower irradiation doses, while Western blot analysis was more reliable at higher irradiation doses. The AxioVision microscope with ApoTome software was more suitable for the detection of γH2AX foci.

  8. Optically Stimulated Luminescence Analysis Method for High Dose Rate Using an Optical Fiber Type Dosimeter

    Science.gov (United States)

    Ueno, Katsunori; Tominaga, Kazuo; Tadokoro, Takahiro; Ishizawa, Koji; Takahashi, Yoshinori; Kuwabara, Hitoshi

    2016-08-01

    The investigation of air dose rates at locations in the Fukushima Dai-ichi Nuclear Power Station is necessary for safe removal of the molten nuclear fuel. The target performance for the investigation is to analyze dose rates in the range of 10^-3 Gy/h to 10^2 Gy/h with a measurement precision of ±4.0% full scale (F.S.) at a measurement interval of 60 s. In order to achieve this target, the authors proposed an optically stimulated luminescence (OSL) analysis method using prompt OSL for a wide dynamic range of dose rates; the OSL is generated using BaFBr:Eu, which has a fast decay time constant. The prompt OSL intensity was formulated in terms of the electron concentration of the trapping state during gamma-ray and stimulation-light irradiation. A prototype OSL monitor using BaFBr:Eu was manufactured for investigation of the prompt OSL and evaluation of the measurement precision. The time dependence of the prompt OSL intensity was analyzed by irradiating the OSL sensor in a 60Co irradiation facility. The measured dose rates were obtained in a prompt mode and an accumulating mode with a precision of ±3.3% F.S. for the dose rate range of 9.5 × 10^-4 Gy/h to 1.2 × 10^2 Gy/h.

  9. A feasible method for clinical delivery verification and dose reconstruction in tomotherapy

    International Nuclear Information System (INIS)

    Kapatoes, J.M.; Olivera, G.H.; Ruchala, K.J.; Smilowitz, J.B.; Reckwerdt, P.J.; Mackie, T.R.

    2001-01-01

    Delivery verification is the process in which the energy fluence delivered during a treatment is verified. This verified energy fluence can be used in conjunction with an image in the treatment position to reconstruct the full three-dimensional dose deposited. A method for delivery verification that utilizes a measured database of detector signals is described in this work. This database is a function of two parameters, radiological path-length and detector-to-phantom distance, both of which are computed from a CT image taken at the time of delivery. Such a database was generated and used to perform delivery verification and dose reconstruction. Two experiments were conducted: a simulated prostate delivery on an inhomogeneous abdominal phantom, and a nasopharyngeal delivery on a dog cadaver. For both cases, it was found that the verified fluence and dose results using the database approach agreed very well with those using previously developed and proven techniques. Delivery verification with a measured database and a CT image at the time of treatment is an accurate procedure for tomotherapy. The database eliminates the need for any patient-specific, pre- or post-treatment measurements. Moreover, such an approach creates an opportunity for accurate, real-time delivery verification and dose reconstruction, given fast image reconstruction and dose computation tools.
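
    A minimal sketch of the database lookup this method relies on: detector signal tabulated against radiological path-length and detector-to-phantom distance, interpolated at the values computed from the delivery-time CT. The grid and numbers below are invented placeholders, not the measured tomotherapy database.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Placeholder database: detector signal per unit incident energy
        # fluence vs radiological path length and detector-to-phantom distance.
        path_cm = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
        dist_cm = np.array([20.0, 40.0, 60.0, 80.0])
        signal = np.array([[1.00, 0.95, 0.90, 0.86],
                           [0.70, 0.66, 0.63, 0.60],
                           [0.49, 0.46, 0.44, 0.42],
                           [0.24, 0.23, 0.22, 0.21],
                           [0.12, 0.11, 0.11, 0.10]])
        db = RegularGridInterpolator((path_cm, dist_cm), signal)

        # Path length and distance come from the CT image at delivery time;
        # dividing the measured signal by the database value yields the
        # delivered energy fluence for that ray.
        ray_path, ray_dist, measured = 12.5, 55.0, 0.215
        fluence = measured / float(db((ray_path, ray_dist)))
        print(f"verified relative energy fluence: {fluence:.3f}")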

  10. Minimum dose method for walking-path planning of nuclear facilities

    International Nuclear Information System (INIS)

    Liu, Yong-kuo; Li, Meng-kun; Xie, Chun-li; Peng, Min-jun; Wang, Shuang-yu; Chao, Nan; Liu, Zhong-kun

    2015-01-01

    Highlights: • An environment model is proposed for radiation environments. • A path-planning method is designed for the least-dose walking-path problem. • A virtual-real mixed simulation program for path planning is developed. • The program can plan walking paths and simulate them. - Abstract: A minimum dose method based on a staff walking road network model was proposed for walking-path planning in nuclear facilities. A virtual-reality simulation program was developed using the C# programming language and the DirectX engine. The simulation program was applied to virtual nuclear facilities. Simulation results indicated that the walking-path planning method was effective in providing safety for people walking in nuclear facilities.
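
    The abstract does not give the planning algorithm itself; a common way to realize a minimum-dose path on a walking road network is Dijkstra's algorithm with edge costs equal to dose rate times traversal time, as in this hypothetical sketch (network, dose rates and walking speed are invented).

        import heapq

        def min_dose_path(graph, start, goal, speed_m_s=1.2):
            """Dijkstra on a walking road network; edge cost = dose rate x transit time.

            graph: {node: [(neighbour, length_m, dose_rate_uSv_per_h), ...]}
            Returns (total dose in uSv, list of nodes on the path).
            """
            pq, best, prev = [(0.0, start)], {start: 0.0}, {}
            while pq:
                dose, node = heapq.heappop(pq)
                if node == goal:
                    path = [node]
                    while node in prev:
                        node = prev[node]
                        path.append(node)
                    return dose, path[::-1]
                if dose > best.get(node, float("inf")):
                    continue  # stale queue entry
                for nxt, length_m, rate in graph.get(node, []):
                    step = rate * (length_m / speed_m_s) / 3600.0  # uSv on this edge
                    cand = dose + step
                    if cand < best.get(nxt, float("inf")):
                        best[nxt], prev[nxt] = cand, node
                        heapq.heappush(pq, (cand, nxt))
            return float("inf"), []

        # Toy network: node -> [(neighbour, length in m, local dose rate in uSv/h)]
        net = {"entry": [("corridor", 40, 2.0), ("hall", 25, 12.0)],
               "corridor": [("pump_room", 30, 1.5)],
               "hall": [("pump_room", 10, 20.0)],
               "pump_room": []}
        print(min_dose_path(net, "entry", "pump_room"))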

  11. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  12. Novel method based on Fricke gel dosimeters for dose verification in IMRT techniques

    International Nuclear Information System (INIS)

    Aon, E.; Brunetto, M.; Sansogne, R.; Castellano, G.; Valente, M.

    2008-01-01

    Modern radiotherapy is becoming increasingly complex. Conformal and intensity modulated (IMRT) techniques are nowadays available for achieving better tumour control. However, accurate methods for 3D dose verification for these modern irradiation techniques have not been adequately established yet. Fricke gel dosimeters consist essentially of a ferrous sulphate (Fricke) solution fixed to a gel matrix, which enables spatial resolution. A suitable radiochromic marker (xylenol orange) is added to the solution in order to produce radiochromic changes within the visible spectrum range, due to the internal chemical conversion (oxidation) of ferrous ions to ferric ions. In addition, xylenol orange has proved to slow down the internal diffusion of ferric ions. These dosimeters, suitably shaped in the form of thin layers and optically analyzed by means of visible light transmission imaging, have recently been proposed as a method for 3D absorbed dose distribution determination in radiotherapy, and tested in several IMRT applications employing a homogeneous plane (visible light) illuminator and a CCD camera with a monochromatic filter for sample analysis by means of transmittance images. In this work, the performance of an alternative read-out method is characterized, consisting of visible light images acquired before and after irradiation by means of a commercially available flatbed-like scanner. Registered images are suitably converted to matrices and analyzed by means of dedicated in-house software. The method allows 1D (profile), 2D (surface) and 3D (volume) dose mapping. In addition, quantitative comparisons have been performed by means of the gamma composite criteria. Dose distribution comparisons between Fricke gel dosimeters and traditional standard dosimetric techniques for IMRT irradiations show an overall good agreement, supporting the suitability of the method. The agreement, quantified by the gamma index (that seldom
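
    The gamma composite criteria mentioned here combine a dose-difference and a distance-to-agreement tolerance (in the style of Low et al.). A minimal brute-force 1D version, with synthetic profiles standing in for the Fricke-gel and reference dose data:

        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_frac=0.03):
            """Composite gamma index for 1-D dose profiles.

            The dose-difference criterion is a fraction of the reference maximum;
            gamma <= 1 means the evaluated point passes the combined criterion.
            """
            dd = dd_frac * d_ref.max()
            gammas = np.empty_like(d_eval)
            for i, (x, d) in enumerate(zip(x_eval, d_eval)):
                cap = ((x_ref - x) / dta_mm) ** 2 + ((d_ref - d) / dd) ** 2
                gammas[i] = np.sqrt(cap.min())
            return gammas

        x = np.linspace(0.0, 50.0, 101)                  # mm
        ref = np.exp(-((x - 25.0) / 12.0) ** 2)          # reference profile
        ev = 1.02 * np.exp(-((x - 25.6) / 12.0) ** 2)    # measured (shifted, scaled)
        g = gamma_1d(x, ref, x, ev)
        print(f"gamma pass rate (3%/3 mm): {100.0 * np.mean(g <= 1.0):.1f}%")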

  13. Dose conversion factors for radiation doses at normal operation discharges. F. Methods report; Dosomraekningsfaktorer foer normaldriftutslaepp. F. Metodrapport

    Energy Technology Data Exchange (ETDEWEB)

    Bergstroem, Ulla; Hallberg, Bengt; Karlsson, Sara

    2001-10-01

    A study has been performed in order to develop and extend existing models for dose estimation at emissions of radioactive substances from nuclear facilities in Sweden. This report gives a review of the different exposure pathways that have been considered in the study. Radioecological data to be used in calculations of radiation doses are based on the actual situation at the nuclear sites. Dose factors for children have been split into different age groups. The exposure pathways have been carefully re-examined, as have the radioecological data, leading to some new pathways (e.g. doses from consumption of forest berries, mushrooms and game) for cesium and strontium. Carbon-14 was given special treatment by using a model for the uptake of carbon by growing plants. For exposure from aquatic emissions, a simplification was made by focusing on the territory of fish species, since consumption of fish is the most important pathway.

  14. Radiation doses in diagnostic radiology and methods for dose reduction. Report of a co-ordinated research programme (1991-1993)

    International Nuclear Information System (INIS)

    1995-04-01

    It is well recognized that diagnostic radiology is the largest contributor to the collective dose from all man-made sources of radiation. Large differences in radiation doses from the same procedures among different X-ray rooms have led to the conclusion that there is a potential for dose reduction. A Co-ordinated Research Programme on Radiation Doses in Diagnostic Radiology and Methods for Dose Reduction, involving Member States with different degrees of development, was launched by the IAEA in co-operation with the CEC. This report summarizes the results of the second and final Research Co-ordination Meeting held in Vienna from 4 to 8 October 1993. 22 refs, 6 figs and tabs

  15. ARN Training on Advance Methods for Internal Dose Assessment: Application of Ideas Guidelines

    International Nuclear Information System (INIS)

    Rojo, A.M.; Gomez Parada, I.; Puerta Yepes, N.; Gossio, S.

    2010-01-01

    Dose assessment in the case of internal exposure involves the estimation of committed effective dose based on the interpretation of bioassay measurements and on hypotheses about the characteristics of the radioactive material and the time pattern and pathway of intake. The IDEAS Guidelines provide a method to harmonize dose evaluations using criteria and flow-chart procedures to be followed step by step. The EURADOS Working Group 7 'Internal Dosimetry', in collaboration with the IAEA and the Czech Technical University (CTU) in Prague, promoted the 'EURADOS/IAEA Regional Training Course on Advanced Methods for Internal Dose Assessment: Application of IDEAS Guidelines' to broaden and encourage the use of the IDEAS Guidelines; it took place in Prague (Czech Republic) from 2 to 6 February 2009. The ARN recognized the relevance of this training and requested a place to participate in this activity. Subsequently, the first training course in Argentina took place from 24 to 28 August to train local internal dosimetry experts. This paper summarizes the main characteristics of this activity. (authors)

  16. Study on method of dose estimation for the Dual-moderated neutron survey meter

    International Nuclear Information System (INIS)

    Zhou, Bo; Li, Taosheng; Xu, Yuhai; Gong, Cunkui; Yan, Qiang; Li, Lei

    2013-01-01

    In order to study neutron dose measurement in high energy radiation fields, a dual-moderated survey meter covering neutron spectra with mean energies from 1 keV to 300 MeV has been developed. Measurement results of survey meters depend on the characteristics of the neutron spectra in different neutron radiation fields, so the responses to various neutron spectra should be studied in order to obtain more reasonable dose estimates. In this paper the responses of the survey meter were calculated for different neutron spectra taken from IAEA Technical Reports Series No. 318 and other references. Finally, one dose estimation method was determined. The range of the reading per H*(10) for the estimation method is about 0.7-1.6 for neutron mean energies from 50 keV to 300 MeV. -- Highlights: • We studied a novel high energy neutron survey meter. • Response characteristics of the survey meter were calculated using a series of neutron spectra. • One significant advantage of the survey meter is that it can provide the mean energy of the radiation field. • Dose estimate deviations can be corrected. • The range of the corrected reading per H*(10) is about 0.7-1.6 for neutron fluence mean energies from 0.05 MeV to 300 MeV
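
    The response study described here amounts to folding the meter response and the fluence-to-H*(10) conversion coefficients with each spectrum and taking the ratio. A sketch with invented group-wise numbers (the real response matrix and the TRS-318 spectra would have to be substituted):

        import numpy as np

        # Placeholder energy groups: meter response R(E) in counts per unit
        # fluence and fluence-to-H*(10) coefficients h(E) in pSv cm^2.
        resp = np.array([0.8, 1.5, 3.0, 2.4, 2.0, 1.8])
        h10 = np.array([10.0, 50.0, 420.0, 520.0, 400.0, 350.0])

        def reading_per_h10(group_fluence):
            """Meter reading per unit H*(10) for a group-wise fluence spectrum."""
            return float(resp @ group_fluence) / float(h10 @ group_fluence)

        soft = np.array([5.0, 4.0, 1.0, 0.1, 0.0, 0.0])  # soft workplace spectrum
        hard = np.array([0.5, 1.0, 1.5, 1.0, 0.6, 0.4])  # hard workplace spectrum
        print(f"reading per H*(10): soft {reading_per_h10(soft):.3f}, "
              f"hard {reading_per_h10(hard):.3f}")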

  17. Statistical analysis of dose heterogeneity in circulating blood: Implications for sequential methods of total body irradiation

    International Nuclear Information System (INIS)

    Molloy, Janelle A.

    2010-01-01

    Purpose: Improvements in delivery techniques for total body irradiation (TBI) using Tomotherapy and intensity modulated radiation therapy have been proven feasible. Despite the promise of improved dose conformality, the application of these "sequential" techniques has been hampered by concerns over dose heterogeneity to circulating blood. The present study was conducted to provide quantitative evidence regarding the potential clinical impact of this heterogeneity. Methods: Blood perfusion was modeled analytically as possessing linear, sinusoidal motion in the craniocaudal dimension. The average perfusion period for human circulation was estimated to be approximately 78 s. Sequential treatment delivery was modeled as a Gaussian-shaped dose cloud with a 10 cm length that traversed a 183 cm patient length at a uniform speed. Total dose to circulating blood voxels was calculated via numerical integration and normalized to 2 Gy per fraction. Dose statistics and equivalent uniform dose (EUD) were calculated for relevant treatment times, radiobiological parameters, blood perfusion rates, and fractionation schemes. The model was then refined to account for random dispersion superimposed onto the underlying periodic blood flow. Finally, a fully stochastic model was developed using binomial and trinomial probability distributions. These models allowed for the analysis of nonlinear sequential treatment modalities and treatment designs that incorporate deliberate organ sparing. Results: The dose received by individual blood voxels exhibited asymmetric behavior that depended on the coherence among the blood velocity, circulation phase, and the spatiotemporal characteristics of the irradiation beam. Heterogeneity increased with the perfusion period and decreased with the treatment time. Notwithstanding, heterogeneity was less than ±10% for perfusion periods shorter than 150 s. The EUD was compromised for radiosensitive cells, long perfusion periods, and short treatment times.
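
    A numerical sketch of the analytical model described in the Methods: a Gaussian dose cloud sweeps the patient length at constant speed while a blood voxel oscillates sinusoidally, and the dose to the voxel is integrated over time for a set of circulation phases. The treatment time, amplitude and voxel position are illustrative choices, not the study's parameters.

        import numpy as np

        # Illustrative parameters only.
        L, sigma = 183.0, 10.0 / 2.355      # patient length (cm), sigma for ~10 cm FWHM cloud
        T_treat, T_perf = 600.0, 78.0       # treatment time and perfusion period (s)
        amp, centre = 40.0, 90.0            # voxel oscillation amplitude and centre (cm)

        t = np.linspace(0.0, T_treat, 120_001)
        dt = t[1] - t[0]
        cloud = -3.0 * sigma + (L + 6.0 * sigma) * t / T_treat  # cloud centre vs time

        def voxel_dose(phase):
            """Relative dose accumulated by a blood voxel with a given circulation phase."""
            pos = centre + amp * np.sin(2.0 * np.pi * t / T_perf + phase)
            rate = np.exp(-0.5 * ((pos - cloud) / sigma) ** 2)  # Gaussian dose cloud
            return rate.sum() * dt

        doses = np.array([voxel_dose(p) for p in np.linspace(0.0, 2.0 * np.pi, 64)])
        doses *= 2.0 / doses.mean()          # normalise the mean voxel dose to 2 Gy
        print(f"blood voxel dose range: {doses.min():.2f}-{doses.max():.2f} Gy")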

  18. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  19. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    This report is based on the survey "Industrial Companies in Denmark - Today and Tomorrow", section IV: Supply Chain Management - Practices and Performance, question number 4.9 on performance assessment. To our knowledge, this survey is unique, as we have not been able to find results from any compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...

  20. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation are described and illustrated with examples: (a) benchmark value, (b) benchmark estimate, and (c) benchmark effect. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  1. A benchmarking study

    Directory of Open Access Journals (Sweden)

    H. Groessing

    2015-02-01

    Full Text Available A benchmark study for permeability measurement is presented. Past studies by other research groups, which focused on the reproducibility of 1D permeability measurements, showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of capacitive in-plane permeability testing system measurements was benchmarked by comparing results of two research sites using this technology. The reproducibility was compared using a glass fibre woven textile and a carbon fibre non-crimp fabric (NCF). These two material types were taken into consideration due to the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents, with five repetitions each. It was found that the stability and reproducibility of the presented in-plane permeability measurement system is very good in the case of the glass fibre woven textiles. This is true for the comparison of the repetition measurements as well as for the comparison between the two different permeameters. These positive results were confirmed by a comparison to permeability values of the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for correct correlation of measured permeability values and fibre volume contents.

  2. Exposure rate by the spectrum dose index method using plastic scintillator detectors.

    Science.gov (United States)

    Proctor, Alan; Wellman, Jeffrey

    2012-04-01

    The spectrum dose index (SDI) method was tested for use with data from plastic scintillator detectors by irradiating a typical portal detector system using different gamma sources and natural background. Measurements were compared with exposure rates simultaneously measured using a calibrated pressurised ion chamber. It was found that a modified SDI algorithm could be used to calculate exposure rates for these detectors despite the lack of photopeaks in plastic scintillator spectra.

  3. Safety objectives for nuclear power plants in terms of dose-frequency targets; a comparison exercise performed by the Commission of the European Communities on dose assessment within a licencing framework

    International Nuclear Information System (INIS)

    Lange, F.; Tolley, B.; Kelly, N.; Harbison, S.; Gilby, E.

    1987-01-01

    The Task Force on Safety Objectives (T.F.S.O.) of the CEC has initiated a benchmark exercise with the purpose of reviewing the methods and data used in dose assessment adopted in various countries to estimate doses from design basis accidents of nuclear power plants within a regulatory framework. This benchmark exercise forms one of the initiatives of the T.F.S.O. to enable a comprehensive intercomparison of the degree of coherence between the dose-frequency targets used in different Member States for application to design basis accidents. The structure, contents and results of the benchmark exercise, in which eight countries/institutions participated, are described. Some of the more important findings and conclusions and the relation to a parallel benchmark exercise on source terms for design basis accidents are discussed. (orig.)

  4. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for the development of new ideas and a comparison of methods for hybrid systems' modeling and control. The benchmark features switch dynamics and discrete valued input, making it a hybrid system; furthermore, the outputs are subject to a number of constraints. The objective is to develop an efficient and optimal control strategy.

  5. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...

  6. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for development of new ideas and a comparison of methods for hybrid systems' modeling and control. The benchmark features switch dynamics and discrete valued input making it a hybrid system, furthermore the outputs are subjected...

  7. Doses and application methods of Azospirillum brasilense in irrigated upland rice

    Directory of Open Access Journals (Sweden)

    Nayara F. S. Garcia

    Full Text Available ABSTRACT The study was carried out in Selvíria-MS, in the 2011/12 and 2012/13 agricultural years, aiming to evaluate the efficiency of Azospirillum brasilense in nitrogen fixation in upland rice, as a function of doses and application methods of the inoculant containing this diazotrophic bacterium. The experimental design was randomized blocks, arranged in a 4 x 4 factorial scheme, with 4 doses of inoculant (control without inoculation, 100, 200 and 300 mL of the commercial product ha-1) and 4 application methods (seed inoculation, application in the sowing furrow, soil spraying after sowing, and foliar spraying at the beginning of plant tillering), with 4 replicates. During the experiment, the agronomic characteristics, production components and yield of the rice crop were evaluated. It was concluded that the inoculant containing Azospirillum brasilense promotes an increase (19%) in the yield of upland rice under sprinkler irrigation when used at the dose of 200 mL ha-1, regardless of the application method.

  8. Dose rate evaluation of body phantom behind ITER bio-shield wall using Monte Carlo method

    International Nuclear Information System (INIS)

    Beheshti, A.; Jabbari, I.; Karimian, A.; Abdi, M.

    2012-01-01

    One of the most critical risks to humans in reactor environments is radiation exposure. Around the tokamak hall, personnel are exposed to a wide range of particles, including neutrons and photons. The International Thermonuclear Experimental Reactor (ITER) is a nuclear fusion research and engineering project and the most advanced experimental tokamak nuclear fusion reactor. Assessment of dose rates and photon radiation due to the neutron activation of the solid structures in ITER is important from the radiological point of view. Therefore, the dosimetry considered in this case is based on Deuterium-Tritium (DT) plasma burning with neutron production at 14.1 MeV. The aim of this study is to assess the amount of radiation behind the bio-shield wall that a human receives during normal operation of ITER, considering neutron activation and delayed gammas. To achieve this aim, the ITER system and its components were simulated by the Monte Carlo method. Also, to increase the accuracy and precision of the absorbed dose assessment, a body phantom was considered in the simulation. The results of this research showed that the total dose rate level near the outside of the bio-shield wall of the tokamak hall is less than ten percent of the annual occupational dose limit during normal operation of ITER, and it is possible to determine how long human beings can remain in that environment before the body absorbs dangerous levels of radiation. (authors)

  9. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    Energy Technology Data Exchange (ETDEWEB)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  10. Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods

    Science.gov (United States)

    Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.

    2012-03-01

    In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which has a higher radiation dose due to its cine scanning technique. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) parameter as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and degrade CT perfusion maps greatly if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is used, and the model parameters are then estimated from a Bayesian formulation of prior smoothness constraints on perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low dose CT data. The merit of this scheme lies in the combination of an analytical piecewise residual function with a Bayesian framework using a simple spatial prior constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean-square error (MSE) of 40% at a low radiation dose of 43 mA.

  11. SIMPLE METHOD OF SIZE-SPECIFIC DOSE ESTIMATES CALCULATION FROM PATIENT WEIGHT ON COMPUTED TOMOGRAPHY.

    Science.gov (United States)

    Iriuchijima, Akiko; Fukushima, Yasuhiro; Nakajima, Takahito; Tsushima, Yoshito; Ogura, Akio

    2018-01-01

    The purpose of this study is to develop a new and simple methodology for calculating mean size-specific dose estimates (SSDE) over the entire scan range (mSSDE) from weight and volume CT dose index (CTDIvol). We retrospectively analyzed data from a dose index registry. Scan areas were divided into two regions: chest and abdomen-pelvis. The original mSSDE was calculated by commercially available software. The conversion formulas for mSSDE were estimated from weight and CTDIvol (SSDEweight) in each region. SSDEweight was compared with the original mSSDE using Bland-Altman analysis. Root mean square differences were 1.4 mGy for chest and 1.5 mGy for abdomen-pelvis. Our method can calculate SSDEweight from weight and CTDIvol without dedicated software, and can be used to calculate diagnostic reference levels (DRLs) to optimize CT exposure doses. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
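
    The paper's region-specific conversion formulas are not reproduced in the abstract; the sketch below only illustrates the general shape of such a weight-based conversion, mSSDE ~ f(weight) x CTDIvol, with placeholder coefficients that would have to be replaced by the published fits.

        # Placeholder coefficients for a linear-in-weight conversion of the
        # form mSSDE ~ (a * weight + b) * CTDIvol; substitute the published
        # region-specific fits before any real use.
        COEFFS = {"chest": (-0.007, 1.85), "abdomen-pelvis": (-0.006, 1.75)}

        def ssde_from_weight(region, weight_kg, ctdi_vol_mgy):
            a, b = COEFFS[region]
            return (a * weight_kg + b) * ctdi_vol_mgy

        print(f"{ssde_from_weight('chest', 65.0, 8.0):.1f} mGy")  # -> ~11.2 mGy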

  12. A continuous OSL scanning method for analysis of radiation depth-dose profiles in bricks

    DEFF Research Database (Denmark)

    Bøtter-Jensen, L.; Jungner, H.; Poolton, N.R.J.

    1995-01-01

    This article describes the development of a method for directly measuring radiation depth-dose profiles from brick, tile and porcelain cores, without the need for sample separation techniques. For the brick cores, examples are shown of the profiles generated by artificial irradiation using the different photon energies from Cs-137 and Co-60 gamma sources; comparison is drawn with both the theoretical calculations derived from Monte Carlo simulations and experimental measurements made using more conventional optically stimulated luminescence methods of analysis.

  13. A method for checking high dose rate treatment times for vaginal applicators.

    Science.gov (United States)

    Mayo, C S; Ulin, K

    2001-01-01

    A method is presented for checking the treatment time calculation for high dose rate (HDR) vaginal cylinder treatments. The method represents an independent check of the HDR planning system and can take into account nonuniform isodose line coverage around the cylinder. Only the air kerma strength of the source and information that is available from the written directive are required. The maximum discrepancy for a representative set of cylinder plans done on a Nucletron unit was 5%. A working HTML JavaScript program is included in the Appendix.
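
    The abstract does not spell out the check itself; one common hand check consistent with its inputs (the air kerma strength plus the written directive) is a bare point-source, inverse-square estimate of the treatment time, ignoring the radial dose and anisotropy functions of a full TG-43 calculation. The sketch below uses a typical Ir-192 dose-rate constant and invented prescription values.

        # Typical Ir-192 dose-rate constant; prescription values invented.
        def hdr_time_min(sk_U, dose_gy, r_cm, dose_rate_const=1.109):
            """sk_U: air kerma strength in U (1 U = 1 uGy m^2/h);
            dose_rate_const in cGy/(h*U); bare point source, inverse square only."""
            rate_cgy_h = sk_U * dose_rate_const / r_cm ** 2  # cGy/h at distance r
            return 60.0 * (dose_gy * 100.0) / rate_cgy_h

        # 7 Gy prescribed at 2.5 cm from the source, Sk = 40 000 U
        print(f"estimated treatment time: {hdr_time_min(40_000.0, 7.0, 2.5):.1f} min")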

  14. Evaluation of medium-dose UVA1 phototherapy in localized scleroderma with the cutometer and fast Fourier transform method

    NARCIS (Netherlands)

    de Rie, M. A.; Enomoto, D. N. H.; de Vries, H. J. C.; Bos, J. D.

    2003-01-01

    Purpose: To evaluate the efficacy of medium-dose UVA1 phototherapy in patients with localized scleroderma. Method: A controlled pilot study with medium-dose UVA1 (48 J/cm^2) was performed. The results were evaluated by means of a skin score and two objective methods for quantifying sclerosis.

  15. Best Practices in Stability Indicating Method Development and Validation for Non-clinical Dose Formulations.

    Science.gov (United States)

    Henry, Teresa R; Penn, Lara D; Conerty, Jason R; Wright, Francesca E; Gorman, Gregory; Pack, Brian W

    2016-11-01

    Non-clinical dose formulations (also known as pre-clinical or GLP formulations) play a key role in early drug development. These formulations are used to introduce active pharmaceutical ingredients (APIs) into test organisms for both pharmacokinetic and toxicological studies. Since these studies are ultimately used to support dose and safety ranges in human studies, it is important not only to understand the concentration and PK/PD of the active ingredient but also to generate safety data for likely process impurities and degradation products of the active ingredient. As such, many in the industry have chosen to develop and validate methods which can accurately detect and quantify the active ingredient along with impurities and degradation products. Such methods often provide trendable results which are predictive of stability, thus leading to the name: stability indicating methods. This document provides an overview of best practices for those choosing to include the development and validation of such methods as part of their non-clinical drug development program. It is intended to support teams who are either new to stability indicating method development and validation or who are less familiar with the requirements of validation due to their position within the product development life cycle.

  16. BAYESIAN DATA AUGMENTATION DOSE FINDING WITH CONTINUAL REASSESSMENT METHOD AND DELAYED TOXICITY

    Science.gov (United States)

    Liu, Suyu; Yin, Guosheng; Yuan, Ying

    2014-01-01

    A major practical impediment when implementing adaptive dose-finding designs is that the toxicity outcome used by the decision rules may not be observed shortly after the initiation of the treatment. To address this issue, we propose the data augmentation continual re-assessment method (DA-CRM) for dose finding. By naturally treating the unobserved toxicities as missing data, we show that such missing data are nonignorable in the sense that the missingness depends on the unobserved outcomes. The Bayesian data augmentation approach is used to sample both the missing data and model parameters from their posterior full conditional distributions. We evaluate the performance of the DA-CRM through extensive simulation studies, and also compare it with other existing methods. The results show that the proposed design satisfactorily resolves the issues related to late-onset toxicities and possesses desirable operating characteristics: treating patients more safely, and also selecting the maximum tolerated dose with a higher probability. The new DA-CRM is illustrated with two phase I cancer clinical trials. PMID:24707327
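
    For orientation, a minimal one-parameter CRM (power model, grid posterior, fully observed binary toxicities) is sketched below; the paper's actual contribution, the Bayesian data-augmentation step for late-onset toxicities, is omitted. The skeleton, prior and trial data are invented.

        import numpy as np

        skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.45])  # prior toxicity guesses
        target = 0.25
        a_grid = np.linspace(-3.0, 3.0, 601)
        prior = np.exp(-0.5 * a_grid**2 / 1.34**2)            # ~N(0, 1.34^2) prior on a

        # (dose_level, toxicity) pairs observed so far (invented)
        data = [(0, 0), (1, 0), (1, 0), (2, 1), (2, 0)]

        like = np.ones_like(a_grid)
        for level, tox in data:
            p = skeleton[level] ** np.exp(a_grid)             # power model p_i = s_i^exp(a)
            like *= p if tox else (1.0 - p)

        post = prior * like
        post /= post.sum()
        p_hat = np.array([np.sum(post * skeleton[i] ** np.exp(a_grid))
                          for i in range(len(skeleton))])
        next_dose = int(np.argmin(np.abs(p_hat - target)))
        print("posterior toxicity estimates:", np.round(p_hat, 3),
              "-> next dose level:", next_dose)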

  17. Calibration and intercomparison methods of dose calibrators used in nuclear medicine facilities; Metodos de calibracao e de intercomparacao de calibradores de dose utilizados em servicos de medicina nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Alessandro Martins da

    1999-07-01

    Dose calibrators are used in most nuclear medicine facilities to determine the amount of radioactivity administered to a patient in a particular investigation or therapeutic procedure. It is therefore of vital importance that the equipment used presents good performance and is regularly calibrated at an authorized laboratory. This occurs if adequate quality assurance procedures are carried out. Some quality control tests should be performed daily, others biannually or yearly, testing, for example, accuracy and precision, reproducibility and response linearity. In this work a commercial dose calibrator was calibrated with solutions of radionuclides used in nuclear medicine. Simple instrument tests, such as response linearity and the variation of response with source volume at a constant source activity concentration, were performed. This instrument can now be used as a working standard for the calibration of other dose calibrators. An intercomparison procedure was proposed as a method of quality control of dose calibrators used in nuclear medicine facilities. (author)

  18. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments on neutron transmission through iron blocks, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with RADHEAT-V4, a shielding analysis code system developed at JAERI. The calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The D-T neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily using the revised JENDL data for fusion neutronics calculations. (author)

  19. Individualized drug dosing using RBF-Galerkin method: Case of anemia management in chronic kidney disease.

    Science.gov (United States)

    Mirinejad, Hossein; Gaweda, Adam E; Brier, Michael E; Zurada, Jacek M; Inanc, Tamer

    2017-09-01

    Anemia is a common comorbidity in patients with chronic kidney disease (CKD) and is frequently associated with a decreased physical component of quality of life, as well as adverse cardiovascular events. Current treatment methods for renal anemia are mostly population-based approaches treating individual patients with a one-size-fits-all model. However, FDA recommendations stipulate individualized anemia treatment with precise control of the hemoglobin concentration and minimal drug utilization. In accordance with these recommendations, this work presents an individualized drug dosing approach to anemia management by leveraging the theory of optimal control. A Multiple Receding Horizon Control (MRHC) approach based on the RBF-Galerkin optimization method is proposed for individualized anemia management in CKD patients. Recently developed by the authors, the RBF-Galerkin method uses radial basis function approximation along with Galerkin error projection to solve constrained optimal control problems numerically. The proposed approach is applied to generate optimal dosing recommendations for individual patients. Performance of the proposed approach (MRHC) is compared in silico to that of a population-based anemia management protocol and an individualized multiple model predictive control method for two case scenarios: hemoglobin measurement with and without observational errors. The in silico comparison indicates that the hemoglobin concentration under the MRHC method shows the least variation among the methods, especially in the presence of measurement errors. In addition, the average achieved hemoglobin level from the MRHC is significantly closer to the target hemoglobin than that of the other two methods, according to an analysis of variance (ANOVA) statistical test. Furthermore, drug dosages recommended by the MRHC are more stable and accurate and reach the steady-state value notably faster than those generated by the other two methods. The proposed method is highly efficient for

  20. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  1. Absorbed dose measurements in mammography using Monte Carlo method and ZrO2+PTFE dosemeters

    International Nuclear Information System (INIS)

    Duran M, H. A.; Hernandez O, M.; Salas L, M. A.; Hernandez D, V. M.; Vega C, H. R.; Pinedo S, A.; Ventura M, J.; Chacon, F.; Rivera M, T.

    2009-10-01

    Mammography is a central tool for breast cancer diagnosis. In addition, screening programs are conducted periodically for asymptomatic women in certain age groups; these programs have shown a reduction in breast cancer mortality. Early detection of breast cancer is achieved through a mammogram, which contrasts the glandular and adipose tissue with a probable calcification. The parameters used for mammography are based on the thickness and density of the breast; their values depend on the voltage, current, focal spot and anode-filter combination. To achieve a clear image at minimum dose, appropriate irradiation conditions must be chosen. The risk associated with mammography should not be ignored. This study was performed at General Hospital No. 1 of IMSS in Zacatecas. A glucose phantom was used, and the air kerma at the entrance of the breast was measured with ZrO2+PTFE thermoluminescent dosemeters and calculated using Monte Carlo methods; the calculation was completed by computing the absorbed dose. (author)

  2. Use of rank sum method in identifying high occupational dose jobs for ALARA implementation

    International Nuclear Information System (INIS)

    Cho, Yeong Ho; Kang, Chang Sun

    1998-01-01

    The cost-effective reduction of occupational radiation exposure (ORE) dose at a nuclear power plant cannot be achieved without an extensive analysis of the accumulated ORE dose data of existing plants. It is necessary to identify which jobs carry high ORE doses for ALARA implementation. In this study, the Rank Sum Method (RSM) is used to identify high-ORE jobs. As a case study, the database of ORE-related maintenance and repair jobs for Kori Units 3 and 4 is used for assessment, and the top twenty high-ORE jobs are identified. The results are also verified and validated using the Friedman test, and RSM is found to be a very efficient way of analyzing the data. (author)
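
    A small sketch of how a rank sum screening might look: each job is ranked on several ORE-related criteria and the ranks are summed, with the highest rank sums flagged for ALARA review. The criteria and job records below are invented, not the Kori Units 3 and 4 database.

        from scipy.stats import rankdata

        # Invented job records: (collective dose person-mSv, frequency per
        # year, number of workers); a higher rank means greater ORE concern.
        jobs = {"SG nozzle dam installation": (120.0, 1.0, 18),
                "RCP seal replacement": (95.0, 0.5, 10),
                "Valve repacking": (30.0, 4.0, 6),
                "ISI weld inspection": (60.0, 2.0, 8)}

        criteria = list(zip(*jobs.values()))           # one tuple per criterion
        rank_sum = sum(rankdata(c) for c in criteria)  # sum of per-criterion ranks
        for name, rs in sorted(zip(jobs, rank_sum), key=lambda p: -p[1]):
            print(f"{name:28s} rank sum = {rs:.1f}")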

  3. A method for radiobiological investigations in radiation fields with different LET and high dose rates

    International Nuclear Information System (INIS)

    Grundler, W.

    1976-01-01

    For investigations (1) performed in the field of radiobiology with radiation of different LET and a relatively high background dose rate of one component (e.g. investigations with fast and intermediate reactor neutrons), (2) concerning radiation risk studies over a wide dose range, and (3) involving irradiations covering a long time period (up to 100 days), a test system is necessary which, on the one hand, makes it possible to analyze the influence of radiation of different LET and, on the other hand, shows relatively radiation-resistant behaviour and allows a simple cell cycle regulation. A survey is given of the installed device for a simple cell observation method, the biological test system used, and the analysis of effects caused by dose, repair and LET. It is possible to analyze the behaviour of non-surviving cells and to demonstrate different reactions of the test parameters to radiation of different LET. (author)

  4. Comparison of passive and active radon measurement methods for personal occupational dose assessment

    Directory of Open Access Journals (Sweden)

    Hasanzadeh Elham

    2016-01-01

    Full Text Available To compare the performance of active short-term and passive long-term radon measurement methods, a study was carried out in several closed spaces, including a uranium mine in Iran. For the passive method, solid-state nuclear track detectors based on Lexan polycarbonate were utilized; for the active method, an AlphaGUARD monitor. The study focused on the correlation between the results obtained for estimating the average indoor radon concentrations and the consequent personal occupational doses in various working places. The repeatability of each method was investigated, too. In addition, it was shown that the radon concentrations in different stations of the continually ventilated uranium mine were comparable to those in ground-floor laboratories or storage rooms (without continual ventilation) and lower than in underground laboratories.

  5. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

    A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used for anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. From the DVHs, quantities such as the conformity index COIN and COIN integrals are derived. This is achieved by using partially uniform distributed sampling points, with a density in each region obtained from a survey of the gradients or the variance of the dose distribution in that region. The shape of the sampling regions is adapted to the patient anatomy and the shape and size of the implant. For the application of this method a single preprocessing step is necessary, which requires only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points.
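
    A toy version of the gradient-driven stratified sampling described above: a synthetic dose grid is split into coarse blocks, sampling points are allocated in proportion to each block's mean dose-gradient magnitude, and the cumulative DVH is recovered with importance weights. The dose distribution and block layout are illustrative, not the clinical implementation.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic 64^3 dose grid with a steep fall-off around an "implant".
        n = 64
        z, y, x = np.meshgrid(*([np.linspace(-1.0, 1.0, n)] * 3), indexing="ij")
        dose = 100.0 * np.exp(-4.0 * (x**2 + y**2 + z**2))

        # Allocate sampling points to 8x8x8 coarse blocks in proportion to
        # the mean dose-gradient magnitude of each block.
        gz, gy, gx = np.gradient(dose)
        grad = np.sqrt(gx**2 + gy**2 + gz**2)
        block_mean = grad.reshape(8, 8, 8, 8, 8, 8).mean(axis=(1, 3, 5)).ravel()
        weights = block_mean / block_mean.sum()
        counts = rng.multinomial(20_000, weights)

        samples = []
        for b, c in enumerate(counts):
            if c == 0:
                continue
            bi, bj, bk = np.unravel_index(b, (8, 8, 8))
            ii = rng.integers(bi * 8, bi * 8 + 8, size=c)
            jj = rng.integers(bj * 8, bj * 8 + 8, size=c)
            kk = rng.integers(bk * 8, bk * 8 + 8, size=c)
            samples.append(dose[ii, jj, kk])
        vals = np.concatenate(samples)

        # Importance weights correct for the non-uniform sampling density.
        w = (1.0 / 512.0) / weights[np.repeat(np.arange(512), counts)]
        dvh = [(w * (vals >= d)).sum() / w.sum() for d in range(0, 101)]
        print(f"V50 = {100.0 * dvh[50]:.1f}% of the volume")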

  6. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  7. Experimental method for calculation of effective doses in interventional radiology; Metodo experimental para calculo de dosis efectivas en radiologia intervencionista

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz Lblanca, M. D.; Diaz Romero, F.; Casares Magaz, O.; Garrido Breton, C.; Catalan Acosta, A.; Hernandez Armas, J.

    2013-07-01

    This paper proposes a method for calculating the effective dose in any interventional radiology procedure using an Alderson RANDO anthropomorphic phantom and TLD-100 chip dosimeters. The method has been applied to an interventional radiology procedure, biliary drainage. The objectives were: a) to assemble a method that, on an experimental basis, allows organ doses to be determined so that effective doses can be calculated in complex procedures, and b) to apply the method to the calculation of the effective dose from biliary drainage. (Author)

  8. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for 'sustainable transport'. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, 'sustainable transport' … in order to learn from the best is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends.

  9. A method for comparison of animal and human alveolar dose and toxic effect of inhaled ozone

    International Nuclear Information System (INIS)

    Hatch, G.E.; Koren, H.; Aissa, M.

    1989-01-01

    Present models for predicting the pulmonary toxicity of O3 in humans from the toxic effects observed in animals rely on dosimetric measurements of O3 mass balance and species comparisons of mechanisms that protect tissue against O3. The goal of the study described was to identify a method to directly compare O3 dose and effect in animals and humans using bronchoalveolar lavage fluid markers. The feasibility of estimating the O3 dose to alveoli of animals and humans was demonstrated through assay of reaction products of 18O-labeled O3 in lung surfactant and macrophage pellets of rabbits. The feasibility of using lung lavage fluid protein measurements to quantify the O3 toxic response in humans was demonstrated by the finding of significantly increased lung lavage protein in 10 subjects exposed to 0.4 ppm O3 for 2 h with intermittent periods of heavy exercise. The validity of using the lavage protein marker to quantify the response in animals has already been established. The positive results obtained in both the 18O3 and the lavage protein studies reported here suggest that it should be possible to obtain a direct comparison of both the alveolar dose and the toxic effect of O3 on alveoli of animals and humans

  10. A Method for Correcting the Calibration Factor Used in the TLD Dose Calculation Algorithm

    International Nuclear Information System (INIS)

    Shin, S.; Jin, H.; Son, J.; Song, M.

    1999-01-01

    A method is described for estimating calibration factors used in the TLD neutron dose calculation algorithm, in order to assess the personal neutron dose equivalent to radiation workers in a nuclear power plant in accordance with ICRP 60 recommendations. Neutron spectra were measured at several locations inside the reactor containment building of Youngkwang Unit 4 in Korea using a Bonner multisphere spectrometer (BMS) system. Based on the fractional distribution of measured neutron fluence, four locations were selected for in situ TLD calibration. TL responses for the four selected locations were calculated from the measured spectra and the reported fitted response function of TLD-600. TL responses were also measured with Harshaw type 8806 albedo dosemeters mounted on a water phantom and compared with the calculated TL responses. From the responses measured with the Harshaw 8806 TLDs, the thermal neutron fluence was evaluated and used to adjust the neutron spectrum obtained with the BMS. TL responses calculated for the adjusted neutron spectra showed excellent consistency with the measured TL responses, within a 15% difference. Neutron calibration factors were calculated for the measured neutron spectra and the D2O-moderated 252Cf spectrum, and used to calculate correction factors, which ranged from 2.38 to 11.18. The correction factor estimated in this way for the known neutron spectrum at an area can be conveniently used to calculate the personal dose equivalent at that area from the calibration factor obtained for a calibration neutron spectrum. (author)

  11. Method for the evaluation of average glandular dose in mammography

    International Nuclear Information System (INIS)

    Okunade, Akintunde Akangbe

    2006-01-01

    This paper concerns a method for the accurate evaluation of average glandular dose (AGD) in mammography. The interactions of photons with tissue are not uniform across energies; optimal accuracy in the estimation of AGD is therefore achieved when the evaluation uses normalized glandular dose values, g(x,E), determined for each (monoenergetic) x-ray photon energy E, compressed breast thickness (CBT) x, and breast glandular composition, together with data on the photon energy distribution of the exact x-ray beam used in breast imaging. A generalized model for the values of g(x,E) was developed for any arbitrary CBT from 2 to 9 cm, including non-integer values such as 4.2 cm. Along with other dosimetry formulations, this was integrated into a computer program, GDOSE.FOR, developed for the evaluation of the AGD received from any x-ray tube/equipment (irrespective of target-filter combination) of up to 50 kVp. Results are presented which show that GDOSE.FOR yields values of normalized glandular dose in good agreement with values obtained from methodologies reported earlier in the literature. With the availability of a portable device for real-time acquisition of spectra, the model and computer program reported in this work allow the routine evaluation of the AGD received by a specific woman of known age and CBT
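
    The core computation is a spectrum-weighted sum of normalized glandular dose values, AGD = K · Σ_E w(E) g(x,E), where K is the incident air kerma. The sketch below illustrates this with a small, entirely hypothetical g(x,E) table (placeholder numbers, not published coefficients) and SciPy interpolation to handle a non-integer CBT such as 4.2 cm.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # hypothetical tabulated normalized glandular dose g(E, x) -- placeholders
        E = np.array([10.0, 15.0, 20.0, 25.0, 30.0])   # photon energy (keV)
        x = np.array([2.0, 4.0, 6.0, 9.0])             # compressed breast thickness (cm)
        g = np.array([[0.10, 0.05, 0.03, 0.02],
                      [0.35, 0.22, 0.15, 0.10],
                      [0.55, 0.38, 0.28, 0.20],
                      [0.70, 0.52, 0.40, 0.30],
                      [0.80, 0.62, 0.50, 0.38]])       # g[E, x], mGy per unit kerma

        g_interp = RegularGridInterpolator((E, x), g)

        def agd(spectrum_E, spectrum_phi, cbt_cm, air_kerma_mGy):
            """Spectrum-weighted AGD for an arbitrary CBT (e.g. 4.2 cm)."""
            w = spectrum_phi / spectrum_phi.sum()      # normalized fluence weights
            pts = np.column_stack([spectrum_E, np.full(len(spectrum_E), cbt_cm)])
            return air_kerma_mGy * np.sum(w * g_interp(pts))

        # crude 28 kVp-like spectrum sampled at the tabulated energies
        print(agd(E, np.array([1.0, 5.0, 8.0, 4.0, 1.0]), 4.2, 7.0))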

  12. Study on the method of reducing the operator's exposure dose from a C-Arm system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ki Sik; Song, Jong Nam [Dept. of Radiological Science, Dongshin University, Naju (Korea, Republic of); Kim, Seung Ok [Dept. of Radiology, Catholic Kwangdong Universty International ST.Mary' s Hospital, Incheon (Korea, Republic of)

    2016-12-15

    In this study, we quantify the operator's exposure dose from scattered radiation during the operation of C-Arm equipment and propose an effective method of reducing it. The exposure dose was lower with the under-tube geometry of the C-Arm than with the over-tube geometry. The results showed that the operator's exposure dose decreased with a thicker shield and as the operator moved away from the center line, and increased as the procedure time was prolonged. Among the three dosimeter locations, the highest exposure dose was measured at the gonads, followed by the chest and the thyroid. However, given the working relationship between the operator and the patient, the distance cannot be increased indefinitely, nor can the procedure time be shortened indefinitely; the operator's exposure dose can, however, be reduced by increasing the thickness of the radiation shield. Shielding is sometimes neglected because it is uncomfortable to use during procedures, even though working close to the patient can only increase the dose. Because a separate control room cannot be used with C-Arm equipment owing to how it is operated, the exposure dose to the operator needs to be reduced by reinforcing the shielding with radiation protection devices of appropriate thickness, such as aprons, during a procedure.

  13. Method for calculation of upper limit internal alpha dose rates to aquatic organisms with application of plutonium-239 in plankton

    International Nuclear Information System (INIS)

    Paschoa, A.S.; Baptista, G.B.

    1977-01-01

    A method for calculating upper-limit internal alpha dose rates to aquatic organisms is presented. The mean alpha energies per disintegration of the radionuclides of interest are listed for use in standard methodologies for calculating dose to aquatic biota. As an application, upper limits for the alpha dose rates from 239Pu to the total body of plankton are estimated from data available in the open literature
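
    An upper limit of this kind follows from assuming that every alpha particle deposits its full energy within the organism, so the dose rate is simply the activity concentration times the mean alpha energy per disintegration. A minimal sketch (the concentration value is illustrative, not from the paper):

        MEV_TO_J = 1.602e-13  # joules per MeV

        def alpha_dose_rate_upper_limit(conc_bq_per_kg, mean_alpha_mev):
            """Upper-limit internal alpha dose rate (Gy/day), assuming every
            alpha particle deposits its full energy within the organism."""
            gy_per_s = conc_bq_per_kg * mean_alpha_mev * MEV_TO_J
            return gy_per_s * 86400.0

        # 239Pu: mean alpha energy ~5.15 MeV; 1 kBq/kg is a made-up concentration
        print(alpha_dose_rate_upper_limit(1.0e3, 5.15))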

  14. Regulatory guide relating to the determination of whole-body doses due to internal radiation exposure (principles and methods)

    International Nuclear Information System (INIS)

    Bogner, L.; Graffunder, H.; Henrichs, K.; Kraut, W.; Nosske, D.; Roth, P.; Sahre, P.

    1993-01-01

    This compilation defines the principles and methods to be applied for determining doses from internal radiation exposure in persons whose dose levels exceed the critical levels defined in the "Regulatory guide for health physics controls". The obligatory procedure is intended to guarantee that measurements and interpretations of personnel doses and intakes are carried out on a standardized basis and by a standardized procedure, so as to obtain comparable results. (orig.)

  15. Projection domain denoising method based on dictionary learning for low-dose CT image reconstruction.

    Science.gov (United States)

    Zhang, Haiyan; Zhang, Liyi; Sun, Yunshan; Zhang, Jingyu

    2015-01-01

    Reducing the X-ray tube current is one of the most widely used methods for decreasing the radiation dose. Unfortunately, the signal-to-noise ratio (SNR) of the projection data degrades simultaneously. To improve the quality of the reconstructed images, a dictionary-learning-based penalized weighted least-squares (PWLS) approach is proposed for sinogram denoising. The weighted least-squares term accounts for the statistical characteristics of the noise, while the penalty models the sparsity of the sinogram with respect to a learned dictionary. The CT image is then reconstructed from the denoised sinogram using the filtered back-projection (FBP) algorithm. The proposed method is particularly suitable for projection data with low SNR. Experimental results show that the proposed method yields high-quality CT images even when the signal-to-noise ratio of the projection data declines sharply.
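
    Loosely, the approach minimizes a statistically weighted data-fit term plus a sparsity penalty over sinogram patches coded in a learned dictionary. The sketch below shows only the dictionary-learning half on a toy noisy sinogram using scikit-learn; a full PWLS solver would alternate this sparse-coding step with the weighted least-squares update, and the data, patch size and regularization weight are all assumptions.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import (extract_patches_2d,
                                                      reconstruct_from_patches_2d)

        rng = np.random.default_rng(1)
        sino = np.sin(np.linspace(0, np.pi, 64))[:, None] * np.ones((64, 90))
        noisy = rng.poisson(200 * sino).astype(float) / 200.0   # low-count noise

        patches = extract_patches_2d(noisy, (8, 8))             # all overlapping patches
        flat = patches.reshape(len(patches), -1)
        mean = flat.mean(axis=1, keepdims=True)

        dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
        dico.fit(flat[::5] - mean[::5])                         # train on a subsample
        codes = dico.transform(flat - mean)                     # sparse-code every patch

        # sparse-coded patches act as the penalty's "clean" estimate
        recon = (codes @ dico.components_ + mean).reshape(patches.shape)
        denoised = reconstruct_from_patches_2d(recon, noisy.shape)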

  16. Pediatric Stroke and transcranial Direct Current Stimulation: Methods for Rational Individualized Dose Optimization

    Directory of Open Access Journals (Sweden)

    Bernadette T Gillick

    2014-09-01

    Background: Transcranial direct current stimulation (tDCS) has been investigated mainly in adults, and adult doses may not be appropriate for pediatric applications. In perinatal stroke, where potential applications are promising, rational adaptation of dosage for children remains under investigation. Objective: To construct child-specific tDCS dosing parameters through a case study within a perinatal stroke tDCS safety and feasibility trial. Methods: A 10-year-old subject with a diagnosis of presumed perinatal ischemic stroke and hemiparesis was identified. T1 MRI scans were used to derive a computerized model of current flow and electrode positions. A workflow using the modeling results and consideration of dosage in previous clinical trials was incorporated. Prior ad hoc adult montages and de novo optimized montages presented distinct risk-benefit trade-offs. Approximating the adult dose required consideration of changes in both peak brain current flow and its distribution, which in turn trade off between maximizing efficacy and adding safety factors. Electrode size, position, current intensity, compliance voltage, and duration were controlled independently in this process. Results: Brain electric fields were modeled and compared to values predicted by previous models. Approximating the conservative brain current flow patterns and intensities used in previous adult trials for comparable indications, the optimal current intensity established was 0.7 mA for 10 minutes with a tDCS C3/C4 montage; specifically, 0.7 mA produced a peak brain current intensity comparable to that of an average adult receiving 1.0 mA. An electrode size of 5x7 cm2 with low-voltage tDCS was employed to maximize tolerability. Safety and feasibility were confirmed, with the subject tolerating the session well and no serious adverse events. Conclusion: Rational approaches to dose customization, with steps informed by computational modeling, may improve guidance for pediatric stroke tDCS trials.

  17. Ideas on a practical method to make more uniform the measure and the account of doses

    International Nuclear Information System (INIS)

    Boussard, P.; Dollo, R.; De Kerviller, M.; Penneroux, M.

    1992-01-01

    The ICRP 60 publication and its consequences for the revision of CEC regulations and basic norms, discussions on the dosimetry of outside workers, and more generally the development of exchanges of information between users have led EDF to question its practices for measuring and accounting doses. Faced with the wide range of French practices and with a desire for harmonisation, an EDF-CEA working team has established a summary of present methods and an evaluation of the consequences of these different strategies, and has then suggested a harmonisation of dosimetric measures based on a systematic methodology. (author)

  18. Task-based image quality evaluation of iterative reconstruction methods for low dose CT using computer simulations

    Science.gov (United States)

    Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.

    2015-04-01

    Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improving image quality or reducing the radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 down to 25% of D0. A lesion of fixed size and contrast was inserted at different locations in the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20), in a signal-known-exactly (SKE), background-known-exactly-but-variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal-performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of the different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low-dose protocol, lower than the standard dose owing to the use of IR methods. At 75% of D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task; for the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
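
    For readers unfamiliar with the observer model: a CHO reduces each image to a few channel outputs, forms a Hotelling template from the class means and the pooled channel covariance, and scores test images with that template; the AUC then follows from the two score distributions. A self-contained toy version with rotationally symmetric difference-of-Gaussian channels and Gaussian backgrounds (all parameters invented) is sketched below.

        import numpy as np

        rng = np.random.default_rng(2)
        N = 32

        def dog_channel(s1, s2):
            # rotationally symmetric difference-of-Gaussians channel
            y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
            r2 = x**2 + y**2
            c = np.exp(-r2 / (2 * s1**2)) - np.exp(-r2 / (2 * s2**2))
            return (c / np.linalg.norm(c)).ravel()

        U = np.column_stack([dog_channel(s, 2 * s) for s in (1, 2, 4, 8)])

        signal = np.zeros((N, N)); signal[N//2-2:N//2+2, N//2-2:N//2+2] = 0.5
        bg = lambda n: rng.normal(0, 1, (n, N * N))      # background-only images
        sp = lambda n: bg(n) + signal.ravel()            # signal-present images

        v0, v1 = bg(200) @ U, sp(200) @ U                # training channel outputs
        S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))          # pooled channel covariance
        w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))  # Hotelling template

        t0, t1 = bg(200) @ U @ w, sp(200) @ U @ w        # test statistics
        auc = (t1[:, None] > t0[None, :]).mean()         # Wilcoxon AUC estimate
        print(auc)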

  19. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  20. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low-altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking in terms of both tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV "in the field", as well as to generate synthetic but photo-realistic tracking datasets with automatic ground-truth annotations that easily extend existing real-world datasets. Both the benchmark and the simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx). © Springer International Publishing AG 2016.

  1. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) greenhouse gas emissions; (2) land use; and (3) nutrient consumption.

  2. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
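
    The metrics listed lend themselves to simple micro-benchmarks that can be run unchanged on bare metal and inside a virtual machine. A minimal sketch of two such kernels (not the authors' benchmark suite; the array sizes are arbitrary):

        import time
        import numpy as np

        def best_time(fn, repeats=5):
            # best-of-N wall-clock timing to suppress scheduling noise
            times = []
            for _ in range(repeats):
                t0 = time.perf_counter()
                fn()
                times.append(time.perf_counter() - t0)
            return min(times)

        a = np.random.rand(20_000_000)              # ~160 MB working set
        t = best_time(lambda: a.copy())             # memory-bandwidth-bound copy
        print(f"memory copy: {2 * a.nbytes / t / 1e9:.1f} GB/s (read + write)")

        x = np.random.rand(2_000_000)
        t = best_time(lambda: np.sin(x) * x + 1.0)  # floating-point-heavy kernel
        print(f"float kernel: {t * 1e3:.2f} ms")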

  3. Application of Monte Carlo method for dose calculation in thyroid follicle; Aplicacao de metodo Monte Carlo para calculos de dose em foliculos tiroideanos

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Frank Sinatra Gomes da

    2008-02-15

    The Monte Carlo method is an important tool for simulating the interaction of radioactive particles with biological media. The principal advantage of the method, compared with deterministic methods, is its ability to handle complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport and can simulate energy deposition in models of organs and/or tissues, as well as in models of cells of the human body. The calculation of the absorbed dose to thyroid follicles (comprising the colloid and the follicular cells) is of fundamental importance to dosimetry, because these cells are radiosensitive to ionizing radiation, in particular to radioisotopes of iodine, a great amount of which may be released into the environment in the event of a nuclear accident. The goal of this work was to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles with diameters varying from 30 to 500 μm, for Auger electrons, internal conversion electrons and beta particles from the radioisotopes of iodine 131, 132, 133, 134 and 135, i.e. iodine-131 and the short-lived iodines. The results obtained from the MCNP4C simulations showed that, on average, 25% of the total dose absorbed by the colloid was due to iodine-131 and 75% to the short-lived iodines; for the follicular cells, the split was 13% for iodine-131 and 87% for the short-lived iodines. The contributions from particles with low energies, such as Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare the doses obtained with the codes MCNP4C, EPOTRAN and EGS4 and with deterministic methods. (author)

  4. Investigation of real tissue water equivalent path lengths using an efficient dose extinction method

    Science.gov (United States)

    Zhang, Rongxiao; Baer, Esther; Jee, Kyung-Wook; Sharp, Gregory C.; Flanz, Jay; Lu, Hsiao-Ming

    2017-07-01

    For proton therapy, an accurate conversion of CT HU to relative stopping power (RSP) is essential. Validation of the conversion on real tissue samples is more direct than the current practice, which is based solely on tissue substitutes, and can potentially address variations over the population. Based on a novel dose extinction method, we measured water equivalent path lengths (WEPL) of animal tissue samples to evaluate the accuracy of the CT HU to RSP conversion and potential variations over a population. A broad proton beam delivered a spread-out Bragg peak to the samples, which were sandwiched between a water tank and a 2D ion-chamber detector. The WEPLs of the samples were determined from the transmission dose profiles measured as a function of the water level in the tank. Tissue substitute inserts and Lucite blocks with known WEPLs were used to validate the accuracy, and a large number of real tissue samples were measured. Variations of WEPL over different batches of tissue samples were also investigated. The measured WEPLs were compared with those computed from CT scans with the stoichiometric calibration method. WEPLs were determined within ±0.5% percentage deviation (% std/mean) and ±0.5% error for most of the tissue-substitute inserts and the calibration blocks; for biological tissue samples, percentage deviations were within ±0.3%. No considerable difference was observed between batches. The dose extinction measurement took around 5 min to produce ~1000 WEPL values to be compared with calculations. This dose extinction system measures WEPL efficiently and accurately, which allows the validation of CT HU to RSP conversions based on WEPLs measured for a large number of samples and real tissues.
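
    Conceptually, a sample's WEPL is the shift of its dose-vs-water-level falloff relative to a water-only reference curve. The sketch below extracts that shift at the 50%-of-plateau level from synthetic sigmoid curves; both the criterion and the curves are assumptions standing in for the paper's actual analysis.

        import numpy as np

        def level_at_half_plateau(levels_mm, dose):
            d = dose / dose[:5].mean()             # normalize to the shallow plateau
            i = np.argmax(d < 0.5)                 # first sample past the falloff
            x0, x1, y0, y1 = levels_mm[i-1], levels_mm[i], d[i-1], d[i]
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)   # linear interpolation

        levels = np.arange(0, 120, 1.0)                  # tank water level (mm)
        ref = 1.0 / (1 + np.exp((levels - 80) / 1.5))    # beam through water only
        smp = 1.0 / (1 + np.exp((levels - 58) / 1.5))    # beam through sample + water

        wepl = level_at_half_plateau(levels, ref) - level_at_half_plateau(levels, smp)
        print(f"sample WEPL ~ {wepl:.1f} mm")            # ~22 mm for this toy curve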

  5. Evaluating doses of multi-slice CT in brain examinations using various methods.

    Science.gov (United States)

    Lin, Hung-Chih; Lai, Te-Jen; Tseng, Hsien-Chun; Lin, Cheng-Hsun; Tseng, Yen-Ling; Chen, Chien-Yi

    2017-12-01

    The effective dose (H_E) and the organ or tissue equivalent doses (H_T) of a Rando phantom undergoing two brain computed tomography (CT) examination protocols were evaluated using thermoluminescent dosimeters (TLD-100H) and the dose-length product (DLP) method. TLDs were inserted at the positions of organs or tissues of the Rando phantom, such as the thyroid, brain, and salivary glands, for (A) an axial scan, covering the maxillae from the external auditory meatus to the parietal bone, and (B) a helical scan, from the mandible to the parietal bone. CT examinations were performed on a Philips CT scanner (Brilliance CT) at Lukang Christian Hospital, and the TLDs were read using a Harshaw 3500 TLD reader. The H_T of organs and tissues for the two protocols is discussed. H_E calculated using ICRP 60 and ICRP 103 tissue weighting factors was 2.67 ± 0.18 and 1.89 ± 0.23 mSv for the axial scan, and 4.70 ± 0.38 and 4.39 ± 0.37 mSv for the helical scan, respectively. In the DLP method, H_E was estimated from the CTDIvol recorded directly from the console display of the CT unit and then calculated using AAPM Report 96. Finally, the experimental results are compared with those in the literature. Radiologists should choose and adjust protocols to prevent unnecessary radiation to patients, satisfying the as-low-as-reasonably-achievable (ALARA) principle. These findings will be valuable to patients, physicians, radiologists, and the public.
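
    Both ICRP 60 and ICRP 103 effective doses are tissue-weighted sums, H_E = Σ_T w_T H_T; the two publications differ only in the weights. A minimal sketch with a subset of the ICRP 103 weights and hypothetical TLD-derived equivalent doses:

        # subset of ICRP 103 tissue weighting factors, for illustration
        W_ICRP103 = {"brain": 0.01, "salivary_glands": 0.01, "thyroid": 0.04,
                     "skin": 0.01, "bone_surface": 0.01, "red_marrow": 0.12}

        def effective_dose(ht_msv):
            # tissues missing from the measurement contribute nothing here,
            # so this is a partial-body H_E, as in a head-scan TLD study
            return sum(W_ICRP103[t] * h for t, h in ht_msv.items())

        # hypothetical equivalent doses (mSv) from TLDs in a head protocol
        print(effective_dose({"brain": 40.0, "thyroid": 2.5, "salivary_glands": 30.0}))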

  6. 99mTc Auger electrons - Analysis of the effects of low absorbed doses by computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Tavares, Adriana Alexandre S., E-mail: adriana_tavares@msn.co [Faculdade de Engenharia da Universidade do Porto (FEUP), Rua Dr. Roberto Frias, S/N, 4200-465 Porto (Portugal); Tavares, Joao Manuel R.S., E-mail: tavares@fe.up.p [Faculdade de Engenharia da Universidade do Porto (FEUP), Rua Dr. Roberto Frias, S/N, 4200-465 Porto (Portugal)

    2011-03-15

    We describe the use of computational methods to evaluate the effects of low doses on human fibroblasts after irradiation with Technetium-99m (99mTc) Auger electrons. The results suggest a parabolic relationship between the irradiation of fibroblasts with 99mTc Auger electrons and the total absorbed dose. Additionally, the results at very low absorbed doses may be explained by the bystander effect, which has been implicated in cellular effects at low doses. Further in vitro evaluation will be worthwhile to clarify these findings.

  7. Experimental validation of a kV source model and dose computation method for CBCT imaging in an anthropomorphic phantom.

    Science.gov (United States)

    Poirier, Yannick; Tambasco, Mauro

    2016-07-08

    We present an experimental validation, in an anthropomorphic phantom, of a kilovoltage (kV) X-ray source characterization model for estimating patient-specific absorbed dose from kV cone-beam computed tomography (CBCT) imaging procedures, and compare these doses to nominal weighted CT dose index (CTDIw) estimates. We simulated the default Varian On-Board Imager 1.4 (OBI) CBCT imaging protocols (i.e., standard-dose head, low-dose thorax, pelvis, and pelvis spotlight) using our previously developed and easy-to-implement X-ray point-source model and source characterization approach. We used this characterized source model to compute absorbed dose in homogeneous and anthropomorphic phantoms using our previously validated in-house kV dose computation software (kVDoseCalc). We compared these computed absorbed doses to doses derived from ionization chamber measurements acquired at several points in a homogeneous cylindrical phantom and from thermoluminescent detectors (TLDs) placed in the anthropomorphic phantom. In the homogeneous cylindrical phantom, computed values of absorbed dose relative to the center of the phantom agreed with measured values within ≤2% of local dose, except in regions of high dose gradient, where the distance to agreement (DTA) was 2 mm. The computed absorbed dose in the anthropomorphic phantom generally agreed with the TLD measurements, with an average percent dose difference ranging from 2.4% ± 6.0% to 5.7% ± 10.3%, depending on the CBCT imaging protocol; the low-dose thorax and standard-dose head scans showed the best and worst agreement, respectively. Our results also broadly agree with published values, which are approximately twice as high as the nominal CTDIw would suggest. The results demonstrate that our previously developed method for modeling and characterizing a kV X-ray source can be used to assess patient-specific absorbed dose from kV CBCT procedures within reasonable accuracy, and serve as further

  8. A novel method of estimating dose responses for polymer gels using texture analysis of scanning electron microscopy images.

    Directory of Open Access Journals (Sweden)

    Cheng-Ting Shih

    Polymer gels are regarded as a potential dosimeter for independent validation of absorbed doses in clinical radiotherapy. Several imaging modalities have been used to convert radiation-induced polymerization to absorbed dose from a macro-scale viewpoint. This study developed a novel dose conversion mechanism based on texture analysis of scanning electron microscopy (SEM) images. Modified N-isopropyl-acrylamide (NIPAM) gels were prepared under normoxic conditions and given radiation doses from 5 to 20 Gy. After freeze drying, the gel samples were sliced for SEM scanning at 50x, 500x, and 3500x magnification. Four texture indices were calculated based on the gray-level co-occurrence matrix (GLCM). The results showed that entropy and homogeneity were more suitable than contrast and energy as dose indices, yielding dose-response curves with higher linearity and sensitivity. After parameter optimization, an R2 value of 0.993 can be achieved for homogeneity using 500x magnified SEM images with 27-pixel offsets and no outlier exclusion. For dose verification, the percentage errors between the prescribed dose and the measured dose for 5, 10, 15, and 20 Gy were -7.60%, 5.80%, 2.53%, and -0.95%, respectively. We conclude that texture analysis can be applied to SEM images of gel dosimeters to accurately convert micro-scale structural features to absorbed doses. The proposed method may extend the feasibility of applying gel dosimeters to the fields of diagnostic radiology and radiation protection.
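
    With recent scikit-image (0.19+, where the functions are named graycomatrix and graycoprops), the homogeneity index at the reported 27-pixel offset can be computed as below; the random array stands in for a real 500x SEM image, and the calibration-curve comment is an assumption about how dose readout would proceed.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(3)
        img = (rng.random((512, 512)) * 255).astype(np.uint8)  # stand-in SEM image

        glcm = graycomatrix(img, distances=[27], angles=[0], levels=256,
                            symmetric=True, normed=True)
        homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
        print(homogeneity)

        # dose readout would then invert a calibration curve fitted to the
        # 5-20 Gy samples, e.g. homogeneity = a * dose + b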

  9. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods

    Science.gov (United States)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2018-03-01

    Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate owing to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT and direct correction of CBCT image values are two methods proposed to allow heterogeneity-corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used, including pelvis, lung and head & neck sites. CBCT HU were corrected using a 'shading correction' algorithm and via deformable registration of the planning CT to the CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT, and several relevant dose metrics for target and OAR volumes were calculated. The accuracy of the CBCT-based dose metrics was determined using an 'override ratio' method, in which the ratio of the dose metric to that calculated on a bulk-density-assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient's planning CT as a gold standard. Similar performance is achieved by the shading-corrected CBCT and both deformable registration algorithms, with the mean and standard deviation of the dose metric error less than 1% for all sites studied. For lung images, use of the deformed CT leads to a slightly larger standard deviation of the dose metric error than the shading-corrected CBCT, with more dose metric errors greater than 2% observed (7% versus 1%).
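
    The override-ratio check itself is simple arithmetic: the ratio of a dose metric computed on an image to the same metric computed on a bulk-density-overridden copy of that image is assumed patient-constant, so the CBCT ratio can be judged against the planning-CT ratio. A sketch with hypothetical D95 values:

        def override_ratio(metric_image, metric_bulk):
            return metric_image / metric_bulk

        # hypothetical PTV D95 values (Gy) for one patient
        ratio_ct = override_ratio(49.6, 50.2)     # planning CT vs bulk-assigned CT
        ratio_cbct = override_ratio(48.9, 49.9)   # corrected CBCT vs bulk-assigned CBCT

        # percentage error of the CBCT-based metric against the CT gold standard
        print(f"{(ratio_cbct / ratio_ct - 1.0) * 100.0:.2f}%")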

  10. Benchmarking & european sustainable transport policies

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2003-01-01

    The paper provides an independent … to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency.

  11. Method to determine the position-dependent metal correction factor for dose-rate equivalent laser testing of semiconductor devices

    Science.gov (United States)

    Horn, Kevin M.

    2013-07-09

    A method reconstructs the charge collection from regions beneath opaque metallization of a semiconductor device, as determined from focused laser charge collection response images, and thereby derives a dose-rate dependent correction factor for subsequent broad-area, dose-rate equivalent, laser measurements. The position- and dose-rate dependencies of the charge-collection magnitude of the device are determined empirically and can be combined with a digital reconstruction methodology to derive an accurate metal-correction factor that permits subsequent absolute dose-rate response measurements to be derived from laser measurements alone. Broad-area laser dose-rate testing can thereby be used to accurately determine the peak transient current, dose-rate response of semiconductor devices to penetrating electron, gamma- and x-ray irradiation.

  12. Danish calculations of the NEACRP pin-power benchmark

    International Nuclear Information System (INIS)

    Hoejerup, C.F.

    1994-01-01

    This report describes calculations performed for the NEACRP pin-power benchmark. The calculations are made with the code NEM2D, a diffusion theory code based on the nodal expansion method. (au) (15 tabs., 15 ills., 5 refs.)

  13. Developing a multipoint titration method with a variable dose implementation for anaerobic digestion monitoring.

    Science.gov (United States)

    Salonen, K; Leisola, M; Eerikäinen, T

    2009-01-01

    Determination of metabolites from an anaerobic digester by acid-base titration is considered a superior method for many reasons. This paper describes a practical, at-line compatible multipoint titration method. The titration procedure was improved in speed and data quality. A simple and novel control algorithm for estimating a variable titrant dose was derived for this purpose; this non-linear, PI-controller-like algorithm does not require any preliminary information about the sample, and its performance is superior to that of traditional linear PI controllers. In addition, a simplification representing polyprotic acids as a sum of multiple monoprotic acids is introduced, along with a mathematical error examination, and a method for including the ionic-strength effect by stepwise iteration is shown. The titration model is presented in matrix notation, enabling simple computation of all concentration estimates. All methods and algorithms are illustrated in the experimental part. A linear correlation better than 0.999 was obtained for both acetate and phosphate used as model compounds, with slopes of 0.98 and 1.00 and average standard deviations of 0.6% and 0.8%, respectively. Furthermore, the insensitivity of the presented method to overlapping buffer capacity curves was shown.
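
    In the same spirit, a variable-dose rule can enlarge the next titrant increment when the observed pH response is sluggish (strong buffering) and shrink it as the target pH approaches. The toy controller below only illustrates that idea; the gains, limits and synthetic titration curve are invented and are not the paper's algorithm.

        def titrate(ph_of, target_ph=4.0, v0_ml=0.02, k=1.5, max_dose_ml=0.5):
            """Step acid into a sample until target_ph, choosing each dose from
            the last observed pH response (a non-linear, PI-like update)."""
            volume, ph, points = 0.0, ph_of(0.0), []
            dose = v0_ml
            while ph > target_ph and len(points) < 200:
                volume += dose
                new_ph = ph_of(volume)
                dph = max(ph - new_ph, 1e-4)        # observed response to last dose
                # aim the next dose at the remaining gap, scaled by responsiveness
                dose = min(max_dose_ml, k * dose * (new_ph - target_ph) / dph)
                dose = max(dose, v0_ml)
                points.append((volume, new_ph))
                ph = new_ph
            return points

        # fake monotone titration curve standing in for the electrode reading
        curve = titrate(lambda v: 7.5 - 3.8 * (v / (v + 1.0)))
        print(len(curve), curve[-1])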

  14. A new method for synthesizing radiation dose-response data from multiple trials applied to prostate cancer

    DEFF Research Database (Denmark)

    Diez, Patricia; Vogelius, Ivan S; Bentzen, Søren M

    2010-01-01

    A new method is presented for synthesizing dose-response data for biochemical control of prostate cancer according to study design (randomized vs. nonrandomized) and risk group (low vs. intermediate-high)....

  15. Standard Guide for Selection and Use of Mathematical Methods for Calculating Absorbed Dose in Radiation Processing Applications

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide describes different mathematical methods that may be used to calculate absorbed dose and criteria for their selection. Absorbed-dose calculations can determine the effectiveness of the radiation process, estimate the absorbed-dose distribution in product, or supplement or complement, or both, the measurement of absorbed dose. 1.2 Radiation processing is an evolving field and annotated examples are provided in Annex A6 to illustrate the applications where mathematical methods have been successfully applied. While not limited by the applications cited in these examples, applications specific to neutron transport, radiation therapy and shielding design are not addressed in this document. 1.3 This guide covers the calculation of radiation transport of electrons and photons with energies up to 25 MeV. 1.4 The mathematical methods described include Monte Carlo, point kernel, discrete ordinate, semi-empirical and empirical methods. 1.5 General purpose software packages are available for the calcul...

  16. Benchmark On Sensitivity Calculation (Phase III)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, Tatiana [IRSN; Laville, Cedric [IRSN; Dyrda, James [Atomic Weapons Establishment; Mennerdahl, Dennis [E. Mennerdahl Systems; Golovko, Yury [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Raskach, Kirill [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Tsiboulia, Anatoly [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Lee, Gil Soo [Korea Institute of Nuclear Safety (KINS); Woo, Sweng-Woong [Korea Institute of Nuclear Safety (KINS); Bidaud, Adrien [Labratoire de Physique Subatomique et de Cosmolo-gie (LPSC); Patel, Amrit [NRC; Bledsoe, Keith C [ORNL; Rearden, Bradley T [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.

  17. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  18. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together...... to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal...... and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...

  19. Benchmarking Complications Associated with Esophagectomy

    NARCIS (Netherlands)

    Low, Donald E.; Kuppusamy, Madhan Kumar; Alderson, Derek; Cecconello, Ivan; Chang, Andrew C.; Darling, Gail; Davies, Andrew; D'journo, Xavier Benoit; Gisbertz, Suzanne S.; Griffin, S. Michael; Hardwick, Richard; Hoelscher, Arnulf; Hofstetter, Wayne; Jobe, Blair; Kitagawa, Yuko; Law, Simon; Mariette, Christophe; Maynard, Nick; Morse, Christopher R.; Nafteux, Philippe; Pera, Manuel; Pramesh, C. S.; Puig, Sonia; Reynolds, John V.; Schroeder, Wolfgang; Smithers, Mark; Wijnhoven, B. P. L.

    2017-01-01

    Utilizing a standardized dataset with specific definitions to prospectively collect international data to provide a benchmark for complications and outcomes associated with esophagectomy. Outcome reporting in oncologic surgery has suffered from the lack of a standardized system for reporting

  20. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input

  1. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared with different codes in two benchmarks. We analyze its performance through our results, especially in the thick-target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks make possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive-waste-burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  2. bcrm: Bayesian Continual Reassessment Method Designs for Phase I Dose-Finding Trials

    Directory of Open Access Journals (Sweden)

    Michael Sweeting

    2013-09-01

    This paper presents the R package bcrm for conducting and assessing Bayesian continual reassessment method (CRM) designs in Phase I dose-escalation trials. CRM designs are a class of adaptive design that select the dose to be given to the next recruited patient based on accumulating toxicity data from patients already recruited into the trial, often using Bayesian methodology. Despite the original CRM design being proposed in 1990, the methodology is still not widely implemented within oncology Phase I trials. The aim of this paper is to demonstrate, through examples using the bcrm package, how a variety of possible designs can be easily implemented within the R statistical software, and how properties of the designs can be communicated to trial investigators using simple textual and graphical output obtained from the package. This in turn should facilitate an iterative process that allows a design to be chosen that suits the needs of the investigator. Our bcrm package is the first to offer a large, comprehensive choice of CRM designs, priors and escalation procedures, which can be easily compared and contrasted within the package through the assessment of operating characteristics.
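
    To give a flavor of the design class (in Python here, rather than the package's R), the sketch below implements a one-parameter power-model CRM with a grid posterior: a dose-toxicity skeleton, a normal prior on the model parameter, and selection of the dose whose posterior-mean toxicity probability is closest to the target. It illustrates the method, not the bcrm API.

        import numpy as np

        skeleton = np.array([0.05, 0.12, 0.25, 0.40, 0.55])  # prior DLT guesses
        target = 0.25
        theta = np.linspace(-4, 4, 801)
        prior = np.exp(-theta**2 / (2 * 1.34))               # N(0, 1.34), unnormalized

        def next_dose(doses_given, dlt_observed):
            # p(tox | dose i, theta) = skeleton_i ** exp(theta)
            p = skeleton[doses_given][:, None] ** np.exp(theta)[None, :]
            lik = np.where(np.array(dlt_observed)[:, None], p, 1 - p).prod(axis=0)
            post = lik * prior
            post /= post.sum()
            p_hat = (skeleton[:, None] ** np.exp(theta)[None, :] * post).sum(axis=1)
            return int(np.argmin(np.abs(p_hat - target))), p_hat

        # three patients treated at level 2 (0-indexed), one DLT observed
        level, p_hat = next_dose([2, 2, 2], [False, False, True])
        print(level, np.round(p_hat, 3))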

  3. A method to combine three dimensional dose distributions for external beam and brachytherapy radiation treatments for gynecological neoplasms

    International Nuclear Information System (INIS)

    Narayana, V.; Sahijdak, W.M.; Orton, C.G.

    1997-01-01

    Purpose: Radiation treatment of gynecological neoplasms, such as cervical carcinoma, usually combines external radiation therapy with one or more intracavitary brachytherapy applications. Although the doses from external beam radiation therapy and brachytherapy can each be calculated and displayed in 3D, the dose distributions are not combined; at most, combined point doses are calculated for selected points using various time-dose models. In this study, we present a methodology for combining external beam and brachytherapy treatments for gynecological neoplasms. Materials and Methods: Three-dimensional bio-effect treatment planning to obtain complication probability is outlined. CT scans of the patient's pelvis with the gynecological applicator in place are used to outline normal tissue and tumor volumes. 3D external beam and brachytherapy treatment plans are developed separately, and an external beam dose matrix and a brachytherapy dose matrix are calculated, with the dose in each voxel assumed to be homogeneous. The physical dose in each voxel is then converted into extrapolated response dose (ERD) based on the linear-quadratic model, which accounts for the dose per fraction, number of fractions, dose rate, and complete or incomplete repair of sublethal damage (time between fractions). The net biological dose delivered is obtained by summing the ERD grids from external beam and brachytherapy, since there is complete repair of sublethal damage between the external beam and brachytherapy treatments. The normal tissue complication probability and tumor control probability are obtained from the biological dose matrix based on the critical element model. Results: The outlined method of combining external beam and brachytherapy treatments was implemented for gynecological treatments using an applicator for the brachytherapy component. Conclusion: Implementation of biological dose calculations that combine different modalities is extremely useful
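
    The voxel-wise combination can be sketched with the basic linear-quadratic biologically effective dose, BED = D(1 + d/(α/β)), summed over modalities. The grids, fractionation and α/β below are illustrative, and the incomplete-repair and dose-rate corrections mentioned above are omitted from this sketch.

        import numpy as np

        def bed(total_dose, n_fractions, ab_gy):
            d = total_dose / n_fractions          # per-voxel dose per fraction
            return total_dose * (1.0 + d / ab_gy)

        shape = (16, 16, 16)
        ebrt = np.full(shape, 45.0)               # toy uniform 45 Gy in 25 fractions
        brachy = 30.0 * np.exp(-np.linspace(0, 3, 16))[None, None, :] * np.ones(shape)

        ab_tumor = 10.0                           # Gy, a typical tumor alpha/beta
        combined = bed(ebrt, 25, ab_tumor) + bed(brachy, 5, ab_tumor)

        # report on the common equivalent-dose-in-2-Gy-fractions scale
        eqd2 = combined / (1.0 + 2.0 / ab_tumor)
        print(eqd2.min(), eqd2.max())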

  4. Empirical Benchmarks of Hidden Bias in Educational Research: Implication for Assessing How well Propensity Score Methods Approximate Experiments and Conducting Sensitivity Analysis

    Science.gov (United States)

    Dong, Nianbo; Lipsey, Mark

    2014-01-01

    When randomized control trials (RCTs) are not feasible, researchers seek other methods for making causal inferences, e.g., propensity score methods. One of the underlying assumptions that propensity score methods require to obtain unbiased treatment effect estimates is the ignorability assumption, that is, conditional on the propensity score, treatment…

  5. A Benchmark Approach of Counterparty Credit Exposure of Bermudan Option under Lévy Process : The Monte Carlo-COS Method

    NARCIS (Netherlands)

    Shen, Y.; Van der Weide, J.A.M.; Anderluh, J.H.M.

    2013-01-01

    An advanced method, which we call the Monte Carlo-COS method, is proposed for computing the counterparty credit exposure profile of Bermudan options under Lévy processes. The different exposure profiles and exercise intensities under the two measures, P and Q, are discussed. Since the COS method [1

  6. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  7. MO-E-17A-04: Size-Specific Dose Estimate (SSDE) Provides a Simple Method to Calculate Organ Dose for Pediatric CT Examinations

    International Nuclear Information System (INIS)

    Moore, B; Brady, S; Kaufman, R; Mirro, A

    2014-01-01

    Purpose: To investigate the correlation of SSDE with organ dose in a pediatric population. Methods: Four anthropomorphic phantoms, representing a range of pediatric body habitus, were scanned with MOSFET dosimeters placed at 23 organ locations to determine absolute organ dosimetry. Phantom organ doses were divided by the phantom SSDE to determine the correlation between organ dose and SSDE; these correlation factors were then multiplied by the patient SSDE to estimate patient organ dose. Patient demographics consisted of 352 chest and 241 abdominopelvic CT examinations, with a mean weight of 22 ± 15 kg (range 5-55 kg) and a mean age of 6 ± 5 years (range 4 months to 23 years). Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm. Twenty-three organ correlation factors were determined in the chest and abdominopelvic regions across nine pediatric weight subcategories. For organs fully covered by the scan volume, the correlation in the chest (average 1.1; range 0.7-1.4) and abdominopelvic region (average 0.9; range 0.7-1.3) was near unity. For organs that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), the correlation was poor (average 0.3; range 0.1-0.4) for both the chest and abdominopelvic regions. Pediatric organ dosimetry was compared to published values and found to agree in the chest to better than an average of 5% (27.6/26.2) and in the abdominopelvic region to better than 2% (73.4/75.0). Conclusion: The average correlation of SSDE and organ dosimetry was found to be better than ±10% for organs fully covered by the scan volume. This study provides a list of organ dose correlation factors for the chest and abdominopelvic regions, and describes a simple methodology to estimate individual pediatric patient organ dose based on patient SSDE.
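
    Applying the result is a one-line multiplication of the patient's SSDE by the organ- and region-specific correlation factor. A sketch with placeholder factors in the reported 0.7-1.4 range:

        # placeholder chest-region correlation factors (not the paper's table)
        CF_CHEST = {"lung": 1.2, "heart": 1.1, "breast": 1.0, "thymus": 1.1}

        def organ_dose(ssde_mgy, region_factors):
            return {organ: cf * ssde_mgy for organ, cf in region_factors.items()}

        # SSDE (mGy) computed from CTDIvol and effective diameter per AAPM 204
        print(organ_dose(5.8, CF_CHEST))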

  9. Dose rate estimates and spatial interpolation maps of outdoor gamma dose rate with geostatistical methods; A case study from Artvin, Turkey

    International Nuclear Information System (INIS)

    Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkin, Halim; Çevik, Uğur

    2015-01-01

    In this study, the performance of geostatistical estimation methods is compared with the aim of investigating and mapping natural background radiation using the minimum number of data points. Artvin province, which has quite hilly terrain and a wide variety of soils and is located in the north-east of Turkey, was selected as the study area. The outdoor gamma dose rate (OGDR), an important determinant of the environmental radioactivity level, was measured at 204 stations. The spatial structure of OGDR was determined from anisotropic, isotropic and residual variograms. Ordinary kriging (OK) and universal kriging (UK) interpolation estimates were calculated using the model parameters obtained from these variograms. Whereas OK calculations are based only on the positions of the sampling points, the UK technique also includes general soil groups and altitude values, which directly affect OGDR. When the two methods are evaluated on their performance, the UK model (r = 0.88, p < 0.001) gives considerably better results than the OK model (r = 0.64, p < 0.001). In addition, the maps created at the end of the study illustrate that local changes are better reflected by the UK method than by the OK method, and its error variance was found to be lower. - Highlights: • The spatial dispersion of gamma dose rates in Artvin, which possesses some of the roughest terrain in Turkey, was studied. • The performance of different geostatistical methods (OK and UK) for the dispersion of gamma dose rates was compared. • Estimates were calculated for unsampled points using the geostatistical model, and the results were mapped. • The general radiological structure was determined in much less time and at lower cost than with experimental methods. • When the theoretical methods are evaluated, UK gives more descriptive results than OK.
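
    For readers unfamiliar with the interpolation step, the following is a minimal, self-contained ordinary kriging sketch in Python/NumPy with a spherical variogram. It is a generic textbook formulation under assumed variogram parameters and toy data, not the study's model (which additionally used universal kriging with soil-group and altitude drift terms).

        import numpy as np

        def spherical(h, nugget, psill, rng):
            # Spherical semivariogram: rises to nugget + psill at range rng.
            h = np.asarray(h, dtype=float)
            g = nugget + psill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
            return np.where(h < rng, g, nugget + psill)

        def ordinary_kriging(xy, z, xy0, nugget=0.0, psill=1.0, rng=20.0):
            # Solve the OK system [Gamma 1; 1' 0][w; mu] = [gamma0; 1].
            n = len(z)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            A = np.zeros((n + 1, n + 1))
            A[:n, :n] = spherical(d, nugget, psill, rng)
            A[:n, n] = A[n, :n] = 1.0        # unbiasedness constraint
            b = np.ones(n + 1)
            b[:n] = spherical(np.linalg.norm(xy - xy0, axis=1), nugget, psill, rng)
            w = np.linalg.solve(A, b)
            return w[:n] @ z, w @ b          # estimate and kriging variance

        # Toy OGDR data: (x, y) in km, dose rate in nGy/h (hypothetical values).
        xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
        z = np.array([95.0, 110.0, 102.0, 130.0])
        est, var = ordinary_kriging(xy, z, np.array([1.0, 1.0]))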

  10. Evaluation of the stepwise collimation method for the reduction of the patient dose in full spine radiography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Boram [Korea University, Seoul (Korea, Republic of); Sun Medical Center, Daejeon (Korea, Republic of); Lee, Sunyoung [Sun Medical Center, Daejeon (Korea, Republic of); Yang, Injeong [Seoul National University Hospital Medical Center, Seoul (Korea, Republic of); Yoon, Myeonggeun [Korea University, Seoul (Korea, Republic of)

    2014-05-15

    The purpose of this study is to evaluate the dose reduction achieved when using the stepwise collimation method for scoliosis patients undergoing full spine radiography. A Monte Carlo simulation was carried out to acquire dose-volume data for organs at risk (OAR) in the human body. While the effective doses in full spine radiography were reduced by 8, 15, 27 and 44% by using four different collimation sizes, the doses to the skin were reduced by 31, 44, 55 and 66%, indicating that the reduction of the dose to the skin is greater than that to organs inside the body. Although the reduction rates were low for the gonads, being 9, 14, 18 and 23%, there was more than a 30% reduction in the dose to the heart, suggesting that the dose reduction depends significantly on the location of the OARs in the human body. The reduction of the secondary cancer risk, based on the excess absolute risk (EAR), varied from 0.6 to 3.4 per 10,000 persons, depending on the collimation size. Our results suggest that the stepwise collimation method in full spine radiography can effectively reduce the patient dose and the radiation-induced secondary cancer risk.

  11. Continual reassessment method for dose escalation clinical trials in oncology: a comparison of prior skeleton approaches using AZD3514 data.

    Science.gov (United States)

    James, Gareth D; Symeonides, Stefan N; Marshall, Jayne; Young, Julia; Clack, Glen

    2016-08-31

    The continual reassessment method (CRM) requires an underlying model of the dose-toxicity relationship (the "prior skeleton"), and there is limited guidance on what this should be when little is known about the association. In this manuscript the impact of applying the CRM with different prior skeleton approaches, and of the 3 + 3 method, is compared in terms of the ability to determine the true maximum tolerated dose (MTD) and the number of patients allocated to sub-optimal and toxic doses. Post-hoc dose-escalation analyses were performed on real-life clinical trial data for an early oncology compound (AZD3514), using the 3 + 3 method and the CRM with six different prior skeleton approaches. All methods correctly identified the true MTD. The 3 + 3 method allocated six patients each to sub-optimal and toxic doses. All CRM approaches allocated four patients to sub-optimal doses; no patients were allocated to toxic doses under the sigmoidal approach, two under the conservative approach, and five under the other approaches. Prior skeletons for the CRM for phase 1 clinical trials are proposed in this manuscript and applied to a real clinical trial dataset. Highly accurate initial skeleton estimates may not be essential to determine the true MTD and, as expected, all CRM methods out-performed the 3 + 3 method. There were differences in performance between skeletons. The choice of skeleton should depend on whether minimizing the number of patients allocated to sub-optimal or to toxic doses is more important. NCT01162395, date of first registration: July 13, 2010.
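
    To make the CRM concrete, here is a minimal sketch of a one-parameter power-model CRM update on a grid (Python/NumPy). The skeleton values, target rate, and prior standard deviation are illustrative assumptions, not the approaches evaluated in the paper.

        import numpy as np

        skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])  # hypothetical prior skeleton
        target = 0.25                                        # target toxicity probability
        a = np.linspace(-4.0, 4.0, 2001)                     # grid for the model parameter
        prior = np.exp(-0.5 * (a / 1.34) ** 2)               # N(0, 1.34^2), a common prior choice
        p = skeleton[:, None] ** np.exp(a)[None, :]          # power model: p_i(a) = s_i^exp(a)

        def recommend(dose_idx, tox):
            # Posterior over 'a' given (dose index, 0/1 toxicity) pairs, then pick
            # the dose whose posterior-mean toxicity is closest to the target.
            like = np.ones_like(a)
            for d, y in zip(dose_idx, tox):
                like *= p[d] if y else 1.0 - p[d]
            post = prior * like
            post /= post.sum()
            p_hat = (p * post).sum(axis=1)
            return int(np.argmin(np.abs(p_hat - target))), p_hat

        # e.g. after three tox-free patients at dose 0 and one toxicity at dose 1:
        next_dose, p_hat = recommend([0, 0, 0, 1], [0, 0, 0, 1])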

  12. Fault detection of a benchmark wind turbine using interval analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Odgaard, Peter Fogh; Bak, Thomas

    2012-01-01

    This paper investigates a state-estimation set-membership approach for fault detection of a benchmark wind turbine. The main challenges in the benchmark are high noise on the wind speed measurement and the nonlinearities in the aerodynamic torque, such that the overall model of the turbine is nonlinear. The approach checks the consistency of the measurement with a closed set that is computed based on the past measurements and a model of the system. If the measurement is not consistent with this set, a fault is detected. The result demonstrates the effectiveness of the method for fault detection of the benchmark wind turbine.

  13. Environmental dose assessment methods for normal operations at DOE nuclear sites

    Energy Technology Data Exchange (ETDEWEB)

    Strenge, D.L.; Kennedy, W.E. Jr.; Corley, J.P.

    1982-09-01

    Methods for assessing public exposure to radiation from normal operations at DOE facilities are reviewed in this report. The report includes a discussion of environmental doses to be calculated, a review of currently available environmental pathway models and a set of recommended models for use when environmental pathway modeling is necessary. Currently available models reviewed include those used by DOE contractors, the Environmental Protection Agency (EPA), the Nuclear Regulatory Commission (NRC), and other organizations involved in environmental assessments. General modeling areas considered for routine releases are atmospheric transport, airborne pathways, waterborne pathways, direct exposure to penetrating radiation, and internal dosimetry. The pathway models discussed in this report are applicable to long-term (annual) uniform releases to the environment: they do not apply to acute releases resulting from accidents or emergency situations.
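
    As one concrete example of the atmospheric transport step that such reports recommend modeling, the sketch below evaluates a standard ground-reflected Gaussian plume dilution factor (chi/Q). This is the generic textbook formula, not necessarily the specific model recommended in the report; the dispersion parameters sigma_y and sigma_z would come from stability-class correlations (e.g., Pasquill-Gifford curves), which are not shown, and all example values are assumptions.

        import numpy as np

        def chi_over_q(sigma_y, sigma_z, u, h_stack, y=0.0, z=0.0):
            # Ground-reflected Gaussian plume dilution factor chi/Q [s/m^3]
            # at crosswind offset y [m] and height z [m], for mean wind speed
            # u [m/s] and release height h_stack [m].
            lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
            vertical = (np.exp(-(z - h_stack)**2 / (2.0 * sigma_z**2))
                        + np.exp(-(z + h_stack)**2 / (2.0 * sigma_z**2)))
            return lateral * vertical / (2.0 * np.pi * sigma_y * sigma_z * u)

        # e.g. ground-level centerline value under assumed dispersion conditions:
        x_q = chi_over_q(sigma_y=80.0, sigma_z=40.0, u=3.0, h_stack=30.0)

    The annual-average air concentration then follows as the release rate times chi/Q, which in turn feeds the inhalation and external exposure pathways.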

  14. A passive dosing method to determine fugacity capacities and partitioning properties of leaves

    DEFF Research Database (Denmark)

    Bolinius, Damien Johann; Macleod, Matthew; McLachlan, Michael S.

    2016-01-01

    The capacity of leaves to take up chemicals from the atmosphere and water influences how contaminants are transferred into food webs and soil. We provide a proof of concept of a passive dosing method to measure leaf/polydimethylsiloxane partition ratios (K_leaf/PDMS) for intact leaves, using polychlorinated biphenyls (PCBs) as model chemicals. Rhododendron leaves held in contact with PCB-loaded PDMS reached between 76 and 99% of equilibrium within 4 days for PCBs 3, 4, 28, 52, 101, 118, 138 and 180. Equilibrium K_leaf/PDMS extrapolated from the uptake kinetics measured over 4 days ranged from 0.075 (PCB 180) to 0.371 (PCB 3). The K_leaf/PDMS data can readily be converted to fugacity capacities of leaves (Z_leaf) and subsequently leaf/water or leaf/air partition ratios (K_leaf/water and K_leaf/air) using partitioning data from the literature. Results of our measurements are within the variability...

  15. Environmental dose-assessment methods for normal operations at DOE nuclear sites

    International Nuclear Information System (INIS)

    Strenge, D.L.; Kennedy, W.E. Jr.; Corley, J.P.

    1982-09-01

    Methods for assessing public exposure to radiation from normal operations at DOE facilities are reviewed in this report. The report includes a discussion of environmental doses to be calculated, a review of currently available environmental pathway models and a set of recommended models for use when environmental pathway modeling is necessary. Currently available models reviewed include those used by DOE contractors, the Environmental Protection Agency (EPA), the Nuclear Regulatory Commission (NRC), and other organizations involved in environmental assessments. General modeling areas considered for routine releases are atmospheric transport, airborne pathways, waterborne pathways, direct exposure to penetrating radiation, and internal dosimetry. The pathway models discussed in this report are applicable to long-term (annual) uniform releases to the environment: they do not apply to acute releases resulting from accidents or emergency situations.

  16. Investigation of the HU-density conversion method and comparison of dose distribution for dose calculation on MV cone beam CT images

    International Nuclear Information System (INIS)

    Kim, Min Joo; Lee, Seu Ran; Suh, Tae Suk

    2011-01-01

    Modern radiation therapy techniques, such as image-guided radiation therapy (IGRT) and adaptive radiation therapy (ART), have become routine clinical practice on linear accelerators, increasing tumor dose conformity while improving normal tissue sparing. For these highly developed techniques, the megavoltage cone beam computed tomography (MV CBCT) system produces volumetric images in just one rotation of the x-ray beam source and detector mounted on a conventional linear accelerator, so that the patient's actual condition can be applied to treatment planning in real time. MV CBCT images can be directly registered to a reference CT data set, usually a kilo-voltage fan-beam computed tomography (kV FBCT) set, on the treatment planning system, and the registered images can be used to adjust patient set-up error. However, to use MV CBCT images in radiotherapy, reliable electron density (ED) distributions are required. Patient scattering and the beam hardening and softening effects caused by the different energies of kV FBCT and MV CBCT can cause cupping artifacts in MV CBCT images and distortion of the Hounsfield unit (HU) to ED conversion. In this study, for reliable application of MV CBCT images to dose calculation, the MV CBCT images were modified to correct the distortion of the HU to ED conversion using the relationship between HU and ED from the kV FBCT and MV CBCT images. The HU-density conversion was performed on the MV CBCT image set, and the resulting dose difference map is shown in Figure 1. Percentage differences above 3% were reduced by applying the density calibration method; as a result, the total error could be reduced to under 3%. The present study demonstrates that dose calculation accuracy using an MV CBCT image set can be improved by applying the HU-density conversion method. The dose calculation and comparison of dose distributions from MV CBCT image sets with and without the HU-density conversion method were performed. An advantage of this study compared to other approaches is that HU

  17. Investigation of the HU-density conversion method and comparison of dose distribution for dose calculation on MV cone beam CT images

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Joo; Lee, Seu Ran; Suh, Tae Suk [Dept. of Biomedical Engineering, The Catholic University of Korea, Bucheon (Korea, Republic of)

    2011-11-15

    Modern radiation therapy techniques, such as image-guided radiation therapy (IGRT) and adaptive radiation therapy (ART), have become routine clinical practice on linear accelerators, increasing tumor dose conformity while improving normal tissue sparing. For these highly developed techniques, the megavoltage cone beam computed tomography (MV CBCT) system produces volumetric images in just one rotation of the x-ray beam source and detector mounted on a conventional linear accelerator, so that the patient's actual condition can be applied to treatment planning in real time. MV CBCT images can be directly registered to a reference CT data set, usually a kilo-voltage fan-beam computed tomography (kV FBCT) set, on the treatment planning system, and the registered images can be used to adjust patient set-up error. However, to use MV CBCT images in radiotherapy, reliable electron density (ED) distributions are required. Patient scattering and the beam hardening and softening effects caused by the different energies of kV FBCT and MV CBCT can cause cupping artifacts in MV CBCT images and distortion of the Hounsfield unit (HU) to ED conversion. In this study, for reliable application of MV CBCT images to dose calculation, the MV CBCT images were modified to correct the distortion of the HU to ED conversion using the relationship between HU and ED from the kV FBCT and MV CBCT images. The HU-density conversion was performed on the MV CBCT image set, and the resulting dose difference map is shown in Figure 1. Percentage differences above 3% were reduced by applying the density calibration method; as a result, the total error could be reduced to under 3%. The present study demonstrates that dose calculation accuracy using an MV CBCT image set can be improved by applying the HU-density conversion method. The dose calculation and comparison of dose distributions from MV CBCT image sets with and without the HU-density conversion method were performed. An advantage of this study compared to other approaches is that HU
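
    The HU-to-ED step described in the two records above is, in practice, a calibration-curve lookup. Below is a minimal sketch in Python/NumPy; the calibration points are hypothetical placeholders (real curves are scanner-, energy-, and protocol-specific and would be measured with an electron density phantom).

        import numpy as np

        # Hypothetical calibration points (HU, relative electron density).
        HU_PTS = np.array([-1000.0, -700.0, 0.0, 300.0, 1200.0])
        ED_PTS = np.array([0.0, 0.29, 1.0, 1.15, 1.7])

        def hu_to_ed(hu_image):
            """Piecewise-linear HU -> relative electron density lookup."""
            return np.interp(hu_image, HU_PTS, ED_PTS)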

  18. Modeling dose-rate on/over the surface of cylindrical radio-models using Monte Carlo methods

    International Nuclear Information System (INIS)

    Xiao Xuefu; Ma Guoxue; Wen Fuping; Wang Zhongqi; Wang Chaohui; Zhang Jiyun; Huang Qingbo; Zhang Jiaqiu; Wang Xinxing; Wang Jun

    2004-01-01

    Objective: To determine the dose-rates on/over the surface of 10 cylindrical radio-models belonging to the Metrology Station of Radio-Geological Survey of CNNC. Methods: The dose-rates on/over the surface of the 10 cylindrical radio-models were modeled using the Monte Carlo code MCNP, and were also measured with a high-pressure gas ionization chamber dose-rate meter. The dose-rate values modeled with the MCNP code were compared with those obtained by the authors in the present experimental measurements, and with those obtained previously by other workers. Some factors causing the discrepancy between the data obtained by the authors using the MCNP code and the data obtained using other methods are discussed in this paper. Results: The dose-rates on/over the surface of the 10 cylindrical radio-models obtained using the MCNP code were in good agreement with those obtained by other workers using the theoretical method; they were within ±5% in general, and the maximum discrepancy was less than 10%. Conclusions: Provided that each factor needed for the Monte Carlo code is correct, the dose-rates on/over the surface of cylindrical radio-models modeled using the Monte Carlo code are correct, with an uncertainty of 3%.

  19. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional test and maintenance (T and M) procedures, with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed, and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient, with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used; to obtain insight into the causes and magnitude of the variability observed in the results; to try to identify preferred human reliability assessment approaches; and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent in the different approaches.

  20. Simple Method to Estimate Mean Heart Dose From Hodgkin Lymphoma Radiation Therapy According to Simulation X-Rays

    Energy Technology Data Exchange (ETDEWEB)

    Nimwegen, Frederika A. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Cutter, David J. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Oxford Cancer Centre, Oxford University Hospitals NHS Trust, Oxford (United Kingdom); Schaapveld, Michael [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Rutten, Annemarieke [Department of Radiology, The Netherlands Cancer Institute, Amsterdam (Netherlands); Kooijman, Karen [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Krol, Augustinus D.G. [Department of Radiation Oncology, Leiden University Medical Center, Leiden (Netherlands); Janus, Cécile P.M. [Department of Radiation Oncology, Erasmus MC Cancer Center, Rotterdam (Netherlands); Darby, Sarah C. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Leeuwen, Flora E. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Aleman, Berthe M.P., E-mail: b.aleman@nki.nl [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam (Netherlands)

    2015-05-01

    Purpose: To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Methods and Materials: Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case–control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. Results: According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Conclusion: Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor
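
    The dose estimate in this record reduces to simple arithmetic, as sketched below under the abstract's own definitions; the example numbers and the correction-factor value are hypothetical.

        def mean_heart_dose(frac_in_field, prescribed_gy, correction):
            # MHD ~ (fraction of cardiac contour inside the radiation field)
            #       * prescribed mediastinal dose / correction factor,
            # where the correction factor was obtained by comparison with
            # individual CT-based dosimetry.
            return frac_in_field * prescribed_gy / correction

        # e.g. 90% of the contour in-field, 35 Gy prescribed, assumed factor 1.05:
        mhd = mean_heart_dose(0.90, 35.0, 1.05)   # -> 30.0 Gy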

  1. Actual survey of dose evaluation method for standardization of radiation therapy techniques. With special reference to display method of radiation doses

    International Nuclear Information System (INIS)

    Kumagai, Kozo; Yoshiura, Takao; Izumi, Takashi; Araki, Fujio; Takada, Takuo; Jingu, Kenichi.

    1994-01-01

    This report presents the results of a questionnaire survey on the actual conditions of radiation therapy, conducted with the aim of establishing the standardization of radiation therapy techniques. Questionnaires were sent to 100 facilities in Japan, and 86 of these answered: 62 university hospitals, 2 national hospitals, 14 cancer centers, 4 prefectural or municipal hospitals, and 4 other hospitals. In addition to electron beam therapy, the following typical diseases for radiation therapy were selected as standard irradiation models: cancers of the larynx, esophagus, breast, and uterine cervix, and malignant lymphomas. For these models, the questionnaire results are analyzed in terms of the following four items: (1) irradiation procedures, (2) energy used for radiotherapy, (3) the depth for calculating target absorbed doses, and (4) points for displaying target absorbed doses. (N.K.)

  2. BENCHMARKING – BETWEEN TRADITIONAL & MODERN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Mihaela Ungureanu

    2011-09-01

    Full Text Available The concept of benchmarking requires a continuous process of performance improvement in organizations, aimed at gaining superiority over the competitors perceived as market leaders. This superiority can always be questioned, its relativity originating in the rapid evolution of the economic environment. The approach supports innovation relative to traditional methods and is driven by managers who want to push limits and seek excellence. The end of the twentieth century was the period of broad adoption of benchmarking in various areas and of its transformation from a simple quantitative analysis tool into a source of information on the performance and quality of goods and services.

  3. Assessing and benchmarking multiphoton microscopes for biologists.

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to its superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that few standard ways have been described in the literature to distinguish between microscopes or to benchmark existing microscopes in order to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can be used either within a multiphoton facility or by a prospective purchaser to benchmark performance. This can assist both in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. © 2014 Elsevier Inc. All rights reserved.

  4. Assessing and benchmarking multiphoton microscopes for biologists

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F.

    2017-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to its superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that few standard ways have been described in the literature to distinguish between microscopes or to benchmark existing microscopes in order to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can be used either within a multiphoton facility or by a prospective purchaser to benchmark performance. This can assist both in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. PMID:24974026

  5. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed, each with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to reproduce both behaviour from established and widely used codes and results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  6. Simple method to estimate mean heart dose from Hodgkin lymphoma radiation therapy according to simulation X-rays.

    Science.gov (United States)

    van Nimwegen, Frederika A; Cutter, David J; Schaapveld, Michael; Rutten, Annemarieke; Kooijman, Karen; Krol, Augustinus D G; Janus, Cécile P M; Darby, Sarah C; van Leeuwen, Flora E; Aleman, Berthe M P

    2015-05-01

    To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case-control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor-intensive representative CT-based method. This simpler method may produce a

  7. A method for verification of treatment times for high-dose-rate intraluminal brachytherapy treatment

    Directory of Open Access Journals (Sweden)

    Muhammad Asghar Gadhi

    2016-06-01

    Full Text Available Purpose: This study aimed to improve the quality of high-dose-rate (HDR) intraluminal brachytherapy treatment. For this purpose, an easy, fast and accurate patient-specific quality assurance (QA) tool has been developed and implemented at the Bahawalpur Institute of Nuclear Medicine and Oncology (BINO), Bahawalpur, Pakistan. Methods: The ABACUS 3.1 treatment planning system (TPS) was used for treatment planning and calculation of the total dwell time, and the results were compared with the time calculated using the proposed method. The method was used to verify the total dwell time for different rectum applicators over the relevant treatment lengths (2-7 cm) and depths (1.5-2.5 cm), different oesophagus applicators over the relevant treatment lengths (6-10 cm) and depths (0.9 and 1.0 cm), and a bronchus applicator over the relevant treatment lengths (4-7.5 cm) and depth (0.5 cm). Results: The average percentage difference between the treatment time from manual calculation and that calculated by the TPS is 0.32% (standard deviation 1.32%) for the rectum, 0.24% (standard deviation 2.36%) for the oesophagus and 1.96% (standard deviation 0.55%) for the bronchus. These results indicate that the proposed method is valuable for independent verification in patient-specific treatment planning QA. Conclusion: The technique illustrated in the current study is easy, quick and useful for independent verification of the total dwell time in HDR intraluminal brachytherapy. The method is able to identify human-error-related planning mistakes and to evaluate the quality of treatment planning, enhancing the quality of brachytherapy treatment and the reliability of the system.
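
    The record does not spell out the manual calculation itself, so the sketch below shows only a related, commonly used independent sanity check, not the authors' method: for a fixed prescribed dose, the total dwell time scales inversely with source activity, i.e., it grows with Ir-192 decay since the reference calibration. All values are assumptions.

        import math

        IR192_HALF_LIFE_DAYS = 73.83

        def expected_dwell_time(t_ref_s, days_since_calibration):
            # Total dwell time needed for the same dose increases as the
            # Ir-192 source decays from its reference activity.
            return t_ref_s * math.exp(math.log(2.0) * days_since_calibration
                                      / IR192_HALF_LIFE_DAYS)

        # e.g. a plan needing 300 s at calibration, delivered 30 days later:
        t = expected_dwell_time(300.0, 30.0)   # ~398 s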

  8. Characterization of an absorbed dose standard in water through ionometric methods

    International Nuclear Information System (INIS)

    Vargas V, M.X.

    2003-01-01

    In this work the unit of absorbed dose at the Secondary Standard Dosimetry Laboratory (SSDL) of Mexico is characterized by means of the development of a primary standard of absorbed dose to water, D_agua. The main purpose is to reduce the uncertainty in the dosimetric calibration service for ionization chambers (employed in external-beam radiotherapy) that this laboratory offers. This thesis is composed of seven chapters. In Chapter 1 the statement and justification of the problem are described, as well as the general and specific objectives. In Chapter 2, the main quantities and units used in dosimetry are presented, in accordance with the recommendations of the International Commission on Radiation Units and Measurements (ICRU), which establish the necessity of a system coherent with the international system of units and dosimetric quantities. The concepts of charged particle equilibrium and transient charged particle equilibrium (TCPE) are also presented, which are used later in the quantitative determination of D_agua. Finally, since the proposed standard of D_agua is of the ionometric type, the Bragg-Gray and Spencer-Attix cavity theories, which are the foundation of this type of standard, are explained. To guarantee the complete validity of the conditions demanded by these theories it is necessary to introduce correction factors; these factors are determined in Chapters 5 and 6. Since the Monte Carlo (MC) method is used extensively in the calculation of the correction factors, the fundamental concepts of this method are presented in Chapter 3; in particular, the principles of the code MCNP4C [Briesmeister 2000] are detailed, with emphasis on the basis of electron transport and the variance reduction techniques used in this thesis. Because a phenomenological approach is adopted in the development of the standard of D_agua, Chapter 4 describes the characteristics of the Picker C/9 unit, the ionization chamber type

  9. Fast method for in-flight estimation of total dose from protons and electrons using the RADEM instrument on JUICE

    Science.gov (United States)

    Hajdas, Wojtek; Mrigakshi, Alankrita; Xiao, Hualin

    2017-04-01

    The primary concern for the ESA JUICE mission to Jupiter is the harsh particle radiation environment. Ionizing particles introduce radiation damage through total dose effects, displacement damage and single event effects. Therefore, both the total ionizing dose and the displacement damage equivalent fluence must be assessed, to safeguard the spacecraft and its payload and to quantify radiation levels over the entire mission lifetime. We present a concept and implementation steps for a simplified method to compute, in flight, the dose rate and total dose caused by protons. We also provide a refinement of the method previously developed for electrons. The dose rate values are given for predefined active volumes located behind layers of material of known thickness. Both methods are based on the electron and proton flux measurements provided by the electron and proton detectors inside the Radiation Hard Electron Monitor (RADEM) on board JUICE. The trade-off between method accuracy and programming limitations for in-flight computation is discussed. More comprehensive and precise dose rate computations, based on detailed analysis of all stacked detectors, will be made during off-line data processing, utilizing full spectral unfolding from all RADEM detector subsystems.
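
    As a rough illustration of how a measured particle flux can be folded into a dose rate on board, the sketch below applies the standard thin-slab approximation: dose rate equals the sum, over energy bins, of flux times mass stopping power. This is a generic textbook conversion under assumed inputs, not the actual RADEM flight algorithm; stopping-power values would come from tables such as PSTAR.

        import numpy as np

        MEV_PER_G_TO_RAD = 1.602e-8   # 1 MeV/g deposited = 1.602e-8 rad

        def proton_dose_rate_rad_per_s(flux, s_over_rho):
            # flux: proton flux per energy bin [protons / cm^2 / s]
            # s_over_rho: mass stopping power per bin [MeV cm^2 / g]
            # Thin-slab approximation: D-dot = sum_i flux_i * (S/rho)_i.
            return MEV_PER_G_TO_RAD * float(
                np.sum(np.asarray(flux) * np.asarray(s_over_rho)))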

  10. Spent Fuel Pool Dose Rate Calculations Using Point Kernel and Hybrid Deterministic-Stochastic Shielding Methods

    International Nuclear Information System (INIS)

    Matijevic, M.; Grgic, D.; Jecmenica, R.

    2016-01-01

    This paper presents a comparison of dose rates for a simplified model of the Krsko Power Plant spent fuel pool (SFP), computed with different shielding methodologies. The analysis was performed to estimate the limiting gamma dose rates on wall-mounted level instrumentation in case of a significant loss of cooling water. The SFP was represented by simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry air as bulk region materials. The pool is divided into an old and a new section, where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling times (60 days, 1 year and 5 years). The new section represents the FAs with a cooling time of 10 years. The time-dependent fuel assembly isotopic composition was calculated using the ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in the Microshield calculation is based on the imported isotopic activities. The time-dependent photon spectra with total source intensity from the Microshield multigroup point kernel calculations were then prepared for two hybrid deterministic-stochastic sequences: one based on the SCALE/MAVRIC (Monaco and Denovo) methodology, and another using the Monte Carlo code MCNP6.1.1b and the ADVANTG 3.0.1 code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for the MC method without the use of an effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and a WWINP file) for the MCNP fixed-source calculation using continuous-energy transport. ADVANTG employs the deterministic forward-adjoint transport solver Denovo, which implements the CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks. This was first

  11. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    Science.gov (United States)

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.

  12. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  13. A passive dosing method to determine fugacity capacities and partitioning properties of leaves.

    Science.gov (United States)

    Bolinius, Damien Johann; MacLeod, Matthew; McLachlan, Michael S; Mayer, Philipp; Jahnke, Annika

    2016-10-12

    The capacity of leaves to take up chemicals from the atmosphere and water influences how contaminants are transferred into food webs and soil. We provide a proof of concept of a passive dosing method to measure leaf/polydimethylsiloxane partition ratios (K_leaf/PDMS) for intact leaves, using polychlorinated biphenyls (PCBs) as model chemicals. Rhododendron leaves held in contact with PCB-loaded PDMS reached between 76 and 99% of equilibrium within 4 days for PCBs 3, 4, 28, 52, 101, 118, 138 and 180. Equilibrium K_leaf/PDMS extrapolated from the uptake kinetics measured over 4 days ranged from 0.075 (PCB 180) to 0.371 (PCB 3). The K_leaf/PDMS data can readily be converted to fugacity capacities of leaves (Z_leaf) and subsequently leaf/water or leaf/air partition ratios (K_leaf/water and K_leaf/air) using partitioning data from the literature. Results of our measurements are within the variability observed for plant/air partition ratios (K_plant/air) found in the literature. Log K_leaf/air from this study ranged from 5.00 (PCB 3) to 8.30 (PCB 180) compared to log K_plant/air of 3.31 (PCB 3) to 8.88 (PCB 180) found in the literature. The method we describe could provide data to characterize the variability in sorptive capacities of leaves that would improve descriptions of uptake of chemicals by leaves in multimedia fate models.

  14. A novel method for interactive multi-objective dose-guided patient positioning

    Science.gov (United States)

    Haehnle, Jonas; Süss, Philipp; Landry, Guillaume; Teichert, Katrin; Hille, Lucas; Hofmaier, Jan; Nowak, Dimitri; Kamp, Florian; Reiner, Michael; Thieke, Christian; Ganswindt, Ute; Belka, Claus; Parodi, Katia; Küfer, Karl-Heinz; Kurz, Christopher

    2017-01-01

    In intensity-modulated radiation therapy (IMRT), 3D in-room imaging data is typically utilized for accurate patient alignment on the basis of anatomical landmarks. In the presence of non-rigid anatomical changes, it is often not obvious which patient position is most suitable. Thus, dose-guided patient alignment is an interesting approach that uses available in-room imaging data for up-to-date dose calculation, aimed at finding the position that yields the optimal dose distribution. This contribution presents the first implementation of dose-guided patient alignment as a multi-criteria optimization problem. User-defined clinical objectives are employed for setting up a multi-objective problem. Using pre-calculated dose distributions at a limited number of patient shifts and dose interpolation, a continuous space of Pareto-efficient patient shifts becomes accessible. Pareto sliders facilitate interactive browsing of the possible shifts with real-time dose display to the user. Dose interpolation accuracy is validated and the potential of multi-objective dose-guided positioning demonstrated for three head and neck (H&N) and three prostate cancer patients. Dose-guided positioning is compared to replanning for all cases. A delineated replanning CT served as surrogate for in-room imaging data. Dose interpolation accuracy was high: using a 2% dose difference criterion, a median pass-rate of 95.7% for H&N and 99.6% for prostate cases was determined in a comparison to exact dose calculations. For all patients, dose-guided positioning allowed finding a clinically preferable dose distribution compared to bony anatomy based alignment. For all H&N cases, the mean dose to the spared parotid glands was below 26 Gy (up to 27.5 Gy with bony alignment) and the clinical target volume (CTV) V_95% above 99.1% (compared to 95.1%). For all prostate patients, CTV V_95% was above 98.9% (compared to 88.5%) and rectum V_50Gy below 50% (compared to 56
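
    The key enabling trick in this record is interpolating dose between a limited set of pre-calculated shifts. A one-axis linear sketch is shown below; the real implementation interpolates over the full shift space and evaluates the user-defined clinical objectives, so the array shapes and the mean-dose objective here are illustrative assumptions.

        import numpy as np

        def interp_dose(dose_a, dose_b, alpha):
            # Dose distribution at an intermediate patient shift between two
            # pre-calculated distributions (alpha = 0 -> shift A, alpha = 1 -> shift B).
            return (1.0 - alpha) * dose_a + alpha * dose_b

        def mean_organ_dose(dose, mask):
            # Example clinical objective: mean dose inside a delineated structure.
            return float(dose[mask].mean())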

  15. Low-dose-rate total lymphoid irradiation: a new method of rapid immunosuppression

    International Nuclear Information System (INIS)

    Blum, J.E.; de Silva, S.M.; Rachman, D.B.; Order, S.E.

    1988-01-01

    Total lymphoid irradiation (TLI) has been successful in inducing immunosuppression in experimental and clinical applications. However, both the experimental and the clinical utility of TLI are hampered by the prolonged treatment courses required (23 days in rats and 30-60 days in humans). Low-dose-rate TLI has the potential of reducing overall treatment time while achieving comparable immunosuppression. This study examines the immunosuppressive activity and treatment toxicity of conventional-dose-rate (23 days) vs low-dose-rate (2-7 days) TLI. Seven groups of Lewis rats were given TLI with 60Co. One group was treated at conventional dose rates (80-110 cGy/min) and received 3400 cGy in 17 fractions over 23 days. Six groups were treated at low dose rate (7 cGy/min) and received total doses of 800, 1200, 1800, 2400, 3000, and 3400 cGy over 2-7 days. Rats treated at conventional dose rates over 23 days and at low dose rate over 2-7 days tolerated radiation with minimal toxicity. The level of immunosuppression was tested using allogeneic (Brown-Norway) skin graft survival. Control animals retained allogeneic skin grafts for a mean of 14 days (range 8-21 days). Conventional-dose-rate treated animals (3400 cGy in 23 days) kept their grafts 60 days (range 50-66 days) (p < 0.001). Low-dose-rate treated rats (800 to 3400 cGy total dose over 2-7 days) also had prolonged allogeneic graft survival times following TLI, with a dose-response curve established. The graft survival time for the 3400 cGy low-dose-rate group (66 days, range 52-78 days) was not significantly different from that of the 3400 cGy conventional-dose-rate group (p < 0.10). When the total dose given was equivalent, low-dose-rate TLI demonstrated the advantage of reduced overall treatment time compared to conventional-dose-rate TLI (7 days vs. 23 days) with no increase in toxicity.

  16. Task-based image quality evaluation of iterative reconstruction methods for low dose CT using computer simulations

    International Nuclear Information System (INIS)

    Xu, Jingyan; Fung, George S K; Tsui, Benjamin M W; Fuld, Matthew K

    2015-01-01

    Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improve image quality or reduce the radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D_0 down to 25% of D_0. A lesion of fixed size and contrast was inserted at different locations in the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20), in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal-performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of the different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over the analytic methods depends on the dose level. The reference dose level D_0 was based on a clinical low-dose protocol, lower than the standard dose due to the use of IR methods. At 75% of D_0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D_0 was achieved. (paper)
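
    For readers unfamiliar with the CHO, a compact sketch of its core computation follows (Python/NumPy): project images onto channels, form the Hotelling template from the channel statistics, and score detection performance by AUC. The channel matrix U (rotationally symmetric or oriented, as in the study) must be constructed separately and is assumed given here.

        import numpy as np

        def cho_auc(signal_imgs, noise_imgs, U):
            # signal_imgs, noise_imgs: (n_images, n_pixels); U: (n_pixels, n_channels)
            vs, vn = signal_imgs @ U, noise_imgs @ U        # channel outputs
            S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))         # pooled channel covariance
            w = np.linalg.solve(S, vs.mean(axis=0) - vn.mean(axis=0))  # Hotelling template
            ts, tn = vs @ w, vn @ w                         # observer test statistics
            # AUC via the Mann-Whitney statistic (ties ignored for brevity).
            return float((ts[:, None] > tn[None, :]).mean())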

  17. Method for assessing the probability of accumulated doses from an intermittent source using the convolution technique

    International Nuclear Information System (INIS)

    Coleman, J.H.

    1980-10-01

    A technique is discussed for computing the probability distribution of the accumulated dose received by an arbitrary receptor resulting from several single releases from an intermittent source. The probability density of the accumulated dose is the convolution of the probability densities of the doses from the intermittent releases. Emissions are not assumed to be constant over the brief release period. The fast Fourier transform is used in the calculation of the convolution.
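
    A minimal sketch of the convolution step follows (Python/NumPy): given the probability densities of the doses from individual releases, sampled on a common dose grid, the density of the accumulated dose is their convolution, evaluated here with the FFT as the abstract describes. The grid choices are illustrative.

        import numpy as np

        def accumulated_dose_pdf(pdfs, dx):
            # pdfs: list of single-release dose densities on a common grid
            # with spacing dx; returns the density of the summed dose.
            n_out = sum(len(p) for p in pdfs) - len(pdfs) + 1
            nfft = int(2 ** np.ceil(np.log2(n_out)))
            F = np.ones(nfft // 2 + 1, dtype=complex)
            for p in pdfs:
                F *= np.fft.rfft(p, nfft) * dx   # each convolution carries a dx
            return np.fft.irfft(F, nfft)[:n_out] / dx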

  18. Coincidence in the dose estimation in an OEP by different methods

    International Nuclear Information System (INIS)

    Guerrero C, C.; Arceo M, C.; Brena V, M.

    2007-01-01

    A case of apparent overexposure to radiation is presented: the thermoluminescent dosemeter (TLD) of an occupationally exposed worker (OEP) indicated 81.59 mSv, and a biological dosimetry study was therefore performed. The biologically estimated dose was 0.12 Gy, which confirmed the dose registered by the TLD dosemeter. It was concluded that the two doses agree. (Author)

  19. A new method of real-time skin dose visualization. Clinical evaluation of fluoroscopically guided interventions

    Energy Technology Data Exchange (ETDEWEB)

    Boujan, Fazel [Hopital Hautepierre, Service d' Imagerie 2 - Neuroradiologie, Strasbourg (France); University Hospitals of Strasbourg, Division of Neurointerventional Radiology, Imaging Department, Strasbourg (France); Clauss, Nicolas; Mertz, Luc [University Hospitals of Strasbourg, Division of Radiation Physics and Radiation Safety, Imaging Department, Strasbourg (France); Santos, Emilie; Boon, Sjirk; Schouten, Gerard [Philips Healthcare, Best (Netherlands); Dietemann, Jean-Louis [University Hospitals of Strasbourg, Division of Neuroradiology and Neurointerventional, Imaging Department, Strasbourg (France)

    2014-11-15

    We have conducted a prospective study to clinically evaluate a new radiation dose monitoring tool that displays the patient's peak skin dose (PSD) map in real time. The skin dose map (SDM) prototype quantifies the air kerma based on exposure parameters from the X-ray system. The accuracy of this prototype was evaluated with radiochromic films, which were used as a means of PSD measurement. The SDM is a reliable tool that provides accurate PSD estimation and localization. The SDM also has many advantages over radiochromic films, such as real-time dose evaluation and easy access to critical operational parameters for physicians and technicians. (orig.)

  20. Radiation Organ Doses Received in a Nationwide Cohort of U.S. Radiologic Technologists: Methods and Findings

    Science.gov (United States)

    Simon, Steven L.; Preston, Dale L.; Linet, Martha S.; Miller, Jeremy S.; Sigurdson, Alice J.; Alexander, Bruce H.; Kwon, Deukwoo; Yoder, R. Craig; Bhatti, Parveen; Little, Mark P.; Rajaraman, Preetha; Melo, Dunstana; Drozdovitch, Vladimir; Weinstock, Robert M.; Doody, Michele M.

    2014-01-01

    In this article, we describe recent methodological enhancements and findings from the dose reconstruction component of a study of health risks among U.S. radiologic technologists. An earlier version of the dosimetry, published in 2006, used physical and statistical models, literature-reported exposure measurements for the years before 1960, and archival personnel monitoring badge data from cohort members through 1984. The data and models previously described were used to estimate annual occupational radiation doses for 90,000 radiologic technologists, incorporating information about each individual's employment practices based on a baseline survey conducted in the mid-1980s. The dosimetry methods presented here, while using many of the same methods as before, now estimate 2.23 million annual badge doses (personal dose equivalent) for the years 1916–1997 for 110,374 technologists, with numerous methodological improvements. Every technologist's annual dose is estimated as a probability density function to reflect uncertainty about the true dose. Multiple realizations of the entire cohort distribution were derived to account for shared uncertainties and possible biases in the input data and assumptions used. Major improvements in the dosimetry methods over the earlier version include: a substantial increase in the number of cohort-member annual badge dose measurements; additional information on individual apron usage obtained from surveys conducted in the mid-1990s and mid-2000s; refined modeling to develop lognormal annual badge dose probability density functions using censored-data regression models; refinements of cohort-based annual badge probability density functions to reflect individual work patterns and practices reported on questionnaires and to more accurately assess minimum detection limits; and extensive refinements in organ dose conversion coefficients to account for uncertainties in radiographic machine settings for the radiographic techniques

  1. Thermoluminescence dating of chinese porcelain using a regression method of saturating exponential in pre-dose technique

    International Nuclear Information System (INIS)

    Wang Weida; Xia Junding; Zhou Zhixin; Leung, P.L.

    2001-01-01

    Thermoluminescence (TL) dating using a regression method of saturating exponential in the pre-dose technique is described. Twenty-three porcelain samples from past dynasties of China were dated by this method. The results show that the TL ages are in reasonable agreement with archaeological dates, within a standard deviation of 27%. Such an error is acceptable in porcelain dating.
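
    The paper's exact regression is not reproduced in this record; below is a generic sketch of fitting a saturating-exponential growth curve to TL data and inverting it for a palaeodose, with hypothetical numbers throughout.

        import numpy as np
        from scipy.optimize import curve_fit

        def sat_exp(dose, a, d0):
            # Saturating-exponential growth: signal = a * (1 - exp(-dose / d0)).
            return a * (1.0 - np.exp(-dose / d0))

        # Hypothetical regenerated TL signal vs. added laboratory dose (Gy):
        dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
        sig = np.array([0.9, 1.7, 3.0, 4.9, 6.9, 8.2])

        (a, d0), _ = curve_fit(sat_exp, dose, sig, p0=(sig.max(), dose.mean()))

        # Invert the fit to read a palaeodose off a measured natural signal:
        nat = 2.5
        palaeodose = -d0 * np.log(1.0 - nat / a)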

  2. FLOWTRAN-TF code benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.P. (ed.)

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.

  3. Large-scale benchmarking reveals false discoveries and count transformation sensitivity in 16S rRNA gene amplicon data analysis methods used in microbiome studies

    DEFF Research Database (Denmark)

    Thorsen, Jonathan; Brejnrod, Asker Daniel; Mortensen, Martin Steen

    2016-01-01

    BACKGROUND: There is an immense scientific interest in the human microbiome and its effects on human physiology, health, and disease. A common approach for examining bacterial communities is high-throughput sequencing of 16S rRNA gene hypervariable regions, aggregating sequence-similar amplicons... detection power. For beta-diversity-based sample separation, we show that library size normalization has very little effect and that the distance metric is the most important factor in terms of separation power. CONCLUSIONS: Our results, generalizable to datasets from different sequencing platforms, demonstrate how the choice of method considerably affects analysis outcome. Here, we give recommendations for tools that exhibit low false positive rates, have good retrieval power across effect sizes and case/control proportions, and have low sparsity bias. Result output from some commonly used methods...

  4. Benchmarking semiempirical and DFT methods for the interaction of thiophene and diethyl sulfide molecules with a Ti(OH)4(H2O) cluster.

    Science.gov (United States)

    Vorontsov, Alexander V; Smirniotis, Panagiotis G

    2017-08-01

    The semiempirical methods PM6 and PM7, as well as the density functional theory functionals LSDA (exchange-only), PW91 and PBE (exchange-correlation), and the hybrids B3LYP1 and PBE0, were compared for the energies and geometries of thiophene and diethyl sulfide (DES) molecules and their binding to a frozen Ti(OH)4(H2O) complex having one coordinatively unsaturated Ti5C site, representing a small fragment of the TiO2 anatase (001) surface. PBE0/6-31G(d) with the DFT-D3 dispersion correction was the best method for describing the geometries of the thiophene and DES molecules, as comparison with experimental data demonstrated. The semiempirical methods PM6 and PM7 produced only three of the four possible binding configurations of thiophene with the Ti(OH)4(H2O) complex, while PM7 correctly described the enthalpy and all configurations of DES binding with the Ti(OH)4(H2O) complex. The SBKJC pseudopotential and LSDA, with and without dispersion correction, produced flawed results for many configurations. PBE0 and PBE, with and without dispersion correction, and PW91 with the 6-31G(d) basis set systematically produced dependable results for thiophene and DES binding to the Ti(OH)4(H2O) complex. PBE0-D3/6-31G(d), B3LYP1-D3/6-31G(d), and PBE-D3/6-31G(d) gave the best match of binding energy for thiophene, while PBE0/6-31G(d) gave the best match of DES binding energy, as comparison with CCSD(T) energies demonstrated. On the basis of the superior results obtained with PBE0/6-31G(d), it is the recommended method for modeling adsorption over TiO2 surfaces. Such a conclusion is in agreement with recent literature.

  5. Benchmark of a Cubieboard cluster

    Science.gov (United States)

    Schnepf, M. J.; Gudu, D.; Rische, B.; Fischer, M.; Jung, C.; Hardt, M.

    2015-12-01

    We built a cluster of ARM-based Cubieboard2 single-board computers, each of which provides a SATA interface for connecting a hard drive. This cluster was set up both as a storage system using Ceph and as a compute cluster for high-energy physics analyses. To study the performance in these applications, we ran two benchmarks on this cluster. We also measured the energy efficiency of the cluster under the same benchmarks. Performance and energy efficiency of our cluster were compared with a network-attached storage (NAS) device and with a desktop PC.

  6. The self-consistent charge density functional tight binding method applied to liquid water and the hydrated excess proton: benchmark simulations.

    Science.gov (United States)

    Maupin, C Mark; Aradi, Bálint; Voth, Gregory A

    2010-05-27

    The self-consistent charge density functional tight binding (SCC-DFTB) method is a relatively new approximate electronic structure method that is increasingly used to study biologically relevant systems in aqueous environments. Several gas-phase cluster calculations indicate, in some instances, an ability to predict geometries, energies, and vibrational frequencies in reasonable agreement with high-level ab initio calculations. However, to date, there has been little validation of the method for bulk water properties, and no validation for the properties of the hydrated excess proton in water. Presented here is a detailed SCC-DFTB analysis of the latter two systems. This work focuses on the ability of the original SCC-DFTB method, and of a modified version that includes a hydrogen bonding damping function (HBD-SCC-DFTB), to describe the structural, energetic, and dynamical nature of these aqueous systems. The SCC-DFTB and HBD-SCC-DFTB results are compared to experimental data and to Car-Parrinello molecular dynamics (CPMD) simulations using the HCTH/120 gradient-corrected exchange-correlation energy functional. All simulations for these systems contained 128 water molecules, plus one additional proton in the case of the excess proton system, and were carried out in a periodic simulation box with Ewald long-range electrostatics. The original SCC-DFTB is shown to poorly reproduce the liquid water structure, while the HBD-SCC-DFTB represents bulk water somewhat more closely due to an improved ability to describe hydrogen bonding energies. Both SCC-DFTB methods are found to underestimate the water dimer interaction energy, resulting in a low heat of vaporization and a significantly elevated water oxygen diffusion coefficient as compared to experiment. The addition of an excess hydrated proton to the bulk water resulted in the Zundel cation (H5O2+) being the stable form of the charge defect, which

  7. International Criticality Safety Benchmark Evaluation Project (ICSBEP) - ICSBEP 2015 Handbook

    International Nuclear Information System (INIS)

    Bess, John D.

    2015-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy (DOE). The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span approximately 69000 pages and contain 567 evaluations with benchmark specifications for 4874 critical, near-critical or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points for each, and 207 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the handbook are benchmark specifications for neutron activation foil and thermoluminescent dosimeter measurements performed at the SILENE critical assembly in Valduc, France as part of a joint venture in 2010 between the US DOE and the French Alternative Energies and Atomic Energy Commission (CEA). A photograph of this experiment is shown on the front cover. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these

  8. Retrospective methods to estimate radiation dose at the site of breast cancer development after Hodgkin lymphoma radiotherapy

    Directory of Open Access Journals (Sweden)

    Nicola S. Russell

    2017-12-01

    Background: An increased risk of breast cancer following radiotherapy for Hodgkin lymphoma (HL) has now been robustly established. In order to estimate the dose-response relationship more accurately, and to aid clinical decision making, a retrospective estimation of the radiation dose delivered to the site of the subsequent breast cancer is required. Methods: For 174 Dutch and 170 UK female patients with breast cancer following HL treatment, the 3-dimensional position of the breast cancer in the affected breast was determined and transferred onto a CT-based anthropomorphic phantom. Using a radiotherapy treatment planning system, the dose distribution on the CT-based phantom was calculated for the 46 different radiation treatment field set-ups used in the study population. The estimated dose at the centre of the breast cancer, and a margin reflecting dose uncertainty, were determined on the basis of the location of the tumour and the isodose lines from the treatment planning. We assessed inter-observer variation, and for 47 patients we compared the results with a previously applied dosimetry method. Results: The estimated median point dose at the centre of the breast cancer location was 29.75 Gy (IQR 5.8-37.2), or about 75% of the prescribed radiotherapy dose. The median dose uncertainty range was 5.97 Gy. Inter-observer agreement was excellent (ICC 0.89, 95% CI: 0.74-0.95). The absolute-agreement intra-class correlation coefficient (ICC) for inter-method variation was 0.59 (95% CI: 0.37-0.75), indicating nearly good agreement. There were no systematic differences in the dose estimates between observers or methods. Conclusion: Estimates of the dose at the point of a subsequent breast cancer show good correlation between methods, but the retrospective nature of the estimates means that there is always some uncertainty to be accounted for. Keywords: Retrospective dosimetry, Hodgkin lymphoma, Breast carcinogenesis

  9. A method for determining skin dose and the radiation composition in β/γ-mixed fields using 'thick' thermoluminescent detectors

    International Nuclear Information System (INIS)

    Sahre, P.

    1987-01-01

    For determining the skin dose in β/γ-fields of unknown composition, thin detectors or combined dosimetric systems containing thick detectors have normally been used up to now. In the paper presented, the first attempt to fully utilize the depth-dose distribution in a thick, homogeneous TL detector is proposed and investigated theoretically and experimentally. The method is based on the migration of a temperature front through the detector, yielding a characteristic glow curve whose shape and location depend on the depth-dose distribution. The computer-aided unfolding of the glow curve using a superposition of standard glow curves is described. The results obtained are the skin dose at a selectable depth (less than the thickness of the detector) and the radiation components at the point of exposure. (author)
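
    The computer-aided unfolding described here amounts to expressing the measured glow curve as a linear superposition of standard glow curves and solving for the per-layer doses. Below is a minimal sketch of that idea (not the author's code), with synthetic Gaussian reference curves standing in for measured unit-dose standards:

```python
import numpy as np
from scipy.optimize import nnls

# Reference ("standard") glow curves, one per depth layer of the thick
# TL detector, here synthetic Gaussians on a 200-channel temperature axis;
# a real application would use measured unit-dose standards.
channels = np.arange(200)
standards = np.stack([np.exp(-0.5 * ((channels - peak) / 15.0) ** 2)
                      for peak in (80, 95, 110, 125)], axis=1)

rng = np.random.default_rng(0)
true_depth_doses = np.array([2.0, 1.2, 0.6, 0.3])        # mGy, made up
measured = standards @ true_depth_doses + rng.normal(0, 0.01, channels.size)

# Non-negative least squares keeps the unfolded per-layer doses physical.
depth_doses, _ = nnls(standards, measured)
print("unfolded depth-dose profile [mGy]:", depth_doses.round(2))
```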

  10. Pre-evaluation of fusion shielding benchmark experiment

    International Nuclear Information System (INIS)

    Hayashi, K.; Handa, H.; Konno, C.

    1994-01-01

    Shielding benchmark experiments are very useful for testing design codes and nuclear data for fusion devices. There are many types of benchmark experiments that should be done for fusion shielding problems, but time and budget are limited. It is therefore important to select and determine effective experimental configurations by pre-calculation before the experiment. The authors performed three types of pre-evaluation to determine the experimental assembly configurations of the shielding benchmark experiments planned at FNS, JAERI. (1) Void Effect Experiment - The purpose of this experiment is to measure the local increase of dose and nuclear heating behind small void(s) in shield material. The dimensions of the voids and their arrangements were decided as follows: dose and nuclear heating were calculated both with and without void(s), and the minimum size of the void was determined so that the ratio of these two results would be larger than the error of the measurement system. (2) Auxiliary Shield Experiment - The purpose of this experiment is to measure the shielding properties of B4C, Pb and W, and the dose around a superconducting magnet (SCM). The thicknesses of B4C, Pb and W and their arrangement, including multilayer configurations, were determined. (3) SCM Nuclear Heating Experiment - The purpose of this experiment is to measure nuclear heating and dose distribution in SCM material. Because it is difficult to use liquid helium as part of the SCM mock-up material, material compositions for the SCM mock-up were surveyed to obtain nuclear heating properties similar to those of the real SCM composition

  11. An effective method for smoothing the staggered dose distribution of multi-leaf collimator field edge

    International Nuclear Information System (INIS)

    Hwang, I.-M.; Lin, S.-Y.; Lee, M.-S.; Wang, C.-J.; Chuang, K.-S.; Ding, H.-J.

    2002-01-01

    Purpose: To smooth the staggered dose distribution that occurs at stepped leaf edges defined by a multi-leaf collimator (MLC). Materials and methods: The MLC Shaper program controlled the stepped leaves, which were shifted over a travelling range; the pattern of shift ran from the out-bound to the in-bound position with one-segment (cross-bound), three-segment, and five-segment shifts. Film was placed at a depth of 1.5 cm and irradiated with the same dose used for the cerrobend block experiment. Four field edges, with the MLC defined at 15, 30, 45 and 60 degree angles relative to the jaw edge, were studied. For the field edge defined by the multi-segment technique, the amplitude of the 50% isodose line and of both the 80% and 20% isodose lines was measured. The effective penumbra widths, defined as the 90-10% and 80-20% distances, were determined for the four field edges at 15, 30, 45 and 60 degrees relative to the jaw edge. Results: Use of the five-segment technique at the 60 degree field edge smoothes each isodose line into an effectively straight line, similar to the pattern achieved using a cerrobend block. The separation of these lines is also important. The 80-20% effective penumbra width with the five-segment technique (8.23 mm) at 60 degrees relative to the jaw edge is slightly wider (1.9 times) than the penumbra of the cerrobend block field edge (4.23 mm). We also found that the 90-10% effective penumbra width with the five-segment technique (12.68 mm) at 60 degrees is slightly wider (1.28 times) than that of the cerrobend block field edge (9.89 mm). Conclusion: The multi-segment technique is effective in smoothing the MLC staggered field edge. The effective penumbra width with more segments at larger angles relative to the field edge is slightly wider than the penumbra for a
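
    The 90-10% and 80-20% effective penumbra widths reported here are simple level-crossing distances on a normalized cross-profile. A minimal sketch on a synthetic sigmoid edge (not the film data):

```python
import numpy as np

def penumbra_width(x_mm, dose, lo=0.2, hi=0.8):
    """Distance between the lo and hi fractional dose levels of a
    normalized cross-profile (80-20% penumbra for lo=0.2, hi=0.8)."""
    d = dose / dose.max()
    order = np.argsort(d)          # np.interp needs an increasing axis
    x_lo = np.interp(lo, d[order], x_mm[order])
    x_hi = np.interp(hi, d[order], x_mm[order])
    return abs(x_lo - x_hi)

# Synthetic sigmoid edge with a ~5 mm 80-20% width (illustrative only)
x = np.linspace(-20.0, 20.0, 401)
profile = 1.0 / (1.0 + np.exp(x / 1.8))
print(f"80-20%: {penumbra_width(x, profile):.2f} mm   "
      f"90-10%: {penumbra_width(x, profile, 0.1, 0.9):.2f} mm")
```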

  12. A new method to explore the spectral impact of the piriform fossae on the singing voice: benchmarking using MRI-based 3D-printed vocal tracts.

    Directory of Open Access Journals (Sweden)

    Bertrand Delvaux

    The piriform fossae are the two pear-shaped cavities lateral to the laryngeal vestibule at the lower end of the vocal tract. They act acoustically as side-branches to the main tract, resulting in a spectral zero in the output of the human voice. This study investigates their spectral role by comparing numerical and experimental results from MRI-based 3D-printed vocal tracts, for which a new experimental method (based on room acoustics) is introduced. The findings support results in the literature: the piriform fossae create a spectral trough in the region 4-5 kHz and act as formant repellents. Moreover, this study extends those results by demonstrating numerically and perceptually the impact of having large piriform fossae on the sung output.

  13. Medical reference dosimetry using EPR measurements of alanine: Development of an improved method for clinical dose levels

    International Nuclear Information System (INIS)

    Helt-Hansen, Jakob; Andersen, Claus Erik; Rosendal, Flemming; Kofoed, Inger Matilde

    2009-01-01

    Electron spin resonance (EPR) is used to determine the absorbed dose of alanine dosimeters exposed to clinical photon beams in a solid-water phantom. Alanine is potentially suitable for medical reference dosimetry because of its near water equivalence over a wide energy spectrum, low signal fading, non-destructive measurement and small dosimeter size. Material and Methods. A Bruker EMX-micro EPR spectrometer with a rectangular cavity and a measurement time of two minutes per dosimeter was used for reading irradiated alanine dosimeters. Under these conditions a new algorithm based on scaling of known spectra was developed to extract the alanine signal. Results. The dose accuracy, including calibration uncertainty, is less than 2% (k=1) above 4 Gy (n=4). The measurement uncertainty is fairly constant in absolute terms (∼30 mGy), so the relative uncertainty rises for dose measurements below 4 Gy. Typical reproducibility is <1% (k=1) above 10 Gy and <2% between 4 and 10 Gy; below 4 Gy the uncertainty is higher. A depth-dose curve measurement was performed in a solid-water phantom irradiated to a dose of 20 Gy at the maximum dose point (dmax) in 6 and 18 MV photon beams. The typical difference between the dose measured with alanine in solid water and the dose measured with an ion chamber in a water tank was about 1%. A difference of 2% between 6 and 18 MV was found, possibly due to non-water equivalence of the applied phantom. Discussion. Compared to previously published methods, the proposed algorithm can be applied without normalisation of phase shifts caused by changes in the g-value of the cavity. The study shows that alanine dosimetry is a suitable candidate for medical reference dosimetry, especially for quality control applications

  14. Parameter Curation for Benchmark Queries

    NARCIS (Netherlands)

    Gubichev, Andrey; Boncz, Peter

    2014-01-01

    In this paper we consider the problem of generating parameters for benchmark queries so these have stable behavior despite being executed on datasets (real-world or synthetic) with skewed data distributions and value correlations. We show that uniform random sampling of the substitution parameters

  15. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes a comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition, the investigators proposed a list of services that could be provided via the KAUST library website.

  16. Calculation of primary and secondary dose in proton therapy of brain tumors using Monte Carlo method

    International Nuclear Information System (INIS)

    Moghbel Esfahani, F.; Alamatsaz, M.; Karimian, A.

    2012-01-01

    High-energy beams of protons offer significant advantages for the treatment of deep-seated local tumors. Their physical depth-dose distribution in tissue is characterized by a small entrance dose and a distinct maximum - the Bragg peak - near the end of range, with a sharp falloff at the distal edge. Therefore, research must be done to investigate the possible negative and positive effects of using proton therapy as a treatment modality. In proton therapy, protons account for the vast majority of dose. However, when protons travel through matter, secondary particles are created by the interactions of protons and matter en route to and within the patient. It is believed that secondary dose can lead to secondary cancer, especially in pediatric cases. Therefore, the focus of this work is determining both primary and secondary dose. Dose calculations were performed with MCNPX in tumoral and healthy parts of the brain. The brain tumor has a 10 mm diameter and is located 16 cm under the skin surface. The brain was simulated by a cylindrical water phantom with dimensions of 19 cm x 19 cm (length x diameter) and a 0.5 cm thick layer of plexiglass (C4H6O2). Beam characteristics were then investigated to ensure the accuracy of the model. Simulations were initially validated against packages such as SRIM/TRIM. Dose calculations were performed using different configurations to evaluate depth-dose profiles and 2D dose distributions. The results of the simulation show that the best proton energy interval to completely cover the brain tumor is from 152 to 154 MeV. (authors)
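
    The quoted 152-154 MeV window for a tumor at 16 cm depth can be sanity-checked against the empirical Bragg-Kleeman range-energy rule for protons in water, R = alpha * E**p, with alpha ~ 0.0022 cm and p ~ 1.77 (commonly quoted nominal fit constants, not values from the paper):

```python
# Bragg-Kleeman range-energy rule for protons in water: R = alpha * E**p.
ALPHA_CM, P = 0.0022, 1.77   # nominal fit constants (assumed values)

def range_cm(energy_mev):
    return ALPHA_CM * energy_mev ** P

def energy_mev(depth_cm):
    return (depth_cm / ALPHA_CM) ** (1.0 / P)

for e in (150.0, 152.0, 154.0):
    print(f"{e:.0f} MeV -> range {range_cm(e):.1f} cm in water")
print(f"16 cm depth -> ~{energy_mev(16.0):.0f} MeV")   # ~152 MeV
```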

  17. SU-F-J-39: Dose Reduction Strategy Using Attenuation-Based Tube Current Modulation Method in CBCT for IGRT

    Energy Technology Data Exchange (ETDEWEB)

    Son, K; Lee, H; Kim, C; Cho, S [KAIST, Daejeon (Korea, Republic of); Kim, J [Yonsei Cancer Center, Seoul (Korea, Republic of)

    2016-06-15

    Purpose: To reduce radiation dose to patients, the tube current modulation (TCM) method has been actively used in diagnostic CT systems. However, TCM has not yet been applied to the kV-CBCT systems mounted on LINAC machines. The purpose of this study is to investigate whether the use of TCM is desirable in kV-CBCT systems for IGRT. We have developed an attenuation-based tube current modulation (a-TCM) method that uses the prior knowledge contained in the treatment CT image of a patient. Methods: Patients go through a diagnostic CT scan for RT planning; using this prior CT information, one can therefore estimate the total attenuation of an x-ray through the patient body in a CBCT setting for radiation therapy. We performed a numerical study taking major factors into account, such as the polychromatic x-ray spectrum, scatter, noise, and the bow-tie filter, to demonstrate that the a-TCM method can produce images of equivalent quality at reduced imaging radiation dose. Using the CT projector program, 680 projection images of the pediatric XCAT phantom were obtained both in the conventional scanning condition, i.e., without modulating the tube current, and in the proposed a-TCM scanning condition. The FDK reconstruction algorithm was used for image reconstruction, and the organ dose due to imaging radiation was calculated in both cases and compared using the GATE/Geant4 simulation toolkit. Results: Reconstructed CT images with the a-TCM method showed SSIM values and noise properties similar to those of the reference images acquired by conventional CBCT. In addition, the reduction in organ doses ranged from 12% to 27%. Conclusion: We have successfully demonstrated the feasibility and dosimetric merit of the a-TCM method for kV-CBCT, and envision that it can be a useful CBCT scanning option that reduces patient dose without degrading image quality.
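
    The core of the a-TCM idea, scaling the tube current per projection angle so the transmitted signal stays roughly level over the rotation, can be sketched in a few lines. The path lengths, attenuation coefficient and normalization below are toy assumptions, not the authors' implementation:

```python
import numpy as np

MU_WATER = 0.02   # 1/mm, rough effective attenuation coefficient (assumed)

# Toy per-angle water-equivalent path lengths for 680 views; in practice
# these would come from forward-projecting the planning CT of the patient.
theta = np.linspace(0.0, 2.0 * np.pi, 680, endpoint=False)
path_mm = 250.0 + 120.0 * np.abs(np.sin(theta))   # AP thin, lateral thick

signal = np.exp(-MU_WATER * path_mm)    # relative transmitted intensity
ma_rel = signal.mean() / signal         # raise current where signal is low
ma_rel *= ma_rel.size / ma_rel.sum()    # renormalize to the same total mAs

print(f"tube current swing: {ma_rel.min():.2f}x .. {ma_rel.max():.2f}x")
# A real implementation would clamp this swing to the tube's mA limits.
```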

  18. Benchmarking Post-Hartree–Fock Methods To Describe the Nonlinear Optical Properties of Polymethines: An Investigation of the Accuracy of Algebraic Diagrammatic Construction (ADC) Approaches

    KAUST Repository

    Knippenberg, Stefan

    2016-10-07

    Third-order nonlinear optical (NLO) properties of polymethine dyes have been widely studied for applications such as all-optical switching. However, the limited accuracy of current computational methodologies has prevented a comprehensive understanding of the nature of the lowest excited states and their influence on the molecular optical and NLO properties. Here, attention is paid to the lowest excited-state energies and their energetic ratio, as these characteristics impact the figure-of-merit for all-optical switching. For a series of model polymethines, we compare several algebraic diagrammatic construction (ADC) schemes for the polarization propagator with approximate second-order coupled cluster (CC2) theory, the widely used INDO/MRDCI approach, and the symmetry-adapted cluster configuration interaction (SAC-CI) algorithm incorporating singles and doubles linked excitation operators (SAC-CI SD-R). We focus in particular on the ground-to-excited state transition dipole moments and the corresponding state dipole moments, since these quantities are found to be of utmost importance for an effective description of the third-order polarizability γ and two-photon absorption spectra. A sum-over-states expression has been used, which is found to converge quickly. While ADC(3/2) has been found to be the most appropriate method to calculate these properties, CC2 performs poorly.

  19. Effective dose in individuals from exposure the patients treated with 131I using Monte Carlo method

    International Nuclear Information System (INIS)

    Carvalho Junior, Alberico B. de; Silva, Ademir X.

    2007-01-01

    In this work, using the Visual Monte Carlo code and the voxel phantom FAX, irradiation scenarios similar to the treatments used in nuclear medicine were modelled, with the aim of estimating the effective dose in individuals exposed to patients treated with 131I. We considered specific situations, such as doses to others while sleeping, using public or private transportation, or being in a cinema for a few hours. In all situations considered, the effective dose did not exceed 0.05 mSv, demonstrating that, for the parameters considered, the patient could be released without receiving radioprotection instructions. (author)
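
    Scenarios like these are often bounded with a simple point-source estimate, D = Gamma * A * t / d**2, before running a voxel-phantom calculation. The sketch below uses a commonly quoted nominal gamma-ray constant for 131I and made-up scenario parameters, not the paper's model:

```python
# Point-source estimate D = Gamma * A * t / d**2, ignoring attenuation in
# the patient's body and biological clearance (both lower the real dose).
# Gamma for 131I is taken as ~0.053 uSv m^2 / (MBq h), a commonly quoted
# nominal value rather than a figure from this paper.
GAMMA_MSV = 0.053e-3          # mSv m^2 / (MBq h)

def bystander_dose_msv(activity_mbq, hours, metres):
    return GAMMA_MSV * activity_mbq * hours / metres ** 2

# e.g. three hours in a cinema, two metres from a patient retaining 400 MBq
print(f"{bystander_dose_msv(400.0, 3.0, 2.0):.3f} mSv")   # ~0.016 mSv
```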

  20. New method for the induction of therapeutic amenorrhea: low dose endometrial afterloading irradiation. Clinical and hormonal studies

    Energy Technology Data Exchange (ETDEWEB)

    Gronroos, M.; Turunen, T.; Raekallio, J.; Ruotsalinen, P.; Salmi, T. (Turku Univ. (Finland). Dept. of Obstetrics and Gynecology)

    1982-08-01

    The authors present a new method for the induction of therapeutic amenorrhea: low-dose endometrial afterloading irradiation. The problem with this method has been how to inactivate the endometrium while maintaining the physiological function of the ovaries. In 5 of 29 young patients regular or irregular bleeding occurred after an endometrial dose of 11 ± 1 Gy. These subjects were given a repeat low-dose intrauterine irradiation, after which no bleeding was found in four of the five patients. Two to 9 years after the repeat irradiation, the plasma levels of E1, E2, FSH and LH corresponded closely to those of healthy women of reproductive age in three of the five patients; some high plasma P levels indicated ovulation. In two patients the E1, E2 and P values were more likely postmenopausal while, on the other hand, the FSH and LH values were in the reproductive range. 19 refs.

  1. A new method for the induction of therapeutic amenorrhea: low dose endometrial afterloading irradiation. Clinical and hormonal studies

    International Nuclear Information System (INIS)

    Gronroos, M.; Turunen, T.; Raekallio, J.; Ruotsalinen, P.; Salmi, T.

    1982-01-01

    The authors present a new method for the induction of therapeutic amenorrhea: low-dose endometrial afterloading irradiation. The problem with this method has been how to inactivate the endometrium while maintaining the physiological function of the ovaries. In 5 of 29 young patients regular or irregular bleeding occurred after an endometrial dose of 11 ± 1 Gy. These subjects were given a repeat low-dose intrauterine irradiation, after which no bleeding was found in four of the five patients. Two to 9 years after the repeat irradiation, the plasma levels of E1, E2, FSH and LH corresponded closely to those of healthy women of reproductive age in three of the five patients; some high plasma P levels indicated ovulation. In two patients the E1, E2 and P values were more likely postmenopausal while, on the other hand, the FSH and LH values were in the reproductive range. (author)

  2. Non-monotonic dose-response relationships and endocrine disruptors: a qualitative method of assessment

    OpenAIRE

    Lagarde, Fabien; Beausoleil, Claire; Belcher, Scott M; Belzunces, Luc P; Emond, Claude; Guerbet, Michel; Rousselle, Christophe

    2015-01-01

    Experimental studies investigating the effects of endocrine disruptors frequently identify potential unconventional dose-response relationships called non-monotonic dose-response (NMDR) relationships. Standardized approaches for investigating NMDR relationships in a risk assessment context are missing. The aim of this work was to develop criteria for assessing the strength of NMDR relationships. A literature search was conducted to identify published studies that repor...

  3. Benchmarking urban energy efficiency in the UK

    International Nuclear Information System (INIS)

    Keirstead, James

    2013-01-01

    This study asks what is the ‘best’ way to measure urban energy efficiency. There has been recent interest in identifying efficient cities so that best practices can be shared, a process known as benchmarking. Previous studies have used relatively simple metrics that provide limited insight on the complexity of urban energy efficiency and arguably fail to provide a ‘fair’ measure of urban performance. Using a data set of 198 urban UK local administrative units, three methods are compared: ratio measures, regression residuals, and data envelopment analysis. The results show that each method has its own strengths and weaknesses regarding the ease of interpretation, ability to identify outliers and provide consistent rankings. Efficient areas are diverse but are notably found in low income areas of large conurbations such as London, whereas industrial areas are consistently ranked as inefficient. The results highlight the shortcomings of the underlying production-based energy accounts. Ideally urban energy efficiency benchmarks would be built on consumption-based accounts, but interim recommendations are made regarding the use of efficiency measures that improve upon current practice and facilitate wider conversations about what it means for a specific city to be energy-efficient within an interconnected economy. - Highlights: • Benchmarking is a potentially valuable method for improving urban energy performance. • Three different measures of urban energy efficiency are presented for UK cities. • Most efficient areas are diverse but include low-income areas of large conurbations. • Least efficient areas perform industrial activities of national importance. • Improve current practice with grouped per capita metrics or regression residuals
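
    Two of the three measures compared in this study, per-capita ratios and regression residuals, are quick to reproduce on synthetic data. The data-generating assumptions below are illustrative only, and data envelopment analysis is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 198                                   # number of local authorities
population = rng.uniform(5e4, 5e5, n)
income = rng.uniform(15e3, 40e3, n)
energy = 20 * population * (income / 25e3) ** 0.5 * rng.lognormal(0, 0.15, n)

# Measure 1: simple per-capita ratio (lower = more "efficient").
ratio = energy / population

# Measure 2: regression residuals -- log-energy explained by
# log-population and log-income; a negative residual means the area uses
# less energy than structurally similar areas, i.e. is more efficient.
X = np.column_stack([np.ones(n), np.log(population), np.log(income)])
beta, *_ = np.linalg.lstsq(X, np.log(energy), rcond=None)
residual = np.log(energy) - X @ beta

spearman = np.corrcoef(ratio.argsort().argsort(),
                       residual.argsort().argsort())[0, 1]
print(f"rank agreement between the two measures: {spearman:.2f}")
```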

  4. Empirical methods to calculate an erythropoiesis-stimulating agent dose conversion ratio in nondialyzed patients with chronic kidney disease.

    Science.gov (United States)

    Horowitz, Jeffrey; Agarwal, Anil; Huang, Fannie; Gitlin, Matthew; Gandra, Shravanthi R; Cangialose, Charles B

    2009-01-01

    Epoetin alfa and darbepoetin alfa are erythropoiesis-stimulating agents (ESAs) indicated for the treatment of anemia in chronic renal failure, including patients on dialysis and patients not on dialysis. Clinical experience demonstrates that the dose conversion ratio (DCR) between epoetin alfa and darbepoetin alfa is nonproportional across the dosing spectrum. However, previous calculations of the dose relationship between epoetin alfa and darbepoetin alfa, described in previous work as the "dose ratio" (DR), (a) used cross-sectional designs (i.e., compared mean doses for patient groups using each ESA) and were therefore vulnerable to confounding, or (b) did not adjust for the nonproportional dose relationship. DRs reported in the literature range from 217:1 to 287:1 epoetin alfa (Units [U]):darbepoetin alfa (micrograms [μg]). Payers may need a single DCR that accounts for the nonproportional dose relationship to evaluate the economic implications of converting a nondialyzed patient population with chronic kidney disease (CKD) from epoetin alfa to darbepoetin alfa. The objective was to estimate a single mean maintenance DCR between epoetin alfa and darbepoetin alfa in subjects with CKD not receiving dialysis, using methods that take into account the nonproportional dose relationship between the 2 ESAs. This was a post-hoc analysis of a subset of patients enrolled in an unpublished, open-label, single-arm phase 3 clinical trial (ClinicalTrials.gov identifier NCT00093977) that was completed in 2006. Although the clinical trial enrolled both dialyzed and nondialyzed patients, the present study used a patient subset comprising nondialyzed patients with CKD previously receiving weekly or every-other-week (Q2W) epoetin alfa who were switched to Q2W darbepoetin alfa to maintain hemoglobin (Hb) levels between 11.0 and 13.0 grams per deciliter. A population mean DCR was estimated using 2 methods: (a) a regression-based method in which the log-transformed (natural logarithm) mean weekly
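
    The regression-based method can be sketched as an ordinary least-squares fit on log-transformed doses, where a slope different from one captures the nonproportional dose relationship, so the implied DCR varies with dose. The simulated doses and dose-response shape below are assumptions, not the trial data:

```python
import numpy as np

rng = np.random.default_rng(7)
epoetin_u = rng.uniform(4_000, 40_000, 300)                  # U/week
# Simulated nonproportional relationship plus noise (not trial data):
darbepoetin_ug = (epoetin_u / 260.0) ** 1.1 * rng.lognormal(0, 0.2, 300)

# OLS on the log scale; a slope != 1 encodes the nonproportionality.
slope, intercept = np.polyfit(np.log(epoetin_u), np.log(darbepoetin_ug), 1)

for dose_u in (8_000, 20_000, 40_000):
    pred_ug = np.exp(intercept + slope * np.log(dose_u))
    print(f"{dose_u:>6} U/wk -> DCR ~ {dose_u / pred_ug:.0f}:1")
```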

  5. Mathematical modelling methods for assessing radiation doses received by populations in the vicinity of a nuclear site from atmospheric discharges

    International Nuclear Information System (INIS)

    Tarrant, C.E.

    1991-01-01

    This paper looks at some relevant work being done by the Ministry's Food Safety (Radiation) Unit in connection with obligations under the Radioactive Substances Act, 1960. MAFF, in conjunction with Her Majesty's Inspectorate of Pollution (HMIP) issues certificates of authorisation to nuclear sites to control operations and limit the environmental impact of discharges. Requirements in the UK now call for the current annual discharges from any one nuclear site to be limited so that no individual member of the public receives more than 0.5 mSv Committed Effective Dose Equivalent (CEDE) from all pathways. Doses to the general public should also be limited by the ALARA principle. From a knowledge of atmospheric concentration and deposition of the radionuclides, radiation doses via the foodchain, inhalation and immersion pathways may be calculated. Based upon this, the Authorising Departments are currently engaged in setting numerical limits to atmospheric discharges from sites in England and Wales. This paper assesses the likely radiation dose distribution via different pathways to critical groups from discharges. Results are presented here of some recent investigations; in particular, the predicted doses received via food product and via radionuclide are examined to obtain possible useful insights. Methods for identifying the critical group and its location are fully explained as well as the results and methodology for the additivity of dose. (author)
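
    Assessments of this kind typically start from a Gaussian plume estimate of ground-level air concentration, which is then combined with breathing rates and dose coefficients for the inhalation pathway. The sketch below is the textbook plume formula with illustrative numbers, not MAFF's actual model:

```python
import numpy as np

def chi_ground(x_m, y_m, q_bq_s, u_m_s, h_m, sigma_y, sigma_z):
    """Ground-level concentration (Bq/m^3) from a continuous elevated
    release, with total reflection at the ground. sigma_y and sigma_z
    would normally come from Pasquill-Gifford curves at distance x_m."""
    return (q_bq_s / (np.pi * sigma_y * sigma_z * u_m_s)
            * np.exp(-y_m ** 2 / (2 * sigma_y ** 2))
            * np.exp(-h_m ** 2 / (2 * sigma_z ** 2)))

# Illustrative numbers only: 1 kBq/s release, 5 m/s wind, 30 m stack,
# dispersion parameters typical of ~1 km downwind in neutral conditions.
conc = chi_ground(1000.0, 0.0, 1e3, 5.0, 30.0, 80.0, 40.0)
print(f"{conc:.2e} Bq/m^3")
# Inhalation dose follows as conc * breathing rate * dose coefficient.
```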

  6. Benchmarking HIV health care

    DEFF Research Database (Denmark)

    Podlekareva, Daria; Reekie, Joanne; Mocroft, Amanda

    2012-01-01

    ABSTRACT: BACKGROUND: State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV-patients based on four key indicators. METHODS: Four indicators of health care were assessed: compliance with current guidelines on initiation of 1) combination antiretroviral therapy (cART), 2) chemoprophylaxis, 3) frequency of laboratory monitoring, and 4) virological response to cART (proportion of patients with HIV-RNA… >90% of time on cART). RESULTS: 7097 Euro… Compared to North, patients from other regions had significantly lower odds of virological response; the difference was most pronounced for East and Argentina (adjusted OR 0.16, 95% CI 0.11-0.23, p…)… HIV health care utilization…

  7. The Type of Container and Filling Method Have Consequences on Semen Quality in Swine AI Doses

    Directory of Open Access Journals (Sweden)

    Iulian Ibanescu

    2016-05-01

    The automatic filling of semen doses for artificial insemination in swine shows economic advantages over old-style manual filling. However, no data could be found regarding the impact, if any, of this packing method on semen quality. This study aimed to compare two types of containers for boar semen, namely the automatically filled tube and the manually filled bottle, in terms of preserving boar semen quality. Five ejaculates from five different boars were diluted with the same extender and then divided into two aliquots. The first aliquot was loaded into tubes filled by an automatic machine, while the second was loaded manually into special plastic bottles. The semen was stored in the liquid state at 17°C, regardless of the type of container, and examined daily for five days of storage by means of a computer-assisted sperm analyzer. Both types of containers maintained the semen within acceptable values, but after five days of storage significant differences (p<0.05) between the container types were observed in all selected kinetic parameters. The tube showed better values for sperm motility and velocity, while the bottle showed superior values for straightness and linearity of sperm movement. The automatically filled tubes offered better sperm motility on every day of the study. Given that sperm motility is still the main criterion for assessing semen quality in semen production centers, the main conclusion of this study is that automatic loading into tubes is superior to, and recommended over, old-style manual loading into bottles.

  8. Method for measuring dose-equivalent in a neutron flux with an unknown energy spectra and means for carrying out that method

    Science.gov (United States)

    Distenfeld, Carl H.

    1978-01-01

    A method for measuring the dose-equivalent for exposure to an unknown and/or time-varying neutron flux, which comprises simultaneously exposing a plurality of neutron-detecting elements of different types to a neutron flux and combining the measured responses of the various detecting elements by means of a function whose value is an approximate measure of the dose-equivalent and is substantially independent of the energy spectrum of the flux. Also, a personnel neutron dosimeter, useful in carrying out the above method, comprising a plurality of different neutron-detecting elements in a single housing suitable for personnel to wear while working in a radiation area.
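
    One simple realization of such a spectrum-insensitive combining function, an assumption for illustration rather than the patented function itself, is a fixed non-negative linear combination of the detector responses, with weights fitted over calibration spectra of known dose-equivalent:

```python
import numpy as np
from scipy.optimize import nnls

# Rows: calibration fields of known dose-equivalent; columns: responses
# of three detector types (all numbers illustrative).
R = np.array([[1.0, 0.2, 0.05],    # thermal-dominated spectrum
              [0.6, 0.8, 0.30],    # intermediate
              [0.2, 1.0, 0.90],    # fast-dominated
              [0.4, 0.9, 0.60]])   # mixed
H_true = np.array([1.0, 1.4, 2.5, 1.9])   # known dose-equivalents (mSv)

# Fit non-negative weights so H ~ R @ w across all calibration spectra.
w, _ = nnls(R, H_true)
print("weights:", w.round(3))
print("fitted vs true:", (R @ w).round(2), H_true)
```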

  9. Dose estimation in the crystalline lens of industrial radiography personnel using Monte Carlo Method

    International Nuclear Information System (INIS)

    Lima, Alexandre Roza de

    2014-01-01

    The International Commission on Radiological Protection (ICRP), in its Publication 103, reviewed recent epidemiological evidence and indicated that, for the eye lens, the absorbed dose threshold for induction of late detriment is around 0.5 Gy. On this basis, on April 21, 2011, the ICRP recommended changes to the occupational dose limit in planned exposure situations, reducing the eye lens equivalent dose limit from 150 mSv to 20 mSv per year, averaged over a period of 5 years, with the exposure not to exceed 50 mSv in any single year. This paper presents estimates of the eye lens dose, Hp(10), effective dose and doses to important organs in the body received by industrial gamma radiography workers during planned or accidental exposure situations. The computer program Visual Monte Carlo was used and two relevant scenarios were postulated. The first is a planned exposure situation in which the operator is directly exposed to radiation during the operation; 12 radiographic exposures per day for 250 days per year were considered, corresponding to an exposure of 36,000 seconds or 10 hours per year. The simulation used the following parameters: a 192Ir source with an activity of 1.0 TBq, and a source/operator distance varying from 5 m to 10 m at three different heights of 0.2 m, 1.0 m and 2.0 m. The eye lens doses were estimated to be between 16.9 mSv/year and 66.9 mSv/year, and the Hp(10) doses between 17.7 mSv/year and 74.2 mSv/year. For the accidental exposure situation, the same radionuclide and activity were used, but in this case the doses were calculated with and without a collimator. The heights above ground considered were 1.0 m, 1.5 m and 2.0 m, the source/operator distance was 40 cm, and the exposure time 74 seconds. The eye lens doses at 1.5 m were 12.3 mGy and 0.28 mGy without and with a collimator, respectively. Three conclusions resulted from this work. The first was that the estimated doses show that the new

  10. Benchmark Results for Few-Body Hypernuclei

    Science.gov (United States)

    Ferrari Ruffino, F.; Lonardoni, D.; Barnea, N.; Deflorian, S.; Leidemann, W.; Orlandini, G.; Pederiva, F.

    2017-05-01

    The Non-Symmetrized Hyperspherical Harmonics method (NSHH) is introduced in the hypernuclear sector and benchmarked with three different ab-initio methods, namely the Auxiliary Field Diffusion Monte Carlo method, the Faddeev-Yakubovsky approach and the Gaussian Expansion Method. Binding energies and hyperon separation energies of three- to five-body hypernuclei are calculated by employing the two-body Λ N component of the phenomenological Bodmer-Usmani potential (Bodmer and Usmani in Nucl Phys A 477:621, 1988; Usmani and Khanna in J Phys G 35:025105, 2008), and a hyperon-nucleon interaction (Hiyama et al. in Phys Rev C 65:011301, 2001) simulating the scattering phase shifts given by NSC97f (Rijken et al. in Phys Rev C 59:21, 1999). The range of applicability of the NSHH method is briefly discussed.

  11. Feasibility of MR-only proton dose calculations for prostate cancer radiotherapy using a commercial pseudo-CT generation method

    Science.gov (United States)

    Maspero, Matteo; van den Berg, Cornelis A. T.; Landry, Guillaume; Belka, Claus; Parodi, Katia; Seevinck, Peter R.; Raaymakers, Bas W.; Kurz, Christopher

    2017-12-01

    A magnetic resonance (MR)-only radiotherapy workflow can reduce cost, radiation exposure and uncertainties introduced by CT-MRI registration. A crucial prerequisite is generating so-called pseudo-CT (pCT) images for accurate dose calculation and planning. Many pCT generation methods have been proposed in the scope of photon radiotherapy. This work aims at verifying for the first time whether a commercially available photon-oriented pCT generation method can be employed for accurate intensity-modulated proton therapy (IMPT) dose calculation. A retrospective study was conducted on ten prostate cancer patients. For pCT generation from MR images, a commercial solution for creating bulk-assigned pCTs, called MR for Attenuation Correction (MRCAT), was employed. The assigned pseudo-Hounsfield Unit (HU) values were adapted to yield increased agreement with the reference CT in terms of proton range. Internal air cavities were copied from the CT to minimise inter-scan differences. CT- and MRCAT-based dose calculations for opposing-beam IMPT plans were compared by gamma analysis and by evaluation of clinically relevant target and organ-at-risk dose volume histogram (DVH) parameters. The proton range in beam's eye view (BEV) was compared using single field uniform dose (SFUD) plans. On average, a (2%, 2 mm) gamma pass rate of 98.4% was obtained using a 10% dose threshold after adaptation of the pseudo-HU values. Mean differences between CT- and MRCAT-based dose in the DVH parameters were below 1 Gy (…). MR-only proton dose calculation for prostate cancer radiotherapy is thus feasible following adaptation of the assigned pseudo-HU values.
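
    The (2%, 2 mm) gamma analysis used for the dose comparison can be illustrated with a brute-force one-dimensional implementation; this is a simplified sketch of the standard criterion on synthetic profiles, not the authors' clinical tool:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                    dd=0.02, dta_mm=2.0, threshold=0.10):
    """Global gamma by brute force in 1D (2%/2 mm by default)."""
    d_max = dose_ref.max()
    x = np.arange(dose_ref.size) * spacing_mm
    passed = []
    for i in np.flatnonzero(dose_ref > threshold * d_max):  # 10% threshold
        dist2 = ((x - x[i]) / dta_mm) ** 2
        diff2 = ((dose_eval - dose_ref[i]) / (dd * d_max)) ** 2
        passed.append(np.sqrt((dist2 + diff2).min()) <= 1.0)
    return np.mean(passed)

# Synthetic reference profile and an evaluated profile with a small
# spatial shift and a +1% dose scaling -- both should pass 2%/2 mm.
ref = np.exp(-np.linspace(-3.0, 3.0, 201) ** 2)
ev = np.roll(ref, 1) * 1.01
print(f"gamma pass rate: {gamma_pass_rate(ref, ev, spacing_mm=0.3):.1%}")
```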

  12. A simple method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation

    International Nuclear Information System (INIS)

    Begnozzi, L.; Gentile, F.P.; Di Nallo, A.M.; Chiatti, L.; Zicari, C.; Consorti, R.; Benassi, M.

    1994-01-01

    Since volumetric dose distributions are available with 3-dimensional radiotherapy treatment planning, they can be used in statistical evaluations of response to radiation. This report presents a method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation. The mathematical expression for the calculation of normal tissue complication probability has been derived by combining the Lyman model with the histogram reduction method of Kutcher et al. and using the normalized total dose (NTD) instead of the total dose. The fitting of published tolerance data, in the case of homogeneous or partial brain irradiation, has been considered. For the same total or partial volume homogeneous irradiation of the brain, curves of normal tissue complication probability have been calculated with fraction sizes of 1.5 Gy and 3 Gy instead of 2 Gy, to show the influence of fraction size. The influence of dose distribution inhomogeneity and of the α/β value has also been simulated: considering α/β=1.6 Gy or α/β=4.1 Gy for kidney clinical nephritis, the calculated curves of normal tissue complication probability are shown. Combining NTD calculations and histogram reduction techniques, normal tissue complication probability can be estimated taking into account the most relevant contributing factors, including the volume effect. (orig.)
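
    The two ingredients of this record are easy to sketch: the normalized total dose NTD = D * (d + α/β) / (2 + α/β) converts a schedule with fraction size d into its 2 Gy-per-fraction equivalent, and the Kutcher histogram reduction collapses a DVH into an effective volume that feeds the Lyman probit model. The Python sketch below uses illustrative parameter values; the TD50, m, n and the toy DVH are assumptions, not the paper's fits:

```python
import numpy as np
from scipy.stats import norm

def ntd(total_gy, per_fraction_gy, alpha_beta):
    """Normalized total dose: LQ-equivalent dose in 2 Gy fractions."""
    return total_gy * (per_fraction_gy + alpha_beta) / (2.0 + alpha_beta)

def lyman_kutcher_ntcp(dvh_dose_gy, dvh_frac_vol, td50, m, n):
    """Lyman NTCP after Kutcher-Burman reduction of a differential DVH."""
    d_max = dvh_dose_gy.max()
    v_eff = np.sum(dvh_frac_vol * (dvh_dose_gy / d_max) ** (1.0 / n))
    td50_v = td50 * v_eff ** (-n)              # volume dependence of TD50
    return norm.cdf((d_max - td50_v) / (m * td50_v))

dose = np.array([10.0, 20.0, 30.0, 40.0])     # toy differential DVH (Gy)
vol = np.array([0.1, 0.2, 0.4, 0.3])          # fractional volumes
eqd2 = ntd(dose, dose / 20.0, alpha_beta=3.0) # schedule given in 20 fractions
print(f"NTCP ~ {lyman_kutcher_ntcp(eqd2, vol, td50=45.0, m=0.15, n=0.7):.3f}")
```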

  13. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  14. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC

  15. Evaluation of equivalent doses in 18F PET/CT using the Monte Carlo method with MCNPX code

    International Nuclear Information System (INIS)

    Belinato, Walmir; Santos, William Souza; Perini, Ana Paula; Neves, Lucio Pereira; Souza, Divanizia N.

    2017-01-01

    The present work used the Monte Carlo method, specifically the Monte Carlo N-Particle eXtended (MCNPX) code, to simulate the interaction of radiation involving photons and particles such as positrons and electrons with virtual adult anthropomorphic simulators in 18F PET/CT scans, and to determine absorbed and equivalent doses in adult male and female patients
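
    Once the Monte Carlo organ (equivalent) doses are available, the effective dose follows from the ICRP 103 tissue-weighting scheme. A simplified sketch with placeholder organ doses and a subset of the weighting factors; for the photons and positrons relevant to 18F, the radiation weighting factor is 1, so equivalent dose in mSv numerically equals absorbed dose in mGy:

```python
# ICRP 103 tissue weighting factors (subset; the full set sums to 1).
W_T = {"red_marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
       "breast": 0.12, "remainder": 0.12, "gonads": 0.08,
       "bladder": 0.04, "liver": 0.04, "thyroid": 0.04}

# Placeholder organ equivalent doses in mSv (not the paper's results).
organ_h_msv = {"red_marrow": 3.1, "lung": 2.4, "liver": 2.8,
               "bladder": 9.6, "remainder": 2.0}

effective_msv = sum(W_T[t] * h for t, h in organ_h_msv.items())
print(f"effective dose (partial organ set): {effective_msv:.2f} mSv")
```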

  16. H.B. Robinson-2 pressure vessel benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Remec, I.; Kam, F.B.K.

    1998-02-01

    The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence. Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 ± 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without neptunium dosimeter) were 0.89 ± 0.10, 0.91 ± 0.10, and 0.90 ± 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that the agreement of the calculations with the measurements, similar to the agreement obtained in this research, should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.

  17. A method to dynamically balance intensity modulated radiotherapy dose between organs-at-risk

    International Nuclear Information System (INIS)

    Das, Shiva K.

    2009-01-01

    The IMRT treatment planning process typically follows a path that is based on the manner in which the planner interactively adjusts the target and organ-at-risk (OAR) constraints and priorities. The time-intensive nature of this process restricts the planner from fully understanding the dose trade-off between structures, making it unlikely that the resulting plan fully exploits the extent to which dose can be redistributed between anatomical structures. Multiobjective Pareto optimization has been used in the past to enable the planner to more thoroughly explore alternatives in dose trade-off by combining pre-generated Pareto optimal solutions in real time, thereby potentially tailoring a plan more exactly to requirements. However, generating the Pareto optimal solutions can be nonintuitive and computationally time intensive. The author presents an intuitive and fast non-Pareto approach for generating optimization sequences (prior to planning), which can then be rapidly combined by the planner in real time to yield a satisfactory plan. Each optimization sequence incrementally reduces dose to one OAR at a time, starting from the optimization solution where dose to all OARs are reduced with equal priority, until user-specified target coverage limits are violated. The sequences are computationally efficient to generate, since the optimization at each position along a sequence is initiated from the end result of the previous position in the sequence. The pre-generated optimization sequences require no user interaction. In real time, a planner can more or less instantaneously visualize a treatment plan by combining the dose distributions corresponding to user-selected positions along each of the optimization sequences (target coverage is intrinsically maintained in the combination). Interactively varying the selected positions along each of the sequences enables the planner to rapidly understand the nature of dose trade-off between structures and, thereby, arrive at a

  18. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    Science.gov (United States)

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
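
    The resource described here ships with a small Python package; a minimal usage sketch follows. The function name and dataset are taken from the project's documentation and should be treated as assumptions if the API has since changed:

```python
from pmlb import fetch_data
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Pull one benchmark dataset and score a baseline classifier on it.
X, y = fetch_data("mushroom", return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```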

  19. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking